
Story of Mr. IT & Ms. Cloud

The way you think about and operate your infrastructure and private cloud is crucial to how your organization perceives and utilizes new technologies.
IT architects often dismiss the term ‘Cloud’ as a mere buzzword, rather than recognizing it as a pattern for deploying infrastructure.

Automation, blueprinting & orchestration… they’re all fine. But they miss some very basic truths about what cloud is at its core.

Previously, we revisited the term “Hybrid Cloud” and determined why it should be used more accurately, embodying the fact that enterprises should adopt and take advantage of the cloud model offered by public clouds.

This is where most blog posts on this topic raise the ‘cattle vs. pets’ argument. Though I agree, I’d like us to dive a bit deeper and take another perspective. The way I see it, it’s not necessarily about the ‘pets’ themselves, but rather their owner – Mr. IT.

Thinking like Mr. IT

What Mr. IT has done since the dawn of virtualization (and before it, in the native x86 era) was whatever it takes to make common infrastructure – disks, compute, and memory – more and more resilient. Throughout the years, IT departments have paid billions of dollars in hopes of making infrastructure fault-resistant: investing in redundant networks, clustered computing, and smart storage arrays that can often replicate seamlessly between data centers (Metro Clusters, VPLEX, and their friends), all of which end up costing a lot of money.

Mr. IT had a very good reason to do all of the above. Applications built in the client-server era were architected as monoliths, and virtualized as-is from the days of early x86 rack servers. In this world, scale-up is the prominent methodology: Mr. IT keeps building his redundant hardware layers, increasing app (and business) performance by adding better compute, memory, and storage.

While you can argue that these kinds of solutions pay off, as they aim to eliminate business-critical outages, today more and more disruptive technologies and patterns allow developers to shift the old app paradigm. Eventual-consistency data models, NoSQL, microservices, etc., all allow – and require – infrastructure to be treated differently.

Thinking like Ms. Cloud

Public Cloud, as provided today by Amazon AWS, Azure, and GCP, is an implementation of an infrastructure deployment pattern. The pattern is very straightforward. Instead of saying “My infrastructure will make sure that no workload will ever fail,” it uses a different line of thinking, one that says: “I know that my infrastructure might fail, but if it does, the failure will be contained in this specific area.” This is the paradigm used by Ms. Cloud.

When AWS took this approach to sell IaaS (and later on, other services), they didn’t just invent a new pattern for deploying hardware; they created a new paradigm for developing software, by putting the availability ‘burden’ mostly on their customers – software developers – rather than on the AWS ‘Cloud Service’ IT team.

This same paradigm can and should be implemented in the enterprise. However, it does require a management mind shift, and the correct tools.

When Amazon’s own S3 storage fumbled last March, most complaints were directed at app developers such as Slack, Giphy, etc., for not developing for redundancy. No one was really pointing a huge blaming finger at Amazon: as long as S3 kept its 99.9% infrastructure availability SLA, Amazon had kept their part of the bargain.

And this is where Service Driven Infrastructure comes into play. If your developers have full visibility into your organization’s infrastructure – its capabilities, limits, and fault/availability zones – they would, and should, take it upon themselves to guarantee proper application redundancy.

More importantly though, their managers, peers, and IT administrators should drive them to do so.

Thinking like Ms. Cloud often means:
1. Offering self-service infra, with visibility into underlying constructs such as clustered racks & storage pools.
– This will serve your devs in knowing where & how to deploy services, and how to architect their software.
2. Supplying general-purpose building blocks that allow developers to build modern apps.
– Redundant storage, network, and compute are the building blocks of the client-server era.
– Cloud-native era building blocks consist of DBs, queues, LBs, and name registration (often for micro-service discovery).

Conclusion

As a CIO, an IT manager, or a system admin, you should always consider the costs of your infrastructure, and try to determine whether your developers make the best use of it. When infrastructure downtime occurs and heads are flying, ask your CTO / R&D managers whether they would blame IT for an Amazon cloud outage.
The first shift in making Private Clouds great (again) is to treat them exactly the same way you would a public cloud.

Finally, let’s conclude with a thought experiment. Which is the cheaper, more cost-effective option?

1. Spending $2M on a SAN storage rack, including its fabric, with a premium hypervisor attached to it.
2. Rewriting the app to take advantage of your infrastructure, with 10 software engineers on a 1-year project.

Should you invest time & money in maintaining top-tier hardware, or in modernizing your business & processes via software?

To Be Continued …

Docker Machine on OSX, with VMware Fusion

For those of you who want to experiment with new technologies, e.g. Docker – here’s a nice tutorial on how to run Docker on OSX without boot2docker, which uses VirtualBox. Since I have VMware Fusion on my Mac, I was looking for a way to get rid of VirtualBox while still running Docker on OSX with ease, using VMware Fusion instead, and getting direct access to the Docker containers like you would on native Linux.

Prequel

In OSX, the Docker daemon can’t run natively, since Docker relies on Linux kernel features for containers (akin to FreeBSD ‘Jails’) that Mac OSX simply doesn’t have. Bottom line – you cannot run Docker on OSX natively.

To the rescue – Boot2Docker.


Boot2Docker is a nice CLI & package that installs VirtualBox on your Mac and creates a small VM with a preconfigured Boot2Docker ISO to boot from. Boot2Docker is also a tiny Linux distro aimed at doing one thing only – running Docker.

While this is usually not a real issue, the one thing you cannot do is connect to a container directly from a remote host – that is, unless you expose specific ports – since Docker also creates an internal NAT on its hosting box.

So, to summarize the problem again – how can I use Docker on my OSX Mac without having to use VirtualBox, while also being able to natively connect to my containers’ network?

Fusion & Docker-Machine

To the rescue – Docker-Machine & VMware Fusion!
Docker-Machine is a utility (in beta) that allows you to spin up simple Docker hosts with ease, on multiple public and private cloud providers. It supports a really long list of providers at the moment, like:

  • VMware vSphere/vCloudAir/Fusion 
  • OpenStack
  • Azure
  • Google
  • EC2
  • list goes on…

So essentially, what it does is similar to the boot2docker CLI, only not limited to just VirtualBox (in fact, I think the boot2docker code base has been merged into docker-machine, but never mind :) ).
First, we’ll install docker-machine for OSX by simply downloading it here, renaming the downloaded file to “docker-machine”, giving it the right permissions, and moving it to a generally available folder on OSX.
Also, let’s download the docker client for OSX with brew.
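Putting it together, something like this (the release URL and version below are illustrative – grab the latest darwin binary from the docker-machine releases page):

[code]
# Download the docker-machine binary (URL/version shown are illustrative)
curl -L https://github.com/docker/machine/releases/download/v0.3.0/docker-machine_darwin-amd64 -o docker-machine
chmod +x docker-machine
sudo mv docker-machine /usr/local/bin/

# And the docker client itself, via Homebrew
brew install docker
[/code]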

Now, let’s create a docker-machine on our Fusion instance:
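[code]
# Create a Boot2Docker VM on Fusion using the vmwarefusion driver
docker-machine create -d vmwarefusion osxdock
[/code]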

This will create a new VM within Fusion named osxdock. Now we can work with our Docker host just as we would with VirtualBox. If we expose ports on the Boot2Docker host, we can access them via the VM’s local IP and the port allocated by Fusion.

To get our docker client to connect to our docker VM, we’ll evaluate the output of docker-machine env osxdock, which basically sets our environment variables to point at the currently active docker-machine:
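In a bash shell:

[code]
eval "$(docker-machine env osxdock)"
docker ps   # the client now talks to the Fusion-hosted docker daemon
[/code]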

Now, in order to connect natively to our containers, like we would on a Linux distro running Docker directly, we’ll fiddle around with Fusion’s networking.

First, let’s create a new network (we don’t want to modify existing networks, although it should be OK): VMware Fusion -> Preferences -> Network -> click the lock to unlock, and add a network with the + sign.

Mark only “Connect the host Mac to this network” & “Provide addresses on this network via DHCP”. I prefer to assign something close to the Docker network, like 172.18.0.0/16, as my DHCP subnet. Let’s take our docker-machine offline so we can add a network interface to it:
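[code]
docker-machine stop osxdock
[/code]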


Open the machine within Fusion and add a NIC that uses the new network you just created. In order for us to communicate with the containers, we’ll create a route via that NIC, so each time our Mac tries to access a container, traffic is routed through the docker-machine’s new NIC.

Now for the last part. Since the docker-machine essentially just boots a Boot2Docker ISO, and thus has no persistence, we’ll modify the Fusion network configuration so our route can remain static.
Open up a terminal and go to the network settings folder (vmnet<number>) – in my case, 4:
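[code]
# Fusion's per-network DHCP config lives here (the vmnet number may differ on your setup)
cd /Library/Preferences/VMware\ Fusion/vmnet4
sudo vi dhcpd.conf
[/code]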

We will add a DHCP reservation, so our osxdock VM always gets the same IP automatically on the new NIC we’ve configured.
Add something like the following to the bottom of the file (a minimal sketch – substitute your own values, as explained below):
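[code]
host osxdock {
    hardware ethernet EE:EE:EE:EE:EE:EE;  # osxdock's MAC address
    fixed-address 172.18.0.10;            # an IP from the range at the top of this file
}
[/code]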

Where EE:EE… stands for osxdock’s MAC address, and the fixed IP is the one you want to give it. Make sure you allocate an IP from the DHCP range configured in the first section of the file.
You will need to restart Fusion after this configuration change, or just run this command to restart Fusion networking:
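[code]
# vmnet-cli ships with Fusion (path assumes a default install; older versions used boot.sh)
sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --stop
sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --start
[/code]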

Lastly, bring up your docker-machine, and make sure it got the IP address you configured, by running ifconfig on the docker VM:
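[code]
docker-machine start osxdock
# the added NIC is usually eth1 on Boot2Docker; expect your fixed IP, e.g. 172.18.0.10
docker-machine ssh osxdock ifconfig eth1
[/code]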

Now tell your Mac to route all traffic to the container subnet – 172.17.0.0/16 by default – through the osxdock VM’s “static” IP, like so:
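[code]
# 172.18.0.10 is the fixed IP reserved for osxdock above
sudo route -n add -net 172.17.0.0/16 172.18.0.10
[/code]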


You can also create a permanent static route using the OSX StartupItems mechanism, but I’ll let you Google that one, since it isn’t short, unfortunately.

Presto! You can now run Docker with a “native feel” on OSX, using VMware Fusion! You’ll be able to ping containers, access them without exposing ports, and work as if you were in a native Linux environment.

To check this, simply run a container on osxdock, inspect it to get its IP, and ping it, with commands like the following:
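[code]
# the nginx image and container name are just an example
docker run -d --name web nginx
IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' web)
ping "$IP"
[/code]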

Happy devving!

vCAC (vRA) Cloud Client is GA!

This is something that has gone a bit under the radar, generally speaking (even mine!). A couple of days ago, VMware released its vRealize Cloud Client. But what is it, you ask?
Well, essentially, Cloud Client is a tool built to automate various tasks within vCAC 6.x, like:

  • Creating blueprints / catalog items
  • Requesting catalog items
  • Activating vCAC actions on existing items
  • Creating IaaS endpoints
  • Automating SRM fail-overs under vCAC management (!!!)
  • Launching vCO workflows (!!!!)
  • Writing scripts using the CloudClient CLI

A Bit of Background

So, this great tool was initially built internally to help support some of the complex automation we do around here at VMware R&D – which is why CloudClient is already at v3.0. Personally, I like tools like these that come directly out of an engineering necessity, mainly because they come from the purest of use cases – our own VMware-internal ones.

Unlike vCAC CLI, which went GA with vCAC 6.1 and is more of a CLI for operating vCAC’s REST APIs, CloudClient lets you do a lot of things within vCAC with simple, one-line commands!

What Can You Use it For?

From a customer perspective, this tool first brings great OpenStack-nova-CLI-like functionality: it can help your developers consume Infrastructure as a Service without interacting with the vCAC GUI, and automate machine requests using scripts.

So let’s say I want to test a build using Jenkins: I can call Cloud Client from any shell (cmd / Linux) or an external script, and automatically request a predefined catalog VM for my testing. After that, I can list my items and operate on the VM / Multi-Machine environment I got, all with CloudClient.
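As a hypothetical example, a Jenkins shell step could look something like this (the user, server, and catalog item names are made up; the command syntax matches the examples later in this post):

[code]
./cloudclient.sh vra login userpass --user jenkins@domain.com --password MyPassword --server vcac-va.domain.com --tenant mytenant
./cloudclient.sh vra catalog request submit --groupid vmdemo --id CentOSx64 --reason "CI build test"
[/code]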

Using Cloud Client

First, grab CloudClient! After you’ve done that, you’ll need to make sure that wherever you run it (bash / cmd), ‘java’ is recognized as an operable program – meaning, on Windows, that the “C:\Program Files\Java\jre7\bin\” folder is in your ‘Path’ environment variable, so java.exe can be run from wherever you’re running CloudClient.
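For example, on Windows you can add it for the current cmd session like so (the JRE path is an assumption – adjust it to your install):

[code]
set Path=%Path%;C:\Program Files\Java\jre7\bin
java -version
[/code]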

After everything is set, just run cloudclient.bat / cloudclient.sh (and accept the EULA once – be patient, this awesome CLI thing is FREE!).
Once you’ve accepted the EULA, you should see this:

[Screenshot: the CloudClient prompt]

Next, if you’re wondering what this thing can do, type ‘help’, which will show you all the commands available in CloudClient.
Keep in mind that you can always use the Tab key to auto-complete the next possible command! Also, if you press Tab and nothing appears, try adding dashes, like “vra command --”, and then press Tab to see which parameters are available.

In order to log in to vRA, we’ll type:

[code]vra login userpass --user user@domain.com --password MyPassword --server vcac-va.domain.com --tenant mytenant[/code]

If you’ve done it right, you should get a ‘Successful’ prompt back! For our next example, let’s list all available catalog items:

[code]vra catalog list[/code]

The output should be:
[Screenshot: output of ‘vra catalog list’]

And finally, to make a request happen, we’ll need to perform a command similar to this:

[code]vra catalog request submit --groupid vmdemo --id CentOSx64 --reason Because --properties vminfo.project=ERP,provider-VirtualMachine.CPU.Count=2,provider-VirtualMachine.Memory.Size=2048[/code]

Inspecting this command carefully, you can see I’ve submitted a couple of properties with the request:

  • vminfo.project
  • provider-VirtualMachine.CPU.Count
  • provider-VirtualMachine.Memory.Size

So, CPU Count & Memory Size are regular vCAC (vRA) properties, though when submitted through the API they need the ‘provider-’ prefix – the same as we saw when exploring the REST API through Firefox.

Some behaviour changes between 6.0 / 6.1 – in 6.1, if CPU/Memory are not set, the request will go through with the blueprint’s minimum CPU/Memory. In 6.0 (though I haven’t tested it), I believe the request will fail. So, FYI :)

I must say, this is just a very short introduction to CloudClient and its capabilities. So go ahead, explore it, and if more posts are needed – I’ll be sure to write them.

So leave your comments below! If you want, the official download page for CloudClient is linked here.

Enabling DevOps using vCAC 6

It’s been a long time since I last wrote a post; unfortunately (or fortunately, I’ll admit), I’ve been working frantically on a very special use case for vCAC 6, using it to demonstrate a DevOps scenario.
In this scenario, vCAC is used as an IaaS platform letting Jenkins drive a full continuous integration process: building code, deploying it on a Multi-Machine environment (a customer’s requirement), and generating a new build for QE to test.

In order to build this, I used some of the principles demonstrated in my previous article, writing a short code activity for part of the implementation, while also using a new vCO plugin that isn’t released yet and is actually in pre-beta.

The general description of this very interesting use case goes like this:
1. Jenkins asks vCAC 6, via REST API calls, to provide a clean ‘vanilla’ IaaS environment (a rough sketch of such a call follows below).
2. vCAC deploys it, then runs a Puppet command as an “On”-state workflow to install the nightly CI build on the VMs in the Multi-Machine environment.
3. Once the install is done, Jenkins executes a custom Multi-Machine action – ‘Clone To Catalog’ – through the vCAC API.
4. The VMs are cloned, blueprints are created, and a new catalog item is created in the ‘Night builds’ service.
5. Jenkins executes a final command, massively requesting the new catalog item for the sleeping QE personnel, so when they arrive at work in the morning, they have a new environment ready for QA testing.
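For illustration, here is a minimal sketch of step 1 using curl – the endpoints follow the vCAC/vRA identity and catalog-service APIs, but the hostname, credentials, and payload file are all placeholders:

[code]
# Obtain an authentication token from the identity service
TOKEN=$(curl -sk https://vcac-va.domain.com/identity/api/tokens \
  -H 'Content-Type: application/json' \
  -d '{"username":"jenkins@domain.com","password":"MyPassword","tenant":"mytenant"}' \
  | python -c 'import sys, json; print json.load(sys.stdin)["id"]')

# Submit a catalog request (payload exported from a reference request)
curl -sk https://vcac-va.domain.com/catalog-service/api/consumer/requests \
  -H "Authorization: Bearer $TOKEN" \
  -H 'Content-Type: application/json' \
  -d @request-payload.json
[/code]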

I believe a lot of organisations should adjust their development cycles to be this efficient. It results in faster-than-ever time to market, and software versions being rapidly developed.
DevOps is real, and cloud is definitely the answer for it.

This is also a great example of utilizing the vCAC 6 APIs (which are currently in beta) and maximizing the use of VMware’s main cloud solution.

Check out the demonstration video I made below, and be sure to leave any comments in the comment section!
