Navigation

Docker Machine on OSX, with VMware Fusion

For those of you who want to experiment with new technologies, e.g. Docker – here’s a nice tutorial on how to run Docker on OSX without boot2docker, which uses VirtualBox. Since I have VMware Fusion on my Mac, I was looking for a way to get rid of VirtualBox while still running Docker on OSX with ease, using VMware Fusion instead, and getting direct access to the Docker containers like you would on native Linux.

Prequel

In OSX, the Docker daemon can’t run natively, since Docker relies on Linux kernel features for containers (similar in spirit to FreeBSD ‘Jails’) which Mac OSX simply doesn’t have or doesn’t expose. Bottom line – you cannot run Docker on OSX natively.

To the rescue – Boot2Docker.


Boot2Docker is a nice CLI & package that installs VirtualBox on your Mac and creates a small VM with a preconfigured Boot2Docker ISO to boot from. Boot2Docker is also a tiny Linux distro aimed at doing one thing only – running Docker.

While this is usually not a real issue, the one thing you cannot do is connect directly to a container itself while you’re on a remote host (and from the container’s point of view, your Mac is a remote host). That is, unless you expose specific ports, since Docker also creates an internal NAT on its hosting box.

So, to summarize the problem again – how can I use Docker on my OSX Mac without having to use VirtualBox, while also being able to natively connect to my containers’ network?

Fusion & Docker-Machine

To the rescue – Docker-Machine & VMware Fusion!
Docker-Machine is a utility (in beta) that allows you to spin up simple Docker hosts with ease, on multiple public and private cloud providers. It supports a really long list of providers at the moment, such as:

  • VMware vSphere/vCloudAir/Fusion 
  • OpenStack
  • Azure
  • Google
  • EC2
  • list goes on…

So essentially, what it does is kind of like the boot2docker CLI, only not limited to just VirtualBox (in fact, I think the boot2docker CLI code base is being merged into docker-machine, but never mind :) )
First, we’ll install docker-machine for OSX by simply downloading it here, renaming the downloaded file to “docker-machine”, giving it the right permissions, and moving it to a folder on your PATH.
Also, let’s download the Docker client for OSX with brew.
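Roughly, that setup looks like this (the downloaded file name and paths are just an example – adjust to wherever your download landed):

[code]
# rename the downloaded binary, make it executable, and put it on the PATH
# (the exact file name depends on the release you grabbed)
mv ~/Downloads/docker-machine_darwin-amd64 /usr/local/bin/docker-machine
chmod +x /usr/local/bin/docker-machine

# Docker client for OSX via Homebrew
brew install docker
[/code]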

Now, let’s create a docker-machine on our Fusion instance.
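The command itself is just a docker-machine create using the vmwarefusion driver (osxdock is simply the name I’m giving the machine):

[code]docker-machine create --driver vmwarefusion osxdock[/code]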

This will create a new VM within Fusion named osxdock. Now, we can work with Docker as we would with VirtualBox. If we exposed ports to the Boot2Docker host, we would be able to access them through the VM’s local IP and the port allocated by Fusion.

To get our Docker client to connect to our Docker VM, we’ll run the command $(docker-machine env osxdock), which basically sets our environment variables to point at our currently active docker-machine.
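In practice I wrap it in eval so the exported variables actually land in the current shell – a quick sketch:

[code]
eval "$(docker-machine env osxdock)"
docker ps    # the client should now be talking to the osxdock VM
[/code]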

Now, in order for us to be able to natively connect to our docker machine, like we would on other linux distros running docker directly, we’ll fiddle around with Fusion networking.

First, let’s create a new network (we don’t want to modify existing networks, although it should be OK): VMware Fusion -> Preferences -> Network -> click the lock to unlock, and add a network with the + sign.

Mark only “Connect the host Mac to this network” & “Provide addresses on this network via DHCP”. I prefer to assign something close to the Docker network, like 172.18.0.0/16, as my DHCP subnet. Let’s take our docker-machine offline so we can add a network interface to it:
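For a docker-machine managed VM that’s simply:

[code]docker-machine stop osxdock[/code]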


Open the machine within Fusion and add a NIC that uses the new network you just created. In order for us to communicate with the containers, we’ll have to create a route via that NIC, so each time our Mac tries to access a Docker container, the traffic is routed through the docker-machine’s new NIC.

Now for the last part. Since the docker-machine essentially just boots a Boot2Docker ISO, and thus has no persistence, and since we want our route to remain static, we’ll modify the Fusion network configuration.
Open up a terminal and go to the network’s settings directory (vmnet<number>) – in my case, 4:
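On my install the per-network files live under /Library/Preferences/VMware Fusion (the vmnet number will match the network you created):

[code]
cd "/Library/Preferences/VMware Fusion/vmnet4"
sudo vi dhcpd.conf
[/code]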

We will actually add a DHCP reservation. So our osxdock VM will always get the same IP automatically on the new NIC we’ve configured.
Add the following configuration to the bottom of the file:
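The file uses standard ISC dhcpd syntax, so the reservation should look roughly like this (the MAC address and fixed IP below are placeholders):

[code]
host osxdock {
    hardware ethernet EE:EE:EE:EE:EE:EE;
    fixed-address 172.18.0.100;
}
[/code]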

Where EE:EE… stands for osxdock’s MAC address, and the fixed address is the fixed IP you want to give it. Make sure you allocate an IP from the DHCP range configured in the first section of the file.
You will need to restart Fusion after this configuration change, or just run this command to restart Fusion networking:
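Recent Fusion versions bundle a vmnet-cli utility inside the app that can bounce the networking (the exact path may vary slightly between versions):

[code]
sudo "/Applications/VMware Fusion.app/Contents/Library/vmnet-cli" --stop
sudo "/Applications/VMware Fusion.app/Contents/Library/vmnet-cli" --start
[/code]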

Lastly, bring up your docker-machine, and make sure it gets the IP address you’ve configured by running ifconfig on the Docker VM:
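Something along these lines:

[code]
docker-machine start osxdock
docker-machine ssh osxdock ifconfig
[/code]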

Now tell your Mac to route all traffic to the container subnet, which is 172.17.0.0/16 by default, via the osxdock VM’s “static” IP, like so:
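With the example reservation above (172.18.0.100), the route command on the Mac would be:

[code]sudo route -n add -net 172.17.0.0/16 172.18.0.100[/code]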


You can also create a permanent static route using the OSX StartupItems mechanism, but I’ll let you google that one, since it isn’t short, unfortunately.

Presto! You can now run Docker with a “native feel” on OSX, using VMware Fusion! You’ll be able to ping containers, access them without exposing ports, and work as if you were in a native Linux environment.

To check this, simply run a container on osxdock, inspect it to find its IP, and ping it, with commands like the following:
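A minimal sketch (nginx is just an example image; substitute the container IP reported by inspect):

[code]
docker run -d --name web nginx
docker inspect --format '{{ .NetworkSettings.IPAddress }}' web
ping <container-ip>
[/code]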

Happy devving!

It’s vBlog Voting Season!

It’s voting season in the blogosphere, and you get to influence the results! For the past ±2 years I’ve been blogging about vRealize Automation (formerly vCAC), ever since version 5.1 came out :) It only seems like yesterday, but I guess it’s been a while, ain’t it?
Since my role shift to the vRealize Air team I’ve been VERY busy, but that’s a positive thing! I’m now starting to set up a new lab, and will also blog a whole lot about vRealize Air Automation (vRAA), which is our vRealize Automation as a Service offering, currently in beta and hosted in vCloud Air. So expect a LOT of great blog posts in the upcoming year, just for you, my readers! It’s very worthwhile to stay tuned via RSS, and even more worthwhile to help my blog reach the Top 100 mark in the vSphere-Land vBlog contest!

If you’ve enjoyed my work, used my site, or asked a question and got a prompt reply (I do try my best), please take 5 minutes to vote for my blog – it would mean a lot to me, and will let me know I’m doing this blogging thing right :)
You can cast your vote here 

Sincerely,
Omer

vRealize Automation 6.2 (vCAC) – GA! What’s New?

Another quarter, another product release! vRealize Automation 6.2 is now GA (grab it!) and it brings much-awaited functionality to the table! This is the first release to have some impressive integration with VMware’s vROps (formerly vCOPS) product, which also goes GA today with version 6.0.

New Features!

vR Automation 6.2 / vR Operations 6.0 integration

Allows health badges to be viewed directly from the item page, giving the user a brief health summary of their VM. Resource reclamation is also available, with insights from vROps 6.0. And when you filter a VM’s performance once vROps is configured into the system, you’ll be able to search the metrics from vROps rather than vCenter.
These two features are a huge benefit for day-to-day management of your private cloud VMs, and will surely drive great value in any environment where vROPs & vRA are integrated.

ASD Form vCO Workflow Execution

The ASD functionality of vRealize Automation keeps getting better and better. The latest improvements include being able to invoke a vCO action from the item form when the form is displayed to the user. This enables us to retrieve data for that request from 3rd-party systems, or calculate additional information within vCO. This also applies to any custom day 2 operations you build for vRealize Automation.

ASD Import/Export Content

This release also enables us to import and export content from vRA! Stuff like service blueprints can now be transferred from instance to instance.


Proxy Configuration for vCloud Air Endpoint

For those of you managing vCloud Air VMs with vRealize Automation, leveraging vCloud Air through an enterprise proxy is now possible via a special proxy setting in the vCloud endpoint.

Centralized Node info/log collection in vRA-VA

The vRealize Automation VA had its management interface revamped, and now allows for a full view of the installed components from a single point, checking whether all components have communicated recently with the VA, and also the ability to collect logs in a centralized fashion, straight from the VA’s VAMI management UI.

vRealize Applications & Custom Properties

For those of you who have started exploring vRealize Applications, you can now expose the provisioned app’s ‘User queried’ properties in the vRA request, giving the user the ability to easily modify an application deployment at request time.

Come-Back Features!


Some of the (awesome) features that went away during the whole 5.2 -> 6.0 transition are now making a comeback! And this is GREAT news, since these were REALLY handy!

Editing Custom Properties on Approval

With vRA 6.2 you will be able to set custom properties to be editable by the approver! This is a wonderful feature that allows for a whole set of business logic approval use cases to be utilized again with vRA 6.2!

Calendar Widget


The almighty calendar of events widget is now back, and allows every user to view a calendar with all their item expiry / archiving / deletion dates right on the vRealize Automation home page! This is valuable for keeping track of your VMs and leases, and is a wonderful feature I really missed from 5.2.

Hidden Features!

Wait… What?
So you’re probably wondering – “Why are there hidden features?” Well, the features that I’m going to describe below aren’t really meant for regular use, but are more related to my new role. I did think of some enterprise use cases where these could be beneficial, but keep in mind that they are considered ‘Experimental’.

Installing DEMs / Agents in Different Domains

One of the issues I came across with some of my former customers (you know who you are!) was the need to manage several unrelated domains with one vRA instance. This naturally gets you to a decision point where IaaS is installed in a certain domain, be it management, prod, or wherever you see fit.


In order to have DEMs / vSphere Agents in other domains, you would previously need to do some nasty things. In this release, using the silent installer, you are able to grant a certain user in a remote domain (without any trust relationship) full access to the IaaS repository. Meaning, a vSphere Agent / DEM can be installed in a totally separate domain environment, under a different set of credentials from the main IaaS server components!

In order to do so, you’ll need to add two properties to the DEM’s silent install (WorkflowManagerInstaller.msi). The properties are DEM_REPO_USER / DEM_REPO_PASSWORD. An example of a silent install including these two properties should look something like this:

[code]msiexec.exe /i WorkflowManagerInstaller.msi /qn /norestart /Lvoicewarmup! DEM_INSTALL.log ADDLOCAL=All REPO_SERVER_URL="https://iaas-server.domain.com/repository/" REPO_HOSTNAME="iaas-server.domain.com" SERVICE_USER_NAME="Domain-B\user" SERVICE_USER_PASSWORD="Password1" REPOSITORY_USER="Domain-A\user" REPOSITORY_USER_PASSWORD="Password1" DEM_REPO_USER="Domain-A\user" DEM_REPO_PASSWORD="Password1" DEM_NAME="DEMW" DEM_NAME_DESCRIPTION="DEM Worker" INSTALLLOCATION="C:\Program Files….." VALID_DEM_NAME="1" MSINEWINSTANCE="1" TRANSFORMS=":DemInstanceId01" DEM_ROLE="Worker" HTTPS_SUPPORT=1 ENABLE_SSL=true MANAGERSERVICE_HOSTNAME="iaas-server.domain.com"
[/code]

For this trick to work on a vSphere Agent, simply run a config command after the agent is installed:

[code]VRMAgent.exe -Repo-SetCredentials --user <username> --password <password> --domain <domain>[/code]

Proxying DEM Workers

When installing a DEM Worker with the silent installer, you’ll be able to add a proxy configuration for that DEM. So if you have any DMZ environments that you want to manage, this could be a great way to do so!

To proxy the DEM / Agent, simply use the silent installer mentioned in the paragraph above, and set 3 new install parameters:

  • PROXY_ADDRESS <proxy ip address>:<port>
  • USE_SYSDEFAULT <true/false>
  • BYPASS_ONLOCAL <true/false>

These are pretty self-explanatory. USE_SYSDEFAULT tells the DEM to grab the proxy configuration from the default system configuration found in the IE proxy settings. BYPASS_ONLOCAL tells the DEM to bypass the proxy when it detects a call to the same network it’s on.
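As a rough illustration (the proxy address below is a placeholder, and the rest of the command line is the same silent install shown earlier):

[code]msiexec.exe /i WorkflowManagerInstaller.msi /qn /norestart ADDLOCAL=All <same properties as the DEM example above> PROXY_ADDRESS="10.10.10.10:3128" USE_SYSDEFAULT="false" BYPASS_ONLOCAL="true"[/code]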

vCAC Day 2: Detach Linked Clone VM

This post will demonstrate another very cool day 2 operation, detaching a linked clone VM. Though I didn’t implement it as part of a customer engagement, it just occurred to me that this can be done with some of the great vCO / ASD Day 2 operations.
In vCAC, when creating a linked clone blueprint, one must properly design the hardware infrastructure underneath. Or more correctly, your underlying storage.

There are many storage considerations that need to be taken into account when designing a Dev / QA-type linked clone environment with vCAC, since vCAC, unlike vCD, doesn’t currently abstract the underlying linked clone machine management.
While this is not an issue if planned correctly, performance hits are something that can always lurk behind dark corners, when many VMs are using the same Read-Only replica.

This got me thinking: if vCAC doesn’t do any abstraction on the VM provisioning layer like vCD does, you could actually detach, or inflate, a vCAC linked clone VM with a simple API call to vSphere – thus turning the VM into a full-clone machine, totally independent of the ‘source replica’ it was generated from.
This API call is generally available in vCD as well, as a ‘Consolidate’ operation. It’s available in the vApp VM menu, given the right permission and provided the VM is powered off. BUT, in vCD, a consolidated VM still won’t be able to do things like hard disk resizing – while vCAC VMs CAN!

The main difference, though, is that vCAC VMs, when detached from their linked-clone replicas, just turn into day-to-day vSphere VMs, while vCD VMs will still be under vCD’s ‘hard’ management – a bit more tricky to back up / restore etc., and still retaining the OvDC linked clone policy in some forms.

The Technical Details

Essentially, the API call we are going to utilize is on a VC:VirtualMachine object. The method is called ‘promoteDisks_Task’. We can see in vCO that this method accepts two parameters: a boolean that controls whether to detach (unlink) the disks from the linked clone replica, and an array specifying which disks to promote.

[Screenshot: the promoteDisks_Task method as shown in vCO]

We’ll create a workflow for our day 2 op. Since the VM needs to be powered off during this operation, we will prompt the user for the best time to shut down their VM, using an input of the ‘date’ type (ASD will take care of presenting it automatically with a nice calendar – how cool is that?!). The second input for the workflow is, of course, the VM itself, of type VC:VirtualMachine.

With vCAC 6.1, we can create 2 workflows to run in separate cases: when the VM is off, or on. The ‘Off’ state workflow will not contain a date input, and the ‘On’ state workflow will. For this blog’s purpose, I’ll demonstrate only the ‘On’ state workflow.

The workflow will wait for the time specified by the user, then power off the VM gracefully, then invoke the promote disks API call (we’ll get to that in a second), and lastly wait for the generated task to end. Of course, you can also opt to power the VM back on again.

So what is in that scriptable task, you ask? First, the scriptable task takes in the VC:VirtualMachine parameter, and has a VC:Task object as its output parameter (the task generated from promoting the disks). The actual code in there is:

[code]vcTask = vm.promoteDisks_Task(true)[/code]

This line executes the promoteDisks task on all of the VM’s disks: passing ‘true’ tells vSphere to detach (unlink) the disks rather than just consolidate them, and omitting the disk array (or passing null) applies the operation to every disk.
Once the user submits the request, vCO will do its thing, and by morning, that VM will turn into a regular boy! (VM!). The time this task takes may vary according to disk size, etc.

Once we have the workflow ready, we’ll configure it as an ‘On Status’ Day 2 operation like so:

[Screenshot: configuring the workflow as an ‘On Status’ day 2 operation]

And create an appropriate form for it:

[Screenshot: the ASD request form for the detach operation]

As far as IaaS reservations and policies for the user using up storage space go, a check can be made to see if the VM’s datastore has enough room for the operation. Apart from that, vCAC actually calculates full-disk usage on your reservation when you use linked clones. This is because linked clones are able to reach maximum VMDK size, thus causing over-allocation of their owning reservation. With this default method of calculating reservations, we can give our users the option to detach their VMs without worrying that their reservation will be over-subscribed.

As funny as it may seem, I couldn’t get a working vCAC system to demonstrate this fully and take screenshots, but I really wanted to get this post out as soon as possible. I’ll update it soon, hopefully, with the rest of the screenshots showing the VMs turning into full clones after the day 2 operation is initiated.

Also, a thank you is deserved for Niran Even-Chen, my VCDX friend, for letting me use his 6.1 system to take the screenshots above. Go check out his website – http://Cloud-Abstract.com

vCAC (vRA) Cloud Client is GA!

This is something that has gone a bit under the radar generally speaking (even mine!). A couple of days ago, VMware released its vRealize Cloud Client. But what is it you ask?
Well, essentially cloud client is a tool built to automate various tasks within vCAC 6.X, like:


  • Creating blueprints / catalog items
  • Requesting catalog items
  • Activating vCAC Actions on existing items
  • Creating IaaS Endpoints
  • Automate SRM fail-overs under vCAC management(!!!)
  • Launch vCO Workflows (!!!!)
  • Write scripts using cloudclient cli

A Bit of Background

So, this awesome tool was initially built internally to help support some of the complex automation we do around here at VMware R&D; hence, CloudClient is already at version 3.0. Personally, I like tools like these that come directly out of an engineering necessity, mainly because they come from the purest of use cases – our own VMware internal use cases.

Unlike vCAC CLI, which went GA with vCAC 6.1 and is more of a CLI tool for operating vCAC’s REST APIs, CloudClient lets you do a lot of things within vCAC with simple, one-line commands!

What Can You Use it For?

From a customer perspective, this tool brings great OpenStack-nova-CLI-like functionality: it can help your developers consume Infrastructure as a Service without interacting with the vCAC GUI, and automate the request of machines using scripts.

So let’s say I want to test a build using Jenkins: I can call CloudClient from any shell (cmd / Linux) or external script, and request a predefined catalog VM for my testing automatically. After that, you can list your items and operate on the VM / Multi-Machine environment you got, with CloudClient.

Using Cloud Client

First, grab CloudClient! After you’ve done that, you’ll need to make sure that wherever you run it (bash / cmd), ‘java’ is recognized as an operable program – meaning, you have the “C:\program files\Java\jre7\bin\” folder configured in your ‘Path’ environment variable, so you can run java.exe from wherever you’re running CloudClient.

After everything is set, just run cloudclient.bat / cloudclient.sh (and accept the EULA once – be patient, this awesome CLI thing is FREE!).
Once you’ve accepted the EULA, you should see this:

[Screenshot: the CloudClient prompt]

Next, if you’re wondering about the options in this thing, type ‘help’, which will show you all the commands available in CloudClient.
Keep in mind that you can always use the Tab key to auto-complete what commands can come next! Also, if you’re pressing Tab and nothing appears, try adding dashes, like “vra command --”, and then press Tab to see what parameters are available.

In order to log-in to vRA, we’ll type:

[code]vra login userpass --user user@domain.com --password MyPassword --server vcac-va.domain.com --tenant mytenant[/code]

If you’ve done it right, you should get a ‘Successful’ prompt back! For our next example, let’s list all available catalog items:

[code]vra catalog list[/code]

The output should be:
[Screenshot: catalog list output]

And finally, to make a request happen, we’ll need to perform a command similar to this:

[code]vra catalog request submit --groupid vmdemo --id CentOSx64 --reason Because --properties vminfo.project=ERP,provider-VirtualMachine.CPU.Count=2,provider-VirtualMachine.Memory.Size=2048[/code]

Inspecting this command carefully, you can see I’ve submitted a couple of properties with the request:

  • vminfo.project
  • provider-VirtualMachine.CPU.Count
  • provider-VirtualMachine.Memory.Size

So CPU Count & Memory Size are regular vCAC (vRA) properties, though when submitted through the API they need to have the ‘provider-’ prefix, which is the same as we saw when exploring the REST API through Firefox.

Some behaviour changed between 6.0 and 6.1 – in 6.1, if CPU/Memory are not set, the request will go through with the blueprint’s minimum CPU/Memory. In 6.0 (though I haven’t tested it), I believe the request will fail. So FYI :)

I must say, this is just a very short introduction to CloudClient and its capabilities. So go ahead, explore it, and if any more posts are needed – I’ll be sure to write them.

So leave your comments below! If you want, the official download page for CloudClient is linked Here