
The Hybrid Cloud Misconception

“Hybrid Cloud” is a common term in the IT industry. It has mostly been reduced to an empty marketing term that doesn’t depict what a hybrid cloud should actually be about. Let’s analyze it, and define its true meaning.

When people use or think about the term “Hybrid Cloud”, I believe they assume one of two cases.

Public Cloud Migration

This scenario is about enterprises that have both a legacy datacenter for older workloads and newer workloads running on AWS, Azure, Google Cloud, etc.

They migrate the applications that will benefit from the agility the public cloud enables, and while doing so they have to control costs and maintain IT governance over both environments, which are very different by nature.

Existing Workload Mobility

This scenario is about enterprises with applications (often monolithic apps) that migrate as necessary from the private to the public cloud. These apps are moved without any changes to the code base; they are just an effort by CIOs to claim they are “in the cloud”. While this may help bring IT’s TCO down, it won’t actually improve business agility in any significant way.

Breaking the Myth

Let’s stop and break down these use cases into their actual meaning and business value, assuming that IT’s main purpose is to enable the business, whether by helping developers move fast or by keeping the lights on.

In the first ‘public cloud migration’ scenario, the hybrid cloud is nothing more than a migration-phase necessity. This scenario captures the business value of the public cloud, helping developers move quickly by using elastic cloud services and thus providing business agility. The problem here, though, is cloud lock-in, and a TCO that rises with every month that goes by using services like Lambda, RDS, and object storage.

The second ‘workload mobility’ scenario helps lower the TCO of existing apps by removing datacenter hardware and co-locating to the public cloud, but it misses the opportunity to properly enable the business through faster time to market. The lowered cost is achieved by keeping the workload itself as is, without using any cloud services beyond the basics.

Hence, in order to get the best of both scenarios, “Hybrid Cloud” should really be redefined as “Compatible Hybrid Cloud”, where by “Cloud” I mean the public cloud, and by “Compatible” I refer to developer APIs and every other process involved in deploying, delivering, and operating software.

Let’s imagine such a “Compatible Hybrid Cloud”. It would provide us with the known costs of our private cloud, helping us leverage our current investment, together with the modern way of developing cloud-native applications. In this approach, we gain both cost control and the business agility enabled by faster R&D.

But we don’t actually even need to imagine such a solution. Such solutions already exist, in the likes of Stratoscale’s Symphony or Microsoft’s Azure Stack.

Conclusion

The original term or model known as “Hybrid Cloud” misses the point entirely. Private clouds, as they are built today, mostly focus on automating old & redundant ITSM and ITIL processes.

These kinds of private cloud implementations mostly undermine the true meaning of cloud, often missing the business goal of enabling agile R&D and business development.

As sysadmins, DevOps personas, CIOs & CTOs, we should all strive to enable the business to move faster than before. Having a fast business means supplying developers with the modern tools & building blocks they need in order to develop applications quickly.

To be continued…

Docker for vSphere Admins

In this post I want to explain what Docker is in such a simple way that every vSphere admin will get it in seconds :) I initially had a difficult time understanding the value of Docker (vs. VMs), but I think the analogy I’ve come up with is quite simple.

The TL;DR version of what Docker is, simply put – Docker is the shiny new Linux-based ThinApp, not necessarily for end-user applications but more for dev apps – DBs, backends, etc.

YES. I’ve said it, and I’ll repeat it: Docker, in my eyes, is VERY comparable to ThinApp / App-V / any other application virtualization software.

I’ll elaborate.

What Docker initially aims to do is help applications get delivered seamlessly across environments using LXC, or Linux Containers, which are essentially a method of isolating application processes within Linux, dictating which resources are and aren’t available to your specific app.

In a sense, this is exactly what ThinApp / App-V tried to do in the past, only for the EUC market. You take IE6, you install it in an isolated ‘fake’ environment (with a fake registry, a fake C:\ drive, etc.), and there you go – you can deploy it on any Windows OS.

Essentially, when you build a Dockerfile, which is used to create Docker images, you are doing the same. You decide on a base OS, with a statement like:
“FROM centos:7”
or any other OS you wish to build your application on. Then you describe how your app is installed, whether that’s running scripts or yum commands, using the different tools Docker provides you with, like environment variables. A command like:
“RUN yum -y install httpd”, for example, describes what is necessary to dockerize an Apache web server.

Lastly, you define the entry point for the running container. Meaning, you select which ‘.exe’ should run when the container is started (for those of you who’ve used ThinApp); in Linux, that will be something like running a script, starting a service, or running a binary file.
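Putting those pieces together, here’s a minimal sketch of such a Dockerfile. Treat it as an illustration rather than a production image; the package name and binary path are the standard CentOS 7 ones:

    # Base OS layer - the CentOS 7 userland libraries and filesystem
    FROM centos:7

    # Describe how the app is installed
    RUN yum -y install httpd && yum clean all

    # Document the port the web server listens on
    EXPOSE 80

    # The 'endpoint' - the single process to run when the container starts
    CMD ["/usr/sbin/httpd", "-DFOREGROUND"]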

So now, I can build my Apache web server Docker image using my Dockerfile, and run it on any Linux distro I want to, since the dockerized Apache *thinks* it’s running on CentOS 7 – it sees the centos image (which is essentially the CentOS libraries and Linux filesystem structure) while using the host’s Linux kernel.
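A quick build-and-run sketch, assuming that Dockerfile sits in the current directory (the image name ‘my-httpd’ and host port 8080 are arbitrary choices):

    docker build -t my-httpd .
    docker run -d -p 8080:80 my-httpd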

That’s why container best practices dictate that you’re better off running a single process per container – you are just virtualizing an app along with its OS environment. In that sense, Docker and VMs are not opposites at all. VMs solve an operations issue, an OS being dependent on hardware; Docker solves a developer issue, an app being dependent on a specific Linux OS or OS packages.

So, to finalize my weird but valid (IMO) comparison: if Linux had IE6, then using Docker you could run IE6 & IE7 at the same time without any OS issues.

Reset Photon OS root Password

So, you’ve gone cloud native and installed Photon OS. Since my team here at VMware develops quite a few things using Photon OS and Docker, I’ve come across a situation where I needed to reset Photon’s root password. Here is how it’s done.

[Image: Photon OS boot screen (photon_grub2)]

Shut down your Photon OS VM and restart it, then press the ‘e’ key when you get to the boot screen (shown above). This will take you to Photon’s GRUB2 edit interface.

From there, the process is fairly simple, as all you have to do is add a couple of parameters to the end of the Linux boot parameters line.
Add:

 rw init=/bin/bash 

The full line should look something like this when you’re done.

[Image: GRUB2 boot parameters after editing (photon_grub2_editted)]

Now press Ctrl+X to continue the boot sequence. The boot process will end in a bash shell, where punching in the ‘passwd’ command will reset your root user password.
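The session at that shell looks roughly like this (a minimal sketch; prompts and paths vary slightly between Photon versions):

    passwd       # set the new root password (typed twice)
    sync         # flush filesystem buffers, since no init system is running
    reboot -f    # force a reboot; if the command is unavailable, a hard VM reset works too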

And remember, kids –
Whoever has console rights to a server owns root/admin privileges (true for any OS).

Docker Machine on OSX, with VMware Fusion

For those of you who want to experiment with new technologies, e.g. Docker – here’s a nice tutorial on how to run Docker on OSX without boot2docker, which uses VirtualBox. Since I have VMware Fusion on my Mac, I was looking for a way to get rid of VirtualBox while still running Docker on OSX with ease, using VMware Fusion instead, and getting direct access to the Docker containers like you would on native Linux.

Prequel

On OSX, the Docker daemon can’t run natively, since Docker relies on Linux kernel features for containers (similar in spirit to FreeBSD ‘jails’) that Mac OSX simply doesn’t have. Bottom line – you cannot run Docker on OSX natively.

To the rescue – Boot2Docker.

[Image: Docker host running as a VM on a Mac (mac_docker_host)]

Boot2Docker is a nice CLI & package that installs VirtualBox on your Mac and creates a small VM with a preconfigured Boot2Docker ISO to boot from. Boot2Docker is also a tiny Linux distro aimed at doing one thing only – running Docker.

The one thing you cannot do – though it’s usually not a real issue – is connect to a container itself while you’re on a remote host; that is, unless you expose specific ports for it, since Docker also creates an internal NAT on its hosting box.

So, to summarize the problem again – how can I use Docker on my OSX Mac without having to use VirtualBox, while also being able to natively connect to my containers’ network?

Fusion & Docker-Machine

To the rescue – Docker-Machine & VMware Fusion!
Docker-Machine is a utility (in beta) that allows you to start simple Docker hosts with ease, on multiple public and private cloud providers. It supports a really long list of providers at the moment, like:

  • VMware vSphere/vCloudAir/Fusion 
  • OpenStack
  • Azure
  • Google
  • EC2
  • list goes on…

So essentially, what it does is similar to the boot2docker CLI, only not limited to just VirtualBox (in fact, I think the boot2docker code base has been merged into docker-machine, but never mind :) ).
First, we’ll install docker-machine for OSX by simply downloading it here, renaming the downloaded file to “docker-machine”, giving it the right permissions, and moving it to a generally available folder on OSX.
Also, let’s download the docker client for OSX with brew.
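A sketch of those steps in the terminal, assuming the binary landed in ~/Downloads (the exact file name depends on the release you grabbed):

    # rename, move onto the PATH, and make executable
    mv ~/Downloads/docker-machine_darwin-amd64 /usr/local/bin/docker-machine
    chmod +x /usr/local/bin/docker-machine

    # install the docker client via Homebrew
    brew install docker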

Now, let’s create a docker-machine on our Fusion instance.
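The create command looks like this; vmwarefusion is docker-machine’s Fusion driver, and osxdock is just the machine name I’ll use throughout this post:

    docker-machine create --driver vmwarefusion osxdock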

This will create a new VM within Fusion named osxdock. Now we can work with Docker as we would on VirtualBox: if we expose ports on the Boot2Docker host, we can access them via the VM’s local IP and the port allocated by Fusion.

To get our docker client to connect to our docker VM, we’ll run the command eval "$(docker-machine env osxdock)", which basically sets our environment variables to point at the currently active docker-machine.
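As a quick sanity check, after setting the environment, docker ps should return an (empty) list rather than a connection error:

    eval "$(docker-machine env osxdock)"
    docker ps    # now talks to the daemon inside the osxdock VM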

Now, in order for us to be able to natively connect to our docker machine, like we would on other Linux distros running Docker directly, we’ll fiddle around with Fusion networking.

First, let’s create a new network (we don’t want to modify existing networks, although it should be OK): VMware Fusion -> Preferences -> Network -> click the lock to unlock, and add a network with the + sign.

Mark only “Connect the host Mac to this network” & “Provide addresses on this network via DHCP”. I prefer to assign something close to the Docker network, like 172.18.0.0/16, as my DHCP subnet. Let’s take our docker-machine offline so we can add a network interface to it:
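Stopping the machine is a single docker-machine command:

    docker-machine stop osxdock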


Open the machine within Fusion and add a NIC that uses the new network you just created. In order for us to communicate with the containers, we’ll create a route through that NIC, so that each time our Mac tries to access a Docker container, the traffic routes through the docker-machine’s new NIC.

Now for the last part. In order for our route to remain static – and since the docker-machine essentially just boots a Boot2Docker ISO and thus has no persistence – we’ll modify the Fusion network configuration.
Open up a terminal and go to the network’s settings directory (vmnet<number>; in my case, 4):
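On a default Fusion install these settings live under /Library/Preferences/VMware Fusion; the vmnet number is whatever Fusion assigned to the network you just added:

    cd "/Library/Preferences/VMware Fusion/vmnet4"
    sudo vi dhcpd.conf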

We will now add a DHCP reservation, so our osxdock VM will always automatically get the same IP on the new NIC we’ve configured.
Add the following configuration to the bottom of the file:
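A minimal sketch of the reservation stanza, in dhcpd.conf syntax (the MAC address and IP below are placeholders):

    host osxdock {
        hardware ethernet EE:EE:EE:EE:EE:EE;
        fixed-address 172.18.0.42;
    }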

Where EE:EE… stands for osxdock’s MAC address, and fixed-address is the fixed IP you want to give it. Make sure you allocate an IP from the subnet configured in the first section of the file.
You will need to restart Fusion after this configuration change, or just run the following commands to restart Fusion networking:
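Fusion ships a vmnet-cli utility for exactly this; the path below is the default install location:

    sudo "/Applications/VMware Fusion.app/Contents/Library/vmnet-cli" --configure
    sudo "/Applications/VMware Fusion.app/Contents/Library/vmnet-cli" --stop
    sudo "/Applications/VMware Fusion.app/Contents/Library/vmnet-cli" --start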

Lastly, bring your docker-machine back up, and make sure it gets the IP address you’ve configured by running ifconfig on the docker VM.
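For example, over docker-machine’s built-in SSH:

    docker-machine start osxdock
    docker-machine ssh osxdock ifconfig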

Now tell your Mac to route all traffic for the container subnet – which is 172.17.0.0/16 by default – to the osxdock VM’s “static” IP, like so:
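With the example reservation from above (172.18.0.42) as the gateway, the OSX route command is roughly:

    sudo route -n add -net 172.17.0.0/16 172.18.0.42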


You can also create a permanent static route using the OSX StartupItems options, but I’ll let you Google that one, since it isn’t short, unfortunately.

Presto! You can now run Docker with a “native feel” on OSX, using VMware Fusion! You’ll be able to ping containers, access them without exposing ports, and work as if you were in a native Linux environment.

To check this, simply run a container on osxdock, inspect it to get its IP, and ping it, with commands like the following:
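A sketch using a stock nginx image; the container IP shown is just an example, use whatever inspect returns:

    docker run -d --name web nginx
    docker inspect --format '{{ .NetworkSettings.IPAddress }}' web    # e.g. 172.17.0.2
    ping 172.17.0.2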

Happy devving!

It’s vBlog Voting Season!

It’s voting season in the blogosphere, and you get to influence the results! For the past ±2 years I’ve been blogging about vRealize Automation (formerly vCAC), ever since version 5.1 came out :) It seems like only yesterday, but I guess it’s been a while, ain’t it?
Since my role shift to the vRealize Air team I’ve been VERY busy, but that’s a positive thing! I’m now starting to set up a new lab, and will also blog a whole lot about vRealize Air Automation (vRAA), which is our currently-in-beta vRealize Automation as a Service, hosted in vCloud Air. So I expect a LOT of great blog posts in the upcoming year, just for you, my readers! It’s very worthwhile to stay tuned via RSS, and even more worthwhile to help my blog get to the Top 100 mark in the vSphere-Land vBlog contest!

If you’ve enjoyed my work, used my site, or asked a question and got a prompt reply (I do try my best), please take 5 minutes to vote for my blog. It would mean a lot to me, and will let me know I’m doing this blogging thing right :)
You can cast your vote here

Sincerely,
Omer