
Story of Mr. IT & Ms. Cloud

How you think about and operate your infrastructure and private cloud is crucial to how your organization perceives and utilizes new technologies.
IT architects often dismiss the term ‘Cloud’ as a buzzword, rather than recognizing it as a pattern of deploying infrastructure.

Automation, blueprinting & orchestration… they’re all fine. But they miss some very basic truths about what cloud is at its core.

Previously, we revisited the term “Hybrid Cloud” and determined why it should be used more accurately, to embody the fact that enterprises should adopt and take advantage of the cloud model offered by public clouds.

This is where most blog posts around this topic raise the ‘cattle vs. pets’ argument. Though I agree with it, I’d like us to dive a bit deeper and take another perspective. The way I see it, it’s not necessarily about the ‘pets’ themselves, but rather their owner – Mr. IT.

Thinking like Mr. IT

What Mr. IT has done since the dawn of virtualization (and prior, in the native x86 era) is whatever it takes to make common infrastructure, such as disks, compute, and memory, more and more resilient. Throughout the years, IT departments have paid billions of dollars in hopes of making infrastructure fault-resistant: investing in redundant networks, clustered computing, and smart storage arrays that can often replicate seamlessly between data centers (Metro Clusters, VPLEX, and their friends), all of which end up costing a lot of money.

Mr. IT had a very good reason to do all of the above. Applications built in the client-server era were architected as monoliths and virtualized as-is from the days of early x86 rack servers. In this world, scale-up is the prominent methodology. Mr. IT continues building his redundant hardware layers, increasing app (and business) performance by adding better compute, memory, and storage.

While you can argue that these kinds of solutions pay off, as they aim to eliminate business-critical outages, today more and more disruptive technologies & patterns allow developers to shift the old app paradigm. Eventually consistent data models, NoSQL, microservices, etc. all allow – and require – infrastructure to be treated differently.

Thinking like Ms. Cloud

Public cloud, as it’s provided today by Amazon AWS, Azure, and GCP, is an implementation of an infrastructure deployment pattern. This pattern is very straightforward. Instead of saying “My infrastructure will make sure that no workload will ever fail,” it uses a different line of thinking, one that says: “I know that my infrastructure might fail, but if it does, the failure will be contained in this specific area.” This is the paradigm used by Ms. Cloud.

When AWS took this approach to sell IaaS (and later on, other services), it didn’t just invent a new pattern of deploying hardware; it created a new paradigm for developing software, by putting the availability ‘burden’ mostly on its customers – software developers – rather than on the AWS ‘Cloud Service’ IT team.
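To make that burden concrete, here is a minimal sketch of a developer absorbing it: spreading instances across two availability zones with the standard AWS CLI (the AMI ID here is hypothetical), so that a zone-level failure takes down at most half the fleet.

    # Launch one instance in each availability zone; a failure contained
    # to one zone leaves the other instance serving traffic.
    aws ec2 run-instances --image-id ami-12345678 --instance-type t3.micro \
        --placement AvailabilityZone=us-east-1a
    aws ec2 run-instances --image-id ami-12345678 --instance-type t3.micro \
        --placement AvailabilityZone=us-east-1b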

This same paradigm can and should be implemented in the enterprise. However, it requires a management mindset shift, and the correct tools.

When Amazon’s own S3 storage fumbled last March, most complaints were directed at app developers such as Slack, Giphy, etc. for not developing for redundancy. No one was really pointing a huge blaming finger at Amazon, since as long as S3 kept its 99.9% infrastructure availability SLA, Amazon had kept its part of the bargain.

And this is where Service Driven Infrastructure comes into play. If your developers have full visibility into your organization’s infra – its capabilities, limits, and fault/availability zones – they would, and should, take it upon themselves to guarantee proper application redundancy.

More importantly, though, their managers, peers, and IT administrators should drive them to do so.

Thinking like Ms. Cloud often means:
1. Offering self-service infra, with visibility into underlying constructs, such as clustered racks & storage pools.
– This will serve your devs in knowing where & how to deploy services, and how to architect their software.
2. Supplying general-purpose building blocks that allow developers to build modern apps.
– Redundant storage, network, and compute are the building blocks of the client-server era.
– Cloud-native era building blocks consist of DBs, queues, LBs, and name registration (often for microservice discovery), as in the sketch below.
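As a tiny illustration of that last building block, a service can find a peer through the platform’s name registration instead of a hard-coded IP. This sketch assumes a DNS-based registry such as Consul’s DNS interface, and the service name is hypothetical:

    # Resolve the current addresses of the ‘orders-api’ service
    # from the discovery DNS instead of hard-coding an IP.
    dig +short orders-api.service.consul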

Conclusion

As a CIO, an IT manager, or a system admin, you should always consider the costs of your infrastructure, and try to determine whether your developers make the best use of it. When infrastructure downtime occurs and heads are flying, ask your CTO / R&D managers whether they would blame IT for an Amazon cloud outage.
The first shift in making private clouds great (again) is to treat them exactly the same way you would a public cloud.

Finally, let’s conclude with a thought experiment. Which is the cheaper, more cost-effective option?

1. Spending $2M on a SAN storage rack, including its fabric, with a premium hypervisor attached to it.
2. Rewriting the app to fit your infrastructure, with 10 software engineers in a one-year project.

Should you invest time & money in maintaining top-tier hardware, or in modernizing your business & process via software?

To Be Continued …

Broadening Your Cloud Horizons

Since I still haven’t announced this officially, I thought I’d share some personal thoughts on my recent move from VMware.

After five exciting years implementing and developing cloud technology at VMware, I have decided to leave and join Stratoscale. Stratoscale is a startup doing new and disruptive work in on-premises cloud computing. Stratoscale’s vision is to enable an AWS-like cloud experience on premises, one that can be easily deployed and operated. The product has matured quite a bit and is deployed at a few leading organizations, but of course there is a huge scope of features to cover. As a product manager, I’ll be helping the company realize this complete vision. Quite a change!

I feel that VMware has become too big a company, and has actually lost some of the initial agility and inspiration it once gave myself and others. Its offerings no longer aim at breaking new technological ground, but rather at selling more and sustaining itself.

Personally, I look at the public cloud market, and I truly believe that the current way people implement ‘private cloud’ must change. The enterprise should adopt “the cloud”, not in terms of where it deploys its workloads (in-house or at a CSP) but rather in how it deploys its workloads, and how those workloads behave. You can read more about this in my previous blog post, and rest assured I’ll continue expressing this line of thinking in my blog going forward, so stay tuned.

As I started learning about Stratoscale’s product and some of the technologies it uses, such as OpenStack, I discovered another “gem” the company is building – not a technology but a content resource. It’s called the Private Cloud Wiki, and it aims to be a complete directory of all the knowledge that exists on the Internet around private cloud technology, strategy, and best practices. There is a growing section on VMware vRealize, another on OpenStack, and numerous general topics like private cloud economics, planning and deployment, operations, etc.

The Hybrid Cloud Misconception

“Hybrid Cloud” is a common term in the IT industry. It has mostly been reduced to an empty marketing term, not depicting what a “Hybrid Cloud” should be about. Let’s analyze it and define its true meaning.

When people use or think about the term “Hybrid Cloud”, I believe they assume one of these two cases.

Public Cloud Migration

This scenario is about enterprises that have both a legacy datacenter for older workloads and newer workloads running on AWS/Azure/Google, etc.

They are migrating some of the applications that will benefit from the better agility enabled by the public cloud, and while they do so they have to control costs and maintain IT governance and control over both environments, which are very different in nature.

Existing Workload Mobility

This scenario is about enterprises with applications that migrate as necessary from private to public cloud (often monolithic apps). These apps are moved without any changes to the code base; they are just an effort for CIOs to claim they are “in the cloud”. While this may help bring IT’s TCO down, it won’t actually improve business agility in any significant way.

Breaking the Myth

Let’s stop and break down these use cases into their actual meaning and business value, assuming that IT’s main purpose is to enable the business – whether by helping developers move fast, or by keeping the lights on.

In the first ‘public cloud migration’ scenario, the hybrid cloud is nothing more than a migration-phase necessity. This scenario captures the business value of the public cloud, assisting developers to move quickly by using elastic cloud services, thus providing business agility. The problem here, though, is cloud lock-in, and a TCO that rises with every month that goes by using services like Lambda, RDS, and object storage.

The second ‘workload mobility’ scenario helps lower the TCO of existing apps by removing DC hardware and co-locating to the public cloud, but misses the opportunity to properly enable the business through faster time to market. The lowered cost is achieved by keeping the workload itself as-is, without using any cloud services other than the basics.

Hence, in order to utilize the best of both these scenarios, “Hybrid Cloud” should really be redefined as “Compatible Hybrid Cloud”.
By “Cloud” I mean the public cloud, and by “Compatible” I refer to developer APIs and every other process involved in deploying, delivering, and operating software.

Let’s imagine such a “Compatible Hybrid Cloud”. It would provide us with the known costs of our private cloud, helping us leverage our current investment, together with the modern way of developing cloud-native applications. In this approach, we gain both cost control and the business agility enabled by faster R&D.

But actually, we don’t even need to imagine such a solution. Such solutions already exist, in the likes of Stratoscale’s Symphony or Microsoft’s Azure Stack.

Conclusion

The original term or model known as “Hybrid Cloud” misses the point entirely. Private clouds, as they are built today, mostly focus on automating old & redundant ITSM and ITIL processes.

These kinds of private cloud implementations mostly undermine the true meaning of cloud, often missing the business goal of enabling agile R&D and business development.

As sysadmins, DevOps personas, CIOs & CTOs, we should all strive to enable the business to move faster than before. Having a fast business means supplying developers with the modern tools & building blocks they need in order to develop applications quickly.

To be continued…

Docker for vSphere Admins

In this post I want to explain what Docker is in such a simple way that every vSphere admin will get it in seconds :) I initially had a difficult time understanding the value of Docker (vs. VMs), but I think the analogy I’ve come up with should be quite simple.

The TL;DR version of what Docker is, simply put – Docker is the shiny new Linux-based ThinApp, not necessarily for end-user applications but more for dev apps – DBs, backends, etc.

YES. I’ve said it, and I’ll repeat it: Docker, in my eyes, is VERY comparable to ThinApp / App-V / any other application virtualization software.

I’ll elaborate.

What Docker initially aims to do is help applications be delivered seamlessly across environments using something called LXC, or Linux Containers, which are essentially a method of isolating application processes within Linux, dictating which resources are and aren’t available to your specific app.

In a sense, this is exactly what ThinApp / App-V tried to do in the past, only for the EUC market. You take IE6, you install it in an isolated ‘fake’ environment (with a fake registry, fake C:, etc.), and there you go – you can deploy it on any Windows OS.

Essentially, when you build a Dockerfile, which is used to create Docker images, you are doing the same. You decide on a base OS, with statements like:
“FROM centos:7”
or any other OS you wish to build your application on. Then, you describe how your app is installed, whether it’s by running scripts or yum commands, using the different tools Docker provides you with, like environment variables, etc. So a command like:
“RUN yum install -y httpd”, for example, depicts what is necessary to dockerize an Apache web server (the -y flag matters, since no one is there to answer yum’s prompts during the build).

Lastly, you create the endpoint for the running container. Meaning, you select which ‘.exe’ should run when the container is started (for those of you who’ve used ThinApp); in Linux, that would be something like running a script, starting a service, or running a binary file, via a CMD or ENTRYPOINT statement.
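Putting those three parts together, a minimal sketch of such a Dockerfile might look like this (the -D FOREGROUND flag keeps Apache as the container’s foreground process, since a container lives only as long as its main process):

    # Base OS the app *thinks* it is running on
    FROM centos:7
    # How the app is installed
    RUN yum install -y httpd
    # The endpoint – the ‘.exe’ that runs when the container starts
    CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]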

So now I can build my Apache web server Docker image using my Dockerfile, and run it on any Linux distro I want, since the dockerized Apache *thinks* it’s running on CentOS 7 – using the centos image (which is essentially the CentOS libraries and Linux filesystem structure) while sharing the host’s Linux kernel.
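For example, building and running that image (the image name here is just an arbitrary tag) comes down to:

    # Build the image from the Dockerfile in the current directory
    docker build -t my-apache .
    # Run it detached on any Linux host, mapping port 80 to the container
    docker run -d -p 80:80 my-apache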

That’s why container best practices dictate that you’re better off running a single process per container, since it is just virtualizing an app against its OS environment. In that sense, Docker and VMs are not opposites at all. VMs solve an operations issue – an OS being dependent on hardware – and Docker solves a developer issue – an app being dependent on a specific Linux OS or OS packages.

So, to finalize my weird but valid (IMO) comparison: if Linux had IE6, then using Docker you could run IE6 & IE7 at the same time without any OS issues.

Reset Photon OS root Password

So, you’ve gone cloud native and installed Photon OS. Since my team develops quite a few things here at VMware using Photon OS and Docker, I’ve come across a situation where I’ve needed to reset Photon’s root password. Here is how it’s done.

[Image: Photon’s boot screen]

Shut down your Photon OS VM and restart it, then press the ‘e’ key when you get to the boot screen (seen above). This in turn will get you to Photon’s GRUB2 interface.

From there, the process is fairly simple, as all you have to do is add a couple of parameters to the end of the Linux boot parameters line.
Add:

 rw init=/bin/bash 

This should look something like this when you’re done.

[Image: Photon’s GRUB2 menu with the edited boot line]

Now press Ctrl+X to continue with the boot sequence, and the boot process will end in a bash shell. Punching in the ‘passwd’ command will reset your root user’s password.
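In full, the shell session looks roughly like this (a sketch; the sync and forced reboot are there because no init system is running to do it for you):

    # We booted straight into bash as root, with / mounted read-write
    passwd      # set the new root password
    sync        # flush the change to disk
    reboot -f   # force a reboot, since there is no init to ask nicely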

And remember, kids –
He who has console rights to a server owns root/admin privileges (true for any OS).