In the first part of this three-part series on cloud computing and the convergence of development and operations known as "devops," I explained how cloud computing is shifting the unit of deployment from the server to the application. This fundamental change in the focus of operations is having a profound effect on the nature of IT operations.
Why? The answer begins with understanding the relationship between applications, services and data (aka "payloads"), and cloud services. The infrastructure and platforms that you find in cloud computing are designed specifically so they can handle as wide a variety of functional needs as possible within the stated purpose of the service.
Infrastructure services, such as Rackspace Cloud Servers, try to handle as many x86-based applications as they possibly can, within constraints like networking architecture, service latency, and available capacity. Heroku, a Ruby on Rails platform service provider, tries to support as many Ruby applications as it can; as long as it's Ruby, there's a great chance it will run on Heroku with little or no modification.
Thus, these services are designed to assume as little as possible about any valid payload delivered to them. They allow configurability, to varying extents, but they must be told what to do.
So, what tells them what to do?
The application as the center of attention
An application-centric view of operations means you describe things like infrastructure needs, configurations, architectures, and service-level requirements in terms of the application, not the infrastructure. Since the application is the primary unit of deployment, and all policy applies to the application, the application becomes the dictator of its own needs.
That, in turn, has a profound effect on how the application is developed. In the bad old server-centric models, developers typically build the app, test a few run-time scenarios, deploy into production, and then hand off rules for fixing whatever goes wrong to the system administrators who take over the app.
This "wall of confusion," as Andrew Clay Shafer called it in a recent overview of devops, is a real source of tension between the application development team and the systems administration team. Developers architect just enough to cover deployment and the express requirements of the application. In an effort to prevent unexpected issues, system administrators dictate (or attempt to dictate) rules for deployments that turn out to be architectural constraints on applications.
In the new application-centric operations models, developers and operators have to work together to assure that any application deployment handles as many anticipated and unanticipated operational needs as possible. To do this, the entire application ecosystem needs to be engineered, as a system--which is only possible if the underlying infrastructure ecosystem can be manipulated through software.
Some of this can be done through the application code itself, but you don't want to tightly couple code to a specific infrastructure profile. That makes portability messy.
So, the concept of application policies becomes paramount: descriptive or prescriptive "rules," input parameters, and/or functional code that tell the cloud service or infrastructure what to do on behalf of the application. Policies are delivered to the cloud service through well-defined APIs, then mapped to specific infrastructure service profiles as necessary by the cloud provider.
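To make the idea concrete, here is a minimal sketch of what such a policy might look like as data an application carries with it. Everything here is hypothetical -- the field names, the `validate_policy` helper, and the policy shape are illustrative, not any real provider's API:

```python
# Hypothetical sketch: an application policy expressed as plain data
# that would be handed to a cloud provider's API on the app's behalf.
# All keys and names below are illustrative only.

app_policy = {
    "application": "orders-web",
    "runtime": "ruby-1.9",               # what the payload needs to run
    "instances": {"min": 2, "max": 8},   # scaling bounds, not server names
    "latency_slo_ms": 250,               # a service-level requirement
    "persistence": {"type": "relational", "backup": "daily"},
}

def validate_policy(policy):
    """Minimal check that the policy describes the app, not servers."""
    required = {"application", "runtime", "instances"}
    missing = required - policy.keys()
    if missing:
        raise ValueError(f"policy missing fields: {sorted(missing)}")
    return policy

validate_policy(app_policy)
```

Note that nothing in the policy names a server: the provider is free to map these requirements onto whatever infrastructure profile satisfies them.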
Since unique policies are generally declared for each application to be deployed, defining those policies suddenly becomes a part of the application design itself. This single fact means that developers and operations staff must be more tightly aligned than ever.
The operations disruption has begun
The result is an explosion of interest in new ways to build both applications and the server images that contain them. Early entrants in the space, such as provisioning automation vendor rPath or recent EMC acquisition FastScale, have suddenly become very interesting to both software vendors and enterprises. Open-source projects such as Chef and Puppet have gained tremendous traction. Cloud brokers such as RightScale and enStratus have either built their own devops scripting languages, adopted others, or both.
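The core idea behind tools like Chef and Puppet is worth spelling out: you declare the state you want, and the tool converges the system toward it idempotently, so running it twice changes nothing the second time. The sketch below illustrates that principle in plain Python; it is not Chef or Puppet syntax, and the function and state names are my own:

```python
# Illustrative sketch of desired-state convergence, the principle behind
# configuration management tools like Chef and Puppet. Not real tool
# syntax; names are hypothetical.

def converge(current_state, desired_state):
    """Apply only the changes needed to reach desired state; return them."""
    actions = []
    for resource, desired in desired_state.items():
        if current_state.get(resource) != desired:
            actions.append((resource, desired))
            current_state[resource] = desired  # "apply" the change
    return actions

server = {"nginx": "absent"}
desired = {"nginx": "installed", "app_user": "present"}

first_run = converge(server, desired)   # applies two changes
second_run = converge(server, desired)  # already converged: no changes
```

Idempotence is what makes this approach safe to run repeatedly -- and repeatability is exactly what application-centric deployment demands.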
In the long term, devops needs to standardize both the policies and the APIs used to synchronize payloads to infrastructure. Unfortunately, that may take a few years. That's one of the reasons I suggested the pCard concept as a cloud payload "calling card": it gives customers a way to evaluate a provider's ability to handle required policies. OCCI and OVF are two leading standards efforts to watch in this space.
Devops is, in turn, having an effect on the design of cloud service infrastructures. There is growing investment by both systems companies and cloud management software providers in building more dynamic configurability and service-level assurance automation into their products. Orchestration capabilities are becoming increasingly critical when evaluating both cloud service infrastructure and cloud services themselves.
In my final post in this series, I'll give you some examples of devops in action, and a glimpse of the effect devops may have on cloud computing infrastructure. If you haven't thought about how cloud computing will change not just your processes, policies, and practices, but your very culture, it's probably time you did.