Recently, I outlined my thoughts around simplifying application delivery into cloud-computing environments. At the time, I thought what was needed was a way to package applications in a universal format, whether targeted for infrastructure or platform services, Java or Ubuntu, VMs or disk drives.
The core concept was to define this format so that it combines the actual bits being delivered with the deployment logic and run-time service level parameters required to successfully make the application work in a cloud. I wasn't very clear initially about the core motivations for this proposal, so I will make them explicit here:
I believe that in order to enable a truly elastic marketplace for compute and storage capacity, customers need a universal description of the payloads they wish to deploy, and the services they need to support those payloads.
Service providers need a consistent way to evaluate payload needs, both to determine immediate ability to support the payload, and to enable innovative ways to map needs to new service capabilities.
Quite frankly, without a universal way to describe and evaluate payloads, I'm not sure a liquid marketplace for cloud capabilities is possible without significant changes to the design of applications, data center infrastructure, and the Internet itself.
Thankfully, I received tremendous feedback on the application packaging post, both in the comments on CNET and from a large number of followers on Twitter. The feedback was amazing, and it forced me to reconsider my original proposal.
I now know that my original proposal was wrong on two key counts:
It wasn't just about applications, as just about any software payload--a set of 1's and 0's to be pushed out to the cloud--was eligible. For instance, raw data sets, "middleware" of various sorts, and even extensions to SaaS applications could count as a payload to be evaluated.
It wasn't about packaging, but was more about description. A universal packaging format is almost impossible to achieve in a highly innovative and diverse world like enterprise IT. Besides, sending the whole software bundle to each cloud provider for evaluation before deciding on which one you want to deploy to is highly inefficient.
So, after contemplating these observations for a week or so, I've started to formulate a much better proposal for what is needed to allow for simpler real-time selection of cloud infrastructure.
What I propose is something I call the pCard (code-named "Jean-Luc"?), named as such because it is somewhat analogous to the familiar "vCard" electronic business card format. In a sense, a pCard is a calling card for a software payload--whether a simple single-container payload or a complex multi-container distributed payload--that contains the information needed by a service provider to determine a) if they can meet the needs of the payload, and b) what kind of services are required to do so (and their costs).
The structure of a pCard would be very similar to what I originally proposed, but tweaked toward description, not executables:
The four elements are:
Metadata describing the pCard format and contents. This metadata should describe enough that a cloud provider could determine whether it could process the rest of the pCard.
A description of how the application bits are packaged and the expected infrastructure to process that package (e.g. untar, WebLogic manager, etc.). This description should focus on the individual packaging needs, not the overall deployment, though packages that combine the two (such as the Open Virtualization Format (OVF)) could be included and pointed to from the deployment section (see below).
A description of the core services and deployment architecture that must be supported by the cloud environment. This would include data like VM sizing, storage requirements, and network topology. It would also include requirements for automation that sits outside of the payload package, which would need to be executed as a part of initial configuration.
Pointers to relevant information inside the payload package description would be allowed.
The information could be proprietary to a single vendor, but in the interest of some level of portability, I would hope we would see some more generalized standards for each application classification.
Orchestration and service level policies required to handle the automated run-time operation of the application bits. Also included in this section might be requirements for specific management feed formats or data required by the customer to monitor the payload(s).
Again, I would hope to see some standards appear in this space, but this section should allow for a variety of ways to declare the required information.
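To make the four-element structure concrete, here is a minimal sketch of what a pCard descriptor might look like as data, modeled in Python. Every field name and value below is an illustrative assumption on my part, not a proposed standard; the point is only that each of the four sections is declarative description, not executable payload.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class PCard:
    """Hypothetical pCard: four descriptive sections, no payload bits."""
    # 1. Metadata describing the pCard format and contents
    metadata: dict = field(default_factory=lambda: {
        "pcard_version": "0.1",            # illustrative version field
        "payload_name": "example-web-app",
    })
    # 2. How the payload bits are packaged, and what processes them
    packaging: dict = field(default_factory=lambda: {
        "format": "tar.gz",
        "processor": "untar",
    })
    # 3. Core services and deployment architecture required
    deployment: dict = field(default_factory=lambda: {
        "vm_sizing": {"vcpus": 2, "memory_gb": 4},
        "storage_gb": 50,
        "network": "public-facing",
    })
    # 4. Run-time orchestration and service-level policies
    operations: dict = field(default_factory=lambda: {
        "min_instances": 1,
        "max_instances": 4,
        "monitoring_feed": "http-json",    # assumed feed format name
    })

    def to_json(self) -> str:
        # A pCard would travel as a small document, so serialization
        # matters more than in-memory representation.
        return json.dumps(asdict(self), indent=2)

card = PCard()
print(card.to_json())
```

Note how small this document is compared to the payload it describes: a provider can evaluate it without ever receiving the actual bits, which is exactly the inefficiency the original packaging proposal suffered from.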
I presented this concept to a group working on the next generation of cloud-ready public network infrastructure last week, to a fairly enthusiastic response. In a sense, this is the input into the "cloud" that enables the cloud to automate service allocation to support a variety of payloads.
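The provider-side half of this idea can also be sketched: given a pCard's deployment requirements and a provider's capability catalog, the provider answers the two questions above--can we host it, and at what cost? The data shapes and rates here are purely illustrative assumptions, not part of any real provider's API.

```python
def evaluate_pcard(requirements, catalog):
    """Return (can_host, monthly_cost) for a pCard's deployment section.

    Both arguments are hypothetical dicts standing in for a parsed
    pCard and a provider's capability catalog.
    """
    # a) Can the provider meet the needs of the payload?
    if requirements["vcpus"] > catalog["max_vcpus_per_vm"]:
        return False, None
    if requirements["storage_gb"] > catalog["max_storage_gb"]:
        return False, None
    if requirements["packaging_format"] not in catalog["supported_formats"]:
        return False, None
    # b) What kind of services are required, and what do they cost?
    cost = (requirements["vcpus"] * catalog["rate_per_vcpu"]
            + requirements["storage_gb"] * catalog["rate_per_gb"])
    return True, cost

provider = {
    "max_vcpus_per_vm": 8,
    "max_storage_gb": 500,
    "supported_formats": {"tar.gz", "ovf"},
    "rate_per_vcpu": 20.0,  # illustrative monthly rate per vCPU
    "rate_per_gb": 0.10,    # illustrative monthly rate per GB stored
}
needs = {"vcpus": 2, "storage_gb": 50, "packaging_format": "tar.gz"}

ok, cost = evaluate_pcard(needs, provider)
print(ok, cost)  # True 45.0
```

A customer could run this same evaluation against many providers' catalogs at once, which is the "liquid marketplace" behavior the universal description is meant to enable.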
What are your thoughts? Does this seem like a concept worth pursuing further, perhaps even worth forming a formal working group to explore? I've set up a public Google group for those wishing to join an ongoing dialogue about this concept.
Or, alternatively, why won't this work for your cloud payloads? What are the alternatives?