I don't like the term "private cloud." My reason is straightforward. The big-picture concept underpinning cloud computing is that the economic efficiencies associated with megascale service providers will be compelling. And, conversely, because they lack the scale of those big providers, local IT operations will run at a significant cost penalty.
To use the electric-utility analogy popularized by Nick Carr and others, efficient power generation takes place at a centralized power plant, not at an individual factory or office building.
There's ongoing debate about just how important these scale effects are and what form, exactly, they take. However, if one accepts this fundamental premise of cloud computing, then the future of computing lies predominantly in multitenant shared facilities of massive size. (Size here refers not necessarily to a single physical facility but to a shared resource pool that may, and probably will, be geographically distributed.)
In other words, a "private cloud" lacks the economic model that makes cloud computing such an intriguing concept in the first place. Put another way, the whole utility metaphor breaks down.
This is not to say that all computing will take place off-premises through these large service providers. In fact, there are lots of reasons why a great deal of computing will continue to happen locally.
For example, Chuck Hollis, global marketing chief technology officer at EMC, writes in "The Emergence of Private Clouds":
IT organizations and service providers that use the same standards will eventually be able to dynamically share workloads, much the way that's done in networks, power grids, and distribution today.
Fully virtualizing traditional enterprise IT internal resources creates substantial advantages--that much is becoming clear.
And if you're an outsourcer or other IT infrastructure service provider, the advantages of virtualizing your capabilities to do multitenancy better are probably clear as well.
And in a post titled "The argument for private clouds," James Urquhart of Cisco Systems (and a fellow CNET Blog Network blogger) argues:
Disruptive online technologies have almost always had an enterprise analog. The Internet itself had the intranet: the use of HTTP and TCP/IP protocols to deliver linked content to an audience through a browser. The result was a disruptive technology similar to its public counterpart but limited in scope to each individual enterprise.
Cloud computing itself may primarily represent the value derived from purchasing shared resources over the Internet, but again, there is an enterprise analog: the acquisition of shared resources within the confines of an enterprise network. This is a vast improvement over the highly siloed approach IT has taken with commodity server architectures to date.
The result is that much of the same disruptive economics and opportunity that exists in the "public cloud" can be derived at a much smaller scale from within an enterprise's firewall. It is the same technology, the same economic model, and the same targeted benefits, but focused only on what can be squeezed out of on-premises equipment.
I do have a couple of quibbles:
- Data center architectures are indeed getting more modular and more dynamic. However, it seems an unreasonably large step to take this overall direction and lump it under the cloud-computing banner. If any arbitrary data center environment is considered a "private cloud," then the already fuzzy term surely loses all meaning.
- While there are cloud concepts that can be rolled into in-house operations, the fundamental model posited by cloud computing assumes a shared utility. Returning to the electric utility metaphor, individual companies can install their own electric generators that are compatible with and can interoperate with the public utility. Doing so takes advantage of the standards in the delivery and consumption of power. It also provides a backup in the event of power failures. But these smaller generators do not deliver power as cost effectively as the utility can.
But I mostly agree with the overall sentiment of these posts.
Applications and services will continue to run both inside enterprise firewalls and in the cloud for reasons of technology, switching costs, and control.
On the technical front, many of today's applications were written with a tightly coupled system architecture in mind (for example, high-performance Fibre Channel storage connected to large SMP servers) and can't simply be moved to a more loosely coupled cloud environment.
For existing ("legacy") applications, there's also the switching cost and time to move to a new software model. In fact, one of the big arguments for standardized, outsourced IT--allowing companies to focus on their competitive differentiators--can also argue against making investments to change functional software systems (and their associated business processes), especially if the financial benefits are long-term and somewhat amorphous.
Security and compliance are also major concerns today. We can argue about the degree to which they're justified. But ultimately, perception is reality.
And there is a certain convergence between how many applications run in the cloud and how they run in the enterprise. Web standards and virtualization are major drivers here, and they certainly make a degree of interoperability and mobility between enterprise and service provider (over time) entirely thinkable.
Existing applications (and operational procedures associated with them) change slowly, and many of them will continue to run inside corporate firewalls as a result. We'll also start to see "federated" and "hybrid" architectures that bridge the enterprise data center and the shared-services provider. Cloud computing will evolve in concert with enterprise applications, not in isolation from them.
But we shouldn't lose track of the fact that cloud computing is posited to be a disruptive change to the computing landscape. If that is the case, then the "cloud" moniker shouldn't be slapped onto evolutionary changes to the way we run applications.