If you begin with the premise that the abstraction of data center resources into software representations (such as virtual machines) decouples IT workloads from the physical systems they rely on, then it makes sense to reconsider the way you buy and build your data centers.
Simply having a uniform (or near-uniform) software layer between the physical infrastructure and your compute workloads means you can begin to assemble a homogeneous physical infrastructure to support a heterogeneous abstract IT environment.
No more custom-tailoring your systems for each application, only to find those systems difficult to alter to either meet the needs of a new workload or the changing needs of the existing one.
No more adding a unique network card to each server to support a shared management plane, just to find it locks you into that management architecture long after something better comes along.
No more trying to figure out which servers have storage area networking and which have local disk; they can all have both, making it much easier to reuse the physical system for workloads that require either one.
This is not a spiel for any one vendor or even for a group of competitive vendors. Instead, focus on what this evolution means to the way you will buy and operate enterprise computing equipment in the coming years. While the highly customized computing systems of our siloed past meant buying "pieces/parts" was the logical way to go, it's been a little like buying a car by getting the engine from Honda, the chassis from Ford, and the wheels from Costco. You could probably build a pretty decent ride, assuming you could get it all to work together.