In an interview this week, Greg Ness, a senior director at network automation vendor Infoblox, outlines the problems lurking in today's network architectures and processes in the face of dynamic distributed computing models like cloud computing and data center virtualization.
The interview focuses on the concepts behind Infrastructure 2.0, and how vendors and enterprises are working together to address the many opportunities and challenges it presents.
Take a look at the core TCP/IP and Ethernet networks that we all use today, and how enterprise IT manages those services. Not long ago, I wrote an article that described how most corporations relied heavily on manual labor to manage everything from IP addresses and domain names to routing and switching configuration. At the time, I cited a survey that indicated that a full 63 percent of enterprises were still using spreadsheets to manage IP addresses.
I think Greg summarizes the basic issue quite well when he notes that "today's networks are run like yesterday's businesses." Unfortunately, as we move into an era of data center virtualization and cloud computing, spreadsheets just don't cut it anymore. Logging into switches one by one, or even executing a manual update to a set of switches at once simply can't be fast and agile enough to react to the changing needs of an automated application and server infrastructure. We need to take a systems view of our entire infrastructure, and build our automation around the end-to-end architecture of that system.
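To make the spreadsheet problem concrete, here is a minimal sketch of what even the most basic programmatic IP address management (IPAM) looks like, using only Python's standard library. The class name and method names are hypothetical, purely for illustration; real IPAM products track far more state (leases, DNS records, audit trails), but even this toy version gives you conflict-free, queryable allocation that a spreadsheet cannot.

```python
import ipaddress

class SimpleIpam:
    """Toy IP address manager: allocates free host addresses from a
    subnet and tracks assignments by hostname, in place of a
    manually maintained spreadsheet."""

    def __init__(self, cidr):
        self.network = ipaddress.ip_network(cidr)
        # hosts() excludes the network and broadcast addresses.
        self._free = list(self.network.hosts())
        self.assignments = {}  # hostname -> IPv4Address

    def allocate(self, hostname):
        # Idempotent: re-requesting a hostname returns its existing address.
        if hostname in self.assignments:
            return self.assignments[hostname]
        if not self._free:
            raise RuntimeError("subnet exhausted")
        addr = self._free.pop(0)
        self.assignments[hostname] = addr
        return addr

    def release(self, hostname):
        # Return the address to the front of the free pool for reuse.
        addr = self.assignments.pop(hostname)
        self._free.insert(0, addr)

ipam = SimpleIpam("10.0.0.0/29")
print(ipam.allocate("web-01"))  # 10.0.0.1
print(ipam.allocate("web-02"))  # 10.0.0.2
```

The point is not the data structure but the interface: an API that servers, hypervisors, and orchestration tools can call is the precondition for the end-to-end automation the column argues for.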
As Doug Gourlay, my former colleague at Cisco Systems and now vice president of marketing for Arista Networks, once observed, data center virtualization breaks our existing enterprise networking models, and cloud computing will break the Internet.
The problem isn't just arcane practices like IP address or domain name management by spreadsheet. It goes further, into the challenges that infrastructure such as today's decades-old Domain Name System (DNS) and network peering systems face when the location, capabilities and even existence of software payloads change much more unpredictably.
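The DNS half of that problem can be sketched in a few lines. The toy resolver cache below (class and record names are my own invention for illustration) honors a record's time-to-live the way DNS caching does: once a workload migrates to a new address, cached lookups keep returning the old answer until the TTL expires, which is exactly the mismatch between static name infrastructure and mobile payloads.

```python
class TtlResolverCache:
    """Toy resolver cache illustrating why TTL-based DNS caching
    struggles when workloads move: a cached answer is served until
    its TTL expires, even after the service has migrated."""

    def __init__(self):
        self._cache = {}  # name -> (address, expires_at)

    def lookup(self, name, authoritative, now):
        entry = self._cache.get(name)
        if entry and now < entry[1]:
            return entry[0]  # cached, possibly stale, answer
        # Cache miss or expired: consult the authoritative records.
        address, ttl = authoritative[name]
        self._cache[name] = (address, now + ttl)
        return address

# Authoritative records: name -> (address, TTL in seconds)
records = {"app.example.com": ("10.0.0.5", 300)}
cache = TtlResolverCache()
print(cache.lookup("app.example.com", records, now=0))    # 10.0.0.5
records["app.example.com"] = ("10.0.1.9", 300)            # the VM migrates
print(cache.lookup("app.example.com", records, now=60))   # still 10.0.0.5 (stale)
print(cache.lookup("app.example.com", records, now=301))  # 10.0.1.9
```

Shortening TTLs narrows the stale window but raises query load on the authoritative servers; truly dynamic workloads push toward infrastructure that updates and invalidates name mappings as part of the provisioning workflow itself.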
Right now, Infrastructure 2.0 is one of those "squishy" terms that can potentially incorporate a lot of different network automation characteristics. As is hinted at in the introduction to Ness' interview, there is a working group of network luminaries trying to sort out the details and propose an architectural framework, but we are still very early in the game.
There has been a tremendous amount written about Infrastructure 2.0 already, so I don't want to repeat it all here. Rather, if you are interested in learning more, I highly recommend reading the following:
The Infrastructure 2.0 blog, specifically:
"Virtualization, Clouds and Meta Orchestration" (Greg Ness) -- an excellent description of how the relationship between those three concepts forms a basis for Infrastructure 2.0.
"Next-Gen Data Center Management Should be More Like Facebook" (Lori Macvittie) -- an interesting exploration of how to make computer networking leverage some of the lessons learned from human networking.
"What To Do When Your 'Core' Infrastructure Services Aren't In Your 'Core'?" (Chris Hoff) -- Hoff's "aha" moment about the shift from product-centric to service-centric thinking about everything, including core networking capabilities.
"The Emotion of VMotion..." (Chris Hoff) -- Hoff's classic analysis of the reality of cross-cloud live migration of virtual machines, and the next-generation infrastructure it would require.
Why virtualization is shaking up IT data centers (James Urquhart) -- a description I wrote some time ago comparing the creation of infrastructure in a virtualized IT architecture with the manufacturing of automobiles, including the effect that has on the flexibility of underlying physical systems.
There are other works by each of these authors and still others they link to that are worth reading as well, depending on whether your interest leans toward the data center, the Internet or other forms of core infrastructure.
Is the "Infrastructure 2.0" work happening today going to evolve into a body of standards that will have the same impact as BGP or DNS? I believe it will, though I make no promises about how those standards develop or who develops them. Rather, I believe that it is the changing nature of systems architecture that will force the evolution of the networks within those systems, and the networks that connect them.
And when those changes enter the public network--the Internet itself--we have the makings of the Intercloud...a whole other kettle of "squishy-ness".