April 19, 2007 4:00 AM PDT

FAQ: Detangling virtualization

For anyone buying servers or server software, and even many buying PCs, virtualization is getting hard to avoid.

The term typically refers to running multiple operating systems simultaneously on the same computer. It's long been around on high-end servers, but new software and hardware options mean mainstream users are starting to have to worry about virtualization. For example, both major commercial versions of Linux now have virtualization built in, and the next version of Windows for servers will, too.

Virtualization is complicated. But there are reasons you might want to take it seriously.

Mac users can run Windows to tap into the corporate e-mail system, or someone with a Windows Vista PC can run software that will only run on Windows XP. But in practice, the technology today is most likely to appeal to server customers, with advantages ranging from scrapping old hardware to cutting electricity bills.

Virtualization is a classic case of disruptive technology with a steep learning curve. For example, at the upcoming HP Technology Forum, there are 84 presentations to help Hewlett-Packard customers understand virtualization.

Here are some answers about what's happening today with virtualization.

What exactly does virtualization mean?
The term virtualization means that software is running on some sort of virtual foundation rather than the physical hardware it typically expects. Instead of a single operating system controlling a computer's hardware, the virtualization software controls it, providing multiple compartments called virtual machines for the operating systems to run in. Inserting a virtual layer can be liberating. For example, a running operating system can be moved to a fresh server if the one it's running on is suffering a failing memory bank or overtaxed processors.
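For readers who think in code, here is a minimal toy model of the idea in Python. The classes and the migrate step are invented for illustration and do not correspond to any real hypervisor's API:

```python
# Toy model: a hypervisor multiplexes guest OSes ("virtual machines")
# onto hosts, and can move a running guest off failing hardware.
class VirtualMachine:
    def __init__(self, name, guest_os):
        self.name = name
        self.guest_os = guest_os   # each VM believes it owns the hardware

class Host:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy
        self.vms = []              # the hypervisor's compartments

    def migrate(self, vm, target):
        """Move a running VM to a fresh server."""
        self.vms.remove(vm)
        target.vms.append(vm)

old, new = Host("server-a", healthy=False), Host("server-b")
vm = VirtualMachine("mail", guest_os="Windows")
old.vms.append(vm)
if not old.healthy:                # e.g. a failing memory bank
    old.migrate(vm, new)
print(vm.name, "now runs on", new.name)
```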

Virtualization actually has been around the computer industry for decades, for example to run multiple jobs on mainframe computers or to hide the particulars of individual hard drives in a storage system. But now, it's no longer just a high-end technology.

Why is virtualization catching on now?
Because the technology is maturing and can help fix some common problems. Much of the credit for making virtualization a reality goes to an EMC subsidiary called VMware, which brought the technology to computers using mainstream x86 processors such as Intel's Pentium and Advanced Micro Devices' Opteron. In the first quarter of 2007, VMware's revenue grew 96 percent from the year-earlier period to $256 million, so there's no doubt the market is real and growing fast.

VMware built its business gradually. It began on desktop computers, where programmers could harmlessly test crash-prone new software in virtual machines or run Linux and Windows on the same computer, for example. In more recent years, the company's server software business became more lucrative as virtualization enabled customers to replace several inefficiently used servers with a single server running multiple virtual machines. Now the company is moving to a grander virtualization-based vision in which multiple tasks can run with shifting priorities on a pool of centrally managed machines.

Do I get a choice of suppliers here?
Plenty of competitors want a piece of VMware's action. First on the scene was Xen, an open-source project sponsored by Linux sellers, server makers and a start-up called XenSource. Virtual Iron is another start-up that's trying to make a business out of Xen. On the proprietary software side of the industry, Microsoft acquired a company called Connectix to counter VMware's products, but has had only modest success. The real fight will begin by June 2008, when the forthcoming "Longhorn Server" version of Windows gets updated with virtualization software code-named Viridian. Despite the fact that Xen is here now, VMware marketing director Bogomil Balkansky said Viridian is his top concern.

Although Xen got the jump, a newer open-source virtualization project called KVM has stolen some attention. Red Hat and another Linux rival, Canonical's Ubuntu, have blessed KVM, and many Linux programmer heavyweights like its approach.

Another flavor of virtualization lets a single operating system be carved up into several virtual compartments, a lighter-weight approach that's been popular for Web site hosting. SWsoft's Virtuozzo, based on the open-source OpenVZ project, employs this approach, while Sun Microsystems built the technology into its Solaris 10 operating system. Microsoft has said it's considering a similar move for Windows.
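The difference between the two approaches can be sketched in a few lines. This toy comparison uses invented names and is only meant to show where the kernels live:

```python
# Hypervisor-style virtualization: one full OS kernel per virtual machine.
hypervisor_host = {
    "hypervisor": "controls the hardware",
    "kernels": ["Windows", "Linux", "Solaris"],   # one per VM
}

# OS-level virtualization: one shared kernel, many light compartments.
os_level_host = {
    "kernel": "Solaris 10",                       # single shared kernel
    "compartments": ["web1", "web2", "web3"],     # isolated hosting zones
}

# The second approach pays one kernel's overhead instead of N, which is
# why it has been popular for dense workloads like Web-site hosting.
print(len(hypervisor_host["kernels"]), "kernels vs 1 kernel for",
      len(os_level_host["compartments"]), "compartments")
```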



23 comments

I love virtualization, but...
It only exists because the operating system makers of the world don't do a good job. Much like the anti-virus makers of the world, VMware and others are doomed if MS, Sun, Red Hat, Novell and others bake these abilities into the OS or fix the OS so multiple packages can run together. Needing separate servers for SQL, Exchange, AD, and IIS is ridiculous. The same goes for ORCL, Apache, Java, bind and whatever else you want or need to run on *nix. The only reasons to add hardware should be processor utilization, maxed-out memory, constrained I/O, or supporting multiple environments for failover, high availability, or the development life cycle. It may be years away, but virtualization belongs in the OS, or the OS needs to not need it. It's all very similar to the anti-virus/anti-spam/anti-malware software you need for your Windows environment. We all agree these packages are necessary today, but someday Microsoft might either add it all right into the OS or, better yet, make an OS that does not need it. It's funny that if they accomplish these goals, they will be seen as anti-competitive for doing something they should have done from the beginning.
VMware and the others are thriving on the shortcomings of the operating systems of the world. They will do very well for a while, but in the long term they will need to adapt. Thriving on someone else's folly can only last so long. If they are lucky, in the long term VMware, Xen, Virtual Iron and the other virtual players will be gobbled up by MS, Sun, Novell, or maybe even ORCL.
BTW, I have championed VMware for several years, as it has made my professional life much easier and saved my companies millions. They make great stuff. It is just a shame that the need for them exists at all.
Posted by tgrenier (256 comments)
You are 100% correct...read my post
concerning the True Parallel Processing Foundational Platform; it's that future software you speak of. It's written at the machine level, fully general purpose, from top to bottom, tools included. And yes, it will do what you suggested needs to be done. Best of all, it can reside on a floppy, all of it.

I share this with you because, strangely enough, the timing is here. Years ago, bringing up parallel processing on multiple cores would have brought dead silence to cocktail-party chit-chat. But now, thanks to multiple cores, I can walk the average Joe or Jane through a discussion of parallel processing.

But you are correct; everything you stated was correct. It was the poor foundational development of the traditional OS, surrounding mechanisms such as scheduling and use of memory while providing sound security, which has led to the use of technologies such as virtualization. Really an unneeded development, but it will make money for some during this interim.

You understand the problems and the processes.
Posted by thecatch (49 comments)
I am waiting for this technology to mature
I'm a low-tech customer, and there are many like me; we don't have time for steep learning curves but are willing to pay for the advantages of VMs. It sure would be nice if I could replace my entire setup at the end of the day, if I made some crashing mistake, with no more effort than running a registry cleaner. Then I would be much more willing to try new programs, betas, etc.
There are big bucks to be made here when someone comes out with a consumer-friendly VM engineered to work with older OSes/machines [as in up to 2 years older]. J Bo
Posted by jstacat (7 comments)
VMware Server
VMware Server and Player are free and very easy to use.
Posted by tgrenier (256 comments)
Don't Wait
VMware Server is very easy to install and does not require a "special" version of Windows (or Linux).

If you are able to reinstall Windows, you will be able to install VMware Server. VMware is the mature technology in the market.

I own a business providing disaster recovery services using racks of four-core, dual-processor motherboards. I use VMware's big package, VMware Infrastructure 3, which is very expensive ($4K per server). This is a very mature technology.

Go to http://www.vmware.com and download Server; it's free and works very well.

In my lab at home, I am running VMware Server on a 1GHz Pentium III running Windows 2000 Server with two VMs (Red Hat Linux & Ubuntu), and it works nicely. Not ideal for production, but very usable as a test or small server.

I have used Xen (crap) and MS Virtual Server (also crap), but VMware is the best.
Posted by ThePenguin (30 comments)
Big benefits
There are some huge benefits to VM solutions. Reduction of needed data center floor space, cooling and electrical capacity are some. The big advantages that we like are the disconnection, or abstraction, of the server and its applications from the hardware layer. With VMs, upgrading the hardware is so easy it's almost a joke: just build a new host farm and move the VM images over. New server builds are quick, and SAN storage allocation is more flexible.

The new high-availability features in VMware are a big plus, too. We have traditionally used clusters for high availability, but they have to shut down to move to other hardware; VMs can stay online and move to a new host. Also, with active/passive clusters there is a lot of idle hardware just sitting there, while with VMs that excess reserve capacity is communal and can be much smaller, which reduces hardware costs even more.

Speed is a big deal for us as well. When the business says it needs a new test server, instead of filling out a PO, shipping hardware, racking, cabling and so on, we just clone the OS image they want, give it an IP, and we're done. Decommissions are quick, too: just shut it off and delete the files, and the capacity it used goes back into the communal pool. Disaster recovery is easier as well; we just recover the VM file, stick it in the DR VM host farm and start it up.
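That clone-assign-delete workflow can be sketched in a few lines of Python. The pool size, names and functions here are invented for illustration; this is not VMware's API:

```python
# Toy model of the provisioning workflow described above: clone an
# image into communal capacity, bring it online, reclaim on delete.
pool = {"capacity_gb": 500, "used_gb": 0, "vms": {}}

def provision(name, size_gb, ip):
    """Clone the OS image they want, give it an IP, and we're done."""
    assert pool["used_gb"] + size_gb <= pool["capacity_gb"], "farm full"
    pool["vms"][name] = {"size_gb": size_gb, "ip": ip}
    pool["used_gb"] += size_gb

def decommission(name):
    """Shut it off and delete the files; capacity returns to the pool."""
    pool["used_gb"] -= pool["vms"].pop(name)["size_gb"]

provision("test-server-01", size_gb=20, ip="10.0.0.42")
decommission("test-server-01")
print(pool["used_gb"])  # back to 0: the reserve capacity is communal
```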

Virtualization is the future. Get on board now.
Posted by baike (39 comments)
Parallelization will displace virtualization in 2 years...
so hold on to your cash.

Virtualization has been around for years, on mainframes some 40 years ago. But virtualization does not improve performance; it can't, because you are sacrificing the computer's resources in an attempt to run multiple OSes (legacies, usually) or multiple jobs or tasks. This isn't an improvement in performance; it is an attempt to improve organization, in an attempt to save time.

And the consolidation of computers is nice, but why were fools buying into the server-farm (Sun) snow job in the first place?

BUT MOST importantly, parallelization will displace virtualization anyway. With the advent of multiple-core chips, parallelization software will provide a virtualization platform if desired, but it will also provide a true parallelization foundational platform that will take advantage of parallel processing on the multiple cores.

Go to Intel's home site: everywhere you look there is discussion of parallel processing being the future platform. They think 2015; we think in less than two years it will be in products (every platform you can think of).

And all the big boys can actually utilize it: that includes Cisco in routers, networking & security; AMD & Intel or others in CPUs; and Microsoft and all others at the application level, regardless of what OS is involved.

But everyone did bite into the virtualization bug, because it made development easier, and anything sounded better than one server per job.

But think about this: both virtualization and parallelization rely on the importance of the scheduler. And which would you rather have working for you, a foundational virtualization platform scheduler or a parallel-processing foundational scheduler? Nothing like you have seen offered currently, but one that will fully utilize all cores.

And guess what? Those cores can be performing virtualization tasks while also running multiple jobs, I mean multiple jobs and multiple users, all on the same platform. (If you understand this concept, you understand the difference between virtualization & parallelization, and the impact true parallel-processing foundational software will have on the industry.)

It's going to get wild, fast and extremely productive real soon, and there is no limit, because the tube will always get filled; there will always be more to do.
Posted by thecatch (49 comments)
Some questions
I guess I don't quite get parallelization. How is a multicore environment going to help issues at the OS level, like conflicting DLLs? For the most part, our servers are chugging right along at 2-15% processor utilization all day; we have lots of capacity we're not using. Virtualization is not supposed to increase performance. If your servers are choking all day and pinning processors, then virtualization is not really for you, at least not for the purpose of consolidating servers. I would use ESX and VC just for the management features alone. Consolidation is nice, but I view ESX as a very important layer between the BIOS and the OS. In fact, I wish VMware would partner with HP and put ESX right on the motherboard with its own small processor.
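The consolidation math behind those utilization numbers is simple back-of-the-envelope arithmetic; all figures in this sketch are invented for illustration:

```python
# Lightly loaded boxes can be folded onto a few virtualization hosts
# while still leaving headroom on each host.
import math

servers = 40          # physical servers today (illustrative)
avg_util = 0.10       # "2-15% processor utilization all day"
target_util = 0.60    # leave headroom on each consolidated host

hosts = math.ceil(servers * avg_util / target_util)
print(hosts)          # 7 hosts absorb 40 mostly idle servers
```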
Posted by tgrenier (256 comments)
What are you talking about???
What do you mean by "parallelization"?
If you mean some sort of multithreading, it doesn't have a thing to do with virtualization. It is like saying that the mouse will replace the hard disk. Completely different spaces.
And if you mean running virtual OSes scattered among several physical machines, that's a pipe dream. It was suggested about 20 years ago and has made no progress since. It has actually become further away as hardware has progressed: while single-system performance has grown at an exponential rate, system interconnect has lagged behind and grown at a much slower rate. And the techniques for developing distributed apps have not progressed much. Applications that are efficient on systems with more than four processors are scarce, and that's at local interconnect speeds. Efficient applications on distributed systems are scarcer still, limited to some scientific processing and financial number crunching.
And the objective of virtualization has nothing to do with performance: it is about convenience. Being able to run several platforms on a single machine, to provision servers in minutes instead of weeks, to apply high-end fault-tolerance features to low-end workloads, and to recover failed systems in minutes. Parallelization doesn't even attempt to solve any of those problems.
Posted by herby67 (144 comments)
You are right that there is a lot of talk about parallelization. However, the software environment needed for parallelization is severely limited. We are nowhere close to having automatic parallelization at the compiler level, and it will take at least 7-10 years for parallel programming to become the norm at colleges. Considering these barriers, I think you are overoptimistic on the parallelization front.
Posted by aravindrao (1 comment)
SAN
All of my VMware Infrastructure servers boot from the SAN. There is no local storage on any of my servers, no optical drives, no floppies, nothing spinning but the fans.

When you pair VMware Infrastructure with a virtualization layer on the SAN, this is the easiest and most efficient setup. When you pair those with dual-port 10Gb/s InfiniBand as your sole network device, it really sings; the only Ethernet is for out-of-band management. Even the keyboard/video/mouse is sent over the IB link. Easy, elegant and efficient; what more could you ask for?
Posted by ThePenguin (30 comments)
Not
Apples and oranges.

Parallelization will COMPLEMENT virtualization, not replace it.
Posted by ThePenguin (30 comments)
Virtualization will be a minor feature on
a true parallel-processing foundational platform.

The impact of this foundation will far outweigh any benefit virtualization has brought to computing.

Parallelization will bring speed and performance gains that will astonish the industry. Virtualization does not improve performance; as I said, it can't. In a virtual environment you are taking the resources from the hardware in an attempt to run legacy apps or multiple jobs. You are making things neater, not quicker or more powerful.

Most users of virtualization claim that the main gain from switching over is better organization, because of consolidation. That says nothing about better computing performance. Virtualization is a new environment; it is not the availability of new resources or the ability to take advantage of the gains in new hardware.

Virtualization provides that new environment, desired or undesired. Parallelization will provide extreme levels of new performance gains in any environment, and we think that will be desired by all.

In a true parallel-processing environment you are using a parallel-processing platform to fully utilize the capabilities of the multiple-core CPUs now in use.

This means running multiple tasks or multiple jobs on a single platform, fully utilizing each core. And I'm not talking about Intel's hyperthreading or its Mitosis style of scheduling workloads; I'm talking about using the full capacity of each core to execute multiple applications concurrently, in parallel, for a single user or multiple users.

This platform will defy Amdahl's law on parallel processing, and when you see it in use, you will understand why virtualization will be a minor feature in a true foundational parallel-processing platform.

Think foundation, as this platform will run all applications, regardless of their origin (language).

And as I said, the scheduler is most critical to all of this, sending that workload to each core as needed.

The idea that saving energy and resources is the motivation behind letting cores sit idle is a programming inside joke. They sit idle because the current OSes and their current schedulers can't handle the workload balance. And the current OSes can't handle, or should we say, can't take advantage of, the capabilities behind the new multiple cores, because they were never properly written at the foundation level.

And now that fundamental problem that has been haunting us will be fixed. It's the future.
Posted by thecatch (49 comments)
Amdahl's law....look it up....will be disproved...
And I stated that we do not consider single-core processing TRUE PARALLEL PROCESSING, although we are able to multitask effectively on a single core as well, to the point where we can run many concurrent applications.

We are using both the Woodcrest and the new quad-core chips to outperform a Google cluster.

And if you want to sit here all day and discuss parallel processing, I would be happy to oblige.

Let me stress, though, that we are talking about providing parallel-processing methodology for both business enterprise software and massively parallel environment software applications.

And my simple point is that if you have a parallel-processing software application that can concurrently execute multiple applications of any origin on a single platform, there will be less need, if any need at all, to stack virtual machines running multiple applications, especially when you factor speed & performance into the equation. And consolidation will continue, because as I said, two modest servers could run an entire small company, and a few more could run an enterprise.

So conceive of that, and you see the future, and the significant difference between parallelization and virtualization, apples to apples.

Virtual machines will be utilized within the parallel-processing environment when needed, and in the developmental realm of operations.

See you when we get there.
Posted by thecatch (49 comments)
Virtual and parallel and Amdahl's law
There's a lot to agree with on both sides of this discussion, although after 25 years in the supercomputing and parallel-processing business, I am always skeptical when someone says Amdahl's law will be disproved, since it just states that performance is limited by the irreducible serial, non-parallel portion of the code. To do better than that, one would have to change the serial portion to be parallel, and then by definition it is no longer serial. The biggest effect one sees that helps produce some superlinear speedups is when a problem that was not fitting well in cache suddenly does, because one now benefits from the effectively larger combined caches of the N parallel processors.

And there are always other irreducible portions, such as I/O, which can be troublesome to parallelize.
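Concretely, Amdahl's law puts the speedup on N cores at 1 / ((1 - p) + p/N), where p is the parallel fraction of the code. A quick illustrative calculation (the 95 percent figure is invented for the example):

```python
# Amdahl's law: the serial fraction caps speedup no matter how many
# cores you add.
def amdahl_speedup(p, n):
    """Speedup for parallel fraction p of the work on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

for cores in (2, 4, 8, 64, 1024):
    print(cores, round(amdahl_speedup(0.95, cores), 1))
# Even 95% parallel code tops out near 20x: the 5% serial portion
# dominates, which is why "defying" the law really means shrinking it.
```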

Multi-core will tend to drive up virtualization opportunities in the short run, since the 1U servers will be even more powerful and can consolidate even more workload with each generation.

Nevertheless, there is an important point being made here: multi-core will finally be a strong driver forcing the issue of parallelizing more business applications (at least the long-running ones). We happen to think that, because of scaling limitations under Amdahl's law, parallelization in the business world, and increasingly in the technical HPC world as well, will shift more toward the embarrassingly parallel throughput style. This can be used for design optimization, parameter studies, processing multiple images, genomics sequencing and the like, where you run the same executable with different parameters simultaneously across a significant number of cores.
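That throughput style is easy to sketch; here is a minimal illustration in Python (the worker function and parameters are invented for the example):

```python
# Embarrassingly parallel throughput: the same job run simultaneously
# with different parameters, one task per core, no shared state.
from multiprocessing import Pool

def parameter_study(param):
    # stand-in for one design-optimization or sequencing run
    return param, sum(i * param for i in range(1_000_000))

if __name__ == "__main__":
    with Pool() as workers:                 # defaults to one per core
        for param, result in workers.map(parameter_study, range(8)):
            print(f"parameter {param}: result {result}")
```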

Then you force the need for more dynamic, real-time scheduling to those cores. And for management flexibility and operations reasons, you'd like to have real-time virtual provisioning, so that different apps, which may require different software stacks, can be quickly moved in and out of clusters or grids.

Real-time scheduling and provisioning will be an important place where Parallelism and Virtualization meet.
Posted by perrenod (1 comment)
 
