April 19, 2007 4:00 AM PDT

FAQ: Detangling virtualization


Useful technology, lots of buying options--sounds swell. Why doesn't everybody do this?
Mainly because it's new to most people. Also, it can hurt performance as virtualization software intercepts communications between hardware and software, and to use it, computers need more network capacity and more memory. Virtualization also adds a new level of complexity, and administrators must test it with their hardware and software.

It doesn't sound so complex to me. Software just runs in a different compartment, right?
Consider some of the repercussions of unshackling software from its hardware. Much server software is priced on the basis of how many processors a server has. What happens when, through virtualization, you're running a particular application on two of a computer's four processors? Then what happens when you boost the virtual machine size to three processors? And how about moving that virtual machine over to a different system altogether? The software industry has only begun adapting to the new reality.
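To make the licensing question concrete, here is a minimal sketch with a hypothetical per-processor price (real vendor terms vary) of how the bill differs depending on whether a vendor counts the host's physical processors or only the virtual machine's share:

```python
# Hypothetical per-processor license price; vendors differ on whether
# to count the host's physical CPUs or the VM's allocated share.
PRICE_PER_CPU = 5000

host_cpus = 4  # physical processors in the server
vm_cpus = 2    # the application's virtual machine uses two of them

cost_by_host = host_cpus * PRICE_PER_CPU  # charged for the whole box
cost_by_vm = vm_cpus * PRICE_PER_CPU      # charged for the VM's share

print(cost_by_host, cost_by_vm)  # 20000 10000
```

Resize that virtual machine to three processors, or migrate it to a host with a different processor count, and the two accounting methods diverge again; that ambiguity is exactly what the industry is still working out.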

Here's another wrinkle: some software, during installation, records what amounts to the hardware fingerprints of the computer it's running on to counter piracy, while other packages require a hardware "dongle" to be attached. So there are serious constraints to shuttling virtual machines around with abandon.

OK, now I'm intimidated. Is this just a fad that I can wait out?
If you're a server administrator, you probably can't and shouldn't avoid virtualization forever. Xen now is built into both major commercial versions of Linux--Novell's Suse Linux Enterprise Server and Red Hat Enterprise Linux. And Intel and AMD are racing to build virtualization into their chips. Newer processors from both companies have hardware support for some virtualization tasks, making it possible to run Windows on Xen, for example. Future features will improve performance of memory access. Virtualization on the PC, though, isn't likely to catch on widely anytime soon.

What can I do with virtualization on a PC?
Software from Parallels will let Mac users run Windows on the newer Intel-based machines. VMware is working on its own software, called Fusion, to accomplish the same end. That can be handy when Mac users need to fit in better with a Windows-dominated world. For Windows users, VMware's player software can be used to try out Linux, run older software on a newer system, and isolate personal and work tasks. Intel thinks administrators will like to run their own management software in a separate virtual machine, letting them fix worm-infested PCs remotely. Developers get the ability to debug programs in virtual machines that can simulate diverse combinations of software and that don't corrupt hard disk data if they crash. For administrators, another nice PC virtualization technology involves replacing a standalone system and running virtual PCs on central servers to cut energy and maintenance costs.

VMware offers free versions of its software, but there are other fees. To run Windows on a Mac, you need a full--not upgrade--version of the operating system. And with Vista, the restrictions get even tighter: only the pricier Ultimate and Business versions are permitted. Businesses with a volume license agreement with Microsoft may run up to four instances of Windows Vista Enterprise on a single PC, but others must pay for each copy.

What are the server costs?
Xen is built into Red Hat Enterprise Linux and Suse Linux Enterprise Server at no extra cost beyond the support subscriptions. Novell customers may run as many SLES virtual machines as they want on a single computer for one support subscription. Red Hat prices similarly with its RHEL Advanced Platform version, but imposes a four-virtual machine limit for its basic RHEL Server version.

VMware's prices have come down. For example, its former GSX Server product became the free VMware Server product. But there still are significant fees. The ESX Server and higher-level components that make up the company's Virtual Infrastructure 3 product cost a minimum of $1,675 for a dual-processor server, including support and subscription costs. The fuller-featured Enterprise version of that product costs $6,957 for the same hardware. Doubling a server's processor count doubles the price. It sounds steep, but it's still likely to be less expensive than buying a new server or three.
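Using the figures quoted above, the per-pair pricing works out like this (a rough sketch; actual quotes will vary):

```python
# List prices quoted in the article for a dual-processor server.
STANDARD_PER_PAIR = 1675    # Virtual Infrastructure 3, base configuration
ENTERPRISE_PER_PAIR = 6957  # fuller-featured Enterprise version

def vi3_cost(n_processors, price_per_pair):
    """Licensing scales per processor pair: doubling the processor
    count doubles the price, per the figures above."""
    pairs = (n_processors + 1) // 2  # round up to whole pairs
    return pairs * price_per_pair

print(vi3_cost(2, STANDARD_PER_PAIR))    # 1675
print(vi3_cost(4, ENTERPRISE_PER_PAIR))  # 13914
```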




Join the conversation!
I love Virtualization but.....
It only exists because the operating system makers of the world don't do a good job. Much like the anti-virus makers of the world, VMware and others are doomed if MS, Sun, Red Hat, Novell and others bake these abilities into the OS or fix the OS so multiple packages can run together. Needing separate servers for SQL, Exchange, AD, and IIS is ridiculous. The same goes for ORCL, Apache, Java, and bind and whatever else you want or need to run on *nix. The only reason to have to add hardware should be processor utilization, maxed-out memory, constrained I/O, or to support multiple environments for FO, HA, or the development lifecycle. It may be years away, but virtualization belongs in the OS, or the OS needs to not need it. It's all very similar to the anti-virus-spam-malware software you need for your Windows environment. We all agree that these packages are necessary today, but someday Microsoft might either add it all right in the OS or, better yet, make an OS that does not need it. It's funny that if they accomplish these goals, they will be seen as anti-competitive for doing something they should have done from the beginning.
VMware and the others are thriving on the shortcomings of the operating systems of the world. They will do very well for a while, but in the long term they will need to adapt. Thriving on someone else's folly can only last so long. If they are lucky, in the long term VMware, Xen, VI and the other virtual players will be gobbled up by MS, Sun, Novell, or maybe even ORCL.
BTW, I have championed VMware for several years, as it has made my professional life much easier and saved my companies millions. They make great stuff. It is just a shame that the need for them exists at all.
Posted by tgrenier (256 comments )
You are 100% correct...read my post
concerning the True Parallel Processing Foundational Platform; it's that future software you speak of. It's written at the machine level, full general purpose, from top to bottom, tools included. And yes, it will do what you suggested needs to be done. Best of all, it can reside on a floppy, all of it.

I share this with you because, strangely enough, the timing is here. Years ago, bringing up the discussion of parallel processing on multiple cores would have brought a dead silence during cocktail-party chit-chat. But now, thanks to multiple cores, I can walk the average joe/jane through a discussion concerning parallel processing.

But you are correct; everything you stated was correct. It was the poor foundational development of the traditional OS, surrounding mechanisms such as scheduling and use of memory while providing sound security, that has led to the use of technologies such as virtualization. Really an unneeded development, but it will make money for some during this

You understand the problems and the processes.
Posted by thecatch (49 comments )
i am waiting for this technology to mature
i'm a low tech customer & there are many like me; we don't have time for steep learning curves and are willing to pay for the advantages of VMs. It sure would be nice if i could replace my entire setup at the end of the day if i made some crashing mistake, with no more effort than running a reg. cleaner. Then i would be much more willing to try new programs, betas & etc...
There are big bucks to be made here when someone comes out with a consumer-friendly VM, engineered to work with older OS/machines [as in up to 2yrs older] J Bo
Posted by jstacat (7 comments )
vmware server
VMware Server and Player are free and very, very easy to use.
Posted by tgrenier (256 comments )
Don't Wait
VMware Server is very easy to install and does not require a "special" version of Windows (or Linux).

If you are able to reinstall Windows, you will be able to install VMware Server. VMware is the mature technology in the market.

I own a business providing disaster recovery services using racks of 4-core dual-processor motherboards. I use VMware's big package, VMware Infrastructure 3, which is very expensive ($4K per server). This is a very mature technology.

Go to http://www.vmware.com and download Server; it's free, and works very well.

I have a lab at my home. I am running VMware Server on a Pentium 3 1GHz running Windows 2000 Server with 2 VMs (RH Linux & Ubuntu), and it works nicely. Not ideal for production, but very usable as a test or small server.

I have used Xen (crap) and MS Virtual Server (also crap), but VMware is the best.
Posted by ThePenguin (30 comments )
Big benefits
There are some huge benefits to VM solutions. Reduction of needed datacenter floor space, heating and electrical capacity are some. The big advantages that we like are the disconnection, or abstraction, of the server and its applications from the hardware layer.

With VMs, upgrading the hardware is so easy it's almost a joke: just build a new host farm and move the VM images over. New server builds are quick. SAN storage allocation is more flexible. Also, the new high-availability features in VMware are a big plus. We have traditionally used clusters for high availability, but they have to shut down to move to other hardware. VMs can stay online and move to a new host. Also, with active/passive clusters there is a lot of idle hardware just sitting there, while with VMs that excess reserve capacity is communal and can be much less, which reduces hardware costs even more.

Speed is a big deal for us as well. When the business comes and says we need a new test server, instead of filling out a PO, shipping HW, racking, cabling, etc., we just clone the OS image they want, give it an IP and we're done. Decommissions are quick, too: just shut it off and delete the files, and the capacity it used goes back into the communal pool. Disaster recovery is easier too; we just need to recover the VM file, stick it in the DR VM host farm and start it up.
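The communal-reserve point can be put in numbers. A rough sketch with hypothetical figures (ten workloads, one dedicated standby per active/passive cluster versus a small shared spare pool):

```python
# Hypothetical figures: ten workloads, each needing one server's capacity.
workloads = 10

# Active/passive clustering: one dedicated standby per workload.
clustered_servers = workloads * 2           # 20 servers, half of them idle

# Virtualized pool: a small communal reserve covers the whole farm.
shared_spares = 2
pooled_servers = workloads + shared_spares  # 12 servers

print(clustered_servers - pooled_servers)   # 8 servers saved
```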

Virtualization is the future. Get on board now.
Posted by baike (39 comments )
Parallelization will displace Virtualization in 2 years...
so hold on to your cash.

Virtualization has been around for years; mainframes had it some 40 years ago. But virtualization does not improve performance; it can't. You are sacrificing the computer's resources in an attempt to run either multiple OSes (legacies, usually) or multiple jobs or tasks. This isn't an improvement in performance; it is an attempt to improve organization, in an attempt to save time.

And the consolidation of computers is nice, but why were fools buying into the server farm (SUN) snow job in the first place?

BUT MOST importantly, parallelization will displace virtualization anyway. With the advent of multiple-core chips, parallelization software will provide a virtualization platform if desired, but it will also provide a true parallelization foundational platform that will take advantage of parallel processing on the multiple cores.

Go to Intel's home site; it's everywhere you look, the discussion of parallel processing being the future platform. They think 2015; we think in less than two years in products (every platform you can think of).

And all the big boys can actually utilize it. That includes Cisco, in the router, network & security; AMD & Intel or others in the CPUs; Microsoft and all others at the application level, regardless of what OS is involved too.

But everyone did bite into the virtualization bug, because it made development easier, and anything sounded better than 1 server / 1 job.

But think about this: both virtualization and parallelization rely on the importance of the scheduler. And which would you rather have working for you, a foundational virtualization platform scheduler, or a parallel processing foundational scheduler? Nothing like you have seen offered currently, but one that will fully utilize all cores.

And guess what? Those cores can be performing virtualization tasks while also running multiple jobs, I mean multiple jobs / multi-users, all on the same platform. (If you understand this concept, you understand the difference between virtualization & parallelization, and the impact true parallel processing foundational software will have on the industry.)

It's going to get wild, fast and extremely productive real soon, and there is no limit; because the tube will always get filled, there will always be more to do.
Posted by thecatch (49 comments )
some questions
I guess I don't quite get parallelization. How is a multicore environment going to help issues at the OS level with crap like conflicting DLLs? For the most part our servers are chugging right along at 2-15% processor utilization all day. We have lots of capacity we're not using. Virtualization is not supposed to increase performance. Now if your servers are choking all day and pinning processors, then virtualization is not really for you, at least not for the purpose of consolidating servers. I would buy ESX and VC just for the management features alone. Consolidation is nice, but I view ESX as a very important layer between the BIOS and the OS. In fact, I wish VMware would partner with HP and put ESX right on the motherboard with its own small processor.
Posted by tgrenier (256 comments )
What are you talking about???
What do you mean by "parallelization"?
If you mean some sort of multithreading, it doesn't have a thing to do with virtualization. It is like saying that the mouse will replace the hard disk. Completely different spaces.
And if you mean running virtual OSes scattered among several physical machines, that's a pipe dream. It was suggested about 20 years ago and has made no progress since. It has actually become further away as hardware has progressed. While single-system performance has grown at an exponential rate, system interconnect has lagged behind and grown at a much slower rate. And the techniques for developing distributed apps have not progressed much. Applications that are efficient on systems with more than four processors are scarce, and that's with local interconnect speeds. Efficient applications on distributed systems are scarce, and limited to some scientific processing and financial number crunching.
And the objective of virtualization has nothing to do with performance: it is about convenience. Being able to run several platforms on a single machine, to provision servers in minutes instead of weeks, to apply high-end FT features to low-end loads and to be able to recover failed systems in minutes. Parallelization doesn't even attempt to solve any of those problems.
Posted by herby67 (144 comments )
You are right in that there is a lot of talk about parallelization. However, the software environment needed for parallelization is severely limited. We are nowhere close to having automatic parallelization at the compiler level. It will take at least 7-10 years for parallel programming to become the norm at colleges. Considering these barriers, I think you are over-optimistic on the parallelization front.
Posted by aravindrao (1 comment )
All of my VMware Infrastructure servers boot from the SAN. No local storage on any of my servers, no optical, no floppy, nothing spinning but the fans.

When you pair VMware Infrastructure with a virtualization layer on the SAN, this is the easiest and most efficient setup. When you pair those with dual-port InfiniBand 10Gb/s as your sole network device, it really sings. The only Ethernet is for OOB management. Even the keyboard/video/mouse is sent over the IB link. Easy, elegant and efficient; what more could you ask for?
Posted by ThePenguin (30 comments )
Apples and Oranges.

Parallelization will COMPLEMENT virtualization, not replace it.
Posted by ThePenguin (30 comments )
Virtualization will be a minor feature on a True Parallel Processing foundational platform.

The impact of this foundation will far outweigh any benefit virtualization has brought to computing.

Parallelization will bring speed and performance gains that will astonish the industry. Virtualization does not improve performance; as I said, it can't. In a virtual environment you are taking the resources from the hardware in an attempt to run legacy apps or multiple jobs. You are making things neater, not quicker, or more powerful to perform better.

Most users of virtualization claim the main gain they have achieved from switching over is better organization, because of consolidation. That says nothing about better computing performance. Virtualization is a new environment; it is not the availability of new resources, or the ability to take advantage of the gains in new hardware.

Virtualization provides that new environment, desired or undesired. Parallelization will provide extreme levels of new performance gains in any environment, which we think will be desired by all.

In a true parallel processing environment you are using a parallel processing platform to fully utilize the full capabilities of the multiple-core CPUs now in use.

This means running both multi-tasks and multi-jobs on the single platform, fully utilizing each core. And I'm not talking about Intel's hyper-threading or Mitosis workload-scheduling forms of processing; I'm talking about using the full capacity of each core to execute concurrent parallel processing of multiple applications, for the single or multiple user.

This platform will defy Amdahl's law dealing with parallel processing, and when you see it in use, you will understand why virtualization will be a minor feature in a true foundational parallel processing platform.

Think foundation, as this platform will run all applications, regardless of their origin (language).

And as I said, the scheduler is most critical to all of this, sending that workload to each core as needed.

The idea that saving energy and resources is the motivation behind letting cores sit idle is a programming inside joke. They sit idle because the current OSes and their current schedulers can't handle the workload balance. And the current OSes can't handle, or should we say, can't take advantage of, the capabilities behind the new multiple cores, because they were never written well at their foundation level.

And now that fundamental problem that has been haunting us will be fixed. It's the future.
Posted by thecatch (49 comments )
Amdahl's law....look it up....will be disproved...
And I stated that we do not consider single-core processing TRUE PARALLEL PROCESSING, although we are able to effectively multi-task on a single core as well, to the point where we are able to run concurrent multiple applications (many).

We are using both the Woodcrest and the new Quad Cores to outperform a Google cluster.

And if you want to sit here all day and discuss parallel processing, I would be happy to oblige.

Let me stress this, though: we are talking about providing parallel processing methodology for both Business Enterprise Software and Massively Parallel Environment Software.

And my simple point is that if you have a parallel processing software application that can concurrently execute multiple applications of any origin on a single platform, there will be less need, if any need at all, to stack virtual machines running multiple applications. Especially when you factor speed & performance into the equation. And consolidation will continue, because as I said, two modest servers could run an entire small company. And a few more could run an enterprise.

So conceive of that, and you see the future, and the significant difference between Parallelization and Virtualization, apples to oranges.

Virtual Machines will be utilized within the Parallel Processing Environment, when needed, and in the developmental realm of

See you when we get there.
Posted by thecatch (49 comments )
Virtual and Parallel and Amdahl's law
There's a lot to agree with on both sides of this discussion. Although after 25 years in the supercomputing and parallel processing business, I am always sceptical when someone says Amdahl's law will be disproved, since it just states that performance is limited by the irreducible serial, non-parallel portion of the code. To do better than that, one would have to change the serial portion to be parallel, and then by definition it is no longer serial. The biggest effect one sees that helps get some superlinear speedups is when a problem which was not fitting in cache well does, because now one benefits from the effectively larger combined N caches in the parallel processor.

And there are always other irreducible portions, such as I/O, which can be troublesome to parallelize.
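Amdahl's law as described above fits in one line; a sketch in Python showing why a 95%-parallel program tops out at 20x no matter how many cores you add:

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Speedup is bounded by the serial fraction of the code."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

print(round(amdahl_speedup(0.95, 8), 2))     # 5.93 on 8 cores
print(round(amdahl_speedup(0.95, 1000), 2))  # 19.63, approaching the
                                             # 1 / 0.05 = 20x ceiling
```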

Multi-core will tend to drive up Virtualization opportunities in the short run, since now the 1 RU servers will be even more powerful and can consolidate even more workload with each generation.

Nevertheless, there is an important point being made here that multi-core will be a strong driver, finally, to force the issue of parallelizing more business applications (at least the long-running ones). We happen to think that, because of scaling limitations under Amdahl's law, parallelization in the business world, and increasingly in the technical HPC world as well, will shift more toward the embarrassingly parallel throughput style. This can be used for design optimization, parameter studies, processing multiple images, genomics sequencing and the like, where you run the same executable with different parameters simultaneously across a significant number of cores.

Then you force the need for more dynamic, real-time scheduling to those cores. And you'd like to have - for management flexibility and operations reasons - real-time virtual provisioning, so that different apps, which may require different software stacks, can be quickly moved in and out of the clusters or Grids.

Real-time scheduling and provisioning will be an important place where Parallelism and Virtualization meet.
Posted by perrenod (1 comment )
