by Robin Bloor, Partner
I’m not sure which year it was that virtualization became a new word. Maybe it was 2002 or 2003. I remember IBM introducing its “On Demand” marketing vision and using the term as though it were already in Webster’s. I thought: “nice idea, but actually only VMware is delivering this right now”. In those days virtualization meant virtualization of the OS, which in turn meant Linux and Windows, because that’s what VMware could deliver. Actually, I’m being a little unfair to IBM, here. The mainframe has been able to virtualize an OS for as long as I’ve known about mainframes – long before VMware became the flavor of the month. And, IBM had been virtualizing Linux on the mainframe for quite a while.
But when people spoke of virtualization a few years ago, they usually meant using VMware. Virtualization has grown much bigger than that now. The Intel-based Apple Macs can use virtualization software from Parallels to virtualize Windows. Actually this is not the only way of virtualizing a desktop. Using the capabilities provided by Softricity (recently acquired by Microsoft) and Ardence, you can stream a desktop image to a PC where it executes. This improves the robustness of Windows considerably and delivers a virtual desktop capability. You can access your specific desktop from any physical PC.
Storage can be virtualized too. Nowadays, just about anyone who is anyone in the storage market claims to deliver some kind of virtualization solution.
So What’s Going Down Here?
The simple truth is that the remorseless increase in CPU power – the gift that keeps on giving – has given us the ability to wrap whole environments and run them. Almost since the birth of computing it has been possible to run emulations, but emulations cost CPU power. Well, when CPU power doesn’t cost much, emulations cost diddly-squat. And this means that virtualization is the order of the day.
Now if you look at a computer network globally, most networks run a variety of workloads. For example, email is processor-light and storage-heavy, whereas SAP is the opposite (processor-heavy and storage-light) because it is transactional and the transactions need the compute power. Not so long ago (maybe 15 years ago) most applications were transactional. Then came data warehouse and email, neither of which is transactional, and each presents a distinctly different workload. Nowadays we have VoIP and the onset of video applications, which are also distinctly different workloads. VoIP and video need to be intelligently managed if you are going to deploy them over networks, and you will be doing that at some point, if not now. So, suddenly network bandwidth is a resource that needs careful management.
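The workload mix described above can be sketched as a simple profile table. This is a minimal illustration only; the demand scores are invented to show relative emphasis (0 = light, 2 = heavy), not measurements of any real application.

```python
# Illustrative workload profiles; scores are assumptions, not measurements.
workloads = {
    "email":          {"cpu": 0, "storage": 2, "bandwidth": 0},
    "sap":            {"cpu": 2, "storage": 0, "bandwidth": 0},
    "data warehouse": {"cpu": 1, "storage": 2, "bandwidth": 0},
    "voip/video":     {"cpu": 0, "storage": 0, "bandwidth": 2},
}

def dominant_resource(demand):
    """Return the resource this workload stresses most."""
    return max(demand, key=demand.get)

for name, demand in workloads.items():
    print(f"{name}: {dominant_resource(demand)}")
```

The point of the sketch is that no single resource dominates across the mix, which is why the network as a whole, rather than any one server, is what needs managing.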
So if you now consider the management of a network running a mix of these applications and a large number of desktops too, you quickly realize that it’s a bit of a nightmare. You simply cannot manage all of this at the level of a server or a cluster of servers. You have to optimize the whole network.
I was reminded of this reality in a recent briefing from Avocent, a company that manufactures appliances for managing networks and which proudly champions its “agentless” technology. Avocent presented the idea of “virtual presence”.
What do they mean?
Here’s the conundrum. Once you try to manage the whole network as a single resource space (and nowadays you need to do this), you suddenly discover that where the management software actually runs is itself a problem. Let’s just hypothesize for the moment that you are actually able to optimize a network in an effective way; dynamically provisioning virtual capabilities at all of its nodes and truly exploiting the whole set of resources to best advantage.
Now imagine that a server or two fails. If either of these servers is running your management software, then it suddenly disappears too and the whole network is floating free and rudderless. Not acceptable, I’m afraid.
Avocent’s approach of a virtual presence suggests that the management software really needs to have its own circuit. Avocent goes some way to delivering this. Its agentless software doesn’t die if a server dies. It makes use of IPMI firmware, which means that it can get management data from a processor board in an agentless manner, and it offers a consolidated console for viewing the behavior of the network.
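To make the agentless idea concrete: IPMI lets a management station read sensor data from a server's board-level firmware, with no agent on the host OS, so the readings survive an OS crash. The sketch below parses sensor output in the pipe-delimited style typically produced by tools such as ipmitool; the exact line format varies by BMC, so treat the sample as an assumption.

```python
# Hedged sketch of agentless monitoring: parsing "name | value | status"
# sensor lines, as returned by IPMI query tools such as ipmitool.
# The sample format is an assumption; real BMC output varies.
def parse_sensor_lines(text):
    """Parse pipe-delimited sensor lines into a list of reading dicts."""
    readings = []
    for line in text.strip().splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) != 3:
            continue  # skip malformed lines rather than fail the whole poll
        name, value, status = parts
        readings.append({"sensor": name, "value": value, "status": status})
    return readings

sample = """\
CPU Temp         | 41 degrees C      | ok
Sys Fan 1        | 6800 RPM          | ok
Ambient Temp     | 23 degrees C      | ok
"""

for r in parse_sensor_lines(sample):
    print(r["sensor"], "->", r["value"], f"({r['status']})")
```

Because the data comes from firmware rather than from a process on the managed server, this style of collection keeps working even when the server's OS does not – which is the property Avocent is trading on.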
Avocent provides fundamental system management capability but it doesn’t deliver anything close to the whole spectrum of infrastructure management capability. (It sees itself as complementary to the wide variety of other management products.) The problem is that most of the other management products have no concept whatever of a “virtual presence”. They need to have this, for many reasons.
The most important one is this:
In a large computer network with diverse workloads, the management software needs to behave like an operating system behaves on a single computer. It needs to allocate resources, monitor activities and preemptively prevent resource overloads. To do this it needs to run with the highest priority, higher than any of the applications it manages. It needs its own exclusive resources. It needs to run on its own circuit.
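The OS analogy above can be sketched as a management loop that reserves capacity for itself and flags nodes before they saturate. This is a minimal sketch; the node names, threshold, and reserve fraction are invented for illustration and do not describe any real product.

```python
# Hedged sketch of OS-style network management: flag overloads
# preemptively, with capacity reserved for the management layer itself.
# Thresholds and node names are illustrative assumptions.
OVERLOAD_THRESHOLD = 0.85   # flag a node before it actually saturates
MGMT_RESERVE = 0.10         # fraction of capacity reserved for management

def check_nodes(loads):
    """Return the nodes whose load exceeds the preemptive threshold.

    Each node's usable capacity is reduced by MGMT_RESERVE, modelling
    the idea that management needs its own exclusive resources rather
    than competing with the applications it manages.
    """
    usable = 1.0 - MGMT_RESERVE
    return [node for node, load in loads.items()
            if load / usable > OVERLOAD_THRESHOLD]

loads = {"mail-01": 0.40, "sap-01": 0.82, "video-01": 0.91}
print(check_nodes(loads))
```

Note that a node at 82% raw load is already flagged once the management reserve is accounted for – acting early, before saturation, is what distinguishes preemptive management from mere monitoring.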
If this is what Avocent means by “virtual presence,” then I’m in favor.