When does the data center become the cloud?

March 28, 2008

This is the beginning of another season of analyst meetings. Today I am at IBM’s Linux and Open Source meeting. Next week I will be with HP at their industry analyst meeting (hardware, software, and services — no printers or PCs). That will be followed by IBM’s Impact (its SOA conference), CA’s analyst meeting, and finally Microsoft’s tools and servers analyst meeting. I could attend many more, but there aren’t enough days in the week and I still have to get some work done!

The overall Linux and Open Source meeting was quite interesting, but what I wanted to talk about is cloud computing. Irving Wladawsky-Berger started off with a fascinating discussion of cloud computing. I have known Irving for many years. For those of you who haven’t come across him, he is one of the most interesting researchers, innovators, and thinkers in the IBM organization. Irving retired from IBM last year and is now Chairman Emeritus of the IBM Academy of Technology and a visiting professor at MIT. I first met Irving when he was the key thought leader behind IBM’s e-business strategy, and later the web strategy and autonomic computing — among others. So it is always interesting to see what he is thinking about.

I wasn’t surprised to find that he was thinking about cloud computing. From his view there is a continuum from the early Internet days to clouds. He draws a parallel between the disruptive qualities of IBM’s e-business strategy and those of cloud computing today. However, he correctly points out that the e-business strategy was not implemented in a vacuum. In fact, IBM focused on innovation in the context of the legacy systems and applications that dominated the real world of its customers.

Today we are definitely at an inflection point. Irving makes an important observation: virtualized systems, beginning with grids, are simply a stage on the path to distributed computing. What links all of these resources together are Service Oriented Architecture-based protocols. If you can encapsulate components and add clearly defined interfaces, you can move to a distributed world. The real challenge is moving from where we are today with virtualization to a systems-wide approach to it. To his credit, Irving concedes that this is going to be complicated and disruptive.

I am a firm believer that there is no such thing as brand new technology that emerges out of nowhere. Irving agrees and suggests that cloud computing is an evolution of everything that has been tried over the last 15 years — the Internet, grids, clusters, etc. The cloud is a massive implementation of virtualization.

Where do we go with this? One of the most important points Irving made, and the one I think is at the heart of making cloud computing (or any type of distributed computing) a success, is industrialization. In short, how do you make things work when workloads grow at an astronomical pace each year? Can you simply cram more and more servers into a traditional data center?

This thought opens up a lot of interesting debates that I will write more about in the next few days. What exactly is a cloud? Is it simply a new type of data center? Does it have to be multi-tenant? Does it have to be “utility computing”? Because clouds are still new and are an evolution of virtualization, I think the definitions will evolve over time. I don’t think there will be a single type of cloud in the market. I also think that, because the cloud sits behind the view of most customers and users, it will not be clear what is a cloud and what is smoke and mirrors. That will certainly make the data center world an interesting place.

The reality, as usual, is more complicated. The real issues around clouds will be the same issues that we have always had in data centers — how do you manage at a massive scale without bringing on armies of people? How do you know which processes are allowed to exchange information with other processes safely? How do you remove the complexity?

One of the concepts that Irving mentioned during his discussion is the idea of ensembles. Basically, these are a way of organizing components of the enterprise into groups of like entities. These could be physical assets as well as software components (like business processes wrapped as business services). One of the keys to success is that these services must have clearly defined interfaces. What I find very interesting is that to create the next generation of distributed systems, we must move to a service-oriented approach.
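To make the interface point concrete, here is a minimal sketch of what wrapping like components behind a clearly defined service interface and grouping them into an ensemble might look like. The names (`BusinessService`, `InventoryService`, `Ensemble`) are my own hypothetical illustration, not IBM’s actual design:

```python
from abc import ABC, abstractmethod

# A clearly defined service interface: callers depend only on this
# contract, never on how the component behind it is implemented.
class BusinessService(ABC):
    @abstractmethod
    def invoke(self, request: dict) -> dict:
        ...

# A business process wrapped as a business service (hypothetical example).
class InventoryService(BusinessService):
    def __init__(self):
        self._stock = {"widget": 10}

    def invoke(self, request: dict) -> dict:
        item = request["item"]
        return {"item": item, "available": self._stock.get(item, 0)}

# An "ensemble": a group of like entities managed as a single unit.
class Ensemble:
    def __init__(self, services):
        self._services = list(services)

    def invoke(self, request: dict) -> dict:
        # Dispatch to any member; because every member exposes the same
        # interface, the ensemble can grow or shrink transparently.
        return self._services[0].invoke(request)

ensemble = Ensemble([InventoryService(), InventoryService()])
print(ensemble.invoke({"item": "widget"}))  # {'item': 'widget', 'available': 10}
```

The point of the sketch is that the caller only ever sees the interface, which is what lets an operator treat a pool of components as one manageable entity.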

IBM is clearly making a play for clouds as a path forward. It sees clouds as a way to manage constantly expanding workloads by applying modularity and simplicity to the problem. IBM is making the connection between the movement toward clouds and autonomic computing. Obviously, if you are going to scale in this way, an autonomic approach makes perfect sense (at least to me). In essence, IBM is presenting its view of clouds along three dimensions: simplified, shared, and dynamic. It is not clear how quickly IBM and others will be able to make this next generation of virtualization operational, but these dimensions demonstrate that the thinking is on the right track.

About Judith Hurwitz

Judith Hurwitz is an author, speaker and business technology consultant with decades of experience.

One Comment
  1. One of the things I believe folks are overlooking is that it is highly likely that the kinds of things that will be done with cloud computing will be quite different. Sure, you could run the same relatively linear solutions that you run today, like ERP systems. But why stop there? With relatively unlimited access to large-scale parallel processing, you could begin to perform functions that are designed for that environment, in addition to what we think of as traditional IT. This shift will radically change the type and amount of value generated by IT. After all, IT was created to create value for the enterprise, not just cut costs.
