When Service Oriented Architecture (SOA) first emerged as a driving trend, we at Hurwitz & Associates quickly concluded that virtualization would be a foundation of SOA. And while much of the virtualization discussion has focused on hardware, we think the ability to virtualize software is the real innovation.
Hardware virtualization focuses on partitioning servers to behave as though they were multiple machines or, alternatively, uniting a number of them into a grid that behaves as if it were a single giant system. Software virtualization works in harmony with this. It enables software components to be distributed across heterogeneous platforms and quickly instantiated or relocated according to business priorities. Software virtualization is harder to achieve than it might seem at first glance.
Software virtualization delivers the ability to loosely couple business services so they can be used in many different situations across highly distributed, heterogeneous environments. For this to work, you need to establish the equivalent of an operating system for the entire hardware resource space, and this is what Hurwitz & Associates calls the SOA Supervisor.
If you are interested in learning more about the SOA Supervisor, I recommend you read the chapter in our Service Oriented Architectures for Dummies that discusses this approach in detail. In brief, an SOA Supervisor keeps track of the component parts of a business process and makes sure they execute according to the dynamic requirements that are at the heart of SOA.
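To make the idea concrete, here is a minimal sketch of the kind of bookkeeping an SOA Supervisor performs: registering the components of a business process along with their requirements, and flagging any component that is no longer meeting them. All class and method names here are hypothetical illustrations, not an actual product API.

```python
# Hypothetical sketch of an "SOA Supervisor": track each component of a
# business process and check it against its current requirement.
# Names and the millisecond-based requirement are illustrative assumptions.

class SoaSupervisor:
    def __init__(self):
        # component name -> (required response time, last observed response time)
        self.components = {}

    def register(self, name, requirement_ms):
        """Add a business-process component and its service requirement."""
        self.components[name] = (requirement_ms, None)

    def report(self, name, observed_ms):
        """Record the latest observed response time for a component."""
        req, _ = self.components[name]
        self.components[name] = (req, observed_ms)

    def out_of_spec(self):
        """Components whose observed response time misses the requirement --
        these are the ones the supervisor would act on."""
        return [name for name, (req, obs) in self.components.items()
                if obs is not None and obs > req]

sup = SoaSupervisor()
sup.register("credit-check", requirement_ms=200)
sup.register("inventory", requirement_ms=500)
sup.report("credit-check", 350)   # missing its 200 ms requirement
sup.report("inventory", 120)      # comfortably within spec
```

In a real system the "act on" step would trigger reallocation or relocation; the sketch stops at detection, which is the supervisory part.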
Together, hardware and software virtualization are the keys to making utility computing real. The utility computing concept has been a dream for decades. Organizations have long hoped that at some point computing would operate according to the utility model by which telecommunications services are provided. Now, at last, there are companies, both emerging and well established, that provide the pieces to make it a reality. One company hoping to capitalize on this is Cassatt of San Jose, California.
I recently sat down with Bill Coleman, the CEO of Cassatt. If his name sounds familiar, it is because Bill was the founder of BEA. Not satisfied to rest on his BEA laurels, Bill turned his attention to a software virtualization platform that he hopes will achieve a leadership role in utility computing. It will be no small feat if it does.
Cassatt was founded in 2003 by Coleman and three partners: Rob Gingell, a Sun Fellow and Solaris Chief Architect; Steve Oberlin, designer of the Cray T3 series; and Karen Willem, who was CFO at Brio. The company is funded by the same folks that funded BEA (Warburg Pincus) and New Enterprise Associates. With more than $100 million in funding, the new company is on a fast track. It already has more than 100 employees and has signed its first 100 customers.
Cassatt, Autonomic Computing and SOA
So, what is this about?
Cassatt does “autonomic computing” – a term first coined by IBM Tivoli to describe operational capabilities that are very similar in nature to the autonomic nervous system in the body – in other words, self-managing. This self-management can involve everything from moving a software component from one server to another, based on a business rule, to allocating more resources to a process.
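A hedged sketch of what such a self-managing rule might look like in code follows. The placement rule ("run on the least-loaded server with spare capacity") and all names are illustrative assumptions, not Cassatt's actual interfaces.

```python
# Minimal sketch of autonomic, rule-driven management: components are
# placed by a business rule, and an overloaded server sheds work to a
# peer without operator intervention. Illustrative only.

class Server:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity   # max components this server should run
        self.components = []

    def load(self):
        return len(self.components)

def place(component, servers):
    """Business rule: run the component on the least-loaded server
    that still has spare capacity."""
    candidates = [s for s in servers if s.load() < s.capacity]
    target = min(candidates, key=lambda s: s.load())
    target.components.append(component)
    return target

def rebalance(servers):
    """Self-management: if a server is overloaded, move one component
    to a less-loaded peer -- no human operator involved."""
    for s in servers:
        if s.load() > s.capacity:
            component = s.components.pop()
            place(component, [x for x in servers if x is not s])

servers = [Server("a", 2), Server("b", 2)]
place("billing", servers)   # lands on "a" (both empty, first wins)
place("orders", servers)    # lands on "b" (now least loaded)
```

The point of the sketch is the shape of the loop: observe load, apply a rule, act. An autonomic system runs that loop continuously.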
Cassatt uses its autonomic approach to turn the software components of a data center into a set of software services. Its product, Collage 4.0, provides a set of utilities that discover an application and decompose it into its object code. That object code is stored in a series of “containers” that are, in essence, virtual machines. These virtual machines allow customers to determine where various services will physically run. The software enables the flow of the application to be prioritized and can reconfigure how network resources are allocated based on the business requirements of the application.
Cassatt coined the term Service Level Automation to describe its approach. It says that its software virtualization can deliver a guaranteed level of service in a cost-effective manner. What is intriguing about Cassatt’s approach is its tie-in to SOA. It turns the data center into a distributed computing platform where all the components are linked together based on the quality of service required by the organization.
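The core of service-level automation can be sketched as a simple allocation policy: services declare a business priority and a capacity requirement, and shared servers are handed out in priority order until capacity runs out. The policy format below is an assumption for illustration, not Cassatt's actual Collage configuration.

```python
# Illustrative sketch of service-level automation: allocate a shared
# pool of servers to services in order of business priority, so the
# highest-priority service gets its full requirement first.

def allocate(services, free_servers):
    """services: list of (name, priority, servers_needed) tuples;
    higher priority wins when capacity is scarce."""
    plan = {}
    pool = list(free_servers)
    for name, _priority, needed in sorted(services, key=lambda s: -s[1]):
        granted, pool = pool[:needed], pool[needed:]
        plan[name] = granted   # may be fewer than requested if pool is short
    return plan

# Hypothetical policy: order entry is revenue-critical, reporting is
# best-effort. Three servers are available for four requested slots.
policy = [
    ("order-entry", 10, 2),
    ("reporting",    2, 2),
]
plan = allocate(policy, ["srv1", "srv2", "srv3"])
```

Running this, order-entry receives its full two servers and reporting absorbs the shortfall, which is exactly the behavior a quality-of-service guarantee implies: scarcity is pushed onto the lowest-priority work.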
While this concept of a distributed platform is still evolving, it is clear that it is an important component in the SOA environment. What are the characteristics that need to be in place to make autonomic or distributed infrastructures a reality for SOA? In essence, the supporting infrastructure must be independent of the software implementation it supports, because it would be impractical to restructure and recode either the infrastructure itself or the software it supports on a regular basis.
Cassatt isn’t the only player in this space. The company is focused on one component of utility computing, but other services are required as well, such as provisioning, workflow, and security, to name a few. We expect that the major infrastructure vendors, including IBM, HP, Sun, and others, will see software virtualization as an extension of their hardware and software platforms and will thus attempt to dominate the space.
However, there is room for innovative approaches such as the one Cassatt is developing. We especially appreciate the fact that Cassatt does not require changes to the underlying hardware or software. Cassatt automates the runtime rules governing how software is distributed in the data center, based on set policies. Cassatt’s biggest challenge will not be technology; it will be convincing businesses that this new approach to managing the data center is a better and more pragmatic one.