Workload Portability – Is It the Backbone of the Cloud?

For those who know the history of PlateSpin, the term Workload Portability is synonymous with much of its product strategy. The idea surfaced almost 10 years ago, came to fruition at the height of PlateSpin’s success, is enjoying a renaissance at Novell, and is perhaps finally being recognized as one of the underpinnings of the ideal Cloud architecture. It’s personally satisfying to see the vision we all shared become a valued core of where the systems management market is heading — albeit under many banners and much confusing terminology.

In short, workload portability makes it possible to migrate elements of a business service (e.g. an instance of an application + OS + data) to the appropriate infrastructure so that it can service the needs of the user. Given that user needs change over time, over both short and long time frames, the more portable the workload, the more flexibility the IT service has to satisfy customer needs. And the more flexible the service, the more economically efficient it operates, which is the pinnacle goal many CIOs and service providers chase.
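
To make that definition concrete, here is a minimal sketch in Python. It is purely illustrative: the `Workload` class and its field names are mine, not any vendor’s API. The point it makes is that the bundle (OS + apps + data) is decoupled from the one field that says where it runs:

```python
from dataclasses import dataclass, field

@dataclass
class Workload:
    """One portable element of a business service: OS + application + data.
    All names here are hypothetical, for illustration only."""
    os_image: str                                # captured OS instance
    apps: list = field(default_factory=list)     # application stack on that OS
    data: list = field(default_factory=list)     # data the apps depend on
    runs_on: str = "physical"                    # the only infrastructure tie

# Portability means the bundle survives a change of infrastructure intact:
wl = Workload("rhel5-image", apps=["crm"], data=["/srv/crm"])
wl.runs_on = "virtual"   # rehost; os_image, apps and data are untouched
wl.runs_on = "cloud"     # rehost again as user needs shift
```

The more of a workload you can keep on the left side of that divide, the more freely it moves.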

For those who spend time thinking about how to make workloads portable, there are many challenges to overcome — some already addressed and others yet to emerge in a reliable, production-ready form. The newest need seems to come from a desire to move workloads in and out of external Cloud services. Using workload portability vernacular, let’s denote that as P2C or V2C (i.e. physical or virtual to/from Cloud). We may also need C'2C'' if we don’t trust the Cloud provider and want a way to move workloads between Cloud providers. The alphabet notation can get confusing, but the essential functionality is the same: pick up the OS, applications and data, deposit them somewhere else in the web-connected infrastructure, and set it running. Ideally this all occurs with little to no user interruption, but crossing the various infrastructure boundaries without downtime is essentially a pipe dream with today’s complex and non-standard-conforming run-time environments (I will grant that you can create use cases where downtime is all but eliminated, but that’s not the general case, not yet).

A great set of problems to solve would be virtualizing network connectivity on a global basis, standardizing disk architectures and normalizing CPU instruction sets; with those in place, downtime would largely be a thing of the past. Oh, and I forgot to mention the need for a CMDB to hold all the relevant knowledge required to make workload portability transformations possible.
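
Whatever letters you put around the 2, the pipeline underneath looks roughly the same. Here is a self-contained sketch of that capture/transform/deploy flow; every function and name in it is a hypothetical stand-in, since each real migration tool fills these steps in differently:

```python
# Illustrative only: hypothetical stand-ins for what an X2C migration
# tool does under the hood. Not any product's actual API.

def capture(workload: dict) -> dict:
    """Step 1: pick up the OS, applications and data from the source."""
    return {"image": dict(workload), "format": workload["source"]["disk_format"]}

def adapt(image: dict, destination: dict) -> dict:
    """Step 2: transform for the target's quirks (drivers, disk layout,
    network identity). A CMDB would supply the knowledge needed here."""
    image["format"] = destination["disk_format"]
    return image

def deploy_and_start(image: dict, destination: dict) -> dict:
    """Steps 3-4: deposit the image on the target and set it running.
    Crossing this boundary is where user-visible downtime creeps in."""
    return {"instance": image, "running_on": destination["name"]}

# Example: a "V2C" move -- virtual source, cloud destination.
source = {"name": "esx-host-07", "disk_format": "vmdk"}
cloud = {"name": "some-cloud-provider", "disk_format": "cloud-native"}
workload = {"os": "linux", "apps": ["erp"], "data": ["/var/db"], "source": source}

instance = deploy_and_start(adapt(capture(workload), cloud), cloud)
print(instance["running_on"])  # -> some-cloud-provider
```

The hard part is not the flow itself but step 2: the transformation knowledge (drivers, disk formats, network identity) that a CMDB would have to hold for every source/destination pair.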

I see a few new companies emerging that focus on the broader problem of workload portability as it relates to large-scale business service management. I also see some of the incumbents starting to address it — one thing is clear: unless they internalize a multi-infrastructure approach in their architectures, they are not likely ready to fit into the dynamic nature (or rather, the hodgepodge nature) of most enterprise IT environments.
