Cloud computing. Ah, how the buzzwords love to flock. This is no different from the 1970s, when it appeared that the future was going to be large timesharing services. You could deploy your applications in that "cloud" and have redundancy, automatic backups, and so forth without the time and trouble of maintaining your own infrastructure. If you needed more storage, an additional DASD could be attached to your VM virtual machine from the "cloud" of storage devices available at the timesharing service; if you needed more CPU, your application could be deployed on a VM given access to more of a CPU, or even to a whole CPU; and so on and so forth. Large timesharing services with IBM 370s and their follow-ons were doing cloud computing before the word existed. There is no (zero) functional difference between an IBM 370 running VM in 1973 and a Dell server running VMware ESXi today, other than the fact that the Dell is much faster and has much larger hard drives, of course. But both do the exact same task, and arguably the IBM 370 did it better, since the IBM 370 would even let you migrate all processes off of a CPU, take that CPU offline, and remove it entirely for service, *with no disruption to running jobs*. Something which, I might add, its descendant IBM mainframes are still capable of doing, and which VMware wishes it could do.
So what happened, you ask? Why did large corporations move to networked microcomputers and their own insourced mainframes, and why did smaller businesses move to microcomputers and servers? Part of the reason was data security -- having your data reside with a third-party entity that might go out of business at any time was a business risk that was not acceptable. But they also ran into the same problem that cloud computing runs into when you try to deploy large enterprise databases into the cloud: a lack of I/O bandwidth. We are talking about an era where 300 baud acoustic couplers were high tech, remember, and where the backbones of the primitive data networks ran at 56 kbit/s and operated in block mode. As a result, user interfaces were crude and based around block transfers of screen data, since real-time updates of screen contents in immediate response to keystrokes were simply impossible. When microcomputers arrived, with their megahertz-speed connections to their video displays and keyboards, they made possible entire classes of applications that were simply impossible on the prior timesharing systems, such as spreadsheets and real WYSIWYG text editing and word processing.
Pipes to cloud computing facilities today are similarly constrained compared to local pipes. 10 gigabit network backbones are now not unusual at major corporations, yet most ISP connections are going to be DS3s operating at 45 megabits per second. It is clear that cloud computing runs into the same communications problem that the prior timesharing operations ran into, except in reverse -- where the problem with the timesharing systems was inadequate bandwidth to the user interface, the problem with cloud computing is inadequate bandwidth to the database. Most major corporations generate gigabytes of data every day. One major producer of graphics cards, for example, has so many NetApp appliances filled with simulation data for their cards that they had to expand their data center twice in the past five years. That is not a problem for a 10 gigabit backbone, but you are not going to move that data into the cloud; you're hard pressed just to save it to local servers.
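To put rough numbers on that bandwidth gap, here is a quick back-of-the-envelope sketch. The one terabyte per day of new data is an illustrative assumption, not a figure from any particular company; the link speeds are the ones mentioned above.

```python
# Back-of-the-envelope comparison: how long does it take to move a day's worth
# of data over a local 10 Gbit/s backbone versus a 45 Mbit/s DS3 uplink?
# The 1 TB/day volume is an assumed example, not a measured figure.

def transfer_hours(data_bytes: float, link_bits_per_sec: float) -> float:
    """Hours needed to push `data_bytes` over a link of the given raw speed."""
    return (data_bytes * 8) / link_bits_per_sec / 3600

DAILY_DATA = 1e12        # 1 TB of new data per day (assumption)
LOCAL_BACKBONE = 10e9    # 10 gigabit local backbone
DS3_UPLINK = 45e6        # 45 megabit DS3 to the cloud provider

print(f"Local backbone: {transfer_hours(DAILY_DATA, LOCAL_BACKBONE):.2f} hours")
print(f"DS3 uplink:     {transfer_hours(DAILY_DATA, DS3_UPLINK):.1f} hours")
# Roughly 0.22 hours on the local backbone versus about 49 hours over the DS3,
# i.e. the DS3 cannot even push one day's worth of data in a day.
```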
So what makes sense to deploy to the cloud? Primarily applications that are Internet-centric and operate on a limited set of data. A web site for a book store that occasionally makes a query to a back-end database server to get current inventory works fine for cloud computing. Presumably the cloud servers are colocated at critical points in the Internet infrastructure so that buyers from around the world can reach your book store and order at any time, and the data requirements to the back end are modest and, because much of the content is static (War and Peace is always going to have the same ISBN and description, for example), much of the data can be cached in those same data centers to reduce bandwidth to the central inventory system (see the cache sketch below). I can imagine that this bookstore might even decide to sell access to their own internally developed system for managing this "cloud" of web application servers to third parties (hmm, I wonder who this bookstore could be? :-). Another possible application is "bursting" -- where you need to serve a significant number of web pages for only a small part of the year. The Elections Results web site, for example, only gets hammered maybe six times per year, and gets *really* hammered only once every four years (when the Presidential race hits town). It serves a limited amount of data to the general public that is easy to push to data centers and serve from local caches there, and maintaining huge infrastructure that will be used only once every four years makes no sense from a financial point of view. Cloud computing makes a lot of sense there.
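Going back to the bookstore example, here is a minimal cache-aside sketch of that edge-caching idea. The `fetch_from_central_inventory` function and the one-hour TTL are hypothetical placeholders standing in for whatever the real back-end query and freshness policy would be.

```python
import time

# Minimal cache-aside sketch for an edge data center: static catalog data
# (ISBN, title, description) is cached locally, so only cache misses or stale
# entries go back over the narrow WAN pipe to the central inventory system.

CACHE_TTL_SECONDS = 3600          # assumed: static catalog data is good for an hour
_cache: dict[str, tuple[float, dict]] = {}

def fetch_from_central_inventory(isbn: str) -> dict:
    # Hypothetical stand-in for the expensive query back to headquarters.
    return {"isbn": isbn, "title": "War and Peace", "in_stock": 42}

def get_book(isbn: str) -> dict:
    cached = _cache.get(isbn)
    if cached and time.time() - cached[0] < CACHE_TTL_SECONDS:
        return cached[1]                      # served from the edge cache
    record = fetch_from_central_inventory(isbn)
    _cache[isbn] = (time.time(), record)      # refresh the local copy
    return record
```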
But one thing to remember about cloud computing: even for those applications where it does make sense, it is no panacea. Yes, it removes the necessity to purchase actual hardware servers, find a location for them either in your own data center or in a colo, and provide plumbing for them. But you still have all the OS and software management problems that you would have if the servers were local. You still need to deploy an OS and manage it, and you still need to deploy software and manage it; you have simplified your life only in that you no longer need to worry about hardware.
At the recent Cloudworld trade show, one of the roundtables made the observation that "the next stage in cloud computing is going to be simplifying deployment of applications into the cloud." I.e., previously it had been about actually creating the cloud infrastructure. Now we have to figure out how to get applications out there and manage them. And past that point I cannot go :-).
-EG