Wednesday, October 28, 2009

The smartphone maze

Much has been made of recent improvements in Google Android phone sales. Android phones are now available (or will be available by November 1) on all major U.S. carriers except AT&T, and many carriers will have multiple Android phones. There are some who say that this will doom the Palm Pre, which along with the iPhone has the slickest user interface of all the various smartphones out there. But my own analysis is that this isn't so: The smartphone OS that Android is supplanting is not RIM's or Palm's, but, rather, Windows Mobile.

It is little secret that the development of the next generation of Windows Mobile is a disaster. Windows Mobile 6.5 has been announced for the end of this year to collective yawns -- nobody thinks anyone is actually going to ship a phone based on it. Windows Mobile 7 has been announced for next year to barely concealed guffaws; nobody serious expects a viable Windows Mobile 7 to come out anytime before the end of next year. What has happened during this era of stagnation is that Windows Mobile vendors are migrating to Android for their new smartphones. Android supports the features of the new smartphone hardware, while Windows Mobile doesn't. And while Android is a user interface disaster, so is Windows Mobile -- both systems embody the pre-iPhone paradigm in which each application has its own unique user interface, as opposed to the new multi-touch common-user-interface paradigm in which all user interface code goes through a library that enforces a common look and feel. In short, where geeks used to go to Windows Mobile because it was a (relatively) open platform with capabilities, such as multi-tasking, that the competition did not have, they now go to Android instead because it has those same attributes but supports newer hardware.

So what seems to be falling out of all this is that Windows Mobile is going to go the way of old-school PalmOS shortly. The current vendors of Windows Mobile phones, such as HTC, appear to be engaged in a mass migration to Android. But this does not mean that sales of the iPhone and Palm Pre will be hurt by Android. They are simply different markets -- Android, due to its fundamental design and development processes, will never be able to match Apple or Palm on ease of use or consistency of user interface between applications. Like Windows Mobile, Android is a geek product. Plenty of geeks will likely end up migrating to Android, but there is a huge market for smartphones among people who are maxing out the capabilities of standard candybar and flip phones with Twittering and everything else they want to do with their phones, and most of those people are not geeks. Vendors like Apple and Palm are well positioned to go after that market... but Android simply doesn't play there, any more than RIM does with their crackberries.

-E

Monday, October 26, 2009

People are NOT fungible

One of the things that happened during the transformation from being "employees" to being "human resources" is that large corporations apparently decided that employees are fungible. That is, if you have two employees, employee A and employee B, and employee A is making a lot more money than employee B, it's fine to drive off employee A, move employee B into his position, and then hire a contractor for even less to fill employee B's old position. Hey, an employee is an employee, right? Interchangeable, just like cogs, eh?

Much has been said about Microsoft's T-Mobile Sidekick disaster and what it says about the notion of "cloud computing" (hint: as I said earlier, cloud computing does not eliminate normal IT tasks other than actual hardware maintenance). But it says even more about the whole concept of "human resources". The infrastructure that Microsoft purchased with Danger included Oracle databases, Sun servers, and non-Microsoft NAS or SAN systems -- none of them things Microsoft has experience with. The current hypothesis is that Microsoft brought in a contractor to do an Oracle database upgrade, the contractor did exactly that, and Oracle -- as it often does -- ate its database during the upgrade. This was compounded by the database backups apparently being unreadable by the new version of the database. All of this is remediable if you have sufficient Oracle expertise on staff, but apparently neither Microsoft nor their contractor had such expertise -- the people who did have it had all left Danger after the acquisition, having been shifted to positions dealing with other technologies that they didn't like or didn't have the skills to do successfully.

Lesson for managers: identify the critical skills you need in order to continue to have a viable business, and retain the people who have them. It's a lot easier to retain the people you need than to find new people with those same skills after they leave and you discover that your enterprise is suddenly no longer viable because critical tasks are going undone for lack of expertise. Employees are not fungible. You simply cannot replace an Oracle database expert with a contractor hired off the street, or with an expert in Microsoft databases; Oracle databases are black magic, and the people who can successfully maintain them are worth every penny you pay them.

Of course, it's easy to throw stones at Microsoft here, but this is not a Microsoft problem. It's an industry-wide problem. Managers industry-wide are failing to identify the skills needed to keep their businesses viable, and are blind-sided when the people with those critical skills leave. Thus you get disasters like Sprint's Nextel meltdown, or this Sidekick debacle, where critical infrastructure people left, the infrastructure fell apart, and the enterprise was rendered non-viable. Employees are not fungible, and if you fail to identify the skills needed to keep your business operating and to retain the people who have them, you may not get the press of the Sidekick disaster, but your business will operate slower and less efficiently and will have difficulty getting product out the door. And pay particular attention to IT and operations people. That's not sexy stuff, but both Sprint/Nextel and Microsoft/Danger show that you simply cannot fire all the operations people you just acquired and replace them with your own employees whose experience is with a totally incompatible technology. It doesn't work. It just doesn't. And remedying the disaster that arises after you do this will be far, far more expensive than retaining those critical infrastructure people in the first place.

-E

Monday, October 12, 2009

SSD in a low-end netbook

Netbooks tend to live a hard life. They're used in moving cars, they spend a lot of time banging around in backpacks, and so forth. Early netbooks like the Asus eee that practically defined the category used Linux and a small flash memory chip. This dealt quite well with the problem of durability -- flash memory chips don't care about vibration (at least, not about levels of vibration that wouldn't utterly disintegrate the whole computer). The catch is that people want to use their netbooks to view multimedia content, and Linux is woefully inadequate in that area because Linux users today are either utter geeks (parodied in this Xkcd comic) or are using it for servers, where multimedia is not an issue (other than serving it via a web server). So netbooks have moved to Windows XP rather than Linux.

The problem is that Windows XP does not run well off the slow flash memory chips included with first-generation netbooks, so netbooks have moved to the cheapest hard drives available. Unfortunately this brings two problems: 1) those hard drives are still painfully slow compared to current state-of-the-art hard drives, and 2) they have the same vibration and G-force sensitivity as any hard drive, making them a poor fit for netbooks.

The solution would be a high-speed SSD. SSDs perform much better than low-end hard drives, and the only vibration or G forces that could destroy one would turn the entire netbook into a pile of shards. The problem is that SSDs have typically been expensive. Until now: a 64GB SSD for $150, in this case a Kingston SSDNow V-Series.

64GB doesn't sound like a lot of storage, but I examined the hard drive on my Acer Aspire One netbook and discovered that I was using a whole 20GB of space. I think my usage of the netbook is probably typical -- Internet browsing and light word processing. These aren't computers you buy for video processing or music recording; they don't have the CPU horsepower for that, but they're perfectly acceptable for Internet browsing. When I'm bouncing around in my Jeep on field expeditions I don't want to haul around my expensive Macbook Pro, I want something small and durable for quick email checks whenever I get near civilization, and the Aspire One suffices for that. Except for the hard drive issue.
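Incidentally, a check like the following trivial sketch (any disk-usage tool does the same job) is enough to find out whether your own usage would fit on a 64GB drive before committing to a small SSD:

    import shutil

    # Report how much of the drive is actually in use, in gigabytes.
    total, used, free = shutil.disk_usage("/")   # on Windows, check "C:\\" instead
    print(f"using {used / 10**9:.1f} GB of {total / 10**9:.1f} GB")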

Thus I purchased the above SSD and installed it in my Aspire One. I had previously purchased the disk imaging CD/DVD set from Acer so that I could re-image the netbook when the hard drive failed (note the "when", not "if" -- netbooks live hard lives), and it installed fine onto the SSD. The results have been gratifying. Performance is much better than with the low-end hard drive, and the durability is excellent. Second-generation SSDs have conquered the stuttering problems that plagued the first generation, at least for applications such as netbooks where large writes are rare -- I have not encountered any stuttering at all.

What does this mean for the future? It means yet more low-power, energy-efficient netbooks, perhaps higher in price than current netbooks but with better durability and performance. Netbooks will be relegated to the long-battery-life, small-storage-capacity category rather than being marketed on low performance and low price. You will start seeing some netbooks in the $700 range, around the same as a "real" notebook, assuming sufficient performance can be obtained to justify that price. The question is whether Intel will deliberately cripple Pineview, the follow-on to the current Atom processors, the way they currently cripple the Atom by forcing netbook makers to use the antiquated, high-power-use 945 chipset, which has atrocious graphics performance (i.e., it cannot even play HD videos from YouTube without stuttering, a major problem given that many people buy these things to browse Internet multimedia content). If they do, expect rival chips from AMD and VIA to gain popularity, albeit not with major vendors, thanks to Intel's anti-competitive practice of charging computer vendors more for chips if they use a rival's chips in more than 5% of their shipping computers. Given that there are major markets where Intel's chips are the only available chips, that practice clearly limits how many vendors will jump ship to AMD and VIA. But if Intel can't deliver the performance that people want, somebody will jump ship, even if it isn't Dell or HP...

-E

Monday, October 5, 2009

In the Cloud

Cloud computing. Ah, how the buzzwords love to flock. This is no different from the 1970s, when it appeared that the future was going to be large timesharing services. You could deploy your applications in that "cloud" and have redundancy, automatic backups, and so forth without the time and trouble of maintaining your own infrastructure. If you needed more storage, an additional DASD for your VM virtual machine could easily be allocated from the "cloud" of storage devices available at the timesharing service; if you needed more CPU, your application could be deployed on a VM given access to more of a CPU, or even to a whole CPU; and so on and so forth. Large timesharing services running IBM 370s and their follow-ons were doing cloud computing before the term existed. There is no (zero) functional difference between an IBM 370 running VM in 1973 and a Dell server running VMware ESXi today, other than the fact that the Dell is much faster and has much larger hard drives, of course. Both do the exact same task, and arguably the IBM 370 did it better, since it would let you migrate all processes off of a CPU, take that CPU offline, and remove it entirely for service, *with no disruption to running jobs*. Something which, I might add, its descendant IBM mainframes are still capable of doing, and which VMware wishes it could do.

So what happened, you ask? Why did large corporations move to networked microcomputers and their own in-house mainframes, and why did smaller businesses move to microcomputers and servers? Part of the reason was data security -- having your data reside with a third party that might go out of business at any time was not an acceptable business risk. But they also ran into the same problem that cloud computing runs into when you try to deploy large enterprise databases into the cloud: a lack of I/O bandwidth. We are talking about an era when 300 baud acoustic couplers were high tech, remember, and when the backbones of the primitive data networks ran at 56kbit and operated in block mode. As a result, user interfaces were crude and built around block transfers of screen data, since real-time updates of screen contents in immediate response to keystrokes were simply impossible. When microcomputers arrived, with their megahertz-speed connections to their video displays and keyboards, entire classes of applications became possible that simply could not exist on the prior timesharing systems, such as spreadsheets and real WYSIWYG text editing and word processing.

Pipes to cloud computing facilities today are similarly constrained compared to local pipes. 10 gigabit network backbones are now not unusual at major corporations, yet most ISP connections are DS3s operating at 45 megabits per second. Cloud computing thus runs into the same communications problem that the old time-sharing operations ran into, except in reverse -- where the problem with the timesharing systems was inadequate bandwidth to the user interface, the problem with cloud computing is inadequate bandwidth to the database. Most major corporations generate gigabytes of data every day. One major producer of graphics cards, for example, has so many NetApp appliances filled with simulation data for their cards that they had to expand their data center twice in the past five years. That is not a problem for a 10 gigabit backbone, but you are not going to move that data into the cloud; you're hard pressed just to get it onto local servers.
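To put rough numbers on that gap, here is a minimal back-of-the-envelope sketch. The 500GB-per-day volume is a purely hypothetical illustration, not a figure from the example above; the point is only how the two pipes compare:

    # Back-of-the-envelope transfer times for one day's worth of new data.
    # The 500 GB/day volume is a hypothetical illustration, not a measured figure.
    daily_data_bytes = 500 * 10**9

    links_bits_per_second = {
        "10 gigabit local backbone": 10 * 10**9,
        "45 megabit DS3 to the cloud": 45 * 10**6,
    }

    for name, bps in links_bits_per_second.items():
        hours = daily_data_bytes * 8 / bps / 3600   # bytes -> bits, then seconds -> hours
        print(f"{name}: {hours:.1f} hours to move one day of data")

On those assumptions the local backbone moves the day's data in a few minutes, while the DS3 needs roughly a full day and can never catch up -- which is exactly the bandwidth-to-the-database problem described above.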

So what makes sense to deploy to the cloud? Primarily applications that are Internet-centric and operate upon a limited set of data. A web site for a bookstore that occasionally queries a back-end database server for current inventory works fine in the cloud. Presumably the cloud servers are colocated at critical points in the Internet infrastructure so that buyers from around the world can reach your bookstore and order at any time, the data requirements to the back end are modest, and, because much of the content is static (War and Peace is always going to have the same ISBN and description, for example), much of the data can be cached in those same data centers to reduce bandwidth to the central inventory system. I can imagine that this bookstore might even decide to sell access to its own internally developed system for managing this "cloud" of web application servers to third parties (hmm, I wonder who this bookstore could be? :-). Another possible application is "bursting" -- where you need to serve a significant number of web pages for only a small part of the year. The Elections Results web site, for example, only gets hammered maybe six times per year, and gets *really* hammered only once every four years (when the Presidential race hits town). It serves a limited amount of data to the general public that is easy to push to data centers and serve from local caches there, and maintaining huge infrastructure that will be used only once every four years makes no financial sense. Cloud computing makes a lot of sense there.
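To make the caching point concrete, here is a minimal sketch of how an edge server might keep static catalog data local and only go back to the central system when its copy is stale. The names here (fetch_from_central_catalog and so on) are hypothetical, not anyone's actual bookstore code:

    import time

    # Static catalog data (title, description, ISBN) almost never changes, so an
    # edge data center can cache it for a long time and reserve the WAN link for
    # genuinely dynamic data such as stock counts and orders.
    CATALOG_TTL_SECONDS = 24 * 60 * 60          # refresh static entries once a day
    _catalog_cache = {}                         # isbn -> (fetched_at, record)

    def fetch_from_central_catalog(isbn):
        # Placeholder for the real query back to the central inventory system.
        return {"isbn": isbn, "title": "War and Peace", "description": "..."}

    def get_catalog_record(isbn):
        """Serve a catalog record from the local cache whenever it is fresh enough."""
        now = time.time()
        entry = _catalog_cache.get(isbn)
        if entry is not None and now - entry[0] < CATALOG_TTL_SECONDS:
            return entry[1]                     # cache hit: no traffic to the origin
        record = fetch_from_central_catalog(isbn)
        _catalog_cache[isbn] = (now, record)
        return record

Only the small dynamic piece -- current stock and the order itself -- has to cross the wire to the central inventory system, which is what keeps the back-end bandwidth requirement modest.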

But one thing to remember about cloud computing: even for those applications where it does make sense, it is no panacea. Yes, it removes the need to purchase actual hardware servers, find a location for them in your own data center or in a colo, and provide plumbing for them. But you still have all the OS and software management problems you would have if the servers were local. You still need to deploy an OS and manage it, and you still need to deploy software and manage it; you have simplified your life only in that you no longer need to worry about hardware.

At the recent Cloudworld trade show, one of the roundtables made the observation that "the next stage in cloud computing is going to be simplifying deployment of applications into the cloud." That is, previously the work had been about actually creating the cloud infrastructure; now we have to figure out how to get applications out there and manage them. And past that point I cannot go :-).

-EG