Saturday, September 26, 2009

The product cycle does not end at the doors of QA

Most engineers, I've found, have a very limited view of the product cycle. They get a spec from product marketing. They implement this spec. They hand the product off to QA. The product gets shipped to customers after bugs are fixed. They're done.

The problem is that this limited view of the product cycle utterly ignores the most important question of all for customers, the question that causes the most pain and headache for customers: How will this product be distributed and deployed in the enterprise?

Product marketing's spec usually doesn't answer this question. For that matter, the customer often has no idea either. The end result is that you end up with atrocities like the Windows install cycle, where deploying Windows and Windows applications across your enterprise requires massive manpower and huge amounts of time and trouble.

When you're working on your product's detailed functional specification and architecture, you must also be thinking about how to automate its deployment. You're the one who knows how the technology will work internally. So let's look at the evolution of deployment...

My first contract job after leaving teaching was a school discipline tracking program, needed to meet new federal requirements for tracking disciplinary offenses. Once I had the actual program working, my next question was, "how will this be deployed?" I knew this company had over a dozen clients scattered all over the state, each of which had at least five schools. There was no way we were going to deploy it by hand to all of those places. The program also required a new table inserted into the database, so you couldn't just drop the program into some location and have it work, and it required a modification of the main menu file to add the new program to the list of programs. So there were at least three files involved. My solution, given that we had to get this thing deployed rapidly, was to write a shell script (this was on Xenix back in the day), tar it all up, and then give short directions:

  • Place distribution floppy in drive
  • Go to the Unix shell prompt by logging in as user 'secret', then selecting menu option 2/3/4.
  • Type:
    • cd /
    • tar xvf /dev/floppy
    • sh /tmp/install-program.sh
  • Exit all logged-in screens, and re-login.
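The install script itself was only a handful of lines. The original is long gone, so what follows is a reconstruction sketch, not the real Xenix artifact -- the file names, the menu-file format, and the `install_program` helper are all placeholders for illustration:

```shell
#!/bin/sh
# Hypothetical reconstruction of install-program.sh. File names, paths,
# and the menu format are invented stand-ins for the originals.

install_program() {
    prefix="$1"     # install root ("" for a real install; a scratch dir for testing)
    payload="$2"    # program binary unpacked from the floppy tarball

    bindir="$prefix/usr/local/bin"
    menufile="$prefix/etc/menu.conf"
    mkdir -p "$bindir" "$prefix/etc"

    # 1. Copy the new program into place
    cp "$payload" "$bindir/discipline-report"
    chmod 755 "$bindir/discipline-report"

    # 2. The original also loaded the new database table here, roughly:
    #    db-loader < /tmp/discipline-table.sql

    # 3. Append the program to the main menu file -- idempotently, so
    #    re-running the installer doesn't duplicate the menu entry
    grep -q 'discipline-report' "$menufile" 2>/dev/null ||
        echo "Discipline Tracking:/usr/local/bin/discipline-report" >> "$menufile"
}

# In the real installer this would simply run as:
#   install_program "" /tmp/discipline-report
```

The idempotent menu append matters: coordinators would inevitably run the installer twice, and the menu shouldn't grow a duplicate entry every time.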
This worked, but it still required us to swiftly ship floppy disks to the central office technology coordinators of all these school districts, and required them to physically go to each school and perform these operations. So the next thing I did was look at the technology available for modem communications on Unix and decide, "you know, we could do all this with uux!" The modern equivalent would be ssh, but this was years before ssh existed.
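To give a flavor of how that works: uucp queues a file for transfer to a named UUCP neighbor, and uux queues a command to run there once the transfer arrives. This sketch only prints the commands it would queue, since nothing here is hooked up to a real modem -- the site names and paths are invented, and the real system did far more bookkeeping:

```shell
#!/bin/sh
# Dry-run sketch of a UUCP-era software push. For each district site,
# queue the update tarball for transfer, then queue the remote install.
# Site names and file paths are illustrative placeholders.

push_update() {
    tarball="$1"
    shift
    for site in "$@"; do
        # In a live system these would run without the leading "echo":
        # uucp queues the file copy, uux queues the remote command.
        echo uucp "$tarball" "$site!/tmp/update.tar"
        echo uux "$site!sh /tmp/install-program.sh"
    done
}

push_update update.tar district01 district02
```

The transfers happen asynchronously whenever the modems next connect, which is exactly what made this practical over 2400 baud dial-up links.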

By the time I left that company, several years later, I could sit at my desk, press one key, and send a software update to every school district in our state. Each district's technology coordinator could then review and trial the update at the central office and, once it was approved, press one key himself (or herself) to send it to every school in the district. This was all done via modems and UUCP, since it predated Internet access for schools; but because this was also the era of green-screen dumb terminals, where 64K-byte programs were large programs, 2400 baud modems were plenty to do the job. We had arrived at a system of deployment that used the minimum manpower possible to deploy this software across a geographically dispersed enterprise. Because of the swiftly changing requirements of state and federal regulators (who often required updates several times during the course of the year as they decided to collect new chunks of data), this system gave us a considerable cost advantage over our competitors, who still required technicians to physically go to each school and install software updates by hand.

Now, this was a specific environment with specific needs. But you should still go into any project aimed at the enterprise with the explicit goal of making it as easy as possible to deploy. The customer should have to enter as little data as possible to make the program function. It should Just Work, for the most part. And lest you say, "but my program requires complex configuration to work!", you should investigate whether that's actually true. It was thought to be true of enterprise tape backup software, for example -- that setting up enterprise tape backup required huge amounts of technical expertise to configure SCSI tape jukeboxes. It was the insight of my VP of Engineering at the time, however, that all the information we needed to configure the entire SCSI tape subsystem was already exported either by Linux or by the SCSI devices themselves. We told customers to place one tape into each jukebox, then press the ENTER key. They pressed the ENTER key, and my scanning routine went out and did the work for them. What was a half-day affair with competitors became a single button press.
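For the curious, the kernel-exported information in question is the kind of thing you can still see in /proc/scsi/scsi on Linux. This is not our actual scanning routine -- just a minimal sketch of the idea, picking the tape drives ("Sequential-Access") and jukebox robots ("Medium Changer") out of that listing:

```shell
#!/bin/sh
# Sketch: Linux already exports every SCSI device's type, vendor, and
# model, so a scan can find tape hardware without asking the user for
# anything. Parses the /proc/scsi/scsi text format (Linux 2.x era).
# The real product's probing was considerably more involved.

list_scsi_tapes() {
    scsi_file="${1:-/proc/scsi/scsi}"
    # Keep only tape drives and media changers; strip leading indentation.
    grep -E 'Type:[[:space:]]+(Sequential-Access|Medium Changer)' "$scsi_file" |
        sed 's/^[[:space:]]*//'
}
```

From there the configuration logic can match each changer to its drives and build the whole jukebox configuration automatically -- which is the entire point: the data was already there, waiting to be read.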

The point is that a) you must think about how to distribute software updates to multiple enterprises, and across each enterprise, with the minimum necessary human intervention, and b) even the initial deployment of seemingly complex products can become easy once you look at what technology is involved and figure out ways to automate the configuration. But you need to be thinking about deployment -- how is this going to be deployed into the enterprise? -- or it's not going to happen. What happens instead is the typical product on the market today: expensive to deploy across the enterprise, so deployment doesn't happen, or happens only haphazardly. And being typical, in this day and age, is hardly a way towards success...

-E
