Saturday, September 26, 2009

The product cycle does not end at the doors of QA

Most engineers, I've found, have a very limited view of the product cycle. They get a spec from product marketing. They implement this spec. They hand the product off to QA. The product gets shipped to customers after bugs are fixed. They're done.

The problem is that this limited view of the product cycle utterly ignores the most important question of all for customers, the question that causes the most pain and headache for customers: How will this product be distributed and deployed in the enterprise?

Product marketing's spec usually doesn't answer this question. For that matter, the customer often has no idea either. The end result is atrocities like the Windows install cycle, where deploying Windows and Windows applications across your enterprise requires massive manpower and huge amounts of time and trouble.

When you're working on your product's detailed functional specification and architecture, you must also be thinking about how to automate its deployment. You're the one who knows how the technology will work internally. So let's look at the evolution of deployment...

My first contract job after leaving teaching was a school discipline tracking program, needed to meet new federal requirements for tracking disciplinary offenses. Once I had the actual program working, the next question I had was, "how will this be deployed?" I knew this company had over a dozen clients scattered all over the state, each of which had at least five schools. There was no way we were going to deploy it by hand to all of these places. The program required a new table inserted into the database, so you couldn't just drop the program into some location and have it work, and it also required a modification of the main menu file to add the new program to the list of programs. So there were at least three files involved. My solution, given that we had to get this thing deployed rapidly, was to write a shell script (this was on Xenix back in the day), tar it all up, and then give short directions:

  • Place distribution floppy in drive
  • Go to the Unix shell prompt by logging in as user 'secret', then selecting menu option 2/3/4.
  • Type:
    • cd /
    • tar xvf /dev/floppy
    • sh /tmp/install-program.sh
  • Exit all logged-in screens, and re-login.
This worked, but still required us to swiftly ship floppy disks to the central office technology coordinators of all these school districts, and required them to physically go to each school and perform these operations. So the next thing I did was look at the technology available for modem communications on Unix, and decide, "you know, we could do all this with uux!" The modern equivalent would be 'ssh', but this was years before ssh existed.
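
That one-key push is easier to picture with today's tools. Below is a minimal sketch of the same idea in modern dress, written in Python and assuming ssh in place of uux: loop over the district offices, copy the update bundle over, and stage it for the coordinator's review rather than installing it outright. The host names, spool paths, and staging script are hypothetical stand-ins, not what we actually ran.

    # Sketch: push an update bundle to every district office over ssh.
    # Hosts and remote paths are hypothetical; the original used uux
    # over 2400 baud modems, not ssh.
    import subprocess
    import sys

    DISTRICTS = ["district1.example.org", "district2.example.org"]
    BUNDLE = "update.tar"

    def push(host):
        # Copy the bundle into the district's staging area...
        subprocess.check_call(["scp", BUNDLE, "%s:/var/spool/updates/" % host])
        # ...then stage it for the coordinator to review and approve,
        # rather than installing it immediately.
        subprocess.check_call(["ssh", host,
                               "sh /usr/local/lib/stage-update.sh "
                               "/var/spool/updates/update.tar"])

    if __name__ == "__main__":
        failures = []
        for host in DISTRICTS:
            try:
                push(host)
            except subprocess.CalledProcessError:
                failures.append(host)
        if failures:
            sys.exit("push failed for: " + ", ".join(failures))

The point of the two-step structure is the same one we arrived at with UUCP: the sender's key press delivers the update, and a human at the receiving end still decides when it goes live.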

By the time I left that company, several years later, I could sit at my desk, press one key, and send a software update to every school district in our state. Each district's technology coordinator could then review and trial the update at the central office and, once it was approved, press one key of his (or her) own to send that software to every school in the district. This was all done via modems and UUCP, since this predated Internet access for schools. But because this was also the era of green-screen dumb terminals, where 64K-byte programs were large programs, 2400 baud modems were plenty to do the job. We had arrived at a system of deployment that used the minimum manpower possible to deploy this software across a geographically dispersed enterprise. Because of the swiftly changing requirements of state and federal regulators (who would often require updates several times during the course of the year as they decided to collect new chunks of data), this system gave us a considerable cost advantage over our competitors, who still required technicians to physically go to each school and install software updates by hand.

Now, this was a specific environment with specific needs. But you should still go into any project aimed at the enterprise with the specific goal of making it as easy as possible to deploy. The customer should have to enter as little data as possible to make the program function. It should Just Work, for the most part. And lest you say, "but my program requires complex configurations in order to work!", you should investigate and make sure that's actually true. It was thought to be true about enterprise tape backup software, for example -- that setting it up required huge amounts of technical expertise to configure SCSI tape jukeboxes. It was the insight of my VP of Engineering at the time, however, that all the information we needed to configure the entire SCSI tape subsystem was already exported either by Linux or by the SCSI devices themselves. We told customers to place one tape into each jukebox, then press the ENTER key. They pressed the ENTER key, and my scanning routine went out and did the work for them. What was a half-day affair with competitors became a single button press.
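
To give a flavor of how much of that information the OS already exports: on Linux, the kernel lists every SCSI device, with its vendor, model, and device type, in /proc/scsi/scsi. Here is a minimal sketch in Python -- not our actual scanning routine, and far less thorough -- that finds the tape drives and jukebox robots just by parsing that file:

    # Sketch: discover SCSI tape drives and media changers (the robots
    # inside tape jukeboxes) by parsing /proc/scsi/scsi. The kernel
    # already exports everything needed to identify them.
    import re

    def scan_scsi(path="/proc/scsi/scsi"):
        devices, current = [], None
        for line in open(path):
            m = re.match(r"Host: (\S+) Channel: (\d+) Id: (\d+) Lun: (\d+)", line)
            if m:
                current = {"host": m.group(1), "channel": int(m.group(2)),
                           "id": int(m.group(3)), "lun": int(m.group(4))}
                devices.append(current)
            elif current is not None and "Vendor:" in line:
                current["vendor"] = line.split("Vendor:")[1].split("Model:")[0].strip()
                current["model"] = line.split("Model:")[1].split("Rev:")[0].strip()
            elif current is not None and "Type:" in line:
                current["type"] = line.split("Type:")[1].split("ANSI")[0].strip()
        return devices

    for dev in scan_scsi():
        # Sequential-Access devices are tape drives; Medium Changer
        # devices are the robots that shuffle tapes in a jukebox.
        if dev.get("type") in ("Sequential-Access", "Medium Changer"):
            print(dev)

Between the device type and the vendor and model strings, there is enough there to identify every jukebox and drive on the bus and start configuring the subsystem without asking the customer a single question.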

The point is that a) you must think about how to distribute software updates both to and across multiple enterprises with the minimum necessary human intervention, and b) even the initial deployment of seemingly complex products can become easy once you look at what technology is involved and figure out ways to automate the configuration. But you need to be thinking about deployment -- how is this going to be deployed into the enterprise? -- or it's not going to happen. Instead you get the typical product on the market today, which is so expensive to deploy across the enterprise that deployment doesn't happen, or happens only haphazardly. And being typical, in this day and age, is hardly a way towards success...

-E

Monday, September 21, 2009

iFool

My main computing platform, the one I use for all my software development (via VMware, which lets me develop for a half dozen variants of Linux), is a top-of-the-line Apple MacBook Pro 13.3". My phone is an iPhone 3G 16GB. Am I an iFool? Have I drunk the Kool-Aid? Shouldn't a hardcore Linux penguin be using an Android phone and running Linux on his laptop?

Well, folks, that would be fine and dandy except for one thing: I want to get things done, not spend all my time trying to get Linux drivers working on my work laptop. And for my phone, I want it to seamlessly sync music with my laptop, I want to be able to plug it into the iPod input of my car stereo and have it just work, and I want it to seamlessly sync the address book and bookmarks and notes from my laptop without my having to do anything special. In short, I want it to Just Work.

So for my laptop I bought a MacBook several years ago. It Just Worked. And it was Unix to boot, so all my normal tools were available from the command line, and GUI versions of them (like Emacs, The Gimp, etc.) just worked. The entire developer toolkit came for free with the computer, so I could even type './configure ; make' in my project and compile it under MacOS if I wanted to. I chose the 13.3" form factor because it fits on an airline tray, where bigger laptops won't. I've upgraded to the latest and greatest where it gives me a real advantage -- most recently to the new aluminum MacBook Pro with the 7-hour battery and FireWire 800 and nVidia graphics chipset -- and whenever I upgrade, the new MacBook sucks in my accumulated years of files, notes, photos, and videos without a problem. It Just Works, meaning I can do my job instead of fiddling with the technology all day long.

For my phone, I used a Palm Treo running the Palm OS for many years after it was obsolete. Yes, the email and web clients sucked. But it synced my addresses, notes, ToDo lists, and calendars without a problem. The problem came when I was stuck in an airport waiting room needing to check on what was up with my itinerary. The hoary old Palm web browser just completely choked on the airline's web site. So what to do?

Now, I have an HTC Wizard that I used on T-Mobile some years ago. It runs Windows Mobile 5. WM5 will not sync with my Mac without extra-cost software, and my experience with WM5 was that it was technically astute but had the user interface from hell. The Treo's user interface was simple, plain, and easy to use one-handed; the WM5 user interface really wanted three hands -- one to hold the phone sideways, and two to thumb on the thumb-board.

Then I looked at Android. And Android, alas, reminded me sorely of Windows Mobile 5. It is technically astute, but its user interface is similarly designed by geeks for geeks, rather than for simplicity and ease of use. As with WM5, getting anything done requires two hands, and the user interface is complex and, well, ugly.

So I arrived at the iPhone by default. It syncs seamlessly with my MacBook, and the user interface, while more complex than that of my old Treo, is fairly simple to use to do ordinary things. When I plunk it into its WindowSeat in my Jeep I can do the most common operations with one finger of one hand (mostly selecting an iTunes playlist since I'm using it for music while hooked into my car stereo at that point). No fumbling with a stylus, no poking at tiny indecipherable little pictures with the point of said stylus, everything is big and bold and easy to reach out and touch.

So what are the lessons here? First of all, if you're designing a user interface, complexity is the enemy. The iPhone was blasted for its simple -- some say simplistic -- user interface, much as PalmOS was blasted for its user interface. Yet both made a virtue of simplicity, and both devices are much easier to use for it. Geeks love adding complexity to products, as do product managers looking to satisfy marketing checklists. It is your job as a software development manager to push back on the continual drive for user interface complexity. I once had one of my engineers give me a design proposal that was five web pages' worth of highly technical stuff, all of it useful to geeks but simply gibberish to our user base. I sent it back to him with all five pages X'ed out and, on the launch page which led into that long series, I drew a selection box and a button beside it. The user selected the previously-downloaded configuration to upload, then the program just did what it was supposed to do. We probably missed out on some marketing checkboxes somewhere, but the end product was much more usable, because most users simply do not want, need, or care about the technical details of what exactly is supposed to happen behind the scenes. They just want the computer to do the right thing. They want it to Just Work.

The second lesson is that lack of a marketing checkbox often means nothing in real life. The iPhone lacked cut-and-paste for the first two years of its life. This was a missing marketing checkbox, but it didn't hurt the iPhone's sales any. The iPhone became the best-selling smartphone in the USA despite not having a feature that everybody claimed was "necessary". The simplicity of use that Apple got from not having that feature was far more attractive to customers than the additional feature would have been, and when Apple finally developed a way to add cut-and-paste that would not impact the simplicity of the product, it was just icing on an already tasty cake as far as most iPhone customers were concerned.

The final lesson is that people just want to get work done. They want to get work done without having to fight the technology all the time, and without having to look at the internals of the technology to figure out what's wrong and how to fix it. I'm very good at debugging our products. When there's a defect that nobody else can figure out, I'm the guy who looks at it, goes and puts a few printf statements in the right place (source code debuggers are for wimps, heh!), says "Ah, I see," and then tells the appropriate programmer what went wrong and why and how to fix it. But while I'm doing that, I don't want to have to be debugging my laptop too. Both Windows and Linux force me to fix OS stuff all the time rather than actually do my work. MacOS just works. And that should be your product too -- the customer should install it, maybe type in a few setup options, and then it Just Works.

So am I an iFool? Well, yes. And you should be too. By which I do not mean that you should go out and buy an iPhone and a Mac (especially with the recent release of the Palm Pre, which appears to have learned some of the lessons of its predecessors), but rather that you should embrace the lessons these products teach -- simplicity as a virtue, simplicity as no barrier to sales, and a product that just works, without a constant need to tweak or maintain it to keep it working. Do that, and your product has a chance to become the next iFool's purchase. Make it a typical, overly complex, difficult-to-manage product, on the other hand... well, then you're just another mess in a large marketplace full of messes, and will stand out about as well as a bowling pin at a bowling alley -- just one more indistinguishable product in a marketplace full of indistinguishable products. Which isn't what you were setting out to do, right?

-E

Wednesday, September 16, 2009

So what about China?

Yet more paranoia about China and outsourcing engineering to China.

Pretty much every laptop computer you buy today is already made in China. That MacBook Pro that the paranoid executive won't take to China? Made in China.

There are some reasons to suspect corporate espionage, but the nationality of the perpetrator is irrelevant. I'm unclear about the exact motivations for the rampant China-bashing in our media today, but the Chinese as a people have every motivation not to engage in dishonest dealing -- they need the West's money in order to continue modernizing their economy. China was a third-world nation only 20 years ago, with an industrial base similar to that of most Western nations in the 1950's but 10 times more people to support with it. They've come a long way in the past twenty years, but they still have a lot further to go, and they know it. Engaging in organized skulduggery (as opposed to the ordinary disorganized industrial espionage that happens between business competitors) is not in their best interests and is the least of our worries.

I've managed Chinese programmers working on security products. They're smart, but green. They still have a lot to learn about what it takes to get products through the whole product cycle from concept to final delivered product in the customer's hands, and they know it. Perhaps at some time in the future we'll need to worry about Chinese programmers inserting time bombs into security products, but today? Again, they have too much to lose, especially if their code is being regularly reviewed by senior American engineers, as was true in our case.

In short, you should definitely follow your normal procedures for detecting and closing security vulnerabilities, but singling out one nation -- China -- for special scrutiny is just plain silly. Yes, follow good practices: don't leave your cell phone or laptop lying around, make sure your firewall software is running, don't plug foreign media or drives into your computer or install unsigned programs, and if you've outsourced development, hold regular design and code reviews to catch security issues early. But I have to think that all this emphasis on one nation as a "threat" has more to do with politics than with technology, and distracts us from the real problems of securing our computers against real threats -- which are more likely to come from Eastern European virus writers than from anything coming out of China.

-E

Monday, September 14, 2009

A new gadget

This is an HP OfficeJet Pro 8500. It is a fax machine, copy machine, scanner, photo printer, and just plain printer. It uses a water-resistant pigment-based ink so while it's not a laser printer, it fills the same basic need. It's fast, and the inks are priced reasonably on a per-page basis. So far, so good.

It replaces a laser printer, an inkjet printer, a scanner, and a fax machine, and frees up a huge amount of space in my home office even if it does sort of overlap the sides of my filing cabinet. And because it's network-connected to my Airport Extreme (via a network cable, not via WiFi), I can print wirelessly from my MacBook Pro while I'm at the dining room table or in the bedroom.

It just showed up in my Bonjour window when I went into Printer Preferences to add it as a printer. The scanner just showed up there too. Both work, and the document feeder works for both copying and scanning, though I wish it had a duplexer. But for under $200, it's hard to be disappointed that the document feeder "only" gets one side of the pages you feed it...

-Eric

Sunday, September 13, 2009

Language wars!

This one gets all the flames when development teams meet to decide on what language to use for the next project. The crusty old Unix guy in the corner says "C; it was good enough for Dennis and Ken, it's good enough for us." The Microsofty says "C++, of course. C is for Neanderthals." The J2EE guy says, "Why does anybody want to use those antiquated languages full of stack-smash attacks and buffer overflows anyhow? Write once, run everywhere!" And finally, the Python/Ruby guy says, "Look, I can write a program all by my lonesome in two weeks that would take a whole team months to write in Java; why is there even any question?"

And all of them are wrong, and here's why: They're talking about technologies, when they should be talking about the problem that needs solving and the business realities behind that.

I'll put myself pretty firmly in the Python/Ruby camp. My first Python product hit the market in 2000 and is still being sold today. My most recent project was also written primarily in Python. I also had a released product written largely in Ruby, albeit one that was supposed to be only a proof-of-concept for the Java-based final version (but the proof of concept shipped as the real product, huh!). Still, none of these products are in Python or Ruby because of language wars, and indeed these products also included major portions written in "C" that did things like encryption, fast network file transfer, fast clustered network database transactions, and so forth. Rather, each language was chosen because it was the right tool for the job. The first product was basically a dozen smaller projects written in "C" with a Python glue layer around them to provide a good user interface and database tracking of items. The second product, the one prototyped in Ruby, was prototyped in Ruby because Ruby and Java are conceptually quite similar (both allow only single inheritance, both have a similar network object execution model, etc.), and it made sense to prototype a quickie proof-of-concept in a language similar to what the final product would be written in. The last project was written in Python because Python provided easy-to-use XML and XML-RPC libraries that made the project much quicker to get to market, but it also included major "C" components written as Unix programs.
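
To illustrate why those libraries mattered for time-to-market, here is roughly what a complete XML-RPC service looks like using only Python's standard library. This is a generic sketch with today's Python 3 module names (the 2009-era equivalent lived in the SimpleXMLRPCServer module); the get_status method and the port are made up for illustration, not the actual product's API.

    # Sketch: a whole XML-RPC service from the standard library alone.
    from xmlrpc.server import SimpleXMLRPCServer

    def get_status(node):
        # A real service would look the node up; this just echoes.
        return {"node": node, "state": "ok"}

    server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
    server.register_function(get_status)
    server.serve_forever()

The client side is a couple of lines as well -- xmlrpc.client.ServerProxy("http://localhost:8000").get_status("node1") -- which is a big part of why this route gets you to market quickly.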

So, how do you choose what language to use? Here's how NOT to choose:

  1. The latest fad is XYZ, so we will use XYZ.
  2. ABC programmers are cheap on the market, so we'll use ABC.
  3. DEF is the fastest language and is the native language of our OS, so we'll use DEF.
  4. I just read about this cool language GHI in an airline magazine...
Rather, common sense business logic needs to be used:
  1. We need to get a product out the door as quickly as possible to beat the competition to market.
  2. We need tools that support rapid development of object- and component-oriented programs that are easy to modify according to what user feedback says future versions will need.
  3. Performance must be adequate on the most common platforms our customers will be using.
  4. Whatever tools we use must allow a reasonable user experience.
The reality is that if you're trying to get a product to market as quickly as possible, you want to use the highest-level language that'll support what you're trying to do. Don't write Python code when you can describe something in XML. Don't write Java code where Python will do the job. Don't write C++ code where Java will do the job. Don't write "C" code where C++ will do the job. Don't write assembly code where "C" will do the job. In short, try to push things to the highest level language possible, and don't be ashamed to mix code. I once worked on a product that had Ruby, Java, and "C" components according to what was needed for a particular part of the product set. There were places where Ruby lacked functionality and its performance was too poor to handle the job, for example, but Java would do the job just fine. And there were places where absolute performance was needed, or where we were interfacing to low-level features of the Linux operating system, where we went straight to "C" -- either as a network component accepting connections and data in a specified format, or as a JNI library or its Ruby equivalent.
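
Mixing levels like that deserves one concrete sketch. The split is simple: Python owns policy, sequencing, and error reporting, and shells out to a small "C" Unix program for the performance-critical work. This is written in modern Python, and the fastxfer binary and its flags are hypothetical stand-ins, not the actual components described above.

    # Sketch: high-level Python glue driving a low-level C component.
    import subprocess

    def transfer(src, dest):
        # The (hypothetical) C program does the performance-critical
        # byte pushing; Python decides what to transfer and when.
        result = subprocess.run(["fastxfer", "--checksum", src, dest],
                                capture_output=True, text=True)
        if result.returncode != 0:
            raise RuntimeError("transfer failed: " + result.stderr.strip())
        return result.stdout

    print(transfer("/var/spool/outbound/data.bin", "backup1:/incoming/"))

Keeping the boundary at a process rather than a foreign-function interface also means the "C" component can be tested by itself from the shell, which matters when different teams own different layers.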

The whole point is to get product out the door in a timely manner. If you decide, "I will write everything in 'C' because it is the native language of Unix and my product will be smaller and faster", you can get a product out the door... in three years, long after your competition's product hits the market and gains traction. At that point you'll be just an also-ran. What you have to do is get product out the door as quickly as possible with the necessary functionality and features and performance (not theoretical best, but "good enough"), and then work on getting traction in the marketplace. Perfection is the enemy of good enough, and seeking perfection often doesn't even produce a product that's any closer to perfection than the product originally written to be "good enough". That program that took three years to write? That company went bankrupt, and one reason was that the constant search for perfection produced a product that was inflexible, difficult to modify to work for different problem sets, and, frankly, poorly architected. Seeking the smallest memory footprint and theoretical best performance resulted in a product that failed in the marketplace, because they missed the main reason we write programs in the first place: to meet customer needs. A program whose architecture is about memory footprint and performance at all costs is unlikely to have an architecture capable of being changed to meet changing customer needs, so not only were they late to market -- their product sucked. And the hilarious thing is, they didn't even manage to achieve the good performance their head architect claimed would come from their architecture! So much for the three-year "write everything in C" plan... late to market, poor performance, hard to modify -- and they claimed that those of us who advocate "good enough" rapid application development in the highest-level language feasible are "sloppy" and "advocating non-optimal practices"? Heh!

Next up... I talk about tools, and the odd story of how I got into the Linux industry in the first place. Note - I was the guy against using Linux in our shop. But that's a tale for another post :).

-E

Is the end of rotational storage near?

Okay, so this one's been predicted for over 20 years now. I remember reading Jerry Pournelle in Byte Magazine in the early 80's making this prediction. But I just went over to NewEgg.com and found a decent 64GB SSD for $150. While that's not going to replace the 2 terabyte RAID array in my Linux server, for the average consumer that's more than enough storage for anything they're ever going to do with a computer, and 128GB SSDs -- more storage than the average consumer will ever use -- will be just as cheap within six months.

Okay, now I hear you laughing. "64 gigabytes? Heck, my collection of unboxing videos is bigger than that!" But you're not the average person. The average person looks suspiciously like me when I'm using my Acer Aspire One netbook on my Jeep expeditions rather than my top-of-the-line MacBook for development. My Aspire One gets used pretty much the same way the typical person uses their computer -- it does Internet browsing and email, I suck photos out of my camera into the Acer and resize them and post them on the Internet, and it handles a single major application -- in my case, loading detailed topographical maps and imagery into my DeLorme PN-40 GPS. It came with a 120GB hard drive. I'm using about 35GB of that hard drive right now, because I copy all my photos off onto an external drive when I get home (because I don't trust the integrity of a hard drive that's been bouncing around in a Jeep on an expedition into the Mojave Desert). Why wouldn't I put a 64GB "hard drive" into the thing that won't ever fail (at least, not because I bounce it around in a Jeep, anyhow)? After all, if I need more space, I can always plug in an external drive and copy stuff off. Frankly, 64GB is plenty of space for everything the average person ever does with a personal computer, and 128GB -- which will be $150 by the end of this year -- is most decidedly enough space for the average consumer.

So maybe the end of rotational storage isn't here. But unless some new consumer application comes up that requires huge amounts of storage, we may be seeing the last gasp of rotational storage in consumer hardware. After all, who needs a 500GB hard drive that might crash and fail, if a 128GB SSD costs the same amount of money and is plenty big for everything the average person wants to do with a computer?

-E

Thursday, September 10, 2009

Leadership vs. arrogance

At a recent meeting, we were talking about Carly Fiorina, what she did to HP that almost destroyed the company, and the sad state of leadership in corporate America today. "They're arrogant," someone said. "They had it good for so many years, and thought it was all their own doing." I disagreed. "Steve Jobs is arrogant. The problem is leadership, not arrogance."

To put it bluntly: Steve Jobs is an arrogant jerk. He has decided opinions about what his products should look like, and if you're one of his employees going up against him, you'd better have a darn good reason why your way will make the product easier to use for the majority of customers or add functionality that the majority of customers (not just a few geeks) need. You'd better have data, reasons, pretty pictures showing how it makes the product more consistent and easier to use, and so forth. If you can't do that, he'll tell you to go away and do it his way. When Steve came back to Apple, he came back to a company in chaos, where multiple fiefdoms were feuding, where all attempts to replace the failing MacOS 9 were shot down as "too risky", where Apple's products were starting to look just like everybody else's beige boxes, except running an OS far inferior to Windows NT or even Windows 95 on any technical basis. There was no shortage of talent. What there was, was a shortage of leadership -- someone who would take that talent, listen to what they had to say, process it, then say "This is what we're going to do, bang bang bang," so that people walk out with their marching orders knowing exactly what they're going to do, why they're going to do it, and that failing to do it is not an option. It took, in effect, a real jerk being in charge to rein in all those prima donnas and get them rowing in the same direction.

Unfortunately, it's as if all of modern-day corporate America thinks being a jerk is how to be a leader. It's not. Rather, what makes a leader is having decided opinions about what makes a product line the best in the world -- opinions informed by listening to dozens of people, then applying your own judgement. It's that whole vision thing, once again. Pushing that vision onto a company takes a certain degree of arrogance because, let's face it, most of us in engineering are pretty arrogant ourselves. If we've been in the business for a number of years, we've seen companies come and go, we've seen products come and go, and we've developed our own opinions about what makes a good product and what makes a bad one. But what a lot of managers and far too many CEOs get confused about is that they think being arrogant is leadership. It's not. Arrogance is a common product of leadership, but leadership is something else entirely -- call it vision. And given that many CEOs got their jobs by sucking up to the Board of Directors rather than by having any vision of their own, they're followers by nature, not leaders. They're like the former CEO of DEC whose idea of "leadership" was to survey current owners of DEC VAX minicomputers, asking them what they wanted as future products from DEC. Their answer was, of course, bigger and faster DEC VAX minicomputers. You'll notice that DEC is no longer in business -- because following is not leading, whether you're following customers, following industry trends, or following conventional wisdom. And you can follow for only so long before you fall so far behind that you go down the tubes.

Yet these followers believe that being arrogant and expressing uninformed opinions -- opinions they refuse to change despite all data to the contrary -- turns them into leaders. That is the problem with far too many companies today: they are led by scared followers who are afraid of being found out as frauds, as not really being leaders, and who thus try to bluster their way into being seen as leaders via arrogance and inflexibility. But for a real manager, whether we're talking about a team lead or a CEO, all that manages to do is create an unmotivated, disillusioned work force that in the end is not going to create the innovations needed to move the company forward.

And that's the end of today's discussion of leadership. I'm going to return to this in the future, because leadership is one of those things that is hard to quantify, but you know it when you see it. I'll attempt to quantify it anyhow because, well, I'm just arrogant that way (heh) -- I always want to understand things, and the best way to do that is to quantify them. And hopefully that effort will be useful both to me and to future team leads. We'll see, eh?

- Eric

Tuesday, September 8, 2009

Real life vs. movie

So there is apparently an EMP bomb doomsday conference going on today, presumably to generate some leads for hardening financial institution computers. There must be a shortage of ways to spend taxpayer money.

In reality, terrorists do not need EMP devices to disable Internet-connected devices. All they need to do is rent a botnet and launch a DDoS. Furthermore, the biggest risk to the integrity of financial data and computing systems is not the explosion of a nuclear device, which is what would be required to create an electromagnetic pulse capable of wiping out data (and I might add that if you've just been vaporized in a nuclear explosion, you're unlikely to care anymore anyhow). Rather, the biggest risk is insider fraud and technological meltdowns caused by inadequate internal controls. Remember, Countrywide and its fellow sub-prime lenders and securitizers such as Goldman Sachs did more damage to the nation's financial system than any nuke has ever done.

In short, as usual, people are focusing on flash, not meat. Computer security, and financial integrity in general, is not in reality a glamour product. It's a multi-level system of firewalls, security software, policies, procedures, regulatory activities, and checks and balances intended to detect and remedy any intrusion, extrusion, or fraud before it can culminate in actual data loss or financial loss. Or as Bruce Schneier is fond of saying, security is a process, not a product. It's a bunch of hard work, too often not done correctly -- but I suppose the hypothetical effects of fictional nuclear devices are more exciting for our mindless press to blather about...

-Eric

An introduction

Welcome to my new blog. It may seem odd that, just as other forms of social media are taking off, I'd take up blogging, which is now widely regarded as "old school". But in a sense I was blogging at my old site long before blogging existed as such; I quit doing it because blogging with /bin/vi as your main blogging tool is no fun, and I never had time to write any automated blogging tools worth the name between architecting and releasing multiple products for multiple companies.

This blog is primarily for my technology ramblings. I'm not going to talk about politics, what I did over the Labor Day weekend, or anything like that. I will point out interesting new technologies, talk about development methodologies, and talk about failed and un-failed projects and the difference between good management and poor management. I released my first product in 1988, over twenty years ago, so I have a little bit of experience in the area. I am proud to say that I have never been part of a failed project (i.e., one that failed to ship), though in more than one case that was despite incompetent management that made the development process much more protracted than necessary. I've had good managers, I've had bad managers, and I've managed multiple teams of my own. Sharing some of this expertise with younger engineers and freshly minted managers might seem a bit arrogant, but I have the battle scars -- and delivered product -- to justify it.

How often will posts appear here? Good question. I'm going to aim for at least twice a week. Like most good engineers, I have no shortage of opinions about what makes a good piece of software or about the difference between a well-managed company and a poorly-managed one, and as a serial startup guy and general geek I love talking about virtualization vs. containerization and cloud computing and things of that sort, even if it's just to remind folks that only the terminology is new -- there are no fundamentally new concepts at work here. So I don't have a shortage of material. What I do have is a shortage of time -- that whole startup thing, remember? So we'll see. In the meantime... Virtualization is successful because operating systems are weak. Read. Think. Discuss. Enjoy :).

- Eric