Chapter 1. What Is the Software Paradox?

par·a·dox

ˈparəˌdäks/

NOUN: paradox; PLURAL NOUN: paradoxes.

A statement or proposition that, despite sound (or apparently sound) reasoning from acceptable premises, leads to a conclusion that seems senseless, logically unacceptable, or self-contradictory.

On Wednesday, August 12, 1981, IBM introduced the Model 5150, which the world would come to know as the Personal Computer (PC). The base price for a version without disk drives was $1,565, or just over $4,000 in today’s dollars after adjusting for inflation. While it was launched with much fanfare and would become the foundation for a revolution in hardware, the PC was not the first of its kind to market. Steve Jobs, Steve Wozniak, and Ronald Wayne had introduced the Apple I, in fact, five years earlier in July of 1976. The Apple II followed in 1977, the same year that Commodore’s PET 2001 was announced at the Consumer Electronics Show.

Though its focus had historically been on technology for large businesses, the PC market, which transcended enterprise and consumer markets, was, for IBM, both opportunity and threat. The argument can be made, in fact, that the 5150 was rushed to production, a hasty response to a market whose potential IBM had substantially underestimated. Certainly it represented a departure from the Armonk giant’s historical design process, in which IBM hardware was built using components designed and built by IBM. With demand for personal computing exploding, the company resorted to outsourcing. Unlike its traditional mainframe hardware, the PC was built from readily available off-the-shelf components sourced from external suppliers. Instead of incorporating the IBM 801 processor, for example, the PC relied on the less powerful Intel 8088 chip. By optimizing for components that could be efficiently sourced, IBM greatly accelerated the product’s time to market: the 5150 was designed in about a year.

With startups like Apple growing quickly and large existing vendors like IBM validating the market, the age of the PC was at hand. Time Magazine acknowledged as much in 1982, when its Person of the Year was not a person at all, but “The Computer.”

In retrospect, the most interesting aspect of the launch of the PC was how unimportant the software appeared to be. Following one of journalism’s cardinal laws, most of the attention followed the money, which led inevitably to hardware. Commercial software businesses existed, to be sure—Oracle, for example, was four years old when the PC was launched—but software at the time was viewed more as an enabler for hardware than as a standalone market. When the PC debuted, hardware-centric IBM was worth almost $34 billion; neither member of the software-based duo of Microsoft and Oracle would even be publicly traded for another five years.

As a result, the software powering the PC was something of an afterthought. Viewing the operating system software that would serve as the foundation for its new platform as even less strategically important than its hardware components, IBM was content to contract the development of the software to a third party. After failing to come to terms with Gary Kildall of Digital Research, they turned to a small company called Microsoft. Microsoft, in turn, purchased the basis for their PC operating system from yet another third party, Tim Paterson of Seattle Computer Products. In the end, Microsoft’s MS-DOS operating system, rebranded as PC-DOS on the IBM PC, became the default operating system for a new wave of hardware, shipped in volumes without precedent.

For the small company that Microsoft was at the time, a distribution deal with a behemoth like IBM would have been, by itself, akin to a winning lottery ticket. But like his contemporary from another industry, Bill Gates had a much bigger prize in mind.

When George Lucas was negotiating with 20th Century Fox prior to the filming of the original Star Wars film, he had the option to negotiate for more upfront compensation. His 1973 film American Graffiti had been an unexpected success, and highly profitable for the studio. Instead of using this leverage to maximize his upfront capital return, however, he instead obtained from the studio control of the final cut, 40% of the box office gross, and most important, merchandising rights associated with the franchise. In a deal that will never be repeated in Hollywood, George Lucas left a few hundred thousand dollars on the table in his contract in exchange for hundreds of millions of dollars of future income.

Just as 20th Century Fox dramatically underestimated the value of those rights, so too did IBM fail to comprehend the importance of the software operating system. Gates, however, had uniquely perceived the revenue opportunity in software as a standalone entity when he and Paul Allen had been building BASIC compilers for various operating systems in the late 1970s. In what would later look like a heist, he was able to extract from IBM the contractual right to license and sell MS-DOS outside the 5150 product. While this concession looks like a foolish mistake in retrospect, it is less surprising in the context of the time: a market that attached little commercial value to software as an asset. IBM fundamentally failed to comprehend the commercial opportunity that software represented, because it shared the wider market’s opinion that the money was in hardware, not software.

Five years after the release of the IBM 5150, Microsoft went public. On March 31, 1986, the company was worth $679 million. On that same date, IBM was worth $93 billion.

Fewer than 10 years later, Microsoft—the one-time David to IBM’s Goliath—was worth more than IBM. The bulk of this valuation, of course, was fueled by software—specifically Office and Windows. At its peak on December 27, 1999, in fact, Microsoft was worth $613 billion, or a little more than three times what its one-time partner IBM was worth at that time.

Software, it seems, had some commercial value after all.

The past few decades have, in general, been good ones for software. Once an afterthought, software became not just a means to an end but an end in and of itself. Trillions of dollars of wealth were created by software vendors and the markets they created and owned. The ascension of software was perhaps best described in a now-famous Wall Street Journal op-ed by Marc Andreessen on August 20, 2011, “Why Software Is Eating the World.” In the piece, the man whose fortune was made in part by the $2.1 billion IPO of the software company Netscape described the present state as the following:

More and more major businesses and industries are being run on software and delivered as online services—from movies to agriculture to national defense. Many of the winners are Silicon Valley-style entrepreneurial technology companies that are invading and overturning established industry structures. Over the next 10 years, I expect many more industries to be disrupted by software, with new world-beating Silicon Valley companies doing the disruption in more cases than not.

Marc Andreessen

By the time Andreessen wrote those words, few would have disagreed with the core thesis; those who did were most likely employed by industries in the process of being actively disrupted by software. Software was, and still is, the new reality for most industries. Much as Amazon is now more appropriately described as a technology company than as a retailer, so too are an increasing number of businesses in an ever-widening range of industries.

A curious thing was happening while software was hungrily consuming the world, however. Even as it became more and more vital and disruptive, software’s commercial value was declining. Software that would once have generated billions in revenue per quarter is increasingly made available for free. Companies that once battled each other and struggled to differentiate similar proprietary products now collaborate on a common platform, competing on implementations and service. Developers who solve interesting problems with software see more benefit in making it available for free than in attempting to charge for it.

This is the Software Paradox: the most powerful disruptor we have ever seen and the creator of multibillion-dollar net new markets is being commercially devalued, daily. Just as the technology industry was firmly convinced in 1981 that the money was in hardware, not software, the industry today is largely built on the assumption that the real revenue is in software. The evidence, however, suggests that software is less valuable—in the commercial sense—than many are aware, and becoming less so by the day. And that trend is, in all likelihood, not reversible. The question facing an entire industry, then, is what next?

This is the question the following pages intend to answer.
