INTRODUCTION: THE WTF? ECONOMY

This morning, I spoke out loud to a $150 device in my kitchen, told it to check if my flight was on time, and asked it to call a Lyft to take me to the airport. A car showed up a few minutes later, and my smartphone buzzed to let me know it had arrived. And in a few years, that car might very well be driving itself. Someone seeing this for the first time would have every excuse to say, “WTF?”

At times, “WTF?” is an expression of astonishment. But many people reading the news about technologies like artificial intelligence and self-driving cars and drones feel a profound sense of unease and even dismay. They worry about whether their children will have jobs, or whether the robots will have taken them all. They are also saying “WTF?” but in a very different tone of voice. It is an expletive.

Astonishment: phones that give advice about the best restaurant nearby or the fastest route to work today; artificial intelligences that write news stories or advise doctors; 3-D printers that make replacement parts—for humans; gene editing that can cure disease or bring extinct species back to life; new forms of corporate organization that marshal thousands of on-demand workers so that consumers can summon services at the push of a button in an app.

Dismay: the fear that robots and AIs will take away jobs, reward their owners richly, and leave formerly middle-class workers part of a new underclass; tens of millions of jobs here in the United States that don’t pay people enough to live on; little-understood financial products and profit-seeking algorithms that can take down the entire world economy and drive millions of people from their homes; a surveillance society that tracks our every move and stores it in corporate and government databases.

Everything is amazing, everything is horrible, and it’s all moving too fast. We are heading pell-mell toward a world shaped by technology in ways that we don’t understand and have many reasons to fear.

WTF? Google’s AlphaGo, an artificial intelligence program, beat the world’s best human Go player, an event that was widely predicted to be at least twenty years in the future—until it happened in 2016. If AlphaGo can happen twenty years early, what else might hit us even sooner than we expect? For starters: An AI running on a $35 Raspberry Pi computer beat a top US Air Force fighter pilot trainer in combat simulation. The world’s largest hedge fund has announced that it wants an AI to make three-fourths of management decisions, including hiring and firing. Oxford University researchers estimate that up to 47% of human tasks, including many components of white-collar jobs, may be done by machines within as little as twenty years.

WTF? Uber has put taxi drivers out of work by replacing them with ordinary people offering rides in their own cars, creating millions of part-time jobs worldwide. Yet Uber is intent on eventually replacing those on-demand drivers with completely automated vehicles.

WTF? Without owning a single room, Airbnb has more rooms on offer than some of the largest hotel groups in the world. Airbnb has under 3,000 employees, while Hilton has 152,000. New forms of corporate organization are outcompeting businesses based on best practices that we’ve followed for the lifetimes of most business leaders.

WTF? Social media algorithms may have affected the outcome of the 2016 US presidential election.

WTF? While new technologies are making some people very rich, incomes have stagnated for ordinary people, and for the first time, children in developed countries are on track to earn less than their parents.

What do AI, self-driving cars, on-demand services, and income inequality have in common? They are telling us, loud and clear, that we’re in for massive changes in work, business, and the economy.

But just because we can see that the future is going to be very different doesn’t mean that we know exactly how it’s going to unfold, or when. Perhaps “WTF?” really stands for “What’s the Future?” Where is technology taking us? Is it going to fill us with astonishment or dismay? And most important, what is our role in deciding that future? How do we make choices today that will result in a world we want to live in?

I’ve spent my career as a technology evangelist, book publisher, conference producer, and investor wrestling with questions like these. My company, O’Reilly Media, works to identify important innovations, and by spreading knowledge about them, to amplify their impact and speed their adoption. And we’ve tried to sound a warning when a failure to understand how technology is changing the rules for business or society is leading us down the wrong path. In the process, we’ve watched numerous technology booms and busts, and seen companies go from seemingly unstoppable to irrelevant, while early-stage technologies that no one took seriously went on to change the world.

If all you read are the headlines, you might have the mistaken idea that how highly investors value a company is the key to understanding which technologies really matter. We hear constantly that Uber is “worth” $68 billion, more than General Motors or Ford; Airbnb is “worth” $30 billion, more than Hilton Hotels and almost as much as Marriott. Those huge numbers can make the companies seem inevitable, with their success already achieved. But it is only when a business becomes profitably self-sustaining, rather than subsidized by investors, that we can be sure that it is here to stay. After all, after eight years Uber is still losing $2 billion every year in its race to get to worldwide scale. That’s an amount that dwarfs the losses of companies like Amazon (which lost $2.9 billion over its first five years before showing its first profits in 2001). Is Uber losing money like Amazon, which went on to become a hugely successful company that transformed retailing, publishing, and enterprise computing, or like a dot-com company that was destined to fail? Is the enthusiasm of its investors a sign of a fundamental restructuring of the nature of work, or a sign of an investment mania like the one leading up to the dot-com bust in 2001? How do we tell the difference?

Startups with a valuation of more than a billion dollars understandably get a lot of attention, even more so now that they have a name, unicorn, the term du jour in Silicon Valley. Fortune magazine started keeping a list of companies with that exalted status. Silicon Valley news site TechCrunch has a constantly updated “Unicorn Leaderboard.”

But even when these companies succeed, they may not be the surest guide to the future. At O’Reilly Media, we learned to tune in to very different signals by watching the innovators who first brought us the Internet and the open source software that made it possible. They did what they did out of love and curiosity, not a desire to make a fortune. We saw that radically new industries don’t start when creative entrepreneurs meet venture capitalists. They start with people who are infatuated with seemingly impossible futures.

Those who change the world are people who are chasing a very different kind of unicorn, far more important than the Silicon Valley billion-dollar valuation (though some of them will achieve that too). It is the breakthrough, once remarkable, that becomes so ubiquitous that eventually it is taken for granted.

Tom Stoppard wrote eloquently about a unicorn of this sort in his play Rosencrantz & Guildenstern Are Dead:

A man breaking his journey between one place and another at a third place of no name, character, population or significance, sees a unicorn cross his path and disappear. . . . “My God,” says a second man, “I must be dreaming, I thought I saw a unicorn.” At which point, a dimension is added that makes the experience as alarming as it will ever be. A third witness, you understand, adds no further dimension but only spreads it thinner, and a fourth thinner still, and the more witnesses there are the thinner it gets and the more reasonable it becomes until it is as thin as reality, the name we give to the common experience.

The world today is full of things that once made us say “WTF?” but are already well on their way to being the stuff of daily life.

The Linux operating system was a unicorn. It seemed downright impossible that a decentralized community of programmers could build a world-class operating system and give it away for free. Now billions of people rely on it.

The World Wide Web was a unicorn, even though it didn’t make Tim Berners-Lee a billionaire. I remember showing the World Wide Web at a technology conference in 1993, clicking on a link, and saying, “That picture just came over the Internet all the way from the University of Hawaii.” People didn’t believe it. They thought we were making it up. Now everyone expects that you can click on a link to find out anything at any time.

Google Maps was a unicorn. On the bus not long ago, I watched one old man show another how the little blue dot in Google Maps followed us along as the bus moved. The newcomer to the technology was amazed. Most of us now take it for granted that our phones know exactly where we are, and not only can give us turn-by-turn directions exactly to our destination—by car, by public transit, by bicycle, and on foot—but also can find restaurants or gas stations nearby or notify our friends where we are in real time.

The original iPhone was a unicorn even before the introduction of the App Store a year later utterly transformed the smartphone market. Once you experienced the simplicity of swiping and touching the screen rather than a tiny keyboard, there was no going back. The original pre-smartphone cell phone itself was a unicorn. As were its predecessors, the telephone and telegraph, radio and television. We forget. We forget quickly. And we forget ever more quickly as the pace of innovation increases.

AI-powered personal agents like Amazon’s Alexa, Apple’s Siri, the Google Assistant, and Microsoft Cortana are unicorns. Uber and Lyft too are unicorns, but not because of their valuation. Unicorns are the kinds of apps that make us say, “WTF?” in a good way.

Can you still remember the first time you realized that you could get the answer to virtually any question with a quick Internet search, or that your phone could route you to any destination? How cool that was, before you started taking it for granted. And how quickly did you move from taking it for granted to complaining about it when it doesn’t work quite right?

We are layering on new kinds of magic that are slowly fading into the ordinary. A whole generation is growing up that thinks nothing of summoning cars or groceries with a smartphone app, or buying something from Amazon and having it show up in a couple of hours, or talking to AI-based personal assistants on their devices and expecting to get results.

It is this kind of unicorn that I’ve spent my career in technology pursuing.

So what makes a real unicorn of this amazing kind?

1. It seems unbelievable at first.

2. It changes the way the world works.

3. It results in an ecosystem of new services, jobs, business models, and industries.

We’ve talked about the “at first unbelievable” part. What about changing the world? In Who Do You Want Your Customers to Become? Michael Schrage writes:

Successful innovators don’t ask customers and clients to do something different; they ask them to become someone different. . . . Successful innovators ask users to embrace—or at least tolerate—new values, new skills, new behaviors, new vocabulary, new ideas, new expectations, and new aspirations. They transform their customers.

For example, Schrage points out that Apple (and now also Google and Microsoft and Amazon) asks their “customers to become the sort of people who wouldn’t think twice about talking to their phone as a sentient servant.” Sure enough, there is a new generation of users who think nothing of saying things like:

“Siri, make me a six p.m. reservation for two at Camino.”

“Alexa, play ‘Ballad of a Thin Man.’”

“Okay, Google, remind me to buy currants the next time I’m at Piedmont Grocery.”

Correctly recognizing human speech alone is hard, but listening and then performing complex actions in response—for millions of simultaneous users—requires incredible computing power provided by massive data centers. Those data centers support an ever-more-sophisticated digital infrastructure.

For Google to remind me to buy currants the next time I’m at my local supermarket, it has to know where I am at all times, keep track of a particular location I’ve asked for, and bring up the reminder in that context. For Siri to make me a reservation at Camino, it needs to know that Camino is a restaurant in Oakland, and that it is open tonight, and it must allow conversations between machines, so that my phone can lay claim to a table from the restaurant’s reservation system via a service like OpenTable. And then it may call other services, either on my devices or in the cloud, to add the reservation to my calendar or to notify friends, so that yet another agent can remind all of us when it is time to leave for our dinner date.
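
To make that choreography concrete, here is a minimal sketch of just the location-triggered part, written in Python. Everything in it is an assumption of mine for illustration: the coordinates, the geofence radius, and the function names are invented, not anything Google’s actual services expose. A real assistant layers speech recognition, a places database, and cloud services on top of logic like this.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two points (haversine formula)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

# "Remind me to buy currants the next time I'm at Piedmont Grocery."
# The coordinates and radius below are made up for illustration.
reminders = [
    {"text": "Buy currants at Piedmont Grocery",
     "lat": 37.8270, "lon": -122.2216, "radius_m": 75},
]

def check_reminders(lat, lon):
    """Fire any reminder whose geofence contains the current position."""
    for r in reminders:
        if distance_m(lat, lon, r["lat"], r["lon"]) <= r["radius_m"]:
            print("Reminder:", r["text"])

# Called each time the phone reports a new location fix.
check_reminders(37.8271, -122.2215)  # within 75 meters, so the reminder fires
```

The sketch also makes the trade-off visible: for the reminder to fire, the device has to report its position continuously. The convenience described here and the surveillance concerns described earlier are two faces of the same system.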

And then there are the alerts that I didn’t ask for, like Google’s warnings:

“Leave now to get to the airport on time. 25 minute delay on the Bay Bridge.”

or

“There is traffic ahead. Faster route available.”

All of these technologies are additive, and addictive. As they interconnect and layer on each other, they become increasingly powerful, increasingly magical. Once you become accustomed to each new superpower, life without it is like having your magic wand turn into a stick again.

These services have been created by human programmers, but they will increasingly be enabled by artificial intelligence. That’s a scary word to many people. But it is the next step in the progression of the unicorn from the astonishing to the ordinary. While the term artificial intelligence or AI suggests a truly autonomous intelligence, we are far, far from that eventuality. AI is still just a tool, still subject to human direction.

The nature of that direction, and how we must exercise it, is a key subject of this book. AI and other unicorn technologies have the potential to make a better world, in the same way that the technologies of the first industrial revolution created wealth for society that was unimaginable two centuries ago. AI bears the same relationship to previous programming techniques that the internal combustion engine does to the steam engine. It is far more versatile and powerful, and over time we will find ever more uses for it.

Will we use it to make a better world? Or will we use it to amplify the worst features of today’s world? So far, the “WTF?” of dismay seems to have the upper hand.

“Everything is amazing,” and yet we are deeply afraid. Sixty-three percent of Americans believe jobs are less secure now than they were twenty to thirty years ago. By a two-to-one ratio, people think good jobs are difficult to find where they live. And many of them blame technology. There is a constant drumbeat of news that tells us that the future is one in which increasingly intelligent machines will take over more and more human work. The pain is already being felt. For the first time, life expectancy is actually declining in America, and what was once its rich industrial heartland has too often become a landscape of despair.

For everyone’s sake, we must choose a different path.

Loss of jobs and economic disruption are not inevitable. There is a profound failure of imagination and will in much of today’s economy. For every Elon Musk—who wants to reinvent the world’s energy infrastructure, build revolutionary new forms of transport, and settle humans on Mars—there are far too many companies that are simply using technology to cut costs and boost their stock price, enriching those able to invest in financial markets at the expense of an ever-growing group that may never be able to do so. Policy makers seem helpless, assuming that the course of technology is inevitable, rather than something we must shape.

And that gets me to the third characteristic of true unicorns: They create value. Not just financial value, but real-world value for society.

Consider past marvels. Could we have moved goods as easily or as quickly without modern earthmoving equipment letting us bore tunnels through mountains or under cities? The superpower of humans + machines made it possible to build cities housing tens of millions of people, for a tiny fraction of our people to work producing the food that all the rest of us eat, and to create a host of other wonders that have made the modern world the most prosperous time in human history.

Technology is going to take our jobs! Yes. It always has, and the pain and dislocation are real. But it is going to make new kinds of jobs possible. History tells us technology kills professions, but does not kill jobs. We will find things to work on that we couldn’t do before but now can accomplish with the help of today’s amazing technologies.

Take, for example, laser eye surgery. I used to be legally blind without huge Coke-bottle glasses. Twelve years ago, my eyes were fixed by a surgeon who could never have done the job without the aid of a robot; with its help, she was able to do something that had previously been impossible.

After more than forty years of wearing glasses so strong that I was legally blind without them, I could see clearly on my own. I kept saying to myself for months afterward, “I’m seeing with my own eyes!”

But in order to remove my need for prosthetic vision, the surgeon ended up relying on prosthetics of her own, performing the surgery on my cornea with the aid of a computer-controlled laser. During the actual surgery, apart from lifting the flap she had cut by hand in the surface of my cornea and smoothing it back into place after the laser was done, her job was to clamp open my eyes, hold my head, utter reassuring words, and tell me, sometimes with urgency, to keep looking at the red light. I asked what would happen if my eyes drifted and I didn’t stay focused on the light. “Oh, the laser would stop,” she said. “It only fires when your eyes are tracking the dot.”

Surgery this sophisticated could never be done by an unaugmented human being. The human touch of my superb doctor was paired with the superhuman accuracy of complex machines, a twenty-first-century hybrid freeing me from assistive devices first invented eight centuries earlier in Italy. The revolution in sensors, computers, and control technologies is going to make many of the daily activities of the twentieth century seem quaint as, one by one, they are reinvented in the twenty-first. This is the true opportunity of technology: It extends human capability.

In the debate about technology and the shape of the future, it’s easy to forget just how much technology already suffuses our lives, how much it has already changed us. As we get past that moment of amazement, and it fades into the new normal, we must put technology to work solving new problems. We must commit to building something new, strange to our past selves, but better, if we have the will to make it so.

We must keep asking: What will new technology let us do that was previously impossible? Will it help us build the kind of society we want to live in?

This is the secret to reinventing the economy. As Google chief economist Hal Varian said to me, “My grandfather wouldn’t recognize what I do as work.”

What are the new jobs of the twenty-first century? Augmented reality—the overlay of computer-generated data and images on what we see—may give us a clue. It definitely meets the WTF? test. The first time a venture capitalist friend of mine saw one unreleased augmented reality platform in the lab, he said, “If LSD were a stock, I’d be shorting it.” That’s a unicorn.

But what is most exciting to me about this technology is not the LSD factor, but how augmented reality can change the way we work. You can imagine how augmented reality could enable workers to be “upskilled.” I’m particularly fond of imagining how the model used by Partners in Health could be turbocharged by augmented reality and telepresence. The organization provides free healthcare to people in poverty using a model in which community health workers recruited from the population being served are trained and supported in providing primary care. Doctors can be brought in as needed, but the bulk of care is provided by ordinary people. Imagine a community health worker who is able to tap on Google Glass or some next-generation wearable, and say, “Doctor, you need to see this!” (Trust me. Glass will be back, when Google learns to focus on community health workers, not fashion models.)

It’s easy to imagine how rethinking our entire healthcare system along these lines could reduce costs, improve both health outcomes and patient satisfaction, and create jobs. Imagine house calls coming back into fashion. Add in health monitoring by wearable sensors, health advice from an AI made as available as Siri, the Google Assistant, or Microsoft Cortana, plus an Uber-style on-demand service, and you can start to see the outlines of one small segment of the next economy being brought to us by technology.

This is only one example of how we might reinvent familiar human activities, creating new marvels that, if we are lucky, will eventually fade into the texture of everyday life, just like wonders of a previous age such as airplanes and skyscrapers, elevators, automobiles, refrigerators, and washing machines.

Despite their possible wonders, many of the futures we face are fraught with unknown risks. I am a classicist by training, and the fall of Rome is always before me. The first volume of Gibbon’s Decline and Fall of the Roman Empire was published in 1776, the same year as the American Revolution. Despite Silicon Valley’s dreams of a future singularity, an unknowable fusion of minds and machines that will mark the end of history as we know it, what history teaches us is that economies and nations, not just companies, can fail. Great civilizations do collapse. Technology can go backward. After the fall of Rome, the ability to make monumental structures out of concrete was lost for nearly a thousand years. It could happen to us.

We are increasingly facing what planners call “wicked problems”—problems that are “difficult or impossible to solve because of incomplete, contradictory, and changing requirements that are often difficult to recognize.”

Even long-accepted technologies turn out to have unforeseen downsides. The automobile was a unicorn. It afforded ordinary people enormous freedom of movement, led to an infrastructure for transporting goods that spread prosperity, and enabled a consumer economy where goods could be produced far away from where they are consumed. Yet the roads we built to enable the automobile carved up and hollowed out cities, led to more sedentary lifestyles, and contributed mightily to the overpowering threat of climate change.

Ditto cheap air travel, container shipping, the universal electric grid. All of these were enormous engines of prosperity that brought with them unintended consequences that only came to light over many decades of painful experience, by which time any solution seems impossible to attempt because the disruption required to reverse course would be so massive.

We face a similar set of paradoxes today. The magical technologies of today—and choices we’ve already made, decades ago, about what we value as a society—are leading us down a path with complex contingencies, unseen dangers, and decisions that we don’t even know we are making.

AI and robotics in particular are at the heart of a set of wicked problems that are setting off alarm bells among business and labor leaders, policy makers and academics. What happens to all those people who drive for a living when the cars start driving themselves? AIs are flying planes, advising doctors on the best treatments, writing sports and financial news, and telling us all, in real time, the fastest way to get to work. They are also telling human workers when to show up and when to go home, based on real-time measurement of demand. Computers used to work for humans; increasingly it’s now humans working for computers. The algorithm is the new shift boss.

What is the future of business when technology-enabled networks and marketplaces let people choose when and how much they want to work? What is the future of education when on-demand learning outperforms traditional universities in keeping skills up to date? What is the future of media and public discourse when algorithms decide what we will watch and read, making their choice based on what will make the most profit for their owners?

What is the future of the economy when more and more work can be done by intelligent machines instead of people, or only done by people in partnership with those machines? What happens to workers and their families? And what happens to the companies that depend on consumer purchasing power to buy their products?

There are dire consequences to treating human labor simply as a cost to be eliminated. According to the McKinsey Global Institute, 540 to 580 million people—65 to 70% of households in twenty-five advanced economies—had incomes that had fallen or were flat between 2005 and 2014. Between 1993 and 2005, fewer than 10 million people—less than 2%—had the same experience.

Over the past few decades, companies have made a deliberate choice to reward their management and “superstars” incredibly well, while treating ordinary workers as a cost to be minimized or cut. Top US CEOs now earn 373x the income of the average worker, up from 42x in 1980. As a result of the choices we’ve made as a society about how to share the benefits of economic growth and technological productivity gains, the gulf between the top and the bottom has widened enormously, and the middle has largely disappeared. Recently published research by Stanford economist Raj Chetty shows that for children born in 1940, the chance that they’d earn more than their parents was 92%; for children born in 1990, that chance has fallen to 50%.

Businesses have delayed the effects of declining wages on the consumer economy by encouraging people to borrow—in the United States, household debt is over $12 trillion (80% of gross domestic product, or GDP, in mid-2016) and student debt alone is $1.2 trillion (with more than seven million borrowers in default). We’ve also used government transfers to reduce the gap between human needs and what our economy actually delivers. But of course, higher government transfers must be paid for through higher taxes or through higher government debt, either of which political gridlock has made unpalatable. This gridlock is, of course, a recipe for disaster.

Meanwhile, in hopes that “the market” will deliver jobs, central banks have pushed ever more money into the system, betting that somehow this will unlock business investment. But instead, corporate profits have reached highs not seen since the 1920s, corporate investment has shrunk, and more than $30 trillion of cash is sitting on the sidelines. The magic of the market is not working.

We are at a very dangerous moment in history. The concentration of wealth and power in the hands of a global elite is eroding the power and sovereignty of nation-states while globe-spanning technology platforms are enabling algorithmic control of firms, institutions, and societies, shaping what billions of people see and understand and how the economic pie is divided. At the same time, income inequality and the pace of technology change are leading to a populist backlash featuring opposition to science, distrust of our governing institutions, and fear of the future, making it ever more difficult to solve the problems we have created.

That has all the hallmarks of a classic wicked problem.

Wicked problems are closely related to an idea from evolutionary biology, that there is a “fitness landscape” for any organism. Much like a physical landscape, a fitness landscape has peaks and valleys. The challenge is that you can only get from one peak—a so-called local maximum—to another by going back down. In evolutionary biology, a local maximum may mean that you become one of the long-lived stable species, unchanged for millions of years, or it may mean that you become extinct because you’re unable to respond to changed conditions.
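
The local-maximum trap is easy to demonstrate. Here is a purely illustrative Python sketch, with a landscape and function names I made up for the purpose: a greedy climber that can only move uphill stops on whichever peak it starts near, even when a higher peak sits across the valley.

```python
import math

def fitness(x):
    """A two-peaked landscape: a local maximum near x = -2, a higher one near x = 3."""
    return math.exp(-((x + 2) ** 2)) + 2 * math.exp(-((x - 3) ** 2))

def hill_climb(x, step=0.1):
    """Greedy ascent: take whichever neighboring step is uphill; stop when none is."""
    while True:
        best = max((x - step, x, x + step), key=fitness)
        if best == x:          # no uphill move left: we are standing on a peak
            return x
        x = best

print(round(hill_climb(-4.0), 1))  # ~ -2.0: stranded on the lower, local peak
print(round(hill_climb(1.0), 1))   # ~  3.0: this start reaches the higher peak
```

Getting from the lower peak to the higher one requires going downhill first, and that is exactly the move that organisms, companies, and societies find hardest to make.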

And in our economy, conditions are changing rapidly. Over the past few decades, the digital revolution has transformed media, entertainment, advertising, and retail, upending centuries-old companies and business models. Now it is restructuring every business, every job, and every sector of society. No company, no job—and ultimately, no government and no economy—is immune to disruption. Computers will manage our money, supervise our children, and have our lives in their “hands” as they drive our automated cars.

The biggest changes are still ahead, and every industry and every organization will have to transform itself in the next few years, in multiple ways, or fade away. We need to ask ourselves whether the fundamental social safety nets of the developed world will survive the transition, and more important, what we will replace them with.

Andy McAfee, coauthor of The Second Machine Age, put his finger on the consequence of failing to do so while talking with me over breakfast about the risks of AI taking over from humans: “The people will rise up before the machines do.”

This book provides a view of one small piece of this complex puzzle, the role of technology innovation in the economy, and in particular the role of WTF? technologies such as AI and on-demand services. I lay out the difficult choices we face as technology opens new doors of possibility while closing doors that once seemed the sure path to prosperity. But more important, I try to provide tools for thinking about the future, drawn from decades on the frontiers of the technology industry, observing and predicting its changes.

The book is US-centric and technology-centric in its narrative; it is not an overview of all of the forces shaping the economy of the future, many of which are centered outside the United States or are playing out differently in other parts of the world. In No Ordinary Disruption, McKinsey’s Richard Dobbs, James Manyika, and Jonathan Woetzel point out quite correctly that technology is only one of four major disruptive forces shaping the world to come. Demographics (in particular, changes in longevity and the birth rate that have radically shifted the mix of ages in the global population), globalization, and urbanization may play at least as large a role as technology. And even that list fails to take into account catastrophic war, plague, or environmental disruption. These omissions are not based on a conviction that Silicon Valley’s part of the total technology innovation economy, or the United States, is more important than the rest; it is simply that the book is based on my personal and business experience, which is rooted in this field and in this one country.

The book is divided into four parts. In the first part, I’ll share some of the techniques that my company has used to make sense of and predict innovation waves such as the commercialization of the Internet, the rise of open source software, the key drivers behind the renaissance of the web after the dot-com bust and the shift to cloud computing and big data, the Maker movement, and much more. I hope to persuade you that understanding the future requires discarding the way you think about the present, giving up ideas that seem natural and even inevitable.

In the second and third parts, I’ll apply those same techniques to provide a framework for thinking about how technologies such as on-demand services, networks and platforms, and artificial intelligence are changing the nature of business, education, government, financial markets, and the economy as a whole. I’ll talk about the rise of great world-spanning digital platforms ruled by algorithm, and the way that they are reshaping our society. I’ll examine what we can learn about these platforms and the algorithms that rule them from Uber and Lyft, Airbnb, Amazon, Apple, Google, and Facebook. And I’ll talk about the one master algorithm we so take for granted that it has become invisible to us. I’ll try to demystify algorithms and AI, and show how they are not just present in the latest technology platforms but already shape business and our economy far more broadly than most of us understand. And I’ll make the case that many of the algorithmic systems that we have put in place to guide our companies and our economy have been designed to disregard the humans and reward the machines.

In the fourth part of the book, I’ll examine the choices we have to make as a society. Whether we experience the WTF? of astonishment or the WTF? of dismay is not foreordained. It is up to us.

It’s easy to blame technology for the problems that occur in periods of great economic transition. But both the problems and the solutions are the result of human choices.

During the industrial revolution, the fruits of automation were first used solely to enrich the owners of the machines. Workers were often treated as cogs in the machine, to be used up and thrown away. But Victorian England figured out how to do without child labor and with shorter working hours, and its society became more prosperous.

We saw the same thing here in the United States during the twentieth century. We look back now on the good middle-class jobs of the postwar era as something of an anomaly. But they didn’t just happen by chance. It took generations of struggle on the part of workers and activists, and growing wisdom on the part of capitalists, policy makers, political leaders, and the voting public. In the end we made choices as a society to share the fruits of productivity more widely.

We also made choices to invest in the future. That golden age of postwar productivity was the result of massive investments in roads and bridges, universal power, water, sanitation, and communications. After World War II, we committed enormous resources to rebuild the lands destroyed by war, but we also invested in basic research. We invested in new industries: aerospace, chemicals, computers, and telecommunications. We invested in education, so that children could be prepared for the world they were about to inherit.

The future comes in fits and starts, and it is often when times are darkest that the brightest futures are being born. Out of the ashes of World War II we forged a prosperous world. By choice and hard work, not by destiny. The Great War of a generation earlier had only amplified the cycle of dismay. What was the difference? After World War I, we punished the losers. After World War II, we invested in them and raised them up again. After World War I, the United States beggared its returning veterans. After World War II, we sent them to college. Wartime technologies such as digital computing were put into the public domain so that they could be transformed into the stuff of the future. The rich taxed themselves to finance the public good.

In the 1980s, though, the idea that “greed is good” took hold in the United States and we turned away from that shared prosperity. We accepted the idea that what was good for financial markets was good for everyone and structured our economy to drive stock prices ever higher, convincing ourselves that “the market” of stocks, bonds, and derivatives was the same as Adam Smith’s market of real goods and services exchanged by ordinary people. We hollowed out the real economy, putting people out of work and capping their wages in service to corporate profits that went to a smaller and smaller slice of society.

We made the wrong choice forty years ago. We don’t need to stick with it. The rise of a billion people out of poverty in developing economies around the world at the same time that the incomes of ordinary people in most developed economies have been going backward should tell us that we took a wrong turn somewhere.

The WTF? technologies of the twenty-first century have the potential to turbocharge the productivity of all our industries. But making what we do now more productive is just the beginning. We must share the fruits of that productivity, and use them wisely. If we let machines put us out of work, it will be because of a failure of imagination and a lack of will to make a better future.
