Foreword

In the late 1990s, I had the honor of working at Forte Software, a company that produced a development and operations platform for what at the time passed for “large scale distributed systems.” The brainchild of Paul Butterworth—a man I believe gets less credit than he deserves in the annals of distributed systems history—Forte allowed developers to build applications in a scalable software services model, and then easily deploy the resulting user interfaces and services.

Auto-scaling was built into the platform, though you had to architect smartly to use it well. Redundancy and failure recovery were also selectable by checking a box and entering the number of instances you wanted of any given service. To this day, I have yet to see another developer environment that provides the experience Forte offered for distributed systems—modern Platform-as-a-Service offerings included.

This was my first experience with an environment that bridged development and operations functions, and—though I didn’t recognize it at the time—it created a different dynamic in the teams that built and deployed applications. I remember frequent conversations between the operations, quality assurance, and development teams; three or four people sitting at a computer discussing the best way to organize, test, scale, and replicate services to maximize availability and performance while minimizing cost—both financially and psychologically.

These days, multidisciplinary approaches to software are a best practice—necessary, even, in enabling the extremes of scale we’ve reached in the digital era. We’ve even created a term to discuss these concepts: devops. The work functions of analysis and design, development, quality assurance, and operations are no longer a linear flow from one to the other, but a set of activities that must all be executed in the face of constant change.

Multidisciplinary software practices, to me, are at the heart of what makes modern information technology possible: The retraining of technologists to understand and empathize with not only the end user, but also the other technologists who must work on or with the system. The breakdown and rebuilding of organizations to reflect the complex systems nature of the software they are creating. The creation of entire industry ecosystems around services, monitoring, tools, and practices that reflect complexity and constant motion.

In 2011, when I began working at Enstratius (later purchased by Dell), my frequent trips to my old stomping grounds in Minneapolis brought me in touch with a gentleman I had only recently met through Twitter. He was extremely curious about cloud computing, and knew of me through my former blog, The Wisdom of Clouds, on CNET. Over coffee, Jeff Sussna and I sat down and discussed the effects that the adoption of cloud computing, and related changes to application development, were having on our respective areas of interest.

For me, focused primarily on application development and operations, the effects of continuous deployment, agility, and “lean” principles signaled the importance of multidisciplinary practices. But Jeff introduced me to another facet of what was changing with respect to software practices.

Jeff’s background in quality assurance led him to ask questions about how you can possibly ensure quality in an ever-changing and complex application environment. He was intrigued by complex adaptive systems science, and how it was being applied to application design and operations. Above all, he introduced me to the importance of empathy in ensuring that the resulting software thrived—rather than dying a lonely death.

The term design thinking meant little to me before I started talking to Jeff, but over the intervening years he helped me understand it. I had read Norbert Wiener’s Cybernetics, but Jeff went much further than I had in applying those concepts to actual software practices. I’d thought about Mark Burgess’s promise theory since I first heard about it in 2008 or so, but Jeff placed promises in the context of design with a focus on extracting quality user experiences from complex software systems.

I’m overjoyed that Jeff wrote Designing Delivery. Not only because I’m excited to see his ideas and observations collected in a single (and beautifully written) narrative, but also because I think there are no other books that come at the overall problem of digital services development from the same angle. This most certainly is not just another “devops,” “lean,” or “agile” book. Here, the concepts of cybernetics, service design, and promise theory enable the reader to look beyond technical methodologies to understand the motivations that drive end user behavior, and thus design better, higher quality service experiences.

Those experiences will define the winners and losers of the digital services era. As Jeff will show you, the ability of IT to constantly define and improve those experiences will define its own success in the coming decades.
