Filed under Devops, Information Technology, infrastructure, IT, Operations, Tech.

Last week at Devops Days in Boston, I had the opportunity to attend back-to-back presentations that complemented each other and helped bring into focus an idea that had been hanging around just beyond the horizon of my awareness. Namely, when it comes to infrastructure software, we are working through a time of major transitions that affect both our tools and the structure and processes of our work.

Expecting conflict and adapting

The first presentation that started turning my vague sense of what’s been happening into a revelation was Nikolas Katsimpras presenting on conflict within organizations. He described various types of conflict: between individuals, between individuals and groups, between departments, and so on. Although the talk was nontechnical, it was easy to apply many of the concepts Katsimpras described to infrastructure software and devops, where our work tends to be driven by other departments, whose needs define our priorities. This arrangement leaves many organizations stuck maintaining the status quo with inefficient compensating patterns rather than changing with time and technology.

Katsimpras emphasized the importance of responsive adaptability and described Nelson Mandela as brilliantly adaptive in that he was willing and able to adjust himself as circumstances changed. During another portion of the presentation, Katsimpras defined “double-loop learning,” a term for the practice of questioning initial assumptions when seeking to change outcomes, rather than merely refining strategies and goals. This concept strikes me as particularly salient given the rise of automated configuration management and test-driven infrastructure.

Today’s tools are not tomorrow’s

After the constructive conflict presentation, Kelsey Hightower went on to discuss CoreOS. I found myself completely rapt, pondering a near future in which CoreOS is the solution to all my high-traffic, high-availability web app problems. But then it happened: I felt conflicted about whether CoreOS would solve problems I’m already solving with Chef, and I felt a pang of worry. On the one hand, we are still developing our Chef-based infrastructure: expanding test coverage, updating and standardizing dev tools, and so on. On the other hand, the purpose of our business is to serve customers, not to commit to a particular configuration management tool for all time. So maybe I ought not get defensive about keeping today’s infrastructure tools around for tomorrow. Here it was, double-loop learning in my own everyday life!

Three years ago, the tools needed for automatic dependency resolution and local testing in Chef were only beginning to take shape in discussions and as ideas. Today, many of these tools, often initiated by third-party developers in the user community, are part of what is considered the standard Chef toolkit. By incorporating behavior- and test-driven development practices into infrastructure software, companies are improving the customer experience by shielding customers from configuration and deployment errors.

These initiatives in the developer communities have produced tools that have dramatically improved efficiency and productivity in ways that would have sounded like magic a decade ago. But the outcomes are consistent with the double-loop learning principle of returning to initial assumptions before changing practices. For instance, just five years ago, most reasonable people would have agreed that it was difficult, if not impossible, to test infrastructure changes on a local workstation. Now this type of testing is increasingly the norm, thanks once again to developers questioning the original assumptions.
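To make that concrete, here is a minimal sketch, written in Go rather than any particular configuration-management DSL, of what testing an infrastructure change on a local workstation can look like: render a configuration template and assert on the output in an ordinary unit test, with no server involved. The template, field names, and expected values are hypothetical, invented purely for illustration.

```go
// config_test.go: a minimal sketch of testing an infrastructure change locally.
// The vhost template, its fields, and the expected values below are all
// hypothetical; the point is only that a rendered config can be checked by a
// plain unit test on a workstation before it ever reaches a server.
package config

import (
	"bytes"
	"strings"
	"testing"
	"text/template"
)

// A hypothetical vhost template of the kind a config-management run would render.
const vhostTemplate = `server_name {{.ServerName}};
listen {{.Port}};
root {{.DocRoot}};`

type vhostParams struct {
	ServerName string
	Port       int
	DocRoot    string
}

// renderVhost fills in the template with the given parameters.
func renderVhost(p vhostParams) (string, error) {
	tmpl, err := template.New("vhost").Parse(vhostTemplate)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, p); err != nil {
		return "", err
	}
	return buf.String(), nil
}

// TestVhostRendersExpectedDirectives checks the rendered output for the
// directives we expect, failing fast before anything touches a real host.
func TestVhostRendersExpectedDirectives(t *testing.T) {
	out, err := renderVhost(vhostParams{
		ServerName: "example.test",
		Port:       8080,
		DocRoot:    "/srv/www/example",
	})
	if err != nil {
		t.Fatalf("render failed: %v", err)
	}
	for _, want := range []string{
		"server_name example.test;",
		"listen 8080;",
		"root /srv/www/example;",
	} {
		if !strings.Contains(out, want) {
			t.Errorf("rendered config missing %q:\n%s", want, out)
		}
	}
}
```

Running go test in that directory gives the same kind of fast, local feedback loop that test-driven infrastructure tooling aims for, without provisioning anything.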

Similarly, today we use all kinds of virtualization to accommodate and scale complex web software, so our current patterns are generally predicated on treating VMs as individual hosts. That reliance on VMs is just one example of a concept that is no longer relevant in CoreOS. During his talk, Hightower mentioned his personal interest in Golang, which made me wonder how many of today’s tools written in C, C++, and even Java will be reimplemented in Go over the next decade.

One title I tried while searching to make sure I wasn’t stepping on someone else’s was “The Future is Adaptive.” When I saw Ian Clatworthy’s essay from seven years ago, I knew I had found mine. In his essay, Clatworthy, a Bazaar developer, discussed the importance of adaptability in the context of version control and described the tensions that arise from diverging priorities. Today we are even further down that road, and we’re going faster. Clatworthy’s ideas are still applicable and interesting today, even though the specific technologies change. It’s easy to forget that we will pass through many moments of now on our way to what lies ahead. Between now and the future, we will implement the changes that distinguish one from the other. While these changes will create situations that are ripe for conflict, we must learn to leverage contention as a constructive force. We must do this because the nature of software engineering and software product development has been shifting under our feet, and it will continue to do so.

Tags: chef, devops, Go, infrastructure, IT, java, operations, virtualization
