Problem: Who Fetched What, When?

Running a couple of quick scripts over the logfile data reveals that there are 12,600,064 instances of an article fetch coming from 2,345,571 different hosts. Suppose we are interested in who was fetching what, and when? An auditor, a police officer, or a marketing professional might be interested.

So, here's the problem: given a hostname, report what articles were fetched from that host, and when. The result is a list; if the list is empty, no articles were fetched.

We've already seen that a language's built-in hash or equivalent data structure gives the programmer a quick and easy way to store and look up key/value pairs. So, you might ask, why not use it?

That's an excellent question, and we should give the idea a try. There are reasons to worry that it might not work very well, so in the back of our minds, we should be thinking of a Plan B. As you may recall if you've ever studied hash tables, in order to go fast, they need to have a small load factor; in other words, they need to be mostly empty. However, a hash table that holds 2.35 million entries and is still mostly empty is going to require the use of a whole lot of memory.

To simplify things, I wrote a program that ran over all the logfiles and pulled out all the article fetches into a simple file; each line has the hostname, the time of the transaction, and the article name. Here are the first few lines:

    1166406026 2003/04/08/Riffs
    1166406027 2006/05/03/MARS-T-Shirt
    1166406040 2003/03/27/Scanner

(The second field, the 10-digit number, is the standard Unix/Linux representation of time as the number of seconds since the beginning of 1970.)
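
If you want to see one of those timestamps as a calendar date, a quick check is easy; this little program is not part of the chapter's code (it uses Java, the language of the later examples), but the arithmetic is standard:

    import java.time.Instant;

    public class EpochCheck {
        public static void main(String[] args) {
            // 1166406026 is the first timestamp in the sample above;
            // Instant interprets it as seconds since 1970-01-01T00:00:00Z.
            System.out.println(Instant.ofEpochSecond(1166406026L));  // 2006-12-18T01:40:26Z
        }
    }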

Then I wrote a simple program to read this file and load a great big hash. Example 4-5 shows the program.

Example 4-5. Loading a big hash

1 class BigHash
2
3   def initialize(file)
4     @hash = {}
5     lines = 0
6     File.open(file).each_line do |line|
7       s = line.split
8       article = s[2].intern
9       if @hash[s[0]]
10        @hash[s[0]] << [ s[1], article ]
11      else
12        @hash[s[0]] = [ [ s[1], article ] ]
13      end
14      lines += 1
15      STDERR.puts "Line: #{lines}" if (lines % 100000) == 0
16    end
17  end
18
19  def find(key)
20    @hash[key]
21  end
22
23 end

The program should be fairly self-explanatory, but line 15 is worth a note. When you're running a big program that's going to take a lot of time, it's very disturbing when it works away silently, maybe for hours. What if something's wrong? What if it's going incredibly slow and will never finish? So, line 15 prints out a progress report after every 100,000 lines of input, which is reassuring.

Running this program was interesting. It took about 55 minutes of CPU time to load up the hash, and the program grew to occupy 1.56 GB of memory. A little calculation suggests that it costs around 680 bytes to store the information for each host, or slicing the data another way, about 126 bytes per fetch. This is a little scary, but probably reasonable for a hash table.

Retrieval performance was excellent. I ran 2,000 queries, half of which were randomly selected hosts from the log and thus succeeded, while the other half were those same hostnames reversed, none of which succeeded. The 2,000 queries completed in an average of about .02 seconds, so Ruby's hash implementation can look up records in a hash containing 12 million or so records thousands of times per second.

Those 55 minutes to load up the data are troubling, but there are some tricks to address that. You could, for example, load it up once, then serialize the hash out and read it back in. And I didn't try particularly hard to optimize the program.
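
One way to realize that trick is to dump the loaded structure to disk once and reread it on later runs, which skips the parsing work entirely. The chapter's loader is Ruby, where Marshal.dump and Marshal.load serve this purpose; the sketch below shows the same idea in Java, the language of the later examples. The value type and the filename are illustrative, not from the chapter:

    import java.io.*;
    import java.util.HashMap;

    public class HashCache {
        // Dump a loaded map to disk once...
        static void save(HashMap<String, Object> hash, String path) throws IOException {
            try (ObjectOutputStream out = new ObjectOutputStream(
                    new BufferedOutputStream(new FileOutputStream(path)))) {
                out.writeObject(hash);
            }
        }

        // ...and on later runs read it straight back instead of reparsing the logs.
        @SuppressWarnings("unchecked")
        static HashMap<String, Object> load(String path) throws IOException, ClassNotFoundException {
            try (ObjectInputStream in = new ObjectInputStream(
                    new BufferedInputStream(new FileInputStream(path)))) {
                return (HashMap<String, Object>) in.readObject();
            }
        }
    }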

The program was easy and quick to write, and it runs fast once it's initialized, so its performance is good both in terms of waiting-for-the-program time and waiting-for-the-programmer time. Still, I'm unsatisfied. I have the feeling that there ought to be a way to get this kind of performance while burning less memory, less startup time, or both. It involves writing our own search code, though.

Binary Search

Nobody gets a Computer Science degree without studying a wide variety of search algorithms: trees, heaps, hashes, lists, and more. My favorite among all these is binary search. Let's try it on the who-fetched-what-when problem and then look at what makes it beautiful.

My first attempt at putting binary search to use was quite disappointing; while the data took 10 minutes less to load, it required almost 100 MB more memory than with the hash. Clearly, there are some surprising things about the Ruby array implementation. The search also ran several times slower (but still in the range of thousands per second), but this is not surprising at all because the algorithm is running in Ruby code rather than with the underlying hardcoded hash implementation.

The problem is that in Ruby everything is an object, and arrays are fairly abstracted things with lots of built-in magic. So, let's reimplement the program in Java, in which integers are just integers, and arrays come with very few extras.[2]

Nothing could be simpler, conceptually, than binary search. You divide your search space in two and see whether you should be looking in the top or bottom half; then you repeat the exercise until done. Instructively, there are a great many ways to code this algorithm incorrectly, and several widely published versions contain bugs. The implementation mentioned in "On the Goodness of Binary Search," and shown in Java in Example 4-6, is based on one I learned from Gaston Gonnet, the lead developer of the Maple language for symbolic mathematics and currently Professor of Computer Science at ETH in Zürich.

Example 4-6. Binary search

1 package binary;
2
3  public class Finder {
4    public static int find(String[] keys, String target) {
5      int high = keys.length;
6      int low = -1;
7      while (high - low > 1) {
8        int probe = (low + high) >>> 1;
9        if (keys[probe].compareTo(target) > 0)
10         high = probe;
11       else
12         low = probe;
13     }
14     if (low == -1 || keys[low].compareTo(target) != 0)
15       return -1;
16     else
17       return low;
18   }
19 }

Key aspects of this program are as follows:

  • In lines 5–6, note that the high and low bounds are set one off the ends of the array, so neither is initially a valid index. This eliminates all sorts of corner cases.

  • The loop that starts in line 7 runs until the high and low bounds are adjacent; there is no testing to see whether the target has been found. Think for a minute whether you agree with this choice; we'll return to the question later.

  • The loop has two invariants. low is either –1 or points to something less than or equal to the target value. high is either one off the top of the array or points to something strictly greater than the target value.

  • Line 8 is particularly interesting. In an earlier version it read:

    probe = (high + low) / 2;

    but in June 2006, Java guru Josh Bloch showed how, in certain obscure circumstances, that code could lead to integer overflow; a small demonstration appears after this list. It is sobering indeed that, many decades into the lifetime of computer science, we are still finding bugs in our core algorithms. (The issue is also discussed by Alberto Savoia in Chapter 7.)

    At this point, Rubyists will point out that modern dynamic languages such as Ruby and Python take care of integer overflow for you, and thus don't have this bug.

  • Because of the loop invariant, once I'm done with the loop, I just need to check low (lines 14–17). If it's not –1, either it points to something that matches the target, or the target isn't there.
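
Here is the small demonstration promised above. It is mine, not the chapter's, and the index values are invented purely to push the sum past the 32-bit limit; an array that large is unrealistic, but only the arithmetic matters:

    public class MidpointOverflow {
        public static void main(String[] args) {
            int low = 1_500_000_000;
            int high = 1_800_000_000;
            System.out.println((low + high) / 2);    // -497483648: the sum wrapped around to a negative int
            System.out.println((low + high) >>> 1);  // 1650000000: the unsigned shift recovers the true midpoint
        }
    }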

The Java version took only six and a half minutes to load, and it ran successfully, using less than 1 GB of heap. Also, while it's harder to measure CPU time in Java than in Ruby, there was no perceptible delay in running the same 2,000 searches.
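
To make the calling convention concrete, here is a tiny driver of my own, not the chapter's, assuming the Finder class from Example 4-6 is compiled alongside it. The hostnames are made up; the one real requirement is that the keys array be sorted before find is called:

    package binary;

    import java.util.Arrays;

    public class FinderDemo {
        public static void main(String[] args) {
            String[] keys = { "joe.example.com", "alice.example.com", "crawler.example.net" };
            Arrays.sort(keys);  // binary search only works on sorted data
            System.out.println(Finder.find(keys, "joe.example.com"));     // 2: its index after sorting
            System.out.println(Finder.find(keys, "nosuch.example.org"));  // -1: not present
        }
    }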

Binary Search Trade-offs

Binary search has some very large advantages. First of all, its performance is O(log₂ N). People often don't really grasp how powerful this is. On a 32-bit computer, the biggest log₂ you'll ever encounter is 32 (similarly, 64 on a 64-bit computer), and any algorithm that completes in an upper bound of a few dozen steps will be "good enough" for many real-world scenarios.

Second, the binary-search code is short and simple. Code that is short and simple is beautiful, for a bunch of reasons. Maybe the most important is that it's easier to understand, and understanding code is harder than writing it. There are fewer places for bugs to hide. Also, compact code plays better with instruction sets, I-caches, and JIT compilers, and thus tends to run faster.

Third, once you've got that sorted array, you don't need any more index structures; binary search is very space-efficient.

The big downside to binary search is that the data has to be kept in order in memory. There are some data sets for which this is impossible, but fewer than you might think. If you think you have too much data to fit in memory, check the price of RAM these days and make sure. Any search strategy that requires going to disk is going to be immensely more complex, and in many scenarios slower.

Suppose you need to update the data set; you might think that would rule out binary search because you have to update a huge, contiguous array in memory. But that turns out to be easier than you might think. In fact, your program's memory is scattered randomly all over the computer's physical RAM, with the operating system's paging software making it look sequential; you can do the same kind of trick with your own data.
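
As one illustration of that idea (my sketch, not the chapter's code), the "array" can be stored as fixed-size chunks and addressed through a tiny translation layer; an update then means rewriting only the affected chunk rather than one enormous contiguous block, and the search loop from Example 4-6 carries over unchanged:

    package binary;

    public class ChunkedKeys {
        private static final int CHUNK = 65536;    // arbitrary chunk size for this sketch
        private final String[][] chunks;           // the sorted keys, split into pieces
        private final int length;                  // total number of keys

        public ChunkedKeys(String[][] chunks, int length) {
            this.chunks = chunks;
            this.length = length;
        }

        private String get(int i) {                // translate a logical index into (chunk, offset)
            return chunks[i / CHUNK][i % CHUNK];
        }

        public int find(String target) {           // same loop as Example 4-6, just via get()
            int high = length;
            int low = -1;
            while (high - low > 1) {
                int probe = (low + high) >>> 1;
                if (get(probe).compareTo(target) > 0)
                    high = probe;
                else
                    low = probe;
            }
            if (low == -1 || get(low).compareTo(target) != 0)
                return -1;
            return low;
        }
    }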

Some might argue that since a hash table is O(1), that has to be better than binary search's O(log₂ N). In practice, the difference may not be that significant; set up an experiment sometime and do some measurements. Also, consider that hash tables, with the necessary collision-resolution code, are considerably more complex to implement.

I don't want to be dogmatic, but in recent years, I've started to take the following approach to search problems:

  1. Try to solve it using your language's built-in hash tables.

  2. Then try to solve it with binary search.

  3. Only then should you reluctantly start to consider other more complex options.

Escaping the Loop

Some look at my binary-search algorithm and ask why the loop always runs to the end without checking whether it's found the target. In fact, this is the correct behavior; the math is beyond the scope of this chapter, but with a little work, you should be able to get an intuitive feeling for it—and this is the kind of intuition I've observed in some of the great programmers I've worked with.

Let's think about the progress of the loop. Suppose you have n elements in the array, where n is some really large number. The chance of finding the target the first time through is 1/n, a really small number. The next iteration (after you divide the search set in half) is 1/(n/2)—still small—and so on. In fact, the chance of hitting the target becomes significant only when you're down to 10 or 20 elements, which is to say maybe the last four times through the loop. And in the case where the search fails (which is common in many applications), those extra tests are pure overhead.

You could do the math to figure out when the probability of hitting the target approaches 50 percent, but qualitatively, ask yourself: does it make sense to add extra complexity to each step of an O(log₂ N) algorithm when the chances are it will save only a small number of steps at the end?
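
If you would rather measure than do the math, here is a rough experiment of my own, not from the chapter. The sorted "array" is just the integers 0 through n-1, so comparing probe against target stands in for comparing keys; the program prints the average number of loop iterations each variant needs for random successful searches:

    package binary;

    import java.util.Random;

    public class EarlyExitExperiment {
        // The two-step search of Example 4-6: no equality test inside the loop.
        static int stepsPlain(int n, int target) {
            int high = n, low = -1, steps = 0;
            while (high - low > 1) {
                steps++;
                int probe = (low + high) >>> 1;
                if (probe > target) high = probe; else low = probe;
            }
            return steps;
        }

        // A variant that checks for a hit on every probe and escapes the loop early.
        static int stepsEarlyExit(int n, int target) {
            int high = n, low = -1, steps = 0;
            while (high - low > 1) {
                steps++;
                int probe = (low + high) >>> 1;
                if (probe == target) return steps;
                if (probe > target) high = probe; else low = probe;
            }
            return steps;
        }

        public static void main(String[] args) {
            int n = 12_000_000;                     // roughly the size of the chapter's data set
            int trials = 100_000;
            Random random = new Random(42);
            long plain = 0, early = 0;
            for (int i = 0; i < trials; i++) {
                int target = random.nextInt(n);     // every search succeeds
                plain += stepsPlain(n, target);
                early += stepsEarlyExit(n, target);
            }
            System.out.printf("average iterations: %.2f plain, %.2f with early exit%n",
                    (double) plain / trials, (double) early / trials);
        }
    }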

The take-away lesson is that binary search, done properly, is a two-step process. First, write an efficient loop that positions your low and high bounds properly, then add a simple check to see whether you hit or missed.

[2] This discussion of binary search borrows heavily from my 2003 piece, "On the Goodness of Binary Search," available online at
