Real World Haskell

Cover of Real World Haskell by John Goerzen... Published by O'Reilly Media, Inc.
How to Think About Loops

Unlike traditional languages, Haskell has neither a for loop nor a while loop. If we’ve got a lot of data to process, what do we use instead? There are several possible answers to this question.

Explicit Recursion

A straightforward way to make the jump from a language that has loops to one that doesn’t is to run through a few examples, looking at the differences. Here’s a C function that takes a string of decimal digits and turns them into an integer:

int as_int(char *str)
{
    int acc; /* accumulate the partial result */

    for (acc = 0; isdigit(*str); str++) {
        acc = acc * 10 + (*str - '0');
    }

    return acc;
}
Given that Haskell doesn’t have any looping constructs, how should we think about representing a fairly straightforward piece of code such as this?

We don’t have to start off by writing a type signature, but it helps to remind us of what we’re working with:

-- file: ch04/IntParse.hs
import Data.Char (digitToInt) -- we'll need digitToInt shortly

asInt :: String -> Int

The C code computes the result incrementally as it traverses the string; the Haskell code can do the same. However, in Haskell, we can express the equivalent of a loop as a function. We’ll call ours loop just to keep things nice and explicit:

-- file: ch04/IntParse.hs
loop :: Int -> String -> Int

asInt xs = loop 0 xs

That first parameter to loop is the accumulator variable we’ll be using. Passing zero into it is equivalent to initializing the acc variable in C at the beginning of the loop.

Rather than leap into blazing code, let’s think about the data we have to work with. Our familiar String is just a synonym for [Char], a list of characters. The easiest way for us to get the traversal right is to think about the structure of a list: it’s either empty or a single element followed by the rest of the list.

We can express this structural thinking directly by pattern matching on the list type’s constructors. It’s often handy to think about the easy cases first; here, that means we will consider the empty list case:

-- file: ch04/IntParse.hs
loop acc [] = acc

An empty list doesn’t just mean the input string is empty; it’s also the case that we’ll encounter when we traverse all the way to the end of a nonempty list. So we don’t want to error out if we see an empty list. Instead, we should do something sensible. Here, the sensible thing is to terminate the loop and return our accumulated value.

The other case we have to consider arises when the input list is not empty. We need to do something with the current element of the list, and something with the rest of the list:

-- file: ch04/IntParse.hs
loop acc (x:xs) = let acc' = acc * 10 + digitToInt x
                  in loop acc' xs

We compute a new value for the accumulator and give it the name acc'. We then call the loop function again, passing it the updated value acc' and the rest of the input list. This is equivalent to the loop starting another round in C.

Single quotes in variable names

Remember, a single quote is a legal character to use in a Haskell variable name, and it is pronounced prime. There’s a common idiom in Haskell programs involving a variable—say, foo—and another variable—say, foo'. We can usually assume that foo' is somehow related to foo. It’s often a new value for foo, as just shown in our code.

Sometimes we’ll see this idiom extended, such as foo''. Since keeping track of the number of single quotes tacked onto the end of a name rapidly becomes tedious, use of more than two in a row is thankfully rare. Indeed, even one single quote can be easy to miss, which can lead to confusion on the part of readers. It might be better to think of the use of single quotes as a coding convention that you should be able to recognize, and less as one that you should actually follow.

Each time the loop function calls itself, it has a new value for the accumulator, and it consumes one element of the input list. Eventually, it’s going to hit the end of the list, at which time the [] pattern will match and the recursive calls will cease.
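
Putting the fragments together, the whole of IntParse.hs is small enough to see at a glance:

```haskell
-- file: ch04/IntParse.hs
import Data.Char (digitToInt)

asInt :: String -> Int
asInt xs = loop 0 xs

loop :: Int -> String -> Int
loop acc []     = acc
loop acc (x:xs) = let acc' = acc * 10 + digitToInt x
                  in loop acc' xs
```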

How well does this function work? For positive integers, it’s perfectly cromulent:

ghci> asInt "33"
33

But because we were focusing on how to traverse lists, not error handling, our poor function misbehaves if we try to feed it nonsense:

ghci> asInt ""
0
ghci> asInt "potato"
*** Exception: Char.digitToInt: not a digit 'p'

We’ll defer fixing our function’s shortcomings to Exercises.

Because the last thing that loop does is simply call itself, it’s an example of a tail recursive function. There’s another common idiom in this code, too. Thinking about the structure of the list, and handling the empty and nonempty cases separately, is a kind of approach called structural recursion.

We call the nonrecursive case (when the list is empty) the base case (or sometimes the terminating case). We’ll see people refer to the case where the function calls itself as the recursive case (surprise!), or they might give a nod to mathematical induction and call it the inductive case.

As a useful technique, structural recursion is not confined to lists; we can use it on other algebraic data types, too. We’ll have more to say about it later.

What’s the big deal about tail recursion?

In an imperative language, a loop executes in constant space. Lacking loops, we use tail recursive functions in Haskell instead. Normally, a recursive function allocates some space each time it applies itself, so it knows where to return to.

Clearly, a recursive function would be at a huge disadvantage relative to a loop if it allocated memory for every recursive application—this would require linear space instead of constant space. However, functional language implementations detect uses of tail recursion and transform tail recursive calls to run in constant space; this is called tail call optimization (TCO).

Few imperative language implementations perform TCO; this is why using any kind of ambitiously functional style in an imperative language often leads to memory leaks and poor performance.
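
To make the distinction concrete, here is a sketch (the names are ours, purely illustrative) contrasting a non-tail-recursive sum with a tail-recursive one:

```haskell
-- sumPlain is not tail recursive: after the recursive call returns,
-- an addition is still pending, so each call must keep track of
-- work left to do.
sumPlain :: [Int] -> Int
sumPlain []     = 0
sumPlain (x:xs) = x + sumPlain xs

-- sumTail is tail recursive: the recursive call to go is the very
-- last thing that happens, so nothing is pending when it is made.
sumTail :: [Int] -> Int
sumTail = go 0
  where go acc []     = acc
        go acc (x:xs) = go (acc + x) xs
```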

Transforming Every Piece of Input

Consider another C function, square, which squares every element in an array:

void square(double *out, const double *in, size_t length)
{
    for (size_t i = 0; i < length; i++) {
        out[i] = in[i] * in[i];
    }
}
This contains a straightforward and common kind of loop, one that does exactly the same thing to every element of its input array. How might we write this loop in Haskell?

-- file: ch04/Map.hs
square :: [Double] -> [Double]

square (x:xs) = x*x : square xs
square []     = []

Our square function consists of two pattern-matching equations. The first deconstructs the beginning of a nonempty list to get its head and tail. It squares the first element, then puts that on the front of a new list, which is constructed by calling square on the rest of the list. The second equation ensures that square halts when it reaches the end of the input list.

The effect of square is to construct a new list that’s the same length as its input list, with every element in the input list substituted with its square in the output list.
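
A quick check bears this out (we repeat the definition so the snippet stands alone):

```haskell
square :: [Double] -> [Double]
square (x:xs) = x*x : square xs
square []     = []

-- square [1,2,3] evaluates to [1.0,4.0,9.0]: same length as the
-- input, with each element replaced by its square.
```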

Here’s another such C loop, one that ensures that every letter in a string is converted to uppercase:

#include <ctype.h>
#include <string.h>

char *uppercase(const char *in)
{
    char *out = strdup(in);

    if (out != NULL) {
        for (size_t i = 0; out[i] != '\0'; i++) {
            out[i] = toupper(out[i]);
        }
    }

    return out;
}
Let’s look at a Haskell equivalent:

-- file: ch04/Map.hs
import Data.Char (toUpper)

upperCase :: String -> String

upperCase (x:xs) = toUpper x : upperCase xs
upperCase []     = []

Here, we’re importing the toUpper function from the standard Data.Char module, which contains lots of useful functions for working with Char data.

Our upperCase function follows a similar pattern to our earlier square function. It terminates with an empty list when the input list is empty; when the input isn’t empty, it calls toUpper on the first element, then constructs a new list cell from that and the result of calling itself on the rest of the input list.

These examples follow a common pattern for writing recursive functions over lists in Haskell. The base case handles the situation where our input list is empty. The recursive case deals with a nonempty list; it does something with the head of the list and calls itself recursively on the tail.

Mapping over a List

The square and upperCase functions that we just defined produce new lists that are the same lengths as their input lists, and they do only one piece of work per element. This is such a common pattern that Haskell’s Prelude defines a function, map, in order to make it easier. map takes a function and applies it to every element of a list, returning a new list constructed from the results of these applications.

Here are our square and upperCase functions rewritten to use map:

-- file: ch04/Map.hs
square2 xs = map squareOne xs
    where squareOne x = x * x

upperCase2 xs = map toUpper xs

This is our first close look at a function that takes another function as its argument. We can learn a lot about what map does by simply inspecting its type:

ghci> :type map
map :: (a -> b) -> [a] -> [b]

The signature tells us that map takes two arguments. The first is a function that takes a value of one type, a, and returns a value of another type, b.

Because map takes a function as an argument, we refer to it as a higher-order function. (In spite of the name, there’s nothing mysterious about higher-order functions; it’s just a term for functions that take other functions as arguments, or return functions.)
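
Notice that a and b may stand for different types; nothing requires the output elements to have the same type as the input elements. As a small illustrative sketch (the wordLengths name is ours):

```haskell
-- Here length plays the role of the (a -> b) argument, with a as
-- String and b as Int.
wordLengths :: [String] -> [Int]
wordLengths xs = map length xs

-- wordLengths ["alpha", "beta"] evaluates to [5,4]
```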

map abstracts out the pattern common to our square and upperCase functions so that we can reuse it with less boilerplate. We can look at what those functions have in common and figure out how to implement it ourselves:

-- file: ch04/Map.hs
myMap :: (a -> b) -> [a] -> [b]

myMap f (x:xs) = f x : myMap f xs
myMap _ _      = []

What are those wild cards doing there?

If you’re new to functional programming, the reasons for matching patterns in certain ways won’t always be obvious. For example, in the definition of myMap in the preceding code, the first equation binds the function we’re mapping to the variable f, but the second uses wild cards for both parameters. What’s going on?

We use a wild card in place of f to indicate that we aren’t calling the function f on the righthand side of the equation. What about the list parameter? The list type has two constructors. We’ve already matched on the nonempty constructor in the first equation that defines myMap. By elimination, the constructor in the second equation is necessarily the empty list constructor, so there’s no need to perform a match to see what its value really is.

As a matter of style, it is fine to use wild cards for well-known simple types such as lists and Maybe. For more complicated or less familiar types, it can be safer and more readable to name constructors explicitly.
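
For comparison, here is the same function with the empty-list constructor named explicitly (we call it myMap2 only to keep the two versions distinct); its behavior is identical, and which form to prefer is purely a matter of style:

```haskell
myMap2 :: (a -> b) -> [a] -> [b]
myMap2 f (x:xs) = f x : myMap2 f xs
myMap2 _ []     = []
```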

We try out our myMap function to give ourselves some assurance that it behaves similarly to the standard map:

ghci> :module +Data.Char
ghci> map toLower "SHOUTING"
"shouting"
ghci> myMap toUpper "whispering"
"WHISPERING"
ghci> map negate [1,2,3]
[-1,-2,-3]

This pattern of spotting a repeated idiom, and then abstracting it so we can reuse (and write less!) code, is a common aspect of Haskell programming. While abstraction isn’t unique to Haskell, higher-order functions make it remarkably easy.

Selecting Pieces of Input

Another common operation on a sequence of data is to comb through it for elements that satisfy some criterion. Here’s a function that walks a list of numbers and returns those that are odd. Our code has a recursive case that’s a bit more complex than our earlier functions—it puts a number in the list it returns only if the number is odd. Using a guard expresses this nicely:

-- file: ch04/Filter.hs
oddList :: [Int] -> [Int]

oddList (x:xs) | odd x     = x : oddList xs
               | otherwise = oddList xs
oddList _                  = []

Let’s see that in action:

ghci> oddList [1,1,2,3,5,8,13,21,34]
[1,1,3,5,13,21]

Once again, this idiom is so common that the Prelude defines a function, filter, which we already introduced. It removes the need for boilerplate code to recurse over the list:

ghci> :type filter
filter :: (a -> Bool) -> [a] -> [a]
ghci> filter odd [3,1,4,1,5,9,2,6,5]
[3,1,1,5,9,5]

The filter function takes a predicate and applies it to every element in its input list, returning a list of only those for which the predicate evaluates to True. We’ll revisit filter later in this chapter, in Folding from the Right.

Computing One Answer over a Collection

It is also common to reduce a collection to a single value. A simple example of this is summing the values of a list:

-- file: ch04/Sum.hs
mySum xs = helper 0 xs
    where helper acc (x:xs) = helper (acc + x) xs
          helper acc _      = acc

Our helper function is tail-recursive and uses an accumulator parameter, acc, to hold the current partial sum of the list. As we already saw with asInt, this is a natural way to represent a loop in a pure functional language.

For something a little more complicated, let’s take a look at the Adler-32 checksum. It is a popular checksum algorithm; it concatenates two 16-bit checksums into a single 32-bit checksum. The first checksum is the sum of all input bytes, plus one. The second is the sum of all intermediate values of the first checksum. In each case, the sums are computed modulo 65521. Here’s a straightforward, unoptimized Java implementation (it’s safe to skip it if you don’t read Java):

public class Adler32 {
    private static final int base = 65521;

    public static int compute(byte[] data, int offset, int length)
    {
        int a = 1, b = 0;

        for (int i = offset; i < offset + length; i++) {
            a = (a + (data[i] & 0xff)) % base;
            b = (a + b) % base;
        }

        return (b << 16) | a;
    }
}
Although Adler-32 is a simple checksum, this code isn’t particularly easy to read on account of the bit-twiddling involved. Can we do any better with a Haskell implementation?

-- file: ch04/Adler32.hs
import Data.Char (ord)
import Data.Bits (shiftL, (.&.), (.|.))

base = 65521

adler32 xs = helper 1 0 xs
    where helper a b (x:xs) = let a' = (a + (ord x .&. 0xff)) `mod` base
                                  b' = (a' + b) `mod` base
                              in helper a' b' xs
          helper a b _     = (b `shiftL` 16) .|. a

This code isn’t exactly easier to follow than the Java code, but let’s look at what’s going on. First of all, we’ve introduced some new functions. The shiftL function implements a logical shift left; (.&.) provides a bitwise and; and (.|.) provides a bitwise or.

Once again, our helper function is tail-recursive. We’ve turned the two variables that we updated on every loop iteration in Java into accumulator parameters. When our recursion terminates on the end of the input list, we compute our checksum and return it.
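
As a sanity check, we can run the function on a well-known test vector: the Adler-32 checksum of the ASCII string "Wikipedia" is 300286872 (0x11E60398). We repeat the definition here so the snippet is self-contained:

```haskell
import Data.Char (ord)
import Data.Bits (shiftL, (.&.), (.|.))

base :: Int
base = 65521

adler32 :: String -> Int
adler32 xs = helper 1 0 xs
    where helper a b (x:xs) = let a' = (a + (ord x .&. 0xff)) `mod` base
                                  b' = (a' + b) `mod` base
                              in helper a' b' xs
          helper a b _      = (b `shiftL` 16) .|. a

-- adler32 "Wikipedia" evaluates to 300286872
-- adler32 ""          evaluates to 1 (the checksum of empty input)
```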

If we take a step back, we can restructure our Haskell adler32 to more closely resemble our earlier mySum function. Instead of two accumulator parameters, we can use a pair as the accumulator:

-- file: ch04/Adler32.hs
adler32_try2 xs = helper (1,0) xs
    where helper (a,b) (x:xs) =
              let a' = (a + (ord x .&. 0xff)) `mod` base
                  b' = (a' + b) `mod` base
              in helper (a',b') xs
          helper (a,b) _     = (b `shiftL` 16) .|. a

Why would we want to make this seemingly meaningless structural change? Because as we’ve already seen with map and filter, we can extract the common behavior shared by mySum and adler32_try2 into a higher-order function. We can describe this behavior as do something to every element of a list, updating an accumulator as we go, and returning the accumulator when we’re done.

This kind of function is called a fold, because it folds up a list. There are two kinds of folds over lists: foldl for folding from the left (the start), and foldr for folding from the right (the end).

The Left Fold

Here is the definition of foldl:

-- file: ch04/Fold.hs
foldl :: (a -> b -> a) -> a -> [b] -> a

foldl step zero (x:xs) = foldl step (step zero x) xs
foldl _    zero []     = zero

The foldl function takes a step function, an initial value for its accumulator, and a list. The step takes an accumulator and an element from the list and returns a new accumulator value. All foldl does is call the stepper on the current accumulator and an element of the list, and then passes the new accumulator value to itself recursively to consume the rest of the list.

We refer to foldl as a left fold because it consumes the list from left (the head) to right.

Here’s a rewrite of mySum using foldl:

-- file: ch04/Sum.hs
foldlSum xs = foldl step 0 xs
    where step acc x = acc + x

That local function step just adds two numbers, so let’s simply use the addition operator instead, and eliminate the unnecessary where clause:

-- file: ch04/Sum.hs
niceSum :: [Integer] -> Integer
niceSum xs = foldl (+) 0 xs

Notice how much simpler this code is than our original mySum. We’re no longer using explicit recursion, because foldl takes care of that for us. We’ve simplified our problem down to two things: what the initial value of the accumulator should be (the second parameter to foldl) and how to update the accumulator (the (+) function). As an added bonus, our code is now shorter, too, which makes it easier to understand.

Let’s take a deeper look at what foldl is doing here, by manually writing out each step in its evaluation when we call niceSum [1,2,3]:

-- file: ch04/Fold.hs
foldl (+) 0 (1:2:3:[])
          == foldl (+) (0 + 1)             (2:3:[])
          == foldl (+) ((0 + 1) + 2)       (3:[])
          == foldl (+) (((0 + 1) + 2) + 3) []
          ==           (((0 + 1) + 2) + 3)

We can rewrite adler32_try2 using foldl to let us focus on the details that are important:

-- file: ch04/Adler32.hs
adler32_foldl xs = let (a, b) = foldl step (1, 0) xs
                   in (b `shiftL` 16) .|. a
    where step (a, b) x = let a' = a + (ord x .&. 0xff)
                          in (a' `mod` base, (a' + b) `mod` base)

Here, our accumulator is a pair, so the result of foldl will be, too. We pull the final accumulator apart when foldl returns, and then bit-twiddle it into a proper checksum.

Why Use Folds, Maps, and Filters?

A quick glance reveals that adler32_foldl isn’t really any shorter than adler32_try2. Why should we use a fold in this case? The advantage here lies in the fact that folds are extremely common in Haskell, and they have regular, predictable behavior.

This means that a reader with a little experience will have an easier time understanding a use of a fold than code that uses explicit recursion. A fold isn’t going to produce any surprises, but the behavior of a function that recurses explicitly isn’t immediately obvious. Explicit recursion requires us to read closely to understand exactly what’s going on.

This line of reasoning applies to other higher-order library functions, including those we’ve already seen, map and filter. Because they’re library functions with well-defined behavior, we need to learn what they do only once, and we’ll have an advantage when we need to understand any code that uses them. These improvements in readability also carry over to writing code. Once we start to think with higher-order functions in mind, we’ll produce concise code more quickly.

Folding from the Right

The counterpart to foldl is foldr, which folds from the right of a list:

-- file: ch04/Fold.hs
foldr :: (a -> b -> b) -> b -> [a] -> b

foldr step zero (x:xs) = step x (foldr step zero xs)
foldr _    zero []     = zero

Let’s follow the same manual evaluation process with foldr (+) 0 [1,2,3] as we did with niceSum earlier in the section The Left Fold:

-- file: ch04/Fold.hs
foldr (+) 0 (1:2:3:[])
          == 1 +           foldr (+) 0 (2:3:[])
          == 1 + (2 +      foldr (+) 0 (3:[]))
          == 1 + (2 + (3 + foldr (+) 0 []))
          == 1 + (2 + (3 + 0))

The difference between foldl and foldr should be clear from looking at where the parentheses and the empty list elements show up. With foldl, the empty list element is on the left, and all the parentheses group to the left. With foldr, the zero value is on the right, and the parentheses group to the right.

There is a lovely intuitive explanation of how foldr works: it replaces the empty list with the zero value, and replaces every constructor in the list with an application of the step function:

-- file: ch04/Fold.hs
1 : (2 : (3 : []))
1 + (2 + (3 + 0 ))
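
One pleasant consequence of this view: if the step function is (:) itself and the zero value is [], every constructor is replaced by itself, so foldr rebuilds the list unchanged (the rebuild name is ours):

```haskell
-- foldr (:) [] [1,2,3]
--   == 1 : (2 : (3 : []))
--   == [1,2,3]
rebuild :: [a] -> [a]
rebuild xs = foldr (:) [] xs
```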

At first glance, foldr might seem less useful than foldl: what use is a function that folds from the right? But consider the Prelude’s filter function, which we last encountered earlier in this chapter in Selecting Pieces of Input. If we write filter using explicit recursion, it will look something like this:

-- file: ch04/Fold.hs
filter :: (a -> Bool) -> [a] -> [a]
filter p []   = []
filter p (x:xs)
    | p x       = x : filter p xs
    | otherwise = filter p xs

Perhaps surprisingly, though, we can write filter as a fold, using foldr:

-- file: ch04/Fold.hs
myFilter p xs = foldr step [] xs
    where step x ys | p x       = x : ys
                    | otherwise = ys

This is the sort of definition that could cause us a headache, so let’s examine it in a little depth. Like foldl, foldr takes a function and a base case (what to do when the input list is empty) as arguments. From reading the type of filter, we know that our myFilter function must return a list of the same type as it consumes, so the base case should be a list of this type, and the step helper function must return a list.

Since we know that foldr calls step on one element of the input list at a time, with the accumulator as its second argument, what step does must be quite simple. If the predicate returns True, it pushes that element onto the accumulated list; otherwise, it leaves the list untouched.
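To make this concrete, here is a small application of myFilter expanded by hand, in the same style as our earlier traces (the trace is our own addition):

```haskell
myFilter :: (a -> Bool) -> [a] -> [a]
myFilter p xs = foldr step [] xs
    where step x ys | p x       = x : ys
                    | otherwise = ys

-- Expanding myFilter even [1,2,3]:
--   foldr step [] (1:2:3:[])
--   == step 1 (step 2 (step 3 []))
--   == step 1 (step 2 [])     -- even 3 is False, so 3 is dropped
--   == step 1 (2 : [])        -- even 2 is True, so 2 is kept
--   == 2 : []                 -- even 1 is False, so 1 is dropped
```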

The class of functions that we can express using foldr is called primitive recursive. A surprisingly large number of list manipulation functions are primitive recursive. For example, here’s map written in terms of foldr:

-- file: ch04/Fold.hs
myMap :: (a -> b) -> [a] -> [b]

myMap f xs = foldr step [] xs
    where step x ys = f x : ys
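To get a feel for how large this class is, here are two more everyday functions written as right folds (our own illustrations, using the hypothetical names mySum and myLength):

```haskell
-- sum: replace each (:) with (+), and [] with 0
mySum :: Num a => [a] -> a
mySum xs = foldr (+) 0 xs

-- length: every element contributes 1, whatever its value
myLength :: [a] -> Int
myLength xs = foldr step 0 xs
    where step _ n = 1 + n
```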

In fact, we can even write foldl using foldr!

-- file: ch04/Fold.hs
myFoldl :: (a -> b -> a) -> a -> [b] -> a

myFoldl f z xs = foldr step id xs z
    where step x g a = g (f a x)

Understanding foldl in terms of foldr

If you want to set yourself a solid challenge, try to follow our definition of foldl using foldr. Be warned: this is not trivial! You might want to have the following tools at hand: some headache pills and a glass of water, ghci (so that you can find out what the id function does), and a pencil and paper.

You will want to follow the same manual evaluation process as we just outlined to see what foldl and foldr were really doing. If you get stuck, you may find the task easier after reading Partial Function Application and Currying.
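If you would like a hint before you start, here are the opening steps of one such expansion (our own trace, using a two-element list to keep it short):

```haskell
myFoldl :: (a -> b -> a) -> a -> [b] -> a
myFoldl f z xs = foldr step id xs z
    where step x g a = g (f a x)

-- Expanding myFoldl (+) 0 [1,2]:
--   foldr step id (1:2:[]) 0
--   == step 1 (step 2 id) 0    -- foldr builds a chain of functions
--   == (step 2 id) (0 + 1)     -- step 1 g a  ==  g (a + 1)
--   == id ((0 + 1) + 2)        -- step 2 g a  ==  g (a + 2)
--   == (0 + 1) + 2             -- foldl's left-associated grouping
```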

Returning to our earlier intuitive explanation of what foldr does, another useful way to think about it is that it transforms its input list. Its first two arguments are what to do with each head/tail element of the list, and what to substitute for the end of the list.

The identity transformation with foldr thus replaces the empty list with itself and applies the list constructor to each head/tail pair:

-- file: ch04/Fold.hs
identity :: [a] -> [a]
identity xs = foldr (:) [] xs

It transforms a list into a copy of itself:

ghci> identity [1,2,3]
[1,2,3]

If foldr replaces the end of a list with some other value, this gives us another way to look at Haskell’s list append function, (++):

ghci> [1,2,3] ++ [4,5,6]
[1,2,3,4,5,6]

All we have to do to append a list onto another is substitute that second list for the end of our first list:

-- file: ch04/Fold.hs
append :: [a] -> [a] -> [a]
append xs ys = foldr (:) ys xs

Let’s try this out:

ghci> append [1,2,3] [4,5,6]
[1,2,3,4,5,6]

Here, we replace each list constructor with another list constructor, but we replace the empty list with the list we want to append onto the end of our first list.
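Pushing this one step further, flattening a list of lists is just append folded over the outer list (a sketch of our own, using the hypothetical name myConcat):

```haskell
append :: [a] -> [a] -> [a]
append xs ys = foldr (:) ys xs

-- Replace each (:) in the outer list with append, and its [] with []
myConcat :: [[a]] -> [a]
myConcat xss = foldr append [] xss
```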

As our extended treatment of folds should indicate, the foldr function is nearly as important a member of our list-programming toolbox as the more basic list functions we saw in Working with Lists. It can consume and produce a list incrementally, which makes it useful for writing lazy data-processing code.
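For example, because foldr demands only as much of its input as the step function needs, our foldr-based filter can process an infinite list, so long as we ask for only a finite prefix of the result (this example is our own):

```haskell
myFilter :: (a -> Bool) -> [a] -> [a]
myFilter p xs = foldr step [] xs
    where step x ys | p x       = x : ys
                    | otherwise = ys

-- This terminates even though [1..] never does, because the fold
-- produces its output incrementally.
firstFiveEvens :: [Int]
firstFiveEvens = take 5 (myFilter even [1..])
```

In ghci, firstFiveEvens evaluates to [2,4,6,8,10]. A left fold could never do this, since foldl must reach the end of its input before it can return anything.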

Left Folds, Laziness, and Space Leaks

To keep our initial discussion simple, we used foldl throughout most of this section. This is convenient for testing, but we will never use foldl in practice. The reason has to do with Haskell’s nonstrict evaluation. If we apply foldl (+) 0 [1,2,3], it evaluates to the expression (((0 + 1) + 2) + 3). We can see this occur if we revisit the way in which the function gets expanded:

-- file: ch04/Fold.hs
foldl (+) 0 (1:2:3:[])
          == foldl (+) (0 + 1)             (2:3:[])
          == foldl (+) ((0 + 1) + 2)       (3:[])
          == foldl (+) (((0 + 1) + 2) + 3) []
          ==           (((0 + 1) + 2) + 3)

The final expression will not be evaluated to 6 until its value is demanded. Before it is evaluated, it must be stored as a thunk. Not surprisingly, a thunk is more expensive to store than a single number, and the more complex the thunked expression, the more space it needs. For something cheap such as arithmetic, thunking an expression is more computationally expensive than evaluating it immediately. We thus end up paying both in space and in time.

When GHC is evaluating a thunked expression, it uses an internal stack to do so. Because a thunked expression could potentially be infinitely large, GHC places a fixed limit on the maximum size of this stack. Thanks to this limit, we can try a large thunked expression in ghci without needing to worry that it might consume all the memory:

ghci> foldl (+) 0 [1..1000]
500500

From looking at this expansion, we can surmise that this creates a thunk that consists of 1,000 integers and 999 applications of (+). That’s a lot of memory and effort to represent a single number! With a larger expression, although the size is still modest, the results are more dramatic:

ghci> foldl (+) 0 [1..1000000]
*** Exception: stack overflow

On small expressions, foldl will work correctly but slowly, due to the thunking overhead that it incurs. We refer to this invisible thunking as a space leak, because our code is operating normally, but it is using far more memory than it should.

On larger expressions, code with a space leak will simply fail, as above. A space leak with foldl is a classic roadblock for new Haskell programmers. Fortunately, this is easy to avoid.

The Data.List module defines a function named foldl' that is similar to foldl, but does not build up thunks. The difference in behavior between the two is immediately obvious:

ghci> foldl  (+) 0 [1..1000000]
*** Exception: stack overflow
ghci> :module +Data.List
ghci> foldl' (+) 0 [1..1000000]
500000500000

Due to foldl’s thunking behavior, it is wise to avoid this function in real programs: even if it doesn’t fail outright, it will be unnecessarily inefficient. Instead, import Data.List and use foldl'.
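To see the idea behind foldl', here is a sketch of how a strict left fold can be written by hand, using seq (which we will meet properly later in this chapter) to force each intermediate accumulator before recursing. This illustrates the technique; it is not GHC’s actual definition:

```haskell
myFoldl' :: (a -> b -> a) -> a -> [b] -> a
myFoldl' _ z []     = z
myFoldl' f z (x:xs) = let z' = f z x
                      in z' `seq` myFoldl' f z' xs  -- force z' now, not later
```

Because each partial result is evaluated as soon as it is created, the accumulator is always a plain value rather than a growing thunk.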

Further Reading

The article “A tutorial on the universality and expressiveness of fold” by Graham Hutton is an excellent and in-depth tutorial that covers folds. It includes many examples of how to use simple, systematic calculation techniques to turn functions that use explicit recursion into folds.
