Compiling with Continuations

Book Description

The control and data flow of a program can be represented using continuations, a concept from denotational semantics that has practical application in real compilers. This book shows how continuation-passing style is used as an intermediate representation on which to perform optimisations and program transformations. Continuations can be used to compile most programming languages. The method is illustrated in a compiler for the programming language Standard ML. However, prior knowledge of ML is not necessary, as the author carefully explains each concept as it arises. This is the first book to show how concepts from the theory of programming languages can be applied to the production of practical optimising compilers for modern languages like ML. This book will be essential reading for compiler writers in both industry and academe, as well as for students and researchers in programming language theory.
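To give a flavour of the idea (this sketch is illustrative only and is not drawn from the book, which works in Standard ML): in continuation-passing style, every function takes an extra argument, the continuation, which represents "the rest of the computation". Control flow that is implicit in direct style becomes explicit data flow through continuations.

```python
# Direct style: the call stack implicitly remembers the pending
# multiplications.
def fact_direct(n):
    return 1 if n == 0 else n * fact_direct(n - 1)

# Continuation-passing style: the continuation k is an explicit
# function saying what to do with the result, so every call is a
# tail call and control flow is reified as data.
def fact_cps(n, k):
    if n == 0:
        return k(1)
    return fact_cps(n - 1, lambda r: k(n * r))
```

Calling `fact_cps(5, lambda r: r)` threads the identity continuation through and yields the same value as `fact_direct(5)`.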

Table of Contents

  1. Cover
  2. Half Title
  3. Title Page
  4. Copyright
  5. Contents
  6. Acknowledgments
  7. 1. Overview
    1. 1.1 Continuation-passing style
    2. 1.2 Advantages of CPS
    3. 1.3 What is ML?
    4. 1.4 Compiler organization
  8. 2. Continuation-passing style
    1. 2.1 The CPS datatype
    2. 2.2 Functions that escape
    3. 2.3 Scope rules
    4. 2.4 Closure conversion
    5. 2.5 Spilling
  9. 3. Semantics of the CPS
  10. 4. ML-specific optimizations
    1. 4.1 Data representation
    2. 4.2 Pattern matching
    3. 4.3 Equality
    4. 4.4 Unboxed updates
    5. 4.5 The mini-ML sublanguage
    6. 4.6 Exception declarations
    7. 4.7 The lambda language
    8. 4.8 The module system
  11. 5. Conversion into CPS
    1. 5.1 Variables and constants
    2. 5.2 Records and selection
    3. 5.3 Primitive arithmetic operators
    4. 5.4 Function calls
    5. 5.5 Mutually recursive functions
    6. 5.6 Data constructors
    7. 5.7 Case statements
    8. 5.8 Exception handling
    9. 5.9 Call with current continuation
  12. 6. Optimization of the CPS
    1. 6.1 Constant folding and β-contraction
    2. 6.2 Eta reduction and uncurrying
    3. 6.3 Cascading optimizations
    4. 6.4 Implementation
  13. 7. Beta expansion
    1. 7.1 When to do in-line expansion
    2. 7.2 Estimating the savings
    3. 7.3 Runaway expansion
  14. 8. Hoisting
    1. 8.1 Merging FIX definitions
    2. 8.2 Rules for hoisting
    3. 8.3 Hoisting optimizations
  15. 9. Common subexpressions
  16. 10. Closure conversion
    1. 10.1 A simple example
    2. 10.2 A bigger example
    3. 10.3 Closure-passing style
    4. 10.4 The closure-conversion algorithm
    5. 10.5 Closure representation
    6. 10.6 Callee-save registers
    7. 10.7 Callee-save continuation closures
    8. 10.8 Stack allocation of closures
    9. 10.9 Lifting function definitions to top level
  17. 11. Register spilling
    1. 11.1 Rearranging the expression
    2. 11.2 The spilling algorithm
  18. 12. Space complexity
    1. 12.1 Axioms for analyzing space
    2. 12.2 Preserving space complexity
    3. 12.3 Closure representations
    4. 12.4 When to initiate garbage collection
  19. 13. The abstract machine
    1. 13.1 Compilation units
    2. 13.2 Interface with the garbage collector
    3. 13.3 Position-independent code
    4. 13.4 Special-purpose registers
    5. 13.5 Pseudo-operations
    6. 13.6 Instructions of the continuation machine
    7. 13.7 Register assignment
    8. 13.8 Branch prediction
    9. 13.9 Generation of abstract-machine instructions
    10. 13.10 Integer arithmetic
    11. 13.11 Unboxed floating-point values
  20. 14. Machine-code generation
    1. 14.1 Translation for the VAX
      1. 14.1.1 Span-dependent instructions
    2. 14.2 Translation for the MC68020
    3. 14.3 Translation for the MIPS and SPARC
      1. 14.3.1 PC-relative addressing
      2. 14.3.2 Instruction scheduling
      3. 14.3.3 Anti-aliasing
      4. 14.3.4 Alternating temporaries
    4. 14.4 An example
  21. 15. Performance evaluation
    1. 15.1 Hardware
    2. 15.2 Measurements of individual optimizations
    3. 15.3 Tuning the parameters
    4. 15.4 More about caches
    5. 15.5 Compile time
    6. 15.6 Comparison with other compilers
    7. 15.7 Conclusions
  22. 16. The runtime system
    1. 16.1 Efficiency of garbage collection
    2. 16.2 Breadth-first copying
    3. 16.3 Generational garbage collection
    4. 16.4 Runtime data formats
    5. 16.5 Big bags of pages
    6. 16.6 Asynchronous interrupts
  23. 17. Parallel programming
    1. 17.1 Coroutines and semaphores
    2. 17.2 Better programming models
    3. 17.3 Multiple processors
    4. 17.4 Multiprocessor garbage collection
  24. 18. Future directions
    1. 18.1 Control dependencies
    2. 18.2 Type information
    3. 18.3 Loop optimizations
    4. 18.4 Garbage collection
    5. 18.5 Static single-assignment form
    6. 18.6 State threading
  25. A. Introduction to ML
    1. A.1 Expressions
    2. A.2 Patterns
    3. A.3 Declarations
    4. A.4 Some examples
  26. B. Semantics of the CPS
  27. C. Obtaining Standard ML of New Jersey
  28. D. Readings
  29. Bibliography
  30. Index