Chapter 5. Processor Management

  • 5.1 Threads Implementations

    • 5.1.1 Strategies

      • 5.1.1.1 One-Level Model

      • 5.1.1.2 Two-Level Model

      • 5.1.1.3 Scheduler Activations

    • 5.1.2 A Simple Threads Implementation

    • 5.1.3 Multiple Processors

  • 5.2 Interrupts

    • 5.2.1 Interrupt Handlers

      • 5.2.1.1 Synchronization and Interrupts

      • 5.2.1.2 Interrupt Threads

    • 5.2.2 Deferred Work

    • 5.2.3 Directed Processing

      • 5.2.3.1 Asynchronous Procedure Calls

      • 5.2.3.2 Unix Signals

  • 5.3 Scheduling

    • 5.3.1 Strategy

      • 5.3.1.1 Simple Batch Systems

      • 5.3.1.2 Multiprogrammed Batch Systems

      • 5.3.1.3 Time-Sharing Systems

      • 5.3.1.4 Shared Servers

      • 5.3.1.5 Real-Time Systems

    • 5.3.2 Tactics

      • 5.3.2.1 Handoff Scheduling

      • 5.3.2.2 Preemption Control

      • 5.3.2.3 Multiprocessor Issues

    • 5.3.3 Case Studies

      • 5.3.3.1 Scheduling in Linux

      • 5.3.3.2 Scheduling in Windows

  • 5.4 Conclusions

  • 5.5 Exercises

  • 5.6 References

This chapter covers the many aspects of managing processors. We begin by discussing the various strategies for implementing threads packages, not only within the operating system but in user-level libraries as well. Next we cover the issues involved in handling interrupts. Certain things done in reaction to interrupts must be done right away; others may be deferred. Some can be done in a rather arbitrary interrupt context; others must be done in specific process contexts. Finally we cover scheduling, looking at its basics as well as its implementations in Linux and Windows.

THREADS IMPLEMENTATIONS

We start this chapter by discussing how threads are implemented. We first examine high-level strategies for structuring the ...
