
Professional CUDA C Programming by Ty McKercher, Max Grossman, John Cheng


Chapter 4: Global Memory

What's in this chapter?

  • Learning the CUDA memory model
  • Managing CUDA memory
  • Programming with global memory
  • Exploring global memory access patterns
  • Probing global memory data layout
  • Programming with unified memory
  • Maximizing global memory throughput

In the last chapter, you studied how threads are executed on GPUs and learned how to optimize kernel performance by manipulating warp execution. However, kernel performance cannot be explained purely through an understanding of warp execution. Recall from the section titled “Checking Memory Options with nvprof” in Chapter 3 that setting the innermost dimension of a thread block to half the warp size caused memory load efficiency to drop substantially. This performance loss could not be explained by warp scheduling or by the amount of exposed parallelism. The real cause was poor access patterns to global memory.
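To make the half-warp effect concrete, the following is a minimal sketch (the kernel name and launch configuration here are illustrative, not taken verbatim from Chapter 3) of a 2D matrix-sum kernel launched with an innermost block dimension of 16, half the 32-thread warp size:

```cuda
#include <cuda_runtime.h>

// Each thread reads one element of A and B and writes one element of C.
// With blockDim.x = 16, the 32 threads of a warp span TWO matrix rows:
// threads 0-15 have threadIdx.y = 0 and threads 16-31 have threadIdx.y = 1.
// The warp's loads therefore touch two separate memory segments instead of
// one contiguous 128-byte segment, lowering global load efficiency.
__global__ void sumMatrix(const float *A, const float *B, float *C,
                          int nx, int ny) {
    unsigned int ix  = blockIdx.x * blockDim.x + threadIdx.x;
    unsigned int iy  = blockIdx.y * blockDim.y + threadIdx.y;
    unsigned int idx = iy * nx + ix;   // row-major linear index

    if (ix < nx && iy < ny)
        C[idx] = A[idx] + B[idx];
}

// Hypothetical launch: a (16, 16) block puts a half warp on each row.
// dim3 block(16, 16);
// dim3 grid((nx + block.x - 1) / block.x, (ny + block.y - 1) / block.y);
// sumMatrix<<<grid, block>>>(d_A, d_B, d_C, nx, ny);
```

Changing the block to (32, 16) restores one full warp per row, so each warp's 32 loads fall in a single contiguous segment; the chapter examines why in detail.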

In this chapter, you are going to dissect kernel interaction with global memory to help understand how those interactions affect performance. This chapter will explain the CUDA memory model and, by analyzing different global memory access patterns, teach you how to use global memory efficiently from your kernel.

Introducing the CUDA Memory ...
