Chapter 4, Kernels, Threads, Blocks, and Grids

  1. Try it.
  2. Not all of the threads operate on the GPU simultaneously. Much as a CPU switches between tasks under an operating system, the individual cores of the GPU switch between the different threads of a kernel.
  3. O((n/640) log n), that is, O(n log n).
  4. Try it.

  1. CUDA actually provides no internal grid-level synchronization, only block-level synchronization (with __syncthreads). Anything above the level of a single block has to be synchronized from the host.
  2. Naive: 129 addition operations. Work-efficient: 62 addition operations.
  3. Again, we can't use __syncthreads if we need to synchronize across a large grid of blocks. If we synchronize on the host instead, we can also launch fewer threads on each iteration, freeing up more resources for ...
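The operation counts in answer 2 match an input of n = 32 elements under the standard formulas: a naive (Hillis–Steele) scan performs n log2 n − (n − 1) additions, while a work-efficient (Blelloch) scan performs n − 1 additions in its up-sweep and n − 1 in its down-sweep. A small sketch checking this (the function names here are ours, and n = 32 is our inference from the counts, not stated in the answer):

```python
from math import log2

def naive_scan_adds(n):
    # Hillis-Steele scan: at step d, threads with index >= 2**(d-1)
    # each perform one addition, so the step costs n - 2**(d-1) adds.
    return sum(n - 2 ** (d - 1) for d in range(1, int(log2(n)) + 1))

def work_efficient_scan_adds(n):
    # Blelloch scan: (n - 1) adds in the up-sweep phase,
    # plus (n - 1) adds in the down-sweep phase.
    return 2 * (n - 1)

print(naive_scan_adds(32))           # 129
print(work_efficient_scan_adds(32))  # 62
```

For n = 32 these evaluate to 129 and 62, the figures given in the answer.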
