Questions

  1. In the first CUDA-C program that we wrote, we didn't use a cudaDeviceSynchronize call after the cudaMalloc calls we made to allocate arrays on the GPU. Why was this not necessary? (Hint: Review the last chapter.)
  2. Suppose we have a single kernel that is launched over a grid consisting of two blocks, where each block has 32 threads. Suppose all of the threads in the first block execute an if statement, while all of the threads in the second block execute the corresponding else statement. Will all of the threads in the second block have to "lockstep" through the commands in the if statement as the threads in the first block are actually executing them?
  3. What if we executed a similar piece of code, only over a grid consisting ...
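The scenario in question 2 can be sketched as a small CUDA kernel, shown below as a hypothetical illustration (the kernel and variable names are not from the book): the branch each thread takes depends only on its block index, so within any one block every thread follows the same path.

```cuda
// Hypothetical kernel sketching the scenario in question 2: a grid of
// two blocks with 32 threads each, where the branch taken depends only
// on the block index. Names are illustrative, not from the book.
__global__ void branch_by_block(int *out)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (blockIdx.x == 0)
        out[i] = 1;   // every thread in block 0 takes the if path
    else
        out[i] = 2;   // every thread in block 1 takes the else path
}

// Launched over two blocks of 32 threads each:
//     branch_by_block<<<2, 32>>>(d_out);
```

When considering the question, note that with 32 threads per block, each block here consists of exactly one warp.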
