Transferring data to and from the GPU with gpuarray

As we saw when writing our deviceQuery program in Python earlier, a GPU has its own memory apart from the host computer's memory, known as device memory. (Sometimes this is known more specifically as global device memory, to differentiate it from the additional cache memory, shared memory, and register memory that also reside on the GPU.) For the most part, we treat (global) device memory on the GPU as we do dynamically allocated heap memory in C (with the malloc and free functions) or C++ (with the new and delete operators); in CUDA C, this is further complicated by the additional task of transferring data back and forth between the CPU and the GPU (with functions such as cudaMemcpyHostToDevice and cudaMemcpyDeviceToHost).
