Cluster computing and MPI

Another topic is cluster computing, that is, writing programs that make collective use of a multitude of GPU-equipped servers connected over a network. These are the server farms that populate the data centers of well-known internet companies such as Facebook and Google, as well as the scientific supercomputing facilities used by governments and militaries. Clusters are generally programmed with the Message Passing Interface (MPI), a standard for communication between processes running across many networked computers, with implementations available for languages such as C, C++, and Fortran.

More information about using CUDA with MPI is available here: https://devblogs.nvidia.com/introduction-cuda-aware-mpi/.
