Multi-GPU models in Keras

Many cloud computing platforms can provision instances with multiple GPUs. As your models grow in size and complexity, you might want to parallelize the workload across several GPUs. Doing this in native TensorFlow can be a somewhat involved process, but in Keras, it's just a function call.

Build your model, as normal, as shown in the following code:

model = Model(inputs=inputs, outputs=output)

Then, we just pass that model to keras.utils.multi_gpu_model, as shown in the following code:

model = multi_gpu_model(model, num_gpu)

In this example, num_gpu is the number of GPUs we want to use.
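Putting the two steps together, a minimal end-to-end sketch might look like the following. The layer sizes and the value of num_gpu are illustrative assumptions, not from the original text; note also that multi_gpu_model was removed from TensorFlow in version 2.4, where tf.distribute.MirroredStrategy is the replacement, so the call is guarded here:

```python
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

# Build the model as normal (illustrative layer sizes).
inputs = Input(shape=(10,))
x = Dense(32, activation="relu")(inputs)
output = Dense(1, activation="sigmoid")(x)
model = Model(inputs=inputs, outputs=output)

# Number of GPUs to parallelize across (assumed value).
num_gpu = 2

# Wrap the model for data-parallel training. multi_gpu_model is absent
# on TensorFlow 2.4+ and raises ValueError if fewer GPUs are available,
# so we fall back to the single-device model in those cases.
try:
    from tensorflow.keras.utils import multi_gpu_model
    parallel_model = multi_gpu_model(model, gpus=num_gpu)
except (ImportError, ValueError):
    parallel_model = model
```

You then compile and fit parallel_model exactly as you would the original model; each batch is split evenly across the GPUs.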
