Data parallelism

Data parallelism is a way of speeding up computation by making the data itself the central entity. This is in contrast to the coroutine- and thread-based parallelism we have seen so far, where we first identify tasks that can run independently and then distribute the available data to those tasks as needed; that approach is often called task parallelism. With data parallelism, the subject of this section, we instead work out which parts of the input data can be processed independently and assign a task to each part. This fits naturally with the divide-and-conquer approach, mergesort being a classic example: the input is repeatedly split into halves that can be sorted independently and then merged, as sketched below.
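The following is a minimal sketch of that idea using plain `std::thread` from the standard library; the recursive structure, the helper names, and the decision to spawn a thread at every level are illustrative choices rather than anything prescribed by the text.

```rust
use std::thread;

// A minimal sketch of data-parallel mergesort. For simplicity a new thread
// is spawned at every recursion level; a real implementation would cap the
// depth or hand the work to a thread pool.
fn merge_sort(data: &[i32]) -> Vec<i32> {
    if data.len() <= 1 {
        return data.to_vec();
    }
    let (left, right) = data.split_at(data.len() / 2);

    // The two halves are independent pieces of data, so they can be
    // sorted concurrently.
    let (left_sorted, right_sorted) = thread::scope(|s| {
        let handle = s.spawn(|| merge_sort(left));
        let right_sorted = merge_sort(right);
        (handle.join().unwrap(), right_sorted)
    });

    merge(&left_sorted, &right_sorted)
}

// Merge two sorted slices into one sorted vector.
fn merge(a: &[i32], b: &[i32]) -> Vec<i32> {
    let mut out = Vec::with_capacity(a.len() + b.len());
    let (mut i, mut j) = (0, 0);
    while i < a.len() && j < b.len() {
        if a[i] <= b[j] {
            out.push(a[i]);
            i += 1;
        } else {
            out.push(b[j]);
            j += 1;
        }
    }
    out.extend_from_slice(&a[i..]);
    out.extend_from_slice(&b[j..]);
    out
}

fn main() {
    let data = vec![5, 3, 8, 1, 9, 2, 7];
    assert_eq!(merge_sort(&data), vec![1, 2, 3, 5, 7, 8, 9]);
    println!("{:?}", merge_sort(&data));
}
```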

The Rust ecosystem has a library called Rayon that provides simple APIs ...
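As an illustration of how little the code changes with Rayon, a data-parallel sum and map using its parallel iterators might look like the following sketch; the crate version and the particular operations are assumptions for the example, not taken from the text.

```rust
// Cargo.toml: rayon = "1"
use rayon::prelude::*;

fn main() {
    let numbers: Vec<i64> = (1..=1_000).collect();

    // `par_iter` splits the work across a thread pool behind the scenes;
    // the code reads almost like the sequential `iter` version.
    let total: i64 = numbers.par_iter().sum();
    let squares: Vec<i64> = numbers.par_iter().map(|n| n * n).collect();

    println!("total = {total}");
    println!("largest square = {}", squares.last().unwrap());
}
```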
