Chapter 3. DATA STRUCTURES

Scientific applications are among the most computationally intensive programs in existence, so their efficiency is of paramount importance. However, much time can be wasted optimizing fundamentally inefficient algorithms, or concentrating on low-level optimizations while far more productive high-level optimizations remain unexploited.

Too commonly, a given problem is shoehorned into using arrays because more sophisticated data structures are prohibitively complicated to implement in many common languages. Examples of this problem are rife in scientific computing: finite element materials simulations, numerical differential equation solvers, numerical integration, implicit surface tessellation, and simulations of particle or fluid dynamics are often built around uniformly subdivided arrays when they should be based upon adaptively subdivided trees.
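As an illustration of the kind of structure being advocated, the following is a minimal F# sketch of an adaptively subdivided quadtree over points in the plane. The Point, Region, quadrant, subRegion and insert definitions are hypothetical illustrations, not code from this book: leaves hold small buckets of points and a region is subdivided only when its bucket overflows, so resolution is spent only where the data demand it.

    /// A point in the plane.
    type Point = { x : float; y : float }

    /// An axis-aligned square region: centre (cx, cy) and half-width r.
    type Region = { cx : float; cy : float; r : float }

    /// An adaptively subdivided quadtree: leaves hold small buckets of points.
    type Quadtree =
        | Leaf of Point list
        | Node of Quadtree * Quadtree * Quadtree * Quadtree

    /// Maximum number of points a leaf may hold before it is subdivided.
    let maxBucket = 4

    /// Index (0..3) of the quadrant of region containing the point p.
    let quadrant region p =
        (if p.x < region.cx then 0 else 1) + (if p.y < region.cy then 0 else 2)

    /// The sub-region corresponding to quadrant q of region.
    let subRegion region q =
        let r = region.r / 2.0
        { cx = region.cx + (if q % 2 = 0 then -r else r)
          cy = region.cy + (if q < 2 then -r else r)
          r = r }

    /// Insert the point p, splitting any leaf that overflows its bucket.
    let rec insert region p tree =
        match tree with
        | Leaf ps when List.length ps < maxBucket -> Leaf(p :: ps)
        | Leaf ps ->
            let empty = Node(Leaf [], Leaf [], Leaf [], Leaf [])
            List.fold (fun t pt -> insert region pt t) empty (p :: ps)
        | Node(a, b, c, d) ->
            match quadrant region p with
            | 0 -> Node(insert (subRegion region 0) p a, b, c, d)
            | 1 -> Node(a, insert (subRegion region 1) p b, c, d)
            | 2 -> Node(a, b, insert (subRegion region 2) p c, d)
            | _ -> Node(a, b, c, insert (subRegion region 3) p d)

For example, insert { cx = 0.0; cy = 0.0; r = 1.0 } { x = 0.3; y = -0.2 } (Leaf []) refines the tree only around occupied regions, whereas a uniformly subdivided array pays the same storage and traversal cost for empty space.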

Occasionally, the poor performance of these inappropriately optimized programs even drives the use of alternative (often approximate) techniques. One example is the practice of padding vectors to round their sizes up to integer powers of two when computing discrete Fourier transforms with a radix-2 FFT. In order to combat this folklore-based approach to optimization, we shall introduce a more formal approach to quantifying the efficiency of computations. This approach is well known in computer science as complexity theory.
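To make the padding trick concrete, here is a minimal F# sketch assuming zero-padding ahead of a radix-2 FFT. The helpers nextPow2 and padToPow2 are hypothetical names, not code from this book:

    /// Smallest integer power of two that is >= n, e.g. nextPow2 1000 = 1024.
    let nextPow2 n =
        let rec go p = if p >= n then p else go (2 * p)
        go 1

    /// Pad a sample vector with zeros up to the next power-of-two length,
    /// so that a radix-2 FFT can be applied to it.
    let padToPow2 (xs : float []) =
        Array.append xs (Array.zeroCreate (nextPow2 xs.Length - xs.Length))

The padded vector represents a different signal from the original, which is why the resulting transform is only an approximation to the transform of the unpadded data.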

The single most important choice determining the efficiency ...
