Book Description

This book, the first on the topic, explains the most cutting-edge methods needed for precise calculations and explores the development of powerful algorithms to solve research problems. Multipoint methods have an extensive range of practical applications in research areas such as signal processing, analysis of convergence rate, fluid mechanics, solid state physics, and many others. The book takes an introductory approach, making qualitative comparisons of different multipoint methods from various viewpoints to help the reader understand applications of more complex methods. Evaluations are made to determine and predict the efficiency and accuracy of the presented models, useful to a wide range of research areas, along with many numerical examples for a deep understanding of the usefulness of each method. This book will make it possible for researchers to tackle difficult problems and deepen their understanding of problem solving using numerical methods. Multipoint methods are of great practical importance, as they determine sequences of successive approximations for evaluative purposes; this is especially helpful in achieving the highest computational efficiency. The rapid development of digital computers and advanced computer arithmetic has created a need for new methods to solve practical problems in a multitude of disciplines, such as applied mathematics, computer science, engineering, physics, financial mathematics, and biology.

  • Provides a succinct way of implementing a wide range of useful and important numerical algorithms for solving research problems
  • Illustrates how numerical methods can be used to study problems with applications in engineering and the sciences, including signal processing, control theory, and financial computation
  • Facilitates a deeper insight into the development of methods, numerical analysis of convergence rate, and very detailed analysis of computational efficiency
  • Provides a powerful means of learning by systematic experimentation with some of the many fascinating problems in science
  • Includes highly efficient algorithms convenient for implementation in the most common computer algebra systems, such as Mathematica, MATLAB, and Maple
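
To illustrate the kind of algorithm the book covers, here is a minimal sketch of Ostrowski's fourth-order two-point method (treated in Section 2.2). The function names, tolerance, and iteration cap are illustrative choices, not taken from the book's own implementations:

```python
def ostrowski(f, df, x0, tol=1e-12, max_iter=50):
    """Approximate a simple zero of f by Ostrowski's two-point scheme.

    Each iteration uses three evaluations -- f(x), f(y), f'(x) -- and
    converges with order four, which is optimal in the Kung-Traub sense
    for a method with three evaluations per step.
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        dfx = df(x)
        y = x - fx / dfx                     # first point: Newton step
        fy = f(y)
        # second point: Ostrowski correction
        x_new = y - fy * fx / ((fx - 2.0 * fy) * dfx)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: the positive zero of f(x) = x^2 - 2 is sqrt(2)
root = ostrowski(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)
```

Starting from x0 = 1.5, the iterate agrees with sqrt(2) to machine precision after only a couple of steps, which is the kind of efficiency gain over one-point methods that motivates the book.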

Table of Contents

  1. Cover image
  2. Title page
  3. Table of Contents
  4. Copyright
  5. Preface
  6. Chapter 1. Basic concepts
    1. 1.1 Classification of iterative methods
    2. 1.2 Order of convergence
    3. 1.3 Computational efficiency of iterative methods
    4. 1.4 Initial approximations
    5. 1.5 One-point iterative methods for simple zeros
    6. 1.6 Methods for determining multiple zeros
    7. 1.7 Stopping criterion
    8. References
  7. Chapter 2. Two-point methods
    1. 2.1 Cubically convergent two-point methods
    2. 2.2 Ostrowski’s fourth-order method and its generalizations
    3. 2.3 Family of optimal two-point methods
    4. 2.4 Optimal derivative free two-point methods
    5. 2.5 Kung-Traub’s multipoint methods
    6. 2.6 Optimal two-point methods of Jarratt’s type
    7. 2.7 Two-point methods for multiple roots
    8. References
  8. Chapter 3. Three-point non-optimal methods
    1. 3.1 Some historical notes
    2. 3.2 Methods for constructing sixth-order root-finders
    3. 3.3 Ostrowski-like methods of sixth order
    4. 3.4 Jarratt-like methods of sixth order
    5. 3.5 Other non-optimal three-point methods
    6. References
  9. Chapter 4. Three-point optimal methods
    1. 4.1 Optimal three-point methods of Bi, Wu, and Ren
    2. 4.2 Interpolatory iterative three-point methods
    3. 4.3 Optimal methods based on weight functions
    4. 4.4 Eighth-order Ostrowski-like methods
    5. 4.5 Derivative free family of optimal three-point methods
    6. References
  10. Chapter 5. Higher-order optimal methods
    1. 5.1 Some comments on higher-order multipoint methods
    2. 5.2 Geum-Kim’s family of four-point methods
    3. 5.3 Kung-Traub’s families of arbitrary order of convergence
    4. 5.4 Methods of higher-order based on inverse interpolation
    5. 5.5 Multipoint methods based on Hermite’s interpolation
    6. 5.6 Generalized derivative free family based on Newtonian interpolation
    7. References
  11. Chapter 6. Multipoint methods with memory
    1. 6.1 Early works
    2. 6.2 Multipoint methods with memory constructed by inverse interpolation
    3. 6.3 Efficient family of two-point self-accelerating methods
    4. 6.4 Family of three-point methods with memory
    5. 6.5 Generalized multipoint root-solvers with memory
    6. 6.6 Computational aspects
    7. References
  12. Chapter 7. Simultaneous methods for polynomial zeros
    1. 7.1 Simultaneous methods for simple zeros
    2. 7.2 Simultaneous method for multiple zeros
    3. 7.3 Simultaneous inclusion of simple zeros
    4. 7.4 Simultaneous inclusion of multiple zeros
    5. 7.5 Halley-like inclusion methods of high efficiency
    6. References
  13. Bibliography
  14. Glossary
  15. Index