Intelligent Systems for Engineers and Scientists, Third Edition

Book Description

The third edition of this bestseller examines the principles of artificial intelligence and their application to engineering and science, as well as techniques for developing intelligent systems to solve practical problems. Covering the full spectrum of intelligent systems techniques, it incorporates knowledge-based systems, computational intelligence, and their hybrids.

Written in clear and concise language, Intelligent Systems for Engineers and Scientists, Third Edition features updates and improvements throughout all chapters. It includes expanded and separate chapters on genetic algorithms and single-candidate optimization techniques, while the chapter on neural networks now covers spiking networks and a range of recurrent networks. The book also provides extended coverage of fuzzy logic, including type-2 and fuzzy control systems. Example programs using rules and uncertainty are presented in an industry-standard format, so that you can run them yourself.

The first part of the book describes key techniques of artificial intelligence—including rule-based systems, Bayesian updating, certainty theory, fuzzy logic (types 1 and 2), frames, objects, agents, symbolic learning, case-based reasoning, genetic algorithms, optimization algorithms, neural networks, hybrids, and the Lisp and Prolog languages. The second part describes a wide range of practical applications in interpretation and diagnosis, design and selection, planning, and control.

The author provides sufficient detail to help you develop your own intelligent systems for real applications. Whether you are building intelligent systems or simply want to know more about them, this book offers detailed and up-to-date guidance.

A significantly expanded set of free web-based resources supporting the book is available at: http://www.adrianhopgood.com/aitoolkit/

Table of Contents

  1. Preface
  2. The Author
  3. Chapter 1: Introduction
    1. 1.1 Intelligent Systems
    2. 1.2 A Spectrum of Intelligent Behavior
    3. 1.3 Knowledge-Based Systems
    4. 1.4 The Knowledge Base
      1. 1.4.1 Rules and Facts
      2. 1.4.2 Inference Networks
      3. 1.4.3 Semantic Networks
    5. 1.5 Deduction, Abduction, and Induction
    6. 1.6 The Inference Engine
    7. 1.7 Declarative and Procedural Programming
    8. 1.8 Expert Systems
    9. 1.9 Knowledge Acquisition
    10. 1.10 Search
    11. 1.11 Computational Intelligence
    12. 1.12 Integration with Other Software
    13. Further Reading
  4. Chapter 2: Rule-Based Systems
    1. 2.1 Rules and Facts
    2. 2.2 A Rule-Based System for Boiler Control
    3. 2.3 Rule Examination and Rule Firing
    4. 2.4 Maintaining Consistency
    5. 2.5 The Closed-World Assumption
    6. 2.6 Use of Local Variables within Rules
    7. 2.7 Forward Chaining (a Data-Driven Strategy)
      1. 2.7.1 Single and Multiple Instantiation of Local Variables
      2. 2.7.2 Rete Algorithm
    8. 2.8 Conflict Resolution
      1. 2.8.1 First Come, First Served
      2. 2.8.2 Priority Values
      3. 2.8.3 Metarules
    9. 2.9 Backward Chaining (a Goal-Driven Strategy)
      1. 2.9.1 The Backward-Chaining Mechanism
      2. 2.9.2 Implementation of Backward Chaining
      3. 2.9.3 Variations of Backward Chaining
      4. 2.9.4 Format of Backward-Chaining Rules
    10. 2.10 A Hybrid Strategy
    11. 2.11 Explanation Facilities
    12. 2.12 Summary
    13. Further Reading
  5. Chapter 3: Handling Uncertainty: Probability and Fuzzy Logic
    1. 3.1 Sources of Uncertainty
    2. 3.2 Bayesian Updating
      1. 3.2.1 Representing Uncertainty by Probability
      2. 3.2.2 Direct Application of Bayes’ Theorem
      3. 3.2.3 Likelihood Ratios
      4. 3.2.4 Using the Likelihood Ratios
      5. 3.2.5 Dealing with Uncertain Evidence
      6. 3.2.6 Combining Evidence
      7. 3.2.7 Combining Bayesian Rules with Production Rules
      8. 3.2.8 A Worked Example of Bayesian Updating
      9. 3.2.9 Discussion of the Worked Example
      10. 3.2.10 Advantages and Disadvantages of Bayesian Updating
    3. 3.3 Certainty Theory
      1. 3.3.1 Introduction
      2. 3.3.2 Making Uncertain Hypotheses
      3. 3.3.3 Logical Combinations of Evidence
        1. 3.3.3.1 Conjunction
        2. 3.3.3.2 Disjunction
        3. 3.3.3.3 Negation
      4. 3.3.4 A Worked Example of Certainty Theory
      5. 3.3.5 Discussion of the Worked Example
      6. 3.3.6 Relating Certainty Factors to Probabilities
    4. 3.4 Fuzzy Logic: Type-1
      1. 3.4.1 Crisp Sets and Fuzzy Sets
      2. 3.4.2 Fuzzy Rules
      3. 3.4.3 Defuzzification
        1. 3.4.3.1 Stage 1: Scaling the Membership Functions
        2. 3.4.3.2 Stage 2: Finding the Centroid
        3. 3.4.3.3 Defuzzifying at the Extremes
        4. 3.4.3.4 Sugeno Defuzzification
        5. 3.4.3.5 A Defuzzification Anomaly
    5. 3.5 Fuzzy Control Systems
      1. 3.5.1 Crisp and Fuzzy Control
      2. 3.5.2 Fuzzy Control Rules
      3. 3.5.3 Defuzzification in Control Systems
    6. 3.6 Fuzzy Logic: Type-2
    7. 3.7 Other Techniques
      1. 3.7.1 Dempster–Shafer Theory of Evidence
      2. 3.7.2 Inferno
    8. 3.8 Summary
    9. Further Reading
  6. Chapter 4: Agents, Objects, and Frames
    1. 4.1 Birds of a Feather: Agents, Objects, and Frames
    2. 4.2 Intelligent Agents
    3. 4.3 Agent Architectures
      1. 4.3.1 Logic-Based Architectures
      2. 4.3.2 Emergent Behavior Architectures
      3. 4.3.3 Knowledge-Level Architectures
      4. 4.3.4 Layered Architectures
    4. 4.4 Multiagent Systems
      1. 4.4.1 Benefits of a Multiagent System
      2. 4.4.2 Building a Multiagent System
      3. 4.4.3 Contract Nets
      4. 4.4.4 Cooperative Problem-Solving (CPS)
      5. 4.4.5 Shifting Matrix Management (SMM)
      6. 4.4.6 Comparison of Cooperative Models
      7. 4.4.7 Communication between Agents
    5. 4.5 Swarm Intelligence
    6. 4.6 Object-Oriented Systems
      1. 4.6.1 Introducing OOP
      2. 4.6.2 An Illustrative Example
      3. 4.6.3 Data Abstraction
        1. 4.6.3.1 Classes
        2. 4.6.3.2 Instances
        3. 4.6.3.3 Attributes (or Data Members)
        4. 4.6.3.4 Operations (or Methods or Member Functions)
        5. 4.6.3.5 Creation and Deletion of Instances
      4. 4.6.4 Inheritance
        1. 4.6.4.1 Single Inheritance
        2. 4.6.4.2 Multiple and Repeated Inheritance
        3. 4.6.4.3 Specialization of Methods
        4. 4.6.4.4 Class Browsers
      5. 4.6.5 Encapsulation
      6. 4.6.6 Unified Modeling Language (UML)
      7. 4.6.7 Dynamic (or Late) Binding
      8. 4.6.8 Message Passing and Function Calls
      9. 4.6.9 Metaclasses
      10. 4.6.10 Type Checking
      11. 4.6.11 Persistence
      12. 4.6.12 Concurrency
      13. 4.6.13 Active Values and Daemons
      14. 4.6.14 OOP Summary
    7. 4.7 Objects and Agents
    8. 4.8 Frame-Based Systems
    9. 4.9 Summary: Agents, Objects, and Frames
    10. Further Reading
  7. Chapter 5: Symbolic Learning
    1. 5.1 Introduction
    2. 5.2 Learning by Induction
      1. 5.2.1 Overview
      2. 5.2.2 Learning Viewed as a Search Problem
      3. 5.2.3 Techniques for Generalization and Specialization
        1. 5.2.3.1 Universalization
        2. 5.2.3.2 Replacing Constants with Variables
        3. 5.2.3.3 Using Conjunctions and Disjunctions
        4. 5.2.3.4 Moving up or down a Hierarchy
        5. 5.2.3.5 Chunking
    3. 5.3 Case-Based Reasoning (CBR)
      1. 5.3.1 Storing Cases
        1. 5.3.1.1 Abstraction Links and Index Links
        2. 5.3.1.2 Instance-of Links
        3. 5.3.1.3 Scene Links
        4. 5.3.1.4 Exemplar Links
        5. 5.3.1.5 Failure Links
      2. 5.3.2 Retrieving Cases
      3. 5.3.3 Adapting Case Histories
        1. 5.3.3.1 Null Adaptation
        2. 5.3.3.2 Parameterization
        3. 5.3.3.3 Reasoning by Analogy
        4. 5.3.3.4 Critics
        5. 5.3.3.5 Reinstantiation
      4. 5.3.4 Dealing with Mistaken Conclusions
    4. 5.4 Summary
    5. Further Reading
  8. Chapter 6: Single-Candidate Optimization Algorithms
    1. 6.1 Optimization
    2. 6.2 The Search Space
    3. 6.3 Searching the Parameter Space
    4. 6.4 Hill-Climbing and Gradient Descent Algorithms
      1. 6.4.1 Hill-Climbing
      2. 6.4.2 Steepest Gradient Descent or Ascent
      3. 6.4.3 Gradient-Proportional Descent or Ascent
      4. 6.4.4 Conjugate Gradient Descent or Ascent
      5. 6.4.5 Tabu Search
    5. 6.5 Simulated Annealing
    6. 6.6 Summary
    7. Further Reading
  9. Chapter 7: Genetic Algorithms for Optimization
    1. 7.1 Introduction
    2. 7.2 The Basic GA
      1. 7.2.1 Chromosomes
      2. 7.2.2 Algorithm Outline
      3. 7.2.3 Crossover
      4. 7.2.4 Mutation
      5. 7.2.5 Validity Check
    3. 7.3 Selection
      1. 7.3.1 Selection Pitfalls
      2. 7.3.2 Fitness-Proportionate Selection
      3. 7.3.3 Fitness Scaling for Improved Selection
        1. 7.3.3.1 Linear Fitness Scaling
        2. 7.3.3.2 Sigma Scaling
        3. 7.3.3.3 Boltzmann Fitness Scaling
        4. 7.3.3.4 Linear Rank Scaling
        5. 7.3.3.5 Nonlinear Rank Scaling
        6. 7.3.3.6 Probabilistic Nonlinear Rank Scaling
        7. 7.3.3.7 Truncation Selection
        8. 7.3.3.8 Transform Ranking
      4. 7.3.4 Tournament Selection
      5. 7.3.5 Comparison of Selection Methods
    4. 7.4 Elitism
    5. 7.5 Multiobjective Optimization
    6. 7.6 Gray Code
    7. 7.7 Variable Length Chromosomes
    8. 7.8 Building Block Hypothesis
      1. 7.8.1 Schema Theorem
      2. 7.8.2 Inversion
    9. 7.9 Selecting GA Parameters
    10. 7.10 Monitoring Evolution
    11. 7.11 Finding Multiple Optima
    12. 7.12 Genetic Programming
    13. 7.13 Other Forms of Population-Based Optimization
    14. 7.14 Summary
    15. Further Reading
  10. Chapter 8: Neural Networks
    1. 8.1 Introduction
    2. 8.2 Neural Network Applications
      1. 8.2.1 Classification
      2. 8.2.2 Nonlinear Estimation
      3. 8.2.3 Clustering
      4. 8.2.4 Content-Addressable Memory
    3. 8.3 Nodes and Interconnections
    4. 8.4 Single and Multilayer Perceptrons
      1. 8.4.1 Network Topology
      2. 8.4.2 Perceptrons as Classifiers
      3. 8.4.3 Training a Perceptron
      4. 8.4.4 Hierarchical Perceptrons
      5. 8.4.5 Buffered Perceptrons
      6. 8.4.6 Some Practical Considerations
    5. 8.5 Recurrent Networks
      1. 8.5.1 Simple Recurrent Network (SRN)
      2. 8.5.2 Hopfield Network
      3. 8.5.3 MAXNET
      4. 8.5.4 The Hamming Network
    6. 8.6 Unsupervised Networks
      1. 8.6.1 Adaptive Resonance Theory (ART) Networks
      2. 8.6.2 Kohonen Self-Organizing Networks
      3. 8.6.3 Radial Basis Function Networks
    7. 8.7 Spiking Neural Networks
    8. 8.8 Summary
    9. Further Reading
  11. Chapter 9: Hybrid Systems
    1. 9.1 Convergence of Techniques
    2. 9.2 Blackboard Systems for Multifaceted Problems
    3. 9.3 Parameter Setting
      1. 9.3.1 Genetic–Neural Systems
      2. 9.3.2 Genetic–Fuzzy Systems
    4. 9.4 Capability Enhancement
      1. 9.4.1 Neuro–Fuzzy Systems
      2. 9.4.2 Baldwinian and Lamarckian Inheritance in Genetic Algorithms
      3. 9.4.3 Learning Classifier Systems
    5. 9.5 Clarification and Verification of Neural Network Outputs
    6. 9.6 Summary
    7. Further Reading
  12. Chapter 10: Artificial Intelligence Programming Languages
    1. 10.1 A Range of Intelligent Systems Tools
    2. 10.2 Features of AI Languages
      1. 10.2.1 Lists
      2. 10.2.2 Other Data Types
      3. 10.2.3 Programming Environments
    3. 10.3 Lisp
      1. 10.3.1 Background
      2. 10.3.2 Lisp Functions
      3. 10.3.3 A Worked Example
    4. 10.4 Prolog
      1. 10.4.1 Background
      2. 10.4.2 A Worked Example
      3. 10.4.3 Backtracking in Prolog
    5. 10.5 Comparison of AI Languages
    6. 10.6 Summary
    7. Further Reading
  13. Chapter 11: Systems for Interpretation and Diagnosis
    1. 11.1 Introduction
    2. 11.2 Deduction and Abduction for Diagnosis
      1. 11.2.1 Exhaustive Testing
      2. 11.2.2 Explicit Modeling of Uncertainty
      3. 11.2.3 Hypothesize-and-Test
    3. 11.3 Depth of Knowledge
      1. 11.3.1 Shallow Knowledge
      2. 11.3.2 Deep Knowledge
      3. 11.3.3 Combining Shallow and Deep Knowledge
    4. 11.4 Model-Based Reasoning
      1. 11.4.1 The Limitations of Rules
      2. 11.4.2 Modeling Function, Structure, and State
        1. 11.4.2.1 Function
        2. 11.4.2.2 Structure
        3. 11.4.2.3 State
      3. 11.4.3 Using the Model
      4. 11.4.4 Monitoring
      5. 11.4.5 Tentative Diagnosis
        1. 11.4.5.1 The Shotgun Approach
        2. 11.4.5.2 Structural Isolation
        3. 11.4.5.3 The Heuristic Approach
      6. 11.4.6 Fault Simulation
      7. 11.4.7 Fault Repair
      8. 11.4.8 Using Problem Trees
      9. 11.4.9 Summary of Model-Based Reasoning
    5. 11.5 Case Study: A Blackboard System for Interpreting Ultrasonic Images
      1. 11.5.1 Ultrasonic Imaging
      2. 11.5.2 Agents in DARBS
      3. 11.5.3 Rules in DARBS
      4. 11.5.4 The Stages of Image Interpretation
        1. 11.5.4.1 Arc Detection Using the Hough Transform
        2. 11.5.4.2 Gathering the Evidence
        3. 11.5.4.3 Defect Classification
      5. 11.5.5 The Use of Neural Networks
        1. 11.5.5.1 Defect Classification Using a Neural Network
        2. 11.5.5.2 Echodynamic Classification Using a Neural Network
        3. 11.5.5.3 Combining the Two Applications of Neural Networks
      6. 11.5.6 Rules for Verifying Neural Networks
    6. 11.6 Summary
    7. Further Reading
  14. Chapter 12: Systems for Design and Selection
    1. 12.1 The Design Process
    2. 12.2 Design as a Search Problem
    3. 12.3 Computer-Aided Design
    4. 12.4 The Product Design Specification (PDS): A Telecommunications Case Study
      1. 12.4.1 Background
      2. 12.4.2 Alternative Views of a Network
      3. 12.4.3 Implementation
      4. 12.4.4 The Classes
        1. 12.4.4.1 Network
        2. 12.4.4.2 Link
        3. 12.4.4.3 Site
        4. 12.4.4.4 Information Stream
        5. 12.4.4.5 Equipment
      5. 12.4.5 Summary of PDS Case Study
    5. 12.5 Conceptual Design
    6. 12.6 Constraint Propagation and Truth Maintenance
    7. 12.7 Case Study: Design of a Lightweight Beam
      1. 12.7.1 Conceptual Design
      2. 12.7.2 Optimization and Evaluation
      3. 12.7.3 Detailed Design
    8. 12.8 Design as a Selection Exercise
      1. 12.8.1 Overview
      2. 12.8.2 Merit Indices
      3. 12.8.3 The Polymer Selection Example
      4. 12.8.4 Two-Stage Selection
      5. 12.8.5 Constraint Relaxation
      6. 12.8.6 A Naive Approach to Scoring
      7. 12.8.7 A Better Approach to Scoring
      8. 12.8.8 Case Study: Design of a Kettle
      9. 12.8.9 Reducing the Search Space by Classification
    9. 12.9 Failure Mode and Effects Analysis (FMEA)
    10. 12.10 Summary
    11. Further Reading
  15. Chapter 13: Systems for Planning
    1. 13.1 Introduction
    2. 13.2 Classical Planning Systems
    3. 13.3 STRIPS
      1. 13.3.1 General Description
      2. 13.3.2 An Example Problem
      3. 13.3.3 A Simple Planning System in Prolog
    4. 13.4 Considering the Side Effects of Actions
      1. 13.4.1 Maintaining a World Model
      2. 13.4.2 Deductive Rules
    5. 13.5 Hierarchical Planning
      1. 13.5.1 Description
      2. 13.5.2 Benefits of Hierarchical Planning
      3. 13.5.3 Hierarchical Planning with ABSTRIPS
    6. 13.6 Postponement of Commitment
      1. 13.6.1 Partial Ordering of Plans
      2. 13.6.2 The Use of Planning Variables
    7. 13.7 Job-Shop Scheduling
      1. 13.7.1 The Problem
      2. 13.7.2 Some Approaches to Scheduling
    8. 13.8 Constraint-Based Analysis
      1. 13.8.1 Constraints and Preferences
      2. 13.8.2 Formalizing the Constraints
      3. 13.8.3 Identifying the Critical Sets of Operations
      4. 13.8.4 Sequencing in Disjunctive Case
      5. 13.8.5 Sequencing in Nondisjunctive Case
      6. 13.8.6 Updating Earliest Start Times and Latest Finish Times
      7. 13.8.7 Applying Preferences
      8. 13.8.8 Using Constraints and Preferences
    9. 13.9 Replanning and Reactive Planning
    10. 13.10 Summary
    11. Further Reading
  16. Chapter 14: Systems for Control
    1. 14.1 Introduction
    2. 14.2 Low-Level Control
      1. 14.2.1 Open-Loop Control
      2. 14.2.2 Feedforward Control
      3. 14.2.3 Feedback Control
      4. 14.2.4 First- and Second-Order Models
      5. 14.2.5 Algorithmic Control: The PID Controller
      6. 14.2.6 Bang-Bang Control
    3. 14.3 Requirements of High-Level (Supervisory) Control
    4. 14.4 Blackboard Maintenance
    5. 14.5 Time-Constrained Reasoning
      1. 14.5.1 Prioritization of Processes
      2. 14.5.2 Approximation
        1. 14.5.2.1 Approximate Search
        2. 14.5.2.2 Data Approximations
        3. 14.5.2.3 Knowledge Approximations
      3. 14.5.3 Single and Multiple Instantiation
    6. 14.6 Fuzzy Control
    7. 14.7 The BOXES Controller
      1. 14.7.1 The Conventional BOXES Algorithm
      2. 14.7.2 Fuzzy BOXES
    8. 14.8 Neural Network Controllers
      1. 14.8.1 Direct Association of State Variables with Action Variables
      2. 14.8.2 Estimation of Critical State Variables
    9. 14.9 Statistical Process Control (SPC)
      1. 14.9.1 Applications
      2. 14.9.2 Collecting the Data
      3. 14.9.3 Using the Data
    10. 14.10 Summary
    11. Further Reading
  17. Chapter 15: The Future of Intelligent Systems
    1. 15.1 Benefits
    2. 15.2 Trends in Implementation
    3. 15.3 Intelligent Systems and the Internet
    4. 15.4 Ubiquitous Intelligent Systems
    5. 15.5 Conclusion
  18. References