Decision Theory Models for Applications in Artificial Intelligence

Book Description

Decision Theory Models for Applications in Artificial Intelligence: Concepts and Solutions provides an introduction to a range of decision-theoretic techniques, including Markov decision processes (MDPs), partially observable MDPs (POMDPs), influence diagrams, and reinforcement learning, and illustrates their application in artificial intelligence. It also offers insights into the advantages and challenges of using decision theory models to build intelligent systems.
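
To give a concrete flavor of the simplest of these techniques, the short sketch below runs value iteration on a toy two-state Markov decision process. It is an illustrative example only, not taken from the book; the state names, actions, transition probabilities, rewards, and discount factor are invented for the demonstration.

# Minimal sketch (not from the book): value iteration on a toy two-state MDP.
# transitions[state][action] -> list of (probability, next_state, reward);
# all numbers here are made up purely for illustration.
transitions = {
    "low":  {"wait":     [(1.0, "low",  0.0)],
             "recharge": [(1.0, "high", -1.0)]},
    "high": {"wait":     [(0.9, "high", 1.0), (0.1, "low", 1.0)],
             "recharge": [(1.0, "high", -1.0)]},
}
gamma = 0.95                                # discount factor
values = {s: 0.0 for s in transitions}      # initial value function

# Repeated Bellman backups until the value function stops changing noticeably.
for _ in range(1000):
    new_values = {}
    for s, actions in transitions.items():
        new_values[s] = max(
            sum(p * (r + gamma * values[s2]) for p, s2, r in outcomes)
            for outcomes in actions.values()
        )
    converged = max(abs(new_values[s] - values[s]) for s in values) < 1e-6
    values = new_values
    if converged:
        break

# Greedy policy extracted from the converged value function.
policy = {
    s: max(actions, key=lambda a: sum(p * (r + gamma * values[s2])
                                      for p, s2, r in actions[a]))
    for s, actions in transitions.items()
}
print(values, policy)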

Table of Contents

  1. Cover
  2. Title Page
  3. Copyright Page
  4. Editorial Advisory Board and List of Reviewers
    1. List of Reviewers
  5. Foreword
  6. Preface
  7. Acknowledgment
  8. Section 1: Fundamentals
    1. Chapter 1: Introduction
      1. Abstract
      2. Artificial Intelligence and Decision Theory
      3. Decision Theory: Fundamentals
      4. Overview
      5. Final Remarks
    2. Chapter 2: Introduction to Bayesian Networks and Influence Diagrams
      1. ABSTRACT
      2. INTRODUCTION
      3. TYPES OF MODELS
      4. BAYESIAN NETWORKS
      5. DYNAMIC BAYESIAN NETWORKS
      6. INFLUENCE DIAGRAMS
      7. SUMMARY
      8. FURTHER READING
    3. Chapter 3: An Introduction to Fully and Partially Observable Markov Decision Processes
      1. Abstract
      2. INTRODUCTION
      3. MARKOV DECISION PROCESSES
      4. PARTIALLY OBSERVABLE MARKOV DECISION PROCESSES
      5. CHALLENGES FOR DEPLOYMENT IN APPLICATIONS
      6. Conclusion
    4. Chapter 4: An Introduction to Reinforcement Learning
      1. ABSTRACT
      2. INTRODUCTION
      3. SOLUTION TECHNIQUES
      4. SOME RECENT DEVELOPMENTS
      5. ABSTRACTIONS AND HIERARCHIES
      6. ADDITIONAL GUIDANCE
      7. FINAL REMARKS
  9. Section 2: Concepts
    1. Chapter 5: Inference Strategies for Solving Semi-Markov Decision Processes
      1. ABSTRACT
      2. Introduction
      3. AN EM ALGORITHM FOR SMDPs
      4. DISCRETE MODELS WITH GAMMA-DISTRIBUTED TIME
      5. CONCLUSION
    2. Chapter 6: Multistage Stochastic Programming
      1. ABSTRACT
      2. INTRODUCTION
      3. BACKGROUND
      4. THE DECISION MODEL
      5. COMPARISON TO RELATED APPROACHES
      6. PRACTICAL SCENARIO-TREE APPROACHES
      7. MACHINE LEARNING BASED APPROACH
      8. CASE STUDY
      9. TIME INCONSISTENCY AND BOUNDED RATIONALITY LIMITATIONS
      10. CONCLUSION
    3. Chapter 7: Automatically Generated Explanations for Markov Decision Processes
      1. Abstract
      2. Introduction
      3. Background
      4. Automatic Explanations for Markov Decision Processes
      5. Experiments
      6. Evaluation through User Study
      7. Future Research Directions
    4. Chapter 8: Dynamic LIMIDS
      1. ABSTRACT
      2. INTRODUCTION
      3. DYNAMIC LIMITED-MEMORY INFLUENCE DIAGRAMS (DLIMIDS)
      4. REAL-WORLD EXAMPLE: TREATMENT OF CARCINOID TUMORS
      5. DISCUSSION: RELATED MODELS
      6. CONCLUSION AND FUTURE WORK
      7. APPENDIX: PROOFS OF THE THEOREMS
    5. Chapter 9: Relational Representations and Traces for Efficient Reinforcement Learning
      1. ABSTRACT
      2. INTRODUCTION
      3. CONCLUSION AND FUTURE WORK
  10. Section 3: Solutions
    1. Chapter 10: A Decision-Theoretic Tutor for Analogical Problem Solving
      1. ABSTRACT
      2. Introduction
      3. Related Work
      4. The EA-Coach: Overview
      5. Introduction to the EA-Coach Student Model
      6. The EA-Coach Example-Selection Mechanism
      7. Using the EA-Coach Student Model in Assessment Mode
      8. Evaluation of the EA-Coach Decision-Theoretic Approach
      9. Discussion and Future Work
    2. Chapter 11: Dynamic Decision Networks Applications in Active Learning Simulators
      1. Abstract
      2. INTRODUCTION
      3. Active Learning Simulators
      4. DYNAMIC DECISION NETWORKS
      5. PROBABILISTIC RELATIONAL MODELS
      6. Enhanced learning using Intelligent Tutoring Systems in Active Learning Simulators
      7. Probabilistic relational model
      8. DDN application in ALS
      9. Experiment design
      10. Case Study
      11. EVALUATION PROCESS
      12. RESULTS AND DISCUSSION
      13. Conclusion and future work
    3. Chapter 12: An Intelligent Assistant for Power Plant Operation and Training Based on Decision-Theoretic Planning
      1. ABSTRACT
      2. INTRODUCTION
      3. FUNDAMENTALS
      4. CASE STUDY
      5. EXPERIMENTAL RESULTS
      6. DISCUSSION AND CONCLUSION
    4. Chapter 13: POMDP Models for Assistive Technology
      1. ABSTRACT
      2. INTRODUCTION
      3. RELATED WORK
      4. GENERAL MODEL
      5. ACTIVITY MODELS
      6. HEALTH MONITORING AND EMERGENCY RESPONSE
      7. CONCLUSION AND FUTURE WORK
    5. Chapter 14: A Case Study of Applying Decision Theory in the Real World
      1. ABSTRACT
      2. INTRODUCTION
      3. BACKGROUND: SPOKEN DIALOG SYSTEMS
      4. CASTING A SPOKEN DIALOG SYSTEM AS A POMDP
      5. REAL-WORLD POMDP-BASED DIALOG SYSTEMS
      6. CONCLUSION AND OPEN PROBLEMS
    6. Chapter 15: Task Coordination for Service Robots Based on Multiple Markov Decision Processes
      1. Abstract
      2. Introduction
      3. Markov Decision Processes
      4. MDPs with Multiple Actions: No Conflicts
      5. MDPs with Multiple Actions: Solving Policy Conflicts
      6. Analysis
      7. Experimental Results
      8. Conclusion and Future Work
    7. Chapter 16: Applications of DEC-MDPs in Multi-Robot Systems
      1. ABSTRACT
      2. INTRODUCTION
      3. DECENTRALIZED CONTROL IN MULTI-ROBOT SYSTEMS
      4. RESCUE MISSIONS
      5. DECENTRALIZED MARKOV DECISION PROCESSES
      6. 2V-DEC-MDP FOR FLOCKING AND PLATOONING
      7. CONCLUSION
  11. Compilation of References
  12. About the Contributors
  13. Index