Machine Learning Proceedings 1995

Table of contents

  1. Front Cover
  2. Machine Learning
  3. Copyright Page
  4. Table of Contents (1/2)
  5. Table of Contents (2/2)
  6. Preface
  7. Advisory Committee
  8. Program Committee
  9. Auxiliary Reviewers
  10. Workshops
  11. Tutorials
  12. PART 1: CONTRIBUTED PAPERS
    1. Chapter 1. On-line Learning of Binary Lexical Relations Using Two-dimensional Weighted Majority Algorithms
      1. ABSTRACT
      2. 1 Introduction
      3. 2 On-line Learning Model for Binary Relations
      4. 3 Two-dimensional Weighted Majority Prediction Algorithms
      5. 4 Experimental Results
      6. 5 Theoretical Performance Analysis
      7. 6 Concluding Remarks
      8. Acknowledgement
      9. References
    2. Chapter 2. On Handling Tree-Structured Attributes in Decision Tree Learning
      1. Abstract
      2. 1 Introduction
      3. 2 Decision Trees With Tree-Structured Attributes
      4. 3 Pre-processing Approaches
      5. 4 A Direct Approach
      6. 5 Analytical Comparison
      7. 6 Experimental Comparison
      8. 7 Summary and Conclusion
      9. Acknowledgement
      10. References
    3. Chapter 3. Theory and Applications of Agnostic PAC-Learning with Small Decision Trees
      1. Abstract
      2. 1 INTRODUCTION
      3. 2 THE AGNOSTIC PAC-LEARNING ALGORITHM T2
      4. 3 EVALUATION OF T2 ON "REAL-WORLD" CLASSIFICATION PROBLEMS
      5. 4 LEARNING CURVES FOR DECISION TREES OF SMALL DEPTH
      6. 5 CONCLUSION
      7. Acknowledgement
      8. References
    4. Chapter 4. Residual Algorithms: Reinforcement Learning with Function Approximation
      1. ABSTRACT
      2. 1 INTRODUCTION
      3. 2 ALGORITHMS FOR LOOKUP TABLES
      4. 3 DIRECT ALGORITHMS
      5. 4 RESIDUAL GRADIENT ALGORITHMS
      6. 5 RESIDUAL ALGORITHMS
      7. 6 STOCHASTIC MDPS AND MODELS
      8. 7 MDPS WITH MULTIPLE ACTIONS
      9. 8 RESIDUAL ALGORITHM SUMMARY
      10. 9 SIMULATION RESULTS
      11. 10 CONCLUSIONS
      12. Acknowledgments
      13. References
    5. Chapter 5. Removing the Genetics from the Standard Genetic Algorithm
      1. Abstract
      2. 1. THE GENETIC ALGORITHM (GA)
      3. 2. FOUR PEAKS: A PROBLEM DESIGNED TO BE GA-FRIENDLY
      4. 3. SELECTING THE GA'S PARAMETERS
      5. 4. POPULATION-BASED INCREMENTAL LEARNING
      6. 5. EMPIRICAL ANALYSIS ON THE FOUR PEAKS PROBLEM
      7. 6. DISCUSSION
      8. 7. CONCLUSIONS
      9. ACKNOWLEDGEMENTS
      10. REFERENCES
    6. Chapter 6. Inductive Learning of Reactive Action Models
      1. Abstract
      2. 1 INTRODUCTION
      3. 2 CONTEXT OF THE LEARNER
      4. 3 ACTIONS AND TELEO-OPERATORS
      5. 4 COLLECTING INSTANCES FOR LEARNING
      6. 5 THE INDUCTIVE LOGIC PROGRAMMING ALGORITHM
      7. 6 EVALUATION
      8. 7 RELATED WORK
      9. 8 FUTURE WORK
      10. Acknowledgements
      11. References
    7. Chapter 7. Visualizing High-Dimensional Structure with the Incremental Grid Growing Neural Network
      1. Abstract
      2. 1 INTRODUCTION
      3. 2 INCREMENTAL GRID GROWING
      4. 3 COMPARISON USING MINIMUM SPANNING TREE DATA
      5. 4 DEMONSTRATION USING REAL-WORLD SEMANTIC DATA
      6. 5 DISCUSSION AND FUTURE WORK
      7. 6 CONCLUSION
      8. References
    8. Chapter 8. Empirical support for Winnow and Weighted-Majority based algorithms: results on a calendar scheduling domain
      1. Abstract
      2. 1 Introduction
      3. 2 The learning problem
      4. 3 Description of the algorithms
      5. 4 Experimental results
      6. 5 Theoretical results
      7. Acknowledgements
      8. References
      9. Appendix
    9. Chapter 9. Automatic Selection of Split Criterion during Tree Growing Based on Node Location
      1. Abstract
      2. 1 DECISION TREE CONSTRUCTION
      3. 2 SITUATIONS IN WHICH ACCURACY IS THE BEST SPLIT CRITERION
      4. 3 IMPLICATIONS FOR TREE-GROWING ALGORITHMS
      5. 4 EMPIRICAL SUPPORT OF THE HYPOTHESIS
      6. 5 FUTURE DIRECTIONS
      7. References
    10. Chapter 10. A Lexically Based Semantic Bias for Theory Revision
      1. Abstract
      2. 1 INTRODUCTION
      3. 2 BACKGROUND
      4. 3 CLARUS
      5. 4 RESULTS
      6. 5 Discussion
      7. 6 CONCLUSION
      8. Acknowledgments
      9. References
    11. Chapter 11. A Comparative Evaluation of Voting and Meta-learning on Partitioned Data
      1. Abstract
      2. 1 Introduction
      3. 2 Common Voting and Statistical Techniques
      4. 3 Meta-learning Techniques
      5. 4 Experiments and Results
      6. 5 Arbiter Tree
      7. 6 Discussion
      8. 7 Concluding Remarks
      9. References
    12. Chapter 12. Fast and Efficient Reinforcement Learning with Truncated Temporal Differences
      1. Abstract
      2. 1 INTRODUCTION
      3. 2 TD-BASED ALGORITHMS
      4. 3 TRUNCATED TEMPORAL DIFFERENCES
      5. 4 EXPERIMENTAL STUDIES
      6. 5 CONCLUSION
      7. Acknowledgements
      8. References
    13. Chapter 13. K*: An Instance-based Learner Using an Entropic Distance Measure
      1. Abstract
      2. 1 INTRODUCTION
      3. 2 ENTROPY AS A DISTANCE MEASURE
      4. 3 K* ALGORITHM
      5. 4 RESULTS
      6. 5 CONCLUSIONS
      7. Acknowledgments
      8. References
    14. Chapter 14. Fast Effective Rule Induction
      1. Abstract
      2. 1 INTRODUCTION
      3. 2 PREVIOUS WORK
      4. 3 EXPERIMENTS WITH IREP
      5. 4 IMPROVEMENTS TO IREP
      6. 5 CONCLUSIONS
      7. References
    15. Chapter 15. Text Categorization and Relational Learning
      1. Abstract
      2. 1 INTRODUCTION
      3. 2 TEXT CATEGORIZATION
      4. 3 AN EXPERIMENTAL TESTBED
      5. 4 THE LEARNING METHOD
      6. 5 EVALUATING THE RELATIONAL ENCODING
      7. 6 RELATION SELECTION
      8. 7 MONOTONICITY CONSTRAINTS
      9. 8 COMPARISON TO OTHER METHODS
      10. 9 CONCLUSIONS
      11. Acknowledgements
      12. References
    16. Chapter 16. Protein Folding: Symbolic Refinement Competes with Neural Networks
      1. Abstract
      2. 1 INTRODUCTION
      3. 2 THE PROTEIN FOLDING DOMAIN
      4. 3 RELATED WORK
      5. 4 KRUST'S SYMBOLIC REFINEMENT
      6. 5 EXPERIMENTAL RESULTS
      7. 6 SUMMARY
      8. References
    17. Chapter 17. A Bayesian Analysis of Algorithms for Learning Finite Functions
      1. Abstract
      2. 1 Introduction
      3. 2 Preliminaries
      4. 3 Algorithms and priors
      5. 4 Approaches to prior and algorithm selection
      6. 5 Discussion and future work
      7. Acknowledgements
      8. References
    18. Chapter 18. Committee-Based Sampling For Training Probabilistic Classifiers
      1. Abstract
      2. 1 INTRODUCTION
      3. 2 BACKGROUND
      4. 3 COMMITTEE-BASED SAMPLING
      5. 4 HMMS AND PART-OF-SPEECH TAGGING
      6. 5 COMMITTEE-BASED SAMPLING FOR HMMS
      7. 6 EXPERIMENTAL RESULTS
      8. 7 CONCLUSIONS
      9. References
    19. Chapter 19. Learning Prototypical Concept Descriptions
      1. Abstract
      2. 1 INTRODUCTION
      3. 2 LEARNING PROTOTYPICAL DESCRIPTIONS
      4. 3 EVALUATION
      5. 4 DISCUSSION AND FUTURE DIRECTIONS
      6. Acknowledgments
      7. References
    20. Chapter 20. A Case Study of Explanation-Based Control
      1. Abstract
      2. 1 INTRODUCTION
      3. 2 THE ACROBOT
      4. 3 THE EBC APPROACH
      5. 4 A CONTROL THEORY SOLUTION
      6. 5 THE EBC SOLUTION
      7. 6 EMPIRICAL EVALUATION
      8. 7 CONCLUSIONS
      9. Acknowledgements
      10. References
    21. Chapter 21. Explanation-Based Learning and Reinforcement Learning: A Unified View
      1. Abstract
      2. 1 Introduction
      3. 2 Methods
      4. 3 Experiments and Results
      5. 4 Discussion
      6. 5 Conclusion
      7. Acknowledgements
      8. References
    22. Chapter 22. Lessons from Theory Revision Applied to Constructive Induction
      1. Abstract
      2. 1 Introduction
      3. 2 Context and Related Work
      4. 3 Demonstrations of Related Work
      5. 4 Theory-Guided Constructive Induction
      6. 5 Experiments
      7. 6 Discussion
      8. References
    23. Chapter 23. Supervised and Unsupervised Discretization of Continuous Features
      1. Abstract
      2. 1 Introduction
      3. 2 Related Work
      4. 3 Methods
      5. 4 Results
      6. 5 Discussion
      7. 6 Summary
      8. References
    24. Chapter 24. Bounds on the Classification Error of the Nearest Neighbor Rule
      1. Abstract
      2. 1 INTRODUCTION
      3. 2 DEFINITIONS AND THEOREMS
      4. 3 DISCUSSION AND CONCLUSION
      5. Acknowledgements
      6. References
    25. Chapter 25. Q-Learning for Bandit Problems
      1. Abstract
      2. 1 INTRODUCTION
      3. 2 BANDIT PROBLEMS
      4. 3 THE GITTINS INDEX
      5. 4 RESTART-IN-STATE-i PROBLEMS AND THE GITTINS INDEX
      6. 5 ON-LINE ESTIMATION OF GITTINS INDICES VIA Q-LEARNING
      7. 6 EXAMPLES
      8. 7 CONCLUSION
      9. Acknowledgements
      10. References
    26. Chapter 26. Distilling Reliable Information From Unreliable Theories
      1. Abstract
      2. 1 INTRODUCTION
      3. 2 IDENTIFYING STABLE EXAMPLES
      4. 3 USING STABILITY TO ELIMINATE NOISE
      5. 4 RESULTS
      6. 5 DISCUSSION
      7. Acknowledgements
      8. References
    27. Chapter 27. A Quantitative Study of Hypothesis Selection
      1. Abstract
      2. 1 Introduction
      3. 2 The Hypothesis Selection Problem
      4. 3 PAO Algorithms for Hypothesis Selection
      5. 4 Trading Off Exploitation and Exploration
      6. 5 Implication to Probabilistic Hill-Climbing
      7. 6 Related Work
      8. 7 Conclusion
      9. Acknowledgements
      10. References
    28. Chapter 28. Learning proof heuristics by adapting parameters
      1. Abstract
      2. 1 INTRODUCTION
      3. 2 FUNDAMENTALS
      4. 3 LEARNING PARAMETERS WITH A GA
      5. 4 THE UKB-PROCEDURE
      6. 5 DESIGNING A FITNESS FUNCTION
      7. 6 EXPERIMENTAL RESULTS
      8. 7 DISCUSSION
      9. Acknowledgements
      10. References
    29. Chapter 29. Efficient Algorithms for Finding Multi-way Splits for Decision Trees
      1. Abstract
      2. 1 Introduction
      3. 2 Computing Multi-Split Partitions
      4. 3 Experiments
      5. 4 Conclusion
      6. Acknowledgements
      7. References
    30. Chapter 30. Ant-Q: A Reinforcement Learning approach to the traveling salesman problem
      1. Abstract
      2. 1 INTRODUCTION
      3. 2 THE ANT-Q FAMILY OF ALGORITHMS
      4. 3 AN EXPERIMENTAL COMPARISON OF ANT-Q ALGORITHMS
      5. 4. TWO INTERESTING PROPERTIES OF ANT-Q
      6. 5 COMPARISONS WITH OTHER HEURISTICS AND SOME RESULTS ON DIFFICULT PROBLEMS
      7. 6 CONCLUSIONS
      8. Acknowledgements
      9. References
    31. Chapter 31. Stable Function Approximation in Dynamic Programming
      1. Abstract
      2. 1 INTRODUCTION AND BACKGROUND
      3. 2 DEFINITIONS AND BASIC THEOREMS
      4. 3 MAIN RESULTS: DISCOUNTED PROCESSES
      5. 4 NONDISCOUNTED PROCESSES
      6. 5 CONVERGING TO WHAT
      7. 6 EXPERIMENTS: HILL-CAR THE HARD WAY
      8. 7 CONCLUSIONS AND FURTHER RESEARCH
      9. References
    32. Chapter 32. The Challenge of Revising an Impure Theory
      1. Abstract
      2. 1 Introduction
      3. 2 Framework
      4. 3 Computational Complexity
      5. 4 Prioritizing Default Theories
      6. 5 Conclusion
      7. References
    33. Chapter 33. Symbiosis in Multimodal Concept Learning
      1. Abstract
      2. 1 INTRODUCTION
      3. 2 NICHE TECHNIQUES
      4. 3 SYSTEM OVERVIEW
      5. 4 INDIVIDUAL AND GROUP OPERATORS
      6. 5 FITNESS FUNCTION
      7. 6 COMPARISONS TO OTHER SYSTEMS
      8. 7 RESULTS
      9. 8 CONCLUSIONS
      10. Acknowledgements
      11. References
    34. Chapter 34. Tracking the Best Expert
      1. Abstract
      2. 1 INTRODUCTION
      3. 2 PRELIMINARIES
      4. 3 THE ALGORITHMS
      5. 4 FIXED SHARE ANALYSIS
      6. 5 VARIABLE SHARE ANALYSIS
      7. 6 EXPERIMENTAL RESULTS
      8. References
    35. Chapter 35. Reinforcement Learning by Stochastic Hill Climbing on Discounted Reward
      1. Abstract
      2. 1 Introduction
      3. 2 Domain
      4. 3 Difficulties of Q-learning
      5. 4 Hill Climbing for Reinforcement Learning
      6. 5 Experiments
      7. 6 Discussion
      8. 7 Conclusion
      9. Appendix
      10. References
    36. Chapter 36. Automatic Parameter Selection by Minimizing Estimated Error
      1. Abstract
      2. 1 Introduction
      3. 2 The Parameter Selection Problem
      4. 3 The Wrapper Method
      5. 4 Automatic Parameter Selection for C4.5
      6. 5 Experiments with C4.5-AP
      7. 6 Related Work
      8. 7 Conclusion
      9. Acknowledgments
      10. References
    37. Chapter 37. Error-Correcting Output Coding Corrects Bias and Variance
      1. Abstract
      2. 1 Introduction
      3. 2 Definitions and Previous Work
      4. 3 Decomposing the Error Rate into Bias and Variance Components
      5. 4 ECOC and Voting
      6. 5 ECOC Reduces Variance and Bias
      7. 6 Bias Differences are Caused by Non-Local Behavior
      8. 7 Discussion and Conclusions
      9. Acknowledgements
      10. References
    38. Chapter 38. Learning to Make Rent-to-Buy Decisions with Systems Applications
      1. Abstract
      2. 1 Introduction
      3. 2 Definitions and Main Analytical Results
      4. 3 Algorithm Ae
      5. 4 Analysis
      6. 5 Adaptive Disk Spindown and Rent-to-Buy
      7. 6 Experimental Results
      8. Acknowledgements
      9. References
    39. Chapter 39. NewsWeeder: Learning to Filter Netnews
      1. Abstract
      2. 1 INTRODUCTION
      3. 2 APPROACH
      4. 3 RESULTS
      5. 4 CONCLUSION
      6. 5 FUTURE WORK
      7. Acknowledgments
      8. References
    40. Chapter 40. Hill Climbing Beats Genetic Search on a Boolean Circuit Synthesis Problem of Koza's
      1. Abstract
      2. 1 Introduction
      3. 2 Genetic Programming
      4. 3 GP vs RGAT
      5. 4 Hill Climbing
      6. 5 Interpretation and Speculation
      7. 6 References
    41. Chapter 41. Case-Based Acquisition of Place Knowledge
      1. Abstract
      2. 1. Introduction and Basic Concepts
      3. 2. The Evidence Grid Representation
      4. 3. Case-Based Recognition of Places
      5. 4. Case-Based Learning of Places
      6. 5. Experiments with Place Learning
      7. 6. Related Work on Spatial Learning
      8. 7. Directions for Future Work
      9. Acknowledgements
      10. References
    42. Chapter 42. Comparing Several Linear-threshold Learning Algorithms on Tasks Involving Superfluous Attributes
      1. Abstract
      2. 1 INTRODUCTION
      3. 2 THE LEARNING TASKS
      4. 3 THE ALGORITHMS
      5. 4 DESCRIPTION OF THE PLOTS
      6. 5 CHECKING PROCEDURES
      7. 6 OBSERVATIONS
      8. 7 CONCLUSION
    43. Chapter 43. Learning policies for partially observable environments: Scaling up
      1. Abstract
      2. 1 INTRODUCTION
      3. 2 PARTIALLY OBSERVABLE MARKOV DECISION PROCESSES
      4. 3 SOME SOLUTION METHODS FOR POMDP's
      5. 4 HANDLING LARGER POMDP's: A HYBRID APPROACH
      6. 5 MORE ADVANCED REPRESENTATIONS
      7. References
    44. Chapter 44. Increasing the performance and consistency of classification trees by using the accuracy criterion at the leaves
      1. Abstract
      2. 1 Introduction and Outline
      3. 2 Comparison of accuracy characteristics of split criteria
      4. 3 Revised Tree Growing Strategy
      5. 4 Empirical Results with revised strategy
      6. Acknowledgements
      7. References
    45. Chapter 45. Efficient Learning with Virtual Threshold Gates
      1. Abstract
      2. 1 Introduction
      3. 2 Preliminaries
      4. 3 The Winnow algorithms
      5. 4 Efficient On-line Learning of Simple Geometrical Objects When Dimension is Variable
      6. 5 Efficient On-line Learning of Simple Geometrical Objects When Dimension is Fixed
      7. 6 Conclusions
      8. Acknowledgements
      9. References
    46. Chapter 46. Instance-Based Utile Distinctions for Reinforcement Learning with Hidden State
      1. Abstract
      2. 1 INTRODUCTION
      3. 2 UTILE SUFFIX MEMORY
      4. 3 DETAILS OF THE ALGORITHM
      5. 4 EXPERIMENTAL RESULTS
      6. 5 RELATED WORK
      7. 6 DISCUSSION
      8. Acknowledgments
      9. References
    47. Chapter 47. Efficient Learning from Delayed Rewards through Symbiotic Evolution
      1. Abstract
      2. 1 Introduction
      3. 2 Neuro-Evolution
      4. 3 Symbiotic Evolution
      5. 4 The SANE Method
      6. 5 The Inverted Pendulum Problem
      7. 6 Population Dynamics in SANE
      8. 7 Related Work
      9. 8 Extending SANE
      10. 9 Conclusion
      11. Acknowledgments
      12. References
    48. Chapter 48. Free to Choose: Investigating the Sample Complexity of Active Learning of Real Valued Functions
      1. Abstract
      2. 1 INTRODUCTION
      3. 2 MODEL AND PRELIMINARIES
      4. 3 COLLECTING EXAMPLES: SAMPLING STRATEGIES
      5. 4 EXAMPLE 1: MONOTONIC FUNCTIONS
      6. 5 EXAMPLE 2: A CLASS WITH BOUNDED FIRST DERIVATIVE
      7. 6 CONCLUSIONS AND EXTENSIONS
      8. Acknowledgements
      9. References
    49. Chapter 49. On learning Decision Committees
      1. Abstract
      2. 1 Introduction
      3. 2 Definitions and theoretical results
      4. 3 Learning by DC{-i,0,i}: the IDC algorithm
      5. 4 Experiments
      6. 5 Discussion
      7. References
    50. Chapter 50. Inferring Reduced Ordered Decision Graphs of Minimum Description Length
      1. Abstract
      2. 1 INTRODUCTION
      3. 2 DECISION TREES AND DECISION GRAPHS
      4. 3 MANIPULATING DISCRETE FUNCTIONS USING RODGS
      5. 4 MINIMUM MESSAGE LENGTH AND ENCODING OF RODGS
      6. 5 DERIVING AN RODG OF MINIMAL COMPLEXITY
      7. 6 EXPERIMENTS
      8. 7 CONCLUSIONS AND FUTURE WORK
      9. References
    51. Chapter 51. On Pruning and Averaging Decision Trees
      1. Abstract
      2. 1 INTRODUCTION
      3. 2. OPTIMAL PRUNING
      4. 3 TREE AVERAGING
      5. 4 WEIGHTS FOR DECISION TREES
      6. 5 COMPLEXITY OF FANNING
      7. 6 COMPARISON OF AVERAGING AND PRUNING
      8. 7 DISCUSSION
      9. 8 FANNING OVER GRAPHS AND PRODUCTION RULES
      10. 9 CONCLUSION
      11. References
    52. Chapter 52. Efficient Memory-Based Dynamic Programming
      1. Abstract
      2. 1 INTRODUCTION
      3. 2 MEMORY-BASED APPROACH
      4. 3 EXPERIMENTAL DEMONSTRATION
      5. 4 DISCUSSION
      6. 5 CONCLUSION
      7. Acknowledgements
      8. References
    53. Chapter 53. Using Multidimensional Projection to Find Relations
      1. Abstract
      2. 1 MOTIVATION
      3. 2 BASIC NOTIONS: RELATION AND PROJECTION
      4. 3 MULTIDIMENSIONAL RELATIONAL PROJECTION
      5. 4 A PROTOTYPE IMPLEMENTATION: MRP
      6. 5 EXPERIMENTAL RESULTS
      7. 6 RELATED RESEARCH
      8. 7 CONCLUSIONS
      9. Acknowledgements
      10. References
    54. Chapter 54. Compression-Based Discretization of Continuous Attributes
      1. Abstract
      2. 1 INTRODUCTION
      3. 2 AN MDL MEASURE FOR DISCRETIZED ATTRIBUTES
      4. 3 ALGORITHMIC USAGE
      5. 4 EXPERIMENTS AND EMPIRICAL RESULTS
      6. 5 CONCLUSIONS AND FURTHER RESEARCH
      7. Acknowledgements
      8. References
    55. Chapter 55. MDL and Categorical Theories (Continued)
      1. Abstract
      2. 1 INTRODUCTION
      3. 2 CLASS DESCRIPTION THEORIES AND MDL
      4. 3 AN ANOMALY AND A PREVIOUS SOLUTION
      5. 4 A NEW SOLUTION
      6. 5 APPLYING THE SCHEME TO C4.5RULES
      7. 6 RELATED RESEARCH
      8. 7 CONCLUSION
      9. References
    56. Chapter 56. For Every Generalization Action, Is There Really an Equal and Opposite Reaction? Analysis of the Conservation Law for Generalization Performance
      1. Abstract
      2. 1 INTRODUCTION
      3. 2 CONSERVATION LAW REVISITED
      4. 3 AN ALTERNATE MEASURE OF GENERALIZATION
      5. 4 DISCUSSION
      6. Acknowledgments
      7. References
    57. Chapter 57. Active Exploration and Learning in Real-Valued Spaces using Multi-Armed Bandit Allocation Indices
      1. Abstract
      2. 1 Introduction and Motivation
      3. 2 Combining Classification Tree Algorithms with Gittins Indices
      4. 3 The Grasping Task
      5. 4 Discussion
      6. 5 Conclusion
      7. Acknowledgments
      8. References
    58. Chapter 58. Discovering Solutions with Low Kolmogorov Complexity and High Generalization Capability
      1. Abstract
      2. 1 INTRODUCTION
      3. 2 BASIC CONCEPTS
      4. 3 PROBABILISTIC SEARCH
      5. 4 "SIMPLE" NEURAL NETS
      6. 5 INCREMENTAL LEARNING
      7. 6 ACKNOWLEDGEMENTS
      8. References
    59. Chapter 59. A Comparison of Induction Algorithms for Selective and non-Selective Bayesian Classifiers
      1. Abstract
      2. 1 INTRODUCTION
      3. 2 NAIVE BAYESIAN CLASSIFIERS
      4. 3 BAYESIAN NETWORK CLASSIFIERS
      5. 5 DISCUSSION
      6. 6 RELATED WORK
      7. 7 CONCLUSION
      8. Acknowledgement
      9. References
    60. Chapter 60. Retrofitting Decision Tree Classifiers Using Kernel Density Estimation
      1. Abstract
      2. 1. INTRODUCTION
      3. 2 A REVIEW OF KERNEL DENSITY ESTIMATION
      4. 3 CLASSIFICATION WITH KERNEL DENSITY ESTIMATES
      5. 4 DECISION TREE DENSITY ESTIMATORS
      6. 5 DETAILS ON DECISION TREE DENSITY ESTIMATORS
      7. 6 EXPERIMENTAL RESULTS
      8. 7 RELATED WORK, EXTENSIONS, AND DISCUSSION
      9. 8 CONCLUSION
    61. Chapter 61. Automatic Speaker Recognition: An Application of Machine Learning
      1. Abstract
      2. 1 INTRODUCTION
      3. 2 PREPROCESSING
      4. 3 SPEAKER CLASSIFICATION
      5. 4 EXPERIMENTAL RESULTS
      6. 5 CONCLUSION
      7. Acknowledgments
      8. References
    62. Chapter 62. An Inductive Learning Approach to Prognostic Prediction
      1. Abstract
      2. 1 INTRODUCTION
      3. 2 RECURRENCE SURFACE APPROXIMATION
      4. 3 CLINICAL APPLICATION
      5. 4 CONCLUSIONS AND FUTURE WORK
    63. Chapter 63. TD Models: Modeling the World at a Mixture of Time Scales
      1. Abstract
      2. 1 Multi-Scale Planning and Modeling
      3. 2 Reinforcement Learning
      4. 3 The Prediction Problem
      5. 4 A Generalized Bellman Equation
      6. 5 n-Step Models
      7. 6 Intermixing Time Scales
      8. 7 β-Models
      9. 8 Theoretical Results
      10. 9 TD(λ) Learning of β-models
      11. 10 A Wall-Following Example
      12. 11 A Hidden-State Example
      13. 12 Adding Actions (Future Work)
      14. 13 Conclusions
      15. Acknowledgments
      16. References
    64. Chapter 64. Learning Collection Fusion Strategies for Information Retrieval
      1. Abstract
      2. 1 INTRODUCTION
      3. 2 UNDERPINNINGS
      4. 3 LEARNING COLLECTION FUSION STRATEGIES
      5. 4 EXPERIMENTS
      6. 5 DISCUSSION AND CONCLUSIONS
      7. References
    65. Chapter 65. Learning by Observation and Practice: An Incremental Approach for Planning Operator Acquisition
      1. Abstract
      2. 1 Introduction
      3. 2 Learning architecture overview
      4. 3 Issues of learning planning operators
      5. 4 Learning algorithm descriptions
      6. 5 Empirical results and analysis
      7. Acknowledgements
      8. References
    66. Chapter 66. Learning with Rare Cases and Small Disjuncts
      1. Abstract
      2. 1. INTRODUCTION
      3. 2. BACKGROUND
      4. 3. WHY ARE SMALL DISJUNCTS SO ERROR PRONE?
      5. 4. THE PROBLEM DOMAINS
      6. 5. THE EXPERIMENTS
      7. 6. RESULTS AND DISCUSSION
      8. 7. FUTURE RESEARCH
      9. 8. CONCLUSION
      10. Acknowledgements
      11. References
    67. Chapter 67. Horizontal Generalization
      1. Abstract
      2. 1 INTRODUCTION
      3. 2 FAN GENERALIZERS
      4. 3 COMPUTER EXPERIMENTS
      5. 4 GENERAL COMMENTS ON FG's
      6. Acknowledgements
      7. References
    68. Chapter 68. Learning Hierarchies from Ambiguous Natural Language Data
      1. Abstract
      2. 1 Introduction
      3. 2 Background
      4. 3 Learning Translation Rules with FOCL
      5. 4 Learning a Semantic Hierarchy from scratch
      6. 5 Updating an existing hierarchy
      7. 7 Limitation
      8. 8 Related Work
      9. 9 Conclusion
      10. Acknowledgement
      11. References
  13. PART 2: INVITED TALKS
    1. Chapter 69. Machine Learning and Information Retrieval
    2. Chapter 70. Learning With Bayesian Networks
      1. References
    3. Chapter 71. Learning for Automotive Collision Avoidance and Autonomous Control
  14. Author Index

Product information

  • Title: Machine Learning Proceedings 1995
  • Author(s): Armand Prieditis, Stuart Russell
  • Release date: January 2016
  • Publisher(s): Morgan Kaufmann
  • ISBN: 9781483298665