Evaluation Theory, Models, and Applications, 2nd Edition

Book description

The gold standard evaluation reference text

Now in its second edition, Evaluation Theory, Models, and Applications is the vital text on evaluation models, well suited both for classroom use as a textbook and for use as a professional evaluation reference. The book begins with an overview of the evaluation field and program evaluation standards, and proceeds to cover the most widely used evaluation approaches. With new evaluation designs and the inclusion of the latest literature from the field, this Second Edition is an essential update for professionals and students who want to stay current. Understanding and choosing evaluation approaches is critical to many professions, and Evaluation Theory, Models, and Applications, Second Edition is the benchmark evaluation guide.

Authors Daniel L. Stufflebeam and Chris L. S. Coryn, widely considered experts in the evaluation field, introduce and describe 23 program evaluation approaches, including four that are new to this edition: transformative evaluation, participatory evaluation, customer feedback evaluation, and meta-analysis. Evaluation Theory, Models, and Applications, Second Edition facilitates the process of planning, conducting, and assessing program evaluations. The highlighted evaluation approaches include:

  • Experimental and quasi-experimental design evaluations

  • Daniel L. Stufflebeam's CIPP Model

  • Michael Scriven's Consumer-Oriented Evaluation

  • Michael Patton's Utilization-Focused Evaluation

  • Robert Stake's Responsive/Stakeholder-Centered Evaluation

  • Case Study Evaluation

Key readings listed at the end of each chapter direct readers to the most important references for each topic. Learning objectives, review questions, student exercises, and instructor support materials complete the collection of tools. Choosing among evaluation approaches can be an overwhelming process, but Evaluation Theory, Models, and Applications, Second Edition updates the core evaluation concepts with the latest research, making this complex field accessible in just one book.

Table of contents

    1. Title Page
    2. Copyright
    3. List of Figures, Tables, and Exhibits
      1. Figures
      2. Tables
      3. Exhibits
    4. Dedication
    5. Preface
    6. Acknowledgments
    7. The Authors
    8. Introduction
      1. Changes to the First Edition
      2. Intended Audience
      3. Overview of the Book's Contents
      4. Study Suggestions
      5. Summary
    9. Part One: Fundamentals of Evaluation
    10. Chapter 1: Overview of the Evaluation Field
      1. What Are Appropriate Objects of Evaluations and Related Subdisciplines of Evaluation?
      2. Are Evaluations Enough to Control Quality, Guide Improvement, and Protect Consumers?
      3. Evaluation as a Profession and Its Relationship to Other Professions
      4. What Is Evaluation?
      5. How Good Is Good Enough? How Bad Is Intolerable? How Are These Questions Addressed?
      6. What Are Performance Standards? How Should They Be Applied?
      7. Why Is It Appropriate to Consider Multiple Values?
      8. Should Evaluations Be Comparative, Noncomparative, or Both?
      9. How Should Evaluations Be Used?
      10. Why Is It Important to Distinguish Between Informal Evaluation and Formal Evaluation?
      11. How Do Service Organizations Meet Requirements for Public Accountability?
      12. What Are the Methods of Formal Evaluation?
      13. What Is the Evaluation Profession, and How Strong Is It?
      14. What Are the Main Historical Milestones in the Evaluation Field's Development?
      15. Summary
      16. Group Exercises
      17. Notes
      18. Suggested Supplemental Readings
    11. Chapter 2: Evaluation Theory
      1. General Features of Evaluation Theories
      2. Theory's Role in Developing the Program Evaluation Field
      3. Functional and Pragmatic Bases of Extant Program Evaluation Theory
      4. A Word About Research Related to Program Evaluation Theory
      5. Program Evaluation Theory Defined
      6. Criteria for Judging Program Evaluation Theories
      7. Theory Development as a Creative Process Subject to Review and Critique by Users
      8. Status of Theory Development in the Program Evaluation Field
      9. Importance and Difficulties of Considering Context in Theories of Program Evaluation
      10. Need for Multiple Theories of Program Evaluation
      11. Hypotheses for Research on Program Evaluation
      12. Potential Utility of Grounded Theories
      13. Potential Utility of Metaevaluations in Developing Theories of Program Evaluation
      14. Program Evaluation Standards and Theory Development
      15. Summary
      16. Group Exercises
      17. Note
      18. Suggested Supplemental Readings
    12. Chapter 3: Standards for Program Evaluations
      1. The Need for Evaluation Standards
      2. Background of Standards for Program Evaluations
      3. Joint Committee Program Evaluation Standards
      4. American Evaluation Association Guiding Principles for Evaluators
      5. Government Auditing Standards
      6. Using Evaluation Standards
      7. Summary
      8. Group Exercises
      9. Notes
      10. Suggested Supplemental Readings
    13. Part Two: An Evaluation of Evaluation Approaches and Models
    14. Chapter 4: Background for Assessing Evaluation Approaches
      1. Evaluation Approaches
      2. Importance of Studying Alternative Evaluation Approaches
      3. The Nature of Program Evaluation
      4. Previous Classifications of Alternative Evaluation Approaches
      5. Caveats
      6. Summary
      7. Group Exercise
      8. Suggested Supplemental Readings
    15. Chapter 5: Pseudoevaluations
      1. Background and Introduction
      2. Approach 1: Public Relations Studies
      3. Approach 2: Politically Controlled Studies
      4. Approach 3: Pandering Evaluations
      5. Approach 4: Evaluation by Pretext
      6. Approach 5: Empowerment Under the Guise of Evaluation
      7. Approach 6: Customer Feedback Evaluation
      8. Summary
      9. Group Exercises
      10. Notes
      11. Suggested Supplemental Readings
    16. Chapter 6: Quasi-Evaluation Studies
      1. Quasi-Evaluation Approaches Defined
      2. Functions of Quasi-Evaluation Approaches
      3. General Strengths and Weaknesses of Quasi-Evaluation Approaches
      4. Approach 7: Objectives-Based Studies
      5. Approach 8: The Success Case Method
      6. Approach 9: Outcome Evaluation as Value-Added Assessment
      7. Approach 10: Experimental and Quasi-Experimental Studies
      8. Approach 11: Cost Studies
      9. Approach 12: Connoisseurship and Criticism
      10. Approach 13: Theory-Based Evaluation
      11. Approach 14: Meta-Analysis
      12. Summary
      13. Group Exercises
      14. Note
      15. Suggested Supplemental Readings
    17. Chapter 7: Improvement- and Accountability-Oriented Evaluation Approaches
      1. Improvement- and Accountability-Oriented Evaluation Defined
      2. Functions of Improvement- and Accountability-Oriented Approaches
      3. General Strengths and Weaknesses of Decision- and Accountability-Oriented Approaches
      4. Approach 15: Decision- and Accountability-Oriented Studies
      5. Approach 16: Consumer-Oriented Studies
      6. Approach 17: Accreditation and Certification
      7. Summary
      8. Group Exercises
      9. Note
      10. Suggested Supplemental Readings
    18. Chapter 8: Social Agenda and Advocacy Evaluation Approaches
      1. Overview of Social Agenda and Advocacy Approaches
      2. Approach 18: Responsive or Stakeholder-Centered Evaluation
      3. Approach 19: Constructivist Evaluation
      4. Approach 20: Deliberative Democratic Evaluation
      5. Approach 21: Transformative Evaluation
      6. Summary
      7. Group Exercises
      8. Suggested Supplemental Readings
    19. Chapter 9: Eclectic Evaluation Approaches
      1. Overview of Eclectic Approaches
      2. Approach 22: Utilization-Focused Evaluation
      3. Approach 23: Participatory Evaluation
      4. Summary
      5. Group Exercises
      6. Suggested Supplemental Readings
    20. Chapter 10: Best Approaches for Twenty-First-Century Evaluations
      1. Selection of Approaches for Analysis
      2. Methodology for Analyzing and Evaluating the Nine Approaches
      3. Our Qualifications as Raters
      4. Conflicts of Interest Pertaining to the Ratings
      5. Standards for Judging Evaluation Approaches
      6. Comparison of 2007 and 2014 Ratings
      7. Issues Related to the 2011 Program Evaluation Standards
      8. Overall Observations
      9. The Bottom Line
      10. Summary
      11. Group Exercises
      12. Notes
      13. Suggested Supplemental Readings
    21. Part Three: Explication of Selected Evaluation Approaches
    22. Chapter 11: Experimental and Quasi-Experimental Design Evaluations
      1. Chapter Overview
      2. Basic Requirements of Sound Experiments
      3. Prospective Versus Retrospective Studies of Cause
      4. Uses of Experimental Design
      5. Randomized Controlled Experiments in Context
      6. Suchman and the Scientific Approach to Evaluation
      7. Contemporary Concepts Associated with the Experimental and Quasi-Experimental Design Approach to Evaluation
      8. Exemplars of Large-Scale Experimental and Quasi-Experimental Design Evaluations
      9. Guidelines for Designing Experiments
      10. Quasi-Experimental Designs
      11. Summary
      12. Group Exercises
      13. Suggested Supplemental Readings
    23. Chapter 12: Case Study Evaluations
      1. Overview of the Chapter
      2. Overview of the Case Study Approach
      3. Case Study Research: The Views of Robert Stake
      4. Case Study Research: The Views of Robert Yin
      5. Particular Case Study Information Collection Methods
      6. Summary
      7. Group Exercises
      8. Suggested Supplemental Readings
    24. Chapter 13: Daniel Stufflebeam's CIPP Model for Evaluation: An Improvement- and Accountability-Oriented Approach
      1. Overview of the Chapter
      2. CIPP Model in Context
      3. Overview of the CIPP Categories
      4. Formative and Summative Uses of Context, Input, Process, and Product Evaluations
      5. Philosophy and Code of Ethics Underlying the CIPP Model
      6. The Model's Values Component
      7. Using the CIPP Framework to Define Evaluation Questions
      8. Delineation of the CIPP Categories and Relevant Procedures
      9. Use of the CIPP Model as a Systems Strategy for Improvement
      10. Summary
      11. Group Exercises
      12. Suggested Supplemental Readings
    25. Chapter 14: Michael Scriven's Consumer-Oriented Approach to Evaluation
      1. Overview of Scriven's Contributions to Evaluation
      2. Scriven's Background
      3. Scriven's Basic Orientation to Evaluation
      4. Scriven's Definition of Evaluation
      5. Critique of Other Persuasions
      6. Formative and Summative Evaluation
      7. Amateur Versus Professional Evaluation
      8. Intrinsic and Payoff Evaluation
      9. Goal-Free Evaluation
      10. Needs Assessment
      11. Scoring, Ranking, Grading, and Apportioning
      12. Checklists
      13. Key Evaluation Checklist
      14. The Final Synthesis
      15. Metaevaluation
      16. Evaluation Ideologies
      17. Avenues to Causal Inference
      18. Product Evaluation
      19. Professionalization of Evaluation
      20. Scriven's Look to Evaluation's Future
      21. Summary
      22. Group Exercises
      23. Notes
      24. Suggested Supplemental Readings
    26. Chapter 15: Robert Stake's Responsive or Stakeholder-Centered Evaluation Approach
      1. Stake's Professional Background
      2. Factors Influencing Stake's Development of Evaluation Theory
      3. Stake's 1967 “Countenance of Educational Evaluation” Article
      4. Responsive Evaluation Approach
      5. Substantive Structure of Responsive Evaluation
      6. Functional Structure of Responsive Evaluation
      7. An Application of Responsive Evaluation
      8. Stake's Recent Rethinking of Responsive Evaluation
      9. Summary
      10. Group Exercises
      11. Note
      12. Suggested Supplemental Readings
    27. Chapter 16: Michael Patton's Utilization-Focused Evaluation
      1. Adherents of Utilization-Focused Evaluation
      2. Some General Aspects of Patton's Utilization-Focused Evaluation
      3. Intended Users of Utilization-Focused Evaluation
      4. Focusing a Utilization-Focused Evaluation
      5. The Personal Factor as Vital to an Evaluation's Success
      6. The Evaluator's Roles
      7. Utilization-Focused Evaluation and Values and Judgments
      8. Employing Active-Reactive-Adaptive Processes to Negotiate with Users
      9. Patton's Eclectic Approach
      10. Planning Utilization-Focused Evaluations
      11. Collecting and Analyzing Information and Reporting Findings
      12. Summary of Premises of Utilization-Focused Evaluation
      13. Strengths of the Utilization-Focused Evaluation Approach
      14. Limitations of the Utilization-Focused Evaluation Approach
      15. Summary
      16. Group Exercises
      17. Note
      18. Suggested Supplemental Readings
    28. Part Four: Evaluation Tasks, Procedures, and Tools
    29. Chapter 17: Identifying and Assessing Evaluation Opportunities
      1. Sources of Evaluation Opportunities
      2. Bidders' Conferences
      3. Summary
      4. Group Exercises
      5. Suggested Supplemental Reading
    30. Chapter 18: First Steps in Addressing Evaluation Opportunities
      1. Developing the Evaluation Team
      2. Developing Thorough Familiarity with the Need for the Evaluation
      3. Stipulating Standards for Guiding and Assessing the Evaluation
      4. Establishing Institutional Support for the Projected Evaluation
      5. Developing the Evaluation Proposal's Appendix
      6. Planning for a Stakeholder Review Panel
      7. Summary
      8. Group Exercise
      9. Suggested Supplemental Readings
    31. Chapter 19: Designing Evaluations
      1. A Design Used for Evaluating the Performance Review System of a Military Organization
      2. Generic Checklist for Designing Evaluations
      3. Summary
      4. Suggested Supplemental Readings
    32. Chapter 20: Budgeting Evaluations
      1. Ethical Imperatives in Budgeting Evaluations
      2. Fixed-Price Budget for Evaluating a Personnel Evaluation System
      3. Other Types of Evaluation Budgets
      4. Generic Checklist for Developing Evaluation Budgets
      5. Summary
      6. Group Exercises
      7. Note
      8. Suggested Supplemental Readings
    33. Chapter 21: Contracting Evaluations
      1. Definitions of Evaluation Contracts and Memorandums of Agreement
      2. Rationale for Evaluation Contracting
      3. Addressing Organizational Contracting Requirements
      4. Negotiating Evaluation Agreements
      5. Evaluation Contracting Checklist
      6. Summary
      7. Group Exercises
      8. Suggested Supplemental Readings
    34. Chapter 22: Collecting Evaluative Information
      1. Key Standards for Information Collection
      2. An Information Collection Framework
      3. Useful Methods for Collecting Information
      4. Summary
      5. Suggested Supplemental Readings
    35. Chapter 23: Analyzing and Synthesizing Information
      1. General Orientation to Analyzing and Synthesizing Information
      2. Principles for Analyzing and Synthesizing Information
      3. Analysis of Quantitative Information
      4. Analysis of Qualitative Information
      5. Justified Conclusions and Decisions
      6. Summary
      7. Group Exercises
      8. Suggested Supplemental Readings
    36. Chapter 24: Communicating Evaluation Findings
      1. Review of Pertinent Analysis and Advice from Previous Chapters
      2. Complex Needs and Challenges in Reporting Evaluation Findings
      3. Establishing Conditions to Foster Use of Findings
      4. Providing Interim Evaluative Feedback
      5. Preparing and Delivering the Final Report
      6. Providing Follow-Up Support to Enhance an Evaluation's Impact
      7. Summary
      8. Group Exercises
      9. Suggested Supplemental Readings
    37. Part Five: Metaevaluation and Institutionalizing and Mainstreaming Evaluation
    38. Chapter 25: Metaevaluation: Evaluating Evaluations
      1. Rationale for Metaevaluation
      2. Evaluator and Client Responsibilities in Regard to Metaevaluation
      3. Formative and Summative Metaevaluations
      4. A Conceptual and Operational Definition of Metaevaluation
      5. An Instructive Metaevaluation Case
      6. Metaevaluation Tasks
      7. Metaevaluation Arrangements and Procedures
      8. Comparative Metaevaluations
      9. Checklists for Use in Metaevaluations
      10. The Role of Context and Resource Constraints
      11. Summary
      12. Group Exercises
      13. Note
      14. Suggested Supplemental Readings
    39. Chapter 26: Institutionalizing and Mainstreaming Evaluation
      1. Review of This Book's Themes
      2. Overview of the Remainder of the Chapter
      3. Rationale and Key Principles for Institutionalizing and Mainstreaming Evaluation
      4. Early Efforts to Help Organizations Institutionalize Evaluation
      5. Recent Advances of Use in Institutionalizing and Mainstreaming Evaluation
      6. Checklist for Use in Institutionalizing and Mainstreaming Evaluation
      7. Summary
      8. Group Exercises
      9. Suggested Supplemental Readings
    40. Glossary
    41. References
    42. Index
    43. End User License Agreement

Product information

    • Title: Evaluation Theory, Models, and Applications, 2nd Edition
    • Author(s): Daniel L. Stufflebeam, Chris L. S. Coryn
    • Release date: October 2014
    • Publisher(s): Jossey-Bass
    • ISBN: 9781118074053