Handbook of Workplace Assessment: Evidence-Based Practices for Selecting and Developing Organizational Talent

Book description

Praise for Handbook of Workplace Assessment

"Wow—what a powerhouse group of authors and topics! This will be my go-to s ource for in-depth information on a broad range of assessment issues."—Wayne F. Cascio, editor, Journal of World Business, and Robert H. Reynolds Chair in Global Leadership, The Business School University of Colorado Denver

"The Handbook of Workplace Assessment is must reading for practitioners, researchers, students, and implementers of assessment programs as we move forward in a global world of work where changes are continuously anticipated in the workforce, design of jobs, economies, legal arena, and technologies."—Sheldon Zedeck, professor of psychology, vice provost of academic affairs and faculty welfare, University of California at Berkeley

"The Handbook of Workplace Assessment is a book you will find yourself reaching for time after time as we all navigate through the demands of attracting, developing, and retaining talent. The authors and editors capture, in practical terms, how companies can effectively leverage assessment techniques to successfully manage talent and achieve business goals."—Jennifer R. Burnett, senior vice president, Global Staffing and Learning Talent Assessment for Selection and Development, Bank of America

"Scott and Reynolds have succeeded in developing a comprehensive yet practical guide to assessment that is sure to be a trusted resource for years to come."—Corey Seitz, vice president, Global Talent Management, Johnson & Johnson

Table of contents

  1. Copyright
  2. Foreword
  3. Preface
    1. The Audience
    2. Overview of the Book
      1. Part One: Framework for Organizational Assessment
      2. Part Two: Assessment for Selection, Promotion, and Development
      3. Part Three: Strategic Assessment Programs
      4. Part Four: Advances, Trends, and Issues
      5. Appendix
    3. Orientation
  4. Acknowledgments
  5. The Editors
  6. The Contributors
  7. I. Framework for Organizational Assessment
    1. 1. INDIVIDUAL DIFFERENCES THAT INFLUENCE PERFORMANCE AND EFFECTIVENESS
      1. 1.1. Two Perspectives for Determining What to Assess
        1. 1.1.1. Work-Oriented Strategies
        2. 1.1.2. Great Eight Competency Model
          1. 1.1.2.1. Drilling Deeper
          2. 1.1.2.2. Inferring Job Requirements
        3. 1.1.3. Person-Oriented Analyses
          1. 1.1.3.1. Cognitive Ability
          2. 1.1.3.2. Personality
          3. 1.1.3.3. Interests and Value Orientations
      2. 1.2. Implications for Assessment in Organizations
    2. REFERENCES
    3. 2. INDICATORS OF QUALITY ASSESSMENT
      1. 2.1. Buy Versus Build
      2. 2.2. Test Construction Considerations
      3. 2.3. Reliability
        1. 2.3.1. Traditional Forms of Reliability
        2. 2.3.2. Modern Forms of Reliability
      4. 2.4. Validity
      5. 2.5. Operational Models for Assessment
        1. 2.5.1. Computer-Administered Tests
          1. 2.5.1.1. Internet Testing: Unproctored and Proctored Testing
          2. 2.5.1.2. Test Security and Types of Cheating
          3. 2.5.1.3. Test Score Reporting
          4. 2.5.1.4. Quality Control
      6. 2.6. Conclusion
    4. REFERENCES
    5. 3. GENERAL COGNITIVE ABILITY
      1. 3.1. Dominant Models of General Cognitive Ability
        1. 3.1.1. General Intelligence Factor Model
        2. 3.1.2. Cattell's Crystallized and Fluid Intelligences
        3. 3.1.3. Carroll's Three-Stratum Theory
      2. 3.2. Cognitive Ability and the World of Work
        1. 3.2.1. Validity Generalization
        2. 3.2.2. Prediction of Training Performance
        3. 3.2.3. Prediction of Job Performance
        4. 3.2.4. Prediction Linearity
        5. 3.2.5. General Versus Specific Cognitive Abilities
        6. 3.2.6. Differences on Cognitive Ability Among Ethnic/Race Demographic Groups
          1. 3.2.6.1. Single-Group Validity, Differential Validity, and Differential Prediction
          2. 3.2.6.2. Altering Item Types to Reduce Mean Ethnic Differences
      3. 3.3. Guiding Practice
        1. 3.3.1. The Diversity-Validity Dilemma
        2. 3.3.2. Content-Validity Considerations in the Selection of Measures
        3. 3.3.3. Validity Generalization and the Uniform Guidelines, Principles, and Standards
      4. 3.4. Conclusion
    6. REFERENCES
    7. 4. PERSONALITY
      1. 4.1. Defining Personality
      2. 4.2. Business Applications of Personality Assessment
        1. 4.2.1. What Kinds of Measures Are Available?
        2. 4.2.2. For What Jobs Is Personality Assessment Most Valuable?
        3. 4.2.3. How Are These Measures Developed?
        4. 4.2.4. For What Purposes Are These Measures Used?
      3. 4.3. How Well Does Personality Assessment Work?
        1. 4.3.1. Framing the Question
          1. 4.3.1.1. The Measurement Problem
          2. 4.3.1.2. The Comparative Validity of Personality
        2. 4.3.2. Predicting Job Performance and More
      4. 4.4. Standard Criticisms of Personality Assessment
        1. 4.4.1. Validity
        2. 4.4.2. Social Desirability
        3. 4.4.3. Faking
      5. 4.5. Future Directions
        1. 4.5.1. Beyond the FFM
        2. 4.5.2. Alternative Measurement Methods
          1. 4.5.2.1. Projective Techniques
          2. 4.5.2.2. Interviews
        3. 4.5.3. Remembering the Goals of Assessment
        4. 4.5.4. Better Theory
          1. 4.5.4.1. Predictor-Criterion Alignment
          2. 4.5.4.2. All Personality Measures Are Not Created Equal
          3. 4.5.4.3. Other Opportunities
      6. 4.6. Last Thoughts
    8. REFERENCES
    9. 5. ASSESSMENT OF BACKGROUND AND LIFE EXPERIENCE
      1. 5.1. Definition of Biodata
      2. 5.2. Validity of Biodata Measures
      3. 5.3. Item-Generation Methods: Advantages and Disadvantages
        1. 5.3.1. Functional Job Analysis Approach
        2. 5.3.2. Mumford and Stokes Item-Generation Method
        3. 5.3.3. Critical Incident Technique
        4. 5.3.4. Retrospective Life Experience Essay and Interview Technique
      4. 5.4. Scale Development Methods: Advantages, Disadvantages, and New Developments
        1. 5.4.1. Inductive (Internal) Strategy of Scale Construction
        2. 5.4.2. External (Empirical) Strategy of Scale Construction
        3. 5.4.3. Item-Level Weighting Approaches
        4. 5.4.4. Item-Response-Level Weighting Approaches
        5. 5.4.5. Deductive (Rational) Strategy of Scale Construction
        6. 5.4.6. Point Method
        7. 5.4.7. Equal or Unit Weighting of Items and Item-Options Method
        8. 5.4.8. Behavior-Based Interview and Rating Scales Method
        9. 5.4.9. Comparison of Inductive, Deductive, and External Strategies
        10. 5.4.10. New Developments in Scale Construction Methods
        11. 5.4.11. Characteristics of Criterion-Valid Items
      5. 5.5. Validity Generalization and Other Factors That Affect Usefulness of Biodata Measures
        1. 5.5.1. Questions to Ask When Evaluating Biodata Measures
      6. 5.6. Conclusion
    10. REFERENCES
    11. 6. KNOWLEDGE AND SKILL
      1. 6.1. Knowledge and Skill Definitions
      2. 6.2. Declarative Knowledge
        1. 6.2.1. Measure Development
        2. 6.2.2. Types of Declarative Knowledge Items
        3. 6.2.3. Operational Considerations
        4. 6.2.4. Psychometric Properties
        5. 6.2.5. Measurement Challenges
        6. 6.2.6. Applications
      3. 6.3. Procedural Knowledge and Skill
        1. 6.3.1. Types of Procedural Knowledge and Skill Measures
          1. 6.3.1.1. Situational Judgment Tests
          2. 6.3.1.2. Work Samples
        2. 6.3.2. Measure Development
          1. 6.3.2.1. Situational Judgment Tests
          2. 6.3.2.2. Work Samples
        3. 6.3.3. Operational Considerations
          1. 6.3.3.1. Situational Judgment Tests
          2. 6.3.3.2. Work Samples
        4. 6.3.4. Psychometric Characteristics
          1. 6.3.4.1. Situational Judgment Tests
          2. 6.3.4.2. Work Samples
        5. 6.3.5. Measurement Challenges
          1. 6.3.5.1. Situational Judgment Tests: What Do They Measure?
          2. 6.3.5.2. Work Samples: How to Obtain Reliable Scores Within Practical Constraints on Time
        6. 6.3.6. Applications
      4. 6.4. Conclusion
    12. REFERENCES
    13. 7. PHYSICAL PERFORMANCE
      1. 7.1. Identifying Physical Job Requirements
        1. 7.1.1. Identification of Essential Tasks
        2. 7.1.2. Classification of Physical Demand
        3. 7.1.3. Impact of Environment on Physical Demand of Job Tasks
        4. 7.1.4. Methods to Quantify Physical Demand
      2. 7.2. Physical Performance Test Design and Selection
        1. 7.2.1. Basic Ability Tests
        2. 7.2.2. Physical Test Reliability
        3. 7.2.3. Validity of Physical Tests
      3. 7.3. Test Scoring and Setting Passing Scores
        1. 7.3.1. Types of Scoring
        2. 7.3.2. Passing Score Determination
      4. 7.4. Physical Test Adverse Impact and Test Fairness
      5. 7.5. Implementation of Physical Tests
      6. 7.6. Litigation Related to Physical Testing
        1. 7.6.1. Title VII of the 1964 Civil Rights Act
        2. 7.6.2. Americans with Disabilities Act of 1990
      7. 7.7. Reduction in Injuries and Lost Time from Work
      8. 7.8. Conclusion
    14. REFERENCES
    15. 8. COMPETENCIES, JOB ANALYSIS, AND THE NEXT GENERATION OF MODELING
      1. 8.1. Historical Links Between Competency Modeling and Job Analysis
      2. 8.2. Building a Basis for Assessment: Comparing and Contrasting Competency Modeling and Job Analysis
        1. 8.2.1. Competency Modeling: A Focus on Individual Characteristics Required for Success
        2. 8.2.2. Competency Modeling: A Focus on Broadly Applicable Individual Characteristic Dimensions
        3. 8.2.3. Competency Modeling: A Clear Link to Strategy
        4. 8.2.4. Competency Modeling: A Coherent Organization Development and Change Emphasis
      3. 8.3. Strategic Competency Modeling and Assessment
      4. 8.4. Conclusion
    16. REFERENCES
  8. II. Assessment for Selection, Promotion, and Development
    1. SO WHERE ARE THE PROMISED, PRACTICAL, AND PROVEN SELECTION TOOLS FOR MANAGERIAL SELECTION AND BEYOND?
      1. 8.5. Notes
    2. REFERENCES
    3. 9. ASSESSMENT FOR TECHNICAL JOBS
      1. 9.1. Background on Technical Jobs
        1. 9.1.1. Work Environment
        2. 9.1.2. Education and Training
        3. 9.1.3. Internal Job Progression
        4. 9.1.4. Labor Organizations
      2. 9.2. Relationships with and Among Stakeholders
        1. 9.2.1. Labor-Management Relationship
        2. 9.2.2. Consortium as Consultant
      3. 9.3. Technical and Practical Considerations
        1. 9.3.1. Validity
        2. 9.3.2. Adverse Impact
        3. 9.3.3. Other Practical Considerations
        4. 9.3.4. Types of Tests Commonly Used for Technical Jobs
          1. 9.3.4.1. Cognitive Ability Tests
          2. 9.3.4.2. Work Sample Test
          3. 9.3.4.3. Personality and Biodata Tests
        5. 9.3.5. Reducing the Adverse Impact of Employment Tests
          1. 9.3.5.1. Combining Cognitive Ability Tests with Noncognitive Inventories
          2. 9.3.5.2. Cutoff Scores Versus Rank Ordering
        6. 9.3.6. Establishing Cutoff Scores
          1. 9.3.6.1. Judgmental Standard Setting
          2. 9.3.6.2. Statistical Standard Setting
          3. 9.3.6.3. Pass Rate Analysis for Standard Setting
      4. 9.4. Selling and Negotiating: Making Employment Tests a Reality
        1. 9.4.1. Identifying Jobs to Be Covered by the Employment Test
        2. 9.4.2. Benefits of a Valid Employment Test for Technical Jobs
        3. 9.4.3. Identifying Tests Acceptable to Management and Labor
        4. 9.4.4. Addressing Concerns of Management and Labor
          1. 9.4.4.1. Line Management Concerns
          2. 9.4.4.2. Concerns of Labor Organizations
        5. 9.4.5. Contributions of Line Management and Labor Organizations
      5. 9.5. Implementing an Employment Test for Technical Jobs
        1. 9.5.1. Influencing Stakeholders
        2. 9.5.2. Formalizing a Selection Policy
          1. 9.5.2.1. Test Security
          2. 9.5.2.2. Retest Periods
          3. 9.5.2.3. Providing Test Results
        3. 9.5.3. Formulating Testing Procedures and Training
      6. 9.6. Defending Employment Tests in a Technical Environment
        1. 9.6.1. External Challenges
        2. 9.6.2. Internal Challenges
        3. 9.6.3. Indirect Challenges
          1. 9.6.3.1. Contractors
          2. 9.6.3.2. Temporary Assignments
      7. 9.7. Recruitment and Employee Development
        1. 9.7.1. Technical Education Partnership
        2. 9.7.2. Career Assessment and Diagnostic Instrument
        3. 9.7.3. Skill Builders
        4. 9.7.4. Practice Tests
      8. 9.8. Conclusion
    4. REFERENCES
    5. 10. ASSESSMENT FOR ADMINISTRATIVE AND PROFESSIONAL JOBS
      1. 10.1. Administrative and Clerical Jobs
        1. 10.1.1. The Value of Assessment
          1. 10.1.1.1. Applicant Pool
          2. 10.1.1.2. Requirements of the Job
        2. 10.1.2. Choosing Assessment Approaches
          1. 10.1.2.1. Assessment Instruments for Clerical Jobs
          2. 10.1.2.2. Validity
        3. 10.1.3. Summary of Assessments for Administrative and Clerical Jobs
        4. 10.1.4. Professional and Technical Jobs
        5. 10.1.5. Assessment Challenges for Professional and Technical Jobs
          1. 10.1.5.1. Problematic Magic Bullet 1: Bachelor's Degree
          2. 10.1.5.2. Problematic Magic Bullet 2: Years of Experience
        6. 10.1.6. Applicant Pool
        7. 10.1.7. Professional and Technical Jobs: To Test or Not to Test?
        8. 10.1.8. Describing the Professional or Technical Job: Job Analysis
        9. 10.1.9. Developing an Assessment Strategy for Professional and Technical Jobs
        10. 10.1.10. Demonstrating Assessment Validity
          1. 10.1.10.1. General Cognitive Ability
          2. 10.1.10.2. Structured Interviews
          3. 10.1.10.3. Training and Experience Measures
        11. 10.1.11. Summary of Assessments for Professional and Technical Jobs
      2. 10.2. Conclusion
    6. REFERENCES
    7. 11. ASSESSMENT FOR SALES POSITIONS
      1. 11.1. A Wide World of Sales Positions
      2. 11.2. Assessments for Selecting for Sales Positions
        1. 11.2.1. Cognitive and Sales Ability Assessment
        2. 11.2.2. Personality Assessments
        3. 11.2.3. Biodata: Background and Experience Measures
      3. 11.3. Issues in Using Assessments for Sales Selection
        1. 11.3.1. Dispersed Locations
        2. 11.3.2. Roles at Odds
        3. 11.3.3. Unproctored Testing
        4. 11.3.4. The Experienced Candidate
      4. 11.4. Special Validation Issues in Sales Selection
        1. 11.4.1. Defining Success as a Salesperson
        2. 11.4.2. Risks and Rewards of Objective, Quantitative Sales Results Criteria
          1. 11.4.2.1. Understanding the Numbers
          2. 11.4.2.2. Predicting Focal Product Sales or All Product Sales
          3. 11.4.2.3. Sales Results and Meeting the Assumptions of Statistics
        3. 11.4.3. Performance Ratings of Sales Success
        4. 11.4.4. Communicating Value to the Sales Leaders
        5. 11.4.5. Challenges of Using Personality Testing in Hiring for Sales Positions
        6. 11.4.6. What the Personality Test Really Measures
        7. 11.4.7. Biodata: Background and Experience Questionnaires
        8. 11.4.8. Consortium Studies
        9. 11.4.9. Summary of Assessments for Sales Positions: The Selection Process
      5. 11.5. Implementation Issues
        1. 11.5.1. Training Dispersed Administrators
        2. 11.5.2. Test Security
      6. 11.6. Making Sales Assessment Work
    8. REFERENCES
    9. 12. ASSESSMENT FOR SUPERVISORY AND EARLY LEADERSHIP ROLES
      1. 12.1. The Business Need
      2. 12.2. Assessment Instruments
        1. 12.2.1. Leadership Theories and Supervisor Selection
        2. 12.2.2. Understanding the Requirements of the Supervisor Job
        3. 12.2.3. Competencies Versus KSAOs
        4. 12.2.4. Choosing the Right Assessment Tool
        5. 12.2.5. Templates for Supervisor Assessment
        6. 12.2.6. Supervisor Selection Assessment Versus Training and Development Assessment
      3. 12.3. Special Considerations for Implementation
      4. 12.4. Managing the Assessment Program
      5. 12.5. Evaluation and Return on Investment
        1. 12.5.1. Job Performance and Productivity
        2. 12.5.2. Employee Attitudes and Turnover
        3. 12.5.3. Third-Party Interventions
      6. 12.6. The Way Forward
    10. REFERENCES
    11. 13. EXECUTIVE AND MANAGERIAL ASSESSMENT
      1. 13.1. Objectives of Executive and Managerial Assessment
      2. 13.2. The Executive and Managerial Population
      3. 13.3. Executive and Managerial Work
      4. 13.4. Assessing Executives Compared to Other Leaders
      5. 13.5. Individual Assessment Tools
        1. 13.5.1. Types of Assessment Tools
        2. 13.5.2. Criteria for Selection of Assessment Tools
          1. 13.5.2.1. Relevance
          2. 13.5.2.2. Informing Individual Development
          3. 13.5.2.3. Acceptance
          4. 13.5.2.4. Legal Defensibility
          5. 13.5.2.5. Efficiency
        3. 13.5.3. Using Assessment Tools with Executives and Managers
          1. 13.5.3.1. Cognitive Tests
          2. 13.5.3.2. Situational Judgment Tests
          3. 13.5.3.3. Personality Inventories
          4. 13.5.3.4. Integrity Tests
          5. 13.5.3.5. Leadership Questionnaires
          6. 13.5.3.6. Motivational Fit Questionnaires
          7. 13.5.3.7. Projective Techniques
          8. 13.5.3.8. Biographical Data
          9. 13.5.3.9. Reference Checks
          10. 13.5.3.10. Career Achievement Records
          11. 13.5.3.11. Interviews
          12. 13.5.3.12. Multisource Ratings
          13. 13.5.3.13. Simulations
      6. 13.6. Designing and Implementing Assessment Systems
        1. 13.6.1. Designing Assessment Systems
        2. 13.6.2. Stakeholder Analysis
        3. 13.6.3. Implementing Assessment Programs
          1. 13.6.3.1. Communications
          2. 13.6.3.2. Staffing Assessment Programs
      7. 13.7. Case Studies
      8. 13.8. Evaluation of Assessment Programs
        1. 13.8.1. Evaluation Stages
          1. 13.8.1.1. Focus
          2. 13.8.1.2. Process
          3. 13.8.1.3. Outcomes
          4. 13.8.1.4. Impact
        2. 13.8.2. The Logical Path Method of Measurement
      9. 13.9. Earning—and Keeping—a Seat at the Executive Table
        1. 13.9.1. Build a Relationship with Senior Managers
        2. 13.9.2. Focus on Business Challenges
        3. 13.9.3. Connect Assessment Results to Organizational Outcomes
        4. 13.9.4. Capitalize on Other Opportunities to Add Value
    12. REFERENCES
    13. 14. THE SPECIAL CASE OF PUBLIC SECTOR POLICE AND FIRE SELECTION
      1. 14.1. Entry-Level Hiring for Police and Fire
        1. 14.1.1. Minimum Qualifications and Prescreens
        2. 14.1.2. Written Cognitive Ability and Related Tests
          1. 14.1.2.1. Reading Ability
          2. 14.1.2.2. Mathematical Ability
          3. 14.1.2.3. Mechanical Comprehension
          4. 14.1.2.4. Short-Term Memory Tests
        3. 14.1.3. Situational Judgment Tests
        4. 14.1.4. Personality Tests
        5. 14.1.5. Biodata
        6. 14.1.6. Nontraditional Tests
        7. 14.1.7. Supplemental Tests
          1. 14.1.7.1. Physical Ability
          2. 14.1.7.2. Minnesota Multiphasic Personality Inventory
          3. 14.1.7.3. References and Background Checks
      2. 14.2. Promotional Testing for Police and Fire
        1. 14.2.1. Minimum Requirements
        2. 14.2.2. Job Knowledge Tests
        3. 14.2.3. Assessment Centers
          1. 14.2.3.1. In-Basket Test
          2. 14.2.3.2. Interviews or Oral Boards
          3. 14.2.3.3. Leaderless Group Discussion
          4. 14.2.3.4. Role Plays
        4. 14.2.4. Performance Appraisal Ratings
        5. 14.2.5. Individual Assessments
        6. 14.2.6. Adverse Impact
          1. 14.2.6.1. Cutoff Scores
          2. 14.2.6.2. Multiple Predictors and Weighting
          3. 14.2.6.3. Techniques for Reducing Adverse Impact
        7. 14.2.7. Alternative Selection Devices
      3. 14.3. Reflections
        1. 14.3.1. Professional Practice Has Been Driven by the Attempt to Reduce Adverse Impact
        2. 14.3.2. Professional Practice Has Been Limited by Security Issues Inherent in High-Stakes Testing
        3. 14.3.3. Impact of Recent Litigation
      4. 14.4. Conclusion
    14. REFERENCES
  9. III. Strategic Assessment Programs
    1. 15. THE ROLE OF ASSESSMENT IN SUCCESSION MANAGEMENT
      1. 15.1. The Contemporary Succession Management Challenge
      2. 15.2. Fundamental 1: Align Succession Management with Business Strategy
      3. 15.3. Fundamental 2: Define Success Holistically for All Levels of Leadership
      4. 15.4. Fundamental 3: Identify Leadership Potential with a Focus on the Ability to Grow
      5. 15.5. Fundamental 4: Accurately Assess Readiness for Leadership at Higher Levels
      6. 15.6. Fundamental 5: Adopt a Creative, Risk-Oriented Approach to Development
      7. 15.7. Fundamental 6: Establish Management Accountabilities with Teeth
      8. 15.8. Conclusion
    2. REFERENCES
    3. 16. ASSESSING THE POTENTIAL OF INDIVIDUALS
      1. 16.1. The Talent Challenge in Organizations
      2. 16.2. Defining Talent and Potential
        1. 16.2.1. Defining Talent
        2. 16.2.2. Defining Potential
      3. 16.3. High-Potential Candidates
      4. 16.4. Key Factors for Identifying Potential
        1. 16.4.1. Cognitive Skills
        2. 16.4.2. Personality Variables
        3. 16.4.3. Learning Variables
        4. 16.4.4. Leadership Skills
        5. 16.4.5. Motivation Variables
        6. 16.4.6. Performance Record
        7. 16.4.7. Other Variables
      5. 16.5. An Integrated Model of Potential
        1. 16.5.1. Foundational Dimensions
        2. 16.5.2. Growth Dimensions
        3. 16.5.3. Career Dimensions
        4. 16.5.4. Common and Specific Dimensions of Potential
      6. 16.6. Useful Assessment Techniques
        1. 16.6.1. Tests and Inventories
          1. 16.6.1.1. Interviews
          2. 16.6.1.2. Multirater Feedback Surveys
          3. 16.6.1.3. Immediate Manager Evaluations
          4. 16.6.1.4. Special Rating Scales
          5. 16.6.1.5. Behavior Samples
          6. 16.6.1.6. Performance Measures
        2. 16.6.2. Individual Psychological Assessment
      7. 16.7. Assessing the Potential of Individuals
      8. 16.8. Special Assessment Issues
        1. 16.8.1. Transparency or Secrecy
        2. 16.8.2. Involvement of Candidates
        3. 16.8.3. Performance Versus Potential
        4. 16.8.4. Internal Versus External Assessment of Potential
        5. 16.8.5. Evaluating Progress and Success
      9. 16.9. A Few Lessons Learned from Experience
    4. REFERENCES
    5. 17. ASSESSMENT FOR ORGANIZATIONAL CHANGE
      1. 17.1. Building the Staffing Model Road Map
        1. 17.1.1. Step 1: Establish Guiding Principles, Policies, and Tools
          1. 17.1.1.1. Guiding Principle 1: Adapt the Staffing Model to Organizational Initiatives
          2. 17.1.1.2. Guiding Principle 2: Ensure Job Relatedness
          3. 17.1.1.3. Guiding Principle 3: Ensure Procedural Justice
          4. 17.1.1.4. Guiding Principle 4: Execute Quickly and Effectively
          5. 17.1.1.5. Guiding Principle 5: Identify and Involve Key Stakeholder Groups
          6. 17.1.1.6. Guiding Principle 6: Build a Rigorous and Fair Assessment and Selection Process
          7. 17.1.1.7. Guiding Principle 7: Review and Audit All Decisions
        2. 17.1.2. Step 2: Develop and Implement a Comprehensive Communication Plan
        3. 17.1.3. Step 3: Identify the Positions Requiring Staffing Decisions
        4. 17.1.4. Step 4: Create Job Requirements for the Targeted Positions
        5. 17.1.5. Step 5: Develop and Validate Assessment Tools
          1. 17.1.5.1. Develop the Assessment Tools
          2. 17.1.5.2. Validate the Assessment Tools
      2. 17.2. Conducting Assessments and Making Selection Decisions
        1. 17.2.1. Conducting Competency-Based Assessments
          1. 17.2.1.1. Rater Calibration
          2. 17.2.1.2. Consensus Ratings
          3. 17.2.1.3. Group Differences Analyses
        2. 17.2.2. Administering Additional Assessments
        3. 17.2.3. Combining Assessment Scores
        4. 17.2.4. Making Selection Decisions
        5. 17.2.5. Audit the Results
      3. 17.3. Conclusion
    6. REFERENCES
    7. 18. GLOBAL APPLICATIONS OF ASSESSMENT
      1. 18.1. Key Challenges in Designing Global Assessments
        1. 18.1.1. What Are the Objectives of the Assessment?
        2. 18.1.2. What Will Be the Nature of the Project Team?
        3. 18.1.3. How Comparable Are Jobs Across Locations?
        4. 18.1.4. How Might the Various Labor Markets Affect the Assessment Process?
        5. 18.1.5. What Legal Issues Should Be Considered?
        6. 18.1.6. How Should Cultural Differences Be Considered in Assessment?
        7. 18.1.7. How Does One Ensure Assessment Tools Are Equivalent Across Countries?
        8. 18.1.8. What Types of Evidence Are Needed to Support Effectiveness on a Global Basis?
      2. 18.2. Key Issues in Implementation
        1. 18.2.1. How Does One Balance the Goal of Efficiencies in Cost, Time, and Technology with Local Needs?
        2. 18.2.2. What Characteristics of the Local Assessment Environment Are Important in Global Implementation?
        3. 18.2.3. How Much Flexibility Should Be Allowed?
        4. 18.2.4. What Should Be the Basis for Score Interpretations?
        5. 18.2.5. What Is Needed to Market the System to Stakeholders Globally?
        6. 18.2.6. What Types of Monitoring Should Be Put in Place?
      3. 18.3. Final Thoughts
    8. REFERENCES
  10. IV. Advances, Trends, and Issues
    1. 19. ADVANCES IN TECHNOLOGY-FACILITATED ASSESSMENT
      1. 19.1. Drivers for Technology-Based Assessment
        1. 19.1.1. Technology Availability
        2. 19.1.2. Business Efficiency
        3. 19.1.3. Better Insight About People
        4. 19.1.4. Strategic Perspective and Impact
      2. 19.2. Applicable Standards, Guidelines, Regulations, and Best Practices
        1. 19.2.1. APA Taskforce Report on Internet-Based Testing
        2. 19.2.2. International Test Commission Guidelines for Internet-Based Testing
        3. 19.2.3. European Union Data Protection and U.S. Safe Harbor Program
        4. 19.2.4. Internet Applicant Rule and Record-Keeping Regulations
      3. 19.3. Applications of Technology to Assessment
        1. 19.3.1. Internet Applications and Screening
        2. 19.3.2. Computer-Based Testing
        3. 19.3.3. Assessment Centers and Simulations
        4. 19.3.4. High-Fidelity Technology-Based Assessments
        5. 19.3.5. Tracking and Administrative Tools
        6. 19.3.6. Research on Technology-Based Assessments
      4. 19.4. Case Studies
        1. 19.4.1. Unproctored Adaptive Testing
        2. 19.4.2. High-Fidelity Testing of Job Candidates
        3. 19.4.3. Portable, Long-Distance Assessment Centers
        4. 19.4.4. User-Constructed Assessment Platforms
      5. 19.5. Common Issues Raised by Technology-Based Assessment
        1. 19.5.1. Assessment Equivalence
        2. 19.5.2. Appropriate Deployment Conditions
        3. 19.5.3. Cultural Adaptation
        4. 19.5.4. Data Security and Privacy
        5. 19.5.5. Software Maintenance
        6. 19.5.6. Integration
      6. 19.6. Future Opportunities and Challenges
        1. 19.6.1. Applications of Social Networking Technology
        2. 19.6.2. Assessments Incorporating Virtual Reality
        3. 19.6.3. Future Challenges
    2. REFERENCES
    3. 20. THE LEGAL ENVIRONMENT FOR ASSESSMENT
      1. 20.1. The Legal Framework
      2. 20.2. The Statutes
        1. 20.2.1. Title VII of the Civil Rights Act of 1964
        2. 20.2.2. Americans with Disabilities Act
        3. 20.2.3. Age Discrimination in Employment Act of 1967
      3. 20.3. Legal and Professional Standards
        1. 20.3.1. Uniform Guidelines on Employee Selection Procedures
        2. 20.3.2. Professional Standards
        3. 20.3.3. Regulatory Agencies
        4. 20.3.4. Private Plaintiffs
        5. 20.3.5. Order of Proof
      4. 20.4. The Current Legal Landscape: Implications for Practice
        1. 20.4.1. Adverse Impact
        2. 20.4.2. Subjectivity in Assessments
        3. 20.4.3. Cutoff Scores
        4. 20.4.4. Less Adverse Alternatives
        5. 20.4.5. The Ricci Decision
        6. 20.4.6. Implications of Ricci
      5. 20.5. Conclusion
      6. 20.6. Notes
    4. 21. VALIDATION STRATEGIES
      1. 21.1. The Rationale for Validity
        1. 21.1.1. Scientific Importance of Validity
        2. 21.1.2. Legal Requirements for Validity
        3. 21.1.3. Business Case for Validity
      2. 21.2. Obtaining Validity Evidence
        1. 21.2.1. Traditional Strategies: Tried-and-True Approaches
          1. 21.2.1.1. Criterion-Related Evidence
          2. 21.2.1.2. Content-Related Evidence
        2. 21.2.2. Alternative Strategies—On and Off the Beaten Path
          1. 21.2.2.1. Leveraging Existing Evidence
          2. 21.2.2.2. Transporting Validation Evidence
          3. 21.2.2.3. Synthetic Validation Evidence
          4. 21.2.2.4. Meta-Analytic Validation Evidence
          5. 21.2.2.5. Developing New Evidence
          6. 21.2.2.6. Conducting Consortium Studies: Sharing the Trouble and the Costs
          7. 21.2.2.7. Validation Evidence Based on Knowledge of What Is Being Measured (Construct Evidence)
      3. 21.3. Selecting a Strategy
        1. 21.3.1. Return on Investment for Validation Research
        2. 21.3.2. A Balancing Act
    5. REFERENCES
    6. 22. ADDRESSING THE FLAWS IN OUR ASSESSMENT DECISIONS
      1. 22.1. The Flawed Classical Selection Model
      2. 22.2. Ethical Problems Created by a Flawed Model
      3. 22.3. Resolving the Moral and Ethical Issues
      4. 22.4. Converting the Job Analysis into a Selection Process
      5. 22.5. Conclusion
    7. REFERENCES
    8. 23. STRATEGIC EVALUATION OF THE WORKPLACE ASSESSMENT PROGRAM
      1. 23.1. The Evaluation Imperative
        1. 23.1.1. Responding Intelligently to the Call for Evidence-Based Management
        2. 23.1.2. Maximizing the Chances of Success on a High-Stakes Investment
        3. 23.1.3. Documenting Evidence to Respond to Legal Challenges
        4. 23.1.4. Demonstrating Value Added to Key Organizational Stakeholders
      2. 23.2. Evaluation for Strategic Decision Making
        1. 23.2.1. Keeping the Focus Strategic
        2. 23.2.2. Devising 7 ± 2 Big-Picture Questions to Guide the Evaluation
      3. 23.3. The Nuts and Bolts of Strategic Evaluation
        1. 23.3.1. Identifying What to Look at
          1. 23.3.1.1. Was the Program Needed in the First Place? Is It Still the Best Solution?
          2. 23.3.1.2. What Are the Quality and Value of the Assessment Program and Tool Design?
          3. 23.3.1.3. How Effectively Is the Program Being Implemented?
          4. 23.3.1.4. How Valuable Are the Strategic Outcomes of the Assessment Program?
          5. 23.3.1.5. Is or Was This Program Worth Implementing?
          6. 23.3.1.6. Is This Program the Best Possible Use of Available Resources?
        2. 23.3.2. Defining How Good Is Good
        3. 23.3.3. Packing the Findings Back Together
      4. 23.4. Challenges and Tensions in the Strategic Evaluation of Assessment Programs
        1. 23.4.1. Getting Buy-In to Strategic Evaluation
        2. 23.4.2. Balancing the Need for Rigor with the Realities of Decision Making
        3. 23.4.3. General Positive Bias
        4. 23.4.4. Investigating Claims of Culture or Gender Bias
        5. 23.4.5. Designing and Conducting Worthwhile Evaluations of Assessment Programs
        6. 23.4.6. Thinking About Utilization from Start to Finish
        7. 23.4.7. Drafting a Skeleton Report with an Informative Structure First
        8. 23.4.8. Who Evaluates the Evaluators?
        9. 23.4.9. Walking the Evaluative Talk
    9. REFERENCES
    10. 24. FINAL THOUGHTS ON THE SELECTION AND ASSESSMENT FIELD
      1. 24.1. The Role of Context
      2. 24.2. Differing Perspectives on the Use of Personality Measures in Selection
        1. 24.2.1. Lack of a Common Specification for Construct Label
        2. 24.2.2. Trait-Performance Relationships Vary Across Jobs and Performance Dimensions
        3. 24.2.3. Lack of Clarity as to the "Right" Personality Dimensions
        4. 24.2.4. The Role of Faking
      3. 24.3. The Cognitive Ability–Adverse Impact Dilemma
      4. 24.4. Job Analysis and Competency Modeling
        1. 24.4.1. Komaki's Call to Action
        2. 24.4.2. Future Prospects for the Selection and Assessment Field
    11. REFERENCES
  11. A. EXAMPLE ASSESSMENTS DESIGNED FOR WORKPLACE APPLICATION
    1. A.1. Key to Type of Construct Covered
    2. A.2. Construct Targeted
    3. A.3. Position Targeted
    4. A.4. Managerial and Leadership Targeted
    5. A.5. Job Analysis Support

Product information

  • Title: Handbook of Workplace Assessment: Evidence-Based Practices for Selecting and Developing Organizational Talent
  • Author(s): John C. Scott, Douglas H. Reynolds (editors)
  • Release date: July 2010
  • Publisher(s): Pfeiffer
  • ISBN: 9780470401316