
10-step evaluation for training and performance improvement / Seung Youn Chyung.

By: Chyung, Seung Youn
Material type: Text
Language: English
Publication details: Los Angeles : Sage, c2019.
Description: xxiv, 322 pages : illustrations ; 23 cm
Content type:
  • text
Media type:
  • unmediated
Carrier type:
  • volume
ISBN:
  • 9781544323961 (paperback)
LOC classification:
  • HF5549.5 T7C47 2019
Contents:
Chapter 1: Identify an evaluand (Step 1) and its stakeholders (Step 2) -- Chapter 2: Identify the purpose of evaluation (Step 3) -- Chapter 3: Assess evaluation feasibility and risk factors -- Chapter 4: Write a statement of work -- Chapter 5: Develop a program logic model (Step 4) -- Chapter 6: Determine dimensions and importance weighting (Step 5) -- Chapter 7: Determine data collection methods (Step 6) -- Chapter 8: Write an evaluation proposal and get approval -- Chapter 9: Develop data collection instruments I—Self-administered surveys (Step 7) -- Chapter 10: Develop data collection instruments II—Interviews, focus groups, observations, extant data reviews, and tests (Step 7) -- Chapter 11: Collect data (Step 8) -- Chapter 12: Analyze data with rubrics (Step 9) -- Chapter 13: Draw conclusions (Step 10) --
Table 29: Example of dimensions and dimensional evaluation questions (Summative evaluation) -- Table 30: Examples of dimensions with different levels of importance weighting -- Table 31: Identify evaluation type based on dimensions, program logic model categories, and importance weighting -- Table 32: Example worksheet for Step 5: Process for determining dimensions and importance weighting -- Table 33: Comparison of characteristics of experimental and descriptive case study type evaluation designs -- Table 34: Different evaluation design options -- Table 35: Program evaluation conducted with a descriptive case study design -- Table 36: Advantages and disadvantages of different data collection methods -- Table 37: Examples of various results of triangulation -- Table 38: Examples of various data collection methods for four levels of training evaluation -- Table 39: How the SCM’s survey and interviews can be used as part of data collection methods for different dimensions -- Table 40: Example A: Four dimensions with multiple data collection methods -- Table 41: Identify level 3 evaluation based on dimensions and data collection methods -- Table 42: An example worksheet for Step 6: Determine data collection methods (for a new tech support ticketing and tracking system) -- Table 43: Sections to be included in an evaluation proposal and final report -- Table 44: Information to be presented in an evaluation proposal -- Table 45: Comply with IRB requirements, client organization’s requirements, and/or professional guidelines -- Table 46: Materials needed for collecting data -- Table 47: Structured, semi-structured, and unstructured survey questionnaires to be self-administered -- Table 48: Structured, semi-structured, and unstructured one-on-one interviews -- Table 49: Ethical conduct versus unethical conduct during evaluation -- Table 50: Evaluating multiple dimensions with multiple data collection methods and measurement instruments -- Table 51: Guidelines for using or not using rubrics based on significance weighting of each set of data -- Table 52: Hypothetical data file -- Table 53: Consolidating multiple rubrics into one rubric -- Table 54: Dimensional rubric in an experimental study -- Table 55: Using a dimensional triangulation rubric to combine results of multiple sources -- Table 56: Evaluating a new tech support ticketing and tracking system with consolidated dimensional rubrics -- Table 57: Evaluating a new tech support ticketing and tracking system with dimensional triangulation rubrics -- Table 58: Four dimensional results -- Table 59: Qualitative synthesis rubric: Example 1 -- Table 60: Qualitative synthesis rubric: Example 2 -- Table 61: Two examples of incorporating importance weighting (%) to produce weighted results -- Table 62: Possible SWOT results -- Table 63: General recommendations on excluding identifiable information from evaluation reports -- Table 64: Information to be presented in an evaluation final report.
Summary: "Written with a learning-by-doing approach in mind, 10-Step Evaluation for Training and Performance Improvement gives students actionable instruction for identifying, planning, and implementing a client-based program evaluation. The book introduces readers to multiple evaluation frameworks and uses problem-based learning to guide them through a 10-step evaluation process. As students read the chapters, they produce specific deliverables that culminate in a completed evaluation project."--Back cover
Holdings
Item type: Books
Current library: Ladislao N. Diwa Memorial Library Reserve Section
Collection: Non-fiction
Call number: RUS HF5549.5 T7C47 2019
URL: Link to resource
Status: Room use only
Notes: 79443
Barcode: 00080881

