  • Evaluation Approaches for Designers

    When describing how people should conduct an evaluation, you will find that most suggestions can be classified as general principles or approaches, with only a few qualifying as models. General principles describe what evaluators need to do for the results of an evaluation to be deemed credible. For example, clearly defining the purpose of an evaluation and the criteria you will use to judge merit and worth is a general principle all evaluators should follow. These principles will not tell you what the purpose should be or which criteria are most important, only that you need to state the purpose and criteria clearly. Likewise, suggestions for how to conduct an evaluation are usually better classified as an approach (not a model) because they do not prescribe specific methods; they suggest best practices given a potential evaluation purpose or need.

    A model, by contrast, is an example of how something should look; those using a model attempt to replicate it precisely. Evaluation models are prescriptive; they provide detailed steps an evaluator or researcher must take to claim they are using a specific model or design (e.g., randomized controlled trials). However, an evaluator will rarely conduct an evaluation the same way twice. Evaluators may use the steps of a recommended model as guidelines but will not attempt to replicate the procedure exactly. The purpose, goals, context, and constraints of an evaluation will require the evaluator to adapt and revise any proposed model. The goal is not to replicate the model but to credibly adapt and approximate its proposed design.

    Whatever you prefer to call them, the next few chapters describe approaches and models you might use when conducting an evaluation at different phases of the design and development process.

    Pseudo and Quasi-evaluations

    Before presenting any of the commonly used evaluation approaches, you should be aware of two situations that often affect the quality of the evaluation results we obtain and thus the decisions we make. Stufflebeam and Coryn (2014) refer to two types of evaluation we should either avoid or take steps to improve: pseudo-evaluations and quasi-evaluations. Any of the approaches described in this chapter can become a pseudo- or quasi-evaluation. An evaluation may seem well designed but might be compromised in some way.

    Pseudo-evaluation

    Pseudo-evaluations are flawed mainly because they are conducted in such a way as to confirm a predetermined outcome. Some pseudo-evaluations are founded on ill intent; others are inadvertently compromised by stakeholders or limited by unavoidable constraints that restrict our ability to conduct a proper evaluation. Either way, they should be avoided.

    Some examples of the ways an evaluation might become a pseudo-evaluation include:

    • Politically inspired evaluations conducted to support a position or decision that has already been made.
    • Advocacy-based evaluations designed to promote a favored program or solution.
    • Evaluations compromised by stakeholders, or constrained in ways that make a predetermined outcome inevitable.

    Quasi-evaluations 

    In contrast to pseudo-evaluations, quasi-evaluations have less value because they are incomplete or limited in some way by the scope of the evaluation's purpose, the types and sources of data collected, or the criteria used to determine the merit and worth of the evaluand. For every evaluation, an evaluator could ask several different questions and use a variety of criteria. Quasi-evaluations can be beneficial to evaluators, but they often do not provide a complete picture and thus could be improved.

    Some examples of the ways an evaluation might become a quasi-evaluation include:

    • Evaluations whose purpose is limited to a single, narrow question.
    • Evaluations that rely on only one type or source of data.
    • Evaluations that judge merit and worth against an incomplete set of criteria.

    Instructional designers use many different evaluation approaches. However, an evaluator must ensure the evaluations they plan do not become pseudo-evaluations or quasi-evaluations.

    The CIPP Model  

    There are many evaluation approaches and models, and most align well with a specific phase of the ADDIE model. However, Stufflebeam's (2003) Context, Input, Process, and Product (CIPP) model is a comprehensive approach to program evaluation that spans all facets of the design and development process. We present it here; other evaluation approaches are presented in later chapters dedicated to specific phases of the design and development process.

    The CIPP framework is a decision-oriented approach to evaluation. It aims to provide an analytic and rational basis for program decision-making at various stages of a program’s life cycle (i.e., conceptualization, planning, development, implementation, and maintenance). The CIPP model attempts to make evaluation directly relevant to the needs of decision-makers during the phases and activities of a program’s development.

    You cannot apply the CIPP model all at once. Each facet of the model must be applied separately depending on the program’s current stage of development. Each of the four components of the model aligns well with one of the four phases of the ADDIE model (analysis, design, development, and implementation).   

    The CIPP model recommends asking formative questions at the beginning of a program's development and then transitioning to a summative evaluation once the program has been implemented. Guiding questions for each phase include:

    • Context (analysis): What needs to be done?
    • Input (design): How should it be done?
    • Process (development): Is it being done?
    • Product (implementation): Did it succeed?

    The CIPP model is more of a framework than a prescription for completing an evaluation. Detailed descriptions of the CIPP framework can be obtained from various sources. Additional ways to accomplish each component of the CIPP model are provided in subsequent chapters.


    Figure 1: The CIPP model and the ADDIE Instructional Design Phases.

    Chapter Summary

    • Suggestions for how to conduct an evaluation can be classified as general principles, approaches, or models.
    • General principles describe things evaluators need to do so the results of an evaluation are deemed credible.
    • Models are prescriptive and provide specific steps that must be followed. 
    • An approach may approximate a model, but the goal is not to replicate the design.
    • The purpose, goals, context, and constraints of an evaluation will require the evaluator to adapt and revise any proposed model.
    • Pseudo-evaluations should be avoided, as they are conducted to promote a specific predetermined solution. These include politically inspired and advocacy-based evaluations.
    • Quasi-evaluations provide good information, but the value of the findings is limited in some way. Evaluations classified as quasi-evaluations could be improved by expanding the scope of the evaluation and the criteria used to determine merit and worth.
    • The CIPP model is a comprehensive framework that spans each phase of the ADDIE model for developing instruction (i.e., analysis, design, development, and implementation). 
    • The CIPP context phase aligns with the analysis phase.
    • The CIPP input phase aligns with the design phase. 
    • The CIPP process phase aligns with the development phase. 
    • The CIPP product phase aligns with the implementation phase. 

    Discussion Questions

    1. Think of an evaluation you might consider completing. Provide an example of how this evaluation might become a pseudo- or quasi-evaluation. What steps should be taken to avoid this?

    References

    Stufflebeam, D. L., & Coryn, C. L. (2014). Evaluation theory, models, and applications (Vol. 50). John Wiley & Sons.

    Stufflebeam, D. L. (2003). The CIPP model for evaluation. In T. Kellaghan, D. L. Stufflebeam, & L. Wingate (Eds.), Springer international handbooks of education: International handbook of educational evaluation. Kluwer Academic Publishers.
