When describing how people should conduct an evaluation, you will find that most suggestions can be classified as general principles or approaches, with only a few qualifying as models. General principles describe what evaluators must do for the results of an evaluation to be deemed credible. For example, clearly defining the purpose of an evaluation and the criteria you will use to judge merit and worth is a general principle all evaluators should follow. A principle will not tell you what the purpose should be nor which criteria are most important, only that you need to state the purpose and criteria clearly. Likewise, many suggestions for how to conduct an evaluation are better classified as approaches (not models) because they do not prescribe specific methods; they suggest best practices given a particular evaluation purpose or need.
A model, in contrast, is an exemplar of how a thing should look; those using a model attempt to replicate it precisely. Evaluation models are prescriptive: they provide detailed steps an evaluator or researcher must take if they claim to be using a specific model or design (e.g., randomized controlled trials). In practice, however, an evaluator will rarely conduct an evaluation the same way twice. Evaluators may use the steps of a recommended model as guidelines but will not attempt to replicate the procedure exactly. The purpose, goals, context, and constraints of an evaluation will require the evaluator to adapt and revise any proposed model. The goal is not to replicate but to credibly adapt and approximate the model's proposed design.
Whatever you prefer to call them, the next few chapters describe approaches and models you might use when conducting an evaluation at various phases of the design and development process.
Pseudo- and Quasi-evaluations
Before presenting any of the commonly used evaluation approaches, you should be aware of two situations that often affect the quality of the evaluation results we obtain and thus the decisions we make. Stufflebeam and Coryn (2014) refer to two types of evaluations we should either avoid or take steps to improve: pseudo-evaluations and quasi-evaluations. Any of the approaches described in this chapter can become a pseudo- or quasi-evaluation. An evaluation may seem well designed but still be compromised in some way.
Pseudo-evaluations
Pseudo-evaluations are flawed mainly because the evaluation is conducted in a way that confirms a predetermined outcome. Some pseudo-evaluations are founded on ill intent; others are inadvertently compromised by stakeholders or limited by unavoidable constraints that restrict our ability to conduct a proper evaluation. Either way, these evaluations should be avoided.
Some examples of the ways an evaluation might become a pseudo-evaluation include:
- Public Relations Studies – A consumer review process can be valuable in determining the merits and problems of a specific product. However, the evaluation would be categorized as a pseudo-evaluation if the evaluator proposed, for example, a success-case evaluation and only collected positive reviews to make the product look better than it is.
- Politically Mandated Evaluations – An evaluation is often commissioned for political purposes. There may be a legitimate reason for doing the evaluation, and the evaluators may have every intention of conducting a credible evaluation. Still, if the evaluators are denied access to essential information or only use information that leads to a specific recommendation, the evaluation becomes a pseudo-evaluation. A politically mandated evaluation may also be classified as a pseudo-evaluation when those commissioning it wish to avoid making a decision, or wish to make it look like something is being done while having no intention of using the evaluation results.
- Advocacy-based and Pandering Evaluations – Advocacy-based evaluations often become a form of pandering: working toward a predetermined outcome. An evaluation becomes a pseudo-evaluation when the evaluator caters to the client's desire for a result that supports a specific recommendation. So, while advocacy for an important cause may be admirable, advocacy-based evaluation often fits the definition of a pseudo-evaluation because the evaluator, by definition, is promoting a particular viewpoint and pushing for a predetermined set of recommendations.
- Empowerment Evaluation – Building evaluation capacity can be a legitimate objective for an evaluator. An evaluator serving as a consultant may wish to help a company build evaluation capacity and put needed processes in place so the organization can conduct its own evaluations in the future without the evaluator. Stufflebeam and Coryn (2014) warn, however, that this type of evaluation may become a pseudo-evaluation if the evaluator is expected to simply sign off on the evaluation as if it had been completed by the evaluator and not the client.
Quasi-evaluations
In contrast to pseudo-evaluations, quasi-evaluations are less valuable because they are incomplete or limited in some way: by the scope of the evaluation's purpose, the types and sources of data collected, or the criteria used to determine the merit and worth of the evaluand. For any evaluation, an evaluator could ask several different questions and use a variety of criteria. Quasi-evaluations can be beneficial to evaluators, but they often do not provide a complete picture and thus could be improved.
Some examples of the ways an evaluation might become a quasi-evaluation include:
- Limited Data – Some quasi-evaluations are limited because the evaluator has limited or no access to the information needed to conduct a proper evaluation. Ethical issues may constrain the evaluator, or the data may not exist or cannot be obtained directly. When compromises are made, the results may be limited. For instance, an evaluator may not have access to key informants (e.g., young children); so, as a compromise, they ask someone associated with the key informants (e.g., a child's parent). Information from an indirect source may be helpful but is rarely as good as information from a direct source.
- Narrow Set of Evaluation Questions or Criteria – Experimental studies and objectives-oriented evaluations are often described as quasi-evaluations because they tend to answer a limited set of questions (usually only one) and base the evaluation on a single criterion or a limited number of criteria. For example, an assessment (test) is often used to obtain information regarding the degree to which students have accomplished the learning objectives for a course. If this were the only data collected, and the criteria for judging the quality of the course were based solely on student achievement, the evaluation likely could be improved. There are many things an evaluator might consider when judging the quality of a course or the instructor (the ghost in the system); student grades are but one of those things.
- Personal Values – Employing an expert to conduct an evaluation (i.e., connoisseurship) can be an excellent way to produce useful results. We value expert opinion because these individuals have experience and understanding that others may not. However, when the values and criteria a connoisseur uses to judge merit and worth do not align with those of the client (i.e., what the client thinks is important), the evaluation may have limited value.
Instructional designers use many different evaluation approaches. However, an evaluator must ensure the evaluations they plan do not become pseudo-evaluations or quasi-evaluations.
The CIPP Model
There are many evaluation approaches and models. Most align well with a specific phase of the ADDIE model. However, Stufflebeam's (2003) Context, Input, Process, and Product (CIPP) model is a comprehensive approach to program evaluation that spans all facets of the design and development process. We present it here; other evaluation approaches appear in later chapters dedicated to specific phases of the design and development process.
The CIPP framework is a decision-oriented approach to evaluation. It aims to provide an analytic and rational basis for program decision-making at various stages of a program’s life cycle (i.e., conceptualization, planning, development, implementation, and maintenance). The CIPP model attempts to make evaluation directly relevant to the needs of decision-makers during the phases and activities of a program’s development.
You cannot apply the CIPP model all at once. Each facet of the model must be applied separately depending on the program’s current stage of development. Each of the four components of the model aligns well with one of the four phases of the ADDIE model (analysis, design, development, and implementation).
The CIPP model recommends asking formative questions at the beginning of the program's development, then transitioning to a summative evaluation once the program has been implemented. Guiding questions for each phase include:
- Context: What needs to be done? (analysis)
- Input: How should it be done? (design)
- Process: Is it being done? (development)
- Product: Is it succeeding? (implementation, maintenance)
The CIPP model is more of a framework than a prescription for completing an evaluation. Detailed descriptions of the CIPP framework can be obtained from various sources. Additional ways to accomplish each component of the CIPP model are provided in subsequent chapters.
Figure 1. The CIPP model and the ADDIE instructional design phases.
Chapter Summary
- Suggestions for how to conduct an evaluation can be classified as general principles, approaches, or models.
- General principles describe things evaluators need to do so the results of an evaluation are deemed credible.
- Models are prescriptive and provide specific steps that must be followed.
- An approach may approximate a model, but the goal is not to replicate the design.
- The purpose, goals, context, and constraints of an evaluation will require the evaluator to adapt and revise any proposed model.
- Pseudo-evaluations should be avoided because they are conducted to promote a specific predetermined outcome. These include politically mandated and advocacy-based evaluations.
- Quasi-evaluations provide good information, but the value of the findings is limited in some way. Evaluations classified as quasi-evaluations could be improved by expanding the scope of the evaluation and the criteria used to determine merit and worth.
- The CIPP model is a comprehensive framework that spans each phase of the ADDIE model for developing instruction (i.e., analysis, design, development, and implementation).
- The CIPP context phase aligns with the analysis phase.
- The CIPP input phase aligns with the design phase.
- The CIPP process phase aligns with the development phase.
- The CIPP product phase aligns with the implementation phase.
Discussion Questions
- Think of an evaluation you might consider completing. Provide an example of how this evaluation might become a pseudo- or quasi-evaluation. What steps should be taken to avoid this?
References
Stufflebeam, D. L., & Coryn, C. L. (2014). Evaluation theory, models, and applications (Vol. 50). John Wiley & Sons.
Stufflebeam, D. (2003). The CIPP model of evaluation. In T. Kellaghan, D. Stufflebeam, & L. Wingate (Eds.), International handbook of educational evaluation (Springer International Handbooks of Education).