MODULE 1: FUNDAMENTALS OF MONITORING AND EVALUATION
MODULE 2: MONITORING AND EVALUATION FRAMEWORKS
MODULE 3: MONITORING AND EVALUATION SYSTEM
MODULE 4: MONITORING AND EVALUATION PLANNING
MODULE 5: CONDUCTING AN EVALUATION

Types of Evaluation

Needs Assessment

Needs assessment is a form of formative evaluation that is undertaken to ensure that a program or project activity is feasible, appropriate, and acceptable before it is fully implemented. It is usually conducted when a new program or activity is being developed or when an existing one is being adapted or modified.

It allows for modifications to be made to the plan before full implementation begins and maximizes the likelihood that the program will succeed.


Process/Implementation Evaluation

Process evaluation begins as soon as program implementation starts and continues throughout the life of the project/program, reviewing the activity and output components of the logframe.

It shows how well the program is working and the extent to which the program is being implemented as designed. It also reveals whether the program is accessible and acceptable to its target population.

Process evaluation determines whether project/program activities have been implemented as intended and resulted in certain outputs. Results of a process evaluation will strengthen the ability to report on your program and use the information to improve future activities.

Who, What, When, and Where?

  • To whom did you direct program efforts?
  • What has your program done?
  • When did your program activities take place?
  • Where did your program activities take place?
  • What are the barriers/facilitators to the implementation of program activities?

Outcome Evaluation

Outcome evaluation measures program effects on the target population by assessing the progress in the outcomes or outcome objectives that the program is to achieve.

Some questions you may address with an outcome evaluation include:

  • To what extent has the project improved access to safe and clean water?
  • Has the hygiene and sanitation campaign resulted in increased use of latrines and general cleanliness?
  • How many households use covered water containers?
  • Did the program have any unintended (beneficial or adverse) effects on the target population(s)?
  • Do the benefits of the project justify a continued allocation of resources?

Impact Evaluation

OECD-DAC defines impacts as “positive and negative, primary and secondary long-term effects produced by a development intervention, directly or indirectly, intended or unintended.”


Impact evaluations are conducted at appropriate intervals during the operation of an existing program and at the end of a program. They provide information/evidence about the impacts that have been produced (or are expected to be produced) by an intervention.

An impact evaluation must not only provide credible evidence that changes have occurred but also make a credible causal inference that these changes are at least partly due to the project, program, or policy. It also goes beyond goals and objectives to examine unintended impacts.

Causal inference, which attributes change to the project and its activities, sets impact evaluation apart from other types of evaluation. There are three ways to investigate attribution:

  1. Temporal precedence – the “potential” cause happened before the “effect”. Did the cause happen before the effect?
  2. Covariation of the cause and effect – when the potential cause is present, so is the effect (though the association need not be perfect). Does the effect increase when the cause increases, and vice versa?
  3. No plausible alternative explanations – nothing else could have explained that effect. Are there any other plausible explanations that could explain the effect?

Another way of addressing the issue of attribution in impact evaluation is to ask the counterfactual question, that is, what would have happened if the intervention had not taken place?
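The counterfactual logic can be illustrated with a simple difference-in-differences calculation: compare the before/after change in communities that received the intervention against the change in similar communities that did not. The figures below are hypothetical, chosen purely for illustration.

```python
# Hypothetical survey results: share of households with access to safe water.
treatment_before, treatment_after = 0.40, 0.70    # communities with the project
comparison_before, comparison_after = 0.42, 0.50  # similar communities without it

# Change observed in each group over the same period.
treatment_change = treatment_after - treatment_before      # 0.30
comparison_change = comparison_after - comparison_before   # 0.08

# The comparison group approximates the counterfactual: what would have
# happened without the intervention. The difference between the two
# changes estimates the impact attributable to the project.
attributable_impact = treatment_change - comparison_change

print(f"Attributable impact: {attributable_impact:.2f}")  # → Attributable impact: 0.22
```

In this sketch, only 22 of the 30 percentage points of improvement are attributed to the project; the remaining 8 points reflect changes that also occurred where the project was absent. This assumes the comparison group is a valid stand-in for the counterfactual, which is exactly what the evaluation designs below try to guarantee.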

Different types of impact evaluation are used before and after as well as during project/program implementation:

  • Ex-post impact evaluation gathers evidence about actual impacts.
  • Ex-ante impact evaluation forecasts likely impacts.
  • Impact evaluation during implementation gathers evidence about whether the program is on track to deliver intended impacts.

Impact evaluations differ in their overall intended use:

Formative impact evaluation is used to inform improvements to a project/program or policy, particularly when there is an ongoing policy commitment.

Summative impact evaluation is done to help make decisions about beginning, continuing, or expanding a project/program or policy. A summative evaluation of a closed project/program may be used formatively for a new project/program.

There are a variety of evaluation designs, and the type of evaluation should match the development level of the program or program activity appropriately. The program stage and scope will determine the level of effort and the methods to be used.

Common impact evaluation designs include (covered in module 5):

  • Experimental design
  • Quasi-experimental design
  • Non-experimental design

Impact evaluation can support deeper learning and direction for project scaling and future sustainability.
