The Methods Lab sought to develop, test, and institutionalise flexible approaches to impact evaluation. Impact or outcome evaluations are undertaken when it is important to know whether, and how well, the objectives of a project or program were met. Impact evaluation is the main means of empirically testing what actually happens when interventions are implemented, and it involves a broad assessment of the total observed results of a program, project or strategy. Like any other evaluation, an impact evaluation should be planned formally and managed as a discrete project, with decision-making processes and management arrangements clearly described from the beginning of the process.

Evaluations undertaken to support learning should be clear about who is intended to learn from them, how those users will be engaged in the evaluation process to ensure it is seen as relevant and credible, and whether there are specific decision points where this learning is expected to be applied. Evaluative reasoning is a requirement of all evaluations, irrespective of the methods or evaluation approach used. A related methodological question is whether it is reasonable to expect different methods to be used to identify the causes of an effect, as compared to the effects of a cause.

A useful starting set of key evaluation questions (KEQs), based on the DAC Criteria for Evaluating Development Assistance, could for example include: Is the initiative delivering on outputs and outcomes as planned? Are the project/program services and activities beneficial to the target population? What methods can be used to do impact evaluation? For an overview of options for answering causal questions, see Overview: Strategies for Causal Attribution, UNICEF Brief 7.

One related capacity-building effort had as its objective to provide an interactive experience, customized to focus on UNICEF’s work and the unique circumstances of conducting impact evaluations of programs and policies in international development.
The evaluation purpose refers to the rationale for conducting an impact evaluation. Clear research questions, for the evaluation of implementation as well as for the impact evaluation itself, are essential to moving forward with design. Documenting stories of impact is a useful approach for developing an understanding of the factors that enhance or impede impact. Sound causal analysis also takes into consideration that other causes may have been involved, for example other programmes/policies in the area of interest or certain contextual factors (often referred to as ‘external factors’). A central question is: did the intervention produce the intended results in the short, medium and long term? Impact questions are not just about discovering how many people’s lives have changed, or how much those people’s lives have changed; for more information on how to apply them, read ‘Maximise Your Impact.’ Examples of evaluation questions can also be drawn from Level 3 (Behaviour) assessments in the Kirkpatrick model, and for an example in process evaluation, see the Norwegian case ‘Electricity savings in households’ (N 3) in Chapter 8.4.

Several further resources are available. The UNICEF-BetterEvaluation Impact Evaluation Webinar Series offers guidance on these topics. One guidance note outlines the basic principles and ideas of impact evaluation, including when, why, how and by whom it should be done. Addressing gender in impact evaluation is a paper for practitioners and evaluators who want to include a genuine focus on gender impact when commissioning or conducting evaluations. The DAC criteria themselves are those of the Organisation for Economic Co-operation and Development – Development Assistance Committee (OECD-DAC).
After reviewing currently available information, it is helpful to create an evaluation matrix (see below) showing which data collection and analysis methods will be used to answer each KEQ, and then to identify and prioritize the data gaps that need to be addressed by collecting new data.