MANAGING AND IMPLEMENTING AN EVALUATION
How do I manage the evaluation process? How do I make sure ethical and conflict sensitivity considerations are addressed in the evaluation process? This section outlines the various dimensions of evaluation management and implementation, from management issues such as logistics and team development to robust data collection and analysis methods, ethics, and conflict sensitivity. In addition, tools and resources developed for evaluations in specific sectors or topics are included here.
I am Interested in Evaluating Peacebuilding Programming in a Particular Sector. Are there Special Considerations and Tools?
Integrated Development and Peacebuilding
-
Bayne, Sarah, and Tony Vaux. “Integrated Peacebuilding and Development Programming: Design, Monitoring and Evaluation.” CCVRI Guidance Series. London: DFID, 2013.
INTERMEDIATE/ADVANCED
This document is one of a series of Practice Products developed under the Conflict, Crime, and Violence Results Initiative (CCVRI). The full set of products is intended to support DFID country offices and their partners to develop better measures of programme results in difficult conflict and fragile environments.
Security Sector Reform
-
Johannsen, Agneta M. “Security Sector Reform Assessment, Monitoring and Evaluation.” In Gender and Security Sector Reform Training Resource Package, edited by Megan Bastick. Geneva: Geneva Centre for the Democratic Control of Armed Forces (DCAF), 2015.
INTERMEDIATE/ADVANCED
These training resources are a companion to the SSR and Gender Toolkit. They are designed for SSR trainers and educators, to help you present material on gender and SSR in an interesting and interactive manner. The Gender and SSR Training Resource Package contains a wide range of exercises, discussion topics and examples from the ground that you can adapt and integrate into your SSR training.
-
Rynn, Simon and Duncan Hiscock. “Evaluating for Security and Justice: Challenges and Opportunities for Improved Monitoring and Evaluation of Security System Reform Programmes.” London: Saferworld, 2009.
INTERMEDIATE
This report brings together the results of a research project on the monitoring and evaluation (M&E) of security system reform (SSR) programmes. It focuses particularly on donor-supported SSR programmes, but with reference throughout to local ownership of and capacity for M&E activities.
Media
-
Költzow, Sarah. “Monitoring and Evaluation of Peacebuilding: The Role of New Media.” Paper 9. Geneva: Geneva Peacebuilding Platform, 2013.
INTERMEDIATE
This study looks at the use of social media, Information and Communication Technologies (ICT) and mobile phones in peacebuilding. It focuses on the potential of new media to improve peacebuilding through supporting monitoring and evaluation.
Transitional Justice
-
Duggan, Colleen. “Show Me Your Impact: Evaluating Transitional Justice in Contested Spaces.” Evaluation and Program Planning (2010), doi:10.1016/j.evalprogplan.2010.11.001.
INTERMEDIATE/ADVANCED
This paper discusses some of the most significant challenges and opportunities for evaluating the effects of programs in support of transitional justice – the field that addresses how post-conflict or post-authoritarian societies deal with legacies of widespread human rights violations.
How do I Collect and Analyze Data?
Evaluation Methods and Data Analysis
-
Brikci, Nouria, and Judith Green. “A Guide to Using Qualitative Research Methodology.” MSF, 2007.
BEGINNER
This practical guide starts with a discussion of what qualitative research is, its aims, uses, and ethical issues, and then explains how to develop qualitative research designs. It also explains how to generate data, with practical tips on how to ask questions, run a discussion, and other key aspects of qualitative research. Finally, a discussion of data management and analysis, along with some practical issues, helps the user make the most of their data.
-
Centers for Disease Control and Prevention (CDC). “Analyzing Qualitative Data for Evaluation.” Evaluation Brief 19, CDC, 2009.
INTERMEDIATE
Relevant for evaluators, this brief focuses on analyzing qualitative data. It includes an overview of qualitative data; how to plan for qualitative data analysis; how to analyze qualitative data; and the advantages and disadvantages of qualitative data.
-
USAID Bureau of Policy, Planning and Learning. “Conducting Mixed-Method Evaluations.” Technical Note, Monitoring and Evaluation Series, Washington DC: USAID, 2013.
INTERMEDIATE
This technical note provides guidance on using mixed methods in evaluation, including concrete advice on how to get the most out of a mixed-method evaluation.
-
Bamberger, Michael. “Introduction to Mixed Methods in Impact Evaluation.” Impact Evaluation Note 3, Washington DC: InterAction, 2012.
BEGINNER
This guidance note explains what a mixed methods impact evaluation design is, what distinguishes this approach from purely quantitative or qualitative impact evaluation designs, and highlights the potential applications and benefits of a mixed methods approach for NGOs. Online annexes are also available, along with two webinars: one with Michael Bamberger (recording, slides, and Q&A) and one on NGO experience (recording, plus slides from the International Rescue Committee and Freedom from Hunger).
-
Resources on sampling are helpful for ensuring representativeness of the data collected:
- BetterEvaluation.org: An easy-to-access introduction to sampling that covers probability sampling (including random sampling), purposive sampling, and convenience sampling, with suggestions for further resources.
- Alexander, Jessica, and John Cosgrave. “Representative Sampling in Humanitarian Evaluation.” Improving the Quality of EHA Evidence Discussion Series Method Note 1. London: ALNAP, 2014.
- Descriptions of purposive sampling and probability sampling from the Encyclopedia of Survey Research Methods.
-
Alexander, Jessica, and Francesca Bonino. “Ensuring Quality of Evidence Generated through Participatory Evaluation in Humanitarian Contexts.” Improving the Quality of EHA Evidence Discussion Series Method Note 3. London: ALNAP, 2014.
INTERMEDIATE
This note presents experience-based lessons about what tactics have been used to ensure accuracy and representativeness of data and analysis generated through participatory approaches and discusses the benefits of participatory evaluations.
What do I need to do throughout the Evaluation Process to Manage the Evaluation?
For Evaluation Managers
-
Willard, Alice. “Managing and Implementing an Evaluation. Guidelines and Tools for Evaluation Managers.” Catholic Relief Services (CRS) and the American Red Cross, 2008.
INTERMEDIATE
This detailed guide provides evaluation managers with practical guidance on how to implement evaluations. The module focuses on what needs to be done throughout the evaluation process to manage the evaluation team and minimize the inevitable disruptions to the project’s own implementation plan.
For Evaluators and Project Team
-
Church, Cheyanne and Mark Rogers. “Evaluation Management.” In Designing for Results: Integrating Monitoring and Evaluation in Conflict Transformation Programs, 137-177. Washington DC: Search for Common Ground, 2006.
BEGINNER/INTERMEDIATE
Here, Church and Rogers provide a step-by-step explanation of the different items that need to be taken into account by project teams and evaluators as they get to the point of implementing an evaluation. The chapter is divided as follows:
- Developing the Terms of Reference
- The evaluation plan
- Frequently asked questions about working with external evaluators
- Strategies for overcoming common evaluation pitfalls
-
Ober, Heidi. “Guidance for Designing, Monitoring and Evaluating Peacebuilding Projects: Using Theories of Change.” London: CARE International UK, 2012.
BEGINNER/INTERMEDIATE
This general guidance contains practical tips on data collection methods in Section 4.6 (pp. 18-19).
-
OECD. “Conducting an Evaluation in Situations of Conflict and Fragility.” In Evaluating Peacebuilding Activities in Settings of Conflict and Fragility: Improving Learning for Results, DAC Guidelines and Reference Series, 57-75. Paris: OECD Publishing, 2012.
INTERMEDIATE
This chapter is specifically dedicated to conducting evaluations in situations of conflict and fragility, and is divided as follows:
- Allow an inception phase
- Identify and assess the theory of change and implementation logic
- Gather data
- Criteria for evaluating
- Draw conclusions and make recommendations
- Reporting
- Management response and follow-up action
- Disseminate findings
- Feed back into programming and engage in learning
What are Ethical and Conflict Sensitivity Issues in Evaluation, and How do I Manage them?
Ethics in Evaluation
-
United Nations Evaluation Group (UNEG). “UNEG Ethical Guidelines for Evaluation.” Foundation Document, UNEG, 2008.
BEGINNER
Applicable to the conduct of evaluation in all UN agencies, the UNEG Ethical Guidelines for Evaluation highlight the importance of ethical conduct in evaluation, which is described as a shared responsibility of all relevant stakeholders. Besides discussing the ethical principles in evaluation, special attention is given to the duties of evaluation managers and evaluation commissioners.
-
American Evaluation Association. “American Evaluation Association Guiding Principles for Evaluators.” 2004.
BEGINNER
Developed in 1994, the Guiding Principles for Evaluators serve as a cornerstone of good evaluation practice. The Guiding Principles are broadly intended to cover all kinds of evaluation and are aimed at guiding its ethical conduct. Additional guidance material on the Guiding Principles for Evaluators is also available from the American Evaluation Association.
-
Duggan, Colleen, and Kenneth Bush. “The Ethical Tipping Points of Evaluators in Conflict Zones.” American Journal of Evaluation 35 (2014): 1-22, doi: 10.1177/1098214014535658.
INTERMEDIATE
This article highlights the specifics of conducting ethical evaluations in settings of conflict and how evaluators can manage the particular challenges that arise in these settings. This article is relevant for both peacebuilding practitioners without evaluation experience and evaluators without prior experience in conflict zones.
-
Chigas, Diana, Madeline Church, and Vanessa Corlazzoli. “Evaluating Impacts of Peacebuilding Interventions: Approaches and Methods, Challenges and Considerations.” CCVRI Guidance Series. London: DFID, 2014.
INTERMEDIATE
This guidance includes a discussion of ethical and conflict sensitivity considerations for evaluations of peacebuilding impacts, including ethical and conflict sensitivity issues raised in different approaches to evaluation.