Evaluation Methods

There are many different methods and processes that can be used in monitoring and evaluation (M&E). Use the search field and filtering on this page to find the methods you are interested in or browse our extensive list.

If you'd like help finding methods that can be used for specific tasks in evaluation (like collecting data or creating reporting materials), please visit the Rainbow Framework.


Key informant attribution

A method for testing causal reasoning by asking key informants.

Process tracing

Process tracing is a case-based and theory-driven method for causal inference that applies specific types of tests to assess the strength of evidence for concluding that an intervention contributed to an observed outcome.

Validation workshop

A validation workshop is a meeting that brings together evaluators and key stakeholders to review an evaluation's findings.

Feasibility

Feasibility refers to ensuring that an evaluation can be realistically and effectively implemented, considering factors such as practicality, resource use, and responsiveness to the program context.

Rigour

Rigour involves using systematic, transparent processes to produce valid findings and conclusions.

Systematic inquiry

Systematic inquiry involves thorough, methodical, contextually relevant and empirical inquiry into evaluation questions.

Accessibility

Accessibility of evaluation products includes consideration of the format and access options for reports, including plain language, inclusive print design, and material in multiple languages.

Ethical guidelines

Ethical guidelines are designed to guide ethical behaviour and decision-making throughout evaluation practice.

Common good and equity

Consideration of common good and equity involves an evaluation going beyond using only the values of evaluation stakeholders to develop an evaluative framework, to also consider the common good and equity more broadly.

Independence

Independence can include organisational independence, where an evaluator or evaluation team can independently set a work plan and finalise reports without undue interference, and behavioural independence, where evaluators report findings honestly and without undue influence.

Utility

Utility standards are intended to increase the extent to which program stakeholders find evaluation processes and products valuable in meeting their needs.

Credibility

Credibility refers to the trustworthiness of the evaluation findings, achieved through high-quality evaluation processes, especially rigour, integrity, competence, and the inclusion of diverse perspectives.

Strengthening national evaluation capacities

Strengthening national evaluation capacities refers to the ways in which an evaluation can have broader value beyond a single evaluation report by increasing national capacities.

Integrity

Integrity refers to ensuring honesty, transparency, and adherence to ethical behaviour by all those involved in the evaluation process.

Human rights and gender equality

Human rights and gender equality refer to the extent to which an evaluation adequately addresses human rights and gender in its design, conduct, and reporting.

Transferability

Transferability involves presenting findings in a way that allows them to be applied in other contexts or settings, considering the local culture and context to enhance the utility and reach of the findings.

Ethical practice

Ethical practice in evaluation can be understood in terms of designing and conducting an evaluation to minimise any potential for harm and to maximise the value of the evaluation.

Professionalism

Professionalism within evaluation is largely understood in terms of high levels of competence and ethical practice.

Cultural competency

Cultural competency involves ensuring that evaluators have the skills, knowledge, and experience necessary to work respectfully and safely in cultural contexts different from their own.

Impartiality

Impartiality in evaluation refers to conducting an evaluation without bias or favouritism, treating all aspects and stakeholders fairly.