DAC Evaluation Criteria in the face of divergence

There has been a significant change in the types of projects funded under international aid. Although many consider the DAC evaluation criteria to be a good short list of essential tests, their application can be frustrated by a project's objectives and design. This gives rise to doubts about the efficacy of the criteria, and it has created confusion and muddled thinking. The reason the DAC evaluation criteria can become difficult to apply is that an increasing number of projects are divergent. Projects with more predictable outcomes are convergent and can be tested directly against the DAC criteria. This raises the question of why funding agencies are investing in divergent projects, sometimes large ones, which are difficult to evaluate.


One basic tenet of evaluation is to understand what a project is meant to do and how to measure what it does. Evaluation can be straightforward when the operational model, that is, the process of design, testing and evaluation, is applied to a convergent process.

A convergent process is one based on known cause-and-effect relationships, so it can be represented as a deterministic model. A deterministic model is a set of quantitative relationships between the inputs to a process and the outputs associated with those inputs: the input values determine the output values, so the likely output can be predicted from the quantities of inputs.
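As a minimal sketch, a deterministic model can be written as a plain function from input quantities to an output. The crop-yield setting and all coefficients below are illustrative assumptions, not agronomic data:

```python
# Sketch of a deterministic model: output is a fixed function of inputs,
# so identical inputs always produce identical outputs.
# All coefficients are illustrative assumptions.

def predicted_yield(fertilizer_kg_ha: float, after_nitrogen_fixer: bool) -> float:
    """Predicted yield (t/ha) determined entirely by the input values."""
    base = 2.0                            # assumed baseline yield (t/ha)
    response = 0.012 * fertilizer_kg_ha   # assumed linear fertilizer response
    rotation_bonus = 0.5 if after_nitrogen_fixer else 0.0
    return base + response + rotation_bonus

# The defining property of a convergent process: no variation for given inputs.
assert predicted_yield(100, True) == predicted_yield(100, True)
```

Because the relationship is fixed, an evaluator can compare observed outputs against predicted ones directly.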

Agriculture is founded on determinate processes of reproduction and growth. For example, the amount of fertilizer applied, or a crop's position in a rotation following a nitrogen-fixing crop, can be related to the likely yield. However, weather conditions can shift yields upwards or downwards depending on specific conditions. The graph below shows the significant impact of different annual weather conditions on barley yields when the inputs in each year were identical (Rothamsted Experimental Station).

Divergence but still convergent

As a result, a transparent and convergent model needs to be adjusted to accommodate the probability of weather conditions producing different yields. This uncertainty is, however, manageable. It is managed by reviewing weather data over several years so as to establish the probability of each combination of rainfall and temperature. A model can then provide, not an exact estimate of the expected yield, but an average around which there will be a given level of variation or divergence.
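The adjustment can be sketched as a probability-weighted average over season types. The season categories, yield multipliers and probabilities below are invented for illustration; in practice they would come from multi-year weather records:

```python
# Sketch: weight the deterministic yield by the historical probability of
# each type of season. All figures are illustrative assumptions.
weather_states = {
    # season type: (yield multiplier, probability of that season)
    "cool_wet": (0.85, 0.3),
    "average":  (1.00, 0.5),
    "hot_dry":  (0.80, 0.2),
}
deterministic_yield = 3.7   # t/ha, from the input-based model

expected_yield = sum(mult * prob * deterministic_yield
                     for mult, prob in weather_states.values())
# expected_yield is the average around which actual seasons diverge
```

The model no longer predicts a single outcome, but it remains convergent: the average and the spread around it are both known and testable.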

If adequate locational-state data is collected on weather conditions, then at the end of a season one has a complete dataset. A "complete dataset" contains all of the data necessary to maximize the percentage of "explained variance" in an analysis of variance. As a result, in spite of the variance in the data, the deterministic model remains predictive and explainable, and capable of effective and efficient evaluation. Such "projects" are open books for assessment of relevance, efficiency, effectiveness, impact and sustainability.
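The "explained variance" idea can be illustrated with a small analysis-of-variance calculation: the share of total yield variation accounted for by grouping seasons by weather type. The yield figures are invented for illustration:

```python
# Sketch of explained variance: between-group sum of squares divided by
# total sum of squares. Yield values (t/ha) are illustrative.
from statistics import mean

yields_by_weather = {
    "cool_wet": [3.0, 3.2, 3.1],
    "average":  [3.7, 3.8, 3.6],
    "hot_dry":  [2.9, 3.0, 2.8],
}
all_yields = [y for ys in yields_by_weather.values() for y in ys]
grand_mean = mean(all_yields)

total_ss = sum((y - grand_mean) ** 2 for y in all_yields)
between_ss = sum(len(ys) * (mean(ys) - grand_mean) ** 2
                 for ys in yields_by_weather.values())

explained = between_ss / total_ss   # fraction of variance explained by weather
```

When the weather data is complete, most of the variation is attributable to the recorded conditions, so the process stays explainable even though outcomes vary.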


An increasing number of projects have introduced so-called cross-cutting issues, to the extent that whole projects are fundamentally less about technical or economic issues and contain a substantial component of social significance. These aspects of project outcomes are heavily influenced by the constitutional forces of culture, politics, power and the existing degree and practice of public choice. What is policy under one government's mandate can be rejected by the next. Sometimes the sustainability of projects that have helped poor people transition from subsistence production, partially or wholly, into cash operations can falter for lack of political representation following a change in policy. Democratization projects sometimes suffer from those in established political structures undermining progress through slow decision-making or the reassignment of funding to other ends.

It is fairly self-evident that such projects are largely unpredictable, or divergent, because deterministic models cannot be built around determinants that have no defined permanence or sustainability.

Many divergent projects are characterized by elaborate "theories of change" that substitute an excessive number of assumptions for a transparent and useful deterministic model. In general, the number of assumptions or expected events is high, signifying a low likelihood that all will be satisfied in practice. The result is a risky project that is unlikely to achieve its objectives.
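The arithmetic behind this can be sketched in a few lines: if the assumptions are roughly independent and each is individually quite likely to hold, the probability that all of them hold falls rapidly as their number grows. The 90% figure per assumption is illustrative:

```python
# Back-of-envelope sketch: the joint probability that every assumption in a
# theory of change holds, assuming independence. The 90% per-assumption
# figure is an illustrative assumption.
per_assumption = 0.9

for n_assumptions in (3, 8, 15):
    p_all_hold = per_assumption ** n_assumptions
    print(f"{n_assumptions} assumptions: {p_all_hold:.0%} chance all hold")
```

With fifteen assumptions each 90% likely, the chance that all hold is only about one in five, which is why assumption-heavy designs translate into risky projects.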


It is also evident that attempting to apply the DAC evaluation criteria to divergent projects is often impossible because of the excessive dependency on assumptions. In other words, if there are too many constraints that are in fact variable, evaluation becomes a guessing game. This has led social scientists to recommend shifting the basis of evaluation from quantitative, deterministic cause-and-effect relationships to surveys of opinions, perceptions and attitudes. In these cases evaluation tends to be limited to measuring perceptions as opposed to objective, empirical, quantitative evidence. Because such surveys are dominated by social, cultural and human opinion, and cover groups who have benefited, not benefited or even been prejudiced, this sociological basis for measuring impact is fairly unreliable. There are also cases of surveys being subject to terms of reference that limit the exposure of facts which those funding the surveys would rather were not delved into.

So the question needs to be asked of donors and funding organizations as to their motivations and objectives in supporting projects.

Back to square one

A way to reduce the large number of divergent projects that end in failure is to insist that the DAC evaluation criteria be applied to project designs, including an assessment of the risks associated with the number of conditions or assumptions. If it is not possible to map out how relevance, effectiveness, efficiency, impact and sustainability will be measured throughout a project, the project is probably too risky in terms of achieving its stated results and eventual objectives. It would be preferable if evaluators were able to review project proposals and the selection process (which can also expose decision-maker preferences) so as to understand a project's foundation and why it takes its proposed functional form. The results of evaluations applying the DAC criteria before a project's inception can provide important evidence for donors before funding. In cases where such projects have been funded anyway, evaluators are at least in a position to point out where a project is likely to go off course, in order to improve implementation management.

Having evaluators carry out their assignments in midstream, or at the end of projects, is not an effective and efficient use of human resources. It reflects a failure to install a basic quality assurance step at the beginning of a project, and this raises the probability of poor performance.

1  OQSI-Open Quality Standards Initiative.