People are generally not open to change or to ideas that contradict what they already believe; under certain conditions, they actively avoid such information while seeking out information that bolsters their original beliefs.

The public and private sectors, and the people who work in them, are not immune to this. Learning that their efforts may be ineffective or even counterproductive would be highly dissonant. Cognitive dissonance theory predicts that, depending on the level of commitment to current beliefs, evidence to the contrary will be rejected and even discussion (for learning) discouraged.

In a world of rapid change, where organizations and people interact in complex and unpredictable ways, “traditional” monitoring approaches that are heavy and slow, infrequent and top-down, are simply inadequate. Real-time monitoring and quick learning feedback loops, with faster cycles of data collection and analysis, allow for rapid assessment of the intended and unintended, positive and negative effects of interventions, and support immediate course correction so that the research, design, and implementation of programs are more responsive to the needs of beneficiaries.

Exploring how theories of change, the interfaces of capacity development with change processes, and socio-psychology can help us understand national and local responses to current research-uptake and development practices, and studying ways to overcome such resistance, will have little benefit if partners in the broader research and development field are predisposed not to listen carefully to evaluation findings.

Both public and private organizations may unwittingly be complicit in undermining national and local capacity development. They can create attitudes or expectations that actually weaken well-intended work toward development and change objectives, replacing local resourcefulness and self-reliance with a view of the benefits of externally directed programs as an entitlement, which hinders ownership and new initiatives.

Failure to recognize how on-the-ground implementation processes and change dynamics foster such attitudes may help explain why many of the improvements attributed to externally funded programs persistently lack sustainability. Many “modern” private sector interventions and research (for development) programs have, in general, left their partners and clients inert, disempowered, and uncommitted to acting independently on the challenges they face.

Dialogue

At Development Connect, we believe that finding ways to create space for meaningful dialogue on this, including through public, private, and community dialogue mechanisms, is a very significant challenge, but one of the utmost importance.

A “new” research-for-development paradigm supplanted a long-functioning, traditional system of addressing community needs. While those pre-existing systems may or may not have been efficient or equitable, they rarely left communities inert, unwilling to take action, and dependent on others. Such dependency may not be the result of a lack of capacity in communities to manage new technologies or systems, but rather of an unwillingness to accept responsibility for them.

The amount of guidance and support communities receive from researchers and development partners is often inversely related to even the short-term sustainability of an initiative. This has profound implications for development practice: sustained development happens when external development agents intervene less, when national systems are built, and when local participation and ownership are encouraged.

Unfortunately, failures to ground these concepts empirically in the socio-psychological field of attitude and behavior change have led to widely divergent interpretations of their meaning and, more importantly, to their ineffective implementation in the field. To emphasize this point, “community participation” was claimed to be a key, albeit diversely understood, element within each of these development processes, too often leading to deleterious outcomes. If extensive external intervention does undermine sustainability, there are issues with the nature of these interventions that a better understanding of socio-psychology could help address.

There is a need to get more granular in our understanding of how to provide a stimulus for change.

Still, and too often, programs do not have systematic ways of monitoring, tracking, and reporting capacity development and change activities. In the absence of a systematic database of capacity-strengthening activities, assessing the performance of a program, project, or organization against its objectives becomes difficult. Further, merely counting the number of people who attend training courses, for example, may not fully capture the change process being supported.
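As a minimal sketch, a record in such an activity database could capture more than a head count. The field names below are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass, field

# Hypothetical record for a capacity-strengthening activity database.
# Field names are illustrative assumptions, not a standard schema: the
# point is that beyond a head count, the record also keeps what capacity
# the activity targets and what change is expected.
@dataclass
class CapacityActivity:
    activity_id: str
    activity_type: str      # e.g. "training", "mentoring", "joint review"
    participants: int       # a head count alone is not enough...
    capacity_targeted: str  # ...so record the capacity being developed
    expected_change: str    # and the behavior/performance change sought
    evidence_of_change: list = field(default_factory=list)

activities = [
    CapacityActivity("A-01", "training", 40, "data analysis",
                     "staff produce quarterly monitoring reports"),
    CapacityActivity("A-02", "mentoring", 5, "evaluation design",
                     "partners define their own indicators",
                     ["two partner-led indicator workshops held"]),
]

# Counting participants captures only part of the picture:
total_trained = sum(a.participants for a in activities)
# Evidence of actual change is tracked separately:
with_evidence = [a.activity_id for a in activities if a.evidence_of_change]
print(total_trained, with_evidence)  # 45 ['A-02']
```

The point of the sketch is that “45 people trained” and “one activity with evidence of change” are different answers to different questions, and a systematic database makes both available.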

In order to determine whether the initiatives undertaken are effectively contributing to developing national (research and development) capacity, any program, project, or organization must therefore have a comprehensive system to assess whether

  • change and capacity development principles are properly integrated into design and planning;
  • initiatives invest and implement in a way that is consistent with the organization’s capacity development approach; and
  • concrete capacity development results are systematically captured and reported following a capacity measurement framework.
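The three checks above can be sketched as a simple checklist. The keys below are hypothetical field names, not part of any established framework:

```python
# Minimal sketch of the three assessment questions as a checklist.
# Keys are hypothetical field names, not an established framework.
def assess_program(program: dict) -> dict:
    return {
        "principles_in_design": bool(program.get("design_integrates_cd")),
        "investment_consistent": bool(program.get("investment_follows_cd_approach")),
        "results_captured": bool(program.get("reported_cd_results")),
    }

program = {
    "design_integrates_cd": True,
    "investment_follows_cd_approach": True,
    "reported_cd_results": [],  # nothing captured or reported yet
}

checks = assess_program(program)
print(checks)
```

Even a crude pass/fail view like this makes visible where a program integrates capacity development on paper but fails to capture results in practice.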

National institutions, public and private, are under intense pressure to achieve results and demonstrate them to a variety of constituencies, including citizens, development partners, donors and other stakeholders.

Citizens are calling for increased transparency and accountability in how priorities are defined, funds allocated, and results reported, all while demanding to be part of the process. It is no longer sufficient for service providers to say they have introduced programs to reduce inequality and increase job opportunities; they must demonstrate, with evidence, that marginalized and/or poor communities have better access to quality basic services and that more of their members are gainfully employed. Managing for results, together with sound indicator selection that accounts for the wider context affecting changes in individual, organizational, and institutional attitudes, practices, and behavior, is critical and can address these challenges.

Defining the purpose

The first step is to articulate the purposes of the capacity development component of a specific program or project and the accompanying theory of change. Here, teams need to brainstorm potential measurements by identifying the specific capacity, behavior, and performance changes they expect to see (as well as areas where they are unsure what to expect but want to monitor change if it occurs).

The second step is to select one or more core purposes of the monitoring of the theory of change:

  • Tracking capacity development interventions: inputs and outputs are relatively easy to measure and track. Input indicators cover the activities and resources used for capacity development, and outputs are their direct results;
  • Tracking capacity development sub-purposes or outcomes/impact: monitoring changes in the capacity, state, or conditions of beneficiaries over time. These changes generally take longer to achieve than inputs and outputs;
  • Accountability of (potential) capacity service providers: monitoring the productivity, quality, and timeliness of capacity development service providers.
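As a sketch, these three purposes can serve as tags for sorting candidate indicators. The indicator names below are invented examples, not recommendations:

```python
# Tag illustrative indicators with the three monitoring purposes above.
# Indicator names are invented examples, not recommendations.
PURPOSES = ("track_interventions", "track_outcomes", "provider_accountability")

indicators = {
    "number of training days delivered": "track_interventions",    # input/output
    "partners producing their own evaluations": "track_outcomes",  # change over time
    "share of courses delivered on schedule": "provider_accountability",
}

# One indicator rarely serves every purpose, so group candidates by
# purpose before deciding where monitoring approaches are required:
by_purpose = {p: [name for name, tag in indicators.items() if tag == p]
              for p in PURPOSES}
print(by_purpose["track_outcomes"])
```

Grouping candidates this way makes explicit which purposes are covered and which would be left unmonitored.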

It is quite common to have multiple purposes. However, it is usually not possible to monitor all of them through a single indicator or approach. Clarity on the purposes behind monitoring does, however, allow us to interrogate the purposes of capacity development interventions and helps prioritize the areas in which monitoring approaches will be required.

A learning approach is critical to enable adaptive management for capacity development. Results-based management and monitoring and evaluation are vital in supporting a learning approach, particularly where clients and their partners can engage in joint reviews of jointly defined indicators. Monitoring can track what has changed and link that back to a theory of change. Monitoring is most likely to support effective capacity development when organizations collaborate on definitions of indicators and targets, and when joint reviews are conducted to support mutual learning and adaptation. An evaluation would be needed to gain a better understanding of how and why the theory of change worked or did not work. Evaluations can also consider unintended consequences, alternative explanations, and lessons learned in greater depth.