Does one-size accountability fit all?
Nick van Praag • 2 November 2016
There's a lot of interest in scaling up quality and accountability across the humanitarian space. The double promise of better programming and greater accountability is a powerful driver. But how to spread and sustain the momentum?
One way forward is a standard feedback mechanism that every aid agency can use, rather than each reinventing its own. This idea is explored in a paper published by IRC that draws on experience piloting feedback mechanisms with Ground Truth Solutions in four different humanitarian programmes. What does it tell us?
It takes an ecosystem. There is a range of feedback tools and any standard approach must take the different options into account. Surveys delve into clients’ perceptions and feed into the way programmes are designed and managed. Focus groups are important for interpreting insight gleaned from survey data and other sources. Interviews, helplines and other one-on-one devices are essential for sensitive topics like gender-based violence and child protection. The challenge, as the paper says, is to find the best mix of proactively sought and spontaneously provided data, and to ensure they complement one another.
Standard themes and questions. Many themes resonate across humanitarian emergencies. Take affected people’s trust in aid agencies, their sense of empowerment, and the way they see outcomes down the road. The same goes for questions on the relevance and timeliness of services like water and sanitation or the distribution of cash. While similar themes and question formulations may work in different places, the contextual lens through which they are analysed differs from location to location.
Benchmarking. Using the same or similar questions makes it possible to compare perceptions across programmes and geographies. Are all programmes in a country seen as equally relevant? Do some programmes score better on trust than others? IRC is keen to get answers to these questions but it recognises that clients' views are only one consideration. Take surveys in South Sudan that suggest providing information to enhance protection is not valued as much as more tangible programmes like healthcare. But that should not necessarily mean cutting back on information provision. In other words, benchmarks offer insight and can prompt action, but they must be interpreted with care.
From listening to action. The most important part of any feedback mechanism – and the hardest to realise, let alone standardise – is ensuring follow-up action on what affected people say. If this is to happen, ‘feedback processes must be managed, encouraged and rewarded’, says the IRC. At a minimum, organisations must tell affected people what they’ve learned from them. If they cannot act on what they hear, they should explain why.
The evidence suggests that some characteristics of feedback systems can be systematised, if not standardised. Take the formulation of questions, the principle of responsiveness and the centrality of follow-up action. In the end, getting the feedback loop right is more about bringing these elements together than about imposing a standard model.