Humanitarians are increasingly turning to technologies that provide more affordable and effective ways to gather real-time feedback from people affected by humanitarian disasters. But while technology can improve access and enhance the scope of data collection, collecting the data is only one part of the feedback cycle. Too often, we fail to engage in discussions with respondents on the findings or report back to them on the changes made based on their input. The danger is that weak two-way communication turns feedback collection into a process that is extractive and narrow.
The response to the 2010 earthquake in Haiti is an example of how big data, harnessed through crowdsourcing, can make data collection and needs assessments more inclusive of aid recipients and legions of digital humanitarians. Mission 4636, a real-time crowdsourcing initiative, sought to connect local communities with one another, and with aid providers, through SMS messaging and online platforms. The initiative processed over 80,000 text messages1 and overcame language barriers between communities and responders by translating feedback from Creole into English.2 In the end, however, few real feedback loops materialised. Information came largely in the form of reports and maps that could only be accessed online and in English, making it virtually inaccessible to the very people providing feedback. Femke Mulder and her colleagues at the Vrije Universiteit Amsterdam argue that, as a result, affected communities, “the original crowd”, were left out of the processing of information and the interpretation of feedback.3 This approach provides accountability upwards, to donors, but does little to correct imbalances of power and information in the humanitarian sector.4
SMS hotlines were also set up in the Philippines following Typhoon Haiyan. Only rarely, however, did communities get to engage in any kind of dialogue about their feedback. Nor did they receive acknowledgement that they’d been heard or learn how their feedback was—or was not—used. Behind this lack of two-way communication was a short-staffed local office that received little guidance from superiors about whether—much less how—they might act on the data.5
Bureaucratic constraints, a lack of know-how, and nervousness about engaging with constituents after data collection lead many aid organisations to adopt a narrow definition of feedback—one that does not recognise the importance of getting back to the communities whose feedback they seek. Too often, organisations plant the seeds of feedback while doing little to ensure that the roots take hold. Failing to respect each phase of the feedback cycle—from design and data collection through analysis, dialogue and course correction—can do more harm than good, pushing affected people to the fringes of participation. Prioritising data collection over follow-up ultimately leads to survey fatigue, as communities become disenchanted with the promise of being listened to only to find themselves ignored. To avoid this bind, organisations should ensure from the outset that they have the capacity to discuss the data with communities. Too narrow an approach to feedback, despite good intentions and well-designed data collection, does not improve accountability to affected populations. On the contrary, it may well undermine it.
Internews provides an example of two-way communication using novel means. Following the devastation of Hurricane Matthew in Haiti in 2016, Internews worked closely with local humanitarian and media organisations. They disseminated bi-weekly bulletins providing humanitarian actors with snapshots of the key concerns of affected people and information to clarify misunderstandings behind rumours that, if unaddressed, could exacerbate problems.6 ACTED, a humanitarian organisation, used these briefings to ensure that the expressed needs and concerns of the local communities were included in programme planning and implementation. Internews also partnered with Ayibo Post and Radio Ginen, a blog and radio station popular among locals and Haitians in the diaspora, to ensure that potentially life-saving information reached as many people as possible.7
At Ground Truth Solutions, we like to think that we know something about feedback, but even we find it hard to encourage our partners to communicate findings back to the communities they serve. It is important to remember that completing each phase of the feedback cycle does not have to be a solo mission. No single organisation gets every stage of the feedback cycle right; learning from other organisations and collaborating is key. We use opportunities like a recent LabStorm, convened by Feedback Labs, to learn from our peers. When it comes to dialogue, there is no need to create everything from scratch. Sometimes the best tools and mechanisms for receiving and disseminating information are the ones people already use regularly: radio shows, community meetings, social media, and blogs.
Please reach out to us at email@example.com if you are interested in contributing your thoughts on how to ensure that constructive dialogue is an integral part of your feedback process.
1 Munro, Robert. “Crowdsourcing and the crisis-affected community: lessons learned and looking forward from Mission 4636,” Information Retrieval 16, no. 2 (2013): 210–266.
2 Mulder, Femke, et al. “Questioning Big Data: Crowdsourcing crisis data towards an inclusive humanitarian response,” Big Data & Society (August 2016): 1–13.
4 Madianou, Mirca, et al. “The Appearance of Accountability: Communication Technologies and Power Asymmetries in Humanitarian Aid and Disaster Recovery,” Journal of Communication 66, no. 6 (December 2016): 960–981.
6 Foran, Rose. “What a Feedback Loop Looks Like,” Internews (blog), April 20, 2017, https://www.internews.org/story/what-feedback-loop-looks