Polls are not a substitute for dialogue with your constituents: what can we learn from the US elections?
To unpack this, we first considered data collection. The challenges of remote collection in harsh humanitarian environments mean that most surveys (in non-Covid times) are conducted face to face. As a result, enumerators are more likely to respect sampling frames: it is easier to interview a representative sample of the target population when you are there on the ground than to do so remotely by phone. In the US, people’s decreasing willingness to respond to polls has pushed response rates down from above 50% in the 1980s to only 6% in recent years, according to the Pew Research Center.
Declining response rates make it hard to construct a balanced sample of a target population, especially when people are also increasingly wary of revealing their political intentions. In most humanitarian contexts, however, people mostly tell us they welcome the chance to give their point of view – as distinct from being pumped for information for needs assessments, which can lead to fatigue. Surveying people face to face also affords valuable insight from non-verbal communication.
Methods aside, surveys in the humanitarian space seek out very different information than polls in the run-up to elections. GTS is tracking trends, not picking winners. The goal is to find out how affected people see their predicament and how they rate efforts intended to protect and assist them. This intelligence then feeds back into the response and, in the best case, triggers course corrections. When we follow up and ask the same questions again, we hope that these programmatic tweaks improve survey scores as relief programmes become more responsive to those they serve. We’re not predicting the future; we’re trying to play a small part in changing it.
A survey in a humanitarian crisis is not a freestanding activity that homes in on specific outcomes. Rather, it is part of a broader process intended to trigger two sets of conversations: one between surveyors and respondents, to make sure those leading the inquiry understand the nuances and contextual conundrums hidden in the data, and another among agencies and donors, so they can factor the perspective of affected people into the way they design, fund, and implement programmes. Data should never be taken at face value – yet in the madness of an electoral campaign, it often is. The situation in the US underlines the need to make sense of quantitative data with communities, as well as with those who want to learn from it.
In other words, don’t expect surveys to give you all the answers. It’s what we tell our partners all the time. Graphs and numbers are powerful, but don’t trust them blindly. Use the trends they reveal to inform your actions, just as smart politicians do, and combine them with other evidence from focus groups, factual data sets and your best judgement. But they are not a substitute for a dialogue with your constituents.
The American polls have given us more food for thought. Polls and surveys are, by their nature, imprecise and imperfect tools. But are they so inaccurate as to mislead in every instance? There is a lively debate on this question in the US. On balance, though, surveys of affected people in humanitarian operations, done right, provide valuable intelligence that we are better off with than without.
Could the inadequacies of election polling undermine efforts to regularly, proactively, and systematically inquire into the way things look to people on the user end of humanitarian action? We sincerely hope not. That’s why we will keep honing our methods to try to better understand the unique experiences of aid recipients, hoping to start conversations, not predict results.