The Constituent Voice Methodology

The methodology we use is called Constituent Voice™ (CV). CV draws on participatory development approaches and embraces techniques borrowed from the customer satisfaction industry. The research design is simple and can be adapted to suit the needs and constraints of different contexts. We ask very few questions, typically 5-8 per survey, but ask them frequently. Respondents score their answers on a scale, so their answers become measures that can be tracked over time. The collected data is analysed to foster use and deeper dialogue among staff, encourage follow-up action, and increase engagement with communities.

Ground Truth’s approach is to systematically collect the views of affected people on key aspects of a humanitarian programme, analyse what they say, and help agencies to understand and communicate the resulting insights back to affected communities. The objective is to provide agencies with real-time, actionable information from people at the receiving end of aid that can be translated into programme improvements, while empowering people to express their views.

The Ground Truth Constituent Voice Cycle

Deciding what you need to know is the starting point
We start by looking at what each programme sets out to achieve – to understand the theory of change. This involves talking to staff running the programmes. What do they know already? What do they want to know more about? Next, we craft questions and test them with affected people. Do they understand the questions? Do they get at the right issues? The focus is on perceptions, not facts – on people’s trust in aid agencies, on the relevance and timeliness of services, on people’s sense of empowerment and their sense of the way things will turn out for them.
Ask few questions, but ask them frequently
Ground Truth’s survey instruments are short and to the point. Intervals between data collection vary from two weeks to several months; timing depends on agencies’ capacity to digest feedback and the speed of change on the ground. Data collection methods depend on context, ranging from face-to-face interviews – using pen and paper, tablets or smartphones to record data – to SMS surveys and calls conducted by enumerators. We usually work with local enumerators, but some agencies may collect their own data.
Translating the data into follow-up action
The next step is to analyse the data and compare the results with information from other sources. After discussing the feedback with operational staff, we prepare a summary report, including suggestions for follow-up inquiry. The goal is to present the data in a clear, simple format that agencies can understand and track with ease.
Communicating the findings to affected people
We talk the data through with agency staff to get a better sense of what the findings mean. Equally important is dialogue with people from the affected communities. They need to feel their feedback is taken seriously. At a minimum, it is important to inform communities promptly about the survey results and how agencies plan to respond to what they learn.
Using feedback to drive programme adjustments
This is when agencies adjust their programmes in response to the feedback. While glaring problems should be addressed at once, agencies may also track troubling perceptions for a while before addressing persistent obstacles. The goal is to use regular feedback to link accountability to affected populations with performance management.
Tracking performance over time
Tracking perceptions over time is a more powerful source of intelligence than one-off surveys. The length of time between rounds depends on how fast things are moving on the ground and how quickly agencies can process the findings. Whether action is taken or merely considered, the cycle starts over again. Each round brings an opportunity to revise the questions so they continue to provide useful, actionable feedback.
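As a simple illustration (not part of Ground Truth's own toolkit), the round-over-round tracking described above can be sketched in a few lines of Python. Assume each survey round yields a list of scores on a 0-10 scale for one question; the sketch computes the mean per round and flags consecutive rounds where the mean drops noticeably. All names, scores and the threshold are hypothetical.

```python
from statistics import mean

# Hypothetical 0-10 scores for one survey question over four rounds.
rounds = {
    "round_1": [7, 8, 6, 9, 7],
    "round_2": [6, 7, 5, 8, 6],
    "round_3": [5, 6, 5, 7, 5],
    "round_4": [6, 7, 6, 8, 7],
}

def round_means(rounds):
    """Mean score per round, preserving round order."""
    return {name: round(mean(scores), 2) for name, scores in rounds.items()}

def flag_declines(means, threshold=0.5):
    """Pairs of consecutive rounds where the mean fell by more than `threshold`."""
    names = list(means)
    return [
        (prev, curr)
        for prev, curr in zip(names, names[1:])
        if means[prev] - means[curr] > threshold
    ]

means = round_means(rounds)
print(means)                 # mean perception score per round
print(flag_declines(means))  # round pairs with a notable drop
```

A flagged pair would prompt the kind of follow-up inquiry described above; a recovery in a later round suggests an adjustment has registered with the community.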

