
The Ethics of Measurement

June 22, 2016

By Lauren Weinstein

You and your team have just designed and implemented a new tool to help people in Uganda manage their finances from their mobile phones. You’ve spent a couple of months on the ground prototyping and user-testing through a human-centered design process so you can be sure that the bugs are worked out and that your design meets the specific needs of users. Because ATMs are unreliable and hard to access from rural areas, sending physical cash to family members is risky, and there isn’t much of a culture of long-term financial planning, you’ve even built in some great, user-friendly techniques to encourage frequent micro-saving and smart spending decisions. This might be a program that helps people start to lift themselves out of poverty. It’s been a lot of hard work, but now low-income communities are going to have powerful money management options they didn’t have before.

After the launch celebration, on your flight back to the United States, you doodle your design mantras in your hand-bound notebook: "Be humble! Act with empathy! People first!" As you look out the window, you remember that hug from Dembe, Gonza showing you the mobile phone interface screens that had already allowed him to save money, the smiles and dancing. You're feeling like you've done something meaningful. You're looking forward to going back in six months for the evaluation; maybe Gonza will have saved enough to buy that car he was telling you about.

What you won't know is that, while you were gone, people will have developed different habits for using the app because you won't have been there standing right next to them. It'll turn out that they actually prefer a different UX sequence, one that ends up taking them to error screens instead of where they need to go. It'll turn out that when you were there giving them examples of things they could save for, some thought you were saying that money would be put into their accounts for those items, while others thought they had a chance to win a new car or a motorbike. It'll turn out that, as soon as the program began taking up participants' time without delivering immediate results, people stopped using it.

What you will also not have realized is that your mobile banking app was actually one of seven different programs that people in this community interacted with. Development and aid organizations were swooping in week after week with new programs to help with issues ranging from farming to public health to alternative education, all characterized by teams parachuting in to learn about the problem for three weeks, again to 'co-design' for a few weeks, then again for user testing and more prototyping, again to deliver, and, yet again, to measure. Sometimes it will have been the same team, sometimes not. It'll turn out that your program was just the flavor of the month, and that the following month something else became hot and exciting, offering more photo opportunities, goods, and participation incentives.

The Ethics of Acting on Good Intentions

Often when we reach the stage of evaluation and measurement, even if we've had the best of intentions throughout, we can run into ethical and data quality quandaries. These include knowing what to measure and how to measure it, uncovering direct impacts when people are influenced by a variety of programs or factors, gathering honest answers rather than eager-to-please ones, and the burden we place on people by asking them to sit through repeated surveys and interviews, to name a few.

Recently, Root Capital, a rural-region impact investor, boldly acknowledged some of the ethical challenges they've run up against in evaluations. One of their program participants explains:

“Here you come to ask us the same silly questions that you go sell to aid sponsors. Now when the aid comes you keep it for yourself. I don’t want to answer any question. Go take the answers from the ones we provided last year [to a different surveyor unknown to Root Capital]. … You’re all crooks of the same family. You’ll ask me my name, my family size, the kind of goods I have, and so on and so on. I am tired of all this and I am not answering a question, nor will anyone else in this family.”

This is a perfect example of how evaluation, instead of being useful to participants, can feel extractive to those being evaluated. During a project I worked on in Nigeria, one of the participants told us that whenever he saw us he would run away and send his staff instead, in "fear of your long interviews." Participants who are involved in multiple projects or multiple evaluations can develop survey fatigue, resorting to simply providing the answers they think you want to hear.

As designers, if we don't eventually deliver on, or at least actively respond to, the feedback we hear in evaluations, we risk breaching the trust we've built with communities. The result is that participants devalue the program or product and our legitimacy within the community is jeopardized, all of which shapes how those individuals and communities view the initiatives and organizations that come after us.

Learning from Practice: Evaluating Co-design

None of this is to say we shouldn't do evaluations; they're incredibly important, critical in fact, for measuring our intentions against our outcomes. It's just that the structure and purpose of evaluations need to be twofold. First, we have to make sure the people we're engaged with are unharmed by their interactions with the social innovation or research they're participating in. Second, we need to test whether our intended outcomes are in fact happening. We'll have to uncover whether the results we've promised funders, discussed with beneficiaries, or anticipated at the outset are being realized as a result of the activities we put in place. If not, we'll need to iterate until things are working as well as they can.

When I worked on a project to build a patient feedback program in Nigeria, some of the people we spoke to didn't want to say anything bad about their health care centers for fear that relatives or friends who worked there would be fired, or that the health care centers would be shut down as a result. We realized that we were creating a culture of feedback that hadn't existed before and, in order to do that effectively and responsibly, we needed to protect the staff during the process by ensuring that no jobs would be at risk based on community feedback during the pilot phases. We also needed to ensure that giving feedback was worthwhile for communities: that health care centers were responding to feedback and implementing community suggestions, so that people would be inclined to continue speaking up. We intentionally encouraged people to offer "good and bad" thoughts; however, we needed not only to say "we've heard you," but to show it as well, by rapidly integrating beneficiary suggestions into the program or explaining why we couldn't.

When people feel included in the design and operation of a service (and comfortable knowing you want to hear the truth), they can be more inclined to share their thoughts on how it could be improved. Participants speaking passionately and openly about what works and what doesn't can be an indicator that they have a substantial comfort level with the program. If people don't want to give feedback, or offer only vague examples, it could be a sign that the program isn't that valuable to them, that they don't feel ownership in the creation and implementation of the project, or that they think designers just want to hear positive commentary.

The Value of Evaluations

Evaluations are valuable learning experiences and launchpads for revisions, improvements, and iterations. Impact measurement in design can be more about learning what's actually beneficial and how to strengthen it than about simply fixing what isn't working. It doesn't need to be as black and white as "the program failed, let's end it" or "it was a success, let's scale it!"

Often those of us doing quick and scrappy social impact design don't have the time, money, or staff to conduct randomized controlled trials, and that's okay. Just because we can't conduct what some call 'gold standard' evaluations doesn't mean we can or should excuse ourselves from conducting other kinds of valuable, qualitative, small-scale evaluations to help us understand project impact. There are several ways to build revelatory, ethical, and achievable evaluation loops into our work:

  1. Gather a rich, qualitative understanding of the baseline and know your intended impacts. Revisit logic models frequently to see if the activities in place are yielding the outcomes they're meant to. Use logic models to test assumptions about how certain inputs or activities may or may not influence behavior change. This is something the health sector does quite well in the process of patient diagnosis and treatment. Painting a vivid picture of capabilities, challenges, limitations, and even perceptions at the start provides a strong point of comparison for detecting even the most incremental changes.
  2. Build a local team to assist with monitoring and evaluating programs. A local team that knows the context can find out from people what really works and what doesn't; a team that has walked in the shoes of the beneficiaries can connect with participants in ways that perhaps the design team cannot. They can help gather qualitative data around preferences, behaviors, work-arounds, and micro-interactions that won't be captured in quantitative compliance or uptake data. They can help gather rich, honest opinions from people who don't feel like they need to tell evaluators what they want to hear. They can also gather ongoing, light-touch, anecdotal, or observational data that is less time-consuming and exhausting for participants.
  3. Build evaluation into program dynamics so it doesn't have to be invasive or extractive to participants. Find creative ways for evaluation to actually be useful to participants. An Australian peer-to-peer family service called Family by Family uses a creative relational tool that doubles as a goal-setting mechanism for families and an evaluation tool for the program itself. In this process, families have a conversation with a trusted peer or professional, set and share goals, and continually reflect on how they're working toward them. The activity is intentionally designed so that families talk through their own goals and behavior change while simultaneously helping the program assess how well families are meeting those goals as a result of participating.

Emergent Field Ethics

As the emerging field of social innovation does not currently have a governing body or a single guiding code of ethics, design researchers and social impact designers are left to their own devices to determine what kinds of engagement are ethical. Various organizations are working to spread the ethics gospel: IDEO has produced a Little Book of Design Research Ethics, and in 2006 AIGA came out with ethical guidelines for graphic designers. Further, there's been Root Capital's Client-Centric Approach Impact Evaluation and The Lean Research Framework by D Lab, yet there are still few collectively agreed-upon mechanisms to hold social impact designers accountable to the people they intend to serve.

In academia, a code of ethics and a review board would oversee intentions and interactions with research participants. In design research and social impact, we have many points of interaction with people (research, prototyping, implementation, evaluation) but no rules or regulations shaping what those interactions look like or protecting people and families in the process. If we try to draw from anthropological or sociological ethics codes, we risk picking and choosing from ethics menus at our leisure. This is problematic because we don't want to measure only what's easy; we want to measure improvement in people's lives. We must, as an industry, define our own ethics, ones flexible enough to allow us to act on research but firm enough to always protect beneficiaries. Creating that flexible yet principled approach to ethics in measurement is among our most pressing design challenges.

Lauren Weinstein is a multidisciplinary designer and writer whose professional experience includes service design and international development, specializing in participatory systems design for international public service improvement. She is currently a Senior Service Designer at The Australian Centre for Social Innovation; her writing has been published in Design and Culture, Fast.co.exist, and featured on GOOD.is.

Original photo used in image courtesy of next billion
