by Zara Rahman and Tom Walker
In areas affected by armed conflict or human rights violations, security concerns around technology can be a matter of life and death. Understanding and responding to those concerns requires a level of technical expertise that is out of reach for many, which shifts responsibility onto the people designing the technology. How can they ensure that the tools they’re building don’t put people at greater risk?
Over the past couple of years, we’ve been researching the realities in which human rights defenders and people in conflict-affected areas are using technology tools to communicate and manage information. We found that people use technology in ways that the designers and product owners of those tools never imagined.
For example, voice messages are hugely popular among Syrian refugees because they are easier and quicker to send than typed messages, don’t require high levels of literacy, and can be listened to weeks later, when a person reaches wi-fi after arriving in a new country. Although a slew of bespoke apps designed for refugees has been launched since 2015, humanitarian workers typically find that refugees are far more likely to use messaging apps already on their phones, such as Viber, imo or WhatsApp.
The technical teams behind the world’s major messaging apps probably didn’t prioritise the use case of a refugee who needs to keep in touch with their family, while dealing with limited access to electrical power, intermittent wi-fi, and the threat from state surveillance across borders. But the use case is a very real one.
The same could be said for the technology tools that human rights defenders are using. Although a few tools are explicitly designed for these use cases—such as Benetech’s Martus tool and various tools built by HURIDOCS, a non-profit providing information management solutions for human rights defenders—the organisations behind them are tiny and under-resourced in comparison with the large companies behind most commercial tech solutions.
Human rights defenders often have limited financial resources, and lack the time and technical literacy to investigate technology options in depth. In some cases, they might also have to deal with problems ranging from physical threats to legal battles on a daily basis. As a result, their priorities when choosing technology are reliability, sustainability, and accessibility. For them, technology tools produced by large companies often seem to fit the bill.
But using proprietary tools designed for an entirely different context can raise security risks—particularly in conflict-affected or oppressive settings. For example, following a sudden change of regime, even data that seemed innocuous when it was collected could be used to target a particular ethnic or political group. In these situations, humanitarian or human rights organisations may receive highly sensitive information from people even when they don’t ask for it, and then have to decide how to store that data. Once again, ease of access and reliability come to the fore.
From our conversations with a range of human rights organisations, we know that Google Drive is often used to store sensitive information, even though the organisation using it does not know where that data is physically held or who else has access to it. Some were even using tools from the CIA-funded company Palantir—which offers technical support and powerful data mining tools through its Philanthropy Engineering initiative. The conflict of interest here seems particularly stark: collecting data on vulnerable populations while potentially feeding it straight to intelligence services. Yet organisations sometimes deem these risks less pressing than immediate usability challenges, prioritising short-term efficiency gains at the cost of longer-term strategic losses. This puts both the organisations and the people they aim to protect at risk.
When designing tools to address these challenges, technical developers need to prioritise the rights of those using the tools, as well as the rights of the people reflected in the data that is collected. Together with partners from various sectors, and a strong community that has emerged since 2014, we call this taking a responsible data approach. Key responsible data issues range from thinking about how data can be securely stored and effectively deleted, to ensuring that people understand how the data they submit will be used, shared and managed (known as ‘informed consent’).
To put this into practice, people involved in designing and building tools can start by considering what data is really needed for a tool to perform its function. Many tools are designed to collect the maximum amount of data possible for the purposes of analytics. However, the more data that is collected in a risky context, the harder it is for an organisation to secure and manage it. We advocate instead for data minimisation: collecting only the data that is actually essential, and nothing more. Being transparent about what data a tool collects and who it is shared with, and giving organisations the flexibility to manage that data themselves, would also help.
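To make this concrete, here is a minimal sketch, in Python, of what data minimisation can look like in practice. It is ours rather than drawn from any particular tool, and the field names and submission are hypothetical; the idea is simply that a tool keeps a short whitelist of the fields it genuinely needs and drops everything else before anything is stored.

```python
# A minimal sketch of data minimisation: keep only the fields the tool needs
# to do its job, and discard everything else before the record is stored.
# All field names and values here are hypothetical, chosen for illustration.

ESSENTIAL_FIELDS = {"case_id", "message", "preferred_language"}

def minimise(submission: dict) -> dict:
    """Return only the whitelisted fields from a submitted record."""
    return {key: value for key, value in submission.items() if key in ESSENTIAL_FIELDS}

raw_submission = {
    "case_id": "2023-014",
    "message": "Need help contacting my family.",
    "preferred_language": "ar",
    "gps_location": "33.51,36.29",  # sensitive, and not needed for this function
    "device_id": "a1b2c3d4",        # handy for analytics, risky to retain
}

print(minimise(raw_submission))
# {'case_id': '2023-014', 'message': 'Need help contacting my family.', 'preferred_language': 'ar'}
```

A whitelist like this also doubles as a plain statement of exactly what the tool collects, which supports the kind of transparency described above.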
In some situations, technology can skew power dynamics between people and the organisations that aim to help them. In a crisis, people may feel they have no choice but to use a particular technology tool recommended by a humanitarian organisation to get information or help—even if they don’t know if it is secure, or fully understand how it will collect and store their data. Even worse, many companies don’t state how long they retain data or allow the people who submitted it to delete it later on. At this point, many humanitarians are asking themselves if gaining fully informed consent is even possible.
Amid increasing efforts to introduce new security features into many mainstream tech tools, it’s important to consider whether at-risk users understand when these features might be needed and how to turn them on. For example, even though Facebook Messenger now offers end-to-end encryption as an opt-in feature, it won’t be much use if the most vulnerable users don’t understand what it is or how to enable it. Designing for these situations poses its own set of challenges, and requires a much deeper understanding of how people in unstable contexts interact with technology.
Some human rights defenders and activists that we spoke to rely on tools supplied by corporate entities, having tried open-source tools and found them too hard to use. But it’s hard (if not impossible) for under-funded open-source alternatives to compete with corporate giants. Non-profit funding goes in cycles, meaning that tools can change or stop being updated, leaving their users at a loss. For example, what happens when a government suddenly introduces measures to block a certain tool? The non-profit organisation Open Whisper Systems was able to introduce a new feature when Egypt and the United Arab Emirates started blocking its secure messaging app Signal—but how many other non-profits could do the same?
The few organisations providing open-source alternatives and building with vulnerable populations’ use cases in mind need a great deal of support and investment, over a long period of time, to be able to compete. Where power disparities are especially stark, as in the humanitarian and human rights sectors, this technology needs to be built even more thoughtfully than usual, and designed together with the people it aims to help.
Designers need to build effective security measures into their tools, prioritise user experience and recognise the huge gap between imagined realities and lived realities. Our research picks out some of the key challenges for human rights defenders using technology in their work, and identifies issues for humanitarian organisations to consider when introducing new technology tools such as messaging apps. Organisations like Simply Secure and IF are making great progress in helping companies and non-profit organisations build security and privacy into their tools in much more accessible ways, but there is still a long way to go.
Zara Rahman is a writer, researcher and linguist interested in the intersection of power, race and technology. She works as a Research Lead at The Engine Room and this year is a fellow at the Data & Society Research Institute. (Twitter: @zararah)
Tom Walker is a researcher focused on politics, technology and activism. He works as a Research Lead at The Engine Room. (Twitter: @thomwithoutanh)