HUMANITARIAN TAKEAWAYS: DIGITAL TRANSFORMATION
Dear friends of CHA,
We present to you the new issue of Humanitarian Takeaways. This time we are dealing with digital transformation and digital technologies.
The selected materials explain what 'digital transformation' is about, and provide insights into the relationship between digital technologies and power imbalances / coloniality, and between technologies and (doing no) harm. A couple of other articles discuss data sharing between humanitarian organisations and donors, and look into biometric technology and the AI industry. And if you scroll down, you will spot some resources that we find helpful in navigating the not-always-easy-to-understand intersection of the digital and humanitarian worlds.
In case you would like to catch up on the previous issues of Humanitarian Takeaways, here they are: #1 gender equality, #2 locally led humanitarian action, #3 anti-racism in aid organisations, and #4 climate crisis. We really hope that you will find these materials helpful and applicable to your work!
If you have any observations or suggestions on how we could make Takeaways more useful to you, we are always grateful for your feedback. And last but not least – please share these (and other) Takeaways with anyone you think might be interested in these learnings.
Was this email forwarded to you?
If you would like to receive Humanitarian Takeaways in the future,
Research articles and reports
By Larissa Fast (NCHS, April 2022)
Article length: 29 pages
(Sub)topics: data sharing; donors; governance of sharing humanitarian data; trust
This report investigates issues related to data sharing between humanitarian actors and donors, with a focus on two key questions: 1) What formal or informal frameworks govern the collection and sharing of disaggregated humanitarian data between humanitarian actors and donors? 2) How are these frameworks and the related requirements understood or perceived by humanitarian actors and donors? To explore this, the author undertook 1) a structured, thematic document analysis, followed by 2) qualitative semi-structured interviews with a) donor participants and with b) humanitarian actors.
The author found that 'donor and humanitarian references to ‘data’ in the context of humanitarian operations are usually generic and lack a consistent definition or even a shared terminology' (p. 9). This is problematic because there are many data types, ranging from quantitative to qualitative, personal to non-personal, sensitive to non-sensitive, and financial to programme-related data, to mention just a few. If shared without proper frameworks in place, data might be exposed, whether transferred in aggregated or disaggregated form.
The author argues that a lack of data literacy and of awareness of responsible data management exacerbates many issues related to data sharing. Additionally, a range of factors related to complex regulatory frameworks complicates the nature and handling of data-sharing requests. Different legal factors might overlap and contradict one another and, in combination with the varying definitions of ‘data’, lead to inconsistent practices and requirements.
Differences among donor expectations exist both in the level of detail of data requests and in the type of markers or indicators requested. Both donors and humanitarians confirmed that different standards exist for different partners. Additionally, responsible data sharing is built on and enabled by trust. Since a high level of trust is more likely to exist between donors and established humanitarian actors, newer and less established, usually national or local, humanitarian actors might face barriers in this regard.
By Irene Scott (The Humanitarian Leader, March 2022)
Article length: 15 pages
(Sub)topics: Social Media Analytics; social listening; inclusion
This article explores the limitations of data collected using Social Media Analytics (SMA) tools in fairly representing community-wide perceptions, and whether these data deficiencies are transparently acknowledged in humanitarian reporting.
After exploring the literature on the use of social media in crisis contexts by humanitarian agencies, the author presents the main data quality limitations of SMA tools employed during the COVID-19 pandemic by public health professionals and risk communicators. The author then uses these limitations as a metric to assess a selection of social listening* reports created to inform the risk communication priorities of humanitarian agencies. Furthermore, the author explores how a lack of transparency about data limitations could be misleading or impact the ability of social listening reports and other outputs to meet the aim of driving actionable intelligence.
The limitations of data collected using Social Media Analytics (SMA) tools include the digital divide (i.e. the under-representation of women, elderly populations, people with disabilities, and low-income groups); the over-representation of dominant languages; the capture of only publicly posted data; the difficulty of determining the authenticity of users who post; the fact that social media platforms might not be the fora people use to share their opinions on the issues of interest to researchers; and the impact of researcher-generated search terms.
Based on an assessment of 12 sample social listening reports produced by international NGOs and UN agencies, the article concludes that these limitations are not adequately discussed in the reporting produced from such data. According to the author, this is problematic since 'by presenting this kind of data as an accurate depiction of community-wide insights, without a nuanced discussion of limitations, there is the potential to misrepresent community perceptions and to further silence and marginalise vulnerable groups' (p. 4).
* Drawing on Stewart (2018) and Hou et al. (2021), the author defines social listening in a humanitarian context as 'the process of monitoring and analysing community conversations in online spaces (such as social media) to understand needs and inform humanitarian responses' (p. 5).
By Aarathi Krishnan (Carr Centre for Human Rights Policy, Harvard Kennedy School, January 2022)
Article length: 14 pages
(Sub)topics: digital governance; digital futures; decoloniality; foresight; harm
In this discussion paper Aarathi Krishnan asks: Can humanitarian governance systems design mitigation or subversion mechanisms to not lock people into future harm, future inequity, or future indebtedness because of technology design and intervention? The author proposes looking at digital governance with the help of foresight and decolonial approaches in order to create 'a space of safety and humanity for all, and through this—birth new forms of digital humanism' (p. 1).
The author argues that our digital futures are uncertain and not neutral: ‘[l]ike the technical architecture of classic colonialism, digital colonialism is rooted in the design of a tech ecosystem for the purposes of profit and plunder' (p. 2). With respect to humanitarianism, risks arise when humanitarian systems, themselves criticised for perpetuating oppressive structures, intersect with similarly criticised technology systems. Existing data governance protocols narrow the focus of governance to issues such as data and AI, rather than taking into account the broader governance of digital systems used within intersecting systems. These protocols are homogeneous and focus on the present. In other words, humanitarians fail to fully consider how digital technologies might be used, now and in the future, against the very people they are trying to protect.
The emergent digital ethics and governance framework for the humanitarian system suggested in this paper 'specifically aims at interrogating and analysing context, motives, and impact of use over a long-term time frame' (p. 6). This approach 'de-centres humanitarian coloniality, disrupts the idea of "solutionism", centres justice and equity, and actively works to mitigate harm, now and into the future' (p. 6). Building on the research by Coppi et al. (2021) and the principles outlined in the AI Decolonial Manifesto, the framework aims to instil the following principles into the use and deployment of digital systems for humanitarian purposes: considering the positionality of the authorship; assessing future impacts and harm; utilising a wider range of knowledge sources and experiences; designing accountability mechanisms to hold institutions to account; ensuring that historically excluded, impacted populations are included and can influence decision-making; assessing patterns across various systems 'to understand why' (p. 9) rather than blindly designing technical solutions; assessing whose rights are privileged and whose are dispossessed; and designing for malleability and adaptability in changing contexts.
Blog and opinion articles
By Svetlana Zens, Digital Rights Programme Manager of MCRB (ORF, 26/10/2022)
In this article, Svetlana Zens argues that humanitarian organisations need to prioritise data safety and protection, especially in regions where data laws are weak or non-existent. According to Zens, proper measures are not always in place, as there is resistance to implementing them, stemming from the novelty of digital skills and digital rights concepts, or from the sense that an actual physical threat is not immediate. Continuous education of stakeholders and systematic Data Protection Impact Assessments (DPIAs) are key to mitigating risks and securing sensitive data.
By Greg Noone, Tech Monitor's features editor (Tech Monitor, 20/10/2022)
This article discusses the safety aspects of biometric data in refugee settings. One of the issues is consent: are people who flee or migrate really giving it freely? Are they provided with information on the applicable data protection regulations? And is it really consent if it is given under the assumption that refusal will impede access to basic aid? Another highly problematic practice by humanitarian agencies is the sharing of sensitive data with governments, as happened when UNHCR shared the biometric data of Rohingya refugees with the government of Myanmar. Additionally, organisations are using experimental tools in high-risk settings without taking into account the impact these tools have on people's rights and dignity, as argued by Dr Petra Molnar, a research fellow at Harvard’s Berkman Klein Centre for Internet & Society and one of the experts interviewed for this article.
By David Souter, consultant and researcher (APC, 14/09/2022)
In this article David Souter argues that, while the fundamental goals and structures of our society and governance have not changed, digitalisation has altered some of the mechanisms by which those goals can be pursued. In the future, although there will be profound changes (especially with regard to AI, robotics, and machine learning), the underlying motivations of individuals, governments, and businesses will stay the same, and inequalities will remain or be exacerbated by digitalisation. The implications include accepting that ‘digital transformation’ will have both positive and negative impacts and that ‘digital solutions’ to major challenges can be no more than partial, i.e. 'digital contributions'.
By John Bryant, senior research officer at HPG/ODI (The New Humanitarian, 04/07/2022)
In this article, John Bryant discusses the lack of a cost-benefit analysis from the user's perspective in the design of digital tools deployed in crisis settings. It is the responsibility of humanitarians to ensure that the people who will use these tools to access goods and services can participate in their design. Additionally, humanitarians have to consider how such tools often deepen power inequalities and exclusion in the humanitarian system.
By Karen Hao, senior reporter, and Andrea Paola Hernández, freelance journalist (MIT Technology Review, 20/04/2022)
This article explores work on AI data-labelling platforms, which enable deep learning (such work might include manually tagging videos, sorting photos, or transcribing audio materials). It tells the story of Oskarina Fuentes Anaya, who migrated from Venezuela to Colombia because of the Venezuelan economic catastrophe and who works on one such platform. The article describes the exploitative working conditions, and how these companies – as they race to the bottom – specifically hire people from crisis settings. This story is part two of MIT Technology Review’s series on AI colonialism, the idea that artificial intelligence is creating a new colonial world order. It was supported by the MIT Knight Science Journalism Fellowship Program and the Pulitzer Center.
Is something missing?
Send us your feedback – we are eager to hear it:
A podcast series by The New Humanitarian
'Fixing Aid' is The New Humanitarian's six-episode podcast series on innovations in the humanitarian world, hosted by innovations expert Alae Ismail. The topics discussed include giving feedback on humanitarian services; the case of an Afghan-led business that pivoted to emergency aid; the potential of blockchain to fix the ID problem; tech-based tools in the Ukrainian aid response; the possibility of predicting future displacements; and the dangers of border technology for refugees.
A global forum initiated by ICRC
The DigitHarium is a global forum to discuss and debate digital transformation within the humanitarian sector, with a focus on humanitarian protection, policy, ethics, and action. The DigitHarium has three main channels: a monthly Digital Dilemmas Dialogue (a 30-minute discussion between two experts); a monthly Digital Dilemmas Debate (a 60-minute roundtable with a panel of experts, practitioners, and other stakeholder groups); and regular blog articles and podcasts.
A platform by the CyberPeace Institute
The 'Platform #Ukraine' provides insights into how cyber attacks and operations have been impacting civilians since the invasion of Ukraine by the forces of the Russian Federation. The analysis is presented in three sections: 1) cyber threats, which analyses the cyber attacks and their attribution to threat actors; 2) impact and harm, which traces the harm of the cyber attacks to civilians; 3) law and policy, which documents the legal instruments relating to cyber attacks and the challenges of applying them.
Did you find it helpful?
If you haven't done it yet,