A review of the United Nations Institute for Disarmament Research’s recent report on the military applications of AI and gender.
Technology and ‘the digital revolution’ have permeated, and fundamentally changed, even the most minute aspects of our everyday lives, from communication and shopping to research, travel, and financial transactions. Technological tools have opened exciting new frontiers for global connectivity, access to education, civic engagement, human rights documentation, democratization, and conflict resolution. At the same time, however, a growing number of scholars, practitioners, data scientists, and policymakers are challenging the trope of technology as an ‘unbiased’ ‘great equalizer,’ revealing how technology and the digital ecosystem cannot be separated from the inequalities and biases inherent in the contexts within which they are created.
The United Nations Institute for Disarmament Research (UNIDIR)’s Gender and Disarmament program has published a series of reports and videos on how gendered norms and expectations filter into technology design, defense, and response; how this shapes cybersecurity policy and practice; and what it implies for the Women, Peace, and Security (WPS) agenda. Katherine Chandler’s 2021 report, Does Military AI Have Gender? Understanding bias and promoting ethical approaches in military applications of AI, examines how Artificial Intelligence (AI) is biased, and traces the implications from the most intimate everyday processes that rely on AI to state-level security systems and issues of conflict, peace, and security. She interrogates how gender norms in technology development are reinforced in military applications of AI and how AI systems are deployed in practice, and she proposes actionable steps to counter bias and promote ethical approaches to military applications of AI. The report is unique insofar as it goes beyond calling out the biases inherent in AI and thinks critically about how AI can be used to reduce inequalities and further the WPS agenda.
What is AI? How is AI gendered?
AI is a computer system that aims to replicate, and go beyond, human cognition. AI is quickly becoming a fundamental part of our everyday lives, powering systems like Face ID, Google search, spell check, digital voice assistants, smart home devices, and personalized advertisements, as well as larger systems like smart and self-driving cars and autonomous weapons. Because it is designed by humans, and to replicate humans and human interactions, it cannot be separated from human bias and power relations.
Machine learning is a “pathway” to AI that uses algorithms to recognize and analyze patterns in data. It is how AI can translate languages, respond to voice commands, and identify images. However, machine learning models, and therefore AI systems, are trained on data sets that assume white men as the ‘default’ human and grossly underrepresent other genders, races, and cultural specificities. The data is extracted and classified by teams of engineers and digital laborers who incorporate their own social, cultural, and political biases. Because machine learning and AI encode the patterns of the data they were trained on, gender, racial, and cultural biases are ingrained in, and reproduced by, these systems. As Chandler discusses, this has salient implications for vulnerable people on a day-to-day basis: social media platforms tend to advertise higher-paying jobs to men rather than women; digital voice assistants (Siri, Alexa, Cortana, and Google Assistant) all default to traditionally female-sounding voices, which can reinforce traditional gender roles; and BIPOC women and gender minorities are particularly marginalized by hiring, housing, financial, and security systems that rely on AI.
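To make that mechanism concrete, the short sketch below (not from Chandler’s report; the synthetic data, group labels, and sample sizes are invented purely for illustration) trains a simple classifier on a data set in which one group dominates and another is underrepresented, then measures accuracy for each group separately. The model fits the majority group’s pattern, so its errors concentrate on the minority group, which is the basic dynamic behind the real-world disparities described above.

```python
# A minimal sketch, assuming scikit-learn, of how underrepresentation in
# training data surfaces as unequal error rates across groups.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two features per sample; each group's data is centered at `shift`,
    # and its true decision boundary moves with `shift` as well.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Group A dominates the training set (950 samples); group B is
# underrepresented (50 samples) and follows a different pattern.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=1.5)

model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Score fresh samples from each group separately: the learned boundary
# tracks the majority group, so errors concentrate on the minority group.
for name, (X, y) in [("A (majority)", make_group(500, 0.0)),
                     ("B (minority)", make_group(500, 1.5))]:
    accuracy = (model.predict(X) == y).mean()
    print(f"group {name}: accuracy = {accuracy:.2f}")
```

Nothing in the sketch is specific to gender or race; any group that is scarce in, or poorly represented by, the training data will be served worse by the resulting system, which is precisely why the composition of data sets and of the teams that build them matters.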