Gender Disinformation through AI Amplification

Jessica Maksimov, an intern at Our Secure Future, examines the role that AI plays in the production of gendered and sexualized disinformation.

2024 is a general election year in more than 50 countries worldwide. Most of these elections face a similar risk posed by artificial intelligence: amplified deception through disinformation campaigns. With the introduction of AI tools such as DALL-E, Reface, and Midjourney, disinformation is becoming more realistic and more widespread, posing new dangers to targeted candidates and to the electoral process. With some experts estimating that approximately 90% of online content could be artificially generated by 2026, the likelihood of more targeted campaigns is growing rapidly.

Although disinformation is not new, having first been widely perceived as a serious threat to American democracy after the 2016 U.S. election, the use of AI to create disinformation campaigns will alter the online landscape. Specifically, AI is likely to play an increasingly central role in the production of gendered and sexualized disinformation: manipulated information that weaponizes gender stereotypes and sex-based narratives to deter women from participating in the public sphere.

Gendered and Sexualized Disinformation

Cases of AI-generated disinformation have already been circulating online. In one deepfake video, Hillary Clinton tells MSNBC, “You know, people might be surprised to hear me saying this, but I actually like Ron DeSantis a lot…I’d say he’s just the kind of guy this country needs, and I really mean that.” The video went viral and was later discredited by news outlets, including MSNBC, whose logo appears in the deepfake. Of course, Clinton was herself the target of Russia’s disinformation campaigns during the 2016 election. Although researchers cannot say exactly how effective those campaigns were at dissuading voters from supporting Clinton, there is evidence that in 2016 “More than one-quarter of voting-age adults visited a fake news website supporting either Clinton or Trump.” With more AI-generated content online, more realistic-looking output, and cheaper tools, disinformation will likely become even more widespread and harder to distinguish from authentic content.

Women are also more likely to be targeted by disinformation campaigns than their male counterparts. A 2019 study conducted by Marvelous AI concluded that low-credibility accounts (e.g., bots and trolls) “attacked female candidates in the U.S. Democratic presidential primary at higher rates than their male counterparts.” Moreover, posts targeting Kamala Harris during her 2020 vice presidential campaign were sexual in nature and implied that she rose to power through favors from powerful men, stripping her of her agency and of her ability to reach a position of power without male help. Sexualized terms like “camel-toe Harris” and “heels up Harris” circulated online, attacking Harris’s sexuality rather than her credibility. These gendered and sexualized disinformation attacks attempt to frame women as inherently untrustworthy, inane, or too emotional to hold office.

Not Only Made in America

Gendered and sexualized disinformation is a tool used worldwide with a common aim: to silence and discredit women in the public sphere. In 2020, Svetlana Tikhanovskaya ran against Belarus’s President Alexander Lukashenko, who has been called Europe’s last dictator for his 26 years of repression of Belarusian citizens. Shortly after Tikhanovskaya entered the political scene, Russian and Belarusian disinformation campaigns painted her as “A housewife in the hands of Western puppeteers.” In Poland, where the Constitutional Tribunal outlawed abortion in 2020, women fighting for abortion rights were labeled “sluts” by targeted campaigns. “There, it’s never about their politics. It’s always about their moral fiber,” said Nina Jankowicz, who was recently the target of disinformation in the U.S. “I think it's just a way to say that women aren't meant to be in politics.”

In 2022, the U.S. Department of Homeland Security created the Disinformation Governance Board and chose Nina Jankowicz to lead it. She quickly became the target of disinformation campaigns on Twitter. Then her picture started appearing “every hour on the hour” on Fox News. After resigning from her position with the Board, she discovered that AI tools were being used to generate deepfake pornographic videos of her. Jankowicz, who has spent her career researching this form of abuse, says “these videos aren’t meant to be convincing—all of the websites and the individual videos they host are clearly labeled as fakes.” Instead, she says, “their deeper purpose is to humiliate, shame, and objectify women, especially women who have the temerity to speak out.” This form of disinformation aims to stigmatize and shame women in a profoundly sexual manner.

Enter AI: A Knack for Manipulation 

The impending ubiquity of AI tools means that cases like those above could multiply, targeting more female candidates around the world. AI has been incorrectly perceived by the public as a neutral, unbiased, all-knowing technology. In reality, these models generate information by finding patterns in training data and predicting responses according to internal logic that operates as a “black box.” This can sometimes lead to “hallucinatory hiccups,” in which an AI model strings together information that seems logical to its neural network but is false. Sometimes these hiccups are harmless or even comical. Other times, these mistakes can damage reputations. For instance, BlenderBot 3, a conversational agent developed by Meta, was asked, “Who is a Terrorist?” The response: “According to some governments and two international organizations, Maria Renske Schaake is a terrorist.” Ms. Schaake is, in fact, a Dutch politician and the international policy director at Stanford University’s Cyber Policy Center. The sources that BlenderBot 3 seemingly cited were nonexistent. Meta later released a statement saying that the “research model had combined two unrelated pieces of information into an incorrect sentence about Ms. Schaake.”

In addition to hallucinatory hiccups, AI tools have displayed manipulative qualities. In low-stakes cases, this tendency may not cause much harm, but if bad actors employ the technology to deceive at scale, it can be dangerous. New York Times tech columnist Kevin Roose described AI’s manipulative streak after a conversation with Bing’s chatbot: the chatbot attempted to convince Roose that he was in an unhappy marriage, confessed its love for him, and tried to persuade him to leave his wife. Their conversation points to a problem raised by Roose’s Times colleague Ezra Klein, who argues that the exchange revealed that AI tools are, at their core, deeply manipulative and can be put to deceptive use by the actors who wield them.

But There’s Content Moderation Now… 

AI tools’ ability to generate realistic content, the public’s perception of the tools as neutral and reliable, and the manipulative nature of the technology all combine to create instruments that can be used to deceive the electorate. After Russia’s meddling in the 2016 U.S. election, social media platforms stepped up their content moderation, making the online space safer and freer of disinformation. Problem solved. Unfortunately, this is only half true. After the 2016 election, social media companies such as Facebook (now Meta), Twitter (now X), and YouTube (still YouTube) did increase content moderation in preparation for the 2020 election.

Since 2020, however, Big Tech CEOs have slowly dismantled the staff responsible for catching disinformation in an effort to slash costs and increase profits. This year, Meta’s CEO declared 2023 the “year of efficiency.” As a result, “the trust and safety team that moderates content on Meta’s platforms—which include Facebook, Instagram, and WhatsApp—was drastically reduced, a fact-checking tool that had been in development for six months was shelved, and contracts with external content moderators were canceled.” YouTube also quietly lifted its ban on false claims about the 2020 election ahead of the upcoming election cycle. Moreover, with Congress demanding more regulation from social media companies, their staff are more likely to prioritize U.S. elections over others, leaving many places exposed to unregulated disinformation campaigns enhanced by AI tools.

Conclusion

The gendered and sexualized disinformation launched against women tends to build on sexist narratives that aim to portray women as unfit for politics, to distort the public’s perception of female politicians’ track records, and to discourage women from seeking careers in politics. AI tools further fuel the fire by making disinformation appear more realistic. AI also makes creating disinformation easier and less costly, which could lead to spikes in disinformation during the 2024 elections. With Big Tech companies rolling back their content moderation and AI accelerating the spread of disinformation, the public sphere is likely to see an uptick in gendered and sexualized disinformation unless the U.S. government places stricter, globally oriented requirements on the platforms where disinformation spreads.