As Nigeria’s internet user base is projected to hit 200 million by 2030, between 30 and 40 million women and girls could be at risk of online abuse facilitated by generative artificial intelligence (AI), a new report by public strategy and media group Gatefield warns.
The report, released to mark Safer Internet Day on Tuesday, indicates that nearly half of women active online may face AI-driven harms each year. These include sexualised deepfakes, impersonation, disinformation campaigns, and coordinated harassment targeting women specifically. Gatefield estimates that between 30 and 40 million women and girls could be affected annually if trends continue unchecked.
Drawing on its State of Online Harms 2025 report, public data, and conservative predictive modelling, Gatefield found that nearly half of Nigerian internet users already experience some form of online harm, with women accounting for 58 percent of victims.
“AI-enabled abuse represents a structurally violent form of digital harm,” the report states, “systematically targeting women while exploiting both human and technological vulnerabilities.” Citing research on non-consensual image sharing in Nigeria, Gatefield notes that almost 90 percent of affected women reported depression or suicidal thoughts, with 11 out of 27 study participants considering suicide and one attempting it.
The report also highlights high-profile cases to illustrate the scale of the threat. In 2025, AI-generated nude images of Afrobeats singer Ayra Starr circulated widely on social media, with platforms slow to respond. Natasha Akpoti-Uduaghan, a politician from Kogi Central, faced deepfake audio and video attacks following sexual harassment allegations, while Nollywood actress Kehinde Bankole was subjected to AI-generated harassment campaigns involving digitally altered images.
“Platforms such as X and Grok facilitated abuse through frictionless content generation, delayed moderation, and opaque policies,” Farida Adamu, Gatefield’s insights and analytics lead, said. She emphasised that the problem stems from unsafe product design rather than isolated malicious actors.
Gatefield warns that Nigeria currently lacks AI-specific legislation, forensic capacity, and platform accountability frameworks, leaving women increasingly exposed as other countries adopt stronger measures. The report references initiatives in the European Union, France, the United States, and the United Kingdom that target non-consensual AI sexual content and hold platforms responsible for moderating harmful material.
“Without immediate legal frameworks, Nigeria risks the structural exclusion of women from politics, media, and culture,” Shirley Ewang, Gatefield’s advocacy lead, said. She further cautioned that generative AI is accelerating abuse faster than existing laws and platform policies can respond, underscoring the urgent need for enforceable regulations to protect millions of women online.
