
Digital abuse crosses the screen as gender-based violence becomes a human rights crisis

Photo: Gilles Lambert

When UNESCO and UN Women jointly sounded an alarm in January 2026, it marked a defining moment in how the world understands gender-based violence. Online harassment can no longer be dismissed as a benign or peripheral issue confined to screens and comment threads. The latest evidence shows that technology-facilitated gender-based violence (TFGBV) has become a systemic and life-altering force, inflicting physical danger, emotional trauma and barriers to participation in public life on millions of women and girls around the globe.


The shift from digital insult to human harm is not theoretical; it is measurable. A comprehensive study released alongside the alert reveals that the proportion of women journalists who reported that online abuse translated into real-world violence has more than doubled in five years, rising from 20% in 2020 to 42% in 2025. For many media professionals, a flood of hostile messages, threatening videos and derogatory AI-generated content has preceded doxxing, stalking and even attacks at home and in public.


These figures arrive against a backdrop of glaring legal insufficiencies. Despite the ubiquity of digital platforms and their role in civic discourse, fewer than 40% of countries have laws that specifically criminalise cyber-harassment or cyber-stalking. In practical terms, roughly 1.8 billion women and girls—nearly half of the global female population—remain without legal safeguards against the kinds of abuse that begin with a screen tap and can end in physical harm.


What makes TFGBV so pernicious is its ability to mutate and amplify existing inequalities through technology itself. Generative artificial intelligence, hailed for its creative and economic potential, has been repurposed by abusers with alarming efficiency. Independent analyses estimate that between 90% and 95% of deepfake content targets women, often presenting manipulated imagery designed to shame, silence or intimidate. Women in public roles—journalists, activists, academics and politicians—are disproportionately affected, facing not only reputational damage but targeted campaigns of disinformation intended to erode credibility and deter public participation.


A case in point is the experience of several award-winning journalists in South Asia and Latin America, who spoke anonymously to researchers about coordinated online attacks that preceded threats delivered to their workplaces and homes. In one documented sequence, a reporter received AI-generated videos paired with leaked personal information, followed by harassing calls to her newsroom. Colleagues reported heightened anxiety, and several withdrew temporarily from public reporting. These are not outliers; they reflect a broader pattern where digital violence becomes a gateway to physical intimidation and self-censorship.


Recognising the urgency of the crisis, international actors have begun responding with frameworks aimed at closing both legal and technological gaps. In late 2024, states adopted the Convention Against Cybercrime, the first binding international instrument to explicitly address digital wrongdoing with consequences for human rights and safety. The pact obliges signatories to modernise their criminal codes to encompass online abuse and to cooperate across borders in investigations—a critical step given how digital harm often transcends national jurisdictions.


Alongside legal reform, technical guidance is emerging to counteract the misuse of advanced systems. UNESCO’s AI Red Teaming Playbook offers corporations and developers a structured methodology to test and mitigate biases and vulnerabilities in AI models that can enable digital harassment. By simulating adversarial attacks on algorithms and content moderation systems, organisations can better anticipate how technologies might be exploited, and implement safeguards before harm occurs.
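The adversarial-testing idea described above can be illustrated with a minimal sketch. This is a hypothetical example, not UNESCO's actual methodology: `naive_moderator` is a deliberately simplistic stand-in for a content-moderation classifier, and `perturb` applies two evasion tactics abusers commonly use (character substitution and spacing) to see which variants slip past the filter.

```python
# Hypothetical red-teaming sketch: probe a placeholder moderation
# function with adversarial variants of phrases it already flags.

def naive_moderator(text: str) -> bool:
    """Stand-in classifier: flags text containing known abusive terms."""
    blocklist = {"worthless"}
    lowered = text.lower()
    return any(term in lowered for term in blocklist)

def perturb(text: str) -> list[str]:
    """Simple evasion transforms an abuser might apply to dodge filters."""
    leet = text.translate(str.maketrans({"o": "0", "e": "3", "a": "@"}))
    spaced = " ".join(text)  # character spacing, e.g. "w o r t h l e s s"
    return [leet, spaced]

def red_team(moderator, seed_phrases):
    """Return variants the moderator fails to flag, i.e. discovered gaps."""
    misses = []
    for phrase in seed_phrases:
        for variant in perturb(phrase):
            if moderator(phrase) and not moderator(variant):
                misses.append(variant)
    return misses

gaps = red_team(naive_moderator, ["You are worthless"])
# Each entry in `gaps` is an evasion the system should be hardened against.
```

A real red-teaming exercise would use far richer perturbation strategies and production models, but the loop is the same: enumerate plausible attacks, measure which ones succeed, and feed the failures back into safeguards before deployment.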


Complementing these global mechanisms is the Seen & Unseen initiative, launched in January 2026 with pilots in countries including Pakistan. Rather than treating digital harm as an abstract threat, the programme equips communities with digital literacy, legal education and survival strategies—so that women can recognise, respond to and report abuse, and advocate for protective laws at the national level. Local activists involved in the project describe a transformation in civic empowerment, with survivors of online abuse reporting greater confidence in pursuing justice and asserting their rights.


Yet formidable challenges remain. Technology companies, while promising to enhance user safety, often rely on automated systems that struggle to distinguish between harassment and legitimate expression. This leads to either under-enforcement, where harmful content persists, or over-enforcement, where women’s voices are erroneously suppressed. Civil society groups have called for more transparency in how platforms set priorities for content moderation, and for community-centred design processes that reflect the lived experiences of those most vulnerable to TFGBV.


Closing the legal and technological gaps is not just a matter of policy; it is essential to achieving Sustainable Development Goal 5, which calls for gender equality and the elimination of discrimination and violence against women. As the digital and physical worlds become ever more intertwined, the protection of human rights must evolve accordingly. Without decisive action, the very tools that have expanded access to information, education and community risk becoming vectors of harm that undercut equality and safety.


Civil society response is scaling up support

A growing network of non-governmental organisations is filling urgent protection gaps with practical services, training and advocacy designed to reduce harm now and reform systems over time.


  • Association for Progressive Communications, Take Back the Tech — a long-running global campaign that equips women, girls and allies with safety tactics, digital literacy and advocacy tools, including survivor-centred guides and community actions tailored to online harassment and image-based abuse. (apc.org)


  • Coalition Against Stalkerware — an international alliance of domestic-violence services and cybersecurity groups offering plain-language resources to detect, document and remove stalkerware, plus coordinated policy advocacy with industry. Its site consolidates warning signs, device-check steps and referral links for survivors and support workers. (stopstalkerware.org)


  • International Women’s Media Foundation and ICFJ, Online Violence Response Hub — a one-stop resource with step-by-step playbooks for women journalists and their editors, including risk assessments, documentation templates, newsroom protocols and mental-health guidance. (iwmf.org)


  • PEN America, Online Abuse Defense Program — offers training for writers and reporters, practical field manuals, and bystander-ally guidance to counter targeted pile-ons and coordinated harassment. PEN also partners on live trainings that teach safe intervention online. (PEN America)


  • Right To Be (formerly Hollaback!) — delivers widely used bystander intervention curricula and survivor-support tools for digital abuse, including the 5Ds method and community platforms for documenting incidents and receiving peer support. (righttobe.org)


  • NNEDV Safety Net Project — a specialist tech-safety programme embedded in domestic-violence services, providing up-to-date toolkits for survivors and agencies on privacy, device security, anti-doxing strategies and platform reporting. (NNEDV)


  • WITNESS, Deepfake Rapid Response Force — connects victims and civil-society partners with expert analysis of suspected deepfakes and manipulated media, helping to triage urgent cases, challenge disinformation and support takedowns. (gen-ai.witness.org)


  • Derechos Digitales — a Latin American digital-rights NGO producing regionally grounded research and policy proposals on TFGBV, including legal analyses and survivor-centred recommendations that inform legislators and platforms. (Derechos Digitales)


How this strengthens the wider response

These organisations translate policy into action by offering survivor-safe guidance, emergency verification for AI-generated fakes, newsroom protocols, legal navigation and community-based support. Their work complements new regulatory instruments and platform safeguards, advancing progress toward SDG 5 on gender equality.


More to explore

For readers wishing to explore further efforts to confront this crisis, resources on technology and gender equality are available through UNESCO’s information platforms, and programmes advancing women’s digital safety are outlined on UN Women’s site.


Further reading:

  • UNESCO technology and gender equality resources: unesco.org/en/communication-information

  • UN Women digital safety programmes: unwomen.org/en
