The right to disappear
- Editorial Team SDG16

- Feb 2

For years, “being on the internet” was, more or less, a choice. You opened an account, uploaded a photo, accepted terms that hardly anyone read. There were traps, of course, but a basic instinct still held: if I didn’t publish my image, my image wouldn’t circulate. Today, that instinct has broken down.
Artificial intelligence has delivered genuine, sometimes admirable advances: instant translation, accessibility tools, assisted diagnosis, expanded creativity, and new ways to work with speed and precision. But it has also shifted the balance of power over something deeply personal and unavoidably public: identity. Because now you don’t have to show yourself to be “shown”. An old photograph, a tagged post, a low-resolution video, a voice note: any of these can be enough for a machine to reconstruct, enhance, and, if prompted, invent.
This is where the uncomfortable question sits in everyday life: where are the limits today on a person’s right to decide whether they are visible online — and what may be done with their image?
In principle, the law tries to set boundaries. In Europe, the GDPR includes a right to erasure (Article 17), often called the “right to be forgotten”, which lets people request the deletion of personal data in certain situations. In Spain, the right to one’s own image is protected alongside the rights to honour and privacy. But these safeguards were built for a different river.
They work, with effort, when the issue is a specific post, a particular link, a search result. In today’s ecosystem, an image is no longer a static file. It is raw material.
And that is the first reason so many people searching “how to delete your digital footprint” feel stuck. Even if you remove what you posted, copies may exist elsewhere. A screenshot may have travelled. A clip may have been reposted. Your face may have been scraped, indexed, remixed, or used to train systems you never agreed to. The question is no longer only “Can I take down what I uploaded?” It becomes, “Can I stop versions of me from being manufactured?”
That shift exposes the legal and ethical gaps society is now facing. Because ethics, when it is taken seriously, begins with consent. Not consent buried in thirty pages of terms and conditions, but human consent: specific, informed, and revocable. In the offline world, the rule felt simple: my image is mine, and using it requires permission except in narrow public-interest circumstances. Online, that principle has been quietly replaced by a dangerous assumption: “If it’s online, it’s free to reuse.”
It isn’t just a technical argument. It is about dignity and safety. Visibility can be weaponised. And the harm is not shared evenly. Women and vulnerable people carry a disproportionate cost when their image is used to humiliate, discredit, intimidate, or extort. The pattern is old; AI simply automates it. What once required time, skill, and access can now be scaled. The asymmetry is brutal: the person harmed must prove it, report it, and chase it across platforms; the person causing it can act in seconds, often behind anonymity.
So what does “having the right to decide” actually mean in practice for someone trying to erase themselves online, remove their photo from Google, or reduce their online presence?
It means recognising that control needs to exist before the damage, not only after it. The most important limits are not purely technical; they are moral, legal, and social. For me, they sit in a few clear principles.
One is dignity.
An image is not “just data”. It is reputation, safety, and public presence. Treating it as endlessly reusable content is a denial of personhood.
Another is verifiable consent.
If AI can create realistic content from fragments, then a modern standard of consent must be stronger than ever. People need simple ways to say “no”, and that “no” must travel with the content, not as a polite request, but as an enforceable boundary. Labelling synthetic content helps, but it cannot be the only line of defence. The crucial test is whether consent can be checked, challenged, and withdrawn.
A third is shared responsibility.
For too long, the story has been: “the platform only hosts”, “the model only generates”, “the user is solely to blame”. That division of labour is convenient, but it does not match reality. If a service makes predictable harm easy, it should be required to make harm harder: faster takedowns of non-consensual content, better reporting routes, stronger protection against identity abuse, and meaningful consequences for repeat offenders.
There is also a tension that cannot be ignored: the public’s right to information. A democratic society must be able to report what matters, scrutinise power, document events, and preserve evidence. This is why the answer cannot be a single “delete everything” switch. The balance is delicate. But that balance is precisely why authenticity and verification must become cultural values again — in schools, in families, in workplaces, and in platforms’ design choices.
If you are searching for “how to delete your digital footprint”, you are not asking for a magic trick. You are asking for a right that used to be intuitive: the right to choose your visibility, and to limit the use of your image. The internet was built to copy; AI was built to generate. Without firm boundaries, the default outcome is a permanent, uncontrollable visibility that benefits everyone except the person being exposed.
In the end, the frontier is not whether technology will keep advancing. It will. The frontier is whether we accept visibility as a fate imposed on ordinary people, or whether we defend — through law, responsible design, and everyday norms — that being online remains, above all, a human decision.