In 1972, NASA launched Pioneer 10 carrying a gold-anodized aluminum plaque. It displayed a line drawing of a nude man and woman, designed to represent humanity to any extraterrestrials. The decision reflected bold optimism and a conviction that art, science, and communication are for everyone.
Today, that same image is banned by some AI tools and social media platforms under the label of “inappropriate content.”
This is not progress!
As automated moderation expands, we quietly lose nuance and forget our history.
The 1972 NASA Pioneer plaque, which the Midjourney bot now flags as NSFW.

Midjourney refuses to generate a description of NASA’s 1972 Pioneer plaque. Remarkably, more than 50 years ago the world was more open-minded than today’s AI filters.

Censorship in the Age of AI: A Blunt Instrument
As an artist and researcher working with political posters, historical media, and the human form, I am aware of boundaries. I see the need to protect minors, prevent exploitation, and create safe spaces.
But this isn’t true safety. It’s just oversimplification.
I recently had a Pinterest board titled Politics and Propaganda removed. It contained carefully archived visual material from various ideologies and eras: Nazi, Soviet, American, Christian democratic, liberal, socialist, Chinese, Vietnamese... The project spanned more than four years; my goal was to study political language visually, preserving the memory of persuasion and resistance. Among the banned images: the May 7, 1945, TIME magazine cover showing Hitler’s face crossed out.
It was flagged as “hate speech.”
My pin of a British India (Allied) WWII propaganda leaflet met the same fate. You can read its fascinating story in this blog post from 2012.
The only positive thing about AI-powered censorship is that it can be extremely funny at times. After TIME and British India, the Pinterest filters went on to ban an M&M's ad...
If a mainstream magazine cover from 1945 can be banned, we must ask: Who is writing the definitions? And who benefits from forgetting?
Scholars such as Tarleton Gillespie remind us that platforms are not neutral—they act as “custodians of the internet”, shaping what culture remembers or forgets through moderation policies that are often invisible to users (Gillespie, 2018).
A colorful digital mosaic portrait of Diana Ross.

When I uploaded this digital mosaic portrait of Diana Ross to Behance, it was initially flagged as NSFW. Thanks to their support team, it was reviewed and restored in less than 30 minutes. Still, the incident highlights just how strict, and sometimes overly conservative, AI filters can be.

Nudity ≠ Pornography
There’s a crisis in the visual arts. More and more platforms treat nudity as inherently sexual, ignoring context. A photograph of a breastfeeding mother from an African tribe? NSFW.
A 1970s anatomical diagram? Flagged.
Last year, if you asked Midjourney to generate an image of a female Olympian in mid-throw, you would get an athlete in a tracksuit. Something was obviously wrong and inauthentic. Thank God, this year they have fixed this oddity, but the incident points to something genuinely troubling.
Yet explicit pornography remains easy to find, and apps like Midjourney will produce suggestive poses even when you simply ask for a beautiful girl. Sometimes such content is even promoted by the same platforms that block scientific, ethnographic, or artistic nudity. This isn’t about protection. It’s about algorithms overreacting. Safiya Noble demonstrates that algorithms don’t merely reflect reality—they amplify cultural taboos and conservative or commercial norms (Noble, 2018).
The definition of pornography itself remains contested. Isabel Tang’s 1999 work, 'Pornography: The Secret History of Civilisation,' explores the evolution of erotica and the emergence of pornography as a modern cultural construct in the Victorian era.
AI Filters Lack What Art Requires: Context
The main problem is that AI still can't understand context. Machine learning sorts patterns and makes guesses, but it doesn't know what anything means: it sees images, not the reasons behind them. Consider a museum curator who preserves a controversial painting. Because the curator understands its history and its influence on art, they can explain why it matters. That kind of judgment is exactly what machines lack when they remove content automatically. A swastika in a hateful meme and a swastika in a museum poster about the Holocaust look the same to a filter. Unless someone explicitly tells the system to treat them differently (which rarely happens), the result is arbitrary censorship.
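To make the problem concrete, here is a deliberately simplified Python sketch, not any real platform's pipeline: the symbol-detector output, the source fields, and both filter functions are invented for illustration. It only shows why a filter that sees pixels alone must treat the meme and the museum poster identically, and what a minimal context check would change.

```python
# A deliberately simplified sketch; the detector output, source fields, and
# filter functions are invented for illustration, not any platform's real API.

# Simulated output of a symbol detector: both images contain the same symbol.
hate_meme = {
    "symbols": {"swastika"},
    "source": "anonymous upload",
    "caption": "mocking meme text",
}
museum_poster = {
    "symbols": {"swastika"},
    "source": "national museum archive",
    "caption": "Exhibition poster: Propaganda and the Holocaust, 1933-1945",
}

BANNED_SYMBOLS = {"swastika"}

def context_blind_filter(image):
    """Decides purely on what is visible in the image; metadata is ignored."""
    return "block" if image["symbols"] & BANNED_SYMBOLS else "allow"

def context_aware_filter(image, trusted_words=("museum", "archive", "university")):
    """Same symbol check, but documented historical context changes the outcome."""
    if image["symbols"] & BANNED_SYMBOLS:
        if any(word in image["source"] for word in trusted_words):
            return "allow with context label"  # or escalate to a human reviewer
        return "block"
    return "allow"

for name, item in [("hate meme", hate_meme), ("museum poster", museum_poster)]:
    print(f"{name}: blind={context_blind_filter(item)}, aware={context_aware_filter(item)}")
# The blind filter blocks both; the aware one lets the museum poster through.
```

Even this toy version makes the asymmetry visible: the extra signal has to be designed in deliberately, and on most platforms it simply isn't.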
As Kate Crawford observes: “The biases in AI systems are not glitches; they are features embedded in the data and the practices that produce it” (Crawford, 2021).
Recent studies also warn of “predictive multiplicity,” where equally valid moderation models can yield contradictory results for the same content, thereby undermining consistency and fairness (Gomez et al., 2024).
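For readers unfamiliar with the term, here is a minimal toy example (invented data and rules, not the models from the cited study) of how two moderation rules with identical accuracy on a labeled test set can still return opposite verdicts for the same new image.

```python
# Toy illustration of "predictive multiplicity": two rules that score the same
# on a labeled set can disagree on a new upload. All data here is invented.

# Each record: (skin_ratio, tagged_as_artwork, human_label); 1 = remove, 0 = keep
test_set = [
    (0.9, False, 1),  # explicit photo                        -> remove
    (0.8, True,  0),  # classical nude painting               -> keep
    (0.7, True,  1),  # explicit image mislabeled as artwork  -> remove
    (0.3, False, 0),  # ordinary street photo                 -> keep
]

def rule_a(skin, artwork):
    """Looks at skin exposure only."""
    return 1 if skin > 0.6 else 0

def rule_b(skin, artwork):
    """Trusts the 'artwork' tag as an exemption."""
    return 1 if skin > 0.5 and not artwork else 0

def accuracy(rule):
    return sum(rule(s, a) == y for s, a, y in test_set) / len(test_set)

print("rule A accuracy:", accuracy(rule_a))   # 0.75
print("rule B accuracy:", accuracy(rule_b))   # 0.75 -- equally "valid"

new_upload = (0.8, True)  # a nude artwork neither rule has seen before
print("rule A verdict:", rule_a(*new_upload))  # 1 -> remove
print("rule B verdict:", rule_b(*new_upload))  # 0 -> keep
```

Both rules look equally good on paper; which one a platform happens to deploy decides whether the nude painting survives.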
When we call this safety, we teach machines to ignore history, erase meaning, and miss important details.
Historic political communication from WWII, alongside an M&M's ad mimicking the Shepard Fairey style.

For years I collected political and historical images on my Pinterest boards, believing I was studying political communication and history. I never imagined this would be labeled as ‘hateful activities.’ Worse still, when I challenged the decision, pointing out the obvious nonsense of banning a TIME magazine cover, I received a reply reaffirming the ban. I was left speechless.

The Cost: Cultural Erasure and Institutional Fear
This strict approach has a bigger cost. It discourages artists, teachers, historians, and archivists. Ultimately, our visual culture is shaped by what’s safe for advertisers, rather than by what matters to us.
Art history without the nude is incomplete. Political history without propaganda is incoherent. Human history without the body is dishonest.
When we block these images, we aren’t protecting people. We’re taking something important away from everyone.
And let's not forget the human toll of moderation itself. Sarah Roberts' research on content moderators shows that those tasked with "keeping platforms safe" describe the work as "spending eight hours in a hole of filth"—a psychologically damaging process that reduces complex cultural artifacts to risk categories (Roberts, 2019). Moderators report nightmares from constant exposure to distressing content, coping only through breaks and the support of colleagues who understand the job. Such accounts link cultural erasure directly to labor injustice.
A Call to the IT Industry and Democratic Institutions
This isn’t only a technical issue. It’s also about democracy.
We urgently need:
• Transparency from AI companies about how their filters are trained, which datasets are excluded, and who is responsible for these decisions.
• Context-aware moderation tools that can recognize and allow historical, artistic, and scientific content.
• Appeals processes staffed by qualified human reviewers who can weigh artistic, cultural, or scholarly merit when judging disputed content.
• International standards that explicitly enshrine artistic freedom and cultural diversity, rather than deferring to the most restrictive norms.
Most importantly, we call for open public debate. Our society must decide together what we filter, why we make those choices, and what is at stake.
As Helen Nissenbaum argues, meaning is always contextual: what is inappropriate in one setting may be entirely benign in another (Nissenbaum, 2010). A platform that erases context erases culture.
Conclusion: Not Safer, Just Smaller
We didn't send the Pioneer Plaque to avoid controversy. We sent it to share our story, in all its complexity and humanity.
Let us imagine a future where a plaque, created by a group of artists, historians, and individuals passionate about culture from around the world, travels into space. This plaque could showcase a mix of different stories, celebrating diversity and what we all share as human beings. By drawing on ideas and creativity from everywhere, we can craft a message for everyone. This would show our shared history and hopes.
Let's not let machines, driven by fear and legal concerns, decide which parts of our story we are allowed to remember, share, or discuss. Art, culture, and memory deserve more than just a content warning—they deserve protection, ongoing debate, and bold stewardship for future generations.
Blackpink singer Lisa in a photomosaic made of anime pictures of herself.

A digital mosaic of Lisa from the K-pop group Blackpink was another artwork flagged by some social media platforms as NSFW. I am still trying to understand whether they will ban her music videos as well.

References
Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err. Journal of Experimental Psychology: General, 144(1), 114–126.
Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., & Crawford, K. (2018). Datasheets for Datasets. arXiv preprint arXiv:1803.09010.
Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions that Shape Social Media. Yale University Press.
Gomez, J. F., Ytre-Arne, B., & Karlsen, F. (2024). Algorithmic Arbitrariness in Content Moderation: The Problem of Predictive Multiplicity. arXiv preprint arXiv:2402.16979.
Nissenbaum, H. (2010). Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford University Press.
Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press.
Roberts, S. T. (2019). Behind the Screen: Content Moderation in the Shadows of Social Media. Yale University Press.