Real Enough? How Forgeries and AI Hoaxes Shape Historical Memory
EPOCH

Maia Evill-Pearce │ University of Winchester
How do you know your understanding of history is accurate? Historical deception has been a constant throughout human history: forged documents and fake works, made with the intent to defraud, have been embedded in our interpretation of the past for as long as records exist. Found in museums, art galleries, and private collections, forgeries and fakes shape our views and quietly reinterpret the past in a silent but influential way. The newer, more dangerous use of generative Artificial Intelligence (AI) promises to create false narratives faster and more convincingly than ever before, adding yet another complex challenge to how history is accurately portrayed and communicated in the modern world.
We must first consider what historical deception has been, and is still being, used for. Misleading people about historical events has often been undertaken for political or ideological purposes: to manipulate and falsify historical accounts, distort public understanding, and create false narratives. In other cases, it is used to foster inaccurate and unfounded beliefs about a nation or an event. But forgeries can also be created simply for monetary gain or out of creative licence. The intent behind these creations matters, and so does their impact.
To understand why generative AI that creates images and videos, through tools such as ChatGPT, Midjourney, Grok, and DALL-E, is more dangerous than ‘traditional’ fakes and forgeries, we need to be mindful of the historical background of deception and how it has evolved. An early and famous example of historical forgery is the ‘Sleeping Cupid’, an early work by Michelangelo, which he buried to make it appear to be an aged antique before it was sold to Cardinal Raffaele Riario at an inflated price. This is one of the earliest documented examples of a forgery and its sale.
Fake artefacts such as the crystal skulls of Central and South America are an example of a sensationalised historical hoax of the nineteenth and twentieth centuries that, for a time, altered the historical view of pre-Columbian Mesoamerican culture. The crystal skulls varied in size: some were small, others human-sized, whilst some featured finer details and others presented unusual or enlarged characteristics. Although scientific research determined they were most likely made in nineteenth-century Europe, this did not prevent conspiracy theories that the skulls must have been crafted by an ancient Mesoamerican civilisation or, better yet, were proof of alien contact with the Aztec Empire. As sensationalised and farcical as the theories seem, they raise an important question: how did artefacts never recovered from any archaeological excavation distort the historical understanding of Mesoamerican civilisations? The legends surrounding the skulls undermined knowledge derived from genuine excavations and verified artefacts, replacing it with fabricated stories about how the skulls may have been used. Not only did this perpetuate misinformation about Indigenous cultures and beliefs, but it also proved how simple it is to alter views of the world and of historical memory.

Forgeries have replicated, created, and distorted the view of history, and it is important to remember that every forgery has its own impact. Take the case of the so-called Michigan Relics, tablets presented as evidence of a Near Eastern civilisation in North America that were later determined to be one of the most elaborate pseudo-archaeological fakes in the United States. That verdict did not stop some historians and scientists from believing that the strange hieroglyphs and symbols on the tablets might be evidence of a previously unknown culture. The hoax also opened the area up to looting, which had a detrimental impact on our archaeological understanding of the region. When the works of art forgers like Eric Hebborn and Shaun Greenhalgh infiltrate art markets and museums, views begin to change not only about a particular period of history but also about the alleged artist. Ultimately, forgeries and fakes have embedded themselves in the public imagination, and these examples prove how difficult it is to change perceptions. The intricacy and craftsmanship of forgeries and fakes have only advanced over the centuries, making it harder for historians and art experts to expose them. So how is AI more dangerous, and what new threat does it pose to historical interpretation?

With the emergence of generative AI, historians are tackling one of the most difficult threats yet to modern understandings of human history. AI fabrication makes identifying historical truth harder both because it creates fake archival material and because of the speed at which it can do so. False videos and photographs are gaining popularity, particularly in short-form social media content, which in most cases presents AI-generated material as fact or fails to note that the images are not real. A disturbing estimate emerged last year: around 34 million AI-generated images are created daily. This alone is alarming, and it raises questions about how the average person, with no training in history or technology, can accurately debunk AI-generated images as fiction. For the younger generations, who consume more short-form media than any other, we must ask how these videos and images are shaping their views. Each AI image distorts the truth, but will we reach a point where this AI material is catalogued as fact?
AI-generated imagery poses a number of threats to our understanding of history, particularly by shaping historical narratives and blurring the line between fact and fiction. By promoting particular imagery, political and cultural actors can distort the authentic historical narrative through propaganda. The instability of major political forums worldwide is a haven for AI images, which help to promote agendas and spread propaganda that alters the historical record with an urgency not seen before. Most such images reside in the realm of believability, and examples keep emerging as political instability increases. In 2024, AI images emerged depicting US President Donald J. Trump with groups of African American voters, presumably created to target African American voters before the 2024 US presidential election. AI voice alterations were used to imitate Democratic presidential candidate Kamala Harris in a speech, reinforcing stereotypes and false information spread by supporters of the opposition. In 2026, the German government raised concerns about AI-generated images of Nazi activities in World War Two, warning that the images were diluting historical fact and circulating modified narratives. Such imagined events create false narratives and risk people remembering, or being influenced by, the AI material and accepting the reconstructions as truth.
Many AI images look like real archival material, photographs, or lithographs depicting cultures, creating a false sense of authenticity. These images come to be treated as authentic historical records, replacing original sources in people’s memories. Many people who consume short-form Instagram and TikTok videos perceive a ‘historical’ video as more authentic if the images seem antique or vintage. More dangerously, countless generated images perpetuate stereotypes. AI models are trained on existing datasets, and many of these contain colonial and stereotypical depictions of various cultures, as seen in the Chinese market image below, where clothing, hairstyles, and signage signal historical inaccuracies and stereotypes. This often leads to Indigenous cultures worldwide being reduced to primitive stereotypes, with complex societies rendered as visual tropes that reinforce outdated portrayals instead of real lived experiences. Modern biases also infiltrate AI imagery, adding contemporary aesthetics and values to past events or people: body language and facial features may reflect contemporary ideals rather than historical truths. For instance, the carved relief image below shows both Aztec and Mayan influences but incorporates modern aesthetic qualities that create a misleading appearance. Such imagery threatens to promote misinformation about minority and Indigenous communities in places and cultures where they have previously been romanticised and caricatured, further systematically erasing these communities.


As an example, the image below was created for the purposes of this article. Within 60 seconds, an AI image generator produced an image promoting false narratives. The prompt asked for a vintage photograph of a newly discovered Ancient Egyptian tomb. The image's details are complex, depicting features and a location generally associated with the subject area. The AI photograph draws on previous tomb discoveries: the centrally located sarcophagus, the ushabti figures, and the jackal-headed Anubis statue in the foreground are all associated with the afterlife. The colossal pharaoh statues resemble previous discoveries of Ramesses II statues, and the cobwebs give the impression of a freshly opened tomb with its funerary offerings. But the image slowly reveals itself as fake on closer inspection. A sphinx-like statue sits at the front of the image with the unusual feature of three front legs. Coins scattered on the floor raise concern, as formal coinage was not widely used in Egypt until the Ptolemaic period, from the late fourth century BCE, yet the other artefacts suggest the tomb dates back much further. And who lit those candles before the photo was taken?

Ultimately, each of these AI images evokes an emotion, which is why we interact with them and how they have become so popular in short-form content. Some promote wonder, like the image above; others may promote pride or nostalgia. Alarmingly, though, emotional resonance can override the critical thinking required to debunk these images. People are becoming more reliant on AI images than on primary sources, largely because this is the content they consume most and it is the most accessible to the average person. The result shifts the evidence-based narratives of history towards visual storytelling through AI constructions. Audiences are becoming less likely to question who made the content or what ‘fact’ it is based on. Historical debates are getting lost, and reconstructed history is rising to the forefront. This erosion of historical literacy allows the past to be modified: it simplifies complex histories, strengthens dominant narratives, and subtly reshapes people's historical memory.
There is no single clear way to tackle the influx of AI-generated historical images, but we can be responsible in how we use and present them. Small changes, such as clearly labelling images as AI reconstructions or watermarking them as such, are a good start. Pairing generated images with real context, sources, and further reading would give audiences the tools they need to recognise AI imagery. A legitimate and responsible use of an AI-generated image could be the visualisation of ruined, damaged, or lost architecture such as the Colossus of Rhodes, the Statue of Zeus at Olympia, or the Lighthouse of Alexandria. Applying ethical guidelines alongside such an image, by disclosing in a caption that it is AI-generated, ensuring it is generated from historically accurate prompts, and using it as an illustration rather than a primary source, are all steps in the right direction. Fundamentally, historical hoaxes and contemporary digital deceptions have long been prevalent and will continue to be so. They are dangerous in influencing perceptions of what the past could have looked like and allow false narratives to persist among audiences both online and in person. The crisis over what is and is not authentic content will keep growing, affecting younger and older generations alike. We need to remember that forgeries, fakes, and AI-generated content all share a common theme: deception. We must be vigilant and continue to question what we see; otherwise, history and its sources will become more vulnerable than ever.
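For publishers who want to follow the labelling practice described above, the idea can be sketched in a few lines of code: pair every AI-generated illustration with a visible disclosure caption and a machine-readable provenance record. This is a minimal illustrative sketch, not an established standard; the function names and record fields are assumptions made for this example (industry efforts such as C2PA content credentials aim to standardise this kind of provenance).

```python
import json
from datetime import date


def disclosure_caption(subject: str, model: str) -> str:
    # Visible label that travels with the image wherever it is published
    return (f"AI-generated reconstruction of {subject} "
            f"(created with {model}; illustrative only, not a primary source).")


def provenance_record(subject: str, model: str, prompt: str, sources: list) -> dict:
    # Machine-readable sidecar; field names here are illustrative, not a standard
    return {
        "type": "ai-generated-illustration",
        "subject": subject,
        "model": model,
        "prompt": prompt,
        "created": date.today().isoformat(),
        "sources_for_context": sources,  # real reading paired with the image
    }


caption = disclosure_caption("the Lighthouse of Alexandria", "an image model")
record = provenance_record(
    "the Lighthouse of Alexandria",
    "an image model",
    "vintage-style reconstruction of the Pharos of Alexandria",
    ["encyclopaedia entry on the Pharos", "archaeological survey reports"],
)
sidecar = json.dumps(record, indent=2)  # saved alongside the image file
print(caption)
```

Publishing the caption with the image and the JSON sidecar alongside it gives both human readers and automated tools a way to tell illustration from evidence.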
Further Reading:
Noah Charney, The Art of Forgery: The Minds, Motives and Methods of the Master Forgers (London, 2015).
Jason Steinhauer, History, Disrupted: How Social Media and the World Wide Web Have Changed the Past (New York, 2021).
Michael Wooldridge, A Brief History of Artificial Intelligence: What It Is, Where We Are, and Where We Are Going (New York, 2021).
Peter J. Holliday, Power, Image, and Memory: Historical Subjects in Art (New York, OUP USA).
Maia Evill-Pearce is a PhD candidate at the University of Winchester. Her PhD thesis examines the representation and historical narratives of hidden heritage, with particular focus on African American and Indigenous American cemetery sites. In addition to this research, she has conducted studies on art and artefact forgery within the heritage sector, as well as research on artefact trafficking and the loss of cultural heritage in Iraq. She recently spoke at a BBC History Magazine Weekend about the history of an Elizabethan country house in Wiltshire.


