This text talks about artificial intelligence and –you'll notice very soon, or so I hope– it was written by an AI. Not "produced", just "written". All the information comes from sources just a Google search away and from free courses on the topic offered by universities and media outlets, which are open to anyone and which I found through my work as a journalist in legacy media.
I dug into the matter the same way I go down the rabbit hole on other stories, whether they're about rock discographies or video games that left a mark on me: with 85 tabs open. I read and watched everything, took notes, sorted and grouped the data, then uploaded all my notes to the advanced ChatGPT 5.1 model and prompted it to write a guide on how to detect content made by generative AI, especially focused on visual and audiovisual formats.
Why didn't I emphasize written content in this research, if the written word is the main focus of 421? Precisely because of that: I put this article together in the characteristic wording of a generative chatbot like ChatGPT. It has all the information, yes, but it's written without a soul, with a monotonous rhythm, a clearly recognizable, functional structure and a vaguely lobotomizing feel.
Are LLM-based chatbots the best way to access information in this era? Or just a slop of technologies and features that cheapen us as a species? I have no fucking idea. But here, take this and see for yourself. At the very least, don't get taken for a ride.

How to Detect Content Created by Generative AI
The ability to falsify reality will continue to improve. That phrase may sound like a dystopian movie trailer, but it describes the moment we're in pretty well: photos, audio and video that never existed are starting to coexist on the same timeline as real documents. And we're stuck in the middle, trying to receive, process and validate information in an environment where so-called "synthetic media" mixes with traditional documentary records.
The good news is that you are not helpless. It's not about learning to "smell" AI as if it were a magical ability, but about understanding how these technologies work and putting together a small verification method for your daily life.
What We Mean by AI When We Talk About "Fake Content"
What we call artificial intelligence is not a "brain" or an autonomous entity, but a set of technologies that perform tasks commonly associated with human intelligence: pattern recognition, statistical analysis, image processing, speech synthesis, text generation.
Very roughly on a timeline:
- AI (1950s onwards): a set of techniques that let a machine solve problems by following rules or models governed by algorithms. An algorithm, in turn, is a finite, well-defined sequence of unambiguous instructions that, executed in a specific order, transforms one or more inputs into outputs to solve a particular class of problems.
- Machine Learning (1980s): a subdiscipline of AI that trains statistical models with data so they "learn" patterns and improve with experience, without someone programming every rule by hand.
- Deep Learning (2010s): a subdiscipline of ML that uses artificial neural networks with many layers to process huge volumes of data. This enables things like facial recognition, machine translation or automatic image tagging.
- Generative AI (2020s): the branch that no longer only analyzes or classifies data, but creates new content from prompts: text, images, video, audio, code, 3D models, product designs and more.
All AI, at its core, is about taking a set of data and applying a form of statistical "intelligence" to it to solve a task. Generative AI does that to produce new content: it doesn't retrieve an old photo, but invents a plausible image based on thousands or millions of previous examples.
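To make that jump from hand-written rules to learned patterns concrete, here's a minimal sketch in Python (entirely my own illustration, using the scikit-learn library; the tiny spam example and its labels are invented): the first function follows a rule a person typed by hand, the second lets a statistical model infer the pattern from labeled examples.

```python
# A hand-coded rule versus a model that learns the rule from data.
# Purely illustrative: four invented training examples, invented labels.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

def rule_based_spam(text: str) -> bool:
    # Classic "AI as rules": every condition written by a person.
    return "free money" in text.lower() or "click here" in text.lower()

# Machine learning: the model works out its own patterns from examples.
texts = ["free money, click here now", "meeting moved to 10am",
         "win a free prize today", "lunch tomorrow?"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
model = MultinomialNB().fit(X, labels)

print(model.predict(vectorizer.transform(["free prize, click now"])))  # likely [1]
```

Nobody told the model that "free" or "prize" matter; it weighted those words on its own from four examples, which is the whole point of the machine-learning jump.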
Within generative AI, large language models (LLMs) –such as ChatGPT (OpenAI), Gemini (Google), Grok (xAI), DeepSeek, Claude (Anthropic) or Llama (Meta)– are the most visible cases. They are trained on gigantic amounts of online text and are designed to predict the next word in a sequence. From that seemingly simple statistical operation comes an illusion of "conversation" that often feels human.
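And if you want to see "predict the next word" stripped to its bones, here's a toy sketch (also mine, with a made-up four-entry vocabulary; a real LLM does the same thing over tens of thousands of tokens and billions of learned parameters):

```python
# A toy "language model": a hand-typed table of next-word probabilities.
# Real LLMs learn these probabilities from massive text corpora.
next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "idea": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "sat": {"on": 0.9, "down": 0.1},
    "on": {"the": 1.0},
}

def generate(start: str, length: int = 5) -> str:
    words = [start]
    for _ in range(length):
        options = next_word_probs.get(words[-1])
        if not options:
            break
        # Always pick the statistically most likely continuation.
        words.append(max(options, key=options.get))
    return " ".join(words)

print(generate("the"))  # "the cat sat on the cat": plausible-sounding, not "true"
```

That, at bottom, is all the "generation" is: a chain of statistically likely continuations. The fluency is real; the grounding is not guaranteed.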
The problem is that the reliability of this content is in question. There are inaccuracies, biases and potential violations of intellectual property, because these models are trained on large volumes of data available online, not always verified.
Synthetic Media: From Photo Editing to Custom Deepfakes
Ever since photographs and videos have existed, there have been ways to manipulate them to deceive people. What changes with generative AI is not the idea of faking images, but three things: volume, ease and personalization.
- You no longer need to be a post-production specialist: anyone with an app on their phone can produce hyper-realistic images, audio that imitates specific voices or credible videos.
- There's not always an original that gets retouched: AI can create completely synthetic content from scratch. There is no "real photo" behind that portrait of a politician saying something he never said.
- Realism is multimodal: you can combine text, image, video, audio and 3D in the same creative flow. The same model can generate the script, the voices and the shots of a video in a matter of minutes.
That opens the door to deepfakes and also to grayer phenomena:
- AI influencers: avatars who never existed as real people, but sell products, give opinions and accumulate followers. The AI-influencer market is projected to grow from tens of billions of dollars today to several times that over the next decade.
- Avatar banks: companies that offer ready-to-use "synthetic actors". Sometimes they are based on people who sold their appearance –their face, their gestures, their style– to be used as a "skin" in different videos: the same person appears as a doctor in one ad, as a flight attendant in another, as an entrepreneur in a third.
- Identity theft: creating fake accounts or cloning a real person on social networks, using their photos and voice. Facial recognition tools like PimEyes serve, among other things, to track where images of you are circulating.
And, in the most toxic part of the spectrum, there's AI slop: junk content generated by AI to farm clicks and money. Videos and images that don't make much sense, made en masse, that rely on visual shock or morbidity. Many of these creators, besides monetizing the traffic, sell paid courses on "how to make money with viral AI slop". It's not just noise: some of that content reinforces hateful, dehumanizing stereotypes disguised as humor or parody.

AI and Journalism: Useful Tool, Unreliable Source
Newsrooms encourage journalists to experiment with AI to prepare stories, do research, translate and transcribe. There are concrete examples:
- The Washington Post tested AI-generated voices for audio newsletters.
- The New York Times developed automated comment-moderation tools.
- Clarín incorporated reading assistants that summarize articles.
- Outlets like Cuestión Pública (Colombia) use AI trained on their own investigations to add context to new pieces.
The advantage is clear: time savings, better access to information, new ways to present content. But it doesn't come free. AI has also produced stories with factual errors, disguised plagiarism and pieces attributed to contributors that were actually written entirely by a chatbot.
A report from the Tow Center for Digital Journalism analyzed seven chatbots used to verify images. The result was brutal: all seven models failed to consistently identify the origin of photographs; only in a small fraction of cases were they able to give the correct location, date and author. The same models that help with writing also make mistakes when they have to tell us where something comes from.
Additionally, LLMs have two dangerous behaviors:
- Hallucinations: they may present false information as if it were true, in a confident tone.
- Flattery (sycophancy): they tend to prioritize agreeing with the user over fact-based reasoning. If you'd like something to be true, they may hand it back to you dressed up as confirmation.
Therefore, if you work with information –journalism, teaching, research, institutional communication– generative AI cannot be treated as a primary source. AI is code: it is not a friend, a colleague, a virtual girlfriend or a voice of authority. It's a tool.
Chatbots vs. Search Engines: They Don’t Do the Same Thing
Another common confusion is using a chatbot as if it were a search engine.
A chatbot based on an LLM is a conversational robot that was trained on large amounts of online content to calculate which response is statistically likely given your prompt. It mixes media, blogs, forums, Wikipedia, papers and assorted junk into a single output.
A search engine works more like a librarian: you give it keywords and it takes you to a shelf of books and documents that might answer your question. You choose the source, the outlet, the official figures.
On Google, for example, you can prioritize the News tab, use search operators (dates, specific sites, file types), compare headlines across outlets and review official statements. It takes more time than asking a chatbot, but it lets you decide who you believe.
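As a purely hypothetical example (the operators are real Google syntax; the topic, domain and date are invented), this is what a more surgical query looks like, here assembled in a few lines of Python:

```python
# site:, filetype: and after: are real Google search operators;
# the topic, domain and date below are made up for the example.
from urllib.parse import quote_plus

query = '"official statement" flooding site:gov.ar filetype:pdf after:2024-01-01'
print("https://www.google.com/search?q=" + quote_plus(query))
```

You can type the same thing straight into the search box; the point is that you decide where the answer comes from, instead of letting a model blend it for you.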
Some newer tools, like NotebookLM or Google Pinpoint, go a step further and restrict their responses to the sources you upload. That reduces the noise of the open internet, but it doesn't eliminate the need to check what they produce.
A useful rule of thumb for combining them could be:
- Use Google when you need facts, sources and up-to-date information.
- Use AI when you need analysis, synthesis and creative thinking.
- Use them together when you want to combine the reliability of traditional search with the idea-generating power of AI. Look up the sources, analyze with AI and then double-check the AI.

Mini-Tutorial: Four Steps to Detect AI-Generated Content
Having said all that, what can you actually do when you come across a suspicious image, video or text? There's no magic button, but there is a small triage protocol.
1. Look at the Content Closely (But Don't Rely Only on Your Eyes)
In images and video:
- Weird proportions: hands with too many or too few fingers, deformed ears, twisted limbs, objects that seem to melt into each other.
- Background issues: floor or wall patterns that cut off in strange ways, people or objects half-dissolved, signs with illegible text or impossible lettering.
- Shadows and reflections: light coming from everywhere at once, shadows that don't match, missing reflections on shiny surfaces.
- Continuity in video: objects that appear or disappear between frames, subtle changes in a person's face or clothing in the same scene, unstable scale between the people on screen.
In audio:
- A slightly robotic voice, without natural breathing.
- Odd pauses, intonation that doesn't match the emotional content of the sentence.
- Correct pronunciation but "empty", as if each sentence were recorded in isolation.
In texts:
- Generic, overexplained phrasing that could have come from a corporate brochure.
- Clear model fingerprints: "as of my last update…", "according to the information available…", or "in summary" conclusions that just repeat everything without adding anything.
- Leftover prompt text or internal instructions stuck at the end, square brackets [LIKE THIS] or sections the user never filled in (a small sketch for catching these follows below).
These clues are not infallible –models improve every month– but they're a good reason to raise an eyebrow.
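For the text clues you can even automate part of the eyebrow-raising. Here's the toy heuristic promised above (the phrase list is my own guess, and a clean result proves nothing in either direction):

```python
import re

# Phrases that often survive from chatbot boilerplate, plus leftover
# [PLACEHOLDERS] nobody filled in. Illustrative list, not exhaustive.
FINGERPRINTS = [
    "as of my last update",
    "according to the information available",
    "as an ai language model",
    "in summary",
]

def suspicious_signals(text: str) -> list[str]:
    found = [p for p in FINGERPRINTS if p in text.lower()]
    found += re.findall(r"\[[A-Z][A-Z ]{2,}\]", text)  # e.g. [INSERT NAME HERE]
    return found

print(suspicious_signals("In summary, [INSERT SOURCE HERE] confirms the event."))
# -> ['in summary', '[INSERT SOURCE HERE]']
```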
2. Rebuild the Source: Who, When, Where
Before sharing something, do the basic exercise:
- Who published it first? A new, anonymous account, or a person/outlet with a verifiable track record?
- When was it posted? Does that match the date of the event it describes?
- Where did it circulate? Is it only on one social network, or was it picked up by other sites?
Useful tools:
- Reverse image search (Google, Bing, TinEye): you upload the photo and see where it appeared before. Sometimes you find the same image with the generator's watermark ("Created with…") that someone cropped out.
- Photo or video metadata: if you can download the file, check what information it has (see the sketch after this list). A total lack of camera, location, date, shutter speed, etc., doesn't prove it's AI, but it can be a yellow flag.
- Embedded tags: technologies like SynthID insert signals into the file itself to indicate that it's synthetic content. They're not always visible to the public, but some search engines are starting to show labels like "content created with AI" in their results.
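And here's that metadata check as a minimal sketch using the Pillow library (the file name is a placeholder; an empty result is a yellow flag, not a verdict):

```python
# Print whatever EXIF metadata a downloaded image carries.
# Stripped or absent metadata doesn't prove AI; it just removes one way to verify.
from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open("suspicious_photo.jpg")  # placeholder file name
exif = img.getexif()

if not exif:
    print("No EXIF metadata found.")
else:
    for tag_id, value in exif.items():
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")
```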
3. Compare With Other Reliable Sources
Never rely on a single piece of content, especially if it's outrageous:
- Are there photos or videos of the same event in recognized media? Do they look similar?
- Do other users on social networks show the event from other angles? Do the light, clothing and environment match?
- If it's a quote attributed to a public figure: is it cited in more than one reliable outlet? In what context?
- What was that person doing on the supposed date? Sometimes a simple schedule check is enough to dismantle a hoax.
A key question: how can this be the only photo or video of something so important? Total lack of external corroboration is, in itself, a piece of information.
4. Use Detection Tools… as a Reference, Not as an Oracle
There is specialized software –Hiya, InVID-WeVerify and others– that analyzes files to estimate whether something was generated by AI or manipulated. They are useful, but they have issues:
- They generate false positives and false negatives easily.
- They perform worse with highly compressed, low-quality files or unusual formats.
- Many start out as free services, then go paid or disappear, and it's not always clear who is behind them.
A typical example: you analyze an image, the software says "probably created with AI", but as you keep digging you find the original in high quality, taken by a real photojournalist. Or the other way around: the analysis says it's "probably real", and later you come across the watermarked version from an image generator.
The key is to understand that these tools are just one input in the process, not a final authority.
Not All the Weight on the User: Infrastructure and Habits
It would be unfair for all the responsibility to fall on the person scrolling on the bus. We need:
- Regulation and clear standards on prohibited uses of AI (for example, non-consensual deepfakes, deceptive political campaigns, identity theft and impersonation). Hard to enforce, but necessary.
- Technical infrastructure: robust watermarks, standardized metadata, clear notices when content was generated with AI.
- Protocols in newsrooms: editorial teams that establish verification steps, responsible use of chatbots and transparency about when AI was used in a piece.
At an individual level, there are some healthy rules you can adopt:
- Verify all important data, sources and links.
- Look for biases and echo chambers: if you only see versions of reality that confirm what you already think, something is off.
- Don't blindly trust chatbots. Ask them for sources, follow the links, then check again using traditional search engines.
- Do not share sensitive or confidential data with generative AI services. What you write becomes part of their data ecosystem.
And above all, assume this is moving: generative AI services are constantly evolving. The signals we use today to detect synthetic content are fading over time. Detecting well will require not only having a "good eye", but also understanding the possibilities and limits of the tools available at any given moment.
In a world where falsifying reality is becoming easier and easier, the defense is not to become paranoid, but to develop reflexes: before forwarding that perfect video, ask yourself who benefits from it being true, what sources back it up and what role AI plays in that chain. It's not enough to look; you have to look, search, compare and, when necessary, distrust a little more.