We're Cooked: The Terrifying, Awe-Inspiring Reality of AI Video

Casey Parker
September 30, 2025
8 min read
Credit: Photo by Ben Kim on Unsplash

Do you remember the "Will Smith eating spaghetti" video? It was a bizarre, nightmarish spectacle that went viral just a couple of years ago. The AI-generated clip was unsettling, with distorted faces and pasta that defied physics. It was a fascinating but clumsy party trick, something we could all laugh at as a strange glimpse into a distant future. Well, that future is no longer distant. It arrived while we were busy scrolling. The leap from that spaghetti mess to the jaw-dropping, near-photorealistic clips we see today has been so fast, so exponential, that it's left us collectively breathless.

This isn’t just about cool new tech anymore. A deep dive into the conversations happening in online forums and communities reveals a society grappling with a monumental shift. The discourse is a turbulent mix of sheer amazement and profound, gut-wrenching fear. People are describing the latest AI video models as "mind blowing" and "insanely realistic" in one breath, and then whispering about "existential dread" in the next. The overriding feeling is that we are standing at the edge of a cliff, mesmerized by the view but terrified of the fall. As one user bluntly put it, "we're cooked." This technology is advancing at a "frightening" pace, and society is completely unprepared for the shockwaves.


The Death of Truth and the Rise of Doubt

The most pervasive fear, the one that echoes in every corner of this discussion, is the impending death of objective truth. It is a terrifyingly simple equation: if any video can be convincingly faked, then reality itself becomes a matter of opinion. Users are not just worried about being fooled by a fake video. They are worried about a much more insidious outcome. They fear a world where a "lack of faith in actual video evidence" becomes the norm. It's a world where the very concept of evidence crumbles, and with it, a cornerstone of accountability. Authentic footage of a crime, a confession, or a historic event could be immediately dismissed as a sophisticated fake. As one person articulated, in this new world, "reality won't exist anymore."

This erosion of trust is the perfect breeding ground for widespread manipulation. Malicious actors now have a weapon of unprecedented power. Users envision a tidal wave of political propaganda designed to swing elections, fabricated evidence to ruin an opponent’s career, and sophisticated scams that will prey on the vulnerable. The concern is that "the common person will be fooled," especially when they are not actively looking for a fake. When you are just "casually scrolling" through your social media feed, your guard is down. That is when the most effective deception will strike.


Your Livelihood is on the Line

While many worry about abstract societal harms, a very real and immediate terror is gripping creative professionals. In communities like the videography subreddit, the mood is bleak. These artists and technicians see the writing on the wall. The consensus is that corporations, always looking to cut costs, will inevitably use AI to "skimp on media content." Why hire a film crew, actors, and editors when an AI can generate a passable commercial for a fraction of the cost?

This is not just about a few jobs. It represents a potential hollowing out of entire creative industries. The fear is often tied to broader economic anxieties, with many of the same users now seriously discussing once-fringe ideas like Universal Basic Income (UBI) as a potential solution for a future where human labor is devalued on a massive scale.


How to Spot the Ghost in the Machine

For now, the technology is not perfect. Despite the stunning quality, there are still lingering flaws, little "tells" that give away the AI's artifice. Attentive users have become adept at spotting these imperfections, which often linger in what is known as the "uncanny valley."

The most common giveaway is text. AI struggles immensely with spelling, often producing garbled nonsense like "Hel's Algels" on a jacket instead of "Hell's Angels." Hands are another major problem area, frequently appearing with the wrong number of fingers or moving in unnatural, disjointed ways. Also watch the background for elements that subtly shift or warp between frames. Finally, pay attention to the emotions of the characters. AI often produces a "soulless" or repetitive quality. A common example is the "concluding laugh," a forced chuckle at the end of a sentence that feels programmed rather than genuine.


The Dangers Hiding in Plain Sight

Beyond the obvious deepfakes, users have identified more subtle and perhaps more dangerous threats. A key insight is that the most effective AI fakes might not be the dramatic or shocking ones.

The real danger lies in videos that are "SO innocuous that I'd never even bother to look for clues."

A plausible, even boring, video of a public official making a mundane statement is less likely to trigger our critical thinking than a wild, unbelievable scene. Another sophisticated concern is the "flooding the zone" strategy. Bad actors do not need to make you believe one specific fake video. They just need to create such a massive volume of fake content that the public becomes completely overwhelmed. In this environment of information chaos, any real video evidence can be plausibly denied and dismissed as just another AI fake.

Interestingly, users also noted how an AI’s training data creates strange quirks. Because many models are trained on curated stock footage, the AI characters often appear unnaturally happy, polished, and articulate, like actors in a corporate video. One user even pointed to a practical bottleneck rarely discussed: the immense power consumption required for high quality generation could be a limiting factor in how widespread this technology truly becomes.


What Can You Do? A Guide to Surviving the New Reality

The conversations are not just filled with dread. They are also full of practical warnings and advice. Scams are already active. Users warn that criminals are using AI voice and video cloning to impersonate family members in distress or company executives to defraud people out of thousands of dollars. Few expect governments to help; pessimism about the prospect of meaningful regulation runs deep. Many believe we are on a fast track to the "Dead Internet," a dystopian online world so flooded with AI "slop" that it becomes unusable, forcing us into smaller, verified online communities or back into offline life.

Amidst the fear, here is some consolidated advice for navigating this new terrain.

For All of Us:

  • Adopt a Critical Mindset: The era of "seeing is believing" is officially over. Approach every video with healthy skepticism, especially if it makes you feel a strong emotion like anger or outrage.
  • Learn the Tells: Train your eye to look for the flaws. Check the hands, look for garbled text, watch for weird blinking or unnatural emotional responses, and examine the background.
  • Protect Your Family: Establish a private code word or question with your loved ones. This simple step can defeat an AI impersonation scam in seconds. If someone calls you claiming to be a family member in trouble, ask for the code word to verify their identity.
  • Verify the Source: Before you share or believe a video, do a little digging. Where did it come from? An anonymous account on social media is not a trustworthy source.

For Content Creators:

  • Choose Your Tools Wisely: For those looking to experiment, users identify Google's Veo as being "head and shoulders above the rest" in terms of realism. Be prepared for the cost, however, as some plans are cited at over $100 per month.
  • Master Your Prompts: The quality of your output depends almost entirely on the quality of your input. It is recommended to use an LLM, like ChatGPT, to help you brainstorm and refine detailed, descriptive prompts.
  • Know the Limits: Understand that current models still struggle with character consistency across multiple clips, generating legible text, and controlling fine details. Work with these limitations, not against them.

The ground is shifting beneath our feet, and the bedrock of "seeing is believing" is starting to crack. The awe we feel at this technology's power is matched only by the anger and resignation from those asking, "Who does this benefit?"

As the internet becomes increasingly saturated with AI-generated content, the responsibility falls on each of us to become more critical, more vigilant, and more intentional about how we consume information. The future of truth may depend on it.
