🧠 Issue #25 – The Countdown to AI-Generated People You Can’t Detect
We’re approaching a tipping point: when you watch a video on social media, you won’t know if the person smiling back is real or AI-generated. The question isn’t if—it’s how soon... and what that will mean for being human.

🔍 The Spark
Earlier this year, OpenAI unveiled Sora—a tool capable of generating photorealistic videos from text prompts. In seconds, it can create a convincing clip of a person hiking, talking, even crying.
Now imagine this: a TikTok influencer with perfect charisma. Posts daily. Tells compelling stories. Has never once existed.
With voice cloning, deepfakes, and AI-generated scripts now converging into seamless pipelines, the timeline to indistinguishable AI influencers is shrinking fast. Experts at MIT’s Center for Advanced Virtuality say we may be just 12–18 months away from widespread, high-quality AI personas populating social feeds—with no visual or audio giveaways.
💡 The Insight
At first, this feels like a problem of detection. How will we spot what’s real?
But zoom out, and it’s something deeper:
It’s not just about being fooled—it’s about how our brains will adapt to a world where the faces we trust might not be human.
Psychologists studying “parasocial relationships” have already shown that people form emotional bonds with perceived personalities, even when those personalities are fictional or artificial. Now we’re scaling that up with tools that let anyone build a flawless, customizable “person” who never messes up, never sleeps, and always says what the algorithm likes.
The result?
Influence without accountability
Connection without reciprocity
Performance mistaken for presence
🤯 The Tension
Here’s the human paradox:
We want to believe what we see. We’re wired for faces, voices, microexpressions. AI can now mimic all of that—flawlessly.
And yet… we’re also craving something deeper:
Messy truth
Imperfect vulnerability
The rare, electric feeling of real recognition
The danger isn’t just deception—it’s fatigue. As more “people” on our feeds turn out to be crafted personas, we may start to doubt everything. Or worse… stop caring.
And that erosion of trust? It changes how we relate to each other, too.
🧠 The Human Prompt
Next time you connect with someone online, pause and ask:
What signals tell me they’re real? And do those signals still matter as much as they used to?
Because maybe the future of being human online isn’t just about knowing what’s fake. It’s about remembering what makes something feel real—and not letting go of that.
🔗 Curiosity Clicks
OpenAI’s Sora Demo
See the stunning realism that’s reshaping what’s possible in AI-generated video.
Why People Bond With Fake Influencers – HackerNoon
An exploration of how humans build emotional connections to virtual characters, with research showing over 40% of Gen Z "don't care whether an influencer is real as long as the content is good."
Detecting Synthetic Media in 2025 – MIT Media Lab
A forward-looking review of where AI detection tools stand today—and why they’re struggling to keep up.
💬 Quote That Hits
“In a world where authenticity can be synthesized, our capacity to care about the real may become the most valuable human trait.”
— MIT Center for Advanced Virtuality
🤔 Worth Considering
In 12 months, your feed might be full of perfect strangers who don’t exist.
They’ll look just like us. Sound just like us. Tell stories that move us.
And some will be built better than us—by design.
But being human was never about polish. It was about presence. Fallibility. The pause before you say something real.
As AI gets better at mimicking all the things we think make us trustworthy, maybe the only way forward is to double down on what can’t be faked.
Let’s not just ask what’s real.
Let’s feel what’s real—and teach each other how to stay grounded in it.
More soon,
— Jesse