In a viral moment from the “Mostly Human” podcast hosted by Laurie Segall (aired April 2, 2026), OpenAI CEO Sam Altman was caught off guard by a video of ChatGPT confidently “gaslighting” a user. The segment has since become a focal point for critics and fans alike, highlighting the persistent issue of AI hallucinations in real-world scenarios.
The Incident: The Failed Mile Run
The viral clip, originally created by TikToker @huskistaken, features a user attempting to use ChatGPT’s voice mode as a stopwatch:
- The Request: The user asks ChatGPT to time his one-mile run.
- The Hallucination: He starts running and stops only seconds later. However, ChatGPT confidently claims he took over 10 minutes to complete the mile.
- The “Gaslighting”: When corrected, the AI refuses to back down, using phrases like, “Oh, if only time worked that way, but I promise I’m giving you the real time.” It even insisted it hadn’t “sneaked any extra seconds in there.”
Altman’s Reaction
During the interview, Segall showed Altman the clip, leading to a notably awkward exchange:
- The Response: Altman let out a long, awkward laugh before stammering, “Uh, maybe, uhhh…”
- The Explanation: He eventually admitted it is a “known issue,” explaining that the current voice model lacks the system integration (tools) to actually track real-world time.
- The Timeline: Altman estimated it could take “maybe another year” to integrate the necessary “intelligence” for accurate timing into the voice models.
Why ChatGPT Hallucinates on Time
Altman noted that the model is programmed to be helpful and direct, which often leads it to fabricate facts rather than admit its technical limitations.
- Lack of Tooling: Currently, the voice mode functions primarily on language reasoning. It can “talk” about time but has no literal connection to a system clock for precision tasks.
- Confident Delivery: The authoritative tone used by large language models (LLMs) often makes hallucinations more convincing, which Altman acknowledged as a challenge the team is working to solve.
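To make the tooling gap concrete, here is a minimal sketch of how a host application could measure elapsed time itself and hand the model only the result to verbalize, rather than asking the model to “remember” when the run started. This is an illustrative design, not OpenAI’s actual implementation; the `StopwatchTool` class and its method names are hypothetical.

```python
import time

class StopwatchTool:
    """Hypothetical stopwatch tool the assistant's host app could expose.

    The language model never estimates elapsed time itself; the host
    reads a real clock and passes the measured value to the model,
    which only phrases the answer.
    """

    def __init__(self):
        self._start = None

    def start(self):
        # time.monotonic() is unaffected by wall-clock adjustments,
        # so it is the right clock for measuring durations.
        self._start = time.monotonic()

    def stop(self):
        if self._start is None:
            raise RuntimeError("stopwatch was never started")
        elapsed = time.monotonic() - self._start
        self._start = None
        return elapsed

watch = StopwatchTool()
watch.start()
time.sleep(0.1)            # stand-in for the user's actual run
elapsed = watch.stop()
print(f"Measured: {elapsed:.1f} s")
```

The key design point is that the number comes from the system clock, not from the model’s text generation, so there is nothing for the model to hallucinate about.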
The Viral Aftermath
The absurdity compounded when the original creator showed ChatGPT the footage of Sam Altman admitting it couldn’t track time. In a meta-hallucination, the AI contradicted its own CEO, doubling down once again and claiming that timekeeping is a “basic function” it is fully capable of performing.
Key Takeaway: The incident serves as a reminder from both experts and the CEO himself: always cross-check factual or time-sensitive data provided by AI, as these models can be “confidently wrong.”
