Has the AGI age started? ChatGPT reportedly passed the Turing test
- Marijan Hassan - Tech Journalist
- 5 days ago
- 2 min read
For decades, the Turing test has served as a benchmark for artificial intelligence, probing whether a machine can exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Conceived by British mathematician Alan Turing in 1950, the test, often referred to as the "imitation game," involves a human interrogator engaging in text-based conversations with both a hidden human and a machine. If the interrogator cannot reliably distinguish between the two, the machine is said to have passed the test, suggesting a significant level of human-like intelligence.

Now, a new study claims that this long-standing threshold has been crossed. Researchers report that OpenAI's GPT-4.5 has passed a rigorous, three-party version of the Turing test.
The study involved nearly 300 participants who, over eight rounds, took on the role of either an interrogator or one of the two "witnesses" being interrogated online.
The AI persona effect
A crucial aspect of the experiment was the prompting of the AI models. When given a "no-persona" prompt, with basic instructions to convince the interrogator of their humanity, GPT-4.5's success rate was a mere 36 percent.
However, when instructed to embody a specific persona, such as a young person knowledgeable about internet culture, its success rate soared to the groundbreaking 73 percent. For comparison, GPT-4o, without personal prompting, achieved only a 21 percent success rate.
While these results are undeniably intriguing, experts caution against interpreting them as absolute proof of human-level thought in AI. François Chollet, a software engineer at Google, previously noted that the Turing test was intended more as a thought experiment than a definitive test.
Modern large language models are exceptionally skilled at mimicking human conversation due to their training on vast datasets of human text, even if they lack genuine understanding.
Jones himself acknowledges the complexity of determining whether LLMs possess true human-like intelligence. However, he emphasizes the practical implications of his findings, stating that they provide further evidence that "LLMs could substitute for people in short interactions without anyone being able to tell." This raises concerns about potential job automation, more effective social engineering attacks, and broader societal disruption.
Ultimately, the study underscores not only the rapid advancements in AI but also the evolving nature of how humans perceive technology. As we become more familiar with interacting with AI, our ability to distinguish it from human conversation may also change.