AI Claim: "ChatGPT is just autocomplete"

At a mechanical level, this is true: language models predict the next token based on everything that came before. But your phone's autocomplete suggests "you" after "love." ChatGPT [passes the bar exam at the 90th percentile](https://pmc.ncbi.nlm.nih.gov/articles/PMC10894685/) and [scores 95% on medical licensing exams](https://pmc.ncbi.nlm.nih.gov/articles/PMC10948644/). Same basic mechanism. Wildly different capabilities. The word "just" does the same thing here as it does in "AI is just statistics" — it takes an accurate description of the mechanism and uses it to wave away everything the mechanism actually produces.
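The mechanism itself is easy to sketch. Here is a toy version of next-token prediction, with a hypothetical hand-written probability table standing in for what a real model computes — a neural network scoring every token in a large vocabulary, conditioned on the full context:

```python
# Toy next-token prediction. The probability table below is invented for
# illustration; a real language model learns these distributions from data
# and conditions on the entire preceding context, not just two tokens.
NEXT_TOKEN_PROBS = {
    ("i", "love"): {"you": 0.9, "pizza": 0.1},
    ("love", "you"): {".": 1.0},
}

def predict_next(tokens, table):
    """Return the most likely next token given the trailing context."""
    key = tuple(tokens[-2:])
    candidates = table.get(key)
    if not candidates:
        return None  # no continuation known for this context
    return max(candidates, key=candidates.get)

def generate(prompt, table, max_tokens=5):
    """Repeatedly append the predicted next token -- the whole trick."""
    tokens = list(prompt)
    for _ in range(max_tokens):
        nxt = predict_next(tokens, table)
        if nxt is None:
            break
        tokens.append(nxt)
    return tokens
```

Both your phone's keyboard and a frontier model are, at this level of abstraction, running that loop. The difference lies entirely in what replaces the lookup table.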

Verdict: Mechanically accurate. As a full description, increasingly inadequate. The "just" is the problem, not the "autocomplete." The mechanism really is next-token prediction, and it really does explain the hallucinations, the confabulations, and the brittle prompt sensitivity. Those failures are structural, not temporary, and anyone building on these systems needs to understand that. But a system running on that mechanism has also [built internal world models](https://arxiv.org/abs/2210.13382), [discovered new mathematics](https://www.nature.com/articles/s41586-023-06924-6), and [planned outputs ahead of time](https://www.anthropic.com/research/tracing-thoughts-language-model). We don't yet have good vocabulary for what's happening inside these systems. "Just autocomplete" closes a door that should stay open. So does treating them as something they're not.