AI Claim: "AI can't actually reason"

This one depends on what you mean by "reasoning." AI now [solves olympiad-level mathematics](https://deepmind.google/blog/advanced-version-of-gemini-with-deep-think-officially-achieves-gold-medal-standard-at-the-international-mathematical-olympiad/) with rigorous proofs — and [falls apart when you add an irrelevant sentence to a math problem](https://machinelearning.apple.com/research/gsm-symbolic). Both are true at the same time. The answer isn't "yes it can" or "no it can't." We're watching systems do things we used to call reasoning while failing at things we wouldn't consider hard. That gap is genuinely confusing, and if you're not confused by it, you're probably not paying close enough attention.

Verdict: Uncertain — and that's the most honest answer available. AI can do things that look like reasoning, including mathematical proofs rigorous enough for [gold medals](https://deepmind.google/blog/advanced-version-of-gemini-with-deep-think-officially-achieves-gold-medal-standard-at-the-international-mathematical-olympiad/). It also fails in ways genuine reasoning wouldn't — [breaking down when irrelevant information is added](https://machinelearning.apple.com/research/gsm-symbolic), getting tripped up by problems it should find trivial. Both are true at the same time. "AI can't reason" is getting harder to say with a straight face. But whatever AI's reasoning is, it works differently from ours in ways nobody fully understands. Treating that gap as either trivial or permanent would be a mistake.