Davidzmorris
@davidzmorris
Apple has released a new research paper showing that LLMs can't reason. It's a major blow to the worldview of many AI boosters.

Large Language Models, trained on large amounts of existing human language or images, are the basis for tools like ChatGPT. AI leaders like Sam Altman have said LLMs could be the path to "AGI," or Artificial General Intelligence - superintelligent machines that can mimic all human cognition. But LLMs are not built to understand anything - merely to measure and then mimic. And LLMs have been hitting a performance wall - that's why we have seven "GPT 4.x" models but no "GPT 5," since the "5" label is meant to signal the arrival of AGI.

A newer technique called "chain of thought" (or "chain of reasoning") has been added to help LLMs reason. Apple tested these chain-of-thought models - also called Large Reasoning Models (LRMs). https://davidzmorris.substack.com/p/apples-llm-debunking-has-the-agi
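For readers who haven't seen it, "chain of thought" at its simplest is a prompting trick: the model is nudged to write out intermediate steps before answering. A minimal sketch (my illustration, not from the article):

```python
# Minimal sketch of "chain of thought" prompting (illustrative only, not
# from the article): the model is asked to emit intermediate steps before
# its final answer, rather than answering in one shot.
question = "A farmer has 17 sheep. All but 9 run away. How many are left?"

direct_prompt = question                                # one-shot query
cot_prompt = question + "\nLet's think step by step."   # chain-of-thought query

# Large Reasoning Models (LRMs) bake this step-by-step generation into
# training and inference instead of relying on the prompt wording.
print(direct_prompt)
print(cot_prompt)
```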

Davidzmorris
@davidzmorris
The results make clear there is still no "reasoning" going on at all: for instance, Apple found that LRMs can't reliably solve the "Tower of Hanoi" puzzle, which is trivially solvable by a tailored algorithm. In fact, LRMs couldn't reliably solve Hanoi even when the solution algorithm was handed to them in the prompt. Read more, including insights from the researchers via Gary Marcus: https://davidzmorris.substack.com/p/apples-llm-debunking-has-the-agi
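For context on just how trivial this is for conventional software, here's a minimal Python sketch (my illustration, not from Apple's paper) of the classic recursive algorithm, which solves n disks in 2^n - 1 moves:

```python
# Classic recursive Tower of Hanoi solver (illustrative sketch, not from
# Apple's paper). Moves n disks from `source` to `target` in 2^n - 1 moves.
def hanoi(n, source, target, spare, moves):
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # clear the top n-1 disks
    moves.append((source, target))              # move the largest disk
    hanoi(n - 1, spare, target, source, moves)  # restack the n-1 disks

moves = []
hanoi(3, "A", "C", "B", moves)
print(len(moves), moves)  # 7 moves for 3 disks
```

Those few lines solve the puzzle perfectly at any size; per the paper, the LRMs' accuracy instead collapses as the number of disks grows.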