Language ≠ Thought, Large Models Can't Learn Reasoning: A Nature Article Sparks an Uproar in the AI Community
MIT Study Reveals Shocking Truth: Language Skills Don’t Equal Intelligence in AI
Have we had it all wrong?
Why do large language models (LLMs) lack spatial intelligence? Would training GPT-4 on non-language data make it smarter? These questions may now have "standard answers."
A recent article in the top journal Nature, from researchers at MIT and other institutions, reports that the neural networks in the human brain responsible for producing and parsing language do not handle formal reasoning. It also suggests that reasoning does not require language as a medium.
The paper claims that "language is primarily a tool for communication rather than thought" and is not necessary for any of the forms of reasoning it tested, a conclusion that has sparked significant discussion in the tech community.
Could it be that, as the linguist Noam Chomsky argued, the hype around ChatGPT is a waste of resources, and that the path to artificial general intelligence (AGI) through large language models is completely wrong?