Microsoft's Bold Claims of AI 'Human Reasoning' Shot Down By Experts

Chatbots aren’t as smart as you, yet

  • Microsoft researchers claim that AI is approaching human-level thinking abilities. 
  • But experts say AI has a long way to go before it matches human reasoning. 
  • Current chatbots are mimics rather than original thinkers.
An artificial intelligence brain sitting atop an electronic circuit CPU.

Yuichiro Chino / Getty Images

Microsoft researchers claim that AI is approaching human reasoning abilities, but some experts say that's nonsense. 

A recent paper by Microsoft computer scientists claims that AI is moving toward artificial general intelligence, another way of saying a machine can do anything the human brain can do. The paper has been met with skepticism at a time when concerns about AI are rising.

"Human-level performance is one thing," Nick Byrd, a professor at Stevens Institute of Technology who studies how reasoning works, told Lifewire in an email interview. "Human-level reasoning is something else altogether. Digital calculators approached (and surpassed) human-level math performance a long time ago. However, I doubt that their calculations function like what most humans do when they do the math."

Artificial General Intelligence

The recent Microsoft paper broke new ground with its claims that AI chatbots powered by OpenAI's GPT-4 are thinking the way humans do. 

"We demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting," the researchers write in the paper's abstract.

"Moreover, in all of these tasks, GPT-4's performance is strikingly close to human-level performance and often vastly surpasses prior models such as ChatGPT. Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system."

The large language models (LLMs) powering the current generation of AI chatbots have been likened to an advanced form of autocomplete. That's why the new paper by Microsoft researchers surprised many observers. 

The current generation of chatbots often fails to apply human-level reasoning when describing specific facts, Anagha S. Nadkarni, the CEO of AI Detector Pro, a company that finds AI-generated material, said in an email.

"As an example, I once asked it to tell me about the Bengal Famine of 1943, and it didn't do a great job—the answer is circular and weak," Nadkarni added. "Nor did it go very deep into the impact of colonialism on India and Bangladesh, or bother mentioning which countries comprised the area that experienced the famine."

Human-level performance is one thing. Human-level reasoning is something else altogether.

Mimicry rather than reasoning is a better way to describe how current AI chatbots operate, Byrd said. 

"When I ask for a chatbot to write a press release about Microsoft's latest AI achievements, the chatbot is generating a series of words that best fit (statistically) the kind of phrases and sentences in similar press releases from its training data," he added. "Big picture, this fake-it-till-you-make-it mimicry may be similar to what we do, especially when we are still learning how to do something new."

Alexandra Mousavizadeh, the CEO of Evident, a company that analyzes artificial intelligence implementation, said in an email interview that GPT-4 does not mimic human thinking in all its diversity and adaptability. 

"It is multimodal and very, very impressive for a system that doesn't take special prompting to solve problems, but we shouldn't overlook its limitations, like basic logical flaws, hallucinations, and biases," she added. 

AI Is a Mimic, Bound By Constraints

While AI may not reason the way humans do, that doesn't mean the software can't achieve results similar to those of human brainpower, Lou Bachenheimer, the CTO of tech firm Blue Prism, said via email. Bachenheimer suggested asking generative AI to design a billboard. 

AI robot face and programming code on a black background.

Yuichiro Chino / Getty Images

"It would design something to fit the size constraint, a rectangle in this case," he said. "Unlike AI models, however, humans have the capacity to think beyond given constraints. In this example, a human billboard designer might have the idea of adding a cardboard cutout to the billboard and making a case for why it should not just be a rectangle. While the AI can create the design, it's using prebuilt assets and will struggle to innovate the way humans can."

Even skeptics of the Microsoft paper say that AI might one day approach human reasoning abilities. 

"It clearly will in some domains," Mousavizadeh said. "In others, it will be superhuman, like in chess, fault detection, or writing code. We should look for the emergence of human-like reasoning where human consciousness's structure, adaptability, and complexity are being mimicked. For now, that's not really present."
