Why Researchers Can't Agree on AI Consciousness

The idea has some surprising defenders

Key Takeaways

  • A top researcher says AI is already conscious.
  • But other AI experts say that computers are a long way from gaining human-level cognitive abilities, including consciousness. 
  • Determining whether something is conscious can be tricky.

The idea of conscious artificial intelligence (AI) conjures images of machines taking over the world, but experts disagree over whether to take the concept seriously. 

A top AI researcher recently claimed that AI is already smarter than we think. Ilya Sutskever, the chief scientist of the OpenAI research group, tweeted that "it may be that today's large neural networks are slightly conscious." But other AI experts say that it's far too soon to determine anything of the sort. 

"To be conscious, an entity needs to be aware of its existence in its environment and that actions it takes will impact its future," Charles Simon, the CEO of FutureAI, told Lifewire in an email interview. "Neither of these is present in current AI."


Sutskever has previously warned that super-smart AI could cause problems. Interviewed in the AI documentary iHuman, he said advanced AI will "solve all the problems that we have today" but also warned that it has "the potential to create infinitely stable dictatorships."

OpenAI was founded as a nonprofit meant to head off the risks posed by intelligent computers, but it has also conducted research aimed at creating advanced AI. 

While many scientists have dismissed Sutskever's claim that AI is conscious, he's got at least one well-known defender. MIT computer scientist Tamay Besiroglu defended Sutskever in a tweet. 

"Seeing so many prominent [machine learning] folks ridiculing this idea is disappointing," Besiroglu wrote on Twitter. "It makes me less hopeful in the field's ability to seriously take on some of the profound, weird, and important questions that they'll undoubtedly be faced with over the next few decades."

What Is Consciousness?

Even determining whether something is conscious can be tricky. AI researcher Sneh Vaswani told Lifewire in an email that consciousness has multiple stages. AI has made "decent inroads" into the first stages, he said. 

"Today, a machine can understand emotions, build a personality profile and adapt to a human's personality," he added. "As AI evolves, it's moving toward the advanced stages of consciousness faster than we can even comprehend."

There are many definitions of consciousness, and some would contend that trees and ants are somewhat conscious, an idea that "stretches the definition beyond common usage," Simon said. He contends that self-awareness is the comprehension of oneself as a conscious entity. 

"Both consciousness and self-awareness manifest in a number of behaviors such as showing self-interest but also in an internal sensation," Simon said. "If AIs are truly conscious, we will be able to observe the behaviors but will have little knowledge of the internal sensation. It is possible to fake consciousness with a library of conscious-appearing behaviors like referring to itself as 'I,' but a truly conscious entity is able to plan and consider multiple outcomes."

Vaswani is optimistic about the outcome of creating super-smart AI even though Elon Musk is among those who have warned that conscious AI could lead to humanity's destruction. 


"When AI fully gains consciousness, it will complete an 'incomplete' society: Humans and AI will coexist," Vaswani said. "We'll achieve larger goals together, and AI will seamlessly blend into our world."

Some AI experts say that the very concept of conscious AI is more sci-fi than reality. Discussion of the issue tends to overemphasize "Terminator-style" robots rather than the very real potential damage from biased AI that already exists, Triveni Gandhi, Responsible AI Lead at AI company Dataiku, said in an email to Lifewire. 

"We may not be facing down the next Ex Machina, but we are facing some real challenges," Gandhi said. "This can be seen in the misuse of heavily biased AI to predict healthcare costs, in recruitment tools that filter out resumes unfairly, or in credit lending models that reinforce existing inequalities."

AI is not inherently good or bad; it simply learns from data and does what we tell it to do, Gandhi argues. 

"Human biases find their way into data and machine learning models, so we should be very clear about the data we use, why we are choosing to use AI in that capacity, and how the choices are then presented to people affected by an AI system," Gandhi added.
