How AI Has Pushed Humans to Have More Creative Thought Processes

Helping you strategize and make better decisions

  • Researchers have found that AI can improve human decisions. 
  • Using AI for strategies can remove preconceived notions. 
  • But experts say that relying too much on AI can erode trust.

Artificial intelligence (AI) might one day be smarter than humans, so we might as well take advantage of all that processing power. 

A new paper suggests that superhuman artificial intelligence can improve decision-making. Researchers at the City University of Hong Kong say AI has prompted humans to become more creative in gaming. It's part of a growing effort to incorporate AI into our strategies. 

"AI can help humans sift through mountains of data and find insights that would have been otherwise missed," Maya Mikhailov, the founder of SAVVI AI, told Lifewire in an email interview. "In addition, it can learn how to make lower-level recurring lower-level decision-making so that humans can focus on more involved scenarios."

Testing AI With Games

The researchers used the ancient game of Go to look at decision-making. They started by examining 5.8 million move decisions made by professional Go players over the past 71 years. The study found that humans began to make significantly better decisions following the advent of superhuman AI. 

"Our findings suggest that the development of superhuman AI programs may have prompted human players to break away from traditional strategies and induced them to explore novel moves, which in turn may have improved their decision-making,” the authors wrote in their paper. 

The advantages of letting AI make decisions may hold for areas beyond gaming. Mikhailov said that one benefit of AI informing human decision-making is that it allows the data to decide rather than starting with a preconceived notion of the outcome. 

"You find this a lot in marketing, for example, where 'personas' are developed to guess who the ideal consumer is, their habits, and what they want out of the product,” Mikhailov added. “AI, in this case, doesn't have those notions of who the customer is supposed to be. It just sifts through the data looking for predictions of outcomes toward a goal."

Even your medical care could eventually be informed by AI input. Robert Pearl, a professor and physician at Stanford University, said in an email that AI would improve medical practice by aiding the decisions that doctors make.  


"Every doctor consistently using it will be better able to diagnose and treat patients than ones who fail to embrace it,” he added. “Patients will be empowered to better understand their health and manage their medical problems. And the technology will increase patient safety and reduce medical errors in hospitals.”

Unlike other computer applications, generative AI closely resembles how doctors learn and solve medical problems, Pearl said. It draws on a vast database that includes all published material, applies complex rules to sort through the information, and then predicts the answer that best fits the symptoms and clinical findings. 

“In the future, it will augment the information it is given by asking questions as physicians do to find the best clinical fit,” he added. “Overall, generative AI applies a logical sequence of data-analytic steps and undergoes constant learning, similar to a medical student or resident. Finally, with medical knowledge doubling every 73 days, it will be able to do what no physician is capable of—staying up to date.”

AI Is the Sum of Its Input

As brilliant as AI has become, at least for games, harnessing its power might not be appropriate for every situation. 


Mikhailov said that relying too much on historical data can encode biases from the past. She pointed to the book "Weapons of Math Destruction," in which author Cathy O'Neil gives examples of biases, in areas ranging from lending to policing, that can result from AI trained on historical data. 

The dangers are especially acute "when the system is not reinforced with feedback mechanisms that include new data and human supervision," Mikhailov said. 

Relying on AI can also erode trust, Giancarlo Erra, the CEO of AI-based startup Words.tel, said in an email interview. He noted that large language models like GPT are trained to produce polished, convincing language. 

"AI is a statistical tool giving us back, expressed in nice words and logical thinking, what it was trained on," he added. "It means there's actually 'no reasoning' or 'understanding' behind it, so it can be a very believable liar, not knowing it's lying. Fact-checking (if needed) and critical thinking are very important to use AI correctly: as an extension of our real thinking and understanding."
