Protect the Future: How to Stop AI from Overpowering Humanity

Regulations may help

By Sascha Brodsky, Senior Tech Reporter
Published on February 1, 2023
Fact checked by Jerri Ledford

- Oxford researchers warn that super-powerful AI could destroy humanity.
- Experts say that even ordinary AI could spell trouble for people.
- One way to control AI might be to ensure it's properly regulated.

Artificial intelligence (AI) may prove to be a grave threat to the world, but experts say there are ways we can fight back.

Oxford University scientists recently warned that "superhuman AI" could end up being at least as dangerous as nuclear weapons. The warning is part of growing concern about the potential dangers of AI.

"Existing AI technology is already 'powerful' enough to warrant great care when deploying at global or national scale," Kevin Gordon, the vice president of AI Technologies at NexOptic, an AI imaging solutions company, told Lifewire in an email interview. "No need to imagine Terminator-esque scenarios; it could be something as boring as AI that assists legal or regulatory processes that over time steers the law in unfavorable ways, or AI that is designed to maximize user engagement on social media platforms."

Superhuman AI?

Michael Cohen, an Oxford doctoral student, offered a bleak prognosis for AI during a talk before a UK science and technology committee, warning that advanced AI might wipe out the human race.

But Selmer Bringsjord, the director of the AI & Reasoning Lab at Rensselaer Polytechnic Institute, who specializes in AI's mathematical and philosophical foundations, said in an email interview that AI doesn't need to be superhuman to be dangerous.

"AI that merely approaches human-level intelligence, and is both autonomous and kinetically powerful, would be enough to end humanity," he said.

To illustrate his point, Bringsjord suggested the hypothetical case of a nuclear bomb that operates rather like a mine. The device is hidden under the soil, and it will detonate if and only if the temperature of that soil reaches 2 degrees Celsius.

"In the language of AI, this 'artificial agent' perceives temperature around it, and performs one of two actions as a result, detonate versus stay dormant," Bringsjord said. "This AI is enough to cause a problem for humanity, is it not? Yet it's intellectually dim."
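In software terms, the agent Bringsjord describes amounts to a single threshold check. The minimal sketch below (in Python, with purely illustrative names and values, not code from any real system) shows how little "intelligence" such an agent needs:

def choose_action(soil_temperature_c: float) -> str:
    """Perceive one input, the soil temperature, and pick one of two actions."""
    # The agent's entire decision-making is this one comparison.
    if soil_temperature_c >= 2.0:
        return "detonate"
    return "stay dormant"

# No learning, planning, or language ability is involved; autonomy plus
# a kinetic effect is what makes the agent dangerous.
print(choose_action(1.5))  # stay dormant
print(choose_action(2.3))  # detonate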
"Rational, technologized governments all need to regulate AI and the 'raw material' that can be used to engineer supremely dangerous AI. How AI is approached corresponds to different raw materials." AI that merely approaches human-level intelligence, and is both autonomous and kinetically powerful, would be enough to end humanity. Bringsjord pointed to the case of popular language models like ChatGPT as an example of AI that should be regulated. "Why? Because by definition, the actions of such AI can't be formally verified as correct by simply inspecting the basis for the actions," he added. "That's okay for a silly chat; it's not okay for an AI that is autonomous and powerful, obviously." Not everyone is confident that dangerous AI can be stopped. "In its current form, there are many ways researchers and businesses can safeguard users of their applications," Gordon said. "In the long term, with true 'superhuman' AI, it's not nearly as clear." Gordon pointed out that prominent thinkers in the field have proposed "merging" with superhuman AI so that humans scale at the same pace as technology and are not left behind. "Perhaps there will be a way to instill values into these systems such that they behave more or less in line with human-centric values," Gordon said. "But it's unclear how exactly this could be done effectively for a truly 'superhuman' technology." Some observers say the AI threat is overblown. Christopher White, the president of NEC Laboratories America, a technology research lab, said in an email that AI could be compared to a calculator. MR. Cole_Photographer / Getty Images "It can do math faster than humans, but it's a tool that no one would consider dangerous or worthy of regulation," he added. "It is natural that we would aspire to develop thinking tools that augment our ability to think and solve problems." White pointed out that AI systems are typically "superhuman" only for specific and limited tasks. "Despite our tremendous advances in AI, we don't yet have the resources or technology to develop an all-purpose thinking machine that outperforms humans in more than a few dimensions," he said. Was this page helpful? Thanks for letting us know! Get the Latest Tech News Delivered Every Day Subscribe Tell us why! Other Not enough details Hard to understand Submit