Artificial Intelligence Isn't Taking Over Anytime Soon, Right?
Don't fear the singularity

By Sascha Brodsky, Senior Tech Reporter
Published on October 7, 2021
Fact checked by Rich Scherr

Key Takeaways

Don't worry about AI evolving and taking over the world, some experts say.
But a former Google executive said that AI would overtake human intelligence.
The real danger of AI is its ability to divide humans, according to one analyst.

Is artificial intelligence (AI) coming to conquer us?

Former Google executive Mo Gawdat said in a recent interview that AI will soon overtake human intelligence, with dire consequences for our civilization. As evidence, Gawdat claims he witnessed a robot arm making what he perceived to be a taunting gesture toward AI researchers. But some experts beg to differ.

"AI is woefully inadequate in many domains and relies heavily on Big Data and human surveillance to fuel its software models," Sean O'Brien, a visiting fellow at the Information Society Project at Yale Law School, told Lifewire in an email interview.

Smarter Than Who?

Gawdat joins a long line of doomsayers who warn of an impending AI apocalypse. Elon Musk, for example, claims that AI might one day conquer humanity.

"Robots will be able to do everything better than us," Musk said during a speech. "I have exposure to the most cutting edge AI, and I think people should be really concerned by it."

AI developers at Google X, Gawdat claimed in the interview, had a fright while building robot arms able to find and pick up a ball. One day, he said, an arm grabbed the ball and held it up to the researchers in a gesture that, to him, looked like showing off.

"And I suddenly realized this is really scary," Gawdat said. "It completely froze me."

Enter the Singularity

Gawdat and others concerned about future AI talk about the concept of "the singularity," the moment when artificial intelligence becomes smarter than humans.

"The development of full artificial intelligence could spell the end of the human race," physicist Stephen Hawking once famously told the BBC. "It would take off on its own and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete and would be superseded."

But O'Brien called the singularity "a fantasy that relies upon fundamental misunderstandings about the nature of body and mind as well as a misreading of the writing of early pioneers in computing such as Alan Turing."

Artificial intelligence isn't close to being able to match human intelligence, O'Brien said.
AI analyst Lian Jye Su agrees that AI can't match human intelligence, although he's less certain about when that could change.

"Most, if not all, AI nowadays are still focused on a single task," he told Lifewire in an email interview. "Therefore, the estimate is that we will need one or two new generations of hardware and software before technological singularity is within reach. Even when the technology is mature, we also need to assume that the developer(s) of AI is given complete authority over its creation without any check and balance and a built-in 'kill' switch or fail-safe mechanism."

True Concerns About AI

The real danger of AI is its ability to divide humans, Su said. AI already has been used to seed discrimination and spread hatred through deepfake videos, he noted. And, Su said, AI has helped "social media giants create echo chambers through personalized recommendation engines, and foreign powers alter political landscapes and polarize societies through highly effective targeted advertising."

Just because AI may be a poor and misguided model of human cognition doesn't mean it isn't dangerous or that it can't approach or surpass humans in many areas, O'Brien said. "A pocket calculator is better and faster at arithmetic than a human will ever be, just as machines can be much stronger than humans and 'fly' or 'swim,'" he added.

How AI affects humans depends on how we use it, O'Brien said. Robot labor, for example, could free people up for creative work, or it could force them into poverty.

"Likewise, we are now well aware of the perils of AI and its inherent biases, which are misused across the digital landscape to repress people of color and marginalized populations," he added.