Facebook's Deepfake Tech Won't Save Us, Experts Say

By Allison Murray, Tech News Reporter
Updated on June 25, 2021, 01:59 p.m. EDT
Fact checked by Rich Scherr

As deepfakes become easier to make, new and improved ways of spotting them have become a priority. Facebook's deepfake-spotting technology uses reverse machine learning to determine whether a video is a deepfake. Experts say blockchain technology would be the best way to verify whether a video is real, since the method relies on contextual data.

Facebook is confident in its machine learning model's ability to combat deepfakes, but experts say machine learning on its own won't save us from being duped. Companies like Facebook, Microsoft, and Google are all working to keep deepfakes from spreading across the web and social networks. While methods differ, there's one potentially foolproof way to spot these false videos: blockchains.

"[Blockchains] just give you lots of potential to validate the deepfake in a way that's the best form of validation that I can see," Stephen Wolfram, founder and CEO of Wolfram Research and author of A New Kind of Science, told Lifewire over the phone.
Facebook's Deepfake-Spotting Tech

Deepfake technology has grown rapidly over the past few years. The misleading videos use machine learning methods to superimpose someone's face onto another person's body, alter background conditions, fake lip-syncing, and more. They range from harmless parodies to clips that make celebrities or public figures appear to say or do something they didn't. Experts say the technology is advancing quickly, and that deepfakes will only get more convincing (and easier to create) as it becomes more widely available and more innovative.

Facebook recently gave more insight into its deepfake-detecting technology, developed in partnership with Michigan State University. The social network says it relies on reverse engineering from a single AI-generated image to the generative model used to produce it. Research scientists who worked with Facebook said the method relies on uncovering the unique patterns behind the AI model used to generate a deepfake.

"By generalizing image attribution to open-set recognition, we can infer more information about the generative model used to create a deepfake that goes beyond recognizing that it has not been seen before. And by tracing similarities among patterns of a collection of deepfakes, we could also tell whether a series of images originated from a single source," wrote research scientists Xi Yin and Tan Hassner in Facebook's blog post about its deepfake-spotting method.

Wolfram says it makes sense to use machine learning to spot the work of an advanced AI model (a deepfake). However, there's always room to fool the technology.

"I'm not at all surprised that there's a decent machine learning way of [detecting deepfakes]," Wolfram said. "The only question is if you put in enough effort, can you fool it?
I'm sure that you can."

Combatting Deepfakes a Different Way

Instead, Wolfram said he believes blockchain would be the best option for accurately spotting certain types of deepfakes. His preference for blockchain over machine learning goes back to 2019, and he said that, ultimately, the blockchain approach can provide a more accurate solution to the deepfake problem.

"I'd expect image and video viewers could routinely check against blockchains (and 'data triangulation computations') a bit like how web browsers now check security certificates," Wolfram wrote in an article published in Scientific American.

Because blockchains store data in blocks that are chained together in chronological order, and because decentralized blockchains are immutable, the data entered is irreversible. Wolfram explained that by putting a video into a blockchain, you'd be able to see the time it was taken, the location, and other contextual information that would let you tell whether it has been altered in any way.

"In general, the more metadata there is that contextualizes the picture or video, the more likely you are to be able to tell," he said. "You can't fake time on a blockchain."

However, Wolfram said the method used, whether machine learning or blockchain, depends on the type of deepfake you're trying to protect against (e.g., a video of Kim Kardashian saying something silly versus a video of a politician making a statement or suggestion).

"The blockchain approach protects against certain kinds of deepfakes, just as the machine learning image processing protects against certain kinds of deepfakes," he said.

The bottom line, it seems, is vigilance for all of us when it comes to combating the coming deepfake deluge.
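Wolfram's point that "you can't fake time on a blockchain" can be illustrated with a minimal hash-chain sketch. This is not any production blockchain or Wolfram's actual proposal, and the metadata fields and filenames are invented for illustration; the idea it demonstrates is simply that each block commits to the previous block's hash, so editing an earlier timestamp invalidates the chain.

```python
import hashlib
import json

def block_hash(prev_hash, metadata):
    # Hash the previous block's hash together with this block's metadata,
    # so every block commits to the entire history before it.
    payload = json.dumps({"prev": prev_hash, "meta": metadata}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def add_block(chain, metadata):
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"prev": prev, "meta": metadata,
                  "hash": block_hash(prev, metadata)})

def verify(chain):
    # Recompute every hash; any edit to earlier data breaks the chain.
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(prev, block["meta"]):
            return False
        prev = block["hash"]
    return True

chain = []
add_block(chain, {"video": "rally_speech.mp4",        # hypothetical entries
                  "time": "2021-06-25T13:59:00Z",
                  "location": "41.88,-87.63"})
add_block(chain, {"video": "interview.mp4",
                  "time": "2021-06-25T14:10:00Z"})
print(verify(chain))                                  # True: chain is intact
chain[0]["meta"]["time"] = "2020-01-01T00:00:00Z"     # try to "fake time"
print(verify(chain))                                  # False: tampering detected
```

Rewriting one timestamp changes that block's recomputed hash, so verification fails, which is the contextual-data guarantee Wolfram describes.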
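On the machine-learning side, the "unique patterns" idea from Facebook's research can be made concrete with a deliberately simplified toy. This is not Facebook's method, which trains neural networks for image attribution; everything below, including the fixed noise "fingerprint" and the 0.5 similarity threshold, is an invented stand-in. The toy pretends each generator stamps a consistent pattern on its output, and correlates outputs to guess whether two fakes share a source.

```python
import math
import random

def cosine(a, b):
    # Cosine similarity between two flattened "images".
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def same_source(a, b, threshold=0.5):
    # High similarity suggests the same generator produced both.
    return cosine(a, b) > threshold

rng = random.Random(42)
# Each toy "generator" leaves its own fixed noise pattern on every output.
fingerprint_a = [rng.gauss(0, 1) for _ in range(256)]
fingerprint_b = [rng.gauss(0, 1) for _ in range(256)]

def generate(fingerprint):
    # An "image" here is weak random content plus the generator's pattern.
    return [rng.gauss(0, 0.2) + f for f in fingerprint]

img1, img2 = generate(fingerprint_a), generate(fingerprint_a)
img3 = generate(fingerprint_b)
print(same_source(img1, img2))   # same generator: high similarity
print(same_source(img1, img3))   # different generators: low similarity
```

Two outputs of the same toy generator correlate strongly, while outputs of different generators do not, mirroring (in miniature) the researchers' claim that tracing shared patterns can reveal whether a series of images came from a single source.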