Your LinkedIn Contact Could Be a Deepfake

Seeing shouldn't be believing

By Sascha Brodsky, Senior Tech Reporter
Published on April 14, 2022, 01:50 PM EDT
Fact checked by Jerri Ledford

- A recent study found that many contacts on LinkedIn aren't real people.
- It's part of the growing problem of deepfakes, in which a person in an existing image or video is replaced with a computer-altered representation.
- Experts recommend exercising caution when clicking on URLs or responding to LinkedIn messages.

[Image: Liyao Xie / Getty Images]

You might want to think twice before connecting with that friendly face online.

Researchers say many contacts on the popular networking site LinkedIn aren't real people. It's part of the growing problem of deepfakes, in which a person in an existing image or video is replaced with a computer-altered representation.

"Deep fakes are important in that they effectively eliminate what was traditionally considered a surefire method of confirming identity," Tim Callan, the chief compliance officer of the cybersecurity firm Sectigo, told Lifewire in an email interview.
"If you can't believe a voice or video mail from your trusted colleague, then it has become that much harder to protect process integrity."

Linking to Who?

The investigation into LinkedIn contacts started when Renée DiResta, a researcher at the Stanford Internet Observatory, got a message from a profile listed as Keenan Ramsey. The note seemed ordinary, but DiResta noted some strange things about Keenan's profile. For one thing, the image portrayed a woman with only one earring, perfectly centered eyes, and blurred hair strands that seemed to disappear and reappear.

On Twitter, DiResta wrote, "This random account messaged me… The face looked AI-generated, so my first thought was spear phishing; it'd sent a 'click here to set up a meeting' link. I wondered if it was pretending to work for the company it claimed to represent since LinkedIn doesn't tell companies when new accounts claim to work somewhere… But then I got inbound from another fake, followed by a subsequent note from an obviously *real* employee referencing a prior message from the first fake person, and it turned into something else altogether."

DiResta and her colleague, Josh Goldstein, launched a study that found more than 1,000 LinkedIn profiles using faces that appear to be created by AI.

Deep Fakers

Deepfakes are a growing problem. Over 85,000 deepfake videos were detected up to December 2020, according to one published report. Recently, deepfakes have been used for amusement and to show off the technology, including one example in which former President Barack Obama talked about fake news and deepfakes.

"While this was great for fun, with adequate computer horsepower and applications, you could produce something that [neither] computers nor the human ear can tell the difference," Andy Rogers, a senior assessor at Schellman, a global cybersecurity assessor, said in an email. "These deepfake videos could be used for any number of applications.
For instance, famous people and celebrities on social media platforms such as LinkedIn and Facebook could make market-influencing statements and other extremely convincing post content."

[Image: AndSim / Getty Images]

Hackers, specifically, are turning to deepfakes because both the technology and its potential victims are becoming more sophisticated.

"It's much harder to commit a social engineering attack through inbound email, especially as targets are increasingly educated about spear phishing as a threat," Callan said.

Platforms need to crack down on deepfakes, Joseph Carson, the chief security scientist at the cybersecurity firm Delinea, told Lifewire via email. He suggested that uploads to sites go through analytics to determine the authenticity of the content.

"If a post has not had any type of trusted source or context provided, then correct labeling of the content should be clear to the viewer that the content source has been verified, is still being analyzed, or that the content has been significantly modified," Carson added.

Experts recommend users exercise caution when clicking on URLs or responding to LinkedIn messages. Be aware that voice and even moving images of supposed colleagues can be faked, Callan suggested. Approach these interactions with the same level of skepticism you hold for text-based communications.

However, if you're worried about your own identity being used in a deepfake, Callan said there's no simple solution.

"The best protections have to be put in place by those who develop and operate the digital communications platforms you are using," Callan added. "A system that confirms the [identities] of participants using unbreakable cryptographic techniques can very effectively undermine this kind of risk."
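To make Callan's idea concrete, here is a minimal sketch of cryptographic message authentication, the building block behind the kind of identity-confirming system he describes. This is an illustration only: it uses a shared-key HMAC from Python's standard library as a stand-in, while a real platform would use public-key signatures, and the key and message here are hypothetical.

```python
import hmac
import hashlib

# Hypothetical per-user secret a platform might provision; real systems
# would typically use public-key signatures (e.g., Ed25519) so the
# verifier never holds the signing key.
SECRET_KEY = b"per-user-secret-provisioned-by-platform"

def sign_message(message: str, key: bytes = SECRET_KEY) -> str:
    """Produce an authentication tag proving the sender holds the key."""
    return hmac.new(key, message.encode(), hashlib.sha256).hexdigest()

def verify_message(message: str, tag: str, key: bytes = SECRET_KEY) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(key, message.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

tag = sign_message("Click here to set up a meeting")
print(verify_message("Click here to set up a meeting", tag))  # True
print(verify_message("Click here to wire me $500", tag))      # False
```

A message from an impostor, or one altered in transit, fails verification because the attacker cannot produce a valid tag without the key; this is the sense in which cryptography, rather than a familiar face or voice, confirms who is on the other end.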