Why Facebook’s Use of Instagram to Train AI Raises Privacy Flags

Your vacation photos aren’t safe from computer vision

Key Takeaways

  • Privacy experts are raising concerns over Facebook’s use of public Instagram photos to train artificial intelligence. 
  • The program was taught to recognize images by showing the computer over 1 billion public photos. 
  • Instagram’s privacy policy includes a section that lets users know information may be used in research and development.
A bust of a person with AI recognition points overlaid on the face and neck area.
John Lamb / Getty Images

Facebook’s use of Instagram photos to train artificial intelligence is raising privacy concerns. 

The social media giant announced recently that it had built software that can learn from what it is looking at. The program was taught to recognize images by reviewing over 1 billion public photos. Experts say users should be aware that Facebook is using their pictures. 

"It’s all about knowing consent," James E. Lee, chief operating officer of the Identity Theft Resource Center, said in an email interview.

"Instagram’s privacy policy—which most people probably don’t read—states very clearly that the company reserves the right to use the photos you post for research. Users can turn facial recognition on/off in their privacy settings."

Better Than the Rest

Facebook’s program, nicknamed SEER for SElf-supERvised, bested other artificial intelligence (AI) models in an object-recognition test, the company claimed. The program achieved a "classification accuracy score" of 84.2% when it was put through a test that checks whether an AI program can identify what’s in a picture. 
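
A "classification accuracy score" like SEER's 84.2% is, at its core, the share of test images for which the model's top predicted label matches the ground truth. The following is a minimal sketch of that metric, not Facebook's code; the labels are hypothetical.

```python
def top1_accuracy(predicted_labels, true_labels):
    """Fraction of predictions that exactly match the ground-truth label."""
    correct = sum(p == t for p, t in zip(predicted_labels, true_labels))
    return correct / len(true_labels)

# Hypothetical model outputs for five test images:
preds = ["cat", "dog", "car", "tree", "dog"]
truth = ["cat", "dog", "car", "bird", "cat"]
print(top1_accuracy(preds, truth))  # 0.6 (3 of 5 correct)
```

SEER's reported score means roughly 842 of every 1,000 benchmark images were labeled correctly on the model's first guess.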

"SEER’s performance demonstrates that self-supervised learning can excel at computer vision tasks in real-world settings," the company said in a blog post.

"This is a breakthrough that ultimately clears the path for more flexible, accurate, and adaptable computer vision models in the future."

"Although Facebook’s terms and conditions may allow them to leverage user data in such a way, most users are not explicitly and actively aware their data is being mined for such purposes."

If launched commercially, SEER would help identify objects, not people, without being programmed via a label to know what's in a photo, Lee said. "That's a more efficient and faster way than the current method that requires huge datasets to match an object with its identity," he added.
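
The label-free approach Lee describes can be illustrated with a toy example (this is not SEER itself): instead of being told what each photo contains, the model gets its training signal from the fact that two randomly augmented "views" of the same photo should produce similar features, while views of different photos should not. Everything below, including the noise-based stand-in for augmentation, is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image):
    """Stand-in for a random crop or color shift: add small noise."""
    return image + rng.normal(scale=0.05, size=image.shape)

def similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

photo_a = rng.normal(size=16)  # hypothetical image feature vectors
photo_b = rng.normal(size=16)

same = similarity(augment(photo_a), augment(photo_a))
diff = similarity(augment(photo_a), augment(photo_b))
print(same > diff)  # two views of one photo are more alike than two photos
```

No label ever appears in this loop, which is why self-supervised training can scale to a billion uncurated photos.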

"There is always the potential for misuse, but there are also legitimate potential benefits of this kind of tech."

Facebook's program could help the company better police content that violates its policies, for example, limiting unwanted exposure to obscene or graphic images, Aimee O’Driscoll, a security researcher at privacy site Comparitech, said in an email interview. It also could be used to automatically describe images, improving user experiences for people with visual impairments.

You’ve Already Agreed to This Program

Instagram’s privacy policy includes a section that lets users know information may be used in research and development. "The company is utilizing its trove of data for another part of its business, similar to the way it uses user data to feed its advertising business," O’Driscoll said.

"Even so, users may still feel uncomfortable with their images being used in this way."

Yashar Behzadi, the CEO of Synthesis AI, a company that uses artificial intelligence for computer vision, said Facebook’s latest AI advancements represent a "significant improvement" in computer-vision capability.

"Users can likely expect better image tagging and contextual search, while advertisers will benefit from more accurate user targeting," he added. 

But Facebook’s approach of leveraging billions of Instagram images raises some serious privacy and regulatory concerns, Behzadi said. 

Facial recognition scanning a person up close but also several people in a crowd.
John M Lund Photography Inc / Getty Images

"Although Facebook’s terms and conditions may allow them to leverage user data in such a way, most users are not explicitly and actively aware their data is being mined for such purposes," he said.

"We believe companies should be more forthright and transparent with users, allowing them full control over their data."

Many other companies have used artificial intelligence to identify an image’s contents, pointed out Bobby Gill, CEO of app developer Blue Label Labs, in an email interview. "However, the fact that this is almost certainly going to be used for marketing is what’s troubling," he added. 

The new program could raise potential privacy concerns, depending on how Facebook plans to use the system, Gill said. 

"This data would likely be accessible to technical marketers that would use it to identify certain trends based on various elements identified in an image," he said.

"For example, being able to pull information from images that people post adds another dimension for associative systems that generally use behavior to profile and target individuals. It might come to learn that anyone with, say, frogs in 3-7% of their pictures is highly likely to purchase home fitness equipment."
