How AI Can Turn Thoughts Into Images and Why You Should Care

Yet another way that artificial intelligence could put your privacy at risk

  • A new AI method that can understand what you see raises privacy concerns. 
  • Researchers are using a text-to-image algorithm. 
  • Experts say more AI regulation is needed.
Artificial intelligence concept image with a human face leading a circuit board.

Yuichiro Chino / Getty Images

Computers might soon determine what you see based on your brain waves. 

Researchers recently demonstrated that artificial intelligence (AI) could read brain scans and recreate versions of images a person has seen. The new study adds to growing concerns that AI may intrude on privacy. 

"The latest AI rage continues to be ChatGPT, but privacy experts have brought up the fact that the tool has been trained with pretty much anything scraped off the internet—including our personal information," Kevin Gordon, the vice president of AI Technologies at NexOptic told Lifewire in an email interview. "This was done without anyone's knowledge, and nobody was even given a chance to provide consent. And because ChatGPT works on prompts, there are concerns that users may inadvertently include personal data in those prompts, which are then saved into ChatGPT's database."

The Thoughts That Count

The researchers involved in the new study used an algorithm called Stable Diffusion, similar to other text-to-image "generative" AIs such as DALL-E 2. The software can produce new images from text prompts. The team reduced the training time for each participant by incorporating photo captions into the algorithm.

"This paper demonstrates that by combining visual structural information decoded from activity in the early visual cortex with semantic features decoded from activity in higher-order areas and by directly mapping the decoded information to the internal representations of a latent diffusion model (LDM; Stable Diffusion) without fine-tuning, it is possible to decode (or generate) images from brain activity," the researchers wrote on their website

The new research isn't the only way AI could impinge on privacy. Harold Li, a vice president at cybersecurity company ExpressVPN, said in an email that AI technology is raising concerns because it analyzes data and gets better over time at understanding the world and its users. 

"We use the technologies produced by artificial intelligence daily," Li added. "Notice how the autocorrect on your phone starts to recognize words not in the dictionary just because you use them repeatedly? That's machine learning."

AI also makes it easier to collect greater volumes of personal and sensitive data, Richard Watson-Bruhn, the US Head of Digital Trust & Cyber Security at PA Consulting, noted in an email interview. He pointed to the growing collection of videos by law enforcement as evidence of this trend.

"Now video can be collected and analyzed by AI, it is far easier and more common to collect video data, potentially without your knowledge," he added. "This collection by itself is a harm, removing privacy we previously had."

The growing use of AI also increases the need for data to develop AI models, Watson-Bruhn said. "This increases the incentive and likelihood that personal data collected is consolidated and used without your knowledge in overzealous ways you never would have approved of," he added. "There are many examples of this, perhaps the most famous being the use of data by Cambridge Analytica to influence US elections, use of the data never communicated in its collection. The potential misuse of your data both intentionally and otherwise, is another harm."


Regulating AI

Some experts say the only way to keep AI from collecting too much personal information about users might be through regulation. One legislative effort toward regulating AI is the EU’s General Data Protection Regulation (GDPR), which gives individuals the right to have the logic behind any legal or similarly significant automated decision explained to them by a human.

“For the time being, most of the responsibility to protect privacy falls on the companies gathering data about user identities,” Asif Savvas, the Chief Product Officer at Simeio, said in an email interview.

The White House’s National Cybersecurity Strategy, launched this month, includes an AI Bill of Rights meant to guide businesses in implementing AI ethically.

"But it is more of a guideline than an actual bill and is hard to execute,” Li said. “At the same time, privacy is a fundamental human right, and anything that threatens that, such as AIs, must be regulated.”
