Scam AI Apps Are Tricking Users—Here's How to Stay Safe

Beware of high fees and poor performance

  • A new crop of AI chatbot apps is pushing users into pricey subscriptions. 
  • The apps often don't perform as promised and are loaded with ads. 
  • Experts say it's safer to stick with apps from well-known tech companies like Microsoft.
A hand holding a smartphone displaying an AI chatbot app that doesn't appear to be from a reputable source.

hocus-focus / Getty Images

Chatbots powered by artificial intelligence (AI) are all the rage, but be wary of downloading mobile apps that use the technology. 

A new report finds that scammers are flooding app stores with software claiming to leverage OpenAI's ChatGPT. The apps often bring high subscription charges and feature intrusive advertising. 

"AI apps have exploded since the emergence of generative AI and chatbots such as ChatGPT and Bard," Josh Davies, a marketing manager at the cybersecurity firm Fortra, told Lifewire in an email interview. "Everyone is rushing to push their AI bot, and many have developed an AI application to perform certain functions like creating presentations, generating animations, or stock images. This explosion can make it very hard to tell the legitimate from the illegitimate, and makes it easier to accidentally download an illegitimate app masquerading as a helpful new AI tool."

Fleeceware Apps

New AI apps try to push users into paying subscription fees ranging from $9.99 to $69.99, according to a recent study by the cybersecurity firm Sophos. The apps also use tactics such as tightly limiting usage and functionality unless users subscribe. 

"Using a combination of advertising within and outside of the app stores and fake reviews that game the rating systems of the stores, the developers of these misleading apps are able to lure unsuspecting device users into downloading them, often with 'free trial' versions that then kick in automatic recurring subscription fees that users may not know are coming, or prompt them to buy a subscription to 'pro' versions that promise greater functionality but fail to deliver," the report's authors wrote. 

The investigators looked at one iOS AI app on Apple's App Store. They found that the 'pro' features that users pay for are essentially the same as those available for free to registered users of ChatGPT. 

"Mixed in with the thousands of brief four-star reviews are comments from people who downloaded the app and found it didn't work—either it only showed ads or failed to respond to questions when unlocked," according to the report. "One user reported that the reply to every message is 'Sorry, I could not understand your message.'"

AI apps that aren't on an app store can be even more dangerous, Caroline Wong, the chief strategy officer at a cybersecurity company, warned in an email. 

"Scammers can create and advertise fake websites that look like the real deal," she added. "Be wary of sites you've never seen, and double-check the spelling of site names before clicking. It could be a phishing attempt."

Keep in mind that even legitimate apps could pose a privacy risk. Britain's spy agency has warned that AI chatbots like ChatGPT pose a security threat because sensitive queries could be hacked or leaked.

"There is always a risk when you download an app—AI chat app or otherwise," Davies said. "Apps will ask for access to your device's information such as your location, contacts, and even screen recording capabilities. These permissions are required for legitimate apps to perform their core functions, so people are used to granting access, but it can be abused to obtain your data, monitor your locations, or spy on your activity."

Staying Safe With AI Apps

The safest AI apps are those owned by companies such as Microsoft and Google, Davies noted. For example, Microsoft's Bing recently incorporated GPT-4 into its search functionality, allowing users to access the latest version of the chatbot without paying for a premium subscription. 

Conceptual illustration of someone stealing personal data from an app on a smartphone.

nadia_bormotova / Getty Images

But if you stray from the offerings of the major tech companies, figuring out what's safe and what's not gets trickier. According to the Sophos report, these 'fleeceware' apps stay within Apple's and Google's terms of service, so they aren't rejected during review and are allowed into the app stores. 

At the very least, it's a good idea to use the same safety precautions you should take with any other app you aren't sure about, Wong said.

"Avoid sharing personal details, including your bank account, Social Security number, or home or work address," Wong added. "Users should proceed with caution with what they share with AI chat apps."
