Millions Installed These AI Apps — But Researchers Found Leaked GPS Data


Artificial intelligence apps that identify dog breeds, insects, and spiders from a single photo have gained massive popularity. However, recent findings by Cybernews reveal that convenience may have come at a serious privacy cost.

Three AI-powered photo identification apps, with a combined 2 million downloads on Google Play, exposed sensitive data from over 150,000 users due to a Firebase misconfiguration.

What Data Was Exposed?

The leaked information included:

  • Email addresses
  • Usernames (often containing full names)
  • Profile photos
  • Firebase Cloud Messaging (FCM) tokens
  • GPS coordinates

Although passwords were not exposed, the leaked data remains highly sensitive.

Why GPS Data Is Especially Dangerous

Location data can reveal:

  • Home addresses
  • Daily routines
  • Travel patterns
  • Frequently visited places

If exploited, this information could enable stalking, doxxing, or highly targeted phishing and social engineering attacks.

Researchers also warned that attackers could misuse exposed FCM tokens to send malicious push notifications disguised as legitimate app alerts.

What Caused the Leak?

All three apps shared the same security flaw:

A Firebase database misconfiguration with public read and write access enabled.

This meant that anyone who discovered the database could view — and potentially modify — user data.
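A misconfiguration like the one described typically comes down to the database's security rules. As a hedged illustration (not the affected apps' actual configuration), a Firebase Realtime Database rules file that grants public access looks like this:

```json
{
  "rules": {
    ".read": true,
    ".write": true
  }
}
```

A safer baseline scopes each user's record to their own authenticated identity, so unauthenticated requests are rejected outright:

```json
{
  "rules": {
    "users": {
      "$uid": {
        ".read": "auth != null && auth.uid === $uid",
        ".write": "auth != null && auth.uid === $uid"
      }
    }
  }
}
```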

Each exposed database contained a “poc” (Proof of Concept) entry, a common indicator left behind by automated bots scanning the internet for unsecured cloud databases. This strongly suggests that attackers may have accessed the data before researchers did.
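Scanning for databases like these requires no exploit: the Firebase Realtime Database exposes a REST endpoint, and a plain GET against the root node succeeds whenever the rules allow public reads. The sketch below shows the mechanics; `example-project` is a placeholder, as real scanners enumerate candidate project IDs extracted from decompiled APKs or wordlists.

```python
from urllib.parse import quote

FIREBASE_HOST = "firebaseio.com"

def probe_url(project_id: str) -> str:
    """Build the public REST endpoint for a database's root node.
    If the rules allow public reads, an unauthenticated GET here
    returns the entire database as JSON."""
    return f"https://{quote(project_id)}.{FIREBASE_HOST}/.json"

def is_publicly_readable(status_code: int) -> bool:
    """Interpret the HTTP status of a probe: 200 means world-readable;
    401 or 403 means the security rules blocked the request."""
    return status_code == 200
```

A write probe works the same way with a PUT or POST, which is how scanners leave "poc" entries behind.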

Affected Applications

  • Dog Breed Identifier Photo Cam (500K downloads)
  • Spider Identifier App by Photo (500K downloads)
  • Insect Identifier by Photo Cam (1M downloads)

The apps were published under the MobilMinds developer account, with references to OZI Technologies Private Limited, a company operating across multiple countries. Researchers reportedly received no response from the developers.

A Bigger Problem in AI Apps

This incident is part of a broader issue.

Cybernews researchers found that:

  • 72% of analyzed Android AI apps contained at least one hardcoded secret
  • On average, each AI app leaked 5.1 secrets
  • 81.14% of exposed secrets were related to Google Cloud identifiers, endpoints, or API keys

Hardcoding secrets in applications is widely considered one of the most dangerous development practices, yet it remains common in AI-powered mobile apps.
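Hardcoded secrets of the kind the researchers counted are usually found by pattern-matching decompiled app code. As a minimal sketch, Google API keys follow a well-known format (an `AIza` prefix followed by 35 URL-safe characters), which a simple regex scan can surface:

```python
import re

# Google API keys follow a documented shape: "AIza" + 35 characters
# drawn from [0-9A-Za-z_-]. This is one of the secret formats most
# commonly found hardcoded in Android app packages.
GOOGLE_API_KEY = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_hardcoded_keys(source: str) -> list[str]:
    """Return every Google-style API key literal in the given text,
    e.g. strings extracted from a decompiled APK."""
    return GOOGLE_API_KEY.findall(source)
```

Production secret scanners (and attackers) apply dozens of such patterns at once, which is why keys belong in server-side configuration rather than in the shipped binary.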

Key Takeaways

  1. App popularity does not guarantee security.
  2. Cloud misconfigurations remain one of the most common causes of data breaches.
  3. AI app growth is outpacing secure development practices.

As users, we should:

  • Carefully review app permissions
  • Avoid granting unnecessary location access
  • Be cautious with apps that require photo metadata access
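The caution about photo metadata is worth unpacking: smartphone photos commonly embed GPS coordinates in EXIF as degrees, minutes, and seconds, which convert directly to a precise map location. A small sketch of that conversion (illustrative only, not tied to any specific app):

```python
def dms_to_decimal(degrees: float, minutes: float,
                   seconds: float, ref: str) -> float:
    """Convert EXIF-style GPS coordinates (degrees/minutes/seconds plus
    a hemisphere reference) to signed decimal degrees. South and West
    hemispheres are negative by convention."""
    value = degrees + minutes / 60 + seconds / 3600
    return -value if ref in ("S", "W") else value
```

One photo with intact EXIF can therefore pinpoint where it was taken to within metres, which is exactly why leaked coordinates plus profile photos are such a potent combination.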

For developers and security professionals, this incident serves as a reminder that innovation must be matched with strong security controls.

Security should never be optional — especially when handling user location data.

Resources: https://cybernews.com/security/ai-photo-apps-leaking-gps-data/


#Cybersecurity #DataPrivacy #AI #AndroidSecurity #CloudSecurity #Infosec
