Category: Cybersecurity

  • 200M+ Telegram User Records Allegedly Leaked — What This Means for Users



    A post on a well-known data leak forum claims that more than 200 million Telegram user records have been exposed. The dataset allegedly includes usernames, phone numbers, and email addresses.

    Telegram has publicly denied that private user data was compromised. However, cybersecurity researchers who reviewed a sample of the leaked data suggest the situation may be more complex.


    What Was Reportedly Exposed?

    According to researchers who analyzed a sample of the dataset shared by the attackers, the exposed information may include:

    • Telegram usernames
    • Full names
    • Email addresses
    • Phone numbers
    • User IDs

    The attackers claim the combined dataset contains over 200 million records, totaling approximately 44GB of uncompressed data.

    At this time, it remains unclear whether this represents:

    • A new Telegram data breach
    • Previously scraped public data
    • A compilation of older leaked databases
    • A mixture of multiple sources

    Duplicates may also exist in the dataset, which could reduce the actual number of unique affected users.
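As a rough illustration of the deduplication point above, here is a minimal Python sketch, using made-up records and a hypothetical `user_id` field, of why duplicate rows can inflate the headline record count:

```python
# Hypothetical sketch: estimating unique users in a leaked dataset.
# The field names and values are illustrative, not from the actual dump.

records = [
    {"user_id": "1001", "username": "alice"},
    {"user_id": "1002", "username": "bob"},
    {"user_id": "1001", "username": "alice"},  # duplicate entry
]

# The headline number is the raw row count; the real impact is closer
# to the number of distinct user IDs.
unique_ids = {r["user_id"] for r in records}
print(f"{len(records)} records, {len(unique_ids)} unique users")
```

Three rows, but only two unique users: a 200M-record dump could represent far fewer affected accounts.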


    Telegram’s Response

    Telegram stated that the leaked records appear to result from contact imports and contain only public user IDs and usernames.

    According to the company, no private data was exposed, and users are not at risk.

    However, researchers noted that email addresses and phone numbers are typically not publicly visible on Telegram unless users intentionally make them public. This raises questions about whether the dataset includes information from older breaches or external sources.


    Why This Leak Matters

    Even if part of the data was publicly accessible, data aggregation at scale significantly increases risk.

    When attackers combine:

    • Emails
    • Phone numbers
    • Usernames

    they can launch:

    • Mass phishing campaigns
    • SMS phishing (smishing) attacks
    • Social engineering operations
    • Credential stuffing attempts
    • Account takeover attacks

    The larger the dataset, the easier it becomes to automate and scale these attacks globally.


    The Bigger Cybersecurity Concern: Data Aggregation

    Modern cyber threats often don’t rely on a single breach.

    Instead, attackers collect and merge:

    • Scraped public data
    • Previously leaked databases
    • Breach compilations
    • Contact lists

    Even if individual data points seem harmless, combined datasets become highly valuable for cybercriminals.
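To make the aggregation risk concrete, here is a minimal Python sketch, with entirely fabricated data and hypothetical field names, of how two individually low-value datasets can be joined on a shared key such as a phone number:

```python
# Hypothetical sketch: merging two datasets on a shared key (phone
# number) to build richer profiles. All data below is made up.

scraped_usernames = {"+15550001": "alice_t", "+15550002": "bob99"}
old_breach_emails = {"+15550001": "alice@example.com"}

profiles = {}
for phone, username in scraped_usernames.items():
    profiles[phone] = {
        "username": username,
        # Enrich with data from an older, unrelated leak when available.
        "email": old_breach_emails.get(phone),
    }

# The first profile now links a username to an email address that was
# never published alongside it -- the aggregation risk described above.
print(profiles["+15550001"])
```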

    This case highlights how aggregation — not just breaches — represents a growing cybersecurity threat.


    How Telegram Users Can Protect Themselves

    If you use Telegram, consider taking these steps:

    ✅ Enable Two-Step Verification
    ✅ Hide your phone number in privacy settings
    ✅ Restrict who can find you by phone number
    ✅ Never share login verification codes
    ✅ Be cautious of suspicious links or messages

    Users should remain alert for phishing attempts, especially following publicized leak announcements.


    Final Thoughts

    Whether this incident represents a new breach or a recycled dataset, the scale alone makes it concerning.

    Messaging platforms remain high-value targets for threat actors. Proactive security habits are essential in today’s evolving threat landscape.

    Staying informed and practicing strong digital hygiene can significantly reduce your risk.


    Source: Reporting originally published by Cybernews (February 10, 2026). This article includes independent analysis and commentary.


  • Millions Installed These AI Apps — But Researchers Found Leaked GPS Data


    Artificial intelligence apps that identify dog breeds, insects, and spiders from a single photo have gained massive popularity. However, recent findings by Cybernews reveal that convenience may have come at a serious privacy cost.

    Three AI-powered photo identification apps, with a combined 2 million downloads on Google Play, exposed sensitive data from over 150,000 users due to a Firebase misconfiguration.

    What Data Was Exposed?

    The leaked information included:

    • Email addresses
    • Usernames (often containing full names)
    • Profile photos
    • Firebase Cloud Messaging (FCM) tokens
    • GPS coordinates

    Although passwords were not exposed, the leaked data remains highly sensitive.

    Why GPS Data Is Especially Dangerous

    Location data can reveal:

    • Home addresses
    • Daily routines
    • Travel patterns
    • Frequently visited places

    If exploited, this information could enable stalking, doxxing, or highly targeted phishing and social engineering attacks.

    Researchers also warned that attackers could misuse exposed FCM tokens to send malicious push notifications disguised as legitimate app alerts.

    What Caused the Leak?

    All three apps shared the same security flaw:

    A Firebase database misconfiguration with public read and write access enabled.

    This meant that anyone who discovered the database could view — and potentially modify — user data.
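For context, a Realtime Database exposed this way behaves as if its security rules followed the permissive pattern below, which Firebase's own documentation warns against (snippets shown for illustration):

```json
{
  "rules": {
    ".read": true,
    ".write": true
  }
}
```

A locked-down baseline instead requires an authenticated user before granting any access, for example `".read": "auth != null"` and `".write": "auth != null"`, with finer-grained per-path rules layered on top.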

    Each exposed database contained a “poc” (Proof of Concept) entry, a common indicator left behind by automated bots scanning the internet for unsecured cloud databases. This strongly suggests that attackers may have accessed the data before researchers did.

    Affected Applications

    • Dog Breed Identifier Photo Cam (500K downloads)
    • Spider Identifier App by Photo (500K downloads)
    • Insect Identifier by Photo Cam (1M downloads)

    The apps were published under the developer name MobilMinds, with references to OZI Technologies Private Limited, a company operating across multiple countries. Researchers reportedly received no response from the developers.

    A Bigger Problem in AI Apps

    This incident is part of a broader issue.

    Cybernews researchers found that:

    • 72% of analyzed Android AI apps contained at least one hardcoded secret
    • On average, each AI app leaked 5.1 secrets
    • 81.14% of exposed secrets were related to Google Cloud identifiers, endpoints, or API keys

    Hardcoding secrets in applications is widely considered one of the most dangerous development practices, yet it remains common in AI-powered mobile apps.
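As an illustration of how such hardcoded secrets are typically found, here is a hedged Python sketch that scans extracted app strings for the well-known `AIza…` Google API key format; the sample strings below are fabricated:

```python
import re

# Google API keys follow a recognizable shape: the literal prefix
# "AIza" followed by 35 characters from a limited alphabet.
GOOGLE_API_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_suspect_keys(strings):
    """Return substrings that look like hardcoded Google API keys."""
    hits = []
    for s in strings:
        hits.extend(GOOGLE_API_KEY_RE.findall(s))
    return hits

# Fabricated sample: one key-shaped string, one harmless endpoint.
sample = [
    'api_key = "AIza' + "A" * 35 + '"',
    "endpoint = https://example.googleapis.com/v1",
]
print(find_suspect_keys(sample))
```

Real-world scanners apply many such patterns (for AWS, Firebase, OAuth secrets, and more) across every string extracted from an app package.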

    Key Takeaways

    1. App popularity does not guarantee security.
    2. Cloud misconfigurations remain one of the most common causes of data breaches.
    3. AI app growth is outpacing secure development practices.

    As users, we should:

    • Carefully review app permissions
    • Avoid granting unnecessary location access
    • Be cautious with apps that require photo metadata access

    As developers and security professionals, this serves as a reminder that innovation must be matched with strong security controls.

    Security should never be optional — especially when handling user location data.

    Resources: https://cybernews.com/security/ai-photo-apps-leaking-gps-data/


    #Cybersecurity #DataPrivacy #AI #AndroidSecurity #CloudSecurity #Infosec

  • 🔒 Is Google Using Your Gmail to Train Gemini AI? Here’s What You Need to Know

    Concerns about Google’s data practices have resurfaced after discussions online claimed that Gmail is now being scanned to help train Google’s latest AI model, Gemini.
    While the topic has caused confusion and worry among users, the truth is more nuanced.

    This blog post breaks down what’s real, what’s exaggerated, and how you can control your privacy settings.


    📌 What’s Actually Happening?

    Google has always scanned Gmail messages for essential features such as:

    • spam and phishing detection
    • malware scanning
    • inbox categorization (Primary, Social, Promotions)
    • Smart Compose and AI-assisted replies

    These processes are not new—they’ve been part of Gmail for over 15 years.

    What’s new is that some of these features now rely on Gemini AI, Google’s advanced language model.
    This means that Gmail data can be used to improve AI-based features if certain settings are enabled.


    🤖 Does Google Use Your Emails to Train Gemini?

    Yes, but only if “Smart Features” are turned on.

    Google is not secretly opening and reading emails. Instead, the data is processed automatically the same way it has been for years, but now the processing also helps improve Gemini-powered features.

    When you keep Smart Features enabled, your Gmail content may be used to improve:

    • Smart Compose
    • Smart Reply
    • Categorization
    • Document summaries
    • Search suggestions

    If you turn Smart Features off, Google stops using your Gmail content for AI training and advanced personalization.


    🕵️‍♂️ Is This a New Privacy Issue?

    Not exactly.

    The scanning behavior itself is not new. Google has always processed emails to make Gmail functional and secure.

    The main updates are:

    • Google now uses Gemini to power some Gmail features
    • Google clarified this in recent privacy and UI updates
    • The opt-out controls are now clearer and easier to find than before

    However, online articles and social media posts often make the situation sound like a sudden privacy “breach,” which can be misleading.


    🔒 How to Opt Out of Google AI Training in Gmail

    If you don’t want your Gmail content being used to improve Gemini or AI-powered features, here’s how to turn it off.

    1️⃣ Gmail Settings (Desktop or Mobile)

    1. Open Gmail → Settings
    2. Select See all settings (desktop) or Settings (mobile)
    3. Find Smart features in Gmail, Chat, and Meet
    4. Turn off Smart Features
    5. Click Save changes
    6. Refresh Gmail or sign out and sign back in

    2️⃣ Google Workspace Smart Features

    1. Open Google Account → Data & Privacy
    2. Go to Google Workspace smart features
    3. Click Manage Workspace smart feature settings
    4. Disable both:
      • Smart features in Google Workspace
      • Smart features in other Google products
    5. Save changes

    ⚠️ Note:
    Turning off Smart Features will disable conveniences like Smart Compose and automatic inbox categories.


    🧪 Why the Confusion?

    Some viral posts referenced a past lawsuit where Google was fined for collecting Android data even when users opted out.
    However:

    • That lawsuit is not related to Gmail
    • It is not connected to Gemini
    • It involved Web & App Activity, not emails

    Because of this, many people mistakenly conflated the two issues.


    🛡️ Final Thoughts

    This situation highlights the importance of regularly reviewing our privacy settings—especially as AI becomes more integrated into the tools we use every day.

    Here’s the bottom line:

    • ✔ Gmail scanning is not new
    • ✔ Gemini now powers some Gmail features
    • ✔ You can choose to opt out
    • ✔ Transparency and awareness are key

    Staying informed helps us protect our privacy without falling into misinformation or unnecessary fear.


    🧭 Want to Stay Safer Online?

    Follow for more breakdowns on:

    • cybersecurity news
    • digital privacy
    • AI ethics
    • practical security tips
    • how to manage your digital footprint

    #CyberSecurity #Google #Gmail #GeminiAI #DataPrivacy #AIEthics #OnlineSafety #TechBlog

  • 🔒 How to Protect Your iPhone: A Step-by-Step Settings Guide

    Your iPhone is powerful and convenient, but many default settings trade your privacy for ease of use. The good news? With a few quick changes, you can make your device much safer without breaking your daily routine.

    Here’s a simple, step-by-step guide you can follow in Settings to harden your iPhone.

    1. Privacy & Tracking

    • Go to Settings → Privacy & Security
    • Turn off Analytics & Improvements (all options).
    • Tap Tracking → disable Allow Apps to Request to Track.
    • Tap Apple Advertising → turn off Personalized Ads.

    2. Safari Security

    • Go to Settings → Safari
    • Enable Block Pop-ups.
    • Turn on Prevent Cross-Site Tracking.
    • Enable Fraudulent Website Warning.
    • Under Hide IP Address, choose From Trackers.
    • At the bottom, enable Advanced Tracking & Fingerprinting Protection → For All Browsing.
    • Bonus: Browse in Private Mode for sensitive sessions.

    3. App Permissions

    • Go to Settings → Apps → [choose app]
    • Turn off Background App Refresh.
    • Set Location to While Using the App, and grant camera and microphone access only to apps that genuinely need them.
    • Remove apps you rarely use.

    4. Lock Screen & Passcode

    • Go to Settings → Display & Brightness → Auto-Lock → set to 30 sec–1 min.
    • Go to Settings → Notifications → Show Previews → set to When Unlocked.
    • Go to Settings → Face ID & Passcode
    • Turn off unneeded access under Allow Access When Locked.
    • Make sure Stolen Device Protection is enabled.

    5. Wi-Fi, Bluetooth & AirDrop

    • Go to Settings → Wi-Fi → Edit → remove unused networks.
    • Set Auto-Join Hotspots to Ask to Join.
    • Keep AirDrop set to Receiving Off (enable Contacts Only when needed).
    • Turn off Bluetooth when not in use.

    6. Messages & Calls

    • Go to Settings → Messages
    • Enable Filter Unknown Senders.
    • Turn off Send Read Receipts.
    • Disable Send as SMS.
    • Go to Settings → Phone → enable Silence Unknown Callers (or use call filtering tools).

    7. Extra Protection (Optional)

    • Use a VPN + private DNS + ad blocker to hide your traffic from ISPs and trackers.
    • Use DuckDuckGo as your default search engine.
    • If using Mail app → Settings → Mail → Privacy Protection → enable Hide IP Address and Block Remote Content.

    Final Thoughts

    You don’t need to flip every switch at once. Start with the small changes that don’t affect your daily life (like disabling tracking and hiding lock screen previews). Over time, layer on the stricter settings for stronger protection.

    The goal isn’t perfect security — it’s being a harder target than most people. Even a few of these steps will keep your data safer and give you more peace of mind.

    ✨ Tip: Save this post and walk through it with your phone in hand. Your iPhone will thank you later.