Discord is about to make a decision that will affect hundreds of millions of users. In the coming days, the platform will roll out mandatory age verification, starting in the UK and Australia, with global expansion already planned. On the surface, this looks like a responsible move to protect minors. Look closer, and you’ll find a company with a history of catastrophic data breaches, a platform actively exploited by cybercriminals as command-and-control infrastructure, and a system that just leaked 70,000 government IDs through a third-party vendor.

This isn’t just about age verification. This is about whether you should trust a platform that has repeatedly failed to protect its users with your most sensitive personal data.

What’s Actually Changing

Discord is implementing two verification methods for users who want to access age-restricted content or change their profile age:

Biometric Face Scanning: Users capture a video selfie that’s analyzed by AI to estimate age. The technology, provided by third-party vendors like k-ID, claims to determine whether you’re over 18 without storing the actual video. At least, that’s the promise.

Document Upload: The alternative is uploading government-issued IDs: passports, driver’s licenses, or national ID cards. These documents are processed either automatically or reviewed by human moderators when the automated system fails.

The verification triggers in specific scenarios: when users attempt to change their age to access restricted content, or when trying to join servers and channels marked as adult-only. It’s a direct response to the UK’s Online Safety Act, which mandates “robust” age verification for platforms hosting adult content by July 2025.

But here’s what Discord isn’t advertising: your data doesn’t stay with Discord. It flows through third-party vendors, gets stored on systems outside your control, and has already been compromised once.

The October 2025 Breach: Verification Data Becomes the Target

Just months before this global rollout, Discord suffered what should have been a wake-up call. Instead, it appears to be business as usual.

In October 2025, hackers compromised a third-party customer service provider called 5CA that Discord used for age verification processing. The fallout was devastating:

  • 70,000 users had their government ID photos exposed
  • 1.6 terabytes of sensitive data stolen
  • Personal information including names, email addresses, IP addresses, and partial credit card data
  • The victims: precisely those users who had submitted IDs for age verification appeals

The irony is crushing. The very security measure designed to protect users became the attack vector that exposed them. Discord’s response? A carefully worded press release stating this “was not a breach of Discord” but rather of a third-party vendor, as if that distinction matters to the 70,000 people whose passports and driver’s licenses are now in criminal hands.

The breach wasn’t sophisticated. Attackers exploited weaknesses in Zendesk access controls, used compromised API tokens, and exfiltrated data with automated scripts. These are basic security failures that any competent platform should prevent. Discord didn’t.

Discord as Criminal Infrastructure

Perhaps most disturbing is how Discord has evolved into a preferred tool for cybercriminals, not just as a communication platform, but as active command-and-control infrastructure.

The ChaosBot Operation

In October 2025, security researchers identified “ChaosBot”, a sophisticated Rust-based malware strain that uses Discord’s own API as its command-and-control channel. Here’s how it works:

The malware infects a victim’s system, authenticates with Discord’s API using stolen bot tokens, and creates a private text channel named after the compromised computer. Attackers then send commands directly into this channel: shell to execute commands, download to exfiltrate files, screenshot to capture the display. The infected machine sends results back as file attachments.
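One practical consequence for defenders: because the C2 channel is Discord’s public API, detection has to key on who is talking to Discord rather than on the traffic itself. The following is a minimal hunting sketch, not a production detection rule; the allowlist and the (process, domain) log shape are assumptions about what your endpoint telemetry can export.

```python
# Illustrative hunting heuristic (assumptions: a per-process connection log
# is available, and only known Discord clients should reach these domains).
DISCORD_DOMAINS = {"discord.com", "discordapp.com", "gateway.discord.gg"}
EXPECTED_PROCESSES = {"discord.exe", "discord"}  # hypothetical allowlist

def suspicious_discord_talkers(conn_log):
    """conn_log: iterable of (process_name, destination_domain) pairs.
    Returns process names contacting Discord that are not allowlisted."""
    flagged = set()
    for process, domain in conn_log:
        if domain.lower() in DISCORD_DOMAINS and process.lower() not in EXPECTED_PROCESSES:
            flagged.add(process)
    return flagged
```

Note the obvious limitation: browsers legitimately contact discord.com too, so a real rule needs code-signing and context data. That is exactly the attacker’s advantage the article describes: the traffic itself is indistinguishable.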

To security tools and firewalls, this looks like legitimate Discord traffic. It’s HTTPS connections to discord.com, exactly the same traffic generated by millions of gamers. The malware hides in plain sight because Discord is considered trustworthy infrastructure.

Webhook Exfiltration

Discord webhooks, simple HTTPS URLs that accept POST requests, have become the exfiltration method of choice for data-stealing malware. Criminals embed these webhook URLs in:

  • Malicious npm packages distributed through Node.js repositories
  • Poisoned PyPI modules targeting Python developers
  • Compromised Ruby gems

When victims install these packages, the malware immediately begins sending stolen credentials, browser data, and system information to Discord webhooks. No traditional command infrastructure needed. No suspicious network connections. Just encrypted HTTPS traffic that every corporate firewall allows.
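Because these webhook URLs have a fixed, documented shape (https://discord.com/api/webhooks/&lt;id&gt;/&lt;token&gt;), defenders can at least grep dependency trees for hard-coded ones before deployment. A rough sketch follows; the file-extension filter is an assumption, and obfuscated or runtime-assembled URLs will evade it.

```python
import re
from pathlib import Path

# Discord webhook URLs follow a documented shape:
# https://discord.com/api/webhooks/<numeric id>/<token>
WEBHOOK_RE = re.compile(
    r"https://(?:\w+\.)?discord(?:app)?\.com/api/webhooks/\d+/[\w-]+"
)

# File types to scan; this filter is an assumption, extend as needed.
SOURCE_SUFFIXES = {".js", ".ts", ".py", ".rb", ".json"}

def scan_tree(root: str) -> list[tuple[str, str]]:
    """Return (file, url) pairs for every webhook URL found under root."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.suffix not in SOURCE_SUFFIXES or not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        for url in WEBHOOK_RE.findall(text):
            hits.append((str(path), url))
    return hits
```

Running this over a `node_modules` or `site-packages` tree flags only the crudest stealers, but those are exactly the mass-distributed ones described above.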

Why Criminals Choose Discord

The calculus is simple:

  1. Trusted Traffic: Discord connections are rarely blocked or inspected
  2. Free Infrastructure: No need to rent servers or register domains
  3. Easy Replacement: Compromised webhooks and bot accounts can be replaced instantly
  4. Blending In: Malicious traffic is identical to legitimate gaming traffic
  5. Global CDN: Discord’s content delivery network hosts exfiltrated files

This isn’t theoretical. The VVS Stealer, sold on Telegram since April 2025 for €10 per week to €199 lifetime, specifically targets Discord tokens and credentials. It’s designed to hijack Discord sessions and extract browser data-and it’s being actively used in the wild.

The CSAM Crisis

If data breaches and criminal infrastructure weren’t enough, Discord faces a more fundamental problem: it has become a hunting ground for predators targeting children.

New Jersey’s Lawsuit

In April 2025, New Jersey became the first US state to sue Discord, accusing the platform of violating consumer protection laws by making it dangerously easy for adults to contact minors. The lawsuit, heavily redacted but deeply disturbing, alleges Discord’s design facilitates grooming and exploitation.

The platform allows children to create accounts with minimal verification, join public servers where adults congregate, and receive direct messages from strangers. The lawsuit claims Discord knows this is happening and has failed to implement adequate safeguards.

The FBI Warning

In August 2024, Senator Mark Warner demanded answers from Discord following an FBI warning about “violent predatory groups” operating on messaging platforms. These groups target vulnerable teenagers with a horrific goal: forcing them into producing child sexual abuse material (CSAM) or sharing acts of self-harm, sometimes livestreamed.

The Washington Post investigation that followed revealed the mechanics of these operations. Predators identify isolated or mentally vulnerable teens on Discord, establish trust through private messages, and then escalate to extortion. Victims are told that unless they produce increasingly degrading content, their families will be harmed or their secrets exposed.

How the Platform Enables Abuse

Several Discord features create perfect conditions for exploitation:

Server Discovery: Predators can find servers populated by minors and join them
Direct Messages: Private channels operate outside server moderation
Voice Channels: Encrypted voice communication leaves no logs
File Sharing: CSAM distribution through Discord’s CDN links
Vanity Invites: Custom invite links that are easily shared and difficult to track

Discord’s moderation system, which relies heavily on user reports, is reactive rather than preventive. By the time someone reports abuse, the damage is already done.

The Verification Technology: Promises vs. Reality

Let’s examine the actual systems Discord wants you to trust with your biometric data and government IDs.

The Illusion of Security: Bypass Methods for Age Verification

While Discord boasts about its “robust” age verification, the reality is that these systems are easily bypassed. The claim that biometric scans and document checks provide effective protection ignores a fundamental security principle: security through convenience is no security at all.

Deep Live Cam: Real-Time Face Swapping

Deep Live Cam is an open-source tool that allows users to perform real-time face swapping. The technology uses artificial intelligence to dynamically manipulate facial features:

# Installation and usage
git clone https://github.com/hacksider/Deep-Live-Cam.git
cd Deep-Live-Cam
python run.py --execution-provider cuda

With Deep Live Cam, users can:

  • Use a pre-made adult face
  • Manipulate the webcam feed in real time
  • Route the manipulated feed to Discord as a virtual camera

OBS Studio: Virtual Cameras and Video Manipulation

OBS Studio offers robust virtual camera functionality that allows users to present any video source as a webcam:

# Install virtual camera on Windows
data\obs-plugins\win-dshow\virtualcam-install.bat

Users can:

  • Use pre-recorded videos as the webcam feed
  • Combine images with animated effects
  • Select the virtual camera as the webcam source in Discord

ManyCam: Professional Video Manipulation

ManyCam is a commercial solution offering similar features:

  • Combining multiple video sources
  • Applying real-time filters and effects
  • Providing virtual cameras for various applications

Why These Methods Work

Discord’s age verification systems rely on several assumptions that are easily circumvented:

  1. Real-time verification vs. pre-produced material: Systems don’t verify whether the material is live
  2. Simple face detection vs. complex manipulation: Basic face detection can be fooled with simple tricks
  3. No source verification: Discord cannot ensure the webcam source is authentic

Using these bypass methods carries significant risks:

  • Violation of terms of service: Accounts can be banned
  • Legal consequences: Bypassing verification could be considered fraud in some jurisdictions
  • Security risks: Installing third-party software can create security vulnerabilities

Conclusion: Technical Solutions Are Not Magic Bullets

The existence and ease of these bypass methods prove that technical age verification alone is not an adequate protection mechanism. Discord is focusing on the wrong question: not “How can we verify identity?” but “How can we protect children without endangering their privacy?”

Biometric Face Scanning

The technology uses AI to estimate age from a video selfie. The claimed advantage is that it doesn’t require a document upload, just a quick facial scan. But this creates several problems:

Accuracy Concerns: AI age estimation systems have documented bias issues. They perform differently across ethnicities, skin tones, and facial structures. A 17-year-old with certain facial features might pass as 25, while a 25-year-old with others gets flagged as underage.

Deepfake Vulnerability: Modern deepfake tools can generate convincing real-time facial manipulations. While Discord claims their system detects such attempts, the arms race between detection and generation favors the fakes. Every detection system eventually fails.

Data Reversibility: Discord and its vendors claim they don’t store the actual video-only a mathematical “template.” But biometric templates can often be reversed to reconstruct facial features. Once your biometric data is compromised, you can’t change your face like you can change a password.
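To see why a leaked template is a permanent liability, it helps to know that a “template” is typically just a numeric embedding that gets compared against fresh scans with a similarity threshold. The toy sketch below is illustrative only: the 4-dimensional vectors and the 0.95 threshold are invented for the example, while real systems use high-dimensional deep embeddings and vendor-specific thresholds.

```python
import math

# Toy sketch: a biometric "template" is a numeric vector; matching is a
# similarity comparison against a threshold. Vectors and threshold here
# are illustrative assumptions, not any vendor's real parameters.
MATCH_THRESHOLD = 0.95

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_person(template_a, template_b):
    """Would a matcher accept these two templates as the same face?"""
    return cosine(template_a, template_b) >= MATCH_THRESHOLD
```

The punchline: a template leaked today still matches every future scan of the same face. Unlike a password, the underlying secret can never be rotated.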

Document Upload Systems

The alternative, uploading government IDs, is arguably worse. Consider what you’re sharing:

  • Full legal name
  • Date of birth
  • Document numbers (passport, driver’s license)
  • Home address (on many IDs)
  • Photograph
  • Nationality

This is everything an identity thief needs. And as the October 2025 breach demonstrated, this data doesn’t stay secure.

The verification process also creates a data retention problem. When you appeal an age determination, your ID gets stored for human review. Discord hasn’t disclosed how long these documents are retained, who has access to them, or what jurisdictions they’re stored in. Given their history, “trust us” isn’t a sufficient answer.

GDPR and Biometric Data

Under Europe’s General Data Protection Regulation, biometric data is classified as “special category data” requiring enhanced protection:

  • Explicit consent is required for processing
  • Purpose limitation restricts how data can be used
  • Right to erasure allows users to demand deletion
  • Data portability gives users the right to extract their data

But here’s the problem: Discord’s verification system makes consent illusory. If you want to access age-restricted content, you must consent. There’s no meaningful alternative. This isn’t freely given consent under the GDPR; it’s coercion.

The UK Online Safety Act

The legislation driving this verification rollout mandates age verification but fails to establish security standards. There’s no requirement for:

  • Encryption standards for stored IDs
  • Data retention limits
  • Breach notification timelines
  • Liability for third-party vendor failures

This regulatory vacuum allows platforms to implement the cheapest possible verification solutions while externalizing security risks to users.

Jurisdictional Arbitrage

Discord processes verification data through third-party vendors, potentially routing it through servers outside the EU. This creates legal ambiguity about which data protection laws apply and how users can exercise their rights. When your German passport photo is processed by a vendor using US-based servers, whose laws govern its protection?

The Trust Deficit

The fundamental question isn’t whether age verification is a good idea; it’s whether Discord is competent to implement it safely.

The evidence suggests they aren’t:

  • They leaked 70,000 government IDs through basic vendor security failures
  • They allowed 4 billion messages to be scraped and sold by Spy.pet before taking action
  • Their platform is actively exploited as criminal infrastructure
  • They’re being sued by US states for endangering children
  • They respond to security incidents with deflection rather than accountability

Trust is earned through consistent competence and transparency. Discord has demonstrated neither. They treat security as a marketing checkbox rather than a core responsibility.

What Users Should Do

If you use Discord, consider these protective measures:

Avoid verification where possible: Don’t upload IDs or biometric data unless absolutely necessary. The risks outweigh the benefits of accessing age-restricted content.

Use compartmentalization: Create a dedicated email address for Discord. Don’t link it to your primary accounts or use it for other services.

Enable all security features: Two-factor authentication, backup codes, and login notifications. Assume your account is a target.

Monitor your data: Regularly check haveibeenpwned.com and similar services. When Discord inevitably has another breach, you’ll want to know immediately.
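Services like Have I Been Pwned expose breach records as JSON objects whose documented v3 API schema includes fields such as Name, BreachDate, and DataClasses. The sketch below filters such records for breaches that exposed identity documents; the exact data-class strings are assumptions rather than verified HIBP values, and a live per-account lookup additionally requires an API key.

```python
# Sketch: filter breach records (shaped like Have I Been Pwned's documented
# v3 API response) for breaches that exposed identity documents. The
# data-class strings below are illustrative assumptions.
ID_DATA_CLASSES = {"Government issued IDs", "Passport numbers", "Drivers licenses"}

def id_exposing_breaches(breaches):
    """breaches: list of dicts with 'Name' and 'DataClasses' keys.
    Returns names of breaches that exposed any ID-document data class."""
    return [
        b["Name"]
        for b in breaches
        if ID_DATA_CLASSES & set(b.get("DataClasses", []))
    ]
```

ID-document exposure deserves a different response than a password leak: passwords get rotated, but a leaked passport scan means fraud alerts and, in some countries, document replacement.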

Consider alternatives: For sensitive communications, use platforms with better security track records. Signal, Session, or Matrix offer stronger privacy protections.

Request your data: Exercise your GDPR rights to see what Discord has collected. You might be surprised by the volume and sensitivity of stored information.

The Bigger Picture

Discord’s age verification rollout represents a dangerous trend: governments mandating identity verification for internet access without establishing security standards or accountability mechanisms. Platforms implement these systems using the cheapest available vendors, externalize the risks to users, and face no consequences when breaches occur.

The October 2025 breach wasn’t an anomaly; it was an inevitability. When you mandate the collection of sensitive data without mandating its protection, criminals will come for it. Discord just made itself the world’s largest repository of government IDs tied to internet identities. That’s not a bug; it’s a design feature of poor regulatory frameworks.

The uncomfortable truth: Age verification can be implemented responsibly. It requires end-to-end encryption, local processing (where verification happens on your device, not Discord’s servers), strict data minimization, and harsh liability for breaches. Discord’s implementation has none of these safeguards.

Conclusion

Discord is asking you to trust them with your passport, your biometric data, and your identity. They’re asking this from a platform that:

  • Just leaked 70,000 government IDs
  • Hosts active malware operations using its infrastructure
  • Is being sued for facilitating child exploitation
  • Allowed 4 billion user messages to be scraped and sold
  • Blames third parties when its security fails

The question isn’t whether age verification is necessary; it’s whether Discord should be the platform implementing it. The answer, based on their track record, is clearly no.

Until Discord demonstrates meaningful security improvements, transparent data handling, and accountability for failures, users should be extremely cautious about submitting any form of identity verification. Your biometric data and government IDs are too valuable to trust to a platform that treats security as an afterthought.

The age verification rollout isn’t about protecting users. It’s about regulatory compliance and liability protection for Discord. Don’t let them use your identity as a shield for their incompetence.

References

  1. Discord. (2025). Adapting Discord For The UK Online Safety Act. Retrieved from https://discord.com/safety/adapting-discord-for-the-uk-online-safety-act

  2. Discord Support. (2025). What’s Changing for UK Users Due to the UK Online Safety Act. Retrieved from https://support.discord.com/hc/en-us/articles/33362401287959

  3. Weatherbed, J. (2025). Discord is verifying some users’ age with ID and facial scans. The Verge. Retrieved from https://www.theverge.com/news/650493/discord-age-verification-face-id-scan-experiment

  4. Belanger, A. (2026). Discord faces backlash over age checks after data breach exposed 70,000 IDs. Ars Technica. Retrieved from https://arstechnica.com/tech-policy/2026/02/discord-faces-backlash-over-age-checks-after-data-breach-exposed-70000-ids/

  5. BBC News. (2025). ID photos of 70,000 users may have been leaked, Discord says. Retrieved from https://www.bbc.com/news/articles/c8jmzd972leo

  6. McFadden, A. (2025). Discord App Exposes Children to Abuse and Graphic Content, Lawsuit Says. The New York Times. Retrieved from https://www.nytimes.com/2025/04/17/nyregion/discord-lawsuit-new-jersey.html

  7. Boburg, S., Verma, P., & Dehghanpoor, C. (2024). On popular online platforms, predatory groups coerce children into self-harm. The Washington Post. Retrieved from https://www.washingtonpost.com/investigations/interactive/2024/764-predator-discord-telegram/

  8. Dwyer, J. (2022). Self-Checkout This Discord C2. IBM Security X-Force. Retrieved from https://www.ibm.com/think/x-force/self-checkout-discord-c2

  9. Check Point Research. (2025). From Trust to Threat: Hijacked Discord Invites Used for Multi-Stage Malware Delivery. Retrieved from https://research.checkpoint.com/2025/from-trust-to-threat-hijacked-discord-invites-used-for-multi-stage-malware-delivery/

  10. Discord. (2025). Update on a Security Incident Involving Third-Party Customer Service. Retrieved from https://discord.com/press-releases/update-on-security-incident-involving-third-party-customer-service

  11. National Center for Missing & Exploited Children. (2024). CyberTipline Data. Retrieved from https://www.missingkids.org/cybertiplinedata

  12. Cox, J. (2024). Criminals Are Weaponizing Child Abuse Imagery to Ban Discord Servers. 404 Media. Retrieved from https://www.404media.co/criminals-are-weaponizing-child-abuse-imagery-to-ban-discord-servers/