Generative AI-Enabled Deepfake Phishing

Intel Alert

Impacted Domains: Cyber, Reputation
Impacted Industries: Defense, All Industries
Date: September 14, 2025


North Korean hackers used ChatGPT to generate deepfake military IDs, significantly increasing the success rate and credibility of targeted phishing campaigns.

So What:
State-sponsored actors are weaponizing generative AI to bypass traditional defenses. Deepfake IDs dramatically boost phishing effectiveness, enabling credential theft, unauthorized access, operational disruption, and reputational crises fueled by media exposure. Organizations without AI-aware detection and identity controls face elevated, enterprise-wide vulnerability.

Risk Value:
$6M–$90M for mid-size firms, depending on credential exposure and downstream impact.

Mitigation Cost:
$110K–$440K for small/midsize organizations to deploy next-gen detection, enhanced verification, and continuous training.

What to Do:
  • Deploy deepfake detection tools and AI-powered phishing filters across all communication channels.

  • Enforce strict multi-factor verification for sensitive systems, privileged accounts, and document handling.

  • Continuously train employees on evolving generative AI–enabled social engineering tactics.

  • Integrate incident response with automated forensic evidence collection to counter AI-driven breaches.
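As a minimal sketch of the last recommendation, an incident-response pipeline can automatically preserve forensic evidence the moment a suspect attachment (such as a deepfaked ID image) is flagged. The snippet below is illustrative only: the file names and record schema are hypothetical, and a production system would add chain-of-custody controls and write-once storage.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def collect_evidence(file_path: str) -> dict:
    """Build a minimal forensic record for a suspect file:
    SHA-256 hash, size in bytes, and a UTC collection timestamp."""
    data = Path(file_path).read_bytes()
    return {
        "file": file_path,
        "sha256": hashlib.sha256(data).hexdigest(),
        "size_bytes": len(data),
        "collected_utc": datetime.now(timezone.utc).isoformat(),
    }

# Hypothetical example: a phishing filter has quarantined an
# attachment; hash it and append the record to an evidence log.
Path("suspect_id_card.png").write_bytes(b"example bytes")
evidence = collect_evidence("suspect_id_card.png")
with Path("evidence_log.jsonl").open("a") as log:
    log.write(json.dumps(evidence) + "\n")
```

Hashing at collection time lets investigators later prove the quarantined artifact was not altered between detection and analysis.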

Risk AIQ Score: 8

🔗 Yahoo Finance: North Korean Hackers Using ChatGPT for Deepfake Phishing