- Dark web discussions mentioning AI-powered cybercrime have surged 371% in five years.
- Dark LLMs are selling for as little as $30 per month, with a customer base exceeding 1,000 users.
- The deepfake market is booming: 2024 saw the greatest year-on-year spike in deepfake services for sale (233%), and the upward trend is unrelenting, with 2025 seeing a further 52% increase.
Dubai, United Arab Emirates – Group-IB, a leading creator of cybersecurity technologies to investigate, prevent, and fight digital crime, has published its first whitepaper on the topic, Weaponized AI: Inside the criminal ecosystem fuelling the fifth wave of cybercrime, uncovering how AI is changing the criminal ecosystem and fuelling the fifth wave of cybercrime.
Over the past thirty years, cybercrime has evolved through successive waves: from manual phishing in the late 1990s, through industrialised ransomware, to the supply chain and ecosystem attacks that characterised the early 2020s. Group-IB has found a 371% surge in dark web forum posts featuring AI keywords since 2019, and a 1,199% increase in replies. Now, adversaries are industrialising AI, turning once-specialist skills such as persuasion, impersonation, and malware development into on-demand services available to anyone with a credit card.
The fifth wave of cybercrime
Group-IB’s infiltration of dark web forums and underground marketplaces found that in 2025, AI abuse dominated dark web discussions, with 23,621 first posts and 298,231 replies. Interest peaked in 2023, with over 300,000 replies on AI-related posts following ChatGPT’s public release in late 2022, coinciding with the release of GPT-4 and rising regulatory and societal concern.
Unlike earlier waves of cybercrime, AI adoption by threat actors has been strikingly fast. AI is now firmly embedded as core infrastructure throughout the criminal ecosystem rather than an occasional exploit.
Crimeware accessible for the cost of a monthly streaming subscription
Group-IB investigations suggest that a few distinct seller types routinely package and market AI crimeware to lower-skill buyers on underground markets, making sophisticated attacks accessible to novices. Vendors mimic aspects of legitimate SaaS businesses, from pricing tiers and subscription models to customer support.
AI crimeware typically falls into three main categories: LLM exploitation, phishing and social engineering automation, and malware and tooling. These dark web offerings are affordable and often bundled together to make them more attractive to potential buyers.
The operating systems of modern cybercrime:
- Dark LLMs: Threat actors are moving past chatbot misuse and are creating proprietary Dark LLMs that are more stable, more capable, and free of ethical restrictions. Group-IB identifies at least three active vendors offering Dark LLMs, with subscriptions ranging from $30 to $200 per month and a customer base exceeding 1,000 users.
- Jailbreak framework services and instructions: Jailbreaking coaxes legitimate LLMs into outputting disallowed, unsafe, or malicious content through reusable templates or instructions that bypass guardrails. Group-IB found that by the end of Q3 2025, the volume of these posts had almost equalled the total for all of 2024, focusing predominantly on ChatGPT and OpenAI models.
- Deepfake-as-a-service: Group-IB’s monitoring of dark web forums shows a thriving marketplace for “synthetic identity kits” offering AI video actors, cloned voices, and even biometric datasets for as little as $5. Group-IB analysts detected and exported more than 300 dark web posts from 2022 to September 2025 referencing “deepfake” and “KYC”, with 2025 seeing a 52% increase in unique usernames. Attackers harvest samples from social media, webinars, or even past phone calls; as little as 10 seconds of audio is enough to create a convincing clone.
Craig Jones, former INTERPOL Director of Cybercrime and independent strategic advisor, said: “AI has industrialized cybercrime. What once required skilled operators and time can now be bought, automated, and scaled globally. While AI hasn’t created new motives for cybercriminals (money, leverage, and access still drive the ecosystem), it has dramatically increased the speed, scale, and sophistication with which those motives are pursued. The shift marks a new era, in which speed, volume, and sophisticated impersonation have fundamentally changed how crime is committed and how hard it is to stop.”
How defenders are mobilizing against weaponized AI
Weaponized AI is a global challenge that no single organization or regulator can tackle in isolation. Unlike traditional malware, AI-enabled attacks leave little forensic trace, making detection and attribution harder. For defenders, this landscape demands urgent adaptation.
Group-IB’s research underscores the need for intelligence-led security strategies that place adversary behavior at the center, combining predictive threat intelligence, fraud prevention, and deep visibility into underground ecosystems. Cross-border collaboration between the private sector, law enforcement, and regulators will be essential to counter this evolving threat.
Dmitry Volkov, CEO of Group-IB, adds: “From the frontlines of cybercrime, AI is giving criminals unprecedented reach. Today, AI is enabling criminals to scale scams with ease and take hyper-personalisation and social engineering to a new standard. In the near future, autonomous AI will carry out attacks that once required human expertise. Understanding this shift is essential to stopping the next generation of threats and ensuring defenders outpace attackers, moving towards an intelligence-led security strategy that combines AI-driven detection, fraud prevention, and deep visibility into underground criminal ecosystems.”