Kaspersky experts outline how the rapid development of AI is reshaping the cybersecurity landscape in 2026, both for individual users and for businesses. Large language models (LLMs) are influencing defensive capabilities while simultaneously expanding opportunities for threat actors.
Deepfakes are becoming a mainstream technology, and awareness will continue to grow. Companies are increasingly discussing the risks of synthetic content and training employees to reduce the likelihood of falling victim to it. As the volume of deepfakes grows, so does the range of formats in which they appear. At the same time, awareness is rising not only within organizations but also among regular users: end consumers encounter fake content more often and better understand the nature of such threats. As a result, deepfakes are becoming a stable element of the security agenda, requiring a systematic approach to training and internal policies.
Deepfake quality will improve through better audio and a lower barrier to entry. The visual quality of deepfakes is already high, while realistic audio remains the main area for future growth. At the same time, content generation tools are becoming easier to use: even non-experts can now create a mid-quality deepfake in just a few clicks. As a result, the average quality continues to rise, creation becomes accessible to a far broader audience, and these capabilities will inevitably continue to be leveraged by cybercriminals.
Online deepfakes will continue to evolve but will remain tools for advanced users. Real-time face and voice swapping technologies are improving, but their setup still requires relatively advanced technical skills. Wide adoption is therefore unlikely, yet the risks in targeted scenarios will grow: increasing realism and the ability to manipulate video through virtual cameras make such attacks more convincing.
Efforts to develop a reliable system for labeling AI-generated content will continue. There are still no unified criteria for reliably identifying synthetic content, and current labels are easy to bypass or remove, especially when working with open-source models. For this reason, new technical and regulatory initiatives aimed at addressing the problem are likely to emerge.
Open-weight models will approach top closed models in many cybersecurity-related tasks, creating more opportunities for misuse. Closed models still offer stricter control mechanisms and safeguards that limit abuse. However, open-source systems are rapidly catching up in functionality and circulate without comparable restrictions. This blurs the difference between proprietary and open-source models, both of which can be used effectively for undesired or malicious purposes.
The line between legitimate and fraudulent AI-generated content will become increasingly blurred. AI can already produce well-crafted scam emails, convincing visual identities, and high-quality phishing pages. At the same time, major brands are adopting synthetic materials in advertising, making AI-generated content look familiar and visually “normal.” As a result, distinguishing real from fake will become even more challenging, both for users and for automated detection systems.
AI will become a tool used across most stages of the cyberattack kill chain. Threat actors already employ LLMs to write code, build infrastructure, and automate operational tasks. Further advances will reinforce this trend: AI will increasingly support multiple stages of an attack, from preparation and communication to assembling malicious components, probing for vulnerabilities, and deploying tools. Attackers will also work to hide signs of AI involvement, making such operations harder to analyze.
“While AI tools are being used in cyberattacks, they are also becoming a more common part of security analysis and are influencing how SOC teams work. Agent-based systems will be able to continuously scan infrastructure, identify vulnerabilities, and gather contextual information for investigations, reducing the amount of manual routine work. As a result, specialists will shift from manually searching for data to making decisions based on already-prepared context. In parallel, security tools will transition to natural-language interfaces, enabling prompts instead of complex technical queries,” adds Vladislav Tushkanov, Research Development Group Manager at Kaspersky.
About Kaspersky:
Kaspersky is a global cybersecurity and digital privacy company founded in 1997. With over a billion devices protected to date from emerging cyberthreats and targeted attacks, Kaspersky’s deep threat intelligence and security expertise is constantly transforming into innovative solutions and services to protect businesses, critical infrastructure, governments and consumers around the globe. The company’s comprehensive security portfolio includes leading endpoint protection, specialized security products and services, as well as Cyber Immune solutions to fight sophisticated and evolving digital threats. We help nearly 200,000 corporate clients protect what matters most to them.
