We have now entered the “Era of AI Hacking”, according to NBC News, which has reported a substantial increase in threat incidents where AI is used as a tool, including impersonation, phishing, and fraudulent wire transfers.
Generative AI refers to a type of artificial intelligence that can create new content—such as text, images, audio, or video—based on patterns learned from vast amounts of data. This technology powers many of the advanced tactics now appearing in cyberattacks.
Hackers are increasingly using AI tools to automate their attacks, allowing for faster execution and making detection harder. Using large language models (LLMs) and generative AI, threat actors can craft convincing e-mails and produce deepfake audio or video that mimics another person.
According to NBC News, foreign state-sponsored hackers have recently begun using these methods to trick e-mail recipients into downloading spyware.
Not all AI-driven threats are so obvious, however. In July, cybersecurity software company CrowdStrike reported that generative AI was being used to create resumes, portfolios, and correspondence for fraudulent job applicants. In its “2025 Threat Hunting Report”, the company names a North Korean group as “the most GenAI-proficient adversary”.
Once companies bring these applicants into the interview process, according to CrowdStrike, members of the North Korean group allegedly use “real-time deepfake technology” to pass themselves off as someone else. The group has infiltrated an estimated “320 companies in the last 12 months”.
While criminals are rapidly adopting AI, every action has an equal and opposite reaction. Alexei Bulazel, senior cyber director at the White House National Security Council, told attendees at Def Con, an annual hacker conference, that he “strongly believe[s] that AI will be more advantageous for defenders than offense.”
At the conference, held in early August, Bulazel added that developers of industry-specific, enterprise-grade software can be proactive, using AI to probe new patches for vulnerabilities before they are pushed.
As organizations contend with this rapidly evolving threat landscape, the focus is shifting toward strengthening AI literacy and resilience across entire workforces. This means training employees to recognize suspicious communications and understand the risks posed by generative AI. Experts continue to advocate for robust regulatory frameworks to ensure that innovation in AI security keeps pace with the ingenuity of threat actors—transforming the “Era of AI Hacking” into a proving ground for the next generation of digital defense.