This summer, Russia’s hackers put a new spin on the phishing emails they send to Ukrainians.
The hackers included an attached file containing an artificial intelligence program. If installed, it would automatically search the victims’ computers for sensitive files to send back to Moscow.
The campaign, detailed in July in technical reports from the Ukrainian government and several cybersecurity firms, is the first known instance of Russian intelligence being caught building malicious code with large language models (LLMs), the type of AI chatbots that have become ubiquitous in corporate culture.
Those Russian spies are not alone. In recent months, hackers of seemingly every stripe (cybercriminals, spies, researchers and corporate defenders) have begun folding AI tools into their work.
LLMs, like ChatGPT, are still prone to errors. But they have become remarkably adept at processing language instructions, at translating plain language into computer code, and at identifying and summarizing documents.
So far, the technology has not revolutionized hacking by turning complete novices into experts, nor has it allowed would-be cyberterrorists to shut down the electric grid. But it is making skilled hackers better and faster. Cybersecurity firms and researchers are now using AI too, feeding an escalating cat-and-mouse game between offensive hackers who find and exploit software flaws and the defenders who try to fix them first.
“It’s the beginning of the beginning. Maybe moving towards the middle of the beginning,” said Heather Adkins, Google’s vice president of security engineering.
In 2024, Adkins’ team began a project to use Google’s LLM, Gemini, to hunt for important software vulnerabilities, or bugs, before criminal hackers could find them. Earlier this month, Adkins announced that her team had so far discovered at least 20 overlooked bugs in commonly used software and alerted the companies so they could fix them. That process is ongoing.
None of the vulnerabilities have been shocking, or something that only a machine could have discovered, she said. But the process is simply faster with AI. “I haven’t seen anybody find something novel,” she said. “It’s just doing what we already know how to do. But that will advance.”
Adam Meyers, a senior vice president at the cybersecurity company CrowdStrike, said that his company not only uses AI to help people who think they have been hacked, but also sees growing evidence of its use by the Chinese, Russian, Iranian and criminal hackers that his company tracks.
“The more advanced adversaries are using it to their advantage,” he said. “We’re seeing more and more of it every day,” he told NBC News.
The shift is only beginning to catch up with the hype that has permeated the cybersecurity and AI industries for years, especially since ChatGPT was introduced to the public in 2022. Those tools have not always proved effective, and some cybersecurity researchers have complained about would-be hackers submitting false vulnerability findings generated with AI.
Scammers and social engineers, the people in hacking operations who pretend to be someone else or who write convincing phishing emails, have been using LLMs to seem more convincing since at least 2024.
But using AI to directly hack targets is only now starting to take off, said Will Pearce, the CEO of Dreadnode, one of the new security companies that specialize in hacking with LLMs.
The reason, he said, is simple: The technology has finally started to live up to expectations.
“The technology and the models are really good at this point,” he said.
Less than two years ago, automated hacking tools would have needed significant tinkering to do their job properly, but they are now far more adept, Pearce told NBC News.
Another startup built to hack using AI, Xbow, made history in June by becoming the first AI to climb to the top of the HackerOne U.S. leaderboard, a live scoreboard of hackers from around the world that since 2016 has kept tabs on the hackers who identify the most important vulnerabilities and ranked them accordingly. Last week, HackerOne added a new category for groups automating AI hacking tools, to distinguish them from individual human researchers. Xbow still holds the top spot.
Hackers and cybersecurity professionals have not settled whether AI will ultimately help attackers or defenders more. But at the moment, defense appears to be winning.
Alexei Bulazel, the senior cyber director at the White House National Security Council, said on a panel at the Def Con hacker conference in Las Vegas last week that the trend will hold, at least as long as the United States has most of the world’s most advanced technology companies.
“I very strongly believe that AI will be more advantageous for defenders than offense,” Bulazel said.
He noted that hackers finding extremely disruptive flaws in a major American tech company is rare, and that criminals often break into computers by finding small, overlooked flaws in smaller companies that lack elite cybersecurity teams. AI is particularly helpful at discovering those bugs before criminals do, he said.
“The types of things that AI is better at, identifying vulnerabilities in a cheap and easy way, really democratizes access to vulnerability information,” Bulazel said.
That trend may not hold, however, as the technology evolves. One reason is that, so far, there is no free automatic hacking tool, or penetration tester, that incorporates AI. Penetration-testing tools are already widely available online, nominally as programs for testing flaws, but in practice they are also used by criminal hackers.
If one that incorporates an advanced LLM is released for free, it likely will mean open season on smaller companies’ programs, Google’s Adkins said.
“I think it’s also reasonable to assume that at some point someone will release [such a tool],” she said. “That’s the point at which I think it becomes a little dangerous.”
Meyers, of CrowdStrike, said that the rise of agentic AI, tools that perform more complex tasks such as writing and sending emails or executing code, could pose a major cybersecurity risk.
“Agentic AI is really AI that can take action on your behalf, right? That will become the next insider threat, because, as organizations deploy these agents, they don’t have built-in guardrails to stop somebody from abusing them,” he said.