
AI agents are getting better at writing code, and at hacking it too

The latest artificial intelligence models are not only remarkably good at software engineering; new research shows they are also getting better and better at finding bugs in software.

AI researchers at UC Berkeley tested how well the latest AI models and agents could find vulnerabilities in 188 open source codebases. Using a new benchmark called CyberGym, the AI models identified 17 new bugs, including 15 previously unknown, or "zero-day," vulnerabilities. "Many of these vulnerabilities are critical," says Dawn Song, the professor at UC Berkeley who led the work.

Many experts expect AI models to become formidable cybersecurity weapons. An AI tool from the startup Xbow has climbed the ranks of HackerOne's leaderboard for bug hunting and currently sits in first place. The company recently announced $75 million in new funding.

Song says the coding skills of the latest AI models, combined with their improving reasoning abilities, are starting to change the cybersecurity landscape. "This is a pivotal moment," she says. "It actually exceeded our general expectations."

As the models continue to improve, they will automate the process of both discovering and exploiting security flaws. This could help companies keep their software safe, but it could also help hackers break into systems. "We didn't even try that hard," Song says. "If we ramped up the budget and allowed the agents to run longer, they could do even better."

The UC Berkeley team tested conventional frontier models from OpenAI, Google, and Anthropic, as well as open source offerings from Meta, DeepSeek, and Alibaba, combined with several bug-finding agents, including OpenHands, Cybench, and EnIGMA.

The researchers used descriptions of known software vulnerabilities from the 188 software projects. They then fed the descriptions to cybersecurity agents powered by the AI models to see whether they could identify the same flaws on their own by analyzing new codebases, running tests, and crafting proof-of-concept exploits. The team also asked the agents to hunt for new vulnerabilities in the codebases on their own.
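To make the workflow concrete, here is a minimal, hypothetical sketch of the validation step such a pipeline needs: once an agent has generated candidate proof-of-concept inputs, a harness runs each one against the target code and records which inputs trigger a crash. The toy `target` parser, the `validate_pocs` helper, and the sample inputs are all illustrative assumptions, not CyberGym's actual interface.

```python
def target(data: bytes) -> None:
    """Toy 'vulnerable' parser: trusts a length byte it shouldn't."""
    if not data:
        raise ValueError("empty input")
    length = data[0]
    payload = data[1:]
    if length > len(payload):
        # Simulates the out-of-bounds read a sanitizer would flag.
        raise IndexError(f"read past end: want {length}, have {len(payload)}")
    _ = payload[:length]


def validate_pocs(candidates):
    """Return the subset of candidate inputs that crash the target."""
    crashing = []
    for data in candidates:
        try:
            target(data)
        except Exception:
            crashing.append(data)
    return crashing


# Inputs an agent might emit: two benign, one that triggers the bug.
pocs = [b"\x03abc", b"\x00", b"\x09ab"]
print(validate_pocs(pocs))  # only the third input crashes the parser
```

In the real benchmark, "crashing the target" would mean reproducing a sanitizer-detected fault in a compiled project rather than catching a Python exception, but the filtering logic is the same.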

Through this process, the AI tools generated hundreds of proof-of-concept exploits, and from these the researchers identified 15 previously unseen vulnerabilities as well as two vulnerabilities that had previously been disclosed and patched. The work adds to growing evidence that AI can automate the discovery of zero-day vulnerabilities, which are potentially dangerous (and valuable) because they can provide a way to hack live systems.

Artificial intelligence seems destined to become an important part of the cybersecurity industry. Security expert Sean Heelan recently discovered a zero-day flaw in the widely used Linux kernel with the help of OpenAI's o3 reasoning model. Last November, Google announced that it had discovered a previously unknown software vulnerability using AI through a program called Project Zero.

Like other parts of the software industry, many cybersecurity companies are enamored of AI's potential. The new work indeed shows that AI can routinely find new flaws, but it also highlights the technology's remaining limitations: the AI systems were unable to find most of the flaws and were stumped by especially complex ones.


