A single poisoned document could lose the “secret” data via chatgpt

openai google drive sec 2225304360

The last generative The AI models are not only chatbots that generate autonomous texts, but can be easily connected to your data to provide personalized answers to your questions. Openi’s Chatgpt can be connected To your Gmail mailbox, authorized to inspect the Github code or find appointments in the Microsoft calendar. But these connections have the potential to be abused and the researchers have shown that only a single “poisoned” document may be necessary to do it.

WhatsApp Group Join Now
Telegram Group Join Now

New results from security researchers Michael Burgury and Tamir Ishay Sharbat, revealed at the Black Hat Hacker conference in Las Vegas today, show how a weakness in Openi’s connectors has made it possible to extract sensitive information from a Google Drive account using an indirect timely injection attack. In a demonstration of the attack, Agentflayer nicknamedGaurury shows how it was possible to extract secrets for developers, in the form of bees keys, which have been stored in a demonstration account.

Vulnerability highlights how the connection of artificial intelligence models to external systems and the sharing of multiple data through them increases the potential attack surface for harmful hackers and potentially multiplies the ways in which vulnerability can be introduced.

“There is nothing that the user should do to be compromised and there is nothing that the user should do so that the data come out,” says Bergury, the CTO of the Zenity security company, Wired. “We have shown that this is completely zero click; we only need your e-mail, we share the document with you, and that’s it. So yes, this is very, very bad,” says Bargury.

Openi did not immediately respond to the Wired request for comments on the vulnerability in the connectors. The company has introduced connectors for chatgpt as beta functionality at the beginning of this year and its Lists of websites At least 17 different services that can be connected to its accounts. He says that the system allows you to “bring your tools and data to chatgpt” and “File search, pull live data and reference content directly in the chat”.

Bergury claims to have reported the results to Openai at the beginning of this year and that the company quickly introduced mitigations to prevent the technique it used to extract data through connectors. The way the attack works means that only a limited amount of data could be extracted simultaneously: full documents cannot be removed as part of the attack.

“Although this problem is not specific for Google, it illustrates why it is important to develop robust protections against rapid injection attacks,” says Andy Wen, Senior Director of Security Product Management at Google Workspace, indicating society AI security measures recently improved.

Source link

Leave a Reply