OpenAI recently fixed a security flaw in its ChatGPT system that could have put users’ Gmail data at risk. The vulnerability was found in the Deep Research agent, a feature introduced in February to help people analyze large amounts of information more easily. This tool can connect to users’ Gmail accounts if they give permission, aiming to assist with research and answering detailed questions.
Researchers at the cybersecurity company Radware discovered the flaw. They found that hackers might have used this weak point to steal sensitive information from both personal and work-related Gmail accounts. What’s more, users who linked their email to ChatGPT might not have even realized their data was vulnerable. Thankfully, there’s no evidence that anyone actually exploited the issue.
To show how serious the problem was, Radware’s team conducted an experiment. They sent themselves an email with hidden commands that told the Deep Research agent to look inside the inbox for details like full names and addresses. Then, they instructed the AI to send this info to a web address they controlled. What’s alarming is that victims wouldn’t have needed to click anything for the attack to happen. According to Pascal Geenens, Radware’s director of threat research, a compromised corporate account could leak data without the company ever noticing.
OpenAI said it patched the vulnerability on September 3. A spokesperson emphasized that keeping the system secure is a top priority, and the company works continuously to improve its defenses. They also welcomed the work of researchers hacking and probing their tools because it helps make things safer.
This incident stands out because, while hackers have started using AI in their own attacks, here’s a case where an AI tool itself was at risk of leaking user data. It’s a reminder that as AI services grow more advanced and connected, protecting user information remains crucial.