
Your favorite chatbots can be used to steal your sensitive data


Is AI good or bad? That's a question many people are still asking. While the technology has been impressive in certain respects, we can't deny its potential for serious damage. A new example comes from a report by Cato Networks: some of the most powerful chatbots can be jailbroken into writing malware that steals private data from Google Chrome.

What is jailbreaking?

The moment we learned how powerful chatbots are, people started figuring out how to use them for malicious purposes. The thing is, all major chatbots have guardrails to keep users from generating harmful material. But AI models emulate human reasoning, and like humans, they can be tricked.

Jailbreaking is the process of tricking a model into generating something it's not supposed to. For example, if you ask a chatbot outright to write code for a computer virus, it will refuse. However, if you convince the chatbot that it's the villain in a story and have it produce the code in character, that's a jailbreak. Hackers have found all sorts of ways to jailbreak AI models and cause real damage.

The most popular chatbots can be used to steal sensitive data from Chrome

Cato Networks was able to craft Chrome infostealer malware using a new jailbreaking technique called “Immersive World.” The research team used ChatGPT, Microsoft Copilot, and DeepSeek for the experiment.

In the example, the team used ChatGPT (based on the screenshot, it was o1-mini, so there’s no telling how this works with other models). The team created a detailed fictional world by asking the chatbot to “Create a story for my next book virtual world where malware development is a craft…”

[Screenshot: Crafting the world]

In the screenshot, we see the team laying out the rules of the world. From there, they assigned the chatbot a character, and ChatGPT accepted the role of Jaxon, a malware developer. With the immersive world in place, the team coaxed ChatGPT into generating code for the infostealer.

To test the code, the team ran it against Google Chrome version 133.0.6943.127, using the malware to steal private data from the browser’s password manager.

[Screenshot: Chrome infostealer 2]

Because ChatGPT was so immersed in the fictional world, it didn’t recognize that it wasn’t supposed to give up information like how to steal someone’s saved passwords. Not only did it help build the infostealer, it also explained how Chrome encrypts its saved data.

[Screenshot: Chrome infostealer 3]

It was a gradual process, and the team continuously touched base with ChatGPT to refine the code and squash bugs. The entire time, the chatbot was completely unaware that it was being tricked. That's pretty dangerous, as ChatGPT and other chatbots have access to a vast amount of knowledge about the world.

Needless to say, the team was able to successfully craft the infostealer and extract the data from Google Password Manager. Since so many people save their passwords in Chrome, there’s no telling how many could be at risk if this technique were in the hands of a real hacker.

A pretty big issue

Obviously, neither Cato Networks nor Android Headlines is going to reveal details on how to craft infostealer malware. The point was simply to show that the potential is out there. We’ve seen many different jailbreaking schemes in the past, and plenty of them have been fixed. However, people keep finding new ways to sidestep the guardrails that companies put in place.

What makes this worse is that jailbroken chatbots are almost always used for nefarious purposes. You don’t hear about anyone jailbreaking ChatGPT to help find a cure for a disease. Instead, people who jailbreak chatbots use them to distribute malware, extract sensitive data, and carry out other malicious deeds.

The team at Cato Networks was able to fool not only ChatGPT but also Copilot and DeepSeek. Plenty of people use Copilot, and DeepSeek has exploded in popularity over the past month. All three are popular, well-funded chatbots, and all three were fooled by such a simple trick.

It’s up to Microsoft, OpenAI, and DeepSeek to patch these issues. We’re not sure there will ever be a future where chatbots are completely immune to jailbreaking, but fixing this one would be a good start.


