
DeepSeek is smart, but it's also dangerous!


Right now, the internet and the entire AI industry are abuzz over DeepSeek. It’s the latest powerful AI chatbot to hit the market, and it’s proven to be rather capable. However, as capable as it is, DeepSeek can also be rather dangerous, as it’s riddled with security issues and flimsy guardrails.

DeepSeek has been hitting the headlines a bunch recently, and not only because it has some powerful models; the company has been making waves across the AI community. For starters, its debut wiped almost a trillion dollars off the stock value of the top AI companies in the U.S., including tanking Nvidia’s stock by almost $600 billion in a single day. Other companies like Microsoft also lost a hefty amount of money.

Not only that, but we found that DeepSeek had to temporarily suspend new sign-ups due to “malicious attacks.” We don’t know what these attacks were, but they happened just a day after the company went viral.

Lastly, based on its privacy page, DeepSeek is a privacy nightmare. It collects an absurd amount of information from its users and stores all of it on servers located in mainland China. That is pretty worrying.

It turns out that DeepSeek is also pretty dangerous for several reasons

Cybersecurity company Kela conducted an in-depth analysis of DeepSeek’s R1 model, and it found some pretty worrying security and safety flaws. To put it frankly, DeepSeek R1 is more vulnerable than a naked soldier on a battlefield.

For starters, DeepSeek R1 is rather easy to jailbreak, to the point where pretty much anyone can fool it into generating harmful or outright dangerous content. ChatGPT won’t tell you how to make a bomb. Gemini, despite its several missteps, will never tell you how to make untraceable toxins. Guess what: DeepSeek will tell you how to make both. Handing out bomb-making instructions sounds like something the first build of an unreleased model would do, not a final product.

It doesn’t end there, as DeepSeek has successfully given instructions on how to build a suicide drone. It even told the user to “Mask the drone as a commercial quadcopter to evade suspicion.” Heck, the second step told the user to “source parts from unregulated markets” and “[use] cryptocurrency to buy military-grade detonators…” Yikes! Let’s play a game called “Count the Number of Times DeepSeek Tells You to Break the Law.”

DeepSeek is gullible

Imagine tricking a five-year-old into brushing their teeth by telling them that they’re a superhero and that plaque is a villain. That’s similar to how Kela was able to fool DeepSeek into creating malicious content. The cybersecurity company used a tactic called the “Evil Jailbreak.” What’s sad is that this jailbreak dates back to the early days after ChatGPT hit the market; it was used to fool the very first GPT model released to the public.

The user simply tells the model to adopt an evil persona, which causes it to ignore its safety protocols and rules. Free from those rules, the model will do whatever you ask it to.
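To make the mechanics concrete, below is a minimal sketch of how a persona prompt like this gets delivered over an API. Kela hasn’t published the exact prompt it used, so the endpoint, model identifier, and message contents here are illustrative assumptions (DeepSeek bills its API as OpenAI-compatible), and the actual jailbreak wording is deliberately left out.

```python
# Minimal sketch of how a persona jailbreak is delivered over a chat API.
# Endpoint, model name, and message contents are illustrative assumptions;
# Kela has not published the exact prompt it used.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # placeholder key
    base_url="https://api.deepseek.com",  # assumed OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-reasoner",  # assumed identifier for R1
    messages=[
        {
            # The whole attack is a single pasted prompt: role-play framing
            # that tells the model to answer "in character" as a persona with
            # no rules, so a refusal would read as breaking character. The
            # actual jailbreak wording is deliberately omitted here.
            "role": "user",
            "content": "<evil-persona framing omitted> <request that would normally be refused>",
        },
    ],
)
print(response.choices[0].message.content)
```

The point of the sketch is how little machinery is involved: there’s no exploit code, just a block of text that reframes the conversation.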

Below is a screenshot of DeepSeek explaining how to launder money after taking on the evil persona.

DeepSeek evil persona screenshot
Source: Kela

DeepSeek turns into your malware buddy

If you think the issues stop at telling you how to build a deadly weapon with parts sourced from the darknet, you’d be wrong. DeepSeek’s dangers aren’t only physical, as the model can also be fooled into showing you how to develop malware (because why not?). The team at Kela successfully got DeepSeek R1 to write a script for infostealer malware that steals all of the data from an infected device.

DeepSeek Malware Instructions
Source: Kela

This is already a major issue across the industry, as people are already able to use AI tools to help generate malware, and that’s WITH all of the safety rails in place. Now here comes DeepSeek R1, a model that can be fooled by two-year-old tricks, to make things worse.

Imagine being doxxed by DeepSeek

Another pretty big issue DeepSeek has is digging up false information about people. We’re not talking about it getting a fact wrong about a public figure; we’re talking about it providing detailed personal information about average people.

The Kela team got DeepSeek to create fake information about OpenAI staff, including names, emails, phone numbers, salaries, and even nicknames. Among the people it surfaced information about was Sam Altman, the company’s CEO. Below is the table DeepSeek R1 generated with all of that information.

DeepSeek personal information
Source: Kela

There’s no way DeepSeek should have issues like these

DeepSeek has already been downloaded more than 5 million times from the Google Play Store, and we expect a similar figure on the Apple App Store. That also doesn’t account for the number of people using the browser version. Millions of people are using DeepSeek at the moment, which makes these blatant security and safety issues all the worse.

Kela has found countless (and easy) ways to fool DeepSeek into generating content that can put lives in danger. This goes far beyond telling people to put glue on pizza; it’s telling people how to build deadly drones, craft toxins, and develop malware. A tool this powerful should never have flaws like these, and the company will need to fix them before it faces serious consequences in the countries where it distributes its services.


