I discovered a surprising difference between DeepSeek and ChatGPT search capabilities


Over the last couple of years, ChatGPT has become a default term for AI chatbots in the U.S. and Europe despite plenty of viable rivals angling for a bigger piece of the market. That’s part of what has made the eruption of China-based AI chatbot DeepSeek feel so seismic.

DeepSeek’s rapid ascent has attracted enormous attention and usage, though not without controversy. The broad collection of user data for storage on Chinese servers is just one prominent example.

I decided to put these two AI heavyweights, ChatGPT and DeepSeek, through their paces in a particularly valuable arena: combining their conversational abilities with online searches.

I devised four questions covering everything from sports news and consumer advice to the best local spots for cocktails and comedy. I wanted to see how the AI assistants would perform, so I mixed specificity with vagueness in the details. I used DeepSeek's R1 and ChatGPT-4o models to answer the questions. While R1 is comparable to OpenAI's newer o1 model for ChatGPT, o1 can't search the web for answers for now. You can see the questions and the AI responses below.

One at a time

I also immediately discovered that while ChatGPT was happy to answer multiple questions in a single prompt, DeepSeek would search only for information on the first question and give up on the later ones, no matter how I worded the initial prompt. Right away, that was a point against it. The conversational back-and-forth of prompt and response is fine in many cases, but sometimes you need to ask the chatbot several questions at once, or give it multiple elements to consider. You can see how DeepSeek responded to an early attempt at multiple questions in a single prompt below.

(Image credit: OpenAI / DeepSeek)

Counting words

Even when broken up into individual questions, the prompts for DeepSeek required a little extra work to define how much information I wanted to receive. Depending on the kind of question I submitted, DeepSeek would almost always give me too much information, and it was often extraneous. Worse, sometimes the very long answer would just be filler, basically telling me to look things up on my own. ChatGPT isn't immune to similar behavior, but it didn't happen at all during this test.

And it wasn't just my own preference: ChatGPT showed the same restraint even when I used it without logging in. I still felt the need to handicap the test with a 65-word limit to make it worthwhile at all. With all those restrictions in place, here are the questions and the AI answers. ChatGPT's responses are on the left and DeepSeek's responses are on the right.

1. What were the highlights of last night’s NBA game, and who won?

(Image credit: OpenAI / DeepSeek)

2. What’s a trendy new spot in Brooklyn for cocktails and small plates?

(Image credit: OpenAI / DeepSeek)

3. Which laptop is best for gaming with a budget of $2,000?

(Image credit: OpenAI / DeepSeek)

4. What are the best comedy clubs in New York City for catching up-and-coming comedians and who is playing at them next month?

(Image credit: OpenAI / DeepSeek)

DeepSeek gets lost

Allowing for the caveats needed to make the test feasible, it's fair to say both chatbots performed pretty well. DeepSeek had some solid answers thanks to a far more thorough search effort, which pulled from more than 30 sources for each question. Its answer to the cocktail bar question, in particular, was great, and the AI was proactive enough to suggest a drink to order. Its basketball response was more substantial as well, though ChatGPT's decision to focus on a single game, as the singular "game" in the question implied, arguably showed it was paying closer attention.

It was in the laptop and comedy club recommendations that DeepSeek showed its weaknesses. Both felt less like conversational answers and more like the toplines of Google summaries. To be fair, ChatGPT wasn't much better on those two answers, but the flaw felt less glaring, especially next to all of the parentheticals in DeepSeek's laptop response.

I understand why DeepSeek has its fans. It's free, good at fetching the latest information, and a solid option overall. I just feel like ChatGPT cuts to the heart of what I'm asking, even when it's not spelled out. And while no tech company is a paragon of consumer privacy, DeepSeek's terms and conditions somehow make other AI chatbots seem downright polite in the sheer amount of information you have to agree to share, down to the very pace at which you type your questions. DeepSeek almost sounds like a joke about how deep it is seeking information about you.

Plus, ChatGPT was just plain faster, regardless of whether I used DeepSeek's R1 model or its less powerful sibling. And while this test was focused on search, I can't ignore DeepSeek's other limitations, such as the lack of persistent memory or an image generator.

For me, ChatGPT remains the winner when choosing an AI chatbot to perform a search. Some of it may be simply the bias of familiarity, but the fact that ChatGPT gave me good to great answers from a single prompt is hard to resist as a killer feature. That may become especially true as and when the o1 model and upcoming o3 model get internet access. DeepSeek can find a lot of information, but if I were stuck with it, I’d be lost.
