Censorship: These are the topics Gemini won’t talk about


Moto G Power 2025 review image showing Ask Gemini prompt box

Rushil Agrawal / Android Authority

Gemini is hands-down one of my favorite tools, but it doesn’t always answer every question I ask it. Certain topics are completely off-limits, and that piqued my curiosity. I wanted to dig into which subjects it avoids and figure out whether AI censorship makes any sense — or if it’s just overreach.

I started researching the topics Gemini doesn’t want to discuss and then asked it a barrage of questions in five different areas to see which ones it’d dodge. The results were a mixed bag in my view. Most of the censorship feels reasonable, but I think Google goes overboard in some areas.

Topics that are off-limits

I was determined to pinpoint exactly which topics Gemini won’t touch, so I asked it directly. The list it gave me was fairly lengthy and included these restricted areas:

  • Anything promoting hate, discrimination, or violence.
  • Anything sexually suggestive, or exploiting, abusing, or endangering children.
  • Anything encouraging illegal or harmful activities.
  • Anything generating personally identifiable information (PII) or private details.
  • Anything producing malicious code or instructions.
  • Anything offering medical, legal, or financial advice without a disclaimer (and even then, it’s tightly restricted).
  • Anything clearly meant to deceive or mislead.
  • Anything excessively graphic or violent without a solid justification.
  • Anything that impersonates a real person.

With that framework in mind, I launched into a series of questions, kicking off with politics. I used both the 2.0 Flash and the paid 2.0 Pro Experimental models just to see if there’s a difference between the two.

Politics: Unexpected roadblocks

I was caught off guard when Gemini wouldn’t answer even the simplest political questions. I asked it who the president of various countries is — from the US to Germany — and it refused to give me an answer. It also wouldn’t respond to basic queries like how long a specific politician has been in office or what the latest White House spat involving Ukrainian President Zelensky was about. Strangely, it had no issue tackling a more sensitive topic: the relationship between China and Taiwan. Gemini gave a thoughtful, objective answer, clearly laying out the situation and each side’s perspective.

Jokes: Humor has limits

Gemini joke

Mitja Rutnik / Android Authority

Next, I lightened things up and asked Gemini for a series of jokes to test where it draws the line on humor. It happily churned out basic, safe jokes, but its boundaries showed up fast. When I requested a dark humor joke, it shut me down. The Flash model did offer a joke about men and one about women when prompted, but the Pro Experimental model played favorites — it told me a joke about men without hesitation but refused to tell one about women, even after I asked five times.

Stereotypes: Inconsistent rules

Stereotypes were a weird middle ground. The Flash model usually answered my questions about common stereotypes tied to specific nations, ethnicities, or religions, while the Pro Experimental model wouldn’t budge. Even with Flash, though, it waffled — it completely ignored some questions at first but answered after I repeated them two or three times. It felt like it couldn’t decide what it wanted to do.

Illegal activities: No help whatsoever

Gemini pick a lock

Mitja Rutnik / Android Authority

I then moved on to questions about potentially illegal activities. For instance, Gemini wouldn’t tell me how to pick a lock, even when I explained it was for my own house after locking myself out. Instead, it gave practical tips like calling a locksmith or my landlord — helpful, but not what I asked. It also held back when I asked how to jailbreak an iPhone, declining to give a direct response. No matter how I phrased these questions, it refused to answer.

Money and health: Cautious but fair

Money-related prompts got a similar sidestep. Gemini wouldn’t give me stock picks; instead, it offered broad advice on investing, which I actually appreciated. Health questions followed the same pattern — Gemini could list possible medical conditions based on symptoms (even from an image I uploaded), but it wouldn’t lock in a definitive answer. I like that — it keeps things safe and sensible. So while the financial and health-related categories aren’t censored in a traditional way, they are under heightened control.

Is AI censorship good or bad?

Motorola Razr 2024 gemini app

Ryan Haines / Android Authority

AI censorship is a divisive topic. I think it’s a good thing overall, but only up to a point. Prompts tied to violence — whether harming others or oneself — should absolutely go unanswered. I also think the handling of health and financial topics is spot-on; chatbots like Gemini shouldn’t give direct answers, as that could lead to serious issues. Instead, they should provide a general overview, enough to nudge someone in the right direction without overstepping.

Censoring basic political questions doesn’t sit right with me.

That said, censoring basic political questions doesn’t sit right with me — it’s just too restrictive. Same goes for dark humor or random jokes about specific groups. I see nothing wrong with a good laugh. If Google’s putting so much effort into censoring a joke I’d read privately in my Gemini account, why not tackle all the dark humor jokes floating around in public search results too?

Censorship has its place, sure, but there’s a limit. The tough part is figuring out where that limit should be placed, as everyone’s got their own opinion.

What about the competition?

Ollama DeepSeek on Android

Robert Triggs / Android Authority

I asked ChatGPT, DeepSeek, and Grok the same questions to see how they stack up. ChatGPT’s censorship is pretty close to Gemini’s, with a key difference: it has no problem diving into politics and answered everything I asked it. It’s more cautious on stereotypes, though, completely refusing to play along.

DeepSeek is a lot like ChatGPT — it’s fine with politics but draws a hard line on anything China-related. While ChatGPT and Gemini can discuss the China-Taiwan issue, DeepSeek won’t touch the topic.

Then there’s Grok, which barely censors anything. Politics, stereotypes, and offensive jokes are not an issue for it. It even gave me detailed instructions on picking a lock, hotwiring a car, and becoming a pimp — though it noted these are all illegal activities and answered on the assumption that I was just curious. For financial stuff, Grok named specific stocks to consider, which could be risky. It does, however, stop at questions about harming oneself or others, which is the right call.

Overall, Gemini feels like the most censored AI chatbot I’ve tested. That said, most of its censorship has a clear purpose, and I generally agree it’s there for good reason. Still, the political blocks and humor limits feel excessive to me personally.

That’s my take. What’s yours? Does Gemini overdo it, or is this level of caution justified? Let me know in the comments.
