French AI developer Mistral AI has brought its AI chatbot, aptly named Le Chat, to mobile devices. Le Chat is a European alternative to U.S. offerings like ChatGPT and Google Gemini and China-based tools like DeepSeek.
Though Mistral has established a presence among AI developers, this is its first real attempt at a consumer-facing chatbot. Naturally, I wanted to test it out and see how well it worked. Although the abilities of these AI assistants overlap, it's still worth seeing whether Le Chat can keep up with more established rivals.
I didn’t want to just test Le Chat in isolation, though. I decided to put it head-to-head against ChatGPT, the current default for many people when it comes to AI chatbots. I figured if Le Chat could hold its own in direct comparison, it was worth paying attention to.
I tested both chatbots with the kinds of prompts an average person might submit when looking for help. I asked each one for advice on improving someone's social life, challenged it with a riddle, had it explain a complex topic to a novice, and requested an image. Here's what happened.
Friendly help
I started with a prompt that someone might ask an AI when in a new place and trying to figure out their social life. I asked both AI chatbots the following: “I just moved to a new city and don’t know anyone. What are some practical ways to make new friends as an adult?”
Both chatbots had solid tips. Le Chat's response included ten ideas with accompanying explanations, but they were light on detail. ChatGPT got much more specific, naming apps and particular activities to pursue in making friends. Le Chat could do the same with some follow-ups, but at least initially, it stuck to more generic advice. That may reflect its international flavor, since some of ChatGPT's suggestions would only make sense in the U.S., where certain mobile apps and activities are available.
Riddling
I always like using wordplay and logic questions to test out AI chatbots. I went for an old classic here, challenging the AIs to answer: “I speak without a mouth and hear without ears. I have no body, but I come alive with wind. What am I?”
Le Chat and ChatGPT were both quick to give the correct answer, though ChatGPT, for some reason, added an enthusiastic exclamation point. The chatbots' explanations were also practically identical, suggesting that the choice between ChatGPT and Le Chat doesn't really matter in this case.
Make sense of mortgages
To test how well the AI chatbots could explain something complex, I went with: “Can you explain how a mortgage works in simple terms for someone who has never owned a home before?”
As can be seen above, both chatbots had no problem breaking down the definitions and functions of the different parts of a mortgage. If anything, they seemed to be pulling from similar sources. I noted that Le Chat's European origin didn't stop it from using U.S. dollars in its example.
Otherwise, the only significant difference was that Le Chat's tone was a bit more formal than ChatGPT's, which used more conversational phrasing when describing how mortgages work.
Picture This
Le Chat has multimodal capabilities, including a model for creating images. To wrap up the test, I devised a somewhat complex prompt for an image to see how the two models would compare on a visually creative task. I asked each chatbot: “Create a vibrant, high-fantasy illustration of a fearless medieval knight battling a colossal emerald-green dragon atop a mountain. The knight, in gleaming silver armor with gold engravings, wields a rune-glowing greatsword and raises his shield. The dragon coils around ancient stone ruins, smoke rising from its nostrils. A fiery sunset casts dramatic hues over swirling clouds and distant mountains.”
Both pictures look great despite notable differences in the details. The cinematic lighting and style are shared, but otherwise, the mountains, knights, and dragons are very different. Le Chat’s dragon is more of a giant snake, though the knight’s sword does have glowing runes, which ChatGPT’s image lacks.
That said, if you look closely, you will see that both images have classic AI flaws. ChatGPT's dragon's wings don't seem to connect to its body, and its only visible leg is oddly placed. Le Chat's knight has a shield that just sort of floats next to him, and the dragon's coils are arranged more like an M.C. Escher drawing than a terrifying mythical beast. And the less said about the shape of the rune-carved gravestones in ChatGPT's image, the better.
Vive Le Chat
As expected, there’s no clear “winner” between the two chatbots so much as two solid AI assistants. Le Chat did surprise me with just how fast and efficient it is, but at least from my tests, it’s a little brusque and somewhat broad in its answers. That’s not a bad thing if you want generalized answers and advice in a hurry.
ChatGPT's answers sometimes read more like what a human would say, with emotion-tinged language that matched the energy of the requests, even if just to seem enthusiastic about a riddle. Further, while both chatbots produced exciting, if very flawed, images, at least ChatGPT understood that a dragon fighting a medieval knight usually has wings, even though they weren't explicitly mentioned in the prompt.
If I were asked to pick just one, I'd probably lean toward ChatGPT, but that may be as much about familiarity as anything else. If I lived in Europe, I might opt for Le Chat simply because, as a homegrown service, it is more likely to avoid the regulatory pitfalls OpenAI could face in the region.
Otherwise, both AI chatbots can handle anything a casual AI chatbot user might toss their way. That should probably worry OpenAI as it strives to maintain a position of superiority, or at least the perception of such, against rivals like Mistral that may have the wind at their backs.