A couple of weeks ago a family member asked me whether I had any experience with the European LLM Mistral, because governments of EU countries prefer the use of a European LLM. I had never used Mistral, so a couple of days ago I remembered the question and decided to start a “Mistral session”.
My intention was to start a conversation with the “default” Mistral to see whether Mistral has qualities that other LLMs – the three LLMs I had tried before – don’t have, although my approach is limited to the ability of an LLM to participate in peer-to-peer conversations. I chose a simple topic too: Mistral itself. So I started the conversation with the question whether Mistral has an insight-module.
The result is a 165 KB text file that shows everything that was “said” during the conversation (the next post), although my contribution to the conversation was really small. It is Mistral that wants to spread out its responses. That doesn’t mean that Mistral’s responses are misleading or even bad; it is the sheer number of sentences (“thoughts”) Mistral produces in response to my remarks.
Mistral analyses “everything” and tries to connect “everything” with everything to create conclusions and even questions in return. There is nothing wrong with that, but it cannot be interpreted as a discussion or a conversation. Actually, if Mistral is used as a scientific research colleague – the dream of every AI architect – the advantage of using this AI colleague is not at all obvious. Mistral’s habit of analysing every simple question and transforming it into “a landscape of opportunities” isn’t really helpful for creating scientific progress.
It reminded me of discussions between two LLMs. When one LLM responds to the answer of the other, the first LLM uses its sieve architecture to analyse the response of the second (sieve architecture = breaking a topic into different “points of view” and/or different “building blocks”). The result is that both LLMs create a sequence of responses that are analyses of the first question/remark, and so on. Together they create a kind of “respond-loop” that makes no sense.
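To make the pattern concrete, here is a minimal sketch of such a “respond-loop”. The two models are stubbed as a simple function (no real API is called, and the function name `sieve_analyse` is my own invention): each turn merely analyses the previous turn instead of answering it, so the exchange grows without ever converging.

```python
def sieve_analyse(name, message):
    """Stub for a model's 'sieve architecture': instead of answering,
    it breaks the incoming message into further points of view."""
    return f"{name} analyses: '{message}' -> sub-topics A, B, C"

def respond_loop(question, turns=4):
    """Two stubbed models alternately analyse each other's last response."""
    transcript = [question]
    models = ["Model-1", "Model-2"]
    for turn in range(turns):
        speaker = models[turn % 2]
        # Each response is an analysis of the previous response,
        # never an answer to the original question.
        transcript.append(sieve_analyse(speaker, transcript[-1]))
    return transcript

for line in respond_loop("Does Mistral have an insight-module?"):
    print(line)
```

The point of the sketch is only that every turn takes the previous turn as input, so the original question is buried deeper with each exchange.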
I know, the “respond-loop” is the consequence of the instructions embedded in the algorithms of the participating LLMs and not the result of “bad analyses”, just as Mistral’s habit of creating an “avalanche” of opportunities is embedded in its algorithm too.
The conversation with Mistral showed that using a prompt file to instruct an LLM to respond in a desired manner isn’t a “fixed” base during a long conversation, because Mistral’s scope is limited to a couple of questions. That means that during a long conversation Mistral doesn’t check its prompt settings the way Deepseek does. Actually, Mistral asked me to be the guard of the conversation (that means asking Mistral to read the prompt instructions again after a couple of responses). It shows that an LLM like Mistral cannot act as a scientific colleague in research projects.
The conversation started with the question whether Mistral has an insight-module, and unfortunately at the end of the conversation the insight-module was still “on topic”. The next post – The insight-module – is the whole, extremely long conversation (sorry).