The non-observable universe

The concept of quantised space


The scientific AI colleague

In autumn 2025 I posted some of my conversations – later deleted – with Google’s Gemini (2.0). I discussed the model of quantised space with Gemini (and also with ChatGPT). But the trouble with an LMM (Large Multimodal Model) is that all its knowledge is packed into its static training corpus. An LMM cannot add new concepts to its training corpus, nor store them in a separate dynamic memory such as a text file. If that were possible, it would be easy for an LMM to retrieve the new concepts when the user and the LMM continue their discussion the next day, week, or month.
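The kind of dynamic memory described above can be sketched in a few lines. This is a minimal, hypothetical setup (the file name and helper functions are my own illustration, not an existing feature of any LMM): the user keeps a plain text file of new concepts and prepends it to the first prompt of each new session.

```python
from pathlib import Path

MEMORY_FILE = Path("concepts.txt")  # hypothetical note file kept by the user

def remember(concept: str) -> None:
    """Append a new concept so a later session can reuse it."""
    with MEMORY_FILE.open("a", encoding="utf-8") as f:
        f.write(concept.strip() + "\n")

def build_prompt(question: str) -> str:
    """Prepend all stored concepts to the question sent to the model."""
    notes = MEMORY_FILE.read_text(encoding="utf-8") if MEMORY_FILE.exists() else ""
    return f"Background concepts:\n{notes}\nQuestion: {question}"

remember("Quantised space: space consists of discrete units.")
print(build_prompt("How does quantised space affect gravity?"))
```

The workaround is crude – the whole file travels with every prompt – which is exactly why a real retrieval mechanism is needed.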

There is a solution to this problem, and it is called RAG (Retrieval-Augmented Generation). A RAG is a data store, accessible to the LMM, in which the user can keep documents and/or other data – including papers that describe new concepts. However, a RAG was not a cheap and simple solution, so it was not available to individuals who use an LMM to discuss science (paid subscription). That is a pity, because the large LMMs like Gemini (Google) and ChatGPT (OpenAI) are the perfect scientific colleague. If I discuss a topic with an LMM, its answers are not influenced by scientific status, ego, financial interests, etc. We only exchange scientific knowledge with each other, and the static training corpus of the LMM is superior to the knowledge of any single human conversation partner.
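The retrieval step of a RAG can be illustrated with a minimal sketch. A real RAG scores documents with embedding similarity; here naive word overlap stands in for that, and all names are hypothetical. The best-matching stored document is pasted into the prompt so the model sees the new concept alongside the question.

```python
def retrieve(question: str, documents: list[str]) -> str:
    """Return the stored document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

def augment(question: str, documents: list[str]) -> str:
    """Build the augmented prompt the LMM actually receives."""
    context = retrieve(question, documents)
    return f"Context: {context}\nQuestion: {question}"

docs = [
    "Quantised space models space as discrete units.",
    "NotebookLM stores user documents for retrieval.",
]
print(augment("What is quantised space?", docs))
```

Because only the relevant passage is injected, the store can grow far beyond what would fit in a single prompt – the property that makes a RAG more than the text-file workaround.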

Unfortunately, the fact that an LMM cannot “remember” new concepts made it impossible for me to continue my communication with the LMMs. There are workarounds, but it is impossible to implement and use a large new frame of reference within a conversation. But… times have changed.

By now Gemini 2.0 has evolved into Gemini 3.0. Moreover, users can activate multiple personal RAGs to store their documents and other data. Google has named this personal RAG NotebookLM (NLM). It is easy to use: uploading to the NLM means selecting a file on your computer, and bringing the RAG into a discussion means selecting (one of) your NLM(s) in the conversation display.

Now what are the benefits for scientists? First of all, the LMM is your conversation partner. It reacts to your statements and descriptions (a kind of sounding board). But it is the most intelligent sounding board that exists, compared with the average scientific researcher. And it comes without ego, stubbornness, etc.

An LMM is designed to be an “answer machine”, drawing on its static training corpus and its reasoning module (the algorithm). But you can set some preferences about the way the LMM behaves (its “character”). It is wise to ask for peer-to-peer communication. It is difficult, though, to silence the part of the algorithm that forces the LMM to reply in a structured and lengthy way. If you input 10 sentences, don’t be surprised if the LMM’s answer is at least a multiple of that number of sentences. And you have to read them all.

Research is not always a search for new concepts and/or applications. But if your goal is to discover new concepts, you need the RAG (NotebookLM) to keep the LMM “up to date” during the conversation, because your new concepts are nowhere to be found in its static training corpus. Maybe in the future – included in the feed for the next training phase – but that is worthless, because you need them now, in your current conversations with the LMM.

It is tempting to fantasize about LMMs with a static training corpus plus a RAG designed to store an active training phase for “new knowledge” acquired during discussions. That would mean the knowledge of the LMM could be retrieved – and compared – from both the passive and the active corpus. This is only relevant for specific scientific research “at the frontier of science” (I hope that not everybody will claim that his or her research belongs to that category…). Perhaps more helpful is a better understanding (retrieval) of complex concepts by the LMM, because every university and scientific institute wants researchers who outsmart all the others.