AI meets Domain-Driven Design

How We Build Better RAG Systems With Knowledge Maps

What is a RAG (Retrieval Augmented Generation) model?

Large language models, also known as LLMs, are trained on enormous amounts of data and use billions of parameters, which enables them to generate text for a variety of tasks such as question answering, language translation and text completion.

ChatGPT can only provide answers based on the data it was trained on. If the information required to answer a question is missing from the training data, language models sometimes generate completely fictitious, incorrect answers: they hallucinate.

A RAG system uses the natural language generation capability of a language model while also accessing its own data sources. This means that the system generates answers not only on the basis of the data it was trained on, but also on the basis of data made available to the system, for example company-specific data or external sources.
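
To make this concrete, here is a minimal sketch of such a retrieve-then-generate loop in Python. The keyword-overlap retriever and the call_llm stub are purely illustrative placeholders for a real retriever and a real LLM API call, not a specific product.

```python
# Minimal sketch of a RAG loop: retrieve relevant snippets from our own
# data, then ask an LLM to answer using only those snippets.
# The keyword-overlap retriever and the call_llm stub are illustrative
# placeholders, not a specific product or API.

DOCUMENTS = [
    "Invoices above 10,000 EUR must be approved by the finance lead.",
    "Support tickets are triaged within 4 business hours.",
    "Travel expenses are reimbursed via the HR portal.",
]

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Score documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(DOCUMENTS, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:top_k]

def call_llm(prompt: str) -> str:
    """Placeholder for the actual LLM API call (e.g. a chat completion)."""
    return f"<answer generated from a prompt of {len(prompt)} characters>"

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

print(answer("Who approves large invoices?"))
```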

RAG systems always provide correct answers, right?

One might therefore assume that the risk of a RAG system generating incorrect answers to company-specific questions is minimal, provided that it draws on a high-quality knowledge base.

At least, that is the common promise associated with typical RAG systems.

Unfortunately, this promise does not (yet) hold, as we have been able to show empirically in our Enterprise AI Leaderboard:

The Enterprise AI Leaderboard shows that ChatGPT-4, gpt-4-0125 RAG, ChatPDF and Ask Your PDF all failed to provide 10 correct answers based on a provided PDF.

Why do current RAG systems fail?

To answer this question, let's take a brief look at the technology of common RAG systems:

Vector databases and chunks

Vector-based Retrieval Augmented Generation (RAG) breaks down large documents into smaller, manageable "chunks" to optimize information processing. Similar to a large book that is divided into shorter sections, these chunks facilitate quick browsing and access to information. Most locally usable models allow chunks with a maximum size of only 512 characters.

Let's take a look at how a text can be broken down into chunks of different sizes:
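
Here is one simple, character-based way to do this (purely illustrative; real systems usually split on tokens or sentence boundaries and add some overlap between chunks):

```python
# Naive character-based chunking: the same text split with different
# chunk sizes. Real systems often split on tokens or sentence
# boundaries and add overlap, but the principle is the same.

text = (
    "Our company grants 30 days of annual leave. Leave requests are "
    "submitted in the HR portal and approved by the team lead. "
    "Unused days expire at the end of March of the following year."
)

def chunk(text: str, size: int) -> list[str]:
    return [text[i:i + size] for i in range(0, len(text), size)]

for size in (64, 128, 512):
    parts = chunk(text, size)
    print(f"chunk size {size}: {len(parts)} chunk(s)")
    print("  first chunk:", repr(parts[0][:60]))
```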

Chunks lead to a loss of meaning

Hosted models such as OpenAI's embeddings allow larger chunks, but ultimately they only calculate a single list of numbers (a vector) to represent the entire chunk. This vector representation, no matter how large or small the chunks are, sometimes leads to a loss of meaning.
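
The following sketch illustrates this with the OpenAI embeddings API (the model name and client setup are one possible configuration and require an API key): no matter how long the chunk is, the result is a single vector of fixed length.

```python
# Whatever the size of the chunk, a hosted embedding model returns a
# single fixed-length vector for it. Example using the OpenAI SDK
# (model name and client setup are one possible configuration;
# requires an OPENAI_API_KEY in the environment).
from openai import OpenAI

client = OpenAI()

short_chunk = "Invoices above 10,000 EUR must be approved by the finance lead."
long_chunk = short_chunk * 20  # a much longer chunk of repeated content

response = client.embeddings.create(
    model="text-embedding-3-small",
    input=[short_chunk, long_chunk],
)

for item in response.data:
    print(len(item.embedding))  # same vector length for both chunks
```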

Chunking and storing in a vector database is a purely technical approach to understanding documents and knowledge. This is also the reason why it does not work well: it lacks a basic understanding of context and any description of how the organization works in the first place.
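
At query time, all a vector database can do is compare these lists of numbers, for example by cosine similarity, and return the nearest chunks. The toy vectors below are purely illustrative; the point is that nothing in this comparison knows anything about the business behind the text.

```python
# At its core, a vector database compares lists of numbers: the query
# vector is matched against the stored chunk vectors, e.g. by cosine
# similarity. Nothing in this comparison knows anything about the
# business behind the text. (Toy vectors for illustration only.)
import numpy as np

chunk_vectors = {
    "chunk about invoices": np.array([0.9, 0.1, 0.0]),
    "chunk about vacation": np.array([0.1, 0.8, 0.2]),
    "chunk about support":  np.array([0.0, 0.2, 0.9]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query_vector = np.array([0.85, 0.15, 0.05])  # vector of the user question

ranked = sorted(chunk_vectors.items(),
                key=lambda kv: cosine(query_vector, kv[1]),
                reverse=True)
for name, vec in ranked:
    print(f"{name}: similarity {cosine(query_vector, vec):.3f}")
```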

In the language of domain-driven design, we say: There is a lack of fundamental domain knowledge.

The limits of vector-based databases in a business context

The problem with vector-based databases is that they attempt to overcome these challenges using purely technological means.

Really understanding a business and its problems, however, requires deep knowledge of that specific business. This is precisely why we work with knowledge maps.

How knowledge maps lead to better RAG systems

As we have seen, it is not enough to break documents down into chunks, store them in a vector database and hope that the complex relationships, dependencies and nuances of a specific company will somehow be recognized and lead to correct, useful answers.

Before an answer can be generated, the system needs an understanding of the complex relationships and structures of what we call the "domain" in domain-driven design: the business for which we are developing a technical solution.

Knowledge Maps

Before we start with the technical implementation, we first prepare the knowledge about the business domain. We analyze the business and prepare the information so that it is accessible to a Large Language Model. We call this information about the business Knowledge Maps.
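
What exactly a knowledge map contains depends on the domain. As a purely hypothetical illustration (the structure and field names are our own assumption, not a fixed format), it could look like this:

```python
# Hypothetical illustration of what a knowledge map could look like:
# domain concepts, their business synonyms and relationships, and the
# sources that describe them. The structure and field names are our
# own illustrative assumption, not a fixed format.

knowledge_map = {
    "Invoice Approval": {
        "synonyms": ["bill sign-off", "payment release"],
        "belongs_to": "Finance",
        "related_concepts": ["Purchase Order", "Budget"],
        "sources": ["finance_handbook.pdf#section-3"],
    },
    "Purchase Order": {
        "synonyms": ["PO"],
        "belongs_to": "Procurement",
        "related_concepts": ["Invoice Approval", "Supplier"],
        "sources": ["procurement_guide.pdf#chapter-2"],
    },
}
```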

Domain Experts

We obtain this information by talking to people who have business knowledge. We call them Domain Experts.

The knowledge of these domain experts is thus translated into a model that captures the interrelationships of the domain.

The model is integrated into the system in the form of a knowledge map, which guides and validates which information is searched when the data provided in the technical domain context is queried. The answers generated by the RAG system are many times better as a result.
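
As a simplified sketch of the idea (the function name and matching logic are our own illustration), a knowledge map can be used to scope a user question to the relevant parts of the knowledge base before any vector search takes place:

```python
# Sketch of how a knowledge map could scope a user question before the
# vector search: match the question against concept names and synonyms,
# then restrict retrieval to the sources listed for those concepts.
# (Uses the illustrative knowledge_map structure sketched above.)

def relevant_sources(question: str, knowledge_map: dict) -> list[str]:
    q_words = set(question.lower().replace("?", "").split())
    sources: list[str] = []
    for concept, entry in knowledge_map.items():
        terms = [concept] + entry["synonyms"]
        if any(set(term.lower().split()) <= q_words for term in terms):
            sources.extend(entry["sources"])
    return sources

question = "Who is responsible for the payment release of a PO?"
print(relevant_sources(question, knowledge_map))
# -> ['finance_handbook.pdf#section-3', 'procurement_guide.pdf#chapter-2']
```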

Knowledge Maps: Easy to optimize

In a vector-based database, it is impossible to check the quality of the data segmentation, because what is stored consists exclusively of numerical values.

However, if you present a knowledge map to a domain expert who has no technical knowledge, it can be understood and corrected immediately.

With this approach, improvements can be made quickly and with very little effort, and the system can be adapted more closely to business needs. In contrast to vector-based RAG systems, which do not have the ability to "learn" in this sense, this approach makes it easy to continuously adapt and improve the system.

Summary

Common RAG systems use a purely technical approach to search for relevant information from data sources and output answers in the form of natural language.

To do this, the provided data is broken down into chunks and stored in vector databases. The problem with this technique is the often poor quality of the answers: during chunking and vectorization, meaning is sometimes lost, which leads to poor answers to user questions.

Our solution is to provide the AI system with knowledge about the domain in the form of knowledge maps. We create models that provide a RAG system with knowledge about a company and its specific dependencies and structures. Thanks to these knowledge maps, user questions can be directed at the existing knowledge base in a more targeted and successful manner, producing correct answers and thus greater added value.

You might also be interested in

The 101 of AI language models and their benefits for companies

We explain the basics of so-called large language models and how they can be used in companies.

Interested in an AI solution for your company?

We look forward to hearing from you!

mail: christoph.hasenzagl@trustbit.tech

tel: +43 664 88454881