Intro
With the arrival of ChatGPT, asking complex data systems questions in plain language has become possible. We can now connect our data to AI models. But despite advancements and promising results, these systems often struggle with the messy realities of our data:
- Complex Data and Scale: Many companies have databases spanning hundreds of tables. LLMs struggle to capture the scale and interconnected complexity found in real business environments.
- Real-World Relevance: Generic LLM answers often miss the operational questions that executives, analysts, and teams rely on: performance metrics, key insights into user behavior, or specific compliance-related questions.
- Lack of Business Context: Without rich metadata, domain ontologies, and transformations that bring real-world meaning to the data, even the most advanced LLMs can produce “hallucinations”—answers that are plausible-sounding but incorrect. This erodes trust and diminishes the value of AI-driven insights.
Knowledge Graphs (KGs) offer a promising solution. By layering on business context—such as linking data points to specific domains, categories, and hierarchies—KGs help LLMs produce more accurate and explainable outputs. KGs can encode complex domain rules, letting the system solve problems the LLM was never trained on.
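To make this concrete, here is a minimal sketch of the grounding idea; the entities and relations are hypothetical, and a real system would use a graph database rather than an in-memory list of triples.

```python
# Minimal sketch with a hypothetical schema: grounding an LLM prompt in a
# small knowledge graph instead of relying on the model's parametric memory.
# Each triple links a raw data point to business context.
triples = [
    ("revenue_q3", "belongs_to_domain", "Finance"),
    ("revenue_q3", "categorized_as", "Quarterly Metrics"),
    ("Quarterly Metrics", "child_of", "Performance KPIs"),
]

def context_for(entity: str) -> list[str]:
    """Render the known facts about an entity as lines for an LLM prompt."""
    return [f"{s} --{p}--> {o}" for s, p, o in triples if s == entity]

# These grounded facts are prepended to the user's question, so the model
# answers from real relationships rather than guesswork.
print("\n".join(context_for("revenue_q3")))
```

The same lookup also makes answers explainable: the exact triples injected into the prompt can be shown alongside the answer.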
Use Cases
1. Chatbots
Scenario: A gaming company wants their support team to know key details about their users, drawn from thousands of chats:
- “Amy broke her leg and had a really bad couple of weeks”
- “John likes to go hiking and learn new languages. He speaks 4 fluently and frequently mentions his trips to Europe”
- “Rita spent 200 USD in the past week on bonuses, and she struggles with level 13”
Challenge: Their data spans thousands of interactions, multiple tables, and complex systems.
Solution: Read about our implementation here
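One way to picture this use case (names, relations, and facts here are hypothetical): distill each chat into per-user fact triples that support agents can query, instead of re-reading thousands of raw transcripts. The extraction step itself would be an LLM pass over each transcript; its output is hard-coded below.

```python
# Sketch: per-user facts extracted from support chats, stored for querying.
# In a real pipeline, ingest() would receive the output of an LLM
# extraction pass over each chat transcript.
from collections import defaultdict

facts: dict[str, list[tuple[str, str]]] = defaultdict(list)

def ingest(user: str, relation: str, value: str) -> None:
    facts[user].append((relation, value))

ingest("Rita", "spent", "200 USD on bonuses in the past week")
ingest("Rita", "struggles_with", "level 13")
ingest("John", "hobby", "hiking")

def profile(user: str) -> str:
    """One-line summary a support agent (or an LLM prompt) can use."""
    return f"{user}: " + "; ".join(f"{r} {v}" for r, v in facts[user])

print(profile("Rita"))
```

Storing facts as (relation, value) pairs rather than free text is what lets the same store answer both “what is Rita struggling with?” and “who spent money on bonuses this week?”.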
2. Code analysis for coding assistants
Scenario: A developer using a coding assistant needs answers that span a dozen files and require understanding how different parts of the system relate to each other:
- “If I change variable X and improve function Y, where else in the system do I need to change settings for the code to work?”
- “How can I update function Z with new parameters, and how would the API need to be changed?”
Challenge: Complex codebases have many links between files, and typical SOA systems only allow interacting with a few disconnected files that fit in the LLM context window.
Solution: Read about our implementation here
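The graph view of a codebase is what makes “what else is affected?” answerable. A sketch, with hypothetical module names; in practice the edges would come from parsing import statements (e.g., with Python’s `ast` module) rather than being hard-coded:

```python
# Sketch: a module-level dependency graph lets an assistant answer
# "if I change X, what else is affected?" via reverse reachability.
from collections import deque

# imports_of[a] lists the modules that `a` imports (hypothetical modules).
imports_of = {
    "api": ["services"],
    "cli": ["services"],
    "services": ["models"],
    "models": [],
}

def affected_by(module: str) -> set[str]:
    """Return every module that transitively imports `module`."""
    # Reverse the edges: for each module, who imports it.
    importers: dict[str, set[str]] = {m: set() for m in imports_of}
    for mod, deps in imports_of.items():
        for dep in deps:
            importers[dep].add(mod)
    # Breadth-first search over the reversed edges.
    seen: set[str] = set()
    queue = deque([module])
    while queue:
        for importer in importers[queue.popleft()]:
            if importer not in seen:
                seen.add(importer)
                queue.append(importer)
    return seen

print(sorted(affected_by("models")))  # → ['api', 'cli', 'services']
```

Because the graph is explicit, the assistant can hand the LLM exactly the affected files instead of whatever happens to fit in the context window.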
3. HR
Scenario: A company providing HR services needs to answer questions like:
- “List all the candidates that have more than 5 years of experience”
- “Which candidates were promoted within the first 6 months at their job?”
Challenge: Reasoning over a large number of PDFs and answering complex questions is poorly handled by RAG solutions, which usually break down the more data you add.
Solution: Read about our implementation here
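The underlying idea can be sketched as follows (records are entirely hypothetical): once candidate facts are extracted from the PDFs into structured form, questions like “more than 5 years of experience” become exact filters instead of brittle retrieval over raw text.

```python
# Sketch: structured candidate records, as they might look after an
# extraction pass over resume PDFs. All names and numbers are invented.
candidates = [
    {"name": "A. Perez", "years_experience": 7, "months_to_promotion": 5},
    {"name": "B. Chen",  "years_experience": 3, "months_to_promotion": 14},
    {"name": "C. Okoro", "years_experience": 9, "months_to_promotion": 4},
]

# "More than 5 years of experience" becomes an exact filter.
experienced = [c["name"] for c in candidates if c["years_experience"] > 5]

# "Promoted within the first 6 months" likewise.
early_promos = [c["name"] for c in candidates if c["months_to_promotion"] <= 6]

print(experienced)   # → ['A. Perez', 'C. Okoro']
print(early_promos)  # → ['A. Perez', 'C. Okoro']
```

Unlike similarity search, these filters return the same complete answer no matter how many documents are added.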
Why This Matters
- Better, Faster, Stronger: With richer context, AI agents and apps can answer questions accurately in production.
- Reduced Errors and “Hallucinations”: By connecting LLMs to KGs, the system’s output is rooted in real data, not guesswork.
- Improved Trust and Adoption: Explainable results help make LLMs part of everyday operations.