This guide walks through building a complete RAG (Retrieval-Augmented Generation) system with RadarOS. The agent searches a knowledge base of documents to answer questions.

Overview

User Question → Agent → KnowledgeBase Tool → Vector Search → Context → LLM → Answer
1. Set up embeddings and vector store: Choose an embedding provider and vector store for your documents.
2. Create a KnowledgeBase: Initialize the KnowledgeBase with your vector store.
3. Add documents: Index your content into the knowledge base.
4. Create an agent with the KB tool: Use kb.asTool() to give the agent retrieval capabilities.
5. Query: Ask questions and get grounded answers.
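The Vector Search step at the heart of this pipeline is a nearest-neighbor ranking over embeddings. A dependency-free sketch of cosine-similarity top-K ranking, using hand-made toy vectors rather than real embeddings (production stores use an embedding model and usually an ANN index):

```typescript
// Toy cosine-similarity top-K search — the core of the "Vector Search" step.
// Vectors are hand-made 3-d stand-ins for real 1536-d embeddings.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

type Doc = { id: string; vector: number[] };

function topK(query: number[], docs: Doc[], k: number): string[] {
  return docs
    .map((d) => ({ id: d.id, score: cosine(query, d.vector) }))
    .sort((x, y) => y.score - x.score) // highest similarity first
    .slice(0, k)
    .map((r) => r.id);
}

const docs: Doc[] = [
  { id: "pricing", vector: [0.9, 0.1, 0.0] },
  { id: "features", vector: [0.1, 0.9, 0.1] },
  { id: "setup", vector: [0.0, 0.2, 0.9] },
];

// A query vector pointing along the first axis ranks "pricing" first.
const ranked = topK([1, 0, 0], docs, 2);
```

The KnowledgeBase tool performs this ranking for the agent, then hands the matched documents back as context for the LLM.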

Complete Example

In-Memory RAG

import {
  Agent,
  openai,
  KnowledgeBase,
  InMemoryVectorStore,
  OpenAIEmbedding,
} from "@radaros/core";

const embedder = new OpenAIEmbedding();
const vectorStore = new InMemoryVectorStore(1536);

const kb = new KnowledgeBase({
  name: "company-docs",
  embedder,
  vectorStore,
});

await kb.initialize();

// Add documents
await kb.addDocuments([
  {
    id: "pricing",
    content: "Our Pro plan costs $49/month and includes unlimited agents, 100k API calls, and priority support.",
    metadata: { category: "pricing" },
  },
  {
    id: "features",
    content: "RadarOS supports multi-agent teams, workflows, RAG, and multi-modal inputs including images and audio.",
    metadata: { category: "features" },
  },
  {
    id: "setup",
    content: "Install with npm install @radaros/core. Set your OPENAI_API_KEY environment variable to get started.",
    metadata: { category: "getting-started" },
  },
]);

// Create agent with KB as a tool
const agent = new Agent({
  name: "support-bot",
  model: openai("gpt-4o"),
  instructions: "You are a support agent. Use the search tool to find answers from the knowledge base. Always cite sources.",
  tools: [kb.asTool()],
  logLevel: "info",
});

const result = await agent.run("How much does the Pro plan cost?");
console.log(result.text);
// "The Pro plan costs $49/month and includes unlimited agents, 100k API calls, and priority support."
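Conceptually, kb.asTool() wraps retrieval in a tool definition the agent can invoke: a name, a description the LLM reads when deciding whether to call it, and an execute function that runs the search. A simplified, self-contained stand-in (this illustrates the shape, not RadarOS's actual internals; the names here are hypothetical):

```typescript
// Conceptual stand-in for what kb.asTool() hands the agent. Illustration only.
type SearchFn = (query: string, topK: number) => Promise<string[]>;

type Tool = {
  name: string;
  description: string;
  execute: (input: { query: string }) => Promise<string>;
};

function makeSearchTool(search: SearchFn, topK = 5): Tool {
  return {
    name: "search_company_docs",
    description: "Search the company knowledge base for relevant passages.",
    // Join retrieved passages into one context string for the LLM.
    execute: async ({ query }) => (await search(query, topK)).join("\n\n"),
  };
}

// Stubbed retrieval so the sketch runs standalone:
const tool = makeSearchTool(async () => [
  "[pricing] The Pro plan costs $49/month.",
]);
```

When the model calls the tool, the joined passages come back as the tool result, grounding the final answer.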

RAG with Qdrant

import {
  Agent,
  openai,
  KnowledgeBase,
  QdrantVectorStore,
  OpenAIEmbedding,
} from "@radaros/core";

const embedder = new OpenAIEmbedding();

const vectorStore = new QdrantVectorStore({
  url: "http://localhost:6333",
  collectionName: "docs",
  dimensions: 1536,
});

const kb = new KnowledgeBase({
  name: "docs",
  embedder,
  vectorStore,
});

await kb.initialize();

// Add documents (only needed once)
await kb.addDocuments([
  { id: "doc-1", content: "Your document content here..." },
  { id: "doc-2", content: "Another document..." },
]);

const agent = new Agent({
  name: "rag-agent",
  model: openai("gpt-4o"),
  instructions: "Answer questions using the knowledge base.",
  tools: [kb.asTool({ topK: 3, description: "Search the documentation" })],
});

const result = await agent.run("What does the documentation say about X?");
console.log(result.text);
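Real documents are usually longer than a single embedding represents well, so a common pre-processing step is to split each one into overlapping chunks before addDocuments, indexing each chunk under a derived id (e.g. doc-1-0, doc-1-1). A dependency-free sketch — the sizes are illustrative, and whether RadarOS chunks internally is not shown here:

```typescript
// Fixed-size chunking with overlap. The overlap keeps sentences that straddle
// a boundary retrievable from both sides; it must be smaller than the chunk
// size or the loop would never advance.
function chunk(text: string, size: number, overlap: number): string[] {
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last window reached the end
  }
  return chunks;
}

// 250 characters with 100-char windows and 20-char overlap -> 3 chunks.
const pieces = chunk("a".repeat(250), 100, 20);
```

Tune chunk size to your embedding model's effective context; a few hundred tokens per chunk is a common starting point.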

RAG with MongoDB Atlas

import {
  Agent,
  openai,
  KnowledgeBase,
  MongoDBVectorStore,
  OpenAIEmbedding,
} from "@radaros/core";

const embedder = new OpenAIEmbedding();

const vectorStore = new MongoDBVectorStore({
  uri: "mongodb+srv://user:pass@cluster.mongodb.net",
  dbName: "myapp",
  collectionName: "documents",
  indexName: "vector_index",
  dimensions: 1536,
});

const kb = new KnowledgeBase({
  name: "knowledge",
  embedder,
  vectorStore,
});

await kb.initialize();

// Add and query as above

Hybrid Search

For the best retrieval accuracy, enable hybrid search, which combines semantic (vector) and keyword (BM25) matching:

import {
  Agent, openai, KnowledgeBase,
  InMemoryVectorStore, OpenAIEmbedding,
} from "@radaros/core";

const kb = new KnowledgeBase({
  name: "Product Docs",
  vectorStore: new InMemoryVectorStore(new OpenAIEmbedding()),
  searchMode: "hybrid",
});

await kb.initialize();
await kb.addDocuments([
  { id: "pricing", content: "Our Pro plan costs $49/month with unlimited API calls." },
  { id: "sla", content: "Enterprise SLA guarantees 99.9% uptime with 4-hour response time." },
]);

const agent = new Agent({
  name: "support-bot",
  model: openai("gpt-4o"),
  tools: [kb.asTool({ topK: 3, searchMode: "hybrid" })],
  instructions: "Answer using the knowledge base. Cite sources.",
});

const result = await agent.run("What's the SLA uptime guarantee?");
console.log(result.text);

See Hybrid Search for tuning weights, BM25 details, and a comparison of all three modes.
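Under the hood, hybrid retrieval has to merge two differently scaled score lists: cosine similarities (roughly 0 to 1) and BM25 scores (unbounded). One common approach is max-normalization followed by a weighted blend. The weights and exact fusion RadarOS uses are documented on the Hybrid Search page; this sketch is only illustrative:

```typescript
// Illustrative weighted score fusion for hybrid search: max-normalize each
// signal onto [0, 1], then blend with a weight alpha on the semantic side.
function normalize(scores: Map<string, number>): Map<string, number> {
  const max = Math.max(1e-9, ...Array.from(scores.values()));
  return new Map(
    Array.from(scores, ([id, s]) => [id, s / max] as [string, number]),
  );
}

function fuse(
  vector: Map<string, number>,
  keyword: Map<string, number>,
  alpha = 0.5, // weight given to the vector (semantic) signal
): Map<string, number> {
  const v = normalize(vector);
  const k = normalize(keyword);
  const ids = new Set<string>();
  v.forEach((_, id) => ids.add(id));
  k.forEach((_, id) => ids.add(id));
  const fused = new Map<string, number>();
  for (const id of ids) {
    fused.set(id, alpha * (v.get(id) ?? 0) + (1 - alpha) * (k.get(id) ?? 0));
  }
  return fused;
}

// "sla" leads on both signals, so it stays on top after fusion.
const fused = fuse(
  new Map([["sla", 0.82], ["pricing", 0.40]]),
  new Map([["sla", 7.1], ["pricing", 1.2]]),
);
```

Normalization matters here: without it, the unbounded BM25 scores would dominate the blend regardless of alpha.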

Customizing the Tool

The asTool() method accepts configuration to customize how the knowledge base is exposed to the agent:
kb.asTool({
  toolName: "search_docs",                   // Custom tool name
  description: "Search company documentation", // Custom description
  topK: 5,                                    // Number of results
  minScore: 0.7,                              // Minimum relevance score
  formatResults: (results) => {               // Custom result formatting
    return results
      .map((r) => `[${r.document.id}] ${r.document.content}`)
      .join("\n\n");
  },
});
toolName (string)
Name of the generated tool. Defaults to search_<collection>.

description (string)
Description shown to the LLM. Auto-generated if not provided.

topK (number, default: 5)
Maximum number of results to return.

minScore (number)
Minimum similarity score threshold.

formatResults ((results: VectorSearchResult[]) => string)
Custom function to format search results into a string for the LLM.
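On how these options typically interact: the usual semantics in retrieval APIs (an assumption about RadarOS, but the standard pattern) is that minScore filters results first and topK then caps whatever survives. A self-contained sketch:

```typescript
// Standard interaction of minScore and topK: threshold first, then truncate.
// Whether RadarOS applies them in exactly this order is an assumption here.
type Result = { id: string; score: number };

function applyLimits(results: Result[], topK: number, minScore: number): Result[] {
  return results
    .filter((r) => r.score >= minScore) // drop low-relevance hits
    .sort((a, b) => b.score - a.score)  // best match first
    .slice(0, topK);                    // cap the context handed to the LLM
}

const hits = applyLimits(
  [
    { id: "a", score: 0.91 },
    { id: "b", score: 0.65 }, // below the 0.7 threshold, dropped
    { id: "c", score: 0.74 },
  ],
  5,
  0.7,
);
// hits keeps "a" then "c"; "b" fell below the threshold.
```

A higher minScore trades recall for precision: fewer, more relevant passages reach the model, which usually reduces hallucinated citations.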