Knowledge Base Overview

RadarOS provides a KnowledgeBase abstraction for Retrieval-Augmented Generation (RAG). Store documents in a vector store, search by semantic similarity, and expose retrieval as a tool so agents can answer questions using your private data.

What is RAG?

Retrieve

Convert documents to embeddings, store in a vector database, and search by semantic similarity.

Augment

Inject retrieved chunks into the LLM context before generating a response.

Generate

The model produces answers grounded in the retrieved content instead of relying only on its training data.

RAG reduces hallucinations and keeps responses aligned with your documents, which makes it ideal for internal docs, support knowledge bases, and domain-specific Q&A.
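The retrieve step can be illustrated with a small self-contained sketch. The embeddings below are toy vectors chosen by hand; a real pipeline would compute them with an embedding model and delegate the search to a vector store:

```typescript
// Toy illustration of the "Retrieve" step: rank stored chunks by
// cosine similarity to a query embedding. A real setup would call an
// embedding model and a vector store instead of using toy vectors.
type Chunk = { id: string; content: string; embedding: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function retrieve(query: number[], chunks: Chunk[], topK: number): Chunk[] {
  return [...chunks]
    .sort(
      (x, y) =>
        cosineSimilarity(query, y.embedding) -
        cosineSimilarity(query, x.embedding)
    )
    .slice(0, topK);
}

const chunks: Chunk[] = [
  { id: "a", content: "Webhooks deliver events via HTTP POST.", embedding: [1, 0, 0] },
  { id: "b", content: "Billing runs on the first of the month.", embedding: [0, 1, 0] },
];

// A query embedding close to chunk "a" retrieves it first.
console.log(retrieve([0.9, 0.1, 0], chunks, 1)[0].id); // "a"
```

The augment step then pastes the retrieved content into the prompt, and the generate step runs the model over that enriched context.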

KnowledgeBase Class

import { KnowledgeBase } from "@radaros/core";

const kb = new KnowledgeBase({
  name: "Product Docs",
  vectorStore: myVectorStore,
  collection: "product_docs", // optional, defaults to sanitized name
});

await kb.initialize();
await kb.add({ id: "doc-1", content: "Our API supports webhooks..." });
const results = await kb.search("How do I set up webhooks?");

name (string, required)
Display name for the knowledge base. Used in tool descriptions.

vectorStore (VectorStore, required)
A vector store implementation (InMemory, PgVector, Qdrant, MongoDB).

collection (string, optional)
Collection/index name inside the vector store. Defaults to a sanitized version of name.

searchMode ("vector" | "keyword" | "hybrid", default "vector")
Default search strategy. "hybrid" combines vector and keyword search via Reciprocal Rank Fusion, typically improving relevance. See Hybrid Search.

hybridConfig (HybridSearchConfig, optional)
Fine-tune hybrid search weights and the RRF constant. See Hybrid Search.
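To make the hybrid mode concrete, here is a standalone sketch of Reciprocal Rank Fusion in its common textbook form (not RadarOS's exact implementation): each ranked list contributes 1 / (k + rank) per document, so documents that rank highly in both the vector and keyword lists accumulate the largest fused scores.

```typescript
// Standalone sketch of Reciprocal Rank Fusion (RRF). Each ranked list
// contributes 1 / (k + rank) per document id. The constant k
// (commonly 60) dampens the influence of the very top ranks.
function reciprocalRankFusion(lists: string[][], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const list of lists) {
    list.forEach((id, index) => {
      const rank = index + 1;
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}

const vectorResults = ["doc-1", "doc-2", "doc-3"];
const keywordResults = ["doc-1", "doc-4", "doc-2"];
const fused = reciprocalRankFusion([vectorResults, keywordResults]);
console.log(fused); // doc-1 ranks first: it tops both lists
```

Because RRF works on ranks rather than raw scores, it fuses results from scoring systems with incompatible scales (cosine similarity vs. BM25-style keyword scores) without normalization.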

asTool() — Expose KB to Agents

The most powerful feature: turn a KnowledgeBase into a ToolDef that agents can call automatically.

const kb = new KnowledgeBase({
  name: "Support KB",
  vectorStore: vectorStore,
});

await kb.initialize();
await kb.addDocuments(docs);

const agent = new Agent({
  name: "SupportBot",
  model: openai("gpt-4o"),
  tools: [kb.asTool({ toolName: "search_kb", topK: 5 })],
  instructions: "Answer using the knowledge base when relevant.",
});

const result = await agent.run("How do I reset my password?");
// Agent calls search_kb internally and uses retrieved docs in its answer

toolName (string, optional)
Tool name exposed to the LLM. Defaults to search_<collection>.

description (string, optional)
Custom tool description. Defaults to one auto-generated from the KB name.

topK (number, default 5)
Number of results to return per search.

minScore (number, optional)
Minimum similarity score for a result to be included.

filter (Record<string, unknown>, optional)
Metadata filter applied to every search.

searchMode ("vector" | "keyword" | "hybrid", optional)
Overrides the search mode for this tool. Inherits from the KB config if not set.

formatResults ((results) => string, optional)
Custom formatter for search results. Defaults to a numbered list with scores.
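As an illustration of formatResults, the formatter below renders hits as a compact context block. The result shape used here ({ content, score } entries) is an assumption for the sketch; check the actual type your vector store returns:

```typescript
// Hypothetical result shape, assumed for illustration only; the real
// type depends on your vector store.
type SearchResult = { content: string; score: number };

// A custom formatter that truncates long chunks and labels each hit
// with its similarity score.
function formatResults(results: SearchResult[]): string {
  if (results.length === 0) return "No relevant documents found.";
  return results
    .map((r, i) => {
      const text =
        r.content.length > 200 ? r.content.slice(0, 200) + "..." : r.content;
      return `[${i + 1}] (score ${r.score.toFixed(2)}) ${text}`;
    })
    .join("\n");
}

// Usage sketch: kb.asTool({ toolName: "search_kb", formatResults })
```

Keeping the formatted output short matters here: everything the formatter returns is injected into the LLM context on every tool call.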

Flow Diagram

1. Create KnowledgeBase: configure name, vector store, and optional collection.

2. Initialize: call kb.initialize() to ensure the vector store is ready.

3. Add Documents: use add() or addDocuments() to ingest content. Embeddings are computed if not provided.

4. Expose as Tool: call kb.asTool() and pass the result to Agent({ tools: [...] }).

5. Query: when the user asks a question, the agent calls the search tool and uses the results in its response.
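The "Add Documents" step often benefits from chunking long documents before ingestion, since retrieval works over chunks. A minimal fixed-size chunker (a sketch, not a RadarOS utility; the ids below are illustrative) might look like:

```typescript
// Minimal fixed-size chunker with overlap, sketched for the "Add
// Documents" step. Production pipelines usually split on sentence or
// section boundaries rather than raw character offsets.
function chunkText(text: string, size: number, overlap: number): string[] {
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
  }
  return chunks;
}

const doc =
  "Our API supports webhooks. Configure an endpoint URL, then subscribe to events.";
const docs = chunkText(doc, 40, 10).map((content, i) => ({
  id: `guide-${i}`, // illustrative ids
  content,
}));
// await kb.addDocuments(docs);
```

The overlap keeps a sentence that straddles a chunk boundary retrievable from either side of the cut.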

Next Steps