Unified Memory

RadarOS provides a single unified memory config that works identically across Agent, VoiceAgent, and BrowserAgent. All memory subsystems share one storage backend.

Quick Start

import { Agent, MongoDBStorage, openai } from "@radaros/core";

const agent = new Agent({
  name: "assistant",
  model: openai("gpt-4o"),
  memory: {
    storage: new MongoDBStorage({ uri: "mongodb://localhost/radaros" }),
  },
});

With just storage, you get:
  • Session persistence — message history saved across runs
  • Summaries — overflow messages automatically summarized

Full Configuration

import { InMemoryGraphStore } from "@radaros/core";

const agent = new Agent({
  name: "assistant",
  model: openai("gpt-4o"),
  memory: {
    storage: new MongoDBStorage({ uri: "mongodb://localhost/radaros" }),
    maxMessages: 20,          // messages in session history (default: 50)
    maxTokens: 128_000,       // auto-trim history to fit context window

    summaries: true,          // ON by default — long-term conversation context
    userFacts: true,          // OFF by default — "prefers dark mode", "lives in Mumbai"
    userProfile: true,        // OFF by default — structured: name, role, timezone
    entities: true,           // OFF by default — companies, people, projects
    decisions: true,          // OFF by default — audit trail of agent choices

    learnings: {              // OFF by default — needs a vector store
      vectorStore: qdrant(...),
    },

    graph: {                  // OFF by default — knowledge graph
      store: new InMemoryGraphStore(),
    },

    procedures: true,         // OFF by default — learns multi-step workflows

    contextBudget: {          // optional — controls context token allocation
      maxTokens: 4000,
      priorities: { summaries: 0.3, graph: 0.2 },
    },

    model: openai("gpt-4o-mini"), // cheaper model for background extraction
  },
});
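The contextBudget option caps how many tokens of memory context reach the prompt, split by priority weight. A minimal sketch of how a priority-weighted split could work (the allocateBudget helper below is illustrative, not a RadarOS internal):

```typescript
// Illustrative only: how a priority-weighted token budget might be split.
// `allocateBudget` is a hypothetical helper, not part of @radaros/core.
type Priorities = Record<string, number>;

function allocateBudget(
  maxTokens: number,
  priorities: Priorities
): Record<string, number> {
  const total = Object.values(priorities).reduce((a, b) => a + b, 0);
  const out: Record<string, number> = {};
  for (const [section, weight] of Object.entries(priorities)) {
    // Each section gets a share proportional to its priority weight.
    out[section] = Math.floor(maxTokens * (weight / total));
  }
  return out;
}

console.log(allocateBudget(4000, { summaries: 0.3, graph: 0.2 }));
// → { summaries: 2400, graph: 1600 }
```

Weights are normalized against their own sum, so only the ratio between them matters.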

How It Works

Memory operates in a cycle around each agent run:

1. Context Assembly (Before Run)

MemoryManager.buildContext() gathers relevant data from all enabled stores and creates a context string injected into the system prompt:
// What buildContext() produces (approximate):
`
## Memory Context

### Session Summary
The user previously discussed shipping delays for order #12345
and requested a refund, which was processed successfully.

### About This User
- Name: Akash Sengar
- Role: Product Manager
- Company: Xhipment
- Prefers dark mode
- Timezone: Asia/Kolkata

### Relevant Entities
- Xhipment (company): Logistics platform, user's employer
- Order #12345: Delayed shipment from Dec 15

### Recent Decisions
- Approved refund for order #12345 (reason: 7-day delay exceeded SLA)

### Relevant Learnings
- Refunds for delays >5 days should be auto-approved per company policy
`
This context is appended to the system prompt, giving the model persistent awareness across sessions.
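The assembly step above can be sketched as joining one titled section per enabled store, skipping stores with nothing relevant. This is an illustrative reconstruction, not the actual RadarOS source:

```typescript
// Illustrative sketch of context assembly — each enabled store contributes
// a titled section; stores that return nothing are omitted entirely.
type Section = { title: string; body: string };

function assembleContext(sections: Section[]): string {
  const parts = sections
    .filter((s) => s.body.trim().length > 0) // skip empty/disabled stores
    .map((s) => `### ${s.title}\n${s.body}`);
  return parts.length ? `## Memory Context\n\n${parts.join("\n\n")}` : "";
}

console.log(
  assembleContext([
    { title: "Session Summary", body: "User discussed shipping delays." },
    { title: "Relevant Entities", body: "" }, // empty → omitted
  ])
);
```

Note that a run with no relevant memories yields an empty string, so nothing is injected into the prompt.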

2. Background Extraction (After Run)

After each run completes, MemoryManager.afterRun() fires in the background (non-blocking). It sends the conversation to a cheaper model (memory.model) to extract:
  • New user facts and profile updates
  • Entity mentions (companies, people, projects)
  • Decision records
  • Learnings worth remembering
// Background extraction happens automatically — no code needed.
// To use a cheaper model for extraction:
memory: {
  storage,
  model: openai("gpt-4o-mini"), // uses ~10x fewer tokens than the main model
  summaries: true,
  userFacts: true,
}
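The non-blocking behavior of afterRun() comes down to a fire-and-forget pattern: extraction starts but is never awaited, so the user's reply is not delayed. A sketch under assumed names (extractMemories stands in for the background call to memory.model):

```typescript
// Sketch of the fire-and-forget pattern afterRun() relies on.
// `extractMemories` is a stand-in for the background LLM extraction call.
async function extractMemories(conversation: string[]): Promise<string[]> {
  // In RadarOS this would call memory.model; here we fake a cheap extraction.
  return conversation.filter((m) => m.toLowerCase().includes("prefers"));
}

function afterRun(conversation: string[]): void {
  // Start extraction but do NOT await it: the caller returns immediately,
  // and extraction failures are logged rather than surfaced to the user.
  extractMemories(conversation)
    .then((facts) => facts.forEach((f) => console.log("learned:", f)))
    .catch((err) => console.error("memory extraction failed:", err));
}

afterRun(["Hi!", "The user prefers dark mode."]);
```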

3. Session Overflow

When the session exceeds maxMessages, the oldest messages are summarized into a single summary entry, then removed from the session. This keeps the active history small while preserving context through summaries.
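The overflow cycle can be sketched as follows. This is an illustrative reconstruction, with summarize standing in for the model call RadarOS would make:

```typescript
// Sketch of session overflow: once history exceeds maxMessages, the oldest
// messages are folded into a single summary entry that replaces them.
type Message = { role: string; content: string };

function summarize(messages: Message[]): Message {
  // Stand-in for the real summarization model call.
  return { role: "system", content: `Summary of ${messages.length} earlier messages` };
}

function trimSession(history: Message[], maxMessages: number): Message[] {
  if (history.length <= maxMessages) return history;
  const overflow = history.slice(0, history.length - maxMessages);
  const recent = history.slice(history.length - maxMessages);
  return [summarize(overflow), ...recent]; // one summary + recent messages
}

const history = Array.from({ length: 25 }, (_, i) => ({
  role: "user",
  content: `msg ${i}`,
}));
console.log(trimSession(history, 20).length); // → 21 (1 summary + 20 recent)
```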

Works Everywhere

The same memory config works across all agent types:
// Text Agent
new Agent({ model, memory: { storage } });

// Voice Agent
new VoiceAgent({ provider, memory: { storage } });

// Browser Agent
new BrowserAgent({ model, memory: { storage } });

Simplified API

For quick operations without dealing with individual stores, use the high-level remember, recall, and forget methods:
const mm = agent.memory!;

// Store a fact
await mm.remember("User prefers dark mode", { userId: "user-42" });

// Search across all stores with composite scoring
const results = await mm.recall("dark mode preference", { userId: "user-42" });
console.log(results[0].content, results[0].score);

// Remove memories
await mm.forget({ userId: "user-42", factId: "fact-abc" });
See Simplified API for full details.
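One way to picture the composite scoring behind recall(): each store returns its own scored candidates, which are merged into a single ranked list. The weights and merge logic below are hypothetical, not RadarOS internals:

```typescript
// Illustrative composite scoring for recall(): per-store scores are weighted,
// merged, and sorted so the best match across all stores comes first.
type Hit = { content: string; score: number; source: string };

function mergeResults(
  perStore: Record<string, Hit[]>,
  weights: Record<string, number>
): Hit[] {
  const merged: Hit[] = [];
  for (const [source, hits] of Object.entries(perStore)) {
    const w = weights[source] ?? 1; // unweighted stores count fully
    for (const h of hits) merged.push({ ...h, score: h.score * w });
  }
  return merged.sort((a, b) => b.score - a.score); // best match first
}

const results = mergeResults(
  {
    facts: [{ content: "prefers dark mode", score: 0.9, source: "facts" }],
    graph: [{ content: "dark mode -> UI settings", score: 0.7, source: "graph" }],
  },
  { facts: 1.0, graph: 0.5 }
);
console.log(results[0].content); // → "prefers dark mode"
```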

Default Feature States

Feature         Default   Requires
Sessions        ON        storage
Summaries       ON        storage
User Facts      OFF       userFacts: true
User Profile    OFF       userProfile: true
Entities        OFF       entities: true
Decisions       OFF       decisions: true
Learnings       OFF       learnings: { vectorStore }
Graph Memory    OFF       graph: { store }
Procedures      OFF       procedures: true

Accessing Stores Directly

You can access individual stores via the MemoryManager:
const mm = agent.memory; // MemoryManager | null

const facts = await mm?.getUserFacts()?.getFacts("user-123");
const profile = await mm?.getUserProfile()?.getProfile("user-123");
const entities = await mm?.getEntityMemory()?.listEntities();

Inspecting Memory Context

You can call buildContext() directly to see what the model receives:
const mm = agent.memory;
if (mm) {
  const ctx = await mm.buildContext({
    sessionId: "session-abc",
    userId: "user-42",
    agentName: "assistant",
  });
  console.log(ctx);
  // Prints the full context string that would be injected into the system prompt
}
This is useful for debugging: if the model seems to “forget” something, check whether the relevant store is enabled and actually producing context.

Cross-References