User Memory

RadarOS agents can remember facts about users across sessions. The UserMemory class automatically extracts personal details from conversations — like preferences, location, profession — and injects them into future interactions for personalized responses. This is different from session memory, which stores conversation history within a single session. User memory persists per user, not per session.

Quick Start

import { Agent, openai, UserMemory, MongoDBStorage } from "@radaros/core";

const storage = new MongoDBStorage("mongodb://localhost:27017/myapp");

const userMemory = new UserMemory({
  storage,
  model: openai("gpt-4o-mini"),
  maxFacts: 50,
});

const agent = new Agent({
  name: "PersonalAssistant",
  model: openai("gpt-4o"),
  instructions: "You are a helpful personal assistant.",
  storage,
  userMemory,
  logLevel: "info",
});

// Conversation 1
const r1 = await agent.run(
  "Hi! I'm a TypeScript developer based in Mumbai.",
  { userId: "user-42", sessionId: "session-1" }
);

// Conversation 2 — new session, agent remembers the user
const r2 = await agent.run(
  "What frameworks should I learn?",
  { userId: "user-42", sessionId: "session-2" }
);
// Agent knows the user is a TypeScript developer from Mumbai
Storage is automatically initialized by both UserMemory and Agent — no need to call storage.initialize() manually.

How It Works

1. Conversation happens. The user talks to the agent via agent.run() or agent.stream().

2. Facts are auto-extracted. After each run, the agent fires a non-blocking LLM call to extract personal facts from the conversation (e.g., “Lives in Mumbai”, “Prefers concise answers”). This happens in the background and does not add latency to the response.

3. Facts are stored. Extracted facts are deduplicated and persisted in the storage driver under the memory:user namespace, keyed by userId.

4. Facts are injected. On subsequent runs, stored facts are injected into the system prompt so the agent can personalize its responses.
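The four steps above can be sketched as a plain data flow. This is a minimal, self-contained illustration of the loop, not the library's internals; all names here (factStore, storeFacts, contextString) are invented for the sketch, and the LLM extraction step is replaced by hard-coded strings:

```typescript
// Illustrative sketch of the extract → dedupe → store → inject loop.
type Fact = { id: string; fact: string };

const factStore = new Map<string, Fact[]>(); // keyed by userId, as in step 3

// Steps 2–3: store extracted facts, skipping duplicates (case-insensitive here).
function storeFacts(userId: string, extracted: string[]): void {
  const existing = factStore.get(userId) ?? [];
  const known = new Set(existing.map((f) => f.fact.toLowerCase()));
  for (const fact of extracted) {
    if (!known.has(fact.toLowerCase())) {
      existing.push({ id: `fact-${existing.length + 1}`, fact });
      known.add(fact.toLowerCase());
    }
  }
  factStore.set(userId, existing);
}

// Step 4: build the system-prompt injection from stored facts.
function contextString(userId: string): string {
  const facts = factStore.get(userId) ?? [];
  return facts.map((f) => `- ${f.fact}`).join("\n");
}

storeFacts("user-42", ["TypeScript developer", "Based in Mumbai"]);
storeFacts("user-42", ["Based in Mumbai"]); // duplicate, skipped
console.log(contextString("user-42"));
// - TypeScript developer
// - Based in Mumbai
```

In the real class, all of this happens behind agent.run(); you never call the extraction or injection steps yourself.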

Configuration

new UserMemory(config?: UserMemoryConfig)
storage: StorageDriver
Storage driver for persisting user facts. Defaults to InMemoryStorage. Use MongoDBStorage, PostgresStorage, or SqliteStorage for persistence.

model: ModelProvider
LLM used for automatic fact extraction. If not provided, the agent’s own model is used as a fallback.

maxFacts: number (default: 100)
Maximum number of facts stored per user. When exceeded, the oldest facts are dropped.

enabled: boolean (default: true)
Enable or disable auto-extraction. When disabled, getContextString() returns an empty string and extractAndStore() is a no-op.
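The maxFacts cap drops the oldest facts first, which amounts to simple FIFO eviction. A sketch of that behavior under the documented semantics (the helper name capFacts is illustrative):

```typescript
// FIFO eviction: when the cap is exceeded, keep only the newest maxFacts entries.
function capFacts<T>(facts: T[], maxFacts: number): T[] {
  return facts.length <= maxFacts ? facts : facts.slice(facts.length - maxFacts);
}

const facts = ["a", "b", "c", "d", "e"];
console.log(capFacts(facts, 3)); // keeps the newest three: ["c", "d", "e"]
```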

Methods

| Method | Returns | Description |
| --- | --- | --- |
| getFacts(userId) | Promise<UserFact[]> | Get all stored facts for a user |
| addFacts(userId, facts, source?) | Promise<void> | Manually add facts (deduplicates automatically) |
| removeFact(userId, factId) | Promise<void> | Remove a specific fact by ID |
| clear(userId) | Promise<void> | Clear all facts for a user |
| getContextString(userId) | Promise<string> | Formatted facts string for system prompt injection |
| extractAndStore(userId, messages, model?) | Promise<void> | Extract facts from messages and store them |
| asTool(config?) | ToolDef | Create a tool the agent can call to recall user facts |

UserFact Type

interface UserFact {
  id: string;
  fact: string;
  createdAt: Date;
  source: "auto" | "manual";
}
Facts from auto-extraction have source: "auto". Manually added facts have source: "manual".
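Because every fact carries a source field, you can separate auto-extracted facts from manually added ones when inspecting a user's memory. A self-contained sketch using the UserFact shape above (the sample data is invented):

```typescript
// Same shape as the UserFact interface documented above.
interface UserFact {
  id: string;
  fact: string;
  createdAt: Date;
  source: "auto" | "manual";
}

const facts: UserFact[] = [
  { id: "1", fact: "Based in Mumbai", createdAt: new Date(), source: "auto" },
  { id: "2", fact: "Prefers dark mode", createdAt: new Date(), source: "manual" },
];

// Keep only facts the application added explicitly.
const manual = facts.filter((f) => f.source === "manual");
console.log(manual.map((f) => f.fact)); // ["Prefers dark mode"]
```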

asTool() — Let the Agent Recall Facts On Demand

Instead of only injecting facts into the system prompt, you can give the agent a tool to actively look up user facts when asked. This is useful for queries like “What do you know about me?”.
const agent = new Agent({
  name: "MemoryBot",
  model: openai("gpt-4o"),
  instructions: "You are a friendly assistant with a great memory.",
  storage,
  userMemory,
  tools: [userMemory.asTool()],
});

await agent.run("What do you know about me?", { userId: "user-42" });
// Agent calls recall_user_facts tool → retrieves stored facts → responds
The tool reads ctx.userId from the run context automatically — no manual wiring needed.
Smart deduplication: When asTool() is registered in the agent’s tools, RadarOS automatically skips injecting user facts into the system prompt to avoid duplication. The agent retrieves facts on demand via the tool instead, saving tokens.

Options

userMemory.asTool({
  name: "recall_user_facts",       // default
  description: "Custom description" // optional override
});

What Gets Extracted

The extraction LLM is prompted to identify:
  • Preferences (e.g., “Prefers concise answers”)
  • Location (e.g., “Based in Mumbai”)
  • Profession (e.g., “TypeScript developer”)
  • Interests (e.g., “Loves building AI tools”)
  • Goals and communication style
It is explicitly instructed to skip transient information like “asked about weather today” and to avoid duplicating existing facts.
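The duplicate check itself is performed by the extraction LLM, but conceptually it compares each candidate fact against the facts already stored. A naive, purely illustrative version of that comparison (the real check is semantic, not string equality):

```typescript
// Toy duplicate check: normalize and compare strings.
// The actual extraction prompt does this semantically via the LLM.
function isDuplicate(candidate: string, existing: string[]): boolean {
  const norm = (s: string) => s.trim().toLowerCase();
  return existing.some((f) => norm(f) === norm(candidate));
}

console.log(isDuplicate("based in mumbai", ["Based in Mumbai"])); // true
console.log(isDuplicate("Uses VS Code", ["Based in Mumbai"]));    // false
```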

Manual Fact Management

You can also manage facts programmatically:
// Add facts manually
await userMemory.addFacts("user-42", [
  "Prefers dark mode",
  "Uses VS Code",
]);

// Remove a specific fact
const facts = await userMemory.getFacts("user-42");
await userMemory.removeFact("user-42", facts[0].id);

// Clear all facts for a user
await userMemory.clear("user-42");

Storage Options

If no storage driver is configured, UserMemory falls back to InMemoryStorage, so facts are lost when the process restarts:

const userMemory = new UserMemory({
  model: openai("gpt-4o-mini"),
  // no storage configured → defaults to InMemoryStorage
});

For facts that survive restarts, pass a persistent driver such as MongoDBStorage, PostgresStorage, or SqliteStorage, as shown in the Quick Start.
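Since facts live under the memory:user namespace keyed by userId (see How It Works), a storage driver only needs namespaced get/set semantics. The interface below is an assumption for illustration, not the actual StorageDriver contract from @radaros/core:

```typescript
// Hypothetical minimal driver showing namespace + userId keying only.
class TinyStorage {
  private data = new Map<string, unknown>();

  async set(namespace: string, key: string, value: unknown): Promise<void> {
    this.data.set(`${namespace}:${key}`, value);
  }

  async get<T>(namespace: string, key: string): Promise<T | undefined> {
    return this.data.get(`${namespace}:${key}`) as T | undefined;
  }
}

(async () => {
  const storage = new TinyStorage();
  await storage.set("memory:user", "user-42", [{ fact: "Based in Mumbai" }]);
  console.log(await storage.get("memory:user", "user-42"));
})();
```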

Session Memory vs User Memory

Session Memory

Stores conversation history per session. Used for multi-turn context within a single conversation. Configured via memory in AgentConfig.

User Memory

Stores personal facts per user. Persists across sessions. Used for cross-session personalization. Configured via userMemory in AgentConfig.
Both can be used together — session memory provides immediate conversation context, while user memory provides long-term personalization.
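The practical difference is the storage key: session memory is scoped to a (userId, sessionId) pair, while user memory is scoped to userId alone. A toy sketch of the two keyings (illustrative only, not the library's actual schema):

```typescript
// Illustrative keying; the real schemas live in the storage driver.
const sessionHistory = new Map<string, string[]>(); // key: `${userId}:${sessionId}`
const userFacts = new Map<string, string[]>();      // key: userId

sessionHistory.set("user-42:session-1", ["Hi! I'm a TypeScript developer."]);
sessionHistory.set("user-42:session-2", ["What frameworks should I learn?"]);
userFacts.set("user-42", ["TypeScript developer", "Based in Mumbai"]);

// A brand-new session starts with empty history...
console.log(sessionHistory.get("user-42:session-3") ?? []); // []
// ...but the user's facts are still available for personalization.
console.log(userFacts.get("user-42")); // ["TypeScript developer", "Based in Mumbai"]
```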

User Memory in Voice Agents

UserMemory also works with Voice Agents. The VoiceAgent handles everything internally:
  1. On connect({ userId }), stored facts are loaded and appended to the voice agent’s instructions
  2. During the conversation, transcripts are collected
  3. On disconnect, transcripts are consolidated (fragmented speech deltas merged) and facts are auto-extracted
import { VoiceAgent, OpenAIRealtimeProvider, OpenAIProvider, UserMemory } from "@radaros/core";

const agent = new VoiceAgent({
  name: "assistant",
  provider: new OpenAIRealtimeProvider("gpt-4o-realtime-preview"),
  userMemory,
  model: new OpenAIProvider("gpt-4o-mini"),
  instructions: "You are a helpful voice assistant.",
});

const session = await agent.connect({ userId: "user-42" });
// Facts are loaded automatically. New facts extracted on disconnect.
Voice agents use UserMemory but not Memory (long-term summarization). The realtime API manages its own conversation context. Only UserMemory persists across voice sessions.