# Agents

## What is an Agent?

An Agent in RadarOS is a conversational AI unit that processes user input, optionally calls tools, and produces text or structured output. Agents are model-agnostic (you can plug in any supported LLM provider: OpenAI, Anthropic, Google Gemini, or Ollama) and can be extended with tools, memory, sessions, and guardrails.

### Synchronous

Use `run(input, opts?)` for a single request-response cycle.

### Streaming

Use `stream(input, opts?)` for token-by-token or chunk-by-chunk output.
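The two call styles can be sketched with a minimal local stand-in. Note that `EchoAgent` below is an illustrative mock, not the real RadarOS `Agent` class; only the `run`/`stream` signatures mirror the ones documented here.

```typescript
// Illustrative stand-in for the Agent surface: EchoAgent is a local mock,
// not a RadarOS class. It exists only to show the two call shapes.
type StreamChunk = { text: string };
type RunOutput = { text: string };

class EchoAgent {
  // run(input, opts?): one request-response cycle, resolves to the full output.
  async run(input: string): Promise<RunOutput> {
    return { text: `echo: ${input}` };
  }

  // stream(input, opts?): yields the response chunk by chunk.
  async *stream(input: string): AsyncGenerator<StreamChunk> {
    for (const word of `echo: ${input}`.split(" ")) {
      yield { text: word + " " };
    }
  }
}

async function demo(): Promise<string> {
  const agent = new EchoAgent();
  const full = await agent.run("hello"); // full response at once
  let streamed = "";
  for await (const chunk of agent.stream("hello")) {
    streamed += chunk.text; // arrives incrementally
  }
  return `${full.text} | ${streamed.trim()}`;
}
```

In practice you would pick one style per call site: `run` for request/response endpoints, `stream` when the UI renders partial output.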
## AgentConfig

Configure an agent by passing an `AgentConfig` object to the `Agent` constructor. All properties except `name` and `model` are optional.

- `name` (required): Display name for the agent. Used in logs and events.
- `model` (required): The LLM provider instance. Use factory functions like `openai("gpt-4o")`, `anthropic("claude-sonnet-4-20250514")`, or `google("gemini-2.0-flash")`.
- Instructions: System instructions for the agent. Can be a static string or a function that receives `RunContext` and returns a string (useful for dynamic prompts).
- Tools: Array of tool definitions created with `defineTool()`. Enables function calling.
- Memory: Memory instance for short-term and optional long-term conversation context. See Memory.
- Storage: Storage driver for sessions. Defaults to `InMemoryStorage` if not provided.
- Session ID: Default session ID for this agent. Can be overridden per-run via `RunOpts`.
- User ID: Default user ID for this agent. Can be overridden per-run via `RunOpts`.
- History: Whether to include session history in the messages sent to the LLM. Set to `false` to disable.
- History limit: Limits how many prior turns to include. Each turn is 2 messages (user + assistant). If unset, defaults to 20 messages.
- Tool-call limit: Maximum number of tool-call rounds before stopping. Prevents infinite loops.
- Temperature: Sampling temperature passed to the model (typically 0–2). Lower values are more deterministic.
- `structuredOutput`: Zod schema to enforce structured JSON output. See Structured Output.
- Hooks: Lifecycle hooks `beforeRun`, `afterRun`, `onToolCall`, and `onError`. See Hooks & Guardrails.
- Guardrails: Input and output guardrails. Each is an array of validators. See Hooks & Guardrails.
- Event bus: Custom event bus for emitting agent events. Defaults to a new `EventBus` if not provided.
- User memory: User memory instance for cross-session personalization. See User Memory.
- Log level: Logging level, one of `"debug" | "info" | "warn" | "error" | "silent"`. Default is `"silent"`.
- Retries: Retry configuration for transient LLM API failures (429, 5xx, network errors). Default: 3 retries with exponential backoff.
- Max context tokens: Maximum context window tokens. When set, conversation history is automatically trimmed (oldest messages first) to fit within this limit.
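The two history limits described above (a turn cap and a token budget) can be sketched as plain functions. The names `limitTurns`, `trimToTokenBudget`, and `Message` are hypothetical, not RadarOS APIs; the real trimming happens inside the agent.

```typescript
// Illustrative sketches of the two documented history limits.
// All names here are hypothetical, not RadarOS exports.
type Message = { role: "user" | "assistant"; content: string };

// Keep at most the last `maxTurns` turns (1 turn = 2 messages).
// If maxTurns is unset, fall back to the documented default of 20 messages.
function limitTurns(history: Message[], maxTurns?: number): Message[] {
  const maxMessages = maxTurns !== undefined ? maxTurns * 2 : 20;
  return history.slice(-maxMessages);
}

// Drop oldest messages first until the history fits a token budget.
// countTokens is a stand-in; a real implementation would use a tokenizer.
function trimToTokenBudget(
  history: Message[],
  maxTokens: number,
  countTokens: (m: Message) => number,
): Message[] {
  const kept = [...history];
  let total = kept.reduce((sum, m) => sum + countTokens(m), 0);
  while (kept.length > 0 && total > maxTokens) {
    total -= countTokens(kept.shift()!); // remove the oldest message first
  }
  return kept;
}
```

Both limits favor recent context: trimming always starts from the oldest end so the latest turns stay intact.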
## Methods

### run(input, opts?) → RunOutput

Processes user input and returns the full response.

### stream(input, opts?) → AsyncGenerator&lt;StreamChunk&gt;

Streams the response as chunks. Use for real-time UIs or SSE.

## RunOpts

Options passed to `run()` or `stream()`:
| Property | Type | Description |
|---|---|---|
| `sessionId` | `string` | Session ID for multi-turn conversations. Auto-generated if omitted. |
| `userId` | `string` | User identifier. |
| `metadata` | `Record<string, unknown>` | Arbitrary metadata available in `RunContext`. |
| `apiKey` | `string` | Per-request API key override for the model provider. |
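The session-ID default in the table above can be sketched as a small normalizer. `resolveRunOpts` is a hypothetical helper written for illustration; RadarOS performs this normalization internally when you call `run()` or `stream()`.

```typescript
// Hypothetical sketch of RunOpts normalization: sessionId is auto-generated
// when omitted, per the table above. Not a RadarOS export.
import { randomUUID } from "node:crypto";

interface RunOpts {
  sessionId?: string;
  userId?: string;
  metadata?: Record<string, unknown>;
  apiKey?: string;
}

function resolveRunOpts(opts: RunOpts = {}): RunOpts & { sessionId: string } {
  return {
    ...opts,
    // Generate a fresh session ID so every run belongs to some session.
    sessionId: opts.sessionId ?? `session-${randomUUID()}`,
  };
}
```

Passing the same `sessionId` across calls is what links turns into one multi-turn conversation; omitting it starts a new session each time.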
## RunOutput

The object returned by `run()`:

| Property | Type | Description |
|---|---|---|
| `text` | `string` | The assistant's text response. |
| `toolCalls` | `ToolCallResult[]` | Results from any tool calls made during the run. |
| `usage` | `TokenUsage` | `{ promptTokens, completionTokens, totalTokens, reasoningTokens? }` |
| `structured` | `unknown` | Parsed structured output when a `structuredOutput` schema is set. |
| `thinking` | `string` | The model's internal reasoning content (when reasoning is enabled). |
| `durationMs` | `number` | Wall-clock duration of the run in milliseconds. |
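A consumer of these fields might look like the sketch below. The `summarizeRun` helper is illustrative, and `ToolCallResult` is reduced to a single assumed field for the example; only the `RunOutput` field names and types come from the table above.

```typescript
// Illustrative: a RunOutput-shaped object and a helper that reads its fields.
// summarizeRun is hypothetical; ToolCallResult is reduced for the sketch.
interface TokenUsage {
  promptTokens: number;
  completionTokens: number;
  totalTokens: number;
  reasoningTokens?: number;
}

interface RunOutput {
  text: string;
  toolCalls: { toolName: string }[]; // stand-in for ToolCallResult[]
  usage: TokenUsage;
  structured?: unknown;
  thinking?: string;
  durationMs: number;
}

// Hypothetical helper: one log line per completed run.
function summarizeRun(out: RunOutput): string {
  return `${out.toolCalls.length} tool call(s), ` +
    `${out.usage.totalTokens} tokens, ${out.durationMs}ms`;
}
```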