Tools & Function Calling
RadarOS agents can call external functions (tools) during a run. Tools are defined with defineTool(), validated with Zod schemas, and executed automatically when the LLM decides to use them.
defineTool()
Use defineTool() to create a type-safe tool definition:
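A minimal sketch of the shape. So the example runs standalone, defineTool and the Zod schema are replaced by local stand-ins; in real code, import defineTool from RadarOS and build parameters with z.object({...}):

```typescript
// Stand-ins so this sketch is self-contained; in real code defineTool comes
// from RadarOS and `parameters` is a Zod object schema with .describe() fields.
interface RunContext { sessionId: string }
interface ToolDef<Args> {
  name: string;
  description: string;
  parameters: { parse: (input: unknown) => Args }; // Zod-like stand-in
  execute: (args: Args, ctx: RunContext) => Promise<string>;
}
const defineTool = <Args>(def: ToolDef<Args>): ToolDef<Args> => def;

// A minimal tool: generates a greeting for a given person.
const greetTool = defineTool<{ person: string }>({
  name: "greet",
  description: "Generate a friendly greeting for the given person.",
  parameters: { parse: (input) => input as { person: string } },
  execute: async ({ person }) => `Hello, ${person}!`,
});
```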
Tool Definition Anatomy
name

Unique identifier for the tool. The LLM uses this to invoke the tool. Use camelCase (e.g. getWeather, searchDatabase).

description

Natural language description of what the tool does. The LLM uses this to decide when to call the tool. Be clear and specific.

parameters

A Zod object schema defining the tool’s input. Use .describe() on fields to help the LLM understand each parameter. The schema is converted to JSON Schema for the provider.

execute

Async function that runs when the tool is called. Receives parsed args (typed from the schema) and RunContext. Returns a string or ToolResult (with optional artifacts).

Examples
Weather Tool
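A sketch of a getWeather tool. A canned lookup table stands in for a real weather API call, and a plain interface stands in for defineTool plus Zod, so the example is self-contained:

```typescript
// Self-contained weather tool sketch; in real code this would call a weather
// API and be created with defineTool() and a Zod parameters schema.
interface RunContext { signal?: AbortSignal }
interface Tool<Args> {
  name: string;
  description: string;
  execute: (args: Args, ctx: RunContext) => Promise<string>;
}

// Canned data standing in for an external API response.
const conditions: Record<string, string> = {
  london: "12°C, light rain",
  tokyo: "18°C, clear",
};

const getWeather: Tool<{ city: string }> = {
  name: "getWeather",
  description: "Get the current weather for a city.",
  execute: async ({ city }) => {
    const report = conditions[city.toLowerCase()];
    return report ? `Weather in ${city}: ${report}` : `No data for ${city}`;
  },
};
```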
Calculator Tool
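A calculator tool sketch, again with hand-rolled validation standing in for the Zod schema:

```typescript
// Calculator tool sketch: four basic operations on two numbers.
type Op = "add" | "sub" | "mul" | "div";

const calculator = {
  name: "calculator",
  description: "Perform basic arithmetic on two numbers.",
  execute: async (args: { op: Op; a: number; b: number }): Promise<string> => {
    const { op, a, b } = args;
    switch (op) {
      case "add": return String(a + b);
      case "sub": return String(a - b);
      case "mul": return String(a * b);
      case "div":
        if (b === 0) throw new Error("Division by zero");
        return String(a / b);
    }
  },
};
```

Throwing (as in the division-by-zero case) is safe: as described under Tool Error Handling below, the error message is returned to the LLM as the tool result.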
Multiple Tools on an Agent
Pass an array of tools to the agent.

ToolResult Type

Instead of returning a plain string, return a ToolResult to include optional artifacts:
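A sketch of the ToolResult shape described here. The exact artifact fields are assumptions; the point is that content goes to the LLM while artifacts travel to your application:

```typescript
// Sketch of a tool returning a ToolResult: `content` is what the LLM sees,
// `artifacts` carry the structured payload (field names are assumptions).
interface ToolResult {
  content: string;
  artifacts?: Array<{ type: string; data: unknown }>;
}

const searchOrders = {
  name: "searchOrders",
  description: "Search orders and return a summary plus the raw rows.",
  execute: async (args: { query: string }): Promise<ToolResult> => {
    const rows = [{ id: "ord_1", total: 42 }, { id: "ord_2", total: 17 }]; // placeholder data
    return {
      content: `Found ${rows.length} orders matching "${args.query}".`,
      artifacts: [{ type: "json", data: rows }],
    };
  },
};
```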
Tool Artifacts
Artifacts are structured data attached to a tool result. The LLM only sees the content string, but your application receives the full artifacts in ToolCallResult. Use artifacts to pass structured data (JSON, images, charts) through the agent pipeline without polluting the LLM context.
Parallel Tool Execution
When the LLM returns multiple tool calls in a single turn, RadarOS executes them in parallel (batches of up to 5 by default). Each tool call is validated against its schema; invalid arguments produce an error result for that call only.

Tool Caching
Add a cache property to any tool to cache results and avoid redundant executions:
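A sketch of what a cached tool might look like. The cache option shape shown here (ttl in milliseconds, a key function derived from the arguments) is an assumption about the API, not confirmed RadarOS behavior, and the tool is a plain object so the example runs standalone:

```typescript
// Sketch only: the `cache` option fields (ttl, key) are assumed, not
// documented API; shown on a plain object for illustration.
const getExchangeRate = {
  name: "getExchangeRate",
  description: "Fetch the current exchange rate for a currency pair.",
  cache: {
    ttl: 60_000, // assumed: cached results expire after 60s
    key: (args: { base: string; quote: string }) => `${args.base}/${args.quote}`,
  },
  execute: async (args: { base: string; quote: string }) =>
    `1 ${args.base} = 0.92 ${args.quote}`, // placeholder rate
};
```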
Strict Mode (OpenAI Structured Outputs)
Enable strict on a tool to activate OpenAI’s Structured Outputs for tool calls. When enabled, the model is guaranteed to return valid JSON matching the schema — no malformed arguments.

When strict is true, RadarOS automatically:
- Strips verbose JSON Schema metadata ($schema, additionalProperties on nested objects)
- Sets additionalProperties: false at the top level (required by OpenAI)
- Passes strict: true to the OpenAI function definition
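The first two steps can be sketched as follows. This is illustrative only, not RadarOS’s actual implementation:

```typescript
// Illustrative strict-mode cleanup: drop $schema, drop additionalProperties
// on nested objects, and force additionalProperties: false at the top level.
type JsonSchema = { [key: string]: unknown };

function toStrictSchema(schema: JsonSchema): JsonSchema {
  const strip = (node: unknown, topLevel: boolean): unknown => {
    if (Array.isArray(node)) return node.map((n) => strip(n, false));
    if (node === null || typeof node !== "object") return node;
    const out: JsonSchema = {};
    for (const [k, v] of Object.entries(node)) {
      if (k === "$schema") continue;                           // strip metadata
      if (k === "additionalProperties" && !topLevel) continue; // strip on nested objects
      out[k] = strip(v, false);
    }
    if (topLevel) out.additionalProperties = false;            // required by OpenAI
    return out;
  };
  return strip(schema, true) as JsonSchema;
}
```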
Strict mode is only supported by OpenAI models. For other providers, the strict option is ignored.

Sandbox & Approval
Tools support two additional safety features:

- sandbox — Run the tool in an isolated subprocess with timeout and memory limits. See Sandbox Execution.
- requiresApproval — Require human approval before executing the tool. See Human-in-the-Loop.
Tool Error Handling
When a tool’s execute function throws an error, RadarOS catches it and returns the error message as the tool result. The LLM sees the error and can decide how to respond — often retrying with different arguments or explaining the failure to the user.
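The mechanism amounts to a try/catch around the execute call. A sketch, not RadarOS’s actual code:

```typescript
// Illustrative: convert a thrown error into a tool result string the LLM can
// read, instead of letting it crash the run.
type ToolLike = { name: string; execute: (args: unknown) => Promise<string> };

async function runToolSafely(tool: ToolLike, args: unknown): Promise<string> {
  try {
    return await tool.execute(args);
  } catch (err) {
    // The error message becomes the tool result for this call.
    return `Error in tool "${tool.name}": ${err instanceof Error ? err.message : String(err)}`;
  }
}
```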
Tool Router (Automatic Tool Selection)
When an agent has many tools (e.g., 50+ from an MCP server), sending all tool schemas in every prompt wastes tokens and can confuse the model. The Tool Router automatically selects only the relevant tools for each query using a cheap/fast model.

Basic usage
Pass a toolRouter config to the agent. The router runs before every LLM call and filters the tool list:
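A sketch of the config shape using the options from the table below. The anthropic(...) provider call and the tool list are stubbed so the example runs standalone; substitute your real model provider and tools:

```typescript
// Stubs standing in for a real model provider and a real tool list.
const anthropic = (model: string) => ({ provider: "anthropic", model });
const allTools = [{ name: "getWeather" }, { name: "searchOrders" }]; // e.g. 60 MCP tools

const agentConfig = {
  tools: allTools,
  toolRouter: {
    model: anthropic("claude-haiku-4-5-20251001"), // cheap/fast selection model
    maxTools: 8,     // send at most 8 tools per query
    minTools: 1,     // fewer than this selected: fall back to all tools
    temperature: 0,
  },
};
```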
Configuration
| Option | Type | Default | Description |
|---|---|---|---|
| model | ModelProvider | required | Cheap/fast model used to select relevant tools |
| maxTools | number | 8 | Maximum number of tools to select per query |
| minTools | number | 0 | Minimum tools to return; if selection returns fewer, all tools are sent as fallback |
| temperature | number | 0 | Temperature for the selection model |
How it works
- Before each run() or stream() call, the router sends the user query and a compact tool index (name + description) to the selection model.
- The selection model returns a JSON array of tool names.
- The agent rebuilds its tool set with only the selected tools for that turn.
- If selection fails or returns too few tools, all tools are sent as a safe fallback.
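The steps above can be sketched as a single function. The selectNames callback stands in for the call to the cheap selection model; the rest mirrors the filter-and-fallback logic:

```typescript
// Sketch of per-turn tool routing with a safe fallback (illustrative only).
interface NamedTool { name: string; description: string }

async function routeTools(
  tools: NamedTool[],
  selectNames: (query: string, index: NamedTool[]) => Promise<string[]>, // stand-in for the selection model
  query: string,
  minTools = 0,
): Promise<NamedTool[]> {
  try {
    const names = await selectNames(query, tools);
    const selected = tools.filter((t) => names.includes(t.name));
    if (selected.length < minTools) return tools; // too few selected: send everything
    return selected;
  } catch {
    return tools; // selection failed: send everything
  }
}
```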
When to use it
- Many tools — MCP servers or large toolkit collections with 10+ tools
- Cost-sensitive — Reducing prompt tokens directly lowers cost
- Mixed-domain agents — Agent has tools across shipping, billing, CRM, etc. and most queries only need a few
Choosing a router model
Use the cheapest model that can reliably read tool names and match them to a query. Good choices:

- anthropic("claude-haiku-4-5-20251001")
- openai("gpt-4o-mini")
- google("gemini-2.0-flash")
The toolRouter config is entirely optional. If omitted, the agent sends all tools on every call (the default behavior).

Tool Result Limits
When tools return large payloads (e.g. an MCP server returning 200KB of database records), the entire result is sent back to the LLM on the next roundtrip — causing massive prompt token usage and high costs. toolResultLimit intercepts oversized tool results and either smart-truncates them or summarizes them via a cheap model before they reach the main LLM.
Basic usage
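A sketch of the option shape, following the configuration table below. The openai(...) provider call is a stub so the example runs standalone:

```typescript
// Stub standing in for a real model provider.
const openai = (model: string) => ({ provider: "openai", model });

const agentConfig = {
  toolResultLimit: {
    maxChars: 20_000,               // results longer than this are intercepted
    strategy: "truncate" as const,
    // To summarize via a cheap model instead of truncating:
    // strategy: "summarize" as const,
    // model: openai("gpt-4o-mini"),
  },
};
```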
With summarization
Use a cheap model to summarize large results instead of truncating. This preserves data fidelity while cutting tokens.

Configuration
| Option | Type | Default | Description |
|---|---|---|---|
| maxChars | number | 20000 | Character threshold before the strategy kicks in |
| strategy | "truncate" \| "summarize" | "truncate" | How to handle oversized results |
| model | ModelProvider | — | Model for summarization (required when strategy is "summarize") |
Strategies
"truncate" (default) — Smart JSON-aware truncation:
- JSON arrays: keeps the first N items that fit, appends "[Showing 15 of 892 items — 877 more omitted]"
- JSON objects with array values: truncates each array proportionally
- Plain text: hard-cut with a note about remaining characters
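A simplified sketch of the truncation behavior, covering the JSON-array and plain-text cases (not RadarOS’s actual implementation, and omitting the proportional object handling):

```typescript
// Illustrative JSON-aware truncation: arrays keep the first items that fit
// and note how many were omitted; plain text is hard-cut with a note.
function truncateResult(result: string, maxChars: number): string {
  if (result.length <= maxChars) return result;
  try {
    const parsed = JSON.parse(result);
    if (Array.isArray(parsed)) {
      const kept: unknown[] = [];
      let size = 2; // account for the surrounding brackets
      for (const item of parsed) {
        const itemSize = JSON.stringify(item).length + 1; // item + comma
        if (size + itemSize > maxChars) break;
        kept.push(item);
        size += itemSize;
      }
      const omitted = parsed.length - kept.length;
      return `${JSON.stringify(kept)}\n[Showing ${kept.length} of ${parsed.length} items — ${omitted} more omitted]`;
    }
  } catch {
    // not JSON: fall through to the plain-text cut
  }
  return `${result.slice(0, maxChars)}\n[${result.length - maxChars} more characters omitted]`;
}
```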
"summarize" — Sends the full result to a cheap/fast model with instructions to preserve all key data points, totals, IDs, and dates. The summary replaces the original result before the main model sees it. Falls back to truncation if summarization fails.
When to use it
- MCP tools returning bulk data — API endpoints that return full record sets without pagination
- Cost-sensitive agents — A 200KB tool result can cost $0.25+ in prompt tokens on a single roundtrip
- Any agent with maxToolRoundtrips > 0 — Tool results are sent back to the LLM on each roundtrip; large results compound quickly
toolResultLimit only affects text results sent back to the LLM. It does not modify the ToolCallResult stored in RunOutput.toolCalls — your application still has access to the original full result.

Using RunContext in Tools
The execute function receives two arguments: args (the parsed parameters) and ctx (the RunContext). Use ctx to access session info, metadata, state, and dependencies.
RunContext properties available in tools
| Property | Type | Description |
|---|---|---|
| ctx.runId | string | Current run’s unique ID |
| ctx.sessionId | string | Session ID |
| ctx.userId | string? | User ID |
| ctx.tenantId | string? | Tenant ID |
| ctx.metadata | Record<string, unknown> | Arbitrary metadata from RunOpts |
| ctx.sessionState | Record<string, unknown> | Mutable key-value state bag |
| ctx.dependencies | Record<string, string> | Resolved dependency variables |
| ctx.signal | AbortSignal? | Cancellation signal |
| ctx.eventBus | EventBus | Event bus for emitting events |
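A sketch of a tool that uses the mutable state bag. The RunContext here is a minimal stand-in carrying only the fields the tool reads:

```typescript
// A tool that accumulates notes in ctx.sessionState so later tool calls in
// the same session can read them (RunContext reduced to the fields used).
interface RunContext {
  sessionId: string;
  sessionState: Record<string, unknown>;
}

const rememberNote = {
  name: "rememberNote",
  description: "Store a note in session state for later tools to read.",
  execute: async (args: { note: string }, ctx: RunContext): Promise<string> => {
    const notes = (ctx.sessionState.notes as string[] | undefined) ?? [];
    notes.push(args.note);
    ctx.sessionState.notes = notes; // mutate the shared state bag
    return `Saved note ${notes.length} for session ${ctx.sessionId}.`;
  },
};
```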
Sandbox Configuration
Run tools in isolated subprocesses with resource limits. Set at the agent level (applies to all tools) or per-tool.

| Property | Type | Default | Description |
|---|---|---|---|
| enabled | boolean | true (when config provided) | Explicit on/off |
| timeout | number | 30000 (30s) | Execution timeout in ms |
| maxMemoryMB | number | 256 | Max heap memory in MB |
| allowNetwork | boolean | false | Allow outbound network |
| allowFS | boolean \| { readOnly?: string[]; readWrite?: string[] } | false | Filesystem access. Pass object for granular paths |
| env | Record<string, string> | undefined | Environment variables forwarded to the sandbox |
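A sketch of a sandbox config built from the options in the table above (paths and values are illustrative):

```typescript
// Example sandbox settings: tight timeout, no network, scoped filesystem.
const sandboxConfig = {
  enabled: true,
  timeout: 10_000,       // kill the subprocess after 10s
  maxMemoryMB: 128,
  allowNetwork: false,
  allowFS: {
    readOnly: ["/data/reference"],   // illustrative paths
    readWrite: ["/tmp/tool-scratch"],
  },
  env: { API_MODE: "sandboxed" },    // forwarded to the subprocess
};
```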
Conditional Approval
requiresApproval can be a function that decides approval based on the arguments:
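A sketch of the predicate form: the callback is assumed to receive the parsed arguments and return whether approval is needed:

```typescript
// A refund tool that only needs human approval above a threshold; the
// predicate signature (parsed args in, boolean out) is assumed.
const refundTool = {
  name: "issueRefund",
  description: "Refund a payment to the customer.",
  requiresApproval: (args: { amount: number }) => args.amount > 500, // approve large refunds only
  execute: async (args: { amount: number }) => `Refunded $${args.amount}.`,
};
```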
Runtime Tool Mutation
Add, remove, and replace tools on a running agent.

Dynamic Tool Resolver
For context-dependent tools that change per run, use toolResolver:
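A sketch of a resolver that varies the tool set per run. The callback signature (receiving the RunContext and returning the tools for that run) is an assumption based on the description above:

```typescript
// Assumed shape: toolResolver receives the RunContext and returns the tool
// set for that specific run.
interface RunContext { userId?: string }
interface ToolLike { name: string; description: string }

const baseTools: ToolLike[] = [
  { name: "getProfile", description: "Fetch the current user's profile." },
];
const adminTools: ToolLike[] = [
  { name: "deleteUser", description: "Delete a user account." },
];

// Admins get extra tools; everyone else gets the base set.
const toolResolver = async (ctx: RunContext): Promise<ToolLike[]> =>
  ctx.userId === "admin" ? [...baseTools, ...adminTools] : baseTools;
```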