OpenAI

Use OpenAI’s GPT-4o, GPT-4o-mini, GPT-4-turbo, and o1 models with RadarOS through the unified ModelProvider interface.

Setup

Install the OpenAI SDK (required by RadarOS for OpenAI support):
npm install openai

Factory

import { openai } from "@radaros/core";

const model = openai("gpt-4o");
modelId (string, required)
  The OpenAI model identifier.
config (object)
  Optional configuration. See Config below.

Supported Models

Model ID      Description
gpt-4o        Latest flagship model. Fast and capable.
gpt-4o-mini   Smaller, faster, cost-effective.
gpt-4-turbo   High capability, larger context window.
o1-preview    Reasoning-optimized model.
Pass any valid OpenAI model ID to the factory; new models work as soon as the OpenAI API exposes them.

Config

apiKey (string)
  OpenAI API key. If omitted, falls back to the OPENAI_API_KEY environment variable.
baseURL (string)
  Custom API base URL. Use for Azure OpenAI, proxies, or self-hosted endpoints.

Example

const model = openai("gpt-4o", {
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://api.openai.com/v1", // or Azure/proxy URL
});

Per-Request API Key Override

Override the API key for individual requests (e.g., multi-tenant apps):
const model = openai("gpt-4o");

// Use default key from env/config
const out1 = await agent.run("Hello");

// Override for this request
const out2 = await agent.run("Hello", { apiKey: "sk-tenant-specific-key" });
The apiKey in RunOpts is passed through to the model’s generate() and stream() calls.
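The resolution order can be sketched as follows. This is a minimal, hypothetical illustration of the precedence (per-request over factory config over environment); `resolveApiKey` and the interface shapes are not part of the RadarOS API:

```typescript
// Hypothetical helper illustrating key precedence: per-request override
// wins, then the factory config, then the OPENAI_API_KEY environment variable.
interface ProviderConfig { apiKey?: string }
interface RunOpts { apiKey?: string }

function resolveApiKey(config: ProviderConfig, opts?: RunOpts): string | undefined {
  return opts?.apiKey ?? config.apiKey ?? process.env.OPENAI_API_KEY;
}
```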

Realtime / Voice

For real-time voice agents, use openaiRealtime() to create an OpenAI Realtime provider:
import { VoiceAgent, openaiRealtime } from "@radaros/core";

const agent = new VoiceAgent({
  name: "assistant",
  provider: openaiRealtime("gpt-4o-realtime-preview"),
  instructions: "You are a voice assistant.",
  voice: "alloy",
});
openaiRealtime() is a shorthand for new OpenAIRealtimeProvider(). It accepts the same config:
openaiRealtime("gpt-4o-realtime-preview", {
  apiKey: "sk-...",    // optional, defaults to OPENAI_API_KEY env
  baseURL: "wss://...", // optional custom WebSocket endpoint
});
Requires: npm install ws. See the Voice Agents docs for full details.

Multi-Modal Support

OpenAI GPT-4o models support images, audio, and files as input.

Images

const result = await agent.run([
  { type: "text", text: "Describe this image." },
  { type: "image", data: "https://example.com/photo.jpg", mimeType: "image/jpeg" },
]);

Audio

const result = await agent.run([
  { type: "text", text: "Transcribe this audio." },
  { type: "audio", data: base64AudioData, mimeType: "audio/mp3" },
]);

Files

Files are sent using OpenAI’s native file input type. Both URLs and base64 data are supported:
const result = await agent.run([
  { type: "text", text: "Summarize this document." },
  { type: "file", data: "https://example.com/report.pdf", mimeType: "application/pdf", filename: "report.pdf" },
]);
For base64 files, the provider automatically wraps them as data: URIs for the API.
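That wrapping step can be sketched like this. It is an illustrative helper, not a RadarOS export; URLs are assumed to pass through untouched while raw base64 gets the data: scheme prefix:

```typescript
// Hypothetical sketch of how base64 file data could be wrapped as a
// data: URI, while http(s) URLs are forwarded unchanged.
function toDataUri(data: string, mimeType: string): string {
  if (/^https?:\/\//.test(data)) return data; // already a URL
  return `data:${mimeType};base64,${data}`;
}
```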

Reasoning Models (o-series)

OpenAI’s o-series models (o1, o3) have built-in chain-of-thought reasoning. Configure them via the reasoning config:
import { Agent, openai } from "@radaros/core";

const agent = new Agent({
  name: "reasoning-agent",
  model: openai("o3-mini"),
  instructions: "You are a precise, analytical assistant.",
  reasoning: {
    effort: "high", // "low" | "medium" | "high"
  },
});

const result = await agent.run(
  "If a train travels at 120 km/h for 2.5 hours, then slows to 80 km/h for 1.75 hours, what is the total distance?"
);

console.log(result.text);
// "The total distance is 440 km. (120 × 2.5 = 300 km) + (80 × 1.75 = 140 km)"
The effort parameter controls how much computation the model spends on reasoning:
  • "low" — Quick answers, minimal reasoning
  • "medium" — Balanced reasoning
  • "high" — Maximum reasoning depth, best for complex problems
Reasoning models may not support system prompts or streaming depending on the version. RadarOS handles these constraints automatically.
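One such adaptation can be sketched as folding system instructions into the first user message for model versions that reject the system role. This is an assumption about how such handling might look, not RadarOS's actual implementation; `adaptForReasoningModel` and the `Msg` type are illustrative:

```typescript
// Hypothetical sketch: some o-series versions reject "system" messages,
// so instructions are prepended to the first user message instead.
type Msg = { role: "system" | "user" | "assistant"; content: string };

function adaptForReasoningModel(messages: Msg[]): Msg[] {
  const system = messages.filter((m) => m.role === "system");
  const rest = messages.filter((m) => m.role !== "system");
  if (system.length === 0) return rest;
  const preamble = system.map((m) => m.content).join("\n");
  return rest.map((m, i) =>
    i === 0 && m.role === "user"
      ? { ...m, content: `${preamble}\n\n${m.content}` }
      : m
  );
}
```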

Structured Outputs

OpenAI supports strict structured output via JSON mode. When using defineTool with Zod schemas, RadarOS automatically uses OpenAI’s strict mode for more reliable function calling:
import { Agent, openai, defineTool } from "@radaros/core";
import { z } from "zod";

const extractTool = defineTool({
  name: "extractContact",
  description: "Extract contact information from text",
  parameters: z.object({
    name: z.string().describe("Full name"),
    email: z.string().email().describe("Email address"),
    phone: z.string().optional().describe("Phone number"),
    company: z.string().optional().describe("Company name"),
  }),
  execute: async (contact) => JSON.stringify(contact),
});

const agent = new Agent({
  name: "extractor",
  model: openai("gpt-4o"),
  tools: [extractTool],
  instructions: "Extract contact info from the user's message using the extractContact tool.",
});

const result = await agent.run(
  "Reach out to Jane Doe at jane@acme.com, she works at Acme Corp"
);
Zod schemas are converted to JSON Schema with strict: true, ensuring the model always returns valid, well-typed arguments.
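The tool definition that reaches the API looks roughly like this (an illustrative shape based on OpenAI's strict structured-output format, not the exact RadarOS output; note that strict mode lists every property in `required` and typically expresses optional fields as nullable types):

```typescript
// Illustrative shape of the extractContact tool after Zod -> JSON Schema
// conversion with OpenAI strict mode enabled.
const extractContactTool = {
  type: "function",
  function: {
    name: "extractContact",
    description: "Extract contact information from text",
    strict: true, // arguments are validated against the schema
    parameters: {
      type: "object",
      properties: {
        name: { type: "string", description: "Full name" },
        email: { type: "string", description: "Email address" },
        phone: { type: ["string", "null"], description: "Phone number" },
        company: { type: ["string", "null"], description: "Company name" },
      },
      required: ["name", "email", "phone", "company"],
      additionalProperties: false, // required by strict mode
    },
  },
};
```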

Full Example

import { Agent, openai } from "@radaros/core";

const agent = new Agent({
  name: "GPT Assistant",
  model: openai("gpt-4o", {
    apiKey: process.env.OPENAI_API_KEY,
  }),
  instructions: "You are a concise, helpful assistant.",
});

const output = await agent.run("What is 2 + 2?");
console.log(output.text);