# OpenAI

Use OpenAI's GPT-4o, GPT-4o-mini, GPT-4-turbo, and o1 models with RadarOS through the unified model provider interface.
## Setup
- Install
- Environment
Install the OpenAI SDK (required by RadarOS for OpenAI support):
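For an npm-based project, this is typically:

```shell
npm install openai
```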
## Factory

The factory takes the OpenAI model identifier, plus optional configuration (see Config below).
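A minimal sketch of the factory call. The `openai` function body below is a hypothetical stand-in stub for illustration; the real RadarOS factory returns a full model provider wired to the OpenAI SDK.

```typescript
// Hypothetical stub of the openai() factory, for illustration only.
// The real RadarOS factory constructs a model provider; this just
// records its inputs so the call shape is visible.
function openai(modelId: string, config: Record<string, unknown> = {}) {
  return { modelId, config };
}

// Pass any valid OpenAI model ID as the first argument.
const model = openai("gpt-4o");
```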
## Supported Models

| Model ID | Description |
|---|---|
| `gpt-4o` | Latest flagship model. Fast, capable. |
| `gpt-4o-mini` | Smaller, faster, cost-effective. |
| `gpt-4-turbo` | High capability, larger context. |
| `o1-preview` | Reasoning-optimized model. |
### Using a different OpenAI model
Pass any valid OpenAI model ID to the factory. New models are supported as soon as the OpenAI API supports them.
## Config

- **API key** – OpenAI API key. If omitted, the `OPENAI_API_KEY` environment variable is used.
- **Base URL** – Custom API base URL. Use for Azure OpenAI, proxies, or self-hosted endpoints.
### Example
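A configuration sketch, again using a hypothetical stub in place of the real factory. The field names `apiKey` and `baseURL` are assumptions based on the Config options above; check RadarOS's type definitions for the exact shape.

```typescript
// Hypothetical config shape and factory stub, for illustration only.
interface OpenAIConfig {
  apiKey?: string;  // falls back to the OPENAI_API_KEY environment variable
  baseURL?: string; // custom endpoint: Azure OpenAI, proxies, self-hosted
}

function openai(modelId: string, config: OpenAIConfig = {}) {
  return { modelId, config }; // stand-in for the real provider
}

// Example: point the provider at a proxy endpoint.
const model = openai("gpt-4o", {
  baseURL: "https://my-proxy.example/v1",
});
```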
### Per-Request API Key Override

Override the API key for individual requests (e.g., in multi-tenant apps). The `apiKey` in `RunOpts` is passed through to the model's `generate()` and `stream()` calls.
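The pass-through behavior can be sketched as follows. Only the `apiKey` field of `RunOpts` comes from the docs above; the model object and its `generate()` signature are illustrative, not RadarOS's exact API.

```typescript
// Hypothetical RunOpts shape: only apiKey is taken from the docs.
interface RunOpts {
  apiKey?: string; // forwarded to the model's generate()/stream() calls
}

// Stand-in model whose generate() honors a per-request key override.
function makeModel(defaultKey: string) {
  return {
    generate(prompt: string, opts: RunOpts = {}): string {
      const key = opts.apiKey ?? defaultKey; // per-request key wins
      return `key=${key} prompt=${prompt}`;
    },
  };
}

const model = makeModel("tenant-default-key");

// Multi-tenant usage: pass each tenant's key per request.
model.generate("hello", { apiKey: "tenant-42-key" });
```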
## Realtime / Voice
For real-time voice agents, use `openaiRealtime()` to create an OpenAI Realtime provider.

`openaiRealtime()` is a shorthand for `new OpenAIRealtimeProvider()` and accepts the same config.
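The shorthand relationship can be sketched like this. Only the names `openaiRealtime` and `OpenAIRealtimeProvider` come from the docs; the class body and config shape here are assumptions.

```typescript
// Illustrative sketch: openaiRealtime() as sugar over the provider class.
interface RealtimeConfig {
  apiKey?: string; // same config shape as the standard provider (assumed)
}

class OpenAIRealtimeProvider {
  constructor(readonly config: RealtimeConfig = {}) {}
}

function openaiRealtime(config: RealtimeConfig = {}): OpenAIRealtimeProvider {
  return new OpenAIRealtimeProvider(config);
}

const voice = openaiRealtime();
```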
The Realtime provider communicates over WebSockets, so install the `ws` package:

```shell
npm install ws
```
See the Voice Agents docs for full details.