Models
Model Agnosticism
RadarOS is model-agnostic. You can use OpenAI, Anthropic, Google Gemini, Vertex AI, AWS Bedrock, Azure OpenAI, Azure AI Foundry, DeepSeek, Mistral, xAI (Grok), Perplexity, Cohere, Meta (Llama), Vercel v0, Ollama, or any custom provider through a unified `ModelProvider` interface. Switch models with a single-line change, no refactoring required.
Unified Interface
All providers implement the same `generate()` and `stream()` methods. Your agent code stays identical.
Easy Switching
Swap `openai("gpt-4o")` for `anthropic("claude-sonnet-4-20250514")` without touching the rest of your app.
ModelProvider Interface
Every model in RadarOS implements the `ModelProvider` interface:

- Provider name: unique identifier for the provider (e.g., `"openai"`, `"anthropic"`).
- Model ID: the specific model identifier (e.g., `"gpt-4o"`, `"claude-sonnet-4-20250514"`).
- `generate()`: non-streaming completion. Returns the full response with message, usage, and finish reason.
- `stream()`: streaming completion. Yields text deltas, tool call chunks, and finish events.
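The interface above can be sketched as follows. This is an illustrative shape only: the field and type names (`provider`, `modelId`, `GenerateResult`, `StreamChunk`) are assumptions inferred from the descriptions, not RadarOS's exact declarations.

```typescript
// Hypothetical sketch of the ModelProvider shape described above.
interface GenerateResult {
  message: { role: "assistant"; content: string };
  usage: { inputTokens: number; outputTokens: number; totalTokens: number };
  finishReason: "stop" | "length" | "tool_calls";
}

interface StreamChunk {
  type: "text_delta" | "tool_call" | "finish";
  delta?: string;
}

interface ModelProvider {
  provider: string; // e.g. "openai", "anthropic"
  modelId: string;  // e.g. "gpt-4o"
  generate(messages: unknown[], config?: object): Promise<GenerateResult>;
  stream(messages: unknown[], config?: object): AsyncIterable<StreamChunk>;
}

// A trivial in-memory provider: echoes the last user message.
// Useful for unit tests or offline development against the interface.
const echoProvider: ModelProvider = {
  provider: "echo",
  modelId: "echo-1",
  async generate(messages) {
    const last = String((messages.at(-1) as any)?.content ?? "");
    return {
      message: { role: "assistant", content: last },
      usage: { inputTokens: 1, outputTokens: 1, totalTokens: 2 },
      finishReason: "stop",
    };
  },
  async *stream(messages) {
    const { message } = await this.generate(messages);
    yield { type: "text_delta", delta: message.content };
    yield { type: "finish" };
  },
};
```

Because every provider satisfies the same interface, a mock like `echoProvider` can stand in for any real model in tests.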
Factory Functions
Use factory functions to create model instances. Each returns a `ModelProvider` ready for agents, teams, and workflows.
| Factory | Provider | Returns | Config |
|---|---|---|---|
| `openai(modelId, config?)` | OpenAI | `ModelProvider` | `{ apiKey?, baseURL? }` |
| `anthropic(modelId, config?)` | Anthropic | `ModelProvider` | `{ apiKey? }` |
| `google(modelId, config?)` | Google Gemini | `ModelProvider` | `{ apiKey? }` |
| `vertex(modelId, config?)` | Google Vertex AI | `ModelProvider` | `{ project?, location?, credentials? }` |
| `ollama(modelId, config?)` | Ollama (local) | `ModelProvider` | `{ host? }` (default: `http://localhost:11434`) |
| `awsBedrock(modelId, config?)` | AWS Bedrock | `ModelProvider` | `{ accessKeyId?, secretAccessKey?, region? }` |
| `awsClaude(modelId, config?)` | AWS Claude (Bedrock) | `ModelProvider` | `{ awsAccessKey?, awsSecretKey?, awsRegion? }` |
| `azureOpenai(modelId, config?)` | Azure OpenAI | `ModelProvider` | `{ apiKey?, endpoint?, deployment?, apiVersion? }` |
| `azureFoundry(modelId, config?)` | Azure AI Foundry | `ModelProvider` | `{ apiKey?, endpoint? }` |
| `deepseek(modelId, config?)` | DeepSeek | `ModelProvider` | `{ apiKey?, baseURL? }` |
| `mistral(modelId, config?)` | Mistral | `ModelProvider` | `{ apiKey?, baseURL? }` |
| `xai(modelId, config?)` | xAI (Grok) | `ModelProvider` | `{ apiKey?, baseURL? }` |
| `perplexity(modelId, config?)` | Perplexity | `ModelProvider` | `{ apiKey?, baseURL?, search? }` |
| `cohere(modelId, config?)` | Cohere | `ModelProvider` | `{ apiKey? }` |
| `meta(modelId, config?)` | Meta (Llama) | `ModelProvider` | `{ apiKey?, baseURL? }` |
| `vercel(modelId, config?)` | Vercel v0 | `ModelProvider` | `{ apiKey?, baseURL? }` |
| `openaiRealtime(modelId?, config?)` | OpenAI Realtime | `RealtimeProvider` | `{ apiKey?, baseURL? }` |
| `googleLive(modelId?, config?)` | Gemini Live | `RealtimeProvider` | `{ apiKey? }` |
Switching Models in One Line
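A sketch of the one-line swap. The `"radaros"` import path and the `Agent` constructor shape are assumptions for illustration; the factory calls match the table above.

```typescript
// Import path and Agent options are illustrative assumptions.
import { Agent, openai, anthropic } from "radaros";

const agent = new Agent({
  name: "support-bot",
  model: openai("gpt-4o"),
  // Switching providers touches only this line:
  // model: anthropic("claude-sonnet-4-20250514"),
});
```

Because both factories return a `ModelProvider`, nothing else in the agent, team, or workflow needs to change.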
Realtime Providers (Voice)
For voice agents, use the realtime helpers that return a `RealtimeProvider`:
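A minimal sketch, assuming the same `"radaros"` import path as above. Per the factory table, `modelId` is optional for the realtime helpers.

```typescript
// Import path is an assumption; factory signatures follow the table above.
import { openaiRealtime, googleLive } from "radaros";

// modelId is optional; each helper falls back to a provider default.
const voice = openaiRealtime();

// Gemini Live equivalent:
// const voice = googleLive();
```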
ModelConfig Options
Pass these options to `generate()` or `stream()` (or set them on agents):

- Sampling temperature (0–2). Lower = more deterministic. Typical: 0.7.
- Maximum tokens in the completion. Provider-specific limits apply.
- Nucleus sampling. An alternative to temperature for some providers.
- Stop sequences. Generation stops when any of these strings is produced.
- Output format: `"text"` (default), `"json"` (JSON object), or `{ type: "json_schema", schema, name? }` for structured output.
- Per-request API key override. Use when you need to override the key set at construction (e.g., multi-tenant).
Example: ModelConfig Usage
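A hedged sketch of per-call config. The `"radaros"` import path and the option names (`temperature`, `maxTokens`, `stop`, `responseFormat`) are assumptions inferred from the option descriptions above.

```typescript
// Import path and option names are illustrative assumptions.
import { openai } from "radaros";

const model = openai("gpt-4o");

const result = await model.generate(
  [{ role: "user", content: "List three HTTP methods as JSON." }],
  {
    temperature: 0.2,   // near-deterministic output
    maxTokens: 256,     // provider-specific limits still apply
    stop: ["END"],      // halt generation on this sequence
    responseFormat: {   // structured output via JSON Schema
      type: "json_schema",
      name: "http_methods",
      schema: {
        type: "object",
        properties: { methods: { type: "array", items: { type: "string" } } },
        required: ["methods"],
      },
    },
  },
);

console.log(result.message.content);
```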
SDK Implementation
Each provider uses the best available SDK for its API. Some providers support a dual-mode pattern: they try to load the native SDK first for full feature access, and fall back to the `openai` SDK (via the provider's OpenAI-compatible endpoint) if the native SDK is not installed.
| Provider | Native SDK | Fallback | Notes |
|---|---|---|---|
| OpenAI | openai | — | Direct |
| Anthropic | @anthropic-ai/sdk | — | Direct |
| Google / Vertex | @google/genai | — | Direct |
| Ollama | ollama | — | Direct |
| AWS Bedrock | @aws-sdk/client-bedrock-runtime | — | Converse API |
| AWS Claude | @anthropic-ai/bedrock-sdk | — | Anthropic SDK + AWS auth |
| Mistral | @mistralai/mistralai | openai | Dual-mode |
| Cohere | cohere-ai | openai | Dual-mode |
| Perplexity | @perplexity-ai/perplexity_ai | openai | Dual-mode; native unlocks search options & citations |
| xAI, DeepSeek, Meta, Vercel | openai | — | OpenAI-compatible APIs |
| Azure OpenAI, Azure Foundry | openai | — | OpenAI-compatible with Azure auth |
For dual-mode providers, install the native SDK to access provider-specific features (e.g., Perplexity search filtering, Cohere RAG connectors). If you only need basic chat completions, the `openai` fallback works fine.
TokenUsage
Every model response includes a `TokenUsage` object with normalized counts and the raw provider metrics.
The `providerMetrics` field contains the raw usage object from the underlying API (e.g., `usageMetadata` from Gemini, `usage` from OpenAI and Anthropic). This is useful for debugging, auditing, or accessing provider-specific fields that aren't captured in the normalized interface (e.g., `thoughtsTokenCount`, `prompt_tokens_details`, `cache_read_input_tokens`).
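A self-contained sketch of the described shape. Only `providerMetrics` is named in the text; the normalized field names (`inputTokens`, `outputTokens`, `totalTokens`) are assumptions for illustration.

```typescript
// Sketch of the normalized TokenUsage shape; field names other than
// providerMetrics are assumptions based on the description above.
interface TokenUsage {
  inputTokens: number;
  outputTokens: number;
  totalTokens: number;
  providerMetrics?: Record<string, unknown>; // raw usage object from the API
}

// An OpenAI-style raw usage object mapped into the normalized shape.
const raw = {
  prompt_tokens: 12,
  completion_tokens: 34,
  total_tokens: 46,
  prompt_tokens_details: { cached_tokens: 8 },
};

const usage: TokenUsage = {
  inputTokens: raw.prompt_tokens,
  outputTokens: raw.completion_tokens,
  totalTokens: raw.total_tokens,
  providerMetrics: raw, // keep the raw object for auditing/debugging
};

// Provider-specific detail stays reachable through providerMetrics:
const cached = (usage.providerMetrics as typeof raw).prompt_tokens_details.cached_tokens;
```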
Next Steps
- OpenAI: GPT-4o, GPT-4o-mini, GPT-4-turbo, o1-preview.
- Anthropic: Claude Sonnet, Claude Haiku.
- Google Gemini: Gemini 2.5 Flash, Gemini 2.5 Pro. Multi-modal support.
- Vertex AI: enterprise Gemini via Google Cloud. IAM auth, VPC, compliance.
- Ollama: local models (Llama, CodeLlama, Mistral).
- AWS Bedrock: Mistral, Nova, Llama, Cohere via AWS.
- AWS Claude: Claude on Bedrock with AWS auth.
- Azure OpenAI: GPT-4o, o-series on Azure with enterprise compliance.
- Azure AI Foundry: Phi, Llama, Mistral, Cohere on Azure.
- DeepSeek: reasoning and chat models with chain-of-thought.
- Mistral: code generation, vision (Pixtral), reasoning.
- xAI (Grok): Grok models with live web search.
- Perplexity: search-grounded answers with citations.
- Cohere: RAG-optimized Command models, fine-tuning.
- Meta (Llama): Llama 3.3, Llama 4 via the Llama API.
- Vercel v0: web development code generation.
- OpenAI-Compatible: Together, Groq, Fireworks, OpenRouter, NVIDIA, and more.
- Custom Provider: implement your own `ModelProvider` from scratch.