
## ModelProvider Interface

Every model in RadarOS implements this interface. You never implement it yourself; instead, create instances through factory functions such as `openai()`, `anthropic()`, and `google()`.
| Property/Method | Type | Description |
| --- | --- | --- |
| `providerId` | `string` (readonly) | Provider identifier (e.g. `"openai"`, `"anthropic"`, `"google"`) |
| `modelId` | `string` (readonly) | Model identifier (e.g. `"gpt-4o"`, `"claude-sonnet-4-20250514"`) |
| `generate(messages, options?)` | `Promise<ModelResponse>` | Send messages, get a complete response |
| `stream(messages, options?)` | `AsyncGenerator<StreamChunk>` | Send messages, get a streaming response |
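
For illustration, a minimal sketch of both call styles. The exact message, response, and chunk shapes are assumptions here, since the table above does not specify them:

```ts
import { openai } from "@radaros/core";

const model = openai("gpt-4o");

// One-shot: wait for the complete response.
// (The message shape is assumed for illustration.)
const response = await model.generate([
  { role: "user", content: "Summarize RadarOS in one sentence." },
]);
console.log(response);

// Streaming: consume chunks as they arrive.
for await (const chunk of model.stream([
  { role: "user", content: "Summarize RadarOS in one sentence." },
])) {
  process.stdout.write(String(chunk));
}
```

Both methods take the same messages array; the only difference is whether you await a single `ModelResponse` or iterate over `StreamChunk`s.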

## ModelConfig

Options passed to `generate()` and `stream()`. The same shape is also used for per-request overrides on the agent.
| Property | Type | Default | Description |
| --- | --- | --- | --- |
| `temperature` | `number` | Provider default (usually 1.0) | Sampling temperature. 0 = deterministic, 2 = very creative |
| `maxTokens` | `number` | Provider default | Maximum tokens in the response |
| `topP` | `number` | `undefined` | Nucleus sampling; an alternative to temperature |
| `stop` | `string[]` | `undefined` | Stop sequences; the model stops generating when it produces any of these |
| `responseFormat` | `"text" \| "json" \| { type: "json_schema"; schema: object; name?: string }` | `"text"` | `"text"` = plain text, `"json"` = JSON mode, or a JSON schema for structured output |
| `apiKey` | `string` | Provider-level key | Per-request API key override |
| `reasoning` | `ReasoningConfig` | `undefined` | Enable extended thinking |
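
As a sketch of how these options combine in a single request: the schema contents and message shape below are illustrative assumptions, not values prescribed by RadarOS.

```ts
import { anthropic } from "@radaros/core";

const model = anthropic("claude-sonnet-4-20250514");

// A ModelConfig requesting deterministic, structured output.
const config = {
  temperature: 0,    // 0 = deterministic sampling
  maxTokens: 512,
  stop: ["\n\n"],    // stop at the first blank line
  responseFormat: {
    type: "json_schema" as const,
    name: "sentiment",
    // Illustrative schema: one required string field.
    schema: {
      type: "object",
      properties: { label: { type: "string" } },
      required: ["label"],
    },
  },
};

const response = await model.generate(
  [{ role: "user", content: "Classify the sentiment: 'great product!'" }],
  config,
);
```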

## Factory Functions and Provider Configs

Each provider has a factory function that creates a ModelProvider. Here are the config options for every provider.

### openai(modelId, config?)

```ts
import { openai } from "@radaros/core";

const model = openai("gpt-4o", { apiKey: "sk-..." });
```

| Config Property | Type | Default | Description |
| --- | --- | --- | --- |
| `apiKey` | `string` | `OPENAI_API_KEY` env var | OpenAI API key |
| `baseURL` | `string` | `"https://api.openai.com/v1"` | Custom base URL. Use this for OpenAI-compatible APIs (Together, Groq, etc.) |

Common models: `gpt-4o`, `gpt-4o-mini`, `o3`, `o3-mini`, `gpt-4.1`, `gpt-4.1-mini`, `gpt-4.1-nano`

### anthropic(modelId, config?)

```ts
import { anthropic } from "@radaros/core";

const model = anthropic("claude-sonnet-4-20250514", { apiKey: "sk-ant-..." });
```

| Config Property | Type | Default | Description |
| --- | --- | --- | --- |
| `apiKey` | `string` | `ANTHROPIC_API_KEY` env var | Anthropic API key |

Common models: `claude-sonnet-4-20250514`, `claude-haiku-4-5-20251001`, `claude-opus-4-20250514`

### google(modelId, config?)

```ts
import { google } from "@radaros/core";

const model = google("gemini-2.5-flash", { apiKey: "AI..." });
```

| Config Property | Type | Default | Description |
| --- | --- | --- | --- |
| `apiKey` | `string` | `GOOGLE_API_KEY` env var | Google AI Studio API key |

Common models: `gemini-2.5-flash`, `gemini-2.5-pro`, `gemini-2.0-flash`

### vertex(modelId, config?)

```ts
import { vertex } from "@radaros/core";

const model = vertex("gemini-2.5-flash", {
  project: "my-gcp-project",
  location: "us-central1",
});
```

| Config Property | Type | Default | Description |
| --- | --- | --- | --- |
| `project` | `string` | `GOOGLE_CLOUD_PROJECT` env var | GCP project ID |
| `location` | `string` | `"us-central1"` | GCP region |
| `credentials` | `string` | Application Default Credentials | Path to a service account JSON key file, or the JSON string itself |

### ollama(modelId, config?)

```ts
import { ollama } from "@radaros/core";

const model = ollama("llama3.2", { host: "http://localhost:11434" });
```

| Config Property | Type | Default | Description |
| --- | --- | --- | --- |
| `host` | `string` | `"http://localhost:11434"` | Ollama server URL |

Common models: `llama3.2`, `llama3.1`, `codellama`, `mistral`, `phi3`, `gemma2`

### deepseek(modelId, config?)

```ts
import { deepseek } from "@radaros/core";

const model = deepseek("deepseek-chat");
```

| Config Property | Type | Default | Description |
| --- | --- | --- | --- |
| `apiKey` | `string` | `DEEPSEEK_API_KEY` env var | DeepSeek API key |

Common models: `deepseek-chat`, `deepseek-reasoner`

### mistral(modelId, config?)

```ts
import { mistral } from "@radaros/core";

const model = mistral("mistral-large-latest");
```

| Config Property | Type | Default | Description |
| --- | --- | --- | --- |
| `apiKey` | `string` | `MISTRAL_API_KEY` env var | Mistral API key |

Common models: `mistral-large-latest`, `mistral-medium-latest`, `codestral-latest`, `pixtral-large-latest`

### xai(modelId, config?)

```ts
import { xai } from "@radaros/core";

const model = xai("grok-3");
```

| Config Property | Type | Default | Description |
| --- | --- | --- | --- |
| `apiKey` | `string` | `XAI_API_KEY` env var | xAI API key |

Common models: `grok-3`, `grok-3-mini`, `grok-2`

### perplexity(modelId, config?)

```ts
import { perplexity } from "@radaros/core";

const model = perplexity("sonar-pro");
```

| Config Property | Type | Default | Description |
| --- | --- | --- | --- |
| `apiKey` | `string` | `PERPLEXITY_API_KEY` env var | Perplexity API key |

Common models: `sonar-pro`, `sonar`, `sonar-reasoning-pro`

### cohere(modelId, config?)

```ts
import { cohere } from "@radaros/core";

const model = cohere("command-r-plus");
```

| Config Property | Type | Default | Description |
| --- | --- | --- | --- |
| `apiKey` | `string` | `COHERE_API_KEY` env var | Cohere API key |

Common models: `command-r-plus`, `command-r`, `command-light`

### meta(modelId, config?)

```ts
import { meta } from "@radaros/core";

const model = meta("Llama-4-Maverick-17B-128E-Instruct-FP8");
```

| Config Property | Type | Default | Description |
| --- | --- | --- | --- |
| `apiKey` | `string` | `META_API_KEY` env var | Meta Llama API key |

### awsBedrock(modelId, config?)

```ts
import { awsBedrock } from "@radaros/core";

const model = awsBedrock("amazon.nova-pro-v1:0", {
  region: "us-east-1",
});
```

| Config Property | Type | Default | Description |
| --- | --- | --- | --- |
| `region` | `string` | `AWS_REGION` env var | AWS region |
| `accessKeyId` | `string` | AWS credential chain | AWS access key |
| `secretAccessKey` | `string` | AWS credential chain | AWS secret key |
| `sessionToken` | `string` | `undefined` | AWS session token (for temporary credentials) |
| `profile` | `string` | `undefined` | AWS credentials profile name |

### awsClaude(modelId, config?)

```ts
import { awsClaude } from "@radaros/core";

const model = awsClaude("claude-sonnet-4-20250514", {
  region: "us-east-1",
});
```

| Config Property | Type | Default | Description |
| --- | --- | --- | --- |
| `region` | `string` | `AWS_REGION` env var | AWS region |
| `accessKeyId` | `string` | AWS credential chain | AWS access key |
| `secretAccessKey` | `string` | AWS credential chain | AWS secret key |

### azureOpenai(modelId, config?)

```ts
import { azureOpenai } from "@radaros/core";

const model = azureOpenai("gpt-4o", {
  resourceName: "my-resource",
  deploymentName: "my-deployment",
  apiKey: "...",
});
```

| Config Property | Type | Default | Description |
| --- | --- | --- | --- |
| `resourceName` | `string` | Required | Azure OpenAI resource name |
| `deploymentName` | `string` | Required | Deployment name |
| `apiKey` | `string` | `AZURE_OPENAI_API_KEY` env var | Azure API key |
| `apiVersion` | `string` | `"2024-12-01-preview"` | Azure API version |

### azureFoundry(modelId, config?)

```ts
import { azureFoundry } from "@radaros/core";

const model = azureFoundry("Phi-4", {
  endpoint: "https://my-endpoint.inference.ai.azure.com",
  apiKey: "...",
});
```

| Config Property | Type | Default | Description |
| --- | --- | --- | --- |
| `endpoint` | `string` | Required | Azure AI Foundry inference endpoint |
| `apiKey` | `string` | `AZURE_FOUNDRY_API_KEY` env var | Azure API key |

### vercel(modelId, config?)

```ts
import { vercel } from "@radaros/core";

const model = vercel("v0-1.0-md");
```

| Config Property | Type | Default | Description |
| --- | --- | --- | --- |
| `apiKey` | `string` | `VERCEL_API_KEY` env var | Vercel API key |

## OpenAI-Compatible Providers

Any API that follows the OpenAI API format can be used with the openai() factory by setting baseURL:
```ts
// Together AI
const together = openai("meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo", {
  apiKey: process.env.TOGETHER_API_KEY,
  baseURL: "https://api.together.xyz/v1",
});

// Groq
const groq = openai("llama-3.3-70b-versatile", {
  apiKey: process.env.GROQ_API_KEY,
  baseURL: "https://api.groq.com/openai/v1",
});

// Fireworks AI
const fireworks = openai("accounts/fireworks/models/llama-v3p1-70b-instruct", {
  apiKey: process.env.FIREWORKS_API_KEY,
  baseURL: "https://api.fireworks.ai/inference/v1",
});

// OpenRouter
const openrouter = openai("anthropic/claude-sonnet-4-20250514", {
  apiKey: process.env.OPENROUTER_API_KEY,
  baseURL: "https://openrouter.ai/api/v1",
});

// LM Studio (local)
const lmstudio = openai("local-model", {
  baseURL: "http://localhost:1234/v1",
});
```

## Environment Variables

Quick reference for all provider environment variables:
| Provider | Env Variable | Factory |
| --- | --- | --- |
| OpenAI | `OPENAI_API_KEY` | `openai()` |
| Anthropic | `ANTHROPIC_API_KEY` | `anthropic()` |
| Google | `GOOGLE_API_KEY` | `google()` |
| Vertex AI | `GOOGLE_CLOUD_PROJECT` | `vertex()` |
| Ollama | — (no key needed) | `ollama()` |
| DeepSeek | `DEEPSEEK_API_KEY` | `deepseek()` |
| Mistral | `MISTRAL_API_KEY` | `mistral()` |
| xAI | `XAI_API_KEY` | `xai()` |
| Perplexity | `PERPLEXITY_API_KEY` | `perplexity()` |
| Cohere | `COHERE_API_KEY` | `cohere()` |
| Meta | `META_API_KEY` | `meta()` |
| AWS | `AWS_REGION`, `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` | `awsBedrock()`, `awsClaude()` |
| Azure OpenAI | `AZURE_OPENAI_API_KEY` | `azureOpenai()` |
| Azure Foundry | `AZURE_FOUNDRY_API_KEY` | `azureFoundry()` |
| Vercel | `VERCEL_API_KEY` | `vercel()` |
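
Because every factory falls back to its env var, you can select a provider at startup based on which credentials are present. The fallback order below is an arbitrary illustration, not something RadarOS prescribes:

```ts
import { openai, anthropic, ollama } from "@radaros/core";

// Pick a model based on available credentials.
// The preference order here is an illustrative choice.
function pickModel() {
  if (process.env.OPENAI_API_KEY) return openai("gpt-4o-mini");
  if (process.env.ANTHROPIC_API_KEY) return anthropic("claude-haiku-4-5-20251001");
  // Ollama needs no key; this assumes a local server at the default host.
  return ollama("llama3.2");
}

const model = pickModel();
console.log(model.providerId, model.modelId);
```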