# Azure AI Foundry
Use open-source and partner models hosted on Azure AI Foundry (formerly Azure AI Model Catalog): access Phi, Llama, Mistral, Cohere, and more through Azure's infrastructure with enterprise compliance. Azure AI Foundry exposes an OpenAI-compatible API, so RadarOS connects to it using the standard `openai` SDK pointed at your Azure endpoint.
## Setup
- Install
- Environment
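Concretely, the two steps above might look like the following; the npm package name `radaros` is an assumption, and the host and region in the endpoint are placeholders:

```shell
# Install RadarOS plus the standard openai SDK (package names assumed)
npm install radaros openai

# Environment: credentials for your Azure AI Foundry deployment
export AZURE_API_KEY="your-azure-api-key"
export AZURE_ENDPOINT="https://your-host.eastus.models.ai.azure.com"
```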
Uses the standard OpenAI SDK:
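A minimal sketch of that connection, assuming the endpoint format shown in the Config section; the `/v1` path suffix is an assumption and may differ for your deployment:

```typescript
import OpenAI from "openai";

// RadarOS points the standard openai SDK at your Azure AI Foundry endpoint.
const client = new OpenAI({
  apiKey: process.env.AZURE_API_KEY,           // Azure key, not an OpenAI key
  baseURL: `${process.env.AZURE_ENDPOINT}/v1`, // "/v1" suffix is an assumption
});
```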
## Factory
Parameters:
- The model name as deployed on Azure AI Foundry (e.g., "Phi-4", "Meta-Llama-3.1-70B-Instruct").
- Optional configuration. See Config below.
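RadarOS's actual factory signature was lost from this page; the sketch below shows the shape implied above, with hypothetical config field names (`apiKey`, `endpoint`, `apiVersion`) and the env-var fallbacks described in the Config section:

```typescript
// Sketch only: the config field names are assumptions inferred from the
// Config section, not confirmed RadarOS API.
interface AzureFoundryConfig {
  apiKey?: string;
  endpoint?: string;
  apiVersion?: string;
}

function azureFoundry(model: string, config: AzureFoundryConfig = {}) {
  // Explicit config wins; environment variables are the documented fallback.
  return {
    model,
    apiKey: config.apiKey ?? process.env.AZURE_API_KEY,
    endpoint: config.endpoint ?? process.env.AZURE_ENDPOINT,
    apiVersion: config.apiVersion ?? process.env.AZURE_API_VERSION,
  };
}
```

Usage would then look like `azureFoundry("Phi-4")` or `azureFoundry("Meta-Llama-3.1-70B-Instruct", { apiVersion: "..." })`.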
## Supported Models
### Microsoft Models
| Model | Description |
|---|---|
| Phi-4 | Latest Phi model. Strong reasoning for its size. |
| Phi-3.5-mini-instruct | Fast, compact, good for edge scenarios. |
| Phi-3.5-vision-instruct | Vision-capable Phi model. |
### Meta Models
| Model | Description |
|---|---|
| Meta-Llama-3.1-405B-Instruct | Largest open model. Exceptional capability. |
| Meta-Llama-3.1-70B-Instruct | Strong balance of size and performance. |
| Meta-Llama-3.1-8B-Instruct | Fast, efficient, good for most tasks. |
### Mistral Models
| Model | Description |
|---|---|
| Mistral-large | High capability, strong reasoning. |
| Mistral-small | Fast, cost-effective. |
| Mistral-Nemo | Latest Nemo model. |
### Cohere Models
| Model | Description |
|---|---|
| Cohere-command-r-plus | Strong RAG and tool use. |
| Cohere-command-r | Fast, efficient. |
## Discovering more models
Browse the full model catalog at Azure AI Foundry. Available models depend on your Azure region and subscription.
## Config
- Azure API key. Falls back to the `AZURE_API_KEY` env var.
- Azure AI Foundry endpoint URL. Falls back to the `AZURE_ENDPOINT` env var. Format: `https://<host>.<region>.models.ai.azure.com`
- API version. Falls back to the `AZURE_API_VERSION` env var.
## Authentication
### Environment Variables (Recommended)
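For example; the host, region, and API-version value below are placeholders, not confirmed defaults:

```shell
export AZURE_API_KEY="your-azure-api-key"
export AZURE_ENDPOINT="https://your-host.eastus.models.ai.azure.com"
export AZURE_API_VERSION="2024-05-01-preview"   # optional; value is an example
```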
### Explicit Config
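Config can also be passed inline. The field names here (`apiKey`, `endpoint`, `apiVersion`) are assumptions inferred from the Config section, not confirmed RadarOS API, and the values are placeholders:

```typescript
// Hypothetical config field names; check RadarOS's reference for the real ones.
const config = {
  apiKey: "your-azure-api-key",
  endpoint: "https://your-host.eastus.models.ai.azure.com",
  apiVersion: "2024-05-01-preview", // optional
};

// Passed as the second argument, e.g. azureFoundry("Phi-4", config).
```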
## Tool Calling
Tool calling is supported by models that implement function calling (Mistral Large, Llama 3.1, and Cohere Command R/R+):
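An illustrative sketch using the `openai` SDK's standard function-calling format against an Azure AI Foundry endpoint; the `getWeather` tool and the `/v1` path suffix are assumptions:

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.AZURE_API_KEY,
  baseURL: `${process.env.AZURE_ENDPOINT}/v1`, // "/v1" suffix is an assumption
});

// OpenAI-compatible function calling; getWeather is a hypothetical tool.
const response = await client.chat.completions.create({
  model: "Mistral-large",
  messages: [{ role: "user", content: "What's the weather in Oslo?" }],
  tools: [
    {
      type: "function",
      function: {
        name: "getWeather",
        description: "Get current weather for a city",
        parameters: {
          type: "object",
          properties: { city: { type: "string" } },
          required: ["city"],
        },
      },
    },
  ],
});

// The model responds with tool_calls for your code to execute.
console.log(response.choices[0].message.tool_calls);
```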
## Vision Models
Some Azure AI Foundry models support image input (e.g., `Phi-3.5-vision-instruct`):
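A sketch using the OpenAI-compatible image content-part format; the image URL and the endpoint's `/v1` suffix are illustrative assumptions:

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.AZURE_API_KEY,
  baseURL: `${process.env.AZURE_ENDPOINT}/v1`, // "/v1" suffix is an assumption
});

// Image input via the OpenAI-compatible content-part format.
const response = await client.chat.completions.create({
  model: "Phi-3.5-vision-instruct",
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "Describe this chart." },
        { type: "image_url", image_url: { url: "https://example.com/chart.png" } },
      ],
    },
  ],
});

console.log(response.choices[0].message.content);
```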
## Full Example
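The RadarOS-specific example was lost from this page; the sketch below goes straight through the standard `openai` SDK instead, assuming the endpoint format from the Config section (the `/v1` path suffix is also an assumption):

```typescript
import OpenAI from "openai";

// Credentials come from the environment (see Environment Variables below).
const client = new OpenAI({
  apiKey: process.env.AZURE_API_KEY,
  baseURL: `${process.env.AZURE_ENDPOINT}/v1`, // "/v1" suffix is an assumption
});

const completion = await client.chat.completions.create({
  model: "Meta-Llama-3.1-70B-Instruct",
  messages: [
    { role: "system", content: "You are a concise technical assistant." },
    { role: "user", content: "Explain what Azure AI Foundry is in one sentence." },
  ],
});

console.log(completion.choices[0].message.content);
```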
## Azure AI Foundry vs Other Providers
| Feature | `openai()` | `azureFoundry()` | `ollama()` |
|---|---|---|---|
| Models | OpenAI only | Phi, Llama, Mistral, Cohere | Any local model |
| Auth | API key | Azure API key | None |
| Hosting | OpenAI cloud | Azure cloud | Local machine |
| Cost | Per-token | Per-token (Azure pricing) | Free (your hardware) |
| Privacy | OpenAI terms | Azure data residency | Fully local |
| Compliance | SOC2 | SOC2, HIPAA, GDPR | N/A |
| GPU required | No | No | Yes (recommended) |
Use `azureFoundry()` when you want open-source model quality with Azure enterprise compliance, without managing GPU infrastructure.
## Environment Variables
| Variable | Description |
|---|---|
| `AZURE_API_KEY` | Azure AI Foundry API key |
| `AZURE_ENDPOINT` | Azure AI Foundry endpoint URL |
| `AZURE_API_VERSION` | API version (optional) |
## Cross-References
- Azure OpenAI — OpenAI GPT/o-series models on Azure
- Ollama — Run the same open-source models locally
- AWS Bedrock — Similar model catalog on AWS
- Custom Provider — Build your own provider