Azure AI Foundry

Use open-source and partner models hosted on Azure AI Foundry (formerly Azure AI Model Catalog) — access Phi, Llama, Mistral, Cohere, and more through Azure’s infrastructure with enterprise compliance. Azure AI Foundry exposes an OpenAI-compatible API, so RadarOS connects to it using the standard openai SDK pointed at your Azure endpoint.

Setup

Uses the standard OpenAI SDK:
npm install openai

Factory

import { azureFoundry } from "@radaros/core";

const model = azureFoundry("Phi-4");
modelId (string, required)
The model name as deployed on Azure AI Foundry (e.g., "Phi-4", "Meta-Llama-3.1-70B-Instruct").

config (AzureFoundryConfig, optional)
Optional configuration. See Config below.

Supported Models

Microsoft Models

| Model | Description |
| --- | --- |
| Phi-4 | Latest Phi model. Strong reasoning for its size. |
| Phi-3.5-mini-instruct | Fast, compact, good for edge scenarios. |
| Phi-3.5-vision-instruct | Vision-capable Phi model. |

Meta Models

| Model | Description |
| --- | --- |
| Meta-Llama-3.1-405B-Instruct | Largest open model. Exceptional capability. |
| Meta-Llama-3.1-70B-Instruct | Strong balance of size and performance. |
| Meta-Llama-3.1-8B-Instruct | Fast, efficient, good for most tasks. |

Mistral Models

| Model | Description |
| --- | --- |
| Mistral-large | High capability, strong reasoning. |
| Mistral-small | Fast, cost-effective. |
| Mistral-Nemo | Latest Nemo model. |

Cohere Models

| Model | Description |
| --- | --- |
| Cohere-command-r-plus | Strong RAG and tool use. |
| Cohere-command-r | Fast, efficient. |

Browse the full model catalog at Azure AI Foundry. Available models depend on your Azure region and subscription.

Config

apiKey (string, optional)
Azure API key. Falls back to the AZURE_API_KEY env var.

endpoint (string, required)
Azure AI Foundry endpoint URL. Falls back to the AZURE_ENDPOINT env var. Format: https://<host>.<region>.models.ai.azure.com

apiVersion (string, optional)
API version. Falls back to the AZURE_API_VERSION env var.
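
The endpoint format above can be assembled programmatically. A minimal sketch — `foundryEndpoint` is a hypothetical helper, not part of @radaros/core:

```typescript
// Hypothetical helper (not part of @radaros/core): builds an endpoint URL
// in the https://<host>.<region>.models.ai.azure.com format described above.
function foundryEndpoint(host: string, region: string): string {
  return `https://${host}.${region}.models.ai.azure.com`;
}

const endpoint = foundryEndpoint("my-host", "eastus");
// endpoint === "https://my-host.eastus.models.ai.azure.com"
```

This keeps the host/region pair in one place if you deploy the same models to several regions.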

Authentication

export AZURE_API_KEY="..."
export AZURE_ENDPOINT="https://my-host.eastus.models.ai.azure.com"

const model = azureFoundry("Phi-4");

Explicit Config

const model = azureFoundry("Phi-4", {
  apiKey: "...",
  endpoint: "https://my-host.eastus.models.ai.azure.com",
});

Tool Calling

Tool calling is supported by models that implement function calling (Mistral Large, Llama 3.1, Cohere Command R/R+):

import { Agent, azureFoundry, defineTool } from "@radaros/core";
import { z } from "zod";

const agent = new Agent({
  name: "foundry-assistant",
  model: azureFoundry("Mistral-large"),
  instructions: "You are a helpful assistant with tool access.",
  tools: [
    defineTool({
      name: "getStockPrice",
      description: "Get current stock price",
      parameters: z.object({ symbol: z.string() }),
      execute: async ({ symbol }) => `${symbol}: $185.42 (+2.3%)`,
    }),
  ],
});

const result = await agent.run("What's the current price of AAPL?");
console.log(result.text);
Not all Azure AI Foundry models support tool calling. Phi models and some smaller models may not support function calling. Test with your specific model.
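
One way to avoid passing tools to a model that ignores them is a simple capability check before constructing the agent. A sketch based only on the support list stated on this page — `supportsTools` is a hypothetical helper, not a RadarOS API, and the set should be verified against your deployment:

```typescript
// Hypothetical allowlist of models this page identifies as supporting
// function calling; extend it after testing your own deployments.
const TOOL_CAPABLE = new Set([
  "Mistral-large",
  "Meta-Llama-3.1-405B-Instruct",
  "Meta-Llama-3.1-70B-Instruct",
  "Meta-Llama-3.1-8B-Instruct",
  "Cohere-command-r-plus",
  "Cohere-command-r",
]);

function supportsTools(modelId: string): boolean {
  return TOOL_CAPABLE.has(modelId);
}

// supportsTools("Mistral-large") → true
// supportsTools("Phi-4")         → false
```

You can then pass `tools: supportsTools(modelId) ? tools : []` when building the agent.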

Vision Models

Some Azure AI Foundry models support image input (e.g., Phi-3.5-vision-instruct):

import { Agent, azureFoundry, type ContentPart } from "@radaros/core";
import { readFileSync } from "node:fs";

const agent = new Agent({
  name: "vision-agent",
  model: azureFoundry("Phi-3.5-vision-instruct"),
  instructions: "Describe images in detail.",
});

const imageData = readFileSync("photo.jpg").toString("base64");
const result = await agent.run([
  { type: "text", text: "What's in this image?" },
  { type: "image", data: imageData, mimeType: "image/jpeg" },
] as ContentPart[]);
console.log(result.text);
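
A small helper can keep the base64 conversion out of call sites. A sketch matching the content-part shape used above — `imagePart` is hypothetical, not part of @radaros/core:

```typescript
// Hypothetical helper: wraps raw image bytes in the content-part shape
// used above ({ type: "image", data: <base64>, mimeType }).
function imagePart(bytes: Buffer, mimeType: string) {
  return { type: "image" as const, data: bytes.toString("base64"), mimeType };
}

const part = imagePart(Buffer.from("example"), "image/jpeg");
// part.data holds the base64 encoding of the bytes
```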

Full Example

import { Agent, azureFoundry, CostTracker, defineTool } from "@radaros/core";
import { z } from "zod";

const costTracker = new CostTracker({
  pricing: {
    "Meta-Llama-3.1-70B-Instruct": { promptPer1k: 0.00268, completionPer1k: 0.00354 },
    "Mistral-large": { promptPer1k: 0.004, completionPer1k: 0.012 },
  },
});

const agent = new Agent({
  name: "research-agent",
  model: azureFoundry("Meta-Llama-3.1-70B-Instruct", {
    endpoint: "https://my-host.eastus.models.ai.azure.com",
  }),
  instructions: "You are a research assistant. Use tools to find information.",
  tools: [
    defineTool({
      name: "search",
      description: "Search for information",
      parameters: z.object({ query: z.string() }),
      execute: async ({ query }) => `Results for "${query}": ...`,
    }),
  ],
  costTracker,
  maxToolRoundtrips: 3,
});

const result = await agent.run("Research the latest trends in AI agent frameworks");
console.log(result.text);
console.log(`Cost: $${costTracker.getSummary().totalCost.toFixed(4)}`);
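
The pricing map above implies straightforward per-1k-token arithmetic. A sketch of how a single request's cost comes out — `requestCost` is illustrative, not a RadarOS API, and it assumes CostTracker multiplies token counts by the per-1k rates:

```typescript
// Illustrative cost arithmetic: token counts times the per-1k rates
// from a CostTracker-style pricing entry.
function requestCost(
  promptTokens: number,
  completionTokens: number,
  rates: { promptPer1k: number; completionPer1k: number },
): number {
  return (
    (promptTokens / 1000) * rates.promptPer1k +
    (completionTokens / 1000) * rates.completionPer1k
  );
}

// e.g. 2,000 prompt + 500 completion tokens at Mistral-large rates:
const cost = requestCost(2000, 500, { promptPer1k: 0.004, completionPer1k: 0.012 });
// cost ≈ 0.014
```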

Azure AI Foundry vs Other Providers

| Feature | openai() | azureFoundry() | ollama() |
| --- | --- | --- | --- |
| Models | OpenAI only | Phi, Llama, Mistral, Cohere | Any local model |
| Auth | API key | Azure API key | None |
| Hosting | OpenAI cloud | Azure cloud | Local machine |
| Cost | Per-token | Per-token (Azure pricing) | Free (your hardware) |
| Privacy | OpenAI terms | Azure data residency | Fully local |
| Compliance | SOC2 | SOC2, HIPAA, GDPR | N/A |
| GPU required | No | No | Yes (recommended) |
Use azureFoundry() when you want open-source model quality with Azure enterprise compliance — without managing GPU infrastructure.

Environment Variables

| Variable | Description |
| --- | --- |
| AZURE_API_KEY | Azure AI Foundry API key |
| AZURE_ENDPOINT | Azure AI Foundry endpoint URL |
| AZURE_API_VERSION | API version (optional) |

Cross-References