
Ollama (Local Models)

Use Ollama to run open-source models locally with RadarOS. No API key is required, which makes it ideal for development, privacy-sensitive workloads, and cost-free experimentation.

Setup

1. Install Ollama

Download and install Ollama from ollama.ai, then start the Ollama service (it listens on http://localhost:11434 by default).
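Before wiring Ollama into RadarOS, you can confirm the service is reachable by probing its root endpoint. This is a quick sketch assuming a default local install on port 11434:

```shell
# Probe the default Ollama address; report whether the service responds.
if curl -fsS http://localhost:11434 >/dev/null 2>&1; then
  echo "ollama: up"
else
  echo "ollama: down"
fi
```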
2. Pull a model

Pull the model you want to use:

ollama pull llama3.1
ollama pull codellama
ollama pull mistral
3. Use in RadarOS

Pass the model name to the ollama factory:

import { ollama } from "@radaros/core";

const model = ollama("llama3.1");

Factory

import { ollama } from "@radaros/core";

const model = ollama("llama3.1");
modelId (string, required)
The Ollama model name (e.g., llama3.1, codellama, mistral).

config (object, optional)
Optional configuration. See Config below.

Config

host (string, default: "http://localhost:11434")
Ollama server URL. Use this for remote Ollama instances or custom ports.

Ollama runs locally and does not require an API key; just ensure the Ollama service is running.
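A common pattern is to resolve the host from an environment variable so the same code works against a local or a remote server. The helper below is a sketch: the OLLAMA_HOST variable name is an assumed convention, not something RadarOS reads on its own.

```typescript
// Fall back to Ollama's default local address when no host is configured.
const DEFAULT_OLLAMA_HOST = "http://localhost:11434";

// Hypothetical helper: pick the host from the environment if one is set.
function resolveOllamaHost(env: Record<string, string | undefined>): string {
  return env.OLLAMA_HOST ?? DEFAULT_OLLAMA_HOST;
}
```

You would then pass the resolved host to the factory, e.g. `ollama("llama3.1", { host: resolveOllamaHost(process.env) })`.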

Example

const model = ollama("llama3.1", {
  host: "http://localhost:11434",
});

// Remote Ollama instance
const remoteModel = ollama("mistral", {
  host: "http://192.168.1.100:11434",
});

Model       Use Case
llama3.1    General purpose, strong all-around performance
codellama   Code generation and understanding
mistral     Fast, efficient, good for chat
mixtral     Mixture of experts, higher capability
phi3        Small, fast, good for edge devices
Run ollama list to see installed models. Browse ollama.com/library for the full catalog.
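If one agent serves several kinds of requests, the table above can be encoded as a small lookup so the model is chosen per task. The task names and the mapping here are illustrative, not part of RadarOS:

```typescript
// Illustrative task categories drawn from the model table above.
type Task = "general" | "code" | "chat" | "edge";

// Map each task to a model from the table (assumed choices, adjust to taste).
const MODEL_FOR_TASK: Record<Task, string> = {
  general: "llama3.1", // strong all-around performance
  code: "codellama",   // code generation and understanding
  chat: "mistral",     // fast and efficient for chat
  edge: "phi3",        // small enough for edge devices
};

function modelForTask(task: Task): string {
  return MODEL_FOR_TASK[task];
}
```

Combined with the factory, this lets you write `ollama(modelForTask("code"))` instead of hard-coding a model name.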

Full Example

import { Agent, ollama } from "@radaros/core";

const agent = new Agent({
  name: "Local Assistant",
  model: ollama("llama3.1"),
  instructions: "You are a helpful assistant running locally.",
});

const output = await agent.run("Explain recursion in one sentence.");
console.log(output.text);