Ollama Edge Helpers
Utilities for running Ollama on resource-constrained edge devices — ensure it’s running, pull models, and get hardware-appropriate model recommendations.

Quick Start
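As a quick-start sketch of the intended flow — since the package’s import path isn’t shown in this README, the liveness check is inlined here against Ollama’s REST API (`GET /api/tags` on the default port 11434); the Node 18+ global `fetch` and the `node:os` RAM probe are assumptions:

```typescript
import os from "node:os";

// Read-only liveness check against Ollama's /api/tags endpoint
// (port 11434 is Ollama's default).
async function checkOllama(baseUrl = "http://127.0.0.1:11434"): Promise<boolean> {
  try {
    const res = await fetch(`${baseUrl}/api/tags`, { signal: AbortSignal.timeout(1000) });
    return res.ok;
  } catch {
    return false; // connection refused, timeout, etc.
  }
}

async function main() {
  // How much RAM does this device have? Feed this into recommendModel.
  const ramMb = Math.floor(os.totalmem() / 1024 / 1024);
  console.log(`Total RAM: ${ramMb} MB`);
  if (await checkOllama()) {
    console.log("Ollama is running");
  } else {
    console.log("Ollama is not running; start it with `ollama serve`");
  }
}
main();
```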
Functions
ensureOllama(baseUrl?)
Check if Ollama is running. If not, spawn ollama serve and wait up to 10 seconds for it to become available.
checkOllama(baseUrl?)
Read-only check — does not attempt to start Ollama.
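A minimal sketch of this pair, assuming `/api/tags` on Ollama’s default port 11434 as the liveness probe; the 500 ms poll interval and the explicit `timeoutMs` parameter are illustrative choices, not the library’s actual signature:

```typescript
import { spawn } from "node:child_process";

const DEFAULT_URL = "http://127.0.0.1:11434"; // Ollama's default port

// Read-only check: true if the Ollama HTTP API answers on /api/tags.
async function checkOllama(baseUrl = DEFAULT_URL): Promise<boolean> {
  try {
    const res = await fetch(`${baseUrl}/api/tags`, { signal: AbortSignal.timeout(1000) });
    return res.ok;
  } catch {
    return false;
  }
}

// If the check fails, spawn `ollama serve` detached and poll until it
// answers or the timeout elapses (10 s per the description above).
async function ensureOllama(baseUrl = DEFAULT_URL, timeoutMs = 10_000): Promise<boolean> {
  if (await checkOllama(baseUrl)) return true;
  const child = spawn("ollama", ["serve"], { detached: true, stdio: "ignore" });
  child.on("error", () => {}); // e.g. ollama binary not on PATH
  child.unref();
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await checkOllama(baseUrl)) return true;
    await new Promise((r) => setTimeout(r, 500)); // poll every 500 ms
  }
  return false;
}
```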
pullModel(model, opts?)
Pull a model from the Ollama registry if not cached locally.
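A sketch of this behavior against Ollama’s REST API — `GET /api/tags` to see what is cached, then `POST /api/pull` with `stream: false` so the server answers once when the pull completes. The `opts` shape and boolean return value are illustrative, not the library’s actual signature:

```typescript
interface PullOpts {
  baseUrl?: string; // illustrative option; assumed, not from the README
}

async function pullModel(model: string, opts: PullOpts = {}): Promise<boolean> {
  const baseUrl = opts.baseUrl ?? "http://127.0.0.1:11434";
  // Skip the download if the model is already cached locally.
  const tags = await fetch(`${baseUrl}/api/tags`);
  const { models } = (await tags.json()) as { models: { name: string }[] };
  if (models.some((m) => m.name === model)) return true;
  // stream:false makes Ollama reply with a single JSON object after the pull.
  const res = await fetch(`${baseUrl}/api/pull`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name: model, stream: false }),
  });
  if (!res.ok) return false;
  const { status } = (await res.json()) as { status?: string };
  return status === "success";
}
```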
recommendModel(ramMb)
Pick the largest model that fits comfortably in available RAM, reserving 30% headroom for the OS and the Node.js process.
| RAM | Recommended Model | Label |
|---|---|---|
| 1 GB | tinyllama:1.1b | fast |
| 2 GB | tinyllama:1.1b | fast |
| 3 GB | llama3.2:1b | balanced |
| 6 GB | phi3:mini | capable |
| 8+ GB | mistral:7b | powerful |
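The selection logic can be sketched as a pure function over a tier table. The per-model minimum-RAM figures below are assumptions chosen to reproduce the table above (the README doesn’t list exact thresholds); only the 30% headroom rule comes from the description:

```typescript
interface ModelTier {
  model: string;
  label: string;
  minMb: number; // assumed minimum usable RAM per model; not from the README
}

// Largest model first; thresholds are illustrative guesses that reproduce
// the table above under the stated 30% headroom rule.
const TIERS: ModelTier[] = [
  { model: "mistral:7b", label: "powerful", minMb: 5000 },
  { model: "phi3:mini", label: "capable", minMb: 3800 },
  { model: "llama3.2:1b", label: "balanced", minMb: 1500 },
  { model: "tinyllama:1.1b", label: "fast", minMb: 700 },
];

function recommendModel(ramMb: number): ModelTier {
  // Reserve 30% headroom for the OS and the Node.js process itself.
  const usableMb = ramMb * 0.7;
  return TIERS.find((t) => t.minMb <= usableMb) ?? TIERS[TIERS.length - 1];
}
```

Under this sketch, listModelTiers would simply return the TIERS array.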
hasModel(model, baseUrl?)
Check if a specific model is cached locally.
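A sketch of this check against Ollama’s `/api/tags` endpoint, which lists locally cached models as `{ models: [{ name: "tinyllama:1.1b", ... }] }`; the `:latest` fallback for bare names is an illustrative convenience, not necessarily the library’s behavior:

```typescript
async function hasModel(model: string, baseUrl = "http://127.0.0.1:11434"): Promise<boolean> {
  const res = await fetch(`${baseUrl}/api/tags`);
  if (!res.ok) return false;
  const { models } = (await res.json()) as { models: { name: string }[] };
  // Treat a bare name as matching its default tag, e.g. "phi3" ~ "phi3:latest".
  return models.some((m) => m.name === model || m.name === `${model}:latest`);
}
```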
listModelTiers()
Get all known model tiers with their RAM requirements.
Installing Ollama on Raspberry Pi
On Pi models with 2 GB of RAM or less, tinyllama:1.1b is the only viable option. A Pi 5 with 8 GB can comfortably run phi3:mini (3.8B parameters).
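The install itself uses Ollama’s standard Linux install script, whose arm64 builds cover 64-bit Raspberry Pi OS (a 32-bit OS won’t work):

```shell
# Install Ollama via the official Linux install script, then start the
# server and pull a model sized for the device.
curl -fsSL https://ollama.com/install.sh | sh
ollama serve &
ollama pull tinyllama:1.1b
```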