Getting Started — Your First AI Coding Agent
Go from zero to a running AI agent that fixes real bugs in under 10 minutes.
What is Harness?
Harness is an open-source, MIT-licensed AI coding agent. It connects to 5 AI providers and 50+ models (Anthropic, OpenAI, Google, Ollama, and more) through a single, unified interface.
Unlike single-provider tools, Harness lets you switch models with a single flag, compare costs across providers, and run entirely local with Ollama. Enterprise features like permission modes, audit logging, and custom approval callbacks are built in.
Claude Code works only with Anthropic. Cursor is locked to its proprietary backend. Aider supports multiple providers but lacks a streaming SDK and an enterprise permission system. Harness puts 5 providers and 50+ models behind one CLI and one Python API.
Source code and issues on GitHub → AgentBoardTT/openharness
Installation
Install Harness using uv (recommended) for isolated, reproducible installs.
```shell
# Install with uv (recommended)
uv tool install harness-agent

# Verify installation
harness --version
harness 0.2.0
```

`uv tool install` installs Harness into an isolated environment and adds the `harness` binary to your PATH. No virtualenv management required.
Connect Your Provider
Harness supports four major providers. Pick the setup that matches yours.

Anthropic

```shell
harness connect --provider anthropic --api-key sk-ant-...
```

Or set the environment variable directly:

```shell
export ANTHROPIC_API_KEY="sk-ant-..."
```

OpenAI

```shell
harness connect --provider openai --api-key sk-...
```

Or set the environment variable directly:

```shell
export OPENAI_API_KEY="sk-..."
```

Google

```shell
harness connect --provider google --api-key YOUR_KEY
```

Or set the environment variable directly:

```shell
export GOOGLE_API_KEY="..."
```

Ollama (local)

```shell
# Install Ollama first
curl -fsSL https://ollama.ai/install.sh | sh
ollama pull llama3.1

# No API key needed!
harness -p ollama "Hello world"
```

Ollama runs models locally. Your code never leaves your machine and there are no usage costs.
Fix a Buggy Calculator
Let's give the agent a real task. Create the file below — it has three intentional bugs. Can you spot them before the agent does?
```python
# calculator.py — spot the bugs!

def add(a: float, b: float) -> float:
    return a + b

def subtract(a: float, b: float) -> float:
    return a + b  # Bug: should be a - b

def multiply(a: float, b: float) -> float:
    return a * b

def divide(a: float, b: float) -> float:
    return a / b  # Bug: no zero-division check

def percentage(value: float, total: float) -> float:
    return value / total * 100  # Bug: no zero check on total
```
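Before handing the file to the agent, you can confirm the bugs yourself. A quick sanity check, with the two broken functions from the buggy file inlined so it runs standalone:

```python
# The buggy subtract and divide from calculator.py, inlined for a self-contained run.
def subtract(a: float, b: float) -> float:
    return a + b  # Bug: adds instead of subtracting

def divide(a: float, b: float) -> float:
    return a / b  # Bug: no zero-division check

print(subtract(5, 3))  # prints 8, not the expected 2
try:
    divide(1, 0)
except ZeroDivisionError:
    print("divide(1, 0) crashes with an unhandled ZeroDivisionError")
```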
Now run the agent:
```shell
harness "Fix all bugs in calculator.py and add proper error handling"
```
The agent reads the file, identifies all three bugs, applies fixes, and reports what it changed. It typically completes in 2–4 tool calls.
The agent produces the fixed functions:
```python
def subtract(a: float, b: float) -> float:
    return a - b

def divide(a: float, b: float) -> float:
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b

def percentage(value: float, total: float) -> float:
    if total == 0:
        raise ValueError("Total cannot be zero")
    return value / total * 100
```
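You can verify the fixes yourself with a few assertions. The fixed functions from above are inlined here so the check runs standalone:

```python
# The agent's fixed functions, inlined for a self-contained check.
def subtract(a: float, b: float) -> float:
    return a - b

def divide(a: float, b: float) -> float:
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b

def percentage(value: float, total: float) -> float:
    if total == 0:
        raise ValueError("Total cannot be zero")
    return value / total * 100

assert subtract(5, 3) == 2
assert divide(10, 4) == 2.5
assert percentage(25, 200) == 12.5
# Zero inputs now raise a clear ValueError instead of crashing.
for fn, args in [(divide, (1, 0)), (percentage, (50, 0))]:
    try:
        fn(*args)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")
print("all checks passed")
```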
Under the hood, the agent executed these tool calls in sequence:
- Read — read `calculator.py` to understand the full file
- Analysis — reasoned about each function's correctness
- Edit — applied precise edits for each of the three bugs
- Read — re-read the file to verify the changes
The agent never blindly rewrites files — it uses targeted edits to minimize diff noise and preserve your existing code style.
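To see why targeted edits matter, compare the diff for the one-line subtract fix. This is a conceptual illustration using Python's standard `difflib`, not Harness internals: a targeted edit touches exactly one line, so reviewing the change stays trivial.

```python
import difflib

# Before/after versions of subtract from the walkthrough above.
before = """def subtract(a: float, b: float) -> float:
    return a + b
""".splitlines(keepends=True)
after = """def subtract(a: float, b: float) -> float:
    return a - b
""".splitlines(keepends=True)

# A targeted edit produces a minimal diff: one removed line, one added line.
diff = list(difflib.unified_diff(before, after, "calculator.py", "calculator.py"))
print("".join(diff))
```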
Switch Providers
Run the exact same task with any provider using the -p flag. No code changes required.
```shell
# Same task, different providers
harness -p anthropic "Fix calculator.py"
harness -p openai "Fix calculator.py"
harness -p google "Fix calculator.py"
```
With Harness, switch providers with a single -p flag — no config changes, no re-authentication, no workflow disruption.
Cost Comparison
The calculator fix uses roughly 2,000 tokens. Here's what that costs across providers:
| Provider | Model | Input $/1M | Output $/1M | This Task (~2K tokens) |
|---|---|---|---|---|
| Anthropic | Claude Sonnet 4 | $3.00 | $15.00 | ~$0.03 |
| OpenAI | GPT-4o | $2.50 | $10.00 | ~$0.02 |
| Google | Gemini 2.5 Pro | $1.25 | $10.00 | ~$0.02 |
| Ollama | Llama 3.1 70B | Free | Free | $0.00 |
Run `/cost` in the Harness REPL at any time to see your cumulative token usage and estimated spend for the current session.
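The per-task estimates in the table are simple arithmetic on token counts and per-million-token rates. A sketch of the calculation — the rates come from the table above, while the input/output token split is an assumed example, since the tutorial only gives the ~2K total:

```python
# Estimate a task's cost from token counts and per-1M-token rates (USD).
def task_cost(in_tokens: int, out_tokens: int, in_rate: float, out_rate: float) -> float:
    return in_tokens * in_rate / 1_000_000 + out_tokens * out_rate / 1_000_000

# Assumed split for the ~2K-token calculator fix: 500 input / 1,500 output.
print(f"Anthropic (Claude Sonnet 4): ${task_cost(500, 1500, 3.00, 15.00):.3f}")  # ≈ $0.024
print(f"OpenAI (GPT-4o):             ${task_cost(500, 1500, 2.50, 10.00):.3f}")
```

Output-heavy tasks are dominated by the output rate, which is why the same ~2K tokens can land anywhere from two to three cents depending on the split.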
Go Local with Ollama
For sensitive codebases, run a fully local model. No data leaves your machine, no API key required, no cost.
```shell
harness -p ollama -m llama3.1 "Fix calculator.py"
```
Your code never leaves your machine. Ollama runs Llama 3.1, Mistral, CodeLlama, and dozens more models completely offline.
REPL Tour
Run harness with no arguments to enter the interactive REPL — a persistent
session where the agent remembers context across messages.
```shell
# Start interactive mode
harness

# Try these commands:
/status        # Show provider, model, session info
/models        # List available models
/cost          # Show token usage and cost
/help          # See all commands
/model gpt-4o  # Switch model mid-session
```
| Command | Description |
|---|---|
| /status | Show current provider, model, and session ID |
| /models | List all available models for the current provider |
| /cost | Show cumulative token usage and estimated cost |
| /model <name> | Switch to a different model mid-session |
| /connect | Set up or change your API key interactively |
| /help | Show all available commands |
| /exit | Exit the REPL |
Start the REPL, ask the agent to fix the calculator, then switch to gpt-4o with /model gpt-4o and ask it to add unit tests — the context is preserved.
Next Steps
You've installed Harness, connected a provider, fixed real bugs, and explored multi-provider switching. The next tutorial covers the Python SDK — building custom AI-powered tools programmatically with full async streaming.
Learn to integrate Harness directly into Python scripts. Build a streaming AI security code reviewer in 50 lines of Python.