Getting Started — Your First AI Coding Agent

Go from zero to a running AI agent that fixes real bugs in under 10 minutes.

Beginner · 10 min read
1. What is Harness?

Harness is an MIT-licensed, open-source AI coding agent. It connects to 5 AI providers and 50+ models — Anthropic, OpenAI, Google, Ollama, and more — through a single, unified interface.

Unlike single-provider tools, Harness lets you switch models with a single flag, compare costs across providers, and run entirely local with Ollama. Enterprise features like permission modes, audit logging, and custom approval callbacks are built in.

What competitors can't do

Claude Code only works with Anthropic. Cursor is locked to their proprietary backend. Aider supports multiple providers but has no streaming SDK or enterprise permission system. Harness supports 5 providers and 50+ models behind one CLI and one Python API.

Source code and issues on GitHub → AgentBoardTT/openharness

2. Installation

Install Harness using uv (recommended) for isolated, reproducible installs.

bash
# Install with uv (recommended)
uv tool install harness-agent

# Verify installation
harness --version
Expected output

harness 0.2.0

Why uv?

uv tool install installs Harness into an isolated environment and adds the harness binary to your PATH. No virtualenv management required.

3. Connect Your Provider

Harness supports multiple providers. The four most common setups are below; pick the one that matches yours.

Anthropic

bash
harness connect --provider anthropic --api-key sk-ant-...

Or set the environment variable directly:

bash
export ANTHROPIC_API_KEY="sk-ant-..."
OpenAI

bash
harness connect --provider openai --api-key sk-...

Or set the environment variable directly:

bash
export OPENAI_API_KEY="sk-..."
Google

bash
harness connect --provider google --api-key YOUR_KEY

Or set the environment variable directly:

bash
export GOOGLE_API_KEY="..."
Ollama

bash
# Install Ollama first
curl -fsSL https://ollama.ai/install.sh | sh
ollama pull llama3.1

# No API key needed!
harness -p ollama "Hello world"
🔒 No API key required

Ollama runs models locally. Your code never leaves your machine and there are no usage costs.

4. Fix a Buggy Calculator

Let's give the agent a real task. Create the file below — it has three intentional bugs. Can you spot them before the agent does?

python calculator.py
# calculator.py — spot the bugs!

def add(a: float, b: float) -> float:
    return a + b

def subtract(a: float, b: float) -> float:
    return a + b  # Bug: should be a - b

def multiply(a: float, b: float) -> float:
    return a * b

def divide(a: float, b: float) -> float:
    return a / b  # Bug: no zero division check

def percentage(value: float, total: float) -> float:
    return value / total * 100  # Bug: no zero check on total

Now run the agent:

bash
harness "Fix all bugs in calculator.py and add proper error handling"
Try it — expected agent output

The agent reads the file, identifies all three bugs, applies fixes, and reports what it changed. It typically completes in 2–4 tool calls.

The agent produces the fixed functions:

python calculator.py (fixed)
def subtract(a: float, b: float) -> float:
    return a - b

def divide(a: float, b: float) -> float:
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b

def percentage(value: float, total: float) -> float:
    if total == 0:
        raise ValueError("Total cannot be zero")
    return value / total * 100
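You can double-check the agent's work with a standalone sanity test. The functions are copied inline here so the snippet runs even without calculator.py on disk:

```python
# Standalone sanity check for the fixed calculator functions.

def subtract(a: float, b: float) -> float:
    return a - b

def divide(a: float, b: float) -> float:
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b

def percentage(value: float, total: float) -> float:
    if total == 0:
        raise ValueError("Total cannot be zero")
    return value / total * 100

assert subtract(5, 3) == 2
assert divide(10, 4) == 2.5
assert percentage(30, 120) == 25.0

# Both guarded functions must raise on a zero denominator.
for bad_call in (lambda: divide(1, 0), lambda: percentage(1, 0)):
    try:
        bad_call()
        raise AssertionError("expected ValueError")
    except ValueError:
        pass
```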

Under the hood, the agent executed these tool calls in sequence:

  1. Read — read calculator.py to understand the full file
  2. Analysis — reasoned about each function's correctness
  3. Edit — applied precise edits for each of the three bugs
  4. Read — re-read the file to verify the changes

The agent never blindly rewrites files — it uses targeted edits to minimize diff noise and preserve your existing code style.
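The read → analyze → edit → verify sequence is the core agent loop. As a toy illustration of targeted editing (this is just the pattern, not Harness internals), an edit replaces one exact snippet and refuses to apply when the match is ambiguous:

```python
def targeted_edit(source: str, old: str, new: str) -> str:
    """Replace exactly one occurrence of `old`; refuse ambiguous edits."""
    count = source.count(old)
    if count != 1:
        raise ValueError(f"expected exactly one match, found {count}")
    return source.replace(old, new)

buggy = "def subtract(a, b):\n    return a + b  # bug\n"
fixed = targeted_edit(buggy, "return a + b", "return a - b")
assert "return a - b" in fixed
```

Requiring a unique match is what keeps the diff small: an edit either lands precisely where intended or fails loudly instead of touching unrelated code.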

5. Switch Providers

Run the exact same task with any provider using the -p flag. No code changes required.

bash
# Same task, different providers
harness -p anthropic "Fix calculator.py"
harness -p openai "Fix calculator.py"
harness -p google "Fix calculator.py"
What competitors can't do

Switching providers with a single-provider tool means switching products. With Harness, it's a single -p flag: no config changes, no re-authentication, no workflow disruption.

6. Cost Comparison

The calculator fix uses roughly 2,000 tokens. Here's what that costs across providers:

Provider    Model            Input $/1M  Output $/1M  This Task (~2K tokens)
Anthropic   Claude Sonnet 4  $3.00       $15.00       ~$0.03
OpenAI      GPT-4o           $2.50       $10.00       ~$0.02
Google      Gemini 2.5 Pro   $1.25       $10.00       ~$0.02
Ollama      Llama 3.1 70B    Free        Free         $0.00
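The per-task estimates follow from a simple token-price formula. Here is a minimal sketch; the 1,000 input / 1,000 output split is an assumption, and real runs also bill system-prompt and tool-output tokens, so actual figures vary:

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  price_in_per_m: float, price_out_per_m: float) -> float:
    """USD cost from token counts and per-million-token prices."""
    return (input_tokens * price_in_per_m
            + output_tokens * price_out_per_m) / 1_000_000

# Hypothetical even split of a ~2,000-token task at Claude Sonnet 4 pricing:
cost = estimate_cost(1000, 1000, 3.00, 15.00)
print(f"~${cost:.3f}")  # output tokens dominate: they cost 5x as much as input
```

Because output tokens are priced several times higher than input tokens on most providers, verbose tasks (tests, refactors) cost disproportionately more than read-heavy ones.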
Track your costs

Run /cost in the REPL at any time to see your cumulative token usage and estimated spend for the current session.

7. Go Local with Ollama

For sensitive codebases, run a fully local model. No data leaves your machine, no API key required, no cost.

bash
harness -p ollama -m llama3.1 "Fix calculator.py"
🔒 Full privacy. Zero cost.

Your code never leaves your machine. Ollama runs Llama 3.1, Mistral, CodeLlama, and dozens more models completely offline.

8. REPL Tour

Run harness with no arguments to enter the interactive REPL — a persistent session where the agent remembers context across messages.

bash
# Start interactive mode
harness

# Try these commands:
/status    # Show provider, model, session info
/models    # List available models
/cost      # Show token usage and cost
/help      # See all commands
/model gpt-4o  # Switch model mid-session
Command        Description
/status        Show current provider, model, and session ID
/models        List all available models for the current provider
/cost          Show cumulative token usage and estimated cost
/model <name>  Switch to a different model mid-session
/connect       Set up or change your API key interactively
/help          Show all available commands
/exit          Exit the REPL
Try it

Start the REPL, ask the agent to fix the calculator, then switch to gpt-4o with /model gpt-4o and ask it to add unit tests — the context is preserved.

9. Next Steps

You've installed Harness, connected a provider, fixed real bugs, and explored multi-provider switching. The next tutorial covers the Python SDK — building custom AI-powered tools programmatically with full async streaming.

Tutorial 2: The Python SDK

Learn to integrate Harness directly into Python scripts. Build a streaming AI security code reviewer in 50 lines of Python.