
Code Assistant Example

The example files live in the examples/code_assistant/ directory of the repository.

This example builds a simplified "Claude Code" — a terminal coding assistant powered by a mesh of three autonomous agents. It demonstrates how to compose specialized agents into a system where a brain reasons, hands execute, and a coordinator orchestrates — just like a real AI coding assistant.

It highlights:

  • How an orchestrator agent coordinates both LLM and tool agents
  • How LLM-to-LLM delegation works via agent_call with infer
  • How tool delegation works via agent_call with tool_call
  • How separation of concerns (brain vs hands) enables safer, more reliable systems
  • How agents discover each other dynamically through the Registry

💬 High-Level User Request

"
Add docstrings to all functions in utils.py
β€” User Request

From this single input, the system:

  • Explores the workspace to understand the project structure
  • Reads the target file to get the current code
  • Reasons about what docstrings are needed (using the Planner's LLM)
  • Generates the complete updated file with proper PEP 257 docstrings
  • Writes the changes to disk
  • Summarizes what was changed for the user

🧩 Agent Overview

1. Orchestrator Agent (Coordinator — LLM + Agent Calls)

Role: User-facing coordinator and workflow manager.
- Receives the user's coding request
- Decides which agent to call, when, and with what context
- Delegates reasoning to the Planner and file operations to the Coder
- Aggregates results and presents a summary to the user

Uses an LLM for decision-making, but never touches files or generates code directly. It coordinates.

2. Planner Agent (Brain — LLM-Only)

Role: Code analysis, planning, and code generation.
- Analyzes coding tasks and creates step-by-step implementation plans
- Reviews existing source code for issues and improvements
- Generates precise, complete file contents ready to be written
- Responds with structured output following PEP 8 / PEP 257 conventions

No filesystem access. The Planner is a pure reasoning engine — it thinks, but cannot act. This is the infer delegation in action.
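For concreteness, this is the kind of artifact the Planner is asked to emit: a complete function with a PEP 257 docstring, ready for the Coder to write to disk verbatim. The divide example mirrors the demo's utils.py; the exact docstring wording is illustrative:

```python
# Example Planner output: a complete function carrying a PEP 257 docstring.
# (Illustrative content; the real Planner generates the whole updated file.)
def divide(a, b):
    """Return the quotient of a divided by b.

    Raises:
        ValueError: If b is zero.
    """
    if b == 0:
        raise ValueError("division by zero")
    return a / b
```

Because the Planner only produces text like this and never touches disk, its output can be reviewed or tested before the Coder writes anything.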

3. Coder Agent (Hands — Tools-Only)

Role: Deterministic filesystem operations.
- read_file(path) — Read file contents with line numbers
- write_file(path, content) — Create or overwrite files
- list_directory(path) — List directory contents
- search_in_files(pattern, path, file_filter) — Grep-like search

No LLM. The Coder executes file operations reliably and deterministically. All operations are sandboxed to a configurable workspace directory for safety.

🧠 Design Note: Why Three Agents?

This architecture mirrors how production coding assistants (Claude Code, Cursor, etc.) work internally:

| Concern | Agent | Why Separate? |
|---|---|---|
| Reasoning | Planner | Can use a powerful (expensive) model; no side effects |
| Execution | Coder | Deterministic; can run on a secure file server |
| Coordination | Orchestrator | Can use a lighter (cheaper) model; just routes work |

The key insight: the brain that generates code should not be the same component that writes files. Separation makes the system safer, more testable, and independently scalable.


πŸ” Sequence Diagram

sequenceDiagram
    participant User
    participant Orchestrator as Orchestrator Agent (LLM)
    participant Planner as Planner Agent (LLM-only)
    participant Coder as Coder Agent (Tools-only)

    User->>Orchestrator: "Add docstrings to all functions in utils.py"

    Note over Orchestrator: LLM thinks: "I need to explore the workspace first"
    Orchestrator->>Coder: agent_call → list_directory(".")
    Coder-->>Orchestrator: [main.py, utils.py, config.py]

    Note over Orchestrator: LLM thinks: "Let me read the target file"
    Orchestrator->>Coder: agent_call → read_file("utils.py")
    Coder-->>Orchestrator: File contents (6 functions, no docstrings)

    Note over Orchestrator: LLM thinks: "Now I need the Planner to generate docstrings"
    Orchestrator->>Planner: agent_call → infer("Add docstrings to these functions: ...")
    Planner-->>Orchestrator: Complete updated file with PEP 257 docstrings

    Note over Orchestrator: LLM thinks: "Write the changes"
    Orchestrator->>Coder: agent_call → write_file("utils.py", updated_content)
    Coder-->>Orchestrator: ✅ Wrote 52 lines to utils.py

    Note over Orchestrator: LLM produces final summary
    Orchestrator-->>User: "Added docstrings to 6 functions in utils.py ✅"

Two Types of agent_call

The Orchestrator uses both delegation modes available in Protolink:

| Delegation | Target | Action | What Happens |
|---|---|---|---|
| LLM-to-LLM | Planner | infer | Orchestrator's LLM sends a prompt → Planner's LLM reasons → response returned |
| LLM-to-Tool | Coder | tool_call | Orchestrator's LLM specifies tool + args → Coder executes → result returned |

This is the core of Protolink's agent mesh: agents delegating to each other over the network using a standardized protocol. The Orchestrator doesn't know (or care) whether the Planner and Coder are on the same machine or across the globe.
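A toy dispatcher makes the two modes concrete. The payload shape and field names below are assumptions for illustration, not Protolink's actual wire format:

```python
# Toy agent_call dispatcher: one payload format carries either delegation mode.
# The dict shape ("target", "action", "prompt", "tool", "args") is hypothetical.
def handle_agent_call(call: dict, agents: dict):
    target = agents[call["target"]]
    if call["action"] == "infer":
        # LLM-to-LLM: forward a prompt to the target agent's LLM.
        return target["infer"](call["prompt"])
    if call["action"] == "tool_call":
        # LLM-to-Tool: invoke a named tool with structured arguments.
        return target["tools"][call["tool"]](**call["args"])
    raise ValueError(f"unknown action: {call['action']}")

# Stand-ins for the Planner (LLM-only) and Coder (tools-only).
agents = {
    "planner": {"infer": lambda prompt: f"PLAN for: {prompt}"},
    "coder": {"tools": {"list_directory": lambda path: ["main.py", "utils.py"]}},
}

print(handle_agent_call(
    {"target": "planner", "action": "infer", "prompt": "Add docstrings"},
    agents))                                   # PLAN for: Add docstrings
print(handle_agent_call(
    {"target": "coder", "action": "tool_call", "tool": "list_directory",
     "args": {"path": "."}}, agents))          # ['main.py', 'utils.py']
```

In the real mesh the dispatch crosses the network: the call is serialized, routed to the target agent's URL (resolved via the Registry), and the result travels back the same way.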


🧠 Agent Classification Summary

| Agent | Uses LLM | Has Tools | Purpose |
|---|---|---|---|
| Orchestrator Agent | ✅ | ❌ | Coordination, routing, decision-making |
| Planner Agent | ✅ | ❌ | Code analysis, planning, code generation |
| Coder Agent | ❌ | ✅ | File read/write/list/search |

🎯 Why This Example Matters

  • Relatable use case: Every developer understands what a coding assistant does
  • Both delegation modes: Demonstrates infer (LLM-to-LLM) and tool_call (LLM-to-Tool) in one system
  • Separation of concerns: Brain, Hands, and Coordinator are cleanly separated
  • LLM-agnostic: Switch between OpenAI, Anthropic, Ollama with a single environment variable
  • Safety by design: Workspace sandboxing prevents agents from accessing arbitrary files
  • Autonomous multi-step: The entire workflow runs without human intervention after the initial request

🧠 The Inference Loop — How the Orchestrator Works

When the Orchestrator receives a user request, Protolink runs an inference loop:

  1. The Orchestrator's LLM reads the system prompt + discovered agent cards (from the Registry)
  2. The LLM decides which agent to call and outputs a structured agent_call
  3. Protolink intercepts the agent_call, resolves the agent URL via the Registry, and sends an HTTP request
  4. The target agent processes the request and returns a result
  5. The result is injected back into the LLM's conversation as an observation
  6. The LLM decides the next step (another agent_call, or a final response)
  7. Steps 2–6 repeat until the LLM produces a final response
┌─────────────────────────────────────────────────────────┐
│                   INFERENCE LOOP                        │
│                                                         │
│  ┌──────────┐    ┌──────────┐    ┌────────────────────┐ │
│  │ LLM      │───→│ Protolink│───→│ Remote Agent       │ │
│  │ thinks   │    │ routes   │    │ executes           │ │
│  └──────────┘    └──────────┘    └────────────────────┘ │
│       ▲                                    │            │
│       │              observation           │            │
│       └────────────────────────────────────┘            │
│                                                         │
│  Loop continues until LLM produces "final" response     │
└─────────────────────────────────────────────────────────┘

This is how a single Task.create_infer(prompt=...) can trigger an autonomous, multi-step workflow spanning multiple agents.
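The loop can be sketched with a scripted stand-in for the LLM; every name here is illustrative, not Protolink's API:

```python
# Toy inference loop: a scripted "LLM" decides the next step, agent_call
# results are fed back as observations, and the loop ends on a final response.
def inference_loop(llm_step, dispatch):
    history = []
    while True:
        decision = llm_step(history)              # step 2: LLM decides
        if decision["type"] == "final":
            return decision["text"]               # step 7: loop terminates
        result = dispatch(decision)               # steps 3-4: route and execute
        history.append(("observation", result))   # step 5: inject observation

# Scripted decisions imitating the docstring workflow above.
script = iter([
    {"type": "agent_call", "target": "coder", "tool": "read_file"},
    {"type": "agent_call", "target": "planner", "action": "infer"},
    {"type": "final", "text": "Added docstrings to 6 functions"},
])

answer = inference_loop(
    llm_step=lambda history: next(script),
    dispatch=lambda call: f"result from {call['target']}",
)
print(answer)  # Added docstrings to 6 functions
```

A real run replaces the script with model inference and the dispatch lambda with HTTP calls to the agents' URLs, but the control flow is the same.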


πŸ“ Files and Structure

examples/code_assistant/
├── run.py                  # ⭐ Entry point — starts all agents & demo
├── orchestrator_agent.py   # LLM coordinator (delegates to Planner & Coder)
├── planner_agent.py        # LLM-only reasoning (code analysis & generation)
├── coder_agent.py          # Tools-only (read, write, list, search files)
├── .env.example            # Environment template
├── README.md               # Detailed README with setup instructions
└── workspace/              # Demo Python project (auto-created by run.py)
    ├── main.py             # Simple calculator app
    ├── utils.py            # Utility functions (without docstrings)
    └── config.py           # App configuration

🚀 Running the Example

  1. Install Protolink

    pip install protolink
    

  2. Configure your LLM provider

    # OpenAI
    export OPENAI_API_KEY=sk-...

    # Anthropic
    export ANTHROPIC_API_KEY=sk-ant-...

    # Ollama (install from https://ollama.ai)
    ollama pull gemma4:latest
    ollama serve

  3. Run the demo

    # OpenAI
    cd examples/code_assistant
    LLM_PROVIDER=openai python run.py

    # Anthropic
    cd examples/code_assistant
    LLM_PROVIDER=anthropic python run.py

    # Ollama
    cd examples/code_assistant
    LLM_PROVIDER=ollama python run.py
    
  4. Try different queries

    python run.py "Add type hints to all functions in utils.py"
    python run.py "Create a test file for utils.py"
    python run.py "Refactor utils.py to use a Calculator class"
    


📋 Expected Output

🤖 CODE ASSISTANT — Protolink Multi-Agent Coding System
======================================================================

📂 Setting up demo workspace...
   Created main.py
   Created utils.py
   Created config.py

📑 Starting Registry...
   Registry running at http://localhost:9000

🔧 Starting Coder Agent (tools-only, no LLM)...
   Coder running at http://localhost:8030
   Tools: ['read_file', 'write_file', 'list_directory', 'search_in_files']

🧠 Starting Planner Agent (LLM: openai)...
   Planner running at http://localhost:8020

🎯 Starting Orchestrator Agent (LLM: openai)...
   Orchestrator running at http://localhost:8010

πŸ” Verifying agent discovery...
   Discovered 3 agents:
   β€’ orchestrator (LLM): reasoning
   β€’ planner (LLM): reasoning
   β€’ coder (Tools): ['read_file', 'write_file', 'list_directory', 'search_in_files']

πŸ“ Processing: "Add docstrings to all functions in utils.py"
======================================================================

⏳ Orchestrator is working...

   πŸ“ [coder] list_directory: .
   πŸ“– [coder] read_file: utils.py
   🧠 [planner] infer called: Add docstrings to these functions...
   ✍️  [coder] write_file: utils.py

======================================================================
✅ RESULT:
----------------------------------------------------------------------
Added comprehensive docstrings to all 6 functions in utils.py:
• add(a, b) — Addition operation
• subtract(a, b) — Subtraction operation
• multiply(a, b) — Multiplication operation
• divide(a, b) — Division with zero-check
• power(base, exponent) — Exponentiation
• factorial(n) — Recursive factorial

All docstrings follow PEP 257 conventions.
----------------------------------------------------------------------

| # | Feature | Where |
|---|---|---|
| 1 | agent_call with infer | Orchestrator → Planner (LLM-to-LLM reasoning) |
| 2 | agent_call with tool_call | Orchestrator → Coder (file operations) |
| 3 | Registry Discovery | Agents find each other dynamically at runtime |
| 4 | LLM-Agnostic | One-line switch: create_llm("openai") → create_llm("anthropic") |
| 5 | Transport-Agnostic | All agents use HTTP; switchable to WebSocket/gRPC |
| 6 | Tool-Only Agents | Coder has tools but no LLM — pure determinism |
| 7 | LLM-Only Agents | Planner has LLM but no tools — pure reasoning |
| 8 | Custom handle_task | Planner subclasses Agent for observability |
| 9 | Autonomous Orchestration | Multi-step workflow without human intervention |
| 10 | Workspace Sandboxing | File operations constrained to safe directory |

🧩 Extending the Example

  • Add a Reviewer agent (LLM-only) that reviews code changes before they're written
  • Add a Test Runner agent (tool-only) that runs pytest after edits
  • Switch transports β€” use WebSocket for real-time streaming of edit progress
  • Add MCP tools β€” import tools from external MCP servers (e.g., GitHub, Jira)
  • Use different LLMs per agent β€” cheap model for Orchestrator, powerful model for Planner

📚 See Also