# Code Assistant Example
The directory containing the example files can be found here.
This example builds a simplified "Claude Code": a terminal coding assistant powered by a mesh of three autonomous agents. It demonstrates how to compose specialized agents into a system where a brain reasons, hands execute, and a coordinator orchestrates, just like a real AI coding assistant.
It highlights:
- How an orchestrator agent coordinates both LLM and tool agents
- How LLM-to-LLM delegation works via `agent_call` with `infer`
- How tool delegation works via `agent_call` with `tool_call`
- How separation of concerns (brain vs. hands) enables safer, more reliable systems
- How agents discover each other dynamically through the Registry
## High-Level User Request

> "Add docstrings to all functions in utils.py"

From this single input, the system:
- Explores the workspace to understand the project structure
- Reads the target file to get the current code
- Reasons about what docstrings are needed (using the Planner's LLM)
- Generates the complete updated file with proper PEP 257 docstrings
- Writes the changes to disk
- Summarizes what was changed for the user
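The "reasons about docstrings" and "generates the updated file" steps amount to a transformation like the one below. The function shown is a hypothetical stand-in for the demo's `utils.py` contents, not the file's actual code:

```python
# Before (illustrative): a function in workspace/utils.py with no docstring
def divide(a, b):
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b


# After: the same function as the Planner would regenerate it,
# with a PEP 257-style docstring (summary line, then details)
def divide_documented(a, b):
    """Divide a by b.

    Args:
        a: Numerator.
        b: Denominator.

    Raises:
        ValueError: If b is zero.
    """
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b
```

The Planner returns the *complete* updated file (not a diff), so the Coder can write it with a single `write_file` call.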
## Agent Overview
### 1. Orchestrator Agent (Coordinator: LLM + Agent Calls)
Role: User-facing coordinator and workflow manager.
- Receives the user's coding request
- Decides which agent to call, when, and with what context
- Delegates reasoning to the Planner and file operations to the Coder
- Aggregates results and presents a summary to the user
Uses an LLM for decision-making, but never touches files or generates code directly. It coordinates.
### 2. Planner Agent (Brain: LLM-Only)
Role: Code analysis, planning, and code generation.
- Analyzes coding tasks and creates step-by-step implementation plans
- Reviews existing source code for issues and improvements
- Generates precise, complete file contents ready to be written
- Responds with structured output following PEP 8 / PEP 257 conventions
No filesystem access. The Planner is a pure reasoning engine: it thinks, but cannot act. This is `infer` delegation in action.
### 3. Coder Agent (Hands: Tools-Only)
Role: Deterministic filesystem operations.
- `read_file(path)` – Read file contents with line numbers
- `write_file(path, content)` – Create or overwrite files
- `list_directory(path)` – List directory contents
- `search_in_files(pattern, path, file_filter)` – Grep-like search
No LLM. The Coder executes file operations reliably and deterministically. All operations are sandboxed to a configurable workspace directory for safety.
## Design Note: Why Three Agents?
This architecture mirrors how production coding assistants (Claude Code, Cursor, etc.) work internally:
| Concern | Agent | Why Separate? |
|---|---|---|
| Reasoning | Planner | Can use a powerful (expensive) model; no side effects |
| Execution | Coder | Deterministic; can run on a secure file server |
| Coordination | Orchestrator | Can use a lighter (cheaper) model; just routes work |
The key insight: the brain that generates code should not be the same component that writes files. Separation makes the system safer, more testable, and independently scalable.
## Sequence Diagram
```mermaid
sequenceDiagram
    participant User
    participant Orchestrator as Orchestrator Agent (LLM)
    participant Planner as Planner Agent (LLM-only)
    participant Coder as Coder Agent (Tools-only)

    User->>Orchestrator: "Add docstrings to all functions in utils.py"
    Note over Orchestrator: LLM thinks: "I need to explore the workspace first"
    Orchestrator->>Coder: agent_call → list_directory(".")
    Coder-->>Orchestrator: [main.py, utils.py, config.py]
    Note over Orchestrator: LLM thinks: "Let me read the target file"
    Orchestrator->>Coder: agent_call → read_file("utils.py")
    Coder-->>Orchestrator: File contents (6 functions, no docstrings)
    Note over Orchestrator: LLM thinks: "Now I need the Planner to generate docstrings"
    Orchestrator->>Planner: agent_call → infer("Add docstrings to these functions: ...")
    Planner-->>Orchestrator: Complete updated file with PEP 257 docstrings
    Note over Orchestrator: LLM thinks: "Write the changes"
    Orchestrator->>Coder: agent_call → write_file("utils.py", updated_content)
    Coder-->>Orchestrator: ✅ Wrote 52 lines to utils.py
    Note over Orchestrator: LLM produces final summary
    Orchestrator-->>User: "Added docstrings to 6 functions in utils.py ✅"
```
## Two Types of `agent_call`
The Orchestrator uses both delegation modes available in Protolink:
| Delegation | Target | Action | What Happens |
|---|---|---|---|
| LLM-to-LLM | Planner | `infer` | Orchestrator's LLM sends a prompt → Planner's LLM reasons → response returned |
| LLM-to-Tool | Coder | `tool_call` | Orchestrator's LLM specifies tool + args → Coder executes → result returned |
This is the core of Protolink's agent mesh: agents delegating to each other over the network using a standardized protocol. The Orchestrator doesn't know (or care) whether the Planner and Coder are on the same machine or across the globe.
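A rough sketch of the two delegation shapes. All field names and URLs here are illustrative assumptions, not Protolink's actual wire format:

```python
# LLM-to-LLM delegation: the Orchestrator asks the Planner's LLM to reason.
infer_call = {
    "type": "agent_call",
    "target": "planner",
    "action": "infer",
    "payload": {"prompt": "Add PEP 257 docstrings to these functions: ..."},
}

# LLM-to-Tool delegation: the Orchestrator asks the Coder to run a named tool.
read_call = {
    "type": "agent_call",
    "target": "coder",
    "action": "tool_call",
    "payload": {"tool": "read_file", "args": {"path": "utils.py"}},
}


def route(call: dict) -> str:
    """Tiny dispatcher mirroring what the framework does when it intercepts
    an agent_call: look at the action and address the right endpoint.
    (Endpoint paths are made up for illustration.)"""
    if call["action"] == "infer":
        return f"POST http://{call['target']}/infer"
    if call["action"] == "tool_call":
        return f"POST http://{call['target']}/tools/{call['payload']['tool']}"
    raise ValueError(f"unknown action: {call['action']}")
```

In the real system the target's URL is not hard-coded; it is resolved through the Registry at call time, which is why agents can live anywhere on the network.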
## Agent Classification Summary
| Agent | Uses LLM | Has Tools | Purpose |
|---|---|---|---|
| Orchestrator Agent | ✅ | ❌ | Coordination, routing, decision-making |
| Planner Agent | ✅ | ❌ | Code analysis, planning, code generation |
| Coder Agent | ❌ | ✅ | File read/write/list/search |
## Why This Example Matters
- Relatable use case: Every developer understands what a coding assistant does
- Both delegation modes: demonstrates `infer` (LLM-to-LLM) and `tool_call` (LLM-to-Tool) in one system
- Separation of concerns: Brain, Hands, and Coordinator are cleanly separated
- LLM-agnostic: Switch between OpenAI, Anthropic, Ollama with a single environment variable
- Safety by design: Workspace sandboxing prevents agents from accessing arbitrary files
- Autonomous multi-step: The entire workflow runs without human intervention after the initial request
## The Inference Loop: How the Orchestrator Works
When the Orchestrator receives a user request, Protolink runs an inference loop:
1. The Orchestrator's LLM reads the system prompt + discovered agent cards (from the Registry)
2. The LLM decides which agent to call and outputs a structured `agent_call`
3. Protolink intercepts the `agent_call`, resolves the agent URL via the Registry, and sends an HTTP request
4. The target agent processes the request and returns a result
5. The result is injected back into the LLM's conversation as an observation
6. The LLM decides the next step (another `agent_call`, or a final response)
7. Steps 2–6 repeat until the LLM produces a `final` response
```
┌───────────────────────────────────────────────────────┐
│                    INFERENCE LOOP                     │
│                                                       │
│   ┌──────────┐    ┌───────────┐    ┌──────────────┐   │
│   │   LLM    │───▶│ Protolink │───▶│ Remote Agent │   │
│   │  thinks  │    │  routes   │    │   executes   │   │
│   └──────────┘    └───────────┘    └──────────────┘   │
│        ▲                                  │           │
│        │           observation            │           │
│        └──────────────────────────────────┘           │
│                                                       │
│   Loop continues until LLM produces "final" response  │
└───────────────────────────────────────────────────────┘
```
This is how a single `Task.create_infer(prompt=...)` can trigger an autonomous, multi-step workflow spanning multiple agents.
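The loop above can be simulated in a few lines of plain Python with a scripted stand-in for the LLM. This is illustrative only; it mimics the loop's control flow, not Protolink's API:

```python
def scripted_llm(observations):
    """Stand-in for the LLM: decide the next step from what has been observed.

    A real LLM would reason over the conversation; here we just replay the
    docstring workflow from the sequence diagram.
    """
    steps = [
        {"call": ("coder", "list_directory", ".")},
        {"call": ("coder", "read_file", "utils.py")},
        {"call": ("planner", "infer", "Add docstrings: ...")},
        {"call": ("coder", "write_file", "utils.py")},
        {"final": "Added docstrings to utils.py"},
    ]
    return steps[len(observations)]


def remote_agent(target, action, arg):
    """Stand-in for a remote agent reached over HTTP."""
    return f"{target} ran {action}({arg!r})"


def inference_loop():
    observations = []
    while True:
        decision = scripted_llm(observations)
        if "final" in decision:            # loop ends on a final response
            return decision["final"], observations
        result = remote_agent(*decision["call"])
        observations.append(result)        # injected back as an observation


final, trace = inference_loop()
```

Running this yields a final summary plus a four-step trace: two Coder calls, one Planner `infer`, and one closing `write_file`, mirroring the diagram above.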
## Files and Structure
```
examples/code_assistant/
├── run.py                 # Entry point: starts all agents & demo
├── orchestrator_agent.py  # LLM coordinator (delegates to Planner & Coder)
├── planner_agent.py       # LLM-only reasoning (code analysis & generation)
├── coder_agent.py         # Tools-only (read, write, list, search files)
├── .env.example           # Environment template
├── README.md              # Detailed README with setup instructions
└── workspace/             # Demo Python project (auto-created by run.py)
    ├── main.py            # Simple calculator app
    ├── utils.py           # Utility functions (without docstrings)
    └── config.py          # App configuration
```
## Running the Example
1. **Install Protolink**

   ```bash
   pip install protolink
   ```

2. **Configure your LLM provider** (pick one)

   ```bash
   # OpenAI
   export OPENAI_API_KEY=sk-...

   # Anthropic
   export ANTHROPIC_API_KEY=sk-ant-...

   # Ollama (install from https://ollama.ai)
   ollama pull gemma4:latest
   ollama serve
   ```

3. **Run the demo**

   ```bash
   cd examples/code_assistant

   # with OpenAI
   LLM_PROVIDER=openai python run.py

   # or Anthropic
   LLM_PROVIDER=anthropic python run.py

   # or Ollama
   LLM_PROVIDER=ollama python run.py
   ```

4. **Try different queries**

   ```bash
   python run.py "Add type hints to all functions in utils.py"
   python run.py "Create a test file for utils.py"
   python run.py "Refactor utils.py to use a Calculator class"
   ```
## Expected Output
```
CODE ASSISTANT – Protolink Multi-Agent Coding System
======================================================================

Setting up demo workspace...
   Created main.py
   Created utils.py
   Created config.py

Starting Registry...
   Registry running at http://localhost:9000

Starting Coder Agent (tools-only, no LLM)...
   Coder running at http://localhost:8030
   Tools: ['read_file', 'write_file', 'list_directory', 'search_in_files']

Starting Planner Agent (LLM: openai)...
   Planner running at http://localhost:8020

Starting Orchestrator Agent (LLM: openai)...
   Orchestrator running at http://localhost:8010

Verifying agent discovery...
   Discovered 3 agents:
   • orchestrator (LLM): reasoning
   • planner (LLM): reasoning
   • coder (Tools): ['read_file', 'write_file', 'list_directory', 'search_in_files']

Processing: "Add docstrings to all functions in utils.py"
======================================================================

Orchestrator is working...
   [coder] list_directory: .
   [coder] read_file: utils.py
   [planner] infer called: Add docstrings to these functions...
   [coder] write_file: utils.py

======================================================================
✅ RESULT:
----------------------------------------------------------------------
Added comprehensive docstrings to all 6 functions in utils.py:
• add(a, b) → Addition operation
• subtract(a, b) → Subtraction operation
• multiply(a, b) → Multiplication operation
• divide(a, b) → Division with zero-check
• power(base, exponent) → Exponentiation
• factorial(n) → Recursive factorial

All docstrings follow PEP 257 conventions.
----------------------------------------------------------------------
```
## Protolink Features Showcased
| # | Feature | Where |
|---|---|---|
| 1 | `agent_call` with `infer` | Orchestrator → Planner (LLM-to-LLM reasoning) |
| 2 | `agent_call` with `tool_call` | Orchestrator → Coder (file operations) |
| 3 | Registry Discovery | Agents find each other dynamically at runtime |
| 4 | LLM-Agnostic | One-line switch: `create_llm("openai")` → `create_llm("anthropic")` |
| 5 | Transport-Agnostic | All agents use HTTP; switchable to WebSocket/gRPC |
| 6 | Tool-Only Agents | Coder has tools but no LLM – pure determinism |
| 7 | LLM-Only Agents | Planner has LLM but no tools – pure reasoning |
| 8 | Custom `handle_task` | Planner subclasses `Agent` for observability |
| 9 | Autonomous Orchestration | Multi-step workflow without human intervention |
| 10 | Workspace Sandboxing | File operations constrained to safe directory |
## Extending the Example
- Add a Reviewer agent (LLM-only) that reviews code changes before they're written
- Add a Test Runner agent (tool-only) that runs pytest after edits
- Switch transports – use WebSocket for real-time streaming of edit progress
- Add MCP tools – import tools from external MCP servers (e.g., GitHub, Jira)
- Use different LLMs per agent – a cheap model for the Orchestrator, a powerful model for the Planner
## See Also
- Getting Started – Core concepts and setup
- Agents – Agent lifecycle and tools
- Transports – Switching between HTTP, WebSocket, and runtime transports
- Tools – Native and MCP tool integration
- LLMs – LLM backends and usage
- Ticket Booking Example – Another multi-agent example