
A lightweight, production-ready framework for agent-to-agent communication, built on and extending Google's A2A protocol.



Welcome to the ProtoLink documentation.

This site provides an overview of the framework, its concepts, and how to use it in your projects.

Current release: see protolink on PyPI.



ProtoLink is a lightweight, production-ready Python framework for building distributed multi-agent systems where AI agents communicate directly with each other.

Each ProtoLink agent is a self-contained runtime that can embed an LLM, manage execution context, expose and consume tools (native or via MCP), and coordinate with other agents over a unified transport layer.

ProtoLink implements Google’s Agent-to-Agent (A2A) specification for agent identity, capability declaration, and discovery, and extends it with direct agent-to-agent collaboration.

The framework emphasizes minimal boilerplate, explicit control, and production readiness, making it suitable for both research and real-world systems.

ProtoLink implements Google’s A2A protocol at the wire level, while providing a higher-level agent runtime that unifies client, server, transport, tools, and LLMs into a single composable abstraction: the Agent.

Concept     Google A2A                ProtoLink
Agent       Protocol-level concept    Runtime object
Transport   External server concern   Agent-owned
Client      Separate                  Built-in
LLM         Out of scope              First-class
Tools       Out of scope              Native + MCP
UX          Enterprise infra          Developer-first
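The "Agent as runtime object" idea can be made concrete with a minimal stand-in sketch in plain Python. This is an illustration of the concept, not ProtoLink's actual API: the `Agent` and `InProcessTransport` classes and their methods are assumptions made for this example. One object owns its transport and plays both the server role (handling incoming messages) and the client role (calling peers).

```python
# Illustrative stand-in, NOT ProtoLink's real API: a single Agent object
# combines client and server responsibilities over a shared transport.

class InProcessTransport:
    """Routes messages between registered agents by name."""
    def __init__(self):
        self.agents = {}

    def register(self, agent):
        self.agents[agent.name] = agent

    def deliver(self, target, message):
        return self.agents[target].handle(message)


class Agent:
    """One runtime object that both serves requests and calls peers."""
    def __init__(self, name, transport, tools=None):
        self.name = name
        self.transport = transport
        self.tools = tools or {}
        transport.register(self)

    # Server role: respond to an incoming message by running a tool.
    def handle(self, message):
        tool = self.tools.get(message["tool"])
        return tool(*message["args"]) if tool else None

    # Client role: send a request to a peer agent over the transport.
    def ask(self, peer, tool, *args):
        return self.transport.deliver(peer, {"tool": tool, "args": args})


bus = InProcessTransport()
calculator = Agent("calculator", bus, tools={"add": lambda a, b: a + b})
coordinator = Agent("coordinator", bus)

print(coordinator.ask("calculator", "add", 2, 3))  # 5
```

In a real deployment the transport would carry A2A messages over the wire; the point here is that client, server, and tools live behind one object.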
  • Build agents quickly
    See Getting Started and Agents for the core concepts and basic setup.

  • Choose your transport
    Explore Transports to switch between HTTP, WebSocket, runtime, and future transports with minimal code changes.

  • Plug in LLMs & tools
    Use LLMs and Tools to wire in language models and both native & MCP tools as agent modules.
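The "choose your transport" idea above can be sketched as follows. The names here (`Transport`, `RuntimeTransport`, `WebSocketTransport`, `send`) are illustrative assumptions rather than ProtoLink's documented interface; the point is that every transport exposes the same surface, so switching is a construction-time choice rather than a rewrite.

```python
# Illustrative sketch, NOT ProtoLink's real API: transports share one
# interface, so an agent can swap them with minimal code changes.
from typing import Protocol


class Transport(Protocol):
    """Every transport exposes the same send() surface."""
    def send(self, target: str, payload: dict) -> dict: ...


class RuntimeTransport:
    """In-process delivery: handy for tests and single-process systems."""
    def __init__(self):
        self.handlers = {}

    def send(self, target: str, payload: dict) -> dict:
        return self.handlers[target](payload)


class WebSocketTransport:
    """Placeholder: a real version would hold a live socket per peer."""
    def send(self, target: str, payload: dict) -> dict:
        raise NotImplementedError("wire delivery omitted from this sketch")


# Swapping RuntimeTransport() for WebSocketTransport() changes nothing
# in the calling code, because both satisfy the Transport protocol.
transport: Transport = RuntimeTransport()
transport.handlers["echo"] = lambda payload: {"echoed": payload["text"]}
print(transport.send("echo", {"text": "hi"}))  # {'echoed': 'hi'}
```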

Key ideas:

  • Unified Agent model: a single autonomous agent instance handles both client and server responsibilities and incorporates LLMs and tools.
  • Flexible transports: HTTP, WebSocket, in‑process runtime, and planned JSON‑RPC / gRPC transports.
  • LLM‑ready architecture: first‑class integration with API, local, and server‑hosted LLMs.
  • Tools as modules: native Python tools and MCP tools plugged directly into agents.
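As a rough illustration of the "tools as modules" idea, the decorator-based registration below is a hypothetical stand-in, not ProtoLink's documented API. A native tool is just a Python callable registered on the agent; an MCP-backed tool would present the same callable shape once wired in.

```python
# Illustrative stand-in, NOT ProtoLink's real API: tools are plain
# callables registered on the agent and invoked by name.

class ToolAgent:
    def __init__(self):
        self.tools = {}

    def tool(self, fn):
        """Decorator registering a native Python function as a tool."""
        self.tools[fn.__name__] = fn
        return fn

    def invoke(self, name, **kwargs):
        return self.tools[name](**kwargs)


agent = ToolAgent()


@agent.tool
def word_count(text: str) -> int:
    """A trivial native tool: count whitespace-separated words."""
    return len(text.split())


print(agent.invoke("word_count", text="agents talking to agents"))  # 4
```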

Use this documentation to:

  • Install ProtoLink and run your first agent.
  • Understand how agents, transports, LLMs, and tools fit together.
  • Explore practical examples you can adapt to your own systems.

ProtoLink is open source under the MIT license. Contributions are welcome; see the repository’s Contributing section on GitHub.