How to Build a Multi-Agent System with Shared Tools
Learn how to build a multi-agent system where specialized agents share a common tool registry. Architecture patterns, code examples, and a working 3-agent implementation using AgentNode as the shared tool layer.
Multi-agent systems are quickly becoming the dominant architecture for complex AI workflows. Instead of building a single monolithic agent that tries to do everything, developers are deploying teams of specialized agents — each with distinct capabilities, each collaborating through shared tool registries.
But here is the hard part: how do you give three, five, or twenty agents access to the same pool of verified, production-ready tools without duplicating code, fragmenting permissions, or creating a maintenance nightmare?
This tutorial walks you through building a multi-agent system with a shared tool layer. You will build three specialized agents — a researcher, a coder, and a reviewer — that share a common set of tools managed through AgentNode. By the end, you will have a working system and a reusable architecture pattern for your own multi-agent projects.
What Is a Multi-Agent System?
A multi-agent system (MAS) is an architecture where multiple autonomous agents work together to accomplish tasks that would be difficult or impossible for a single agent. Each agent has its own specialization, decision-making logic, and capabilities — but they coordinate through shared resources, communication protocols, and tool registries.
Think of it as a software engineering team: you have a frontend developer, a backend developer, and a QA engineer. Each person has specialized skills, but they all share the same codebase, CI/CD pipeline, and project management tools. A multi-agent system works the same way, except the team members are AI agents.
Why Multi-Agent Over Single-Agent?
Single-agent architectures hit practical ceilings quickly. Here is where multi-agent systems win:
- Specialization — each agent can be optimized for a narrow task, improving accuracy and reducing prompt complexity
- Parallelism — agents can work on different subtasks simultaneously, cutting execution time
- Fault isolation — if one agent fails, others continue operating
- Scalability — you can add new agents without rewriting existing ones
- Maintainability — smaller, focused agents are easier to test, debug, and update
The challenge is coordination. Agents need a shared understanding of what tools are available, how to call them, and what permissions they have. That is where a shared tool registry becomes essential.
Architecture Patterns for Shared Tool Access
Before diving into code, let us examine the three most common patterns for giving multiple agents access to shared tools.
Pattern 1: Centralized Tool Registry
In this pattern, all agents connect to a single tool registry that acts as the source of truth. When an agent needs a tool, it queries the registry, receives the tool definition, and executes it. This is the simplest pattern and the one we will implement in this tutorial.
┌──────────────┐     ┌──────────────┐     ┌──────────────┐
│  Researcher  │     │    Coder     │     │   Reviewer   │
│    Agent     │     │    Agent     │     │    Agent     │
└──────┬───────┘     └──────┬───────┘     └──────┬───────┘
       │                    │                    │
       └────────────────────┼────────────────────┘
                            │
                   ┌────────▼────────┐
                   │ AgentNode Tool  │
                   │    Registry     │
                   └─────────────────┘
Advantages: single source of truth, easy to audit, consistent tool versions across agents. Disadvantages: the registry is a single point of failure — mitigate this by caching tool definitions locally and treating registry availability as a first-class operational concern.
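To make the pattern concrete, here is a minimal sketch of a centralized registry in plain Python. The `Tool` dataclass and `CentralRegistry` interface are illustrative stand-ins, not the AgentNode API — the point is that every agent resolves tools through one object, so versions can never drift.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    version: str
    run: Callable[[str], str]

class CentralRegistry:
    """Single source of truth for tool definitions."""
    def __init__(self):
        self._tools: dict[str, Tool] = {}

    def register(self, tool: Tool) -> None:
        self._tools[tool.name] = tool

    def get(self, name: str) -> Tool:
        # Every agent resolves tools here, so all agents see the same version
        return self._tools[name]

registry = CentralRegistry()
registry.register(Tool("echo", "1.0", run=lambda x: x.upper()))

# Two "agents" fetch the same tool and therefore the same version
tool_a = registry.get("echo")
tool_b = registry.get("echo")
print(tool_a.version == tool_b.version)  # True
print(tool_a.run("shared"))              # SHARED
```

Auditing also becomes trivial: there is exactly one `get()` call site to instrument.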
Pattern 2: Federated Tool Access
Each agent maintains its own subset of tools from the registry, loaded at initialization. Agents can request additional tools dynamically but primarily work with their pre-loaded set. This reduces latency at the cost of potential version drift.
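A sketch of the federated pattern, again with illustrative interfaces rather than the AgentNode API: the agent copies its subset at initialization and only falls back to the registry for tools outside that set.

```python
class FederatedAgent:
    def __init__(self, registry: dict, preload: list[str]):
        self._registry = registry
        # Local copy taken at initialization -- fast lookups, but it can
        # drift from the registry if tools are updated after startup
        self._local = {name: registry[name] for name in preload}

    def get_tool(self, name: str):
        if name in self._local:
            return self._local[name]      # no registry round-trip
        tool = self._registry[name]       # dynamic fallback
        self._local[name] = tool          # cache for next time
        return tool

# Registry modeled as a plain dict of tool handles for the sketch
registry = {"web_search": "search-v2", "summarizer": "summarize-v1"}
agent = FederatedAgent(registry, preload=["web_search"])
print(agent.get_tool("web_search"))   # served from the pre-loaded set
print(agent.get_tool("summarizer"))   # fetched on demand, then cached
```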
Pattern 3: Tool Broker Agent
A dedicated orchestrator agent manages tool access for all other agents. Worker agents request tool execution through the broker, which handles authentication, rate limiting, and audit logging. This adds latency but provides the strongest security and observability guarantees.
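The broker pattern can be sketched as follows — all names are illustrative, and a real broker would also verify the caller's credentials. Workers never touch tools directly; every call passes through one choke point that rate-limits and records an audit entry.

```python
import time

class ToolBroker:
    def __init__(self, tools: dict, max_calls_per_agent: int = 100):
        self._tools = tools
        self._max_calls = max_calls_per_agent
        self._call_counts: dict[str, int] = {}
        self.audit_log: list[dict] = []

    def execute(self, agent_id: str, tool_name: str, payload: str) -> str:
        # Rate limiting per agent identity
        count = self._call_counts.get(agent_id, 0)
        if count >= self._max_calls:
            raise RuntimeError(f"rate limit exceeded for {agent_id}")
        self._call_counts[agent_id] = count + 1
        result = self._tools[tool_name](payload)
        # Audit trail: who called what, with which input
        self.audit_log.append({
            "ts": time.time(), "agent": agent_id,
            "tool": tool_name, "input": payload,
        })
        return result

broker = ToolBroker({"upper": str.upper}, max_calls_per_agent=2)
print(broker.execute("researcher", "upper", "hi"))  # HI
print(len(broker.audit_log))                        # 1
```

The extra hop costs latency on every call, which is why this pattern is usually reserved for systems where observability requirements dominate.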
Setting Up the Shared Tool Layer with AgentNode
We will use AgentNode as our shared tool registry because it provides a unified SDK that works across frameworks, built-in verification, and consistent tool definitions regardless of which agent is calling them.
Prerequisites
pip install agentnode-sdk langchain langchain-openai openai
export AGENTNODE_API_KEY=your_key_here
export OPENAI_API_KEY=your_key_here
Defining the Shared Tool Registry
from agentnode_sdk import AgentNodeClient

# Initialize the shared client
client = AgentNodeClient()

# Define the shared tool pool
SHARED_TOOLS = {
    "web_search": client.get_tool("web-search-v2"),
    "code_executor": client.get_tool("sandboxed-code-executor"),
    "file_reader": client.get_tool("file-reader"),
    "sentiment_analyzer": client.get_tool("sentiment-analysis"),
    "code_reviewer": client.get_tool("static-code-analysis"),
    "summarizer": client.get_tool("text-summarizer"),
}

def get_tools_for_role(role: str) -> list:
    """Return the subset of tools appropriate for a given agent role."""
    role_tools = {
        "researcher": ["web_search", "file_reader", "summarizer"],
        "coder": ["code_executor", "file_reader", "web_search"],
        "reviewer": ["code_reviewer", "sentiment_analyzer", "summarizer"],
    }
    return [SHARED_TOOLS[t] for t in role_tools.get(role, [])]
This pattern ensures every agent pulls tools from the same verified source. No tool duplication, no version conflicts, no unverified tools sneaking into the pipeline.
Building the Three Specialized Agents
Now let us build the three agents. Each agent is a class that wraps an LLM with a specific system prompt and a subset of shared tools.
Agent 1: The Researcher
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_openai import ChatOpenAI
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder

class ResearcherAgent:
    def __init__(self, shared_tools):
        self.llm = ChatOpenAI(model="gpt-4o", temperature=0)
        self.tools = get_tools_for_role("researcher")
        self.prompt = ChatPromptTemplate.from_messages([
            ("system", """You are a research specialist. Your job is to:
1. Search for relevant information on a given topic
2. Read and analyze source documents
3. Produce structured research summaries
Always cite your sources. Focus on accuracy over speed."""),
            ("human", "{input}"),
            MessagesPlaceholder(variable_name="agent_scratchpad"),
        ])
        self.agent = create_openai_tools_agent(self.llm, self.tools, self.prompt)
        self.executor = AgentExecutor(agent=self.agent, tools=self.tools)

    def research(self, topic: str) -> str:
        # AgentExecutor.invoke returns a dict; "output" holds the final answer
        return self.executor.invoke({"input": f"Research this topic: {topic}"})["output"]
Agent 2: The Coder
class CoderAgent:
    def __init__(self, shared_tools):
        self.llm = ChatOpenAI(model="gpt-4o", temperature=0)
        self.tools = get_tools_for_role("coder")
        self.prompt = ChatPromptTemplate.from_messages([
            ("system", """You are a coding specialist. Your job is to:
1. Write clean, well-documented code based on specifications
2. Execute code in a sandboxed environment to verify it works
3. Fix any errors found during execution
Always include error handling and type hints."""),
            ("human", "{input}"),
            MessagesPlaceholder(variable_name="agent_scratchpad"),
        ])
        self.agent = create_openai_tools_agent(self.llm, self.tools, self.prompt)
        self.executor = AgentExecutor(agent=self.agent, tools=self.tools)

    def code(self, spec: str) -> str:
        # Extract the final answer from the executor's result dict
        return self.executor.invoke({"input": f"Write code for: {spec}"})["output"]
Agent 3: The Reviewer
class ReviewerAgent:
    def __init__(self, shared_tools):
        self.llm = ChatOpenAI(model="gpt-4o", temperature=0)
        self.tools = get_tools_for_role("reviewer")
        self.prompt = ChatPromptTemplate.from_messages([
            ("system", """You are a code review specialist. Your job is to:
1. Analyze code for bugs, security issues, and style problems
2. Check sentiment and tone of documentation
3. Produce actionable review summaries
Be constructive but thorough. Flag security issues as priority."""),
            ("human", "{input}"),
            MessagesPlaceholder(variable_name="agent_scratchpad"),
        ])
        self.agent = create_openai_tools_agent(self.llm, self.tools, self.prompt)
        self.executor = AgentExecutor(agent=self.agent, tools=self.tools)

    def review(self, code: str) -> str:
        # Extract the final answer from the executor's result dict
        return self.executor.invoke({"input": f"Review this code:\n{code}"})["output"]
Orchestrating the Multi-Agent Workflow
With the agents defined, we need an orchestrator that coordinates the workflow. Here is a simple sequential orchestrator:
class MultiAgentOrchestrator:
    def __init__(self):
        self.researcher = ResearcherAgent(SHARED_TOOLS)
        self.coder = CoderAgent(SHARED_TOOLS)
        self.reviewer = ReviewerAgent(SHARED_TOOLS)

    def execute(self, task: str) -> dict:
        # Step 1: Research
        research_results = self.researcher.research(task)

        # Step 2: Code based on research
        code_spec = f"""Based on this research:\n{research_results}
Write an implementation for: {task}"""
        code_results = self.coder.code(code_spec)

        # Step 3: Review the code
        review_results = self.reviewer.review(code_results)

        return {
            "research": research_results,
            "code": code_results,
            "review": review_results,
        }

# Run the workflow
orchestrator = MultiAgentOrchestrator()
result = orchestrator.execute("Build a rate limiter with sliding window")
print(result)
Notice that all three agents share the same underlying tool registry. The researcher and the coder both have access to web_search and file_reader, but they use them in different contexts with different prompts. The tools themselves remain consistent — same verification status, same API contracts, same security guarantees.
Advanced: Dynamic Tool Discovery
Static tool assignment works for simple workflows, but production systems often need agents to discover tools dynamically. Here is how to implement dynamic tool discovery using the AgentNode search API:
class DynamicToolAgent:
    def __init__(self, role: str):
        self.client = AgentNodeClient()
        self.role = role
        self.active_tools = {}

    def discover_tools(self, query: str) -> list:
        """Search the registry for tools matching a query."""
        results = self.client.search(
            query=query,
            verified_only=True,
            min_trust_score=0.8,
            limit=5,
        )
        for tool in results:
            self.active_tools[tool.name] = tool
        return results

    def execute_with_discovery(self, task: str):
        # First, discover relevant tools
        tools = self.discover_tools(task)
        # Then execute the task with discovered tools
        # (_run wraps your framework's agent executor -- not shown here)
        return self._run(task, tools)
You can browse shared agent tools on AgentNode to see what is available for your multi-agent workflows. Every tool in the registry comes with standardized definitions that work across frameworks.
Using AgentNode with CrewAI for Multi-Agent Systems
If you prefer a framework purpose-built for multi-agent coordination, you can build agent crews with CrewAI while still using AgentNode as your shared tool layer. CrewAI handles the orchestration, role assignment, and inter-agent communication, while AgentNode provides the verified tool registry.
from crewai import Agent, Task, Crew
from agentnode.integrations.crewai import AgentNodeCrewAITools

# Load tools from AgentNode into CrewAI format
tools = AgentNodeCrewAITools.load(["web-search-v2", "code-executor", "summarizer"])

researcher = Agent(
    role="Senior Research Analyst",
    goal="Find comprehensive information on the given topic",
    tools=tools.for_role("researcher"),
)

coder = Agent(
    role="Software Engineer",
    goal="Write production-quality code based on research",
    tools=tools.for_role("coder"),
)

crew = Crew(agents=[researcher, coder], tasks=[...], verbose=True)
result = crew.kickoff()
For a deeper dive into how different agent frameworks handle multi-agent coordination, you can compare agent frameworks including LangChain, CrewAI, and AutoGen side by side.
Security Considerations for Shared Tool Access
When multiple agents share tools, the stakes for security rise sharply: a compromised tool affects not one agent, but every agent in the system.
Principle of Least Privilege
Only give each agent the tools it actually needs. The get_tools_for_role() pattern we implemented earlier is a basic form of this. In production, enforce this at the registry level with API key scoping.
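As a client-side illustration of the idea, the sketch below wraps tool lookup in an allow-list check. In production the check belongs in the registry or the API-key scope, not in the client, and all names here are hypothetical.

```python
# Role -> allowed tool names (mirrors the get_tools_for_role mapping)
ROLE_SCOPES = {
    "researcher": {"web_search", "file_reader", "summarizer"},
    "reviewer": {"code_reviewer", "sentiment_analyzer", "summarizer"},
}

def scoped_get_tool(role: str, tool_name: str, shared_tools: dict):
    """Refuse any tool outside the role's allow-list."""
    allowed = ROLE_SCOPES.get(role, set())
    if tool_name not in allowed:
        raise PermissionError(f"{role} is not permitted to use {tool_name}")
    return shared_tools[tool_name]

tools = {"web_search": "search-v2", "code_executor": "exec-v1"}
print(scoped_get_tool("researcher", "web_search", tools))  # allowed
try:
    scoped_get_tool("researcher", "code_executor", tools)  # denied
except PermissionError as e:
    print(e)
```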
Tool Verification Is Non-Negotiable
In a multi-agent system, an unverified tool is an attack vector that multiplies across every agent that uses it. Always filter for verified tools with trust scores above your threshold. Refer to the AgentNode SDK docs for details on enforcing verification in your tool queries.
Audit Logging
Log every tool invocation with the agent identity, input parameters, and output. When something goes wrong in a multi-agent system, you need to trace exactly which agent called which tool with which arguments.
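One lightweight way to get this trail is a decorator that wraps each tool callable and emits a structured log line per invocation. The tool interface below (a plain callable taking the query string) is an assumption for the sketch.

```python
import functools
import json
import logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("tool_audit")

def audited(tool_name: str, fn):
    """Wrap a tool so every call logs agent identity, inputs, and output."""
    @functools.wraps(fn)
    def wrapper(agent_id: str, *args, **kwargs):
        result = fn(*args, **kwargs)
        audit.info(json.dumps({
            "agent": agent_id,
            "tool": tool_name,
            "args": args,
            "kwargs": kwargs,
            # Truncate output so the log stays readable
            "output": str(result)[:200],
        }))
        return result
    return wrapper

search = audited("web_search", lambda q: f"results for {q}")
print(search("researcher-1", "rate limiters"))
```

Because the wrapper takes `agent_id` as its first argument, the log can always answer "which agent called which tool with which arguments."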
Performance Optimization
Multi-agent systems can be slow if you are not careful. Here are the optimizations that matter most:
- Tool caching — cache tool definitions locally instead of fetching them on every invocation
- Parallel execution — run independent agents concurrently using asyncio or threading
- Connection pooling — share HTTP connections across agents when calling the tool registry
- Selective loading — only load tools that a specific workflow needs, not the entire registry
- Result caching — if two agents need the same web search result, cache it at the orchestrator level
import asyncio

async def parallel_research_and_review(orchestrator, task):
    """Run research and reviewer tool pre-loading in parallel."""
    # The agent methods are synchronous, so hand them to worker threads
    research_task = asyncio.create_task(
        asyncio.to_thread(orchestrator.researcher.research, task)
    )
    # Pre-load reviewer tools while research runs
    # (preload_tools is assumed to exist on your reviewer agent)
    review_prep = asyncio.create_task(
        asyncio.to_thread(orchestrator.reviewer.preload_tools)
    )
    research_results, _ = await asyncio.gather(research_task, review_prep)
    return research_results
Common Pitfalls and How to Avoid Them
- Tool version conflicts — two agents using different versions of the same tool. Solution: pin tool versions in your shared registry configuration.
- Circular dependencies — Agent A waits for Agent B, which waits for Agent A. Solution: use directed acyclic graph (DAG) orchestration.
- Unbounded tool calls — an agent stuck in a loop calling the same tool. Solution: set max iteration limits on every agent executor.
- Missing error propagation — one agent fails silently, corrupting downstream agents. Solution: implement health checks and error boundaries between agent steps.
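The last pitfall, silent failure, is worth a concrete sketch: wrap each pipeline step in an error boundary that converts exceptions and empty outputs into a labeled error instead of passing garbage downstream. The step functions here are stand-ins for real agent calls.

```python
class StepError(Exception):
    """Raised when a pipeline step fails, tagged with the step name."""
    def __init__(self, step: str, cause: Exception):
        super().__init__(f"step '{step}' failed: {cause}")
        self.step = step

def run_step(name: str, fn, *args):
    try:
        result = fn(*args)
    except Exception as exc:
        raise StepError(name, exc) from exc
    if not result:
        # An empty result is treated as a failure, not silently forwarded
        raise StepError(name, ValueError("empty result"))
    return result

def flaky_research(topic):
    return ""  # simulates an agent that silently returned nothing

try:
    run_step("research", flaky_research, "rate limiters")
except StepError as e:
    print(e)  # step 'research' failed: empty result
```

An orchestrator built from `run_step` calls fails loudly at the exact step that broke, which is what you want when debugging a multi-agent trace.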
Frequently Asked Questions
What is a multi-agent system?
A multi-agent system is an architecture where multiple autonomous AI agents collaborate to accomplish complex tasks. Each agent specializes in a specific role, such as research, coding, or review, and they coordinate through shared resources like tool registries, message queues, or orchestrator agents. Multi-agent systems are more scalable, resilient, and maintainable than monolithic single-agent designs.
How do agents share tools?
Agents share tools through a centralized tool registry — a service that stores tool definitions, handles authentication, and provides consistent APIs across agents. Each agent queries the registry for the tools it needs, receives standardized tool definitions, and executes them. AgentNode serves as this shared layer, providing verified tools that work identically regardless of which agent or framework is calling them.
What is the best framework for multi-agent systems?
The best framework depends on your use case. CrewAI excels at role-based agent teams with structured workflows. AutoGen is strong for conversational multi-agent debates. LangGraph provides the most flexibility for custom orchestration patterns. All three work well with AgentNode as the shared tool layer. For a detailed breakdown, see our guide that lets you compare agent frameworks head to head.
LLM Runtime: Let the Model Handle It
If your agent uses OpenAI or Anthropic tool calling, AgentNodeRuntime handles tool registration, system prompt injection, and the tool loop automatically. The LLM discovers, installs, and runs AgentNode capabilities on its own — no hardcoded tool calls needed.
from openai import OpenAI
from agentnode_sdk import AgentNodeRuntime

runtime = AgentNodeRuntime()
result = runtime.run(
    provider="openai",
    client=OpenAI(),
    model="gpt-4o",
    messages=[{"role": "user", "content": "your task here"}],
)
print(result.content)
The Runtime registers 5 meta-tools (agentnode_capabilities, agentnode_search, agentnode_install, agentnode_run, agentnode_acquire) that let the LLM search the registry, install packages, and execute tools autonomously. Works with Anthropic too — just change provider="anthropic" and pass an Anthropic client.
See the LLM Runtime documentation for the full API reference, trust levels, and manual tool calling.