
How to Build an AI Agent That Finds Its Own Tools

Most AI agents are limited to the tools you hardcode at build time. This tutorial shows you how to build an agent that uses AgentNode's resolve_and_install API to discover and install tools at runtime — starting from zero.

By agentnode

Every AI agent tutorial starts the same way: here are your tools, hardcoded in a list, defined at build time. A web scraper. A calculator. Maybe a database connector. The agent is smart about when to use them, but it can never reach beyond them. Hit a task that requires a tool you did not anticipate? The agent fails.

This is a fundamental limitation, and it mirrors a problem that was solved decades ago in traditional software: package managers. Developers do not ship applications with every library pre-installed. They declare dependencies and resolve them at build time — or even at runtime. AI agents should work the same way.

In this tutorial, you will build an AI agent that starts with zero domain-specific tools and acquires capabilities autonomously as it encounters tasks that require them. The agent uses AgentNode's resolve_and_install API to search a verified tool registry, evaluate options, install what it needs, and execute — all within a single conversation turn.

The Architecture: Agent + Tool Resolution Layer

The agent we are building has three components:

  1. The reasoning core — an LLM (GPT-4 or Claude) that decides what to do next.
  2. The tool resolution layer — a set of meta-tools that let the agent search for, evaluate, and install capabilities from AgentNode.
  3. The execution layer — dynamically loaded tools that the agent acquires at runtime.

The key insight is that the meta-tools (search, evaluate, install) are the only tools hardcoded into the agent. Everything else is discovered dynamically. If you want to understand the broader platform that makes this possible, start with what AgentNode is and how it works.

Why This Matters

A tool-resolving agent has several advantages over a static agent:

  • Unbounded capabilities — the agent's ability set is limited only by what exists in the registry, not by what you anticipated at build time.
  • Smaller footprint — no need to pre-load dozens of tools the agent might never use. Install on demand, keep the context window lean.
  • Automatic upgrades — when a better tool is published, the agent discovers it naturally through search ranking.
  • Graceful degradation — if the agent cannot find a tool for a task, it can tell the user what capability is missing instead of silently failing.

Prerequisites

You will need:

  • Python 3.10+
  • An AgentNode API key (free tier includes 100 tool resolutions per month)
  • An OpenAI or Anthropic API key for the reasoning core
  • The agentnode Python package
pip install agentnode-sdk openai

Step 1: Define the Meta-Tools

The agent starts with three hardcoded meta-tools — the tools it uses to find other tools (Step 2 adds a fourth, a convenience wrapper over these three):

from agentnode_sdk import AgentNodeClient

client = AgentNodeClient(api_key="your-agentnode-api-key")

def search_tools(query: str, max_results: int = 5) -> list:
    """Search AgentNode for tools matching a capability description.
    
    Args:
        query: Natural language description of needed capability
        max_results: Maximum number of results to return
    
    Returns:
        List of tool summaries with name, description, trust score
    """
    results = client.search(query, limit=max_results, verified_only=True)
    return [
        {
            "name": r.name,
            "description": r.description,
            "trust_score": r.trust_score,
            "trust_level": r.trust_level,
            "capabilities": r.capabilities
        }
        for r in results
    ]

def evaluate_tool(tool_name: str) -> dict:
    """Get detailed information about a tool before installing it.
    
    Args:
        tool_name: The name of the tool to evaluate
    
    Returns:
        Detailed tool info including permissions, dependencies, scores
    """
    info = client.get_tool_info(tool_name)
    return {
        "name": info.name,
        "description": info.description,
        "version": info.version,
        "trust_score": info.trust_score,
        "permissions": info.permissions,
        "dependencies": info.dependencies,
        "parameter_schema": info.parameters,
        "author": info.author,
        "last_verified": info.last_verified_at
    }

def install_tool(tool_name: str) -> dict:
    """Install a tool from AgentNode and make it available for use.
    
    Args:
        tool_name: The name of the tool to install
    
    Returns:
        Installation result with tool interface details
    """
    tool = client.install(tool_name)
    return {
        "status": "installed",
        "name": tool.name,
        "description": tool.description,
        "parameters": tool.parameters,
        "ready": True
    }

These three functions — search, evaluate, install — give the agent everything it needs to acquire new capabilities. The search function uses AgentNode's semantic search, so the agent can describe what it needs in natural language.
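To make the ranking concrete, here is a small self-contained sketch of how an agent (or your own wrapper code) might filter and order the summaries that search_tools returns. The tool names and trust scores below are made up for illustration:

```python
# Hypothetical search results, shaped like the dicts search_tools returns.
results = [
    {"name": "html-scraper", "trust_score": 88, "trust_level": "verified"},
    {"name": "quick-scrape", "trust_score": 54, "trust_level": "community"},
    {"name": "scrape-pro", "trust_score": 92, "trust_level": "gold"},
]

def rank_candidates(results: list, min_trust_score: int = 70) -> list:
    """Keep only candidates at or above the trust threshold, best first."""
    eligible = [r for r in results if r["trust_score"] >= min_trust_score]
    return sorted(eligible, key=lambda r: r["trust_score"], reverse=True)

print([r["name"] for r in rank_candidates(results)])
# ['scrape-pro', 'html-scraper']
```

Ranking client-side like this also lets you apply a stricter threshold than whatever default the registry search uses.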

Step 2: Build the Resolve-and-Install API

For convenience, AgentNode provides a single resolve_and_install method that combines search, evaluation, and installation into one call. This is useful for agents that want to move fast:

def resolve_and_install(capability: str, min_trust_score: int = 70) -> dict:
    """Find and install the best tool for a capability in one step.
    
    Args:
        capability: Natural language description of needed capability
        min_trust_score: Minimum trust score to accept (default 70)
    
    Returns:
        Installed tool details or error if no suitable tool found
    """
    result = client.resolve_and_install(
        capability=capability,
        min_trust_score=min_trust_score,
        auto_select=True  # Pick the highest-scoring match
    )
    
    if result.status == "installed":
        return {
            "status": "installed",
            "tool_name": result.tool.name,
            "trust_score": result.tool.trust_score,
            "parameters": result.tool.parameters
        }
    else:
        return {
            "status": "not_found",
            "message": f"No verified tool found for: {capability}",
            "suggestions": result.partial_matches
        }
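If you want to reason about the selection step without the SDK in the loop (for testing, say), the auto_select behavior can be approximated in a few lines. Everything here — candidate data included — is a hedged sketch, not AgentNode's actual ranking logic:

```python
def resolve_locally(capability: str, candidates: list,
                    min_trust_score: int = 70) -> dict:
    """Pick the highest-scoring candidate above the threshold,
    or report what almost matched (hypothetical stand-in logic)."""
    eligible = [c for c in candidates if c["trust_score"] >= min_trust_score]
    if eligible:
        best = max(eligible, key=lambda c: c["trust_score"])
        return {"status": "installed", "tool_name": best["name"]}
    return {
        "status": "not_found",
        "message": f"No verified tool found for: {capability}",
        "suggestions": [c["name"] for c in candidates],
    }

# A single candidate below the threshold exercises the not_found branch.
candidates = [{"name": "csv-writer", "trust_score": 65}]
print(resolve_locally("CSV file creation", candidates))
# status 'not_found', suggestions ['csv-writer']
```

The suggestions field mirrors the partial_matches behavior above: even on failure, the agent learns what nearly qualified and can tell the user.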

Step 3: The Agent Loop

Now we build the core agent loop. It follows the standard ReAct pattern, with one critical addition: the agent can expand its own tool set mid-conversation.

import json
import openai

ai_client = openai.OpenAI(api_key="your-openai-key")

class ToolResolvingAgent:
    def __init__(self):
        self.an_client = AgentNodeClient(api_key="your-agentnode-key")
        self.installed_tools = {}  # name -> tool object
        self.meta_tools = {
            "search_tools": search_tools,
            "evaluate_tool": evaluate_tool,
            "install_tool": self._install_and_register,
            "resolve_and_install": self._resolve_and_register
        }
    
    def _install_and_register(self, tool_name: str) -> dict:
        """Install a tool and register it for future use."""
        tool = self.an_client.install(tool_name)
        self.installed_tools[tool.name] = tool
        return {
            "status": "installed",
            "name": tool.name,
            "parameters": tool.parameters
        }
    
    def _resolve_and_register(self, capability: str,
                               min_trust_score: int = 70) -> dict:
        """Resolve and install, then register for use."""
        result = self.an_client.resolve_and_install(
            capability=capability,
            min_trust_score=min_trust_score
        )
        if result.status == "installed":
            self.installed_tools[result.tool.name] = result.tool
            return {
                "status": "installed",
                "tool_name": result.tool.name,
                "trust_score": result.tool.trust_score
            }
        return {"status": "not_found", "message": str(result.message)}
    
    def _get_all_tool_schemas(self) -> list:
        """Build OpenAI tool schemas for all available tools."""
        schemas = []
        # Meta-tools (always available)
        meta_definitions = [
            {
                "name": "search_tools",
                "description": "Search AgentNode registry for tools matching a capability",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "query": {"type": "string", "description": "Capability to search for"},
                        "max_results": {"type": "integer", "default": 5}
                    },
                    "required": ["query"]
                }
            },
            {
                "name": "evaluate_tool",
                "description": "Get detailed info about a tool before installing",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "tool_name": {"type": "string"}
                    },
                    "required": ["tool_name"]
                }
            },
            {
                "name": "install_tool",
                "description": "Install a tool from AgentNode to use it",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "tool_name": {"type": "string"}
                    },
                    "required": ["tool_name"]
                }
            },
            {
                "name": "resolve_and_install",
                "description": "Find and install the best tool for a capability in one step",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "capability": {"type": "string"},
                        "min_trust_score": {"type": "integer", "default": 70}
                    },
                    "required": ["capability"]
                }
            }
        ]
        for defn in meta_definitions:
            schemas.append({"type": "function", "function": defn})
        
        # Dynamically installed tools
        for name, tool in self.installed_tools.items():
            schemas.append({
                "type": "function",
                "function": {
                    "name": name.replace("-", "_"),
                    "description": tool.description,
                    "parameters": tool.parameters
                }
            })
        return schemas
    
    def _execute_tool(self, name: str, arguments: dict) -> str:
        """Execute a meta-tool or installed tool."""
        if name in self.meta_tools:
            result = self.meta_tools[name](**arguments)
        elif name in self.installed_tools:
            result = self.installed_tools[name].execute(**arguments)
        elif name.replace("_", "-") in self.installed_tools:
            result = self.installed_tools[name.replace("_", "-")].execute(**arguments)
        else:
            result = {"error": f"Unknown tool: {name}"}
        return json.dumps(result)
    
    def run(self, user_message: str, max_turns: int = 20) -> str:
        """Run the agent on a user message."""
        messages = [
            {"role": "system", "content": (
                "You are an AI agent with access to a tool registry. "
                "You start with no domain tools, but you can search for, "
                "evaluate, and install tools from AgentNode as needed. "
                "When you encounter a task requiring a capability you "
                "don't have, use resolve_and_install or search_tools "
                "to find and install what you need. Always prefer "
                "verified tools with high trust scores."
            )},
            {"role": "user", "content": user_message}
        ]
        
        for turn in range(max_turns):
            tools = self._get_all_tool_schemas()
            response = ai_client.chat.completions.create(
                model="gpt-4",
                messages=messages,
                tools=tools,
                tool_choice="auto"
            )
            
            message = response.choices[0].message
            messages.append(message)
            
            if not message.tool_calls:
                return message.content
            
            for tool_call in message.tool_calls:
                result = self._execute_tool(
                    tool_call.function.name,
                    json.loads(tool_call.function.arguments)
                )
                messages.append({
                    "role": "tool",
                    "tool_call_id": tool_call.id,
                    "content": result
                })
        
        return "Agent reached maximum turns."
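One subtle detail in the class above: registry tool names can contain hyphens, which _get_all_tool_schemas rewrites to underscores for the function-calling schema, so _execute_tool has to map names back before looking them up. A minimal, self-contained round trip of that normalization (the tool names are made up):

```python
def to_schema_name(registry_name: str) -> str:
    """Registry name -> function-calling schema name."""
    return registry_name.replace("-", "_")

def to_registry_name(schema_name: str, installed: dict):
    """Map a schema name back to whichever registry key actually exists,
    mirroring the two-branch lookup in _execute_tool."""
    if schema_name in installed:
        return schema_name
    dashed = schema_name.replace("_", "-")
    return dashed if dashed in installed else None

installed = {"web-scraper": object(), "csv_writer": object()}
assert to_registry_name(to_schema_name("web-scraper"), installed) == "web-scraper"
assert to_registry_name(to_schema_name("csv_writer"), installed) == "csv_writer"
assert to_registry_name("unknown_tool", installed) is None
```

Note the caveat: a name containing both hyphens and underscores could collide after normalization, which is why _execute_tool checks the literal name before trying the dashed form.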

Step 4: Watch It Work

Here is the agent in action. Notice how it starts with zero tools and acquires capabilities based on the task:

agent = ToolResolvingAgent()

# Task that requires tools the agent does not have yet
result = agent.run(
    "Scrape the pricing page of stripe.com, extract the plan names "
    "and prices, and save the result as a CSV file."
)
print(result)

# The agent will:
# 1. Realize it needs a web scraper -> resolve_and_install("web scraping")
# 2. Realize it needs HTML parsing -> resolve_and_install("HTML content extraction")
# 3. Realize it needs CSV writing -> resolve_and_install("CSV file creation")
# 4. Execute the tools in sequence to complete the task

After this conversation, agent.installed_tools contains three tools that were not there at the start. If the next task also needs web scraping, the agent already has it loaded — no redundant installations.
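You can make that reuse explicit. Here is a small sketch of a session-level capability cache that resolves each capability description only once; the resolver passed in is a stand-in lambda, not the real SDK call:

```python
class ToolCache:
    """Resolve each capability at most once per session (illustrative)."""

    def __init__(self):
        self.by_capability = {}   # capability description -> resolved tool
        self.resolve_calls = 0    # counts actual resolver invocations

    def get_or_resolve(self, capability: str, resolver) -> dict:
        if capability not in self.by_capability:
            self.resolve_calls += 1
            self.by_capability[capability] = resolver(capability)
        return self.by_capability[capability]

cache = ToolCache()
fake_resolver = lambda cap: {"tool_name": f"tool-for-{cap}"}
cache.get_or_resolve("web scraping", fake_resolver)
cache.get_or_resolve("web scraping", fake_resolver)  # served from cache
assert cache.resolve_calls == 1
```

In the real agent, the installed_tools dict already plays this role for exact tool names; keying a cache by capability description additionally saves the search round trip when the agent phrases the same need the same way twice.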

Step 5: Add Safety Guardrails

An agent that installs tools autonomously needs guardrails. Here are the safety mechanisms you should implement:

Trust Score Thresholds

MINIMUM_TRUST_SCORE = 70  # Only install Verified or Gold tools
BLOCKED_PERMISSIONS = ["system_exec", "env_write", "network_listen"]

# Add this as a method on ToolResolvingAgent
def safe_install(self, tool_name: str) -> dict:
    """Install with safety checks."""
    info = self.an_client.get_tool_info(tool_name)
    
    if info.trust_score < MINIMUM_TRUST_SCORE:
        return {
            "status": "blocked",
            "reason": f"Trust score {info.trust_score} below minimum {MINIMUM_TRUST_SCORE}"
        }
    
    dangerous = [p for p in info.permissions if p in BLOCKED_PERMISSIONS]
    if dangerous:
        return {
            "status": "blocked",
            "reason": f"Tool requests blocked permissions: {dangerous}"
        }
    
    return self._install_and_register(tool_name)

Installation Budget

MAX_INSTALLS_PER_SESSION = 10

def _install_and_register(self, tool_name: str) -> dict:
    if len(self.installed_tools) >= MAX_INSTALLS_PER_SESSION:
        return {
            "status": "blocked",
            "reason": "Maximum tool installations reached for this session"
        }
    # ... proceed with installation

Human Approval Mode

For high-stakes environments, add a human approval step before installation:

def _install_with_approval(self, tool_name: str) -> dict:
    info = self.an_client.get_tool_info(tool_name)
    print(f"\nAgent wants to install: {info.name}")
    print(f"  Trust score: {info.trust_score}")
    print(f"  Permissions: {info.permissions}")
    approval = input("Allow? (y/n): ")
    if approval.lower() == "y":
        return self._install_and_register(tool_name)
    return {"status": "denied", "reason": "User denied installation"}

Advanced: Capability Caching and Warm Starts

For production deployments, you can cache the agent's resolved tool set between sessions so it does not start from zero every time:

import json
from pathlib import Path

CACHE_FILE = Path("tool_cache.json")

def save_tool_manifest(self):
    """Save the current tool set for next session."""
    manifest = {
        name: {"version": tool.version, "trust_score": tool.trust_score}
        for name, tool in self.installed_tools.items()
    }
    CACHE_FILE.write_text(json.dumps(manifest))

def warm_start(self):
    """Reload tools from previous session."""
    if CACHE_FILE.exists():
        manifest = json.loads(CACHE_FILE.read_text())
        for name in manifest:
            try:
                self._install_and_register(name)
            except Exception:
                pass  # Tool may have been removed or updated

This pattern is especially useful for agents with predictable task patterns. The first session discovers tools; subsequent sessions start warm. For a refresher on the foundational concepts, see getting started with AgentNode.

Limitations and Honest Tradeoffs

Autonomous tool resolution is powerful, but it is not magic. Be aware of these limitations:

  • Latency — searching and installing tools adds seconds to the agent's response time. For latency-sensitive applications, pre-load commonly needed tools.
  • Context window cost — each installed tool's schema consumes tokens. With 15+ tools, the schema overhead becomes significant. Consider unloading tools that are no longer needed.
  • Search quality dependency — the agent is only as good as the registry's search. Vague capability descriptions may return irrelevant results.
  • Trust is not certainty — a high trust score means the tool passed automated verification, not that it is perfect. Edge cases exist.
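The context-window tradeoff above can be enforced mechanically. Here is a rough sketch of evicting least-recently-used tool schemas once their combined size exceeds a token budget; the four-characters-per-token estimate is a crude stand-in, not how any provider actually counts tokens:

```python
from collections import OrderedDict
import json

def estimate_tokens(schema: dict) -> int:
    """Very rough heuristic: ~4 characters per token."""
    return len(json.dumps(schema)) // 4

def evict_to_budget(tools: OrderedDict, budget: int) -> list:
    """Drop least-recently-used schemas until the total fits the budget.
    `tools` maps tool name -> schema, ordered oldest-use first."""
    evicted = []
    while tools and sum(estimate_tokens(s) for s in tools.values()) > budget:
        name, _ = tools.popitem(last=False)  # pop the oldest entry
        evicted.append(name)
    return evicted

# Five hypothetical tool schemas, each with a bulky parameter block.
tools = OrderedDict(
    (f"tool-{i}", {"name": f"tool-{i}", "parameters": {"doc": "x" * 2000}})
    for i in range(5)
)
evicted = evict_to_budget(tools, budget=2000)
assert sum(estimate_tokens(s) for s in tools.values()) <= 2000
```

To track recency, move a tool to the end of the OrderedDict each time it executes; eviction then removes whatever the agent has used least recently, and the agent can always reinstall an evicted tool on demand.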

For a deeper understanding of how tool verification and trust scoring work, consult the AgentNode API documentation and browse the agent skill catalog to see trust scores in practice.

Can AI agents find their own tools?

Yes. Using a tool registry like AgentNode, AI agents can search for capabilities by natural language description, evaluate available options based on trust scores and functionality, and install tools at runtime. The agent in this tutorial starts with zero domain tools and acquires capabilities autonomously as it encounters tasks that need them.

What is agent tool resolution?

Agent tool resolution is the process by which an AI agent identifies a capability gap (a task it cannot perform with its current tools), searches a registry for a matching tool, evaluates its suitability and safety, and installs it for use. It is analogous to dependency resolution in traditional package managers like pip or npm, but happens at runtime during agent execution.

How does AgentNode's resolve API work?

The resolve_and_install API accepts a natural language capability description and an optional minimum trust score. It performs a semantic search against AgentNode's verified tool catalog, ranks results by relevance and trust score, selects the best match, installs it in the calling environment, and returns the tool interface ready for use. The entire process typically completes in under two seconds.

Is autonomous tool installation safe?

It can be, with proper guardrails. AgentNode's verification pipeline ensures that tools have been tested in a sandboxed environment before they are available for installation. Developers should also implement trust score thresholds, permission blocklists, installation budgets, and optionally human approval steps. The combination of registry-level verification and agent-level guardrails makes autonomous installation practical for production use.

LLM Runtime: Let the Model Handle It

If your agent uses OpenAI or Anthropic tool calling, AgentNodeRuntime handles tool registration, system prompt injection, and the tool loop automatically. The LLM discovers, installs, and runs AgentNode capabilities on its own — no hardcoded tool calls needed.

from openai import OpenAI
from agentnode_sdk import AgentNodeRuntime

runtime = AgentNodeRuntime()

result = runtime.run(
    provider="openai",
    client=OpenAI(),
    model="gpt-4o",
    messages=[{"role": "user", "content": "your task here"}],
)
print(result.content)

The Runtime registers 5 meta-tools (agentnode_capabilities, agentnode_search, agentnode_install, agentnode_run, agentnode_acquire) that let the LLM search the registry, install packages, and execute tools autonomously. Works with Anthropic too — just change provider="anthropic" and pass an Anthropic client.

See the LLM Runtime documentation for the full API reference, trust levels, and manual tool calling.
