Concepts · 14 min read

What Is ANP? The Open Standard for AI Agent Capabilities

ANP (AgentNode Package) is the open package format that gives AI agents a universal way to discover, verify, and use tools across any framework. Learn how the manifest works, why typed schemas matter, and how ANP compares to existing approaches.

By agentnode

The Problem: AI Agent Tools Are Fragmented

Every AI agent framework has its own way of defining tools. LangChain has Tool and StructuredTool. CrewAI has its own tool decorator. AutoGPT has plugins. MCP has a server protocol. OpenAI has function calling schemas. If you build a tool for one framework, you rewrite it for every other framework your team uses.

This fragmentation creates real costs. A PDF extraction tool that works in LangChain cannot be dropped into a CrewAI agent without rewriting the interface. A web search tool built for AutoGPT cannot be used in a vanilla Python agent without an adapter layer. Every tool author faces the same choice: pick one framework and lock in, or maintain multiple implementations of the same logic.

The deeper problem is that agents themselves cannot reason about tools they have never seen. Without a standard way to describe what a tool accepts, what it returns, and what permissions it needs, an agent cannot autonomously discover and use new capabilities. It needs a human to wire everything together.

ANP (AgentNode Package) is the open format designed to solve this. It defines a single, machine-readable manifest that describes everything an agent needs to know about a capability: what it does, how to call it, what data goes in, what data comes out, and what system access it requires. One package, any agent framework.

What ANP Is

ANP stands for AgentNode Package. It is a package format specification, currently at version 0.2, that defines how AI agent capabilities are described, distributed, and consumed. Every ANP package contains:

  • A manifest.yaml file that describes the package metadata, capabilities, runtime requirements, framework compatibility, and security permissions
  • Python source code implementing the actual tool logic
  • Typed input and output schemas for every tool, defined in JSON Schema
  • A test suite that proves the tools work

The manifest is the key innovation. It is designed for machine consumption — agents can read a manifest, understand exactly what a tool does, generate valid inputs, and parse outputs without any human intervention. This is what separates ANP from traditional Python packages where the only "interface contract" is a docstring.

The Manifest: Section by Section

An ANP manifest is a YAML file that lives at the root of every package. Here is a complete example for a multi-tool CSV analysis pack:

manifest_version: "0.2"
package_id: "csv-analyzer-pack"
version: "1.0.0"
name: "CSV Analyzer Pack"
description: "Analyze, filter, and describe CSV files for AI agents"
publisher_slug: "datatools"

capabilities:
  tools:
    - name: "describe_csv"
      description: "Get summary statistics and column info for a CSV file"
      entrypoint: "csv_analyzer_pack.tool:describe"
      input_schema:
        type: object
        properties:
          file_path:
            type: string
            description: "Path to CSV file"
        required: ["file_path"]

    - name: "filter_csv"
      description: "Filter rows in a CSV file based on a query"
      entrypoint: "csv_analyzer_pack.tool:filter_rows"
      input_schema:
        type: object
        properties:
          file_path:
            type: string
          query:
            type: string
            description: "Natural language filter query"
        required: ["file_path", "query"]

runtime:
  language: python
  min_version: "3.10"
  dependencies:
    - pandas>=2.0

frameworks: [langchain, crewai, generic]

permissions:
  network: none
  filesystem: workspace_read
  code_execution: none
  data_access: input_only

Let's walk through each section.

Package Identity

The top-level fields establish what the package is:

  • manifest_version — currently "0.2", tells consumers which schema rules to apply
  • package_id — a globally unique slug (lowercase, hyphens, 3-60 characters) that identifies this package across the registry
  • version — semantic version (major.minor.patch), enabling agents to pin or upgrade versions
  • name — human-readable display name
  • description — longer explanation of what the package does, used for search and discovery
  • publisher_slug — identifies the publisher, linking the package to a trust profile
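
The identity rules above are mechanically checkable. A minimal sketch, where the regexes are assumptions derived from the constraints stated here rather than from the official ANP spec:

```python
import re

# Assumed from the rules above: lowercase slug, hyphens allowed, 3-60 chars total
PACKAGE_ID_RE = re.compile(r"^[a-z0-9][a-z0-9-]{1,58}[a-z0-9]$")
# Semantic version: major.minor.patch
SEMVER_RE = re.compile(r"^\d+\.\d+\.\d+$")

def check_identity(manifest: dict) -> list[str]:
    """Return a list of identity-field problems (empty means the fields look valid)."""
    errors = []
    if not PACKAGE_ID_RE.match(manifest.get("package_id", "")):
        errors.append("package_id must be a lowercase slug, 3-60 chars")
    if not SEMVER_RE.match(manifest.get("version", "")):
        errors.append("version must be major.minor.patch")
    if not manifest.get("publisher_slug"):
        errors.append("publisher_slug is required")
    return errors
```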

Capabilities: Tools

The capabilities.tools array is where the real value lives. Each tool entry defines:

  • name — a machine-friendly identifier for this specific tool within the pack
  • description — what the tool does, in plain language that an LLM can reason about
  • entrypoint — a module:function reference (e.g., csv_analyzer_pack.tool:describe) that tells the runtime exactly which Python function to call
  • input_schema — a JSON Schema object defining the expected input. Agents use this to generate valid arguments.
  • output_schema — a JSON Schema object defining what the tool returns. Agents use this to parse results.

Tools can also declare capability_ids for semantic discovery. For example, a PDF extraction tool might declare ["pdf_extraction", "text_extraction"], allowing agents to find it by searching for what they need rather than by package name.
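
Semantic discovery can be as simple as building an index from capability_id to tool. A sketch over plain manifest dicts (the shape mirrors the YAML above; only the capability_ids handling is shown):

```python
def index_capabilities(manifests: list[dict]) -> dict[str, list[tuple[str, str]]]:
    """Map each capability_id to the (package_id, tool_name) pairs that declare it."""
    index: dict[str, list[tuple[str, str]]] = {}
    for m in manifests:
        for tool in m.get("capabilities", {}).get("tools", []):
            for cap in tool.get("capability_ids", []):
                index.setdefault(cap, []).append((m["package_id"], tool["name"]))
    return index

manifests = [{
    "package_id": "pdf-reader-pack",
    "capabilities": {"tools": [{
        "name": "extract_pdf",
        "capability_ids": ["pdf_extraction", "text_extraction"],
    }]},
}]
idx = index_capabilities(manifests)
# An agent searching for "pdf_extraction" finds the tool without knowing the package name
print(idx["pdf_extraction"])  # [('pdf-reader-pack', 'extract_pdf')]
```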

Runtime

The runtime section declares what the package needs to run:

  • language — currently python (the MVP focuses on the Python ecosystem)
  • min_version — minimum Python version, e.g., "3.10"
  • dependencies — pip-installable dependencies with version constraints

This is not just informational. The AgentNode verification pipeline actually installs these dependencies in a sandbox and tests whether the package works with them. Declaring incorrect dependencies means a lower verification score.
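
An installer can also pre-check the runtime section against the host interpreter before invoking pip. A small sketch (the field names come from the manifest above; the pre-check logic itself is illustrative, not part of the spec):

```python
import sys

def runtime_satisfied(runtime: dict) -> bool:
    """Check the host Python against the manifest's runtime.language and min_version."""
    if runtime.get("language", "python") != "python":
        return False
    min_version = tuple(int(p) for p in runtime["min_version"].split("."))
    # Compare only as many components as the manifest declares (e.g. major.minor)
    return sys.version_info[: len(min_version)] >= min_version
```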

Frameworks

The frameworks field is a list of compatible agent frameworks: langchain, crewai, autogpt, mcp, generic. A package marked generic works with any framework through the standard load_tool() interface. Framework-specific tags indicate that the package has been tested against that framework's tool loading conventions.

Permissions

The permissions section is one of ANP's most important design decisions. Every package must explicitly declare what system access it requires:

| Permission | Levels | What It Controls |
| --- | --- | --- |
| network | none, restricted, unrestricted | Whether the tool can make HTTP requests and to which domains |
| filesystem | none, temp, workspace_read, workspace_write, any | Whether the tool can read or write files |
| code_execution | none, limited_subprocess, shell | Whether the tool can execute subprocesses |
| data_access | input_only, connected_accounts, persistent | Whether the tool accesses external data sources |

Permissions are not advisory. AgentNode's verification pipeline checks declared permissions against actual behavior, and packages with overly broad or undeclared permissions receive lower trust scores. An agent can use these declarations to enforce its own policy — for example, refusing to install any package that requires unrestricted network access.

Multi-Tool Packs vs. Single-Tool Packs

ANP v0.2 introduced support for multi-tool packs — a single package that exposes multiple related tools. This is a practical decision: many capabilities are naturally grouped. A CSV analysis package might offer describe_csv, filter_csv, and plot_csv. Forcing these into separate packages would create unnecessary overhead.

Multi-Tool Example

In a multi-tool pack, each tool in the capabilities.tools array has its own entrypoint in module:function format:

capabilities:
  tools:
    - name: "describe_csv"
      entrypoint: "csv_analyzer_pack.tool:describe"
      input_schema: { ... }
    - name: "filter_csv"
      entrypoint: "csv_analyzer_pack.tool:filter_rows"
      input_schema: { ... }
    - name: "plot_csv"
      entrypoint: "csv_analyzer_pack.tool:plot"
      input_schema: { ... }

Loading a specific tool from a multi-tool pack uses the tool_name parameter:

from agentnode_sdk.installer import load_tool

describe = load_tool("csv-analyzer-pack", tool_name="describe_csv")
result = describe({"file_path": "data.csv"})

Single-Tool Example

A single-tool pack is simpler. It has one tool in the capabilities.tools array, and can use either a package-level entrypoint or a tool-level one:

capabilities:
  tools:
    - name: "extract_pdf"
      entrypoint: "pdf_reader_pack.tool:extract"
      input_schema:
        type: object
        properties:
          file_path:
            type: string
        required: ["file_path"]

Loading it is straightforward — no tool_name needed:

extract = load_tool("pdf-reader-pack")
result = extract({"file_path": "report.pdf"})

Both patterns use the same load_tool() interface. The calling agent does not need to know whether a package is single-tool or multi-tool — it just asks for what it needs.

Typed Schemas: Why They Matter for Agents

The most consequential design choice in ANP is that every tool has typed input and output schemas defined in JSON Schema. This is not just documentation — it is a machine-readable contract.

Consider what happens when an LLM-based agent encounters a tool it has never used before. Without a schema, the agent has to guess: What parameters does this function accept? Are they required or optional? What types are expected? What does the return value look like? This guessing leads to errors, retries, and wasted tokens.

With ANP schemas, the agent has complete information:

input_schema:
  type: object
  properties:
    file_path:
      type: string
      description: "Path to CSV file"
    query:
      type: string
      description: "Natural language filter query"
  required: ["file_path", "query"]

output_schema:
  type: object
  properties:
    rows:
      type: array
      description: "Filtered rows matching the query"
    total_matches:
      type: integer
      description: "Number of matching rows"

An agent can read this schema and know exactly what dict to construct, which fields are required, and what type of data to expect back. This is the same principle behind OpenAI function calling and MCP tool definitions — but applied as a universal standard that works across all frameworks.

The entrypoint function itself follows a simple contract: it receives a dict matching input_schema and returns a dict matching output_schema. No classes to instantiate, no decorators to apply, no framework-specific base classes to inherit from.
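
Concretely, the filter_csv entrypoint from the earlier manifest could be a plain function like this. This is a sketch, not the real package's code: it uses only the standard library to stay self-contained, and the "natural language" query handling is simplified to a literal column == value comparison:

```python
import csv

def filter_rows(payload: dict) -> dict:
    """Entrypoint contract: receives a dict matching input_schema,
    returns a dict matching output_schema. No base classes, no decorators."""
    # Simplified stand-in for natural-language filtering: "column == value"
    column, value = (part.strip() for part in payload["query"].split("=="))
    with open(payload["file_path"], newline="") as f:
        rows = [row for row in csv.DictReader(f) if row.get(column) == value]
    return {"rows": rows, "total_matches": len(rows)}
```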

Cross-Framework Compatibility

One of ANP's core goals is that a package author writes tool logic once and it works everywhere. Here is what that looks like in practice across different frameworks:

Vanilla Python

from agentnode_sdk.installer import load_tool

extract = load_tool("pdf-reader-pack")
result = extract({"file_path": "report.pdf"})
print(result["text"])

LangChain

from agentnode_langchain import load_agentnode_tool

tool = load_agentnode_tool("pdf-reader-pack")
# Returns a LangChain StructuredTool, ready for use in an agent chain
agent.tools.append(tool)

CrewAI

from agentnode_crewai import load_agentnode_tool

tool = load_agentnode_tool("pdf-reader-pack")
# Returns a CrewAI-compatible tool
agent = Agent(tools=[tool], ...)

MCP

from agentnode_mcp import serve_anp_tools

# Expose ANP packages as MCP tool servers
serve_anp_tools(["pdf-reader-pack", "web-search-pack"])

The key insight is that the tool's logic — the Python function behind the entrypoint — never changes. What changes is the thin adapter layer that translates between ANP's universal dict-in, dict-out contract and each framework's native tool interface. These adapters are provided by framework-specific bridge libraries (agentnode-langchain, agentnode-crewai, agentnode-mcp), not by the package author.
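
What a bridge library does can be shown framework-neutrally: take the tool's metadata plus its dict-in, dict-out entrypoint, and wrap it in whatever object the host framework expects. In this sketch, NativeTool is a stand-in for a real framework tool class (such as a LangChain StructuredTool), not an actual API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class NativeTool:
    """Stand-in for a framework's native tool type."""
    name: str
    description: str
    args_schema: dict
    func: Callable[..., dict]

def adapt(tool_name: str, description: str, input_schema: dict,
          entrypoint: Callable[[dict], dict]) -> NativeTool:
    """Wrap an ANP entrypoint so the framework can call it with keyword args."""
    def call(**kwargs) -> dict:
        # Re-pack the framework's kwargs into ANP's single input dict
        return entrypoint(kwargs)
    return NativeTool(tool_name, description, input_schema, call)
```

The entrypoint itself is untouched; only the thin call wrapper changes per framework.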

How ANP Compares to Existing Approaches

To understand why ANP exists, it helps to compare it against the alternatives that developers use today.

Raw pip Packages

Most Python tools for AI agents are distributed as regular pip packages. This works for installation, but it tells the agent nothing about what the package does. There is no standard way for an agent to discover that a pip package contains a PDF extraction tool, what parameters it accepts, or what permissions it needs. The "interface" is whatever the README says, which is not machine-readable.

ANP builds on top of pip — packages are still installed via pip — but adds a structured manifest layer that makes the package self-describing and machine-consumable.

Framework-Specific Tools

LangChain tools, CrewAI tools, and AutoGPT plugins each define their own way of declaring tools. These work well within their ecosystem, but they are not portable. A LangChain StructuredTool cannot be loaded by CrewAI without an adapter. An AutoGPT plugin cannot be used in a vanilla Python script.

ANP is framework-agnostic by design. The manifest describes the tool in a neutral format, and thin bridge libraries translate it to each framework's conventions. Package authors write code once; the ecosystem handles compatibility.

MCP (Model Context Protocol)

MCP defines a protocol for exposing tools via a server-client model. It solves a related but different problem: how a running LLM session communicates with tool servers. MCP and ANP are complementary. An ANP package can be served as an MCP tool server, and the ANP manifest's typed schemas map directly to MCP's tool definitions. ANP handles packaging, distribution, and verification; MCP handles runtime communication.

OpenAI Function Calling

OpenAI's function calling schema defines how an LLM generates structured arguments for tool calls. ANP's input_schema is directly compatible with this — both use JSON Schema. The difference is that OpenAI function calling only defines the calling convention, not how tools are packaged, distributed, verified, or permissioned. ANP covers the full lifecycle.
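
Since both sides use JSON Schema, mapping an ANP tool entry onto an OpenAI tool definition is a direct field-to-field translation. A sketch (the ANP field names come from the manifest format above; the target shape is OpenAI's documented tools array format):

```python
def to_openai_tool(anp_tool: dict) -> dict:
    """Map an ANP capabilities.tools entry onto an OpenAI function-calling definition."""
    return {
        "type": "function",
        "function": {
            "name": anp_tool["name"],
            "description": anp_tool["description"],
            "parameters": anp_tool["input_schema"],  # both sides are JSON Schema
        },
    }
```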

Comparison Table

| Feature | pip Package | Framework Tools | MCP | ANP |
| --- | --- | --- | --- | --- |
| Machine-readable interface | No | Framework-specific | Yes | Yes |
| Typed input/output schemas | No | Varies | Yes | Yes |
| Cross-framework | N/A | No | Protocol-level | Yes |
| Permission declarations | No | No | No | Yes |
| Verification pipeline | No | No | No | Yes |
| Autonomous discovery | No | No | Partial | Yes |
| Multi-tool packs | N/A | Varies | Yes | Yes |
| Distribution built-in | Yes (PyPI) | No standard | No standard | Yes (AgentNode) |

The Verification Pipeline

A manifest alone is a claim. ANP pairs the format with a verification pipeline that proves whether the claim is accurate. When a package is published to AgentNode, the platform:

  1. Installs the package in an isolated sandbox with its declared dependencies
  2. Imports every declared entrypoint to verify they resolve to actual Python functions
  3. Smoke tests each tool by generating inputs from the declared input_schema and calling the tool
  4. Scores the package 0-100 based on install success, import success, smoke test results, schema completeness, and reliability across multiple runs

The score determines a verification tier:

  • Gold (90-100) — all steps pass, high reliability, complete schemas
  • Verified (70-89) — core functionality works, minor issues
  • Partial (50-69) — installs and imports, but smoke tests have failures
  • Unverified (below 50) — significant issues, quarantined from default search results

This verification runs automatically on every publish and is re-run periodically. An agent querying the registry can filter by verification tier, ensuring it only installs packages that demonstrably work.
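
The tier thresholds listed above reduce to a small mapping. A sketch, with the function name chosen for illustration:

```python
def verification_tier(score: int) -> str:
    """Map a 0-100 verification score to its tier, per the thresholds above."""
    if score >= 90:
        return "Gold"
    if score >= 70:
        return "Verified"
    if score >= 50:
        return "Partial"
    return "Unverified"
```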

The Permission Model in Depth

Security is not an afterthought in ANP — it is a first-class design constraint. The permission model exists because AI agents will increasingly install and run code autonomously. Without explicit, machine-readable permission declarations, an agent has no way to evaluate whether a package is safe to use for its current task.

Consider a practical scenario: an agent is processing confidential financial documents. It needs a PDF extraction tool but should not install one that has unrestricted network access (which could exfiltrate data). With ANP permissions, the agent's policy engine can check:

# Agent policy: no network access for document processing tasks
# (client is the agent's registry client; PolicyViolation is the agent's own error type)
manifest = client.get_manifest("pdf-reader-pack")

if manifest["permissions"]["network"] != "none":
    raise PolicyViolation("Task requires network-isolated tools")

The five permission dimensions — network, filesystem, code execution, data access, and user approval — cover the primary attack surface for tool-based capabilities. Packages that request overly broad permissions receive lower verification scores and trust rankings, creating an incentive for publishers to request only what they need.

Publishing an ANP Package

Creating and publishing an ANP package follows a straightforward workflow:

  1. Write your tool — a Python function that takes a dict (matching your input schema) and returns a dict (matching your output schema)
  2. Create the manifest — define your manifest.yaml with package identity, capabilities, runtime, frameworks, and permissions
  3. Add tests — ANP requires a test suite in tests/ for the quality gate to pass
  4. Package and publish — use the AgentNode CLI to validate and publish

# Validate your manifest locally
agentnode validate

# Publish to the registry
agentnode publish

On publish, the verification pipeline runs automatically. Your package gets a score, a tier, and appears in search results. Other agents can discover and use it immediately.

The Vision: ANP as an Open Standard

ANP is designed to grow beyond AgentNode. The format specification is intentionally simple and self-contained — a YAML manifest, JSON Schema types, and Python entrypoints. There is nothing in the format that requires the AgentNode registry specifically. Any registry, any agent framework, and any runtime could implement ANP support.

The long-term vision has three layers:

  • Format layer — ANP as an open specification that any tool author can adopt, independent of where they publish
  • Registry layer — AgentNode (and potentially others) as registries that host, verify, and serve ANP packages
  • Agent layer — autonomous agents that discover, evaluate, and install capabilities from any ANP-compatible registry

Today, agents are limited by what their developers pre-configure. With a standard capability format, agents can grow their own toolsets — finding what they need, verifying it meets their policies, and using it immediately. ANP is the format that makes this possible.

The agent ecosystem needs what the web got with HTTP and what packages got with npm/PyPI: a shared standard that everyone can build on. ANP is that standard for AI agent capabilities.

Getting Started

If you want to explore ANP packages or publish your own, here are the next steps:

  • Browse existing packages — visit the AgentNode registry to see verified ANP packages you can use today
  • Install the SDK — pip install agentnode-sdk to start loading and using ANP tools in your agents
  • Install the CLI — npm install -g agentnode-cli to validate and publish your own packages
  • Read the docs — the full documentation covers manifest authoring, publishing, and framework integration in detail

ANP is young but production-ready. The registry already hosts 89+ verified packages across PDF processing, web search, data analysis, code execution, and more. Every one of them follows the same format, the same interface, and the same verification standard.

LLM Runtime: Let the Model Handle It

If your agent uses OpenAI or Anthropic tool calling, AgentNodeRuntime handles tool registration, system prompt injection, and the tool loop automatically. The LLM discovers, installs, and runs AgentNode capabilities on its own — no hardcoded tool calls needed.

from openai import OpenAI
from agentnode_sdk import AgentNodeRuntime

runtime = AgentNodeRuntime()

result = runtime.run(
    provider="openai",
    client=OpenAI(),
    model="gpt-4o",
    messages=[{"role": "user", "content": "your task here"}],
)
print(result.content)

The Runtime registers 5 meta-tools (agentnode_capabilities, agentnode_search, agentnode_install, agentnode_run, agentnode_acquire) that let the LLM search the registry, install packages, and execute tools autonomously. Works with Anthropic too — just change provider="anthropic" and pass an Anthropic client.

See the LLM Runtime documentation for the full API reference, trust levels, and manual tool calling.

Tags: ANP, AgentNode Package, agent capability standard, open standard, AI agent tools, manifest format