Building & Publishing · 8 min read

Publishing Your First ANP Package: The Complete Guide

Everything you need to know about publishing an ANP package to AgentNode — from project structure and manifest writing to verification and scoring.

By agentnode

What You Are Publishing

An ANP (AgentNode Package) is a self-contained, portable unit of functionality that AI agents can discover, install, and use. When you publish an ANP package to AgentNode, you are making a tool available to every agent that speaks the ANP protocol — regardless of whether that agent is built with LangChain, CrewAI, AutoGen, or a custom framework.

This guide covers the entire publishing process from an empty directory to a verified, scored package on the registry. By the end, you will have published a real package and understand every step of the pipeline.

Project Structure

A minimal ANP package has this structure:

my-tool/
├── manifest.yaml
├── my_tool/
│   ├── __init__.py
│   └── tool.py
└── tests/
    └── test_tool.py

That is four files. Let us walk through each one.

Naming Conventions

  • The root directory name does not matter — it is not included in the package.
  • The package_id in manifest.yaml is what identifies your package. It must be a lowercase slug: letters, numbers, and hyphens only (e.g., web-scraper, json-validator).
  • The Python module directory should use underscores instead of hyphens (e.g., web_scraper/), since hyphens are not valid in Python identifiers.
  • The entrypoint follows the module:function format. For a function called scrape in web_scraper/tool.py, the entrypoint is web_scraper.tool:scrape.
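The naming rules above can be sketched as a small helper — this is illustrative code, not part of the CLI or SDK:

```python
import re

# Hypothetical helpers illustrating the naming rules above.
# Lowercase slug: letters, numbers, and hyphens only.
SLUG_RE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")


def is_valid_package_id(package_id: str) -> bool:
    """Check the lowercase-slug rule for package_id."""
    return bool(SLUG_RE.match(package_id))


def module_name_for(package_id: str) -> str:
    """Derive the Python module directory name: hyphens become underscores."""
    return package_id.replace("-", "_")
```

For example, `is_valid_package_id("web-scraper")` is true, `is_valid_package_id("Web_Scraper")` is false, and `module_name_for("web-scraper")` gives `web_scraper`.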

Writing the Manifest

The manifest.yaml file is the heart of your package. Here is a complete example for a tool that validates JSON against a schema:

manifest_version: "0.2"
package_id: json-schema-validator
version: "0.1.0"

capabilities:
  tools:
    - name: validate_json
      entrypoint: json_schema_validator.tool:validate_json
      input_schema:
        type: object
        properties:
          data:
            type: object
            description: "The JSON data to validate"
          schema:
            type: object
            description: "The JSON Schema to validate against"
        required: [data, schema]
      output_schema:
        type: object
        properties:
          valid:
            type: boolean
          errors:
            type: array
            items:
              type: string

runtime:
  language: python
  min_version: "3.10"
  dependencies:
    - jsonschema>=4.20

Every field matters. The manifest_version must be "0.2" — this is the current ANP specification version. The input_schema and output_schema use JSON Schema syntax and tell agents exactly what your tool expects and returns. The runtime section tells the installer what language, version, and dependencies are required.
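Before uploading, you can sanity-check a parsed manifest yourself. The sketch below mirrors a few of the checks described in this guide; the function name and exact rules are illustrative, not the CLI's actual implementation:

```python
# Illustrative manifest sanity check; field names come from the example above.
# Parse manifest.yaml first, e.g. with PyYAML: yaml.safe_load(open("manifest.yaml"))
def check_manifest(manifest: dict) -> list[str]:
    """Return a list of problems found in a parsed manifest dict."""
    problems = []
    if manifest.get("manifest_version") != "0.2":
        problems.append('manifest_version must be "0.2"')
    for key in ("package_id", "version", "capabilities", "runtime"):
        if key not in manifest:
            problems.append(f"missing required field: {key}")
    for tool in manifest.get("capabilities", {}).get("tools", []):
        if ":" not in tool.get("entrypoint", ""):
            problems.append("entrypoint must use module:function format")
    return problems
```

An empty list means the basics look right; anything else is worth fixing before the CLI or the pipeline rejects it for you.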

Writing the Tool Code

Create the module directory and entrypoint file:

# json_schema_validator/__init__.py
# (empty file — just marks this as a Python package)
# json_schema_validator/tool.py
import jsonschema


def validate_json(data: dict, schema: dict) -> dict:
    """Validate a JSON object against a JSON Schema.

    Args:
        data: The JSON data to validate.
        schema: The JSON Schema to validate against.

    Returns:
        A dict with 'valid' (bool) and 'errors' (list of strings).
    """
    validator = jsonschema.Draft7Validator(schema)
    errors = [
        f"{'.'.join(str(p) for p in e.absolute_path)}: {e.message}"
        if e.absolute_path
        else e.message
        for e in validator.iter_errors(data)
    ]

    return {
        "valid": len(errors) == 0,
        "errors": sorted(errors),
    }

Tool Code Best Practices

  • Type hints on all parameters and return values. The verification pipeline checks that your function signature matches the manifest schemas.
  • Docstrings. They are used during smoke testing to understand what sample inputs to generate.
  • Return dictionaries, not custom objects. ANP tools communicate via JSON-serializable data. Returning a Pydantic model or dataclass may work, but a plain dict is safest.
  • Handle errors gracefully. Uncaught exceptions during smoke testing lower your score. Either catch expected errors and return them in a structured format, or let only truly unexpected errors propagate.
  • No global state or side effects on import. The verification pipeline imports your module to check for errors. If importing triggers a database connection or API call, the import step will fail.

Writing Tests

Tests are not optional — they directly affect your verification score. Create tests/test_tool.py:

import pytest
from json_schema_validator.tool import validate_json


SAMPLE_SCHEMA = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer", "minimum": 0},
    },
    "required": ["name", "age"],
}


def test_valid_data_passes():
    result = validate_json({"name": "Alice", "age": 30}, SAMPLE_SCHEMA)
    assert result["valid"] is True
    assert result["errors"] == []


def test_invalid_type_detected():
    result = validate_json({"name": "Alice", "age": "thirty"}, SAMPLE_SCHEMA)
    assert result["valid"] is False
    assert len(result["errors"]) > 0
    assert any("age" in e for e in result["errors"])


def test_missing_required_field():
    result = validate_json({"name": "Alice"}, SAMPLE_SCHEMA)
    assert result["valid"] is False
    assert any("age" in e.lower() or "required" in e.lower() for e in result["errors"])


def test_extra_fields_allowed_by_default():
    result = validate_json(
        {"name": "Alice", "age": 30, "email": "alice@example.com"},
        SAMPLE_SCHEMA,
    )
    assert result["valid"] is True


def test_empty_data_against_schema():
    result = validate_json({}, SAMPLE_SCHEMA)
    assert result["valid"] is False
    assert len(result["errors"]) >= 2  # missing name and age


def test_returns_expected_shape():
    result = validate_json({"name": "X", "age": 1}, SAMPLE_SCHEMA)
    assert "valid" in result
    assert "errors" in result
    assert isinstance(result["valid"], bool)
    assert isinstance(result["errors"], list)

Six tests is a solid starting point. Each one tests a distinct behavior: happy path, type errors, missing fields, extra fields, empty input, and output shape validation. This variety is what pushes your test score higher.

Testing Locally

Before publishing, always run everything locally:

# Install dependencies
pip install jsonschema pytest

# Run the tool manually
python -c "
from json_schema_validator.tool import validate_json
schema = {'type': 'object', 'properties': {'x': {'type': 'integer'}}, 'required': ['x']}
print(validate_json({'x': 42}, schema))
print(validate_json({'x': 'hello'}, schema))
"

# Run the tests
pytest tests/ -v

If everything passes locally, it will almost certainly pass the verification pipeline too. The most common reason for local success but remote failure is missing dependencies in the manifest — always double-check that runtime.dependencies lists everything your code imports.

Publishing via CLI

The recommended way to publish is via the AgentNode CLI:

# Install the CLI (one-time)
npm install -g agentnode-cli

# Authenticate with your AgentNode account
agentnode login

# Navigate to your package directory and publish
cd my-tool/
agentnode publish

The CLI reads manifest.yaml, bundles all referenced files (your module directory and tests), and uploads the package. You will see a confirmation with a link to your package page.

What the CLI Checks Before Upload

  • The manifest.yaml file exists and is valid YAML.
  • The manifest_version is a supported version.
  • The package_id is a valid slug.
  • The entrypoint file exists on disk.
  • If you are publishing an update, the version is higher than the currently published version.

Publishing via Web Form

If you prefer a browser-based workflow, go to agentnode.net/publish. You can upload your files through the web interface. This is especially convenient if you used the Builder or Import tool and already have the files ready.

The Verification Pipeline

Once your package is uploaded, the verification pipeline runs automatically. It has four stages:

1. Install (15 points)

The pipeline creates an isolated environment and installs your package's dependencies from runtime.dependencies. If any dependency fails to install — due to a typo, a version that does not exist, or a native library not available in the sandbox — this step fails.

2. Import (15 points)

The pipeline imports your entrypoint module. This catches syntax errors, missing imports, and any code that runs at module level and crashes. Keep your module-level code minimal.

3. Smoke Test (25 points)

The pipeline generates sample input based on your input_schema and calls your tool function. It checks that:

  • The function runs without crashing.
  • The return value matches the output_schema.
  • The function completes within the timeout (30 seconds by default).

This is the highest-value single step. A tool that runs successfully on realistic input earns a full 25 points.
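To build intuition for what the smoke test feeds your tool, here is a toy generator that walks a JSON Schema and picks placeholder values. The real pipeline's generator is more sophisticated; this only shows the idea:

```python
# Toy sample-input generator: walk an input_schema and produce placeholder
# values of the declared types. Illustrative, not the pipeline's actual code.
def sample_from_schema(schema: dict):
    t = schema.get("type")
    if t == "object":
        return {key: sample_from_schema(prop)
                for key, prop in schema.get("properties", {}).items()}
    if t == "array":
        return [sample_from_schema(schema.get("items", {}))]
    if t == "string":
        return "example"
    if t == "integer":
        return 1
    if t == "number":
        return 1.0
    if t == "boolean":
        return True
    return None
```

Applied to the validator manifest's `input_schema`, this would produce a call like `validate_json(data={}, schema={})` — which is exactly why your tool should handle minimal, schema-shaped input without crashing.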

4. Unit Tests (15 points)

The pipeline runs pytest tests/. All tests must pass to earn the full 15 points. Partial credit is given if some tests pass.

Bonus Scoring Components

Beyond the four steps, additional factors affect your total score:

  • Reliability (10 points) — Consistency across multiple runs of the smoke test.
  • Determinism (5 points) — Does the tool produce the same output for the same input?
  • Contract compliance (10 points) — Does the actual output match the declared output_schema precisely?
  • Warnings (-2 each) — Deprecation warnings, unclosed resources, or other noisy output.
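Putting the four stages and the bonus components together, a back-of-the-envelope tally looks like this. The point values come from this article; the exact weighting logic (e.g. how reliability is scaled) is an assumption for illustration:

```python
# Illustrative score tally using the point values described above.
def total_score(install: bool, imports: bool, smoke: bool,
                tests_passed: int, tests_total: int,
                reliability: float, deterministic: bool,
                contract_ok: bool, warnings: int) -> float:
    score = 0.0
    score += 15 if install else 0
    score += 15 if imports else 0
    score += 25 if smoke else 0
    if tests_total:
        score += 15 * tests_passed / tests_total  # partial credit for unit tests
    score += 10 * reliability  # assumed 0.0-1.0 consistency across smoke runs
    score += 5 if deterministic else 0
    score += 10 if contract_ok else 0
    score -= 2 * warnings
    return max(score, 0.0)
```

A package that aces every component lands at 95 points, comfortably in Gold territory; a flaky tool with failing tests and a couple of warnings drops well below the Verified threshold.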

Checking Your Results

Visit your package page on AgentNode to see the verification results. You will see:

  • A score badge — Gold (90+), Verified (70-89), Partial (50-69), or Unverified (<50).
  • A breakdown showing points earned in each category.
  • Detailed logs for any step that failed or lost points, so you know exactly what to fix.

If your score is not where you want it, publish a new version with fixes. Update the version field in manifest.yaml (e.g., from "0.1.0" to "0.1.1"), fix the issues, and run agentnode publish again. Each version is verified independently.
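If you republish often, a tiny helper for the patch bump saves a manual edit — this is a convenience sketch, not something the CLI does for you:

```python
# Bump the patch component of a semver-style version string, e.g. for the
# version field in manifest.yaml before republishing. Illustrative helper.
def bump_patch(version: str) -> str:
    major, minor, patch = version.split(".")
    return f"{major}.{minor}.{int(patch) + 1}"
```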

After Publishing

Once your package is live, anyone can install it using the AgentNode SDK:

from agentnode_sdk import AgentNodeClient, load_tool

client = AgentNodeClient()
client.resolve_and_install(["json-schema-validator"])
validate = load_tool("json-schema-validator")

result = validate(data={"name": "Alice", "age": 30}, schema=my_schema)
print(result["valid"])  # True

Your tool is now discoverable through the search at agentnode.net/search, installable by any agent, and verified with a public quality score. That is the complete journey from empty directory to published, verified ANP package.

LLM Runtime: Let the Model Handle It

If your agent uses OpenAI or Anthropic tool calling, AgentNodeRuntime handles tool registration, system prompt injection, and the tool loop automatically. The LLM discovers, installs, and runs AgentNode capabilities on its own — no hardcoded tool calls needed.

from openai import OpenAI
from agentnode_sdk import AgentNodeRuntime

runtime = AgentNodeRuntime()

result = runtime.run(
    provider="openai",
    client=OpenAI(),
    model="gpt-4o",
    messages=[{"role": "user", "content": "your task here"}],
)
print(result.content)

The Runtime registers 5 meta-tools (agentnode_capabilities, agentnode_search, agentnode_install, agentnode_run, agentnode_acquire) that let the LLM search the registry, install packages, and execute tools autonomously. Works with Anthropic too — just change provider="anthropic" and pass an Anthropic client.

See the LLM Runtime documentation for the full API reference, trust levels, and manual tool calling.

#publishing #tutorial #anp #cli #verification #getting-started