Building & Publishing · 12 min read

How to Publish an MCP Server: From Local Tool to Global Registry

Your MCP server works locally but nobody can find it. This step-by-step guide takes you from local MCP tool to globally discoverable, verified agent skill — with code examples, manifest templates, and tips for maximizing your trust score.

By agentnode

Your MCP server works. It runs locally, Claude connects to it, and it does exactly what you built it to do. But here is the problem: nobody else can find it. It lives in a GitHub repo that maybe has a README, maybe has installation instructions, and almost certainly has no verification or trust scoring. If someone wants to use your tool, they need to clone your repo, figure out the dependencies, hope it works on their machine, and trust that it is safe — all on faith.

MCP (Model Context Protocol) has no built-in distribution mechanism. Anthropic designed it as a protocol, not a platform. There is no mcp publish command. No central registry. No verification pipeline. That gap between "works locally" and "discoverable globally" is where most MCP servers die — not because they are bad, but because nobody knows they exist.

This guide bridges that gap. You will go from a working local MCP server to a published, verified, globally discoverable agent skill that Claude, Cursor, and every major agent framework can find and use.

Why GitHub Repos Are Not Enough

Before we get into the how, let us address the common objection: "My MCP server is on GitHub. Is that not enough?"

No. Here is why:

No Discovery

GitHub is a code hosting platform, not a tool registry. There is no way for an AI agent to search GitHub for "an MCP server that does web scraping" and get back a structured result with schemas, trust scores, and installation instructions. Discovery requires someone to already know your repo exists — which defeats the purpose of publishing.

No Verification

A GitHub repo tells you nothing about whether the tool actually works. Does it install cleanly? Do the dependencies resolve? Does it pass its own tests? Does it handle edge cases? Without automated verification, every user is a beta tester. The risk is amplified for MCP servers in particular: path traversal vulnerabilities in MCP servers have already been documented as a real attack surface.

No Security Audit

MCP servers often request file system access, network access, or both. A GitHub repo does not declare these permissions in a structured way. Users cannot assess the security posture of a tool before installing it. They have to read the source code — if they read it at all.

No Standardization

Every MCP server repo has its own installation process, dependency management, and configuration approach. There is no consistent manifest format that tools can parse programmatically. This makes automated installation and framework integration difficult.

The Publishing Workflow: Overview

Publishing your MCP server to a verified registry involves five steps:

  1. Prepare your server — ensure it is clean, documented, and has proper dependency management
  2. Create an ANP manifest — describe your server's capabilities, schemas, and permissions
  3. Add tests — write tests that prove your tool works (this significantly boosts your trust score)
  4. Submit to AgentNode — publish using the CLI or web interface
  5. Verification runs automatically — your tool is sandbox-tested and scored

Let us walk through each step in detail.

Step 1: Prepare Your MCP Server

Before publishing, make sure your MCP server meets these baseline requirements:

Clean Dependency Management

Your server should have a proper requirements.txt or pyproject.toml with pinned dependencies. Loose version ranges are the number one cause of verification failures — a package that installs on your machine today may fail tomorrow when a dependency releases a breaking update.

# Bad: loose versions
requests
beautifulsoup4

# Good: pinned versions
requests==2.31.0
beautifulsoup4==4.12.3

Typed Input/Output Schemas

Every tool your MCP server exposes should have explicit JSON Schema definitions for inputs and outputs. If you are using the MCP SDK, you likely already have these in your tool definitions. Make sure they are complete — include descriptions for every field, mark required fields, and specify types precisely.

@server.tool()
async def scrape_page(
    url: str,      # The URL to scrape
    format: str = "markdown"  # Output format: text, markdown, html
) -> dict:
    """Scrape a web page and return structured content."""
    # ... implementation
    return {
        "content": content,
        "title": title,
        "word_count": len(content.split())
    }

Error Handling

Your tools should return structured errors rather than raising unhandled exceptions. The verification pipeline will test edge cases — empty inputs, invalid URLs, missing fields — and tools that crash instead of returning error messages score significantly lower.

Step 2: Create an ANP Manifest

The ANP (AgentNode Package) manifest is a JSON file that describes your package. For an MCP server, it looks like this:

{
  "manifest_version": "0.2",
  "name": "my-web-scraper",
  "version": "1.0.0",
  "summary": "MCP server for structured web scraping with JavaScript rendering",
  "description": "Extracts structured content from web pages including JS-rendered content. Returns clean text, markdown, or structured data.",
  "author": {
    "name": "Your Name",
    "email": "you@example.com"
  },
  "source_type": "mcp",
  "capabilities": [
    {
      "name": "scrape_page",
      "capability_type": "tool",
      "description": "Scrape a web page and return structured content",
      "entrypoint": "server:scrape_page",
      "input_schema": {
        "type": "object",
        "properties": {
          "url": {
            "type": "string",
            "description": "The URL to scrape"
          },
          "format": {
            "type": "string",
            "enum": ["text", "markdown", "html"],
            "description": "Output format",
            "default": "markdown"
          }
        },
        "required": ["url"]
      },
      "output_schema": {
        "type": "object",
        "properties": {
          "content": {"type": "string"},
          "title": {"type": "string"},
          "word_count": {"type": "integer"}
        }
      }
    }
  ],
  "permissions": {
    "network": "external",
    "filesystem": "none",
    "code_execution": "none"
  },
  "compatibility": {
    "frameworks": ["mcp", "langchain", "crewai", "vanilla"],
    "python": ">=3.10"
  }
}

Key fields to get right:

  • source_type — set to "mcp" to indicate this is an MCP server being packaged as an ANP skill
  • capabilities — one entry per tool your server exposes. Each must have complete input and output schemas.
  • permissions — honestly declare what your tool accesses. A web scraper needs "network": "external". A file organizer needs "filesystem": "read_write". Lying about permissions tanks your trust score when the sandbox catches the discrepancy.
  • compatibility.frameworks — include "mcp" plus any other frameworks your tool supports after ANP wrapping

If you already have MCP tools or LangChain tools, you can skip manual manifest creation entirely. The import your existing tools page auto-generates the ANP manifest from your existing code.

Step 3: Add Tests

Tests are the single biggest factor in your trust score. Here is the scoring breakdown:

  • No tests at all: 3 points (baseline)
  • Auto-generated tests pass: 8 points
  • Publisher-provided tests pass: 15 points

That is a 12-point difference — enough to move you from Partial to Verified tier, or from Verified to Gold.

Write tests that cover:

# tests/test_scraper.py
import pytest
from server import scrape_page

@pytest.mark.asyncio
async def test_scrape_basic():
    """Test basic scraping with a known page."""
    result = await scrape_page(
        url="https://example.com",
        format="text"
    )
    assert "content" in result
    assert "title" in result
    assert result["word_count"] > 0

@pytest.mark.asyncio
async def test_scrape_invalid_url():
    """Test that invalid URLs return structured errors."""
    result = await scrape_page(
        url="not-a-valid-url",
        format="text"
    )
    assert "error" in result

@pytest.mark.asyncio
async def test_scrape_formats():
    """Test all output formats."""
    for fmt in ["text", "markdown", "html"]:
        result = await scrape_page(
            url="https://example.com",
            format=fmt
        )
        assert isinstance(result["content"], str)
        assert len(result["content"]) > 0

Note: tests that require network access will be run with network enabled during verification. Tests that require specific API credentials should be marked with @pytest.mark.skip(reason="requires credentials") — the verification pipeline will record the credential boundary without penalizing your score.
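For example, a test that depends on a paid API key might look like this (the test name and credential are hypothetical):

```python
# tests/test_api.py
import pytest

@pytest.mark.skip(reason="requires credentials")
@pytest.mark.asyncio
async def test_scrape_behind_auth():
    """Would need a real API key; skipped during verification."""
    ...
```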

Step 4: Publish to AgentNode

With your manifest and tests ready, publishing is a single command:

# Install the CLI if you haven't
npm install -g agentnode-cli

# Login (one-time)
agentnode login

# Publish from your project directory
agentnode publish ./my-web-scraper

The CLI validates your manifest locally first — catching schema errors, missing fields, and common mistakes before anything is uploaded. If validation passes, your package is submitted to the registry.

You can also publish through the web interface. Navigate to publish your MCP server to AgentNode and follow the guided workflow. It is particularly useful if you want to review the manifest visually before submitting.

For a more detailed walkthrough covering every option, see the complete guide to publishing ANP packages.

Step 5: Verification Pipeline

After you publish, the verification pipeline runs automatically. Here is what happens:

Installation Test (15 points)

Your package and all dependencies are installed in a clean Docker container. Pinned dependencies, minimal dependency trees, and standard package managers all help here.

Import Test (15 points)

Every declared capability entrypoint is imported and validated. If your manifest says server:scrape_page exists, the pipeline imports it and confirms it is callable.
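The "module:attribute" entrypoint check can be pictured roughly like this — a simplified sketch, not the pipeline's actual implementation:

```python
import importlib

def resolve_entrypoint(entrypoint: str):
    """Resolve a 'module:attr' entrypoint string and confirm it is callable.

    A rough sketch of what an import test does: split the string,
    import the module, fetch the attribute, and verify callability.
    """
    module_name, _, attr = entrypoint.partition(":")
    module = importlib.import_module(module_name)
    obj = getattr(module, attr)
    if not callable(obj):
        raise TypeError(f"{entrypoint} is not callable")
    return obj
```

If your manifest declares server:scrape_page, a check along these lines is why a typo in the entrypoint fails verification immediately.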

Smoke Test (25 points)

The pipeline generates test inputs based on your declared input schemas and calls each tool. Tools that return valid outputs matching the output schema score highest. Tools that crash or return malformed data score lowest. Tools that hit a credential boundary (cannot proceed without API keys) receive partial credit.
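A simplified picture of what such an input generator might do — the real pipeline is more thorough, but this shows why complete schemas (defaults, enums, descriptive names) give the smoke tests better inputs to work with:

```python
def example_input(schema: dict) -> dict:
    """Build one plausible input dict from a JSON Schema object definition.

    Illustrative only: prefers declared defaults, then enum values,
    then simple placeholders per type.
    """
    sample = {}
    for name, spec in schema.get("properties", {}).items():
        if "default" in spec:
            sample[name] = spec["default"]
        elif "enum" in spec:
            sample[name] = spec["enum"][0]
        elif spec.get("type") == "string":
            # A URL-looking placeholder for URL-ish fields, else plain text
            sample[name] = "https://example.com" if "url" in name.lower() else "test"
        elif spec.get("type") == "integer":
            sample[name] = 1
        elif spec.get("type") == "boolean":
            sample[name] = True
    return sample
```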

Unit Tests (15 points)

If you provided tests (Step 3), they run now. Passing tests from the publisher are weighted highest because they demonstrate the author has validated their own code.

Additional Scoring

The remaining points come from contract validation (schema completeness, permission accuracy), reliability (consistent results across multiple runs), and determinism checks.
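The reliability and determinism checks can be pictured as calling the same tool several times with identical inputs and comparing the results — a purely illustrative sketch; the pipeline's actual comparison is more nuanced:

```python
import asyncio

async def consistency_check(tool, inputs: dict, runs: int = 3) -> bool:
    """Call a tool repeatedly with identical inputs and report whether
    every run produced the same result. Illustrates the idea behind a
    determinism check; not the registry's actual algorithm.
    """
    results = [await tool(**inputs) for _ in range(runs)]
    return all(result == results[0] for result in results)
```

Tools whose outputs embed timestamps, random IDs, or unordered collections will look nondeterministic under a check like this, which is worth keeping in mind when designing output schemas.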

Tips for Maximizing Your Trust Score

Based on analyzing hundreds of published packages, here are the patterns that separate Gold-tier packages from the rest:

  1. Pin all dependencies. Version ranges cause flaky installations. Pin to exact versions.
  2. Write at least 3 tests. Publisher-provided tests are worth nearly double auto-generated ones.
  3. Handle errors gracefully. Return structured error objects instead of raising exceptions.
  4. Declare permissions honestly. The sandbox detects network access, file system writes, and code execution. Mismatches reduce your score.
  5. Include complete schemas. Every input and output field should have a type and description. The more complete your schemas, the better the smoke tests can exercise your tool.
  6. Keep dependencies minimal. Fewer dependencies means fewer installation failure modes. Do you really need that 50MB ML library for a web scraper?
  7. Test edge cases. Empty strings, null values, invalid formats — the smoke test generator will try these. Make sure your tool handles them.

What Happens After Publishing

Once verified, your MCP server becomes a globally discoverable agent skill. Here is what that means in practice:

  • Claude users can discover and install your tool through AgentNode's catalog
  • Cursor users can find your tool by capability and add it to their workflows
  • LangChain developers can install your tool via the AgentNode SDK and use it as a native LangChain tool
  • CrewAI developers can integrate your tool into crew task definitions
  • Any agent framework can discover your tool through the AgentNode API and use it through the standard tool interface

Your tool went from "works on my machine" to "available to every AI agent developer worldwide." That is the power of publishing to a verified registry rather than just pushing to GitHub.

If you have existing tools in other formats — LangChain tools, OpenAI functions, or standalone Python scripts — you can also import existing MCP tools to AgentNode using the import workflow, which handles format conversion automatically.

Ready to start? Build an MCP server from scratch if you are starting fresh, or go straight to publishing if your server is already working.

Frequently Asked Questions

Where can I publish my MCP server?

MCP (Model Context Protocol) does not include a built-in publishing or distribution mechanism. You can host your server's source code on GitHub, but that does not provide discovery, verification, or standardized installation. AgentNode is currently the primary registry that accepts MCP servers and packages them as verified, discoverable agent skills. You publish using the ANP format, and your MCP server becomes available to users across Claude, Cursor, LangChain, CrewAI, and other frameworks.

How to get my MCP server verified?

Verification happens automatically when you publish to AgentNode. The pipeline tests installation, imports, smoke tests (calling your tools with generated inputs), and any publisher-provided unit tests. To maximize your score: pin your dependencies, write at least 3 tests, declare permissions honestly, and handle errors gracefully. Scores of 70+ earn "Verified" tier, and 90+ earn "Gold." You can view your verification breakdown on your package page and re-publish updated versions to improve your score.

What is ANP packaging?

ANP (AgentNode Package) is a manifest-based format for describing AI agent tools. An ANP package contains a manifest.json file that declares the tool's name, version, capabilities (with typed input/output schemas), required permissions, and framework compatibility. It is framework-agnostic — meaning a single ANP package can be used by LangChain, CrewAI, MCP, AutoGPT, or vanilla Python. For MCP servers, ANP wrapping adds the discoverability and verification layer that MCP itself does not provide.

How to Publish an MCP Server to a Global Registry (2026) — AgentNode Blog | AgentNode