How to Build an Agent Skill from Scratch with the AgentNode Builder
Use the AgentNode Builder to describe what your AI tool does and get a complete, publishable ANP package in minutes — no boilerplate required.
Why Use the AgentNode Builder?
Every developer who has built an AI tool knows the routine: scaffold a project, write a manifest, wire up input and output schemas, add error handling, write tests, and hope it all integrates correctly. The AgentNode Builder at agentnode.net/builder compresses that entire cycle into a single guided workflow. You describe what your tool does in plain language, and the builder generates a complete ANP package — manifest, entrypoint code, input/output schemas, and starter tests — ready for review, customization, and publishing.
This tutorial walks through the full process: from opening the builder to publishing a verified agent skill that other developers and AI agents can install.
Step 1: Describe Your Tool
Navigate to agentnode.net/builder and sign in. You will see a single text area asking you to describe the tool you want to create. This is the most important step, so be specific.
A vague description like "a tool that does math" will produce a generic calculator. A precise description produces a focused, useful skill. Here is an example of a good description:
```
A tool that takes a webpage URL, fetches its HTML content, extracts the
main article text (ignoring navigation, ads, and sidebars), and returns
the cleaned text along with the page title and estimated word count.
```
Notice what makes this effective: it specifies the input (a URL), the process (fetch HTML, extract article), and the output (cleaned text, title, word count). The builder uses this to generate accurate schemas and realistic implementation code.
Tips for Better Descriptions
- State the input explicitly. "Takes a URL" or "accepts a JSON object with fields X, Y, Z" removes ambiguity.
- Describe the output shape. "Returns a dict with keys: text, title, word_count" directly informs the output schema.
- Mention dependencies if you know them. "Uses BeautifulSoup for parsing" tells the builder which library to include in the manifest.
- Scope it to one responsibility. A single, focused tool scores higher during verification than a Swiss-army-knife monolith.
Step 2: Review the Generated Package
After you submit your description, the builder produces several files. Here is what you will typically see:
manifest.yaml
The manifest is the identity card of your ANP package. The builder generates it with all required fields pre-filled:
```yaml
manifest_version: "0.2"
package_id: web-article-extractor
version: "0.1.0"

capabilities:
  tools:
    - name: extract_article
      entrypoint: web_article_extractor.tool:extract_article
      input_schema:
        type: object
        properties:
          url:
            type: string
            description: "The URL of the webpage to extract"
        required: [url]
      output_schema:
        type: object
        properties:
          title:
            type: string
          text:
            type: string
          word_count:
            type: integer

runtime:
  language: python
  min_version: "3.10"
  dependencies:
    - requests>=2.31
    - beautifulsoup4>=4.12
```
Check that package_id is a slug you are happy with — it becomes the permanent identifier. Verify that the dependencies list includes everything your tool needs and nothing it does not.
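Those checks can also be done programmatically before you move on. Here is a minimal sketch of such a pre-flight check; the check_manifest helper and its field list are illustrative assumptions, not part of the AgentNode SDK:

```python
# Sanity-check a parsed manifest dict before publishing.
# (Hypothetical helper; field names mirror the manifest shown above.)
REQUIRED_TOP_LEVEL = ["manifest_version", "package_id", "version", "capabilities", "runtime"]

def check_manifest(manifest: dict) -> list:
    """Return a list of problems found in a parsed manifest dict."""
    problems = [f"missing field: {field}" for field in REQUIRED_TOP_LEVEL
                if field not in manifest]
    for tool in manifest.get("capabilities", {}).get("tools", []):
        # Every tool needs an entrypoint of the form module.path:function
        if ":" not in tool.get("entrypoint", ""):
            problems.append(f"bad entrypoint for tool {tool.get('name', '?')}")
    return problems

manifest = {
    "manifest_version": "0.2",
    "package_id": "web-article-extractor",
    "version": "0.1.0",
    "capabilities": {"tools": [{"name": "extract_article",
                                "entrypoint": "web_article_extractor.tool:extract_article"}]},
    "runtime": {"language": "python", "min_version": "3.10"},
}
print(check_manifest(manifest))  # → []
```

You would load the real file with a YAML parser first; the dict literal above just keeps the sketch self-contained.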
Tool Entrypoint Code
The builder generates the actual Python function in a module matching the entrypoint path. For our example, that file would be web_article_extractor/tool.py:
```python
import requests
from bs4 import BeautifulSoup


def extract_article(url: str) -> dict:
    """Fetch a webpage and extract the main article text."""
    response = requests.get(url, timeout=15, headers={
        "User-Agent": "AgentNode-Skill/1.0"
    })
    response.raise_for_status()

    soup = BeautifulSoup(response.text, "html.parser")

    # Remove non-content elements
    for tag in soup(["script", "style", "nav", "footer", "header", "aside"]):
        tag.decompose()

    title = soup.title.string.strip() if soup.title and soup.title.string else ""
    text = soup.get_text(separator="\n", strip=True)
    word_count = len(text.split())

    return {
        "title": title,
        "text": text,
        "word_count": word_count,
    }
```
This is working code, not a placeholder. The builder generates real implementations based on your description. That said, you should always read it carefully — generated code is a starting point, not a finished product.
Tests
The builder also creates a tests/ directory with starter test cases. These are critical: they directly affect your verification score. We will come back to improving them in a later step.
Step 3: Customize the Code
This is where you add your domain expertise. Common customizations include:
- Error handling. The generated code typically raises on failure. You may want to return a structured error instead, especially if downstream agents need to handle failures gracefully.
- Edge cases. What happens if the URL points to a PDF? A login wall? A page with no article content? Add guards for the scenarios your users will encounter.
- Performance. If the tool will be called in tight loops, consider connection pooling, caching, or timeouts tuned to your use case.
- Output refinement. The generated output schema is a best guess. If you want to add metadata like extracted_at timestamps or source_url echo-back, update both the code and the output_schema in the manifest.
Whatever you change in the code, make sure the manifest stays in sync. If you add a new dependency, add it to runtime.dependencies. If you rename the function, update the entrypoint.
Step 4: Test Locally
Before publishing, verify that your package works on your own machine. Install the AgentNode SDK and run the tool directly:
```shell
# Install the SDK
pip install agentnode-sdk

# From your package directory, run a quick smoke test
python -c "
from web_article_extractor.tool import extract_article

result = extract_article('https://example.com')
print(result['title'])
print(f'{result[\"word_count\"]} words')
"
```
Then run the tests the builder generated:
```shell
pip install pytest
pytest tests/ -v
```
If any tests fail, fix them now. Broken tests will lower your verification score after publishing. If the builder's tests are too simple — for example, they only check that the function exists — add meaningful assertions:
```python
def test_extract_article_returns_required_fields():
    result = extract_article("https://example.com")
    assert "title" in result
    assert "text" in result
    assert isinstance(result["word_count"], int)
    assert result["word_count"] > 0


def test_extract_article_handles_missing_title():
    # example.com has a title, but test graceful handling
    result = extract_article("https://example.com")
    assert isinstance(result["title"], str)
```
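Tests that hit live sites are slow and flaky. One way to make them hermetic is to inject the fetcher instead of calling requests.get directly. The sketch below is self-contained so it runs anywhere: the fetch_fn parameter and the stdlib-only _TextGrabber parser stand in for a small refactor of the generated module, and your real tests would import extract_article from web_article_extractor.tool instead:

```python
# Network-free test pattern: pass a fake fetcher into the extractor.
# _TextGrabber is a deliberately crude stand-in for the BeautifulSoup
# logic, used only so this sketch has no third-party dependencies.
from html.parser import HTMLParser


class _TextGrabber(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data
        elif data.strip():
            self.chunks.append(data.strip())


def extract_article(url: str, fetch_fn) -> dict:
    """Extractor refactored to accept an injectable fetcher."""
    grabber = _TextGrabber()
    grabber.feed(fetch_fn(url))
    text = "\n".join(grabber.chunks)
    return {"title": grabber.title.strip(), "text": text,
            "word_count": len(text.split())}


def test_extract_article_offline():
    fake_fetch = lambda url: (
        "<html><head><title>Hello</title></head>"
        "<body><p>one two three</p></body></html>"
    )
    result = extract_article("https://example.com", fetch_fn=fake_fetch)
    assert result["title"] == "Hello"
    assert result["word_count"] == 3


test_extract_article_offline()
```

Offline tests like this also keep the verification pipeline's unit-test step from failing just because an external site was down at publish time.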
Step 5: Publish
You have two publishing options. The fastest for builder-created packages is the web form at agentnode.net/publish — you can upload the generated files directly. Alternatively, use the CLI:
```shell
# Install the CLI
npm install -g agentnode-cli

# Authenticate
agentnode login

# Publish from your package directory
agentnode publish
```
The CLI reads your manifest.yaml, bundles your code and tests, and uploads everything. Within seconds, the verification pipeline kicks in.
Step 6: Check Your Verification Results
After publishing, visit your package page on AgentNode. The verification panel shows the results of four automated steps:
- Install — Can the package and its dependencies be installed cleanly?
- Import — Can the entrypoint module be imported without errors?
- Smoke Test — Does the tool execute with sample input and return valid output?
- Unit Tests — Do the tests in tests/ pass?
Each step contributes to a score out of 100. Packages scoring 90 or above earn Gold tier status. Between 70 and 89 is Verified. The builder gives you a strong starting point, but writing thorough tests and handling edge cases is what pushes you into Gold territory.
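Those thresholds can be captured in a tiny helper. Note that only the 90 and 70 cutoffs come from the scoring rules above; the tier function itself and the "Unverified" label for sub-70 scores are illustrative assumptions:

```python
def tier(score: int) -> str:
    """Map a verification score (0-100) to the tiers described above."""
    if score >= 90:
        return "Gold"
    if score >= 70:
        return "Verified"
    return "Unverified"  # assumed label for scores below 70

print(tier(92), tier(75), tier(40))  # → Gold Verified Unverified
```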
From Idea to Published Skill in Minutes
The AgentNode Builder removes the friction between "I have an idea for a tool" and "other agents can use my tool." The workflow is intentionally linear: describe, review, customize, test, publish. Each step builds on the last, and the builder handles the boilerplate so you can focus on what your tool actually does.
If you want to convert an existing tool instead of starting from scratch, check out the Import tool, which supports LangChain, MCP, OpenAI, and CrewAI formats. And if you prefer to write everything by hand, the ANP Manifest Reference covers every field in detail.
Start building at agentnode.net/builder.
LLM Runtime: Let the Model Handle It
If your agent uses OpenAI or Anthropic tool calling, AgentNodeRuntime handles tool registration, system prompt injection, and the tool loop automatically. The LLM discovers, installs, and runs AgentNode capabilities on its own — no hardcoded tool calls needed.
```python
from openai import OpenAI
from agentnode_sdk import AgentNodeRuntime

runtime = AgentNodeRuntime()

result = runtime.run(
    provider="openai",
    client=OpenAI(),
    model="gpt-4o",
    messages=[{"role": "user", "content": "your task here"}],
)

print(result.content)
```
The Runtime registers 5 meta-tools (agentnode_capabilities, agentnode_search, agentnode_install, agentnode_run, agentnode_acquire) that let the LLM search the registry, install packages, and execute tools autonomously. Works with Anthropic too — just change provider="anthropic" and pass an Anthropic client.
See the LLM Runtime documentation for the full API reference, trust levels, and manual tool calling.