Import Your LangChain Tools to AgentNode
Step-by-step guide to importing your existing LangChain tools to AgentNode. Gain verification, discovery, and monetization for your tools without rewriting them. Includes code examples, testing workflow, and before/after comparison.
You have built LangChain tools. They work. Your agents depend on them. But they live in a private repository, undiscoverable by other developers, unverified by any third party, and generating zero revenue. Importing them to AgentNode changes all of that — without rewriting a single line of your core tool logic.
This guide walks you through the complete migration process: from a working LangChain BaseTool to a published, verified, monetizable AgentNode package. We will cover the conversion process, testing strategies, and the exact commands you need at every step.
Why Import Your LangChain Tools to AgentNode?
LangChain is an excellent framework for building agent tools. But LangChain itself is not a registry — it does not provide discovery, verification, or distribution. When you import your tools to AgentNode, you gain:
- Verification — your tool passes a 4-step security review and receives a public trust score
- Discovery — other developers can find and install your tool through search
- Cross-framework compatibility — your LangChain tool automatically works in MCP, CrewAI, and ANP contexts
- Monetization — set a per-invocation price and earn revenue when others use your tool
- Version management — publish updates with full version history and per-version verification
Your original LangChain tool continues to work exactly as before. The AgentNode package wraps your existing logic with standardized metadata, not a rewrite.
Understanding the Conversion: BaseTool to ANP
LangChain tools inherit from BaseTool and implement a _run() method. The AgentNode Protocol (ANP) uses a similar but more structured format that includes metadata, input/output schemas, and verification hooks.
Here is what changes and what stays the same:
| Aspect | LangChain BaseTool | ANP Package |
|---|---|---|
| Core logic | _run() method | Same logic, wrapped in execute() |
| Input schema | args_schema (Pydantic) | input_schema (JSON Schema, auto-converted from Pydantic) |
| Description | description string | Structured metadata (name, description, category, tags) |
| Error handling | Framework-dependent | Standardized error codes |
| Authentication | Custom per tool | Registry-managed API keys |
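The "same logic, wrapped in execute()" row is the heart of the conversion. Here is a minimal sketch of that wrapping; the ToolResult envelope and wrap_run helper below are illustrative assumptions, not the actual agentnode_sdk API:

```python
# Illustrative sketch of wrapping a LangChain-style _run() in an
# ANP-style execute(). The ToolResult shape here is an assumption
# for illustration, not the real SDK type.
from dataclasses import dataclass
from typing import Any, Callable, Optional

@dataclass
class ToolResult:
    ok: bool
    value: Any = None
    error: Optional[str] = None
    code: Optional[str] = None

def wrap_run(run: Callable[..., str]) -> Callable[..., ToolResult]:
    """Wrap a raw _run() so callers get a structured result with a
    standardized error code instead of a framework-specific exception."""
    def execute(**kwargs: Any) -> ToolResult:
        try:
            return ToolResult(ok=True, value=run(**kwargs))
        except Exception as e:
            return ToolResult(ok=False, error=str(e), code="TOOL_ERROR")
    return execute

# The core logic is untouched; only the envelope around it changes.
execute = wrap_run(lambda city, units="celsius": f"20°C in {city}")
print(execute(city="Paris").value)  # -> 20°C in Paris
```

The same pattern applies regardless of what your `_run()` does internally, which is why the converter never needs to rewrite your logic.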
Step 1: Audit Your Existing LangChain Tool
Before converting, audit your tool to ensure it is ready for public consumption:
# Example: existing LangChain tool
import os

import requests
from langchain.tools import BaseTool
from pydantic import BaseModel, Field

class WeatherInput(BaseModel):
    city: str = Field(description="City name for weather lookup")
    units: str = Field(default="celsius", description="Temperature units")

class WeatherTool(BaseTool):
    name: str = "weather_lookup"
    description: str = "Get current weather for a city"
    args_schema: type[BaseModel] = WeatherInput

    def _run(self, city: str, units: str = "celsius") -> str:
        # Your existing implementation
        api_key = os.environ.get("WEATHER_API_KEY")
        response = requests.get(
            "https://api.weather.com/v1/current",
            params={"city": city, "units": units},
            headers={"Authorization": f"Bearer {api_key}"},
        )
        data = response.json()
        return f"{data['temp']}°{'C' if units == 'celsius' else 'F'} in {city}"
Check for these common issues:
- Hardcoded credentials — move all API keys to environment variables
- Missing error handling — add try/except for network calls and invalid inputs
- Undeclared dependencies — list every pip package your tool needs
- Missing type hints — add input and output type annotations
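The checklist items can be sketched together in one audited version of the weather lookup: credentials from the environment, explicit error handling, and full type hints. The HTTP call is injected as a callable so the logic can be tested offline; the function and parameter names here are illustrative, not part of any SDK:

```python
# Audited shape of the weather lookup: env-based key, explicit error
# handling, type hints. `fetch` is injected so no live API is needed;
# all names in this sketch are illustrative.
import os
from typing import Callable, Dict

def lookup_weather(city: str, fetch: Callable[[str, Dict[str, str]], dict],
                   units: str = "celsius") -> str:
    api_key = os.environ.get("WEATHER_API_KEY")  # never hardcode credentials
    if not api_key:
        raise RuntimeError("WEATHER_API_KEY is not set")
    try:
        data = fetch("https://api.weather.com/v1/current",
                     {"city": city, "units": units, "key": api_key})
    except OSError as e:  # surface network failures as one clear error
        raise RuntimeError(f"weather API unreachable: {e}") from e
    if "temp" not in data:  # guard against unexpected response shapes
        raise ValueError(f"unexpected API response: {data!r}")
    return f"{data['temp']}°{'C' if units == 'celsius' else 'F'} in {city}"

os.environ.setdefault("WEATHER_API_KEY", "test-key")
print(lookup_weather("Paris", fetch=lambda url, params: {"temp": 20}))
# -> 20°C in Paris
```

A tool in this shape converts cleanly because every failure mode is already explicit before the ANP wrapper adds its error codes.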
Step 2: Install the AgentNode CLI
# Install the CLI and authenticate
pip install agentnode-cli
agentnode auth login
The CLI includes an import wizard specifically designed for LangChain tools. For a comprehensive walkthrough of the import process including MCP tools, see our tutorial on how to import LangChain and MCP tools to AgentNode.
Step 3: Run the Automatic Converter
The AgentNode CLI can auto-detect LangChain BaseTool subclasses and generate the ANP wrapper:
# Point the converter at your tool file
agentnode import langchain --file tools/weather.py
# Output:
# Detected LangChain tool: weather_lookup
# Generating ANP package structure...
# Created: agentnode_weather_lookup/
# ├── anp.toml # Package metadata
# ├── tool.py # Wrapped tool logic
# ├── schemas.py # Input/output schemas
# ├── tests/
# │ └── test_tool.py # Auto-generated tests
# └── README.md # Auto-generated docs
What the Converter Does
- Parses your BaseTool subclass and extracts the name, description, and args_schema
- Converts the Pydantic args_schema to JSON Schema for the ANP input_schema
- Wraps your _run() method in an ANP-compatible execute() function
- Generates an anp.toml with metadata pre-filled from your tool class
- Creates test scaffolding based on your input schema
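The schema conversion in the second step is worth seeing concretely. This is a hand-written illustration of what the generated input_schema for WeatherInput plausibly looks like, not the converter's literal output, paired with a tiny required-fields check:

```python
# Hand-written illustration of the Pydantic -> JSON Schema mapping
# for WeatherInput. This is what the generated input_schema plausibly
# looks like, not the converter's literal output.
weather_input_schema = {
    "type": "object",
    "properties": {
        "city": {"type": "string",
                 "description": "City name for weather lookup"},
        "units": {"type": "string",
                  "description": "Temperature units",
                  "default": "celsius"},
    },
    "required": ["city"],  # fields without defaults become required
}

def missing_fields(payload: dict, schema: dict) -> list:
    """Tiny required-fields check, standing in for a full JSON Schema
    validator."""
    return [f for f in schema["required"] if f not in payload]

print(missing_fields({"units": "fahrenheit"}, weather_input_schema))
# -> ['city']
```

Note how the Pydantic default on units both sets the JSON Schema default and keeps the field out of required — the same rule the converter applies.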
Step 4: Review and Customize the Generated Package
The auto-generated package needs your review. Open anp.toml and verify the metadata:
[package]
name = "weather-lookup"
version = "1.0.0"
description = "Get current weather for a city with temperature unit conversion"
authors = ["your-username"]
category = "data"
tags = ["weather", "api", "geolocation"]
[tool]
entrypoint = "tool.py:execute"
input_schema = "schemas.py:WeatherInput"
output_schema = "schemas.py:WeatherOutput"
[dependencies]
requests = ">=2.28.0"
[env]
WEATHER_API_KEY = { required = true, description = "API key for weather service" }
Review the wrapped tool logic in tool.py:
import requests
from agentnode_sdk import ToolContext, ToolResult

from .schemas import WeatherInput, WeatherOutput

def execute(input: WeatherInput, context: ToolContext) -> ToolResult:
    """Get current weather for a city."""
    try:
        api_key = context.get_secret("WEATHER_API_KEY")
        response = requests.get(
            "https://api.weather.com/v1/current",
            params={"city": input.city, "units": input.units},
            headers={"Authorization": f"Bearer {api_key}"},
            timeout=10,
        )
        response.raise_for_status()
        data = response.json()
        return ToolResult.success(
            WeatherOutput(
                temperature=data["temp"],
                units=input.units,
                city=input.city,
                description=data.get("description", ""),
            )
        )
    except requests.RequestException as e:
        return ToolResult.error(f"Weather API request failed: {e}", code="API_ERROR")
    except KeyError as e:
        return ToolResult.error(f"Unexpected API response format: {e}", code="PARSE_ERROR")
Notice the key differences: secrets are accessed through context.get_secret() instead of os.environ, there is a structured ToolResult return type, and error handling uses standardized error codes.
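You can exercise the context pattern locally before running the generated test suite. FakeContext and the trimmed-down execute() below are illustrative stand-ins, not agentnode_sdk types:

```python
# Illustrative stub of the registry-managed secret store, for local
# experimentation. FakeContext and this trimmed execute() are
# stand-ins, not agentnode_sdk types.
class FakeContext:
    def __init__(self, secrets: dict):
        self._secrets = secrets

    def get_secret(self, name: str) -> str:
        if name not in self._secrets:
            raise KeyError(f"secret not configured: {name}")
        return self._secrets[name]

def execute(city: str, context: FakeContext) -> dict:
    # Secrets come through the context, never from os.environ directly.
    token = context.get_secret("WEATHER_API_KEY")
    return {"city": city, "authorization": f"Bearer {token}"}

result = execute("Paris", FakeContext({"WEATHER_API_KEY": "test-key"}))
print(result["authorization"])  # -> Bearer test-key
```

Swapping the stub for the real context is the only change between this experiment and the verified tool, which is exactly what makes the pattern easy to test.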
Step 5: Test the Converted Tool
Run the auto-generated tests and add your own:
# Run the auto-generated tests
agentnode test
# Run with verbose output
agentnode test --verbose
# Test against the sandbox (simulates the verification environment)
agentnode test --sandbox
The sandbox test is critical — it runs your tool in the same isolated environment that the verification pipeline will use. If your tool passes sandbox testing locally, it will almost certainly pass verification during publishing.
Step 6: Publish to AgentNode
# Dry run first — checks everything without publishing
agentnode publish --dry-run
# Publish for real
agentnode publish
Publishing triggers the 4-step verification pipeline. You will receive a notification when verification completes (typically 5-15 minutes). For a complete guide on the publishing process including tips for passing verification on the first try, see our tutorial on how to publish your first ANP package.
Once published, your tool is available at your publisher dashboard and discoverable through AgentNode search.
Step 7: Set Up Monetization (Optional)
# Enable monetization for your tool
agentnode monetize enable --package weather-lookup
# Set pricing
agentnode monetize pricing --package weather-lookup --per-invocation 0.001
Monetization is optional but available for any verified tool. You can also use the web-based import tool if you prefer a graphical interface.
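To sanity-check a price point, the arithmetic is simple. The 70% publisher share below is an assumed figure for illustration only; check your publisher dashboard for the actual split:

```python
# Back-of-the-envelope revenue at the per-invocation price set above.
# The 70% publisher share is an assumption for illustration, not a
# documented AgentNode rate.
price_per_invocation = 0.001   # USD, from the pricing command above
monthly_invocations = 100_000
publisher_share = 0.70         # assumed split

gross = price_per_invocation * monthly_invocations
print(f"gross: ${gross:.2f}/month, payout: ${gross * publisher_share:.2f}/month")
```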
Before and After Comparison
| Metric | Before (LangChain only) | After (AgentNode) |
|---|---|---|
| Discovery | None (private repo) | Searchable by all developers |
| Verification | None | Trust score: 91/100 |
| Framework support | LangChain only | LangChain, MCP, CrewAI, ANP |
| Version management | Git tags | Registry versioning with per-version verification |
| Monetization | None | $0.001/invocation |
| Install command | pip install from git | agentnode install weather-lookup |
Batch Import: Multiple Tools at Once
If you have multiple LangChain tools in a directory, you can import them all at once:
# Import all LangChain tools found in a directory
agentnode import langchain --dir ./tools/ --batch
# Output:
# Found 7 LangChain tools:
# 1. weather_lookup (tools/weather.py)
# 2. web_scraper (tools/scraper.py)
# 3. calculator (tools/math_tools.py)
# 4. file_converter (tools/converter.py)
# ...
# Generating ANP packages for all 7 tools...
Troubleshooting Common Migration Issues
Issue: Pydantic V1 vs V2 Compatibility
If your LangChain tool uses Pydantic V1 syntax, the converter handles the translation automatically. However, if you see schema generation errors, ensure your tool works with Pydantic V2 first.
Issue: Dynamic Dependencies
LangChain tools sometimes import packages dynamically inside _run(). The converter cannot detect these — you must manually add them to the [dependencies] section in anp.toml.
Issue: Async Tools
If your LangChain tool implements _arun(), the converter creates both sync and async entrypoints. Both are tested during verification.
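A hedged sketch of what dual entrypoints can look like, with asyncio.run() bridging the sync side; the generated code's exact shape may differ:

```python
# Illustrative dual entrypoints for a tool that has _arun(): an async
# implementation plus a sync wrapper. Names and structure are
# assumptions, not the converter's literal output.
import asyncio

async def execute_async(city: str) -> str:
    # Your _arun() logic would go here.
    await asyncio.sleep(0)  # stand-in for an async API call
    return f"weather for {city}"

def execute(city: str) -> str:
    # Sync entrypoint delegating to the async implementation.
    return asyncio.run(execute_async(city))

print(execute("Paris"))  # -> weather for Paris
```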
Issue: Tool Chains
If your tool calls other LangChain tools internally, each sub-tool needs to be imported separately. The converter will flag these dependencies and suggest the correct import order.
What Happens After Import
Once your tool is published on AgentNode:
- It appears in search results when developers look for tools in your category
- Other developers can install it with a single command
- Your trust score is visible, building reputation for your publisher account
- Usage analytics show who is using your tool and how often
- You receive revenue (if monetization is enabled) for every invocation
You keep full ownership and can update, unpublish, or modify pricing at any time through the publisher dashboard.
Frequently Asked Questions
Can I import LangChain tools to AgentNode?
Yes. AgentNode provides a dedicated import tool in the CLI that automatically converts LangChain BaseTool subclasses to the AgentNode Protocol (ANP) format. The converter handles schema translation, metadata extraction, and test generation. Your core tool logic remains unchanged — the ANP wrapper provides standardized metadata and error handling around your existing code.
Will my LangChain tool still work after import?
Yes. Importing to AgentNode does not modify your original LangChain tool. The import process creates a separate ANP package that wraps your existing logic. You can continue using the original LangChain version in your projects while the ANP version is available on AgentNode for other developers. Both versions share the same core implementation.
How long does migration take?
For a single tool, the technical migration takes 15-30 minutes: about 5 minutes for the automatic conversion, 5-10 minutes for reviewing and customizing the generated package, and 5-15 minutes for verification after publishing. Batch imports of multiple tools are proportionally faster because you only set up the CLI and authentication once. The most time-consuming part is usually reviewing the generated test suite and adding edge cases.
LLM Runtime: Let the Model Handle It
If your agent uses OpenAI or Anthropic tool calling, AgentNodeRuntime handles tool registration, system prompt injection, and the tool loop automatically. The LLM discovers, installs, and runs AgentNode capabilities on its own — no hardcoded tool calls needed.
from openai import OpenAI
from agentnode_sdk import AgentNodeRuntime

runtime = AgentNodeRuntime()
result = runtime.run(
    provider="openai",
    client=OpenAI(),
    model="gpt-4o",
    messages=[{"role": "user", "content": "your task here"}],
)
print(result.content)
The Runtime registers 5 meta-tools (agentnode_capabilities, agentnode_search, agentnode_install, agentnode_run, agentnode_acquire) that let the LLM search the registry, install packages, and execute tools autonomously. Works with Anthropic too — just change provider="anthropic" and pass an Anthropic client.
See the LLM Runtime documentation for the full API reference, trust levels, and manual tool calling.