# Using AgentNode with LangChain: The Complete Integration Guide

Learn how to use AgentNode tools as LangChain tools. Install the adapter, load verified agent skills, and build chains and agents with real examples.

## Why Use AgentNode Tools in LangChain?
LangChain is one of the most widely used frameworks for building applications with large language models. It provides a powerful abstraction for chaining together LLM calls, tools, memory, and retrieval into coherent agent workflows. But building reliable tools from scratch—and keeping them maintained—is a significant time sink.
AgentNode solves this by providing a registry of verified, portable agent skills that you can install and use in any framework. The agentnode-langchain adapter bridges AgentNode packages directly into LangChain's tool interface, so you get production-ready capabilities without writing integration glue yourself.
In this guide, you will learn how to install the adapter, load AgentNode tools as LangChain tools, use them in chains and agents, and build a practical workflow that combines multiple skills.
## Prerequisites

Before you begin, make sure you have:

- Python 3.10 or newer
- A working LangChain installation (`pip install langchain langchain-openai`)
- An OpenAI API key (or another LLM provider configured for LangChain)
- The AgentNode SDK installed (`pip install agentnode-sdk`)
## Step 1: Install the AgentNode LangChain Adapter

The adapter is a separate Python package that wraps AgentNode tools in LangChain's `BaseTool` interface. Install it alongside the SDK:

```bash
pip install agentnode-langchain
```
This installs the adapter and its dependencies. It does not install LangChain itself—you need to have LangChain already set up in your project.
## Step 2: Install an AgentNode Package

You need at least one AgentNode package installed locally. Let's start with a practical example: a PDF extraction tool.

```bash
# Search for PDF tools
agentnode search "pdf extraction"

# Install a PDF reader package
agentnode install pdf-reader-pack
```

You can verify what's installed with `agentnode list`. Each installed package exposes one or more tools with defined input schemas, output schemas, and Python entrypoints.
## Step 3: Load AgentNode Tools as LangChain Tools

The adapter provides a straightforward function to convert installed AgentNode tools into LangChain-compatible tools:

```python
from agentnode_langchain import load_tools

# Load all tools from an installed package
tools = load_tools("pdf-reader-pack")

# Inspect what you got
for tool in tools:
    print(f"Tool: {tool.name}")
    print(f"Description: {tool.description}")
    print(f"Schema: {tool.args_schema.schema()}")
    print()
```

The `load_tools` function reads the package manifest, locates each tool's entrypoint, and wraps it in a LangChain `StructuredTool`. The tool's name, description, and input schema are all pulled from the AgentNode manifest, so the LLM receives accurate information about what each tool does and what arguments it expects.
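Conceptually, the manifest-to-tool mapping works like the sketch below. The field names (`name`, `description`, `input_schema`, `entrypoint`) and the example entry are illustrative assumptions, not the adapter's actual internals:

```python
# Illustrative sketch of how a manifest entry could map onto a tool spec.
# Field names here are assumptions for illustration, not the real adapter code.

def manifest_entry_to_tool_spec(entry: dict) -> dict:
    """Pull out the fields a LangChain StructuredTool needs."""
    return {
        "name": entry["name"],                # shown to the LLM as the tool name
        "description": entry["description"],  # tells the LLM when to call it
        "args_schema": entry["input_schema"], # JSON schema for the arguments
        "func": entry["entrypoint"],          # dotted path to the Python callable
    }

# A hypothetical manifest entry for the pdf-reader-pack example:
example_entry = {
    "name": "extract_text",
    "description": "Extract plain text from a PDF file.",
    "input_schema": {"type": "object", "properties": {"file_path": {"type": "string"}}},
    "entrypoint": "pdf_reader_pack.tools:extract_text",
}

spec = manifest_entry_to_tool_spec(example_entry)
print(spec["name"], "-", spec["description"])
```

Because everything the LLM sees comes from the manifest, improving a tool's description in the manifest directly improves how reliably the model selects and calls it.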
### Loading Specific Tools

If a package contains multiple tools and you only need some of them, you can filter:

```python
# Load only specific tools by name
tools = load_tools("pdf-reader-pack", tool_names=["extract_text", "extract_tables"])
```
### Loading from Multiple Packages

You can combine tools from different packages into a single list:

```python
from agentnode_langchain import load_tools

pdf_tools = load_tools("pdf-reader-pack")
csv_tools = load_tools("csv-toolkit")
all_tools = pdf_tools + csv_tools
```
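When merging packages, two packages can in principle expose tools with the same name, which would confuse the LLM. A small sketch that keeps the first occurrence of each name (it assumes only that each tool object has a `.name` attribute, as LangChain tools do):

```python
# Keep only the first tool registered under each name when merging packages.
# Assumes each tool object exposes a .name attribute, as LangChain tools do.

def dedupe_tools(tools: list) -> list:
    seen = set()
    unique = []
    for tool in tools:
        if tool.name not in seen:
            seen.add(tool.name)
            unique.append(tool)
    return unique

# Minimal stand-ins to demonstrate the behavior:
class FakeTool:
    def __init__(self, name):
        self.name = name

merged = dedupe_tools([FakeTool("extract_text"), FakeTool("read_csv"), FakeTool("extract_text")])
print([t.name for t in merged])  # ['extract_text', 'read_csv']
```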
## Step 4: Use Tools in a LangChain Agent

Now that you have LangChain-compatible tools, you can use them with any LangChain agent type. Here's a complete example using the OpenAI functions agent:

```python
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from agentnode_langchain import load_tools

# Load tools
tools = load_tools("pdf-reader-pack")

# Set up the LLM
llm = ChatOpenAI(model="gpt-4o", temperature=0)

# Create a prompt template
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant that can extract and analyze "
               "information from PDF documents. Use the available tools "
               "when the user asks about PDF content."),
    ("human", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])

# Create the agent
agent = create_openai_functions_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# Run it
result = executor.invoke({
    "input": "Extract all the text from /path/to/quarterly-report.pdf "
             "and summarize the key financial figures."
})
print(result["output"])
```

When you run this, the agent will recognize that it has a PDF extraction tool available, call it with the file path, receive the extracted text, and then use the LLM to summarize the financial figures. The `verbose=True` flag lets you see each step the agent takes.
## Step 5: Use Tools in a LangChain Chain (LCEL)

Not every use case requires a full agent. If you know exactly which tool to call and in what order, you can use LangChain Expression Language (LCEL) to build a deterministic chain:

```python
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from agentnode_langchain import load_tools

# Load specific tool
tools = load_tools("pdf-reader-pack", tool_names=["extract_text"])
extract_text = tools[0]

# Step 1: Extract text from PDF (deterministic tool call)
pdf_text = extract_text.invoke({"file_path": "/path/to/document.pdf"})

# Step 2: Feed extracted text to LLM for analysis
llm = ChatOpenAI(model="gpt-4o", temperature=0)
prompt = ChatPromptTemplate.from_messages([
    ("system", "Analyze the following document text and produce a structured summary."),
    ("human", "{text}"),
])
chain = prompt | llm | StrOutputParser()

summary = chain.invoke({"text": pdf_text})
print(summary)
```
This pattern is useful when you want predictable execution order without giving the LLM control over tool selection. It's faster, cheaper (fewer LLM calls), and easier to debug.
## Practical Example: Multi-Tool Document Processing

Let's build a more complete workflow that combines multiple AgentNode tools in a single agent. Imagine you have PDF documents and spreadsheets, and you want an agent that can handle both:

```python
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from agentnode_langchain import load_tools

# Combine tools from multiple packages
tools = (
    load_tools("pdf-reader-pack") +
    load_tools("csv-toolkit") +
    load_tools("text-summarizer")
)

print(f"Loaded {len(tools)} tools:")
for t in tools:
    print(f"  - {t.name}: {t.description[:80]}...")

llm = ChatOpenAI(model="gpt-4o", temperature=0)

prompt = ChatPromptTemplate.from_messages([
    ("system",
     "You are a document analysis assistant with access to tools for "
     "reading PDFs, processing CSV files, and summarizing text. "
     "Use the appropriate tool based on the file type and user request. "
     "Always extract the data first, then analyze it."),
    ("human", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])

agent = create_openai_functions_agent(llm, tools, prompt)
executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    max_iterations=5,
    handle_parsing_errors=True,
)

# The agent can now handle multiple file types
result = executor.invoke({
    "input": "I have a PDF report at /data/report.pdf and a CSV of sales "
             "data at /data/sales.csv. Extract key metrics from both and "
             "compare the PDF's stated revenue with the CSV totals."
})
print(result["output"])
```
The agent will decide which tools to use for each file type, call them in the right order, and synthesize the results. This is the power of combining a verified tool registry with LangChain's agent orchestration.
## Handling Async Tools

Some AgentNode packages expose asynchronous entrypoints. The adapter detects these automatically and wraps them correctly for LangChain's async interface:

```python
# Async usage works the same way
tools = load_tools("web-scraper-pack")

# In an async context
result = await executor.ainvoke({
    "input": "Scrape the pricing page at https://example.com/pricing"
})
```
If the underlying tool is async, the adapter preserves that. If it's synchronous, the adapter wraps it to work in both sync and async contexts.
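The dual sync/async wrapping can be illustrated with plain `asyncio`. This is a simplified sketch of the idea, not the adapter's actual implementation:

```python
import asyncio
import inspect

def make_dual(func):
    """Return (sync_call, async_call) wrappers around a sync or async function.

    Simplified sketch of dual wrapping; the real adapter's logic may differ.
    """
    if inspect.iscoroutinefunction(func):
        async def acall(*args, **kwargs):
            return await func(*args, **kwargs)
        def call(*args, **kwargs):
            # Only valid when no event loop is already running.
            return asyncio.run(func(*args, **kwargs))
    else:
        async def acall(*args, **kwargs):
            # Run the blocking function in a thread so the event loop isn't blocked.
            return await asyncio.to_thread(func, *args, **kwargs)
        def call(*args, **kwargs):
            return func(*args, **kwargs)
    return call, acall

def fetch(url: str) -> str:  # a synchronous tool entrypoint
    return f"fetched {url}"

call, acall = make_dual(fetch)
print(call("https://example.com"))                # sync path
print(asyncio.run(acall("https://example.com")))  # async path
```

Both paths return the same result; the async wrapper just keeps a synchronous entrypoint from blocking the event loop.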
## Checking Tool Verification Status

One advantage of using AgentNode tools over ad-hoc implementations is that every tool has a verification score. You can check this programmatically before loading:

```python
from agentnode_sdk import Client

client = Client()
info = client.package_info("pdf-reader-pack")

print(f"Verification tier: {info['verification_tier']}")
print(f"Score: {info['verification_score']}/100")
print(f"Trust level: {info['trust_level']}")
```
This lets you enforce minimum trust levels in your application. For example, you might only load tools with a verification score above 70 in production.
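A minimal gate based on that info dict might look like this. The field name follows the example above; the threshold value is up to you:

```python
MIN_SCORE = 70  # example production threshold

def meets_trust(info: dict, min_score: int = MIN_SCORE) -> bool:
    """Return True if the package's verification score clears the threshold."""
    return info.get("verification_score", 0) >= min_score

print(meets_trust({"verification_score": 85}))  # True
print(meets_trust({"verification_score": 40}))  # False
print(meets_trust({}))                          # False: missing score is untrusted
```

You could call a check like this before `load_tools()` and skip or warn on packages that fall below your threshold.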
## Troubleshooting Common Issues

### Tool not found after install

Make sure the package is installed in the same Python environment where your LangChain code runs. Run `agentnode list` from that environment to verify.

### Schema mismatch errors

If the LLM generates arguments that don't match the tool's input schema, check the schema with `agentnode info <package-slug>`. The adapter passes the schema directly to LangChain, but some LLMs may need clearer descriptions in the tool's manifest.
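To fail fast with a clearer error, you can pre-check LLM-generated arguments against the tool's JSON schema before invoking it. A minimal stdlib-only sketch (a real implementation might use the `jsonschema` package instead):

```python
def check_args(args: dict, schema: dict) -> list[str]:
    """Return human-readable problems; an empty list means the args look valid.

    Checks only required keys and a few primitive types -- a deliberate
    simplification of full JSON Schema validation.
    """
    problems = []
    type_map = {"string": str, "integer": int, "number": (int, float), "boolean": bool}
    for key in schema.get("required", []):
        if key not in args:
            problems.append(f"missing required argument: {key}")
    for key, spec in schema.get("properties", {}).items():
        expected = type_map.get(spec.get("type"))
        if key in args and expected and not isinstance(args[key], expected):
            problems.append(f"{key} should be {spec['type']}, got {type(args[key]).__name__}")
    return problems

# Hypothetical schema matching the pdf-reader example:
schema = {
    "type": "object",
    "required": ["file_path"],
    "properties": {"file_path": {"type": "string"}, "max_pages": {"type": "integer"}},
}

print(check_args({"file_path": "/data/report.pdf"}, schema))  # []
print(check_args({"max_pages": "three"}, schema))             # two problems
```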
### Import errors on load

Some AgentNode packages have system dependencies (like poppler-utils for PDF processing). Check the package's environment requirements with `agentnode info <package-slug>` and install any missing system packages.
## Summary

The AgentNode LangChain adapter gives you access to a growing registry of verified, portable tools without writing integration code. The key steps are: install the adapter with `pip install agentnode-langchain`, install packages with `agentnode install`, and load them with `load_tools()`. From there, they work like any other LangChain tool—in agents, chains, or direct invocations.
Every tool you load through AgentNode has been through a verification pipeline that checks installation, imports, runtime behavior, and contract compliance. This means fewer surprises in production compared to tools you build and maintain yourself.
## LLM Runtime: Let the Model Handle It

If your agent uses OpenAI or Anthropic tool calling, `AgentNodeRuntime` handles tool registration, system prompt injection, and the tool loop automatically. The LLM discovers, installs, and runs AgentNode capabilities on its own — no hardcoded tool calls needed.

```python
from openai import OpenAI
from agentnode_sdk import AgentNodeRuntime

runtime = AgentNodeRuntime()

result = runtime.run(
    provider="openai",
    client=OpenAI(),
    model="gpt-4o",
    messages=[{"role": "user", "content": "your task here"}],
)
print(result.content)
```

The Runtime registers five meta-tools (`agentnode_capabilities`, `agentnode_search`, `agentnode_install`, `agentnode_run`, `agentnode_acquire`) that let the LLM search the registry, install packages, and execute tools autonomously. It works with Anthropic too — just change `provider="anthropic"` and pass an Anthropic client.
See the LLM Runtime documentation for the full API reference, trust levels, and manual tool calling.