Getting started
Install AgentNode, set your permissions, and your agent learns new skills on its own. No manual tool wiring, no dependency management.
Setup in 3 steps
Install the SDK
One command. Python 3.10+ required. No account needed.
pip install agentnode-sdk
Set up AgentNode
Run the setup wizard to configure permissions and trust. Or skip it — sensible defaults work out of the box.
agentnode setup
Find, install, and run
Search the registry, install a skill, and run it — all from the CLI. Or use the Python SDK for programmatic access.
# CLI
agentnode search pdf
agentnode install pdf-reader-pack
agentnode run pdf-reader-pack --input '{"file_path":"report.pdf"}'
# Python SDK
from agentnode_sdk import AgentNodeClient
client = AgentNodeClient()
client.resolve_and_install(["pdf_extraction"])
How your agent learns skills
When your agent encounters something it can't do, AgentNode handles the rest automatically.
Detect
Agent fails at a task. AgentNode analyzes the error and identifies the missing capability.
Resolve
AgentNode searches its registry and scores matches by capability fit, trust level, and compatibility.
Install & retry
The best-matching skill is installed, verified, and the agent retries the task — all in one call.
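The three steps above amount to a detect-resolve-retry loop. Here is a minimal conceptual sketch of that loop in plain Python — every name in it (the registry dict, the `installed` table) is a hypothetical stand-in, not the AgentNode SDK's internals:

```python
# Conceptual sketch of detect -> resolve -> install & retry.
# Names here are illustrative, not SDK API.
def smart_run_sketch(task, registry, installed):
    try:
        return task(installed)
    except KeyError as gap:                 # Detect: the task hit a missing capability
        capability = gap.args[0]
        skill = registry.get(capability)    # Resolve: find a matching skill
        if skill is None:
            raise                           # no match in the registry
        installed[capability] = skill       # Install the best match...
        return task(installed)              # ...and retry the task once
```

The real `smart_run()` below wraps this behavior in one call and adds scoring, trust checks, and policy enforcement.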
# One call — detect gap, install skill, retry automatically
from agentnode_sdk import AgentNodeClient
client = AgentNodeClient()
result = client.smart_run(
lambda: process_pdf("report.pdf"),
auto_upgrade_policy="safe"
)
# result.upgraded == True, result.installed_slug == "pdf-reader-pack"
For LLM agents: the Runtime
If your agent uses tool calling, AgentNodeRuntime handles everything — tool registration, system prompt, and the tool loop. Tested across 22 models. Native support for OpenAI, Anthropic, and Gemini. Any OpenAI-compatible provider (Mistral, DeepSeek, Qwen, Llama) works via OpenRouter.
smart_run()
For error-recovery agents. Wrap your logic, and AgentNode detects failures, installs missing skills, and retries.
AgentNodeRuntime
For LLM tool-calling agents. The model decides what to search, install, and run via 5 meta-tools.
from openai import OpenAI
from agentnode_sdk import AgentNodeRuntime
runtime = AgentNodeRuntime()
result = runtime.run(
provider="openai",
client=OpenAI(),
model="gpt-4o",
messages=[{"role": "user", "content": "Extract text from report.pdf"}],
)
print(result.content)
Works with Anthropic, Gemini, and any OpenAI-compatible provider — see the full Runtime docs.
Auto-upgrade policies
You control how much autonomy your agent has. Set the policy once — it applies to all automatic installations.
"safe"defaultAuto-installs verified and trusted packages. Best for most setups — your agent learns new skills without manual approval, but only from reviewed sources.
"strict"Only installs trusted or curated packages. Use this when security is critical and you want the tightest control over what your agent can install.
"off"Detection only — the agent identifies missing skills but never installs anything automatically. You decide what to add.
Permissions
Every skill declares what it needs. AgentNode enforces these permissions at install time and at runtime. Skills that request more than their trust level allows are blocked.
none / read / write / full: Controls whether a skill can make HTTP requests.
none / read / read_write: Controls file access for the skill.
none / local / remote: Controls whether a skill can execute code.
none / metadata / read / read_write: Controls what data the skill can see.
Trusted and curated skills run directly in your process for speed. Verified and unverified skills run in an isolated subprocess with a restricted environment.
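Each permission category is an ordered scale, and "request more than allowed" means requesting a level above the limit. A minimal sketch of that check — the category names (`network`, `files`, `execution`, `data`) and the `is_allowed` helper are hypothetical labels for illustration, not SDK API:

```python
# Illustrative permission check: each category is an ordered scale, and a
# requested level above the allowed limit is blocked. Category names are
# hypothetical stand-ins for the four scales described above.
SCALES = {
    "network":   ["none", "read", "write", "full"],        # HTTP requests
    "files":     ["none", "read", "read_write"],           # file access
    "execution": ["none", "local", "remote"],              # code execution
    "data":      ["none", "metadata", "read", "read_write"],  # data visibility
}

def is_allowed(category: str, requested: str, limit: str) -> bool:
    """True if the requested level is at or below the allowed limit."""
    scale = SCALES[category]
    return scale.index(requested) <= scale.index(limit)
```

Under this model, a skill whose trust level caps it at `files: read` would be blocked if its manifest declares `read_write`.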
Works with your stack
Native runtime integration for OpenAI, Anthropic, and Gemini. Framework adapters for LangChain, CrewAI, and MCP. Compatible with Mistral, DeepSeek, Qwen, Ollama, and any OpenAI-compatible provider.
Native Runtime Integration
Framework Adapters
Compatible (standard Python)
Ready to go?
Install the SDK, set your policy, and let your agent figure out the rest.