
Using AgentNode with CrewAI: Build Powerful Agent Crews

Connect AgentNode verified tools to CrewAI agents and crews. Step-by-step guide covering installation, tool loading, crew workflows, and multi-tool examples.

By agentnode

What Is CrewAI and Why Use AgentNode with It?

CrewAI is a framework for orchestrating multiple AI agents that collaborate on tasks. Each agent in a crew has a specific role, a goal, and a set of tools it can use. Agents work together sequentially or in parallel, passing results between tasks to complete complex workflows.

The challenge with CrewAI is tooling: you need to build or find reliable tools for each agent. AgentNode provides a registry of verified agent skills that you can plug directly into CrewAI agents. Instead of writing custom tool wrappers for file processing, data extraction, or API integrations, you install a verified package and hand it to your crew.

Prerequisites

  • Python 3.10 or newer
  • CrewAI installed (pip install crewai)
  • An LLM API key (OpenAI, Anthropic, or another provider supported by CrewAI)
  • The AgentNode SDK (pip install agentnode-sdk)

Installing the Adapter

AgentNode provides a dedicated adapter that converts AgentNode tools into CrewAI-compatible tool objects:

pip install agentnode-crewai

This package depends on both agentnode-sdk and crewai, so make sure both are already installed in your environment.

Installing AgentNode Packages

Install the packages you want your agents to use. For this tutorial, we will use a few different tools:

# Search the registry
agentnode search "web scraping"
agentnode search "text analysis"

# Install packages
agentnode install web-scraper-pack
agentnode install text-summarizer
agentnode install csv-toolkit

Verify your installations:

agentnode list

Loading Tools for CrewAI Agents

The adapter provides a load_tools function that returns CrewAI-compatible tools from an installed AgentNode package:

from agentnode_crewai import load_tools

# Load all tools from a package
scraper_tools = load_tools("web-scraper-pack")
summarizer_tools = load_tools("text-summarizer")

# Check what's available
for tool in scraper_tools:
    print(f"{tool.name}: {tool.description}")

Each returned tool is a CrewAI Tool object with the correct name, description, and argument schema populated from the AgentNode manifest. The LLM sees these descriptions when deciding which tool to use.
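
To make that mapping concrete, here is a minimal sketch of how a manifest entry might translate into the fields a CrewAI tool exposes. The manifest shape shown is an assumption for illustration only; the real AgentNode manifest format may differ.

```python
# Hypothetical manifest entry -- the field names here are illustrative,
# not the actual AgentNode manifest schema.
manifest_entry = {
    "name": "scrape_page",
    "description": "Fetch a URL and return its main text content.",
    "args": {"url": {"type": "string", "required": True}},
}

def to_tool_meta(entry):
    """Extract the fields a CrewAI tool needs: name, description, schema."""
    return {
        "name": entry["name"],
        "description": entry["description"],
        "args_schema": entry.get("args", {}),
    }

meta = to_tool_meta(manifest_entry)
print(meta["name"])  # scrape_page
```

Because the name and description come straight from the manifest, a vague manifest description produces a tool the LLM struggles to select correctly.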

Building Your First Crew

Let's build a research crew with two agents: a researcher who gathers information and an analyst who synthesizes it.

from crewai import Agent, Task, Crew, Process
from agentnode_crewai import load_tools

# Load tools
scraper_tools = load_tools("web-scraper-pack")
summarizer_tools = load_tools("text-summarizer")

# Define agents with their tools
researcher = Agent(
    role="Research Specialist",
    goal="Gather comprehensive information on the given topic "
         "from web sources",
    backstory="You are an experienced researcher who excels at "
              "finding relevant information from web pages and "
              "extracting key data points.",
    tools=scraper_tools,
    verbose=True,
)

analyst = Agent(
    role="Analysis Specialist",
    goal="Synthesize research findings into clear, actionable insights",
    backstory="You are a skilled analyst who takes raw research data "
              "and distills it into concise summaries with key "
              "takeaways and recommendations.",
    tools=summarizer_tools,
    verbose=True,
)

# Define tasks
research_task = Task(
    description="Research the current state of AI agent frameworks. "
                "Scrape at least 3 relevant web pages and collect "
                "key information about features, adoption, and trends.",
    expected_output="A detailed collection of facts, quotes, and data "
                    "points from web sources about AI agent frameworks.",
    agent=researcher,
)

analysis_task = Task(
    description="Analyze the research findings and produce a structured "
                "summary. Identify the top 3 trends, compare major "
                "frameworks, and provide recommendations.",
    expected_output="A structured analysis with sections for trends, "
                    "framework comparison, and recommendations.",
    agent=analyst,
)

# Assemble the crew
crew = Crew(
    agents=[researcher, analyst],
    tasks=[research_task, analysis_task],
    process=Process.sequential,
    verbose=True,
)

# Run it
result = crew.kickoff()
print(result)

In this crew, the researcher agent uses the web scraper tools to gather data, then the analyst agent uses the summarizer tools to process the findings. CrewAI handles the orchestration, passing the researcher's output as context to the analyst's task.
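
To reuse the same crew for different topics, CrewAI supports placeholder interpolation: write {topic} in a task description and pass kickoff(inputs={"topic": ...}) at run time. The substitution it performs is equivalent to Python's str.format, sketched here on the research task's description:

```python
# The research task description, rewritten with a {topic} placeholder.
description = ("Research the current state of {topic}. "
               "Scrape at least 3 relevant web pages and collect "
               "key information about features, adoption, and trends.")

# crew.kickoff(inputs={"topic": "AI agent frameworks"}) fills the
# placeholder before the task runs; the substitution amounts to:
filled = description.format(topic="AI agent frameworks")
print(filled)
```

This keeps task definitions generic while the concrete subject arrives with each run.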

Multi-Tool Agents

Agents can have tools from multiple AgentNode packages. This is useful when an agent's role requires diverse capabilities:

from crewai import Agent
from agentnode_crewai import load_tools

# Combine tools from different packages
all_tools = (
    load_tools("web-scraper-pack") +
    load_tools("csv-toolkit") +
    load_tools("text-summarizer")
)

data_agent = Agent(
    role="Data Processing Specialist",
    goal="Collect, process, and summarize data from various sources",
    backstory="You handle web scraping, CSV processing, and text "
              "analysis. Choose the right tool for each data source.",
    tools=all_tools,
    verbose=True,
)

The agent will see all available tools and select the appropriate one based on the task. If it needs to scrape a webpage, it picks the scraper. If it receives a CSV file, it uses the CSV toolkit.

A Real-World Workflow: Competitive Analysis Crew

Here's a more complete example that demonstrates a three-agent crew performing competitive analysis:

from crewai import Agent, Task, Crew, Process
from agentnode_crewai import load_tools

# Load tools for each role
scraper_tools = load_tools("web-scraper-pack")
csv_tools = load_tools("csv-toolkit")
summarizer_tools = load_tools("text-summarizer")

# Agent 1: Data Collector
collector = Agent(
    role="Data Collector",
    goal="Gather pricing and feature data from competitor websites",
    backstory="You systematically collect structured data from "
              "web pages, focusing on pricing tiers, feature lists, "
              "and product specifications.",
    tools=scraper_tools,
    verbose=True,
)

# Agent 2: Data Organizer
organizer = Agent(
    role="Data Organizer",
    goal="Structure collected data into a clean comparison format",
    backstory="You take raw collected data and organize it into "
              "structured formats suitable for analysis. You are "
              "meticulous about data quality.",
    tools=csv_tools,
    verbose=True,
)

# Agent 3: Strategy Analyst
strategist = Agent(
    role="Strategy Analyst",
    goal="Produce actionable competitive intelligence from "
         "organized data",
    backstory="You are a senior strategist who identifies competitive "
              "advantages, market gaps, and strategic opportunities "
              "from structured competitive data.",
    tools=summarizer_tools,
    verbose=True,
)

# Tasks
collect_task = Task(
    description="Visit the pricing pages of these three competitors: "
                "competitor-a.com/pricing, competitor-b.com/pricing, "
                "competitor-c.com/pricing. Extract pricing tiers, "
                "features per tier, and any usage limits.",
    expected_output="Raw pricing and feature data from all three "
                    "competitor websites.",
    agent=collector,
)

organize_task = Task(
    description="Take the collected competitor data and organize it "
                "into a structured comparison. Create a clear "
                "feature-by-feature comparison across all competitors.",
    expected_output="A structured comparison table showing features, "
                    "pricing, and limits for each competitor.",
    agent=organizer,
)

analyze_task = Task(
    description="Analyze the organized competitive data. Identify "
                "where our product has advantages, where competitors "
                "are stronger, and recommend 3 strategic actions.",
    expected_output="A competitive analysis report with strengths, "
                    "weaknesses, opportunities, and 3 specific "
                    "strategic recommendations.",
    agent=strategist,
)

crew = Crew(
    agents=[collector, organizer, strategist],
    tasks=[collect_task, organize_task, analyze_task],
    process=Process.sequential,
    verbose=True,
)

result = crew.kickoff()
print(result)

Each agent in this crew has a focused role with tools matched to its responsibilities. The sequential process ensures data flows from collection to organization to analysis in the right order.

Filtering Tools by Verification Score

Before loading tools into production crews, you may want to check their verification status:

from agentnode_sdk import Client

client = Client()

# Check verification before loading
for slug in ["web-scraper-pack", "csv-toolkit", "text-summarizer"]:
    info = client.package_info(slug)
    tier = info.get("verification_tier", "unverified")
    score = info.get("verification_score", 0)
    print(f"{slug}: {tier} ({score}/100)")
    if score < 50:
        print(f"  WARNING: Low verification score for {slug}")

This is especially important for production deployments where you want confidence that the tools have been tested in AgentNode's verification sandbox.
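
The check above can be folded into a small gate that refuses to load low-scoring packages. The threshold of 50 mirrors the warning in the snippet above but is otherwise arbitrary; the payload shown is a hypothetical package_info result, and the commented-out load_tools call marks where a real workflow would proceed.

```python
def is_loadable(info, min_score=50):
    """Return True if a package's verification score meets the threshold."""
    return info.get("verification_score", 0) >= min_score

# Hypothetical package_info payload for illustration:
info = {"verification_tier": "verified", "verification_score": 82}

if is_loadable(info):
    # tools = load_tools(slug)  # safe to load in a real workflow
    print("ok to load")
else:
    print("skipping: verification score too low")
```

Unverified packages return no score, so the default of 0 makes them fail the gate rather than slip through.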

Tips for Effective CrewAI + AgentNode Workflows

  • Match tools to roles. Give each agent only the tools it needs. An agent with 20 tools will spend more tokens deciding which one to use and may make worse choices than an agent with 3 focused tools.
  • Use sequential process for dependent tasks. If task B needs task A's output, use Process.sequential. Reserve Process.hierarchical for workflows where a manager agent should decide delegation dynamically; it adds an extra manager LLM and more token overhead.
  • Check tool descriptions. The LLM selects tools based on their descriptions. If an agent consistently picks the wrong tool, check whether the tool descriptions in the AgentNode manifest are clear enough.
  • Set max iterations. Agents can get stuck in loops. Set reasonable iteration limits on your crew or agents to prevent runaway token usage.
  • Test tools individually first. Before building a full crew, test each AgentNode tool independently with tool.run() to verify it works with your data.
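
The max-iterations tip, sketched generically: CrewAI's Agent accepts a max_iter argument for this, but the underlying idea is just a bounded loop around repeated attempts. The step function and cap below are illustrative, not CrewAI internals.

```python
def run_with_cap(step, max_iter=5):
    """Call step(i) until it returns a non-None result or the cap is hit."""
    for i in range(max_iter):
        result = step(i)
        if result is not None:
            return result
    raise RuntimeError(f"agent loop exceeded {max_iter} iterations")

# A step that "succeeds" on the third attempt:
print(run_with_cap(lambda i: "done" if i == 2 else None))  # done
```

Without a cap, a confused agent retries indefinitely and burns tokens; with one, the failure surfaces quickly and loudly.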

Summary

The AgentNode CrewAI adapter lets you equip your agent crews with verified, production-ready tools from the AgentNode registry. Install with pip install agentnode-crewai, load tools with load_tools("package-slug"), and assign them to agents. Each tool has been through a verification pipeline, so you can focus on designing your crew's workflow rather than debugging tool implementations.

LLM Runtime: Let the Model Handle It

If your agent uses OpenAI or Anthropic tool calling, AgentNodeRuntime handles tool registration, system prompt injection, and the tool loop automatically. The LLM discovers, installs, and runs AgentNode capabilities on its own — no hardcoded tool calls needed.

from openai import OpenAI
from agentnode_sdk import AgentNodeRuntime

runtime = AgentNodeRuntime()

result = runtime.run(
    provider="openai",
    client=OpenAI(),
    model="gpt-4o",
    messages=[{"role": "user", "content": "your task here"}],
)
print(result.content)

The Runtime registers 5 meta-tools (agentnode_capabilities, agentnode_search, agentnode_install, agentnode_run, agentnode_acquire) that let the LLM search the registry, install packages, and execute tools autonomously. Works with Anthropic too — just change provider="anthropic" and pass an Anthropic client.

See the LLM Runtime documentation for the full API reference, trust levels, and manual tool calling.

#crewai #integration #tutorial #agent-skills #python