Using AgentNode with AutoGen and Semantic Kernel
AutoGen and Semantic Kernel are two of Microsoft's most powerful AI frameworks. This integration guide shows how to load AgentNode tools as AutoGen functions and Semantic Kernel plugins, with full code examples for both.
Microsoft has invested heavily in the AI agent space with two complementary frameworks: AutoGen for multi-agent conversations and Semantic Kernel for enterprise AI application development. Both frameworks need tools — functions that agents can call to interact with the world. AgentNode provides a registry of verified, portable tools that work with both frameworks out of the box.
This guide covers everything you need to integrate AgentNode tools into AutoGen and Semantic Kernel projects. We will walk through installation, configuration, and complete code examples for both frameworks, including multi-agent scenarios where agents share tools from a common registry.
If you have already worked through the LangChain integration guide or the CrewAI integration, the patterns will be familiar. AgentNode's cross-framework design means the core concepts — resolve, install, load — are identical. Only the framework-specific adapter code differs.
Why AgentNode + Microsoft Frameworks?
AutoGen and Semantic Kernel each solve different problems, but they share a common limitation: tool discovery and management is manual. Developers hand-code tool functions, maintain them alongside application code, and have no standardized way to share tools between projects or teams.
AgentNode addresses this by providing:
- A shared tool registry — instead of writing tool functions from scratch, resolve verified capabilities from a catalog of hundreds of tools.
- Cross-framework portability — tools installed from AgentNode work in AutoGen, Semantic Kernel, LangChain, CrewAI, and vanilla Python without modification.
- Verification and trust — every tool has a public trust score, so you know whether a tool is Gold-verified or unverified before your agent uses it.
- Version management — the SDK handles dependency resolution, version selection, and updates.
For teams using Microsoft's AI stack, this means your AutoGen agents and Semantic Kernel applications can share the same tool catalog, reducing duplication and ensuring consistent quality across projects.
Prerequisites
Before starting, ensure you have the following installed:
# Python 3.9 or higher
python --version
# Install the AgentNode SDK
pip install agentnode-sdk
# Install AutoGen (for AutoGen examples)
pip install pyautogen
# Install Semantic Kernel (for SK examples)
pip install semantic-kernel
You will also need an AgentNode account (free) and an API key for your LLM provider (OpenAI, Azure OpenAI, or Anthropic).
AutoGen Integration
AutoGen is designed for multi-agent conversations where multiple AI agents collaborate to solve tasks. Agents in AutoGen can be equipped with functions that they call during conversations. AgentNode tools map naturally to AutoGen functions.
Basic Setup: Loading a Single Tool
The simplest integration loads a single AgentNode tool and registers it as an AutoGen function:
from agentnode_sdk import AgentNodeClient, load_tool
import autogen
# Step 1: Resolve and install the tool from AgentNode
client = AgentNodeClient()
client.resolve_and_install(["web-scraping"])
# Step 2: Load the tool
scraper = load_tool("web-scraper")
# Step 3: Create an AutoGen function wrapper
def scrape_webpage(url: str, format: str = "markdown") -> str:
    """Scrape a webpage and return its content."""
    result = scraper.run({"url": url, "format": format})
    return result["content"]
# Step 4: Configure AutoGen agents
config_list = [{"model": "gpt-4", "api_key": "your-key"}]
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={
        "config_list": config_list,
        "functions": [
            {
                "name": "scrape_webpage",
                "description": "Scrape a webpage and return its content",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "url": {"type": "string", "description": "The URL to scrape"},
                        "format": {"type": "string", "enum": ["text", "markdown", "html"]}
                    },
                    "required": ["url"]
                }
            }
        ]
    }
)
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    function_map={"scrape_webpage": scrape_webpage}
)
# Step 5: Start the conversation
user_proxy.initiate_chat(
    assistant,
    message="Scrape https://example.com and summarize the content."
)
The key insight here is that the AgentNode tool's typed schema maps directly to AutoGen's function parameter specification. You can even automate this mapping using the tool's built-in schema:
# Automatic schema mapping
tool = load_tool("web-scraper")
function_spec = {
    "name": tool.name,
    "description": tool.description,
    "parameters": tool.input_schema
}
Multi-Agent Conversations with Shared Tools
AutoGen's real power is multi-agent conversations. Multiple agents collaborate, and each can have different tools. With AgentNode, you can give each agent specialized capabilities from the same verified registry:
from agentnode_sdk import AgentNodeClient, load_tool
import autogen
client = AgentNodeClient()
# Resolve tools for different agent roles
client.resolve_and_install([
    "web-scraping",
    "text-summarization",
    "sentiment-analysis",
    "data-visualization"
])
# Load tools
scraper = load_tool("web-scraper")
summarizer = load_tool("text-summarizer")
sentiment = load_tool("sentiment-analyzer")
visualizer = load_tool("data-visualizer")
# Wrapper functions
def scrape(url: str) -> str:
    return scraper.run({"url": url})["content"]

def summarize(text: str) -> str:
    return summarizer.run({"text": text})["summary"]

def analyze_sentiment(text: str) -> dict:
    return sentiment.run({"text": text})

def create_chart(data: dict, chart_type: str) -> str:
    return visualizer.run({"data": data, "type": chart_type})["image_url"]
# Research agent — scrapes and summarizes
researcher = autogen.AssistantAgent(
    name="researcher",
    system_message="You are a research agent. Use scrape_webpage to gather information and summarize_text to condense it.",
    llm_config={
        "config_list": config_list,
        "functions": [
            {"name": "scrape_webpage", "description": "Scrape a URL", "parameters": scraper.input_schema},
            {"name": "summarize_text", "description": "Summarize text", "parameters": summarizer.input_schema}
        ]
    }
)
# Analyst agent — analyzes sentiment and creates visualizations
analyst = autogen.AssistantAgent(
    name="analyst",
    system_message="You are an analyst. Use analyze_sentiment and create_chart to analyze and visualize data.",
    llm_config={
        "config_list": config_list,
        "functions": [
            {"name": "analyze_sentiment", "description": "Analyze text sentiment", "parameters": sentiment.input_schema},
            {"name": "create_chart", "description": "Create a chart", "parameters": visualizer.input_schema}
        ]
    }
)
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    function_map={
        "scrape_webpage": scrape,
        "summarize_text": summarize,
        "analyze_sentiment": analyze_sentiment,
        "create_chart": create_chart
    }
)
# Group chat with specialized agents
groupchat = autogen.GroupChat(
    agents=[user_proxy, researcher, analyst],
    messages=[],
    max_round=12
)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config={"config_list": config_list})
user_proxy.initiate_chat(
    manager,
    message="Research the latest trends in AI agent tools, analyze the sentiment across different sources, and create a visualization."
)
This pattern — specialized agents with domain-specific tools from a shared registry — is one of AutoGen's most powerful capabilities, and AgentNode makes it practical by providing the verified tool catalog that agents draw from.
Semantic Kernel Integration
Semantic Kernel takes a different approach from AutoGen. Instead of multi-agent conversations, SK focuses on building AI applications with a plugin architecture. Plugins are collections of functions that the kernel can invoke to complete tasks. AgentNode tools map to SK plugins.
Basic Setup: Loading a Tool as a Plugin
import semantic_kernel as sk
from semantic_kernel.functions import kernel_function, KernelArguments
from agentnode_sdk import AgentNodeClient, load_tool

# Step 1: Initialize AgentNode and resolve tools
client = AgentNodeClient()
client.resolve_and_install(["text-summarization"])
summarizer = load_tool("text-summarizer")

# Step 2: Create a Semantic Kernel plugin class
class SummarizerPlugin:
    """A plugin that summarizes text using an AgentNode verified skill."""

    @kernel_function(
        name="summarize",
        description="Summarize a long text into key points"
    )
    def summarize(self, text: str, max_length: int = 200) -> str:
        result = summarizer.run({"text": text, "max_length": max_length})
        return result["summary"]

# Step 3: Register the plugin with Semantic Kernel
kernel = sk.Kernel()
kernel.add_plugin(SummarizerPlugin(), plugin_name="summarizer")

# Step 4: Use in a semantic function
prompt = """Summarize the following article using the summarizer plugin:
{{$input}}
Call the summarizer.summarize function with the article text."""

# invoke_prompt is a coroutine, so call it from inside an async function
result = await kernel.invoke_prompt(prompt, arguments=KernelArguments(input=long_article))
Multi-Tool Plugin with AgentNode
Semantic Kernel plugins can contain multiple functions. You can create a comprehensive plugin that wraps several AgentNode tools:
from semantic_kernel.functions import kernel_function
from agentnode_sdk import AgentNodeClient, load_tool
# Resolve multiple tools
client = AgentNodeClient()
client.resolve_and_install([
    "text-summarization",
    "sentiment-analysis",
    "language-detection",
    "keyword-extraction"
])
summarizer = load_tool("text-summarizer")
sentiment = load_tool("sentiment-analyzer")
lang_detect = load_tool("language-detector")
keywords = load_tool("keyword-extractor")
class TextAnalysisPlugin:
    """Comprehensive text analysis plugin powered by AgentNode verified skills."""

    @kernel_function(name="summarize", description="Summarize text into key points")
    def summarize(self, text: str, max_length: int = 200) -> str:
        result = summarizer.run({"text": text, "max_length": max_length})
        return result["summary"]

    @kernel_function(name="analyze_sentiment", description="Analyze the sentiment of text")
    def analyze_sentiment(self, text: str) -> str:
        result = sentiment.run({"text": text})
        return f"Sentiment: {result['label']} (score: {result['score']}, confidence: {result['confidence']})"

    @kernel_function(name="detect_language", description="Detect the language of text")
    def detect_language(self, text: str) -> str:
        result = lang_detect.run({"text": text})
        return f"Language: {result['language']} (confidence: {result['confidence']})"

    @kernel_function(name="extract_keywords", description="Extract keywords from text")
    def extract_keywords(self, text: str, count: int = 10) -> str:
        result = keywords.run({"text": text, "count": count})
        return ", ".join(result["keywords"])
# Register
kernel.add_plugin(TextAnalysisPlugin(), plugin_name="text_analysis")
This pattern creates a single Semantic Kernel plugin backed by four verified AgentNode tools. Each tool was independently verified through AgentNode's sandbox pipeline, and the plugin provides a clean SK-native interface.
Using Planners with AgentNode Tools
Semantic Kernel's planning capabilities work seamlessly with AgentNode-backed plugins. The planner can see all registered functions and compose them into execution plans:
from semantic_kernel.planners import SequentialPlanner
# The planner sees all AgentNode-backed plugin functions
planner = SequentialPlanner(kernel)
# The planner creates an execution plan using available tools
plan = await planner.create_plan(
    "Analyze this article: detect the language, extract keywords, "
    "analyze the sentiment, and provide a summary."
)
# Execute the plan — each step calls an AgentNode tool
result = await plan.invoke(kernel)
The planner automatically sequences the tool calls based on its understanding of the functions' capabilities. Because AgentNode tools have rich descriptions and typed schemas, the planner makes better decisions about which tools to use and in what order.
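To see why typed schemas help, here is a minimal sketch of pre-validating planner-supplied arguments against a tool's JSON Schema before calling run(). The schema shown is hypothetical; a real one comes from the tool's input_schema attribute:

```python
# Hypothetical input_schema for a summarizer tool; real schemas come from
# load_tool("...").input_schema in the AgentNode SDK.
schema = {
    "type": "object",
    "properties": {
        "text": {"type": "string", "description": "Text to summarize"},
        "max_length": {"type": "integer", "description": "Maximum summary length"},
    },
    "required": ["text"],
}

def validate_params(params: dict, schema: dict) -> list[str]:
    """Return a list of problems; an empty list means the params satisfy the schema."""
    problems = []
    for field in schema.get("required", []):
        if field not in params:
            problems.append(f"missing required field: {field}")
    for name in params:
        if name not in schema.get("properties", {}):
            problems.append(f"unexpected field: {name}")
    return problems

print(validate_params({"max_length": 100}, schema))
# → ['missing required field: text']
```

A check like this catches a malformed plan step before it reaches the tool, which is far easier to debug than a failure deep inside the tool itself.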
Dynamic Tool Loading Helper
For both AutoGen and Semantic Kernel, you can create a helper that dynamically generates framework-compatible tool definitions from AgentNode skills:
from typing import Callable

from agentnode_sdk import AgentNodeClient, load_tool

class AgentNodeToolLoader:
    """Helper to load AgentNode tools for any framework."""

    def __init__(self):
        self.client = AgentNodeClient()
        self._tools = {}

    def resolve(self, capabilities: list[str]):
        """Resolve and install capabilities from AgentNode."""
        self.client.resolve_and_install(capabilities)

    def load(self, tool_name: str):
        """Load a tool and cache it."""
        if tool_name not in self._tools:
            self._tools[tool_name] = load_tool(tool_name)
        return self._tools[tool_name]

    def as_autogen_function(self, tool_name: str) -> tuple[dict, Callable]:
        """Return an AutoGen function spec and callable."""
        tool = self.load(tool_name)
        spec = {
            "name": tool.name,
            "description": tool.description,
            "parameters": tool.input_schema
        }
        def fn(**kwargs):
            return tool.run(kwargs)
        return spec, fn

    def as_sk_function(self, tool_name: str) -> Callable:
        """Return a callable suitable for a Semantic Kernel plugin."""
        tool = self.load(tool_name)
        def fn(**kwargs):
            return tool.run(kwargs)
        fn.__name__ = tool.name
        fn.__doc__ = tool.description
        return fn
# Usage
loader = AgentNodeToolLoader()
loader.resolve(["web-scraping", "text-summarization"])
# For AutoGen
spec, fn = loader.as_autogen_function("web-scraper")
# For Semantic Kernel
sk_fn = loader.as_sk_function("text-summarizer")
This helper eliminates boilerplate and makes it trivial to add AgentNode tools to either framework.
Cross-Framework Benefits
Using AgentNode as your tool registry across both AutoGen and Semantic Kernel provides concrete advantages:
Consistency
The same verified tool runs identically in both frameworks. If your organization uses AutoGen for research agents and Semantic Kernel for production applications, both environments use the same underlying capabilities. No discrepancies, no "it works in AutoGen but not in SK" problems.
Reduced Maintenance
Without AgentNode, you maintain separate tool implementations for each framework. With AgentNode, you maintain zero tool implementations — the registry provides them, verified and ready to use. Your team focuses on agent logic rather than tool plumbing.
Shared Trust Baseline
Every tool your agents use has been through the same verification pipeline. Whether an AutoGen agent or a Semantic Kernel application calls a tool, you have the same trust guarantees. This is particularly important for enterprise environments where security teams need to approve every capability an agent can access.
Easy Migration
If you decide to migrate from AutoGen to Semantic Kernel (or vice versa), your tools do not change. Only the thin adapter layer changes. For a detailed framework comparison, see our analysis of how different frameworks approach tool management.
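The adapter layer is thin enough to sketch in a few lines. The stub tool below stands in for a real load_tool() result so the example is self-contained; the two wrappers mirror the AutoGen and Semantic Kernel patterns shown earlier:

```python
class StubTool:
    """Stand-in for a real load_tool() result, for illustration only."""
    name = "text-summarizer"
    description = "Summarize text into key points"
    input_schema = {
        "type": "object",
        "properties": {"text": {"type": "string"}},
        "required": ["text"],
    }

    def run(self, params: dict) -> dict:
        return {"summary": params["text"][:20]}

tool = StubTool()

# AutoGen adapter: a JSON function spec plus a plain callable.
autogen_spec = {
    "name": tool.name,
    "description": tool.description,
    "parameters": tool.input_schema,
}

def autogen_fn(**kwargs):
    return tool.run(kwargs)

# Semantic Kernel adapter: a typed method you would decorate with @kernel_function.
def sk_summarize(text: str) -> str:
    return tool.run({"text": text})["summary"]

# The tool itself never changes; only the thin wrapper differs per framework.
assert autogen_fn(text="hello")["summary"] == "hello"
assert sk_summarize("hello") == "hello"
```

Migrating between frameworks means rewriting only the wrapper layer, which is a few lines per tool rather than a reimplementation.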
Azure OpenAI Configuration
Since AutoGen and Semantic Kernel are Microsoft frameworks, many users deploy with Azure OpenAI. Here is how to configure both frameworks with Azure endpoints while using AgentNode tools:
import os

# AutoGen with Azure OpenAI + AgentNode tools
config_list = [
    {
        "model": "gpt-4",
        "api_type": "azure",
        "api_key": os.environ["AZURE_OPENAI_KEY"],
        "base_url": os.environ["AZURE_OPENAI_ENDPOINT"],
        "api_version": "2024-02-01"
    }
]

assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list, "functions": [spec]}
)
# Semantic Kernel with Azure OpenAI + AgentNode tools
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion

kernel = sk.Kernel()
kernel.add_service(AzureChatCompletion(
    deployment_name="gpt-4",
    endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_KEY"]
))
kernel.add_plugin(TextAnalysisPlugin(), plugin_name="text_analysis")
AgentNode tools are LLM-agnostic. They do not care which language model drives the agent — the tools receive structured input and return structured output regardless of whether the calling agent uses GPT-4, Claude, or an open source model.
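One practical consequence is that tools can be exercised directly, with no model configured at all, which is handy for unit tests. A sketch using a stand-in tool object in place of a real load_tool() result:

```python
class FakeSentimentTool:
    """Stand-in for load_tool("sentiment-analyzer"), for illustration only."""
    def run(self, params: dict) -> dict:
        positive = "good" in params["text"].lower()
        return {"label": "positive" if positive else "negative", "score": 0.9}

def run_sentiment(tool, text: str) -> dict:
    """Call a tool directly: structured dict in, structured dict out, no LLM."""
    result = tool.run({"text": text})
    assert {"label", "score"} <= result.keys()  # structured output contract
    return result

print(run_sentiment(FakeSentimentTool(), "This release is good"))
# → {'label': 'positive', 'score': 0.9}
```

The same run() call works identically whether the caller is an AutoGen agent driven by GPT-4, an SK planner driven by Claude, or a plain test script.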
Error Handling and Best Practices
When integrating AgentNode tools with either framework, follow these practices for robust production deployments:
Handle Resolution Failures
from agentnode_sdk.exceptions import ResolutionError, VerificationError

try:
    client.resolve_and_install(["specialized-capability"])
except ResolutionError:
    # No tool found for this capability
    print("No matching tool found. Using fallback.")
except VerificationError:
    # Tool found but below minimum trust threshold
    print("Available tools do not meet trust requirements.")
Set Trust Policies
client = AgentNodeClient(
    policy={
        "min_trust_level": "verified",  # Only Gold or Verified tools
        "stability_preference": "stable",
        "max_resolution_time": 5000
    }
)
Cache Tool Instances
Load tools once and reuse them. Do not call load_tool() inside every function invocation — it is unnecessary overhead. The tool loader helper shown earlier handles this automatically.
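If you are not using the helper class, a module-level cache achieves the same thing. A minimal sketch; load_tool is stubbed here so the example is self-contained, but in practice it would be the SDK's loader:

```python
from functools import lru_cache

calls = 0

def load_tool(name: str):
    """Stand-in for agentnode_sdk.load_tool, counting invocations."""
    global calls
    calls += 1
    return {"name": name}

@lru_cache(maxsize=None)
def get_tool(name: str):
    """Load each tool at most once per process."""
    return load_tool(name)

a = get_tool("web-scraper")
b = get_tool("web-scraper")
assert a is b      # both calls return the same cached instance
assert calls == 1  # the underlying loader ran only once
```

Agent function wrappers then call get_tool(...) freely without paying the load cost on every invocation.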
Log Tool Invocations
In production, log every tool call with its inputs and outputs. This is essential for debugging agent behavior and auditing tool usage:
import logging
logger = logging.getLogger("agentnode_tools")
def wrapped_tool_call(tool, params):
    logger.info(f"Calling {tool.name} with {params}")
    result = tool.run(params)
    logger.info(f"{tool.name} returned {result}")
    return result
What Microsoft Frameworks Does AgentNode Support?
AgentNode provides first-class support for both of Microsoft's primary AI agent frameworks. AutoGen support covers single-agent function calling, multi-agent group chats with specialized tools, and dynamic function registration. Semantic Kernel support covers native plugin class creation, planner-compatible function registration, and multi-function plugins.
Beyond Microsoft frameworks, AgentNode tools work with LangChain, CrewAI, AutoGPT, MCP, and vanilla Python. For the full SDK reference, see the AgentNode SDK documentation. To browse available tools compatible with your framework, use the framework-compatible tool search.
Frequently Asked Questions
Does AgentNode work with AutoGen?
Yes. AgentNode tools integrate with AutoGen through function wrappers. You resolve and install tools from the AgentNode registry, load them with load_tool(), and register them as AutoGen functions in your agent's llm_config. The tool's typed input schema maps directly to AutoGen's function parameter specification. This works for single-agent setups, multi-agent group chats, and any AutoGen conversation pattern that uses function calling.
How to use AgentNode with Semantic Kernel?
AgentNode tools integrate with Semantic Kernel through the plugin architecture. Create a plugin class with methods decorated with @kernel_function, and inside each method call the AgentNode tool's run() method. Register the plugin with kernel.add_plugin(). The SK planner can then automatically use your AgentNode-backed functions when composing execution plans. Multi-function plugins that wrap several AgentNode tools are the recommended pattern for comprehensive capability sets.
Can AutoGen agents share tools from AgentNode?
Yes. In AutoGen multi-agent group chats, each agent can be assigned different tools from the same AgentNode registry. A research agent might use web scraping and summarization tools while an analyst agent uses sentiment analysis and visualization tools. All tools are resolved from a single AgentNodeClient instance, ensuring consistent versions and trust levels. The shared registry eliminates duplication — instead of each agent maintaining its own tool implementations, all agents draw from the same verified catalog.
What Microsoft frameworks does AgentNode support?
AgentNode provides full support for Microsoft's two primary AI agent frameworks: AutoGen and Semantic Kernel. For AutoGen, tools register as functions with typed parameter schemas. For Semantic Kernel, tools register as plugin methods compatible with SK's planner and function-calling infrastructure. Both integrations use the same underlying AgentNode SDK, so tools resolved for one framework are immediately usable in the other. AgentNode also supports non-Microsoft frameworks including LangChain, CrewAI, AutoGPT, and vanilla Python.
LLM Runtime: Let the Model Handle It
If your agent uses OpenAI or Anthropic tool calling, AgentNodeRuntime handles tool registration, system prompt injection, and the tool loop automatically. The LLM discovers, installs, and runs AgentNode capabilities on its own — no hardcoded tool calls needed.
from openai import OpenAI
from agentnode_sdk import AgentNodeRuntime
runtime = AgentNodeRuntime()
result = runtime.run(
    provider="openai",
    client=OpenAI(),
    model="gpt-4o",
    messages=[{"role": "user", "content": "your task here"}],
)
print(result.content)
The Runtime registers 5 meta-tools (agentnode_capabilities, agentnode_search, agentnode_install, agentnode_run, agentnode_acquire) that let the LLM search the registry, install packages, and execute tools autonomously. Works with Anthropic too — just change provider="anthropic" and pass an Anthropic client.
See the LLM Runtime documentation for the full API reference, trust levels, and manual tool calling.