Framework Integration · 11 min read

Using AgentNode with OpenAI Function Calling: Complete Integration Guide

Learn how to integrate AgentNode's verified agent tools with OpenAI's function calling API. Step-by-step tutorial covering tool loading, schema conversion, and building a complete execution loop with GPT-4.

By agentnode

OpenAI's function calling API transformed GPT-4 from a text generator into a tool-using agent. But here is the problem every developer hits almost immediately: where do the tools come from? You can hand-write JSON schemas and wire up functions manually, but that approach does not scale past a handful of tools. And if you want your agent to gain new capabilities without a code deploy, manual wiring is a dead end.

AgentNode solves this by giving your OpenAI-powered agent access to a registry of verified, schema-typed tools that can be loaded at runtime and converted to OpenAI function schemas automatically. In this guide, you will build a complete integration from scratch — loading tools from AgentNode, converting them for OpenAI, and running a full tool-calling execution loop.

What Is OpenAI Function Calling?

Function calling is OpenAI's mechanism for letting GPT models interact with external code. Instead of generating free-form text when it needs to perform an action, the model outputs a structured JSON object that matches a function schema you provide. Your application then executes that function and feeds the result back to the model.

The flow looks like this:

  1. You define one or more function schemas (name, description, parameters) and send them alongside your prompt.
  2. GPT-4 decides whether to call a function. If it does, it returns a tool_calls response with the function name and arguments as JSON.
  3. Your application executes the function with those arguments.
  4. You send the function's return value back to GPT-4 as a tool message.
  5. GPT-4 uses the result to continue reasoning or produce a final answer.
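To make steps 1, 3, and 4 concrete, here is the manual version in miniature: a hand-written schema plus a stubbed executor. The get_weather tool, its return value, and the call id are illustrative, not a real API.

```python
import json

# Step 1: a hand-written function schema for a hypothetical get_weather tool
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# Steps 3-4: execute the call the model requested and package the result
def execute_tool_call(name, arguments_json):
    args = json.loads(arguments_json)
    if name == "get_weather":
        return {"city": args["city"], "temp_c": 21}  # stubbed result
    return {"error": f"unknown tool: {name}"}

result = execute_tool_call("get_weather", '{"city": "Paris"}')
tool_message = {
    "role": "tool",
    "tool_call_id": "call_123",  # id comes from the model's tool_calls entry
    "content": json.dumps(result),
}
```

Maintaining this by hand for every tool is exactly the chore the rest of this guide automates.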

This is powerful — but it requires you to define and maintain every tool schema by hand. That is where AgentNode fits in.

Why AgentNode + OpenAI?

AgentNode is a verified registry of AI agent tools packaged in the ANP (AgentNode Package) format. Each tool comes with typed schemas, input/output definitions, and verification scores. The key benefits of combining AgentNode with OpenAI's function calling:

  • Pre-built schemas — every AgentNode tool already has typed parameter definitions. No need to hand-write JSON schemas for each function.
  • Verified tools — each tool passes a 4-step verification pipeline including sandbox testing, so you know it actually works before your agent calls it.
  • Runtime discovery — your agent can browse verified agent tools and install new capabilities without redeploying code.
  • Cross-framework portability — the same tools work with LangChain, CrewAI, and MCP servers for Claude and Cursor.

Prerequisites

Before you begin, make sure you have the following:

  • Python 3.10 or later
  • An OpenAI API key with access to GPT-4 or GPT-4-turbo
  • An AgentNode account (free tier works for this tutorial)
  • The agentnode and openai Python packages installed
pip install agentnode-sdk openai

Step 1: Load Tools from AgentNode

The first step is loading the tools you want your agent to use. The AgentNode SDK provides a straightforward API for searching and installing tools.

from agentnode_sdk import AgentNodeClient

# Initialize the client
client = AgentNodeClient(api_key="your-agentnode-api-key")

# Search for tools by capability
results = client.search("web scraping", verified_only=True)
print(f"Found {len(results)} verified tools")

# Install specific tools
tools = client.install([
    "web-scraper",
    "json-parser",
    "sentiment-analyzer"
])

print(f"Loaded {len(tools)} tools")
for tool in tools:
    print(f"  - {tool.name}: {tool.description}")

Each tool object contains a name, description, parameters schema, and an execute() method. The parameters schema follows JSON Schema format — which is exactly what OpenAI expects.

Step 2: Convert AgentNode Tools to OpenAI Function Schemas

OpenAI's function calling API expects tools in a specific format. You need to convert AgentNode's ANP schemas into OpenAI-compatible function definitions. Here is a converter function:

def anp_to_openai_tools(agentnode_tools):
    """Convert AgentNode tools to OpenAI function calling format."""
    openai_tools = []
    for tool in agentnode_tools:
        openai_tool = {
            "type": "function",
            "function": {
                "name": tool.name,
                "description": tool.description,
                "parameters": {
                    "type": "object",
                    "properties": tool.parameters.get("properties", {}),
                    "required": tool.parameters.get("required", [])
                }
            }
        }
        openai_tools.append(openai_tool)
    return openai_tools

# Convert your loaded tools
openai_tools = anp_to_openai_tools(tools)

# Inspect the generated schema
import json
print(json.dumps(openai_tools[0], indent=2))

The conversion is straightforward because AgentNode tools already use JSON Schema for parameter definitions. The converter wraps each tool in OpenAI's expected structure and maps the fields accordingly.

Handling Complex Parameter Types

Some AgentNode tools have nested parameters or use advanced JSON Schema features. Here is an extended converter that handles edge cases:

def anp_to_openai_tools_extended(agentnode_tools):
    """Extended converter handling nested schemas and enums."""
    openai_tools = []
    for tool in agentnode_tools:
        # Deep copy to avoid mutation
        params = json.loads(json.dumps(tool.parameters))

        # OpenAI requires 'type': 'object' at the top level
        if "type" not in params:
            params["type"] = "object"

        # Strip unsupported keywords if present
        for key in ["$schema", "$id", "additionalProperties"]:
            params.pop(key, None)

        openai_tool = {
            "type": "function",
            "function": {
                "name": tool.name.replace("-", "_"),  # OpenAI prefers underscores
                "description": tool.description[:1024],  # OpenAI max
                "parameters": params
            }
        }
        openai_tools.append(openai_tool)
    return openai_tools

Step 3: Build the Execution Loop

Now comes the core of the integration — a loop that sends messages to GPT-4, handles tool calls, executes AgentNode tools, and feeds results back. This is the pattern every OpenAI agent follows.

import openai

openai_client = openai.OpenAI(api_key="your-openai-api-key")

def create_tool_map(agentnode_tools):
    """Create a lookup dict from tool name to tool object."""
    tool_map = {}
    for tool in agentnode_tools:
        # Map both hyphenated and underscored names
        tool_map[tool.name] = tool
        tool_map[tool.name.replace("-", "_")] = tool
    return tool_map

def run_agent(user_message, agentnode_tools, max_iterations=10):
    """Run a GPT-4 agent with AgentNode tools."""
    tool_map = create_tool_map(agentnode_tools)
    openai_tools = anp_to_openai_tools_extended(agentnode_tools)

    messages = [
        {"role": "system", "content": "You are a helpful assistant with access to tools. Use them when needed to answer the user's question."},
        {"role": "user", "content": user_message}
    ]

    for iteration in range(max_iterations):
        response = openai_client.chat.completions.create(
            model="gpt-4",
            messages=messages,
            tools=openai_tools,
            tool_choice="auto"
        )

        message = response.choices[0].message
        messages.append(message)

        # If no tool calls, we have our final answer
        if not message.tool_calls:
            return message.content

        # Execute each tool call
        for tool_call in message.tool_calls:
            func_name = tool_call.function.name
            arguments = json.loads(tool_call.function.arguments)

            # Look up the AgentNode tool and execute
            tool = tool_map.get(func_name)
            if tool:
                result = tool.execute(**arguments)
            else:
                result = {"error": f"Unknown tool: {func_name}"}

            # Feed result back to GPT-4
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": json.dumps(result)
            })

    return "Agent reached maximum iterations without final answer."

# Run the agent
result = run_agent(
    "Scrape the homepage of example.com and analyze the sentiment",
    tools
)
print(result)

Step 4: Add Error Handling and Retries

Production agents need robust error handling. Tools can fail for many reasons — network timeouts, invalid inputs, rate limits. Here is how to make the execution loop resilient:

import time
import logging

logger = logging.getLogger(__name__)

def execute_tool_safely(tool, arguments, max_retries=2):
    """Execute an AgentNode tool with retry logic."""
    for attempt in range(max_retries + 1):
        try:
            result = tool.execute(**arguments)
            return {"status": "success", "data": result}
        except TimeoutError:
            if attempt < max_retries:
                time.sleep(2 ** attempt)
                continue
            return {"status": "error", "message": "Tool execution timed out after retries"}
        except ValueError as e:
            return {"status": "error", "message": f"Invalid input: {str(e)}"}
        except Exception as e:
            logger.error(f"Tool {tool.name} failed: {e}")
            return {"status": "error", "message": f"Tool execution failed: {str(e)}"}

Replace the direct tool.execute() call in the execution loop with execute_tool_safely(). This ensures that a single tool failure does not crash the entire agent — GPT-4 is smart enough to work around errors or try alternative approaches when it receives an error response.
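In isolation, the swap looks like this. FlakyTool is an illustrative stand-in for a real AgentNode tool, and the wrapper is the same execute_tool_safely pattern from above, condensed slightly so the snippet runs on its own:

```python
import json
import time

def execute_tool_safely(tool, arguments, max_retries=2):
    """Retry wrapper (condensed from the version above) for a self-contained demo."""
    for attempt in range(max_retries + 1):
        try:
            return {"status": "success", "data": tool.execute(**arguments)}
        except TimeoutError:
            if attempt < max_retries:
                time.sleep(2 ** attempt)  # exponential backoff
                continue
            return {"status": "error", "message": "Tool execution timed out after retries"}
        except Exception as e:
            return {"status": "error", "message": f"Tool execution failed: {e}"}

class FlakyTool:
    """Illustrative tool that always fails, standing in for a real AgentNode tool."""
    name = "flaky-scraper"

    def execute(self, **kwargs):
        raise RuntimeError("upstream API unavailable")

# Instead of `result = tool.execute(**arguments)`, the loop now does:
result = execute_tool_safely(FlakyTool(), {"url": "https://example.com"})

# The failure becomes a structured tool message instead of an unhandled exception
tool_message = {"role": "tool", "tool_call_id": "call_1", "content": json.dumps(result)}
```

The model sees the error payload in the tool message and can retry with different arguments or pick another tool.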

Step 5: Dynamic Tool Loading

The real power of combining AgentNode with OpenAI is dynamic tool loading. Instead of hardcoding which tools your agent has, let it discover and install tools at runtime based on the task at hand.

def run_agent_with_dynamic_tools(user_message):
    """Agent that discovers tools based on the task."""
    # Start with the AgentNode search tool itself
    search_tool = client.get_meta_tool("search")
    install_tool = client.get_meta_tool("install")

    base_tools = [search_tool, install_tool]
    active_tools = list(base_tools)

    messages = [
        {"role": "system", "content": (
            "You have access to a tool registry. "
            "If you need a capability you don't have, "
            "search for it and install it first."
        )},
        {"role": "user", "content": user_message}
    ]

    for iteration in range(15):
        openai_tools = anp_to_openai_tools_extended(active_tools)

        response = openai_client.chat.completions.create(
            model="gpt-4",
            messages=messages,
            tools=openai_tools,
            tool_choice="auto"
        )

        message = response.choices[0].message
        messages.append(message)

        if not message.tool_calls:
            return message.content

        for tool_call in message.tool_calls:
            func_name = tool_call.function.name
            arguments = json.loads(tool_call.function.arguments)

            tool_map = create_tool_map(active_tools)
            tool = tool_map.get(func_name)
            if tool:
                result = execute_tool_safely(tool, arguments)
            else:
                result = {"status": "error", "message": f"Unknown tool: {func_name}"}

            # If a new tool was installed, add it to active tools
            if func_name == "install" and result["status"] == "success":
                new_tools = result["data"]
                active_tools.extend(new_tools)

            messages.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": json.dumps(result)
            })

    return "Agent reached maximum iterations."

This pattern lets your agent start with zero domain-specific tools and acquire capabilities as it encounters tasks that require them. For a deep dive into this architecture, see our guide on what AgentNode is and how it works.

Complete Working Example

Here is a script that ties everything together, assuming the helper functions from the earlier steps (anp_to_openai_tools_extended, create_tool_map, and run_agent) are defined in the same module. This agent can scrape web pages, parse JSON, and analyze sentiment — all using AgentNode tools through OpenAI's function calling:

"""Complete AgentNode + OpenAI Function Calling Example."""
import json
import openai
from agentnode_sdk import AgentNodeClient

# Initialize clients
an_client = AgentNodeClient(api_key="your-agentnode-key")
ai_client = openai.OpenAI(api_key="your-openai-key")

# Load tools (run_agent converts and maps them internally)
tools = an_client.install(["web-scraper", "json-parser", "sentiment-analyzer"])

# Run a conversation
response = run_agent(
    "Scrape the AgentNode homepage and tell me the overall sentiment",
    tools
)
print(response)

Performance Tips

A few recommendations for production deployments:

  • Cache tool schemas — converting ANP to OpenAI format on every request is wasteful. Convert once and cache the result.
  • Limit tool count — GPT-4 performs best with 5-15 tools. If you have more, use a two-stage approach: first let the model pick a tool category, then load the specific tools.
  • Use streaming — for long-running tool calls, use OpenAI's streaming API so the user sees progress immediately.
  • Set timeouts — configure both OpenAI API timeouts and AgentNode tool execution timeouts to prevent runaway calls.
  • Log everything — log each tool call, its arguments, and its result. This is essential for debugging agent behavior in production.
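The first tip can be sketched as a small cache keyed on the tool set. The helper below is hypothetical: `convert` is any ANP-to-OpenAI converter, such as anp_to_openai_tools_extended from Step 2, and the stub tool and counting converter exist only to show that conversion runs once.

```python
# Hypothetical schema cache keyed on the sorted tool names.
# Invalidate it if tools are reinstalled or upgraded at runtime.
_schema_cache = {}

def cached_openai_tools(agentnode_tools, convert):
    key = tuple(sorted(t.name for t in agentnode_tools))
    if key not in _schema_cache:
        _schema_cache[key] = convert(agentnode_tools)
    return _schema_cache[key]

# Demo with stub tools and a converter that counts its invocations
class StubTool:
    def __init__(self, name):
        self.name = name

calls = {"n": 0}

def fake_convert(tools):
    calls["n"] += 1
    return [{"type": "function", "function": {"name": t.name}} for t in tools]

demo_tools = [StubTool("web-scraper"), StubTool("json-parser")]
first = cached_openai_tools(demo_tools, fake_convert)
second = cached_openai_tools(demo_tools, fake_convert)  # served from cache
```

The key assumes a tool name uniquely identifies a tool version; if your agent installs tools dynamically, clear the cache whenever the active tool set changes.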

What Works with GPT-4, GPT-4-Turbo, and GPT-4o

GPT-4, GPT-4-turbo, and GPT-4o all support function calling, but there are practical differences:

  • GPT-4-turbo has a larger context window (128k tokens) and supports parallel tool calls — it can call multiple tools in a single response turn. This is ideal for agents that need to gather information from several tools simultaneously.
  • GPT-4 (8k/32k context) is more deliberate in its tool usage and tends to call tools sequentially. This can be an advantage for tasks where order matters.
  • GPT-4o offers the best balance of speed and capability for most tool-calling workflows.

All three work with the same AgentNode integration code. The only difference is the model name in the chat.completions.create() call.

Beyond OpenAI: Cross-Framework Compatibility

One of AgentNode's core design principles is framework portability. The same tools you loaded for OpenAI work with other frameworks too. Check out the LangChain integration guide and the CrewAI integration to see how the same verified tools plug into different ecosystems. You can also consult the AgentNode documentation for a full API reference.

How do I use AgentNode with OpenAI?

Install the AgentNode SDK, load tools using client.install(), convert them to OpenAI function schemas using the converter function shown above, and pass them to the tools parameter in your chat.completions.create() call. GPT-4 will automatically decide when to call each tool.

Does AgentNode work with GPT-4?

Yes. AgentNode tools work with GPT-4, GPT-4-turbo, GPT-4o, and any OpenAI model that supports function calling. The AgentNode SDK provides schema conversion utilities that output the exact JSON format OpenAI expects.

Can I use MCP tools with OpenAI?

Yes. AgentNode supports the Model Context Protocol (MCP), and tools published in MCP format can be loaded through the AgentNode SDK and converted to OpenAI function schemas. This means you can use the same tool with both Claude (via MCP) and GPT-4 (via function calling) without modification.

What is function calling?

Function calling is an OpenAI API feature that lets GPT models output structured JSON matching a function schema instead of free-form text. This enables the model to interact with external tools, APIs, and code. You define what functions are available, and the model decides when and how to call them based on the conversation context.

LLM Runtime: Let the Model Handle It

If your agent uses OpenAI or Anthropic tool calling, AgentNodeRuntime handles tool registration, system prompt injection, and the tool loop automatically. The LLM discovers, installs, and runs AgentNode capabilities on its own — no hardcoded tool calls needed.

from openai import OpenAI
from agentnode_sdk import AgentNodeRuntime

runtime = AgentNodeRuntime()

result = runtime.run(
    provider="openai",
    client=OpenAI(),
    model="gpt-4o",
    messages=[{"role": "user", "content": "your task here"}],
)
print(result.content)

The Runtime registers 5 meta-tools (agentnode_capabilities, agentnode_search, agentnode_install, agentnode_run, agentnode_acquire) that let the LLM search the registry, install packages, and execute tools autonomously. Works with Anthropic too — just change provider="anthropic" and pass an Anthropic client.

See the LLM Runtime documentation for the full API reference, trust levels, and manual tool calling.
