Tools For Agents
Tools let agents take actions: things like fetching data, running code, calling
external APIs, and even using a computer. There are three classes of tools in the
Agents SDK:
Hosted tools: these run on LLM servers alongside the AI models. OpenAI offers
retrieval, web search, and computer use as hosted tools.
Function calling: these allow you to use any Python function as a tool.
Agents as tools: these let you use an agent as a tool, so agents can call other
agents without handing off to them.
Hosted tools
OpenAI offers a few built-in tools when using the OpenAIResponsesModel:
```python
from agents import Agent, FileSearchTool, WebSearchTool

agent = Agent(
    name="Assistant",
    tools=[
        WebSearchTool(),
        FileSearchTool(
            max_num_results=3,
            vector_store_ids=["VECTOR_STORE_ID"],
        ),
    ],
)
```
Function tools
You can use any Python function as a tool, and the Agents SDK will set it up
automatically:
The name of the tool will be the name of the Python function (or you can provide a
name)
The tool description will be taken from the docstring of the function (or you can
provide a description)
The schema for the function inputs is automatically created from the function's
arguments
Descriptions for each input are taken from the docstring of the function, unless
disabled
We use Python's inspect module to extract the function signature, along with griffe
to parse docstrings and pydantic for schema creation.
```python
import json
from typing import Any

from typing_extensions import TypedDict

from agents import Agent, RunContextWrapper, function_tool


class Location(TypedDict):
    lat: float
    long: float


@function_tool
async def fetch_weather(location: Location) -> str:
    """Fetch the weather for a given location.

    Args:
        location: The location to fetch the weather for.
    """
    # In real life, we'd fetch the weather from a weather API
    return "sunny"


@function_tool(name_override="fetch_data")
def read_file(ctx: RunContextWrapper[Any], path: str, directory: str | None = None) -> str:
    """Read the contents of a file.

    Args:
        path: The path to the file to read.
        directory: The directory to read the file from.
    """
    # In real life, we'd read the file from the file system
    return "<file contents>"


agent = Agent(
    name="Assistant",
    tools=[fetch_weather, read_file],
)
```
Custom function tools
Sometimes you won't want to use a Python function as a tool. You can directly
create a FunctionTool if you prefer. To do so, you need to provide:
name
description
params_json_schema, which is the JSON schema for the arguments
on_invoke_tool, which is an async function that receives the context and the
arguments as a JSON string, and must return the tool output as a string
```python
from typing import Any

from pydantic import BaseModel

from agents import FunctionTool, RunContextWrapper


class FunctionArgs(BaseModel):
    username: str
    age: int


async def run_function(ctx: RunContextWrapper[Any], args: str) -> str:
    parsed = FunctionArgs.model_validate_json(args)
    return f"Processed {parsed.username}, age {parsed.age}"


tool = FunctionTool(
    name="process_user",
    description="Processes extracted user data",
    params_json_schema=FunctionArgs.model_json_schema(),
    on_invoke_tool=run_function,
)
```
Automatic argument and docstring parsing
As mentioned before, we automatically parse the function signature to extract the
schema for the tool, and we parse the docstring to extract descriptions for the
tool and for individual arguments. Some notes on that:
The signature parsing is done via the inspect module. We use type annotations to
understand the types for the arguments, and dynamically build a Pydantic model to
represent the overall schema. It supports most types, including Python primitives,
Pydantic models, TypedDicts, and more.
We use griffe to parse docstrings. Supported docstring formats are google, sphinx
and numpy. We attempt to automatically detect the docstring format, but this is
best-effort and you can explicitly set it when calling function_tool. You can also
disable docstring parsing by setting use_docstring_info to False.
The code for the schema extraction lives in agents.function_schema.
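To make the idea concrete, here is a minimal, stdlib-only sketch of combining signature extraction with google-style docstring parsing. The names extract_tool_schema and get_weather are illustrative, and this simplified parser is not the SDK's actual implementation, which relies on griffe and pydantic:

```python
import inspect


def extract_tool_schema(func) -> dict:
    """Build a rough JSON-schema-like dict from a function's type annotations
    and the Args section of a google-style docstring (simplified sketch)."""
    type_map = {int: "integer", float: "number", str: "string", bool: "boolean"}

    # Pull per-argument descriptions out of the "Args:" section.
    descriptions = {}
    in_args = False
    for line in (inspect.getdoc(func) or "").splitlines():
        stripped = line.strip()
        if stripped == "Args:":
            in_args = True
            continue
        if in_args and ":" in stripped:
            arg, desc = stripped.split(":", 1)
            descriptions[arg.strip()] = desc.strip()

    # Walk the signature; annotations give types, missing defaults mean required.
    properties = {}
    required = []
    for name, param in inspect.signature(func).parameters.items():
        prop = {"type": type_map.get(param.annotation, "object")}
        if name in descriptions:
            prop["description"] = descriptions[name]
        properties[name] = prop
        if param.default is inspect.Parameter.empty:
            required.append(name)
    return {"type": "object", "properties": properties, "required": required}


def get_weather(city: str, units: str = "celsius") -> str:
    """Fetch the weather.

    Args:
        city: The city to look up.
        units: Temperature units.
    """
    return "sunny"


schema = extract_tool_schema(get_weather)
```

Here city ends up required (no default) while units does not, and both carry descriptions from the docstring, mirroring what the SDK produces for the LLM.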
Agents as tools
In some workflows, you may want a central agent to orchestrate a network of
specialized agents, instead of handing off control. You can do this by modeling
agents as tools.
```python
from agents import Agent

spanish_agent = Agent(
    name="Spanish agent",
    instructions="You translate the user's message to Spanish",
)

french_agent = Agent(
    name="French agent",
    instructions="You translate the user's message to French",
)

orchestrator_agent = Agent(
    name="orchestrator_agent",
    instructions=(
        "You are a translation agent. You use the tools given to you to translate. "
        "If asked for multiple translations, you call the relevant tools."
    ),
    tools=[
        spanish_agent.as_tool(
            tool_name="translate_to_spanish",
            tool_description="Translate the user's message to Spanish",
        ),
        french_agent.as_tool(
            tool_name="translate_to_french",
            tool_description="Translate the user's message to French",
        ),
    ],
)
```
If you need more control than as_tool offers (for example, a custom run
configuration), you can define a function tool that runs an agent directly:
```python
from agents import Agent, Runner, function_tool


@function_tool
async def run_my_agent() -> str:
    """A tool that runs the agent with custom configs."""
    agent = Agent(name="My agent", instructions="...")
    result = await Runner.run(agent, input="...", max_turns=5)
    return str(result.final_output)
```
Handling errors in function tools
When you create a function tool via @function_tool, you can pass a
failure_error_function. This is a function that provides an error response to the
LLM in case the tool call crashes.
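As an illustrative sketch, such a function takes the run context and the raised exception and returns the string the LLM sees in place of the tool's output. The name friendly_tool_error is hypothetical:

```python
def friendly_tool_error(ctx, error: Exception) -> str:
    # The returned string is sent to the LLM instead of the tool's output.
    return (
        f"The tool call failed ({type(error).__name__}: {error}). "
        "Please try again with different arguments."
    )
```

You could then attach it when declaring the tool, e.g. @function_tool(failure_error_function=friendly_tool_error).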