FilesystemMiddleware

Bases: AgentMiddleware

Middleware for providing filesystem and optional execution tools to an agent.

This middleware adds filesystem tools to the agent: ls, read_file, write_file, edit_file, glob, and grep.

Files can be stored using any backend that implements the BackendProtocol.

If the backend implements SandboxBackendProtocol, an execute tool is also added for running shell commands.

This middleware also automatically evicts large tool results to the filesystem when they exceed a token threshold, preventing context window saturation.

PARAMETER DESCRIPTION
backend

Backend for file storage and optional execution.

If not provided, defaults to StateBackend (ephemeral storage in agent state).

For persistent storage or hybrid setups, use CompositeBackend with custom routes.

For execution support, use a backend that implements SandboxBackendProtocol.

TYPE: BACKEND_TYPES | None DEFAULT: None

system_prompt

Optional custom system prompt override.

TYPE: str | None DEFAULT: None

custom_tool_descriptions

Optional custom tool descriptions override.

TYPE: dict[str, str] | None DEFAULT: None

tool_token_limit_before_evict

Token limit before evicting a tool result to the filesystem.

When the limit is exceeded, the result is written to the configured backend and replaced with a truncated preview and a file reference.

TYPE: int | None DEFAULT: 20000

Example
from deepagents.middleware.filesystem import FilesystemMiddleware
from deepagents.backends import StateBackend, StoreBackend, CompositeBackend
from langchain.agents import create_agent

# Ephemeral storage only (default, no execution)
agent = create_agent(middleware=[FilesystemMiddleware()])

# With hybrid storage (ephemeral + persistent /memories/)
backend = CompositeBackend(default=StateBackend(), routes={"/memories/": StoreBackend()})
agent = create_agent(middleware=[FilesystemMiddleware(backend=backend)])

# With sandbox backend (supports execution)
from my_sandbox import DockerSandboxBackend

sandbox = DockerSandboxBackend(container_id="my-container")
agent = create_agent(middleware=[FilesystemMiddleware(backend=sandbox)])
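
A further configuration sketch continuing the example above, using the keyword arguments documented for this class; the prompt text, tool description, and threshold value are illustrative, not defaults.

# Illustrative configuration: override the system prompt and the read_file tool
# description, and lower the eviction threshold so large tool results are
# offloaded to the filesystem sooner.
middleware = FilesystemMiddleware(
    system_prompt="You manage files in the project workspace.",
    custom_tool_descriptions={
        "read_file": "Read a file from the project workspace.",
    },
    tool_token_limit_before_evict=5000,
)
agent = create_agent(middleware=[middleware])
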
METHOD DESCRIPTION
__init__

Initialize the filesystem middleware.

wrap_model_call

Update the system prompt and filter tools based on backend capabilities.

awrap_model_call

(async) Update the system prompt and filter tools based on backend capabilities.

wrap_tool_call

Check the size of the tool call result and evict to filesystem if too large.

awrap_tool_call

(async) Check the size of the tool call result and evict to filesystem if too large.

before_agent

Logic to run before the agent execution starts.

abefore_agent

Async logic to run before the agent execution starts.

before_model

Logic to run before the model is called.

abefore_model

Async logic to run before the model is called.

after_model

Logic to run after the model is called.

aafter_model

Async logic to run after the model is called.

after_agent

Logic to run after the agent execution completes.

aafter_agent

Async logic to run after the agent execution completes.

state_schema class-attribute instance-attribute

state_schema = FilesystemState

The schema for state passed to the middleware nodes.

tools instance-attribute

tools = _get_filesystem_tools(backend, custom_tool_descriptions)

Additional tools registered by the middleware.

name property

name: str

The name of the middleware instance.

Defaults to the class name, but can be overridden for custom naming.

__init__

__init__(
    *,
    backend: BACKEND_TYPES | None = None,
    system_prompt: str | None = None,
    custom_tool_descriptions: dict[str, str] | None = None,
    tool_token_limit_before_evict: int | None = 20000,
) -> None

Initialize the filesystem middleware.

PARAMETER DESCRIPTION
backend

Backend for file storage and optional execution, or a factory callable. Defaults to StateBackend if not provided.

TYPE: BACKEND_TYPES | None DEFAULT: None

system_prompt

Optional custom system prompt override.

TYPE: str | None DEFAULT: None

custom_tool_descriptions

Optional custom tool descriptions override.

TYPE: dict[str, str] | None DEFAULT: None

tool_token_limit_before_evict

Optional token limit before evicting a tool result to the filesystem.

TYPE: int | None DEFAULT: 20000
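
A minimal sketch of opting out of eviction. The assumption that passing None disables the threshold follows from the int | None annotation above; it is not stated explicitly in this reference.

from deepagents.middleware.filesystem import FilesystemMiddleware

# Assumption: None turns off result eviction (no token threshold is enforced).
middleware = FilesystemMiddleware(tool_token_limit_before_evict=None)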

wrap_model_call

wrap_model_call(
    request: ModelRequest, handler: Callable[[ModelRequest], ModelResponse]
) -> ModelResponse

Update the system prompt and filter tools based on backend capabilities.

PARAMETER DESCRIPTION
request

The model request being processed.

TYPE: ModelRequest

handler

The handler function to call with the modified request.

TYPE: Callable[[ModelRequest], ModelResponse]

RETURNS DESCRIPTION
ModelResponse

The model response from the handler.
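
A minimal subclassing sketch of this hook, assuming the signature shown above; the subclass name and print call are illustrative only.

from deepagents.middleware.filesystem import FilesystemMiddleware

class TracingFilesystemMiddleware(FilesystemMiddleware):
    """Hypothetical subclass that adds tracing around the model call."""

    def wrap_model_call(self, request, handler):
        # Delegate to FilesystemMiddleware, which updates the system prompt and
        # filters tools based on backend capabilities, then pass its response on.
        response = super().wrap_model_call(request, handler)
        print("model call completed")
        return response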

awrap_model_call async

awrap_model_call(
    request: ModelRequest, handler: Callable[[ModelRequest], Awaitable[ModelResponse]]
) -> ModelResponse

(async) Update the system prompt and filter tools based on backend capabilities.

PARAMETER DESCRIPTION
request

The model request being processed.

TYPE: ModelRequest

handler

The handler function to call with the modified request.

TYPE: Callable[[ModelRequest], Awaitable[ModelResponse]]

RETURNS DESCRIPTION
ModelResponse

The model response from the handler.

wrap_tool_call

wrap_tool_call(
    request: ToolCallRequest,
    handler: Callable[[ToolCallRequest], ToolMessage | Command],
) -> ToolMessage | Command

Check the size of the tool call result and evict to filesystem if too large.

PARAMETER DESCRIPTION
request

The tool call request being processed.

TYPE: ToolCallRequest

handler

The handler function to call with the modified request.

TYPE: Callable[[ToolCallRequest], ToolMessage | Command]

RETURNS DESCRIPTION
ToolMessage | Command

The raw ToolMessage, or a pseudo tool message with the ToolResult in state.
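
A minimal subclassing sketch of this hook, assuming the signature shown above; the subclass name and print call are illustrative only.

from deepagents.middleware.filesystem import FilesystemMiddleware

class AuditingFilesystemMiddleware(FilesystemMiddleware):
    """Hypothetical subclass that observes tool results after eviction handling."""

    def wrap_tool_call(self, request, handler):
        # super() runs the tool via the handler and, when the result exceeds the
        # token limit, replaces it with a truncated preview and a file reference.
        result = super().wrap_tool_call(request, handler)
        print(f"tool result type: {type(result).__name__}")
        return result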

awrap_tool_call async

awrap_tool_call(
    request: ToolCallRequest,
    handler: Callable[[ToolCallRequest], Awaitable[ToolMessage | Command]],
) -> ToolMessage | Command

(async) Check the size of the tool call result and evict to filesystem if too large.

PARAMETER DESCRIPTION
request

The tool call request being processed.

TYPE: ToolCallRequest

handler

The handler function to call with the modified request.

TYPE: Callable[[ToolCallRequest], Awaitable[ToolMessage | Command]]

RETURNS DESCRIPTION
ToolMessage | Command

The raw ToolMessage, or a pseudo tool message with the ToolResult in state.
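
The async variant can be extended in the same way; a minimal sketch assuming the signature shown above.

from deepagents.middleware.filesystem import FilesystemMiddleware

class AsyncAuditingFilesystemMiddleware(FilesystemMiddleware):
    """Hypothetical subclass showing the async form of the tool-call hook."""

    async def awrap_tool_call(self, request, handler):
        # Mirrors wrap_tool_call: super() awaits the handler and applies the same
        # eviction logic to oversized results before returning them.
        return await super().awrap_tool_call(request, handler)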

before_agent

before_agent(state: StateT, runtime: Runtime[ContextT]) -> dict[str, Any] | None

Logic to run before the agent execution starts.

PARAMETER DESCRIPTION
state

The current agent state.

TYPE: StateT

runtime

The runtime context.

TYPE: Runtime[ContextT]

RETURNS DESCRIPTION
dict[str, Any] | None

Agent state updates to apply before agent execution.

abefore_agent async

abefore_agent(state: StateT, runtime: Runtime[ContextT]) -> dict[str, Any] | None

Async logic to run before the agent execution starts.

PARAMETER DESCRIPTION
state

The current agent state.

TYPE: StateT

runtime

The runtime context.

TYPE: Runtime[ContextT]

RETURNS DESCRIPTION
dict[str, Any] | None

Agent state updates to apply before agent execution.

before_model

before_model(state: StateT, runtime: Runtime[ContextT]) -> dict[str, Any] | None

Logic to run before the model is called.

PARAMETER DESCRIPTION
state

The current agent state.

TYPE: StateT

runtime

The runtime context.

TYPE: Runtime[ContextT]

RETURNS DESCRIPTION
dict[str, Any] | None

Agent state updates to apply before model call.
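
A minimal sketch of the hook contract, assuming state behaves like a mapping at runtime; "model_calls" is a hypothetical key that would require extending the state schema.

from deepagents.middleware.filesystem import FilesystemMiddleware

class CountingFilesystemMiddleware(FilesystemMiddleware):
    """Hypothetical subclass illustrating the before_model contract."""

    def before_model(self, state, runtime):
        # Returning a dict merges updates into the agent state before the model
        # call; returning None applies no updates. "model_calls" is illustrative.
        return {"model_calls": state.get("model_calls", 0) + 1}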

abefore_model async

abefore_model(state: StateT, runtime: Runtime[ContextT]) -> dict[str, Any] | None

Async logic to run before the model is called.

PARAMETER DESCRIPTION
state

The agent state.

TYPE: StateT

runtime

The runtime context.

TYPE: Runtime[ContextT]

RETURNS DESCRIPTION
dict[str, Any] | None

Agent state updates to apply before model call.

after_model

after_model(state: StateT, runtime: Runtime[ContextT]) -> dict[str, Any] | None

Logic to run after the model is called.

PARAMETER DESCRIPTION
state

The current agent state.

TYPE: StateT

runtime

The runtime context.

TYPE: Runtime[ContextT]

RETURNS DESCRIPTION
dict[str, Any] | None

Agent state updates to apply after model call.

aafter_model async

aafter_model(state: StateT, runtime: Runtime[ContextT]) -> dict[str, Any] | None

Async logic to run after the model is called.

PARAMETER DESCRIPTION
state

The current agent state.

TYPE: StateT

runtime

The runtime context.

TYPE: Runtime[ContextT]

RETURNS DESCRIPTION
dict[str, Any] | None

Agent state updates to apply after model call.

after_agent

after_agent(state: StateT, runtime: Runtime[ContextT]) -> dict[str, Any] | None

Logic to run after the agent execution completes.

PARAMETER DESCRIPTION
state

The current agent state.

TYPE: StateT

runtime

The runtime context.

TYPE: Runtime[ContextT]

RETURNS DESCRIPTION
dict[str, Any] | None

Agent state updates to apply after agent execution.

aafter_agent async

aafter_agent(state: StateT, runtime: Runtime[ContextT]) -> dict[str, Any] | None

Async logic to run after the agent execution completes.

PARAMETER DESCRIPTION
state

The current agent state.

TYPE: StateT

runtime

The runtime context.

TYPE: Runtime[ContextT]

RETURNS DESCRIPTION
dict[str, Any] | None

Agent state updates to apply after agent execution.