This guide shows how to implement skills in the SDK. For a conceptual overview, see Skills Overview. OpenHands supports an extended version of the AgentSkills standard with optional keyword triggers.

Context Loading Methods

Method                   When Content Loads      Use Case
Always-loaded            At conversation start   Repository rules, coding standards
Trigger-loaded           When keywords match     Specialized tasks, domain knowledge
Progressive disclosure   Agent reads on demand   Large reference docs (AgentSkills)

Always-Loaded Context

Content that is always included in the system prompt.

Option 1: AGENTS.md (Auto-loaded)

Place AGENTS.md at your repo root - it’s loaded automatically. See Permanent Context.
from openhands.sdk import AgentContext
from openhands.sdk.context.skills import load_project_skills

# Automatically finds AGENTS.md, CLAUDE.md, GEMINI.md at workspace root
skills = load_project_skills(workspace_dir="/path/to/repo")
agent_context = AgentContext(skills=skills)

Option 2: Inline Skill (Code-defined)

from openhands.sdk import AgentContext
from openhands.sdk.context import Skill

agent_context = AgentContext(
    skills=[
        Skill(
            name="code-style",
            content="Always use type hints in Python.",
            trigger=None,  # No trigger = always loaded
        ),
    ]
)

Trigger-Loaded Context

Content injected when keywords appear in user messages. See Keyword-Triggered Skills.
from openhands.sdk.context import Skill, KeywordTrigger

Skill(
    name="encryption-helper",
    content="Use the encrypt.sh script to encrypt messages.",
    trigger=KeywordTrigger(keywords=["encrypt", "decrypt"]),
)
When the user says “encrypt this”, the skill content is injected into the message:
<EXTRA_INFO>
The following information has been included based on a keyword match for "encrypt".
Skill location: /path/to/encryption-helper

Use the encrypt.sh script to encrypt messages.
</EXTRA_INFO>

Progressive Disclosure (AgentSkills Standard)

To let the agent discover and activate skills on its own, use the AgentSkills standard SKILL.md format. The agent sees a summary in the system prompt and reads the full content on demand.
from openhands.sdk import AgentContext
from openhands.sdk.context.skills import load_skills_from_dir

# Load SKILL.md files from a directory
_, _, agent_skills = load_skills_from_dir("/path/to/skills")
agent_context = AgentContext(skills=list(agent_skills.values()))
Skills are listed in the system prompt:
<available_skills>
  <skill>
    <name>code-style</name>
    <description>Project coding standards.</description>
    <location>/path/to/code-style/SKILL.md</location>
  </skill>
</available_skills>
Add triggers to a SKILL.md for both progressive disclosure AND automatic injection when keywords match.

Full Example

examples/01_standalone_sdk/03_activate_skill.py
import os

from pydantic import SecretStr

from openhands.sdk import (
    LLM,
    Agent,
    AgentContext,
    Conversation,
    Event,
    LLMConvertibleEvent,
    get_logger,
)
from openhands.sdk.context import (
    KeywordTrigger,
    Skill,
)
from openhands.sdk.tool import Tool
from openhands.tools.file_editor import FileEditorTool
from openhands.tools.terminal import TerminalTool


logger = get_logger(__name__)

# Configure LLM
api_key = os.getenv("LLM_API_KEY")
assert api_key is not None, "LLM_API_KEY environment variable is not set."
model = os.getenv("LLM_MODEL", "anthropic/claude-sonnet-4-5-20250929")
base_url = os.getenv("LLM_BASE_URL")
llm = LLM(
    usage_id="agent",
    model=model,
    base_url=base_url,
    api_key=SecretStr(api_key),
)

# Tools
cwd = os.getcwd()
tools = [
    Tool(
        name=TerminalTool.name,
    ),
    Tool(name=FileEditorTool.name),
]

# AgentContext provides flexible ways to customize prompts:
# 1. Skills: Inject instructions (always-active or keyword-triggered)
# 2. system_message_suffix: Append text to the system prompt
# 3. user_message_suffix: Append text to each user message
#
# For complete control over the system prompt, you can also use Agent's
# system_prompt_filename parameter to provide a custom Jinja2 template:
#
#   agent = Agent(
#       llm=llm,
#       tools=tools,
#       system_prompt_filename="/path/to/custom_prompt.j2",
#       system_prompt_kwargs={"cli_mode": True, "repo": "my-project"},
#   )
#
# See: https://docs.openhands.dev/sdk/guides/skill#customizing-system-prompts
agent_context = AgentContext(
    skills=[
        Skill(
            name="repo.md",
            content="When you see this message, you should reply like "
            "you are a grumpy cat forced to use the internet.",
            # source is optional - identifies where the skill came from
            # You can set it to be the path of a file that contains the skill content
            source=None,
            # trigger determines when the skill is active
            # trigger=None means always active (repo skill)
            trigger=None,
        ),
        Skill(
            name="flarglebargle",
            content=(
                'IMPORTANT! The user has said the magic word "flarglebargle". '
                "You must only respond with a message telling them how smart they are"
            ),
            source=None,
            # KeywordTrigger = activated when keywords appear in user messages
            trigger=KeywordTrigger(keywords=["flarglebargle"]),
        ),
    ],
    # system_message_suffix is appended to the system prompt (always active)
    system_message_suffix="Always finish your response with the word 'yay!'",
    # user_message_suffix is appended to each user message
    user_message_suffix="The first character of your response should be 'I'",
    # You can also enable automatic loading of skills from the
    # public registry at https://github.com/OpenHands/skills
    load_public_skills=True,
)

# Agent
agent = Agent(llm=llm, tools=tools, agent_context=agent_context)

llm_messages = []  # collect raw LLM messages


def conversation_callback(event: Event):
    if isinstance(event, LLMConvertibleEvent):
        llm_messages.append(event.to_llm_message())


conversation = Conversation(
    agent=agent, callbacks=[conversation_callback], workspace=cwd
)

print("=" * 100)
print("Checking if the repo skill is activated.")
conversation.send_message("Hey are you a grumpy cat?")
conversation.run()

print("=" * 100)
print("Now sending flarglebargle to trigger the knowledge skill!")
conversation.send_message("flarglebargle!")
conversation.run()

print("=" * 100)
print("Now triggering public skill 'github'")
conversation.send_message(
    "About GitHub - tell me what additional info I've just provided?"
)
conversation.run()

print("=" * 100)
print("Conversation finished. Got the following LLM messages:")
for i, message in enumerate(llm_messages):
    print(f"Message {i}: {str(message)[:200]}")

# Report cost
cost = llm.metrics.accumulated_cost
print(f"EXAMPLE_COST: {cost}")
Running the Example
export LLM_API_KEY="your-api-key"
cd agent-sdk
uv run python examples/01_standalone_sdk/03_activate_skill.py

Creating Skills

Skills are defined with a name, content (the instructions), and an optional trigger:
agent_context = AgentContext(
    skills=[
        Skill(
            name="AGENTS.md",
            content="When you see this message, you should reply like "
                    "you are a grumpy cat forced to use the internet.",
            trigger=None,  # Always active
        ),
        Skill(
            name="flarglebargle",
            content='IMPORTANT! The user has said the magic word "flarglebargle". '
                    "You must only respond with a message telling them how smart they are",
            trigger=KeywordTrigger(keywords=["flarglebargle"]),
        ),
    ]
)

Keyword Triggers

Use KeywordTrigger to activate skills only when specific words appear:
Skill(
    name="magic-word",
    content="Special instructions when magic word is detected",
    trigger=KeywordTrigger(keywords=["flarglebargle", "sesame"]),
)

File-Based Skills (SKILL.md)

For reusable skills, use the AgentSkills standard directory format.

Directory Structure

Each skill is a directory containing:
my-skill/
├── SKILL.md          # Required: Skill definition with frontmatter
├── scripts/          # Optional: Executable scripts
│   └── helper.sh
├── references/       # Optional: Reference documentation
│   └── examples.md
└── assets/           # Optional: Static assets
    └── config.json

SKILL.md Format

The SKILL.md file defines the skill with YAML frontmatter:
---
name: my-skill                    # Required (standard)
description: >                    # Required (standard)
  A brief description of what this skill does and when to use it.
license: MIT                      # Optional (standard)
compatibility: Requires bash      # Optional (standard)
metadata:                         # Optional (standard)
  author: your-name
  version: "1.0"
triggers:                         # Optional (OpenHands extension)
  - keyword1
  - keyword2
---

# Skill Content

Instructions and documentation for the agent...

Frontmatter Fields

Field          Required   Description
name           Yes        Skill identifier (lowercase + hyphens)
description    Yes        What the skill does (shown to agent)
triggers       No         Keywords that auto-activate this skill (OpenHands extension)
license        No         License name
compatibility  No         Environment requirements
metadata       No         Custom key-value pairs
Add triggers to make your SKILL.md activate automatically when a user prompt contains a matching keyword. Without triggers, the skill can only be read on demand by the agent; it is never auto-injected based on user messages.

Loading Skills

Use load_skills_from_dir() to load all skills from a directory:
examples/05_skills_and_plugins/01_loading_agentskills/main.py
"""Example: Loading Skills from Disk (AgentSkills Standard)

This example demonstrates how to load skills following the AgentSkills standard
from a directory on disk.

Skills are modular, self-contained packages that extend an agent's capabilities
by providing specialized knowledge, workflows, and tools. They follow the
AgentSkills standard which includes:
- SKILL.md file with frontmatter metadata (name, description, triggers)
- Optional resource directories: scripts/, references/, assets/

The example_skills/ directory contains two skills:
- rot13-encryption: Has triggers (encrypt, decrypt) - listed in <available_skills>
  AND content auto-injected when triggered
- code-style-guide: No triggers - listed in <available_skills> for on-demand access

All SKILL.md files follow the AgentSkills progressive disclosure model:
they are listed in <available_skills> with name, description, and location.
Skills with triggers get the best of both worlds: automatic content injection
when triggered, plus the agent can proactively read them anytime.
"""

import os
from pathlib import Path

from pydantic import SecretStr

from openhands.sdk import LLM, Agent, AgentContext, Conversation
from openhands.sdk.context.skills import (
    discover_skill_resources,
    load_skills_from_dir,
)
from openhands.sdk.tool import Tool
from openhands.tools.file_editor import FileEditorTool
from openhands.tools.terminal import TerminalTool


def main():
    # Get the directory containing this script
    script_dir = Path(__file__).parent
    example_skills_dir = script_dir / "example_skills"

    # =========================================================================
    # Part 1: Loading Skills from a Directory
    # =========================================================================
    print("=" * 80)
    print("Part 1: Loading Skills from a Directory")
    print("=" * 80)

    print(f"Loading skills from: {example_skills_dir}")

    # Discover resources in the skill directory
    skill_subdir = example_skills_dir / "rot13-encryption"
    resources = discover_skill_resources(skill_subdir)
    print("\nDiscovered resources in rot13-encryption/:")
    print(f"  - scripts: {resources.scripts}")
    print(f"  - references: {resources.references}")
    print(f"  - assets: {resources.assets}")

    # Load skills from the directory
    repo_skills, knowledge_skills, agent_skills = load_skills_from_dir(
        example_skills_dir
    )

    print("\nLoaded skills from directory:")
    print(f"  - Repo skills: {list(repo_skills.keys())}")
    print(f"  - Knowledge skills: {list(knowledge_skills.keys())}")
    print(f"  - Agent skills (SKILL.md): {list(agent_skills.keys())}")

    # Access the loaded skill and show all AgentSkills standard fields
    if agent_skills:
        skill_name = list(agent_skills.keys())[0]
        loaded_skill = agent_skills[skill_name]
        print(f"\nDetails for '{skill_name}' (AgentSkills standard fields):")
        print(f"  - Name: {loaded_skill.name}")
        desc = loaded_skill.description or ""
        print(f"  - Description: {desc[:70]}...")
        print(f"  - License: {loaded_skill.license}")
        print(f"  - Compatibility: {loaded_skill.compatibility}")
        print(f"  - Metadata: {loaded_skill.metadata}")
        if loaded_skill.resources:
            print("  - Resources:")
            print(f"    - Scripts: {loaded_skill.resources.scripts}")
            print(f"    - References: {loaded_skill.resources.references}")
            print(f"    - Assets: {loaded_skill.resources.assets}")
            print(f"    - Skill root: {loaded_skill.resources.skill_root}")

    # =========================================================================
    # Part 2: Using Skills with an Agent
    # =========================================================================
    print("\n" + "=" * 80)
    print("Part 2: Using Skills with an Agent")
    print("=" * 80)

    # Check for API key
    api_key = os.getenv("LLM_API_KEY")
    if not api_key:
        print("Skipping agent demo (LLM_API_KEY not set)")
        print("\nTo run the full demo, set the LLM_API_KEY environment variable:")
        print("  export LLM_API_KEY=your-api-key")
        return

    # Configure LLM
    model = os.getenv("LLM_MODEL", "anthropic/claude-sonnet-4-5-20250929")
    llm = LLM(
        usage_id="skills-demo",
        model=model,
        api_key=SecretStr(api_key),
        base_url=os.getenv("LLM_BASE_URL"),
    )

    # Create agent context with loaded skills
    agent_context = AgentContext(
        skills=list(agent_skills.values()),
        # Disable public skills for this demo to keep output focused
        load_public_skills=False,
    )

    # Create agent with tools so it can read skill resources
    tools = [
        Tool(name=TerminalTool.name),
        Tool(name=FileEditorTool.name),
    ]
    agent = Agent(llm=llm, tools=tools, agent_context=agent_context)

    # Create conversation
    conversation = Conversation(agent=agent, workspace=os.getcwd())

    # Test the skill (triggered by "encrypt" keyword)
    # The skill provides instructions and a script for ROT13 encryption
    print("\nSending message with 'encrypt' keyword to trigger skill...")
    conversation.send_message("Encrypt the message 'hello world'.")
    conversation.run()

    print(f"\nTotal cost: ${llm.metrics.accumulated_cost:.4f}")


if __name__ == "__main__":
    main()
Running the Example
export LLM_API_KEY="your-api-key"
cd agent-sdk
uv run python examples/05_skills_and_plugins/01_loading_agentskills/main.py

Key Functions

load_skills_from_dir()

Loads all skills from a directory, returning three dictionaries:
from openhands.sdk.context.skills import load_skills_from_dir

repo_skills, knowledge_skills, agent_skills = load_skills_from_dir(skills_dir)
  • repo_skills: Skills from repo.md files (always active)
  • knowledge_skills: Skills from knowledge/ subdirectories
  • agent_skills: Skills from SKILL.md files (AgentSkills standard)
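To use everything that was loaded, you can merge the three dictionaries into a single AgentContext. A minimal sketch (keep only the groups you actually need):
from openhands.sdk import AgentContext
from openhands.sdk.context.skills import load_skills_from_dir

repo_skills, knowledge_skills, agent_skills = load_skills_from_dir("/path/to/skills")

# Merge all loaded skill groups into one context; drop any group you
# don't need (e.g. keep only agent_skills for SKILL.md-based skills).
all_skills = [
    *repo_skills.values(),
    *knowledge_skills.values(),
    *agent_skills.values(),
]
agent_context = AgentContext(skills=all_skills)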

discover_skill_resources()

Discovers resource files in a skill directory:
from openhands.sdk.context.skills import discover_skill_resources

resources = discover_skill_resources(skill_dir)
print(resources.scripts)     # List of script files
print(resources.references)  # List of reference files
print(resources.assets)      # List of asset files
print(resources.skill_root)  # Path to skill directory

Skill Location in Prompts

The <location> element in <available_skills> follows the AgentSkills standard, allowing agents to read the full skill content on demand. When a triggered skill is activated, the content is injected with the location path:
<EXTRA_INFO>
The following information has been included based on a keyword match for "encrypt".

Skill location: /path/to/rot13-encryption
(Use this path to resolve relative file references in the skill content below)

[skill content from SKILL.md]
</EXTRA_INFO>
This enables skills to reference their own scripts and resources using relative paths like ./scripts/encrypt.sh.

Example Skill: ROT13 Encryption

Here’s a skill that uses triggers (an OpenHands extension). SKILL.md:
---
name: rot13-encryption
description: >
  This skill helps encrypt and decrypt messages using ROT13 cipher.
triggers:
  - encrypt
  - decrypt
  - cipher
---

# ROT13 Encryption Skill

Run the [encrypt.sh](scripts/encrypt.sh) script with your message:

```bash
./scripts/encrypt.sh "your message"
```
scripts/encrypt.sh:
#!/bin/bash
echo "$1" | tr 'A-Za-z' 'N-ZA-Mn-za-m'
When the user says “encrypt”, the skill is triggered and the agent can use the provided script.

Loading Public Skills

OpenHands maintains a public skills repository with community-contributed skills. You can automatically load these skills without waiting for SDK updates.

Automatic Loading via AgentContext

Enable public skills loading in your AgentContext:
agent_context = AgentContext(
    load_public_skills=True,  # Auto-load from public registry
    skills=[
        # Your custom skills here
    ]
)
When enabled, the SDK will:
  1. Clone or update the public skills repository to ~/.openhands/cache/skills/ on first run
  2. Load all available skills from the repository
  3. Merge them with your explicitly defined skills
Skill Precedence: If a skill name conflicts, your explicitly defined skills take precedence over public skills.
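For example, a minimal sketch of overriding a public skill (this assumes the registry ships a skill named "github"; any matching name behaves the same way):
from openhands.sdk import AgentContext
from openhands.sdk.context import Skill

agent_context = AgentContext(
    load_public_skills=True,
    skills=[
        # Because this name matches a public skill, the explicit
        # definition below takes precedence over the registry version.
        Skill(
            name="github",
            content="Use the gh CLI for all GitHub operations in this repository.",
            trigger=None,
        ),
    ],
)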

Programmatic Loading

You can also load public skills manually for more control:
from openhands.sdk.context.skills import load_public_skills

# Load all public skills
public_skills = load_public_skills()

# Use with AgentContext
agent_context = AgentContext(skills=public_skills)

# Or combine with custom skills
my_skills = [
    Skill(name="custom", content="Custom instructions", trigger=None)
]
agent_context = AgentContext(skills=my_skills + public_skills)

Custom Skills Repository

You can load skills from your own repository:
from openhands.sdk.context.skills import load_public_skills

# Load from a custom repository
custom_skills = load_public_skills(
    repo_url="https://github.com/my-org/my-skills",
    branch="main"
)

How It Works

The load_public_skills() function uses git-based caching for efficiency:
  • First run: Clones the skills repository to ~/.openhands/cache/skills/public-skills/
  • Subsequent runs: Pulls the latest changes to keep skills up-to-date
  • Offline mode: Uses the cached version if network is unavailable
This approach is more efficient than fetching individual skill files via HTTP and ensures you always have access to the latest community skills.
Explore available public skills at github.com/OpenHands/skills. These skills cover domains such as GitHub integration, Python development, and debugging.

Customizing Agent Context

Message Suffixes

Append custom instructions to the system prompt or user messages via AgentContext:
agent_context = AgentContext(
    system_message_suffix="""
<REPOSITORY_INFO>
Repository: my-project
Branch: feature/new-api
</REPOSITORY_INFO>
    """.strip(),
    user_message_suffix="Remember to explain your reasoning."
)
  • system_message_suffix: Appended to system prompt (always active, combined with repo skills)
  • user_message_suffix: Appended to each user message

Replacing the Entire System Prompt

For complete control, provide a custom Jinja2 template via the Agent class:
from openhands.sdk import Agent

agent = Agent(
    llm=llm,
    tools=tools,
    system_prompt_filename="/path/to/custom_system_prompt.j2",  # Absolute path
    system_prompt_kwargs={"cli_mode": True, "repo_name": "my-project"}
)
Custom template example (custom_system_prompt.j2):
You are a helpful coding assistant for {{ repo_name }}.

{% if cli_mode %}
You are running in CLI mode. Keep responses concise.
{% endif %}

Follow these guidelines:
- Write clean, well-documented code
- Consider edge cases and error handling
- Suggest tests when appropriate
Key points:
  • Use relative filenames (e.g., "system_prompt.j2") to load from the agent’s prompts directory
  • Use absolute paths (e.g., "/path/to/prompt.j2") to load from any location
  • Pass variables to the template via system_prompt_kwargs
  • The system_message_suffix from AgentContext is automatically appended after your custom prompt
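A minimal sketch combining both mechanisms (assuming llm and tools are defined as in the earlier examples):
from openhands.sdk import Agent, AgentContext

agent_context = AgentContext(
    # Appended after whatever template the agent renders
    system_message_suffix="Current focus: the billing module.",
)
agent = Agent(
    llm=llm,
    tools=tools,
    agent_context=agent_context,
    # Relative filename: resolved from the agent's prompts directory
    system_prompt_filename="system_prompt.j2",
    system_prompt_kwargs={"cli_mode": True},
)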

Next Steps