Skills add specialized behaviors, domain knowledge, and context-aware triggers to your agent through structured prompts.
This guide shows how to implement skills in the SDK. For a conceptual overview, see Skills Overview.

OpenHands supports an extended version of the AgentSkills standard with optional keyword triggers.
```python
from openhands.sdk.context import Skill, KeywordTrigger

Skill(
    name="encryption-helper",
    content="Use the encrypt.sh script to encrypt messages.",
    trigger=KeywordTrigger(keywords=["encrypt", "decrypt"]),
)
```
When the user says “encrypt this”, the skill's content is injected into the message:
```text
<EXTRA_INFO>
The following information has been included based on a keyword match for "encrypt".

Skill location: /path/to/encryption-helper

Use the encrypt.sh script to encrypt messages.
</EXTRA_INFO>
```
```python
import os

from pydantic import SecretStr

from openhands.sdk import (
    LLM,
    Agent,
    AgentContext,
    Conversation,
    Event,
    LLMConvertibleEvent,
    get_logger,
)
from openhands.sdk.context import (
    KeywordTrigger,
    Skill,
)
from openhands.sdk.tool import Tool
from openhands.tools.file_editor import FileEditorTool
from openhands.tools.terminal import TerminalTool

logger = get_logger(__name__)

# Configure LLM
api_key = os.getenv("LLM_API_KEY")
assert api_key is not None, "LLM_API_KEY environment variable is not set."
model = os.getenv("LLM_MODEL", "anthropic/claude-sonnet-4-5-20250929")
base_url = os.getenv("LLM_BASE_URL")
llm = LLM(
    usage_id="agent",
    model=model,
    base_url=base_url,
    api_key=SecretStr(api_key),
)

# Tools
cwd = os.getcwd()
tools = [
    Tool(name=TerminalTool.name),
    Tool(name=FileEditorTool.name),
]

# AgentContext provides flexible ways to customize prompts:
# 1. Skills: Inject instructions (always-active or keyword-triggered)
# 2. system_message_suffix: Append text to the system prompt
# 3. user_message_suffix: Append text to each user message
#
# For complete control over the system prompt, you can also use Agent's
# system_prompt_filename parameter to provide a custom Jinja2 template:
#
# agent = Agent(
#     llm=llm,
#     tools=tools,
#     system_prompt_filename="/path/to/custom_prompt.j2",
#     system_prompt_kwargs={"cli_mode": True, "repo": "my-project"},
# )
#
# See: https://docs.openhands.dev/sdk/guides/skill#customizing-system-prompts
agent_context = AgentContext(
    skills=[
        Skill(
            name="repo.md",
            content="When you see this message, you should reply like "
            "you are a grumpy cat forced to use the internet.",
            # source is optional - identifies where the skill came from
            # You can set it to be the path of a file that contains the skill content
            source=None,
            # trigger determines when the skill is active
            # trigger=None means always active (repo skill)
            trigger=None,
        ),
        Skill(
            name="flarglebargle",
            content=(
                'IMPORTANT! The user has said the magic word "flarglebargle". '
                "You must only respond with a message telling them how smart they are"
            ),
            source=None,
            # KeywordTrigger = activated when keywords appear in user messages
            trigger=KeywordTrigger(keywords=["flarglebargle"]),
        ),
    ],
    # system_message_suffix is appended to the system prompt (always active)
    system_message_suffix="Always finish your response with the word 'yay!'",
    # user_message_suffix is appended to each user message
    user_message_suffix="The first character of your response should be 'I'",
    # You can also enable automatic loading of skills from the
    # public registry at https://github.com/OpenHands/skills
    load_public_skills=True,
)

# Agent
agent = Agent(llm=llm, tools=tools, agent_context=agent_context)

llm_messages = []  # collect raw LLM messages


def conversation_callback(event: Event):
    if isinstance(event, LLMConvertibleEvent):
        llm_messages.append(event.to_llm_message())


conversation = Conversation(
    agent=agent, callbacks=[conversation_callback], workspace=cwd
)

print("=" * 100)
print("Checking if the repo skill is activated.")
conversation.send_message("Hey are you a grumpy cat?")
conversation.run()

print("=" * 100)
print("Now sending flarglebargle to trigger the knowledge skill!")
conversation.send_message("flarglebargle!")
conversation.run()

print("=" * 100)
print("Now triggering public skill 'github'")
conversation.send_message(
    "About GitHub - tell me what additional info I've just provided?"
)
conversation.run()

print("=" * 100)
print("Conversation finished. Got the following LLM messages:")
for i, message in enumerate(llm_messages):
    print(f"Message {i}: {str(message)[:200]}")

# Report cost
cost = llm.metrics.accumulated_cost
print(f"EXAMPLE_COST: {cost}")
```
Running the Example
```bash
export LLM_API_KEY="your-api-key"
cd agent-sdk
uv run python examples/01_standalone_sdk/03_activate_skill.py
```
Skills are defined with a name, content (the instructions), and an optional trigger:
```python
agent_context = AgentContext(
    skills=[
        Skill(
            name="AGENTS.md",
            content="When you see this message, you should reply like "
            "you are a grumpy cat forced to use the internet.",
            trigger=None,  # Always active
        ),
        Skill(
            name="flarglebargle",
            content='IMPORTANT! The user has said the magic word "flarglebargle". '
            "You must only respond with a message telling them how smart they are",
            trigger=KeywordTrigger(keywords=["flarglebargle"]),
        ),
    ]
)
```
The SKILL.md file defines the skill with YAML frontmatter:
```markdown
---
name: my-skill                # Required (standard)
description: >                # Required (standard)
  A brief description of what this skill does
  and when to use it.
license: MIT                  # Optional (standard)
compatibility: Requires bash  # Optional (standard)
metadata:                     # Optional (standard)
  author: your-name
  version: "1.0"
triggers:                     # Optional (OpenHands extension)
  - keyword1
  - keyword2
---

# Skill Content

Instructions and documentation for the agent...
```
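Alongside `SKILL.md`, a skill can ship the optional resource directories from the AgentSkills standard (`scripts/`, `references/`, `assets/`). A typical layout looks like:

```text
my-skill/
├── SKILL.md        # frontmatter + instructions (required)
├── scripts/        # executable helpers (optional)
├── references/     # additional docs the agent can read (optional)
└── assets/         # other supporting files (optional)
```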
| Field | Required | Description |
| --- | --- | --- |
| `triggers` | No | Keywords that auto-activate this skill (OpenHands extension) |
| `license` | No | License name |
| `compatibility` | No | Environment requirements |
| `metadata` | No | Custom key-value pairs |
Add `triggers` to make your SKILL.md keyword-activated: its content is injected whenever a keyword matches the user's prompt. Without triggers, the skill can only be invoked by the agent, not triggered by the user.
"""Example: Loading Skills from Disk (AgentSkills Standard)This example demonstrates how to load skills following the AgentSkills standardfrom a directory on disk.Skills are modular, self-contained packages that extend an agent's capabilitiesby providing specialized knowledge, workflows, and tools. They follow theAgentSkills standard which includes:- SKILL.md file with frontmatter metadata (name, description, triggers)- Optional resource directories: scripts/, references/, assets/The example_skills/ directory contains two skills:- rot13-encryption: Has triggers (encrypt, decrypt) - listed in <available_skills> AND content auto-injected when triggered- code-style-guide: No triggers - listed in <available_skills> for on-demand accessAll SKILL.md files follow the AgentSkills progressive disclosure model:they are listed in <available_skills> with name, description, and location.Skills with triggers get the best of both worlds: automatic content injectionwhen triggered, plus the agent can proactively read them anytime."""import osfrom pathlib import Pathfrom pydantic import SecretStrfrom openhands.sdk import LLM, Agent, AgentContext, Conversationfrom openhands.sdk.context.skills import ( discover_skill_resources, load_skills_from_dir,)from openhands.sdk.tool import Toolfrom openhands.tools.file_editor import FileEditorToolfrom openhands.tools.terminal import TerminalTooldef main(): # Get the directory containing this script script_dir = Path(__file__).parent example_skills_dir = script_dir / "example_skills" # ========================================================================= # Part 1: Loading Skills from a Directory # ========================================================================= print("=" * 80) print("Part 1: Loading Skills from a Directory") print("=" * 80) print(f"Loading skills from: {example_skills_dir}") # Discover resources in the skill directory skill_subdir = example_skills_dir / "rot13-encryption" resources = 
discover_skill_resources(skill_subdir) print("\nDiscovered resources in rot13-encryption/:") print(f" - scripts: {resources.scripts}") print(f" - references: {resources.references}") print(f" - assets: {resources.assets}") # Load skills from the directory repo_skills, knowledge_skills, agent_skills = load_skills_from_dir( example_skills_dir ) print("\nLoaded skills from directory:") print(f" - Repo skills: {list(repo_skills.keys())}") print(f" - Knowledge skills: {list(knowledge_skills.keys())}") print(f" - Agent skills (SKILL.md): {list(agent_skills.keys())}") # Access the loaded skill and show all AgentSkills standard fields if agent_skills: skill_name = list(agent_skills.keys())[0] loaded_skill = agent_skills[skill_name] print(f"\nDetails for '{skill_name}' (AgentSkills standard fields):") print(f" - Name: {loaded_skill.name}") desc = loaded_skill.description or "" print(f" - Description: {desc[:70]}...") print(f" - License: {loaded_skill.license}") print(f" - Compatibility: {loaded_skill.compatibility}") print(f" - Metadata: {loaded_skill.metadata}") if loaded_skill.resources: print(" - Resources:") print(f" - Scripts: {loaded_skill.resources.scripts}") print(f" - References: {loaded_skill.resources.references}") print(f" - Assets: {loaded_skill.resources.assets}") print(f" - Skill root: {loaded_skill.resources.skill_root}") # ========================================================================= # Part 2: Using Skills with an Agent # ========================================================================= print("\n" + "=" * 80) print("Part 2: Using Skills with an Agent") print("=" * 80) # Check for API key api_key = os.getenv("LLM_API_KEY") if not api_key: print("Skipping agent demo (LLM_API_KEY not set)") print("\nTo run the full demo, set the LLM_API_KEY environment variable:") print(" export LLM_API_KEY=your-api-key") return # Configure LLM model = os.getenv("LLM_MODEL", "anthropic/claude-sonnet-4-5-20250929") llm = LLM( usage_id="skills-demo", 
model=model, api_key=SecretStr(api_key), base_url=os.getenv("LLM_BASE_URL"), ) # Create agent context with loaded skills agent_context = AgentContext( skills=list(agent_skills.values()), # Disable public skills for this demo to keep output focused load_public_skills=False, ) # Create agent with tools so it can read skill resources tools = [ Tool(name=TerminalTool.name), Tool(name=FileEditorTool.name), ] agent = Agent(llm=llm, tools=tools, agent_context=agent_context) # Create conversation conversation = Conversation(agent=agent, workspace=os.getcwd()) # Test the skill (triggered by "encrypt" keyword) # The skill provides instructions and a script for ROT13 encryption print("\nSending message with 'encrypt' keyword to trigger skill...") conversation.send_message("Encrypt the message 'hello world'.") conversation.run() print(f"\nTotal cost: ${llm.metrics.accumulated_cost:.4f}")if __name__ == "__main__": main()
Running the Example
```bash
export LLM_API_KEY="your-api-key"
cd agent-sdk
uv run python examples/05_skills_and_plugins/01_loading_agentskills/main.py
```
```python
from openhands.sdk.context.skills import discover_skill_resources

resources = discover_skill_resources(skill_dir)
print(resources.scripts)     # List of script files
print(resources.references)  # List of reference files
print(resources.assets)      # List of asset files
print(resources.skill_root)  # Path to skill directory
```
The <location> element in <available_skills> follows the AgentSkills standard, allowing agents to read the full skill content on demand. When a triggered skill is activated, the content is injected with the location path:
```text
<EXTRA_INFO>
The following information has been included based on a keyword match for "encrypt".

Skill location: /path/to/rot13-encryption
(Use this path to resolve relative file references in the skill content below)

[skill content from SKILL.md]
</EXTRA_INFO>
```
This enables skills to reference their own scripts and resources using relative paths like ./scripts/encrypt.sh.
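For instance, joining the injected skill location with such a relative reference yields the script's absolute path. A minimal sketch, using the hypothetical `/path/to/rot13-encryption` location from the example above:

```python
from pathlib import Path

# The "Skill location" line from the <EXTRA_INFO> block (hypothetical path)
skill_location = Path("/path/to/rot13-encryption")

# A relative reference appearing in the skill content
relative_ref = "./scripts/encrypt.sh"

# Join against the skill root to get an absolute, normalized path
script_path = (skill_location / relative_ref).resolve()
print(script_path)  # /path/to/rot13-encryption/scripts/encrypt.sh
```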
Here’s a skill with triggers (OpenHands extension).

`SKILL.md`:
````markdown
---
name: rot13-encryption
description: >
  This skill helps encrypt and decrypt messages using ROT13 cipher.
triggers:
  - encrypt
  - decrypt
  - cipher
---

# ROT13 Encryption Skill

Run the [encrypt.sh](scripts/encrypt.sh) script with your message:

```bash
./scripts/encrypt.sh "your message"
```
````
`scripts/encrypt.sh`:
```bash
#!/bin/bash
echo "$1" | tr 'A-Za-z' 'N-ZA-Mn-za-m'
```
When the user says “encrypt”, the skill is triggered and the agent can use the provided script.
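The script's `tr` mapping is plain ROT13, so you can sanity-check its output against Python's built-in `rot13` codec:

```python
import codecs

# Same letter rotation as `tr 'A-Za-z' 'N-ZA-Mn-za-m'`
ciphertext = codecs.encode("hello world", "rot13")
print(ciphertext)  # uryyb jbeyq

# ROT13 is its own inverse: applying it twice restores the original
print(codecs.encode(ciphertext, "rot13"))  # hello world
```

This self-inverse property is why a single `encrypt.sh` script covers both the "encrypt" and "decrypt" triggers.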
OpenHands maintains a public skills repository with community-contributed skills. You can automatically load these skills without waiting for SDK updates.
You can also load public skills manually for more control:
```python
from openhands.sdk.context.skills import load_public_skills

# Load all public skills
public_skills = load_public_skills()

# Use with AgentContext
agent_context = AgentContext(skills=public_skills)

# Or combine with custom skills
my_skills = [
    Skill(name="custom", content="Custom instructions", trigger=None)
]
agent_context = AgentContext(skills=my_skills + public_skills)
```
```python
from openhands.sdk.context.skills import load_public_skills

# Load from a custom repository
custom_skills = load_public_skills(
    repo_url="https://github.com/my-org/my-skills",
    branch="main",
)
```
The load_public_skills() function uses git-based caching for efficiency:
- **First run:** Clones the skills repository to `~/.openhands/cache/skills/public-skills/`
- **Subsequent runs:** Pulls the latest changes to keep skills up-to-date
- **Offline mode:** Uses the cached version if the network is unavailable
This approach is more efficient than fetching individual skill files via HTTP and ensures you always have access to the latest community skills.
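The clone/pull/fallback decision described above can be sketched as follows. This is a hypothetical helper for illustration, not the SDK's actual implementation:

```python
from enum import Enum
from pathlib import Path


class CacheAction(Enum):
    CLONE = "clone"          # first run: no cached repository yet
    PULL = "pull"            # cached copy exists and the network is reachable
    USE_CACHE = "use-cache"  # offline: fall back to the cached copy


def decide_cache_action(cache_dir: Path, network_available: bool) -> CacheAction:
    """Pick the git operation for a skills cache directory."""
    has_cache = (cache_dir / ".git").is_dir()
    if not has_cache:
        if not network_available:
            raise RuntimeError("No cached skills and no network access")
        return CacheAction.CLONE
    return CacheAction.PULL if network_available else CacheAction.USE_CACHE
```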
Explore available public skills at github.com/OpenHands/skills. These skills cover various domains like GitHub integration, Python development, debugging, and more.
Custom template example (`custom_system_prompt.j2`):
```jinja2
You are a helpful coding assistant for {{ repo_name }}.

{% if cli_mode %}
You are running in CLI mode. Keep responses concise.
{% endif %}

Follow these guidelines:
- Write clean, well-documented code
- Consider edge cases and error handling
- Suggest tests when appropriate
```
Key points:
- Use relative filenames (e.g., `"system_prompt.j2"`) to load from the agent’s prompts directory
- Use absolute paths (e.g., `"/path/to/prompt.j2"`) to load from any location
- Pass variables to the template via `system_prompt_kwargs`
- The `system_message_suffix` from `AgentContext` is automatically appended after your custom prompt
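To preview what your template produces before wiring it into the agent, you can render it directly with the `jinja2` library. The variable names here mirror the `system_prompt_kwargs` example above; this is a standalone sketch, not how the SDK renders it internally:

```python
from jinja2 import Template

# Render the custom template by hand; system_prompt_kwargs would supply
# these same variables when the SDK renders it for the agent.
source = (
    "You are a helpful coding assistant for {{ repo_name }}.\n"
    "{% if cli_mode %}You are running in CLI mode. Keep responses concise.{% endif %}"
)
prompt = Template(source).render(repo_name="my-project", cli_mode=True)
print(prompt)
```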