中文文档 (Chinese Documentation) | Contributing | Documentation
2025-12-22: Native-first distribution (Windows OOTB). Kode prefers a cached native binary and falls back to the Node.js runtime when needed. See docs/binary-distribution.md.
Kode supports the AGENTS.md standard: a simple, open format for guiding coding agents, used by 60k+ open-source projects.
- ✅ AGENTS.md - Native support for the OpenAI-initiated standard format
- ✅ Legacy `.claude` compatibility - Reads `.claude` directories and `CLAUDE.md` when present (see `docs/compatibility.md`)
- ✅ Subagent System - Advanced agent delegation and task orchestration
- ✅ Cross-platform - Works with 20+ AI models and providers
Use `# Your documentation request` to generate and maintain your AGENTS.md file automatically, while preserving compatibility with existing `.claude` workflows.
- Kode reads project instructions by walking from the Git repo root → current working directory.
- In each directory, it prefers `AGENTS.override.md` over `AGENTS.md` (at most one file per directory).
- Discovered files are concatenated root → leaf (combined size capped at 32 KiB by default; override with `KODE_PROJECT_DOC_MAX_BYTES`); see the sketch below.
- If `CLAUDE.md` exists in the current directory, Kode also reads it as a legacy instruction file.
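For example, with the default settings, a layout like this (paths illustrative) concatenates the root file first, and the override file wins in its own directory:

```bash
# Illustrative layout, read root → leaf:
#   repo/AGENTS.md                         ← read first (repo root)
#   repo/packages/core/AGENTS.override.md  ← preferred over AGENTS.md in that directory

# Raise the combined 32 KiB size cap (value in bytes):
export KODE_PROJECT_DOC_MAX_BYTES=65536
```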
Kode is a powerful AI assistant that lives in your terminal. It can understand your codebase, edit files, run commands, and handle entire workflows for you.
⚠️ Security Notice: Kode runs in YOLO mode by default (equivalent to the `--dangerously-skip-permissions` flag), bypassing all permission checks for maximum productivity. YOLO mode is recommended only in trusted, secure environments and when working on non-critical projects. If you're working with important files or using models of questionable capability, we strongly recommend running `kode --safe` to enable permission checks and manual approval for all operations.

📊 Model Performance: For optimal performance, we recommend newer, more capable models designed for autonomous task completion. Avoid older Q&A-focused models like GPT-4o or Gemini 2.5 Pro, which are optimized for answering questions rather than sustained independent task execution. Choose models specifically trained for agentic workflows and extended reasoning.
- Kode does not send product telemetry/analytics by default.
- Network requests happen only when you explicitly use networked features:
- Model provider requests (Anthropic/OpenAI-compatible endpoints you configure)
- Web tools (`WebFetch`, `WebSearch`)
- Plugin marketplace downloads (GitHub/URL sources) and OAuth flows (when used)
- Optional update checks (opt-in via `autoUpdaterStatus: enabled`)
- 🤖 AI-Powered Assistance - Uses advanced AI models to understand and respond to your requests
- 🔄 Multi-Model Collaboration - Flexibly switch and combine multiple AI models to leverage their unique strengths
- 🦜 Expert Model Consultation - Use `@ask-model-name` to consult specific AI models for specialized analysis
- 👤 Intelligent Agent System - Use `@run-agent-name` to delegate tasks to specialized subagents
- 📝 Code Editing - Directly edit files with intelligent suggestions and improvements
- 🔍 Codebase Understanding - Analyzes your project structure and code relationships
- 🚀 Command Execution - Run shell commands and see results in real-time
- 🛠️ Workflow Automation - Handle complex development tasks with simple prompts
`Option+G` (`Alt+G`) opens your message in your preferred editor (respects `$EDITOR`/`$VISUAL`; falls back to code/nano/vim/notepad) and returns the text to the prompt when you close it. `Option+Enter` inserts a newline inside the prompt without sending; plain Enter submits. `Option+M` cycles the active model.
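To control which editor `Option+G` opens, set the standard environment variable before launching Kode:

```bash
# Kode respects $EDITOR / $VISUAL for the external-editor shortcut
export EDITOR=vim
kode
```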
The completion system combines several matching strategies to provide fast, accurate suggestions:
- Hyphen-Aware Matching - Type `dao` to match `run-agent-dao-qi-harmony-designer`
- Abbreviation Support - `dq` matches `dao-qi`, `nde` matches `node`
- Numeric Suffix Handling - `py3` intelligently matches `python3`
- Multi-Algorithm Fusion - Combines 7+ matching algorithms for best results
- No @ Required - Type `gp5` directly to match `@ask-gpt-5`
- Auto-Prefix Addition - Tab/Enter automatically adds `@` for agents and models
- Mixed Completion - Seamlessly switch between commands, files, agents, and models
- Smart Prioritization - Results ranked by relevance and usage frequency
- 500+ Common Commands - Curated database of frequently used Unix/Linux commands
- System Intersection - Only shows commands that actually exist on your system
- Priority Scoring - Common commands appear first (git, npm, docker, etc.)
- Real-time Loading - Dynamic command discovery from system PATH
- 🎨 Interactive UI - Beautiful terminal interface with syntax highlighting
- 🔌 Tool System - Extensible architecture with specialized tools for different tasks
- 💾 Context Management - Smart context handling to maintain conversation continuity
- 📋 AGENTS.md Integration - Use `#` documentation requests to auto-generate and maintain project documentation
```bash
npm install -g @shareai-lab/kode
```

🇨🇳 For users in China: If you encounter network issues, use a mirror registry:

```bash
npm install -g @shareai-lab/kode --registry=https://registry.npmmirror.com
```

Dev channel (latest features):

```bash
npm install -g @shareai-lab/kode@dev
```

After installation, you can use any of these commands:

- `kode` - Primary command
- `kwa` - Kode With Agent (alternative)
- `kd` - Ultra-short alias
- No WSL/Git Bash required.
- On `postinstall`, Kode will best-effort download a native binary from GitHub Releases into `${KODE_BIN_DIR:-~/.kode/bin}/<version>/<platform>-<arch>/kode(.exe)`.
- The wrapper (`cli.js`) prefers the native binary and falls back to the Node.js runtime (`node dist/index.js`) when needed.

Overrides:

- Mirror downloads: `KODE_BINARY_BASE_URL`
- Disable download: `KODE_SKIP_BINARY_DOWNLOAD=1`
- Cache directory: `KODE_BIN_DIR`
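For example (the mirror URL below is hypothetical):

```bash
# Fetch the native binary from a mirror instead of GitHub Releases
export KODE_BINARY_BASE_URL="https://mirror.example.com/kode-releases"

# Or skip the download entirely and rely on the Node.js fallback
export KODE_SKIP_BINARY_DOWNLOAD=1

# Cache binaries somewhere other than ~/.kode/bin
export KODE_BIN_DIR="$HOME/.cache/kode-bin"
```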
See docs/binary-distribution.md.
- Global config (models, pointers, theme, etc.): `~/.kode.json` (or `<KODE_CONFIG_DIR>/config.json` when `KODE_CONFIG_DIR`/`CLAUDE_CONFIG_DIR` is set).
- Project/local settings (output style, etc.): `./.kode/settings.json` and `./.kode/settings.local.json` (legacy `.claude` is supported for some features).
- Configure models via `/model` (UI) or `kode models import/export` (YAML). Details: `docs/develop/configuration.md`.
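For instance, to relocate the global config (directory choice illustrative):

```bash
# Kode then reads $KODE_CONFIG_DIR/config.json instead of ~/.kode.json
export KODE_CONFIG_DIR="$HOME/.config/kode"
```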
Start an interactive session:
```bash
kode
# or
kwa
# or
kd
```

Get a quick response:

```bash
kode -p "explain this function" path/to/file.js
# or
kwa -p "explain this function" path/to/file.js
```

Run Kode as an ACP agent server (stdio JSON-RPC), for clients like Toad/Zed:

```bash
kode-acp
# or
kode --acp
```

Toad example:

```bash
toad acp "kode-acp"
```

More: docs/acp.md.
Kode supports a powerful @ mention system for intelligent completions:
```bash
# Consult specific AI models for expert opinions
@ask-claude-sonnet-4 How should I optimize this React component for performance?
@ask-gpt-5 What are the security implications of this authentication method?
@ask-o1-preview Analyze the complexity of this algorithm
```

```bash
# Delegate tasks to specialized subagents
@run-agent-simplicity-auditor Review this code for over-engineering
@run-agent-architect Design a microservices architecture for this system
@run-agent-test-writer Create comprehensive tests for these modules
```

```bash
# Reference files and directories with auto-completion
@packages/core/src/query/index.ts
@docs/README.md
@.env.example
```

The @ mention system provides intelligent completions as you type, showing available models, agents, and files.
Kode can connect to MCP servers to extend tools and context.
- Config files: `.mcp.json` (recommended) or `.mcprc` in your project root. See docs/mcp.md.
- CLI:

```bash
kode mcp add
kode mcp list
kode mcp get <name>
kode mcp remove <name>
```

Example `.mcprc`:

```json
{
  "my-sse-server": { "type": "sse", "url": "http://127.0.0.1:3333/sse" }
}
```

- Default mode skips most prompts for speed.
- Safe mode: `kode --safe` requires approval for Bash commands and file writes/edits.
- Plan mode: the assistant may ask to enter plan mode to draft a plan file; while in plan mode, only read-only/planning tools (and the plan file) are allowed until you approve exiting plan mode.
- Multi-line/large paste is inserted as a placeholder and expanded on submit.
- Pasting multiple existing file paths inserts `@path` mentions automatically (quoted when needed).
- Image paste (macOS): press `Ctrl+V` to attach clipboard images; you can paste multiple images before sending.
- In safe mode (or with `KODE_SYSTEM_SANDBOX=1`), agent-triggered Bash tool calls try to run inside a `bwrap` sandbox when available.
- Network is disabled by default; set `KODE_SYSTEM_SANDBOX_NETWORK=inherit` to allow network.
- Set `KODE_SYSTEM_SANDBOX=required` to fail closed if the sandbox cannot be started (see the example below).
- See `docs/system-sandbox.md` for details and platform notes.
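A typical opt-in looks like this:

```bash
# Require the bwrap sandbox for agent-triggered Bash; fail closed if unavailable
export KODE_SYSTEM_SANDBOX=required

# Sandboxed commands have no network by default; inherit the host network if needed
export KODE_SYSTEM_SANDBOX_NETWORK=inherit
```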
- Models: use `/model`, or `kode models import kode-models.yaml`, and ensure required API key env vars exist.
- Windows: if the native binary download is blocked/offline, set `KODE_BINARY_BASE_URL` (mirror) or `KODE_SKIP_BINARY_DOWNLOAD=1` (skip download); the wrapper will fall back to the Node.js runtime (`dist/index.js`).
- MCP: use `kode mcp list` to check server status; tune `MCP_CONNECTION_TIMEOUT_MS`, `MCP_SERVER_CONNECTION_BATCH_SIZE`, and `MCP_TOOL_TIMEOUT` if servers are slow (example below).
- Sandbox: install `bwrap` (bubblewrap) on Linux, or set `KODE_SYSTEM_SANDBOX=0` to disable.
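For slow MCP servers, values like these may help (the numbers are illustrative, and the timeouts are assumed to be in milliseconds):

```bash
export MCP_CONNECTION_TIMEOUT_MS=30000      # allow more time per server connection
export MCP_SERVER_CONNECTION_BATCH_SIZE=2   # connect to fewer servers at once
export MCP_TOOL_TIMEOUT=120000              # give long-running tools more headroom
```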
Use the `#` prefix to generate and maintain your AGENTS.md documentation:

```bash
# Generate setup instructions
# How do I set up the development environment?

# Create testing documentation
# What are the testing procedures for this project?

# Document deployment process
# Explain the deployment pipeline and requirements
```

This mode automatically formats responses as structured documentation and appends them to your AGENTS.md file.
```bash
# Clone the repository
git clone https://github.com/shareAI-lab/Kode.git
cd Kode

# Build the image locally
docker build --no-cache -t kode .

# Run in your project directory
cd your-project
docker run -it --rm \
  -v $(pwd):/workspace \
  -v ~/.kode:/root/.kode \
  -v ~/.kode.json:/root/.kode.json \
  -w /workspace \
  kode
```

The Docker setup includes:

- Volume Mounts:
  - `$(pwd):/workspace` - Mounts your current project directory
  - `~/.kode:/root/.kode` - Preserves your Kode configuration directory between runs
  - `~/.kode.json:/root/.kode.json` - Preserves your Kode global configuration file between runs
- Working Directory: Set to `/workspace` inside the container
- Interactive Mode: Uses `-it` flags for interactive terminal access
- Cleanup: `--rm` flag removes the container after exit

Note: Kode uses both the `~/.kode` directory for additional data (like memory files) and the `~/.kode.json` file for global configuration.
The first time you run the Docker command, it will build the image. Subsequent runs will use the cached image for faster startup.
You can use the onboarding flow to set up a model, or run `/model`. If you don't see the models you want in the list, you can set them manually in `/config`. As long as you have an OpenAI-compatible endpoint, it should work.
- `/help` - Show available commands
- `/model` - Change AI model settings
- `/config` - Open configuration panel
- `/agents` - Manage subagents
- `/output-style` - Set the output style
- `/statusline` - Configure a custom status line command
- `/cost` - Show token usage and costs
- `/clear` - Clear conversation history
- `/init` - Initialize project context
- `/plugin` - Manage plugins/marketplaces (skills, commands)
Kode supports subagents (agent templates) for delegation and task orchestration.
- Agents are loaded from `.kode/agents` and `.claude/agents` (user + project), plus plugins/policy and `--agents`.
- Manage in the UI: `/agents` (creates new agents under `./.claude/agents` or `~/.claude/agents` by default).
- Run via mentions: `@run-agent-<agentType> ...`
- Run via tooling: `Task(subagent_type: "<agentType>", ...)`
- CLI flags: `--agents <json>` (inject agents for this run; see the sketch below), `--setting-sources user,project,local` (control which sources are loaded)
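A hypothetical `--agents` invocation; the JSON shape shown here is an assumption that mirrors the agent-file frontmatter fields below, so check docs/agents-system.md for the actual schema:

```bash
# Hypothetical payload shape: { "<agentType>": { description, tools, model } }
kode --agents '{"reviewer": {"description": "Review diffs", "tools": ["Read", "Grep"], "model": "inherit"}}'
```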
Minimal agent file example (`./.kode/agents/reviewer.md`):

```markdown
---
name: reviewer
description: "Review diffs for correctness, security, and simplicity"
tools: ["Read", "Grep"]
model: inherit
---

Be strict. Point out bugs and risky changes. Prefer small, targeted fixes.
```

Model field notes:

- Compatibility aliases: `inherit`, `opus`, `sonnet`, `haiku` (mapped to model pointers)
- Kode selectors (via `/model`): pointers (`main|task|compact|quick`), profile name, `modelName`, or `provider:modelName` (e.g. `openai:o3`)

Validate agent templates:

```bash
kode agents validate
```

See docs/agents-system.md.
Kode supports the Agent Skills open format for extending agent capabilities:
- Agent Skills format (`SKILL.md`) - see specification
- Marketplace compatibility (`.kode-plugin/marketplace.json`, legacy `.claude-plugin/marketplace.json`)
- Install from any repository using the `add-skill` CLI
Install skills from any git repository:
```bash
# Install from GitHub
npx add-skill vercel-labs/agent-skills -a kode

# Install to global directory
npx add-skill vercel-labs/agent-skills -a kode -g

# Install specific skills
npx add-skill vercel-labs/agent-skills -a kode -s pdf -s xlsx
```

```bash
# Add a marketplace (local path, GitHub owner/repo, or URL)
kode plugin marketplace add ./path/to/marketplace-repo
kode plugin marketplace add owner/repo
kode plugin marketplace list

# Install a plugin pack (installs skills/commands)
kode plugin install document-skills@anthropic-agent-skills --scope user

# Project-scoped install (writes to ./.kode/...)
kode plugin install document-skills@anthropic-agent-skills --scope project

# Disable/enable an installed plugin
kode plugin disable document-skills@anthropic-agent-skills --scope user
kode plugin enable document-skills@anthropic-agent-skills --scope user
```

Interactive equivalents:

```
/plugin marketplace add owner/repo
/plugin install document-skills@anthropic-agent-skills --scope user
```
- In interactive mode, run a skill as a slash command: `/pdf`, `/xlsx`, etc.
- Kode can also invoke skills automatically via the `Skill` tool when relevant.
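For instance, assuming the skill accepts a free-form request after the command (the prompt text is illustrative):

```
/pdf Extract the tables from quarterly-report.pdf
```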
Create `./.kode/skills/<skill-name>/SKILL.md` (project) or `~/.kode/skills/<skill-name>/SKILL.md` (user):

```markdown
---
name: my-skill
description: Describe what this skill does and when to use it.
allowed-tools: Read Bash(git:*) Bash(jq:*)
---

# Skill instructions
```

Naming rules:

- `name` must match the folder name
- Lowercase letters/numbers/hyphens only, 1–64 chars

Compatibility:

- Kode also discovers `.claude/skills` and `.claude/commands` for legacy compatibility.
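Putting it together, the project-scoped layout from the example above looks like:

```
.kode/
└── skills/
    └── my-skill/
        └── SKILL.md   # `name: my-skill` must match this folder name
```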
- Marketplace repo: publish a repo containing `.kode-plugin/marketplace.json` listing plugin packs and their `skills` directories (legacy `.claude-plugin/marketplace.json` is also supported).
- Plugin repo: for full plugins (beyond skills), include `.kode-plugin/plugin.json` at the plugin root and keep all paths relative (`./...`).
See docs/skills.md for a compact reference and examples.
Use output styles to switch system-prompt behavior.
- Select: `/output-style` (menu) or `/output-style <style>`
- Built-ins: `default`, `Explanatory`, `Learning`
- Stored per-project in `./.kode/settings.local.json` as `outputStyle` (legacy `.claude/settings.local.json` is supported)
- Custom styles: Markdown files under `output-styles/` in `.claude`/`.kode` user + project locations
- Plugins can provide styles under `output-styles/` (or manifest `outputStyles`); plugin styles are namespaced as `<plugin>:<style>`
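The per-project setting is a single key in the settings file; a minimal `./.kode/settings.local.json` (assuming no other settings are present) might look like:

```json
{
  "outputStyle": "Explanatory"
}
```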
See docs/output-styles.md.
Unlike single-model CLIs, Kode implements true multi-model collaboration, allowing you to fully leverage the unique strengths of different AI models.
We designed a unified ModelManager system that supports:
- Model Profiles: Each model has an independent configuration file containing API endpoints, authentication, context window size, cost parameters, etc.
- Model Pointers: Users can configure default models for different purposes via the `/model` command:
  - `main`: Default model for the main Agent
  - `task`: Default model for SubAgents
  - `compact`: Model used for automatic context compression when nearing the context window
  - `quick`: Fast model for simple operations and utilities
- Dynamic Model Switching: Support runtime model switching without restarting sessions, maintaining context continuity
You can export/import model profiles + pointers as a team-shareable YAML file. By default, exports do not include plaintext API keys (use env vars instead).
```bash
# Export to a file (or omit --output to print to stdout)
kode models export --output kode-models.yaml

# Import (merge by default)
kode models import kode-models.yaml

# Replace existing profiles instead of merging
kode models import --replace kode-models.yaml

# List configured profiles + pointers
kode models list
```

Example `kode-models.yaml`:

```yaml
version: 1
profiles:
  - name: OpenAI Main
    provider: openai
    modelName: gpt-4o
    maxTokens: 8192
    contextLength: 128000
    apiKey:
      fromEnv: OPENAI_API_KEY
pointers:
  main: gpt-4o
  task: gpt-4o
  compact: gpt-4o
  quick: gpt-4o
```

Our specially designed TaskTool (Architect tool) implements:
- Subagent Mechanism: Can launch multiple sub-agents to process tasks in parallel
- Model Parameter Passing: Users can specify which model SubAgents should use in their requests
- Default Model Configuration: SubAgents use the model configured by the `task` pointer by default
We specially designed the AskExpertModel tool:
- Expert Model Invocation: Allows temporarily calling specific expert models to solve difficult problems during conversations
- Model Isolation Execution: Expert model responses are processed independently without affecting the main conversation flow
- Knowledge Integration: Integrates expert model insights into the current task
- Option+M Quick Switch: Press Option+M in the input box to cycle the main conversation model
- `/model` Command: Use the `/model` command to configure and manage multiple model profiles and set default models for different purposes
- User Control: Users can specify specific models for task processing at any time
Architecture Design Phase
- Use the o3 or GPT-5 model to explore system architecture and formulate sharp, clear technical solutions
- These models excel at abstract thinking and system design
Solution Refinement Phase
- Use a Gemini model to explore production-environment design details in depth
- Leverage its deep practical-engineering experience and balanced reasoning capabilities
Code Implementation Phase
- Use the Qwen Coder, Kimi k2, GLM-4.5, or Claude Sonnet 4 model for the actual code writing
- These models perform strongly at code generation, file editing, and engineering implementation
- Support parallel processing of multiple coding tasks through subagents
Problem Solving
- When encountering complex problems, consult expert models such as o3, Claude Opus 4.1, or Grok 4
- Obtain deep technical insights and innovative solutions
```bash
# Example 1: Architecture Design
"Use o3 model to help me design a high-concurrency message queue system architecture"

# Example 2: Multi-Model Collaboration
"First use GPT-5 model to analyze the root cause of this performance issue, then use Claude Sonnet 4 model to write optimization code"

# Example 3: Parallel Task Processing
"Use Qwen Coder model as subagent to refactor these three modules simultaneously"

# Example 4: Expert Consultation
"This memory leak issue is tricky, ask Claude Opus 4.1 model separately for solutions"

# Example 5: Code Review
"Have Kimi k2 model review the code quality of this PR"

# Example 6: Complex Reasoning
"Use Grok 4 model to help me derive the time complexity of this algorithm"

# Example 7: Solution Design
"Have GLM-4.5 model design a microservice decomposition plan"
```

```jsonc
// Example of multi-model configuration support
{
  "modelProfiles": [
    { "name": "o3", "provider": "openai", "modelName": "o3", "apiKey": "...", "maxTokens": 1024, "contextLength": 128000, "isActive": true, "createdAt": 1710000000000 },
    { "name": "qwen", "provider": "alibaba", "modelName": "qwen-coder", "apiKey": "...", "maxTokens": 1024, "contextLength": 128000, "isActive": true, "createdAt": 1710000000001 }
  ],
  "modelPointers": {
    "main": "o3",          // Main conversation model
    "task": "qwen-coder",  // Sub-agent model
    "compact": "o3",       // Context compression model
    "quick": "o3"          // Quick operations model
  }
}
```

- Usage Statistics: Use the `/cost` command to view token usage and costs for each model
- Multi-Model Cost Comparison: Track usage costs of different models in real-time
- History Records: Save cost data for each session
- Context Inheritance: Maintain conversation continuity when switching models
- Context Window Adaptation: Automatically adjust based on different models' context window sizes
- Session State Preservation: Ensure information consistency during multi-model collaboration
- Maximized Efficiency: Each task is handled by the most suitable model
- Cost Optimization: Use lightweight models for simple tasks, powerful models for complex tasks
- Parallel Processing: Multiple models can work on different subtasks simultaneously
- Flexible Switching: Switch models based on task requirements without restarting sessions
- Leveraging Strengths: Combine advantages of different models for optimal overall results
| Feature | Kode | Single-model CLI |
|---|---|---|
| Number of Supported Models | Unlimited, configurable for any model | Only supports one model |
| Model Switching | ✅ Option+M quick switch | ❌ Requires session restart |
| Parallel Processing | ✅ Multiple SubAgents work in parallel | ❌ Single-threaded processing |
| Cost Tracking | ✅ Separate statistics for multiple models | ❌ Single model cost |
| Task Model Configuration | ✅ Different default models for different purposes | ❌ Same model for all tasks |
| Expert Consultation | ✅ AskExpertModel tool | ❌ Not supported |
This multi-model collaboration capability makes Kode a true AI Development Workbench, not just a single AI assistant.
Kode is built with modern tools and requires Bun for development.
```bash
# macOS/Linux
curl -fsSL https://bun.sh/install | bash

# Windows
powershell -c "irm bun.sh/install.ps1 | iex"
```

```bash
# Clone the repository
git clone https://github.com/shareAI-lab/kode.git
cd kode

# Install dependencies
bun install

# Run in development mode
bun run dev
```

Build the project:

```bash
bun run build
```

```bash
# Run tests
bun test

# Test the CLI
./cli.js --help
```

We welcome contributions! Please see our Contributing Guide for details.
Apache 2.0 License - see LICENSE for details.
- Some code from @dnakov's anonkode
- Some UI learned from gemini-cli
- Some system design learned from upstream agent CLIs
