AI workflows for the shell — no code required
iq brings multi-step autonomous AI workflows to your shell — no coding required. Describe what you want to achieve in simple YAML, and iq handles the logic, execution, state, and recovery for you.
- **No-code workflow design**: describe automations in simple, declarative YAML; no programming required.
- **Goal-driven workflows**: orchestrate multiple AI-powered steps into a coherent agentic workflow.
- **Chaining, routing, iteration, and state management**: supported natively through a flexible, rule-based language.
- **Native Model Context Protocol (MCP) integration and server mode**: connect and orchestrate external tools directly within your workflows, and optionally run a workflow as an MCP server.
- **Fail-safe execution**: recover from errors with built-in supervisor and fallback strategies.
- **Automatic caching**: reduce costs and speed up execution by caching LLM responses with intelligent invalidation.
- **Flexible I/O**: seamless input/output via stdin/stdout, files, directories, S3, SQS, and more.
- **Multiple LLM providers**: works with OpenAI, AWS Bedrock, and local models.
## macOS - Homebrew (Recommended)

```bash
brew tap fogfish/iq https://github.com/fogfish/iq
brew install iq
```

Upgrade to the latest version:

```bash
brew upgrade iq
```

## Direct binaries download (Linux, Windows)

Download the executable from the latest release and add it to your PATH.
The utility requires access to LLMs for execution. LLM access configuration and credentials are stored in `~/.iqrc`.
## AWS Bedrock (Recommended)
Requires an AWS account with access to the AWS Bedrock service.
```bash
iq config --bedrock
```

The default config uses the `global.anthropic.claude-sonnet-4-5-20250929-v1:0` inference profile. Use the `-m, --llm-id` flag to override the default model:

```bash
iq config --bedrock -m amazon.nova-pro-v1:0
```

## OpenAI
Requires an account on the OpenAI platform and an API key.
```bash
iq config --openai <api-key>
```

The default config uses the `gpt-5` model. Use the `-m, --llm-id` flag to override the default model:

```bash
iq config --openai <api-key> -m gpt-4o
```

## Local AI, on Your Computer
LM Studio is the recommended way to run LLMs locally. After installing LM Studio and the desired model, configure iq:
```bash
iq config --lmstudio
```

The default config uses the `gemma-3-27b-it` model. Use the `-m, --llm-id` flag to override the default model:

```bash
iq config --lmstudio -m gpt-oss
```

Let's build your first AI workflow with iq. Here's a minimal example that researches a topic and summarizes it using two AI agents:
```yaml
name: research
jobs:
  main:
    steps:
      - prompt: |
          Find three key facts about the topic:
          {{.input}}.
      - prompt: |
          Summarize the following facts clearly in 2–3 sentences:
          {{.input}}
```

Run the workflow from your shell, passing the topic you want to research via stdin:

```bash
echo "singularity" | iq agent -f research.yml
```

Enable caching to save time and costs on repeated runs:
```bash
# First run - executes workflow and caches results
echo "singularity" | iq agent -f research.yml --cache-dir .cache

# Second run - instant, uses cached results
echo "singularity" | iq agent -f research.yml --cache-dir .cache
```

Caching is automatic; no changes to your workflow YAML are needed. Cache entries are invalidated automatically when prompts change.
See the User Guide for details on workflow syntax.
```bash
# run a workflow
iq agent -f <yaml> FILE1, FILE2, ...

# draft prompt markdown and agent workflows
iq draft
iq draft agent
```

Use `iq help` for details.
## Input

| Option | Description |
|---|---|
| `-` (stdin) | Read input data from standard input |
| `FILE ...` | One or more input files (local or `s3://` paths) |
| `-I <dir>`, `--input-dir` | Process all files in a directory (local or `s3://` paths) |

### Input modifiers

| Option | Description |
|---|---|
| `--json` | Display output as formatted, colored JSON |
| `--merge` | Combine all input files into a single document before processing |

## Output

| Option | Description |
|---|---|
| `-` (stdout) | Write output data to standard output |
| `-o <file>`, `--file` | Write output to a single file (supports `s3://` paths) |
| `-O <dir>`, `--output-dir` | Write each result to a file in a directory or S3 bucket |
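These options compose. As a sketch (file names are hypothetical), the following run merges two local files into a single document, processes it once, and writes the result to one output file:

```bash
# --merge combines notes-a.txt and notes-b.txt into one document before the workflow runs;
# -o writes the single result to summary.txt
iq agent -f workflow.yml --merge notes-a.txt notes-b.txt -o summary.txt
```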
The `batch` command treats a mounted directory of files as a processing queue: it reads from an input directory, applies each file's content as input to the workflow, and writes the results to an output directory. This batch-oriented processing is ideal for transformation, summarization, or enhanced file processing at scale, with minimal setup and full traceability of inputs and outputs.
The command supports mounting an AWS S3 bucket; use the `s3://` prefix to direct the utility.
```bash
iq agent batch -f <prompt> -I s3://... -O s3://...
```

Processing a large number of files may require the ability to start, stop, and resume the utility reliably. To support this, use the `--mutable` flag, which removes each input file immediately after it has been successfully processed. This enables fault-tolerant, resumable execution by ensuring already-processed files are skipped on subsequent runs.
Use `--strict` to fail fast, terminating processing on the first error.
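Combining the two, a fault-tolerant, fail-fast batch run over hypothetical S3 paths might look like this:

```bash
# --mutable deletes each input from the inbox once processed, so a restart skips finished work;
# --strict aborts the whole run on the first error
iq agent batch -f workflow.yml -I s3://my-bucket/inbox -O s3://my-bucket/outbox --mutable --strict
```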
Process files in chunks:
```bash
# Split by sentences
iq agent -f workflow.yml --splitter sentence large-doc.txt

# Split by paragraphs
iq agent -f workflow.yml --splitter paragraph large-doc.txt

# Fixed-size chunks
iq agent -f workflow.yml --splitter chunk --splitter-chunk 2048 large-doc.txt
```

**Sentences**: a sentence boundary is defined as a punctuation mark (`.`, `!`, or `?`) followed by a whitespace character. The default punctuation marks can be overridden with the `--splitter-chars` flag.
**Paragraphs**: a paragraph is defined as a block of text separated by an empty line (essentially using `\n\n` as the delimiter). The default delimiter can be overridden with the `--splitter-chars` flag.
**Fixed chunks**: a fixed chunk has a defined size limit, which is extended to the end of the nearest sentence to prevent loss of context. The chunk size is configured with the `--splitter-chunk` flag, and `--splitter-chars` defines the punctuation marks, as in sentence splitting.
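For example (the value syntax for `--splitter-chars` is an assumption; check `iq help` for the authoritative form), custom sentence terminators might be supplied like this:

```bash
# Treat '.' and ';' as sentence terminators instead of the defaults (assumed value syntax)
iq agent -f workflow.yml --splitter sentence --splitter-chars ".;" large-doc.txt
```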
TBD.
Expose your workflow as a Model Context Protocol server:
```bash
iq agent serve -a workflow.yml
```

Requires a workflow with `name` and schemas defined.
Connect external tools to your workflows:

```markdown
---
servers:
  - type: stdio
    name: filesystem
    command: ["npx", "-y", "@modelcontextprotocol/server-filesystem", "./"]
---
Using the filesystem tool, read the file {{.input}} and summarize it.
```

The examples/ directory contains complete workflow examples.
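As a sketch of running the prompt above, assuming it is saved as a prompt file (`summarize.md` is a hypothetical name) and that `iq agent -f` accepts prompt markdown the same way `iq agent batch -f <prompt>` does:

```bash
# The path arrives as {{.input}}; the agent reads it via the filesystem MCP tool
echo "README.md" | iq agent -f summarize.md
```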
- User Guide — Complete workflow design guide
iq is MIT licensed and accepts contributions via GitHub pull requests:
- Fork it
- Create your feature branch (`git checkout -b my-new-feature`)
- Commit your changes (`git commit -am 'Added some feature'`)
- Push to the branch (`git push origin my-new-feature`)
- Create a new Pull Request