The "Weekend Demo" vs. "Production Reality" in AI Development.

We've all been there. You hack together an AI agent on a Saturday. You use Vercel's AI SDK, throw in some LangChain, and it works perfectly on your localhost. It answers quickly. It handles errors.

Then you push to production. Suddenly, reality hits:

1. Timeouts: Your sophisticated reasoning chain takes 75 seconds. Your serverless function kills it at 60. Hard stop.
2. Flakiness: The OpenAI API hiccups. Your script crashes. The user has to restart the entire process.
3. Concurrency: 50 users try it at once. Your rate limits explode. Jobs get dropped.

This is the "Production Gap". Building reliable AI agents requires more than just prompt engineering. It requires reliable infrastructure.

At Trigger, we built the infrastructure specifically for this gap. We call it Durable Execution.

- No Timeouts: Run tasks for hours or days. Perfect for deep research agents.
- Checkpointing: If an API call fails, we retry just that step. We don't restart the whole run.
- Queueing: Heavy load? We queue the jobs and process them as capacity allows. Nothing gets dropped.

Stop trying to shoehorn long-running AI processes into short-lived serverless functions. Use infrastructure designed for the job.

Learn more 🧵
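As a rough illustration of the retry-per-step idea, here is a minimal sketch of a durable task using Trigger.dev's v3-style SDK. The task id, payload shape, and the `callModel` helper are illustrative assumptions, not a real integration:

```typescript
import { task } from "@trigger.dev/sdk/v3";

// Hypothetical stand-in for an LLM call; replace with your provider's client.
async function callModel(query: string): Promise<string> {
  return `report for ${query}`;
}

// Illustrative research-agent task. There is no serverless timeout here:
// a 75-second (or 75-minute) run is fine.
export const deepResearch = task({
  id: "deep-research",
  retry: {
    maxAttempts: 5,        // a failed attempt is retried; the run isn't restarted
    factor: 2,             // exponential backoff between attempts
    minTimeoutInMs: 1_000,
    maxTimeoutInMs: 30_000,
  },
  run: async (payload: { query: string }) => {
    // If the model call throws (e.g. an API hiccup), only this step
    // is retried with backoff -- the user never restarts the process.
    const report = await callModel(payload.query);
    return { report };
  },
});
```

Under heavy load, runs of this task queue up and are processed as capacity allows rather than being dropped.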
About us
Trigger.dev is the platform for building AI workflows in TypeScript. Long-running tasks with retries, queues, observability, and elastic scaling.
- Website: https://trigger.dev
- Industry: Software Development
- Company size: 2-10 employees
- Headquarters: London
- Type: Privately Held
- Founded: 2022
Locations
- Primary: 42-46 Princelet St, London, GB
Updates
Tierly uses Trigger to orchestrate 10+ AI models for competitive pricing analysis. Each analysis involves dozens of scraping tasks, human review gates, and real-time progress updates. The workflow analyzes SaaS pricing pages, discovers competitors, and generates recommendations. Workflows take 5-15 minutes with multiple AI calls.

Their initial sync API routes hit timeouts and rate limits, and had zero visibility into failures. Moving to Trigger fixed all of it:

- Two chains run in parallel via batch triggers, which cut analysis time in half.
- Wait tokens pause execution for human review, with no webhooks needed.
- A shared queue keeps Firecrawl requests under the rate limit across all concurrent analyses.
- Progressive model escalation: gpt-4o-mini → gpt-4o → gpt-4o + markdown fallback. Trigger handles retries automatically.

The results:
→ Reliable 10+ AI call workflows
→ Human review gates without webhook complexity
→ Automatic rate limiting
→ Full visibility into every step
→ Workflows in TypeScript alongside their Next.js app

Read the full story: https://lnkd.in/gV5eyduq
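The progressive model escalation pattern described above is simple to sketch in plain TypeScript: try the cheapest model first and only fall through to a more capable one on failure. The model names and stub calls below are illustrative, not Tierly's actual code:

```typescript
// A model attempt: a label plus an async call that may throw.
type ModelCall = () => Promise<string>;

// Try each model in order; return the first successful result.
// If every model fails, rethrow the last error.
export async function escalate(
  attempts: [label: string, call: ModelCall][]
): Promise<string> {
  let lastError: unknown;
  for (const [, call] of attempts) {
    try {
      return await call(); // cheapest model that succeeds wins
    } catch (err) {
      lastError = err; // escalate to the next, more capable model
    }
  }
  throw lastError;
}
```

In a Trigger task, each attempt could also be its own step so the platform's automatic retries apply per model call rather than to the whole chain.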
We've just shipped 5 Trigger Skills. Install them and your AI coding assistant will automatically gain in-depth @triggerdotdev knowledge.

✦ trigger-setup
✦ trigger-agents
✦ trigger-tasks
✦ trigger-realtime
✦ trigger-config

But what are skills? Skills are portable instruction sets that teach AI coding assistants how to use a framework correctly: patterns, anti-patterns, and examples it follows automatically. They use the Agent Skills standard, and you can install them with Vercel's open-source CLI.

Our available skills ↓

✦ trigger-setup: Go from zero to running: SDK installation, project init, directory structure, and first task.
✦ trigger-agents: Build AI agent workflows: prompt chaining, parallel tool calling, routing between models, evaluator-optimizer loops, and human-in-the-loop approval gates.
✦ trigger-tasks: Write background jobs with retries, queues, concurrency control, cron scheduling, and batch triggering, all with the correct patterns from the first prompt.
✦ trigger-realtime: Add live progress indicators, streaming AI responses, and real-time status updates to your frontend using React hooks.
✦ trigger-config: Set up build extensions for Prisma, FFmpeg, Playwright, Python, and custom deploy configurations in trigger.config.ts.

Without skills, AI assistants can hallucinate APIs that don't exist, use deprecated import paths, forget to export tasks, and wrap triggerAndWait in Promise.all (which breaks retries entirely). Skills give your assistant the actual patterns so it writes correct code the very first time.

Install our skills now: npx skills add triggerdotdev/skills
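The Promise.all anti-pattern mentioned above is worth seeing concretely. A hedged sketch, assuming Trigger.dev's v3-style SDK; the task ids, payloads, and scraping logic are illustrative:

```typescript
import { task } from "@trigger.dev/sdk/v3";

// Hypothetical child task: fetch a page and return its body.
export const scrapePage = task({
  id: "scrape-page",
  run: async (payload: { url: string }) => {
    const res = await fetch(payload.url);
    return { body: await res.text() };
  },
});

export const scrapeAll = task({
  id: "scrape-all",
  run: async (payload: { urls: string[] }) => {
    // Anti-pattern skills steer assistants away from:
    //   await Promise.all(
    //     payload.urls.map((url) => scrapePage.triggerAndWait({ url }))
    //   );
    //
    // Correct pattern: one batch call, so each child run gets its own
    // checkpointing and retries.
    const results = await scrapePage.batchTriggerAndWait(
      payload.urls.map((url) => ({ payload: { url } }))
    );
    return results;
  },
});
```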
Autonomous AI agents are great, but without control they can cause real damage.

There is a lot of hype about agents that go off and do everything on their own. But in an enterprise environment, "autonomy" often looks a lot like "risk".

- You don't want an agent automatically refunding charges without oversight.
- You don't want an agent publishing blog posts without an editor's review.
- You don't want an agent committing code to main without a human check.

The missing piece in most agent frameworks is the "Human in the Loop". But implementing this technically is surprisingly hard. You need to:

1. Pause the execution state (which might be deep in a call stack).
2. Persist that state to a database.
3. Create a secure way for a human to interact.
4. Resume the state days later when the human clicks "Approve".

We built this primitive directly into Trigger. We call it "Waitpoints". With a single line of code (`wait.forToken`), you can pause your AI agent indefinitely.

It enables workflows like: Research Agent drafts report → Emails Manager → Manager clicks 'Edit' or 'Approve' → Agent publishes.

It bridges the gap between AI speed and human judgment. Learn more 🧵
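The approval workflow above can be sketched with wait tokens. This is a hedged example against Trigger.dev's SDK; the task id, payload, output shape, and the commented-out email helper are illustrative assumptions:

```typescript
import { task, wait } from "@trigger.dev/sdk";

// Hypothetical approval gate for the "draft -> review -> publish" flow.
export const publishReport = task({
  id: "publish-report",
  run: async (payload: { draft: string; managerEmail: string }) => {
    // Create a token the human will complete; the run may wait days.
    const token = await wait.createToken({ timeout: "7d" });

    // e.g. email the manager a link containing token.id (helper not shown):
    // await sendApprovalEmail(payload.managerEmail, token.id);

    // Pause here indefinitely -- state is persisted, no compute runs.
    const result = await wait.forToken<{ approved: boolean; edits?: string }>(
      token
    );

    if (result.ok && result.output.approved) {
      // publish(payload.draft) ...
      return { published: true };
    }
    return { published: false, reason: "rejected or timed out" };
  },
});
```

When the manager clicks "Approve", your backend resumes the run by completing the token (in the current SDK that's a `wait.completeToken(token.id, output)` call).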
New in the dashboard → A Limits page

🎛️ See rate limits, quotas, and plan features
📈 Track API token usage in real-time
⭐ Upgrade if you need more of anything

https://lnkd.in/eSPaQM93
Run Python scripts from your TypeScript background tasks using our Python build extension. No separate Python server. No complex orchestration. Just add your .py files and go.

→ The problem: Python has the best ML/data libraries. But orchestrating Python workers (Celery, RQ) means managing Redis, separate deployments, and different observability.

With Trigger, dependencies are installed at build time and scripts are bundled automatically. There are two execution modes:
- Inline code
- Run a script file with args

Both return stdout, stderr, and the exit code. Streaming output is also supported. The value: mix ecosystems in one task. No IPC. No message queues. Just sequential code.

→ vs traditional task queues: No Redis or message broker to configure. No separate worker pool to deploy and scale. Works natively with your TypeScript codebase.

→ vs serverless functions: Not a separate deployment target. Tasks run with full access to your application context, types, and shared code.

→ vs running scripts in production: We handle container orchestration, dependency management, and dev/prod parity automatically.

Check out our practical guides in our docs:
- Web scraping with Crawl4AI + Playwright
- Image processing with Pillow → S3
- PDF form extraction with PyMuPDF
- Doc conversion with Microsoft's MarkItDown

All TypeScript orchestration with Python execution. https://lnkd.in/eFH_zHmu
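A rough sketch of how the two pieces fit together, assuming the `@trigger.dev/python` package; the script path, glob, and extraction script are illustrative:

```typescript
// trigger.config.ts -- register the Python build extension so your .py
// files are bundled and requirements installed at build time.
import { defineConfig } from "@trigger.dev/sdk/v3";
import { pythonExtension } from "@trigger.dev/python/extension";

export default defineConfig({
  project: "<your-project-ref>",
  build: {
    extensions: [
      pythonExtension({
        scripts: ["./python/**/*.py"],          // bundled automatically
        requirementsFile: "./requirements.txt", // deps installed at build time
      }),
    ],
  },
});

// trigger/extract.ts -- call a bundled script from a TypeScript task.
import { task } from "@trigger.dev/sdk/v3";
import { python } from "@trigger.dev/python";

export const extractPdf = task({
  id: "extract-pdf",
  run: async (payload: { url: string }) => {
    // Runs ./python/extract.py with the URL as an argument and
    // returns stdout/stderr/exit code -- no IPC or message queue.
    const result = await python.runScript("./python/extract.py", [payload.url]);
    return JSON.parse(result.stdout);
  },
});
```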
Today we shipped a small but long overdue improvement to our date picker UI with the help of shadcn. https://lnkd.in/eSMe222u
Enter a company name. The smart spreadsheet does the research. Each column is a separate task running in parallel, and Trigger Realtime updates the cells live.

The magic behind it: Exa.ai, an AI-native search engine. Unlike Google, Exa is built for LLMs: semantic search that actually understands what you're looking for and returns clean, structured content ready for extraction.

4 parallel AI-powered extraction tasks retrieve:
- Basic info (website, description)
- Industry classification
- Employee headcount
- Funding details

Each task fans out to Exa, Claude extracts structured fields, results stream back with sources, and the final record is persisted to Supabase.

- batch.triggerByTaskAndWait runs all 4 in parallel
- Realtime streams results as they complete
- metadata.parent updates the UI from child tasks

Fork it and build your own enrichment tool: https://lnkd.in/e-GGgTHh

Stack: Next.js 16 + Supabase + Exa + Claude + Trigger
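The fan-out shape looks roughly like this. A hedged sketch against Trigger.dev's SDK; the task ids, payloads, placeholder return values, and metadata key are illustrative (the real project lives at the link above):

```typescript
import { batch, metadata, task } from "@trigger.dev/sdk/v3";

// Two of the four hypothetical "column" tasks.
export const basicInfo = task({
  id: "basic-info",
  run: async (payload: { company: string }) => {
    // ...search Exa, extract fields with Claude (omitted)...
    // Push progress up to the parent run, which the UI watches via Realtime.
    metadata.parent.set("basicInfoStatus", "done");
    return { website: "example.com", description: "placeholder" };
  },
});

export const industry = task({
  id: "industry",
  run: async (payload: { company: string }) => {
    metadata.parent.set("industryStatus", "done");
    return { industry: "placeholder" };
  },
});

// The parent triggers every column task in parallel and waits for all.
export const enrichCompany = task({
  id: "enrich-company",
  run: async (payload: { company: string }) => {
    const results = await batch.triggerByTaskAndWait([
      { task: basicInfo, payload },
      { task: industry, payload },
      // ...headcount and funding tasks...
    ]);
    // ...persist the merged record to Supabase (omitted)...
    return results;
  },
});
```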
In case you missed it, here are some of the things we shipped in December:

→ Deploy using Trigger's own build servers
We've released native builds for deployments as part of our ongoing effort to improve deployment reliability. Native builds use our own build server infrastructure instead of relying on third parties.

→ Batch trigger improvements
We've significantly improved our batch trigger system with support for larger payloads, streaming ingestion, and increased batch sizes.

→ Faster cold starts with zstd compression
We've enabled zstd compression by default for deployment images, improving cold start times when images need to be pulled to worker nodes.

→ Debounce task runs
Consolidate multiple triggers into a single execution by debouncing task runs with a unique key and delay window.

→ Log in with Google
You can now create an account or log in to Trigger with your Google account.

→ Bun runtime upgraded to v1.3.3
We've upgraded our Bun deployment runtime from v1.2.20 to v1.3.3, giving you access to the latest improvements in JavaScript execution speed and Bun's evolving ecosystem.

→ Claude Agent SDK guide and examples
We've added a guide and examples showing how to pair the Claude Agent SDK with Trigger to run long-running, multi-step Claude agents safely with durable execution, retries, streaming, and full observability. https://lnkd.in/eWZbZeiM

To see everything we've shipped, check out our changelog: https://lnkd.in/deDJcnse
We started the year as a background jobs platform. We ended it as the best way to add AI agents and workflows to your apps. Here are some of our 2025 highlights and future plans ↓

- v4 shipped with a completely rebuilt run engine. Warm starts are now 100-300ms instead of cold-starting every time. Waitpoints let you pause a run mid-execution to wait for a webhook callback or human approval, without paying for compute while you wait.
- We built an MCP server. AI tools like Cursor and Claude can trigger tasks, inspect runs, and deploy code. We also added a custom agent rules file so AI-generated task code follows patterns that actually work.
- Realtime Streams v2 fixed most of the pain points: no more chunk limits, auto-reconnects when connections drop, and 28-day retention. Type-safe end-to-end streaming straight to your apps.
- Our infrastructure upgrades included:
→ GitHub integration that deploys on push and spins up preview branches per PR
→ Multi-region support
→ Static IPs for whitelisting
- We made self-hosting the platform much easier using Docker and Kubernetes, with new self-hosting guides for each. We also crossed 13k GitHub stars and now have 4k+ members in our Discord server. We really appreciate all of the support from our open source community!
- Other shipped features include Python support, Prisma 6/7, the Bun runtime, native builds, and debounced triggers, with ~70 notable changelog entries added this year → https://lnkd.in/deDJcnse
- We're now SOC 2 Type II certified, demonstrating that our security controls, policies, and procedures meet the highest industry standards for protecting customer data.
- We raised a Series A, led by Standard Capital, with participation from Y Combinator and other world-class investors.
- We've welcomed 3 new team members and are hiring across multiple roles in the UK and Europe (apply on our website!)

So, what's next?
You can expect significant improvements to the existing platform, starting with more advanced observability features and faster run starts using MicroVMs. We'll also ship more integrations with common third-party services so you can get started faster or get your data out of Trigger.dev.

If you're building anything that requires background tasks, long-running processes, or AI agents and workflows, get started with Trigger for free: https://trigger.dev