Behind the scenes of Vibe Coding Agents: Code Sandboxes explained (E2B as reference)
Recently, I started digging into the infrastructure behind Vibe Coding Agents and wanted to share what I learned about Code Sandboxes.
Big realization:
Docker and Lambda weren’t designed for how AI thinks.
AI agents need safe, stateful, pausable environments — not stateless containers or short-lived functions.
Here’s what’s actually happening under the hood 👇
(and why users feel the difference)
1️⃣ Firecracker > Docker
Instead of containers, these platforms use microVMs (Firecracker).
• Hardware-level isolation (each sandbox is its own KVM microVM) → safe for untrusted AI-generated code
• Sub-200ms startup via pre-warmed VM pools
👉 This is why AI coding tools feel responsive, not “run → wait → reload”.
You type → code runs → preview updates instantly.
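The pool pattern behind that responsiveness can be sketched in a few lines. This is a toy simulation, not E2B's actual implementation — `WarmPool` and the simulated boot cost are illustrative — but the core idea holds: pay the boot latency ahead of demand, so handing a sandbox to a user is just a queue pop.

```python
import queue
import threading
import time


class WarmPool:
    """Toy sketch of a pre-warmed sandbox pool (illustrative names).

    Boot cost is paid before anyone asks, so acquiring a sandbox
    is a queue pop instead of a cold start.
    """

    def __init__(self, size: int, boot_time_s: float = 0.15):
        self._boot_time_s = boot_time_s  # stand-in for microVM boot latency
        self._ready: queue.Queue = queue.Queue()
        self._counter = size - 1
        for i in range(size):
            self._boot(f"vm-{i}")  # fill the pool upfront

    def _boot(self, vm_id: str) -> None:
        time.sleep(self._boot_time_s)  # simulated Firecracker boot
        self._ready.put(vm_id)

    def acquire(self) -> str:
        vm = self._ready.get_nowait()  # instant: the boot already happened
        self._counter += 1
        # Replenish in the background so the pool stays warm.
        threading.Thread(
            target=self._boot, args=(f"vm-{self._counter}",), daemon=True
        ).start()
        return vm


pool = WarmPool(size=4, boot_time_s=0.15)
t0 = time.perf_counter()
vm = pool.acquire()
print(f"acquired {vm} in {(time.perf_counter() - t0) * 1000:.2f}ms")
```

The user-facing latency is the queue pop, not the boot — which is why the acquire above reports microseconds even though each simulated boot costs 150ms.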
2️⃣ Serverless with RAM (Snapshotting)
They don’t just stop code — they snapshot the entire running machine (CPU state + RAM).
• Python objects, variables, dev servers stay in memory
• Resume hours later exactly where you left off
👉 You can leave a vibe-coding or data analysis session, come back later, and everything is still alive.
No reloading data. No re-running setup.
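At the object level, the effect is like serializing and restoring live state. The sketch below uses Python's `pickle` as a loose analogy only — real platforms snapshot below the language runtime, capturing the whole VM's RAM and CPU state, so nothing has to be re-imported or re-loaded at all. The `session` contents are made up for illustration.

```python
import pickle

# A stand-in for a live session: objects an agent built up over a chat.
session = {
    "dataframe_rows": list(range(1_000)),  # pretend: a loaded dataset
    "model_params": {"lr": 0.01, "layers": 3},
    "next_step": "plot results",
}

# "Pause": capture the full state as bytes. A microVM snapshot does this
# for the whole machine (RAM + CPU registers), not just Python objects.
snapshot = pickle.dumps(session)

del session  # the original process can go away entirely

# "Resume" hours later: restore and continue exactly where you left off.
restored = pickle.loads(snapshot)
print(f"resumed with {len(restored['dataframe_rows'])} rows in memory")
```

The key difference from re-running a script: the restore is proportional to state size, not to however long the original computation took.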
3️⃣ Copy-on-Write Filesystems
No Docker image pull per run.
• Base VM images already exist on the host
• New sandboxes only store what changes
👉 Heavy environments (10GB+ Python/Node stacks) start in milliseconds.
No “installing dependencies…” anxiety.
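A toy model of the overlay idea (roughly what Linux's overlayfs and CoW block devices do at the storage layer): every sandbox shares one read-only base, and only its own writes consume new space. All names and paths here are illustrative.

```python
from types import MappingProxyType

# The shared base image: a read-only layer every sandbox sees.
base_image = MappingProxyType({
    "/usr/lib/python3.11": "10GB of stdlib + packages",
    "/usr/bin/node": "node binary",
})


class CowFS:
    """Toy copy-on-write filesystem: reads fall through to the shared
    base layer, writes land in a small per-sandbox upper layer."""

    def __init__(self, base):
        self._base = base
        self._upper = {}  # only this sandbox's changes live here

    def read(self, path):
        return self._upper.get(path, self._base.get(path))

    def write(self, path, data):
        self._upper[path] = data  # the base layer is never touched

    @property
    def private_entries(self):
        return len(self._upper)


sandbox_a = CowFS(base_image)
sandbox_b = CowFS(base_image)
sandbox_a.write("/home/user/app.py", "print('hi')")

print(sandbox_a.read("/usr/bin/node"))      # shared with B, never copied
print(sandbox_b.read("/home/user/app.py"))  # None: isolated from A's writes
print(sandbox_a.private_entries)            # 1: only the new file is stored
```

Starting a sandbox is just creating an empty upper layer — which is why a 10GB base image adds nothing to startup time.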
Why this category exists (real use cases)
This infrastructure unlocks things that were previously painful or impossible:
• Vibe Coding IDEs → Cursor / Replit-style agents that write + run code live
• AI Data Analysts → Keep large datasets in memory across chat turns
• LLM Evals & Testing → Safely run thousands of generated programs in parallel
• Agent Platforms → Where an agent owns a long-lived computer, not just tools
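As a rough illustration of the evals use case: fan generated snippets out to isolated workers and collect results. Here a Python subprocess in isolated mode (`-I`) stands in for a real microVM sandbox, which gives far stronger guarantees; `run_sandboxed` and the sample snippets are hypothetical.

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

# Pretend these came from an LLM. A real platform runs each in its own
# microVM; a subprocess is a much weaker stand-in, for illustration only.
generated_programs = [
    "print(sum(range(10)))",       # -> 45
    "print('hello'[::-1])",        # -> olleh
    "raise RuntimeError('bug')",   # failures are contained, not fatal
]


def run_sandboxed(code: str, timeout_s: float = 10.0) -> str:
    # -I: isolated mode (ignores env vars and user site-packages).
    proc = subprocess.run(
        [sys.executable, "-I", "-c", code],
        capture_output=True, text=True, timeout=timeout_s,
    )
    return proc.stdout.strip() or f"exit={proc.returncode}"


with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(run_sandboxed, generated_programs))
    for code, result in zip(generated_programs, results):
        print(f"{code[:28]!r} -> {result}")
```

One crashing or hostile program only takes down its own worker — the same containment property the microVM version gives you, with hardware isolation instead of process isolation.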
Market landscape (rough buckets)
• E2B → Developer-first agent sandboxes
• Modal → Serverless compute + sandboxes for AI workloads
• Replit → Vertically integrated (IDE + runtime)
• Docker / Lambda → Excellent infra, wrong abstraction for agents
Special thanks to Tomáš Beran and Tereza Tizkova for supporting this post