Akka InfoQ: Agentic AI Design Patterns
May 1, 2025
Founder of Amorphous Data
Today’s agenda
Agentic is real, but…there is a lot to learn
Visit akka.io
Agents and agentic systems are distributed systems, powered by AI
…that must deliver reliable outcomes
…while depending upon unreliable LLMs.
AI Agency
Capacity to make meaning from your environment

Agency spans a spectrum along several axes: static → adaptive, reactive → proactive, tasks → goals, supervised → autonomous, traded off against economic productivity and cost.

“A big gap exists between current LLM-based assistants and full-fledged AI agents, but this gap will close as we learn how to build, govern and trust agentic AI solutions.”
–Gartner
A paradigm shift to AI-fueled app ecosystems
AI agents and apps become part of a symbiotic existence

By 2028, 33% of enterprise software applications will include agentic AI, up from less than 1% in 2024.
(Gartner, TSP 2025 Trends: Agentic AI — The Evolution of Experience, 24 February 2025)

The app ecosystem combines cloud-native applications with agentic AI services:
➔ Enhanced User Experience: AI agents personalize interactions to increase satisfaction
➔ Operational Efficiency: AI agents automate routine tasks to allow humans to focus on strategic initiatives
➔ Scalability: AI-driven SaaS services adapt to business needs without proportional increases in cost
LLM-powered app services are intelligent
Models can be prompted to perform a range of user & system tasks

An LLM client sends a prompt and receives a chunked, streaming response. LLMs are stateless, long-running, computationally intensive resources that can analyze, reason, and plan. LLM automation varies by data type and by SaaS app use cases and behavior.
➔ routing: an LLM router (classifier) classifies a task and routes it to an LLM specialist agent
   e.g. classify this support call as either sales or technical
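The routing pattern can be sketched in a few lines. This is a minimal illustration, not Akka's API: `classify_call` stands in for a real LLM classifier invocation, and the specialist handlers are hypothetical stubs.

```python
# Hypothetical sketch of the routing pattern. classify_call stubs an LLM
# classifier; a real system would prompt a model with the transcript and
# the allowed route labels.

def classify_call(transcript: str) -> str:
    """Stub classifier: returns a route label for the task."""
    # Real prompt: "Classify this support call as either 'sales' or 'technical'."
    return "sales" if "pricing" in transcript.lower() else "technical"

# Each specialist would itself be an LLM agent tuned for its task.
SPECIALISTS = {
    "sales": lambda t: f"[sales agent] handling: {t}",
    "technical": lambda t: f"[technical agent] handling: {t}",
}

def route(transcript: str) -> str:
    label = classify_call(transcript)      # 1. classify the incoming task
    return SPECIALISTS[label](transcript)  # 2. dispatch to the specialist agent
```

The dispatch table keeps specialist agents independent, so new routes can be added without touching the classifier.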
➔ parallelization: LLM subtasks can be divided for speed or run multiple times, with outputs combined by voting
   e.g. execute security tests from different POVs, with success voting
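A sketch of the parallelization pattern with vote-based aggregation. `run_security_test` is a hypothetical stub for one LLM-driven run; only the fan-out and majority vote are the point here.

```python
# Parallelization sketch: fan subtasks out across a thread pool, then
# aggregate the independent verdicts with a majority vote.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def run_security_test(pov: str, target: str) -> str:
    """Stub for one LLM-driven test run from a given point of view."""
    return "pass"  # a real run would return the model's verdict

def parallel_vote(target: str, povs: list[str]) -> str:
    with ThreadPoolExecutor() as pool:
        verdicts = list(pool.map(lambda p: run_security_test(p, target), povs))
    # Majority vote across the parallel runs reduces single-run randomness.
    return Counter(verdicts).most_common(1)[0][0]
```

Voting over multiple runs trades extra inference cost for lower variance in the final answer.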
➔ orchestrator: an orchestrator LLM breaks down tasks not known in advance; a synthesizer agent combines the task agents' outputs
   e.g. gathering information from targets identified by the orchestrator LLM
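The orchestrator pattern differs from routing in that the subtasks are discovered at runtime. A minimal sketch, with `orchestrate`, `worker`, and `synthesize` all standing in for LLM calls:

```python
# Orchestrator sketch: the subtask list is produced at runtime, not
# hard-coded into the workflow.

def orchestrate(task: str) -> list[str]:
    """Stub: an orchestrator LLM would decompose the task dynamically."""
    return [f"gather info on {t}" for t in ("target-a", "target-b")]

def worker(subtask: str) -> str:
    """Stub task agent that executes one subtask."""
    return f"result({subtask})"

def synthesize(results: list[str]) -> str:
    """Stub synthesizer agent that combines worker outputs."""
    return "; ".join(results)

def run(task: str) -> str:
    subtasks = orchestrate(task)  # subtasks are not known in advance
    return synthesize([worker(s) for s in subtasks])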
➔ evaluator-optimizer: one LLM (the generator) produces a solution while another (the evaluator) provides feedback until the response is accepted
   e.g. a translation LLM that has nuance checking from an evaluator LLM
➔ agent loop: create and execute a complex plan while staying “grounded” with feedback; the agent LLM loops, taking actions in its environment until a stop condition is met
   e.g. create a travel itinerary and book all reservations for a vacation
Multi-agent systems are orchestrated
Traceable, auditable, debuggable, with point-in-time recovery

Agentic systems are workflows: reliable execution of AI tasks with visibility into request / response data, built-in retries, and error compensation.

➔ sequential: 1. Pick locale → 2. Share dates → 3. Travel plan
➔ parallel: flight search and hotel search run concurrently; when both are done, build the itinerary
➔ human-in-the-loop: a proposed itinerary is monitored while a human adjusts dates, budget, and plan
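A minimal sketch of a sequential workflow with built-in retries and a step log for request/response visibility. The names (`with_retries`, `Flaky`, `TripWorkflow`) are illustrative, not Akka's actual workflow API.

```python
# Workflow sketch: each step is retried on transient failure and its
# result is logged, giving traceability and a basis for compensation.

def with_retries(step, attempts: int = 3):
    """Re-run a step on failure, up to `attempts` times."""
    for i in range(attempts):
        try:
            return step()
        except RuntimeError:
            if i == attempts - 1:
                raise  # exhausted: surface for error compensation

class Flaky:
    """Simulates a transient failure on the first call (e.g. an LLM timeout)."""
    def __init__(self, result: str):
        self.result, self.calls = result, 0
    def __call__(self):
        self.calls += 1
        if self.calls == 1:
            raise RuntimeError("transient failure")
        return self.result

class TripWorkflow:
    def __init__(self):
        self.log = []  # request/response visibility for audit and debugging
    def step(self, name, fn):
        result = with_retries(fn)
        self.log.append((name, result))
        return result
    def run(self):
        self.step("pick locale", lambda: "Bahamas")
        self.step("share dates", lambda: "June 1-8")
        return self.step("travel plan", Flaky("itinerary built"))
```

Persisting the step log durably is what enables the point-in-time recovery the slide mentions.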
Single-agent enrichment loop
Prompt → retrieve → enrich → repeat is a repetitive cycle & pattern

The agent sends the prompt to the LLM, backed by memory (vector DB, context DB) and tools (MCP, APIs, functions, or programs):
➔ the LLM responds, or requests a tool call
➔ call the tool to take action
➔ add the results to the prompt and save the response
➔ repeat, if multiple tools are called
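The enrichment loop can be sketched as a bounded tool-calling cycle. The `llm` function, the `get_weather` tool, and the prompt format are all hypothetical stand-ins; the shape of the loop is the pattern.

```python
# Enrichment-loop sketch: keep calling the LLM, executing any tool it
# requests and folding the results back into the prompt, until it
# returns a final answer.

def llm(prompt: str) -> dict:
    """Stub LLM: either requests a tool call or returns a final answer."""
    if "weather:" not in prompt:
        return {"tool": "get_weather", "args": {"place": "Bahamas"}}
    return {"answer": "Pack rain gear."}

# Tool registry: in practice these would be MCP servers, APIs, or programs.
TOOLS = {"get_weather": lambda place: "storm alert"}

def enrichment_loop(prompt: str, max_steps: int = 5) -> str:
    for _ in range(max_steps):
        reply = llm(prompt)
        if "answer" in reply:
            return reply["answer"]                      # no more tools needed
        result = TOOLS[reply["tool"]](**reply["args"])  # call tool to take action
        prompt += f"\nweather: {result}"                # add results to prompt
    raise RuntimeError("enrichment loop did not converge")
```

The `max_steps` bound guards against an LLM that keeps requesting tools indefinitely.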
LLMs are stateless. Context is assembled.
Agentic services augment prompts with data from many sources.

e.g. an event stream continually updates the context: “There is a storm alert for the Bahamas next week.”
Agentic systems are distributed systems
Architectural techniques and practices required for scale and resilience

LLM client ↔ LLM (prompt in, chunked response out):
➔ Async, non-blocking invocation
➔ Event-based, streaming responses
➔ Backpressure

Agent ↔ tools, APIs, vector DB, memory, humans:
➔ Event-driven architecture
➔ Human-in-the-loop interaction
➔ Streaming real-time ingest
➔ Retries, circuit breakers, timeouts
➔ Memory & tool integration
➔ CQRS

Agentic system (multi-agent, over a multi-agent protocol):
➔ Replication and failover
➔ Durable workflows
➔ Distributed tracing
➔ Discovery & mesh networking
➔ Multi-agent protocols: A2A, BeeAI
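Of the resilience techniques above, the circuit breaker is the least obvious; a minimal sketch follows. The class and its thresholds are illustrative, not a production implementation.

```python
# Circuit-breaker sketch: after max_failures consecutive failures the
# breaker "opens" and rejects calls immediately, protecting a struggling
# LLM endpoint; after reset_after seconds it allows a probe call.
import time

class CircuitBreaker:
    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures, self.reset_after = max_failures, reset_after
        self.failures, self.opened_at = 0, None

    def call(self, fn):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open")  # fail fast, no LLM call
            self.opened_at, self.failures = None, 0  # half-open: allow a probe
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # success resets the failure count
        return result
```

Combined with per-call timeouts and bounded retries, this keeps one slow model from cascading into the whole agentic system.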
Bumpy path from POC to production
Top three enterprise challenges: uncertainty, privacy, and scale

52% fail to reach production; 8+ months from POC to production.

Testing LLMs isn’t straightforward:
● There’s no .assertEqual().
● Heuristic metrics are flawed.
● Human evals are expensive and inconsistent.
● Even stable outputs might still be wrong.

If you’re looking for vibes, it will be short-lived:
● LLMs are probabilistic pattern matchers, not deterministic APIs.
● Building with them means thinking in systems, not functions.
● It means controlling chaos, not eliminating it.
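Since there is no .assertEqual() for probabilistic outputs, one common workaround is scoring against a rubric and accepting above a threshold, ideally aggregated over many samples. A crude keyword-based sketch (the rubric and threshold are illustrative; real evals often use an LLM judge):

```python
# Threshold-based eval sketch: instead of exact-match assertions, score
# the output against required facts and accept above a threshold.

def keyword_score(output: str, must_mention: list[str]) -> float:
    """Crude heuristic: fraction of required facts the output mentions."""
    hits = sum(1 for k in must_mention if k.lower() in output.lower())
    return hits / len(must_mention)

def passes(output: str, must_mention: list[str], threshold: float = 0.8) -> bool:
    # Run over a synthetic evaluation set and track the pass rate per
    # prompt, rather than asserting on any single sampled output.
    return keyword_score(output, must_mention) >= threshold
```

Heuristic scorers like this are themselves flawed (as the slide notes), so they are best treated as regression tripwires rather than proof of correctness.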
Privacy and compliance horror show
LLMs are leaky sieves, creating numerous holes for security to plug, in both SaaS and agentic deployments.
Agentic systems engineering for reliability

1. Execute a DDD and AI-DD process
   ➔ Produce context map, ubiquitous language, and bounded contexts
   ➔ Define overall orchestration and flow across bounded contexts
   ➔ Develop localized workflows for each bounded context
2. Define data sovereignty and scope
   ➔ Company-specific requirements (e.g., retention policies, audit logging)
   ➔ Country or regional regulations (e.g., GDPR, HIPAA, financial data rules)
3. Establish evaluation strategy
   ➔ Make reasoning visible and measurable from the start
   ➔ Build synthetic evaluation sets to test reasoning steps
4. Select the right AI models
   ➔ Reasoning models: OpenAI o3, Claude Sonnet, DeepSeek
   ➔ General models: OpenAI GPT-4o, Gemini Pro, LLaMA
   ➔ Small language models: Phi-4, Mistral 7B, Claude Haiku, Gemini Flash
   ➔ Fine-tuned industry models: DeepSeek-Coder, CodeLlama
5. Select agentic platform architecture
   ➔ Choose a platform that enables services that transact and reason
   ➔ Requirements: durable execution, event-driven, memory, streaming, and tools support
   ➔ Requirements: elastic, <20ms p99 latencies, resilient, multi-region failover
6. Build developer workflow and agents
   ➔ Refine developer workflow
   ➔ Build initial versions of your agent(s)
7. Deploy and observe
   ➔ Release, monitor, and refine agents based on real-world behavior
Techniques for reducing uncertainty
Design to anticipate randomness while embracing failure as expected

➔ Agentic awareness: take more LLM thinking time, using post-inference models, when observing uncertainty
Agentic AI services on Akka

Inputs from humans, IoT devices, audio / video, metrics, and streaming endpoints arrive over any protocol, in and out, via custom APIs. Agentic AI services combine agent lifecycle management, orchestration, a memory database API, semantic search, multi-LLM and A2A integration, and MCP, on top of akka clustering. Agent connectivity & adapters (non-blocking, backpressure, load balanced) connect prompts and events to vector DBs, LLMs, and other systems. Secure | Observable | Scalable.

➔ Efficient: 70% less compute with the API + agentic combo
➔ Elastic: data, API, and agentic services at 5M TPS
➔ Agile: production in days with the SDK + ops environments
➔ Resilient: 0-0 RTO/RPO via multi-region, multi-master data replication
The Akka agentic advantage

✓ Agentic, AI, apps & data
✓ Hardened runtime
✓ Simple, expressive SDK
✓ Multi-region
✓ Automated ops

Streaming endpoints
➔ Shared compute: agentic co-execution with API services
➔ HTTP and gRPC custom API endpoints
➔ Custom protocols, media types, and edge deployments
➔ Real-time streaming ingest, benchmarked to over 1TB

Memory database
➔ Agentic sessions with infinite context
➔ Context snapshot pruning to avoid LLM token caps
➔ In-memory context sharding, load balancing, and traffic routing
➔ Multi-region context replication
➔ Memory filters for region-pinning and cross-session context creation
➔ Embedded context persistence with Postgres event store
Haifeng Li – maintainer, Horn

“Zero problems” augmenting high-performance audio and video streams on demand.
–Tomasz Wujec, Lead Developer

“With Akka, we got to market 75% faster compared to other agentic solutions we had considered.”
–Michael Ehrlich, CTO, Coho AI
Agentic is real
Let’s make it real for you
Q&A
Contact Us:
Tyler Jewell: [email protected]
InfoQ: [email protected]