
AI Agent Design Patterns

A Comprehensive Tutorial

Introduction
There are many tools and possibilities for AI agents today, creating both exciting opportunities and a lot of noise. To cut through the
confusion, this tutorial provides a framework of six key design patterns you can leverage to build powerful, scalable agentic applications.
Each pattern represents a different approach to designing intelligent systems with specific capabilities and use cases.

Understanding AI agent design patterns can dramatically improve problem-solving, collaboration, and performance, leading to more
sophisticated and reliable AI applications. While prompt engineering is important, systems design matters more for production-ready
applications. These patterns are not mutually exclusive; they can be combined to create powerful AI solutions.

Key Insight: LLMs (Large Language Models) are smart but static. To build truly powerful AI applications, we need to implement
systems that can think, interact, and improve. That's where these design patterns come in.

Table of Contents
1. ReAct Agent Pattern

2. CodeAct Agent Pattern

3. Modern Tool Use Pattern

4. Self-Reflection Pattern

5. Multi-Agent Workflow Pattern

6. Agentic RAG Pattern

Conclusion

1. ReAct Agent Pattern

What is a ReAct Agent?


The ReAct (Reasoning and Acting) pattern represents the classic loop of AI agent operation where the agent thinks, takes action,
observes the result, and repeats. This framework combines verbal reasoning traces with task-specific actions in an interleaved
manner.

ReAct Agent Workflow

Thought (agent reasons about the task) → Action (agent takes an action) → Observation (agent observes the result) → Repeat (agent repeats the cycle)

How ReAct Agents Work


ReAct is inspired by the synergies between "acting" and "reasoning" that allow humans to learn and perform tasks effectively. It
builds on Chain-of-Thought (CoT) prompting but adds the crucial element of interaction with external environments.

In a typical ReAct agent workflow:

1. The agent receives a task or query from the user


2. It generates a reasoning trace about how to approach the task (Thought)
3. It decides on and takes a specific action, like searching for information (Action)
4. It receives feedback or information from the environment (Observation)
5. Based on the observation, it generates a new thought and continues the loop until task completion

# Example ReAct Agent interaction pseudocode

User: "Who won the most recent Super Bowl and what was the score?"

# Agent processing begins


Thought: I need to find information about the most recent Super Bowl. I should search for it.
Action: Search("most recent Super Bowl winner and score")

Observation: The most recent Super Bowl was Super Bowl LVIII, held on February 11, 2024, where the
Kansas City Chiefs defeated the San Francisco 49ers with a score of 25-22 in overtime.

Thought: I have the information needed to answer the question.

# Agent provides answer to user


Answer: The Kansas City Chiefs won the most recent Super Bowl (Super Bowl LVIII) with a score of 25-22
against the San Francisco 49ers in overtime.

Implementation Concepts
ReAct can be implemented in various ways, but typically involves:

Prompting an LLM to generate combined reasoning traces and actions


Setting up a system to execute the actions requested by the LLM
Feeding back observations to the LLM for continued reasoning
Establishing a termination condition when the goal is achieved

Frameworks like LangChain provide built-in functionality to implement ReAct agents with different tools like web search,
calculation, and API calls.
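To make the loop concrete, here is a minimal, framework-free sketch in Python. The call_llm and search functions are hypothetical placeholders standing in for a real model client and a real search tool; the point is the Thought → Action → Observation cycle with a step limit as the termination safeguard.

```python
# Minimal ReAct loop sketch; call_llm and search are hypothetical placeholders
import re

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; expected to return Thought/Action lines or an Answer line."""
    raise NotImplementedError("Plug in your model client here")

def search(query: str) -> str:
    """Stand-in for a real search tool."""
    raise NotImplementedError("Plug in a search API here")

TOOLS = {"Search": search}
MAX_STEPS = 5  # safeguard against infinite loops

def react_agent(task: str) -> str:
    transcript = f"Task: {task}\n"
    for _ in range(MAX_STEPS):
        output = call_llm(transcript)              # model emits a Thought plus an Action or a final Answer
        transcript += output + "\n"
        answer = re.search(r"Answer:\s*(.*)", output, re.S)
        if answer:                                 # termination condition: the model produced an answer
            return answer.group(1).strip()
        action = re.search(r'Action:\s*(\w+)\("?(.*?)"?\)', output)
        if action:
            tool_name, arg = action.group(1), action.group(2)
            observation = TOOLS[tool_name](arg)    # execute the requested action
            transcript += f"Observation: {observation}\n"
    return "Stopped: step limit reached without an answer"
```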

Advantages
Improves fact-checking and reduces hallucinations by retrieving real-time information
Makes LLM reasoning process explicit and transparent
Enables interactive problem-solving capabilities
Used in most AI products today, including basic chat assistants

Limitations
Can get stuck in loops or make inefficient decisions
Depends heavily on the quality of the LLM's reasoning
Limited by the tools and APIs available to the agent
May require multiple iterations to solve complex problems

Real-World Applications
Question-answering systems that require fact retrieval
Customer support assistants
Research assistants
Task-oriented conversational agents

Best Practices
Provide clear, detailed instructions in your prompts about the reasoning process
Include examples (few-shot prompting) to guide the agent's reasoning patterns
Implement safeguards to prevent infinite loops
Ensure tools are reliable and provide meaningful feedback

2. CodeAct Agent Pattern

What is a CodeAct Agent?


The CodeAct pattern extends the ReAct framework by incorporating the ability to generate and execute real code as part of the
agent's action space. Instead of just calling predefined APIs, a CodeAct agent writes and runs actual code (typically Python) to
solve problems, perform calculations, manipulate data, or interact with various systems.

CodeAct Agent Workflow

Problem Analysis (agent analyzes the problem) → Code Generation (agent writes code to solve it) → Code Execution (agent runs the code) → Result Interpretation (agent interprets the results)

How CodeAct Agents Work


CodeAct agents operate by:
1. Analyzing the user's request to determine what code needs to be written
2. Generating appropriate code in a suitable language (usually Python)
3. Executing the code in a secure sandbox environment
4. Interpreting the code execution results
5. Refining the code if needed based on execution results
6. Communicating the final solution back to the user

# Example CodeAct Agent workflow

User: "Calculate how much money I'd have if I invested $10,000 with 7% annual return for 10 years,
compounded monthly."

# Agent processing
Thought: I need to calculate a compound interest problem with monthly compounding. I'll write a Python
script to do this.

CodeAction:
```python
principal = 10000
annual_rate = 0.07
years = 10
monthly_rate = annual_rate / 12
months = years * 12

final_amount = principal * (1 + monthly_rate) ** months


print(f"Final amount after {years} years: ${final_amount:.2f}")
```

Result: Final amount after 10 years: $20096.61

# Agent provides answer to user


Answer: If you invested $10,000 with a 7% annual return for 10 years, compounded monthly, you would
have approximately $20,096.61.

Implementation Concepts
Implementing a CodeAct agent requires:
A secure code execution environment/sandbox to protect against malicious code
Clear prompting strategies to guide the LLM in generating correct code
Error handling mechanisms to deal with code that fails to execute properly
A feedback loop to allow the agent to learn from execution results
State management to track variables and computation results between interactions
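As an illustration of the execution side, the sketch below runs generated code in a separate process with a time limit and returns stdout/stderr to the agent. This is a minimal example under simplified assumptions, not a production sandbox; real deployments add containerization, resource limits, and restricted permissions.

```python
# Minimal sketch of executing agent-generated code with a timeout (not a full sandbox)
import os
import subprocess
import sys
import tempfile

def run_generated_code(code: str, timeout_s: int = 5) -> dict:
    """Execute agent-generated Python in a separate process and capture the outcome."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run(
            [sys.executable, path],
            capture_output=True, text=True, timeout=timeout_s,
        )
        # stdout/stderr are fed back to the agent so it can refine its code if needed
        return {"ok": proc.returncode == 0, "stdout": proc.stdout, "stderr": proc.stderr}
    except subprocess.TimeoutExpired:
        return {"ok": False, "stdout": "", "stderr": f"Timed out after {timeout_s}s"}
    finally:
        os.unlink(path)

# Example: run the compound-interest snippet from the workflow above
result = run_generated_code(
    "principal = 10000\n"
    "monthly_rate = 0.07 / 12\n"
    "print(f'{principal * (1 + monthly_rate) ** 120:.2f}')"
)
print(result["stdout"])  # -> 20096.61
```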

Advantages
Dramatically more powerful than text-only agents
Can perform complex calculations and data transformations
Can interact with APIs, databases, and other systems through code
Provides exact, verifiable results for computational tasks
Highly flexible - can solve problems that weren't anticipated during design

Limitations
Security concerns with executing dynamically generated code
Higher computational resources required
LLMs may generate code with subtle bugs or edge cases
Debugging can be complex when code fails silently
Requires careful sandboxing and resource limitations

Real-World Applications
Data analysis assistants
Software development companions
Automated research agents
Complex financial calculations
Scientific computing assistants

Best Practices
Implement strict sandboxing with resource limits (CPU, memory, execution time)
Restrict access to sensitive operations and file systems
Provide clear error messages back to the agent when code fails
Use a stateful approach to maintain variables between code executions
Include a mechanism for the agent to test and verify its own code

3. Modern Tool Use Pattern

What is the Modern Tool Use Pattern?


The Modern Tool Use pattern focuses on delegating specialized tasks to external tools and services instead of having the AI
attempt to do everything itself. The agent acts primarily as an intelligent middleman that routes tasks to the most appropriate
tools, formats information, and presents results to the user.

Unlike traditional tool use (which might be limited to simple API calls), Modern Tool Use often involves:
A large ecosystem of specialized tools
Dynamic selection of appropriate tools for each subtask
Sophisticated formatting and parsing of inputs and outputs
Hierarchical organization of tool selection

Modern Tool Use Architecture

LLM Agent (Orchestrator) routing to: Search Tools (web, academic, news), Knowledge Tools (Wikipedia, databases), Calculation Tools (math, finance), and API Tools (weather, stocks)

How Modern Tool Use Works


In the Modern Tool Use pattern:
1. The LLM analyzes the user's request to determine what information or actions are needed
2. It selects the most appropriate tool or sequence of tools from its available toolkit
3. It formats the appropriate query or command for the selected tool
4. The tool executes and returns results to the LLM
5. The LLM processes the tool's response and may decide to use additional tools
6. Finally, it synthesizes a comprehensive response for the user

# Example Modern Tool Use interaction

User: "What's the weather in New York today and should I bring an umbrella?"

# Agent processing
Thought: I need to check the weather forecast for New York today to answer this question. I'll use a
weather API tool.

Tool Selection: weather_api


Tool Input: {"location": "New York", "date": "today"}

Tool Response: {
"location": "New York, NY",
"date": "2025-05-04",
"conditions": "Partly cloudy with 40% chance of afternoon showers",
"temperature": {"high": 72, "low": 58, "current": 65},
"precipitation": {"probability": 0.4, "type": "rain"}
}

# Agent generates response based on tool output


Response: "The weather in New York today is partly cloudy with a 40% chance of afternoon showers. The
temperature will range from 58°F to 72°F. Since there's a significant chance of rain later in the day,
I'd recommend bringing an umbrella to be safe."

Implementation Concepts
Key components for implementing Modern Tool Use include:
A tool registry that catalogs available tools with their descriptions, input schemas, and capabilities
A tool selection mechanism that helps the LLM choose the most appropriate tool
Structured input/output formats for tool communication (often JSON)
Error handling for tool failures
A prioritization system that prefers specialized tools over general ones

As described in Andrew Ng's framework, Modern Tool Use should follow a hierarchical approach:

"Information seeking should always try to use specialized tools for vertical information like travel, finance, products first, then
fallback to general search if the specialized tools do not return good enough information."
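To make the orchestration concrete, the sketch below shows a minimal tool registry and routing step. The call_llm, get_weather, and web_search functions are hypothetical placeholders; production systems would typically use a provider's native function-calling or tool-use API rather than parsing raw JSON from the model.

```python
# Minimal tool-registry sketch; call_llm, get_weather, and web_search are hypothetical
import json

def get_weather(location: str, date: str) -> dict:
    """Hypothetical specialized weather tool."""
    raise NotImplementedError("Plug in a weather API here")

def web_search(query: str) -> str:
    """Hypothetical general-purpose search tool."""
    raise NotImplementedError("Plug in a search API here")

def call_llm(prompt: str) -> str:
    """Hypothetical model call expected to return a JSON tool call: {"tool": ..., "input": {...}}."""
    raise NotImplementedError("Plug in your model client here")

# Tool registry: descriptions and input schemas the model can choose from
TOOL_REGISTRY = {
    "weather_api": {
        "fn": get_weather,
        "description": "Current and forecast weather for a specific location (specialized tool)",
        "schema": {"location": "string", "date": "string"},
    },
    "web_search": {
        "fn": web_search,
        "description": "General-purpose web search; use only if no specialized tool fits",
        "schema": {"query": "string"},
    },
}

def route(user_query: str):
    # Present the catalog so the model can prefer specialized tools over general ones
    catalog = {name: t["description"] for name, t in TOOL_REGISTRY.items()}
    decision = json.loads(call_llm(f"Tools: {json.dumps(catalog)}\nQuery: {user_query}"))
    tool = TOOL_REGISTRY[decision["tool"]]
    return tool["fn"](**decision["input"])  # structured input, then execute the selected tool
```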

Advantages
Leverages specialized, optimized services for better performance
Provides access to real-time data and APIs
More efficient resource utilization by delegating heavy lifting
Easily extensible by adding new tools
Reduces hallucinations by using authoritative external sources

Limitations
Increased system complexity and potential points of failure
Dependence on external services' availability
Potential latency issues from API calls
May require specialized knowledge to develop new tools
Tool selection might not always be optimal

Real-World Applications
Personal assistants that integrate with multiple services
Research tools that combine multiple information sources
Customer service agents with access to product databases and knowledge bases
Workflow automation systems
Enterprise assistants that interface with internal tools

Best Practices
Provide clear, detailed descriptions of each tool's purpose and usage
Implement a fallback mechanism when tools fail
Use schemas to validate tool inputs and outputs
Create a hierarchy of tools from specialized to general
Log tool usage patterns to identify improvement opportunities
Implement timeouts for external tool calls

4. Self-Reflection Pattern

What is the Self-Reflection Pattern?


The Self-Reflection pattern enables an AI agent to critique its own work, identify mistakes or weaknesses, and iteratively improve
its outputs. This pattern addresses one of the most common causes of AI errors: the lack of self-evaluation and correction.

As Andrew Ng points out in his agentic design patterns series: "Most AI errors happen because this step is missing."

Self-Reflection Workflow

Initial Generation (agent creates first response) → Self-Critique (agent evaluates its work) → Refinement (agent improves based on critique) → Final Output (agent delivers improved result)

How Self-Reflection Works


In the Self-Reflection pattern:
1. The AI agent generates an initial response to the user's query
2. It then switches to a critical evaluation mode and examines its own output
3. The agent identifies potential issues, factual errors, logical inconsistencies, or areas for improvement
4. Based on this self-critique, it generates an improved version of the response
5. This cycle can repeat multiple times until a satisfactory quality level is reached

As described by Andrew Ng:

"This self-reflection process allows the LLM to spot gaps and improve its output on a variety of tasks including producing
code, writing text, and answering questions."

# Example Self-Reflection Agent workflow

User: "Write a short explanation of quantum computing for high school students."

# Initial generation
Initial Response: "Quantum computing uses quantum bits or qubits which can be both 0 and 1 at the same
time due to superposition. This allows quantum computers to solve complex problems much faster than
classical computers that use regular bits that can only be 0 or 1."

# Self-reflection phase
Self-Critique: "My explanation is technically correct but may be too abstract for high school
students. I didn't provide concrete examples or analogies that would make the concept more
approachable. The explanation is also quite brief and doesn't cover key quantum concepts like
entanglement or potential applications that would make the topic more interesting."

# Refinement based on self-critique


Improved Response: "Imagine you have a regular computer that uses bits - these are like light switches
that can be either ON (1) or OFF (0). Now, quantum computers use something special called 'qubits.'
What makes qubits amazing is that they can be ON, OFF, or somehow both at the same time - like a light
switch that's somehow in the middle position!

This special property is called 'superposition.' It's like being able to check many answers to a
problem all at once instead of one at a time. Another cool quantum feature is 'entanglement,' which is
like having two magical coins that always match each other's result when flipped, even if they're far
apart.

Scientists are excited about quantum computers because they might someday solve problems that would
take regular computers thousands of years, like designing new medicines or creating better materials.
While fully functional quantum computers are still being developed, they represent one of the most
exciting frontiers in technology today!"

Implementation Concepts
Implementing Self-Reflection can be done in several ways:

Single-Agent Reflection: The same LLM instance critiques its own output with specific prompting
Multi-Agent Reflection: One LLM instance generates content, while another is specifically prompted to critique it
Tool-Augmented Reflection: Using external tools to evaluate outputs (e.g., fact-checking against databases)
Structured Rubric-Based Evaluation: Using specific criteria to systematically evaluate different aspects of the output
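A minimal sketch of the single-agent variant is shown below, using separate system prompts for the generation and critique phases. The call_llm function is a hypothetical stand-in for a real model client.

```python
# Minimal single-agent self-reflection loop; call_llm is a hypothetical model call
def call_llm(system: str, user: str) -> str:
    """Stand-in for a real model client taking a system prompt and a user message."""
    raise NotImplementedError("Plug in your model client here")

GENERATOR_PROMPT = "You are a clear, accurate writer. Answer the user's request."
CRITIC_PROMPT = (
    "You are a strict reviewer. List factual errors, unclear passages, and missing content "
    "in the draft. If the draft needs no changes, reply exactly: OK"
)

def reflect_and_refine(task: str, max_rounds: int = 2) -> str:
    draft = call_llm(GENERATOR_PROMPT, task)
    for _ in range(max_rounds):  # cap reflection cycles to avoid diminishing returns
        critique = call_llm(CRITIC_PROMPT, f"Task: {task}\n\nDraft:\n{draft}")
        if critique.strip() == "OK":
            break
        draft = call_llm(
            GENERATOR_PROMPT,
            f"Task: {task}\n\nPrevious draft:\n{draft}\n\n"
            f"Reviewer feedback:\n{critique}\n\nRewrite the draft to address the feedback.",
        )
    return draft
```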

Advantages
Significantly reduces errors and hallucinations
Improves output quality with minimal additional prompt engineering
Makes reasoning more transparent and traceable
Can adapt to feedback more effectively
Relatively simple to implement compared to other patterns

Limitations
Increases token usage and processing time
May still miss certain types of errors
Can sometimes be overly critical or focus on superficial improvements
LLMs might be biased in their self-evaluation
Without external references, cannot correct fundamental knowledge gaps

Real-World Applications
Content generation with quality assurance
Code review assistants
Academic writing and research assistants
Decision support systems
Educational tutoring systems

Best Practices
Provide specific criteria for self-evaluation rather than general instructions
Consider using different system prompts for generation vs. critique phases
Limit the number of reflection cycles to avoid diminishing returns
Maintain a record of both original and improved outputs for quality tracking
Supplement self-reflection with external factual verification when possible

5. Multi-Agent Workflow Pattern

What is the Multi-Agent Workflow Pattern?


The Multi-Agent Workflow pattern distributes complex tasks across multiple specialized AI agents, each with distinct roles,
creating a collaborative system similar to a team of experts. Instead of relying on a single agent to handle everything, this
approach breaks tasks down into specialized responsibilities managed by different agents.

Multi-Agent Workflow Architecture

Orchestrator Agent coordinating: Planner (creates strategy), Researcher (gathers information), Writer (creates content), Critic (evaluates quality)

How Multi-Agent Workflows Work


Multi-Agent Workflows typically operate with this process:

1. A user submits a complex task or query


2. An orchestrator agent analyzes the task and breaks it into subtasks
3. The orchestrator assigns subtasks to specialized agents based on their roles
4. Specialized agents perform their assigned tasks, potentially communicating with each other
5. Results are combined, refined, and presented back to the user

As Andrew Ng explains in his writing on Multi-Agent Collaboration:

"Different agents might be built by prompting one LLM (or, if you prefer, multiple LLMs) to carry out different tasks. For
example, to build a software engineer agent, we might prompt the LLM: 'You are an expert in writing clear, efficient code.
Write code to perform the task...'"

# Example Multi-Agent Workflow for Software Development

User: "Create a simple web application that allows users to track their daily water intake."

# Orchestrator Agent assigns tasks


Orchestrator: "This task requires multiple specialists. I'll break this down into product
requirements, design, coding, and testing."

# Product Manager Agent


PM Agent: "Requirements: App should allow users to log water intake in ml/oz, track daily progress
towards a goal, show history over time, and provide reminders. Target audience is health-conscious
adults."

# Designer Agent
Designer Agent: "UI Design: Simple dashboard with current day's intake, progress bar showing % towards
daily goal, add intake button with quick-add options (glass, bottle), settings page for goal
adjustment, and history page with chart."

# Developer Agent
Developer Agent: "I'll implement this using React for frontend, with local storage for data
persistence. Components: Header, Dashboard, AddIntakeForm, ProgressBar, HistoryChart, Settings."

# QA Agent
QA Agent: "Test cases: Verify intake logging works correctly, ensure goal tracking updates properly,
check that history is stored and displayed correctly, verify responsive design works on mobile and
desktop."

# Orchestrator combines and delivers result


Orchestrator: "Here's your water tracking web application with requirements, design, implementation
code, and testing guidelines..."

Implementation Concepts
Key considerations for implementing Multi-Agent Workflows include:

Clear role definition for each agent with specific responsibilities


Communication protocols between agents (message passing)
State management to track the workflow progress
Coordination and orchestration mechanisms
Conflict resolution when agents disagree
Workflow sequencing (parallel vs. sequential execution)

There are several frameworks that help implement multi-agent systems:

AutoGen: Microsoft's framework for building applications with multiple agents


LangGraph: LangChain's graph-based framework for multi-agent workflows
CrewAI: Specialized framework for agent collaboration
ChatDev: Open source system that simulates a software company with different roles
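The frameworks above handle much of this orchestration for you. For intuition, here is a minimal, framework-free sketch of a sequential workflow; the role prompts and the call_llm function are hypothetical placeholders.

```python
# Minimal sequential multi-agent pipeline; call_llm and the role prompts are hypothetical
def call_llm(system: str, user: str) -> str:
    """Stand-in for a real model client."""
    raise NotImplementedError("Plug in your model client here")

# Hypothetical role definitions; real systems would use richer prompts and tools per agent
ROLES = [
    ("Planner", "You are a product manager. Turn the request into concrete requirements."),
    ("Researcher", "You are a researcher. Gather the facts and constraints the team needs."),
    ("Writer", "You are an engineer. Produce the deliverable from the notes so far."),
    ("Critic", "You are a reviewer. Point out problems and suggest specific fixes."),
]

def run_workflow(task: str) -> dict:
    state = {"task": task}               # shared state tracking each agent's contribution
    context = f"Task: {task}"
    for role, system_prompt in ROLES:    # sequential hand-off; steps could also run in parallel
        output = call_llm(system_prompt, context)
        state[role] = output
        context += f"\n\n[{role}]\n{output}"   # message passing via an accumulated transcript
    return state
```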

Advantages
Reduces complexity through specialized agent roles
Improves performance through focused attention on subtasks
Enables parallel processing of different aspects of a task
Creates built-in checks and balances through different agents' perspectives
More closely models human team collaboration

Limitations
Increased system complexity and overhead
Coordination challenges and potential communication bottlenecks
Higher computational and token costs
Can be unpredictable in complex scenarios
More difficult to debug when issues arise

Real-World Applications
Collaborative software development environments
Research and analysis teams for complex topics
Content creation pipelines (research, writing, editing, fact-checking)
Business decision support systems
Project management assistants

Best Practices
Define clear roles, responsibilities and boundaries for each agent
Create structured communication protocols between agents
Implement a strong orchestration mechanism to coordinate workflows
Include observation and debugging capabilities to track agent interactions
Start simple and add complexity incrementally as the system stabilizes
Create fallback mechanisms when agent collaboration hits roadblocks

6. Agentic RAG Pattern

What is the Agentic RAG Pattern?


Agentic RAG (Retrieval-Augmented Generation) combines the power of retrieval-based systems with agent-like decision making
and reasoning capabilities. Unlike traditional RAG systems that follow a fixed retrieval-then-generate pattern, Agentic RAG involves
intelligent agents that can reason about what to retrieve, how to interpret the retrieved information, and when to seek additional
context.

Agentic RAG Workflow

Query Analysis (agent analyzes user query) → Dynamic Retrieval (agent retrieves information strategically) → Information Synthesis (agent processes retrieved content) → Response Generation (agent creates coherent answer) → Self-Verification (agent checks response quality)

How Agentic RAG Works


Unlike static retrieval-then-read patterns, Agentic RAG involves iterative calls to the LLM, interspersed with tool or function calls
and structured outputs. The typical workflow includes:

1. An agent receives a query and analyzes what information is needed


2. It formulates targeted retrieval queries that may evolve as information is gathered
3. The agent evaluates retrieved information for relevance and sufficiency
4. If information is incomplete, it formulates new queries or explores different information sources
5. The agent synthesizes information from multiple sources, resolving conflicts
6. It generates a comprehensive response that cites sources appropriately
7. Optionally, it verifies the accuracy of its response against retrieved information

# Example Agentic RAG workflow for a health question

User: "What are the best exercises for someone with lower back pain?"

# Agent processes query


Agent Thought: "This is a health-related question about exercises for lower back pain. I need to find
scientifically supported information from credible health sources."

# Dynamic retrieval - first pass


Retrieval Query: "evidence-based exercises for lower back pain treatment"

Retrieved Documents: [Several health articles about general back exercises]

# Agent evaluates and refines search


Agent Analysis: "I have general information but need more specific details about exercise types,
intensity levels, and contraindications."

# Second retrieval with refined query


Refined Query: "specific recommended exercises for lower back pain from physical therapy sources"

Additional Documents: [Physical therapy guidelines and research papers]

# Agent synthesizes information and generates response


Response: "Research from medical sources indicates these exercises may help with lower back pain:
1. Gentle stretching: Cat-cow stretches and knee-to-chest pulls
2. Core strengthening: Partial crunches and bridge exercises
3. Low-impact aerobic exercise: Swimming or walking

Important safety notes: Start gradually, avoid jerky movements, and stop if pain increases. A physical
therapist can provide personalized recommendations for your specific condition.

Sources: American Academy of Orthopedic Surgeons, Mayo Clinic, and Harvard Health Publishing."

Implementation Concepts
Key components for implementing Agentic RAG include:

Intelligent query formulation mechanisms


Multiple retrieval sources with different specialties
Information evaluation and relevance filtering
Memory mechanisms to track what has been retrieved
Citation tracking and source attribution
Feedback loops to improve retrieval based on initial results
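A minimal sketch of the iterative retrieve-evaluate-refine loop is shown below. The call_llm and retrieve functions are hypothetical placeholders for a real model client and a vector store or search API.

```python
# Minimal agentic RAG loop; call_llm and retrieve are hypothetical placeholders
import json

def call_llm(prompt: str) -> str:
    """Stand-in for a real model client."""
    raise NotImplementedError("Plug in your model client here")

def retrieve(query: str) -> list:
    """Stand-in for a retrieval call against a vector store or search API."""
    raise NotImplementedError("Plug in a retriever here")

def agentic_rag(question: str, max_rounds: int = 3) -> str:
    collected = []                       # memory of everything retrieved so far
    query = question
    for _ in range(max_rounds):
        collected += retrieve(query)
        # Ask the model whether the evidence is sufficient or what to search for next
        verdict = json.loads(call_llm(
            'Reply as JSON: {"sufficient": true or false, "next_query": "..."}\n'
            f"Question: {question}\nDocuments: {collected}"
        ))
        if verdict["sufficient"]:
            break
        query = verdict["next_query"]    # refine the retrieval query and try again
    # Final synthesis grounded in the retrieved documents, with source citation requested
    return call_llm(
        "Answer the question using only these documents and cite them.\n"
        f"Question: {question}\nDocuments: {collected}"
    )
```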

Advantages
More accurate responses through targeted, adaptive retrieval
Works with real-time data, not just model memory
Reduces hallucinations by grounding responses in retrieved information
Can find and synthesize information from multiple sources
Creates more transparent, attributable responses

Limitations
More complex to implement than standard RAG
Can be slower due to multiple retrieval rounds
Higher computational and token costs
May struggle with highly specialized or obscure topics
Quality depends on both retrieval sources and agent reasoning

Real-World Applications
Advanced research assistants (like Perplexity)
Medical and legal information systems
Technical documentation assistants
Academic research tools
Knowledge-intensive enterprise applications

Best Practices
Incorporate diverse, high-quality information sources
Implement mechanisms to detect and resolve conflicting information
Use citation tracking to maintain attribution
Create evaluation metrics to assess retrieval quality
Establish thresholds for when additional retrievals are needed
Maintain transparency about information sources in responses

Conclusion: Combining Patterns for Powerful AI Applications


These six design patterns—ReAct Agent, CodeAct Agent, Modern Tool Use, Self-Reflection, Multi-Agent Workflow, and Agentic
RAG—provide a framework for building sophisticated, reliable AI applications. While each pattern offers unique capabilities and
advantages, the most powerful applications often combine multiple patterns to create systems that can reason, act, learn, and
collaborate.
For example, a comprehensive AI assistant might use:

ReAct as its fundamental reasoning and action framework


CodeAct capabilities for computational tasks
Modern Tool Use to leverage specialized external services
Self-Reflection to verify and improve its outputs
Multi-Agent approaches for complex tasks requiring different expertise
Agentic RAG for knowledge-intensive queries requiring research

Important Considerations
Design patterns are guidelines, not rigid rules—adapt them to your specific use cases
Start with simpler patterns (like ReAct and Self-Reflection) before attempting more complex ones
Consider computational costs and complexity when combining multiple patterns
Implement proper monitoring and evaluation to ensure patterns work effectively
Remember that even with these patterns, LLMs have fundamental limitations

As stated in the original framework: "Understanding AI agent design patterns can improve problem-solving, collaboration, and performance, leading to more sophisticated and reliable AI applications." With these patterns in your toolkit, you're well-equipped to build the next generation of intelligent systems that can think critically, act effectively, and solve complex problems.

References and Further Reading


Agentic Design Patterns Part 2: Reflection - Andrew Ng, DeepLearning.AI
Agentic Design Patterns Part 3: Tool Use - Andrew Ng, DeepLearning.AI
Agentic Design Patterns Part 4: Planning - Andrew Ng, DeepLearning.AI
Agentic Design Patterns Part 5: Multi-Agent Collaboration - Andrew Ng, DeepLearning.AI
What is a ReAct Agent? - IBM Think
ReAct - Prompt Engineering Guide
ReAct: Synergizing Reasoning and Acting in Language Models - Yao et al., 2022
What is Agentic RAG - Weaviate
CodeAct Agent From Scratch: A Complete Guide - Medium
