Deep Research AI Agentic System

The Deep Research AI Agentic System is an open-source multi-agent pipeline that autonomously conducts deep research on any topic using real-time web searches and large language models. It consists of a Research Agent that gathers and summarizes online information, and an Answer Drafting Agent that formulates coherent responses based on the research. The system is orchestrated using LangGraph and employs various technologies including Tavily for web search and Groq for LLM-based processing.


Deep Research AI Agentic System — Project Explanation

Overview

The Deep Research AI Agentic System is a multi-agent pipeline that autonomously
performs deep research on any given topic using real-time web search and large
language models (LLMs). It is designed to be modular, scalable, and powered
entirely by open-source components. The system uses LangGraph to orchestrate
agentic workflows, the Tavily API for online search, and Groq's high-speed
inference API for LLM-based summarization and answer generation.

System Architecture

The system is composed of two intelligent agents that work sequentially:

1. Research Agent

Function: Conducts an online search using the Tavily Search API and extracts
relevant web snippets.

 Input: A user query.
 Output: A cleaned, structured research summary based on real-time online sources.
 Tools Used:
 tavily-python SDK for advanced web search
 Groq LLM (e.g., llama3-8b-8192) to summarize results
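
As an illustration, the Research Agent's summarization step could assemble Tavily's result snippets into a single prompt before handing it to the Groq LLM. This is only a sketch: build_research_prompt is a hypothetical helper (not from the project's source), and the actual API calls are shown as comments.

```python
# Hypothetical helper for the Research Agent: turn raw Tavily results into a
# summarization prompt. The real search call would be something like
# TavilyClient(api_key=...).search(query), whose "results" entries carry
# "title", "url", and "content" fields.

def build_research_prompt(query: str, results: list[dict]) -> str:
    """Format web snippets into one prompt for the summarizer LLM."""
    snippets = "\n\n".join(
        f"Source: {r.get('title', 'untitled')} ({r.get('url', 'n/a')})\n"
        f"{r.get('content', '')}"
        for r in results
    )
    return (
        f"Summarize the following web research on the question: {query}\n\n"
        f"{snippets}\n\n"
        "Produce a cleaned, structured research summary."
    )

# The summary itself would then come from Groq, e.g. (not executed here):
# groq_client.chat.completions.create(
#     model="llama3-8b-8192",
#     messages=[{"role": "user", "content": prompt}],
# )
```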

2. Answer Drafting Agent

Function: Drafts a high-quality, coherent answer using the research summary and the
original user question.

 Input: Research summary + original question
 Output: Final answer
 Tools Used:
 Groq LLM for structured, context-aware response generation
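
The drafting step can be sketched with the LLM call injected as a plain callable, which keeps the agent logic separate from the Groq client. draft_answer and its prompt wording are illustrative assumptions, not the project's actual code:

```python
from typing import Callable

def draft_answer(question: str, research_summary: str,
                 llm: Callable[[str], str]) -> str:
    """Drafting step: build the prompt and delegate to an injected LLM.

    `llm` stands in for a Groq completion call; in the real agent it would
    wrap groq's chat.completions.create with a model such as llama3-8b-8192.
    """
    prompt = (
        f"Research summary:\n{research_summary}\n\n"
        f"Original question: {question}\n\n"
        "Write a high-quality, coherent answer grounded in the summary."
    )
    return llm(prompt)
```

Injecting the LLM this way also makes the agent trivial to unit-test with a stub function in place of the live API.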

3. LangGraph Workflow

 Framework: LangGraph is used to model this pipeline as a directed graph,
where each node is an agent (or function).
 State Management: Uses TypedDict as the schema for state sharing between nodes.
 Flow: question → research → draft → done
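
The TypedDict state and the question → research → draft flow can be sketched without the LangGraph dependency; node names and field names below are illustrative, and the node bodies are placeholders for the real Tavily/Groq steps:

```python
from typing import TypedDict

class AgentState(TypedDict):
    """Shared state schema passed between the graph nodes."""
    question: str
    research: str
    answer: str

def research_node(state: AgentState) -> AgentState:
    # Placeholder for the Tavily search + Groq summarization step.
    return {**state, "research": f"summary for: {state['question']}"}

def draft_node(state: AgentState) -> AgentState:
    # Placeholder for the Groq answer-drafting step.
    return {**state, "answer": f"answer based on: {state['research']}"}

def run_pipeline(question: str) -> AgentState:
    """question → research → draft → done, as a plain function chain."""
    state: AgentState = {"question": question, "research": "", "answer": ""}
    state = research_node(state)
    state = draft_node(state)
    return state
```

With LangGraph installed, the same functions would be registered as nodes on a StateGraph built over AgentState, connected by an edge from the research node to the draft node, and invoked via the compiled graph.
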
Technologies Used

Component   Description
LangGraph   Orchestrates multi-agent workflows
LangChain   Initially used (now optional, for tools)
Groq        Fast inference for open-source models (LLaMA 3)
Tavily      Real-time, advanced web search results
Python      Core programming language
dotenv      Securely loads API keys from a .env file

File Structure
agentic-system/
├── agents/
│   ├── research_agent.py    # Tavily + Groq summary
│   └── answer_agent.py      # Groq-powered answer drafter
├── graph/
│   └── agent_graph.py       # LangGraph workflow builder
├── utils/
│   └── groq_llm.py          # Groq completion wrapper
├── main.py                  # CLI entry point
├── .env                     # API keys
└── requirements.txt         # Python dependencies
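
Based on the components listed above, requirements.txt would plausibly contain entries like the following (unpinned; the exact package set and versions are an assumption):

```
langgraph
langchain
groq
tavily-python
python-dotenv
```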

Execution Flow

1. User enters a research question via the CLI (main.py)
2. LangGraph workflow begins
3. Research Agent:
    Tavily fetches web results
    Groq LLM summarizes findings
4. Answer Agent:
    Uses research + question to generate the final answer
5. Final output printed to terminal
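
The flow above could be driven by a minimal main.py along these lines. The default step functions are stand-ins for the real Tavily/Groq-backed agents, and the .env variable names are assumptions:

```python
# Minimal CLI driver sketch. In the real main.py, load_dotenv() would load
# the Tavily and Groq API keys from .env before the agents run.

def run(question: str,
        research_step=lambda q: f"[research summary for: {q}]",
        draft_step=lambda q, r: f"[final answer for: {q} using {r}]") -> str:
    research = research_step(question)       # steps 2-3: gather + summarize
    answer = draft_step(question, research)  # step 4: draft the final answer
    return answer                            # step 5: printed by the caller

# CLI entry point (interactive; left as a comment to keep this sketch importable):
# if __name__ == "__main__":
#     print(run(input("Enter a research question: ")))
```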
