
Building a Weather-Savvy AI Agent Using LangGraph, LangChain, and OpenAI

Have you ever wanted to build an AI assistant that can actually respond to questions about the weather using real-time data? In this blog post, I’ll show you how to build a smart agent that integrates LangGraph, LangChain, OpenAI, and a weather API to deliver real-time temperature updates.

We’ll build a conversational agent that:

✅ Understands your messages with OpenAI's GPT-4o mini

✅ Detects when to use tools (like checking the weather)

✅ Returns the current temperature for any given location

Let’s dive in. 🌍🌦️


📦 Requirements

Before we begin, make sure you have the following Python packages installed:

pip install python-weather openai langchain langgraph python-dotenv

Also, grab your OpenAI API key and pop it into a .env file like so:

OPENAI_API_KEY=your-api-key-here

🧠 The Code

Step 1: Set Up Environment & Imports

We start by importing all the necessary libraries and loading our environment variables.

from dotenv import load_dotenv
from openai import OpenAI
import python_weather
import asyncio

from langgraph.graph import END, START, StateGraph, MessagesState
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
from langgraph.prebuilt import ToolNode
from typing import Literal

load_dotenv()

Step 2: Create a Weather Tool 🌡️

We use python_weather to fetch the current temperature asynchronously.

# Fetch the current temperature for a location using python-weather
async def generate_response(location):
    async with python_weather.Client(unit=python_weather.IMPERIAL) as client:
        weather = await client.get(location)
        return weather.temperature

@tool
def get_weather(location: str):
    """Get the current temperature for a given location."""
    # The tool is called synchronously, so run the async helper to completion here
    return asyncio.run(generate_response(location))

This tool will later be integrated with our AI agent.
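Before wiring it into the agent, it's worth a quick standalone sanity check. Here's a minimal sketch, assuming you have network access and that python-weather can resolve the city name (the printed value is just an example):

# Quick standalone check of the tool (the @tool decorator gives us .invoke)
print(get_weather.invoke({"location": "Lagos"}))  # e.g. 88

One caveat: asyncio.run() raises a RuntimeError if an event loop is already running (as it is inside Jupyter), so run this check from a plain Python script, or use a workaround like nest_asyncio in a notebook.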


Step 3: Connect Tools to the Model

Next, we bind the get_weather tool to the model (gpt-4o-mini here) using LangChain’s ChatOpenAI.

tools = [get_weather]
model = ChatOpenAI(model="gpt-4o-mini").bind_tools(tools)
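You can already see the tool-calling behaviour at this point, without any graph. A short sketch (it needs a valid OPENAI_API_KEY, and the printed tool call is illustrative):

from langchain_core.messages import HumanMessage

ai_msg = model.invoke([HumanMessage(content="What's the temperature in Lagos?")])
print(ai_msg.tool_calls)
# Something like: [{'name': 'get_weather', 'args': {'location': 'Lagos'}, 'id': '...', 'type': 'tool_call'}]

Note that the model only asks for the tool here; nothing actually executes get_weather yet. That's exactly what the graph's ToolNode will handle in the next step.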

Step 4: Create the Agent Workflow using LangGraph

Here's where the magic happens! We define how the agent will think and act using a graph-based workflow.

# The "agent" node: ask the model what to do next
def call_model(state: MessagesState):
    messages = state["messages"]
    response = model.invoke(messages)
    return {"messages": [response]}

# Route to the tool node if the model requested a tool, otherwise finish
def should_continue(state: MessagesState) -> Literal["tools", END]:
    messages = state["messages"]
    last_message = messages[-1]
    if last_message.tool_calls:
        return "tools"
    return END

workflow = StateGraph(MessagesState)
tool_node = ToolNode(tools)

workflow.add_node("agent", call_model)
workflow.add_node("tools", tool_node)

workflow.add_edge(START, "agent")
workflow.add_conditional_edges("agent", should_continue)
workflow.add_edge("tools", "agent")  # after running a tool, hand control back to the agent

graph = workflow.compile()

This setup allows the agent to:

  1. Respond to user input
  2. Decide if it needs a tool
  3. Call the weather tool if needed
  4. Respond again with the result

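To watch that loop happen node by node, you can stream the graph instead of invoking it. A minimal sketch, assuming the pieces above are in place (the exact updates depend on the model's response):

from langchain_core.messages import HumanMessage

inputs = {"messages": [HumanMessage(content="How hot is it in Lagos?")]}

# stream_mode="updates" yields one dict per node run, keyed by the node name
for update in graph.stream(inputs, stream_mode="updates"):
    for node, value in update.items():
        print(f"--- {node} ---")
        print(value["messages"][-1])

You should see the agent node request the tool, the tools node return the temperature, and the agent node answer again.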
Step 5: Visualize the Workflow 🌐

If you're using Jupyter, visualize your agent's graph with Mermaid:

from IPython.display import Image, display
from langchain_core.runnables.graph import MermaidDrawMethod

display(
    Image(
        graph.get_graph().draw_mermaid_png(
            draw_method=MermaidDrawMethod.API,
        )
    )
)
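If you're not in a notebook, the same bytes can simply be written to disk instead. A small sketch (the filename is arbitrary, and the API draw method renders via a remote Mermaid service, so it needs internet access):

# draw_mermaid_png returns raw PNG bytes we can save to a file
png_bytes = graph.get_graph().draw_mermaid_png(draw_method=MermaidDrawMethod.API)
with open("agent_graph.png", "wb") as f:
    f.write(png_bytes)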

Step 6: Try It Out!

Let’s test how the agent responds to different types of messages.

from langchain_core.messages import HumanMessage

messages1 = [HumanMessage(content="Hello, how are you?")]
messages2 = [HumanMessage(content="How is the temperature in Lagos?")]

graph.invoke({"messages": messages1})
graph.invoke({"messages": messages2})

The first invocation gets a purely conversational reply, while the second triggers a get_weather tool call before the model answers. 🎯
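Since graph.invoke() returns the final state, print the message history to actually read the answers. A quick sketch (the replies themselves will vary):

result = graph.invoke({"messages": messages2})
for msg in result["messages"]:
    print(f"{msg.type}: {msg.content}")

You'll see the human message, the AI's tool request, a tool message carrying the temperature from get_weather, and finally the AI's natural-language answer.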


🚀 Wrap-Up

You now have a fully functioning conversational AI agent that can:

✅ Understand user intent

✅ Decide when to use external tools

✅ Fetch real-time weather data

✅ Reply intelligently using GPT-4o mini

This is just scratching the surface. You could add more tools (news, calendar, reminders) and turn this into a real personal assistant.
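Adding another tool is mostly a matter of decorating a function and including it in the tools list before bind_tools and ToolNode. The get_news function below is purely a hypothetical stub to show the shape:

@tool
def get_news(topic: str):
    """Get the latest headline for a topic."""
    # Hypothetical stub -- swap in a real news API call here
    return f"No news source wired up yet for '{topic}'."

tools = [get_weather, get_news]
model = ChatOpenAI(model="gpt-4o-mini").bind_tools(tools)
tool_node = ToolNode(tools)  # rebuild the ToolNode so the graph can execute both tools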


Got questions or building something similar? Drop a comment below! 🧠💬

Happy coding!
