





















































DeepSeek is fast becoming the open-source LLM of choice for developers and engineers focused on speed, efficiency, and control.
Join the "DeepSeek in Production" summit to see how experts are fine-tuning DeepSeek for real-world use cases, building agentic workflows, and deploying at scale.
Seats are filling fast and limited slots are left. Book now at 30% off.
Apply code DEEPSEEK30 at checkout to claim your 30% off (expires Sun, Aug 3).
This week, AI is dominating the headlines. OpenAI's GPT-5 is launching soon and is expected to enhance Microsoft Copilot, while ChatGPT gains a new Study Mode. Meta's stock surged on its AI investments, and Tesla's $16.5 billion chip deal gives Samsung's foundry business a boost. Gulf nations are betting big on AI as their "new oil," underscoring AI's transformative impact across industries and global economies, even as some consumer businesses struggle.
Want to find out more?
Let’s dig in!
LLM Expert Insights,
Packt
Date: August 27, 2025
Location: Los Angeles, California, USA
Format: In person
Cost: $295 (Normal pass) and $595 (VIP pass).
Visit the event page for more details.
KDD 2025 — ACM SIGKDD Conference on Knowledge Discovery & Data Mining
Date: August 3–7, 2025
Location: Toronto, Canada
Format: In person (with potential virtual access)
Cost: $600 (One-day pass) to $1,750 (Full conference; based on membership).
Visit the conference site for fee details.
Date: August 19–20, 2025
Location: Half Moon Bay, California, USA
Format: In person
Cost: $1,995.
Visit the event page for registration and pricing.
Upskilling with MCP and A2A protocols is your gateway to building AI agents. Don’t miss the chance to explore these events and get ahead.
In the rapidly evolving landscape of generative AI, creating truly autonomous systems requires thoughtful orchestration and sophisticated intent recognition. In this exclusive excerpt from his recent book, Building Business-Ready Generative AI Systems, AI expert and visionary Denis Rothman shares a practical approach for designing AI controllers that proactively select and execute tasks without explicit instruction.
Denis explains how leveraging GPT-4o’s semantic analysis capabilities allows your AI system to intelligently match vague or implicit user inputs to the correct action, be it sentiment analysis or semantic search.
Here’s a peek under the hood of how that works.
Selecting a scenario
The core of an AI controller is to decide what to do when it receives an input (system or human user). The selection of a task opens a world of possible methods that we will explore throughout the book. However, we can classify them into two categories: explicit selection, where the input carries instructions or a task tag, and proactive selection, where the controller must infer the task on its own.
Here, we’ll explore the second, more proactive approach. We’ll test two prompts with no instructions, no task tag, and no clue as to what is expected of the generative AI model. Although we will implement other, more explicit approaches later with task tags, a GenAISys AI controller orchestrator must be able to act proactively in certain situations.
The first prompt is an opinion on a movie, implying that a sentiment analysis might interest the user:
prompt = 1  # select which test prompt to run (set to 2 for the second)
if prompt == 1:
    input = "Gladiator II is a great movie although I didn't like some of the scenes. I liked the actors though. Overall I really enjoyed the experience."
if prompt == 2:
    input = "Generative AI models such as GPT-4o can be built into Generative AI Systems. Provide more information."
To provide the AI controller with decision-making capabilities, we will need a repository of instruction scenarios.
Defining task/instruction scenarios
Scenarios are sets of instructions that live in a repository within a GenAISys. While ChatGPT-like models are trained to process many instructions natively, domain-specific use cases need custom scenarios (we’ll dive into these starting from Chapter 5). For example, a GenAISys could receive a message such as Customer order #9283444 is late. The message could be about a production delay or a delivery delay. By examining the sender’s username and group (production or delivery department), the AI controller can determine the context, select a scenario, and take an appropriate decision.
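The routing idea described above can be sketched in a few lines. This is an illustrative example only, not the book's implementation: the function name, department labels, and scenario numbers are all hypothetical.

```python
# Hypothetical sketch: route an ambiguous message to a scenario number
# based on the sender's department. All names here are illustrative.
def route_scenario(message: str, sender_group: str) -> int:
    """Pick a scenario number from the message and the sender's context."""
    if "order" in message.lower():
        if sender_group == "production":
            return 1  # treat as a production-delay scenario
        if sender_group == "delivery":
            return 2  # treat as a delivery-delay scenario
    return 3  # fall back to a generic scenario

print(route_scenario("Customer order #9283444 is late.", "delivery"))  # → 2
```

The same ambiguous message resolves to different scenarios depending on who sent it, which is exactly the context-dependence the controller must handle.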
In this notebook, the scenarios are stored in memory. In Chapter 3, we will organize the storage and retrieval of these instruction sets in Pinecone vector stores.
In both cases, we begin by creating a repository of structured scenarios (market, sentiment, and semantic analysis):
scenarios = [
    {
        "scenario_number": 1,
        "description": "Market semantic analysis. You will be provided with a market survey on a given range of products. The term market must be in the user or system input. Your task is to provide an analysis."
    },
    {
        "scenario_number": 2,
        "description": "Sentiment analysis. Read the content and classify it as an opinion. If it is not an opinion, stop there. If it is an opinion, perform a sentiment analysis on these statements and provide a score with the label: Analysis score: followed by a numerical value between 0 and 1 with no + or - sign. Add an explanation."
    },
    {
        "scenario_number": 3,
        "description": "Semantic analysis. This is not an analysis but a semantic search. Provide more information on the topic."
    }
]
We will also add a list of the same scenarios, each stored as a single-element set containing the instruction text:
# List of single-element sets (note: {...} without key: value pairs
# is a set literal in Python, not a dictionary)
scenario_instructions = [
    {
        "Market semantic analysis. You will be provided with a market survey on a given range of products. The term market must be in the user or system input. Your task is to provide an analysis."
    },
    {
        "Sentiment analysis. Read the content and return a sentiment analysis on this text. Provide a score with the label named: Sentiment analysis score followed by a numerical value between 0 and 1 with no + or - sign, and add an explanation to justify the score."
    },
    {
        "Semantic analysis. This is not an analysis but a semantic search. Provide more information on the topic."
    }
]
We now extract the strings from the dictionary and store them in a list:
# Extract the string from each single-element set
instructions_as_strings = [
list(entry)[0] for entry in scenario_instructions
]
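A quick sanity check clarifies why `list(entry)[0]` works here: each entry is a single-element set, so converting it to a list exposes the one string it contains.

```python
# {...} without key: value pairs is a set literal in Python, so each
# entry is a single-element set; list(entry)[0] unwraps its one string.
entry = {"Semantic analysis. Provide more information on the topic."}
print(isinstance(entry, set))        # → True
print(list(entry)[0][:17])           # → Semantic analysis
```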
At this point, our AI controller has everything it needs to recognize intent—matching any incoming prompt to the best-fitting scenario.
Performing intent recognition and scenario selection
We first define the parameters of the conversational AI agent just as we did in the Conversational AI agent section:
# Define the parameters for the function call
mrole = "system"
mcontent = "You are an assistant that matches user inputs to predefined scenarios. Select the scenario that best matches the input. Respond with the scenario_number only."
user_role = "user"
The orchestrator’s job is to find the best task for any given input, making the AI controller flexible and adaptive. In some cases, the orchestrator may decide not to apply a scenario and just follow the user’s input. In the following example, however, the orchestrator will select a scenario and apply it.
We now adjust the input to take the orchestrator’s request into account:
# Adjust `input` to combine user input with scenarios
selection_input = f"User input: {input}\nScenarios: {scenarios}"
print(selection_input)
GPT-4o will now perform a text semantic similarity search as we ran in the Semantic Textual Similarity Benchmark (STSB) section. In this case, it doesn’t just perform a plain text comparison, but matches one text (the user input) against a list of texts (our scenario descriptions):
# Call the function using your standard API call
response = openai_api.make_openai_api_call(
selection_input, mrole, mcontent, user_role
)
Our user input is as follows:
User input: Gladiator II is a great movie
Then, the scenario is chosen:
# Print the response
print("Scenario:", response)
The scenario number is then chosen, stored with the instructions that go with it, and displayed:
scenario_number = int(response)
instructions = scenario_instructions[scenario_number - 1]
print(instructions)
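A slightly more defensive version of this parsing step (a hypothetical helper, not from the book) strips any whitespace the model may emit and range-checks the result before indexing into the instruction list:

```python
# Hypothetical defensive variant of the parsing step above: strip
# whitespace from the model's reply and verify the number is in range.
def parse_scenario_number(response: str, n_scenarios: int) -> int:
    number = int(response.strip())
    if not 1 <= number <= n_scenarios:
        raise ValueError(f"scenario {number} is out of range")
    return number

print(parse_scenario_number(" 2\n", 3))  # → 2
```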
For our Gladiator II example, the orchestrator correctly picks the sentiment analysis scenario:
{'Sentiment analysis Read the content return a sentiment analysis on this text and provide a score with the label named : Sentiment analysis score followed by a numerical value between 0 and 1 with no + or - sign and add an explanation to justify the score.'}
This autonomous task-selection capability—letting GenAISys choose the right analysis without explicit tags—will prove invaluable in real-world deployments (see Chapter 5). The program now runs the scenarios with the generative AI agent.
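The final step, assembling the selected instructions with the original input for the next model call, can be sketched as follows. The helper name is hypothetical; the unwrapping mirrors the `list(entry)[0]` pattern used earlier in the excerpt.

```python
# Hypothetical sketch of the prompt assembly for the next model call:
# unwrap the single-element set and prepend it to the user input.
def build_task_prompt(instructions: set, user_input: str) -> str:
    instruction_text = list(instructions)[0]
    return f"{instruction_text}\nUser input: {user_input}"

prompt = build_task_prompt(
    {"Sentiment analysis. Provide a score between 0 and 1."},
    "Gladiator II is a great movie.",
)
print(prompt)
```

The resulting string carries both the scenario's instructions and the user's text, ready to be sent through the same API wrapper used for scenario selection.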
This kind of proactive orchestration is foundational to building truly agentic systems that don’t wait to be told exactly what to do but instead figure it out based on context. The full book goes much deeper, covering vector search, scenario libraries, custom instruction sets, and multi-user memory. If you're ready to level up from simple chatbots to intelligent workflows that think for themselves, Building Business-Ready Generative AI Systems by Denis Rothman is your blueprint.
Build Human-Centered Generative AI Systems with Agents, Memory, and LLMs for Enterprise
Here is the news of the week:
OpenAI's GPT-5 Set for August Launch
OpenAI's GPT-5 is reportedly launching in August. CEO Sam Altman has teased its advanced capabilities, demonstrating its ability to answer complex questions instantly. The new model will also include "mini" and "nano" versions and integrate reasoning capabilities, marking a significant leap for the AI giant.
Meanwhile, Microsoft's Copilot is set to receive OpenAI's GPT-5 model, likely alongside ChatGPT, with a new "Smart" chat mode identified. Expected in August, GPT-5 will enhance Copilot's ability to adapt its response strategy automatically, streamlining user experience. This integration underscores Microsoft's commitment to positioning Copilot as a leading, user-friendly AI assistant.
ChatGPT Introduces "Study Mode" for Deeper Learning
ChatGPT has launched "Study Mode," a new feature designed to provide step-by-step guidance for learning instead of instant answers. Available to most users, this mode uses interactive prompts, scaffolded responses, and personalized support to encourage critical thinking and deeper understanding. Built with input from educators and scientists, "Study Mode" aims to help students truly learn material for homework, test prep, and new concepts.
Tesla's $16.5 Billion Chip Deal Boosts Samsung's US Foundry Business
Tesla has inked a $16.5 billion deal with Samsung Electronics for its next-generation AI6 chips, to be produced at Samsung's new Taylor, Texas factory. This significant contract is a major win for Samsung's struggling foundry business, helping secure a key client and boosting its shares by 6.8%. While unlikely to immediately impact Tesla's EV sales or robotaxi rollout, the deal is crucial for Samsung's goal to expand its contract chip manufacturing and compete with rivals like TSMC.
AI Investments Fuel Meta's Revenue Growth and Stock Rally
Meta's stock surged 11% after a strong Q3 revenue forecast, fueled largely by its heavy AI investments. CEO Mark Zuckerberg's memo detailed ambitious plans for "superintelligence" and noted visible improvements in Meta's AI systems. The spending spree, which spans poaching talent, acquiring startups, and building data centers, is paying off, reassuring investors and driving substantial stock gains.
Whether it's a scrappy prototype or a production-grade agent, we want to hear how you're putting generative AI to work. Drop us your story at nimishad@packtpub.com or reply to this email, and you could get featured in an upcoming issue of AI_Distilled.
📢 If your company is interested in reaching an audience of developers, technical professionals, and decision makers, you may want to advertise with us.
If you have any comments or feedback, just reply back to this email.
Thanks for reading and have a great day!
That’s a wrap for this week’s edition of AI_Distilled 🧠⚙️
We would love to know what you thought—your feedback helps us keep leveling up.
Thanks for reading,
The AI_Distilled Team
(Curated by humans. Powered by curiosity.)