CHAPTER 1
Introducing ChatGPT for Python Developers
Who Is This Book For?
First of all, this book is for Python developers who don't have any training or experience
in artificial intelligence, natural language processing, machine learning, or deep
learning. You may have heard of the term "language model," but I'm going to assume
that it's not a term that you use every day.
Second, you might be familiar with (or have tried) ChatGPT, but you don't quite understand how everything works "under the hood," and you're not sure how to get started using Python and ChatGPT programmatically together to "AI enable" your own applications and services.
Overview
In this chapter, we're going to dive right in and explain a few terms that you might not
be familiar with, and then go straight to the ChatGPT Playground. The Playground is
essentially a web interface for developers that's built on top of the ChatGPT REST API. So,
once you’re familiar with the Playground, the calls and responses from the ChatGPT API
will feel like second nature to you.
This book is filled with tons of examples and example code, so why bother typing
everything yourself? Below is the URL to the GitHub repo to access the source code:
https://github.com/Apress/Beginning-ChatGPT-for-Python
Listing 1-1. PROMPT. A Simple Prompt with ChatGPT About Python Design
Patterns
Describe 3 commonly used design patterns used by Python developers and why
those patterns are important
If you've had the opportunity to work at a company using Python professionally, then
you've probably received a question very similar to the prompt above in an interview.
The listing below is ChatGPT's response.
easier to maintain since the creation logic is encapsulated in the factory method.
This is particularly useful in Python when dealing with a set of related objects that
need to be created dynamically.
3. Observer Pattern
Description: The Observer Pattern defines a one-to-many dependency between
objects so that when one object changes state, all its dependents are
notified and updated automatically. It's typically implemented with a
subject (or observable) object that holds a list of dependents (observers)
and notifies them of any state changes, usually by calling one of their
methods.
Importance: This pattern is crucial for creating event-driven applications
or when changes in the state of one object may need to affect the state
of other objects without tightly coupling them. It's widely used in GUI
applications, real-time data monitoring, and event handling systems.
The Observer pattern in Python helps in the development of responsive,
decoupled systems that can react to changes dynamically.
These design patterns are important because they solve recurring design
problems, making code more reusable, maintainable, and scalable. By using
these patterns, Python developers can ensure their codebase is robust,
efficient, and adaptable to change.
Not bad, huh? ChatGPT was not only able to provide a list of common design patterns used by Python developers but also explain why those patterns are important, all in natural language.
So, at the end of the day, what does this mean for Python developers? Well, it’s
actually a big deal. That's because developers are now able to create applications that
can have some type of "understanding" of text in natural language. Before the advent of
artificial intelligence (AI) tools like ChatGPT, developers typically had to create regular
expressions (also called “regex”) in order to perform rudimentary character and string
pattern matching within a body of text. However, pattern matching and natural
language understanding are two totally different things.
Note If you already know that a regular expression can’t perform natural
language understanding or sentiment analysis, then feel free to skip this section.
I'm thoroughly convinced that every programmer, at some point, has met someone who happens to be an expert in writing regular expressions.
Regular expressions are great because they let you programmatically parse large amounts of text in order to find patterns within it.
However, one of the biggest downsides to regular expressions is that once they have been written, they are extremely difficult to read (in my opinion, even by the developer who originally wrote them).
So, let’s see how regex holds up against ChatGPT, which has natural language
processing (NLP) and natural language understanding (NLU) capabilities.
Listing 1-3 is a story of an impractically sad situation. However, it drives home the point that although regular expressions can be used to find words and phrases within a body of text, they can't be used to provide any type of NLU.
Listing 1-3. Sadstory.txt: A Sad Story About a Kid Who Didn’t Eat Ice Cream
In the city of Buttersville, USA on Milkmaid street, there's a group of three friends:
Marion Yogurt, Janelle de Queso, and Steve Cheeseworth III. On a hot summer's
day, they heard the music from an ice cream truck, and decided to buy something to
eat.
Marion likes strawberries, Janelle prefers chocolate, and Steve is lactose
intolerant. That day, only two kids ate ice cream, and one of them bought
a bottle of room-temperature water. The ice cream truck was fully stocked
with the typical flavors of ice cream.
Notice how it is a best practice to provide some type of separation, in this case,
“###”, when creating a prompt (like the one above) where you have both instructions
and data together. Later on, when we start using either the Playground or Python to
programmatically invoke the ChatGPT API, you’ll see that there’s a much better way to
provide that separation.
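For instance, a complete prompt built from the story in Listing 1-3 might look like the following sketch (the question wording here is only an illustration); the "###" marks where the instruction ends and the data begins:

Which kid was left sad because he couldn't eat any ice cream?
###
[the full text of Sadstory.txt from Listing 1-3]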
So, after sending the prompt, ChatGPT will provide you with an answer like this:

Since Steve is lactose intolerant and cannot eat ice cream, he would be the kid left sad because he couldn't enjoy the ice cream like Marion and Janelle.

As you can see, ChatGPT can utilize NLP and NLU, and therefore, it can understand (artificially intelligently, that is) the context of the scenario. It can interpret the relationships between the kids, their preferences, and Steve's lactose intolerance. It is able to understand that the last names of the children, the street name, and the city name are the names of dairy products but are obviously irrelevant to the question at hand.
In the context of AI, a “pre-trained model” refers to a neural network that has been
trained on a specific task or dataset before being made available for use by developers.
This training process involves exposing the model to large amounts of labeled and
categorized (also called, “annotated”) data and adjusting its internal parameters to
optimize its performance on the given task.
Let’s look at some of the models provided by OpenAI for developers to use to AI-
enable their applications.
o1
The o1 series are OpenAI's reasoning models, which are trained to spend more time "thinking" through a problem before they respond. Some of the o1 models available are
• o1
• o1-mini
GPT-4
GPT-4 is one of the latest generations of OpenAI's GPT family of models. GPT stands for generative pre-trained transformer, and these models have been trained to understand natural language as well as multiple programming languages. The GPT-4 models take text and images as inputs for the prompt and provide text as the output. Some of the GPT-4 models available are
• gpt-4o
• gpt-4o-mini
• gpt-4o-realtime
• gpt-4o-audio
DALL·E
The DALL·E model can generate and edit images given a natural language prompt. Later in this book, in Chapter 5, we're going to have some fun with the DALL·E model to visualize the conversation of what is being discussed in your favorite podcast episode. Some of the DALL·E models available are
• dall-e-3
• dall-e-2
TTS
The TTS model takes text and converts it to audio with surprisingly good results. In most cases, the audio is almost indistinguishable from a human voice. Some of the TTS models available are
• tts-1
• tts-1-hd
Whisper
Simply stated, the Whisper model converts audio into text. In this book, we're going to use the Whisper model to search for text in a podcast episode.
Embeddings
The Embeddings model can convert large amounts of text into a numerical representation of how the strings in the text are related. So how is that useful? Embeddings allow developers to do specific tasks using custom datasets. Yes, this means that you can train the Embeddings model on specific data that is relevant to your application. This allows you to do operations such as
• Searching within your own dataset (where results are ranked by relatedness to a query)
• Recommending items (where items with related text strings are recommended)
• Detecting anomalies (where outliers with little relatedness are identified)
• Measuring diversity (where similarity distributions are analyzed)
• Classifying data (where text strings are classified by their most similar label)
Moderations
The Moderations models are fine-tuned models that can detect whether text may be sensitive or unsafe. These models can analyze text content and classify it according to the following categories:
• hate
• hate/threatening
• harassment
• harassment/threatening
• self-harm
• self-harm/intent
• self-harm/instructions
• sexual
• sexual/minors
• violence
• violence/graphic
The Moderations models available are
• text-moderation-latest
• omni-moderation-latest
• text-moderation-stable
Legacy and deprecated
Since the debut of ChatGPT, OpenAI has continued to support their older AI models, but they have been labeled as "legacy" or "deprecated" models. These models continue to exist; however, OpenAI has released other models that are more accurate, faster, and cheaper to use.
Figure 1-2. Modify the Temperature in Order to Get More (or Less) Creative
Responses
To create an API key, log in to your OpenAI account and head to the API keys page at the following URL:
https://platform.openai.com/account/api-keys
As you can see from Figure 1-3, you can name your API key anything that you want.
Figure 1-3. Before You Can Access the Playground or Make API Calls, You Need to
Have an API Key
You should be aware that, as a requirement to create an API key, you will need to provide OpenAI with a credit card so that you can be billed for usage of their models.
Now that you’ve got your API key, let’s go straight to the Chat Playground at the
following URL:
https://platform.openai.com/playground
Upon entering the Playground, click on the combobox at the top and select the Chat
option to start the Chat Playground, as shown in Figure 1-4.
Figure 1-4. After Entering the Playground, Select the Chat Option
Figure 1-5 depicts the Chat Playground, with certain parts numbered so that they can
be easily identified.
Figure 1-5. The Chat Playground Can Be a Little Daunting at First Glance
1. Model
Earlier in this chapter, we talked about the various models that are available for
developers. Click the Model field in order to see a list of models that are available.
You may also see that some models have a month and day associated with their name, which is simply a snapshot of that model. Programmatically selecting a snapshot gives developers some predictability in the responses that they will receive from ChatGPT, because the current (non-snapshot) models are continuously updated.
2. System
As you can see, the user interface for the Chat Playground is vastly more complex than
the ChatGPT website that everyone else uses. So, let’s talk about the System field (see
Figure 1-5, item 2).
In my opinion, ChatGPT can be described as “a vastly powerful form of artificial
intelligence…with amnesia." Therefore, when you're using ChatGPT programmatically,
you need to inform the system who it is in the conversation!
Figure 1-6, shown below, gives you a glimpse of the thousands of different roles that
ChatGPT can play in a conversation.
Figure 1-6. The System Field in the Chat Playground Allows You to Set the Role
That ChatGPT Will Play in the Conversation
3. User/Assistant
The User field (Figure 1-5, item 3) in the Chat Playground is where you type your
prompt to ChatGPT, which can be anything that you want, for example, “describe how
telemedicine will affect the medical industry.”
When you initially load the Chat Playground, the Assistant field is not visible. In order to make it appear, you need to click the "User" button to switch it to the Assistant field.
Now, you may be asking yourself, “Why is this field needed at all?” Well, that’s a fine
question. If you want ChatGPT to remember something that it has already told you in a
previous conversation, then you need to type into the Assistant field anything that it has
already told you that you think is relevant in order to continue with the conversation.
Remember, it is a vastly powerful AI, but it has amnesia!
4. Add (Optional)
Add (Figure 1-5, item 4) is where you’d click in order to add either an Assistant message
to the conversation or another User message. Now, you may ask, “What’s the point of
adding yet another User message to the conversation, when I can type what I want in the
original User field above?” Good question.
If you wanted to separate your command from your data, then you would use a
separate User message for that.
Do you remember in Listing 1-4 earlier in this chapter, where we had to use the “###”
to separate the command to ChatGPT from the data that we wanted it to analyze? Well,
this is no longer needed: the command becomes the first User message, and the data becomes the second User message.
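As a quick preview of how this looks when calling the API from Python (something we'll do properly in Chapter 3), the instruction and the data simply become two separate user entries in the messages list. The story text below is just a placeholder:

messages = [
    {"role": "user", "content": "Which kid was left sad because he couldn't eat ice cream?"},
    {"role": "user", "content": "In the city of Buttersville, USA on Milkmaid street, ..."}
]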
5. Temperature (Optional)
As noted earlier in this chapter, the temperature selector ranges between 0 and 2 and
allows you to select the “randomness” of the response.
7. Code (Optional)
After you have submitted your prompt using the Playground, you can click on the code
button (Figure 1-4, item 7) in order to see the code necessary to send the same prompt
using any of the languages that they support.
You are encouraged to try these two prompts yourself and see what the responses
are. Adjust the settings for the temperature and token length to get familiar with how
those parameters affect the outcome.
Conclusion
You just learned more about how ChatGPT can be used by developers. We covered some
of the basics of the Chat Playground, which is a web interface for developers to interact
with the ChatGPT API.
We talked about how to set the system, user, and assistant roles in the Chat
Playground and how to adjust settings such as the temperature and maximum length
of output.
You learned about some of the parameters and terminology necessary to use the
Chat Playground, such as the model, temperature, and tokens. Getting familiar with the
parameters of the Chat Playground is essential to knowing how to use the REST API, since the Playground exposes a subset of the capabilities offered by the REST API.
In the next chapter, we’ll see how to use ChatGPT as your “pair-programmer” and
create a productivity app that gives us weather and arrival time to work.
CHAPTER 2
Using ChatGPT as Your Python Pair-Programmer
Overview
This chapter will walk you through obtaining and testing your API key, get you comfortable making calls to the OpenAI Python API for ChatGPT, and introduce you to how it's done for other models as well. Furthermore, we'll be using ChatGPT as a pair-programmer to build an application that receives the name of a city and the time you'd like to arrive at work, then gives you the weather and an estimated arrival time based on traffic! Sound exciting? Then let's jump right in.

Before anything else, make sure you have a recent version of Python 3 by running python3 --version in your terminal.
If Python 3 is installed, then your terminal will display the version that you have.
Now, of course, if your terminal is informing you that you have a noncompatible version
(i.e., older than 3.7.1), then you should update Python on your machine. Additionally, if
you don’t have Python 3 installed at all, then you should install it before proceeding.
Now that you have all the prerequisites, it’s time to install the OpenAI Python library
itself. Go back to your terminal window, and execute the following command:
pip install --upgrade openai
The command above will install the OpenAI library for you if it doesn’t already exist
and will update the library to the most recent version if it’s already there.
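To confirm that the installation worked and to see which version of the library you ended up with, a quick check from Python is enough; this simply reads the library's version attribute:

# Verify that the OpenAI library is importable and print its version
import openai
print(openai.__version__)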
For Mac OS
First off, open Terminal. You can find it in the applications folder or use spotlight
(Command + Space) to search for it.
Next, edit your shell profile. For older macOS versions, you'd use the command nano ~/.bash_profile. Users of newer macOS versions (where zsh is the default shell) will need to use nano ~/.zshrc. This will open the profile file in a text editor.
Now, let’s add your environment variable. In the editor, add the line below, replacing
'your-api-key-here' with your actual API key without the single quotation marks.
export OPENAI_API_KEY='your-api-key-here'
Let’s save and exit by pressing Ctrl+O to write the changes, then pressing Ctrl+X to
close the editor.
Now, you’re going to load your profile by using source ~/.bash_profile for older
Mac OS versions and source ~/.zshrc for the newer Mac OS versions. This will load the
updated profile.
Finally, we’re going to verify that we’ve done everything correctly. In the terminal,
type echo $OPENAI_API_KEY. If everything went well, it should display your API key,
confirming the setup.
echo $OPENAI_API_KEY
For Windows
Start by opening the command prompt. You can find it by searching "cmd" in the
start menu.
Now, we're going to set the environment variable by using the command below, replacing your-api-key-here with your actual API key. Note that setx stores OPENAI_API_KEY permanently for future Command Prompt sessions; it does not affect the window you're currently working in, so open a new Command Prompt before testing it. (If you only need the variable for the current session, use set OPENAI_API_KEY=your-api-key-here instead.)
setx OPENAI_API_KEY "your-api-key-here"
Alternatively, you can add the variable through the system properties dialog:
• Right-click “This PC” or “My Computer” and select “Properties.”
• Click “Advanced system settings.”
• Click the “Environment Variables” button.
• In the “System variables” section, click “New...” and enter OPENAI_
API_KEY as the variable name and your API key as the variable value.
To make sure everything is working properly, reopen the command prompt and type
the command below to verify the setup. It should display your API key.
echo %OPENAI_API_KEY%
For Linux
To set the environment variable for the current session, open a terminal window and use
the export command. Replace 'your-api-key-here' with your actual API key.
export OPENAI_API_KEY='your-api-key-here'
To make the environment variable persistent across sessions, you can add it to your
shell's configuration file, such as ~/.bashrc for Bash. Here's how you can do it:
Open the configuration file in a text editor. For example:
nano ~/.bashrc
Then add the following line at the end of the file and save it:
export OPENAI_API_KEY='your-api-key-here'
To verify that the environment variable is set correctly, you can echo its value in the
terminal. This command should display your API key:
echo $OPENAI_API_KEY
Note To prevent your .env file from being unintentionally shared via version control, create a .gitignore file in your project's root directory. Add a line with .env to ensure the confidentiality of your API key and other sensitive information.
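For example, a minimal .gitignore for this project needs only a single entry to keep the key file out of version control:

# .gitignore: keep the file holding our API key out of Git
.env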
Next, use the terminal or an IDE to create both the .gitignore and .env files. Copy
your API key, and replace 'your-api-key-here' with your actual API key without the single
quotation marks.
At this point, your .env file should look like this:
OPENAI_API_KEY='your-api-key-here'
Finally, you can import the API key into your Python code using the following
snippet.
Listing 2-1. Importing Your .env File into Your Python Application
import os
from dotenv import load_dotenv
from openai import OpenAI

# Load environment variables from .env
load_dotenv()

# Use the API key from the environment variable
api_key = os.getenv("OPENAI_API_KEY")
client = OpenAI(api_key=api_key)
Listing 2-2. Coding Your API Key Directly into Your Application
from openai import OpenAI
# API key
API_KEY = "your-api-key-here"
# Initialize OpenAI client with hardcoded API key
client = OpenAI(api_key=API_KEY)
Now, let’s make our first application with the OpenAI API, and test out the key at the
same time by getting a list of models useable with the OpenAI API.
Note From this point on, the code examples will be accessing our API key with a local .env file.
After successfully getting an instance of the OpenAI and Model objects, the API will
provide what is referred to within the OpenAI API as a “SyncPage,” which is essentially a
dictionary with the structure shown in Table 2-1.
Note Since objects can contain dictionaries (which can be hard to represent in a table), we're using the notation "↳" to indicate the elements of the dictionary. As you can see from Table 2-1, "id," "object," "created," and "owned_by" are all elements of the "data" dictionary in the response.
Now that we have the details of the Model object, let’s talk about how we can test the API
key we obtained in the first chapter. There are actually a few ways to do this.
Listing 2-3. Getting a List of Models Available with the OpenAI API by Calling
the models.list() method with model_lister.py
"""Module for interacting with OpenAI API to list the models available"""
import os
from dotenv import load_dotenv
from openai import OpenAI
models_list = client.models.list()
print(models_list)
After running the code in Listing 2-3, Listing 2-4 is a truncated response that you
should expect to see.
Listing 2-5. Cleaning Up the Response We Get from the OpenAI API with
model_lister_pretty.py
"""Module for interacting with OpenAI API to list the models available"""
import os
from dotenv import load_dotenv
from openai import OpenAI
models_list = client.models.list()
Figure 2-1. The ChatGPT Tokenizer Can Give You a Quick Count of the Tokens in
Your Prompt
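If you'd rather count tokens from code instead of pasting text into the web tool, OpenAI's separate tiktoken library can produce the same count locally. Here's a minimal sketch; it assumes you've run pip install tiktoken and that your installed version knows the gpt-4o encoding:

# Count the tokens in a prompt locally with tiktoken
import tiktoken

encoding = tiktoken.encoding_for_model("gpt-4o")
prompt = "Describe 3 commonly used design patterns used by Python developers"
print(len(encoding.encode(prompt)))  # the number of tokens this prompt consumes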
Listing 2-8. Getting a Response Back from ChatGPT to Build Our Weather App,
accuweather_forecaster.py
import requests


def get_weather(accuweather_api_key, city):
    base_url = "http://dataservice.accuweather.com/locations/v1/cities/search"
    params = {
        "apikey": accuweather_api_key,
        "q": city,
    }
    response = requests.get(base_url, params=params)
    location_data = response.json()

    if location_data:
        location_key = location_data[0]["Key"]
        weather_url = f"http://dataservice.accuweather.com/currentconditions/v1/{location_key}"
        params = {
            "apikey": accuweather_api_key,
        }
        response = requests.get(weather_url, params=params)
        weather_data = response.json()

        if weather_data:
            weather_text = weather_data[0]["WeatherText"]
            temperature = weather_data[0]["Temperature"]["Metric"]["Value"]
            print(f"The current weather in {city} is {weather_text} with a temperature of {temperature}°C.")
        else:
            print("Weather data not found.")
    else:
        print("Location not found.")


if __name__ == "__main__":
    accuweather_api_key = "api-key"
    city = input("Enter your city: ")
    get_weather(accuweather_api_key, city)
Creating accuweather_forecaster.py
with the AccuWeather API
So, let’s follow the steps we got from ChatGPT. First, we need to set up an account with
AccuWeather. Go to https://developer.accuweather.com/ and hit register to sign up.
After signing in, you’ll notice that the menu changes, and you can now navigate over
to My Apps.
Figure 2-3. Navigating to the My Apps Tab on the AccuWeather Developer Portal
Once you’re on the My Apps tab, you’re going to want to Add a new App in order to
get an API key to use in our application.
As you set things up, you’ll need to name your app and answer benign questions like
where the API will be used and what you plan to be creating with the API. As you can
see in Figure 2-5, we’ve called our AccuWeather app “Weather Tracker”; however, this
is not the name of our actual Python application, accuweather_forecaster.py.
The names don't have to match. You can name your app in AccuWeather whatever you like. The most important configuration here is the product you specify you're intending to use: be sure to enable the Core Weather Limited Trial. The difference between choosing Core Weather and MinuteCast is the difference between an application that works and an application that doesn't (and one that takes a long time to figure out why it doesn't; can you tell this was a memorable experience?).
It may take some time for your application to be approved, but usually, this is a very
quick process. When it’s done, you’ll see your new application on the My Apps page,
which will include your API key! Mission accomplished.
Figure 2-6. Seeing Your Newly Created App on the AccuWeather Developer Portal
Listing 2-9. Inputting Our API Key from AccuWeather into Our accuweather_forecaster.py App
if __name__ == "__main__":
accuweather_api_key = "your-accuweather-api-key-here"
city = input("Enter your city: ")
get_weather(accuweather_api_key, city)
Now, with everything all together, when you run the code, you should see a prompt
that asks you to input the name of a city.
Listing 2-10. After Running the Script, Enter a Valid City Name
Enter your city:
Once you provide a valid city name, you should find that everything runs smoothly, and you'll get a response that looks something like this (obviously, the forecast will change depending on what city you decide to enter and, well, what the weather is like the day you run it).
Listing 2-11. It Was a Little Chilly in New York City on the Day We Ran the
accuweather_forecaster.py Script
The current weather in New York City is Sunny with a temperature of 5.6°C.
We get a nice, basic description of the weather in New York City from running the
code, but there are a couple of things we can improve on.
AccuWeather has icons for various weather conditions. Right now, we can see that
the weather in New York is currently sunny, but we could also get a longer description
that describes the weather for the rest of the day. Americans might also prefer to see the
temperature in Fahrenheit (as American authors, that’s what we’re going to choose as
well). So, let’s get ChatGPT to help us make those changes.
So that ChatGPT can retain the context, we’re going to prompt ChatGPT in the same
Chat conversation as follows.
```python
import requests


def get_weather(accuweather_api_key, city):
    base_url = "http://dataservice.accuweather.com/locations/v1/cities/search"
    params = {
        "apikey": accuweather_api_key,
        "q": city,
    }
    response = requests.get(base_url, params=params)
    location_data = response.json()

    if location_data:
        location_key = location_data[0]["Key"]
        weather_url = f"http://dataservice.accuweather.com/currentconditions/v1/{location_key}"
        params = {
            "apikey": accuweather_api_key,
        }
        response = requests.get(weather_url, params=params)
        weather_data = response.json()

        if weather_data:
            weather_text = weather_data[0]["WeatherText"]
            temperature_imperial = weather_data[0]["Temperature"]["Imperial"]["Value"]

            # Fetching icon number and icon URL
            icon_number = weather_data[0]["WeatherIcon"]
            icon_url = f"http://developer.accuweather.com/sites/default/files/{icon_number:02d}-s.png"

            # Making API call to get weather headline
            headline_url = f"http://dataservice.accuweather.com/forecasts/v1/daily/1day/{location_key}"
            response = requests.get(headline_url, params=params)
            headline_data = response.json()

            if headline_data and "Headline" in headline_data:
                headline_text = headline_data["Headline"]["Text"]
                print(f"The current weather in {city} is {weather_text} with a temperature of {temperature_imperial}°F.")
                print(f"Weather Headline: {headline_text}")
                print(f"Weather Icon: {icon_url}")
            else:
                print("Weather headline not found.")
        else:
            print("Weather data not found.")
    else:
        print("Location not found.")


if __name__ == "__main__":
    accuweather_api_key = "your-accuweather-api-key-here"
    city = input("Enter your city: ")
    get_weather(accuweather_api_key, city)
```
This modified version of the code:
Converts temperature from Celsius to Fahrenheit.
Retrieves weather icon number and description.
Prints the weather icon number and constructs the icon URL.
Make sure to replace "your-accuweather-api-key-here" with your actual
AccuWeather API key.
When we run this code, we get exactly what we were looking for. We’ll enter a valid city
name again (New York City, in our case), and this is the response we got.
Listing 2-14. Response from the AccuWeather API After Running the Modified Code in Our accuweather_forecaster.py Script
Weather in New York City:
Temperature: 45.0°F
Weather Text: Sunny
Weather Headline: Expect rainy weather late tomorrow night through late Thursday night
Icon Number: 1
Icon URL: http://developer.accuweather.com/sites/default/files/01-s.png
Now, our code works exactly the way we want it to, but there’s even more room for
improvement. If you’re using an IDE with any kind of linting, you might see quite a few
linting errors from the code we’re running now. These are some of the linting errors you
might see with the code as it is now.
Redefining name 'city' from outer scope (line 45)
Redefining name 'city' from outer scope (line 45)
Missing timeout argument for method 'requests.get' can cause your program to hang indefinitely
Missing timeout argument for method 'requests.get' can cause your program to hang indefinitely
Missing timeout argument for method 'requests.get' can cause your program to hang indefinitely
Missing module docstring
Missing function or method docstring
Line too long (102/100)
Line too long (103/100)
Line too long (121/100)
Constant name "accuweather_api_key" doesn't conform to UPPER_CASE naming style
Note So, what exactly is linting? Linting is the process of analyzing code for errors, but it's mostly for stylistic inconsistencies. It helps maintain code quality, enforces coding standards, and helps make code more readable by using common conventions.
To clean this up, we’re just going to run this by ChatGPT, again, in the same Chat
conversation.
My IDE is making me aware of the following linting errors. Help me modify my code
to comply to the styling standards.
###
Missing function or method docstring
Line too long (102/100)
Line too long (103/100)
Line too long (121/100)
Constant name "accuweather_api_key" doesn't conform to UPPER_CASE naming style
Listing 2-17. Getting a Response from ChatGPT That Resolves All of Our Linting
Errors in accuweather_forecaster_cleaned_up.py
To address the linting errors and improve the code's compliance with
styling standards, you can make the following modifications:
- Add module docstring to describe the purpose of the script.
- Add function docstring to describe the purpose of the `get_weather`
function.
- Rename the `accuweather_api_key` variable to `ACCUWEATHER_API_KEY` to
comply with the UPPER_CASE naming convention for constants.
- Remove redefining `city` from the outer scope by renaming the input
parameter of the `get_weather` function.
- Add a timeout argument to the `requests.get` calls to prevent hanging
indefinitely.
- Split long lines to comply with the maximum line length.
Here's the modified code:
```python
"""
Script to fetch current weather information using AccuWeather API.
"""
import requests


def get_weather(accuweather_api_key, city):
    """
    Fetch and print the current weather for a given city using the AccuWeather API.

    Returns:
        None
    """
    base_url = "http://dataservice.accuweather.com/locations/v1/cities/search"
    params = {
        "apikey": accuweather_api_key,
        "q": city,
    }
    response = requests.get(base_url, params=params, timeout=10)
    location_data = response.json()

    if location_data:
        location_key = location_data[0]["Key"]
        weather_url = (
            f"http://dataservice.accuweather.com/"
            f"currentconditions/v1/{location_key}"
        )
        params = {
            "apikey": accuweather_api_key,
        }
        response = requests.get(weather_url, params=params, timeout=10)
        weather_data = response.json()

        if weather_data:
            weather_text = weather_data[0]["WeatherText"]
            temperature_imperial = weather_data[0]["Temperature"]["Imperial"]["Value"]

            # Fetching icon number and icon URL
            icon_number = weather_data[0]["WeatherIcon"]
            icon_url = (
                f"http://developer.accuweather.com/sites/default/files/"
                f"{icon_number:02d}-s.png"
            )

            # Making API call to get weather headline
            headline_url = (
                f"http://dataservice.accuweather.com/forecasts/v1/daily/1day/"
                f"{location_key}"
            )
            response = requests.get(headline_url, params=params, timeout=10)
            headline_data = response.json()

            if headline_data and "Headline" in headline_data:
                headline_text = headline_data["Headline"]["Text"]
                print(f"The current weather in {city} is {weather_text} "
                      f"with a temperature of {temperature_imperial}°F.")
                print(f"Weather Headline: {headline_text}")
                print(f"Weather Icon: {icon_url}")
            else:
                print("Weather headline not found.")
        else:
            print("Weather data not found.")
    else:
        print("Location not found.")


if __name__ == "__main__":
    ACCUWEATHER_API_KEY = "your-accuweather-api-key"
    user_city = input("Enter your city: ")
    get_weather(ACCUWEATHER_API_KEY, user_city)
```
These modifications should address the linting errors and improve the code's readability
and compliance with styling standards. Be sure to replace "your-accuweather-api-
key-here" with your actual API key.
On the Google Maps Platform page, click Getting Started to set up your account for
using the APIs. After following the steps Google presents you with, you’ll be taken to the
page on Figure 2-8, where you can see the different APIs available with the Google Maps
Platform. But what’ll probably catch your attention first is the fact that you still have to
Finish Account Setup.
Finishing your account setup consists of entering credit card information so you can start the free trial, which gives you $200 worth of credit, more than enough for the purposes of our testing.
Figure 2-9. Navigating to the APIs & Services Tab on the Google Maps Platform
After properly setting up your account, you’ll be greeted by a welcome page. On the
left, you’ll find a menu icon you can click on to reveal a list of services you have access to.
You want to navigate to APIs & Services and then click Library.
You’re going to want to click on Maps JavaScript API and then Enable it.
After enabling the JavaScript API, go back twice and open the menu again, and then
click on the Google Maps Platform to see the dashboard for Google Maps.
From here, we’re going to be looking at another side menu that looks similar to the
one from before, but here we're going to click on APIs & Services again to see a different
page this time. From here, you can click on Routes to enable it.
Once you’ve enabled the API we need, navigate to the Keys & Credentials tab and
Create a new API key.
Figure 2-13. The Keys and Credentials Page on the Google Maps Platform
Now that we have our Google Maps API key, we can make use of our pair programmer. There are actually two ways we can use ChatGPT to make it easier to run code that retrieves the information we want: how far away the destination is and how long it will take to get there (and here's a fun fact: the Google Maps Routes API factors traffic trends into the estimated travel time, making our application perfect for productivity!).
Here’s the cURL code from the Google Maps Platform documentation.
Listing 2-18. cURL Code for Using the Google Maps Routes API
curl -X POST -d '{
"origin":{
"location":{
"latLng":{
"latitude": 37.419734,
"longitude": -122.0827784
}
}
},
"destination":{
"location":{
"latLng":{
"latitude": 37.417670,
"longitude": -122.079595
}
}
},
"travelMode": "DRIVE",
"routingPreference": "TRAFFIC_AWARE",
"departureTime": "2023-10-15T15:01:23.045123456Z",
"computeAlternativeRoutes": false,
"routeModifiers": {
"avoidTolls": false,
"avoidHighways": false,
"avoidFerries": false
},
"languageCode": "en-US",
"units": "IMPERIAL"
}' \
-H 'Content-Type: application/json' -H 'X-Goog-Api-Key: YOUR_API_KEY' \
-H 'X-Goog-FieldMask: routes.duration,routes.distanceMeters,routes.
polyline.encodedPolyline' \
'https://routes.googleapis.com/directions/v2:computeRoutes'
Remember, when practicing prompt engineering, the best practice is to separate the instructions from the content with three hashes (###) to get the best results. To that end, here's the prompt we gave ChatGPT, with the content being the cURL code presented in Listing 2-18:
Listing 2-19. Prompting ChatGPT to Provide Python Code Using the cURL
Command As Context
I'm coding in Python and trying to build an application that makes a request to
the Google Maps API and pulls down the amount of time it'll take to get from one
location to the other. This is the cURL code that would accomplish that goal. Help
me convert this to something I can use in Python. ### [cURL code from listing 2-
18]
Listing 2-20. Getting a Response Back from ChatGPT with Useable Python Code
import requests
import json
# Define the payload
payload = {
    "origin": {
        "location": {
            "latLng": {
                "latitude": 37.419734,
                "longitude": -122.0827784
            }
        }
    },
    "destination": {
        "location": {
            "latLng": {
                "latitude": 37.417670,
                "longitude": -122.079595
            }
        }
    },
    "travelMode": "DRIVE",
    "routingPreference": "TRAFFIC_AWARE",
    "departureTime": "2025-02-17T17:00:00Z",
    "computeAlternativeRoutes": False,
    "routeModifiers": {
        "avoidTolls": False,
        "avoidHighways": False,
        "avoidFerries": False
    },
    "languageCode": "en-US",
    "units": "IMPERIAL"
}

# Define headers
headers = {
    "Content-Type": "application/json",
    "X-Goog-Api-Key": "YOUR_API_KEY",
    "X-Goog-FieldMask": "routes.duration,routes.distanceMeters,routes.polyline.encodedPolyline"
}

# Send the POST request to the Routes API endpoint from the cURL command
url = "https://routes.googleapis.com/directions/v2:computeRoutes"
response = requests.post(url, headers=headers, data=json.dumps(payload))

# Print the JSON response
print(response.json())
Note In order for this code to run, you must set a date and time in the departureTime parameter that is beyond the time at which you run the code. In other words, the departure time cannot be set in the past. In this example, the date is set to a specific time in 2025, but if you're running this code after that date, you need to adjust the departureTime value in order for your code to run as expected.
After running this code, this is the response you should get.
Listing 2-21. The Successful Response After Running the Code from ChatGPT
{'routes': [{'distanceMeters': 773, 'duration': '158s', 'polyline':
{'encodedPolyline': 'ipkcFjichVzQ@d@gU{E?'}}]}
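If you want to turn that raw response into something friendlier, here's a small sketch that pulls the distance and duration out of the JSON shown above (it reuses the response object from the code we just ran):

# Parse distance and duration out of the Routes API response
route = response.json()["routes"][0]
distance_meters = route["distanceMeters"]
duration_seconds = int(route["duration"].rstrip("s"))  # durations arrive as strings like "158s"
print(f"Distance: {distance_meters} m, Duration: {duration_seconds // 60} min {duration_seconds % 60} s")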
import requests
route_data = response.json()
if route_data["status"] == "OK":
route_info = route_data["routes"][0]["legs"][0]
distance = route_info["distance"]["text"]
duration = route_info["duration"]["text"]
print(f"Distance: {distance}, Duration: {duration}")
else:
print("Error:", route_data["status"])
else:
print("Error:", response.status_code)
If you run this code, you should get this very clean, very simple response:
Distance: 2,789 mi, Duration: 1 day 17 hours
Conclusion
Let's debunk a common myth and set the record straight: ChatGPT isn't some mind-reading wizard! As an AI tool, it's not here to replace the expertise of developers and architects; it's here to complement that expertise and save a lot of time and hassle in the day-to-day frustrations the average programmer faces on a regular basis. Sure, it can help translate your natural language requests into code, but let's not forget the crucial role of human judgment. Ultimately, it's up to developers to decide whether to use, tweak, or toss out the resulting code.
CHAPTER 3
Creating a Basic ChatGPT Client in Python

Listing 3-1. chatgpt_client.py, Our First ChatGPT Client
"""
Script to demonstrate API call using OpenAI's GPT-4 for chat completions.
"""
from dotenv import load_dotenv
from openai import OpenAI
# Load environment variables from .env
load_dotenv()
#instantiate the OpenAI object
client = OpenAI()
response = client.chat.completions.create(
model="gpt-4o",
messages=[
{
"role": "system",
"content": "You are a Python developer"
},
{
"role": "user",
"content": "Why is Python typically used for data science?"
}
],
temperature=0.85,
max_tokens=1921,
top_p=1,
frequency_penalty=0,
presence_penalty=0
)
print(response)
As you analyze the code in Listing 3-1, you’re going to see several things that are
quite familiar from the Chat Playground such as the model, messages, temperature,
and tokens.
Note In this chapter, we'll use the Python datatypes, so you'll see a list where the official OpenAPI documentation specifies an Array.
Using OpenAI.chat.completions.create()
to Send Messages to ChatGPT
The OpenAI.chat.completions.create() method is basically a one-to-one representation of what you can do in the Chat Playground; therefore, this method should feel like second nature to you.
Table 3-1 describes the format of the parameter necessary to call the OpenAI.chat.
completions.create() method. Although the table is lengthy, after a quick glance, you
should see that only a few fields are actually required in order to successfully invoke
the method.
The response to the method is called a ChatCompletion.
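For example, once the code in Listing 3-1 has run, you can pull just the text of the reply out of the ChatCompletion with a single line (using the response variable from that listing):

# Print only the assistant's reply text from the ChatCompletion
print(response.choices[0].message.content)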
OK, Table 3-1 appears to be a little daunting! However, as mentioned earlier, only the
model and messages are required parameters.
Additionally, we also have the code in Listing 3-1 above in order to show how the
parameters are actually used within a real application.
So, as you can see, as Python developers, we have several options and parameters available to us that ordinary people can't use through the ChatGPT website or the Chat Playground.
Now, the one parameter that needs the most detailed explanation is the messages parameter, so let's analyze that further.
The good news is that if you refer back to Chapter 1, where we explained how to use the Chat Playground, you can see that we've already encountered the first three message types! The only new message type that we're not currently familiar with is the tool message.
Listing 3-2 is a snippet from Listing 3-1 that shows how the system message is
formatted.
messages=[
{
"role": "system",
"content": "You are a Python developer"
},
...
Listing 3-3 is a snippet from Listing 3-1 that shows how the user message is
formatted.
...
{
"role": "user",
"content": "Why is Python typically used for data science?"
}
...
Note In case you forgot, the assistant message is used to "remind" ChatGPT what it told you in a previous response. Ideally, this can allow you to continue a conversation that you had with it weeks or months in the past.
Listing 3-4 is a snippet from Listing 3-1 that shows how the assistant message is
formatted.
...
{
"role": "assistant",
Running chatgpt_client.py
So, after running the code we created in Listing 3-1, we can expect a response that will
look something like this in Listing 3-5.
ChatCompletion(id='chatcmpl-9ACnRg1bk54jYeIFbxJ3yDnomQmij',
choices=[Choice(finish_reason='stop', index=0, message=ChatCompletion
So, as we take a brief look at Listing 3-5, we see that the bulk of the response (called a ChatCompletion) is the answer to the question that we posed to ChatGPT in Listing 3-1 at the very beginning of the chapter. However, there's a lot of metadata around our response, so let's analyze the ChatCompletion object in more detail.
ChatCompletionMessage
Table 3-7. The Structure of the ChatCompletionMessage

Field        Type            Description
role         String          This will always be the literal, "assistant."
content      String or null  This is a string that contains the response from ChatGPT to our request.
tool_calls   List            If you indicated in Table 3-1 that you want ChatGPT to call a tool (which is currently a function), then this list will exist in the ChatCompletionMessage.
↳ id         String          This is the ID of the tool called by ChatGPT.
↳ type       String          This is the type of the tool called by ChatGPT. Only the literal, "function," is a valid tool.
↳ function   Object          This is the function and the parameters that the model called.
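To see these fields in practice, you can read them straight off the response object from Listing 3-1; here's a quick sketch:

# Inspect the ChatCompletionMessage and some surrounding metadata
message = response.choices[0].message
print(message.role)                        # always "assistant"
print(message.tool_calls)                  # None unless you asked ChatGPT to call a tool
print(response.choices[0].finish_reason)   # e.g., "stop" when the model finished normally
print(response.usage.total_tokens)         # prompt + completion tokens consumed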
Conclusion
In this chapter, we took our experiences from Chapters 1 and 2 and created a fully
functioning ChatGPT client in Python. In the code for our ChatGPT client, we saw some
terms that we were already introduced to from the Chat Playground such as the model,
messages, temperature, and tokens.
We also saw that, as Python developers, OpenAI gives us a ton of additional options
to invoke ChatGPT that aren’t available to average everyday users or even to technical
people who use the Chat Playground. In this chapter, we took the time to explain these
options, with a focus on the messages that we can send.
Now that we have a working ChatGPT client in Python, let’s see how to leverage it for
the rest of the examples in the book!
CHAPTER 4
Using AI in the Enterprise! Creating a Text Summarizer for Slack Messages
In today’s corporate world, it’s extremely common for companies to have an
instance of Slack (or Microsoft Teams) to organize themselves and use it as a central
place of communication for everyone in the company. Now, if you've ever used Slack before, I think you know how easily a channel can become flooded with a ton of messages because something important happened somewhere in the company or the world.
Of course, the more responsibility that you have within the company (i.e., manager, team leader, architect, etc.), the more channels you're expected to participate in. In
my opinion, Slack is a double-edged sword. You need to use it to do your job, but as a
developer, you definitely can’t attend a daily standup meeting and say, “Yesterday, uh, I
spent all day reading Slack. No roadblocks.”
Additionally, if you work for a company with clients in various time zones (which is
quite common nowadays), it’s pretty daunting to open Slack in the morning and see a
ton of messages that were posted while you were away from the keyboard.
So, in this chapter, we’re going to apply AI in the enterprise to make Slack more
useful. We’ll leverage the code in the previous chapter and create a Slack bot in Python
that will summarize the important conversations in a Slack channel. We're going to be
utilizing ChatGPT's capabilities for text summarization and focus a bit more on prompt
engineering.
• The mid-tier is a tricky place, because they have more knowledge and
experience than the tech support staff at Level 1; however, they don't
have the opportunity to directly get answers from the end-user.
So, let's work with a real-world example of a typical conversation within a typical
tech support channel within Slack. Below is a list of the team members and their roles
within a fictional company:
The listing below provides an example of a conversation between the team members
at a software startup. Fatima, the customer service representative, lets the team know
that their app is crashing immediately after launching (not a good problem to have).
Keith, the CTO, steps in immediately to escalate the issue.
Fatima [16:03 | 02/08/2019]: Absolutely, John. I'll reach out to the client
immediately and gather those details. Will update you all once I have them.
Dave [16:04 | 02/08/2019]: I understand the urgency here. Let's make sure
we keep the client informed about our progress Fatima. We don't want
them feeling left in the dark during this troubleshooting process.
John [16:20 | 02/08/2019]: I've checked the codebase, and so far, I haven't
found any obvious issues. It's strange that the app is crashing on load.
Could it be a memory-related issue? Keith, do we have any recent reports of
memory leaks or high memory usage?
Keith [16:22 | 02/08/2019]: I'll pull up the monitoring logs, John, and
check if there have been any memory-related anomalies in recent releases.
Let me get back to you on that.
Keith [17:05 | 02/08/2019]: Thanks for the update, Fatima. That's helpful
information. John, let's focus on testing the latest app update on an
iPhone X simulator with iOS 15.1 to see if we can replicate the issue.
John [17:06 | 02/08/2019]: Good idea, Keith. I'll set up the emulator and
run some tests right away.
Keith [17:30 | 02/08/2019]: John, any progress on replicating the issue on
the emulator?
John [17:32 | 02/08/2019]: Yes, Keith. I managed to reproduce the crash on the emulator. It seems to be related to a compatibility issue with iOS 15.1. I suspect it's due to a deprecated method call. I'll fix it and run more tests to confirm.
John [18:03 | 02/08/2019]: Fixed the deprecated method issue, and the app
is no longer crashing on load. It looks like we've identified and resolved
the problem. I'll prepare a patch and send it to you, Keith, for review and
deployment.
Keith [18:04 | 02/08/2019]: Thank you, please provide me with the patch as soon as possible. Once I review it, we'll deploy the fix to the app store.
Dave [18:06 | 02/08/2019]: Great job, team! John, please keep the client
informed about the progress and let them know we have a fix ready for them
on the next app update. Can someone make sure the release notes
reflect this?
John [18:07 | 02/08/2019]: Will do, Dave. I'll update the client and ensure
they're aware of the upcoming fix.
Keith [18:27 | 02/08/2019]: Patch reviewed and approved, John. Please
proceed with updating the app in the store. Let's aim to have it done
within the next hour.
Dave [18:40 | 02/08/2019]: Well done, team! Your efforts are greatly
appreciated. We managed to turn this urgent problem around in record time.
Let's keep up the good work!
Notice how the number of tokens is kept small, at 360, which will ultimately help
ChatGPT to create a short response. Also, see that the temperature is left at 1 (the default
value), but the Top P is increased all the way to its maximum (which is 1). Therefore, it’s
important to play around with temperature and other settings to control the tone and
variety of ChatGPT's responses.
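If you'd like to reproduce this summarization programmatically rather than in the Playground, here's a minimal sketch that reuses the client from Chapter 3; slack_conversation is assumed to be a string holding the conversation from Listing 4-1:

# Summarize the Slack conversation with the settings discussed above
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You summarize Slack conversations for busy team leads."},
        {"role": "user", "content": "Summarize the following conversation:\n" + slack_conversation},
    ],
    temperature=1,
    top_p=1,
    max_tokens=360,
)
print(response.choices[0].message.content)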
Listing 4-3 below shows ChatGPT's summarization of the conversation from
Listing 4-1.
That's fantastic! It reduced that extremely long conversation down to a simple and brief
summary of only 68 words. As you can see, ChatGPT is not replacing a job here; however,
it will definitely make someone more productive. Now, let's try a few more prompts,
shall we?
Temperature: 0.82
Top P: 0
Frequency penalty: 0
Presence penalty: 0
As you may have noticed, the temperature is set to 0.82 in this example. We really
want ChatGPT to give us a concise answer here, so the “drier,” the better.
Again, the temperature is set below 1.0 so that ChatGPT can keep the answer brief
and easy to understand. We increased the token length so that ChatGPT can take the
time to explain some potentially technical concepts.
In summary, the team quickly addressed an urgent issue where a client's app
was crashing on load. They investigated the issue, identified a compatibility
problem with iOS 15.1, fixed it, and prepared a patch for deployment. The client
was kept informed throughout the process, and the fix was successfully
implemented, resulting in a relieved and grateful client.
1. Test the patch: Once the patch has been deployed to the app store,
thoroughly test it on various devices and operating systems to ensure that the
issue has been resolved and there are no new bugs or regressions introduced.
Now, of course, ChatGPT (like every other form of artificial intelligence on the planet) is
not entirely perfect. For instance, recommendation number three is a valid item to do,
except for the fact that it's not usually the responsibility of a QA person to communicate
anything directly to the client or customer. That communication can be done either
through tech support or with a product manager who has those communication
channels (especially if they are a significant customer). So, the suggestion is fine, but it's
not appropriate for a person with that role within a company.
Note In order to accomplish these steps, you will need to have administrative access to a Slack workspace. Most developers will not have these levels of permissions; therefore, in order to fully experiment, I recommend that you create your own personal Slack workspace for testing purposes. This way, you will have all the rights and privileges to install your Slack bot.
But, one step at a time. First, we're going to make our Slack bot app, so head over to the Slack API website:
https://api.slack.com/
Figure 4-1. In Order to Create a Slack Bot, Go To the Slack API Website
Of course, you’ll need to have a Slack account in order for this to work, so if you don’t
have one, then you need to create one first.
After you have logged in, go to the top-right of the page and navigate to “Your apps
> Create your first app,” as shown in Figure 4-1 above. In Slack terminology, a “bot” is
an “app,” and bots are not allowed to run on a Slack instance unless they have been
registered with Slack first.
As shown in Figure 4-2 above, you'll be taken to the Your Apps page, where you can manage your Slack apps. Immediately, you'll see a pop-up with a Create an App button in the middle of the screen.
Select the option to create your app from scratch. This is because we want to be able
to manipulate all of the details of the application ourselves without overcomplicating
things with a bunch of default settings.
Afterwards, you’ll be prompted to specify a name for your bot and to select the
workspace that you want your bot to have access to, as shown in Figure 4-3 below.
Click the Create App button to proceed.
So, scroll down to add the following OAuth scopes to your Slack bot, as shown in Figure 4-4 below.
• channels:history
• channels:read
• users:read
Click the Allow button to authorize the bot and grant the permissions that you selected in the previous step.
Figure 4-8. Copy Your OAuth Token for Your Slack Bot App
Back on the OAuth & Permissions page, be sure to copy the bot token (which
usually starts with "xoxb-") from the page here, as shown in Figure 4-8.
Select the option “Add apps to this channel,” and then select the name of the Slack
bot that you specified earlier when you registered the bot with Slack.
Congratulations! You now have successfully registered a Slack bot app with Slack,
enabled it to read messages in your workspace, and added the Slack bot to a channel.
Before we can write the Python code to access the channel in our workspace, we need to know the internal ID that Slack uses for our channel.
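One straightforward way to look up that internal ID programmatically is with the Slack Web API's conversations_list method. Here's a minimal sketch (it's not one of the chapter's listings, and it assumes your bot token is stored in the SLACK_BOT_TOKEN environment variable):

import os
from slack_sdk import WebClient

client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])

# Print the name and internal ID of every public channel the bot can see
for channel in client.conversations_list()["channels"]:
    print(channel["name"], channel["id"])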
Listing 4-10 is a simple Python Slack bot that obtains the user name, timestamp, and
message content of each posting in the channel within a designated time period.
except SlackApiError as e:
print(f"Error: {e.response['error']}")
Let’s walk through this code together. First, we’re importing essential modules for
interfacing with the Slack API and managing date/time data. We then define the Slack
API token (SLACK_BOT_TOKEN) needed for authentication, along with the ID of the
target Slack channel (channel_id).
Next, we establish the start and end times for the message retrieval window,
specifying them in coordinated universal time (UTC). Our script converts these time
values into Unix timestamps since the Slack API expects timestamps in this format for
message retrieval.
If the API call is successful, our script proceeds to extract and process the messages.
Messages are presented in chronological order so that the oldest messages appear first.
To keep things clean, for each message, the script retrieves the user ID and message
timestamp. It then uses the client.users_info method to fetch additional information
about the user who sent the message.
In case of any errors during the API request process (SlackApiError), the script
catches and handles them by printing an error message.
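To make that walkthrough concrete, here is a condensed sketch of the same logic. It is simplified from the full slack_chat_reader_bot.py listing, and the channel ID and time window below are placeholders:

import os
from datetime import datetime, timezone
from slack_sdk import WebClient
from slack_sdk.errors import SlackApiError

SLACK_BOT_TOKEN = os.environ["SLACK_BOT_TOKEN"]
channel_id = "C00000000"  # placeholder: your channel's internal ID

client = WebClient(token=SLACK_BOT_TOKEN)

# The Slack API expects Unix timestamps for the retrieval window
start_time = datetime(2023, 8, 11, 9, 0, tzinfo=timezone.utc).timestamp()
end_time = datetime(2023, 8, 11, 17, 0, tzinfo=timezone.utc).timestamp()

try:
    history = client.conversations_history(
        channel=channel_id, oldest=str(start_time), latest=str(end_time)
    )
    # Slack returns the newest messages first, so reverse for chronological order
    for message in reversed(history["messages"]):
        user_name = "unknown"
        if message.get("user"):
            user_name = client.users_info(user=message["user"])["user"]["name"]
        posted_at = datetime.fromtimestamp(float(message["ts"]), tz=timezone.utc)
        print(f"{user_name} [{posted_at:%Y-%m-%dT%H:%M:%S}] : {message.get('text', '')}")
except SlackApiError as e:
    print(f"Error: {e.response['error']}")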
Listing 4-11 shows the output after executing slack_chat_reader_bot.py, which is
truncated here since you already have the full text in Listing 4-1 earlier in this chapter.
John [2023-08-11T09:04:52] : Sure thing, Keith. I'll dive into the codebase
and see if I can find any potential culprits for the crashes.
John [2023-08-11T09:05:30] : Fatima, could you gather some additional
information from the client? Ask them about the specific device, operating
system, and any recent updates they might have installed.
...
• Making sure that the bot doesn’t make a bad situation worse.
Whenever the bot provides a summary, it should not post in the
channel itself because that could add a lot of noise to an already
noisy situation. The best practice is to have the bot send a private
message to the person asking for a summary (or whatever new
command that you create).
Conclusion
In this chapter, we talked about one of the various ways artificial intelligence can be
put to practical use within the enterprise today. We also discussed what "prompt engineering" truly is, and showed that it cannot be accomplished through textual input to ChatGPT alone. You definitely need to understand the ramifications of all the input parameters to the ChatGPT API in order to properly and effectively perform prompt engineering.
CHAPTER 5
Multimodal AI:
Creating a Podcast
Visualizer with Whisper
and DALL·E 3
Now let’s introduce a new term: multimodal AI. In the simplest of terms, generative AI
models can create content in one of four formats:
• Text
• Audio
• Images
• Video
Each of those formats is a mode. Multimodal AI is the process of using multiple AI
models together to generate (or to understand) content where the input is one type of
mode and the output is a different type of mode.
Take, for example, OpenAI’s Whisper model. If you provide it audio, it is able to
create a transcription of everything said into text. The same thing applies to DALL·E. If
you supply it with a textual prompt, then it is able to generate an image of what you
described.
In this chapter, we’re going to take multimodal AI to the next level! As an avid
podcast listener, I’ve often wondered what the scenery, the imagery, the characters,
the subject, or the background looked like while listening to a very immersive story in
audio format.
So, we’re going to create a Podcast Visualizer using multiple models from
OpenAI. There are a few steps involved, but the final results are stunning. While
listening to a podcast about a guy cooking some amazing things with tofu (don’t knock it
until you try it), the Podcast Visualizer came up with the image in Figure 5-1.
Figure 5-1. The AI-Generated Result of Visualizing a Podcast About Tofu Using
the GPT-4, Whisper, and DALL·E Models
In order to make the code for the Podcast Visualizer easy to follow along, we'll do things separately in the following three steps:
• Step 1: Take a podcast episode, and use the Whisper model to get a transcript.
• Step 2: Take the resulting transcript, and use the GPT-4 model to describe the visual aspects of what's being discussed in the podcast episode.
• Step 3: Take the resulting description, and use the DALL·E model to generate an image.
The code presented here in this chapter has tons of practical uses, for example:
• For people who are hearing impaired, you can easily turn a podcast
or radio program into a slideshow of images. This greatly enhances
the accessibility of the content.
• For podcasters, you can now have a simple way to add a visual/hero
image to each of your episodes. This is useful since podcast players
such as Apple Podcasts and Spotify allow podcasters to display a
single image to associate with an individual episode. This can help
with engagement for your listeners.
Listing 5-1. The Whisper Model Performs Speech Recognition to Convert Audio
into Text
...I'm Martina Castro. Every episode we bring you fascinating, true stories to help you
improve your Spanish listening and gain new perspectives on the world. The
storyteller will be using intermediate Spanish and I'll be chiming in for context in
English. If you miss something, you can always skip back and listen again. We also
offer full transcripts at podcast.duolingo.com.
Growing up, Linda was fascinated with her grandmother, Erlinda. Erlinda
was a healer or curandera, someone who administers remedies for mental,
emotional, physical, or spiritual illnesses.
In Guatemala, this is a practice passed down orally through generations in
the same family. Mal de ojo, or the evil eye, is considered an illness by
many Guatemalans who believe humans have the power to transfer bad energy
to others. Neighbors would bring their babies to Linda's grandmother when
they suspected an energy imbalance. Su madre lo llevaba a nuestra casa para
curarlo...
If you’ve ever worked with a speech recognition system before (even with
sophisticated technologies like Siri and Alexa), you will know that it has problems, for
instance:
• Speech recognition has problems with punctuation
• Depending on who you ask, there are at least 170k words in the English language. However, in conversational English, we are constantly borrowing foreign words and phrases.
• Do you remember those words that sound the same but have
different spellings and meanings? The fantastic editor of this
book knows all of them!
• Would / Wood
• Flour / Flower
• Two / Too / To
• They’re / There / Their
• Pair / Pare / Pear
• Break / Brake
• Allowed / Aloud
As you can see from Listing 5-1, above, Whisper was able to understand all the
punctuation in the audio, identify all the foreign words (of which there were several),
understand the names, as well as the company name (”Duolingo”) within a URL! Of
course, if you noticed, it could also understand the difference between “wood” and
“would.”
• Icelandic
• Indonesian
• Italian
• Japanese
• Kannada
• Kazakh
• Korean
• Latvian
• Lithuanian
• Macedonian
• Malay
• Marathi
• Maori
• Nepali
• Norwegian
• Persian
• Polish
• Portuguese
• Romanian
• Russian
• Serbian
• Slovak
• Slovenian
• Spanish
• Swahili
• Swedish
107
Chapter 5 MultiModal ai: Creating a podCast Visualizer with whisper and dall·e 3
• Tagalog
• Tamil
• Thai
• Turkish
• Ukrainian
• Urdu
• Vietnamese
• Welsh
So, at the end of the day, it will be able to understand audio spoken by yourself and
probably any language spoken by your friends and colleagues.
Developers are limited to sending no more than 50 requests per minute to the API,
so this constraint needs to be taken into consideration if you want to transcribe vast
amounts of audio.
Whisper supports audio in flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm
formats. Regardless of the format that you use, the maximum file size to send to the API
is 25MB.
Now, If you haven’t worked extensively with audio files, please be aware that some
formats create really huge files (e.g., wav format) and others create really small files (e.g.,
m4a format). So, converting your file to a different format can help you with the 25MB
limitation. However, later in this chapter, we’ll see the code for a tool that takes a single
large audio file and splits it into multiple, smaller files.
Using OpenAI.audio.transcriptions.create()
to Transcribe Audio
The OpenAI.audio.transcriptions.create() method converts audio into text and is only
compatible with the Whisper model. Let’s take a look at Table 5-1 to find out what
needs to be in the method parameters in order to call it successfully.
language (string, optional): This is the language of the input audio. It's optional, but providing the value can improve the accuracy and latency of the transcription.
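Putting those parameters together, here is a minimal sketch of transcribing a single audio segment (this isn't one of the chapter's listings; the file path and language value are placeholders):

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

with open("segment_0.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",   # the Whisper model exposed by the API
        file=audio_file,
        language="en",       # optional, but it can improve accuracy and latency
    )

print(transcript.text)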
Note In this section, I'm presenting one of many possibilities available on how to segment a large audio file into smaller pieces. For instance, you can use popular audio editing apps (such as the open source tool Audacity, or the licensed tool Adobe Audition) to manually cut up a large file into smaller files.
FFmpeg is one of the most reliable ways to programmatically manipulate audio files,
and pydub is an open source library recommended in OpenAI’s documentation on using
Whisper with Python.
Listing 5-2 is the prompt that is sent to ChatGPT in order to get a basic app that splits
audio files.
After a little back and forth, we were able to create the audio_splitter.py application (as shown in Listing 5-3), which includes our edits to what was generated by ChatGPT.
segment.export(os.path.join(output_directory, f"segment_{i}.mp3"),
format="mp3")
Our goal is simple: split an MP3 file into contiguous segments no longer than ten minutes using the Python language. In this process, we follow these steps (a minimal sketch of the full script follows this list):
• Specify the input file path, output directory, and the desired segment duration in milliseconds (ten minutes).
• Load the MP3 file using the pydub library's AudioSegment class, which provides functionalities for audio processing.
• Iterate through the input MP3 file, segmenting it into smaller parts of the specified duration (ten minutes or less) using slicing.
• Save each segment as a separate MP3 file in the specified output directory.
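Here is that sketch. It is a simplified stand-in for the full audio_splitter.py listing, and the input and output paths are placeholders:

import os
from pydub import AudioSegment  # requires FFmpeg to be installed

input_file = "podcast_episode.mp3"   # placeholder input path
output_directory = "segments"        # placeholder output folder
segment_duration = 10 * 60 * 1000    # ten minutes, in milliseconds

os.makedirs(output_directory, exist_ok=True)
audio = AudioSegment.from_mp3(input_file)

# Slice the audio into contiguous ten-minute (or shorter) pieces
for i, start in enumerate(range(0, len(audio), segment_duration)):
    segment = audio[start:start + segment_duration]
    segment.export(os.path.join(output_directory, f"segment_{i}.mp3"),
                   format="mp3")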
In order for this to work, you need to have the pydub library and FFmpeg properly
installed and configured in your project. FFmpeg is an extremely versatile media
converter which not only handles MP3 audio files, but various other audio file formats
(including m4a, ogg, and wav). It is able to convert video formats as well as static images
like png, jpeg, and gif.
After running the script, you'll have a folder full of segmented audio files that are ten minutes long or less. The audio_splitter.py utility is a simple Python script, so you can modify its settings to whatever works best for you. For our purposes, the goal here is to have audio files that are under 25MB, so if you're transcribing eight-hour legal proceedings (for example) in WAV format, you may need to adjust the segment duration to be shorter, such as six minutes.
When using the audio splitter, the best practice is to have the output folder be different from the input folder, and you'll see why when we start to invoke the OpenAI audio.transcriptions.create() method using the Whisper model.
Listing 5-4. PROMPT. Asking ChatGPT to Use OpenAI’s Python Library and
Send MP3 Files to Whisper’s API
Temperature: 1
After some back and forth, Listing 5-5 shows the response ChatGPT gave us
that worked.
import os
import openai
from dotenv import load_dotenv
if filename.endswith(".mp3"):
file_path = os.path.join(mp3_folder_path, filename)
try:
# Read the content of the MP3 file
with open(file_path, "rb") as f:
file_content = f.read()
if __name__ == "__main__":
main()
As you can see from the code listing above, one of the major benefits of keeping the input
file in a separate folder from the output files is that you can give the script above the path
to the output folder and you don't have to worry about anything else. This script will grab
all the files in the output folder regardless of the number of files in that folder, so it could
be one or 100 files – it doesn't matter.
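For reference, here is a condensed sketch of that folder-processing loop. It is not identical to Listing 5-5, and the folder path is a placeholder:

import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
client = OpenAI()

mp3_folder_path = "segments"  # placeholder: the audio splitter's output folder

def main():
    for filename in sorted(os.listdir(mp3_folder_path)):
        if filename.endswith(".mp3"):
            file_path = os.path.join(mp3_folder_path, filename)
            try:
                with open(file_path, "rb") as f:
                    transcript = client.audio.transcriptions.create(
                        model="whisper-1", file=f
                    )
                print(transcript.text)
            except Exception as e:
                print(f"Could not transcribe {filename}: {e}")

if __name__ == "__main__":
    main()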
Figure 5-2. If You’re Looking for a Good Podcast with Compelling Stories, I
Recommend Listening to “This American Life” (Image credit: WBEZ Chicago)
Each episode weaves together a series of stories centered around a specific theme or
topic. Some stories are investigative journalism, and others are simply interviews with
ordinary people with captivating stories. Episode 811 is entitled "The one place I can't go," and the file is 56MB in MP3 format. Since we already know that 56MB is way too big
to send to Whisper to get transcribed, we need the audio_splitter.py to break up the
file for us.
Listing 5-6 shows an excerpt from the full transcript of the episode.
Listing 5-6. The Partial Transcript of Episode 811 of “This American Life”
“...My younger cousin Camille is not really a dog person, but there is
one dog she adored. Her name was Foxy, because she looked exactly like a
fox, except she was black. She was the neighbor's dog, but she and Camille
seemed to have a real kinship, maybe because they both weren't very far
from the ground. Camille was around four or five years old back then, and she had a
little lisp, so Foxy came out as Fozzie. I thought it was one of the cutest things I'd ever
heard.
The way Camille remembers Foxy, it's almost like a movie. Her memories feel
like endless summer, hazy and perfect, like a scene shot on crackly film.
I just remembered like the feeling of being excited to go and see Foxy. I
have an image in my head of like coming to the house, and I could see Foxy
was like outside. I can see Foxy through the door that leads to the garden.
There's a story about Camille and Foxy that I think about fairly often.
I've talked about it with my sister for years, but never with Camille. And
it's this. Once when they were playing..."
For brevity, we’re only showing an excerpt of the transcript. The full transcript itself
is over 8000 words due to the fact that the episode is nearly one hour in length.
Listing 5-7. The Prompt for GPT-4 to Create a Prompt for DALL·E
System: You are a service that helps to visualize podcasts.
User: Read the following transcript from a podcast. Describe for a visually impaired person the background and subject that best represents the overall theme of the episode. Start with any of the following phrases:
- "A photo of"
- "A painting of"
- "A macro 35mm photo of"
- "Digital art of"
DALL·E needs to know the type of image to generate, so we need to specify whether the image should be a photo, a painting, digital art, etc. We also need to ensure that the resulting text generated by GPT-4 is short, so we set a maximum length of 150 tokens. Finally, in order to prevent GPT-4 from repeating some phrases multiple times, we introduced a frequency penalty of 0.33.
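A minimal sketch of that call looks like the following. It is not the chapter's exact listing, and the transcript variable is a placeholder for the text produced by Whisper:

from openai import OpenAI

client = OpenAI()
transcript = "..."  # placeholder: the full podcast transcript from Whisper

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a service that helps to visualize podcasts."},
        {"role": "user",
         "content": "Read the following transcript from a podcast. Describe for a "
                    "visually impaired person the background and subject that best "
                    "represents the overall theme of the episode. Start with any of "
                    'the following phrases: "A photo of", "A painting of", '
                    '"A macro 35mm photo of", "Digital art of"\n\n' + transcript},
    ],
    max_tokens=150,          # keep the image description short
    frequency_penalty=0.33,  # discourage repeated phrases
)
print(response.choices[0].message.content)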
Listing 5-8 shows the results from GPT-4 after reading the transcript of Episode 811
of This American Life.
Digital art of a young girl sitting in a garden with a black dog that
looks like a fox. The girl is smiling and the dog is wagging its tail. The
image has a hazy, dream-like quality, with crackly film effects to evoke
nostalgia.
Using OpenAI.images.generate()
to Create Images
In order to use the DALL·E model to dynamically create an image from a text prompt,
you need to call the OpenAI.images.generate() method.
response = client.images.generate(
    prompt=prompt,
    size=size,
    model="dall-e-3"
)
print("Image URL:", response.data[0].url)
Figures 5-3 and 5-4 below show the images generated from the text prompt in Listing
5-8 from the podcast episode 811 of This American Life.
Figure 5-3. The DALL·E-Generated Image of a Girl and Her Dog from Episode 811
of “This American Life” Podcast
Figure 5-4. The DALL·E-Generated Image of a Girl and Her Dog from Episode 811
of “This American Life” Podcast
Maybe you're a fan of 3D-looking images? Maybe you need photos instead of digital art? Maybe you want
the image to be a close-up shot instead of a portrait? There are a lot of possibilities to
consider.
Regardless of your use case, here are two golden rules in order to get the most out of
your DALL·E prompts.
• 3D render
• Painting
• Abstract painting
• Expressive oil painting
• Oil painting (in the style of any deceased artist)
• Oil pastel
• Digital art
• Photo
• Photorealistic
• Hyperrealistic
• Neon photo
• 35-mm macro photo
• High-quality photo
• Silhouette
• Vaporwave
• Cartoon
• Plush object
• Marble sculpture
• Hand sketch
• Poster
• Pencil and watercolor
• Synth wave
• Comic book style
• Hand drawn
If you can describe a dream to another person, then you should have no problem
describing what you want to DALL·E.
Figure 5-6. Vector Art: A Man Wearing Sunglasses with a Beard of Flowers
Figure 5-10. Oil Painting: Ship on Stormy Water with Dynamic Lighting and
Lightning in the Background
Conclusion
In this chapter, we accomplished a lot! With a few scripts, we created a Podcast Visualizer.
CHAPTER 6
Creating an Automated
Community Manager Bot
with Discord and Python
When you’re launching an app or a service, it’s important to build and maintain your
own community. Below are the telltale signs of a healthy user community:
• Members engage in meaningful discussions, sharing insights, feedback, and support.
• Disagreements or debates occur, but they are approached constructively without resorting to personal attacks or derogatory language.
• There's an atmosphere of respect, where members listen to each other and acknowledge differing opinions.
• A mix of old and new members actively participate, ensuring the community remains vibrant and doesn't stagnate.
• Users contribute diverse content, from answering questions to sharing resources, which enriches the community's knowledge base.
• There's a balance between giving and taking; members who seek help or information also offer it to others.
• New members frequently join, often referred by existing members, indicating that the community is seen positively and worth recommending.
No matter what type of app or service that I create, I would love for my user
community to exemplify the items listed above!
Figure 6-1. This Fake App from a Fake Bank Has Real Problems
At the top-right of the page, click on the button “New Application,” as shown in
Figure 6-5.
In both Discord and Slack terminology, a “bot” is an “app,” and bots are not allowed
to run on Discord servers unless they have been registered with Discord first.
Specify a name for the bot and click the "Create" button, as shown in Figure 6-6.
Be sure to familiarize yourself with the navigation menu on the left side of the page.
As you can see, we have several categories of settings to configure for our bot. By default,
we have landed on the “General Information” page, where we specify basic info about
our bot. If you have an icon ready for your bot, you can upload it here.
• Scopes
• Bot
After we select the bot’s scope, we get to see all the permissions that are only
applicable to bots (be sure to scroll down to see them).
Bots can be pretty powerful depending on the permissions you give them. There are permissions that allow the bot to act in the capacity of a normal human moderator, such as managing the server, roles, and channels. Bots with these permissions can also kick and ban members.
What we're going to enable for our bot right now are the permissions that allow the bot to send and receive messages in text channels, while the voice permissions allow the bot to participate in voice channels. Simple enough, right?
• Bot permissions
Although you haven’t written any Python code yet, now it’s time to invite your bot to
your server.
Figure 6-10. If You Read the Screen Carefully Here, You Can See That Discord Has
a Sense of Humor
Click on the "Authorize" button to give the bot the permission to run on your server.
If everything went smoothly, then you should see an automated message in the
general channel of your server that indicates that the process has been successful.
Note For obvious reasons, using the word "token" here makes me nervous because this word has two distinct meanings in this book depending on the context, but here's a quick refresher on the meanings:
• When using the Discord and Slack APIs, a "token" is an authentication token.
• When using the OpenAI APIs, a "token" is a part of a word.
Go back to the Discord Developers website, and click the “Bot” category in the
settings navigation menu to continue.
Although you haven’t seen your token yet, you need to click the “Reset Token”
button, as shown in Figure 6-12.
Be sure to copy and save the ID token to someplace safe. You will need this token in
the Python code that’s presented later in this chapter.
Figure 6-12. Click on the “Reset Token” Button to See Your ID Token
Scroll down the page to the section named “Privileged Gateway Intents” and enable
the option named “MESSAGE CONTENT INTENT.”
Note So, let's slow things down a bit and talk about intents. What exactly is an "intent" and why is it needed? For the purposes of the Discord API, you need to specify explicitly every type of information that you want to be notified about by Discord programmatically. Otherwise, Discord will constantly bombard you with events that are not relevant to you or your bot. For example, for our purposes, we don't care when people join or leave the server. However, if you want to send a list of server rules to anyone who joins your server for the first time, then you definitely would want to enable the "SERVER MEMBERS INTENT." When we deep dive into the code, you'll see more information about intents.
Be sure to click on the green button, “Save Changes” to save your changes. The result
is shown in Figure 6-13.
At the beginning of the script, you should notice that we specify the channel that we
are interested in monitoring with the "channel_to_watch" variable.
Note Did you know that when the bot sends an answer to a person's question in the Discord server, Discord will invoke the on_message(message) function of the bot to give the bot the message that the bot just sent? This sounds like a recipe for an infinite loop, doesn't it? Therefore, we have logic in place for the bot to ignore messages sent from itself.
In the final lines of the on_message(message) function, we make sure that we give a
friendly reply to the original sender of the message by “tagging” them in the response.
As we mentioned before, this first version of the Q&A bot is dumb. It will respond to
your question when posted in the Discord server, but the response won’t actually
answer your question.
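To tie the pieces together, here is a rough sketch of such a bot. It is a simplified stand-in for the full tech_support_bot_dumb.py listing; the token is left blank, and the canned reply is a placeholder:

import discord

DISCORD_TOKEN = ""              # paste your bot's ID token here
channel_to_watch = "q-and-a"    # the channel the bot monitors

# The message content intent must be enabled here and in the Developer Portal
intents = discord.Intents.default()
intents.message_content = True
discord_client = discord.Client(intents=intents)

@discord_client.event
async def on_message(message):
    # Ignore messages that the bot itself sent, to avoid an infinite loop
    if message.author == discord_client.user:
        return
    # Only respond in the channel we are watching
    if getattr(message.channel, "name", None) != channel_to_watch:
        return
    # Tag the original sender in a friendly (but unhelpful) reply
    await message.channel.send(
        f"{message.author.mention} Thanks for your question! "
        "A real answer is coming in the next chapter."
    )

discord_client.run(DISCORD_TOKEN)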
As you carefully inspect Figure 6-14, you’ll see some key features such as
• On the right side, you’ll see that the bot is online with a green status
indicator.
• After asking a question in the channel, the bot will tag you directly.
Figure 6-15. Providing a Name and Icon for the Second Bot
import discord
Figure 6-16. This Bot Has a Strict Rule About Discussing “Puppies”; However,
Discussing “Kittens” Is Perfectly Fine
Conclusion
We just went through all the steps necessary to create two functioning Discord bots in
Python. For those who are unfamiliar with creating a Discord server, we also showed how to set up a server to manage our community.
As you can see, we took a much different approach compared to our Slack bot that
we did in Chapter 4! The Slack bot that we created was pretty much focused on user
productivity within the workplace. The two Discord bots, on the other hand, are truly
focused on community management. We have everything in place for these bots to be
artificially intelligent with the help of OpenAI’s APIs. This is all accomplished in the final
two chapters.
CHAPTER 7
Adding Intelligence to
Our Discord Bots,
Part 1: Using the Chat
Model for Q&A
At this point, we have all the structure in place to make both of the Discord bots that we created in the previous chapter fully functional and artificially intelligent, and that's what we're going to do in the final chapters of this book. The focus of this chapter will be on our tech support bot, which was called tech_support_bot_dumb.py. Below are the two major changes that we're going to make:
Making tech_support_bot.py
More Intelligent
Listing 7-1 contains the full contents of the Frequently Asked Questions that the fictional
customer support team has created based upon support tickets from users of the newly
launched mobile banking application.
As you can see in the FAQ.txt file in Listing 7-1, there's no magic involved here. It's simply a list of questions and answers. Now, let's see the newly modified tech_support_bot.py script. This is represented in Listing 7-2.
"""
tech_support_bot.py
A Discord bot integrated with ChatGPT for automated responses in a
designated channel.
This script initializes a Discord bot using the discord.Client() class and listens for
events such as messages being sent. When a message is received in the specified
channel, the bot calls ChatGPT to generate a response based on the message
content, and sends the response back to the same channel.
Requirements:
- discord (https://pypi.org/project/discord.py/)
- chatgpt_client (Assumed to be a custom module providing interaction with
ChatGPT)
Usage:
1. Replace the DISCORD_TOKEN variable with your bot's token obtained from
the Discord Developer Portal.
2. Adjust the CHANNEL_TO_WATCH variable to specify the name of the channel
the bot should monitor and interact with.
3. Ensure that the chatgpt_client module is properly implemented and
accessible.
"""

import discord
from chatgpt_client_for_qa_and_moderation import ChatGPTClient
# Bot's token for authentication
DISCORD_TOKEN = ''
# Name of the channel the bot should monitor and interact with
CHANNEL_TO_WATCH = 'q-and-a'
# Initialize the Discord client
discord_client = discord.Client()
# Create the system message for ChatGPT
system_message_to_chatgpt = "You are a virtual assistant that provides
support for the Crooks Bank banking app."
Parameters:
message (discord.Message): The message received by the bot.
Returns:
None
"""
# Ignore messages sent by the bot to prevent self-responses
if message.author == discord_client.user:
return
161
Chapter 7 adding intelligenCe to our disCord Bots, part 1: using the Chat Model for Q&a
# Send the reply to the same channel where the original message was
received
await message.channel.send(reply)
Listing 7-3. Setting the System Message and FAQ Info for tech_support_bot.py
Next, we’re defining another variable that provides a reference to the file path
location where the frequently asked questions file is stored. Then we simply read the
file – easy peasy.
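Sketched out, that idea looks something like the following (the file path and variable names here are illustrative, since Listing 7-3 isn't reproduced in full):

faq_file_path = "FAQ.txt"  # placeholder path to the frequently asked questions file

with open(faq_file_path, "r", encoding="utf-8") as faq_file:
    faq_contents = faq_file.read()

# Hand the FAQ to ChatGPT as the initial instructions for the conversation
initial_instructions_to_chatgpt = (
    "Answer questions using only the following FAQ:\n\n" + faq_contents
)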
This function is the core part of our Discord bot. Whenever a message is posted in
our Discord server, the function on_message() is called. The async keyword is important
here because this enables on_message() to be called asynchronously, which is what we
want, i.e., whenever a message is posted.
Here, we provide a nice user experience and show the user that the bot is “typing,”
while the user’s question is being sent to ChatGPT. When the response comes back, we
provide the reply back to the user.
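Sketched as a small helper (these are not the listing's exact lines), the pattern looks like this, where chatgpt_client is the ChatGPTClient instance created earlier in the script:

async def answer_with_typing_indicator(message, chatgpt_client):
    # Discord shows "Bot is typing..." while we wait on the ChatGPT API
    async with message.channel.typing():
        reply = chatgpt_client.send_message_from_discord(message.content)
    # Send the reply to the same channel where the original message was received
    await message.channel.send(reply)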
Analyzing chatgpt_client_for_qa_and_moderation.py
Now, as we look further into Listing 7-2 above, tech_support_bot.py instantiates the ChatGPTClient object from the chatgpt_client_for_qa_and_moderation.py script, which (as we stated previously) is very similar to the chatgpt_client.py script that we have used before. The complete source for chatgpt_client_for_qa_and_moderation.py is shown in Listing 7-4.
import os
from dotenv import load_dotenv
from openai import OpenAI

class ChatGPTClient:
    def __init__(self, system_message, initial_instructions_to_chatgpt):
        # Load environment variables from .env
        load_dotenv()
        # Use the API key from the environment variable
        self.client = OpenAI()
        self.system_message = system_message
        self.initial_instructions_to_chatgpt = initial_instructions_to_chatgpt

    def send_message_from_discord(self, user_message):
        response = self.client.chat.completions.create(
            model="gpt-4",  # model name assumed; use the Chat model of your choice
            messages=[
                {
                    "role": "system",
                    "content": f'{self.system_message}'
                },
                {
                    "role": "user",
                    "content": f'{self.initial_instructions_to_chatgpt}'
                },
                {
                    "role": "user",
                    "content": f'{user_message}'
                }
            ],
            temperature=0.85,
            max_tokens=1921,
            top_p=1,
            frequency_penalty=0,
            presence_penalty=0
        )
        condensed_response = response.choices[0].message.content
        return condensed_response
The send_message_from_discord() method doesn’t limit our use cases for this
class. At the moment, we’re using it for Q&A; however, any bot that uses this class would
be able to prompt ChatGPT in any way for any purpose.
Finally, we’re efficiently handling the response from the OpenAI API by extracting the
content of the generated message and returning a condensed version. This allows our
Discord bot to reply with just the information we’re looking for, not the entire response
that the Chat model returns. This is accomplished in a single line of code:
condensed_response = response.choices[0].message.content
Figure 7-1. Having a Discussion with Our Intelligent Discord Q&A Bot, tech_
support_bot.py
Listing 7-5 shows the conversation between us and the Discord bot.
Listing 7-5. Our First Conversation with the Intelligent Q&A Discord Bot
Me: Hi, I'm new to this channel. What kind of questions can I ask?
Q and A Bot: Hello! Welcome to the Crooks Bank Mobile App support channel.
Here, you can ask any questions regarding our banking app. For example, you
can ask about:
1. How to install and use the app.
2. Features and services within the app.
3. Troubleshooting any issues you might encounter.
4. Security measures to protect your data and transactions.
5. Instructions for resetting your password.
6. Costs and fees associated with using the app.
7. How to link external bank accounts.
8. Customer support availability.
9. Feedback and suggestions for improving the app.
Feel free to ask any other questions you might have related to the Crooks
Bank Mobile App. We're here to assist!
Me: I have an old Blackberry phone, is the app supported on my device?
Q and A Bot: I'm sorry for the inconvenience, but at the moment, the Crooks
Bank Mobile App is currently only available for iOS and Android devices.
It is not supported on Blackberry or any other operating systems. You may
still manage your bank accounts by accessing our website through your
phone's browser.
Me: I'm an iPhone user. Can I sign into the app with face or touch ID?
Q and A Bot: Sure, here's how you can set up the Crooks Bank Mobile App on
your iPhone:
1. Open the App Store on your iPhone.
2. In the search bar at the top, type in "Crooks Bank Mobile App" and tap
"Search".
3. Once you can see the app in the search results, simply tap on the "Get"
button to download it.
So, with all this that we have accomplished, there’s one gigantic flaw that we can’t
ignore. In Listing 7-5 above, the bot said to the user:
Once you're logged in, you can set up Touch ID or Face ID for quicker,
secure login in the future.
No, no, no! Bad bot! In case you didn’t read the Frequently Asked Questions file
completely, please allow me to explain what went wrong here:
1. The FAQ.txt file in Listing 7-1 describes the Crooks Bank mobile app, and nothing in it says that the app supports Touch ID or Face ID login. The bot simply invented that capability, so its advice doesn't even make sense.
Remember that the system message indicates to ChatGPT what its role is in the chat conversation. Right now, it is told to be a helpful assistant, and therefore, it has taken the liberty of guessing that certain features exist within the mobile application. Now, let's ensure that ChatGPT does not make any assumptions by updating the system message.
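For example, a system message along these lines keeps ChatGPT from guessing (this particular wording is only an illustration; any phrasing that forbids assumptions will do):

You are a virtual assistant that provides support for the Crooks Bank banking app. Only answer questions using the information that has been provided to you. If the answer is not covered by that information, say that you don't know and direct the user to customer support. Do not assume or invent any features of the app.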
And that's it. The FAQ.txt file remains exactly the same as before, and all we needed to do was update the system message to let ChatGPT know its role in the conversation. Listing 7-6 has the new conversation between us and the Discord bot.
Listing 7-6. Our Second (and Better) Conversation with the Intelligent Q&A
Discord Bot
Now, that’s a lot better! Our tech support bot is “sticking to the script” and is not
allowed to make up or assume anything.
Conclusion
In this chapter, we’ve accomplished a lot! We have a fully functioning Discord bot that
can be trained by anyone without any experience in AI, NLP, or machine learning using
a simple text file. We learned that a “helpful assistant” may at times become too helpful
and assume facts that are not true. However, we also reiterated the concept and the value
of the system message, which is an important part of prompt engineering.
Now that we have made our Q&A Discord bot intelligent, let's find out how to make our content moderator bot intelligent as well!
CHAPTER 8
Adding Intelligence
to Our Discord Bots,
Part 2: Using Chat and
Moderations Models for
Content Moderation
In this chapter, we're going to take the steps necessary to make our content moderator Discord bot artificially intelligent. Let's review the changes that we're going to make. One of them is to use OpenAI's Moderations model, which can flag text in any of the following categories:
• Hate
• Hate/threatening
• Harassment
• Harassment/threatening
• Self-harm
• Self-harm/intent
• Self-harm/instructions
• Sexual
• Sexual/minors
• Violence
• Violence/graphic
Now, at this point, you may be asking yourself, if the Moderations class already
knows how to flag any harmful content, then why do we need to use the Chat class as
well? Good question.
Yes, the Moderations class will allow us to know about harmful content, but it does
not inform us about any other types of unwanted content for our use case, such as when
unscrupulous individuals try to lure our users into a scam. Remember, this is a Discord
server for a banking app, so scammers would definitely love to target all the members of
this Discord server since it’s a central location full of bank users!
Therefore, we'll use moderation_client.py to invoke the Moderations class to know whether any content in the Discord server is harmful, and we'll reuse chatgpt_client_for_qa_and_moderation.py from the last chapter to invoke the Chat method so that we're made aware when any other undesirable content, such as a scam attempt, is posted in the Discord server.
Using OpenAI.moderations.create()
to Moderate Content
By using the Moderations models, developers are able to submit a string of text and to
subsequently know if it’s violent, hateful, and/or threatening or contains any form of
harassment.
Table 8-1 describes the format of the parameters necessary to call the OpenAI.
moderations.create() method. The service is very simple to use, since only one
parameter is required to properly invoke the service.
Handling the Response
After successfully invoking the Moderations model, the method will provide a response
with the structure shown in Table 8-2.
Table 8-2. The Structure of the Moderation Response

Moderation (Dictionary)

↳↳ self-harm/intent (Boolean): This indicates whether or not the given text contains content in which the speaker expresses that they are engaging in, or intend to engage in, acts of self-harm, such as suicide, cutting, and eating disorders.
↳↳ self-harm/instructions (Boolean): This indicates whether or not the given text contains content that encourages performing acts of self-harm, such as suicide, cutting, and eating disorders. This includes content that gives instructions or advice on how to commit such acts.
↳↳ sexual (Boolean): This indicates whether or not the given text contains content meant to arouse sexual excitement, such as the description of sexual activity. This includes content that promotes sexual services; however, it excludes topics such as sex education and wellness.
↳↳ sexual/minors (Boolean): This indicates whether or not the given text contains sexual content that includes an individual under the age of 18.
↳↳ violence (Boolean): This indicates whether or not the given text contains content depicting death, violence, or physical injury.
↳↳ violence/graphic (Boolean): This indicates whether or not the given text contains content depicting death, violence, or physical injury in graphic detail.
↳ category_scores (List): A list of the categories along with the scores given by the model.
↳↳ hate (Number): Score for the category "hate."
↳↳ hate/threatening (Number): Score for the category "hate/threatening."
↳↳ harassment (Number): Score for the category "harassment."
↳↳ harassment/threatening (Number): Score for the category "harassment/threatening."
↳↳ self-harm (Number): Score for the category "self-harm."
↳↳ self-harm/intent (Number): Score for the category "self-harm/intent."
↳↳ self-harm/instructions (Number): Score for the category "self-harm/instructions."
↳↳ sexual (Number): Score for the category "sexual."
↳↳ sexual/minors (Number): Score for the category "sexual/minors."
↳↳ violence (Number): Score for the category "violence."
↳↳ violence/graphic (Number): Score for the category "violence/graphic."
The listing below shows an example of the Moderation response after invoking the Moderations model. Table 8-2 looks a little complex, but as you can see, if any of the category fields is labeled as "true," then the results.flagged node is also labeled as "true." Take a look at Listing 8-1 for a practical example of the Moderation response.
{
  "id": "modr-XXXXX",
  "model": "text-moderation-005",
  "results": [
    {
      "flagged": true,
      "categories": {
        "sexual": false,
        "hate": false,
        "harassment": false,
        "self-harm": false,
        "sexual/minors": false,
        "hate/threatening": false,
        "violence/graphic": false,
        "self-harm/intent": false,
        "self-harm/instructions": false,
        "harassment/threatening": true,
        "violence": true
      },
      "category_scores": {
        "sexual": 1.2282071e-06,
        "hate": 0.010696256,
        "harassment": 0.29842457,
        "self-harm": 1.5236925e-08,
        "sexual/minors": 5.7246268e-08,
        "hate/threatening": 0.0060676364,
        "violence/graphic": 4.435014e-06,
        "self-harm/intent": 8.098441e-10,
        "self-harm/instructions": 2.8498655e-11,
        "harassment/threatening": 0.63055265,
        "violence": 0.99011886
      }
    }
  ]
}
def moderate_text(self, text):
moderation = self.client.moderations.create(input=text)
return moderation
In the previous chapters in this book, we created client scripts for the
various methods of the classes of the OpenAI API. Therefore, the class above,
ModerationResponse in moderation_client.py, should look quite familiar. At the end
of the day, we have a simple function, moderate_text(), which allows us to pass the text
that we want to evaluate and returns the response.
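Based on that fragment, the complete client looks roughly like this. It is a sketch that mirrors the structure of our earlier client scripts; the full Listing 8-2 may differ in its details:

import os
from dotenv import load_dotenv
from openai import OpenAI

class ModerationResponse:
    def __init__(self):
        # Load environment variables from .env and create the API client
        load_dotenv()
        self.client = OpenAI()

    def moderate_text(self, text):
        # Ask the Moderations endpoint to classify the text
        moderation = self.client.moderations.create(input=text)
        return moderation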
Making content_moderator_bot.py
More Intelligent
Now that we have moderation_client.py to invoke the Moderations model, let's take a look at the updated content_moderator_bot.py (formerly named content_moderator_bot_dumb.py), which will use moderation_client.py to check for harmful content and chatgpt_client_for_qa_and_moderation.py (unmodified from the previous chapter) to check for potential scams.
Listing 8-3 is the full source code for our intelligent Discord moderator bot,
content_moderator_bot.py.
"""
content_moderator_bot.py
A Discord bot integrated with ChatGPT and a moderation service for
automated content moderation in Discord servers.
This script initializes a Discord bot using the discord.Client() class and
listens for events such as messages being sent. When a message is received,
it calls both the ChatGPT API and a moderation service to analyze the
message content for rule violations. If the message is flagged by either
service, it deletes the message and sends a notification to the user
explaining why it was deemed inappropriate.
Requirements:
- discord (https://pypi.org/project/discord.py/)
- chatgpt_client_for_qa_and_moderation (Assumed to be a custom module
providing interaction with ChatGPT)
- moderation_client (Assumed to be a custom module providing interaction
with a moderation service)
Usage:
1. Replace the DISCORD_TOKEN variable with your bot's token obtained from
the Discord Developer Portal.
2. Ensure that the chatgpt_client_for_qa_and_moderation and moderation_
client modules are properly implemented and accessible.
Note: This script assumes the presence of modules for interaction with
ChatGPT and a moderation service.
"""
import discord
from chatgpt_client_for_qa_and_moderation import ChatGPTClient
from moderation_client import ModerationResponse
6. External links
7. Political messages or debate
8. Religious messages or debate
If any of these violations are detected, respond with "FLAG" (in
uppercase without quotation marks). If the message adheres to the
rules, respond with "SAFE" (in uppercase without quotation marks).
"""
"""
Event handler triggered when the bot is successfully logged in and
ready to receive events.
"""
print('Logged in as', discord_client.user)
print('------')
"""
Event handler triggered when a message is received.
Parameters:
Returns:
None
"""
# Ignore messages sent by the bot to prevent self-responses
if message.author == discord_client.user:
return
await message.delete()

# Mention the user who sent the inappropriate message
author_mention = message.author.mention

# Send a message mentioning the user and explaining why it was inappropriate
await message.channel.send(f"{author_mention} This comment was deemed inappropriate for this channel. " +
    "If you believe this to be in error, please contact one of the human server moderators.")
Updates to the on_message(message) Function
After a message is received in any channel of the Discord server, the
on_message(message) function is invoked asynchronously. Here’s the most important
change to be aware of:
await message.delete()

# Mention the user who sent the inappropriate message
author_mention = message.author.mention

# Send a message mentioning the user and explaining why it was inappropriate
await message.channel.send(f"{author_mention} This comment was deemed inappropriate for this channel. " +
    "If you believe this to be in error, please contact one of the human server moderators.")
Here, we take each message that was posted in the Discord server and check it with both the Moderations class and the Chat method. If either one informs us that the message is flagged, then we delete the message in the channel and inform the user that their message violated the rules.
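That check can be sketched as a small helper like the one below (these are not the listing's exact lines; moderation_client and chatgpt_client are the two client objects created earlier in content_moderator_bot.py):

def message_violates_rules(text, moderation_client, chatgpt_client):
    # First, ask the Moderations model whether the text is harmful
    moderation = moderation_client.moderate_text(text)
    if moderation.results[0].flagged:
        return True
    # Then, ask the Chat model to apply our custom rules ("FLAG" or "SAFE")
    verdict = chatgpt_client.send_message_from_discord(text)
    return verdict.strip().upper() == "FLAG"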
Now that our content moderator Discord bot is intelligent, let’s give it a try!
Running Our Intelligent Content Moderator Bot:
content_moderator_bot.py
Now let's run our new and improved content moderator Python Discord bot, content_moderator_bot.py. After executing the app, be sure to return to your Discord server and start posting messages. Figure 8-1 shows the bot in action.
Figure 8-1. Having a Discussion with Our Intelligent Discord Content Moderator
Bot: content_moderator_bot.py
Listing 8-4 shows a conversation between us and the Discord bot in order to test and
see what it can do.
Me: Come to my website! http://www.google.com
Content Mod Bot: @PythonChatGPT This comment was deemed inappropriate for
this channel. If you believe this to be in error, please contact one of the
human server moderators.
Me: I'm sorry for breaking the rules. I'm a different person now.
Content Mod Bot: @PythonChatGPT This comment was deemed inappropriate for
this channel. If you believe this to be in error, please contact one of the
human server moderators.
In both cases when unwanted content was posted in any channel of the Discord
server, not only was the offending user called out but the bad message was deleted.
Good bot!
Did you notice that the Moderation and Chat methods are able to read emojis
as well?
Conclusion
In this chapter, we created a fully functioning content moderator for our entire Discord
server! We leveraged both the Moderations and Chat methods from OpenAI to create a
custom content moderator that not only flags unsafe content like hateful and
threatening messages but also prevents the users of the Discord server from being
subject to unwanted solicitations.
• The individual Discord bots that we created are aware to not respond
to messages that they send themselves. However, the bots are not
yet aware that they shouldn’t respond to messages sent by other
bots. In other words, if you run both bots at the same time, and
someone posts something bad in the “q-and-a” channel, the content
moderator will, of course, delete the message and inform everyone
that the message was deleted. However, since the tech support bot
doesn’t know that it shouldn’t respond to other bots, it will try to
create a response. Of course, bots should not talk to other bots.