ChatGPT Prompt to get Datasets for Machine Learning
Last Updated: 23 Jul, 2025
With the development of machine learning, access to high-quality datasets is becoming increasingly important. A suitable dataset is a prerequisite for any machine learning project, since it determines how well you can train the model and assess its accuracy and effectiveness. In this article, we'll learn how to use template prompts with OpenAI's ChatGPT to gather or generate datasets for different machine learning applications, and how to call the ChatGPT API from Python.
Steps for Generating Datasets Using ChatGPT
Step 1: Install the OpenAI library in Python
```
!pip install -q openai
```
Step 2: Import the OpenAI library in Python
```
import openai
```
Step 3: Set your OpenAI API key
```
openai.api_key = "YOUR_API_KEY"
```
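Hardcoding an API key in source code is easy to leak; a safer pattern is to read it from an environment variable instead. A minimal sketch, assuming you have exported OPENAI_API_KEY in your shell:
```
import os
import openai

# Read the key from the environment instead of hardcoding it in the script
openai.api_key = os.environ["OPENAI_API_KEY"]
```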
Step 4: Create a custom function to call the ChatGPT API
```
def chat(message):
    # Send a single user message to the ChatGPT API (legacy openai<1.0 interface)
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user", "content": message},
        ],
    )
    # Return only the text of the first reply
    return response['choices'][0]['message']['content']
```
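Note that openai.ChatCompletion.create belongs to the legacy openai<1.0 interface; on openai>=1.0 that call raises an error. If you have the newer library installed, a roughly equivalent helper looks like this (a minimal sketch using the v1 client API):
```
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

def chat(message):
    # Same idea as above, written against the openai>=1.0 client
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": message}],
    )
    return response.choices[0].message.content
```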
Step 5: Call the function and pass in your prompt
```
res = chat('YOUR_MESSAGE')
print(res)
```
Prompts to Gather/Generate Datasets for Machine Learning
Prompt 1:
Create a list of datasets that can be used to train {topic} models. Ensure that the datasets are available in CSV format. The objective is to use this dataset to learn about {topic}. Also, provide links to the dataset if possible. Create the list in tabular form with the following columns: Dataset name, dataset, URL, dataset description
```
prompt = '''
Create a list of datasets that can be used to train logistic regression models.
Ensure that the datasets are available in CSV format.
The objective is to use this dataset to learn about logistic regression models
and related nuances such as training the models. Also provide links to the dataset if possible.
Create the list in tabular form with following columns:
Dataset name, dataset, URL, dataset description
'''
res = chat(prompt)
print(res)
```
Output:
| Dataset name | Dataset | URL | Dataset description |
| :-- | :-- | :-- | :-- |
| Titanic - Machine Learning from Disaster | titanic.csv | https://www.kaggle.com/c/titanic/data | Contains data on passengers of the Titanic, including features such as age, sex, and class, along with whether they survived or not. |
| Red Wine Quality | winequality-red.csv | https://archive.ics.uci.edu/dataset/186/wine+quality | Contains data on various physiochemical properties of red wine and their associated quality ratings. |
| Bank Marketing | bank-additional-full.csv | https://archive.ics.uci.edu/dataset/222/bank+marketing | Contains information on a bank's telemarketing campaign, including contact details of customers and whether they subscribed to a term deposit or not. |
| Breast Cancer Wisconsin (Diagnostic) | wdbc.csv | https://archive.ics.uci.edu/dataset/17/breast+cancer+wisconsin+diagnostic | Contains data on various features extracted from digitized images of breast cancer biopsies, along with whether the biopsy was benign or malignant. |
| Adult | adult.csv | https://archive.ics.uci.edu/dataset/2/adult | Contains demographic data on individuals, along with whether their income exceeds a certain threshold or not. |
| Heart Disease | heart.csv | https://www.kaggle.com/ronitf/heart-disease-uci | Contains data on various medical measurements taken on individuals, along with whether they have heart disease or not. |
| Pima Indians Diabetes | pima-indians-diabetes.csv | https://www.kaggle.com/uciml/pima-indians-diabetes-database | Contains data on various medical measurements taken on Pima Indian women, along with whether they have diabetes or not. |
| Iris | iris.csv | https://archive.ics.uci.edu/dataset/53/iris | Contains data on various measurements taken on iris flowers, along with their species. |
| Loan Prediction | train.csv | https://www.analyticsvidhya.com/datahack/contest/practice-problem-loan-prediction-iii/#ProblemStatement | Contains various demographic data on loan applicants, along with whether their application was approved or not. |
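Once you pick a dataset from a list like this, a quick way to put it to work is to load the CSV with pandas and fit a baseline model. A minimal scikit-learn sketch, assuming you have downloaded the Pima Indians Diabetes file locally as pima-indians-diabetes.csv and that it has a binary label column named Outcome (as in the Kaggle copy):
```
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Assumes the CSV was downloaded locally; the file name and label column
# ("Outcome") follow the Kaggle copy of the dataset
df = pd.read_csv("pima-indians-diabetes.csv")
X = df.drop(columns=["Outcome"])
y = df["Outcome"]

# Hold out 20% of the rows for evaluation
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```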
Prompt 2:
Generate a dummy dataset to train and test a {machine learning model name} for educational purposes.
```
res = chat('generate a dummy dataset to train and test a logistic regression model '
           'for educational purposes. Ensure that the dataset is available in CSV format')
print(res)
```
Output:
Here is an example dummy dataset in CSV format for educational purposes:
```
Age,Gender,Income,Education,Employment_Status,Marital_Status,Loan_Approval
23,Male,25000,High School,Unemployed,Single,Not Approved
32,Female,45000,Bachelor's Degree,Employed,Married,Not Approved
45,Male,120000,Master's Degree,Employed,Married,Approved
38,Female,60000,Bachelor's Degree,Employed,Married,Approved
26,Male,32000,Associate's Degree,Employed,Single,Not Approved
29,Female,28000,High School,Employed,Single,Not Approved
41,Male,80000,Doctoral Degree,Employed,Divorced,Approved
54,Male,95000,Master's Degree,Employed,Married,Approved
```
The dataset contains demographic and financial information for eight individuals along with whether or not they were approved for a loan. The goal is to train a logistic regression model to predict loan approval based on the other variables.
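Because most of the generated columns are categorical, they have to be encoded before a logistic regression can consume them. A minimal sketch, assuming the CSV block above was saved locally as loan_dummy.csv (the file name is arbitrary); eight rows are far too few for a real evaluation, so this only demonstrates the mechanics:
```
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

df = pd.read_csv("loan_dummy.csv")  # the generated rows saved to disk
X = df.drop(columns=["Loan_Approval"])
y = df["Loan_Approval"]

# One-hot encode the categorical columns; Age and Income pass through as-is
categorical = ["Gender", "Education", "Employment_Status", "Marital_Status"]
preprocess = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"), categorical)],
    remainder="passthrough",
)

model = Pipeline([("pre", preprocess), ("clf", LogisticRegression(max_iter=1000))])
model.fit(X, y)
print(model.predict(X.head(2)))  # sanity check on the toy data
```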
Prompt 3:
List datasets to practice {topic} and, if possible, also attach dataset links and descriptions. Create the list in tabular format.
```
prompt = '''
List down datasets to practice object detection,
if possible also attach dataset links and description.
Create the list in tabular format
'''
res = chat(prompt)
print(res)
```
Output:
| Dataset | Link | Description |
| :-------------- | :-------------------------------------------------------- | :-------------------------------------------------------------------- |
| COCO | https://cocodataset.org/#home | Common Objects in Context dataset, contains over 330K images |
| Pascal VOC | http://host.robots.ox.ac.uk/pascal/VOC/ | Pascal Visual Object Classes dataset, contains 20 object categories |
| Open Images | https://storage.googleapis.com/openimages/web/index.html | Contains over 9M images with object-level annotations |
| ImageNet | https://image-net.org/ | Large-scale dataset with over 14M annotated images and 21k categories |
| KITTI | https://www.cvlibs.net/datasets/kitti/ | Contains images of street scenes with object-level annotations |
| BDD100K | https://bdd-data.berkeley.edu/ | Large-scale diverse dataset for autonomous driving |
| DOTA | https://captain-whu.github.io/DOTA/index.html | Large-scale aerial images dataset with object detection annotations |
| WIDER FACE | http://shuoyang1213.me/WIDERFACE/ | Contains 32k images of faces with bounding box annotations |
| VisDrone | https://aiskyeye.com/ | Contains 10k images with annotations of various objects |
| MS COCO Text | https://www.robots.ox.ac.uk/~vgg/data/scenetext/ | Contains 63k images with text annotations |
These datasets can be used with popular object detection frameworks such as TensorFlow, PyTorch, and Keras.
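As an illustration, torchvision ships a ready-made wrapper for COCO-style annotations. A minimal sketch, assuming you have downloaded the val2017 images and annotation JSON to the paths shown (the paths are placeholders, and the pycocotools package must be installed):
```
from torchvision.datasets import CocoDetection

# Placeholder paths; point them at wherever you unpacked the COCO download
dataset = CocoDetection(
    root="coco/val2017",
    annFile="coco/annotations/instances_val2017.json",
)

image, targets = dataset[0]  # a PIL image and a list of annotation dicts
print(len(dataset), "images;", len(targets), "objects in the first image")
```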
Prompt 4:
Create a list of datasets for practicing {topic}. Make sure they are available in {format}. Also, provide links to the dataset.
```
prompt = """
Create a list of datasets for practicing on machine translation from english to hindi.
Make sure they are available in text format.
Also, provide links to the dataset.
"""
res = chat(prompt)
print(res)
```
Output:
1. TED Talks Corpus: This dataset contains parallel transcripts of TED talks in English and Hindi. It is available in text format and can be downloaded from the official website: https://www.ted.com/participate/translate
2. United Nations Parallel Corpus: This corpus contains parallel texts in Hindi and English from speeches delivered by UN delegates. It is available in text format and can be downloaded from the official website: https://www.un.org/dgacm/en/content/applications
3. OPUS Corpus: This corpus contains parallel texts in various languages including Hindi and English. It includes data from a wide range of domains such as news, legal documents, and subtitles. It is available in text format and can be downloaded from the official website: https://opus.nlpl.eu/
4. Bible Corpus: This dataset contains parallel texts of the Bible in Hindi and English. It is available in text format and can be downloaded from the official website: https://christos-c.com/bible_data/
5. Indian Language Parallel Corpus: This corpus contains parallel texts in Hindi and other Indian languages. It includes data from various domains such as news, novels, and Wikipedia articles. It is available in text format and can be downloaded from the official repository: https://github.com/AI4Bharat/indic-corpus
6. Covid-19 India Parallel Corpus: This corpus contains parallel texts in Hindi and English related to the Covid-19 pandemic in India. It includes data from news sources, government advisories, and social media. It is available in text format and can be downloaded from the official website: https://github.com/AI4Bharat/covid19-news/blob/master/parallel-corpus.md
7. BookCorpus: This dataset contains parallel texts of novels in Hindi and English. It is available in text format and can be downloaded from the official website: https://github.com/soskek/bookcorpus/tree/master/data
Note: Some of these datasets may require some preprocessing and cleaning before using for machine translation purposes.
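Most of these corpora ship as a pair of aligned plain-text files, one sentence per line, where line i of the English file corresponds to line i of the Hindi file. A minimal preprocessing sketch, with hypothetical file names train.en and train.hi:
```
# File names are hypothetical; substitute the files from whichever corpus you download
with open("train.en", encoding="utf-8") as f_en, open("train.hi", encoding="utf-8") as f_hi:
    pairs = [
        (en.strip(), hi.strip())
        for en, hi in zip(f_en, f_hi)
        if en.strip() and hi.strip()  # drop empty or whitespace-only lines
    ]

print(len(pairs), "sentence pairs")
print(pairs[0])  # (English sentence, Hindi sentence)
```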
Conclusion:
In this article, we saw how to use the OpenAI ChatGPT API in Python to gather or generate datasets for practicing machine learning algorithms. ChatGPT is a convenient one-stop resource for dataset discovery and generation, among many other applications. Keep in mind, though, that generated datasets are dummy datasets intended for practice only, and that suggested links and descriptions should be verified before use, since the model can return outdated or inaccurate references.