ChatGPT: Optimizing Language Models for Dialogue
We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response.
Samples
In the following sample, ChatGPT asks clarifying questions to debug code.
User
this code is not working like I expect. How do I fix it?

resultWorkerErr := make(chan error)
defer close(resultWorkerErr)
go func() {
    defer cancel()
    resultWorkerErr <- b.resultWorker(ctx)
}()

err := b.worker(ctx)
cancel()
if err == nil {
    return <-resultWorkerErr
}
return multierror.Append(err, <-resultWorkerErr)

ChatGPT
It’s difficult to say what’s wrong with the code without more context. Can you provide more information about what the code is supposed to do, and explain what isn’t working as expected? Also, is this the entire code or just a part of it?

User
just part of it. The error never surfaces. I think there is something wrong with the channel.

ChatGPT
One potential issue is that the resultWorkerErr channel is never closed, which means the code could hang if the channel is never written to. This could happen if b.resultWorker never returns an error, or if it is canceled before it has a chance to return an error. To fix this, you could close the channel after writing to it, for example by adding the following line after the line that sends the error on the channel:

close(resultWorkerErr)

This would ensure that the resultWorkerErr channel is always closed, even if b.resultWorker never returns an error.
Methods
We trained this model using Reinforcement Learning from Human Feedback (RLHF), using the same methods as InstructGPT, but with slight differences in the data collection setup. We trained an initial model using supervised fine-tuning: human AI trainers provided conversations in which they played both sides, the user and an AI assistant. We gave the trainers access to model-written suggestions to help them compose their responses. We mixed this new dialogue dataset with the InstructGPT dataset, which we transformed into a dialogue format.
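The post itself does not include training code, but the supervised fine-tuning step described above is essentially next-token prediction on trainer-written dialogues. Here is a minimal sketch of that step, assuming a small Hugging Face stand-in model; the "User:"/"Assistant:" markup, the toy dataset, and the hyperparameters are illustrative assumptions, not OpenAI’s actual setup (the real base model is from the GPT-3.5 series).

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stand-in base model; illustrative only.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

def format_dialogue(turns):
    # Flatten (speaker, text) turns into one training string.
    return "\n".join(f"{speaker}: {text}" for speaker, text in turns)

# Trainer-written conversations (trainers play both sides), mixed with
# InstructGPT-style prompt/response pairs recast as single-turn dialogues.
dialogues = [
    [("User", "What is a goroutine?"),
     ("Assistant", "A lightweight thread managed by the Go runtime.")],
    [("User", "Summarize RLHF in one sentence."),
     ("Assistant", "RLHF fine-tunes a model with a reward signal learned from human preferences.")],
]

batch = tokenizer([format_dialogue(d) for d in dialogues],
                  return_tensors="pt", padding=True, truncation=True)
labels = batch["input_ids"].clone()
labels[batch["attention_mask"] == 0] = -100  # ignore padding in the loss

# One optimizer step of the standard causal-LM objective:
# predict each next token of the dialogue.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()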
Limitations
ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model, because the ideal answer depends on what the model knows rather than on what the human demonstrator knows.
ChatGPT is sensitive to tweaks to the input phrasing, and to attempting the same prompt multiple times. For example, given one phrasing of a question, the model can claim not to know the answer, but given a slight rephrase, it can answer correctly.
The model is often excessively verbose and overuses certain phrases, such as restating that it’s a language model trained by OpenAI. These issues arise from biases in the training data (trainers prefer longer answers that look more comprehensive) and from well-known over-optimization issues [1, 2]; the KL-based mitigation used in the cited work is sketched below.
Ideally, the model would ask clarifying questions when the user provides an ambiguous query. Instead, our current models usually guess what the user intended.
While we’ve made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behavior. We’re using the Moderation API to warn or block certain types of unsafe content, but we expect it to have some false negatives and positives for now; a sketch of the endpoint call appears below. We’re eager to collect user feedback to aid our ongoing work to improve this system.
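On the over-optimization point above: in the cited RLHF work [1], the learned reward is penalized with a KL term that keeps the policy close to the supervised model, so the policy cannot drift into degenerate phrasings that the reward model over-rates. Here is a minimal sketch of that penalized reward; the beta value and the numbers are made up for illustration.

import torch

def penalized_reward(reward, policy_logprobs, ref_logprobs, beta=0.02):
    # KL-penalized reward in the style of Stiennon et al. [1]:
    # R = r(x, y) - beta * log(pi(y|x) / pi_ref(y|x)), summed over tokens.
    kl = (policy_logprobs - ref_logprobs).sum()
    return reward - beta * kl

# Toy usage: per-token log-probs of one sampled response under the
# RL policy and under the frozen supervised (reference) model.
r = penalized_reward(
    reward=torch.tensor(1.3),
    policy_logprobs=torch.tensor([-0.2, -0.5, -0.1]),
    ref_logprobs=torch.tensor([-0.4, -0.6, -0.3]),
)
print(r)  # tensor(1.2900): the reward minus the KL penalty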
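As for the Moderation API mentioned above: at the time of this post, the openai Python package exposed the moderation endpoint roughly as below. The check_content wrapper and the decision logic around it are illustrative assumptions, not part of the API.

import openai

openai.api_key = "sk-..."  # your API key

def check_content(text):
    # Ask the Moderation API whether the text violates the content policy.
    response = openai.Moderation.create(input=text)
    result = response["results"][0]
    if result["flagged"]:
        # result["categories"] marks which policy categories triggered.
        return False, [name for name, hit in result["categories"].items() if hit]
    return True, []

ok, categories = check_content("some user-submitted text")
if not ok:
    print("warn or block; flagged categories:", categories)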
Iterative deployment
Today’s research release of ChatGPT is the latest step in OpenAI’s iterative deployment of increasingly safe and useful AI systems. Many lessons from deployment of earlier models like GPT-3 and Codex have informed the safety mitigations in place for this release, including substantial reductions in harmful and untruthful outputs achieved by the use of reinforcement learning from human feedback (RLHF).
[Interactive samples comparing ChatGPT and InstructGPT responses to the same user prompts.]
References
1. Stiennon, Nisan, et al. “Learning to summarize with human feedback.” Advances in Neural Information Processing Systems 33 (2020): 3008-3021.
2. Gao, Leo, John Schulman, and Jacob Hilton. “Scaling laws for reward model overoptimization.” arXiv preprint arXiv:2210.10760 (2022).
Authors
OpenAI
Acknowledgments
Contributors: John Schulman, Barret Zoph, Christina Kim,
Jacob Hilton, Jacob Menick, Jiayi Weng, Juan Felipe
Ceron Uribe, Liam Fedus, Luke Metz, Michael Pokorny,
Rapha Gontijo Lopes, Shengjia Zhao, Arun Vijayvergiya,
Eric Sigler, Adam Perelman, Chelsea Voss, Mike Heaton,
Joel Parish, Dave Cummings, Rajeev Nayak, Valerie
Balcom, David Schnurr, Tomer Kaftan, Chris Hallacy,
Nicholas Turley, Noah Deutsch, Vik Goel, Jonathan Ward,
Aris Konstantinidis, Wojciech Zaremba, Long Ouyang,
Leonard Bogdonoff, Joshua Gross, David Medina, Sarah
Yoo, Teddy Lee, Ryan Lowe, Dan Mossing, Joost Huizinga,
Roger Jiang, Carroll Wainwright, Diogo Almeida, Steph
Lin, Marvin Zhang, Kai Xiao, Katarina Slama, Steven Bills,
Alex Gray, Jan Leike, Jakub Pachocki, Phil Tillet, Shantanu
Jain, Greg Brockman, Nick Ryder