What Is ChatGPT Doing: ... and Why Does It Work?
Ebook · 150 pages · 2 hours


About this ebook

Nobody expected this—not even its creators: ChatGPT has burst onto the scene as an AI capable of writing at a convincingly human level. But how does it really work? What's going on inside its "AI mind"? In this short book, prominent scientist and computation pioneer Stephen Wolfram provides a readable and engaging explanation that draws on his decades-long unique experience at the frontiers of science and technology. Find out how the success of ChatGPT brings together the latest neural net technology with foundational questions about language and human thought posed by Aristotle more than two thousand years ago.

Language: English
Publisher: Wolfram Media
Release date: March 9, 2023
ISBN: 9781579550820

    What Is ChatGPT Doing ... and Why Does It Work?

    Copyright © 2023 Stephen Wolfram, LLC

    Wolfram Media, Inc. | wolfram-media.com

    ISBN 978-1-57955-081-3 (paperback)

    ISBN 978-1-57955-082-0 (ebook)

    Technology/Computers

    Library of Congress Cataloging-in-Publication Data:

    Names: Wolfram, Stephen, 1959- author.

    Title: What is ChatGPT doing ... and why does it work? / Stephen Wolfram.

    Other titles: ChatGPT

    Description: First edition. | [Champaign, Illinois] : Wolfram Media, Inc., [2023] | Includes bibliographical references.

    Identifiers: LCCN 2023009927 (print) | LCCN 2023009928 (ebook) | ISBN 9781579550813 (paperback) | ISBN 9781579550820 (ebook)

    Subjects: LCSH: Natural language generation (Computer science)—Computer programs. | Neural networks (Computer science) | ChatGPT. | Wolfram language (Computer program language)

    Classification: LCC QA76.9.N38 W65 2023 (print) | LCC QA76.9.N38 (ebook) | DDC 006.3/5—dc23/eng/20230310

    LC record available at https://lccn.loc.gov/2023009927

    LC ebook record available at https://lccn.loc.gov/2023009928

    For permission to reproduce images, contact [email protected].

    Visit the online version of this text at wolfr.am/SW-ChatGPT and wolfr.am/ChatGPT-WA. Click any picture to copy the code behind it.

    ChatGPT screenshots were generated with GPT-3, OpenAI’s AI system that produces natural language.

    First edition.

    Contents

    Preface

    What Is ChatGPT Doing ... and Why Does It Work?

    It’s Just Adding One Word at a Time · Where Do the Probabilities Come From? · What Is a Model? · Models for Human-Like Tasks · Neural Nets · Machine Learning, and the Training of Neural Nets · The Practice and Lore of Neural Net Training · Surely a Network That’s Big Enough Can Do Anything! · The Concept of Embeddings · Inside ChatGPT · The Training of ChatGPT · Beyond Basic Training · What Really Lets ChatGPT Work? · Meaning Space and Semantic Laws of Motion · Semantic Grammar and the Power of Computational Language · So ... What Is ChatGPT Doing, and Why Does It Work? · Thanks

    Wolfram|Alpha as the Way to Bring Computational Knowledge Superpowers to ChatGPT

    ChatGPT and Wolfram|Alpha · A Basic Example · A Few More Examples · The Path Forward

    Additional Resources

    Preface

    This short book is an attempt to explain from first principles how and why ChatGPT works. In some ways it’s a story about technology. But it’s also a story about science. As well as about philosophy. And to tell the story, we’ll have to bring together a remarkable range of ideas and discoveries made across many centuries.

    For me it’s exciting to see so many things I’ve so long been interested in come together in a burst of sudden progress. From the complex behavior of simple programs to the core character of language and meaning, and the practicalities of large computer systems—all of these are part of the ChatGPT story.

    ChatGPT is based on the concept of neural nets—originally invented in the 1940s as an idealization of the operation of brains. I myself first programmed a neural net in 1983—and it didn’t do anything interesting. But 40 years later, with computers that are effectively a million times faster, with billions of pages of text on the web, and after a whole series of engineering innovations, the situation is quite different. And—to everyone’s surprise—a neural net that is a billion times larger than the one I had in 1983 is capable of doing what was thought to be that uniquely human thing of generating meaningful human language.

    This book consists of two pieces that I wrote soon after ChatGPT debuted. The first is an explanation of ChatGPT and its ability to do the very human thing of generating language. The second looks forward to ChatGPT being able to use computational tools to go beyond what humans can do, and in particular being able to leverage the computational knowledge superpowers of our Wolfram|Alpha system.

    It’s only been three months since ChatGPT launched, and we are just beginning to understand its implications, both practical and intellectual. But for now its arrival is a reminder that even after everything that has been invented and discovered, surprises are still possible.

    Stephen Wolfram

    February 28, 2023

    What Is ChatGPT Doing ... and Why Does It Work?

    (February 14, 2023)

    It’s Just Adding One Word at a Time

    That ChatGPT can automatically generate something that reads even superficially like human-written text is remarkable, and unexpected. But how does it do it? And why does it work? My purpose here is to give a rough outline of what’s going on inside ChatGPT—and then to explore why it is that it can do so well in producing what we might consider to be meaningful text. I should say at the outset that I’m going to focus on the big picture of what’s going on—and while I’ll mention some engineering details, I won’t get deeply into them. (And the essence of what I’ll say applies just as well to other current large language models [LLMs] as to ChatGPT.)

    The first thing to explain is that what ChatGPT is always fundamentally trying to do is to produce a "reasonable continuation" of whatever text it’s got so far, where by "reasonable" we mean "what one might expect someone to write after seeing what people have written on billions of webpages, etc."

    So let’s say we’ve got the text "The best thing about AI is its ability to". Imagine scanning billions of pages of human-written text (say on the web and in digitized books) and finding all instances of this text—then seeing what word comes next what fraction of the time. ChatGPT effectively does something like this, except that (as I’ll explain) it doesn’t look at literal text; it looks for things that in a certain sense "match in meaning". But the end result is that it produces a ranked list of words that might follow, together with "probabilities":
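The literal-matching version of this idea can be sketched in a few lines of Python. This is only a toy illustration (the tiny corpus, the `next_word_probabilities` name, and literal word-by-word matching are my own simplifications, not what ChatGPT actually does):

```python
from collections import Counter

def next_word_probabilities(corpus, prefix):
    """Count which word follows `prefix` in each text of a corpus,
    then turn the counts into probabilities (literal matching only)."""
    prefix_words = prefix.split()
    n = len(prefix_words)
    counts = Counter()
    for text in corpus:
        words = text.split()
        for i in range(len(words) - n):
            if words[i:i + n] == prefix_words:
                counts[words[i + n]] += 1
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# A made-up three-sentence "corpus":
corpus = [
    "the best thing about ai is its ability to learn",
    "the best thing about ai is its ability to adapt",
    "the best thing about ai is its ability to learn",
]
probs = next_word_probabilities(corpus, "its ability to")
# → {'learn': 0.666..., 'adapt': 0.333...}
```

On billions of webpages most long prefixes never occur verbatim, which is exactly why the literal-counting approach breaks down and a model that matches "in meaning" is needed.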

    And the remarkable thing is that when ChatGPT does something like write an essay what it’s essentially doing is just asking over and over again "given the text so far, what should the next word be?"—and each time adding a word. (More precisely, as I’ll explain, it’s adding a "token", which could be just a part of a word, which is why it can sometimes "make up new words".)
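The loop just described (ask for next-word probabilities, append one word, repeat) can be sketched as follows. The `model` argument here is a stand-in for any function mapping text to a word-to-probability dictionary; the stub below is invented for illustration and is of course nothing like ChatGPT’s real machinery:

```python
import random

def generate(model, prompt, n_words, seed=0):
    """Toy autoregressive loop: repeatedly ask `model` for next-word
    probabilities and append one sampled word to the text so far."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(n_words):
        probs = model(" ".join(words))
        if not probs:
            break  # the model offers no continuation
        choices, weights = zip(*probs.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

# A stub "model" that always predicts the same distribution:
def stub_model(text):
    return {"learn": 0.7, "adapt": 0.3}

print(generate(stub_model, "AI is able to", 3))
```

Note that each new word is fed back into the input, so every step sees everything generated so far.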

    But, OK, at each step it gets a list of words with probabilities. But which one should it actually pick to add to the essay (or whatever) that it’s writing? One might think it should be the "highest-ranked" word (i.e. the one to which the highest "probability" was assigned). But this is where a bit of voodoo begins to creep in. Because for some reason—that maybe one day we’ll have a scientific-style understanding of—if we always pick the highest-ranked word, we’ll typically get a very "flat" essay that never seems to "show any creativity" (and even sometimes repeats word for word). But if sometimes (at random) we pick lower-ranked words, we get a "more interesting" essay.

    The fact that there’s randomness here means that if we use the same prompt
