The document provides an overview of Large Language Models (LLMs), detailing their workings, including the architecture and prompting techniques. It discusses key models such as GPT-3, BERT, and BART, along with concepts like in-context learning and fine-tuning for specific tasks. The document emphasizes the importance of transformer architecture and tokenization in the functionality of LLMs.


Large Language Model (LLM)

How Does an LLM Work?

GPT-3: Parameters, Source Dataset


To Think

Understanding the Building Blocks of LLMs


Prompting and Prompt Engineering

In-context Learning
Chain of Thought Prompting
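Below is a minimal sketch of what these two prompting styles look like in practice. The `complete` helper is hypothetical and stands in for any text-completion LLM API.

```python
# Hypothetical helper: send a prompt to any text-completion LLM and
# return the generated continuation (stand-in for a real API call).
def complete(prompt: str) -> str:
    ...

# In-context (few-shot) learning: the task is demonstrated with examples
# inside the prompt itself; no model weights are updated.
few_shot_prompt = """Translate English to French.
English: cheese -> French: fromage
English: bread  -> French: pain
English: apple  -> French:"""

# Chain-of-thought prompting: ask the model to spell out intermediate
# reasoning steps before giving the final answer.
cot_prompt = """Q: Roger has 5 tennis balls. He buys 2 cans with 3 balls each.
How many tennis balls does he have now?
A: Let's think step by step."""

# answer = complete(few_shot_prompt)   # expected continuation: "pomme"
# reasoning = complete(cot_prompt)     # expected: steps ending in "11"
```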

Least-to-most Prompting

Prompt the LLM to decompose the problem into simpler subproblems and solve them in order, easiest first.
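A least-to-most style prompt can be sketched as two stages: first ask the model to decompose the problem into simpler subproblems, then solve them from easiest to hardest, reusing earlier answers. The example problem and the `complete` helper are illustrative only.

```python
# Stage 1: ask the model to break the problem into easier subproblems.
decompose_prompt = """Problem: A ticket costs $12 and a snack costs $3.
Sam buys 2 tickets and 4 snacks. How much does Sam spend in total?
What simpler subproblems should we answer first?"""

# Stage 2: solve the subproblems in order, then combine them.
solve_prompt = """Problem: A ticket costs $12 and a snack costs $3.
Sam buys 2 tickets and 4 snacks. How much does Sam spend in total?
Subproblem 1: How much do the tickets cost? 2 x $12 = $24.
Subproblem 2: How much do the snacks cost? 4 x $3 = $12.
Final question: How much does Sam spend in total?"""

# plan = complete(decompose_prompt)
# answer = complete(solve_prompt)   # expected: $24 + $12 = $36
```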

What Makes LLMs So Good


Transformer Architecture
Simplified Transformer Architecture
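As a concrete illustration of the core operation inside the transformer, here is a minimal NumPy sketch of single-head scaled dot-product self-attention (no masking, layer norms, or multi-head splitting); all dimensions are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.

    X:          (seq_len, d_model) token embeddings
    Wq, Wk, Wv: (d_model, d_k) learned projection matrices
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # token-to-token similarities
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ V                        # context-mixed representations

# Tiny example: 3 tokens, 4-dim embeddings, 2-dim attention head
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
Wq, Wk, Wv = (rng.normal(size=(4, 2)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # -> (3, 2)
```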

Tokenizing Text
Converting Tokens into Token IDs (see the sketch below)
Encoder Model, Decoder Model
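A small sketch of the tokenizing step, assuming the Hugging Face transformers library is available (the checkpoint name is just an example): the raw text is split into subword tokens, and each token is mapped to an integer ID that the model actually consumes.

```python
from transformers import AutoTokenizer

# Load a pretrained subword tokenizer (example checkpoint)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

text = "Large language models tokenize text."
tokens = tokenizer.tokenize(text)              # subword strings
ids = tokenizer.convert_tokens_to_ids(tokens)  # integer token IDs

print(tokens)   # e.g. ['large', 'language', 'models', 'token', '##ize', ...]
print(ids)      # the integer sequence fed to the encoder/decoder model
```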

BERT vs GPT
Encoder-Decoder Model
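To make the BERT vs GPT contrast concrete: an encoder like BERT predicts masked tokens using context on both sides, while a decoder like GPT generates the next token strictly left to right. A hedged sketch with the transformers pipelines (model names are illustrative):

```python
from transformers import pipeline

# Encoder (BERT-style): masked-token prediction with bidirectional context
fill = pipeline("fill-mask", model="bert-base-uncased")
print(fill("Paris is the [MASK] of France.")[0]["token_str"])   # likely "capital"

# Decoder (GPT-style): left-to-right next-token generation
generate = pipeline("text-generation", model="gpt2")
print(generate("Paris is the capital of", max_new_tokens=3)[0]["generated_text"])
```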

BART Model
What does the encoder actually do?
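Briefly: the encoder reads the whole input sequence and turns it into contextual vector representations; in an encoder-decoder model such as BART, the decoder then generates the output sequence conditioned on those representations. A short sketch of BART on a typical seq2seq task, summarization, assuming the transformers library (checkpoint name is illustrative):

```python
from transformers import pipeline

# BART is an encoder-decoder model; summarization is a typical seq2seq task.
summarize = pipeline("summarization", model="facebook/bart-large-cnn")

article = ("Large language models are trained on vast text corpora and can be "
           "adapted to many downstream tasks through prompting or fine-tuning, "
           "without task-specific architectures.")
print(summarize(article, max_length=25, min_length=5)[0]["summary_text"])
```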

How does GPT work?
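Conceptually, GPT generates text autoregressively: at each step it predicts a probability distribution over the next token given everything generated so far, appends the chosen token, and repeats. A minimal greedy-decoding sketch with transformers (GPT-2 used as an illustrative stand-in):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("The transformer architecture", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(10):                              # generate 10 tokens greedily
        logits = model(ids).logits                   # (1, seq_len, vocab_size)
        next_id = logits[:, -1, :].argmax(dim=-1)    # most likely next token
        ids = torch.cat([ids, next_id[:, None]], dim=-1)

print(tokenizer.decode(ids[0]))
```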


Fine-Tuning an LLM for Domain-Specific Tasks
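A heavily simplified sketch of full fine-tuning of a causal LM on domain-specific text, assuming transformers and PyTorch; a real setup would add a proper dataset, batching, evaluation, and often a parameter-efficient method such as LoRA. All names and examples below are illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")          # illustrative base model
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

domain_texts = [                                            # toy in-domain corpus
    "Q: What does the indemnity clause cover? A: ...",
    "Q: Define force majeure in this contract. A: ...",
]

model.train()
for epoch in range(3):                                      # a few passes over the data
    for text in domain_texts:
        batch = tokenizer(text, return_tensors="pt")
        # With labels set to the input IDs, the model returns the causal LM loss
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```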
