Programming Large Language Models with Azure Open AI: Conversational programming and prompt engineering with LLMs

Francesco Esposito
Programming Large Language Models with Azure Open AI: Conversational programming and prompt engineering with LLMs
Published with the authorization of Microsoft Corporation by: Pearson
Education, Inc.

Copyright © 2024 by Francesco Esposito.


All rights reserved. This publication is protected by copyright, and
permission must be obtained from the publisher prior to any prohibited
reproduction, storage in a retrieval system, or transmission in any form or by
any means, electronic, mechanical, photocopying, recording, or likewise. For
information regarding permissions, request forms, and the appropriate
contacts within the Pearson Education Global Rights & Permissions
Department, please visit www.pearson.com/permissions.
No patent liability is assumed with respect to the use of the information
contained herein. Although every precaution has been taken in the
preparation of this book, the publisher and author assume no responsibility
for errors or omissions. Nor is any liability assumed for damages resulting
from the use of the information contained herein.
ISBN-13: 978-0-13-828037-6
ISBN-10: 0-13-828037-1
Library of Congress Control Number: 2024931423

Trademarks
Microsoft and the trademarks listed at http://www.microsoft.com on the
“Trademarks” webpage are trademarks of the Microsoft group of companies.
All other marks are property of their respective owners.

Warning and Disclaimer


Every effort has been made to make this book as complete and as accurate as
possible, but no warranty or fitness is implied. The information provided is
on an “as is” basis. The author, the publisher, and Microsoft Corporation
shall have neither liability nor responsibility to any person or entity with
respect to any loss or damages arising from the information contained in this
book or from the use of the programs accompanying it.

Special Sales
For information about buying this title in bulk quantities, or for special sales
opportunities (which may include electronic versions; custom cover designs;
and content particular to your business, training goals, marketing focus, or
branding interests), please contact our corporate sales department at
[email protected] or (800) 382-3419.
For government sales inquiries, please contact
[email protected].
For questions about sales outside the U.S., please contact
[email protected].

Editor-in-Chief
Brett Bartow

Executive Editor
Loretta Yates

Associate Editor
Shourav Bose

Development Editor
Kate Shoup

Managing Editor
Sandra Schroeder

Senior Project Editor


Tracey Croom

Copy Editor
Dan Foster

Indexer
Timothy Wright

Proofreader
Donna E. Mulder

Technical Editor
Dino Esposito

Editorial Assistant
Cindy Teeters

Cover Designer
Twist Creative, Seattle

Compositor
codeMantra

Graphics
codeMantra

Figure Credits
Figure 4.1: LangChain, Inc
Figures 7.1, 7.2, 7.4: Snowflake, Inc
Figure 8.2: SmartBear Software
Figure 8.3: Postman, Inc
Dedication

A I.
Because not dedicating a book to you would have been sacrilege.
Contents at a Glance

Introduction

CHAPTER 1 The genesis and an analysis of large language models


CHAPTER 2 Core prompt learning techniques
CHAPTER 3 Engineering advanced learning prompts
CHAPTER 4 Mastering language frameworks
CHAPTER 5 Security, privacy, and accuracy concerns
CHAPTER 6 Building a personal assistant
CHAPTER 7 Chat with your data
CHAPTER 8 Conversational UI

Appendix: Inner functioning of LLMs

Index
Contents

Acknowledgments
Introduction

Chapter 1 The genesis and an analysis of large language models


LLMs at a glance
History of LLMs
Functioning basics
Business use cases
Facts of conversational programming
The emerging power of natural language
LLM topology
Future perspective
Summary

Chapter 2 Core prompt learning techniques


What is prompt engineering?
Prompts at a glance
Alternative ways to alter output
Setting up for code execution
Basic techniques
Zero-shot scenarios
Few-shot scenarios
Chain-of-thought scenarios
Fundamental use cases
Chatbots
Translating
LLM limitations
Summary

Chapter 3 Engineering advanced learning prompts


What’s beyond prompt engineering?
Combining pieces
Fine-tuning
Function calling
Homemade-style
OpenAI-style
Talking to (separated) data
Connecting data to LLMs
Embeddings
Vector store
Retrieval augmented generation
Summary

Chapter 4 Mastering language frameworks


The need for an orchestrator
Cross-framework concepts
Points to consider
LangChain
Models, prompt templates, and chains
Agents
Data connection
Microsoft Semantic Kernel
Plug-ins
Data and planners
Microsoft Guidance
Configuration
Main features
Summary

Chapter 5 Security, privacy, and accuracy concerns


Overview
Responsible AI
Red teaming
Abuse and content filtering
Hallucination and performances
Bias and fairness
Security and privacy
Security
Privacy
Evaluation and content filtering
Evaluation
Content filtering
Summary

Chapter 6 Building a personal assistant


Overview of the chatbot web application
Scope
Tech stack
The project
Setting up the LLM
Setting up the project
Integrating the LLM
Possible extensions
Summary

Chapter 7 Chat with your data


Overview
Scope
Tech stack
What is Streamlit?
A brief introduction to Streamlit
Main UI features
Pros and cons in production
The project
Setting up the project and base UI
Data preparation
LLM integration
Progressing further
Retrieval augmented generation versus fine-tuning
Possible extensions
Summary

Chapter 8 Conversational UI
Overview
Scope
Tech stack
The project
Minimal API setup
OpenAPI
LLM integration
Possible extensions
Summary

Appendix: Inner functioning of LLMs

Index
Acknowledgments

In the spring of 2023, when I told my dad how cool Azure OpenAI was
becoming, his reply was kind of a shock: “Why don’t you write a book about
it?” He said it so naturally that it hit me as if he really thought I could do it.
In fact, he added, “Are you up for it?” Then there was no need to say more.
Loretta Yates at Microsoft Press enthusiastically accepted my proposal, and
the story of this book began in June 2023.
AI has been a hot topic for the better part of a decade, but the emergence
of new-generation large language models (LLMs) has propelled it into the
mainstream. The increasing number of people using them translates to more
ideas, more opportunities, and new developments. And this makes all the
difference.
Hence, the book you hold in your hands can’t be the ultimate and
definitive guide to AI and LLMs because the speed at which AI and LLMs
evolve is impressive and because—by design—every book is an act of
approximation, a snapshot of knowledge taken at a specific moment in time.
Approximation inevitably leads to some form of dissatisfaction, and
dissatisfaction leads us to take on new challenges. In this regard, I wish for
myself decades of dissatisfaction. And a few more years of being on the stage
presenting books written for a prestigious publisher—it does wonders for my
ego.
First, I feel somewhat indebted to all my first dates since May because
they had to endure monologues lasting at least 30 minutes on LLMs and
some weird new approach to transformers.
True thanks are a private matter, but publicly I want to thank Martina first,
who cowrote the appendix with me and always knows what to say to make
me better. My gratitude to her is keeping a promise she knows. Thank you,
Martina, for being an extraordinary human being.
To Gianfranco, who taught me the importance of discussing and
expressing, even loudly, when something doesn’t please us, and taught me to
always ask, because the worst thing that can happen is hearing a no. Every
time I engage in a discussion, I will think of you.
I also want to thank Matteo, Luciano, Gabriele, Filippo, Daniele,
Riccardo, Marco, Jacopo, Simone, Francesco, and Alessia, who worked with
me and supported me during my (hopefully not too frequent) crises. I also
have warm thoughts for Alessandro, Antonino, Sara, Andrea, and Cristian
who tolerated me whenever we weren’t like 25-year-old youngsters because I
had to study and work on this book.
To Mom and Michela, who put up with me before the book and probably
will continue after. To my grandmas. To Giorgio, Gaetano, Vito, and Roberto
for helping me to grow every day. To Elio, who taught me how to dress and
see myself in more colors.
As for my dad, Dino, he never stops teaching me new things—for
example, how to get paid for doing things you would just love to do, like
being the technical editor of this book. Thank you, both as a father and as an
editor. You bring to my mind a song you well know: “Figlio, figlio, figlio.”
Beyond Loretta, if this book came to life, it was also because of the hard
work of Shourav, Kate, and Dan. Thank you for your patience and for
trusting me so much.
This book is my best until the next one!
Introduction

This is my third book on artificial intelligence (AI), and the first I wrote on
my own, without the collaboration of a coauthor. The sequence in which my
three books have been published reflects my own learning path, motivated by
a genuine thirst to understand AI for far more than mere business
considerations. The first book, published in 2020, introduced the
mathematical concepts behind machine learning (ML) that make it possible to
classify data and make timely predictions. The second book, which focused
on the Microsoft ML.NET framework, was about concrete applications—in
other words, how to make fancy algorithms work effectively on amounts of
data hiding their complexity behind the charts and tables of a familiar web
front end.
Then came ChatGPT.
The technology behind astonishing applications like ChatGPT is called a
large language model (LLM), and LLMs are the subject of this third book.
LLMs add a crucial capability to AI: the ability to generate content in
addition to classifying and predicting. LLMs represent a paradigm shift,
raising the bar of communication between humans and computers and
opening the floodgates to new applications that for decades we could only
dream of.
And for decades, we did dream of these applications. Literature and
movies presented various supercomputers capable of crunching any sort of
data to produce human-intelligible results. An extremely popular example
was HAL 9000—the computer that governed the spaceship Discovery in the
movie 2001: A Space Odyssey (1968). Another famous one was JARVIS
(Just A Rather Very Intelligent System), the computer that served as Tony
Stark’s home assistant in Iron Man and other movies in the Marvel Comics
universe.
Often, all that the human characters in such books and movies do is
simply “load data into the machine,” whether in the form of paper
documents, digital files, or media content. Next, the machine autonomously
figures out the content, learns from it, and communicates back to humans
using natural language. But of course, those supercomputers were conceived
by authors; they were only science fiction. Today, with LLMs, it is possible
to devise and build concrete applications that not only make human–
computer interaction smooth and natural, but also turn the old dream of
simply “loading data into the machine” into a dazzling reality.
This book shows you how to build software applications using the same
type of engine that fuels ChatGPT to autonomously communicate with users
and orchestrate business tasks driven by plain textual prompts. No more, no
less—and as easy and striking as it sounds!

Who should read this book


Software architects, lead developers, and individuals with a background in
programming—particularly those familiar with languages like Python and
possibly C# (for ASP.NET Core)—will find the content in this book
accessible and valuable. In the vast realm of software professionals who
might find the book useful, I’d call out those who have an interest in ML,
especially in the context of LLMs. I’d also list cloud and IT professionals
with an interest in using cloud services (specifically Microsoft Azure) or in
sophisticated, real-world applications of human-like language in software.
While this book focuses primarily on the services available on the Microsoft
Azure platform, the concepts covered are easily applicable to analogous
platforms. At the end of the day, using an LLM involves little more than
calling a bunch of API endpoints, and, by design, APIs are completely
independent of the underlying platform.
In summary, this book caters to a diverse audience, including
programmers, ML enthusiasts, cloud-computing professionals, and those
interested in natural language processing, with a specific emphasis on
leveraging Azure services to program LLMs.
Assumptions

To fully grasp the value of a programming book on LLMs, there are a couple
of prerequisites, including proficiency in foundational programming concepts
and a familiarity with ML fundamentals. Beyond these, a working knowledge
of relevant programming languages and frameworks, such as Python and
possibly ASP.NET Core, is helpful, as is an appreciation for the significance
of classic natural language processing in the context of business domains.
Overall, a blend of programming expertise, ML awareness, and linguistic
understanding is recommended for a comprehensive grasp of the book’s
content.

This book might not be for you if…

This book might not be for you if you’re just seeking a reference book to find
out in detail how to use a particular pattern or framework. Although the book
discusses advanced aspects of popular frameworks (for example, LangChain
and Semantic Kernel) and APIs (such as OpenAI and Azure OpenAI), it does
not qualify as a programming reference on any of these. The focus of the
book is on using LLMs to build useful applications in the business domains
where LLMs really fit well.

Organization of this book

This book explores the practical application of existing LLMs in developing
versatile business domain applications. In essence, an LLM is an ML model
trained on extensive text data, enabling it to comprehend and generate
human-like language. To convey knowledge about these models, this book
focuses on three key aspects:
The first three chapters delve into scenarios for which an LLM is
effective and introduce essential tools for crafting sophisticated
solutions. These chapters provide insights into conversational
programming and prompting as a new, advanced, yet structured,
approach to coding.
The next two chapters emphasize patterns, frameworks, and techniques
for unlocking the potential of conversational programming. This
involves using natural language in code to define workflows, with the
LLM-based application orchestrating existing APIs.
The final three chapters present concrete, end-to-end demo examples
featuring Python and ASP.NET Core. These demos showcase
progressively advanced interactions between logic, data, and existing
business processes. In the first demo, you learn how to take text from an
email and craft a fitting draft for a reply. In the second demo, you apply
a retrieval augmented generation (RAG) pattern to formulate responses
to questions based on document content. Finally, in the third demo, you
learn how to build a hotel booking application with a chatbot that uses a
conversational interface to ascertain the user’s needs (dates, room
preferences, budget) and seamlessly places (or denies) reservations
according to the underlying system’s state, without using fixed user
interface elements or formatted data input controls.

Downloads: notebooks and samples


Python and Polyglot notebooks containing the code featured in the initial part
of the book, as well as the complete codebases for the examples tackled in the
latter part of the book, can be accessed on GitHub at:
https://github.com/Youbiquitous/programming-llm

Errata, updates, & book support


We’ve made every effort to ensure the accuracy of this book and its
companion content. You can access updates to this book—in the form of a
list of submitted errata and their related corrections—at:
MicrosoftPressStore.com/LLMAzureAI/errata
If you discover an error that is not already listed, please submit it to us at
the same page.
For additional book support and information, please visit
MicrosoftPressStore.com/Support.
Please note that product support for Microsoft software and hardware is
not offered through the previous addresses. For help with Microsoft software
or hardware, go to http://support.microsoft.com.

Stay in touch
Let’s keep the conversation going! We’re on X / Twitter:
http://twitter.com/MicrosoftPress.
Chapter 1

The genesis and an analysis of large language models

Luring someone into reading a book is never a small feat. If it’s a novel, you
must convince them that it’s a beautiful story, and if it’s a technical book,
you must assure them that they’ll learn something. In this case, we’ll try to
learn something.
Over the past two years, generative AI has become a prominent buzzword.
It refers to a field of artificial intelligence (AI) focused on creating systems
that can generate new, original content autonomously. Large language
models (LLMs) like GPT-3 and GPT-4 are notable examples of generative
AI, capable of producing human-like text based on given input.
The rapid adoption of LLMs is leading to a paradigm shift in
programming. This chapter discusses this shift, the reasons for it, and its
prospects. Its prospects include conversational programming, in which you
explain with words—rather than with code—what you want to achieve. This
type of programming will likely become very prevalent in the future.
No promises, though. As you’ll soon see, explaining with words what you
want to achieve is often as difficult as writing code.
This chapter covers topics that didn’t find a place elsewhere in this book.
It’s not necessary to read every section or follow a strict order. Take and read
what you find necessary or interesting. I expect you will come back to read
certain parts of this chapter after you finish the last one.
LLMs at a glance

To navigate the realm of LLMs as a developer or manager, it’s essential to
comprehend the origins of generative AI and to discern its distinctions from
predictive AI. This chapter has one key goal: to provide insights into the
training and business relevance of LLMs, reserving the intricate mathematical
details for the appendix.
Our journey will span from the historical roots of AI to the fundamentals
of LLMs, including their training, inference, and the emergence of
multimodal models. Delving into the business landscape, we’ll also spotlight
current popular use cases of generative AI and textual models.
This introduction doesn’t aim to cover every detail. Rather, it intends to
equip you with sufficient information to address and cover any potential gaps
in knowledge, while working toward demystifying the intricacies surrounding
the evolution and implementation of LLMs.

History of LLMs
The evolution of LLMs intersects with both the history of conventional AI
(often referred to as predictive AI) and the domain of natural language
processing (NLP). NLP encompasses natural language understanding (NLU),
which attempts to reduce human speech into a structured ontology, and
natural language generation (NLG), which aims to produce text that is
understandable by humans.
LLMs are a subtype of generative AI focused on producing text based on
some kind of input, usually in the form of written text (referred to as a
prompt) but now expanding to multimodal inputs, including images, video,
and audio. At a glance, most LLMs can be seen as a very advanced form of
autocomplete, as they generate the next word. Although they specifically
generate text, LLMs do so in a manner that simulates human reasoning,
enabling them to perform a variety of intricate tasks. These tasks include
sentiment analysis, summarization, translation, entity and intent recognition,
structured information extraction, document generation, and so on.
LLMs represent a natural extension of the age-old human aspiration to
construct automatons (ancestors to contemporary robots) and imbue them
with a degree of reasoning and language. They can be seen as a brain for such
automatons, able to respond to an external input.

AI beginnings
Modern software—and AI as a vibrant part of it—represents the culmination
of an embryonic vision that has traversed the minds of great thinkers since
the 17th century. Various mathematicians, philosophers, and scientists, in
diverse ways and at varying levels of abstraction, envisioned a universal
language capable of mechanizing the acquisition and sharing of knowledge.
Gottfried Leibniz (1646–1716), in particular, contemplated the idea that at
least a portion of human reasoning could be mechanized.
The modern conceptualization of intelligent machinery took shape in the
mid-20th century, courtesy of renowned mathematicians Alan Turing and
Alonzo Church. Turing’s exploration of “intelligent machinery” in 1947,
coupled with his groundbreaking 1950 paper, “Computing Machinery and
Intelligence,” laid the cornerstone for the Turing test—a pivotal concept in
AI. This test challenged machines to exhibit human behavior
(indistinguishable by a human judge), ushering in the era of AI as a scientific
discipline.

Note
Considering recent advancements, a reevaluation of the original
Turing test may be warranted to incorporate a more precise
definition of human and rational behavior.

NLP
NLP is an interdisciplinary field within AI that aims to bridge the interaction
between computers and human language. While historically rooted in
linguistic approaches, distinguishing itself from the contemporary sense of
AI, NLP has perennially been a branch of AI in a broader sense. In fact, the
overarching goal has consistently been to artificially replicate an expression
of human intelligence—specifically, language.
The primary goal of NLP is to enable machines to understand, interpret,
and generate human-like language in a way that is both meaningful and
contextually relevant. This interdisciplinary field draws from linguistics,
computer science, and cognitive psychology to develop algorithms and
models that facilitate seamless interaction between humans and machines
through natural language.
The history of NLP spans several decades, evolving from rule-based
systems in the early stages to contemporary deep-learning approaches,
marking significant strides in the understanding and processing of human
language by computers.
Originating in the 1950s, early efforts, such as the Georgetown-IBM
experiment in 1954, aimed at machine translation from Russian to English,
laying the foundation for NLP. However, these initial endeavors were
primarily linguistic in nature. Subsequent decades witnessed the influence of
Chomskyan linguistics, shaping the field’s focus on syntactic and
grammatical structures.
The 1980s brought a shift toward statistical methods, like n-grams, using
co-occurrence frequencies of words to make predictions. An example was
IBM’s Candide system for statistical machine translation. However, rule-based
approaches struggled with the complexity of natural language. The 1990s saw
a resurgence of statistical approaches and the advent of machine learning
(ML) techniques such as hidden Markov models (HMMs) and statistical
language models. The introduction of the Penn Treebank, a 7-million word
dataset of part-of-speech tagged text, and statistical machine translation
systems marked significant milestones during this period.
In the 2000s, the rise of data-driven approaches and the availability of
extensive textual data on the internet rejuvenated the field. Probabilistic
models, including maximum-entropy models and conditional random fields,
gained prominence. Begun in the 1980s but finalized years later, the
development of WordNet, a lexical-semantic database of English (with its
groups of synonyms, or synsets, and their relations), contributed to a
deeper understanding of word semantics.
The landscape transformed in the 2010s with the emergence of deep
learning made possible by a new generation of graphics processing units
(GPUs) and increased computing power. Neural network architectures—
particularly transformers like Bidirectional Encoder Representations from
Transformers (BERT) and Generative Pretrained Transformer (GPT)—
revolutionized NLP by capturing intricate language patterns and contextual
information. The focus shifted to data-driven and pretrained language
models, allowing for fine-tuning of specific tasks.

Predictive AI versus generative AI


Predictive AI and generative AI represent two distinct paradigms, each
deeply entwined with advancements in neural networks and deep-learning
architectures.
Predictive AI, often associated with supervised learning, traces its roots
back to classical ML approaches that emerged in the mid-20th century. Early
models, such as perceptrons, paved the way for the resurgence of neural
networks in the 1980s. However, it wasn’t until the advent of deep learning in
the 21st century—with the development of deep neural networks,
convolutional neural networks (CNNs) for image recognition, and recurrent
neural networks (RNNs) for sequential data—that predictive AI witnessed a
transformative resurgence. The introduction of long short-term memory
(LSTM) units enabled more effective modeling of sequential dependencies in
data.
Generative AI, on the other hand, has seen remarkable progress, propelled
by advancements in unsupervised learning and sophisticated neural network
architectures (the same used for predictive AI). The concept of generative
models dates to the 1990s, but the breakthrough came with the introduction
of generative adversarial networks (GANs) in 2014, which showcased the power
of adversarial training. A GAN pairs a generator that creates data with a
discriminator that distinguishes real data from generated data. During
training, the discriminator's judgments about the authenticity of generated
samples drive the refinement of the generator, which learns to produce
increasingly realistic output, from lifelike images to coherent text.
Table 1-1 provides a recap of the main types of learning processes.

TABLE 1-1 Main types of learning processes

Supervised: Trained on labeled data, where each input has a corresponding
label. Training adjusts the parameters to minimize the prediction error. Use
cases: classification, regression.

Self-supervised: Unsupervised learning in which the model generates its own
labels and learns to fill in the blank (predicting parts of the input data
from other parts). Use cases: NLP, computer vision.

Semi-supervised: Combines labeled and unlabeled data, using the labeled data
for supervised tasks and the unlabeled data for generalization. Use cases:
scenarios with limited labeled data, such as image classification.

Unsupervised: Trained on data without explicit supervision; training
identifies inherent structures or relationships in the data. Use cases:
clustering, dimensionality reduction, generative modeling.

The historical trajectory of predictive and generative AI underscores the
symbiotic relationship with neural networks and deep learning. Predictive AI
leverages deep-learning architectures like CNNs for image processing and
RNNs/LSTMs for sequential data, achieving state-of-the-art results in tasks
ranging from image recognition to natural language understanding.
Generative AI, fueled by the capabilities of GANs and large-scale language
models, showcases the creative potential of neural networks in generating
novel content.

LLMs
An LLM, exemplified by OpenAI’s GPT series, is a generative AI system
built on advanced deep-learning architectures like the transformer (more on
this in the appendix).
These models operate on the principle of unsupervised and self-supervised
learning, training on vast text corpora to comprehend and generate coherent
and contextually relevant text. They output sequences of text (that can be in
the form of proper text but also can be protein structures, code, SVG, JSON,
XML, and so on), demonstrating a remarkable ability to continue and expand
on given prompts in a manner that emulates human language.
The architecture of these models, particularly the transformer architecture,
enables them to capture long-range dependencies and intricate patterns in
data. The concept of word embeddings, a crucial precursor, represents words
as continuous vectors (Mikolov et al. in 2013 through Word2Vec),
contributing to the model’s understanding of semantic relationships between
words. Word embeddings form the first “layer” of an LLM.
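
To make the idea concrete, here is a minimal sketch in Python, with made-up
three-dimensional vectors (real embeddings have hundreds or thousands of
dimensions), showing how semantic similarity between words becomes a
geometric measure such as cosine similarity:

import numpy as np

# Toy embedding table: these vectors are invented for illustration only.
embeddings = {
    "king":  np.array([0.80, 0.65, 0.10]),
    "queen": np.array([0.78, 0.70, 0.12]),
    "apple": np.array([0.05, 0.10, 0.90]),
}

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: close to 1 for similar directions.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high: related words
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # lower: unrelated words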
The generative nature of the latest models enables them to be versatile in
output, allowing for tasks such as text completion, summarization, and
creative text generation. Users can prompt the model with various queries or
partial sentences, and the model autonomously generates coherent and
contextually relevant completions, demonstrating its ability to understand and
mimic human-like language patterns.
The journey began with the introduction of word embeddings in 2013,
notably with Mikolov et al.’s Word2Vec model, revolutionizing semantic
representation. RNNs and LSTM architectures followed, addressing
challenges in sequence processing and long-range dependencies. The
transformative shift arrived with the introduction of the transformer
architecture in 2017, allowing for parallel processing and significantly
improving training times.
In 2018, Google researchers Devlin et al. introduced BERT. BERT
adopted a bidirectional context prediction approach. During pretraining,
BERT is exposed to a masked language modeling task in which a random
subset of words in a sentence is masked and the model predicts those masked
words based on both left and right context. This bidirectional training allows
BERT to capture more nuanced contextual relationships between words. This
makes it particularly effective in tasks requiring a deep understanding of
context, such as question answering and sentiment analysis.
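
As a quick illustration of masked-word prediction, the following sketch,
assuming the Hugging Face transformers package is installed and the
bert-base-uncased checkpoint can be downloaded, asks BERT to fill in a masked
token using both the left and the right context:

from transformers import pipeline

# The fill-mask pipeline downloads the model weights on first use.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT predicts the [MASK] token from the surrounding words on both sides.
for candidate in fill_mask("The capital of France is [MASK]."):
    print(candidate["token_str"], round(candidate["score"], 3))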
During the same period, OpenAI’s GPT series marked a paradigm shift in
NLP, starting with GPT in 2018 and progressing through GPT-2 in 2019, to
GPT-3 in 2020, and GPT-3.5-turbo, GPT-4, and GPT-4-turbo with vision (with
multimodal inputs) in 2023. As autoregressive models, these predict the next
token (which is an atomic element of natural language as it is processed by
machines) or word in a sequence based on the preceding context. GPT’s
autoregressive approach, predicting one token at a time, allows it to generate
coherent and contextually relevant text, showcasing versatility and language
understanding. These models are huge, however. For example, GPT-3 has a
massive scale of 175 billion parameters. (Detailed information about the size
of GPT-3.5-turbo and GPT-4 was not available at the time of this writing.)
The fact is, these models can scale and generalize, thus reducing the need
for task-specific fine-tuning.

Functioning basics
The core principle guiding the functionality of most LLMs is autoregressive
language modeling, wherein the model takes input text and systematically
predicts the subsequent token or word (more on the difference between these
two terms shortly) in the sequence. This token-by-token prediction process is
crucial for generating coherent and contextually relevant text. However, as
emphasized by Yann LeCun, this approach can accumulate errors; if the N-th
token is incorrect, the model may persist in assuming its correctness,
potentially leading to inaccuracies in the generated text.
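The token-by-token loop itself is simple to picture. The following toy
sketch is self-contained: the "model" is just a hand-written table of
next-word probabilities, standing in for the neural network that a real LLM
evaluates at every step, and greedy decoding always picks the most probable
continuation:

# Toy next-word distribution; a real LLM computes this with a neural network
# conditioned on the whole preceding context, not just the last word.
next_word_probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.8, "sat": 0.2},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(prompt, max_tokens=5):
    tokens = prompt.split()
    for _ in range(max_tokens):
        candidates = next_word_probs.get(tokens[-1])
        if not candidates:
            break
        # Greedy decoding: append the most probable next token and repeat.
        tokens.append(max(candidates, key=candidates.get))
    return " ".join(tokens)

print(generate("the"))  # "the cat sat down"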
Until 2020, fine-tuning was the predominant method for tailoring models
to specific tasks. Recent advancements, however—particularly exemplified
by larger models like GPT-3—have introduced prompt engineering. This
allows these models to achieve task-specific outcomes without conventional
fine-tuning, relying instead on precise instructions provided as prompts.
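To give a flavor of prompt-driven, task-specific behavior with no
fine-tuning, here is a minimal sketch assuming the openai Python package
(version 1 or later) and an Azure OpenAI resource; the endpoint, API key,
API version, and deployment name are placeholders you would replace with your
own:

from openai import AzureOpenAI

# Placeholders: point these at your own Azure OpenAI resource and deployment.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-api-key>",
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="<your-deployment-name>",  # the name of your deployed chat model
    temperature=0,
    messages=[
        {"role": "system",
         "content": "Classify the sentiment of the user's text as Positive or Negative."},
        {"role": "user",
         "content": "The check-in was slow, but the room and the view were wonderful."},
    ],
)
print(response.choices[0].message.content)

The same deployed model handles translation, summarization, or extraction
simply by changing the instructions in the prompt.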
Models such as those found in the GPT series are intricately crafted to
assimilate comprehensive knowledge about the syntax, semantics, and
underlying ontology inherent in human language corpora. While proficient at
capturing valuable linguistic information, it is imperative to acknowledge that
these models may also inherit inaccuracies and biases present in their training
corpora.

Different training approaches


An LLM can be trained with different goals, each requiring a different
approach. The three prominent methods are as follows:
Causal language modeling (CLM) This autoregressive method is used
in models like OpenAI’s GPT series. CLM trains the model to predict
the next token in a sequence based on preceding tokens. Although
effective for tasks like text generation and summarization, CLM models
possess a unidirectional context, only considering past context during
predictions. We will focus on this kind of model, as it is the most used
architecture at the moment.
Masked language modeling (MLM) This method is employed in
models like BERT, where a percentage of tokens in the input sequence
are randomly masked and the model predicts the original tokens based
on the surrounding context. This bidirectional approach is advantageous
for tasks such as text classification, sentiment analysis, and named entity
recognition. It is not suitable for pure text-generation tasks because in
those cases the model should rely only on the past, or “left part,” of the
input, without looking at the “right part,” or the future.
Sequence-to-sequence (Seq2Seq) These models, which feature an
encoder-decoder architecture, are used in tasks like machine translation
and summarization. The encoder processes the input sequence,
generating a latent representation used by the decoder to produce the
output sequence. This approach excels in handling complex tasks
involving input-output transformations, which are commonly used for
tasks where the input and output have a clear alignment during training,
such as translation tasks.
The key disparities lie in their objectives, architectures, and suitability for
specific tasks. CLM focuses on predicting the next token and excels in text
generation, MLM specializes in (bidirectional) context understanding, and
Seq2Seq is adept at generating coherent output text in the form of sequences.
And while CLM models are suitable for autoregressive tasks, MLM models
understand and embed the context, and Seq2Seq models handle input-output
transformations. Models may also be pretrained on auxiliary tasks, like next
sentence prediction (NSP), which tests their understanding of data
distribution.
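
To keep the distinction between the first two objectives concrete, the
following toy sketch (an invented sentence, two masked positions instead of
BERT's roughly 15 percent, and no real model involved) shows how the same
text yields different training examples under CLM and MLM:

import random

tokens = ["the", "cat", "sat", "on", "the", "mat"]

# Causal language modeling: each target token is predicted from the tokens before it.
clm_examples = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]
# e.g., (["the", "cat"], "sat")

# Masked language modeling: hide some tokens and predict them from both sides.
mask_positions = random.sample(range(len(tokens)), k=2)
masked, targets = tokens[:], {}
for i in mask_positions:
    targets[i] = masked[i]
    masked[i] = "[MASK]"

print(clm_examples)
print(masked, targets)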

The transformer model


The transformer architecture forms the foundation for modern LLMs.
Vaswani et al. presented the transformer model in a paper, “Attention Is All
You Need,” released in December 2017. Since then, NLP has been
completely revolutionized. Unlike previous models, which rely on sequential
processing, transformers employ an attention mechanism that allows for
parallelization and captures long-range dependencies.
The original model consists of an encoder and decoder, both articulated in
multiple self-attention processing layers. Self-attention means that the
representation of each word is computed by examining and weighing its
contextual information.
In the encoder, input sequences are embedded and processed in parallel
through the layers, thus capturing intricate relationships between words. The
decoder generates output sequences, using the encoder’s contextual
information. Throughout the training process, the decoder learns to predict
the next word by analyzing the preceding words.
The transformer incorporates multiple layers of decoders to enhance its
capacity for language generation. The transformer’s design includes a context
window, which determines the length of the sequence the model considers
during inference and training. Larger context windows offer a broader scope
but incur higher computational costs, while smaller windows risk missing
crucial long-range dependencies. The real “brain” that allows transformers to
understand context and excel in tasks like translation and summarization is
the self-attention mechanism. There’s nothing like consciousness or neuronal
learning in today’s LLMs.
The self-attention mechanism allows the LLM to selectively focus on
different parts of the input sequence instead of treating the entire input in the
same way. Because of this, it needs fewer parameters to model long-term
dependencies and can capture relationships between words placed far away
from each other in the sequence. It’s simply a matter of guessing the next
words on a statistical basis, although it really seems smart and human.
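For readers who like to see the arithmetic, here is a minimal NumPy sketch of
scaled dot-product self-attention for a single head, using random toy values;
real models add learned projection matrices per layer, multiple heads,
positional information, and, in decoders, a causal mask that hides future
tokens:

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Project each token embedding into query, key, and value vectors.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Score every token against every other token, scaled by sqrt of key size.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Normalize the scores into attention weights and mix the value vectors.
    return softmax(scores) @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                      # 4 tokens, embedding dimension 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)       # (4, 8): one contextualized vector per token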
While the original transformer architecture was a Seq2Seq model,
converting entire sequences from a source to a target format, nowadays the
current approach for text generation is an autoregressive approach.
Deviating from the original architecture, some models, including GPTs,
don’t include an explicit encoder part, relying only on the decoder. In this
architecture, the input is fed directly to the decoder. The decoder has more
self-attention heads and has been trained with a massive amount of data in an
unsupervised manner, just predicting the next word of existing texts.
Different models, like BERT, include only the encoder part that produces the
so-called embeddings.

Tokens and tokenization


Tokens, the elemental components in advanced language models like GPTs,
are central to the intricate process of language understanding and generation.
Unlike traditional linguistic units like words or characters, a token
encapsulates the essence of a single word, character, or subword unit. This
finer granularity is paramount for capturing the subtleties and intricacies
inherent in language.
The process of tokenization is a key facet. It involves breaking down texts
into smaller, manageable units, or tokens, which are then subjected to the
model’s analysis. The choice of tokens over words is deliberate, allowing for
a more nuanced representation of language.
OpenAI and Azure OpenAI employ a subword tokenization technique, Byte-Pair
Encoding (BPE), which keeps common words intact while splitting rarer words
into smaller, reusable pieces.
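
A quick way to see subword tokenization at work is the tiktoken library,
assumed installed here, which exposes the encodings used by OpenAI-family
models:

import tiktoken

# cl100k_base is the encoding used by GPT-3.5-turbo and GPT-4 class models.
enc = tiktoken.get_encoding("cl100k_base")

ids = enc.encode("Tokenization splits uncommon words into subwords.")
print(ids)                               # a list of integer token IDs
print([enc.decode([i]) for i in ids])    # the text piece behind each token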
Random documents with unrelated
content Scribd suggests to you:
CHAPTER VII.
SUMMER HOLIDAYS

It was the beginning of the summer holidays, and the Gordons,


with Sandy, had come to Skylemore that the young people might
spend their holidays together.
Many pleasant trips had been planned, and the first was to be a
picnic on Loch Katrine, which was not far from Skylemore.
It was early on a bright summer's morning when Dugald, with his
four prancing horses, appeared at the door, and the two Clans of
Gordons and Lindsays, to say nothing of Sandy, who was a
MacPherson, piled into the big break, along with many baskets full of
good things.
With a waving of caps and handkerchiefs off they went, and soon
they were driving along the beautiful mountain glens and through
the Trossachs, which means literally a wooded gorge or ravine.
"There is the loch now," cried Don, presently.
"No, that is Loch Achray," said his uncle, "and that mountain is
Ben Venue, but we shall see Loch Katrine very soon;" and it was not
long before Dugald drew up on the very edge of the loch itself, and
a camping-place was soon found under the trees and in sight of
Ellen's Isle.
Rugs were brought from the break and spread on the ground
around a big rock which was to serve as a table. Everybody helped
to unpack the big baskets, for all were as hungry as if they had had
no breakfasts.
Not much was said for a time but "Please pass that," and "Please
pass this," and "Isn't this good?" until finally even the boys decided
they had eaten enough.
"Papa, tell us about Ellen's Isle," said Janet, as they all sat around
after lunch, and tried to see who could throw a stone the farthest
into the water.
So Mr. Lindsay told them the story of the "fair Ellen," whom Sir
Walter Scott wrote of in his great poem called "The Lady of the
Lake." Ellen was called "the lady of the lake," and lived with her
father on the little island yonder. Then Mr. Lindsay told them of
"Roderick Dhu," and of the gatherings of the Clan Alpine which took
place in the old days in a glen not far away, and how at a signal
armed men wrapped in their plaids would spring up out of the
seemingly lonely dells and glens as if by magic.
Those were wild days in Scotland long ago, days of fierce fights
and brave deeds, when Clans met and rushed into battle with a wild
"slogan," as their battle-cry was known.
"Sandy says that he does not believe that 'Rob Roy' was a real
person; but he was, and lived right here, didn't he, Uncle Alan?" said
Don, eagerly, in defence of his hero.
"Indeed he did, and you would like to see his old home, wouldn't
you, Don?"
"Wouldn't I!" said Don, and his eyes shone.
"Well, we will go there sometime; it is now a sheep-farm, but was
once the old home of the Macgregors. In 'Rob Roy's' time bands of
lawless men came down from the north to steal cattle and do other
kinds of mischief. So the 'lairds' in these parts paid 'Rob Roy' and his
little band of followers to protect their property from these invaders
and robbers. In after days the band was formed into a regiment
called the 'Black Watch,' which to-day is one of the most famous of
the Scotch regiments."
Sir Walter Scott has done much to make this part of Scotland well
known, and people come from all over the world, and especially
from America, anxious to see the beautiful country of rocks and
glens and heather-clad mountains of which he wrote in his famous
novels and poems.
From the telling of stories our Clansmen soon turned to singing
songs, for the Scotch are full of sentiment, and are very fond of
music. Some of the most beautiful of our popular songs have come
from Scotland. There is one which is known the world over, and sung
as often by little American cousins as by little Scotch cousins; and
that is "Annie Laurie."
So when Aunt Jessie began to sing "Annie Laurie," all joined in
with a will, and sang one of the sweetest songs the world has ever
known:
"They sang of love and not of fame,
Forgot was Scotland's glory.
Each heart recalled a different name,
But all sang 'Annie Laurie.'"
After this there was a general scramble to get the things picked
up. The whole party mounted again to their seats in the break, and
Dugald made the four horses just fly for home; though they did not
need much urging, as every horse seems to know when his head is
turned homeward.

"Is that Robert Burns's house?" said Janet, in a disappointed


voice.
"Yes, my dear," said her father, "great men have often been
brought up in small houses like this. Bobby Burns was only a
ploughboy, but he became a great poet, one of the greatest in the
world."
Our little Scotch friends
were standing before the
little house at Ayr, where
Robert Burns was born. They
had come down from
Glasgow for the day to visit
that part of Scotland made
famous by the poet. It is
hard to say of whom the
Scotch people are most fond
and proud, Scott or Burns.
The young people had
looked forward with a great
deal of pleasure to this visit,
and they all felt pretty much
as Janet did.
It was a tiny house, what
the country people call a
"clay biggin," with a thatched
roof. Inside are many relics
of Burns, but the children
"OUR LITTLE SCOTCH FRIENDS WERE were, perhaps, more
STANDING BEFORE THE LITTLE HOUSE interested in "Alloway's Auld
AT AYR, WHERE ROBERT BURNS WAS Haunted Kirk." This is the
BORN." small church of which Burns
wrote in his poem, called
"Tam-o'-shanter," where Tam saw the witches dance, and from
whence he started on his wild ride, with the witches after him riding
on broomsticks. It is one of the chief attractions for visitors.
"Oh! it is a creepy poem," said Don; and you will all think so, too,
when you have read it.
They saw the "Auld Brig of Ayr," which means the old bridge,
across the river Ayr, and they walked along the "Banks and Braes o'
Bonnie Doon," of which Burns wrote and which he loved so well.
They visited the monument to Burns. Marjorie remarked that it was
not a very grand monument, not nearly so grand as that to Scott in
Edinburgh; and she was quite right.
"Not far from Ayr was the home of Annie Laurie," said Mr. Lindsay,
as the train speeded them back to Glasgow.
"Was she a real person, father?" eagerly exclaimed the little girls
together.
"Indeed she was, though her eyes were black and not blue," said
Mr. Lindsay.
"How do you know?" asked Janet, who liked to be exact.
"Because her portrait is still to be seen at Maxwelton House, near
the town of Dumfries, where she lived," replied her father.
"Well, I'd rather her eyes had been blue," said Marjorie, and the
children kept talking about blue and black eyes until they reached
the great St. Enoch's railway station at Glasgow.
There are so many delightful journeys to be made from Glasgow
by rail and steamer that it is one of the best starting points in all
Scotland for excursions, of which all children and most old folks are
so fond. The Lindsays and the Gordons were accordingly to stay in
Glasgow for a week, that the young people might enjoy more of
these rare treats, and take some of the lovely sails on the river Clyde
and among the near-by islands.
Don and Sandy were having some hot discussions as to which
was the finest city in Scotland,—Glasgow or Edinburgh. This was
about the only thing that the boys ever disagreed on. Sandy's father
came originally from Glasgow, so Sandy always stood up for it.
"It's a big city, and lots richer than Edinburgh; and think of all the
business that is done here, and of the lots and lots of ships that
come and go from all parts of the world. It's the largest city in
Scotland, too, and the second city of the kingdom," Sandy would
say.
"But it's not so beautiful as Edinburgh. It hasn't anything like
Princes Street, nor so many famous old buildings and historic places,
nor our great colleges. Anybody had rather live in Edinburgh—you
know you would, Sandy," Don would argue.
And the truth of it all was, both boys were right.
Early one morning found our party gathered on the steamer Lord
of the Isles for a cruise around the islands off the coast. They
passed the great ship-building yards of the Clyde, the largest in the
world, as they steamed down the river. The ships built upon the
Clyde have always been famous all over the world.
"There is Gourock Bay, where the great racing yachts anchor,"
said Mr. Lindsay. "It was always thought to be a lucky place to set
sail. It was from this bay that many of the yachts sailed for America
when they were to make the attempt to capture the 'America's Cup,'
that you doubtless all know about; but while these Clyde-built boats
were fine yachts, none of them have been lucky enough to bring
back the cup."
Next was seen the Island of Bute and the old Castle of Rothesay.
Here they entered a narrow bit of water, called the Kyles of Bute,
from which they entered Loch Fyne, famous for its fresh herrings.
Another steamer took them through the Crinan Canal, and thence
to Oban, the capital of the Western Highlands.
In this part of Scotland, called by every one the Highlands, are
the great deer forests of many thousands of acres, belonging to
some of the great families of Scotland, where the wild deer is
hunted, or "stalked," as it is called. Here, too, are wild moors,
stretching for miles and miles, where few people live except the
shepherds who look after the flocks.
There was another fine summer which was enjoyed greatly by our
little Scotch cousins, and that was when some young American
cousins came to visit the Lindsays, and they all went on Uncle Alan's
yacht for a lovely sail of many days, among the islands which fringe
the northern and western coasts of Scotland. It was on this occasion
that they all went to the Isle of Skye (some of you have probably
heard of the Skye terriers), and they stopped, too, at the Shetland
Isles, where the little horses come from. Every girl and boy wants to
own a dear little Shetland pony.
Didn't they have a splendid time on this trip! That was the time,
too, when Donald and Sandy got left behind on one of the islands
where they had all landed for a picnic,—but that's another story!
So many little cousins are waiting to talk about themselves that
we must really get our little Clan safely back home, and leave them
for the present to talk over the good times they have had together.
THE END.

THE LITTLE COUSIN SERIES


The most delightful and interesting accounts possible of child-life
in other lands, filled with quaint sayings, doings, and adventures.
Each 1 vol., 12mo, decorative cover, cloth, with six or more full-
page illustrations in color.
Price per volume $0.60

By MARY HAZELTON WADE (unless otherwise indicated)


Our Little African Cousin

Our Little Armenian Cousin

Our Little Brown Cousin

Our Little Canadian Cousin


By Elizabeth R. Macdonald
Our Little Chinese Cousin
By Isaac Taylor Headland

Our Little Cuban Cousin

Our Little Dutch Cousin


By Blanche McManus

Our Little English Cousin


By Blanche McManus

Our Little Eskimo Cousin

Our Little French Cousin


By Blanche McManus

Our Little German Cousin

Our Little Hawaiian Cousin

Our Little Indian Cousin

Our Little Irish Cousin

Our Little Italian Cousin

Our Little Japanese Cousin

Our Little Jewish Cousin

Our Little Korean Cousin


By H. Lee M. Pike

Our Little Mexican Cousin


By Edward C. Butler
Our Little Norwegian Cousin

Our Little Panama Cousin


By H. Lee M. Pike

Our Little Philippine Cousin

Our Little Porto Rican Cousin

Our Little Russian Cousin

Our Little Scotch Cousin


By Blanche McManus

Our Little Siamese Cousin

Our Little Spanish Cousin


By Mary F. Nixon-Roulet

Our Little Swedish Cousin


By Claire M. Coburn

Our Little Swiss Cousin

Our Little Turkish Cousin

THE GOLDENROD LIBRARY


The Goldenrod Library contains only the highest and purest
literature,—stories which appeal alike both to children and to their
parents and guardians.
Each volume is well illustrated from drawings by competent
artists, which, together with their handsomely decorated uniform
binding, showing the goldenrod, usually considered the emblem of
America, is a feature of their manufacture.
Each one volume, small 12mo, illustrated, decorated cover, paper
wrapper $0.35

LIST OF TITLES
Aunt Nabby's Children. By Frances Hodges White.
Child's Dream of a Star, The. By Charles Dickens.
Flight of Rosy Dawn, The. By Pauline Bradford Mackie
Findelkind. By Ouida.
Fairy of the Rhone, The. By A. Comyns Carr.
Gatty and I. By Frances E. Crompton.
Great Emergency, A. By Juliana Horatia Ewing.
Helena's Wonderworld. By Frances Hodges White.
Jackanapes. By Juliana Horatia Ewing.
Jerry's Reward. By Evelyn Snead Barnett.
La Belle Nivernaise. By Alphonse Daudet.
Little King Davie. By Nellie Hellis.
Little Peterkin Vandike. By Charles Stuart Pratt.
Little Professor, The. By Ida Horton Cash.
Peggy's Trial. By Mary Knight Potter.
Prince Yellowtop. By Kate Whiting Patch.
Provence Rose, A. By Ouida.
Rab and His Friends. By Dr. John Brown.
Seventh Daughter, A. By Grace Wickham Curran.
Sleeping Beauty, The. By Martha Baker Dunn.
Small, Small Child, A. By E. Livingston Prescott.
Story of a Short Life, The. By Juliana Horatia Ewing.
Susanne. By Frances J. Delano.
Water People, The. By Charles Lee Sleight.
Young Archer, The. By Charles E. Brimblecom.
COSY CORNER SERIES
It is the intention of the publishers that this series shall contain only
the very highest and purest literature,—stories that shall not
only appeal to the children themselves, but be appreciated by
all those who feel with them in their joys and sorrows.
The numerous illustrations in each book are by well-known artists,
and each volume has a separate attractive cover design.
Each 1 vol., 16mo, cloth $0.50

By ANNIE FELLOWS JOHNSTON

The Little Colonel. (Trade Mark)


The scene of this story is laid in Kentucky. Its heroine is a small
girl, who is known as the Little Colonel, on account of her fancied
resemblance to an old-school Southern gentleman, whose fine
estate and old family are famous in the region.

The Giant Scissors.


This is the story of Joyce and of her adventures in France. Joyce
is a great friend of the Little Colonel, and in later volumes shares
with her the delightful experiences of the "House Party" and the
"Holidays."

Two Little Knights of Kentucky, Who Were the Little Colonel's Neighbors.

In this volume the Little Colonel returns to us like an old friend,
but with added grace and charm. She is not, however, the central
figure of the story, that place being taken by the "two little knights."

Mildred's Inheritance.
A delightful little story of a lonely English girl who comes to
America and is befriended by a sympathetic American family who are
attracted by her beautiful speaking voice. By means of this one gift
she is enabled to help a school-girl who has temporarily lost the use
of her eyes, and thus finally her life becomes a busy, happy one.

Cicely and Other Stories for Girls.


The readers of Mrs. Johnston's charming juveniles will be glad to
learn of the issue of this volume for young people.

Aunt 'Liza's Hero and Other Stories.


A collection of six bright little stories, which will appeal to all boys
and most girls.

Big Brother.
A story of two boys. The devotion and care of Steven, himself a
small boy, for his baby brother, is the theme of the simple tale.

Ole Mammy's Torment.


"Ole Mammy's Torment" has been fitly called "a classic of
Southern life." It relates the haps and mishaps of a small negro lad,
and tells how he was led by love and kindness to a knowledge of the
right.

The Story of Dago.


In this story Mrs. Johnston relates the story of Dago, a pet
monkey, owned jointly by two brothers. Dago tells his own story, and
the account of his haps and mishaps is both interesting and
amusing.

The Quilt That Jack Built.


A pleasant little story of a boy's labor of love, and how it changed
the course of his life many years after it was accomplished.

Flip's Islands of Providence.


A story of a boy's life battle, his early defeat, and his final
triumph, well worth the reading.

By EDITH ROBINSON

A Little Puritan's First Christmas.


A Story of Colonial times in Boston, telling how Christmas was
invented by Betty Sewall, a typical child of the Puritans, aided by her
brother Sam.
A Little Daughter of Liberty.
The author introduces this story as follows:
"One ride is memorable in the early history of the American
Revolution, the well-known ride of Paul Revere. Equally deserving of
commendation is another ride,—the ride of Anthony Severn,—which
was no less historic in its action or memorable in its consequences."

A Loyal Little Maid.


A delightful and interesting story of Revolutionary days, in which
the child heroine, Betsey Schuyler, renders important services to
George Washington.

A Little Puritan Rebel.


This is an historical tale of a real girl, during the time when the
gallant Sir Harry Vane was governor of Massachusetts.

A Little Puritan Pioneer.


The scene of this story is laid in the Puritan settlement at
Charlestown.

A Little Puritan Bound Girl.


A story of Boston in Puritan days, which is of great interest to
youthful readers.

A Little Puritan Cavalier.


The story of a "Little Puritan Cavalier" who tried with all his boyish
enthusiasm to emulate the spirit and ideals of the dead Crusaders.
A Puritan Knight Errant.
The story tells of a young lad in Colonial times who endeavored to
carry out the high ideals of the knights of olden days.

By OUIDA (Louise de la Ramée)

A Dog of Flanders: A Christmas Story.


Too well and favorably known to require description.

The Nurnberg Stove.


This beautiful story has never before been published at a popular
price.

By FRANCES MARGARET FOX

The Little Giant's Neighbours.


A charming nature story of a "little giant" whose neighbours were
the creatures of the field and garden.

Farmer Brown and the Birds.


A little story which teaches children that the birds are man's best
friends.

Betty of Old Mackinaw.


A charming story of child-life, appealing especially to the little
readers who like stories of "real people."

Brother Billy.
The story of Betty's brother, and some further adventures of Betty
herself.

Mother Nature's Little Ones.


Curious little sketches describing the early lifetime, or "childhood,"
of the little creatures out-of-doors.

How Christmas Came to the Mulvaneys.

A bright, lifelike little story of a family of poor children, with an
unlimited capacity for fun and mischief. The wonderful never-to-be-forgotten
Christmas that came to them is the climax of a series of
exciting incidents.
*** END OF THE PROJECT GUTENBERG EBOOK OUR LITTLE SCOTCH COUSIN ***

Updated editions will replace the previous one—the old editions will
be renamed.

Creating the works from print editions not protected by U.S.
copyright law means that no one owns a United States copyright in
these works, so the Foundation (and you!) can copy and distribute it
in the United States without permission and without paying
copyright royalties. Special rules, set forth in the General Terms of
Use part of this license, apply to copying and distributing Project
Gutenberg™ electronic works to protect the PROJECT GUTENBERG™
concept and trademark. Project Gutenberg is a registered trademark,
and may not be used if you charge for an eBook, except by following
the terms of the trademark license, including paying royalties for use
of the Project Gutenberg trademark. If you do not charge anything
for copies of this eBook, complying with the trademark license is
very easy. You may use this eBook for nearly any purpose such as
creation of derivative works, reports, performances and research.
Project Gutenberg eBooks may be modified and printed and given
away—you may do practically ANYTHING in the United States with
eBooks not protected by U.S. copyright law. Redistribution is subject
to the trademark license, especially commercial redistribution.

START: FULL LICENSE


THE FULL PROJECT GUTENBERG LICENSE
PLEASE READ THIS BEFORE YOU DISTRIBUTE OR USE THIS WORK

To protect the Project Gutenberg™ mission of promoting the free
distribution of electronic works, by using or distributing this work (or
any other work associated in any way with the phrase “Project
Gutenberg”), you agree to comply with all the terms of the Full
Project Gutenberg™ License available with this file or online at
www.gutenberg.org/license.

Section 1. General Terms of Use and Redistributing Project Gutenberg™ electronic works

1.A. By reading or using any part of this Project Gutenberg™
electronic work, you indicate that you have read, understand, agree
to and accept all the terms of this license and intellectual property
(trademark/copyright) agreement. If you do not agree to abide by all
the terms of this agreement, you must cease using and return or
destroy all copies of Project Gutenberg™ electronic works in your
possession. If you paid a fee for obtaining a copy of or access to a
Project Gutenberg™ electronic work and you do not agree to be
bound by the terms of this agreement, you may obtain a refund
from the person or entity to whom you paid the fee as set forth in
paragraph 1.E.8.

1.B. “Project Gutenberg” is a registered trademark. It may only be
used on or associated in any way with an electronic work by people
who agree to be bound by the terms of this agreement. There are a
few things that you can do with most Project Gutenberg™ electronic
works even without complying with the full terms of this agreement.
See paragraph 1.C below. There are a lot of things you can do with
Project Gutenberg™ electronic works if you follow the terms of this
agreement and help preserve free future access to Project
Gutenberg™ electronic works. See paragraph 1.E below.
1.C. The Project Gutenberg Literary Archive Foundation (“the
Foundation” or PGLAF), owns a compilation copyright in the
collection of Project Gutenberg™ electronic works. Nearly all the
individual works in the collection are in the public domain in the
United States. If an individual work is unprotected by copyright law
in the United States and you are located in the United States, we do
not claim a right to prevent you from copying, distributing,
performing, displaying or creating derivative works based on the
work as long as all references to Project Gutenberg are removed. Of
course, we hope that you will support the Project Gutenberg™
mission of promoting free access to electronic works by freely
sharing Project Gutenberg™ works in compliance with the terms of
this agreement for keeping the Project Gutenberg™ name associated
with the work. You can easily comply with the terms of this
agreement by keeping this work in the same format with its attached
full Project Gutenberg™ License when you share it without charge
with others.

1.D. The copyright laws of the place where you are located also
govern what you can do with this work. Copyright laws in most
countries are in a constant state of change. If you are outside the
United States, check the laws of your country in addition to the
terms of this agreement before downloading, copying, displaying,
performing, distributing or creating derivative works based on this
work or any other Project Gutenberg™ work. The Foundation makes
no representations concerning the copyright status of any work in
any country other than the United States.

1.E. Unless you have removed all references to Project Gutenberg:

1.E.1. The following sentence, with active links to, or other
immediate access to, the full Project Gutenberg™ License must
appear prominently whenever any copy of a Project Gutenberg™
work (any work on which the phrase “Project Gutenberg” appears,
or with which the phrase “Project Gutenberg” is associated) is
accessed, displayed, performed, viewed, copied or distributed: