CODE Magazine September/October 2023

Node.js, AI Copilot, Laravel, Azure Active Directory

SEP/OCT 2023
codemag.com - THE LEADING INDEPENDENT DEVELOPER MAGAZINE - US $8.95 Can $11.95

The World of Server-Side Events
Cover: @midjourney + onsightdesign

Preventing Azure AD Mistakes • Building Web APIs With Node.js • Enlisting AI Copilots
DO YOU WONDER HOW ARTIFICIAL INTELLIGENCE CAN BENEFIT YOU TODAY?
EXECUTIVE BRIEFINGS
Do you wonder how AI can help your business? Do you worry about privacy or regulatory issues stopping
you from using AI to its fullest? We have the answers! Our Executive Briefings provide guidance and concrete
advice that help decision makers move forward in this rapidly changing age of artificial intelligence and
Copilot-enabled software!

We will send an expert to your office to meet with you. You will receive:
1. An overview presentation of the current state of AI.
2. Guidance on how to use AI in your business while ensuring the privacy of your and your clients' information.
3. A sample application built on your own HR documents, allowing your employees to query those documents in English, which will cut down the number of questions that you and your HR group have to answer.
4. A roadmap for future use of AI catered to what you do.

AI-SEARCHABLE KNOWLEDGEBASE AND DOCUMENTS


A great first step into the world of generative artificial intelligence, large language models (LLMs),
and GPT is to create an AI that provides your staff or clients access to your institutional knowledge,
documentation, and data through an AI-searchable knowledgebase. We can help you implement a first
system in a matter of days in a fashion that is secure and individualized to each user. Your data remains yours!
Answers provided by the AI are grounded in your own information and are thus correct and applicable.

COPILOTS FOR YOUR OWN APPS


Applications without Copilots are now legacy!
But fear not! We can help you build Copilot features into your applications in a secure and integrated fashion.

CONTACT US TODAY FOR A FREE CONSULTATION AND DETAILS ABOUT OUR SERVICES
codemag.com/ai-services
832-717-4445 ext. 9 • [email protected]
TABLE OF CONTENTS

Features

8  Top Azure Active Directory Mistakes
   Sahil examines some of the most common mistakes and misunderstood concepts that cause insecure applications in Active Directory. The protocols he covers are portable to any identity platform.
   Sahil Malik

15 Building Web APIs Using Node.js and Express: Part 3
   In the third article of this series, Paul examines how to build a website using Node.js and Express to serve web pages, how to use a templating engine called Mustache to create dynamic web pages from the data retrieved from API calls, and how to configure cross-domain resource sharing (CORS).
   Paul D. Sheriff

28 Applying DDD Thinking to Refactor Aggregate Filters
   Julie explains how bounded context in Domain-Driven Design might be a good solution if you've got a complex series of iterations within filtering software.
   Julie Lerman

34 Developing Real-Time Web Applications with Server-Sent Events in ASP.NET 7 Core
   Joydip shows you how to build real-time web applications with Server-Sent Events (SSE) in ASP.NET Core, including the core concepts of SSE, the features, benefits, and downsides of SSE, and how to implement real-time updates, etc.
   Joydip Kanjilal

46 Getting Started with AI Copilots in Your Own Applications
   Markus looks at how you can get AI tools to handle some of the more basic tasks with a tool called Copilot so your dev team can focus on more important things.
   Markus Egger

58 Vite and Progressive Web Apps
   Shawn shows you how the development build environment Vite uses abstraction for packaging, offline work, and installing a website outside the JavaScript framework.
   Shawn Wildermuth

63 Authentication in Laravel, Part 2: Token Authentication
   In Part 2 of Bilal's Authentication series, you learn about how tokens work and what you can do to take advantage of the token options that Laravel offers.
   Bilal Haidar

Columns

74 CODA: What Lies at Agile's Heart
   John takes a look at the history of Agile development and explores the difference between the process and the result.
   John V. Petersen

Departments

6  Editorial
12 Advertisers Index
73 Code Compilers

US subscriptions are US $29.99 for one year. Subscriptions outside the US pay $50.99 USD. Payments should be made in US dollars drawn on a US bank. American Express, MasterCard, Visa, and Discover credit cards are accepted. Back issues are available. For subscription information, send e-mail to [email protected] or contact Customer Service at 832-717-4445 ext. 9.
Subscribe online at www.codemag.com
CODE Component Developer Magazine (ISSN # 1547-5166) is published bimonthly by EPS Software Corporation, 6605 Cypresswood Drive, Suite 425, Spring, TX 77379 U.S.A. POSTMASTER: Send address changes to CODE Component Developer Magazine, 6605 Cypresswood Drive, Suite 425, Spring, TX 77379 U.S.A.


EDITORIAL

Error-Driven Development
Any developer who’s been in this profession for any amount of time will relate to this story. I was
working on an application integration using a vendor’s REST API. The purpose of this integration was
to manipulate metadata across several different customer sites and the documents contained within.

My initial uses of the API went well. I was able to query the main list of sites and then I was able to get a list of the documents for each site. The only problem was that once I tried to access documents beyond the first site in the list, I received an HTTP 403 error. This error is a security access error that says the current user doesn't have proper access to the requested resource.

This was strange. The account I was using to test was a Site Administrator account with full access. It was a head-scratcher. So I started doing what all developers do when faced with a difficult challenge. I went fishing for ideas (AKA trying stuff). My first thought was to set up an account with different rights. I gave that a try only to be foiled by licensing. There were no licenses available, so I was going to need to go a different route. I did what all good developers do: I consulted Google and came up with a few loose threads, but for the most part, it was no help.

Back to the drawing board. I decided to return to the code to try to reproduce the error and see if there was any additional information I might have missed. With a few tweaks, I was able to reproduce the error. It returned the same vague error. Okay, now what? I returned to the code and tweaked something to see if I could get a different answer. (It worked with my parents, so why not this API? LOL.) After a few more failed attempts, I finally received a different error value. It read something like: "The current logged in user is not authenticated against this site."

BINGO! I had a new error to investigate, this time with actionable information. I returned to the documentation to be informed that I needed to authenticate against each site individually versus sharing a single common authentication. I quickly tweaked my code to authenticate with each site individually. Huzzah! This worked perfectly. After this blocker, I was able to finish my application integration in short order. Error-driven development had worked.

I smile at the irony of causing errors (bugs) to find solutions to fixing other errors (more bugs). It's not every day that you get paid to cause a bug. But I digress. What happened here? Was some senior-developer Jedi mind trick used to root out the culprit? Or was it something else?

It was 100% something else.

Let's talk about another development story. Last week, I was working with a colleague who came to me with the following message over Teams: "When you have a few cycles, can you take a look at the data for client ABDC to see why it might not be loading?" I was between tasks, so I took a look right away. This was a new client, so I queried the data warehouse to see if they had their initial setup made. Here's the query:

SELECT * FROM v_clients WHERE code = 'ABDC'

Zero records returned. I returned to Teams: "Hey COLLEAGUE. Are you sure this client was set up at all? Not seeing it in v_clients. Is the spelling proper?" The answer came back immediately: "Doh?!?!? Yeah, that's the problem: ABCD not ABDC. Thanks for the assist!" Huzzah! Another victory. Was it my Jedi-level query skills that solved this problem or was it something else?

It was 100% something else.

In each of these situations, it wasn't the micro solutions of error codes and queries that solved the problem. Nope. What was it? I posit that the solution to the fix was applying a different perspective to each given problem. In the former, it was triggering a different error code, and the latter solution was achieved by simply having someone else look at the problem. I can't tell you how many times that particular colleague helped me find a bug with laser-like precision in a millisecond just by looking at a block of code for a moment. "There it is!" The old fresh-pair-of-eyes technique seems to always work.

Sometimes you need to step away from a problem to find a different path to solving it. There's an old technique I learned when I was doing professional developer training. Sometimes students pose questions that I would find difficult to understand or that just didn't make sense in my mind. My technique was to ask the student this question in return: "Can you ask that a different way?" This required the student to rethink the problem and gave me a few more beats to come up with an answer. This technique was often successful, and I continue to use it today when working on solving problems with other teammates.

So now you have a tool to solve sticky problems. Simply ask yourself: "Can I look at this another way?"

Rod Paddock

APPS WITHOUT COPILOTS ARE NOW LEGACY!
Microsoft has introduced Copilot-enabled applications and announced that all Microsoft applications
will follow this approach going forward. This aims to capitalize on the extreme productivity
gains this new paradigm promises. Microsoft also has tools to let you create Copilots for
your own applications and systems.

CODE Consulting can help you build Copilots into your own applications.

VISIT OUR COPILOT PAGE TO FIND OUT MORE ABOUT OUR SERVICES
AND TO SEE DEMONSTRATIONS AND EXAMPLES!
codemag.com/copilot
832-717-4445 ext. 9 • [email protected]
ONLINE QUICK ID 2309021

Top Azure Active Directory Mistakes


Like any developer, I’m constantly challenged by the pace of technology. I’ve been in this industry for a few years, and almost
anyone reading this with similar experience under their belt will agree with me that writing code today is quite different than it
was many years ago. Many years ago, we paid a lot of attention to every logic path. Before any product was released, a lot of care

was given to the API surface, and thinking through every possible way it would be used and abused. Products were released later; documentation was released first. Then some of us worked closely with product groups and wrote books.

How times have changed! Development today is built as a skyscraper of SDKs that we constantly update, sometimes automatically, and almost never understand the inner workings and dependencies of. Security vulnerabilities creep in after we ship code, and most companies don't have budgets to remediate or even detect these vulnerabilities. Developers ship code knowing it has bugs, as time to market and feature lists trump everything else. The recourse is the internet, constant beta state, and quick updates and frequent patches. Documentation is a soldier missing in action. Books are not something being written anymore, and it's all about time to market for guidance as videos and blog posts. Learning has become more of a reference.

Pair this with two other facts: products have gotten more complex and we depend increasingly on computers for everything, and well-funded nation state actors have great incentives to cause damage.

And don't even get me started with AI-driven development where we aren't even writing code; we're hoping the computer will write it for us.

Boy, I'm a worry wart, aren't I? Well, when it comes to computer security, such shortcuts and speedy development will almost always land you in trouble. You need to understand how security works because your features are no good if the system isn't secure.

So I thought it would be a good idea to write an article highlighting some of the most common mistakes and misunderstood concepts I've seen that cause developers to write insecure applications. I'll keep my discussion specific to Azure Active Directory, but the protocols that are standard to many of the things I'll talk about in this article are portable to any identity platform.

Also, it's worth saying that these are not shortcomings of Azure Active Directory, but common mistakes that people make.

With that background, let's get started.

Sahil Malik
www.winsmarts.com
@sahilmalik

Sahil Malik is a Microsoft MVP, INETA speaker, a .NET author, consultant, and trainer. Sahil loves interacting with fellow geeks in real time. His talks and trainings are full of humor and practical nuggets. His areas of expertise are cross-platform mobile app development, Microsoft anything, and security and identity.

Redirect URI Pollution
Almost every modern identity protocol depends on redirect URIs. Redirect URIs are a whitelist of URIs where the identity provider feels safe sharing sensitive information, be it SAML that may post you back a SAML assertion, or an OIDC flow that posts back tokens or codes that can be exchanged for tokens. Those post backs are very sensitive. You should treat them with the same


care you'd treat credentials with. In fact, I'd go one step further: Treat them with greater security than credentials. Credential leakage can be detected, but if a refresh token leaks, it's much harder to detect and could stay valid for a long time. Although resetting credentials in Azure AD invalidates your refresh tokens too, you can't take that as a guarantee. Similarly, a SAML post back that you don't secure can be used to establish a web session on a computer you never intended to establish a session with. And in this cloud-ready world where all these endpoints are on the internet, you'll soon find stolen cookies for sale on the dark web.

Every respectable modern identity platform puts some common-sense limitations on redirect URIs. You can see Azure Active Directory's limitations in Figure 1.

Figure 1: Azure Active Directory redirect URI limitations

There are good reasons behind each of these limitations. For instance, requiring HTTPS ensures that the packets are not sniffable over the network. Also, browser protections ensure that a site masquerading as another will automatically be detected. Or, for that matter, a man-in-the-middle sniffer will also be detected. This is, of course, defeatable if the trusted certificate store of the client computer can be altered by the attacker. This is a common issue with certain governments or even organizations where they push certificates to your computer that can effectively allow a man-in-the-middle to sniff HTTPS traffic. So you can't assume that HTTPS is the final word in security.

Of special mention is the localhost redirect URI. It's quite common to use localhost as a redirect URI when developing locally or for certain categories of apps, specifically thick client apps that don't use brokers. The sad reality is that Azure AD does not have a good DevOps story, and yeah, lecture me as much as you wish about Microsoft Graph and the sales pitch. Developers are frequently forced to develop in a production tenant because that's where all the licenses are, and that's where all the data resides. If they do set up a separate tenant, it's quite an onerous task for them to move their app registration and all the necessary configuration, including all the users, permissions, roles, etc. to another tenant. It ends up being a lot of work writing scripts, relying on third-party products, etc.

So guess what most developers do? You guessed it. They develop in production (at least as far as Azure Active Directory goes), and there's been a battle waging for the past thousand years between developer fairies and IT ogres.

Now what URI will a developer use when developing locally? You guessed it: https://localhost. If you go back a few years, this used to be http://localhost. Invariably, this slips into production. The developer either leaves that redirect URI because they may still want to develop against the same app registration as production. Or they simply forget to remove that stray redirect URI.

The problem this causes is that a listener on any computer running on localhost can now accept your tokens, and oh, those listeners do exist and are hard to detect. Organizations also try to reduce authentication prompts, so effectively, you're SSOing a user into an app with a redirect URI to a nefarious listener that can now post your tokens to the internet. Ouch!

So best practice: Always trim down the redirect URIs to the bare minimum, and only to ones you trust, and remember that your security is only as good as the target computer.

Mixing App Concerns
This brings me to my second beef, which is overusing a single app registration and not fully understanding the ramifications of your decisions. An obvious improvement to what I described in the Redirect URI section is for the developer to have an app registration separate from the production app registration. Look, I know having a completely separate development/staging environment can be impractical when it comes to identity, but having a separate app registration, secrets, and developer identities is a quick and easy win.

But this problem of mixing app concerns goes a bit deeper than that. Not every OIDC flow is equally secure. The problem is that the flows are designed to offer you the maximum security any individual platform offers. But there's no way a single page application (SPA) can be as secure as a thick client app. This is simply because the browser can't offer the same crypto capabilities that a mobile app can. Similarly, a mobile app can't be as secure as a web app that never shares the tokens to the client app. The client computer is an untrustable surface.

So the one obvious conclusion that comes out of this is that you should always use the right OIDC flow for the right kind of app. The second conclusion that comes out of this is to offer your users the most secure app choice you can offer. For instance, if you have a choice between a SPA doing an OIDC flow to acquire an access token vs. hosting APIs within the web app and doing an OIDC flow suited more for web apps, all other things being equal, lean toward the web app approach.

But this is where things get funny. Earlier, I talked about IT ogres being onerous while developer fairies try to keep the users happy. So imagine that Olaf the IT ogre refuses to create a new app registration for you. But for a particular scenario, you need to support a web application and a thick client application. So instead of dealing with Olaf, you decide to enable public client flows in the same app as the web app. This can be seen in Figure 2 under the authentication section of an app registration.

You may be thinking that you're using thick client application OIDC flows and web application flows in the same app registration. But it's important to see that you just opened the gates to also allow device code flow, Windows integrated auth flow, and ROPC flow. None of these are great choices when it comes to security. Don't get me wrong, there are situations where you need them and you have no alternative. ROPC, for instance, relies on sending the user's password and username as a POST request. Device code flow involves authenticating on a separate computer than where you did your authentication. Neither of these works well with conditional access policies or device cleanliness. But you just opened the doors to these less-than-ideal OIDC flows.

What you should have done instead is restrict the less desirable OIDC flows to as low of a consent surface as possible. And separate them in a new app registration. If Olaf the IT ogre objects, have him read this article please, ok?


Figure 2: Enable public client flows

Manage Client Secrets
App registrations allow for client secrets. Client secrets are necessary for certain flows, such as client credential flow. Here you have two choices. Either you can use a client secret that's a string or you can use a certificate. The string is no better than a password. Actually, it's way worse. At least passwords are typed by the user, so enterprises can detect if the same password was entered into an unintended place and immediately revoke your credential. Or hashes of passwords can be compared with leaked passwords and most modern browsers will alert you to compromised passwords. Client secrets enjoy no such luxury. If they're out in the wild, they are out. The client ID and tenant ID are public info anyway. And all you need to gain an access token via the client credential flow is the client secret, the client ID, and the tenant ID. Now you can acquire access tokens for as long as you desire. What's worse, if such a client secret is leaked, there's no way to get an alert either.

It's for this reason that Microsoft doesn't allow you to create non-expiring client secrets from the user interface. This can be seen in Figure 3. They want you to rotate these secrets every now and then, although through the API, you can create non-rotating client secrets.

Figure 3: Expiring secrets only please

You may be thinking that rotating secrets is a giant pain. I'm not going to disagree with you there. But you need to put in procedures to check for upcoming expirations and proactively rotate them. After all, Microsoft does allow you to create more than one client secret, so you can achieve this rotation without outages.

However, there are a few things I'd like to recommend.

The obvious first recommendation is to have procedures in place to rotate client secrets proactively. I already talked about that.

Second, I prefer to use certificates over secrets. Secrets can leak just like certificates, so certificates aren't much better. But the slight advantage here is that when you use certificates, the protocol doesn't send the actual certificate over the wire. This makes them slightly more secure than a string. Additionally, you can add an extra layer of check to have a CRL revoke certs immediately if you detect a leak. Note that Azure AD won't check for CRL but your application can.

Third, configure an app instance lock. A common attack vector isn't stealing your client secret, but rather it's adding another secret in addition to the secret you use. When a new secret is added, especially to a privileged app, you can keep rotating the existing secret as much as you wish, and the newly added secret, presumably added by a bad actor, will continue to work. Using an app instance property lock, you can effectively lock down an app instance from certain kinds of changes, such as credentials.

Fourth, check logs for any secrets being added. Have alerts on those cross-referencing the actor that added the secret. If it's an automated job, the job should nullify the alert. If it's a user making the change, the alert should surface up to the user or an admin.

Finally, avoid using secrets at all, which is where managed identities come in.
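To make the client credential flow just described concrete, here is a minimal sketch (mine, not the article's) using the @azure/msal-node library; the tenant ID, client ID, and scope are placeholder values you would replace with your own.

// Sketch: app-only token via the client credential flow (@azure/msal-node)
const { ConfidentialClientApplication } = require('@azure/msal-node');

const cca = new ConfidentialClientApplication({
  auth: {
    clientId: '<client_id>',
    authority: 'https://login.microsoftonline.com/<tenant_id>',
    clientSecret: process.env.CLIENT_SECRET // keep the secret out of source code
  }
});

async function getAppToken() {
  // .default expands to whatever app-only permissions were granted
  const result = await cca.acquireTokenByClientCredential({
    scopes: ['https://graph.microsoft.com/.default']
  });
  return result.accessToken;
}

Note how the three values the author lists (client ID, tenant ID, and secret) are all it takes; anyone holding a leaked secret can call this just as easily as you can.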


Use Managed Identities
A very wise donkey in a famous movie once said, "What's the use of a secret that you can't tell anyone?" I have a slightly different take on that. The best secret is one that you don't know, so you never have to worry about leaking it. This is exactly what managed identity gives you.

Managed identities are service principals whose credentials (in this case, certificates) are managed entirely by Azure. They're never shared with an app, and they're never shared with the user. They remain out of reach of any surface that could leak them. Additionally, they're automatically rotated for you. How often are they rotated? I don't care. Because I don't have to do that rotation and update a number of apps and inform users when such rotation is occurring.

A managed identity can work on its own, where the service principal backing the managed identity can be given certain permissions. Beyond that, you can acquire access tokens and call APIs such as Microsoft Graph. Because it's a service principal, you're limited to app-only permissions. Usually, you'd do this by creating a new user-assigned managed identity from the Azure portal or via the MS Graph API. This can be seen in Figure 4.

Figure 4: User assigned managed identity

Alternatively, you can also assign an Azure resource a managed identity. For example, you can have a virtual machine assume a managed identity. Here, when a process running on the virtual machine wishes to reach out to an API under that managed identity's identity, it can ask for an access token for that managed identity, while the process is running on the computer. Here, you have a choice of using a system-assigned managed identity or a user-assigned managed identity. A system-assigned managed identity is cleaned up for you automatically when the resource is deleted, and it's tied to exactly one resource. A resource example might be a virtual machine, an app service, or many other such resources in Azure.

A clear disadvantage of a system-assigned managed identity is that you can't create and pre-configure consent permissions on the managed identity ahead of time and then assign the identity to a resource. Imagine what a problem this becomes in DevOps scenarios where the resource is being upgraded when the upgrade effectively means delete and reprovision.

Because managed identities are more "trustable" than identities whose secrets can leak, the access tokens are valid for longer too, typically 24 hours.
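As a hedged sketch of what this looks like from code running on such a resource (again, not the article's own sample), the @azure/identity library hands you a token without any secret ever appearing in your app; the client ID below is a placeholder and is only needed for a user-assigned identity.

// Sketch: token acquisition under a managed identity (@azure/identity)
const { ManagedIdentityCredential } = require('@azure/identity');

// Omit the argument to use the resource's system-assigned identity
const credential = new ManagedIdentityCredential('<user_assigned_client_id>');

async function getGraphToken() {
  // Azure supplies and rotates the underlying certificate for you
  const token = await credential.getToken('https://graph.microsoft.com/.default');
  return token.token;
}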

Additionally, you can use workload identity federation to configure a user-assigned managed identity or app registration in Azure AD to trust tokens for an external identity provider, such as GitHub, Google Cloud, AWS, or others. This is really amazing because this effectively allows you to use the power of managed identities across tenants and even across clouds.

Advertisement: dtSearch® - Instantly Search Terabytes. dtSearch's document filters support popular file types, emails with multilevel attachments, a wide variety of databases, and web data. Over 25 search options including efficient multithreaded search, easy multicolor hit-highlighting, and forensics options like credit card search. Developers: SDKs for Windows, Linux, macOS; cross-platform APIs cover C++, Java, and current .NET; FAQs on faceted search, granular data classification, Azure, AWS, and more. Visit dtSearch.com for hundreds of reviews and case studies, and fully-functional enterprise and developer evaluations. The Smart Choice for Text Retrieval® since 1991. dtSearch.com 1-800-IT-FINDS

Protect Your Secrets
I almost missed mentioning this. Secrets are like plutonium, and, well, as much as I suggest that you use managed identities and workload identity federation, you'll find situations where you need to manage secrets yourself. In circumstances like this, you should use a product such as Azure Key Vault or HashiCorp Vault to store secrets. You


can then use something like managed identity to acquire the secret at runtime, and either not save it locally, or save it in a trusted encryption, such as hardware-backed encryption. Note that products such as Key Vault should not be used for high IO scenarios. If you have to reach out to Key Vault every time you need the secret, your app won't scale. But it serves the need of storing your credentials safely very well.

Understand Token Validation
SAML has SAML assertions, and OIDC has access tokens, refresh tokens, and ID tokens. You also have PRTs (primary refresh tokens), which I like to call a powerful refresh token, but let's leave that out of the discussion for now.

The important part here is that in this copy/paste-driven development world we live in, it's important for you to understand how tokens are validated. I'll leave SAML for another day because it's so different; let's focus on OIDC for now.

In OIDC, you have three kinds of tokens.

The ID token is what establishes the user's identity. The intended audience for this token is your app. Your app receives this token, validates it, and issues its own mechanism of maintaining the user identity, such as a cookie for a web session.

The access token is what establishes your ability to call an API, and it may or may not contain the user's identity. The access token is valid for a shorter duration of time, typically an hour, although it could be more or less, depending upon various factors. The intended audience for this kind of token is your API.

The refresh token has a much longer duration, intended to be used to gain new access tokens so the user isn't shown an authentication challenge every time the access token expires. The validity duration of the refresh token depends on the kind of application you're writing, and not every kind of OIDC flow allows for a refresh token.

There are other kinds of tokens, such as auth_codes, that play intermediary roles depending on the kind of flow you're using.

There are a few important things you need to understand about token validation.

In Azure AD, refresh tokens are opaque. You cannot decode them; they're not intended for you to decode. You don't validate them, you don't decode them, you simply use them when you need a new access token. It's up to Azure AD to validate them and issue you an access token using them. Also, Azure AD has the prerogative to revoke your refresh tokens without informing your app first. So don't rely on application architectures that rely on refresh tokens never getting invalidated. For instance, long-running processes under the user's identity using a refresh token is a perfectly valid architecture pattern. That's what the "offline_access" scope is intended for anyway. But build an escape hatch to alert the user if the refresh token gets invalidated, so the user can reauthorize the offline process, which is basically getting a new refresh token with an interactive auth flow.

ID tokens and access tokens are JWT tokens in Azure AD. OIDC does not mandate these to be JWT tokens, and at some point, they may not be. For instance, it's possible that the identity provider (Azure AD in this case) could move to encrypted access tokens. Encrypted access tokens would be a good improvement because they're intended for the API, not the app. The API would have the ability to decrypt and decode them. The app, on the other hand, can't decrypt them and therefore decode them. This is because the app has no business reading and unpacking an access token. This is a common mistake I see, where the app decodes an access token and makes logic assumptions based on it. You're setting yourself up for failure by doing this.

Finally, ID tokens are intended for your app. These are also JWT tokens as of now.

Figure 5: The access token header

ADVERTISERS INDEX
CODE Consulting (www.codemag.com/ai-services) 2
CODE Consulting (www.codemag.com/copilot) 7
CODE Consulting (www.codemag.com/Code) 75
CODE Consulting (www.codemag.com/executivebriefing) 76
DevIntersection (www.devintersection.com) 38
dtSearch (www.dtSearch.com) 11
LEAD Technologies (www.leadtools.com) 5
Advertising Sales: Tammy Ferguson, 832-717-4445 ext 26, [email protected]
This listing is provided as a courtesy to our readers and advertisers. The publisher assumes no responsibility for errors or omissions.


Both access tokens and ID tokens are intended to be validated, albeit by different parties, but the validation logic has a few things in common.

First, the tokens are comprised of three parts: the header, the body, and the signature.

The header gives you "packet" information. Think of it as the label on your package. It tells you what kind of signature algorithm is used, and where to find the keys to validate the signature. Azure AD has the private key to sign the access token or ID token. It never shares the private key and it rotates the public private keypair every now and then. The public key is available to you at an anonymous URL. You can use the public key to validate the signature. In Figure 5, you can see the header for an access token I just acquired from my tenant.

Of special note is the "alg," which stands for algorithm, and the "kid" or key ID claim. The idea is that Azure AD is telling you that it used RS256 to sign the token, and you can find the key ID from the JWKS URI, which resides at the following URL:

https://login.microsoftonline.com/
<tenant_id>/discovery/keys

You can see the key ID for my app in Figure 6. I got this as a simple GET request to the JWKS URI.

Figure 6: The JWKS URI output

To validate the signature of the token, grab the signature key from the JWKS URI. This is where I should mention one of the biggest issues I see so many applications make a mistake in.

Azure AD will rotate the key every now and then. You're supposed to follow the following pattern:

1. First, check if you have the signing key. If you don't, ask Azure AD for it and cache the key.
2. Now, use the cached key to check the signature of the token.
3. If the signature fails, re-query Azure AD for a new key, just in case the key has changed.
4. If the key has changed, recheck the signature with the new key. And yes, cache the new key.
5. If the key hasn't changed, fail the validation of signature.

The problem is that OIDC doesn't dictate this key caching and key rotation. There are so many OIDC-compliant SDKs, and even more hand-written validations that fail to follow this pattern. I'm serious, if I had a quarter for every time I saw this problem, I'd be writing this article from a private jet in a jacuzzi surrounded by, well, maybe the pilot. Anyway, I don't have a quarter for every time I see this problem, so, back to reality.

Microsoft SDKs, also known as MSALs, will perform this caching logic. But many third-party libraries don't. Let's be honest: Many MSALs don't have token validation logic, leaving developers to figure this out on their own.

And this is where we're done talking about signature validation issues. But token validation has other common problems.

When you validate a token in Azure AD, you must validate, at the bare minimum, the audience and the tenant, and then beyond that, per your authorization logic, the subject and other claims such as roles and scopes.

What would happen if you didn't validate the tenant? Well, then you could just set up another Azure AD tenant and set up the same audience, and pass your validation. Hopefully you're validating the subclaim (subject), but if you weren't, I'm in. Yay!

How confident are you that the copy/paste code from the internet that you used is indeed performing all this validation, and caching the signature validation cert? Are you sure? Better double check.
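For readers who want to see the caching pattern in code, here is a minimal sketch (not the author's sample) using the jsonwebtoken and jwks-rsa npm packages; the tenant ID and audience are placeholders, and jwks-rsa's built-in cache plus its re-fetch on an unknown "kid" covers steps 1 through 4 above.

// Sketch: validate an Azure AD JWT with cached, rotating signing keys
const jwt = require('jsonwebtoken');
const jwksClient = require('jwks-rsa');

const tenantId = '<tenant_id>';

const client = jwksClient({
  jwksUri: `https://login.microsoftonline.com/${tenantId}/discovery/keys`,
  cache: true // keep keys around instead of hitting the endpoint per token
});

// jsonwebtoken calls this with the token header so we can look up the "kid"
function getKey(header, callback) {
  client.getSigningKey(header.kid, (err, key) => {
    callback(err, err ? undefined : key.getPublicKey());
  });
}

function validateToken(token, callback) {
  jwt.verify(token, getKey, {
    algorithms: ['RS256'],
    audience: '<your_app_or_api_audience>', // bare-minimum audience check
    issuer: `https://login.microsoftonline.com/${tenantId}/v2.0` // tenant check
  }, callback);
}

Beyond what jwt.verify() checks, your own authorization logic still has to look at the subject, roles, and scopes, as described above.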


Abusing Claims
OIDC defines a bare minimum standard. It allows you to add claims that the standard doesn't define but doesn't prevent you from adding. There are a few problems here.

First, this gives you the leeway to stray away from the standard. I suggest that you don't stray away from the standard. Do the bare minimum validation that OIDC requires, and then add more on top as necessary and very judiciously.

Second, don't use tokens as a database. Tokens are supposed to be validated at every authorization step. The larger the token, the worse your performance will be. In fact, in many situations, it will break your application logic. Many platforms or even SDKs will cap the token size. Browsers are especially notorious for this. I've seen well known applications (that shall remain unnamed) with ridiculously low limits like 256 characters. My South Indian friends have very long last names, and I have seen even the bare minimum token go beyond the 256 character limit.

Additionally, Azure AD has some common-sense protections built in. For instance, if you're a member of groups, and you're a member of many groups, you can configure your application to issue the groups claim. But you can't really rely on this claim. Beyond a certain number of groups, you get a group overage claim that basically says, "hey the token got too large, but you can query MS Graph to get the groups." Well, if the reliable way to get groups is to call MS Graph anyway, why didn't I just call MS Graph in the first place? That's what you should have done. Additionally, when you start using dynamic groups or nested groups, you start running into reliability and performance issues. TLDR: Don't use the groups claim; it works nicely for simple demos.

Finally, I must talk about claims transformation, but let me talk about that in the light of immutable claims.

SPONSORED SIDEBAR: Ready to Modernize a Legacy App?
Need advice on migrating yesterday's legacy applications to today's modern platforms? Take advantage of CODE Consulting's years of experience and contact us today to schedule a free consulting call to discuss your options. No strings. No commitment. Nothing to buy. For more information, visit www.codemag.com/consulting or email us at [email protected].

Immutable Claims
Certain IdPs, including Azure AD, allow you to tweak the values of claims. Luckily, Azure AD won't let you do so with any claim. Claims that you cannot and should not change are called immutable claims. Those are the claims that you should base the user identity on. This is the NameID claim in SAML (which is inside the subject claim) or just the subclaim in OIDC.

Azure AD tries its best to keep you from shooting yourself in the foot. But it's up to you, the application author, to understand these nuances and not make mistakes. For instance, a common mistake is to base the user's identity on an "email" claim. Why? Because UPN and email look so similar, often the same. Why is basing the user's identity on email a bad idea? Because the email address can change, just like the user's name can change. Essentially, I could change my email to [email protected] and I'm now the CEO of Microsoft. I hope you were validating the tenant ID claim. But what if this was an insider attack? Changing one's email is not something that requires a high privilege.

Specifically for this attack vector, Microsoft now has protections. Apps created after June 2023 that issue this claim will get a blank value that the admin can change the behavior of. However, this doesn't affect existing apps that are already making this mistake. And yes, if you wish to shoot yourself in the foot going forward, it won't stop you from that either.

Long story short, always rely on immutable claims in Azure AD to establish user identity.
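As a tiny illustrative sketch of that advice (mine, not the author's), key your user records on the immutable subject and tenant claims from an already-validated token, never on email or UPN:

// Sketch: derive a stable user key from immutable claims
// "claims" is assumed to be a validated ID token payload
function getUserKey(claims) {
  if (!claims.sub || !claims.tid) {
    throw new Error('Token is missing required claims');
  }
  // sub is unique per user per app/tenant; tid pins the issuing tenant
  return `${claims.tid}:${claims.sub}`;
}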

Understanding Trust
Azure AD has a concept of an external identity provider where Azure AD can delegate identity establishment to an external identity provider. There's also a similar concept called federation. And another loosely similar concept of "guest accounts" where your Azure AD tenant allows an external visitor's account, whose identity is established by another Azure AD tenant.

In all of these situations, you're trusting an identity provider that isn't under your administrator's influence.

You may have done everything right, rotating secrets, validating tokens properly, even checking for immutable claims, and none of those will keep you safe if the external identity provider you trust isn't trustable.

Allow me to explain. Let's say I allow guest accounts from someshadyorg.com, and their CEO happens to be someniceguy. Because their identity provider is untrustable, they can effectively send me an immutable claim value for someshadyguy that effectively asserts that someshadyguy is someniceguy. There's nothing my Azure AD tenant or app can do to prove it otherwise, and I just have to "trust" this other org.

How do you guard against such problems? Well, the obvious answer is to establish trust with audits and reviews, and put safeguards in place. In Azure AD, this is as simple as working with verified tenants, validating tenant ID claims, and communicating with your customers what their Azure AD might be federated/delegated to trust.

Additionally, you can establish a limited blast radius in case of a breach. Guard like a hawk what guest accounts can and can't do. Establish alerts and audits, and use access packages and access reviews to trim the access down to the bare minimum. In Azure AD, use conditional access policies to establish location lockdown, device health, and authentication strength.

Summary
I started this article with an alarmist approach. I don't regret that. In the furious and aggressive pace of innovation, I worry that we, as an industry, are collectively underestimating the importance of security. Every other day, I see some news article of a breach of some major organization and leaking data.

I wish I could say that these organizations have learned their lesson, but I see the same org get targeted again, and no, they didn't learn a darn thing. I'm so tempted to name names here, but I'd rather not get in trouble.

Let me just say, though, that these are some very reputable companies, with millions of customers, and that you're probably a customer of some. When this information is leaked, guess what? The cat is out of the bag.

Large organizations are woefully slow. They fight over budgets while their secrets leak, they clamor to add features under ever-tightening deadlines, and guess what gets the cut? Security, because it doesn't affect the next quarter.

Can we please stop doing this nonsense as an industry and give security the importance it deserves?

All right, time for me to get off my soapbox. Stay safe and double check your apps.

Sahil Malik


ONLINE QUICK ID 2309031

Building Web APIs Using Node.js and Express: Part 3
In the last two parts of this article series (in the May/June and July/August 2023 issues of CODE Magazine), you created a web
server using Node.js, Express, and JavaScript. You built a series of API routes to retrieve an array of products and a single product,
searched for a set of products, and added, edited, and deleted products. The product data was retrieved from a connection to

a SQL Server database. In this article, part three of the series, you'll build a website using Node.js and Express to serve web pages. You're going to see how to use a templating engine, Mustache, to create dynamic web pages from the data retrieved from API calls. To communicate from your website to your Web API server, you must configure cross-domain resource sharing (CORS). You'll see how to enable CORS in your Web API project. You'll then build a set of search, add, edit, and delete pages that make calls to your Web APIs.

Paul D. Sheriff
http://www.pdsa.com

Paul has been working in the IT industry since 1985. In that time, he has successfully assisted hundreds of companies architect software applications to solve their toughest business problems. Paul has been a teacher and mentor through various mediums such as video courses, blogs, articles and speaking engagements at user groups and conferences around the world. Paul has multiple courses in the www.pluralsight.com library (https://bit.ly/3gvXgvj) and on Udemy.com (https://bit.ly/3WOK8kX) on topics ranging from C#, LINQ, JavaScript, Angular, MVC, WPF, XML, jQuery, and Bootstrap. Contact Paul at [email protected].

Create a Node.js Web Project Using VS Code
Like my last two articles, this article is designed for you to follow along. You learn the most by doing, so please feel free to join me to create a website together. You only need a few tools, most of which you probably already have on your computer. You need to install Node.js and VS Code on your computer. You also need access to a SQL Server, so install one on your computer or use SQL Server Express. Everything else is downloaded as you go along. For more information about installing these tools, please go back and review the instructions in Part 1 of this article (https://www.codemag.com/Article/2305031/Building-Web-APIs-Using-Node.js-and-Express-Part-1).

Let's get started creating this new website using Node.js and Express. Open a Command Prompt, the Windows PowerShell app, or some other terminal as appropriate for your computer and navigate to where you normally place your development projects. You can even open VS Code and use the terminal window in there. For this article, I'm using my folder D:\Samples to create the new Node.js project. After opening a command prompt within the development folder, create a new folder under that folder and navigate to that folder using the following two commands.

mkdir AdvWorks
cd AdvWorks

Open the Visual Studio Code editor in this new folder using the following command. Note that this is the word code followed by a space and a period (.).

code .

From the menu system in VS Code, open a new terminal by selecting Terminal > New Terminal. Type in the following command in the terminal window to start building a new JavaScript project.

npm init

Within the terminal window, it asks for some information to describe this project. If you wish to accept the default answers, press the Enter key after each prompt; otherwise enter the appropriate information for your project, as shown in Figure 1. At the end, answer Yes to save a file called package.json in your new project folder. The package.json file contains meta-data about your project to help npm run the scripts in the project, install dependencies, and identify the initial JavaScript file used to start the application.

Figure 1: Answer a series of questions to create a package.json file for your project.

Install Express and Nodemon
You're now ready to install any additional utilities you want to use in your application. You're going to use the Express framework for creating the web server, so install that now. From within the terminal window, install the Express framework using the following command:

npm install express

You're also going to use the nodemon utility to automatically detect changes to any files in your project and restart your web server when appropriate. Install nodemon by using the following command within the terminal window:

npm install nodemon
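For reference, a package.json produced this way looks roughly like the following. This is an illustrative sketch, since the exact values depend on the answers you gave to npm init.

{
  "name": "advworks",
  "version": "1.0.0",
  "description": "AdvWorks website using Node.js and Express",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC"
}

Because express and nodemon were just installed, your real file will also contain a dependencies property listing both, which is what the next section inspects.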


Modify the package.json File property under the Scripts property, as shown in the code
Open the package.json file and you should now see Ex- snippet below.
press and nodemon listed under the Dependencies prop-
erty. Also, notice that there’s a Main property that lists "scripts": {
index.js as its value. This is the starting JavaScript file "start": "nodemon index.js",
for the application. Because you want to use nodemon "test": "echo \"Error: no test
to watch for any changes in your js files, add a Start VSHFLÀHG? H[LW
},

Listing 1: In the index.js file is where you serve up the static HTML home page. Be sure to save the changes to the package.json file at
// Load the express module this point.
const express = require('express');
// Create an instance of express
const app = express(); Create Website Starting Point
// Create an instance of a Router
const router = express.Router(); Whether you’re creating a Web API project or a website
// Specify the port to use for this server project, the starting code for Express is very similar. Just
const port = 3010; like before, you need to create a file named index.js as
the starting point. Open the index.js file and add the code
&RQÀJXUHORFDWLRQ V RIVWDWLF+70/ÀOHV
app.use(express.static('public'));
shown in Listing 1. There are only a few differences in
this code from the Web API project created in the first
/** part of this article series as you will notice as you build
* GET the website in this article. First, set the port number to a
 #UHWXUQVLQGH[KWPOÀOH different value from that of your Web API project. Invoke
*/
app.get('/', (req, res, next) => { the method app.use(express.static('public')) to set the
res.status(200).send(); location of where your static HTML files are located. Within
}); the app.get() function, call the send() function with no
parameters to send the HTML to the requesting browser.
// Create web server to listen
RQWKHVSHFLÀHGSRUW
let server = app.listen(port, function () { How does the Express server know to use a file called index.
console.log(`AdvWorks web server is running html from the app.get("/" …) call? In the line of code
on http://localhost:${port}.`); app.use(express.static('public')), the express.static()
});
function assumes that a file named index.html is within
the public folder. Override this file name, if you wish, by
passing in a second parameter to set the index property to
the name of HTML file you wish to use. For example, if you
Listing 2: Create the index HTML page. rename the index.html file in the public folder to home.
<!DOCTYPE html> html, use the following code to use that file name:
<html>
<head> app.use(express.static('public',
<title>Product Maintenance</title>
<link rel="icon" type="image/x-icon" {
href="#"> "index": "home.html"
<style> }
.text-center { ));
text-align: center
}
</style> Create the HTML Home Page
</head> Because you declared your intention to use a folder named
public in the express.static('public') call, create a folder
<body>
<div class="text-center">
named public in your project and add a file named index.
<h1>Product Maintenance</h1> html. Add the code shown in Listing 2 to this new index.
html file. Make sure you put the <script> tag that refer-
<button onclick="getAllProducts();"> ences the googleapi.com all on one line. I had to break
Get All Products the line because of formatting restrictions in this article.
</button>
</div>
Create JavaScript File to Make API Call
<div class="text-center"> At the bottom of the index.html file, there are two
<textarea rows="10" cols="80" <script> tags. The first tag references the ajax library
id="products">
</textarea> from Google to help make API calls. The second <script>
</div> tag references a local file called index.js into which you’re
<script going to write the code to make the API calls to the Ad-
src="https://ajax.googleapis.com vWorksAPI project you created in the previous articles.
/ajax/libs/jquery/3.6.3
/jquery.min.js">
</script> On the index.html file, you can see a button that calls
<script src="./scripts/index.js"> a function named getAllProducts(). Create a file named
</script> index.js file in the public folder, and add the getAll-
</body>
</html>
Products() function, as shown in Listing 3. The code in
the getAllProducts() function should be very familiar to

16 Building Web APIs Using Node.js and Express: Part 3 codemag.com


anyone that who’s called APIs using jQuery. The asynchro- Listing 3: Create the function to make the API call from your index.html page.
nous getJSON() method is passed in two parameters: the
'use strict';
URL where the API call is located and a callback function
if the call is successful. The getJSON() method returns a const url = 'http://localhost:3000/api/product';
promise, so you can chain the .done() and .fail() methods
to the call to respond if the code succeeds or fails. If data function getAllProducts() {
is returned from the API call, that data is passed to the $.getJSON(url, function (response) {
$('#products').val(JSON.stringify(response));
anonymous function as the parameter named response. })
For now, you’re just going to place the stringified version .done(function () {
of the JSON data into the <textarea> named products. console.log("Success");
})
.fail(function (error) {
Try It Out $('#products').val(
Save the changes you’ve made to all the files in your "Can't retrieve Products.
project and start the website in the terminal window of Check Console Window " +
your editor by typing in npm start. Go to your browser JSON.stringify(error));
console.error(error);
and navigate to http://localhost:3010. You should see });
the HTML (Figure 2) from the index.html file appear on }
the screen. Click on the Get All Products button and
you should receive the message Can't retrieve Products.
Check Console Window. Open the F12 developer tools on
your browser and check the Console tab. You should see a
CORS error that looks like Figure 2.

Add CORS to the REST API Project


CORS stands for cross-origin resource sharing and is an
HTTP header mechanism that allows a web server to indi-
cate from which domains it will accept calls. By default,
a server blocks all calls that don’t originate from pages
or callers with their own domain. When you add CORS to
your Web API project, you specify all domains or only spe-
cific domains from which to accept remote calls. You can
even specify which verbs (GET, POST, PUT, DELETE, etc.)
to accept. Go back to the AdvWorksAPI project and open
a terminal window in your editor. Type in the following
command to install the CORS package.

npm install cors

Add a new file to the helpers folder named cors.js. Add


the code shown in Listing 4 to this new file. There is a
single function named configure() in this corsHelper ob-
ject. I’ve put some different ways you can configure CORS
within the function that you can refer to later. The first Figure 2: You should receive a CORS error when first attempting to make a call to your
commented line is app.use(cors()). If you use this call, API project.
your server will allow calls from all domains. If you create
a JSON Options object, you can add different properties
to specify domains and methods that your server will al-
low.

You now need to call this configure() method from your index.js file so you can register CORS functionality within your Web API project. Open the index.js file and add the following code before you mount the routes:

// Configure CORS
const corsHelper = require('./helpers/cors');
corsHelper.configure(app);

// Mount routes from modules
router.use('/product',
  require('./routes/product'));

Try It Out
Save all changes made to your AdvWorksAPI project. Assuming that both projects are running, navigate to http://localhost:3010 and click on the Get All Products button. If you've done everything correctly, you should see the product array appear in the text area, as shown in Figure 3.

Figure 3: You should get an array of product data if everything worked correctly.


Listing 4: Add a CORS helper object to configure from which domains your server will accept calls.

// Load CORS module
const cors = require('cors');

/**
* CORS Configuration Helper
* @reference
* https://expressjs.com/en/resources/middleware/cors.html
*/
let corsHelper = {};

/**
* Configure CORS usage
*/
corsHelper.configure = function (app) {
  // Enable CORS for all requests
  // app.use(cors());

  // Enable CORS for only one domain
  let options = {
    "origin": "http://localhost:3010"
  };

  // Enable CORS for only one domain
  // and only certain verbs
  // let options = {
  //   "origin": "http://localhost:3010",
  //   "methods": "GET,POST"
  // };

  // Pass options to cors() method
  app.use(cors(options));
}

module.exports = corsHelper;

Listing 5: Create a hard-coded product page.

<!DOCTYPE html>
<html>
<head>
  <title>Product Maintenance</title>
</head>

<body>
  <h1>Product List</h1>

  <select size=10>
    <option id="345">HL Road Frame - Red, 58</option>
    <option id="346">Sport-100 Helmet, Red</option>
    <option id="348">Mountain Bike Socks, M</option>
  </select>
</body>
</html>

Did You Get an Error?
If you get an error message about not being able to make a connection, make sure that TCP/IP is enabled for SQL Server on your computer. By default, it's usually turned off. Open the Computer Management app (Figure 4) in Windows and expand the SQL Server Configuration Manager node, then expand the Protocols for MSSQLSERVER node (or Protocols for SQLExpress if using SQLEXPRESS). Double-click on the TCP/IP protocol and in the dialog box, change the status to Enabled.

Figure 4: Turn on the TCP/IP protocol for SQL Server if you get errors attempting to connect.

Getting Started with Templating
In the simple example you just coded, the product data is simply placed into an HTML text area. Most likely, you're going to want to display the products in an HTML table or a drop-down list. You can build the table or drop-down list by coding everything in JavaScript, but doing so can get very messy. If you've ever used ASP.NET MVC, you have a server-side templating language, Razor, to mix HTML and code to build tables, drop-downs, and forms dynamically.

There are several client-side templating engines that work similarly to the server-side templating languages. The most common client-side templating engines are EJS, Mustache, Handlebars, and Pug. These templating engines help you build web pages dynamically on the client-side. I'm going to introduce you to Mustache in this article because it's the one that I've used many times over the years. Once you learn one of these templating engines, you'll find that most of them look the same.

Mustache, and most of the templating engines, want you to create a folder named views. Create the views folder in your AdvWorks project, and within that folder, add a new file named product.mustache. Into this new file, place the HTML shown in Listing 5. For this first example, you're just going to use some hard-coded data. Eventually, you'll replace this hard-coded data with data from a Web API call, but first, let's learn how to call this page using Mustache.


Create a Route to Display the Mustache Page
As you did in the first article in this series, you're going to create modules to hold the different pieces of functionality for your web application. Add a new folder called routes and add a new file named product.js in this folder. Open the routes\product.js file and add the code shown below:

// Create an instance of a Router
const router = require('express').Router();

// GET /product Route
router.get('/', (req, res, next) => {
  // Use the view engine to render the page
  res.render('product.mustache', {});
});

module.exports = router;

In this router.get() function, you call the render() function on the response (res) object, passing in the value 'product.mustache', which corresponds to the file product.mustache located in the views folder. The second parameter to res.render() is the data that is to be used on the mustache file to create the page. There's no data for this page yet, so just pass an empty JSON object.

How does Express know to use the file named product.mustache and not product.html or some other file? The answer is that it doesn't yet, but it will as soon as you download Mustache and register it as your view engine. Let's do that next.

Add Mustache to Your Project
Add the mustache templating engine to your AdvWorks project by submitting the following command in the terminal window:

npm install mustache-express

Open the index.js file and add the code shown in Listing 6 after the definition of the port constant. In this code, you load the Mustache module, tell Express what view engine you're using with the app.set() function, and then register the engine using the app.engine() function. You now use the router.use() function to create the product route with the routes defined in your routes\product.js file.

Listing 6: Configure the mustache-express package as the view engine for this website.

// Configure Mustache Templating Engine
let mustacheExpress =
  require('mustache-express');
// The following is the default
// change the directory name if you wish
//app.set('views', `${__dirname}/views`);
// Tell express the view engine you are using
app.set('view engine', 'mustache');
// Set the engine to Mustache
app.engine('mustache', mustacheExpress());

// Mount routes from modules
router.use('/product',
  require('./routes/product'));
// Configure router so all routes
// have no prefix
app.use('/', router);

Remember how you use the tool nodemon to restart the node application if a file changes? Well, because you're now using .html and .mustache files, you need to inform nodemon to monitor those files as well. Open the package.json file and modify the start property so it looks like the code shown below. The -e option passes a list of file extensions to the nodemon utility to monitor for changes. This is a comma-separated list of extensions. Do not put any spaces in between each. A space is what separates the extensions from the file to start with, namely, index.js.

"scripts": {
  "start":
    "nodemon -e mustache,js,html,json index.js",
  "test": "echo \"Error: no test specified\" && exit 1"
},

Listing 7: Add some data to send to the Mustache page.

res.render('product',
  {
    "data": [
      {
        "productID": 354,
        "name": "Long-Sleeve Logo Jersey, L"
      },
      {
        "productID": 356,
        "name": "HL Road Frame - Red, 62"
      },
      {
        "productID": 358,
        "name": "HL Road Frame - Red, 48"
      }
    ]
  }
);

Install Mustache Template VS Code Extension
If you're using VS Code, you should install the Mustache Template – Snippets and Autocomplete extension to help you format and work with Mustache within VS Code. Click on the Extensions tab in VS Code and search for "Mustache". Install this extension, as shown in Figure 5.

Figure 5: Install the Mustache template extension to make it easier to work with Mustache files.

References for Mustache
There's a lot more to Mustache than I'm covering in this article. To learn more about the Mustache templating engine, I've listed a few links that provide you with more information:


• https://github.com/janl/mustache.js/
• https://mustache.github.io/mustache.5.html
• https://drewword.medium.com/express-mustache-quick-start-36c3421af91

Try It Out
Save all the changes made to the files in your project and restart the AdvWorks project. Type in the URL http://localhost:3010/product and you should now see the HTML page with a list of hard-coded product data in a drop-down list.

Feed Data to the Mustache Page
Open the routes\product.js file and replace the res.render() line of code with the code shown in Listing 7. The second parameter to the res.render() method is now a hard-coded set of product data to be used to fill in the data you hard-coded in the <select> element. Eventually, you're going to replace this code with a call to the Web API, but let's first learn how to use the Mustache templating engine with hard-coded data.

The second parameter passed to res.render() is an unnamed JSON object. Add a property to this object called data that's an array of product objects. The array in data is what you're going to tell Mustache to use to iterate over, and create each <option> in the <select> list. Open the views\product.mustache file and replace all the hard-coded <option> elements with the following code:

<select size=10>
  {{#data}}
    <option id="{{productID}}">
      {{name}}
    </option>
  {{/data}}
</select>

This new code you've entered contains Mustache tokens. These tokens are variable names that either represent a collection of data {{#data}}, or a single property from within the array of objects such as {{productID}} and {{name}}. These tokens are replaced with the data from the object you passed to the second parameter of the res.render() function.

The token {{#data}} says to iterate over the array of product objects contained in the data property. For each product object, emit all the HTML within the starting {{#data}} token and the closing {{/data}} token. While iterating through each, replace each property within the tokens ({{productID}}, for example) with the data from that property in the product object.
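
For example, assuming the data from Listing 7, the template above would render markup along these lines (shown only to illustrate the token replacement; it's not a file you need to create):

<select size=10>
  <option id="354">
    Long-Sleeve Logo Jersey, L
  </option>
  <option id="356">
    HL Road Frame - Red, 62
  </option>
  <option id="358">
    HL Road Frame - Red, 48
  </option>
</select>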
Try It Out
Save all the changes to the files in your AdvWorks project, go to your browser, and refresh the page at http://localhost:3010/product. You should see a different set of products appear in your HTML list. These are now the products coming from the hard-coded product array in your router.

Using Partial Pages
A great feature of Mustache is that it allows you to break your HTML pages up into smaller chunks. This feature is called partial pages and is very similar to the way partial pages in ASP.NET Razor work. To illustrate, let's take the part of the product page that creates the list of products and move that off into a separate file. Open the views\product.mustache file and cut to the clipboard the entire <select>…</select> element. Where that code used to be, replace it with {{> _productList}}, as shown in the following code snippet:

<!DOCTYPE html>
<html>
<head>
  <title>Product Maintenance</title>
</head>

<body>
  <h1>Product List</h1>

  {{> _productList}}
</body>
</html>

This mustache token {{> …}} tells Mustache to locate the file named after the greater-than sign. In the code above, that means to look for a file named _productList.mustache and insert the contents of that file into this location in the product.mustache file. Add a new file in the views folder named _productList.mustache and add the code shown below.

<select size=10>
  {{#data}}
    <option id="{{productID}}">
      {{name}}
    </option>
  {{/data}}
</select>

Try It Out
Save all the changes you made to your project files. Open your browser and refresh the page, and you should see the exact same page you did before. The difference is that you used two Mustache files to accomplish the same thing. You're going to see where this comes in handy as you work your way through this article.

Create a Table of Products
Open the routes\product.js file and add more properties to each of the product objects in the array, as shown in Listing 8. This provides better data to display in the _productList partial page. Now that you have more properties for each product object, an HTML table is more appropriate for displaying that data. Open the views\_productList.mustache file and replace the entire contents of the file with the HTML shown in Listing 9.

Let's add a little bit of styling to the table. Open the views\product.mustache file and within the <head> element, add the following styles.

<style>
  table, th, td {
    border: 1px solid;
    border-collapse: collapse;
  }
  th, td {
    padding: 1em;
    text-align: left;
  }
  tr:nth-child(even) {
    background-color: #f2f2f2;
  }
  .text-end {
    text-align: right;
  }
</style>


Listing 8: Create a route to send some data to a Mustache page.

res.render('product',
  {
    "data": [
      {
        "productID": 345,
        "name": "HL Road Frame - Red, 58",
        "productNumber": "FR-R92R-58",
        "color": "Red",
        "standardCost": 1059.3100,
        "listPrice": 1500.0000,
        "modifiedDate": "2019-09-11"
      },
      {
        "productID": 346,
        "name": "Sport-100 Helmet, Red",
        "productNumber": "HL-U509-R",
        "color": "Red",
        "standardCost": 13.0800,
        "listPrice": 34.9900,
        "modifiedDate": "2004-03-11"
      },
      {
        "productID": 347,
        "name": "Sport-100 Helmet, Black",
        "productNumber": "HL-U509",
        "color": "Black",
        "standardCost": 13.0863,
        "listPrice": 34.9900,
        "modifiedDate": "2004-03-11"
      }
    ]
  }
);

Listing 9: Add an HTML file with some templating in it to render the product data.

<table>
  <thead>
    <tr>
      <th>Product ID</th>
      <th>Product Name</th>
      <th>Product Number</th>
      <th>Color</th>
      <th class="text-end">Cost</th>
      <th class="text-end">Price</th>
    </tr>
  </thead>
  <tbody>
    {{#data}}
    <tr>
      <td>{{productID}}</td>
      <td>{{name}}</td>
      <td>{{productNumber}}</td>
      <td>{{color}}</td>
      <td class="text-end">{{standardCost}}</td>
      <td class="text-end">{{listPrice}}</td>
    </tr>
    {{/data}}
  </tbody>
</table>

Try It Out
Save all the changes you have made to the files in your AdvWorks project. Go to your browser and refresh the page at http://localhost:3010/product. You should now see a page with the product data in a nicely formatted table, as shown in Figure 6.

Figure 6: A table can be built using the Mustache templating engine.

Formatting Columns
In the HTML table, you can see that the standard cost and the list price fields are both right-justified. Because each of these values is a dollar amount, they should be formatted as currency values. Mustache does the rendering of the data via its tokens {{ and }}. Within these tokens, you're only allowed to use properties or functions that are attached to the object passed as the second parameter to the res.render() function. This means you need to add a couple of properties to that object to render the cost and the price as currency values. Open the routes\product.js file and add two new properties before (or after) the data property within the second parameter passed to the res.render() function, as shown in Listing 10.

Open the views\_productList.mustache file and change the last two columns that contain the {{standardCost}} and the {{listPrice}} tokens to the following code:

<td class="text-end">
  {{costAsCurrency}}
</td>
<td class="text-end">
  {{priceAsCurrency}}
</td>
As Mustache processes each row in the array of product objects, it calls the costAsCurrency function, which uses the this keyword to get the standardCost property from the current row being processed. It uses the toLocaleString() method on the Number data type in JavaScript to convert the value to a U.S. currency format. It also calls the priceAsCurrency function, which uses the this keyword to get the listPrice property from the current row being processed and converts that to a U.S. currency format as well.
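
If you want to verify the formatting in isolation, here's a quick sketch you can run in the Node.js REPL, using one of the hard-coded values from Listing 8:

new Number(1059.31).toLocaleString("en-US",
  { "style": "currency", "currency": "USD" });
// => '$1,059.31'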

Listing 10: Add two properties to render numbers as currency values.

res.render('product', {
  "costAsCurrency": function () {
    return new Number(this.standardCost)
      .toLocaleString("en-US",
        { "style": "currency",
          "currency": "USD" });
  },
  "priceAsCurrency": function () {
    return new Number(this.listPrice)
      .toLocaleString("en-US",
        { "style": "currency",
          "currency": "USD" });
  },
  "data": [
    {
      "productID": 345,
      "name": "HL Road Frame - Red, 58",
      "productNumber": "FR-R92R-58",
      "color": "Red",
      "standardCost": 1059.3100,
      "listPrice": 1500.0000,
      "modifiedDate": "2019-09-11"
    },
    // REST OF THE OBJECTS HERE
  ]
});

References for tiny-json-http
There is much more to the tiny-json-http package than I'm covering in this article. Here are some references if you want to learn more about tiny-json-http:

• https://www.npmjs.com/package/tiny-json-http
• https://snyk.io/advisor/npm-package/tiny-json-http/functions/tiny-json-http.post
• https://github.com/brianleroux/tiny-json-http

Another Method to Make API Calls
Instead of using the hard-coded data you put into the route, let's make a call to the Web API to retrieve the product data from SQL Server. There are many tools, such as the Google APIs used earlier, that allow you to make Web API calls within an Express application. I like using the tiny-json-http package as it has a very simple syntax. Within your AdvWorks project, open a terminal window and install this package by typing in the following command:

npm install tiny-json-http
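
To give you a taste of that simple syntax before wiring it into a route, here's a minimal sketch (my own illustration, not code from the project; the real integration is shown in Listing 11):

// A quick tiny-json-http example
const tiny = require('tiny-json-http');

async function demo() {
  // GET the URL and read the parsed JSON body
  let response = await tiny.get({
    "url": "http://localhost:3000/api/product"
  });
  console.log(response.body);
}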
Web API within a try…catch block. Call tiny.get() pass-
ing in a JSON object with the url property set to the
Listing 10: Add two properties to retrieve numbers are currency values. url constant. The response object that’s returned has a
res.render('product', { data property within the body that contains the array
"costAsCurrency": function () { of product objects.
return new Number(this.standardCost)
.toLocaleString("en-US", In this version of the res.render() function, the second
{ "style": "currency",
"currency": "USD" }); parameter has four properties on this object: isListVis-
}, ible, data, costAsCurrency, and priceAsCurrency. On the
"priceAsCurrency": function () { product.mustache page (Listing 12) you’re going to use
return new Number(this.listPrice) the {{#isListVisible}} token that says if this variable is
.toLocaleString("en-US",
{ "style": "currency", a true value, then display whatever is contained within
"currency": "USD" }); this token and its closing token {{/isListVisible}}. In this
}, case, you display the HTML table in the _productList.mus-
"data": [ tache file.
{
"productID": 345,
"name": "HL Road Frame - Red, 58", {{#isListVisible}}
"productNumber": "FR-R92R-58", {{> _productList}}
"color": "Red",
"standardCost": 1059.3100, {{/isListVisible}}
"listPrice": 1500.0000,
PRGLÀHG'DWH: "2019-09-11"
}, Make a Nicer Looking Product List Page
// REST OF THE OBJECTS HERE Open the views\product.mustache file and replace the
entire contents with the code shown in Listing 12. This
] HTML is like the code you wrote before, only you’re now
} adding the Bootstrap 5.x CSS framework so you can take
advantage of all the nice CSS classes available in this

Listing 11: Call the Web API using the tiny JSON HTTP package.
// Create an instance of a Router "data": data,
const router = require('express').Router(); "costAsCurrency": function () {
return new Number(this.standardCost)
// Load tiny-json-http module .toLocaleString("en-US",
const tiny = require('tiny-json-http'); { "style": "currency",
"currency": "USD" });
// Create URL for Web API calls },
const url = 'http://localhost:3000/api/product'; "priceAsCurrency": function () {
return new Number(this.listPrice)
// GET /product .toLocaleString("en-US",
router.get('/', async (req, res, next) => { { "style": "currency",
try { "currency": "USD" });
// Request data from Web API }
let response = await tiny.get({ }
"url": url );
}); } catch (err) {
// Get data from response next(err);
let data = response.body.data; }
// Render the page });
res.render('product',
{ module.exports = router;
"isListVisible": true,

22 Building Web APIs Using Node.js and Express: Part 3 codemag.com


Listing 12: Create a main page for displaying product data.

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <meta name="viewport"
        content="width=device-width, initial-scale=1">
  <title>{{title}}</title>
  <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.2.3/dist/css/bootstrap.min.css"
        rel="stylesheet"
        integrity="sha384-rbsA2VBKQhggwzxH7pPCaAqO46MgnOM80zW1RWuH61DGLwZJEdK2Kadq2F9CUG65"
        crossorigin="anonymous">
</head>

<body>
  <div class="container">
    {{#isListVisible}}
      {{> _productList}}
    {{/isListVisible}}
  </div>

  <script src="https://cdn.jsdelivr.net/npm/bootstrap@5.2.3/dist/js/bootstrap.bundle.min.js"
          integrity="sha384-kenU1KFdBIe4zVF0s0G1M5b4hcpxyD9F7jL+jjXkk+Q2h455rYXK/7HAuoJl+0I4"
          crossorigin="anonymous"></script>
</body>
</html>

Listing 13: Add HTML to allow the user to input some search criteria.

<form class="mt-4"
      action="/product/search" method="get">
  <div class="card">
    <div class="card-header bg-primary
                text-light">
      <h5 class="card-title">
        Search for Products
      </h5>
    </div>
    <div class="card-body">
      <div class="form-group">
        <label for="searchName">
          Product Name (or partial)
        </label>
        <input type="text"
               id="searchName"
               name="searchName"
               class="form-control"
               value="{{search.name}}" />
      </div>
      <div class="form-group">
        <label for="searchListPrice">
          Greater Than Price
        </label>
        <input type="text"
               id="searchListPrice"
               name="searchListPrice"
               class="form-control"
               value="{{search.listPrice}}" />
      </div>
    </div>
    <div class="card-footer bg-primary
                text-light">
      <button class="btn btn-success">
        Search
      </button>
      <a href="/product"
         class="btn btn-secondary">
        Reset
      </a>
    </div>
  </div>
</form>

Due to publishing constraints of this magazine, I had to break the values in the attributes of the <link> and <script> elements. Please make sure these are all on one line when you put them into your file.

Open the views\_productList.mustache file and at the top of the file, add the header shown in the following code snippet. Notice the use of the Bootstrap CSS classes row, col, and text-center.

<div class="row">
  <div class="col text-center">
    <h1>Product List</h1>
  </div>
</div>

Next, add the class attribute to the <table> element and add the Bootstrap classes shown in the following code. These classes make formatting your HTML table easy.

<table class="mt-2 table table-striped
       table-bordered">

// REST OF THE TABLE HERE

</table>

Try It Out
Save the changes you made to all the files in your AdvWorks project and start it up. Ensure that the AdvWorksAPI Web API project is still running. Go to your browser and navigate to http://localhost:3010/product. You should see a list of products that are coming from the Product table in your SQL Server. The look of the page should be nicer with the addition of the Bootstrap CSS framework, as shown in Figure 7.

Search for Products
In the Web API project, you created a search endpoint to allow you to search for products based on a partial name and/or if the list price is greater than a specified amount. Create a new file in the views folder named _productSearch.mustache and add the code shown in Listing 13 to this new file.

Open the views\product.mustache file and within the {{#isListVisible}} tokens, add the token to bring in the new file you created. Make sure this file goes before the one that references the _productList file.

{{#isListVisible}}
  {{> _productSearch}}
  {{> _productList}}
{{/isListVisible}}

The HTML in the _productSearch file uses a Bootstrap panel to wrap up the search input fields. The form action attribute says to post the input fields within this form to a product/search route. When you click on the Search button, the data from these two input fields is posted to this new route.

Create the new search route by opening the routes\product.js file and adding the code shown in Listing 14. Once again, due to the publishing constraints of this magazine, some of the long lines that build the request variable should be on one line. If you are copying the code from the magazine, please make sure to fix these up. The search route function creates a search object from the values passed on the query line from the form post-back. It then determines which of the properties in the search object are filled in and builds the appropriate URL to submit to the Web API search route. Examples of what this URL could look like are http://localhost:3000/api/product/search?name=HL or http://localhost:3000/api/product/search?name=HL&listPrice=1499, or some variation thereof. This URL is then submitted to the Web API application and the data is returned to be displayed on the list page.

Listing 14: Create a function to search for product data.

// GET /search Route
router.get('/search', async (req, res, next) => {
  try {
    // Create search object with
    // parameters from query line
    let search = {
      "name": req.query.searchName,
      "listPrice": req.query.searchListPrice
    };
    if (search.name || search.listPrice) {
      let request = url;
      if (search.name && search.listPrice) {
        request += `/search?name=${search.name}` +
          `&listPrice=${search.listPrice}`;
      }
      if (search.name && !search.listPrice) {
        request += `/search?name=${search.name}`;
      }
      if (!search.name && search.listPrice) {
        request +=
          `/search?listPrice=${search.listPrice}`;
      }

      // Request data from Web API
      let response = await tiny.get({
        "url": request
      });
      // Get data from response
      let data = response.body.data;
      // Render the page
      res.render('product',
        {
          "isListVisible": true,
          "search": search,
          "data": data,
          "costAsCurrency": function () {
            return new Number(this.standardCost)
              .toLocaleString("en-US",
                { "style": "currency",
                  "currency": "USD" });
          },
          "priceAsCurrency": function () {
            return new Number(this.listPrice)
              .toLocaleString("en-US",
                { "style": "currency",
                  "currency": "USD" });
          }
        });
    }
    else {
      // Redisplay the list
      res.redirect('/product');
    }
  } catch (err) {
    next(err);
  }
});

Try It Out
Save the changes made to your project files and ensure that the project is running. Navigate to the http://localhost:3010/product page and type into the Product Name (or partial) input field the value HL. Click the Search button and you should see a few rows that match this criterion displayed in your HTML table. Input the value 1439 into the Greater Than Price input field and click the Search button again. You should now see fewer rows. Finally, click the Reset button to see the whole list of products re-appear in the table.

Figure 7: Bootstrap helps make all your web pages look more professional.

Product Detail Page
Now that you've displayed the complete list of products from the Product table, it would be nice to be able to click on one of the products and display a detail page. You're going to use this detail page for adding and editing product data. First, let's display the product data in the appropriate input fields. Add a new file in the views folder named _productDetail.mustache and add the code shown in Listing 15. This is a typical input form with some hidden input fields for keeping track of properties that the user doesn't need to see. For example, the isListVisible property determines if the list or detail page is visible. The isAdding property determines if the user is adding a new product or updating an existing product.


Listing 15: Create a detail page to add/edit product data.

<div class="mt-4 row text-center">
  <div class="col">
    <h2>Product Detail</h2>
  </div>
</div>

{{#detail}}
<form class="mt-4" action="/product"
      method="post">
  <input type="hidden" id="isAdding"
         name="isAdding" value="{{isAdding}}" />
  <input type="hidden" id="isListVisible"
         name="isListVisible"
         value="{{isListVisible}}" />
  <input type="hidden" id="productID"
         name="productID" value="{{productID}}" />

  <div class="form-group">
    <label for="name" class="form-label">
      Product ID
    </label>
    <input type="text" readonly disabled
           class="form-control"
           value="{{productID}}" />
  </div>
  <div class="form-group">
    <label for="name" class="form-label">
      Product Name
    </label>
    <input type="text" id="name" name="name"
           class="form-control" value="{{name}}" />
  </div>
  <div class="form-group">
    <label for="productNumber"
           class="form-label">
      Product Number
    </label>
    <input type="text" id="productNumber"
           name="productNumber" class="form-control"
           value="{{productNumber}}" />
  </div>
  <div class="form-group">
    <label for="color" class="form-label">
      Color
    </label>
    <input type="text" id="color" name="color"
           class="form-control" value="{{color}}" />
  </div>
  <div class="form-group">
    <label for="standardCost"
           class="form-label">
      Cost
    </label>
    <input type="number" id="standardCost"
           name="standardCost"
           class="form-control"
           value="{{standardCost}}" />
  </div>
  <div class="form-group">
    <label for="listPrice"
           class="form-label">
      Price
    </label>
    <input type="number" id="listPrice"
           name="listPrice" class="form-control"
           value="{{listPrice}}" />
  </div>

  <div class="mt-2 form-group">
    <button class="btn btn-primary">
      Save
    </button>
    <a href="/product" class="btn btn-primary">
      Cancel
    </a>
  </div>
</form>
{{/detail}}

To add the ability to navigate to the detail page, open the _productList.mustache file and just after the first <tr> in the <thead> element, add the following <th>:

<th>Actions</th>

Then immediately after the first <tr> in the <tbody> element, add the following <td>. The href attribute within this anchor tag calls the route product/ followed by the number of the product ID the user clicked on. For example, product/345 is called if the user clicked on a product with the product ID of 345. This route is going to return the detail page you just created, filled in with the data for the product 345.

<td>
  <a href="product/{{productID}}"
     class="btn btn-primary">
    Edit
  </a>
</td>

Open the product.mustache file and just below the {{/isListVisible}} line, add the following code:

{{^isListVisible}}
  {{> _productDetail}}
{{/isListVisible}}

Figure 8: This detail page is displayed after clicking on the Edit button on the table.


Listing 16: Create a route to retrieve a single product row.

// GET /id Route
router.get('/:id', async (req, res, next) => {
  try {
    // Build the request URL
    let request = url + `/${req.params.id}`;
    // Request data from Web API
    let response = await tiny.get({
      "url": request
    });
    // Get data from response
    let data = response.body.data;
    // Render the page
    res.render('product',
      {
        "isListVisible": false,
        "isAdding": false,
        "detail": data
      }
    );
  } catch (err) {
    next(err);
  }
});

Listing 17: Add an insert function to add a product to the SQL Server table.

// POST from Detail Page
router.post('/', async (req, res, next) => {
  try {
    // Declare the response object
    let response = {};
    // Get posted values from form
    let product = req.body;
    if (product.isAdding != 'false') {
      // POST a new product
      response = await tiny.post({
        "url": url,
        "data": product
      });
    }
    else {
      let request = url +
        `/${product.productID}`;
      // PUT an updated product
      response = await tiny.put({
        "url": request,
        "data": product
      });
    }
    // TODO: Handle a 404 or a 400

    // Redisplay the list
    res.redirect('/product');
  } catch (err) {
    next(err);
  }
});

The {{^isListVisible}} token is the inverse of the {{#isListVisible}} token. This token says if the isListVisible property is false, display the contents between the two tokens. In this case, if the isListVisible property is false, the detail page is displayed instead of the product search and list page.
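
Putting the two section types side by side makes the behavior easier to see. This is just an illustration of the Mustache semantics, not one of the project files:

{{#isListVisible}}
  <!-- rendered only when isListVisible is true -->
{{/isListVisible}}
{{^isListVisible}}
  <!-- rendered only when isListVisible is false -->
{{/isListVisible}}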
Open the routes\product.js file and after the other routes in this file, add the new route shown in Listing 16. The product is passed into this route by clicking on the anchor tag in the table. A call to the GET /id route in the Web API is made to retrieve the single product row. The res.render() function is called passing in the Product page and a second parameter with three properties. The isListVisible property is set to false so the templating engine displays the detail page instead of the list page. Because you just clicked on an existing product, the isAdding property is set to false—if you're going to save this data, you want to do an UPDATE and not an INSERT. The detail property is the single product object that's used in the tokens {{#detail}} and {{/detail}} on the detail page you created in Listing 15.

Try It Out
Save the changes made to your project files and ensure that the project is running. Navigate to the http://localhost:3010/product page and click on one of the Edit buttons to view the detail page (Figure 8). Click the back button on your browser and click on another Edit button to see the detail data for another product. You can click on the Save button, but nothing's going to happen yet. The Cancel button takes you back to the product list page.

Update a Product
When you click on the Save button, you want to post data from the detail form to the post() route you're going to create. To post data, you must add a module in Express to parse the data in the form body into a JSON object. Open the index.js file and after the line of code app.engine('mustache', mustacheExpress()), add the following code:

// Load body-parser module
// (required for post-backs)
const bodyParser = require('body-parser');
// Support JSON-encoded form body
app.use(bodyParser.json());
// Support URL-encoded form body
app.use(bodyParser.urlencoded({
  extended: true
}));

Add a POST Route
Open the routes\product.js file and add a new post route, as shown in Listing 17. Form data can only be posted (not put as in the Web API), so whether you're updating or inserting data, you only call the post() route. That's the reason for the isAdding property on the JSON object, so you know whether to call the post or put on the Web API. After calling either the post or put, you are going to redirect back to the product page so it can re-read the product data and display any changes in the HTML table.

Try It Out
Save the changes made to your project files and ensure that the project is running. Navigate to the http://localhost:3010/product page and click on one of the Edit buttons to view the detail page. Make some changes to one or two of the input fields and click the Save button. You should see the changes you made appear in the product table.


Insert a Product
You're going to need an Add button somewhere on the page to allow the user to insert a new product. Open the views\_productSearch.mustache file and add the following HTML just before the Search button:

<a href="product/add"
   class="btn btn-secondary">
  Add
</a>

In the href attribute of this anchor tag, you call a route named product/add. Open the routes\product.js file and add a new method BEFORE the router.get("/:id"…) route, as shown in Listing 18. In this route, render the product page passing in an object that sets the isAdding property to true and sets the detail property to all the input fields on the page that should be initialized to a valid value.

Listing 18: Add a function to display a blank detail form.

// GET /add Route
router.get('/add', async (req, res, next) => {
  try {
    res.render('product',
      {
        "isListVisible": false,
        "isAdding": true,
        "detail": {
          "name": "",
          "productNumber": "",
          "color": "Red",
          "standardCost": 1,
          "listPrice": 2
        }
      }
    );
  } catch (err) {
    next(err);
  }
});

Try It Out
Save the changes made to your project files and ensure that the project is running. Navigate to the http://localhost:3010/product page and click on the Add button. Enter some valid data for each input field and click the Save button. You should see the product you created appear in the HTML product table.

Delete a Product
The last piece of functionality to create is to allow the user to delete a product. I like putting a Delete button at the end of each row. Open the views\_productList.mustache file and just before the closing </tr> within the <thead> element, add a new <th>:

<th>Delete</th>

Just before the closing </tr> within the <tbody> element, add a new <td> element with an anchor tag in it. This anchor tag is going to call the product/delete route with the product ID for the current row at the end of the route. Before it calls this delete method, it's a good idea to confirm with the user that they really want to delete this row.

<td>
  <a href="product/delete/{{productID}}"
     onclick=
       "return confirm('Delete this Product?');"
     class="btn btn-danger">
    Delete
  </a>
</td>

Add the new delete route (Listing 19) by opening the routes\product.js file and adding the new route BEFORE the router.get("/:id"…) route. This route extracts the ID passed in from the req.params.id property and uses that ID to build the call to the Web API delete route. After the delete has occurred, the product list page is once again displayed, minus the row just deleted.

Listing 19: Add a route to delete a product row from the table.

// GET /delete/id Route
router.get('/delete/:id',
  async (req, res, next) => {
    try {
      // Build the request URL
      let request = url + `/${req.params.id}`;
      // Request data from Web API
      let response = await tiny.delete({
        "url": request
      });
      // TODO: Handle 404

      // Redisplay the list
      res.redirect('/product');
    } catch (err) {
      next(err);
    }
  });

Try It Out
Save the changes made to your project files and ensure that the project is running. Navigate to the http://localhost:3010/product page and click on the Delete button next to one of the products. Respond that you wish to delete this row and you should see the product list re-displayed minus the product you just deleted.

Getting the Sample Code
You can download the sample code for this article by visiting www.CODEMag.com under the issue and article, or by visiting www.pdsa.com/downloads. Select "Articles" from the Category drop-down. Then select "Building Web APIs Using Node.js and Express: Part 3" from the Item drop-down.

Summary
In this series of articles, you built two web projects using Node.js and Express. The first web project is a set of API calls used to perform CRUD logic on a SQL Server table. You added middleware to handle exceptions and learned to read data from a configuration file. The second web project is a website used to serve up web pages for a user to interact with the product data. Using several different tools, you learned to call Web APIs from each page and create dynamic web pages using templating.

Paul D. Sheriff


Applying DDD Thinking to Refactor Aggregate Filters

This article reconsiders the problem (well, a solution) of filtering in software being built for a book publishing company. Leaning on lessons from Domain-Driven Design (DDD), the company's team (including devs, domain experts, and more) embarked on the initial discovery phase and strategic design. They then selected one particular bounded context that came out of strategic design—the contracting bounded context—to focus on for the initial tactical design phase. That particular bounded context presented some interesting complexity because of the many iterations a contract may go through as well as the possibility of co-authors on a book. The added complexity makes the bounded context a great candidate for applying DDD patterns.

Julie Lerman
@julielerman
thedatafarm.com/contact

Julie Lerman is a Microsoft Regional Director, Docker Captain, and a long-time Microsoft MVP who now counts her years as a coder in decades. She makes her living as a coach and consultant to software teams around the world. You can find Julie presenting on Entity Framework, Domain-Driven Design and other topics at user groups and conferences around the world. Julie blogs at thedatafarm.com/blog, is the author of the highly acclaimed "Programming Entity Framework" books, and many popular videos on Pluralsight.com.

The outcome of the tactical design is a contract aggregate (see Figure 1) defined by its entities, value objects, invariants and other business rules. The contract entity itself is the root of the aggregate. A contract has one or more versions, including a version with default values and default specifications for every new contract. Each version includes a set of specifications that are encapsulated in a value object. And each version also has one or more authors.

Figure 1: The Contract entity is the root of the contract aggregate.

The author class is also a value object that leans on yet another value object: PersonName. Figure 2 shows the properties of each of the aggregate classes in Visual Studio's class designer. There's a lot of logic involved to create default versions, create revisions, finalize contracts, etc. If you're curious about that logic, it's all detailed as part of my Pluralsight course: EF Core 6 and Domain-Driven Design (https://www.pluralsight.com/courses/ef-core-6-domain-driven-design). If you're not familiar with DDD, you may want to start with the Domain-Driven Design Fundamentals course, also on Pluralsight, that I co-authored with Steve Smith.

The focus of this article is on enabling users to find pre-existing contracts by filtering. That seems like a problem with a well-known and obvious solution. But is it really?

As the developers incorporate the contract aggregate further into their solution, it's important that they make it easy for users to find a contract to work on, whether they want to make tweaks, create a new revision, or just look at some details. Users might want to search by author names, contract status, or one of the relevant dates, such as when the contract was first initiated or when an author needs to respond to a version during negotiations.

Although the lessons of this article apply to any stack, I'll be using .NET Core and EF Core with a SQL Server database to explain the solution. The demo code is on my GitHub account at https://github.com/julielerman/FilteringwithEFCoreandDDD.

Factors that Lead to Filtering Problems
In the solution, the EF Core DbContext designed to support the aggregate takes care of ensuring that the aggregate is mapped correctly to the relational database. Additionally, in order to protect the aggregate, this DbContext only exposes the aggregate root—the contract entity—for querying and updates. It's possible to use the DbContext.Set<> method directly, but you should design your data access logic such that you are not circumventing this guard rail other than, perhaps, in integration tests as needed.
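
As a minimal sketch of that guard rail (my own illustration; the mapping details in the actual solution are richer), the context exposes only the aggregate root:

// Hypothetical sketch: only the aggregate root is exposed
public class ContractContext : DbContext
{
    public ContractContext(
        DbContextOptions<ContractContext> options)
        : base(options) { }

    // No DbSet for versions or authors: they are reached
    // only through the Contract root
    public DbSet<Contract> Contracts => Set<Contract>();
}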
For example, many of the details that would be helpful for selecting a contract such as author name or acceptance deadline are exposed throughout the aggregate in different classes. Yet accessing them to filter queries is overly complicated. It can be done but requires a lot of LINQ trickery and expertise. And that's just to filter the results. The results themselves need to be a string composed of appropriate highlights sufficient for the user to select the exact one that they're seeking. Listing 1 shows an example of the most efficient solution I could come up with to find a list of contracts based on the last name of any of the authors. Remember that the authors could vary from one version of the contract to another and, in this search, the domain experts specified that this filter should only find contracts with that author in the current version.

As you can see from that one method in Listing 1, it's possible to build up layers of queries via the aggregate and its DbContext to support this. But for me, it was quite a hair-puller, especially to filter by name as I needed to drill into contract version, then each of its authors, and then the Name property (remember it's a value object), and even further in to the LastName property. In the end, this and the other queries I created worked well, although not perfectly, because I was only getting back the name of one of the authors. It was at the point when I was trying to determine how to also look for any co-authors that I finally decided the effort was just too much.


The aggregate was intentionally designed to represent a contract and its business logic. It exists to easily create and modify a contract while maintaining its invariants. However, that design has made the problem of search somewhat complicated. Even if you were to use specifications to define, combine, and execute queries for you (using the brilliant EF Specification from Steve Smith at https://github.com/ardalis/specification), the tiered nature of this aggregate makes defining and composing the various specifications on which you may want to search overly difficult.

Are You Working Too Hard?
This is when my ingrained habit of Domain-Driven Design thinking finally kicked in. Or perhaps the old Ben & Jerry's slogan "If it's not fun, why do it?" did. The fact that it was so hard to build up these queries is a big red flag. If you glean nothing else from my pain, at least remember this! As developers, it's habitual for us to set a goal and keep fighting every problem that comes our way until we achieve said goal. We slap more and more code onto the problem until it finally works. This is something of an ego problem for many of us and I'm happy to admit that it definitely is for me.

But DDD teaches us to take complex problems, break them apart into simpler problems, and then compose them as needed. I've drawn these lessons into many areas of my life, whether building software or trying to follow a complicated soup recipe.

Once I woke up from my ego trip, I realized that the filtering was just too hard to accomplish using LINQ against the aggregate. I also knew that I had a very appropriate tool at my disposal that's excellent at querying: the database. My eventual solution was to create a new simple model, a new DbContext, and some SQL Server objects (view, stored procedure, and a user-defined function). Let's see how that works.

What Filter Results Do Users Need to See?
The domain experts identified the key data that a user should see in order to help them find a particular contract. It should be a combination of the contract number, the names of all authors involved, the working title of the book, and the date that the contract negotiation was initiated. For example, this fake book: Contract start: 6/13/2023, "Learning Razor Tricks", Author(s): Julie Lerman, Roland Guijt.

Figure 2: Aggregate classes with their properties and relationships

Listing 1: Using LINQ to find contracts against the aggregate is hard

public List<SearchResult>
  GetContractPickListForAuthorLastName(string lastname)
{
  var currentVersions = _context.Contracts
    .Where(c => c.Versions
      .Any(v => v.Authors
        .Any(a => a.Name.LastName.StartsWith(lastname))
      )
    ).Select(c => new
      { c.DateInitiated,
        Current = c.Versions.Where(v =>
          v.Id == c.CurrentVersionId).FirstOrDefault()
      }).ToList();
  var picklist = currentVersions.Select(v =>
    new SearchResult(v.Current.ContractId,
      $"{v.DateInitiated.ToShortDateString()}, {v.Current.WorkingTitle}, {v.Current.Authors.FirstOrDefault().Name.FullName}",
      v.ContractNumber))
    .ToList();
  return picklist.OrderBy(pl => pl.Description).ToList();
}

You can see that being built up in the concatenated string within the search results along with the ContractId and ContractNumber in the "way too hard" version in Listing 1. I have created a DTO class in my solution called SearchResult to easily capture that data.

What Filters Do Users Need?
The domain experts also shared a number of ways that their users look for contracts. They want to search by author name parts: first, last, or both names, and they shouldn't need the full name to get a good result. For example, it's common that people can't remember if my last name has an "h" in it or not. They should be able to search for authors with last name L or Le. Or the other common conundrum about my name: is it Julie or Julia? They can search for Jul.

Because every new contract version imposes a timeframe in which an author needs to respond, accepting or rejecting it, users may need to look for a contract whose response date is coming up soon.


During strategic design, discussions about searching for contracts revealed that there are two reasons for searching. One is to find a particular contract to examine it or modify its details. The other is simply for building reports. Reporting is a very different set of problems that's guaranteed to evolve and should (and will) be handled separately from this solution. During strategic design, the team was careful to keep the contract searches focused on those searches needed for working on contracts.

One cross-over idea from the reporting system is that a report will have a contract number on it, so a user with access to a contract number may want to search on that as well. I won't be implementing every one of these filters, but they can all follow a similar pattern.

Implementing a New Model and DbContext
We've already established that querying against the aggregate using the DbContext designed for persisting the aggregate is too hard. That's for contract management. For searches, I'll use the SearchResult DTO and a new SearchContext.

The SearchResult class is very simple. As described earlier, it has three properties: KeyValue, Description, and ContractNumber. The class is immutable and can only be set by way of its public constructor. The private parameterless ctor is there for EF to materialize query results.

public class SearchResult
{
  public SearchResult(Guid key,
    string description, string contractNumber)
  {
    KeyValue = key;
    Description = description;
    ContractNumber = contractNumber;
  }
  private SearchResult(){}

  public Guid KeyValue { get; private set; }
  public string? Description
    { get; private set; }
  public string? ContractNumber
    { get; private set; }
}

The SearchContext class (see Listing 2) is read only (NoTracking) with the SearchResult configured as having no key.

But with these new classes, how are you to build LINQ queries? The answer is that you won't build any! Remember my earlier nod to the power of the database. You'll only be using the SearchContext to execute raw SQL and capture results into SearchResult objects. You'll see how I'm doing this a bit further on.

You may also have wondered how you'd be able to search within the Description property for names or dates. Again, you won't! The SearchResult is only there to capture data coming back from the raw SQL queries.

What About the Bounded Context and Database?
An important point to understand is that this SearchResult isn't any type of aggregate. It's a simple model and it lives in the same bounded context as the Contract aggregate. That also means that, by definition, it accesses the same database that the ContractContext interacts with. What do I mean when I say "by definition"? In DDD, you're guided to have different databases for each bounded context. But here, both the aggregate and the SearchResult model are part of one bounded context and therefore should be able to use a shared database.

Again, with this scenario, I have one DbContext for reading and writing and another that only reads. Because the second DbContext performs no writes and doesn't even perform LINQ queries, it has no bearing on the database schema. The primary DbContext, ContractContext, will be used with EF Core migrations to control the database schema. You should never use the SearchContext with migrations because EF Core would assume that the database has one single table with three columns and trash what was created by ContractContext.
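
In practice, that means always pointing the EF Core tooling at the primary context. For example (the migration name here is made up for illustration):

dotnet ef migrations add SomeNewMigration --context ContractContext
dotnet ef database update --context ContractContext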

As you are reading about having two different DbContexts in a single bounded context sharing the same database, you may be wondering about the case of multiple aggregates, each with their own DbContext, in this scenario. That's not relevant to this article, but I do want to address the question. If you have two aggregates, you need to find a balance where a single DbContext serves them both and serves and controls a single database. If achieving this is anything less than easy-peasy, that should raise a red flag, yet again, that something is probably wrong with your models or, more likely, that your determination that they belong in the same bounded context was in error.

Listing 2: The SearchContext class

public class SearchContext : DbContext
{
  public SearchContext
    (DbContextOptions<SearchContext> options) : base(options)
  {
    ChangeTracker.QueryTrackingBehavior =
      QueryTrackingBehavior.NoTracking;
  }
  public DbSet<SearchResult> SearchResults =>
    Set<SearchResult>();

  protected override void OnModelCreating
    (ModelBuilder modelBuilder)
  {
    modelBuilder.Entity<SearchResult>().HasNoKey();
  }
}

What About Those Database Objects?
Here comes the fun part of the solution where it's time to put on your database hat or find someone in your org that wears one. Recall that I mentioned one view, one stored procedure, and one function. (Is anyone hearing a George Thorogood song now?) The stored procedure may seem complicated but for a database whiz, probably not. The hardest part will be formatting the SQL so that it's readable in this article!


Figure 3: The CurrentContractVersions view in the database

If the sproc is complicated, does that raise a red flag and tell me I'm doing it wrong? To me, the answer is no. I'll be asking the database to do something it's very good at and my coded solution will be clean and easy to read. And from a DDD perspective, that means it will be easier to understand and maintain my code.

Based on the ContractContext, the database tables are structured so that the contract highlights are in a Contracts table and each version of the contract is in a Versions table. The Versions table also contains the data for the SpecificationSet value object, thanks to EF Core's Owned Entities mapping. Because there could be multiple authors (remember Author is also a value object) for a version, there's a third table, Version_Authors, that contains the author data. If you're curious about how my mappings and migrations came up with this schema, I go through that in detail in the above-mentioned Pluralsight course. I'm aware that a collection of value objects is seen as an anti-pattern. I thought long and hard before I decided to take this path. It's a decision made based on plenty of experience, so I'm comfortable with it. In fact, it's the first time I've ever designed an aggregate that has a collection of value objects as a property of the aggregate root.

Let's start with the view. Why do you need a view? The database schema is designed to persist the aggregate. Like the LINQ query, I'll have to dig through multiple tables to find the data I need to build the query for the filter and the results. By encapsulating the search details and output details into a single view, queries will be much simpler to build.

I thought it would be easier for you to comprehend the view in designer mode as it's displayed in Figure 3. You can see which fields I'm extracting from which tables and how those tables relate to one another. The SQL for the view is captured in a migration file in the solution that you can download. This gives me all of the fields I might want to search on and all of the fields I need to output.

The view lists all versions of each contract (Figure 4) and notes which ContractVersionId represents the current version of a particular contract. Note that for space consideration, the figure doesn't display the full GUID values.
and cast(dateinitiated as date)<=@initdateend
I thought it would be easier for you to comprehend the
view in designer mode as it’s displayed in Figure 3. You Adding filters meant adding a new stored procedure and
can see which fields I’m extracting from which tables and then a new method in the search service to call that pro-
how those tables relate to one another. The SQL for the cedure. Lots of copy/pasting and the red flag came up
view is captured in a migration file in the solution that again!
you can download. This gives me all of the fields I might
want to search on and all of the fields I need to output. I considered dynamic queries (which would have meant
embedding strings into the TSQL) and decided against
The view lists all versions of each contract (Figure 4) that because—eww—and that would introduce too many
and notes which ContractVersionId represents the current complications. However, I really wanted to make things



Figure 4: The output of the database view

However, I really wanted to make things simpler in the application, and I had to push the boundaries of my TSQL skills to come up with a single stored procedure. This may have been a simpler task for someone more adept at TSQL, but I did a lot of due diligence to ensure that I was making the best choice. I hope that research led me (and, as I pass this on to you) to the best conclusion.

The stored procedure takes in all three parameters.

CREATE PROCEDURE GetContractsFlexTempTable
  @LastName varchar(15),
  @initdatestart varchar(20),
  @initdateend varchar(20)

You can inspect the TSQL in the download to see the full details, but I will highlight some of them here.

There's some trickery to deal with null dates, for example, if you're looking for a contract initiated after June 1 with no end date. I pre-process the incoming start and end dates, creating values for them if they are null.

DECLARE @SDate DATETIME
DECLARE @EDate DATETIME

SET @SDate = ISNULL(@initdatestart, '19000101')
SET @EDate = ISNULL(@initdateend, GETDATE()+100)

And there's some more trickery to set up a temporary table in order to collect the key of any versions that match the filter.

select currentversionid into #ContractSubSet from
CurrentContractversions WHERE 1=2

Then, using a series of IF statements, I insert rows into the temporary table from the view, using WHERE statements based on whether there are values in any of the three parameters. That way, I can combine Name and Date searches. I had to write each of those filters explicitly. Here's a bit of the procedure where I've determined that there is a lastname filter but no date filters.

IF @LastName IS NOT NULL
BEGIN
  IF (@initdatestart is NULL and
      @initdateend is NULL)
    -- only filtering on lastname
    INSERT INTO #ContractSubSet
    SELECT currentversionid
    FROM currentcontractversions
    WHERE left(LastName,len(trim(@LastName)))
      =trim(@LastName) ;

Finally, I execute one last query against that temporary table where I concatenate all of the info needed in the list. In other words, it combines the date initiated, the working title, and, using an aggregate function, the names of any authors involved in that contract's current version. Again, I recommend perusing the TSQL if you are interested in how I implemented it all.

SELECT groupednames.contractId as
  KeyValue,[description],ContractNumber
FROM
  (SELECT contractid,currentversionid,
   dbo.BuildContractHighlights(various data)
   AS [description],ContractNumber
   FROM CurrentContractversions
   WHERE currentversionid IN
     (SELECT currentversionid
      FROM #ContractSubSet)
   GROUP BY various data) groupednames

With this procedure in place, the database now has a single entry point—this one stored procedure that takes in my three parameters. And I can always expand the logic to add more parameters and filters. For some context, the current version of this sproc is 57 lines. It isn't a beast. It just felt like it when I had to figure out how to write it!

I've truly put all of the pain in the database into the view, stored procedure, and function. And, thanks to this, you'll see that the code in my app is simple, readable, and succinct.

Executing the Filters in the Application

Executing the stored procedure from the application is achieved with the help of a SearchParams class that encapsulates all of the possible filter values you may need. This can be expanded on as needed.

public class SearchParams
{
    public SearchParams(string lastName,
      string startDate, string endDate)
    {
        LastName = lastName == "" ? null : lastName;
        StartDate = startDate == "" ? null : startDate;
        EndDate = endDate == "" ? null : endDate;
    }

    public string LastName { get; }
    public string StartDate { get; }
    public string EndDate { get; }
}



Your UI or API can populate this class and pass its values along to the stored procedure call. In my sample application, I have a simple set of Razor pages in an ASP.NET Core website. The user can enter filter details into the relevant text boxes—one for last name, one for start date, and one for end date—and then click a Filter button. The Razor page constructs a SearchParams object from the form data and passes it on to a search service. There is also a Reset button that creates an empty SearchParams object. Here are the methods called when the buttons are clicked.

public async Task OnPostFilter()
{
    var searchParams = new SearchParams
      (LastName, StartDate, EndDate);
    ExecuteFilter(searchParams);
}

public async Task OnPostResetFilter()
{
    var searchParams = new SearchParams("", "", "");
    ExecuteFilter(searchParams);
}

Both methods then pass their searchParams object to the ExecuteFilter method, also in the Razor page.

private async Task ExecuteFilter
  (SearchParams searchParams)
{
    ContractHighlights =
      await _service.CallService(searchParams);
}

The service call is in my ContractSearchFlexService class shown in Listing 3. The original solution that used separate stored procs uses a ContractSearchService class, and you'll see in the demo code that I added the word "Flex" to the assets that target the GetContractsFlexTempTable stored procedure.

Listing 3: The ContractFlexSearchService class

public partial class ContractFlexSearchService
{
    SearchContext _context;
    static List<SearchResult> _results;

    public ContractFlexSearchService(SearchContext context)
    {
        _context = context;
    }

    public List<SearchResult> SearchResults => _results;

    public async Task<List<SearchResult>> CallService
      (SearchParams sP)
    {
        _results = _context.SearchResults
          .FromSqlInterpolated($"GetContractsFlexTempTable {sP.LastName},{sP.StartDate},{sP.EndDate}")
          .ToList();
        return _results;
    }
}

Figure 5 shows an example of the Razor app using the flexible search where I've applied a name and start date filter, resulting in two of the sample data contracts being displayed.

Figure 5: Flexible search filtered on Last Name and Start Date
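For completeness, wiring these pieces together in the Razor app's Program.cs presumably looks something like the following sketch. This is an assumption based on the classes shown in this article, not code quoted from the download; the connection string name "ContractsDb" in particular is hypothetical:

// Assumed registrations for the sample Razor app: the EF Core context
// that exposes the keyless SearchResult set, plus the search service.
builder.Services.AddDbContext<SearchContext>(options =>
    options.UseSqlServer(
        builder.Configuration.GetConnectionString("ContractsDb")));
builder.Services.AddScoped<ContractFlexSearchService>();
builder.Services.AddRazorPages();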
Beauty in Simplicity

Take a moment to gaze upon the beauty and simplicity of the service class and its single method. Thanks to the SearchParams class and the stored procedure I created, I now have one single simple method. If I need to add more filters, I can do that by expanding the SearchParams class and the stored procedure. But the service won't need to change at all. The original solution allowed only one filter at a time, and its service had separate methods for each filter as well as the "no filter" option. I did refactor it to allow a single point of entry to make it easy to use, but it's a bit complicated to maintain.

I owe a nod of thanks for some encouragement from a fellow .NET developer (David Henley) in my original GitHub repository for the Pluralsight course. His comment caused me to push myself to refactor the solution from the original (with its many stored procedures and many API calls), resulting in the handy SearchParams class and the super simple ContractSearchFlexService class. And, as you probably know, getting to a happy place with a big refactor really is fun!

Julie Lerman



ONLINE QUICK ID 2309051

Developing Real-Time Web Applications with Server-Sent Events in ASP.NET 7 Core
For real-time web applications, Server-Sent Events (SSE) provide an alternative to WebSockets and long polling, allowing server-side updates to be sent to clients without the need for clients to seek updates repeatedly. In this article, I'll discuss how to build real-time web applications with SSE in ASP.NET Core, including the core concepts of SSE, the features, benefits, and downsides of SSE, and how to implement real-time updates.

If you want to work with the code examples discussed in this article, you'll need the following installed in your system:

• Visual Studio 2022
• .NET 7.0
• ASP.NET 7.0 Runtime

If you don't already have Visual Studio 2022 installed, you can download it from here: https://visualstudio.microsoft.com/downloads/.

Joydip Kanjilal
[email protected]

Joydip Kanjilal is an MVP (2007-2012), software architect, author, and speaker with more than 20 years of experience. He has more than 16 years of experience in Microsoft .NET and its related technologies. Joydip has authored eight books, more than 500 articles, and has reviewed more than a dozen books.

Introducing Server-Sent Events (SSE)

Server-Sent Events (SSE or Event Source) is a W3C standard for real-time communication between servers and clients over HTTP. With SSE, the server may provide the client with real-time, event-driven changes through an HTTP connection. SSE is a standardized push technology conceptualized first in 2004 and included as part of the HTML5 specification. It enables you to transmit notifications, messages, and events from a server to a client over HTTP.

A protocol for streaming events, SSE is supported by the majority of contemporary web browsers. These include Edge, Firefox, Chrome, and Safari. SSE eliminates the need to manually query the server or establish several connections to provide changes to the client by enabling unidirectional, real-time communication between the server and the client. Figure 1 demonstrates an overview of how SSE works.

The Server-Sent DOM Events are the SSE foundation. By subscribing to events produced by a server using the EventSource interface, browsers may get alerts any time new events occur. When an EventSource attempts to get data, it accepts an HTTP event stream connection from a specific URL and keeps the connection open. A server-sent event is one that is always pushed from the server to a web browser rather than retrieved or requested.

Message Format

The message format in SSE is defined by the W3C. It should be noted that the SSE data sent by a server to a client should be in UTF-8 encoded format and have the following header:

Content-Type: text/event-stream
Cache-Control: no-cache
Connection: keep-alive

The data sent from the server to the client consists of several messages, each separated by \n\n characters. A field contains the following values:

• Data: Indicates the payload to be sent from the server to the client
• Retry: Optional and indicates the time the client will wait before it attempts a reconnection in the event of a connection drop
• Event: Represents the event type defined by the application
• ID: Optional and represents the ID of the data transmitted from the server to the client
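To make this field layout concrete, here's a tiny C# helper of my own that formats one event in this wire format. The helper name and parameter defaults are my own illustration, not code from this article's sample projects; the wire format itself is the W3C-defined one:

using System.Text;

// Formats one SSE event: optional id/event fields, mandatory data field,
// terminated by a blank line (i.e., two \n characters).
static string FormatSseMessage(string data,
    string? eventType = null, string? id = null)
{
    var sb = new StringBuilder();
    if (id is not null) sb.Append($"id: {id}\n");
    if (eventType is not null) sb.Append($"event: {eventType}\n");
    sb.Append($"data: {data}\n\n");
    return sb.ToString();
}

Console.Write(FormatSseMessage("{\"code\": \"p001\"}",
    eventType: "event-2", id: "42"));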



An event consists of a set of colon-separated key/value pairs, with each pair terminated by a new line. The event itself is separated by two new line characters.

The following is a template for a message comprising a single event:

id: <messageId>\n (optional)
event: <eventType>\n (optional)
data: <event data - plain text>\n (mandatory)
\n
\n

The response is a series of events separated by a \n character, as shown here:

event: [event type]\n
data: [message]\n\n

Figure 1: Server-Sent Events in action

The following is an example of an SSE response:

HTTP/1.1 200 OK
Content-Type: text/event-stream

event: event-1
data: This is a sample text.

event: event-2
data: {"code": "p001", "quantity": 456}

Why Server-Sent Events?

Here are some of the key benefits of SSE in web application development:

• Updates in real-time: SSE permits real-time communication between the server and client. Instead of the client continually querying the server for new information, it enables the server to send changes to the client as soon as they happen. For applications like live feeds, chat rooms, market tickers, and alerts where quick updates are critical, this real-time feature is essential.
• Simplicity: For server-to-client communication, SSE offers a straightforward and lightweight protocol. SSE has a lower overhead than other real-time technologies, such as WebSockets, and doesn't need complicated handshakes or bidirectional communication. SSE messages are text-based, simple to read, and easy to use.
• Reduced load on the server: SSE allows the server to transmit updates to clients only when required, reducing the workload on the server. This lessens the need for clients to send the server queries frequently, which lowers server load and enhances scalability. SSE is very effective for applications that have a lot of customers and need server resource optimization.
• Support for cross-origin communication: SSE enables cross-origin communication, enabling the client to get updates from several domains or origins. The ability to stream data from a server that is housed on a different domain or subdomain is handy in such situations. Cross-origin resource sharing (CORS) guidelines are followed by SSE to enable safe connection between several sources.
• Automatic reconnection: SSE connections are durable and can reestablish themselves if they are lost as a result of network problems or server restarts. In order to maintain an uninterrupted stream of updates without user intervention, the client will make an effort to reconnect. A strong and dependable communication route is offered by the automatic reconnection capability.
• Browser compatibility: SSE is supported by the majority of current web browsers, including Chrome, Firefox, Safari, and Edge. It performs effectively in settings like limited networks or outdated browser versions where WebSockets may not be accessible or permitted. When WebSockets are not practical for real-time communication, SSE may be used as an alternative.
• Seamless integration: SSE is simple to incorporate into current web applications without requiring significant infrastructure modifications. It makes use of the already-existing HTTP infrastructure and doesn't call for any new network settings or unique server configurations. A number of server-side technologies, such as ASP.NET Core, Node.js, Django, and others, support SSE.

For real-time changes in web applications, Server-Sent Events provide a simple and effective option. Without the complexity of conventional real-time protocols, they let developers create responsive and engaging experiences, increase server effectiveness, and increase user engagement.

Key Features of SSE

Some of the best features include:

• Unidirectional communication: SSE offers a server-to-client unidirectional communication channel. In this type of communication, the server can transmit data to a connected client, but the client cannot send data back to the server.
• Text-based protocol: Because SSE is a text-based protocol, messages are sent via HTTP in plain text, which makes it simpler to debug and comprehend. Fields like "event," "data," and "id," which are sent as a string of text lines in the form of event fields, make up an SSE message.
• Real-time updates: SSE enables servers to transmit updates depending on server events, allowing servers to provide event-driven updates to clients in real-time. A particular event name, data related to the event, and, optionally, an identity that may be given to the event are typically included in all updates. The client can listen for specific events or receive all events sent by the server.
• Connection persistence: SSE creates a durable HTTP connection between the client and server that endures for the lifetime of the client. SSE keeps the connection open to permit continuous data streaming, contrary to conventional HTTP requests, which are transient and are closed once a response has been received.
• Resilient: Because SSE connections are robust, if the connection is lost, SSE will automatically reestablish it. As soon as they become disconnected, clients will make an effort to reconnect to the server, ensuring that updates continue to flow consistently and seamlessly.
• Cross-origin support: SSE allows the client to receive updates from a domain or origin other than the website to which it is linked. You can configure cross-origin resource sharing (CORS) rules on the server to control access and security.

How Do Server-Sent Events Work?

Server-Sent Events (SSE) establish a long-lived connection between a server and its connected client. Once this connection is established, the server communicates event-driven changes to the client over a single HTTP connection. Thanks to the SSE connection management layer and parsing algorithms, a server can submit new events one by one while HTTP responses remain open. Here is the series of steps that outlines how SSE works:

1. Server-Sent Events (SSE) establishes a persistent connection between a server and its client.
2. Once this connection is established, the server communicates event-driven changes to the client over a single HTTP connection. As soon as the SSE connection is established, the server can start sending SSE events to the client.
3. Once the server receives an SSE request, it processes the request. Once processing is over, the server responds with the appropriate SSE headers.
4. Next, the server sets the response headers to indicate that SSE events will follow.
5. When the client receives the SSE event, it extracts the event fields and takes appropriate action based on the data received.

Polling vs. Long Polling vs. WebSocket vs. Server-Sent Events

There are several techniques for real-time communication between clients and servers. Each of these techniques has its own characteristics and use cases. Polling and long polling are simple to use, but they aren't as efficient as WebSocket and Server-Sent Events. Here's how these techniques compare and contrast against each other.

Polling
• Polling involves a client sending requests to the server at regular intervals to check if there are any updates.
• On receiving the request, the server responds with new data if any is available or an empty response if no data has been updated.
• You can leverage simple AJAX requests and page reloads to implement polling in your applications.
• Clients repeatedly request updates even when there are none, resulting in unnecessary network traffic and increased server load.
• This approach is suitable for scenarios where updates are infrequent or a real-time response is not a priority.

Long Polling
• Long polling reduces unnecessary requests to the server and enables near real-time updates compared to regular polling.
• Servers hold requests open until an update is available rather than responding immediately to a client request.
• The server responds when an update is available. Then, the client sends a new request to keep the connection alive.
• When no updates are available within a particular timeframe, the server responds with an empty response. The client then sends a new request and continues listening.
• Although long polling reduces the frequency of requests and enables a near real-time response, it still involves frequent connections and overhead due to request/response cycles.

WebSocket
• WebSocket enables communication between servers and consumers over a single, persistent, reliable, and full-duplex connection.
• WebSocket is ideally suited for applications requiring continuous data transfers, such as chat applications and collaboration tools.
• Due to server-side infrastructure requirements, WebSocket isn't supported in all legacy or restricted environments, such as older browsers and certain network configurations.

Server-Sent Events
• SSE provides a lightweight, unidirectional approach to server-client communication over HTTP.
• Contrary to WebSockets, communication between server and client in Server-Sent Events runs in only one direction, from server to client.
• SSE enables real-time updates without the complexity of WebSockets.
• SSE is well suited for scenarios where communication is unidirectional, i.e., the server needs to forward updates to clients, such as news feeds, notifications, or real-time monitoring dashboards.

Use Cases

WebSockets provide bidirectional communication between a server and a client, which makes them suitable for real-time polling apps, chat apps, etc. Server-Sent Events support unidirectional communication between client and server. This means that the messages are transmitted in a single direction only, i.e., from server to client. They are often used for push notifications, news feeds, and other similar purposes.

Implementing SSE

The server must inform the client that the content type of the response should be text/event-stream. Upon establishing a connection between the server and client, the server keeps it active until HTTP requests are received.

Unless a timeout occurs or there are no further events to process, the connection remains open. If a timeout occurs, the client can reconnect to the server using the built-in reconnection mechanism. Figure 2 illustrates a typical implementation of an SSE server and client.

Figure 2: An implementation of SSE

The first step is connecting to an EventSource, which is accomplished by initializing an EventSource instance with the URL of the stream to connect to. Under the hood, EventSource connects to the server by sending an HTTP request. The server responds to the request with a stream of event data having text/event-stream as the content type. Until the server determines there's no more data to send, or until the client actively closes the connection using EventSource.close, the connection between the server and client persists. A keep-alive message can be sent every minute or so to avoid a timeout.
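To make the server side of this handshake concrete before we dive into the sample application, here's a minimal sketch of an SSE endpoint written as an ASP.NET Core minimal API. The /sse route and the per-second payload are my own placeholders, not part of the sample projects that follow:

// A minimal sketch, assuming an 'app' built via WebApplication.CreateBuilder.
// It sets the SSE headers and pushes one timestamped event per second
// until the client disconnects (which cancels the request token).
app.MapGet("/sse", async (HttpContext context, CancellationToken token) =>
{
    context.Response.Headers.Add("Content-Type", "text/event-stream");
    context.Response.Headers.Add("Cache-Control", "no-cache");

    while (!token.IsCancellationRequested)
    {
        // Each message is a "data:" field terminated by a blank line.
        await context.Response.WriteAsync($"data: {DateTime.Now}\n\n", token);
        await context.Response.Body.FlushAsync(token);
        await Task.Delay(TimeSpan.FromSeconds(1), token);
    }
});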

Building a Simple Application Using SSE in ASP.NET 7 Core

It's time to write some code. Let's now examine how to build a simple ASP.NET Core 7 application that uses SSE.

Create a New ASP.NET Core 7 Project in Visual Studio 2022

You can create a project in Visual Studio 2022 in several ways. When you launch Visual Studio 2022, you'll see the Start window. You can choose "Continue without code" to launch the main screen of the Visual Studio 2022 IDE.

To create a new ASP.NET Core 7 project in Visual Studio 2022:

1. Start the Visual Studio 2022 IDE.
2. In the "Create a new project" window, select "ASP.NET Core Web App (Model-View-Controller)" and click Next to move on.
3. Specify the project name as SSE_Demo and the path where it should be created in the "Configure your new project" window.
4. If you want the solution file and project to be created in the same directory, you can optionally check the "Place solution and project in the same directory" checkbox. Click Next to move on.
5. In the next screen, specify the target framework and authentication type as well. Ensure that the "Configure for HTTPS" and "Enable Docker Support" checkboxes are unchecked because you won't use any of these in this example.
6. Leave the "Do not use top-level statements" checkbox unchecked.
7. Click Create to complete the process.

Creating the Controller

Listing 1 contains the complete source code of the HomeController.cs file.

Listing 1: The HomeController Class

using Microsoft.AspNetCore.Mvc;
using System.Collections.Concurrent;
using System.Text;
using System.Text.Json;

namespace SSE_Demo.Controllers
{
    public class HomeController : Controller
    {
        private readonly BlockingCollection<string>
          _producerConsumerCollection = new BlockingCollection<string>();

        public HomeController()
        {
            for (int i = 0; i < 10; i++)
            {
                _producerConsumerCollection.Add(string.Format(
                  "The product code is: {0}\n", Guid.NewGuid().ToString()));
            }
        }

        public ActionResult Index()
        {
            return View();
        }

        public ActionResult About()
        {
            return View();
        }

        public ActionResult GetMessage()
        {
            var result = string.Empty;
            var stringBuilder = new StringBuilder();
            if (_producerConsumerCollection.TryTake(out result,
                TimeSpan.FromMilliseconds(1000)))
            {
                var serializedData = JsonSerializer.Serialize(
                  new { message = result });
                stringBuilder.AppendFormat("data: {0}\n\n", serializedData);
            }

            return Content(stringBuilder.ToString(), "text/event-stream");
        }
    }
}

In the code example given in Listing 1, the HomeController class uses a BlockingCollection to store items. A BlockingCollection is a thread-safe collection that follows the producer-consumer pattern and enables items to be added or removed from the collection concurrently. In a typical producer-consumer pattern, the producer is responsible for generating messages and storing them in a shared location. You can then have multiple consumers reading the messages from the shared location. The key advantage of this pattern is that the producer and the consumer are decoupled and disconnected from one another; in other words, they don't have any knowledge of the other.
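The following is a minimal, self-contained illustration of that producer-consumer decoupling with BlockingCollection. It isn't part of the sample projects, just a sketch of the pattern Listing 1 relies on:

using System.Collections.Concurrent;

var queue = new BlockingCollection<string>();

// Producer: generates messages and stores them in the shared collection.
var producer = Task.Run(() =>
{
    for (int i = 0; i < 5; i++)
    {
        queue.Add($"Message {i}");
    }
    queue.CompleteAdding(); // signal that no more items will arrive
});

// Consumer: blocks until items are available, exits when adding completes.
foreach (var message in queue.GetConsumingEnumerable())
{
    Console.WriteLine(message);
}

await producer;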

Creating the View

Replace the source code of the Index.cshtml file with the source code given in Listing 2.
Listing 2: The Index.cshtml file

@{
    ViewBag.Title = "Home Page";
}
<script>
    function display() {
        var source = new EventSource('/home/getmessage');
        var ul = document.getElementById("sse");
        source.onmessage = function (e) {
            var li = document.createElement("li");
            var retrievedData = JSON.parse(e.data);
            li.textContent = retrievedData.message;
            ul.appendChild(li);
        }
    }
    window.addEventListener("DOMContentLoaded", display, false);
</script>
<ul id="sse">
</ul>

The EventSource instance establishes a persistent connection to an HTTP server in order to receive the events sent by the server in a text/event-stream format. The connection between the client and the server will stay open until closed by calling the EventSource.close() method. When you execute the application, the product codes generated by your controller will be displayed in the user interface. Figure 3 shows the product codes as displayed in the web browser.

Figure 3: The product codes generated by the controller are displayed in the web browser.

Building a Real-Life Application Using SSE in ASP.NET Core 7

In this section, we'll examine how to build an application that sends out notifications to the connected clients at regular intervals of time. A scheduler runs in the background and pulls records from the database at pre-defined intervals of time. This data is then pushed to the connected clients using SSE. Figure 4 captures the complete flow.

Figure 4: The complete flow of the custom Notification Application

Now that we know the flow of the application, let's examine the components of the application.

Application Components

In this application, there are three projects involved:

• SSE_Server
• SSE_Client
• SSE_Scheduler

Figure 5 shows the components of the application together with the classes and interfaces used to build the components.

As evident from their names, SSE_Server is the server project that sends out messages, when requested, for all connected clients. SSE_Client is the client project that connects to the server to retrieve messages. SSE_Scheduler is used to send data to the server at regular intervals of time.

After a client has established a connection with the server, it calls the Subscribe method of the server to subscribe to notifications. Once this process is successful, the client can invoke either the GetMessage(id) method to retrieve a message pertaining to a particular client or the GetMessages() method to retrieve all messages from the server.

In the sections that follow, you'll create the classes and interfaces for each of these projects. I'll mention the class and interface names together with the name of the project they belong to.

Create the Notification Class (SSE_Scheduler)

Create a new class named Notification in a file having the same name with a .cs extension and write the following code in there:

public class Notification
{
    public string Id { get; set; }
    public string Message { get; set; }
    public DateTime MessageTime
      { get; set; } = DateTime.Now;
}

Create the INotificationRepository Interface (SSE_Scheduler)

Create a new .cs file named INotificationRepository in your project and replace the default generated code with the following code snippet:

public interface INotificationRepository
{
    public Task<List<Notification>> GetNotifications();
    public Task<Notification> GetNotification(string Id);
    public Task AddNotification(Notification notification);
}
Create the NotificationRepository Class (SSE_Scheduler)

The NotificationRepository class implements the methods of the INotificationRepository interface. Create a new class named NotificationRepository in a file having the same name with a .cs extension. Now write the source code given in Listing 3 in there.

Register the NotificationRepository Instance (SSE_Scheduler)

The following code snippet illustrates how an instance of type INotificationRepository is added as a singleton service to the IServiceCollection:

builder.Services.AddSingleton
  <INotificationRepository,
   NotificationRepository>();

You'll now create a hosted service in the scheduler project that reads messages from the repository and pushes them to the server every n seconds. In this example, the value of n is hard-coded as 60. You can change it per your requirements.

Create the Hosted Service (SSE_Scheduler)

To create a custom hosted service, create a class that implements the IHostedService interface. It should contain definitions for the StartAsync and StopAsync methods. Listing 4 shows the complete source code of the CustomHostedService class.

Figure 5: The components of the application

Register the CustomHostedService Instance

The following code snippet illustrates how the hosted service is registered:

builder.Services.AddHostedService
  <CustomHostedService>();

The Program.cs File (SSE_Scheduler)

Because you're using ASP.NET Core 7 in this example, all code necessary to add services to the container and configure the request processing pipeline should reside in the Program.cs file. Listing 5 shows the complete source code of the Program.cs file of the SSE_Scheduler project for your reference.

Create the Message Queue (SSE_Server)

The following code snippet shows the ICustomMessageQueue interface:

public interface ICustomMessageQueue
{
    void Register(string id);
    void Deregister(string id);
    ICollection<string> Keys { get; }
    IAsyncEnumerable<string> DequeueAsync(string id,
      CancellationToken cancelToken = default);
    IAsyncEnumerable<string> DequeueAsync(
      CancellationToken cancelToken = default);
    Task EnqueueAsync(Notification notification,
      CancellationToken cancelToken);
}
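The listings in this article don't show the SSE_Server project's Program.cs, but for the controller shown later to receive an ICustomMessageQueue through dependency injection, a registration along these lines is needed. Treat this as an assumption mirroring the scheduler's registrations rather than the exact code from the download:

// Assumed SSE_Server registration: one shared queue instance for all
// controllers, registered as a singleton just like the scheduler's services.
builder.Services.AddSingleton<ICustomMessageQueue, CustomMessageQueue>();
builder.Services.AddControllers();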

Listing 3: The NotificationRepository Class

public class NotificationRepository : INotificationRepository
{
    private List<Notification> _notifications =
      new List<Notification>();

    public NotificationRepository()
    {
        _notifications.Add(new Notification
        {
            Id = "1",
            Message = "This is the first message",
            MessageTime = DateTime.Now
        });

        _notifications.Add(new Notification
        {
            Id = "2",
            Message = "This is the second message",
            MessageTime = DateTime.Now
        });
    }

    public async Task<List<Notification>> GetNotifications()
    {
        return await Task.FromResult(_notifications);
    }

    public async Task<Notification> GetNotification(string Id)
    {
        return await Task.FromResult(_notifications
          .FirstOrDefault(x => x.Id == Id));
    }

    public async Task AddNotification(Notification notification)
    {
        _notifications.Add(notification);
    }
}

codemag.com Developing Real-Time Web Applications with Server-Sent Events in ASP.NET 7 Core 41
Listing 4: The CustomHostedService Class
public sealed class CustomHostedService : *HW1RWLÀFDWLRQV 5HVXOW
IHostedService, IAsyncDisposable
{ IRUHDFK YDUQRWLÀFDWLRQLQQRWLÀFDWLRQV
private readonly {
,1RWLÀFDWLRQ5HSRVLWRU\ LI QRWLÀFDWLRQ,V3URFHVVHG
BQRWLÀFDWLRQ5HSRVLWRU\ {
private Timer? _timer; HttpContent body =
public CustomHostedService new StringContent(JsonSerializer.
 ,1RWLÀFDWLRQ5HSRVLWRU\QRWLÀFDWLRQ5HSRVLWRU\ 6HULDOL]H QRWLÀFDWLRQ 
 !BQRWLÀFDWLRQ5HSRVLWRU\ QRWLÀFDWLRQ5HSRVLWRU\ Encoding.UTF8, "application/json");
YDUUHVSRQVH FOLHQW3RVW$V\QF DSLQRWLÀFDWLRQ
public async Task StartAsync "postmessage", body).Result; }
(CancellationToken cancellationToken) }
{ }
_timer = new Timer(SendMessage, null, public async Task StopAsync
TimeSpan.Zero, TimeSpan.FromSeconds(60)); (CancellationToken cancellationToken)
} {
private void SendMessage(object? state) await Task.Delay(TimeSpan.FromSeconds(60), cancellationToken);
{ }
using var client = new HttpClient(); public async ValueTask DisposeAsync()
new Uri("http://localhost:5101/" + {
DSLQRWLÀFDWLRQ  _timer.Dispose();
YDUQRWLÀFDWLRQV  }
QRWLÀFDWLRQ5HSRVLWRU\ }

Listing 5: The Program.cs file of SSE_Scheduler Project


var builder = WebApplication.CreateBuilder(args); <CustomHostedService>();
var app = builder.Build();
// Add services to the container.
&RQÀJXUHWKH+773UHTXHVWSLSHOLQH
builder.Services.AddControllers();
app.UseAuthorization();
builder.Services.AddSingleton
,1RWLÀFDWLRQ5HSRVLWRU\ app.MapControllers();
1RWLÀFDWLRQ5HSRVLWRU\! 
builder.Services.AddHostedService app.Run();

Listing 6: The CustomMessageQueue Class


public class CustomMessageQueue : ICustomMessageQueue $"Error encountered " +
{ $"when adding a new message to the queue."); );
private ConcurrentDictionary }
<string, Channel<string>> else
_concurrentDictionary; {
await channel.Writer.
public CustomMessageQueue() :ULWH$V\QF QRWLÀFDWLRQ
{ Message, cancelToken);
_concurrentDictionary = }
new ConcurrentDictionary<string, }
Channel<string>>();
} public IAsyncEnumerable<string>
DequeueAsync(string id,
public void Register(string id) CancellationToken cancelToken = default)
{ {
bool success = bool success =
_concurrentDictionary.TryAdd _concurrentDictionary.TryGetValue
(id, Channel.CreateUnbounded<string>()); (id, out Channel<string> channel);
if (!success) if (success)
{ {
throw new ArgumentException return channel.Reader.
($"The client Id {id} is already registered"); ReadAllAsync(cancelToken);
} }
} else
{
public ICollection<string> Keys throw new ArgumentException
{ ($"The client Id {id} isn't registered");
get { return _concurrentDictionary.Keys; } }
} }
public void Deregister(string id)
{ public async IAsyncEnumerable<string> DequeueAsync(CancellationToken
_concurrentDictionary.TryRemove(id, out _); cancelToken = default)
} {
IAsyncEnumerable<string> result;
public async Task EnqueueAsync
1RWLÀFDWLRQQRWLÀFDWLRQ foreach (var keyValuePair in
CancellationToken cancelToken) _concurrentDictionary)
{ {
bool success = _concurrentDictionary.TryGetValue( await foreach (string str in DequeueAsync(keyValuePair.Key,
QRWLÀFDWLRQ,G cancelToken))
out Channel<string> channel {
); yield return str;
}
if (!success) }
{ }
throw new ArgumentException( }
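Listing 6 leans on System.Threading.Channels; if you haven't used channels before, here's a tiny self-contained sketch of the write/read pairing the queue is built on. This is illustrative only, not part of the sample projects:

using System.Threading.Channels;

var channel = Channel.CreateUnbounded<string>();

// Writer side: push a couple of items, then signal completion.
await channel.Writer.WriteAsync("first");
await channel.Writer.WriteAsync("second");
channel.Writer.Complete();

// Reader side: ReadAllAsync yields items until the writer completes.
await foreach (var item in channel.Reader.ReadAllAsync())
{
    Console.WriteLine(item);
}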

Listing 7: The Subscribe Method
public async Task<IActionResult> Subscribe(string id) await foreach (var message
{ in _messageQueue.DequeueAsync
Response.StatusCode = 200; (id, HttpContext.RequestAborted))
Response.Headers.Add("Cache-Control", "no-cache"); {
Response.Headers.Add("Connection", "keep-alive"); await streamWriter.WriteLineAsync
Response.Headers.Add("Content-Type", ($"Message received: " +
"text/event-stream"); $"{message} at {DateTime.Now}");
await streamWriter.FlushAsync();
try }
{ }
1RWLÀFDWLRQQRWLÀFDWLRQ  catch (Exception ex)
new1RWLÀFDWLRQ  {
QRWLÀFDWLRQ,G LG return BadRequest(ex.Message);
QRWLÀFDWLRQ0HVVDJH $"Subscribed to" + }
$" client {id}"; ÀQDOO\
_messageQueue.Register(id); {
StreamWriter streamWriter = _messageQueue.Deregister(id);
new StreamWriter(Response.Body); }
await _messageQueue.EnqueueAsync
 QRWLÀFDWLRQ return Ok();
HttpContext.RequestAborted); }

The CustomMessageQueue class implements the methods of the ICustomMessageQueue interface. The code listing given in Listing 6 shows the CustomMessageQueue class.

Install NuGet Package(s)

So far so good. The next step is to install the necessary NuGet package(s). To install the required packages into your project, right-click on the solution and then select Manage NuGet Packages for Solution…. Now search for the package named Lib.AspNetCore.ServerSentEvents in the search box and install it. Alternatively, you can type the command shown below at the NuGet Package Manager Command Prompt:

PM> Install-Package
  Lib.AspNetCore.ServerSentEvents

Alternatively, you can install this package by executing the following command at the Windows Shell:

dotnet add package
  Lib.AspNetCore.ServerSentEvents

Create the NotificationController Class (SSE_Server)

The NotificationController provides action methods that clients can invoke. A client can subscribe to the server to receive messages. The Subscribe method, shown in Listing 7, accepts a client ID as a parameter.

Listing 8: The PostMessage Action Method

public async Task<IActionResult> PostMessage
  ([FromBody] Notification notification)
{
    try
    {
        _messageQueue.Register(notification.Id);
        await _messageQueue.EnqueueAsync
          (notification, HttpContext.RequestAborted);
        return Ok();
    }
    catch (Exception ex)
    {
        return BadRequest(ex.Message);
    }
}

Figure 6: The SSE Client having Id 1 is now subscribed to receive messages from the server.

Listing 9: The GetMessage(string) Action Method


public async Task<IActionResult> GetMessage(string id) }
{ }
Response.ContentType = "text/event-stream"; catch (Exception ex)
{
try return BadRequest(ex.Message);
{ }
StreamWriter streamWriter = ÀQDOO\
new StreamWriter(Response.Body); {
await foreach (var message in _messageQueue.Deregister(id);
_messageQueue.DequeueAsync }
(id, HttpContext.RequestAborted))
{ return Ok();
await streamWriter.WriteLineAsync }
($"{DateTime.Now} {message}");
await streamWriter.FlushAsync();

Listing 10: The GetMessages Action Method
public async Task<IActionResult> GetMessages() _messageQueue.DequeueAsync())
{ {
Response.Headers.Add("Content-Type", await streamWriter.WriteLineAsync
"text/event-stream"); ($"{DateTime.Now} {message}");
Response.Headers.Add("Cache-Control", await streamWriter.FlushAsync();
"no-cache"); }
Response.Headers.Add("Connection",
"keep-alive"); return Ok();
Response.StatusCode = 200; }
catch (Exception ex)
try {
{ return BadRequest(ex.Message);
StreamWriter streamWriter = }
new StreamWriter(Response.Body); }
await foreach (var message in

Listing 11: The NotificationController Class


[Route("api/[controller]")] {
[ApiController] StreamWriter streamWriter =
public class 1RWLÀFDWLRQ&RQWUROOHU : ControllerBase new StreamWriter(Response.Body);
{ await foreach (var message
private readonly ICustomMessageQueue _messageQueue; in _messageQueue.DequeueAsync(id))
public 1RWLÀFDWLRQ&RQWUROOHU {
(ICustomMessageQueue messageQueue) await streamWriter.WriteLineAsync(
{ $"Client Id: {id} Message: {message}" +
_messageQueue = messageQueue; $" Time: {DateTime.Now}");
} await streamWriter.FlushAsync();
}
[HttpGet("subscribe/{id}")] }
public async Task<IActionResult> Subscribe(string id) catch (Exception ex)
{ {
Response.StatusCode = 200; return BadRequest(ex.Message);
Response.Headers.Add("Cache-Control", }
"no-cache"); ÀQDOO\
Response.Headers.Add("Connection", {
"keep-alive"); _messageQueue.Deregister(id);
Response.Headers.Add("Content-Type", }
"text/event-stream");
return Ok();
try }
{
1RWLÀFDWLRQQRWLÀFDWLRQ new1RWLÀFDWLRQ  [HttpGet("getmessages")]
QRWLÀFDWLRQ,G LG public async Task<IActionResult> GetMessages()
{
QRWLÀFDWLRQ0HVVDJH $"Subscribed to " + Response.Headers.Add("Content-Type", "text/event-stream");
$"client {id}"; Response.Headers.Add("Cache-Control", "no-cache");
Response.Headers.Add("Connection", "keep-alive");
_messageQueue.Register(id); Response.StatusCode = 200;
StreamWriter streamWriter =
new StreamWriter(Response.Body); try
await _messageQueue.EnqueueAsync {
QRWLÀFDWLRQ+WWS&RQWH[W5HTXHVW$ERUWHG  StreamWriter streamWriter =
new StreamWriter(Response.Body);
await foreach ( await foreach (var message in
var message in _messageQueue.DequeueAsync _messageQueue.DequeueAsync())
(id, HttpContext.RequestAborted) {
) await streamWriter.WriteLineAsync
{ ($"{DateTime.Now} {message}");
await streamWriter.WriteLineAsync await streamWriter.FlushAsync();
($"Message received: {message} at {DateTime.Now}"); }
await streamWriter.FlushAsync();
} return Ok();
} }
catch (Exception ex) catch (Exception ex)
{ {
return BadRequest(ex.Message); return BadRequest(ex.Message);
} }
ÀQDOO\ }
{
_messageQueue.Deregister(id); [HttpPost]
} [Route("[action]")]
public async Task<IActionResult>
return Ok(); PostMessage >)URP%RG\@1RWLÀFDWLRQQRWLÀFDWLRQ
} {
try
[HttpGet("getmessage/{id}")] {
public async Task<IActionResult> BPHVVDJH4XHXH5HJLVWHU QRWLÀFDWLRQ,G 
GetMessage(string id) await _messageQueue.EnqueueAsync
{ QRWLÀFDWLRQ+WWS&RQWH[W5HTXHVW$ERUWHG 
Response.StatusCode = 200; return Ok();
Response.Headers.Add("Cache-Control", }
"no-cache"); catch (Exception ex)
Response.Headers.Add("Connection", {
"keep-alive"); return BadRequest(ex.Message);
Response.Headers.Add("Content-Type", }
"text/event-stream"); }
}
try

Listing 12: The Program.cs file of the SSE Console Client Application
HttpClient client = new HttpClient(); {
client.Timeout = TimeSpan.FromSeconds(60); while (!streamReader.EndOfStream)
{
string url = $"http://localhost:5101/" + var message =
DSLQRWLÀFDWLRQVXEVFULEH; await streamReader.ReadLineAsync();
Console.WriteLine(message);
while (true) }
{ }
try }
{ catch (Exception ex)
Console.WriteLine("Establishing connection" + {
" with the server."); throw;
using (var streamReader = }
new StreamReader(await client.GetStreamAsync(url))) }

The Subscribe method creates an instance of the Notification class and populates it with the client ID and a text message. This message is sent to the client to inform it that it has been successfully subscribed to the server. The Register method registers the client with the message queue, and the EnqueueAsync method queues the message.

Listing 7 shows the source code of the Subscribe action method.

The Scheduler posts messages to the server at pre-defined intervals of time by calling the PostMessage action method. Listing 8 contains the source code of the PostMessage action method.

Refer to the GetMessage(string id) and GetMessages() methods. While the GetMessage(string id) method is used to retrieve the messages pertaining to a registered client, the GetMessages() method returns all messages pertaining to all clients available in the message queue.

Listing 9 contains the code listing for the GetMessage(string id) method. Listing 10 shows the GetMessages() method that returns all messages in the message queue. Listing 11 provides the complete source code of the NotificationController class.

Create a New Console Application Project in Visual Studio 2022

Let's create a console application project in Visual Studio 2022 that you'll use for building the client application. When you launch Visual Studio 2022, you'll see the Start window. You can choose Continue without code to launch the main screen of the Visual Studio 2022 IDE.

To create a new Console Application project in Visual Studio 2022:

1. Start Visual Studio 2022.
2. In the Create a new project window, select Console App, and click Next to move on.
3. Specify the project name as SSE_Client and the path where it should be created in the Configure your new project window.
4. If you want the solution file and project to be created in the same directory, you can optionally check the Place solution and project in the same directory checkbox. Click Next to move on.
5. In the next screen, specify the target framework you would like to use for your console application.
6. Click Create to complete the process.

Create a Console Client (SSE_Client)

The code given in Listing 12 shows the console client that is used to connect to the SSE service, retrieve messages, and then display them at the console window.

Executing the Application

Start the SSE_Server project followed by the SSE_Client console project. Figure 6 shows how the client has subscribed to receive messages from the server.

Change the URL in the console project to the following:

string url = $"http://localhost:5101/" +
  "api/notification/getmessage/1";

Now, run the three projects SSE_Server, SSE_Scheduler, and SSE_Client in this order.

Figure 7: The text message for client 1 is displayed at the console window of the SSE_Client project.

Figure 7 shows the message for client 1 displayed at the console.

Conclusion

SSE enables you to send notifications from an SSE server to the connected clients whenever an event occurs. Contrary to traditional polling techniques, SSE is beneficial because the server notifies the connected clients only when an event occurs. If you need bidirectional communication between a server and its clients, you should use WebSockets instead.

Joydip Kanjilal

ONLINE QUICK ID 2309061

Getting Started with AI Copilots in Your Own Applications
Applications without Copilots are now legacy! I have now made this statement on a number of occasions, and it's become somewhat of a battle-cry within the CODE organization. Some might find the statement a bit strong or premature, but I feel that as we push forward, this is indeed how we have to approach software development from now on. How did we get here?

Just a few months ago, statements like this would not only have been preposterous, but nobody would have even known what I'm talking about. Then, OpenAI released ChatGPT at the end of November 2022 and GPT-4 in March of 2023. These artificial intelligence (AI) products not only took most of the tech world by surprise, but they captured the imagination of people who normally don't take an interest in software products. All of a sudden, I'd go to a BBQ with people who had no connection to the software industry, and all they would want to talk to me about is artificial intelligence! I've explained generative AI to retired people, hairdressers, and journalists. I've held executive briefings showing some of America's largest companies how this technology can elevate what they are doing. It seems to not matter how technical a person is; everyone is fascinated by AI (and at times, also a little afraid).

Markus Egger
[email protected]

Markus is the founder and publisher of CODE Magazine and is EPS President and Chief Software Architect. He is also a Microsoft RD (Regional Director) and one of the longest (if not THE longest) running Microsoft MVPs (Most Valuable Professional). Markus is also a renowned speaker and author.

Markus' client list contains some of the world's largest companies, including many on the Fortune 500. Markus has been published extensively, including in MSDN Magazine, Visual Studio Magazine, his own CODE Magazine, and much more. Markus focuses on development in .NET (Windows, Web, Windows Phone, and WinRT) as well as Android and iOS. He is passionate about overall application architecture, SOA, user interfaces, general development productivity, and building maintainable and reusable systems.

I haven't seen anything like this since the earlier days of the internet, except this happened much faster. Much faster! In fact, ChatGPT is the fastest adopted product of all time (at least outside of China), gaining millions of users in a matter of days. Yet one can argue whether it's even a "product" or whether it's "just a tech demo." Perhaps the truth lies somewhere in between.

Whatever the case may be, one thing is clear: ChatGPT as a product is not the grand vision. Instead, the technology that powers ChatGPT—the large language models (LLMs) that OpenAI created—is the basis for a completely new generation of software that, for the first time in any modern developer's career, completely changes how we write programs and interact with computers. We move from a model of clear and unambiguous instructions that we code into software and that results in very predictable and meticulously orchestrated results, to a fuzzier, but orders of magnitude more powerful, paradigm. We now interact with software much more like we would interact with a human. The interaction is rich and powerful, but also, at times, somewhat unpredictable.

Many Copilot/AI features in modern applications would have been considered science fiction just a few months ago.

It isn't just my friends and neighbors who got inspired by this rapid new development. Microsoft is invested in OpenAI and has to be given a lot of credit for seeing the potential in LLMs. Microsoft is the first company to use this new paradigm and places LLMs at the base of pretty much any upcoming Microsoft product, whether you are an Office user, or whether you are a system administrator concerned with security, or anything in between. For instance, you'll be able to use AI built into MS Teams to provide you with a summary of a meeting you missed. You can then have the same AI create a to-do list based on the decisions made in that meeting. You can also have that AI create a PowerPoint presentation or a Word document based on what was discussed. Perhaps if one of the discussion points was that a new low-code application needs to be created, you can have the AI create a Power Platform application that satisfies the criteria. And most of the time, the AI will be able to do it better than most human users. And if not, then you either interfere by hand or ask the AI to change something. You can do all of that in plain English or any other language of your choice.

We refer to this approach as an AI "Copilot." The AI is not autonomous. The AI does not take over. But it's always there to assist you and do what you want, even if you're not a programmer or don't even express yourself particularly well or accurately. Welcome to the Age of Copilot! This may sound like science fiction, and half a year ago it would have been, but this is now concrete reality.

Copilots are not only for Microsoft applications. Copilots are a new application paradigm (just like Windows was once a new paradigm over command line interfaces, or the web was a new paradigm over desktop apps). It's hard to imagine an application that can't benefit from the Copilot paradigm. Having built a number of business applications with Copilot features, I can testify to the enormous productivity increase this provides. (Even though we are arguably just scratching the surface.) Applications supporting this new paradigm have a considerable benefit over apps that don't. Conversely, applications that do not have such features, even if they're dominant players in their market segments today, will find themselves leapfrogged and made obsolete.

It's an enormous opportunity but can be a great danger for those who "miss the boat."

And that's why we consider applications without Copilot features to be legacy applications.


features to existing applications (unlike previous paradigm shifts, where we had to abandon DOS-based apps when we moved to Windows, then abandoned them again when we moved to the web, and then did it all over again for mobile apps). In this article, I aim to explain some of the basics that are required to engage in this journey as a developer. Just as Windows development couldn't be explained to a mainframe developer in a single article, I won't be able to explain this type of AI-driven development in a single article. However, I hope to at least give you a solid start.

I also invite you to check out many of the other resources we make available, such as our free State of .NET and CODE Presents webinars, our training videos and classes, CODE Executive Briefings (www.codemag.com/ExecutiveBriefing), as well as our blog on www.codemag.com/blog, which all cover many of these topics. And feel free to email me (or CODE in general: [email protected]).

It all starts with understanding OpenAI.

Getting Started with (Azure) OpenAI

In the July/August 2023 issue of CODE Magazine, Sahil Malik and Wei-Meng Lee both wrote articles explaining the fundamentals of OpenAI's large language models (LLMs) and even showed how to write your own code to call these AI models. Today, we'll take this a step further. But before we can run, we need to walk, so let's start with a fundamental example.

OpenAI's LLMs are currently the most talked about and accessible, and for good reason. (There are others, and we expect this to become a large ecosystem of competing and cooperating LLMs.) The OpenAI models are very powerful and enable impressive scenarios. One of them is their own implementation at https://chat.openai.com, which is what people generally refer to as "ChatGPT." I like to think of that site mainly as a tech demo for this powerful new development paradigm. (Which isn't entirely accurate, as OpenAI is developing it into more and more of an extensible platform in its own right.) Behind this chatbot interface sit OpenAI services, which provide a list of multiple individual LLMs that all have their own strengths and weaknesses. We can use those in our own applications as well.

There are two fundamental ways to call OpenAI services. One is through OpenAI directly (which is what Sahil and Wei-Meng showed in their articles), and the other is through Microsoft's version called "Azure OpenAI." The two approaches are very similar (and the code practically identical), so why choose one over the other? That's an excellent question, and perhaps the answer isn't entirely clear at this point. You can use both approaches and follow the examples in this article very closely. For instance, if you sign up for OpenAI directly (which you can do at https://platform.openai.com), you can explore the APIs, sign up for a subscription (yes, this costs money, but not very much; you can fiddle with it for quite a while and probably spend less than a dollar), play with examples, or write code like the following Python example:

import os
import openai

openai.api_type = "open_ai"
openai.api_base = "https://api.openai.com/v1"
openai.api_version = ""
openai.api_key = "<YOUR API KEY>"
os.environ["OPENAI_API_KEY"] = openai.api_key

response = openai.Completion.create(
    engine="text-davinci-003",
    prompt="Please tell me a story.")

print(response.choices[0].text)

This example is straightforward. It pulls in an "openai" client package, which then gets configured to access the OpenAI API subscription (make sure you put in your own API key). The most important part is the call to "openai.Completion.create()." The completions API allows you to send some text to the model and ask it to "complete" it. In other words, you send it a prompt and ask it to respond. Everything you do with LLMs revolves around "completing text," whether it's a simple completion as in this example, or a series of back-and-forth completions, which you then perceive as a chat or a dialog. You could also ask the AI to return different versions of a completion, which are called "choices," but in most scenarios, you just want a single version, hence we can simply retrieve the first (and only) choice.

More Information

CODE has recently done quite a number of free online webinars that cover AI and, specifically, the use of OpenAI's LLMs in development scenarios. We recommend checking out recent State of .NET presentations (codemag.com/StateOfDotNet) as well as our CODE Presents webinar series (codemag.com/CodePresents). Also, take a look at our blogs, which include one dedicated to AI. Check it out at codemag.com/Blog/AI.

It's OpenAI's job to train models. It's Microsoft's job to integrate AI with all kinds of other services, such as databases and security. That's why I like using Azure OpenAI.

This example could be used almost unchanged running against Azure OpenAI. You'd simply change the API type to access Azure and point the base URL at the Azure resource you want to use (see below). No further code change is required. I like using Azure OpenAI. If you already have an Azure account, you can simply add an OpenAI resource the exact same way you would create any other Azure resource (such as a web app or a database). One of the benefits of going through Azure is that you will be billed as part of your overall Azure invoice. I find that, especially for serious usage and enterprise scenarios, it's nice to have everything available and manageable in one place.

Another difference between Azure OpenAI and direct OpenAI is that the OpenAI organization is a provider of AI models, while Microsoft is a provider of many different cloud services that form a coherent whole. It isn't OpenAI's purpose or job to integrate its AI models with other things, such as security, other cognitive services, search, or databases. OpenAI focuses entirely on creating and evolving AI. Microsoft's purpose, on the other hand, is to make sure you don't have to worry about those details and to provide a large platform, of which one puzzle piece is LLMs. The difference may not be immediately apparent,
but we're now starting to see more and more integration across Azure offerings. For instance, we'll see, later in this article, that, through Azure, it's possible to directly integrate things like Azure Cognitive Search, Blob Storage, and more, into the use of LLMs.

On a sidenote: The first question almost every customer asks me when it comes to incorporating LLMs into their own systems is whether their data is secure. After all, if you just type a question into chat.openai.com, whatever you type in is not kept private. Instead, it's used to further train OpenAI's models. (Remember: It's OpenAI's main purpose to push the development of better and better AI models.) OpenAI states that this is not the case for their paid offerings (a statement I personally trust), but it's apparent that businesses trust Microsoft more (after all, they would have much to lose and little to gain by not keeping customer data secure and private). Microsoft guarantees that data won't be used for training (or anything else). Many businesses are already comfortable putting SQL Server (and many other forms of data) into the Azure cloud, and this equation essentially remains the same if you use Azure's OpenAI services.

Using Azure OpenAI, your data remains secure and private.

Assuming that you want to go the Azure route, you can set up an Azure OpenAI resource and run the previous Python example unchanged, once you point it to the Azure URLs and provide an Azure API key. Just like OpenAI directly, Azure OpenAI can be called in different ways, including through the Python SDK. You could also make direct HTTP requests (it's a REST-based API, after all). In addition, because you're now in the Microsoft world, .NET client packages are provided for easy access from .NET languages. I find that to be an extremely compelling feature because AI is often built into existing applications to significantly enhance their usefulness, and many of those applications are written in languages other than Python.

The following is a C# example that uses the Azure OpenAI services to perform the equivalent tasks of the Python example above.

using Azure;
using Azure.AI.OpenAI;

var client = new OpenAIClient(
    new Uri("https://<APP>.openai.azure.com/"),
    new AzureKeyCredential("<YOUR API KEY>"));

var response = await client.GetCompletionsAsync(
    "ChatBot",
    "Please tell me a story.");

Console.WriteLine(response.Value.Choices[0].Text);

As you can see, the code is very similar. In fact, I find the programming language used to call these kinds of AI services to be almost irrelevant. If the language can make HTTP calls, you're good to go.
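To make that point concrete, here is a minimal sketch of calling the service with nothing but HttpClient. The URL shape, the api-version value, and the api-key header are assumptions based on the Azure OpenAI REST documentation at the time of this writing, so verify them against the current reference:

using System.Net.Http.Json;

// Sketch: calling the Azure OpenAI completions endpoint with nothing
// but HttpClient. The deployment name ("ChatBot") and the api-version
// value are assumptions for illustration.
var http = new HttpClient();
http.DefaultRequestHeaders.Add("api-key", "<YOUR API KEY>");

var url = "https://<APP>.openai.azure.com/openai/deployments" +
          "/ChatBot/completions?api-version=2023-05-15";

var result = await http.PostAsJsonAsync(url, new
{
    prompt = "Please tell me a story.",
    max_tokens = 1000
});

// The raw JSON response carries the completion in choices[0].text.
Console.WriteLine(await result.Content.ReadAsStringAsync());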
There are a few aspects of the C# example worth pointing out: Note that the URL for the Azure OpenAI service is exposed as defined for my deployment specifically (unlike the direct OpenAI endpoints, which are the same for everyone and every use). As with everything in Azure, when you create an OpenAI resource, you specify the name as well as the deployment location. Azure gives you complete control over what region you'd like to deploy to (so if you have a legal or logistical reason to keep your AI service in a specific geographic region, you can do so through Azure OpenAI). Furthermore, you can create your own named deployments of LLMs. For instance, if you need a model that can generate marketing text, you could create a deployment called "Marketing Model," then choose which actual model that maps to (such as "text-davinci-003"), and evolve that over time as more powerful models become available, without having to change your source code that accesses such a model. Nice! (In this example, I deployed a GPT3.5 model and called it "ChatBot." See below for more details.)

A Simple Chat Example

Usually, the first implementation all our customers ask us to create is a chatbot, similar to chat.openai.com, but completely secure, so employees can use this kind of feature set without giving away company secrets (as has accidentally happened with Samsung. You might want to Google that). This is also a great example to go beyond the very basics of OpenAI LLMs, as it shows a chain of interaction.

To get started with a chatbot using Azure OpenAI, you first must create an Azure OpenAI resource through the Azure Portal (https://portal.azure.com), as shown in Figure 1. (Note: If you'd rather follow this example using OpenAI directly, you can follow all the concepts presented here, although the code will be different as the REST API will have to be used low-level.) The user interface and flow for this is very similar to creating other resources in Azure (such as App Services). Currently, there aren't a lot of options or settings. The most important one is the name of the resource, which also defines the "endpoint" (the URL) of your OpenAI resource. Creating a new Azure OpenAI resource automatically creates API keys, which can be accessed through the Azure Portal (you'll need these a bit further down).

It's interesting that it's possible to select which region (and therefore, which data center and which geographic location) the service is to be deployed to (this is especially important if you have a legal requirement to run your AI in regions such as the European Union).

Note that as I'm writing this article, the list of available regions is somewhat limited. This is simply a resource limitation. In fact, there's a waitlist to even sign up for Azure OpenAI, although I hope that by the time you read this article, this limitation isn't an issue for you. Microsoft's working hard on establishing more data centers around the world that can run these kinds of workloads, which are often extremely resource intensive. However, the current list serves the purpose of making OpenAI services available in major regions. (For more information, visit https://www.codemag.com/StateOfDotNet and watch one of my recent recordings, such as the BUILD
2023 recap webinar, in which I provide more information on how fast Microsoft is bringing new data centers online and how incredibly large some of these things are.)

Figure 1: Creating an Azure OpenAI resource is very similar to creating many other Azure services.

Understanding Models

Once the fundamental Azure OpenAI resource is created, you need to "deploy a model." To do this, you need to first understand what "models" in OpenAI are. When using applications like ChatGPT, an LLM is used to generate/calculate text responses to the user's input. At least, that's the simplified version of what really happens. To be at least somewhat more accurate, you must dig a little bit deeper.

When the user enters a "prompt" (the message sent to the LLM) in ChatGPT, there isn't just a single model that responds. And it doesn't respond with a complete message. Instead, multiple models make up a whole system that gets invoked to generate a response. Depending on the need and intent of the user, different models are more appropriate than others. Some models are great at creating elaborate responses but are slower and more expensive (because they are more resource-intensive to operate). Other models might be much faster but provide simpler responses (which may be completely appropriate for your specific need). And yet other models are better at specific tasks, such as creating or explaining source code. (Yes, LLMs can write programs—some better than others.) Models also differ in their ability to handle large amounts of text. Each model supports a certain number of "tokens," which roughly maps to text length (more on that below).

As you become a more experienced AI programmer, understanding the characteristics of various models, as well as staying on top of which new models become available, is an important skill. (A list of current models can be found at https://learn.microsoft.com/en-us/azure/cognitive-services/openai/concepts/models.)

It's also interesting to understand that LLMs do not create large amounts of text. Instead, they predict a single word based on some prompt they receive. This may seem strange to you, as you've likely seen ChatGPT create large amounts of text. This is achieved by continuing to ask the model to keep running and generating "just one more word" until a satisfying complete output is achieved. This has no immediate impact on the samples in this article, but it's an interesting characteristic to understand. For one, it gives an idea of the performance characteristics (as you can imagine, returning a very large amount of text calls models many times and is thus a resource-intensive process). It also explains why models sometimes generate nonsense (although this is less of a problem with the most capable models, such as GPT3.5 or GPT4). The models simply don't "think" about the complete response they're going to give. Instead, generating "one more word" sometimes sends them down a path that isn't necessarily the most appropriate. Once the entire response is generated, it's possible to hand it back to a model and ask it whether it thinks the result is appropriate for the original prompt. Once it can analyze the entire result it generated, it often understands whether it was sensible or not. As it turns out, this is a trick that works rather well (especially with GPT4 models) to verify that a response is correct and prevent "hallucinations." (I know a lot of humans that could benefit from the same thought process, but that's a different matter.) If the model decides the response wasn't great, you can simply ask it to do it again, and the second time around, it'll likely do much better.
an idea of the performance characteristics (as you can
imagine, returning a very large amount of text calls mod-
els many times and is thus a resource-intensive process).
It also explains why models sometimes generate nonsense
(although this is less of a problem with the most capable
models, such as GPT3.5 or GPT4). The models simply don’t
“think” about the complete response they’re going to give.
Instead, generating “one more word” sometimes sends
them down a path that isn’t necessarily the most appropri-
ate. Once the entire response is generated, it’s possible to
hand it back to a model and ask it whether it thinks the
result is appropriate for the original prompt. Once it can
analyze the entire result it generated, it often understands
whether it was sensible or not. As it turns out, this is a
trick that works rather well (especially with GPT4 models)
to verify that a response is correct and prevent “hallucina-
tions.” (I know a lot of humans that could benefit from the
same thought process, but that’s a different matter). If the Figure 1: Creating an Azure OpenAI resource is very similar to creating many other Azure services.

codemag.com Getting Started with AI Copilots in Your Own Applications 49


Artificial Intelligence to work. You can do so from any type of application you'd like. To keep things simple, I create a C# command line application. To access the Azure OpenAI services, I add the "Azure.AI.OpenAI" NuGet package to my project. Now, all I have to do is create an instance of the client-side object that's conveniently provided to me by this package, set the endpoint URL as well as the API key (which can be found in the Azure Portal; see above), and I'm ready to start firing messages at the AI. The previous C# code snippet shows a first version of that.

We haven't really done much here in terms of fine-tuning the setup, but this code already produces an amazing result (Figure 3). All this code does is call the service with a simple prompt ("tell me a story") using all the default settings, yet the response you get is quite elaborate. Even this simple example produces results that would have been inconceivable just a few months ago. And we've just started to scratch the surface.

Figure 3: An extremely simple chatbot is already able to tell a coherent and completely made-up story.

Understanding Prompts

A critical skill when coding AI (or even just interacting with AI from a user's viewpoint) is to understand how to best "ask it questions." This is often referred to as "prompt engineering." I find this term already becoming overused, as it describes a wide range of techniques, from users simply typing questions to developers coding very advanced flows of sequences of questions/prompts, often with large parts of this back-and-forth banter between a program and the AI completely hidden from the user. When I refer to "prompt engineering," I usually refer to the latter. To me, prompt engineering implies an engineering component, the way a software developer understands it.

When sending prompts to an AI, there's a lot more than just a single block of text sent in hopes of a meaningful AI response. For one thing, there are different types of prompts. There's the "system prompt," which is a prompt that's often sent to an LLM as the very first prompt. It's invisible to the user, and it's often used to configure the overall system. A system prompt might be something like this:

You are a professional customer service
representative who works for CODE Magazine
and answers professionally and concisely.
You only answer questions that relate to
CODE Magazine.

This type of prompt is never visible to the user. It represents additional input to the model that configures its overall behavior. This also reveals one of the major paradigm shifts that developers have to learn when developing against these AI models: Instead of setting parameters in a very precise way, you simply tell the model in plain English (or any other language, for that matter) what you want. It's both extremely powerful and somewhat scary, because the AI will interpret it any way it sees as most appropriate. The results will vary every time you interact with the AI. It's also extremely powerful and works quite well. Imagine what kinds of properties you'd have to set to achieve the same result that this system prompt will presumably achieve!

As you get more experienced in the use of LLMs, you will find this kind of programming exhilarating! I encourage you to experiment with system prompts. To spark your imagination, here's a prompt I often use when I demonstrate the Copilots we integrate into our applications:

You are a customer service agent. You answer
precisely yet light-heartedly and sometimes
you are funny. Because you are used for
demo purposes, replace all names that appear
in the output with fake names for
privacy reasons.

Now there's something you don't see every day! It solves a problem I originally struggled with when I wanted to show off AI in real-world applications. I often couldn't show the generated results for privacy reasons. But using a system prompt like this, the AI understands that everything that appears to be a name should be replaced with a made-up name instead. This is the kind of thing that's very difficult to do with traditional programming techniques—how would you even write an algorithm that detects what's a name, let alone replace it with fake names, and then apply that fake name consistently going forward in follow-up conversations? Because we're dealing with a model that is, at the very least, very good at applying statistical mathematical tricks to fake intelligence, it understands this kind of request and does quite well with it. Amazing indeed!

SPONSORED SIDEBAR: AI-Searchable Knowledgebase

One of the first scenarios most companies want to implement is an AI-searchable knowledgebase. This way, documents, such as employee manuals, can be "indexed" and then used by Generative AI to answer questions. This is fun, but most companies quickly outgrow such an approach. The next step is a system that can provide this capability in a way that's secure and tailored (not every user should be able to see everything and not all information applies to all scenarios) and provides real-time information, rather than pre-indexed static documents. Finally, organizations realize that if such a system also had access to data (in a secure and appropriate fashion), such a system evolves from a fringe offering to the center that the business evolves around and that all other systems are controlled from. Regardless of how far you want to travel down this rabbit hole, our CODE Consulting division can help you in implementing such a system quickly. Find out how at codemag.com/AI-Docs.

Here's one more system prompt I often use to amuse myself when I sit in front of my computer at 3 a.m. to meet the latest deadline (after all, you gotta have some fun every now and then, don't you?):

You respond in the style of John Cleese from
Monty Python, acting as a medieval peasant.

If the output created by this doesn't put a smile on your face, brave sir knight, I must humbly request you turn in your geek card. <g>

As you can imagine, using a system prompt effectively is important. You want to use a system prompt that's precise. At the same time, you don't want the system prompt to be too long. After all, when dealing with LLMs, you're constantly battling size limitations, as they can only process a certain amount of text. Therefore, try to be specific without wasting too much space on the system prompt.

If this doesn't put a smile on your face, brave sir knight, I must humbly request you turn in your geek card!

The next type of prompt we're interested in is the user prompt. This is the core question you send to the AI. If you've ever used ChatGPT, the user prompt is what you type into the interface on the web page. A common usage pattern is to let the user enter the text and then send it to the AI as the user prompt. Note that this isn't the only way to use this type of prompt. Very often, you'll engineer other ways of coming up with the input text (as you will see below). For now, let's enhance the prior example by reading the input from the console so the user can enter anything they want.

Also, while I'm at it, I'm going to switch from a simple "completions" scenario to the slightly more advanced "chat completions" scenario. A chat is essentially a series of completions and the API is very similar, but it makes it easier to create larger conversations.

The new example, which combines a system prompt and a flexible user prompt, now looks like this:

using Azure;
using Azure.AI.OpenAI;

var client = new OpenAIClient(
    new Uri("https://<APP>.openai.azure.com/"),
    new AzureKeyCredential("<YOUR API KEY>"));

var options =
    new ChatCompletionsOptions { MaxTokens = 1000 };

options.Messages.Add(new ChatMessage(
    ChatRole.System,
    """
    You are a friendly and funny chap,
    used to amuse an online audience.
    You respond as if you were John Cleese from
    Monty Python but with an old-English, medieval
    way of talking.
    """));

var userPrompt = Console.ReadLine();
options.Messages.Add(new ChatMessage(
    ChatRole.User, userPrompt));

var response = await
    client.GetChatCompletionsAsync("ChatBot", options);

Console.WriteLine(
    response.Value.Choices[0].Message.Content);

This generates a response, which is also known as a "prompt." The AI considers itself to be an "assistant" to the person using it (or at least OpenAI considers it as such—the AI itself has no opinion on the matter) and therefore, this is known as the assistant prompt.

You might wonder why the response is considered a "prompt" at all. Isn't it just the "response?" Yes and no. As you have longer conversations with the AI, you somehow need to preserve the state of the conversation. For instance, if you ask the AI "What is a Ferrari," it responds with something like "an Italian sports car." If you then subsequently ask "what is its typical color," the AI needs to somehow know what "it" is. To us humans, it's obvious that "it" refers to the "Ferrari" from the prior question. However, an LLM is a completely stateless system. It does not memorize prior answers or dialogs. Therefore, if you want the AI to understand such context, you must send it all the parts of the conversation that are meaningful for the context. The easiest way to do so is to simply add to the collection of prompts, including the assistant prompts, on every subsequent call:

using Azure;
using Azure.AI.OpenAI;

var client = new OpenAIClient(
    new Uri("https://<APP>.openai.azure.com/"),
    new AzureKeyCredential("<YOUR API KEY>"));
var options =
    new ChatCompletionsOptions { MaxTokens = 1000 };

options.Messages.Add(new ChatMessage(
    ChatRole.System,
    """
    You are a friendly and funny chap,
    used to amuse an online audience.
    You respond as if you were John Cleese from
    Monty Python but with an old-English, medieval
    way of talking.
    """));

while (true)
{
    Console.ForegroundColor = ConsoleColor.White;
    Console.Write(">> ");
    var prompt = Console.ReadLine();
    if (prompt?.ToLower() == "exit") break;

    options.Messages.Add(new ChatMessage(
        ChatRole.User, prompt));

    Console.ForegroundColor = ConsoleColor.Cyan;
    Console.WriteLine();

    var response = await
        client.GetChatCompletionsAsync(
            "ChatBot", options);
    var currentResponse =
        response.Value.Choices[0].Message.Content;

    Console.WriteLine(currentResponse);
    Console.WriteLine();

    // Adding the AI response to the collection of
    // prompts to preserve "state"
    options.Messages.Add(new ChatMessage(
        ChatRole.Assistant, currentResponse));
}

In this version, you first set up a system prompt. Then, you enter a loop (until the user types EXIT) and collect a new user prompt, add it to the collection of prompts, send it to the AI, retrieve the assistant prompt back (the answer), and display it in the console (Figure 4). You then add the assistant prompt to the collection of prompts, add the next user prompt, and repeat the process. This way, all of the relevant context is available to the AI on every call, and thus it can generate meaningful responses as if it remembered the entire conversation.

Figure 4: A more sophisticated and entertaining version of our chatbot can hold an ongoing conversation in a humorous way.

You may notice that the approach of adding to the queue of prompts could create a very large amount of text quite quickly. As I've already pointed out above, language models have size limitations. You'll find yourself constantly wrestling with this limitation. Even the best language models are quite limited in that respect. So, what can you do if the collected size of the prompts gets too long? The simplest answer is that you start dropping user and agent prompts once they get above a certain number. (For instance, Bing's chat feature currently starts dropping prompts after five roundtrips.) Another option is to get a summary of the most important aspects of the conversation so far and stick that summary into a single prompt.

Note: There are various prompt engineering techniques that can help with this problem. For instance, instead of sending everything back and forth, you could ask the AI to summarize and rephrase the current question in a way that includes all required information. I will dive deeper into such techniques in a future article.

You may find yourself wondering how you'd extract a summary of the most significant points of a conversation. It's an extremely difficult programming task, after all. But the answer is right in front of your eyes: Ask the AI to summarize all the prompts for you. You can simply start a new chain of interaction with the AI independently of your main conversation. You can create a new system prompt that indicates to the AI that it's used to create concise summaries. You then send it the entire collection of previous prompts and add a new user prompt that asks the AI to summarize everything. This is an example where the user prompt isn't, in fact, generated by a human user, but instead, the "user" is your application. This generates a new assistant prompt with a summary of everything that has happened so far. On your main conversation, you then clear out your collection of prompts (except the system prompt) and add the new/single assistant prompt containing the summary. You proceed from that point forward with the next human-generated user prompt. This works quite well! The AI won't be aware of every single detail of the prior conversation (and neither are humans when you talk to them for a while), but it will have enough context to where it's usually not apparent that this truncation happened. In my experience, this creates quite a natural flow that will do an amazing job of making you think you're conversing with an actual person.
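As a rough sketch, such a summarization side-trip could look like the following (same assumptions as the chat example: the existing client object and "ChatBot" deployment; the prompt wording is purely illustrative):

// Sketch: ask the model to compress the conversation so far, then
// replace the accumulated history with the single summary. Assumes
// the system prompt sits at index 0 of options.Messages.
var summaryOptions = new ChatCompletionsOptions { MaxTokens = 500 };
summaryOptions.Messages.Add(new ChatMessage(ChatRole.System,
    "You create concise summaries of conversations."));

var history = string.Join("\n", options.Messages
    .Where(m => m.Role != ChatRole.System)
    .Select(m => $"{m.Role}: {m.Content}"));

summaryOptions.Messages.Add(new ChatMessage(ChatRole.User,
    $"Summarize this conversation:\n{history}"));

var summary = await client.GetChatCompletionsAsync(
    "ChatBot", summaryOptions);

var systemPrompt = options.Messages[0];
options.Messages.Clear();
options.Messages.Add(systemPrompt);
options.Messages.Add(new ChatMessage(ChatRole.Assistant,
    summary.Value.Choices[0].Message.Content));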
Understanding Tokens

One of the questions as yet unanswered is "when is a prompt sequence too long?" Each language model has a certain text size limitation. This includes all the text going into and out of the model. In other words, limitations are a combination of input and output. Let's say the size limitation is 1,000 characters and you send it a prompt that's 200 characters long: The response can't be longer than 800 characters.

However, limitations aren't measured in characters or words. Instead, they're measured in "tokens." A token is a mathematical representation of a word or a text sequence such as special characters, line feeds, white space, and so on. There's no simple answer to how characters and words are converted into tokens. There are some useful estimates, such as "a token, on average, encodes about four characters" or "think of tokens as syllables." Such estimates give you a rough idea and some context, but they can also be wildly wrong.

The correct answer is to convert text into tokens to get the actual and accurate result. This can be done using a "tokenizer." In Python, you can use a library called TikToken. In C#, the equivalent is—wait for it—SharpToken. You can add SharpToken as a NuGet package and then use it to encode text like this:

var encoding = SharpToken.GptEncoding
    .GetEncodingForModel("gpt-4");
var tokens = encoding.Encode(text);
var tokenCount = tokens.Count;
Note that you need to choose the right token encoding for the model you're using. However, for any of the modern GPT models, the "gpt-4" encoding works well (there's no separate encoding for GPT 3.5).

This gives you a list of integer numbers ("token" is a fancy term for "integer number," in this case). All LLMs operate on such tokens rather than the actual text—after all, all artificial intelligences are fancy math processors, and that requires numbers. You could turn tokens back into the original text like so:

var encoding = SharpToken.GptEncoding
    .GetEncodingForModel("gpt-4");
var text = encoding.Decode(tokens);

For our purposes, however, you'll just use this to figure out the exact length of the prompts. If you calculate the tokens for every prompt in the sequence, you know when you go above the published maximum token size for the model you've chosen (see Microsoft's model reference already mentioned above). When you detect a problem, you can start truncating or summarizing the prior history as described above. Voila!

In case you're interested in what tokens look like, the text "Hello, world!" results in the following integer tokens:

9906, 11, 1917, 0

One part of all of this that's still a bit fuzzy is that the maximum token count includes the output as well. Let's say that you use the gpt-35-turbo model, which has a documented maximum token count of 4,096. Applying a very rough estimate, this is about 3,000 words. (The largest token count currently supported by any OpenAI model is 32,768 in the gpt-4-32k model. Not surprisingly, it's also by far the most expensive model to use.) If you use the tokenization approach to determine that you've already used up 3,700 tokens (to make up an example), that leaves a maximum response size of 396 tokens (roughly 300 words). To not create an error response, you then have to limit the response size to a maximum of 396 tokens. This can be done by setting the corresponding property on the API. Not everything in OpenAI is specified in plain English. There are a handful of properties, such as size limits or randomization settings, that have an impact on the predictability of the generated response. See below for an example.
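Here is a sketch of that property in action, combining SharpToken with the options object from the chat example. The 4,096 limit is the documented gpt-35-turbo maximum; the safety margin is an arbitrary choice of mine:

// Sketch: measure the accumulated prompts and give the response
// whatever budget remains, minus a small arbitrary safety margin.
var encoding = SharpToken.GptEncoding
    .GetEncodingForModel("gpt-4");

var usedTokens = options.Messages
    .Sum(m => encoding.Encode(m.Content ?? "").Count);

const int modelLimit = 4096;
options.MaxTokens = Math.Max(1, modelLimit - usedTokens - 50);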
Streaming Results

In the examples shown so far, you've fired a request into OpenAI and waited for the complete response to be provided. This is known as "synchronous access." However, when using the ChatGPT application, users observe a different behavior, where word after word appears in the user interface, almost as if the AI actually spoke one word after another. This is a pattern that's quite natural and one that users seem to have already come to expect, even though this entire paradigm hasn't been around very long. This behavior is known as "streaming."

Using the C# API, it's relatively simple to create this sort of streaming behavior. Listing 1 shows the complete implementation.

This creates output that's more pleasing and natural, as it eliminates potentially large wait states (it can take several seconds to produce responses, and often more for very large responses). It's an easy way to give the user the impression that the response was instantaneous.

With that said, I haven't quite made up my mind about whether this is an approach I want to use going forward. I've already mentioned above that LLMs can produce incorrect output or outright "hallucinate." (I'm not a big fan of this term because I feel it gives too much agency to AIs. After all, we're not dealing with people or intelligent beings. We're dealing with statistical math engines that produce statistically probable text output. They do not "hallucinate." It simply turns out that sometimes nonsense is statistically the most likely answer.) I wonder if we'll get ourselves into trouble by letting LLMs dump "stream of consciousness" types of answers on users without first checking the result. Checking the result is quite possible (as discussed above), but to do so, you first need to have the entire result, then check it, redo it if you're not happy with the result, and then show it to the user. This is contrary to streaming output. However, if the response takes a while, you'll have to employ other user interface tricks to keep the user happy and fool them into thinking the response was quicker than it really was. After all, you don't want a user interface that appears to be doing nothing for more than a second.

At this point, I'm assuming that we'll use both streaming and non-streaming interfaces. When it really must be right (as is often the case in enterprise scenarios), we probably won't be able to stream. On the other hand, there are plenty of scenarios where accuracy isn't as important (such as when using AI to help us write an email or a marketing text), in which streaming is an easy way to create a good user experience.
Listing 1: A complete chat implementation with streaming response.

using System.Text;
using Azure;
using Azure.AI.OpenAI;

var client = new OpenAIClient(
    new Uri("https://<APPNAME>.openai.azure.com/"),
    new AzureKeyCredential("<YOUR API KEY>"));

var options =
    new ChatCompletionsOptions { MaxTokens = 1000 };

options.Messages.Add(new ChatMessage(
    ChatRole.System,
    """
    You are a friendly and funny chap,
    used to amuse an online audience.
    You respond as if you were John Cleese from
    Monty Python but with an old-English, medieval
    way of talking.
    """));

while (true)
{
    Console.ForegroundColor = ConsoleColor.White;
    Console.Write(">> ");
    var prompt = Console.ReadLine();
    if (prompt?.ToLower() == "exit") break;

    options.Messages.Add(new ChatMessage(
        ChatRole.User, prompt));

    Console.ForegroundColor = ConsoleColor.Cyan;
    Console.WriteLine();

    var response = await
        client.GetChatCompletionsStreamingAsync(
            "ChatBot", options);

    var sb = new StringBuilder();
    await foreach (var choice in
        response.Value.GetChoicesStreaming())
    {
        await foreach (var message in
            choice.GetMessageStreaming())
        {
            if (!string.IsNullOrEmpty(message.Content))
            {
                Console.Write(message.Content);
                sb.Append(message.Content);
            }
        }
    }

    Console.WriteLine();
    Console.WriteLine();

    options.Messages.Add(new ChatMessage(
        ChatRole.Assistant, sb.ToString()));
}
Adding Your Own Data

We have now created a rather nice chatbot that can converse coherently and probably fool people into thinking they are talking to a person. This is impressive! But it can only hold interest for a short period of time. To make a truly useful artificial intelligence, or even a Copilot, you need to make the AI understand your specific needs, data, and institutional knowledge.

SPONSORED SIDEBAR: AI Training

Our CODE Training division was the first training organization that offered in-depth training about how to create Copilots for your own applications. We're now regularly scheduling these types of training classes (and others), which can be attended online, in-person, or be delivered, customized, at and for your organization. Find out more at codemag.com/Training.

Let's create an example scenario, in which you imagine that you're running a vacation rental business that rents out properties to vacationers all over the world. Vacationers may have many questions about the properties they rent, as well as the area they are visiting. I'm sure everyone in the vacation rental business must have answered questions such as "How do I turn on the air conditioning system?" or "How do I operate the safe?" a million times and would rather offload such tasks to an AI that's available 24 hours a day and doesn't get annoyed easily.

In his article in the July/August 2023 issue of CODE Magazine, Wei-Meng Lee created an example using the Python LangChain package to index his entire collection of CODE Magazine issues, so the AI could answer specific questions related to everything CODE has ever published. This is very useful, but for the vacation rental example, you need to go a step further. Wei-Meng created a static index of all magazines and then kept using it (an approach that is useful for some scenarios), but you need to use data that is current up to the minute. Furthermore, and more importantly, you need to apply additional logic to create a correct and meaningful answer. For instance, it's not useful in this scenario to index the documentation you have for all air conditioning systems in all of the vacation homes. Instead, you need to know which property the vacationer has rented, whether it has an air conditioning system (or whether the vacationer is authorized to use it), and then only use this information for the AI to create an accurate answer. (Another concern that applies in very many scenarios is security and access rights.)

A great way to support such a scenario is a pattern known as Retrieval Augmented Generation (which results in the somewhat unfortunate acronym RAG). The general idea is that you first let the user enter a question, such as "How do I turn on the AC?" Then, you must retrieve all of the information you have about the air conditioning system specific to the property the user has rented. You then take all that information, hand it to the AI, and let the AI answer the original question.

To make a truly useful AI, you need to make it understand your own data and institutional knowledge.

In some ways, this is easier said than done. First, you need to detect the user's intent. Do they want to just chat about something LLMs know (such as "what is an air conditioning system?"), or do you need to retrieve different types of information? In this example, intent detection is relatively trivial, assuming you're creating an AI specifically for answering such questions. The intent is always that you need to find documents related to the property they rented. (Similarly, Bing Chat always knows that the intent includes some sort of web search before the question can be answered.) Therefore, intent detection isn't a big issue for this scenario. However, I'm mentioning intent detection here, because many Copilot scenarios must do some heavy lifting around this concept, and I'll explore this in a future article. (For instance, a user's question in a business application may have to do with customers or with product databases or with invoices or… you get the idea. To have an AI answer such questions, you first
need to figure out what domain the user's input relates to. AI can help answer that, but it isn't trivial.) It's generally safe to assume that for any AI Copilot scenario, intent detection is the first priority for any request.
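When you do need explicit intent detection, the model itself can serve as the classifier. The sketch below is merely an illustration: the domain names and prompt wording are mine, and it reuses the client and "ChatBot" deployment from the earlier examples.

// Sketch: ask the model to classify the question into one of a few
// known domains before deciding what data to retrieve. All names
// here are illustrative.
var intentOptions = new ChatCompletionsOptions { MaxTokens = 5 };
intentOptions.Messages.Add(new ChatMessage(ChatRole.System,
    "Classify the user's question as exactly one of these words: " +
    "PROPERTY, LOCAL-AREA, CHITCHAT."));
intentOptions.Messages.Add(new ChatMessage(
    ChatRole.User, "How do I turn on the AC?"));

var intent = await client.GetChatCompletionsAsync(
    "ChatBot", intentOptions);
var domain = intent.Value.Choices[0].Message.Content.Trim();
// "PROPERTY" would trigger the document retrieval described next.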
Because you know that the user's intent includes searching for information you may have, the next step is to figure out which documents apply. But how can you do that? If you had a bunch of text descriptions or even documents in a SQL Server database, you could retrieve them from there, but how would you turn the question "how do I turn on the AC?" into a SQL Server query that returns appropriate documents? After all, SELECT * FROM Documents WHERE Body = 'how do I turn on the AC?' isn't going to return anything useful.

When it comes to this new world of AI development, I've discovered that things are a lot easier if your system has access to other AI-powered services. For instance, if you use Azure Cognitive Search, it would be somewhat easier to return a list of documents that match the criteria. Not just that, but it could return such documents in a ranked fashion, with the most important ones being returned first. You could send it a search term such as "air conditioning manual" and filter it by the name of the property the vacationer has rented. This leaves us with the problem of turning the original question (which might be much longer and include other aspects, such as "how do I turn on the AC and how much will it cost me to do so?") into a useful search term (such as "air conditioning manual pricing") that will return useful information. (Note that the user may use the term "AC" and you may have to search for "air conditioning.")

Again, the answer becomes clear once you've adopted the new style of development in this new world driven by AI: You ask a large language model to come up with a useful search term. A user prompt such as this will give you a starting point:

Your task is to create a search term useful for
Azure Cognitive Search. Provide a search term
for the following prompt:

How do I turn on the AC and how much will it
cost me to do so?

As you can see, this prompt is "engineered" to include a request related to what the user has entered as well as the user's original question. This prompt can be created by simply creating the string in memory. I find it easier to create at least a simple text template engine that helps with this task. I've created one internally that supports the following syntax. (Note: EPS may make it available as an open-source engine if there's enough interest. Feel free to contact me if you feel it would be useful for the community.) Using such an engine, you might be able to create a template like this to create the output string:

Your task is to create a search term useful for
Azure Cognitive Search. Provide a search term
for the following prompt:

{OriginalQuestion}

Note that there are some components out there that already do something similar, such as LangChain in Python. Microsoft's announced (but unavailable as I write this article) Prompt Flow engine presumably also supports similar syntax.
If you fire this prompt (completely invisible to the user) into OpenAI, it creates a useful result that can then be used to retrieve documents from Azure Cognitive Search (or similar services, such as Amazon's Elastic Search). However, responses are often somewhat unpredictable. Sometimes, you get a response such as the desired "air conditioning manual price." Sometimes, the response is wordier, as in "a good search term might be 'air conditioning manual price,'" which is correct, but not that useful. Sometimes it provides the first answer, but wraps it into double quotes, which has special meaning to search engines and thus creates wrong results.

How can you handle this problem and what creates it in the first place? For one thing, you can turn the "Temperature" down to 0. "Temperature" is a parameter you can send to an LLM. It's a value between 0 and 2 (usually defaulted to 0.7) that defines the randomness of the response. If you turn this parameter to 0, the response to the same question will be predictable and consistent. The higher the value, the more random the response becomes. High randomness is useful if you want to write stories, but in technical scenarios like this, you want high predictability, so I recommend turning randomness to 0 for this operation.
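With the Azure SDK used throughout this article, temperature is just another property on the options object. A sketch, reusing the engineered search-term prompt and the client from above:

// Sketch: turn randomness off for technical helper calls such as
// search-term generation. Temperature ranges from 0 (predictable)
// to 2 (very random).
var searchTermOptions = new ChatCompletionsOptions
{
    MaxTokens = 50,
    Temperature = 0f
};
searchTermOptions.Messages.Add(new ChatMessage(
    ChatRole.User, prompt)); // the engineered search-term prompt

var term = await client.GetChatCompletionsAsync(
    "ChatBot", searchTermOptions);
Console.WriteLine(term.Value.Choices[0].Message.Content);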
Another problem lies in the core nature of this request. It's what is referred to as a "zero-shot prompt." This means that you simply sent a request to the AI but provided "zero guidance" (or "zero examples") as to what you wanted in return. Therefore, it tries its best in coming up with a response, but it will be somewhat different in nature for each request based on the question the user asked. Try to unit test that!

In my experience, far better results are achieved with prompt engineering techniques known as "one shot," or, better yet, "few shot." In a "few-shot prompt," you provide a few examples in addition to the question. Therefore, I propose that you use the following prompt template:

Your task is to create a search term useful for
Azure Cognitive Search. Provide a search term
for the following prompt:

Q: How do I open the safe?
A: safe manual

Q: How do I unlock the door with the keypad?
A: keypad door operation

Q: What attractions are there around the property
and what do tickets cost?
A: local attractions

Q: {OriginalQuestion}
A:

In this example, I provided patterns of what I expect. Those patterns may not be very close to the questions the user asks, but they still provide enough guidance to the AI to drastically improve the response. Also, because it establishes a pattern of questions (Q)
and answers (A), the returned response will be similar to what's in the provided answer examples (which, for instance, do not have quotes around them). It doesn't matter how you format your few-shot prompt. For instance, instead of using "Q:" and "A:" to indicate questions and answers, I could have used "User:" and "Search Term:". It just matters that I provide a few examples of what I expect. (The shorter version is preferable, because the API charges by token count.)

This may seem strange at first. But hey! Welcome to the new world of AI! <g> These types of prompt engineering techniques are among the critical skills to learn when it comes to AI programming. I'm sure CODE will produce quite a few articles and blog posts about these sorts of techniques for years to come.

There's one more interesting aspect when it comes to finding a good search term, and that relates to the model you use. When starting with LLM development, the instinctive reaction is to always use the most powerful model. But the more powerful the model, the slower it is and the more each call costs. API calls are usually charged for in 1,000-token increments. They're cheap individually, but they can add up quickly! Therefore, choosing the right model is often critical for performance and economics.

Because you don't need the AI to write a novel that can win the Pulitzer Prize in response to a request to find a good search term, you can probably do with a less capable model that's cheaper and faster. As more and more models become available, perhaps there may even be models in the future that aren't as crazy resource-intensive and can run on local computers without incurring any API expenses. Whatever the case may be, a good prompt engineer and AI developer will consider these tradeoffs on each call to the AI. A single question a user asks may invoke several different models behind the scenes to ultimately come up with a good answer. For this specific need, I encourage you to experiment with some of the simpler models to find a search term before you then use something like GPT3.5 or GPT4 to follow the next steps I'm about to describe.

Augmenting Prompts

Now that you have a search term, you can fire that search term into Azure Cognitive Search and retrieve a list of ranked documents.

Note: This assumes that you already have a list of documents in Cognitive Search (or a similar index). Creating such an index is beyond the scope of this article, but the short version is that you can create a Cognitive Search resource in Azure (similar to OpenAI resources, App Services, and many others) and then add documents and other content either manually or by having it automatically sync with various data sources. This allows you to add anything from PDFs and HTML documents all the way to database content. Some of this is fully automated, although I often find that it's useful to hand-craft indexes to achieve the best results for AIs. I'll explore this in detail in a future article. For now, let's assume that there's such a search index and you've retrieved a list of information from it based on the search term you had the AI generate above.

At this point, you need to generate a final user prompt that you can hand over to the AI to generate a meaningful answer. You do this by injecting or "augmenting" the prompt with the information you retrieved before a response is generated (thus the name "Retrieval Augmented Generation"). In its simplest form, you create a prompt string that literally includes the content of the documents you retrieved. Showing the complete prompt might take up several pages in this magazine, so let's look at a conceptual prompt template instead:

Based on the following documents,
please answer the question below:

<% foreach document in documents %>
====
{document.Title}
{document.Content}
<% end foreach %>

Answer this question: {OriginalQuestion}
I think you can see where this is going: You create a large prompt with all the content you retrieved from the search. At the very beginning, you state what you want the AI to do. This is followed by each document with its title and actual content. (Note that I added an arbitrary separator sequence of four equal signs before each document. This is another prompt engineering technique I learned from experience that seems to help the AI make sense of what I give it.) Finally, I ask the AI a second time (yet another technique that usually improves the result) to answer the original question the user had, and then the question (such as "how do I turn on the AC?") is merged into the prompt.

The result is output that is "grounded" in the specific information you provided. It will be of very high quality and far less prone to hallucinations than what you'll encounter in raw ChatGPT. Also, the information you provided will remain private and secure. It won't be passed on to OpenAI for further model training or anything of that kind. (Figure 5 shows CODE's internal knowledgebase, which now features our own built-in CODE Copilot. It was built on this very technique.)

Figure 5: CODE's internal knowledgebase now uses a Copilot to help our people with a variety of questions. It's based on the techniques introduced in this article.

One aspect I haven't talked about yet is that of size. Again, you're up against token limits, and information returned from search services can be lengthy. However, it's also ranked, meaning that the most applicable results are returned first. I have used a simple token count mechanism where I add up documents as long as I'm under the token limit and then ignore the rest. After all, it's less important. This works in many scenarios. Another option is to make multiple calls to the AI to summarize each document, so it's much shorter yet maintains the important parts. This way, you can inject far more information, but it is also more resource intensive.
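As a rough sketch, that assembly step, combined with the token guard just described, could look like this. The rankedResults collection (with Title and Content properties) and originalQuestion are hypothetical stand-ins, and the 3,000-token budget is an arbitrary example:

using System.Text;

// Sketch: build the augmented prompt from ranked results until the
// token budget runs out. "rankedResults" and "originalQuestion" are
// hypothetical stand-ins for your own search output and user input.
var encoding = SharpToken.GptEncoding.GetEncodingForModel("gpt-4");
var sb = new StringBuilder(
    "Based on the following documents,\n" +
    "please answer the question below:\n\n");

var budget = 3000; // leave room for the question and the answer
foreach (var doc in rankedResults) // most relevant first
{
    var section = $"====\n{doc.Title}\n{doc.Content}\n";
    budget -= encoding.Encode(section).Count;
    if (budget < 0) break; // drop the less relevant rest

    sb.Append(section);
}

sb.Append($"\nAnswer this question: {originalQuestion}");
var finalPrompt = sb.ToString();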
other authors and presenters are creating similar content.
In future articles, I’ll explore the details of making search
index use more efficient. One way to do so is to tune what I’m very excited! In the thirty years I’ve spent in the
gets indexed. For instance, the indexed document can be software industry, I’ve never experienced anything quite
summarized when it’s saved into the index, and therefore, like this. The pace of development is incredible and even
a summarized and smaller version can be always available somewhat intimidating. At the same time, the surprising
if needed. Another technique I’ll explore in future articles results AI-driven software can produce are fascinating. It
is to handcraft indexing mechanisms for rectangular data doesn’t happen often that software you wrote completely
(business data stored in databases such as SQL Server, or surprises you in the capabilities it exhibits, but that’s
invoices, products, and so on) so it can be found by AI. exactly what happens on a regular basis with software
that uses LLMs as a backbone. It is easily the most fun
I’ve ever had in development.
Using Pre-Baked RAG Features
The pattern I just introduced is useful for quite a few scenarios, which is why I went through it in a manual fashion here. I predict that you'll spend quite a bit of time implementing the RAG pattern for a wide variety of uses. With that said, it isn't always necessary to do this by hand. The example I've shown above could have also been automated away completely. Microsoft has introduced features in Azure that allow combining OpenAI LLMs with Cognitive Search directly. Using this approach (which is currently available in a preview version), you can automatically link Azure Cognitive Search to OpenAI, making much of this happen transparently and without knowing how the RAG pattern works. It can be used to add anything in Cognitive Search as well as data stored in Azure Storage and, to a limited extent, even in SQL Server.

How well this will work remains to be seen once Microsoft makes more of these features available to us. I predict […] fundamental techniques required to build Copilot-powered applications. Admittedly, there remain many unanswered questions, such as "how do you create a search index on SQL Server data?" or "how do I go beyond a command-line user interface?" Similarly, there are specific tasks to which solutions may not be immediately apparent, such as summarizing large blocks of information or detecting user intent in scenarios that require consolidation of many different types of data, information, and features.

These topics are beyond the scope of this article, but not beyond the scope of what we intend to cover in CODE Magazine in upcoming issues. In fact, I anticipate that we will continuously publish a wide range of articles that all revolve around this new world of software we now all live in. I hope you'll stay tuned for my own articles, blog posts, webinars, and presentations on the subject, and you'll find that many other authors and presenters are creating similar content.

I'm very excited! In the thirty years I've spent in the software industry, I've never experienced anything quite like this. The pace of development is incredible and even somewhat intimidating. At the same time, the surprising results AI-driven software can produce are fascinating. It doesn't happen often that software you wrote completely surprises you in the capabilities it exhibits, but that's exactly what happens on a regular basis with software that uses LLMs as a backbone. It is easily the most fun I've ever had in development.

I anticipate a fascinating journey. I hope you'll join me!

Markus Egger


ONLINE QUICK ID 2309071

Vite and Progressive Web Apps


Many web developers use Vite to build their web applications, but as requirements change, you need to be able to make apps out of your websites. Luckily, Vite has a solution for you. At a high level, Vite is a development build environment that abstracts details from the underlying project framework. Because of this abstraction, you can achieve key features like packaging, offline use, and being able to install a website as an app, and you can deal with offline usage, caching, and other features outside the JavaScript framework you're using. In this article, I'll show you how.

Shawn Wildermuth
[email protected]
wildermuth.com
@shawnwildermuth

Shawn Wildermuth has been tinkering with computers and software since he got a Vic-20 back in the early '80s. As a Microsoft MVP since 2003, he's also involved with Microsoft as an ASP.NET Insider and ClientDev Insider. He's the author of over twenty Pluralsight courses, has written eight books, and is an international conference speaker and one of the Wilder Minds. You can reach him at his blog at http://wildermuth.com. He's also making his first feature-length documentary about software developers, called "Hello World: The Film." You can see more about it at http://helloworldfilm.com.

What's a Progressive Web App
Although there's always been a difference between building mobile apps and building websites, Progressive Web Applications (PWAs) are meant to fill the gap between them. The basic features that PWAs offer to web developers include:

• Installation: Can be installed from the web or submitted to app stores. Web assets are installed on the local OS.
• Platform-agnostic: Can integrate with host OSs to give the user a more native experience.
• Offline: Supports running the PWA offline so you don't need a persistent network connection to launch the application.
• Background processing: Supports multiple threads via Service Workers.
• Versioning: PWAs can have a consistent way to see if the code needs to be updated.
• Protocol handlers: Associate files with the PWA.

To be clear, PWAs aren't magical. They're just web technologies, so they have limitations about performance, and theming your application to look like the underlying OS is up to you. PWAs aren't meant to replace native apps or app frameworks like Flutter, MAUI, and Xamarin.

For a more in-depth explanation of PWAs, see Chris Love's article: https://www.codemag.com/Article/1805091/Introducing-Progressive-Web-Apps-The-No-Store-App-Solution.

For a long time, you've been able to install a website as an application in most browsers. For example, in Edge (or any Chromium browser), you can add a website as an app, as seen in Figure 1.

This just installs the website in an OS window. It asks for metadata because it only has the <title></title> as the name of the app, as seen in Figure 2.

This is great for certain projects, but PWAs extend this idea even further. Although hosting a website in its own window does some of what PWAs do, this feature is not the same as a PWA unless you write specific code. As discussed earlier, PWAs can also use service workers for background processes, enable offline mode, and do updates in a more standard way. But this article is about using PWAs in your Vite project, so let's talk about Vite.

Vite
Vite (rhymes with 'beat') is taking the JavaScript world by storm. It presents a way to develop your applications in a very quick fashion. What might not be obvious is that Vite isn't a particular framework or library; it's simply a tool for running an application and reacting to changes. It isn't dependent on any specific way you write your own code. Instead, it's a development-time tool for working with JavaScript or TypeScript.

Vite has a plug-in that can implement PWAs for you. The plug-in is called vite-plugin-pwa. The plug-in is specific to Vite, so it doesn't matter what web framework you're using. It's agnostic to the underlying application. So, this works if you're using Vite for React, Vue, Svelte-Kit, or even Vanilla JS. I like this approach because I can learn to handle a PWA in one place and apply it to multiple technologies.

Next up, let's add PWA support to a Vite project. You can get the starting project from https://github.com/wilder-minds/vite-pwa.

Installing the PWA Plug-in
The PWA plug-in is just another Vite plug-in. You can install it via NPM like this:

npm install vite-plugin-pwa --save-dev

The purpose of the VitePWA plug-in is to enable creation of a manifest.webmanifest file and JavaScript files to set up and run the service worker. The VitePWA plug-in automatically creates these files without you having to understand the nature of the files.

This installs the actual package but doesn't configure it. Once the package is installed, you need to configure it in the vite.config.js file. You'll need to import the VitePWA object into the file and call it to add the plug-in:

import { fileURLToPath, URL } from 'node:url'

import { defineConfig } from 'vite'
import vue from '@vitejs/plugin-vue'
import { VitePWA } from "vite-plugin-pwa";

// https://vitejs.dev/config/
export default defineConfig({
  plugins: [
    vue(),
    VitePWA()
  ],
  ...
})

This enables the plug-in, but only in actual Vite builds. If you start the Vite preview feature, you'll be able to look at the Application tab in the browser's developer tools, as seen in Figure 3.

These errors occur because you haven't started configuring the plug-in. To do this, you can send in an options object to the VitePWA plug-in:

export default defineConfig({
  plugins: [
    vue(),
    VitePWA({
      manifest: {}
    }),
  ...

Figure 1: Installing a website as an app
For most people, you'll want to be able to test and debug this in development mode. So the first real configuration that I'd suggest you do is to enable devOptions.enabled:

export default defineConfig({
  plugins: [
    vue(),
    VitePWA({
      manifest: {},
      devOptions: {
        enabled: true
      }
    }),

Figure 2: Installing a website as an application

With this switch turned on, the manifest and associated files are created for you. In order to enable installation as a PWA, you'll need, at a minimum, a 512x512 icon. You can add several sizes of icons, but let's start with the first icon inside the manifest options:

export default defineConfig({
  plugins: [
    vue(),
    VitePWA({
      manifest: {
        icons: [
          {
            src: '/icons/icon-512x512.png',
            sizes: '512x512',
            type: 'image/png'
          }
        ]
      },
      devOptions: {
        enabled: true
      }
    }),

If you run the Vite project (e.g., "vite dev"), you'll see that the app is now installable, as seen in Figure 4.

Figure 3: The empty manifest file

You'll notice that the browser has named the app "vite-pwa". This is the default name. If you open the dev-tools



in the browser, you can click on the Application tab and see information about the manifest, as shown in Figure 5.

To set this up, you'll want to add the identity properties in the manifest configuration:

VitePWA({
  manifest: {
    name: "Bechdel Films",
    short_name: "Bechdel",
    description: 'Simple app to view films',
    icons: [
      {
        src: '/icons/icon-512x512.png',
        sizes: '512x512',
        type: 'image/png'
      }
    ]
  },
  devOptions: {
    enabled: true
  }
}),

Figure 4: The installation button
With these changes, you’ll need to stop and restart the
“vite dev” call in order for the VitePWA plug-in to recreate
the manifest. Once you do that, you’ll see the information
in the browser tools, as seen in Figure 6.

With the basic metadata complete, you can see how installing the app works.

Installing a PWA App


Now the fun part starts. You can install the app using
that magic button in the address bar. This prompts you to
install the site as an application, as shown in Figure 7.

Once installed, depending on which browser and OS you use to install it, you'll get a prompt about how to use it in the system. As seen in Figure 8, Edge allows you to install it into the Windows Start Menu, Desktop, and Taskbar.

Once you install it, you'll be able to launch it from the device/OS (as seen in Figure 9, on Windows).

Figure 5: The default name

Installing the PWA is only the beginning. Now you need to make it act like an application.

Offline Support
Although you can certainly create a native application that doesn't support offline usage, a PWA has different requirements. While in the browser, caching can help load certain assets (e.g., HTML, JS, CSS), but typically, this still depends on checking the server for a new version even if there's a cache. In PWAs, all the assets to load the page need to be stored for offline use. To do this, it uses Cache storage. If you run the installed app, you can still load the dev tools. With this view, you can see the cache storage that's being used, as seen in Figure 10.

Figure 6: A working manifest file

This cache storage stores the assets necessary to run the project. You'll notice that the cache storage is called "workbox-precache." But what's the workbox? The workbox is simply a set of parameters inside which your application runs. This is important as it defines how the service worker interacts with the application and the server. Luckily, the PWA plug-in allows you to configure this.
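As a quick aside (my own example, not from the article), you can poke at that cache storage yourself from the dev-tools console using the standard Cache Storage API:

// List the caches the service worker has created.
const names = await caches.keys();
console.log(names); // e.g., a "workbox-precache-..." entry

// Open the first one and see which URLs it has stored.
const cache = await caches.open(names[0]);
const requests = await cache.keys();
console.log(requests.map(r => r.url));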

By default, the PWA plug-in caches HTML, JS, and CSS. If you need other file types, you can configure the workbox to include other files:

VitePWA({
  ...
  workbox: {
    globPatterns: ["**/*.{js,css,html,pdf}"]
  }
})

Figure 7: Installation prompt

Figure 8: Installation options.

This allows you to help cache assets. In general, if an asset is in the src/assets or public folders, it will be cached by the PWA.

Although supporting the site assets is a central part of the workbox, inevitably most websites also need to call APIs. The PWA plug-in allows you to configure runtime caching. First, you'll need to tell the plug-in which calls to cache by using the urlPattern property:

workbox: {
  globPatterns: ["**/*.{js,css,html,pdf}"],

  runtimeCaching: [{
    urlPattern: ({ url }) => {
      return url.pathname.startsWith("/api");
    }
    ...

Figure 9: Installed on the OS

In this example, I'm caching any JavaScript calls whose URL starts with "/api". For me, this includes any API calls. Again, this caching strategy is helpful for offline support: the JavaScript calls to fetch are intercepted and answered with the cached data when the real API isn't available. To make this work in the way you want, you need a couple more pieces of configuration.

First, define the cache handler:

workbox: {
  globPatterns: ["**/*.{js,css,html,pdf}"],

  runtimeCaching: [{
    urlPattern: ({ url }) => {
      return url.pathname.startsWith("/api");
    },
    handler: "CacheFirst" as const,

Figure 10: HTML page caching


In this example, I'm using the CacheFirst handler. This handler type is defined in Chrome's workbox documentation. The most common handlers include:

• CacheFirst: Read the cache before falling back to calling the network.
• CacheOnly: Only read from the cache; never fall back to the network.
• NetworkFirst: Call the network first and, if it fails, fall back to the cache.
• NetworkOnly: Always call the network; don't use the cache.

Finally, you'll want to add some options to the cache:

workbox: {
  globPatterns: ["**/*.{js,css,html,pdf}"],

  runtimeCaching: [{
    urlPattern: ({ url }) => {
      return url.pathname.startsWith("/api");
    },
    handler: "CacheFirst" as const,
    options: {
      cacheName: "api-cache",
      cacheableResponse: {
        statuses: [0, 200]
      }
    }
  }]
}

Although there are a number of options, naming the cache and specifying which HTTP status codes to actually cache is really common. By naming the cache, you can easily see the caching in the Application tab of the dev tools, as in Figure 11.

Figure 11: API caching in action
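If your API data changes frequently, CacheFirst can serve stale responses. As a variation, here's a sketch of my own (not from the article) of the same route using a NetworkFirst strategy instead; the networkTimeoutSeconds and expiration options come from Workbox's runtime-caching configuration, so verify them against the Workbox documentation for your version:

runtimeCaching: [{
  urlPattern: ({ url }) => url.pathname.startsWith("/api"),
  // Try the live API first; fall back to the cache
  // if the network doesn't answer within 3 seconds.
  handler: "NetworkFirst",
  options: {
    cacheName: "api-cache",
    networkTimeoutSeconds: 3,
    // Keep the offline copy from growing without bound.
    expiration: {
      maxEntries: 50,
      maxAgeSeconds: 60 * 60 * 24 // one day
    },
    cacheableResponse: {
      statuses: [0, 200]
    }
  }
}]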

In order to see this work, you might need to force the service worker to update on reload (instead of using the cached version), as seen in Figure 12.

Figure 12: Forcing the Service Worker to update on page reload

Now that you have that working, let's talk about handling updates to the application.

Configuring Updates
When the network is available and the code or markup has changed, you'll need a way of updating the cached code. By default, the behavior is to prompt the user to update the application by recreating the cache from the server. To get this behavior, you don't need to configure this option.

If you want the application to just get the newest version on every launch/reload, you can specify that the registration is to auto update by adding another configuration detail:

VitePWA({
  ...,
  registerType: 'autoUpdate'
})

For most applications, autoUpdate is the correct behavior. To be clear, this doesn't just invalidate the markup, code, and CSS caches; it invalidates the runtimeCaches as well. Clearing these caches is almost always the option you want, and it's the default. It's strongly recommended that you don't change this behavior.

Source Code
The source code can be downloaded at https://github.com/wilder-minds/vite-pwa

Where Are We?
Creating, building, and debugging PWAs can be straightforward, no matter the underlying framework. The PWA plug-in for Vite represents a platform-agnostic way to accomplish creating a PWA and implementing many of the features at a higher level than inside your application. The plug-in should simplify how you want to support PWAs in your applications.

Shawn Wildermuth



ONLINE QUICK ID 2309081

Authentication in Laravel, Part 2: Token Authentication
In my previous article (May/June 2023 CODE Magazine), I introduced Authentication in Laravel. Laravel uses guards to determine
an authenticated user by specifying which authentication provider should be used for authentication. Guards are defined in the
config/auth.php file and can be customized to fit the needs of your application. Several types of guards are available in Laravel,

including session, token, and stateless guards. In the previous article, I covered authenticating users with the Session Guard in detail. In this article, I'll cover the Token Guard and explore how to authenticate a user in Laravel using tokens only.

Bilal Haidar
[email protected]
https://www.bhaidar.dev
@bhaidar

Bilal Haidar is an accomplished author, Microsoft MVP of 10 years, ASP.NET Insider, and has been writing for CODE Magazine since 2007.

With 15 years of extensive experience in Web development, Bilal is an expert in providing enterprise Web solutions.

He works at Consolidated Contractors Company in Athens, Greece as a full-stack senior developer.

Bilal offers technical consultancy for a variety of technologies including Nest JS, Angular, Vue JS, JavaScript and TypeScript.

Token Authentication in Laravel is typically used when building stateless APIs. The client application (such as a mobile app or a JavaScript application) needs to authenticate with the server on every request without storing any session information on the server.

Token Authentication works by issuing a token to the client upon successful authentication, which is then used to authenticate subsequent requests. The client sends this token in the request headers, and the server uses it to authenticate the user and authorize access to the requested resource.

In Laravel, Sanctum is the Token Authentication implementation used to authenticate API requests through the auth:sanctum middleware, which protects the routes under the routes/api.php file.

There are ways to implement Token Authentication in Laravel other than Sanctum. Still, being a package developed and maintained by the Laravel team, Sanctum is considered the optimal implementation for Token Authentication in a Laravel application.

What's Token Authentication?
Token Authentication is a method in which a token is exchanged between the client and server to establish and maintain authentication status. In this method, a token is issued to a user after successful authentication. It's then used for subsequent authentication attempts instead of sending the user's credentials, such as username and password, with each request.

The token is typically a string of characters generated by the server. It contains the user's identity and other relevant information that allows the server to validate the authenticity of the request. The token is usually included in the HTTP Header of each request sent by the client, and the server can use it to verify that the request is coming from an authenticated user.

Token Authentication is helpful for several reasons:

• It improves security by reducing the risk of credentials being intercepted or stolen because the user's credentials are only exchanged once, during initial authentication.
• It improves performance by reducing the amount of data that needs to be sent with each request.
• It simplifies the implementation of authentication mechanisms in complex systems and enables easy integration with other systems and applications.
• It can be used in distributed systems, where authentication needs to be performed across multiple servers without requiring the servers to share authentication data.

How Laravel Implements Token Authentication
Laravel offers two packages for Token Authentication: Laravel Passport and Laravel Sanctum.

• Laravel Passport: Passport is a full-featured OAuth2 server implementation that provides a complete solution for API authentication. It allows clients to authenticate with your API using various OAuth2 flows, including password grant, authorization code grant, and client credentials grant. Passport requires more configuration and setup than Sanctum, but it offers more advanced features for OAuth2 authentication.
• Laravel Sanctum: Sanctum is a lightweight package that provides a simple way to authenticate API requests using tokens. It's designed for single-page applications, mobile applications, and APIs that need a straightforward and easy-to-use authentication system. Sanctum uses Token Authentication, and it does not support OAuth2 authentication flows.

Both packages provide a secure and reliable way to authenticate API requests, but they differ in complexity and the features they offer. Choose the package that best fits the specific requirements of your application.

Before you begin working on your application, consider whether Laravel Passport or Laravel Sanctum would be more suitable for your needs. If your application must support OAuth2, then Laravel Passport is the appropriate choice.

On the other hand, if you're looking to authenticate a single-page application or mobile application, or generate API tokens, then Laravel Sanctum is the recommended choice. Although it doesn't support OAuth2, it offers a more straightforward API authentication development experience.

Laravel Passport
This package uses OAuth2, an industry-standard protocol for authentication and authorization, to generate and validate access tokens.



When a user logs in, Passport generates an access token and a refresh token. The access token is a JSON Web Token (JWT) containing the user's ID, the token's expiration time, and any additional data required. The refresh token requests a new access token when the current access token expires.

To generate an access token, Laravel uses a combination of the user's ID, a secret key, and the current time to develop a unique signature. This signature is then encoded as a JWT and returned to the client.

When the client makes a subsequent request, it includes the access token in the Authorization header of the HTTP request. Laravel's middleware verifies the validity of the token by checking its signature against the secret key and expiration time. If the token is valid, the request is allowed to proceed.

If the token is invalid or has expired, Laravel returns a 401 Unauthorized response. If the access token has expired, but the refresh token is still valid, the client can use it to obtain a new access token without having to log in again.

Overall, Laravel Passport is designed to be secure, efficient, and easy to use. By leveraging industry-standard protocols like OAuth2 and JWT, Laravel provides a robust and reliable authentication mechanism that can be easily integrated into any Laravel application.

Laravel Sanctum
Laravel Sanctum is a lightweight authentication system for SPAs (single-page applications), mobile applications, and Token Authentication APIs.

Laravel Sanctum helps solve two different problems: API Tokens and SPA Authentication.

API Tokens
Sanctum provides a straightforward way to create API tokens for your users. You can integrate Sanctum into your application's Account Settings page, where users can generate and manage their API tokens. These tokens usually have a long lifespan but can be manually revoked by the user. Laravel Sanctum stores user API tokens in a single database table and verifies incoming HTTP requests using the Authorization header that contains a valid API token.

Token Authentication is a method of authenticating users in a mobile application:

1. The user submits their login credentials to a Login endpoint within the Laravel application.
2. A token is generated and returned to the mobile application after successfully validating the credentials.
3. The token is stored locally and sent with every subsequent request to the Laravel API within the request header.
4. The Laravel API validates the token in the request header and performs authentication of the request accordingly.

SPA Authentication
Laravel Sanctum doesn't use tokens for Single Page Applications (SPAs) but instead relies on session cookies for authentication. Here's how Sanctum works for SPAs:

1. When a user logs in or signs up, Sanctum generates a unique, encrypted session cookie that's returned to the user's client-side SPA application.
2. The session cookie is stored in the browser's cookie storage and sent with each subsequent request to the Laravel application.
3. The Laravel Sanctum middleware validates the session cookie and checks whether it's associated with a valid user account.
4. If the session cookie is valid, the Laravel application processes the request and returns the response.
5. If the session cookie is invalid or has expired, the Laravel application returns a 401 Unauthorized response, indicating that the user needs to log in again.

Laravel Sanctum also provides mechanisms for revoking session cookies. If a user logs out or their account is deleted, any active session cookies associated with their account can be invalidated. This ensures that unauthorized users cannot use an account after it has been deactivated.

Overall, Laravel Sanctum is a powerful and flexible package for Token Authentication that can significantly simplify the building of secure SPA and mobile applications in Laravel.

Applying Token Authentication in Laravel
In this section, let's focus on how to use Laravel Sanctum to authenticate users. The examples I give include both SPAs and Mobile applications.

Create a New Laravel Application
Start by creating a new Laravel application. There are several methods for creating a new Laravel app. I chose the Docker-based one using the Laravel Sail package. You can read more about Laravel installation by following this URL: https://laravel.com/docs/10.x/installation.

Choose the method that best suits you. Before you start, make sure you have Docker running on your computer.

I'm using a MacBook Pro. I start by running this command:

curl -s \
"https://laravel.build/token-auth-laravel" \
| bash

This creates a new Laravel application on your computer under the directory named token-auth-laravel.

After the installer finishes, run the following commands:

1. Build up the Docker container:

./vendor/bin/sail up

2. Install the NPM packages:

./vendor/bin/sail npm install

3. Serve the application:



./vendor/bin/sail run dev

The application is now accessible at http://localhost. Open the URL in the browser, and you'll see the same view as in Figure 1.

Figure 1: The Laravel 10 landing page

Next, let's install the Laravel Breeze starter kit. The Laravel team provides this starter kit for scaffolding Laravel authentication and Profile management. This is my ultimate choice when starting a new Laravel project. Trust me: It saves you a lot of time! You can read more about Laravel Breeze here: https://laravel.com/docs/10.x/starter-kits#laravel-breeze.

Laravel Breeze comes in four flavors:

• Breeze & Blade
• Breeze & React
• Breeze & Vue
• Breeze & Next.js/API

I'll use Breeze & Next.js/API for this article.

Run the following commands to install Laravel Breeze:

./vendor/bin/sail \
composer require \
laravel/breeze --dev

./vendor/bin/sail \
artisan breeze:install api

./vendor/bin/sail \
artisan migrate

Laravel Breeze for API added the basic configuration settings for Laravel Sanctum and all necessary HTTP Controllers to log in, log out, register users, and more.

Laravel Breeze for API removes your application's front-end-related files and folders. This makes it suitable for projects that only serve APIs. As a result, the package.json file and other front-end files are removed.

If you wish to serve your API project using Laravel Breeze for API, you can do so by running the following command:

./vendor/bin/sail \
artisan serve

This command starts the Laravel Sail development environment and serves your API project.

Login Mobile Application Users
The routes/api.php file governs the communication between the Laravel API application and a mobile application. Let's explore the content of this file.

<?php

Route::middleware(['auth:sanctum'])
    ->get('/user', function (Request $request) {
        return $request->user();
    });

The code snippet defines a GET route at the path /api/user inside the routes/api.php file. This route is protected by the auth:sanctum middleware, meaning the user must be authenticated using Laravel Sanctum before accessing this route.

The code inside the closure function of the route definition retrieves the currently authenticated user using the $request->user() method. This method returns an instance of the authenticated user model if the user is authenticated or null if the user is not.



Listing 1: Authenticate Mobile application users.

public function __invoke(Request $request)
{
    $request->validate([
        'email' => 'required|email',
        'password' => 'required',
        'device_name' => 'required',
    ]);

    $user = User::query()
        ->where('email', $request->email)
        ->first();

    if (
        !$user ||
        !Hash::check(
            $request->password,
            $user->password
        )
    ) {
        throw ValidationException::withMessages([
            'email' => [
                'The provided credentials are incorrect.'
            ],
        ]);
    }

    return $user
        ->createToken($request->device_name)
        ->plainTextToken;
}

When a user makes a GET request to the /api/user endpoint with a valid authentication token, the closure function returns a JSON response containing the user object, which includes details such as the user's name, email address, and other information stored in the user model.

If the user is not authenticated or the authentication token is invalid, the auth:sanctum middleware automatically returns a 401 Unauthorized response, denying the user access.

When communicating with Laravel API endpoints, both the request and response payloads contain JSON data. Hence, it's essential to always include two request header keys in every request to the API:

• Accept
• Content-Type

The Content-Type header of the request should be set to application/json because you're sending JSON data in the request body. This header tells the server that the request body is in JSON format and should be parsed as such.

The Accept header, on the other hand, is used to indicate the desired response format. In this case, you're returning JSON data, so the Accept header can be set to application/json to tell the server that you expect a JSON response.
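As an illustration (my own sketch, not code from the article), a client-side request to the protected /api/user route might look like this in JavaScript; the token placeholder stands in for the plain-text token that the /api/login endpoint built below returns:

// Hypothetical client call to the protected /api/user route.
const response = await fetch('http://localhost/api/user', {
  headers: {
    'Accept': 'application/json',       // we expect JSON back
    'Content-Type': 'application/json', // we send JSON in request bodies
    'Authorization': `Bearer ${token}`  // token issued at login
  }
});
const user = await response.json();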

Let's add a new POST endpoint to log in users coming from a Mobile application. To do so, add the following route to the routes/api.php file:

Route::post('/login', LoginController::class);

Next, let's create the LoginController class as an invokable controller using this command:

./vendor/bin/sail \
artisan make:controller \
API/Auth/LoginController --invokable

I'm placing the new controller inside the app/Http/Controllers/API/Auth folder.

Open the LoginController class and paste the source code shown in Listing 1 inside the __invoke() controller method.

Let's explain the code in Listing 1 step by step:

1. The controller method accepts an HTTP request via the $request parameter.
2. The $request parameter is then validated to ensure that it contains the required fields for authentication. These fields are the user's email, password, and the device name they are using to log in.
3. Next, the method queries the User model to find a user with the email address provided in the request.
4. If the user is found, their password is checked to ensure that it matches the password provided in the request. If the password does not match, a validation exception is thrown with an error message stating that the credentials are incorrect.
5. If the user's email and password are validated successfully, the method generates a new token for the user using the createToken() method provided by Sanctum. This token is associated with the device name provided in the request.
6. Finally, the method returns the plain text value of the token to the client. The client can use this token for subsequent authenticated requests to the server.

In this case, the result of authenticating a user is that a new token is generated and sent to the user. This token should then be added to the header of every future request to the application.

Back to the original defined route in the routes/api.php file:

<?php

Route::middleware(['auth:sanctum'])
    ->get('/user', function (Request $request) {
        return $request->user();
    });

Assuming that the user is authenticated and owns a token, an incoming request to the URL /api/user will be authenticated using the Sanctum guard, represented by the auth:sanctum middleware.

This middleware first checks if a session cookie exists in the incoming request (this is the default for SPA applications). Otherwise, it tries to locate a token in the request header. For mobile users, that's the default behavior. Laravel Sanctum validates the token and accordingly allows or forbids access to the /api/user route.

Now, let's add a few tests to ensure this code runs.



Listing 2: LoginController Tests

class LoginControllerTest extends TestCase
{
    use RefreshDatabase;

    /** @test */
    public function login_mobile_user()
    {
        $user = User::factory()->create([
            'email' => '[email protected]',
            'password' => bcrypt('password'),
        ]);

        $response = $this->post('/api/login', [
            'email' => $user->email,
            'password' => 'password',
            'device_name' => 'Bilal iPhone 12',
        ]);

        $response->assertOk();

        $this->assertNotEmpty(
            $response->getContent()
        );

        $this->assertDatabaseHas('personal_access_tokens', [
            'name' => 'Bilal iPhone 12',
            'tokenable_type' => User::class,
            'tokenable_id' => $user->id
        ]);
    }
}

Create a new test using this command:

./vendor/bin/sail \
artisan make:test \
API/Auth/LoginControllerTest

Paste the source code in Listing 2 inside the LoginControllerTest file.

Run this test using this command:

./vendor/bin/sail \
test --filter login_mobile_user

The test should run and pass. Now let's look at it in some detail and explain exactly what's happening here.

This test verifies that the /api/login endpoint properly authenticates mobile users using Laravel Sanctum. Here's what the test is doing:

1. The use RefreshDatabase statement ensures that the test database is reset before each test. This ensures that the tests run with a consistent state.
2. The $user variable is created using the User::factory()->create() method, which makes a new user in the database. The user is given a specified email and password for testing purposes.
3. The $response variable is set to the result of sending a POST request to the /api/login endpoint with the user's email, password, and device name provided in the request payload.
4. The $response->assertOk() method verifies that the server returns a 200 HTTP status code, indicating that the request was successful.
5. The $this->assertNotEmpty($response->getContent()) method verifies that the response content is not empty. Remember, the /api/login endpoint returns the user's token.
6. The $this->assertDatabaseHas() method verifies that a personal access token was created for the user in the personal_access_tokens table with the specified device name.

Let's add another test to ensure that the authenticated user can access the default /api/user endpoint.

Paste the testing method shown in Listing 3 at the end of the LoginControllerTest file.

Listing 3: More tests for LoginController

/** @test */
public function access_user_endpoint()
{
    Sanctum::actingAs(User::factory()->create());

    $response = $this->get('/api/user');

    $response->assertOk();
}

Run this test using this command:

./vendor/bin/sail \
test --filter "access_user_endpoint"

The test should run and pass. Let's see the result.

This test verifies that a user can access the /api/user endpoint when authenticated using Laravel Sanctum. Here's how:

1. The Sanctum::actingAs() method logs in a user and creates a new token for them. The User::factory()->create() method creates a new user in the database and logs them in using Sanctum.
2. The $response variable is set to the result of sending a GET request to the /api/user endpoint. This endpoint is protected and requires an authenticated user to access it.
3. The $response->assertOk() method verifies that the server returns a 200 HTTP status code, indicating that the request was successful.

Register Mobile Application Users
Next, let's add an endpoint to allow Mobile application users to register on the application. For that, create a new controller for registration using this command:

./vendor/bin/sail \
artisan make:controller \
API/Auth/RegisterController --invokable

Paste the code in Listing 4 inside the newly created controller.



Listing 4: Register mobile application user

class RegisterController extends Controller
{
    public function __invoke(Request $request)
    {
        request()->validate([
            'name' => ['required'],
            'email' => [
                'required',
                'email',
                'unique:users,email'
            ],
            'password' => [
                'required',
                'min:8',
                'confirmed'
            ],
            'device_name' => ['required']
        ]);

        $user = User::create([
            'name' => request('name'),
            'email' => request('email'),
            'password' => bcrypt(request('password')),
        ]);

        return $user
            ->createToken(request('device_name'))
            ->plainTextToken;
    }
}

1. The request()->validate() method is used to validate the incoming request data. This ensures the request contains valid data before creating a new user.
2. The validation rules are defined in an array and include the following:
   • name: required, must be present in the request.
   • email: required, must be a valid email address, and must be unique in the users table.
   • password: required, must be at least eight characters long, and must match the confirmation password.
   • device_name: required, must be present in the request.
3. A new user is created using the User::create() method if the validation passes. The user's name, email, and password are obtained from the request data and stored in the users table.
4. The user's password is encrypted using the bcrypt() method, which hashes the password and ensures that it cannot be read in plain text.
5. Finally, the user's personal access token is created using the createToken() method, which generates a new token for the user and associates it with the provided device name. The plainTextToken attribute of the token is returned to the client, which can be used to authenticate future requests.

Next, let's add a test to ensure this endpoint works properly.

Create a new test using this command:

./vendor/bin/sail \
artisan make:test \
API/Auth/RegisterControllerTest

Paste the code in Listing 5 inside the RegisterControllerTest file.

Listing 5: Register Controller Tests

class RegisterControllerTest extends TestCase
{
    use RefreshDatabase;

    /** @test */
    public function register_new_user()
    {
        $user = [
            'name' => 'Bilal Haidar',
            'email' => '[email protected]',
            'password' => 'password',
            'device_name' => 'Bilal iPhone 12',
        ];

        $response = $this->post('/api/register', [
            'name' => $user['name'],
            'email' => $user['email'],
            'password' => $user['password'],
            'password_confirmation' => $user['password'],
            'device_name' => $user['device_name'],
        ]);

        $response->assertSuccessful();

        $this->assertNotEmpty($response->getContent());

        $this->assertDatabaseHas('users', [
            'email' => $user['email']
        ]);

        $this->assertDatabaseHas('personal_access_tokens', [
            'name' => $user['device_name']
        ]);
    }
}

The register_new_user() test method creates a new user array with the following fields:

• name: the name of the user
• email: the email address of the user
• password: the password of the user
• device_name: the device's name associated with the user's access token

The test then makes a POST request to the /api/register endpoint with the user data in the request body.

The assertSuccessful() method ensures that the response has a status code of 200 or 201, indicating that the user was successfully created.

The assertNotEmpty() method is used to ensure that the response content is not empty.

The assertDatabaseHas() method ensures that the newly created user is stored in the database. The first call checks for the existence of the user's email in the users table, and the second call checks for the presence of the device name in the personal_access_tokens table.

Logout Mobile Application Users
Let's look at how to implement log out functionality for your mobile application users. Start by creating a new controller and adding a new route to the routes/api.php file.



Run this command to create a new LogoutController class:

./vendor/bin/sail \
artisan make:controller \
API/Auth/LogoutController --invokable

Next, add the /api/logout route inside the routes/api.php file. You want only authenticated users to access the logout route. Therefore, refactor the routes/api.php file a little, as shown in Listing 6.

Listing 6: Add logout route to routes/api.php

Route::post('/login', LoginController::class);
Route::post('/register', RegisterController::class);
Route::middleware(['auth:sanctum'])
    ->group(function() {
        Route::get('/user', function (Request $request) {
            return $request->user();
        });
        Route::post('/logout', LogoutController::class);
    });

Let's now add the functionality to log out the user. Replace the __invoke() method inside the LogoutController class with the following code snippet:

public function __invoke(Request $request)
{
    return request()
        ->user()
        ->currentAccessToken()
        ->delete();
}

The currentAccessToken() method is called on the authenticated user to retrieve the access token associated with their current session. Calling the delete() method on the access token deletes it and effectively logs out the user.

Next, let's add a test to ensure this endpoint works properly.

Create a new test using this command:

./vendor/bin/sail \
artisan make:test \
API/Auth/LogoutControllerTest

Paste the code in Listing 7 inside the LogoutControllerTest file. The test in Listing 7 does the following:

Listing 7: LogoutController tests

class LogoutControllerTest extends TestCase
{
    use RefreshDatabase;

    /** @test */
    public function logout_user()
    {
        $user = User::factory()->create([
            'email' => '[email protected]',
            'password' => bcrypt('password'),
        ]);

        Sanctum::actingAs($user);

        $response = $this->post('/api/logout');

        $response->assertOk();

        $this->assertDatabaseCount(
            'personal_access_tokens',
            0
        );
    }
}

1. The Sanctum::actingAs($user) method is called to authenticate the newly created user by generating an access token using Laravel Sanctum's actingAs() method. This allows the user to make authenticated requests to the API.
2. The $this->post('/api/logout') method is used to send a POST request to the /api/logout route to log out the authenticated user.
3. The assertOk() method is called on the response object to ensure that the response status code is 200 OK, indicating that the logout request was successful.
4. The $this->assertDatabaseCount('personal_access_tokens', 0) method is used to ensure that the access token associated with the authenticated user has been deleted from the personal_access_tokens database table.

Configure Laravel Sanctum for SPA Applications
Laravel Sanctum uses Laravel's cookie-based session authentication to authenticate users in your SPA application. The log in workflow goes as follows:

1. Start by requesting a CSRF cookie from Sanctum, which allows you to make CSRF-protected requests to normal endpoints like /login.
2. Then, request the normal Laravel /login endpoint.
3. Laravel issues a cookie holding the user's session.
4. Any future requests to the Laravel API now include this cookie, so your user is authenticated for the lifetime of that session.

Configuration settings are needed when trying to authenticate your SPA using Laravel Sanctum. I will detail them in this section step by step so that you can follow along.

First, your SPA and Laravel API must be on the same top-level domain. You cannot have your SPA on domainA.com and the Laravel API on domainB.com. The reason is that Laravel Sanctum sets up an HttpOnly Lax cookie. This cookie is secure and cannot be read or stolen. But most importantly, it cannot be shared across different domains.

You can read more about HttpOnly and Lax cookies here: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie.

Now, create a Vue3 application to use while setting up and configuring Laravel Sanctum for SPA authentication.

Run the following command to create a new Vue3 project:

npm init vue@latest

Follow the instructions on the screen to create and run your SPA application. Notice the last step. When



you run the SPA application, the application URL is http://127.0.0.1:5173.

Note also, when you run the Laravel API application, it will be running on the default URL and port: http://localhost:8000.

Both applications have the same top-level domain; hence, Laravel Sanctum can be used.

Start by configuring CORS. What is CORS anyway?

CORS stands for Cross-Origin Resource Sharing, a security feature implemented in modern web browsers that allows web applications to make cross-origin requests to APIs or resources hosted on other domains. In simpler terms, CORS is a mechanism that allows a web page to request a different domain than the one that served the web page.

By default, web browsers prevent web pages from making cross-origin requests to resources on different domains to prevent malicious attacks, such as cross-site scripting and cross-site request forgery.

CORS provides a way for web developers to specify which domains are allowed to make cross-origin requests to their resources and which types of requests are allowed (e.g., GET, POST, PUT, DELETE, etc.).

CORS works by adding specific headers to HTTP requests and responses that indicate whether a particular request is allowed. These headers include Access-Control-Allow-Origin, Access-Control-Allow-Headers, Access-Control-Allow-Methods, and Access-Control-Allow-Credentials, among others.

What is an Origin anyway? An Origin is a combination of a scheme (also known as the protocol, for example, HTTP or HTTPS), hostname, and port (if specified).

Therefore, the two URLs we have so far, http://localhost:5173 and http://localhost:8000, have two different origins because the ports are different. That's why you should configure CORS on the Laravel API application to allow this communication between two applications hosted on two different origins.

To configure CORS in a Laravel application, go to the config/cors.php file. Listing 8 shows the content of this file.

Listing 8: The config/cors.php file

return [
    'paths' => ['*'],
    'allowed_methods' => ['*'],

    'allowed_origins' => [env(
        'FRONTEND_URL', 'http://localhost:3000'
    )],

    'allowed_origins_patterns' => [],
    'allowed_headers' => ['*'],

    'exposed_headers' => [],

    'max_age' => 0,

    'supports_credentials' => true,
];

Let's explain some of the settings in this file.

The paths field specifies which Laravel route paths accept cross-origin requests. The asterisk here means all paths in this application can accept cross-origin requests.

The allowed_methods field specifies what HTTP methods can be received on a cross-origin request. This can be POST, GET, PUT, etc. The asterisk here means all methods are allowed on cross-origin requests.

The allowed_origins field specifies what Origins are allowed to send requests to this Laravel API application. In this case, it only allows the Origin specified in the FRONTEND_URL environment variable. In case you don't configure this environment variable, it automatically assumes the SPA is hosted at http://localhost:3000.

The allowed_origins_patterns field allows you to specify a regex (https://regexr.com) to match the Origins.

The allowed_headers field specifies what HTTP Headers are allowed to be sent in cross-origin requests. The asterisk here means all HTTP Headers are allowed for cross-origin requests.

The exposed_headers field specifies what HTTP Headers this Laravel API application would like to share with the SPA JavaScript application.

The max_age field specifies the amount of time to cache Preflight CORS requests. Let's explore Preflight requests.

Preflight requests are a mechanism used by the browser to determine whether it's safe to make a cross-origin request to a server. A preflight request is an HTTP OPTIONS request sent to the server before the actual cross-origin request is made.

The preflight request includes headers such as Origin, Access-Control-Request-Method, and Access-Control-Request-Headers, which specify the origin of the request, the HTTP method that will be used in the actual request, and the custom headers that will be sent in the actual request, respectively.

The server must respond to the preflight request with the appropriate CORS headers, including Access-Control-Allow-Origin, Access-Control-Allow-Methods, and Access-Control-Allow-Headers, indicating that the request is allowed and which origins, methods, and headers are allowed.

Once the browser receives the appropriate CORS headers in response to the preflight request, it proceeds with the actual cross-origin request.

Finally, the supports_credentials field specifies whether the Laravel API application wants to share cookies with the SPA application.

Now open the .env file and update the FRONTEND_URL environment variable to match the URL of the SPA application.

FRONTEND_URL=http://localhost:5173

Make sure you don't add a trailing forward slash. That's very important to remember.



Next, you need to add the SESSION_DOMAIN environment variable. This variable defines the cookie domain used to identify a session in the Laravel API application. This will determine which domains the cookie is available to in your application.

SESSION_DOMAIN=localhost

Ensure that you don't add a trailing forward slash, a port number, or a scheme (HTTP or HTTPS).

Finally, you need to add one more environment variable, the SANCTUM_STATEFUL_DOMAINS.

SANCTUM_STATEFUL_DOMAINS is an environment variable used in Laravel Sanctum that specifies the domains for which Sanctum's stateful authentication mechanisms will be used.

Stateful authentication in Sanctum involves cookies to authenticate the user. When a user logs in, Sanctum creates a cookie containing a signed JSON Web Token (JWT) that identifies the user. This cookie is sent with every subsequent request to the server, allowing the server to authenticate the user without requiring their credentials with each request.

However, cookies can only be sent to the domain that sent them and not to other domains. This means that if you have a single-page application (SPA) that requests your API from a different domain, the cookies that Sanctum sets for authentication won't be sent with the requests, and the user won't be authenticated.

To enable stateful authentication for a different domain, add the domain to the SANCTUM_STATEFUL_DOMAINS environment variable. This tells Sanctum to also allow cookies to be sent to that domain, so the user can be authenticated across domains.

Add the following line to the application's .env file:

SANCTUM_STATEFUL_DOMAINS=localhost:5173

That's all you need to configure Laravel Sanctum to authenticate SPA users.

Login SPA Users
When you installed Laravel Breeze for APIs, it included all the necessary server-side code to authenticate, log out, register, reset passwords, and many other functions. Navigate to the app/Http/Controllers/Auth folder and study the code.

Locate the routes/web.php file and navigate to the last line in that file:

require __DIR__.'/auth.php';

It requires the auth.php routes file. If you open this file, you will find all the necessary routes to log in, log out, register, reset passwords, and many other routes you need for your SPA.

For instance, here is a code snippet to show the endpoint route that allows SPA users to log in to the application:

Route::post('/login', [
    AuthenticatedSessionController::class,
    'store'
])
->middleware('guest')
->name('login');

The AuthenticatedSessionController@store method is responsible for authenticating the user. This is the same logic used in any Laravel application, not only in Laravel API applications.

Listing 9: Login.vue component

<script setup>
import axios from 'axios'
import { ref } from 'vue'

axios.defaults.withCredentials = true

const form = ref({
  email: null,
  password: null,
})

const user = ref()

const onLogin = async () => {
  await axios.get(
    'http://localhost:1006/sanctum/csrf-cookie'
  )

  await axios.post(
    'http://localhost:1006/login', {
      'email': form.value.email,
      'password': form.value.password,
    })

  let { data } = await axios.get(
    'http://localhost:1006/api/user'
  )

  user.value = data
}
</script>

<template>
  <div>
    <h2>Sign in to your account</h2>

    <form @submit.prevent="onLogin">
      <label>Email address</label>
      <input v-model="form.email">

      <label>Password</label>
      <input v-model="form.password">

      <button>Sign in</button>
    </form>

    <pre>
      {{ user }}
    </pre>
  </div>
</template>



Laravel Sanctum
Sanctum offers token authentication, which allows users to obtain access tokens and use them to access protected API routes. Sanctum uses Laravel's built-in session and cookie authentication for managing tokens, making it more suitable for use in applications where there's a mix of traditional web views and API endpoints.

Figure 2: The Login page in action

Let's add an HTML form to the SPA application to allow the user to log in to the Laravel API application.

First, let's install Axios (https://axios-http.com/docs/intro) into the SPA application. Navigate to the SPA application and run the following command:

npm install axios

One nice thing I like about Axios is that it does some things out of the box. For example, it takes the CSRF cookie generated by the Laravel application and sets it as an HTTP header when sending requests to the Laravel API application.

Also, I've installed and configured Tailwindcss. To install Tailwindcss, follow this guide: https://v2.tailwindcss.com/docs/guides/vue-3-vite

Add a new Login.vue component. Paste the code you see in Listing 9 inside this new component file.

Let's discover the significant sections of the Login component:

1. Start by importing Axios and the ref function at the top of the script section.
2. Set the axios.defaults.withCredentials configuration property to true, allowing cross-site requests to include credentials such as cookies.
3. Create a reactive form object with email and password properties initialized to null.
4. Create a reactive user object to hold the user data once logged in.
5. Define an onLogin() function that will be called when the form is submitted. This function does the following:
   • Sends a request to the server to get a CSRF token by calling axios.get('http://localhost:1006/sanctum/csrf-cookie'). This request is a must to let the Laravel API application issue a CSRF token to protect all of the non-GET requests.
   • Sends a POST request to the server to authenticate the user by calling axios.post('http://localhost:1006/login', { email: form.value.email, password: form.value.password }).
   • Sends a GET request to the server to get the user data by calling axios.get('http://localhost:1006/api/user'). The user data is stored in the data variable.
   • Sets the user object to the data variable so that it can be displayed in the template.
6. Include an HTML form with a submit binding: @submit.prevent="onLogin".
7. Bind the email and password fields of the HTML form to the corresponding form.email and form.password fields.

When the user clicks the button, the form submits, and eventually, the user details are retrieved accordingly for a successful login request.



CODE COMPILERS

One last step is to import the Login.vue component into the App.vue component to see the Login page in the browser.

Figure 2 shows the result of trying the Login form in the browser.

By doing this, you finish implementing Laravel Sanctum for both SPA and Mobile applications.

Conclusion
Laravel offers comprehensive solutions for security, including authentication, authorization, and other related features. You can confidently add a robust security layer to your application with the necessary knowledge. In the next installment of this series on building MVC applications with Laravel, I'll delve into the topic of Authorization in Laravel.

Happy Laravelling!

Bilal Haidar

(Continued from 74)

Snowbird environment being one built on trust, on respect-centered values within which people wanted to work. Finally, it all boiled down to Agile being about the "mushy stuff of values and culture." Channeling Hamilton, the history page is a contemporaneous account by one who was in the room when it happened!

What Lies at the Heart of Agile?
Among other things, egalitarianism, the notion of equality, is at the heart of Agile. At Snowbird, there were 17 supremely talented and internationally recognized thought leaders who, no doubt, disagreed on many things while, at the same time, agreeing on many things. To check most of the differences and ego for the benefit of what the group set out to do, ultimately for the benefit of the whole world, was egalitarianism through a "deliberative assembly" of sorts. In Denmark, the parliament is called the Folketing (people's thing). The "things" were deliberative assemblies, such as a tribal council. While there was structure and a hierarchy, any man was free to plead his matter and be heard. In any conflict resolution, there's give and take. Nobody gets everything. But if not done well, everybody gets nothing. In this regard, it should almost be instinctual to cooperate. But it's often difficult because egos get involved, and that's why being Agile is so difficult. Individually, each must live the values so that collectively, the group can. There's plenty of room for compromise, but that compromise must be subject to Agile values. Agile requires active cooperation, active accountability, and active engagement. Anything less and you'll be introducing dysfunction, the seeds of technical debt, into your organization.

If you're in a decision-making capacity and you're either seeking to implement Agile or address some Agile dysfunction, you'd be well served to look at history. Look at the history of how Agile came to be. And from there, look at how the basis of that is rooted in the earliest democratic practices in recorded history in order to understand what Agile actually is and to understand what lies at the heart of Agile. If your organization runs its software development operation like a top-down hierarchy, Agile won't work. Just as you can't fly a car, you can't waterfall Agile.

John V. Petersen

Sep/Oct 2023
Volume 24 Issue 5

Group Publisher: Markus Egger
Associate Publisher: Rick Strahl
Editor-in-Chief: Rod Paddock
Managing Editor: Ellen Whitney
Contributing Editor: John V. Petersen
Content Editor: Melanie Spiller
Editorial Contributors: Otto Dobretsberger, Jim Duffy, Jeff Etter, Mike Yeager
Writers in This Issue: Markus Egger, Bilal Haidar, Joydip Kanjilal, Julie Lerman, Sahil Malik, John Petersen, Paul D. Sheriff, Shawn Wildermuth
Technical Reviewers: Markus Egger, Rod Paddock
Production: Friedl Raffeiner Grafik Studio, www.frigraf.it
Graphic Layout: Friedl Raffeiner Grafik Studio in collaboration with onsight (www.onsightdesign.info)
Printing: Fry Communications, Inc., 800 West Church Rd., Mechanicsburg, PA 17055
Advertising Sales: Tammy Ferguson, 832-717-4445 ext 26, [email protected]
Circulation & Distribution: General Circulation: EPS Software Corp.; Newsstand: American News Company (ANC)
Subscriptions: Subscription Manager, Colleen Cade, [email protected]

US subscriptions are US $29.99 for one year. Subscriptions outside the US are US $50.99. Payments should be made in US dollars drawn on a US bank. American Express, MasterCard, Visa, and Discover credit cards accepted. Back issues are available. For subscription information, e-mail [email protected].

Subscribe online at www.codemag.com

CODE Developer Magazine
6605 Cypresswood Drive, Ste 425, Spring, Texas 77379
Phone: 832-717-4445

codemag.com CODA: What Lies in Agile’s Heart? 73


CODA: What Lies at Agile's Heart?

As soon as I heard the term applied to software development and considering the 17 individuals who codified, ratified, and released the Agile Manifesto in February 2001, I instinctively knew the trail the Agile Manifesto's authors were blazing was the right path. I knew that because Agile wasn't conjured out of whole cloth. Many of the practices we refer to as Agile pre-date Agile. In spite of being so difficult to implement and the naysaying, I've always regarded the journey, along with the pain of experience, to be worth it because that's the necessary feedback we need to improve. A common refrain for a typical team may be "Our releases are too buggy; we need to reduce defects." How do we do that? The answer lies in the heart of Agile. Let's go find it.

If you're not familiar with the Agile Manifesto or if it has been a while since you've read it, take some time to review: www.agilemanifesto.org. One page that's often overlooked is Jim Highsmith's history page: www.agilemanifesto.org/history.html. Jim is one of the 17 Agile authors. As it turns out, that history page has been an easter egg of sorts, hiding in plain sight, for it gives us an inkling of what lies in Agile's heart.

Doesn't the Agile Manifesto Already Answer the Question?

No, it doesn't. The Manifesto is the by-product of, among other things, Agile practices. That's how we know Agile wasn't conjured out of whole cloth. Somewhere between various practices being created and implemented and the formal Manifesto, something was brought to bear. Therefore, although we can look to the Manifesto to know what Agile is, we must peel back the onion to get to the root of what lies at the heart of Agile.

The Manifesto's Values

There are four values:

• Individuals and interactions over processes and tools
• Working software over comprehensive documentation
• Customer collaboration over contract negotiation
• Responding to change over following a plan

The context for these four values is found in the following affirmation [emphasis added]:

"We are uncovering better ways of developing software by doing it and helping others do it."

Agility is an active, not a passive thing. Agility is practiced by being it, not talking about it. The principal concern is quality software through better means. The work itself is the only input the arbiter of quality should need to consider because the values are in the work itself. Therefore, the only lens to view quality through is the work.

The four values on the left in the Manifesto could be distilled into four words:

• Cooperation
• Accountability
• Engagement
• Agility

The Manifesto closes with [emphasis added]:

"While there is value in the items on the right, we value the items on the left more."

The closing statement is pragmatism in action. If a process must be followed because of some law, rule, or regulation, then it must be followed. It's often necessary for software development to seek better ways for its Agile environment to accommodate processes, tools, plans, or other terms and conditions that, historically, have been carried out in a non-Agile manner. Despite best efforts, tension may rise between the values on the left and the right. How to deal with that tension is often a make-or-break proposition for Agile's success in an organization. These values are based on what lies at the heart of Agile. Get your goggles on, we have more onion peeling to do!

The Manifesto's Principles

The Manifesto's principles are found here: http://agilemanifesto.org/principles.html. From a historical perspective, the Agile Manifesto was released after the February 2001 meetings at the Snowbird Ski Resort in Salt Lake City, UT. The 12 principles were published soon thereafter. The principles are very important in that they transcend specific practices. Although the principles are important, they aren't relevant for this inquiry into what lies at the heart of Agile because the principles are backstopped by the values. Their essence is that no matter what we specifically do in the Agile space, we must always transparently inspect and adapt our work. And that process involves a constant inquiry into whether what we're doing is consistent with Agile Principles and Values.

It's worth mentioning that this question of what lies at the heart of Agile may either be simple or impossible. It just depends on where you begin your journey. In my experience, most Agile advocacy content begins with a tool. In such cases, the advocacy is substantively product advertising. "Use our tool and your Agile implementation will be smooth" is often the refrain. If you've settled on Agile, jumping from tool to tool won't work because no tool can ever be the thing that tells you what Agile is and makes you understand it better. For sure, a tool may help you better apply Agile. But when things go wrong and you must diagnose your "Agile dysfunction," no tool is going to help you. Your values and your commitment to them are your only fuel source. What is that "Agile dysfunction" you're dealing with? Why aren't things working? We can answer that question by analogy, looking to what happened at Snowbird in 2001.

The "Deeper Theme"

I'm closing this installment with what was mentioned at the outset, the history page: http://agilemanifesto.org/history.html. I must confess that for many years, I never read this page. But once I did, it all became clear. There's discussion of a "deeper theme that drives many, but not all…". There's further discussion on the Snowbird environment being one built on trust, on respect-centered values within which people wanted to work. Finally, it all boiled down to Agile being about the "mushy stuff of values and culture." Channeling Hamilton, the history page is a contemporaneous account by one who was in the room when it happened!

What Lies at the Heart of Agile?

Among other things, egalitarianism, the notion of equality, is at the heart of Agile. At Snowbird, there were 17 supremely talented and internationally recognized thought leaders who, no doubt, disagreed on many things while, at the same time, agreeing on many things. Checking most of those differences and egos for the benefit of what the group set out to do, and ultimately for the benefit of the whole world, was egalitarianism through a "deliberative assembly" of sorts. In Denmark, the parliament is called the Folketing (the people's thing). The "things" were deliberative assemblies, such as a tribal council. While there was structure and a hierarchy, any man was free to plead his matter and be heard. In any conflict resolution, there's give and take. Nobody gets everything. But if not done well, everybody gets nothing. In this regard, it should almost be instinctual to cooperate. But it's often difficult because egos get involved, and that's why being Agile is so difficult. Individually, each must live the values so that collectively, the group can. There's plenty of room for compromise, but that compromise must be subject to Agile values. Agile requires active cooperation, active accountability, and active engagement. Anything less and you'll be introducing dysfunction, the seeds of technical debt, into your organization.

If you're in a decision-making capacity and you're either seeking to implement Agile or to address some Agile dysfunction, you'd be well served to look at history: at how Agile came to be and, from there, at how its basis is rooted in the earliest democratic practices in recorded history. That's the way to understand what Agile actually is and what lies at its heart. If your organization runs its software development operation like a top-down hierarchy, Agile won't work. Just as you can't fly a car, you can't waterfall Agile.

John V. Petersen


