
MAY/JUN 2024
codemag.com - THE LEADING INDEPENDENT DEVELOPER MAGAZINE - US $8.95 Can $11.95

JSON, JavaScript, Azure, EF Core 8, SQL Server, REST, SOAP

The DNA of a Database Developer

Using Complex Properties in EF Core 8 • Working with JSON Documents • Understanding Azure Migrations

Cover AI generated - Markus Egger
ARE YOU WONDERING HOW ARTIFICIAL INTELLIGENCE CAN BENEFIT YOU TODAY?

EXECUTIVE BRIEFINGS

Are you wondering how AI can help your business? Do you worry about privacy or regulatory issues stopping you from using AI to its fullest? We have the answers! Our Executive Briefings provide guidance and concrete advice that help decision makers move forward in this rapidly changing Age of Artificial Intelligence and Copilots!

We will send an expert to your office to meet with you. You will receive:
1. An overview presentation of the current state of Artificial Intelligence.
2. How to use AI in your business while ensuring privacy of your and your clients' information.
3. A sample application built on your own HR documents - allowing your employees to query those documents in English and cutting down the number of questions that you and your HR group have to answer.
4. A roadmap for future use of AI catered to what you do.

AI-SEARCHABLE KNOWLEDGEBASE AND DOCUMENTS

A great first step into the world of Generative Artificial Intelligence, Large Language Models (LLMs), and GPT is to create an AI that provides your staff or clients access to your institutional knowledge, documentation, and data through an AI-searchable knowledgebase. We can help you implement a first system in a matter of days in a fashion that is secure and individualized to each user. Your data remains yours! Answers provided by the AI are grounded in your own information and are thus correct and applicable.

COPILOTS FOR YOUR OWN APPS

Applications without Copilots are now legacy! But fear not! We can help you build Copilot features into your applications in a secure and integrated fashion.

CONTACT US TODAY FOR A FREE CONSULTATION AND DETAILS ABOUT OUR SERVICES.
codemag.com/ai-services
832-717-4445 ext. 9 • [email protected]
TABLE OF CONTENTS

Features

7 CODE: 20 Years Ago
Markus continues his reflection on what the company, the magazine, and the industry have been up to for the last three decades.
Markus Egger

10 Async Programming in JavaScript
Sahil shows you how to coordinate the multiple processors that you need to do anything in today's high-paced computing world.
Sahil Malik

16 Manipulating JSON Documents in .NET 8
JavaScript Object Notation (JSON) can help you configure settings and transfer data, but it really shines when it comes to creating and manipulating documents in .NET 8. Paul shows you how.
Paul Sheriff

33 Value Object's New Mapping: EF Core 8 ComplexProperty
Julie looks at the many changes in EF Core 8 that were released at the end of last year and finds that ComplexProperty mapping really takes the cake.
Julie Lerman

38 Preparing for Azure with Azure Migrate Application and Code Assessment
Your company is switching to platform-as-a-service (PaaS) from on-premises and you need to re-platform your applications. Mike shows you how to make that happen using Azure Migrate Application and Code Assessment.
Mike Rousos

46 Stages of Data: The DNA of a Database Developer, Part 1
Whether you're going to an interview as the applicant or the interviewer, you'll be glad that Kevin came up with this collection of the things you ought to know if you want to succeed.
Kevin Goff

59 From SOAP to REST to GraphQL
If you need to store, move, or access data, you'll need to know how to make sure that all of your systems talk to each other. Joydip explains how SOAP, REST, and GraphQL combine to make that a smooth process.
Joydip Kanjilal

Departments

6 Editorial

30 Advertisers Index

73 Code Compilers

US subscriptions are US $29.99 for one year. Subscriptions outside the US pay $50.99 USD. Payments should be made in US dollars drawn on a US bank. American Express, MasterCard, Visa, and Discover credit cards are accepted. Back issues are available. For subscription information, send e-mail to [email protected] or contact Customer Service at 832-717-4445 ext. 9.
Subscribe online at www.codemag.com

CODE Component Developer Magazine (ISSN # 1547-5166) is published bimonthly by EPS Software Corporation, 6605 Cypresswood Drive, Suite 425, Spring, TX 77379 U.S.A. POSTMASTER: Send address changes to CODE Component Developer Magazine, 6605 Cypresswood Drive, Suite 425, Spring, TX 77379 U.S.A.



EDITORIAL

Unfinished Paintings
I spent last weekend in rainy (normally sunny) Southern California. During this trip, I managed to corral the kids into going to the Academy Awards Museum. There, we came across a set of drawings from the original Disney animated film The Little Mermaid (Figure 1). From what I could deduce from the drawings, I believe they are what are called key frames. In animation, the artists draw key frames, which represent different transitions in the animation and are then passed on to other artists who fill in the frames between the keys. As I described what these frames were for to my kids (and a random onlooker), I discovered mentally why I'm fascinated with such works. I'm fascinated by them because they're unfinished. I find that these artifacts of the creative process give me insights into the mind of the artist creating them.

I love the rough drawings, erasure marks, rough lines, and all the things that show how the artist works. This took me back to another museum visit.

Last year I was lucky enough to attend an exhibition of the works of Keith Haring at the Broad Museum in downtown Los Angeles. I've been a follower of Keith's work for decades now and don't miss any chance to see collected exhibits of his work. This exhibit stood out to me as I saw works and materials he'd used that I'd never read about or witnessed. I saw full blown murals painted on camping tarps (yes, camping tarps like you can buy at Walmart), I saw street posters, statues, and other paintings from his extensive set of works. One of these works stood out to me. Figure 2 is an image of an unfinished work started not long before his untimely death. I spent time studying the image, looking at it from different angles and vantage points. I tried to picture Keith working on this painting in his studio in the flow of artistic creativity. This image took me into a creative flow state. All from an unfinished painting.

For some reason, I've always loved incomplete works from artists versus completed works. This is my inner creator coming out. It takes me back to the earliest parts of my career where I was the "lone wolf" developer/network admin for a small resort in Central Oregon. This was back in the late 80s when there was no internet like we have now. There were no GitHub repositories, no code blogs, no www.stackoverflow.com. Nope. There were crappy 1200 or 2400 baud modems by which we reached out into the world to gather our knowledge from forums like CompuServe and Genie. It was the dark ages. LOL.

So being the fresh-faced coder, how could I learn to better my craft? I did this several ways. I read every book I could get my hands on, I went to the "big" city of Portland where I discovered knowledge heaven in the form of Powell's Books, and, finally, I read every computer magazine I could get my hands on. The articles that I really took a liking to were the ones where the authors explained the process of how they achieved a solution. Using art as a metaphor, they took me through the rough drawings, pencil sketches, rough demos, base coats, detail work, and finally, a finished working solution. I was watching techno artists take me through their process from unfinished to finished work. This is my process to this day.

Many of the unique solutions I've built over the years come from a process like this one. I'm tasked with seeing if an idea might work. For instance, many years ago, I was tasked with building a solution where we embedded code in Microsoft Word documents that gave users the ability to create dynamic scripts for call centers. I started this process to see if I could embed code into a Word document. This was my rough sketch. I then took the output from that document and built an HTML-based script using the metadata embedded in the document, another sketch. I then combined these two together into a rough demonstration for the client. The client liked what they saw, and we went through the process repeatedly until we had a good working solution. This code lasted nearly 10 years until a new solution was implemented. It enjoyed many years of success, all originating from a rough sketch of code.

This process has worked for me time and time again. I love just trying stuff to see if it works. If it does, I file it away for future use; if not, I look at other solutions. I'm always experimenting, always sketching, always creating unfinished paintings.

 Rod Paddock

Figure 1: A key frame
Figure 2: An unfinished work by Keith Haring

ONLINE QUICK ID 2405011

CODE: 20 Years Ago


The end of 2023 was the start of our "30 years of CODE" celebration year, which will continue throughout all of 2024. To look back at those 30 years, I wrote articles in the last two issues of CODE Magazine, looking at what happened 25 and 30 years ago. This time, I'll look back 20 years and explore the latest and greatest of the early-to-mid 2000s. I remember it as a somewhat turbulent time. The dotcom bubble had burst, and the glory days of technology seemed to be over. Was the internet really all it was supposed to be or was it just a passing fad? It was hard to say.

The Aftermath of the Dotcom Bubble

One of the main catalysts for the dot-com-bubble bursting was the overvaluation of many internet-based companies that had little or no profits but huge expectations. Investors poured money into these ventures, hoping to cash in on the next big thing, but many of them turned out to be unsustainable or unprofitable. Some of the most notorious examples of dotcom failures were Pets.com, Webvan, eToys, and Boo.com, which burned through millions of dollars in a matter of months before going bankrupt. The collapse of these and other companies sent shockwaves through the stock market, wiping out billions of dollars in value and causing many investors to lose confidence in the sector.

The world of software development wasn't immune to the effects of the dotcom bubble bursting. Many software developers who'd been hired by dotcom startups found themselves out of work when their employers went under. Some of them had to accept lower salaries or switch careers, and others tried to start their own businesses or join more established companies. The demand for web development skills decreased, as many companies scaled back or canceled their online projects. The failure of many dotcoms also raised questions about the viability and quality of some of the emerging web technologies and standards, such as Java, XML, and HTML. Some critics argued that these technologies were overhyped and underdelivered, and that they weren't suitable for building complex and reliable applications. Others defended these technologies and claimed that they were still evolving and improving, and that they would eventually prove their worth.

Despite the challenges and setbacks that the dotcom bubble bursting posed for the software industry, it also had some positive effects. It forced many companies to rethink their business models and strategies, and to focus more on customer needs and satisfaction, rather than on growth and hype. It encouraged more innovation and experimentation as some developers sought to create new and better solutions for the web. It also paved the way for the emergence of new players and platforms, such as Google, Amazon, eBay, and PayPal, which took advantage of the opportunities and gaps in the market that the dotcom crash had left behind. These companies would go on to become some of the most successful and influential in the history of the internet and to shape the future of software development.

The Impact for CODE

Luckily for us at the CODE Group, the dotcom turbulences were less severe than for other companies. Most of the projects we were working on in the consulting and custom app dev side of the business weren't classic dotcom companies. Also, we'd started CODE Magazine in the Spring of 2000 and focused primarily on the new world of software development that Microsoft was generating. The Java programming language was of interest to a lot of people but had some issues that were, as of then, unaddressed, and one way to fix it was Microsoft's approach of re-inventing the language in a top-secret project headed up by language-guru Anders Hejlsberg, codenamed "Cool." (This became C#, and yes, C# is still cool. You may have seen the T-shirt).

C# became a key component of the then nascent .NET ecosystem, which did away with the concept of the programming language driving everything and instead created a development framework that could be used equally from various languages. This was a concept that jived very much with what we believed a modern software development magazine should be talking about, and thus CODE Magazine found itself in a sweet spot of sorts. Other magazines, like Visual Basic Programmer's Journal, FoxPro Advisor, and many more, suddenly didn't look so hot anymore. A lot of this wasn't a coincidence. After all, we'd long been partnering very closely with Microsoft—I worked for the Visual Studio team as a contractor on various projects—and we were strong believers in these new concepts.

All these goings-on meant that we were somewhat protected from the dotcom mess. Yes, we also lost some customers, and the pool of potential new customers shrank. We had to tighten our belts a bit, but overall, we came through it all reasonably well. I remember it as a time that was painful for us, but not to an existential level. And despite all the internet disillusionment, we remained stout believers that it wasn't the internet that was the problem but rather the problem was the idea that the internet made economic fundamentals obsolete. In other words, we considered it crucially important to push forward with internet-related technologies. As a Microsoft-focused organization (and a Microsoft partner), this meant mainly focusing on ASP.NET as the backbone of almost all web applications that we wrote. We had largely ignored earlier versions of ASP, but then there was this young kid of a program manager straight out of college with a vision of a better web development environment. I was very impressed with his early demos. He was a funny and rather likable kid, and he always wore red shirts. His name was Scott someone or another. I think he still works at Microsoft today. <grin>

And before you ask, most of the web applications we wrote in those days were mainly built for Internet Explorer, the de facto standard browser of the time. Netscape had faded in importance as they lost the "great browser wars" against Microsoft, and Firefox wasn't a thing yet.

Markus Egger
[email protected]

Markus, the dynamic founder of CODE Group and CODE Magazine's publisher, is a celebrated Microsoft RD (Regional Director) and multi-award-winning MVP (Most Valuable Professional). A prolific coder, he's influenced and advised top Fortune 500 companies and has been a Microsoft contractor, including on the Visual Studio team. Globally recognized as a speaker and author, Markus's expertise spans Artificial Intelligence (AI), .NET, client-side web, and cloud development, focusing on user interfaces, productivity, and maintainable systems. Away from work, he's an enthusiastic windsurfer, scuba diver, ice hockey player, golfer, and globetrotter, who loves gaming on PC or Xbox during rainy days.


Internet Explorer had taken the web and HTML from being a simple mechanism for displaying hyperlinked text and simple documents that supported only laughable levels of visual design, to a far more functional user interface technology. I vividly remember sitting in some internet meetings at Microsoft (this was pre-open-source and being part of these secret closed-door meetings was a big thing for us geeks) where the idea of a DOM (Document Object Model) was first discussed. How awesome would it be to be able to interact with elements on a page using scripting technologies? Mind-boggling! (JavaScript was already a thing back then, of course, but VBScript was considered equally viable by many). In hindsight, it's now easy to blame Microsoft for having created many non-standard-compliant HTML problems, but back in the day, there were no standards and Microsoft had taken it upon themselves to push things forward as fast as possible. Although I had to suffer many of the later problems this caused (and I assume, so did you), I still think it was the right thing to do at the time, and I give them credit for it.

Other Important Tech

.NET and Visual Studio were super important technologies for all professional developers. Yes, Linux was also important, but when you worked with enterprise customers, Microsoft was where the lucrative projects were. (Some Linux enthusiasts may disagree, and I don't want to take anything away from them, but we weren't successful in making money with Linux in those days.) Microsoft's antitrust lawsuit had come to an end, and although it had an impact on how Microsoft had to operate for quite some time to come, it did solidify Microsoft's position and the company represented a very solid and steady bet for most enterprises around the world. When Microsoft pushed out technology in the early 2000s, you could assume it was going to be not just important and successful in the market, but also a steady horse to bet on for a while. I remember consulting around quite a few Microsoft technologies back then, and they were all solid investments. I never worried that the time we spent becoming experts in one Microsoft technology or another would be wasted. (In future installments of this series of articles, I will take a much different look at this aspect of Microsoft.)

Operating systems were a big thing 20 years ago. Microsoft's Windows XP is still one of the most liked versions of Windows (Figure 1). It was Microsoft's first departure from the "battleship gray" user interface design and into a more colorful world. Most of us will fondly remember the default background image of some very green pastures, overlaid with visual elements that featured far more colors than in the past. I remember that some of our customers thought it looked "like a candy store" and it took them a while to get used to it. Some even used Windows NT as client operating systems because of it. Ultimately, Windows XP did become a fan favorite and a very good operating system for its time.

It may have seemed like that to a lot of us back then, but it wasn't just a Microsoft world when it came to operating systems. Linux was always around and important in certain scenarios. But there also was this niche operating system called Mac OS X. 20 years ago, I thought it was neat. After all, it had been born out of the very geeky-cool NeXTSTEP operating system, developed by the NeXT company founded by Steve Jobs, and later integrated into Apple with Steve Jobs's return. It didn't yet play a big role overall, and most people would never have considered buying a Mac. We used Macs in our magazine department, but generally, it seemed like something that wasn't very important to the business world, and it was almost non-existent in our software development considerations. (Another topic I'll have to revisit in the next articles in this series.)

Figure 1: A typical Windows XP screen shot



Although we built a lot of web-based applications even 20 years ago, it should also be mentioned that, for most businesses, it was a "thick client" world. Many business applications were built as WinForms applications, because cross-platform deployment wasn't a big consideration for business applications. After all, why worry about users on a Mac when nobody in business used Macs? Therefore, WinForms applications, and later, WPF applications, were a very important market segment. (This isn't talked about very much anymore, but people are often surprised when I tell them how many companies are still building Rich Windows Client Apps even today.)

Apple was also not a real player in mobile computing yet. Yes, Apple had the Newton years earlier, but that was ahead of its time and was soon discontinued. Palm Pilots were also a thing of the past. But RIM's (Research-In-Motion) BlackBerry was all the rage for mobile enthusiasts (Figure 2). It may have later gotten Hillary Clinton into trouble as her email device of choice, but it was the state-of-the-art mobile business solution for quite some time. It seems quaint today, but it was considered unthinkable that a device without a physical keyboard could be feasible in business scenarios. This was an idea that Microsoft CEO Steve Ballmer held onto way too long, in the process killing Microsoft's phone business. Today, most people don't even remember that Microsoft had a strong position in that market segment, with Windows Mobile and Windows CE.

The Fun Stuff

The early 2000s were also a fun time. After all, Microsoft released the first Xbox (Figure 3). What a machine that was! And the games it had! It may not have immediately been a strong competitor for the already established PlayStation system, but it sure put out some classics. Halo: Combat Evolved (Figure 4) was groundbreaking. It finally made first-person shooters truly work on consoles and brought such experiences into living rooms. But there were also others. For me, The Elder Scrolls III: Morrowind and Star Wars: Knights of the Old Republic stand out as particular time sinks.

It's hard to believe that many of today's (and yesterday's) biggest brands in computer gaming date back to this era, whether you do your gaming on the Xbox, PC, PlayStation, or elsewhere. I very fondly remember classics such as Grand Theft Auto III, Max Payne, Warcraft III, Deus Ex, Half-Life 2, The Sims, God of War, and Metal Gear Solid II, to name just a few. Would you have guessed that World of Warcraft is now 20 years old? It seems that many of the games from back then are still on my "to be played one of these days soon" list, and not because they are classics, but just because I haven't quite gotten to them yet.

Although gaming went through a great period, I wasn't as excited about music back then. Chart toppers from 2003 and 2004 really don't resonate with me as true classics. I guess 50 Cent would disagree and rate In Da Club as one of the all-time great songs, but I'm not rushing out to buy a remix. The whole music industry took a blow from the Napster era and the Apple iPod, released in 2001, hadn't reinvigorated things yet. Or maybe I just wasn't that into that kind of music.

I did enjoy going to the movies back then though. There were some great ones. Lord of the Rings is at the top of my favorites from 2001 to 2003. Ah… they just don't make them like that anymore. Or actually, they do: This was also the start of the superhero movies, an era in which we are still stuck, it seems. Additionally, Pirates of the Caribbean resonated with people. The Passion of the Christ was released back then, and so was Finding Nemo. Star Wars Episodes I through III also fall into this time period. Yeah. I know. We shall not talk about that.

 Markus Egger

Figure 2: The BlackBerry 6000, released in 2003
Figure 3: The first Xbox was released for the Christmas 2001 season.
Figure 4: Halo: Combat Evolved


ONLINE QUICK ID 2405021

Async Programming in JavaScript


Some of us like to brag about how old we are because we started working on languages like Assembler or Fortran or Basic. This was a while ago, when computers were very simple, and although those languages were extremely cryptic, they made us productive. It's also a reality that we were doing a lot less with them. We were building much shorter buildings. As time moved on, computers became a lot more powerful, and our customers demanded that we build skyscrapers. You might have heard of something called Moore's law, which is the observation that the number of transistors on an integrated circuit will double every two years with a minimal rise in cost. Yes, computers have become very powerful, but just due to basic physics and energy density issues, we've also hit a wall in absolute computing power that we can pack within a single processor. As a result, the world started moving toward multiple cores, and multiple processors; even your phone—or maybe even your watch—now has multiple cores or multiple processors inside it.

When these multiple cores or multiple processes try to work together, adding two processors doesn't always equal 2X the performance. Sometimes it can even be less than 1X because the competing processors might be working against each other. For sure, the benefit you get will be less than 2X because some overhead is spent on coordination. Now imagine if you have a 64-core processor, how would that look? And how would you write code for it? There will always be that one really smart guy on your team that understands the difference between mutexes and semaphores, and that smart guy will act like the cow that gives one can of milk and tips over two. His smarts will make the rest of the team unproductive because, let's be honest, these concepts can be hard to understand, harder to write, and very hard to debug.

It's no surprise that as we're building more complex software, our platforms and languages have also evolved to help us deal with this complexity, so the entire team of mere mortals is productive. Languages have also evolved to support more complex paradigms, and JavaScript is no exception.

In this article, I'm going to explore a back-to-basics approach by explaining asynchronous programming in JavaScript. Let's get started.

A Little Pretext

Before I get started, there's a little challenge I must deal with. Demonstrating asynchronous concepts through text and images as they appear in this article can be difficult. So I'm going to describe the various concepts, but you should also grab the associated code for this article at the following URL: https://github.com/maliksahil/asyncjavascript. I recommend running the code side-by-side as you read this article as that will help cement the concepts.

Let me first start by describing the code you've cloned here.

Code Structure

The code you've cloned is a simple NodeJS project. It uses ExpressJS to serve a static website from the public folder, as can be seen in Figure 1.

To run it, just follow the instructions in readme.md. At a high level, it's a matter of running npm install, and hitting F5 in VSCode. Additionally, you'll see that in index.js as seen in Listing 1, in addition to serving the public folder as a static website, I'm also exposing an API at the "/random" URL. This API is quite simple; it waits for five seconds and returns a random number. I have a wait here to demonstrate what effect blocking processes, such as this wait, can have on your browser's UI. The reason I've written this code in NodeJS is because I could use identical code for the wait on both client and server, although this isn't a hard requirement.

Let's also briefly examine the client side code. The index.html file is quite simple. It references jQuery to help simplify some of the JavaScript I'll write. It has a button called btnRandom that calls a JavaScript function. It has a div called "output" where the JavaScript can show messages to communicate to the user. The idea is that I'll call a function that blocks for five seconds, and I'll show a "start" and the random number output message when the function starts and then when it's done.

Additionally, I've placed a text area where users can type freely. The function takes five seconds to complete, so what I'd like you to try is, within those five seconds, try to type in that text area. If you can type in that text area while the function is executing, that's a non-blocking UI, which is a good user interface. But if the UI is frozen and you cannot type in that textbox while your function runs, that's a bad user experience.

The user interface of my simple HTML file looks like Figure 2. The index.html file can be seen in Listing 2.

A Synchronous Call

Let's first start by writing a simple JavaScript function that takes five seconds to execute. At the end of five seconds, it simply returns a random number. This function is basically the same function you see in index.js called "waitForMilliSeconds", except that for now, I'll just run it client side, and the function itself will return a random number.

The function is called on the click event of the button you see in Figure 2. As soon as the user clicks on the button, it shows a "Start" message in the output div. Then the function runs for five seconds and five seconds later, you should see a random number shown in the output div. The code for the synchronous call is referenced in index.html as scripts/1.sync.js and can be seen in Listing 3.

Figure 1: The project structure
Figure 2: The user interface

Sahil Malik
www.winsmarts.com
@sahilmalik

Sahil Malik is a Microsoft MVP, INETA speaker, a .NET author, consultant, and trainer.

Sahil loves interacting with fellow geeks in real time. His talks and trainings are full of humor and practical nuggets.

His areas of expertise are cross-platform Mobile app development, Microsoft anything, and security and identity.



Listing 1: The index.js server side file

const express = require('express');
const app = express();
const PORT = 3000;

app.use(express.static('public'));
app.get("/random", (request, response) => {
  waitForMilliSeconds(5000);
  const random = {
    "random": Math.floor(Math.random() * 100)
  };
  response.send(random);
});

app.listen(PORT, () =>
  console.log(`Server listening on port: ${PORT}`));

function waitForMilliSeconds(milliSeconds) {
  var date = new Date();
  var curDate = null;
  do { curDate = new Date(); }
  while (curDate - date < milliSeconds);
}

Go ahead and run npm start (or F5 in VSCode) and visit the browser at http://localhost:3000. Click the "Click" button from Figure 2, and immediately try typing in the text area below. What do you see?

You'll notice that until the call completes and the random number is shown in the output div, the page is essentially frozen. It accepts no input from the user. In fact, the page is dead: It accepts or responds to no events. This is certainly a bad user experience but may also lead to inexplicable bugs.

Callbacks

Now let's explore a technique in JavaScript called callbacks. If you remember what you first did, this line stands out:

randomNum = waitForMilliSeconds(5000);

This means that the return value of waitForMilliSeconds is what gets populated in randomNum.

We've learned from other languages that you could pass in a function pointer to waitForMilliSeconds. Wouldn't it be nice if waitForMilliSeconds could call that function pointer when it's done with its five seconds of blocking work?

To facilitate that, modify your waitForMilliSeconds function, as shown below in the next snippet. The logic has been trimmed for brevity and the only change is that instead of sending back a return value, you're now accepting a parameter called callbackFunc. When you're done with your work, you simply call this callback function and pass in the result.

function waitForMilliSeconds(
  milliSeconds, callbackFunc) {
  var date = new Date();
  ..
  random = Math.floor(Math.random() * 100);
  callbackFunc(random);
}

Accordingly, how you call this method also changes. This can be seen below.

waitForMilliSeconds(5000, (random) => {
  $("#output").text(random);
});

As you can see, you're now calling waitForMilliSeconds with two input parameters. The first parameter instructs the function to wait for five seconds and the second is an anonymous function parameter. This function gets called once waitForMilliSeconds is done and it calls the callbackFunc variable function.

Listing 2: index.html

<html>

<head>
<script
  src="https://code.jquery.com/jquery-3.7.1.min.js"
  integrity=".."
  crossorigin="anonymous"></script>
</head>

<body>
Press button to make async call:
<button type="button" id="btnRandom">Click</button>
<br />
<div id="output"></div>
<script src="scripts/1.sync.js"></script>
<br/>
<textarea rows="5" cols="40">Try typing here
while the long running call is running</textarea>
</body>
</html>

Listing 3: 1.sync.js client side synchronous JS

$("#btnRandom").on("click", function () {
  $("#output").text("Start");
  randomNum = waitForMilliSeconds(5000);
  $("#output").text(randomNum);
});

function waitForMilliSeconds(milliSeconds) {
  var date = new Date();
  var curDate = null;
  do { curDate = new Date(); }
  while (curDate - date < milliSeconds);
  return Math.floor(Math.random() * 100);
}

Before you run this, what do you expect the behavior will be? Will it block the UI or not? Let's find out. Go ahead and run this. You'll notice that although the code seems to have a different structure, the callback seemed to have no effect on the single-threaded nature of the code. The UI still blocks.

Bummer!

Well, at least you learned a new concept here, and that such callbacks have no effect on the single-threaded nature of execution.

Promise

JavaScript has yet another way of structuring your code, which is Promises. You may have seen them when writing AJAX code, where your code can make an HTTP call to the server without refreshing the whole page. This is how complex apps such as Google Maps were born. Before Google Maps, navigating a map required you to refresh the whole page. It was a horrible user experience, until someone showed us a better way. Technically speaking, Outlook for the web was leveraging this technique already, but hey, this isn't a race.



There's also pure client-side code. What if you were to use Promises instead of callbacks? Will the code not be single threaded then? Let's find out.

The idea behind a JavaScript Promise is that the function doesn't return a value, but instead returns a Promise. The Promise will either resolve (succeed) or reject (fail). When it resolves, it can send back a success output. If it fails, it can send back an error.

Your caller then uses standard paradigms around Promises to handle success with a then() method.

Let's modify the waitForMilliSeconds method to now return a Promise and resolve it on success. You can see this method in Listing 4.

Listing 4: Using Promises

function waitForMilliSeconds(milliSeconds) {
  const myPromise = new Promise((resolve, reject) => {
    var date = new Date();
    ...
    random = Math.floor(Math.random() * 100);
    resolve(random);
  });
  return myPromise;
}

This allows you to write calling code:

waitForMilliSeconds(5000).then( (random) => {
  $("#output").text(random);
});

Let's run this code again and hit the click button. What do you see?

Ah! Yet again, although the code functionally is accurate, it still blocks the UI thread. The code is still single threaded. It responds to no input while the function is running and suddenly reacts to key strokes queued up in those five seconds.

Awful! Promises and lies. I guess that didn't solve the problem either.

XHR

At this point, you must be thinking that you've written so much AJAX code, and that code leveraged Promises, callbacks, and other paradigms. For sure you didn't see that blocking behavior there. What is so special about AJAX that it doesn't block?

There are many ways to write AJAX code. I've referenced jQuery and certainly jQuery has abstractions that help write AJAX code. The most basic way to call AJAX is by using XHR.

The way XHR works is that you instantiate a new instance of XMLHttpRequest. You subscribe to the loadend event and fire your HTTP request. Now whenever the call returns, the loadend event gets called and you get the results. You can process accordingly whether it's an error or success.

Let's start by instantiating the XMLHttpRequest.

const xhr = new XMLHttpRequest();

Before you send the request, let's subscribe to the loadend event. You're going to call an anonymous method when the event gets called, which shows whatever response the server sent.

xhr.addEventListener("loadend", () => {
  $("#output").text(xhr.responseText);
});

Next, let's send the request.

xhr.open("GET", "/random");
xhr.send();

You can see the final code that puts all this together in Listing 5.

Listing 5: Simple XHR request

$("#btnRandom").on("click", function () {
  $("#output").text("Start");
  const xhr = new XMLHttpRequest();
  xhr.addEventListener("loadend", () => {
    $("#output").text(xhr.responseText);
  });

  xhr.open("GET", "/random");

  xhr.send();
  $("#output").text("Sent xhr request");
});

Remember from Listing 1, the server-side code is basically the same code you've been using except now instead of running on the client, it's running on the server. It waits five seconds and sends back a random number.

Now go ahead and run this by referencing this script, pressing F5 to refresh the browser, and clicking the button.

Very interestingly, now the UI doesn't block. How odd is that?

Although this is great, wouldn't it be nice if complex client-side code could be afforded the luxury of being multi-threaded? This XHR-based code feels so complicated. My example was simple, but imagine how this could look with multiple dependencies, inputs dependent on other XHR calls succeeding, timing issues, etc. Ugh!


Promises and XHR

Let's start by cleaning this code up a bit. You've already seen Promises in action. Can you combine XHR and Promises together to help write code that's simpler? Sure! All you'd have to do is abstract out all this XHR code into its own method that returns a Promise. When you do an xhr.send(), just return the Promise. When XHR's loadend event is called, either resolve or reject the Promise as per the return results. This can be seen in Listing 6.

Listing 6: XHR and Promises

function makeXhrCall() {
  const myPromise = new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest();
    xhr.addEventListener("loadend", () => {
      resolve(xhr.responseText);
    });
    xhr.open("GET", "/random");
    xhr.send();
  });
  return myPromise;
}

By doing so, the calling code becomes a lot simpler, as can be seen below.

makeXhrCall().then((output) => {
  $("#output").text(output);
});

Now go ahead and run this code. It runs just like before and it doesn't block the UI thread. Is this because you're using a Promise or that you're using an XHR? Well, you did use a Promise on a loop that was entirely on the client side and that did block the UI. This non-UI blocking magic is built into XHR.
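As an aside, modern browsers also expose this same non-blocking machinery through the newer fetch API, which hands you a Promise directly. The following is my own sketch, not part of this article's sample project; it assumes the same /random endpoint from Listing 1:

// Sketch: call the same /random endpoint with fetch,
// which returns a Promise out of the box
fetch("/random")
  .then((response) => response.json())
  .then((data) => {
    $("#output").text(data.random);
  });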
Async Await

Recently, JavaScript introduced support for the async await keywords. The Promise code looks cleaner than pure XHR code, but it still feels a bit inside out. Imagine if you had three Promises you needed to wait for and those inputs go into two more Promises, which finally go into another AJAX call? Luckily, Promises do have concepts such as Promise.all, which do help. They're an improvement over pure XHR code. However, the code becomes severely indented and you're stuck in a hell hole of brace matching and keeping your code under 80 characters width.
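To illustrate, here's a sketch of my own (reusing the makeXhrCall method from Listing 6, not code from the sample project) that waits on three Promises in one shot with Promise.all:

// Kick off three calls at once; the combined Promise
// resolves when all three have resolved.
Promise.all([makeXhrCall(), makeXhrCall(), makeXhrCall()])
  .then(([a, b, c]) => {
    $("#output").text(a + " " + b + " " + c);
  });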
• easy multicolor hit-highlighting
Async await helps you tackle that problem. Look at the sync code example from Listing 3. To convert that from sync to async, all you have to do is add the async keyword in front of the function. In other words, change this line of code:

function waitForMilliSeconds(milliSeconds) {

into this line of code:

async function waitForMilliSeconds(milliSeconds) {

That's it! Okay, I lied. Your calling pattern changes slightly too, but for the better. Your calling code changes like this:

$("#btnRandom").on("click", async () => {
  $("#output").text("Start");
  random =
    await waitForMilliSeconds(5000);
  $("#output").text(random);
});

The changes aren't severe at all. All you had to do was follow two rules:

• Put "await" in front of a method call that is intended to be async.
• You cannot use await in any method that isn't async itself.
Retrieval® since 1991
Now you can finally start writing async code that doesn’t
look like a severely indented case of brace-matching dis- dtSearch.com 1-800-IT-FINDS
ease. You can extrapolate this to an XHR example, as can
be seen in Listing 7.



Listing 7: Async XHR calls

let makeXhrCall = async () => {
  const myPromise = new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest();
    xhr.addEventListener("loadend", () => {
      resolve(xhr.responseText);
    });
    xhr.open("GET", "/random");
    xhr.send();
  });
  return myPromise;
}

Now when you run this code, although it's quite simplified, unless it's an XHR call, it still seems to block the UI.

Async Await with Workers

What's so peculiar about XHR that it doesn't block the UI thread? It seems to work on an Eventing model. Luckily for you, you can leverage that same capability using workers in JavaScript. Workers in JavaScript is a topic in itself, but for my purposes here, you need to separate out the blocking client-side code in its own worker, which, in this case, means its own JavaScript file.

This worker will now listen for, and respond to, messages after the work is done. So go ahead and create a new file under the Scripts folder called hardwork.js. Here's where you intend to run your hard-working loop that takes five seconds to complete.

Inside this hardwork.js, you first need to add an event listener for "message". This looks like:

addEventListener("message", (message) => {
  if (message.data.command === "Start") {
    waitForMilliSeconds(
      message.data.milliSeconds)
  }
});

As you can see, you're adding an event listener for "message". Whenever this is called, you can assume that some input parameters are sent to you, in this case, via "message.data.command", which tells you what action to take. The code is pretty simple, so you just look for "Start". Additionally, the data object has another property called milliSeconds that tells you how long to wait before returning a random number.

As you will see shortly, you have full control on the "data" property and you can define your own structure. Let's get to that in a moment. First let's focus on what "waitForMilliSeconds" looks like in this worker world.

Below is an abbreviated version of waitForMilliSeconds that communicates back to the caller via the postMessage method.

function waitForMilliSeconds(milliSeconds){
  ...
  random = Math.floor(Math.random() * 100);
  postMessage(random);
}

I removed the actual logic for brevity, but it's the same waitForMilliSeconds function I've used numerous times in this article already. That's it. This is what the worker looks like.

Now let's see how calling this worker looks.

Listing 8: Async Await with Workers

const worker = new Worker('./scripts/hardwork.js');

$("#btnRandom").on("click", async () => {
  $("#output").text("Start");
  random = await doHardWork();
  $("#output").text(random);
});

async function doHardWork() {
  return new Promise((resolve) => {
    worker.postMessage({
      command: "Start", "milliSeconds": 5000 });
    worker.addEventListener("message", (output) => {
      resolve(output.data);
    });
  });
}

As can be seen in Listing 8, you first start by creating a worker object using the "hardwork.js" file. Then, in the doHardWork method, post a message with a custom object structure, which is made available as the message.data property to the worker. I then add a listener for "message" and whenever a message is available, which is after the five second loop, I grab the random number output and resolve it. I then use the async await pattern to set the output div's value.

Wait a minute. This looks quite similar to the XHR object, doesn't it? Over there also, you had an addEventListener, only the actual event was different.

Go ahead and run this code. Remember that this is a client-side loop. Go ahead and click the button. The code behaves as before, but now the UI thread no longer locks. The text area remains responsive while the loop is running without XHR.

Great. You've finally achieved the panacea of clean code that doesn't block the UI thread.

Summary

As computers become increasingly powerful and complex, it's reasonable to assume that we're going to have to write increasingly complicated code. To write that complicated code, we're going to have to learn new patterns, such as the asynchronous patterns available in JavaScript.

In this article, I discussed many such patterns, and I built a story bit by bit to show you how you can use the various paradigms in JavaScript to create non-blocking code that's easy to maintain.

There's a lot more to learn, of course, and I'm sure, as time moves forward, greater demands and better patterns will emerge.

This is the beauty of our industry. Never a boring day.

Until next time, happy coding!

 Sahil Malik

SPONSORED SIDEBAR

Adding Copilots to Your Apps

The future is here now and you don't want to get left behind. Unlock the true potential of your software applications by adding Copilots.

CODE Consulting can assess your applications and provide you with a roadmap for adding Copilot features and optionally assist you in adding them to your applications.

Reach out to us today to get your application assessment scheduled.
www.codemag.com/ai


Copilot • AI • GitHub • .NET • Visual Studio • C# • Azure AI • Blazor • ChatGPT • ASP.NET • OpenAI • and more…

SEPT 10-12 WORKSHOPS SEPT 8, 9 & 13


MGM GRAND, LAS VEGAS, NEVADA
Let the Experts Guide You on Your AI Journey:

SCOTT HANSELMAN, Vice President, Developer Community, Microsoft
SCOTT HUNTER, Vice President, Product Management for Azure Developer Experience, Microsoft
MILAN KAUR, Director Product Manager, Microsoft
DAN WAHLIN, Principal Cloud Developer Advocate, Microsoft
HEATHER DOWNING, International Speaker & Developer Advocate

JEFF FRITZ, Principal Program Manager, Microsoft
JOHN PAPA, Partner GM - Cloud Advocacy, Microsoft
MARKUS EGGER, President and Chief Software Architect, CODE Group
ZOINER TEJADA, CEO and Architect, Solliance
MICHELE LEROUX BUSTAMANTE, President & Architect, Solliance

[email protected]

BONUS: AI for Decision Makers Track

Questions Answered, Strategies Delivered, Relationships Built


ONLINE QUICK ID 2405031

Manipulating JSON Documents in .NET 8
JavaScript Object Notation (JSON) is a great way of storing configuration settings for a .NET application. JSON is also an efficient method to transfer data from one machine to another. JSON is easy to create, is human readable, and is easy for programming languages to parse and generate. This text-based format for representing data is language agnostic and thus easy to use in C#, JavaScript, Python, and almost any programming language existing today. In this article, you're going to learn multiple methods of creating and manipulating JSON documents in .NET 8. In addition, you'll learn how to serialize and deserialize C# objects to and from JSON.

JSON Structure

As shown in Figure 1, a JSON object is made up of a collection of name/value pairs. You may hear these also expressed as key/value pairs, or, in C# terms, this is a property and a value. In C#, a JSON object is the equivalent of an object, record, or a struct with property names, and the values you assign to those properties. JSON can also be a collection of one or more objects (Figure 2 and Figure 3). In C#, this would be the equivalent of a dictionary, hash table, keyed list, or associative array. Although I'm going to use C# in this article, the constructs mentioned are universal across all modern programming languages. Because of this, any language can manipulate these JSON objects easily.

A JSON object begins with a left brace and ends with a right brace (see Figure 1). Each name is followed by a colon and the name/value pairs are separated by a comma. Each name must be wrapped within double quotes. All string values must be wrapped within double quotes.

A JSON array is an ordered collection of values that begins with a left bracket and ends with a right bracket. Each value within the array is separated by a comma. The values within the array may be a single item, such as a string or a number (Figure 2), or each value within the array may be a JSON object (Figure 3).

Nested Objects

Each value after the name can be one of the various data types shown in Table 1. Although this isn't a large list of data types, all data can be expressed with just this set of types.

Look at Figure 4 to see an example of a JSON object that has a value of each of the data types for each name. The "name" property has a string value "John Smith" enclosed within double quotes. The "age" property is a numeric with a value of 31. The "ssn" property is empty as represented by the keyword null. The "address" property is another JSON object and is thus enclosed by curly braces. The "phoneNumbers" property is a JSON array, with each value of the array another JSON object. As you see, JSON is very flexible and can represent almost any type of structure you can possibly need in your programming tasks.

JSON Manipulation Classes

There are several classes within a couple of namespaces in .NET 8 that you use to work with JSON in your C# applications. The first namespace is System.Text.Json and the second namespace is System.Text.Json.Nodes. You have most likely already used the JsonSerializer class to serialize C# objects to JSON, or to deserialize a JSON string into a C# object. However, there are other classes you can use to add nodes to a JSON object, set and retrieve values, and create new JSON objects.

The System.Text.Json Namespace

Within this namespace are classes and structures to help you manipulate JSON documents including the JsonSerializer class. Table 2 provides a description of each of the classes within this namespace. Each of these classes is illustrated throughout this article.

Paul D. Sheriff
http://www.pdsa.com

Paul has been working in the IT industry since 1985. In that time, he has successfully assisted hundreds of companies architect software applications to solve their toughest business problems. Paul has been a teacher and mentor through various mediums such as video courses, blogs, articles and speaking engagements at user groups and conferences around the world. Paul has multiple courses in the www.pluralsight.com library (https://bit.ly/3gvXgvj) and on Udemy.com (https://bit.ly/3WOK8kX) on topics ranging from C#, LINQ, JavaScript, Angular, MVC, WPF, XML, jQuery, and Bootstrap. Contact Paul at [email protected].

Figure 1: JSON objects are made up of name/value pairs.
Figure 2: JSON arrays can have just simple values as their elements.
Figure 3: Each element in a JSON array can be a JSON object.
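As a quick refresher on the JsonSerializer round trip mentioned above, here is a minimal sketch; the Person record is my own example type, not part of this article's code:

using System.Text.Json;

// Serialize a C# object into a JSON string
Person person = new("John Smith", 31);
string json = JsonSerializer.Serialize(person);
Console.WriteLine(json); // {"Name":"John Smith","Age":31}

// Deserialize the JSON string back into a C# object
Person? copy = JsonSerializer.Deserialize<Person>(json);

public record Person(string Name, int Age);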



Data Type: Description
Number: A signed number value. The number may be a whole number (integer) or a decimal with a fractional part.
String: A set of zero or more Unicode characters enclosed within double quotes. Strings support a backslash escaping syntax just as you find in C#.
Boolean: Either a true or a false value.
Object: A JSON object using the JSON object syntax previously explained.
Array: A set of zero or more values using the JSON array syntax previously explained. Each element within the array may be of any of the types shown in this table.
Null: Use the keyword null to signify an empty value for this name.
Table 1: JSON has a limited set of data types available for a value.

Class / Structure: Description
JsonDocument: A class that represents an immutable (read-only) document object model (DOM) of a JSON object. Use this class when you don't have a C# class to deserialize the JSON into and you need to access the name/value pairs programmatically.
JsonProperty: This structure represents a single JSON property within a JSON object. For example, "colorId": 1 is an example of a JsonProperty.
JsonElement: This structure represents a single value within a JSON property. For example, the number one (1) within the property "colorId": 1 is the JsonElement.
JsonSerializer: A class used to serialize a C# object into a JSON string, or to deserialize a JSON string into a C# object.
Utf8JsonWriter: A class that can be used to emit a JSON document one property at a time. This class is a high-performance, forward-only, non-cached method of writing UTF-8 encoded JSON text.
Table 2: The System.Text.Json namespace contains classes and structures for manipulating and serializing JSON objects
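To make the first row of Table 2 concrete, here is a minimal sketch of reading values through the read-only DOM; the sample JSON is my own, borrowing the "colorId" property from the table:

using System.Text.Json;

string json = """{"colorId": 1, "name": "Red"}""";

// Parse into a read-only DOM; JsonDocument is IDisposable
using (JsonDocument doc = JsonDocument.Parse(json))
{
    // Read name/value pairs without defining a C# class
    JsonElement root = doc.RootElement;
    int colorId = root.GetProperty("colorId").GetInt32();
    string? name = root.GetProperty("name").GetString();
    Console.WriteLine($"{colorId}: {name}");
}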

Class: Description
JsonObject: This class represents a mutable (read/write) JSON document. This class is like the JsonDocument class from the System.Text.Json namespace.
JsonNode: This class represents a mutable (read/write) node within a JSON document. This class is like the JsonProperty class from the System.Text.Json namespace.
JsonValue: This class represents a mutable (read/write) JSON value. This class is like the JsonElement class from the System.Text.Json namespace.
JsonArray: This class represents a mutable (read/write) JSON array.
Table 3: The System.Text.Json.Nodes namespace contains classes for manipulating in-memory JSON objects as a document object model

All the classes and structures within the System.Text.Json namespace are immutable. This means that once they've been instantiated with data, they cannot be modified in any way.

The System.Text.Json.Nodes Namespace

The classes in this namespace (Table 3) are for creating and manipulating in-memory JSON documents using a DOM. These classes provide random access to JSON elements, allow adding, editing, and deleting elements, and can convert dictionary and key value pairs into JSON documents.

Build In-Memory JSON Documents

I recommend following along with this article to ensure that you have a solid foundation for manipulating JSON documents. The tools needed to follow the step-by-step instructions in this article are .NET 6, 7, or 8, and either Visual Studio 2022 or VS Code. Create a console application named JsonSamples. The goal of this first example is to create the following JSON object.

{
  "name": "John Smith",
  "age": 31
}

Of course, there are many ways to create this JSON object using C# and the JSON classes. For this first example, open the Program.cs file, delete any lines of code in this file, and add the following Using statement as the first line.

using System.Text.Json.Nodes;

Below the Using statement, create a variable named jo that is an instance of a JsonObject. The JsonObject provides the ability to add, edit, and delete nodes within the JSON document.

JsonObject jo = new();

Figure 4: Each element in a JSON object can be another object, or an array.



Add two name/value pairs to the JsonObject using the C# KeyValuePair class, as shown in the following code.

jo.Add(new KeyValuePair<string,
  JsonNode?>("name", "John Smith"));

jo.Add(new KeyValuePair<string,
  JsonNode?>("age", 31));

Write out the JSON document using the ToString() method on the JsonObject.

Console.WriteLine(jo.ToString());

The output from this statement is the JSON object shown earlier. Notice how the JSON is nicely formatted. If you wish to remove all of the carriage returns, line feeds, and whitespace between all the characters, change the call from the ToString() method to the ToJsonString() method instead. You should then see the following JSON appear in the console window.

{"name":"John Smith","age":31}

Use a New C# 12 Feature

In C# 12 (.NET 8) you can create the JsonObject using the following syntax. Note that the square brackets used in the code are a new C# 12 feature that allows you to initialize the new JsonObject object without using the new keyword.

JsonObject jo =
[
  new KeyValuePair<string,
    JsonNode?>("name", "John Smith"),
  new KeyValuePair<string,
    JsonNode?>("age", 31)
];

Using a Dictionary Class

You can pass an instance of a Dictionary<string, JsonNode?> to the constructor of the JsonObject to create your JSON document. Replace the code in the Program.cs file with the following:

using System.Text.Json.Nodes;

Dictionary<string, JsonNode?> dict = new() {
  ["name"] = "John Smith",
  ["age"] = 31
};

JsonObject jo = new(dict);

Console.WriteLine(jo.ToString());

Using a JsonValue Object

The Add() method on the JsonObject class also allows you to pass in the name and a JsonValue object. Pass in the value to static method Create() on the JsonValue class to create a new JsonValue object.

using System.Text.Json.Nodes;

JsonObject jo = new() {
  { "name", JsonValue.Create("John Smith") },
  { "age", JsonValue.Create(31) }
};

Console.WriteLine(jo.ToString());

Because the JsonObject can be initialized with a Dictionary object, you may use the same syntax you used to create a Dictionary object in the constructor of the JsonObject, as shown in the following code:

using System.Text.Json.Nodes;

JsonObject jo = new() {
  ["name"] = "John Smith",
  ["age"] = 31
};

Console.WriteLine(jo.ToString());

Create Nested JSON Objects

Not all JSON objects are simple name/value pairs. Sometimes you need one of the properties to be another JSON object. The "address" property is another JSON object that has its own set of name/value pairs, as shown in the following snippet:

{
  "name": "John Smith",
  "age": "31",
  "ssn": null,
  "isActive": true,
  "address": {
    "street": "1 Main Street",
    "city": "Nashville",
    "stateProvince": "TN",
    "postalCode": "37011"
  }
}

To create the above JSON object, create a new instance of a JsonObject and, using a Dictionary object, build the structure you need, as shown in the following code snippet:

using System.Text.Json.Nodes;

JsonObject jo = new() {
  ["customer"] = "Acme",
  ["IsActive"] = true,
  ["address"] = new JsonObject() {
    ["street"] = "123 Main Street",
    ["city"] = "Walla Walla",
    ["stateProvince"] = "WA",
    ["postalCode"] = "99362",
    ["country"] = "USA"
  }
};

Console.WriteLine(jo.ToString());

Parse JSON Strings into Objects

JSON documents are commonly stored as strings in a file or in memory. Instead of attempting to read specific values in the JSON using File IO or string parsing, you can parse the string into a JsonNode object. Once in this object, it's very easy to retrieve single values, or entire nodes.



Listing 1: To save typing, I have created a class with some JSON documents

namespace JsonSamples;

public class JsonStrings
{
  public const string PERSON =
  @"{
    ""name"": ""John Smith"",
    ""age"": 31,
    ""ssn"": null,
    ""isActive"": true
  }";

  public const string PHONE_NUMBERS =
  @"[
    {
      ""type"": ""Home"",
      ""number"": ""615.123.4567""
    },
    {
      ""type"": ""Mobile"",
      ""number"": ""615.345.6789""
    },
    {
      ""type"": ""Work"",
      ""number"": ""615.987.6543""
    }
  ]";

  public const string PERSON_ADDRESS =
  @"{
    ""name"": ""John Smith"",
    ""age"": 31,
    ""ssn"": null,
    ""isActive"": true,
    ""address"": {
      ""street"": ""1 Main Street"",
      ""city"": ""Nashville"",
      ""state"": ""TN"",
      ""postalCode"": ""37011""
    }
  }";

  public const string PERSON_ADDRESS_PHONES =
  @"{
    ""name"": ""John Smith"",
    ""age"": 31,
    ""ssn"": null,
    ""isActive"": true,
    ""address"": {
      ""street"": ""1 Main Street"",
      ""city"": ""Nashville"",
      ""state"": ""TN"",
      ""postalCode"": ""37011""
    },
    ""phoneNumbers"": [
      {
        ""type"": ""Home"",
        ""number"": ""615.123.4567""
      },
      {
        ""type"": ""Mobile"",
        ""number"": ""615.345.6789""
      },
      {
        ""type"": ""Work"",
        ""number"": ""615.987.6543""
      }
    ]
  }";
}

Instead of repeating the same set of JSON strings throughout
this article, I’m creating a single class (Listing 1) with some
string constants to represent different JSON documents. The
PERSON constant is a JSON document that contains four
properties to represent a single person. The PHONE_NUMBERS
constant is a JSON array of a few phone numbers where each
number has a type property with a value such as Home,
Mobile, or Work. The PERSON_ADDRESS constant contains a
nested address property that is another object with street,
city, state, and postalCode properties. The PERSON_ADDRESS_PHONES
constant contains person information, address information,
and an array of phone numbers all in one JSON object.

Create a JsonNode Object
The JsonNode class is probably the class you will use most
often. It’s very flexible and contains most of the functionality
you need when manipulating JSON documents. For example,
to parse the PERSON constant into a JsonNode object, place
the following code into the Program.cs file:

using JsonSamples;
using System.Text.Json.Nodes;

// Parse string into a JsonNode object
JsonNode? jn = JsonNode.Parse(JsonStrings.PERSON);

Console.WriteLine(jn!.ToString());
Console.WriteLine(jn!.GetValueKind());

Run the console application and you should see the following
displayed in the console window:

{
  "name": "John Smith",
  "age": 31,
  "ssn": null,
  "isActive": true
}
Object

The first Console.WriteLine() statement emits the JSON
object, and the second Console.WriteLine() statement reports
the kind of value contained in the JsonNode object, which in
this case is Object. Of course, you may pass to the Parse()
method any of the other constant strings. Write the following
code in the Program.cs file to parse the PHONE_NUMBERS
constant into a JsonNode object:

using JsonSamples;
using System.Text.Json.Nodes;

// Parse string into a JsonNode object
JsonNode? jn = JsonNode.Parse(
  JsonStrings.PHONE_NUMBERS);

Console.WriteLine(jn?.ToString());
Console.WriteLine(jn!.GetValueKind());

Run the console application and you should see the following
displayed in the console window:



"number": "615.123.4567" {
}, "name": "John Smith",
{ "age": 31,
"type": "Mobile", "ssn": null,
"number": "615.345.6789" "isActive": true
}, }
{ Object
"type": "Work",
"number": "615.987.6543"
} Read Data from JSON Documents
] There are a few different ways you can read individual
Array name/value pairs from a JSON document. Both the Json-
Element and the JsonNode classes allow you to get at the
Notice that in this case, the GetValueKind() method re- data within the JSON.
ports this as an Array instead of an Object. When the
JSON string that is read in starts with a square bracket, Once you’ve parsed some JSON into a JsonDocument ob-
it’s a JSON array instead of a JSON object. ject, you must always use the RootElement property to
retrieve specific values within the JSON document. You
Create a JsonDocument Object can either place the RootElement property into a Json-
Earlier in this article, you learned that you could add items Element object, or you can use the full path of the Json-
to a JsonObject in its constructor. There’s no constructor Document.RootElement property. Write the code shown
for the JsonDocument object, so you must use the Parse() in Listing 2 into the Program.cs file.
method to get valid JSON data into this object. The Json-
Document object is a very efficient object to use when In Listing 2, you parse the PERSON string into a Json-
all you need to do is to read data from a JSON document. Document. You then get the property called "name"
Once the data is parsed into the JsonDocument, access from the JSON document and place this into a JsonEle-
the RootElement property to retrieve the JSON. Write the ment object named je. Because the "name" property is a
following code into the Program.cs file: string value, call the GetString() method on the je vari-
able to extract the value from the "name" property. If
using JsonSamples; you don’t wish to use a separate variable, you may ac-
using System.Text.Json; cess the jd.RootElement property directly by calling the
GetProperty("age") to get the "age" element. Call the
// Parse string into a JsonDocument object GetInt32() method on this element to extract the "age"
using JsonDocument jd = JsonDocument.Parse(JsonStrings.PERSON); value and display it on the console. Run the application
and you should see the following output from Listing 2
// Get Root JsonElement structure displayed in the console window:
JsonElement je = jd.RootElement;
Name=John Smith
Console.WriteLine(je.ToString()); Age=31
Console.WriteLine(je.ValueKind);
Retrieve Data in a Nested Object
In the code above, you retrieve the RootElement prop- Look back at Listing 1 and notice that the PERSON_AD-
erty and place it into a new variable of the type Json- DRESS constant is the one with the nested "address" ob-
Element. It’s this class that you’re going to use to read ject. To access the "city" value within the "address", re-
the data from JSON document, as you shall soon learn. place the code in the Program.cs file with the following:
Run the application and you should see the following dis-
played in the console window: using JsonSamples;
using System.Text.Json;

Listing 2: Retrieve values from the JSON using the RootElement property // Parse string into a JsonDocument object
using JsonSamples; using JsonDocument jd =
using System.Text.Json; JsonDocument.Parse(JsonStrings.PERSON_ADDRESS);

// Parse string into a JsonDocument object // Get a specific property from JSON
using JsonDocument jd =
JsonDocument.Parse(JsonStrings.PERSON); JsonElement je = jd.RootElement
.GetProperty("address").GetProperty("city");
// Get a specific property from JSON
JsonElement je = // Get the string value from the JsonElement
jd.RootElement!.GetProperty("name");
Console.WriteLine($"City={je!.GetString()}");
// Get the numeric value
// from the JsonElement After parsing the string into the JsonDocument object, access
Console.WriteLine( the RootElement property and call the GetProperty("address") to
$"Name={je!.GetString()}"); get to the "address" property, and then call GetProperty("city")
Console.WriteLine(
$"Age={jd.RootElement! to get to the "city" property. Once you have this element in
.GetProperty("age")!.GetInt32()}"); a JsonElement object, call the GetString() method to retrieve
the value for the "city" property.



Reading Data Using JsonNode
Unlike the JsonDocument object, the JsonNode object has
an indexer that allows you to specify the name in square
brackets to retrieve that specific node, as shown in the
following code:

// Parse string into a JsonNode object
JsonNode? jn =
  JsonNode.Parse(JsonStrings.PERSON);

// Get the age node
JsonNode? node = jn!["age"];

With this new JsonNode object, node, retrieve the value
as a JsonValue using the AsValue() method. With the
JsonValue object, you can report the path of where this
value came from, the type (string, number, Boolean, etc.),
and get the value itself, as shown in the following code:

// Get the value as a JsonValue
JsonValue value = node!.AsValue();
Console.WriteLine($"Path={value.GetPath()}");
Console.WriteLine($"Type={value.GetValueKind()}");
Console.WriteLine($"Age={value}");

Another option is to retrieve the value using the
GetValue<T>() method, as shown in the following code:

// Get the value as an integer
int age = node!.GetValue<int>();
Console.WriteLine($"Age={age}");

If you type the above code snippets into the Program.cs
file and run the application, the following should be
displayed in the console window:

Path=$.age
Type=Number
Age=31
Age=31

Retrieve Data in a Nested Object
Look back at Listing 1 at the PERSON_ADDRESS constant.
This JSON string is the one with the nested "address"
object. To access the "city" value within the "address",
replace the code in the Program.cs file with the following:

using JsonSamples;
using System.Text.Json.Nodes;

// Parse string into a JsonNode object
JsonNode? jn = JsonNode.Parse(
  JsonStrings.PERSON_ADDRESS);

// Get the address.city node
JsonNode? node =
  jn!["address"]!["city"];

// Display string value from the JsonNode
Console.WriteLine(
  $"City={node!.AsValue()}");

The above code parses the PERSON_ADDRESS string into a
JsonNode object. It then uses an indexer on the jn variable
to drill down to the address.city node. This node is placed
into a new JsonNode object named node. The value of the
"city" property is retrieved using the AsValue() method and
displayed on the console window when you run this application.

Add, Edit, and Delete Nodes
To add a new name/value pair to a JSON document, parse
the PERSON JSON string constant and convert the result to
a JsonObject using the AsObject() method. Once you have
a JsonObject, use the Add() method to create a new
name/value pair, in this case "hairColor": "Brown".

using JsonSamples;
using System.Text.Json.Nodes;

// Parse string into a JsonObject
JsonObject? jo = JsonNode.Parse(
  JsonStrings.PERSON)?.AsObject();

jo?.Add("hairColor", JsonValue.Create("Brown"));

Console.WriteLine(jo?.ToString());

Replace the code in the Program.cs file with the code listed
above and run the application to see the following displayed
in the console window:

{
  "name": "John Smith",
  "age": 31,
  "ssn": null,
  "isActive": true,
  "hairColor": "Brown"
}

Updating a Node
Use the JsonNode object to update a value in a name/value
pair. Parse the JSON into a JsonNode object, then access
the name using an indexer. Set the value using the equal
sign just as you would any normal assignment in .NET.

using JsonSamples;
using System.Text.Json.Nodes;

// Parse string into a JsonNode object
JsonNode? jo = JsonNode.Parse(
  JsonStrings.PERSON);

jo!["age"] = 42;

Console.WriteLine(jo?.ToString());

Replace the code in the Program.cs file with the code listed
above and run the application to see the following displayed
in the console window. Notice that the age value has changed
from 31 to 42.



"age": 42, object separated by commas for each element you wish
"ssn": null, to create in the array. Write the following code into the
"isActive": true Program.cs file:
}
using System.Text.Json.Nodes;
Deleting a Node
The Remove() method on a JsonObject is used to delete a JsonArray ja = [
name/value pair from a JSON document. Create a JsonOb- new JsonObject() {
ject object out of the JSON PERSON string constant. ["name"] = "John Smith",
Once you have a JsonObject, use the Remove() method, ["age"] = 31
passing in the name you wish to remove from the JSON. },
In the code below, you remove the "age" name/value new JsonObject() {
pair: ["name"] = "Sally Jones",
["age"] = 33
using JsonSamples; }
using System.Text.Json.Nodes; ];

// Parse string into a JsonObject Console.WriteLine(ja.ToString());


JsonObject? jo = JsonNode.Parse( Console.WriteLine(ja.GetValueKind());
JsonStrings.PERSON)?.AsObject();
Run the application and you should see the following dis-
jo?.Remove("age"); played in the console window:

Console.WriteLine(jo?.ToString()); [
{
Replace the code in the Program.cs file with the code "name": "John Smith",
listed above and run the application to see the following "age": 31
displayed in the console window. The "age": 31 name/ },
value pair has been removed from the JSON document. {
"name": "Sally Jones",
{ "age": 33
"name": "John Smith", }
"ssn": null, ]
"isActive": true Array
}
Manipulate an Array
Like most arrays in .NET, you can easily add and remove
Working with Arrays elements within the array. Given the previous JsonArray
In addition to a simple object, JSON can contain arrays object declaration, you can insert a new entry into the
of strings, numbers, Booleans, and JSON objects. Instead array by adding the following code after the declaration.
of using the JsonObject to represent a JSON document, The Insert() method lets you specify where in the array
use the JsonArray class to represent a list of items. For you wish to add the new object. In this case, you are
example, to create an array of string values, replace the adding a new element into the first position of the array.
code in the Program.cs file with the following:
ja.Insert(0, new JsonObject() {
using System.Text.Json.Nodes; ["name"] = "Charlie Chaplin",
["age"] = "50"
JsonArray ja = [ "John", "Sally", "Charlie"]; });

Console.WriteLine(ja.ToString()); You can always create a new JsonObject first, initialize it


Console.WriteLine(ja.GetValueKind()); with some data, then add that new JsonObject to the Json-
Array using the Add() method. The Add() method adds the
Run the application and you should see the following JsonObject to the end of the array.
code displayed in the console window. Notice that after
the JSON array is displayed, the type reported from the JsonObject jo = new() {
call to the GetValueKind() method is "Array". ["name"] = "Buster Keaton",
["age"] = 55
[ };
"John", ja.Add(jo);
"Sally",
"Charlie" Array elements may be removed by either a reference to
] the actual object, or by using an index number, as shown
Array in the following two lines of code:

To create an array of JSON objects, use the same syntax ja.Remove(jo);


with the square brackets, but create a new JsonObject ja.RemoveAt(2);
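To confirm the edits, you can check the array’s Count property
and dump the array again. This short sketch assumes the ja
variable from the code above:

// After the insert, the add, and the two removals,
// only Charlie Chaplin and John Smith remain
Console.WriteLine(ja.Count);
Console.WriteLine(ja.ToJsonString());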



Extract a JSON Array from a JSON Node
Look back at Listing 1 to view the PERSON_ADDRESS_PHONES
constant string. Within this JSON object, there’s a property
named "phoneNumbers" that contains an array of phone
number objects. To extract the phone number objects from
this string, you first need to parse the string into a JsonNode
object. You then retrieve the value from the "phoneNumbers"
property and convert it into a JsonArray object using the
AsArray() method, as shown in the following code:

using JsonSamples;
using System.Text.Json.Nodes;

// Parse string into a JsonNode object
JsonNode? jn = JsonNode.Parse(
  JsonStrings.PERSON_ADDRESS_PHONES);

// Get the Phone Numbers Array
JsonArray? ja = jn!["phoneNumbers"]!.AsArray();

Console.WriteLine(ja.ToString());
Console.WriteLine(ja.GetValueKind());

Place this code into the Program.cs file and run the
application to display the following in the console window:

[
  {
    "type": "Home",
    "number": "615.123.4567"
  },
  {
    "type": "Mobile",
    "number": "615.345.6789"
  },
  {
    "type": "Work",
    "number": "615.987.6543"
  }
]
Array

Iterate Over Array Values Using JsonNode
Once you have a JsonArray object, you may iterate over
each value in the array to extract the different property
values. In the code shown below, you parse the
PHONE_NUMBERS string constant into a JsonNode object.
Next, convert this JsonNode object into a JsonArray using
the AsArray() method. Use a foreach loop to iterate over
each element in the array and emit the "type" and "number"
properties onto the console window.

using JsonSamples;
using System.Text.Json.Nodes;

// Parse string into a JsonNode object
JsonNode? jn = JsonNode.Parse(
  JsonStrings.PHONE_NUMBERS);

JsonArray? nodes = jn!.AsArray();
foreach (JsonNode? node in nodes) {
  Console.WriteLine($"Type={node!["type"]}, " +
    $"Phone Number={node!["number"]}");
}

Type the above code into the Program.cs file and run the
application to see the following values displayed in the
console window:

Type=Home, Phone Number=615.123.4567
Type=Mobile, Phone Number=615.345.6789
Type=Work, Phone Number=615.987.6543

Iterate Over Array Values Using JsonDocument
If you wish to use the JsonDocument class instead of the
JsonNode class, the following code illustrates the
differences between the two classes. After parsing the
PHONE_NUMBERS string constant into a JsonDocument,
convert the RootElement property, which is an array, into
an ArrayEnumerator using the EnumerateArray() method.
You may now iterate over the array of JsonElement objects
and display the phone number type and the phone number
itself.

using JsonSamples;
using System.Text.Json;

// Parse string into a JsonDocument object
using JsonDocument jd = JsonDocument.Parse(
  JsonStrings.PHONE_NUMBERS);

JsonElement.ArrayEnumerator elements =
  jd.RootElement.EnumerateArray();
foreach (JsonElement elem in elements) {
  Console.WriteLine(
    $"Type={elem.GetProperty("type")}, " +
    $"Phone Number={elem.GetProperty("number")}");
}

Listing 3: Retrieve a single item from an array

using JsonSamples;
using System.Text.Json.Nodes;

string? value = string.Empty;

// Create JsonNode object from Phone Numbers
JsonNode? jn = JsonNode.Parse(
  JsonStrings.PHONE_NUMBERS);

// Cast phone numbers as an array
JsonArray ja = jn!.AsArray();

// Search for Home number
JsonNode? tmp =
  ja.FirstOrDefault(row =>
    row!["type"]!.GetValue<string>() == "Home");

// Extract the home number value
value = tmp!["number"]!.GetValue<string>();
Console.WriteLine($"Home Number={value}");

Listing 4: A sample runtime configuration file

{
  "runtimeOptions": {
    "tfm": "net8.0",
    "framework": {
      "name": "Microsoft.NETCore.App",
      "version": "8.0.0"
    },
    "configProperties": {
      "System.Runtime...": false
    }
  }
}



The output is the same as the output shown above when
using the JsonNode object to iterate over the array values.

Get a Single Phone Number from Array
Looking back at Listing 1, you see the PHONE_NUMBERS
string constant, which is a JSON array. After you parse this
data into a JsonNode, you might wish to retrieve just the
home phone number from this array. After converting the
phone numbers to a JsonArray object, use the
FirstOrDefault() method to search where the "type" value
is equal to "Home". If this node is found, extract the number
value to display on the console window, as shown in
Listing 3. If you type the code shown in Listing 3 into the
Program.cs file and run the application, the code displays
"Home Number=615.123.4567" in the console window.

Parsing JSON from a File
There are a couple of different methods you may use to
extract JSON from a file. You can use the .NET File I/O
classes, or you can use the IConfiguration interface. Let’s
start by looking at how you can read JSON values using
the .NET File I/O classes.

Read Runtime Configuration Settings
When you run a .NET application, there’s a runtimeconfig.json
file (Listing 4) created with information about the
application. The JSON object shown in Listing 4 is an example
of what’s generated from a console application. When you
run an ASP.NET web application, there will be additional
information in this file.

If you wish to read the .NET Framework version from this
file, you need to first open the file and parse the text into
a JsonNode or JsonDocument object, as shown in Listing 5.
You then access the runtimeOptions.framework.version
property to retrieve the value "8.0.0". Replace all the code
in the Program.cs file with the code shown in Listing 5 and
run the application to display the runtime version.

Listing 5: Use File I/O to read a value from the runtime configuration file

using System.Text.Json.Nodes;

string? value = string.Empty;
string fileName =
  $"{AppDomain.CurrentDomain.FriendlyName}.runtimeconfig.json";
if (File.Exists(fileName)) {
  JsonNode? jn = JsonNode.Parse(
    File.ReadAllText(fileName));
  value = jn!["runtimeOptions"]
    !["framework"]
    !["version"]?.GetValue<string>();
}
Console.WriteLine(value);

Create an appsettings.json File
In most .NET applications you write, you most likely will
need a file to store global settings such as a connection
string, logging levels, and other application-specific
settings. This information is generally stored in a file named
appsettings.json. Add this file to the console application
and put the settings shown in Listing 6 into this file.

Once the file is created, click on the file and bring up the
Properties window. Set the Copy to Output Directory
property to Copy always. You should put the connection
string all on one line. I had to break it across several lines
due to the formatting constraints of the print magazine.

Listing 6: Create an application settings file in the console application

{
  "ConnectionStrings": {
    "DefaultConnection":
      "Server=Localhost;
       Database=AdventureWorks;
       Trusted_Connection=Yes;
       MultipleActiveResultSets=true;
       TrustServerCertificate=True;"
  },
  "AdvWorksAppSettings": {
    "ApplicationName": "Adventure Works"
  },
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  }
}

Read appsettings.json File Using File I/O
Because the appsettings.json file only contains text, you
can read in the JSON contained in this file using the .NET
File I/O classes, as shown in Listing 7. In this code, you
set the fileName variable to point to the location of the
appsettings.json file. Because the appsettings.json file is
in the same folder as the executable that’s running, you
don’t need to specify a path to the file. If the file exists,
read all of the text using the ReadAllText() method of the
File class and pass that text to the Parse() method of the
JsonNode class. Once you have the JSON in the JsonNode
object, you can read the value from the
ConnectionStrings.DefaultConnection property. Type the
code shown in Listing 7 into the Program.cs file, run the
application, and you should see the connection string in
the appsettings.json file displayed in the console window.

Listing 7: Read the appsettings.json file using .NET File I/O classes

using System.Text.Json.Nodes;

string connectString = string.Empty;
string fileName = "appsettings.json";
if (File.Exists(fileName)) {
  // Read settings from file
  JsonNode? jd = JsonNode.Parse(
    File.ReadAllText(fileName));
  // Extract the default connection string
  connectString = jd!["ConnectionStrings"]
    !["DefaultConnection"]?
    .GetValue<string>() ?? string.Empty;
}

Console.WriteLine(connectString);

Writing to a File
If you’re running a WPF application, or a console application,
it’s perfectly acceptable to write data back to the
appsettings.json file. Of course, you wouldn’t want to do
this when running an ASP.NET web application. The code
shown in Listing 8 reads in the appsettings.json file, adds
a new name/value pair, then writes the new JSON back to
the appsettings.json file. Type the code in Listing 8 into
the Program.cs file and run the application to produce the
following results:



{
  "ConnectionStrings": {
    "DefaultConnection": "Server=...;"
  },
  "AdvWorksAppSettings": {
    "ApplicationName": "Adventure Works",
    "LastDateUsed": "12/27/2023"
  },
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  }
}

Listing 8: Write a new value to the appsettings.json file

using System.Text.Json.Nodes;

string fileName = "appsettings.json";
JsonObject? jo = null;

if (File.Exists(fileName)) {
  // Read settings from file
  jo = JsonNode.Parse(
    File.ReadAllText(fileName))?.AsObject();

  if (jo != null) {
    // Locate node to add to
    JsonObject? node =
      jo!["AdvWorksAppSettings"]?.AsObject();

    // Add new node
    node?.Add("LastDateUsed",
      JsonValue.Create(
        DateTime.Now.ToShortDateString()));

    // Write back to file
    File.WriteAllText(fileName, jo?.ToString());
  }
}
Console.WriteLine(jo?.ToString());

Using the IConfiguration Interface
Instead of using the .NET File I/O classes, you can take
advantage of the IConfiguration interface and the
ConfigurationBuilder class to read in a JSON file. To use
the ConfigurationBuilder class, you must add two packages
to your project.

• Microsoft.Extensions.Configuration
• Microsoft.Extensions.Configuration.Json

After adding these two packages to your project, you can
write the code shown in Listing 9. In this code, you pass
in the runtime configuration file name (see Listing 4) to
the AddJsonFile() method on the ConfigurationBuilder.
The Build() method is called to create the configuration
builder object, which reads the JSON file into memory and
converts the text into a JSON document. Use the
GetSection() method to retrieve a specific section within
the JSON file. In this case, you’re asking for the
runtimeOptions section. From the section variable, you can
now retrieve the framework version number. Type the code
in Listing 9 into the Program.cs file, run the application,
and you should see the version number appear in the
console window.

Listing 9: Use the ConfigurationBuilder class to read in a JSON file

using Microsoft.Extensions.Configuration;

string? value = string.Empty;
string fileName =
  $"{AppDomain.CurrentDomain.FriendlyName}.runtimeconfig.json";

IConfiguration config =
  new ConfigurationBuilder()
    .AddJsonFile(fileName)
    .Build();

IConfigurationSection section =
  config.GetSection("runtimeOptions");
value = section["framework:version"]
  ?? string.Empty;

Console.WriteLine(value);

Read the appsettings.json File
You previously read the appsettings.json file using the
.NET File I/O classes. In Listing 10 you’re now going to
use the ConfigurationBuilder to read the same file. Because
the appsettings.json file is in the same folder as the
executable that’s running, you don’t need to specify a path
to the file. Type the code in Listing 10 into the Program.cs
file, run the application, and you should see the connection
string appear in the console window.

Listing 10: Read the appsettings.json file using the ConfigurationBuilder class

using Microsoft.Extensions.Configuration;

string connectString;
IConfiguration config =
  new ConfigurationBuilder()
    .AddJsonFile("appsettings.json")
    .Build();

IConfigurationSection section =
  config.GetSection("ConnectionStrings");
connectString =
  section["DefaultConnection"]
  ?? string.Empty;

Console.WriteLine(connectString);

Bind Settings to a Class
Instead of reading values one at a time from a configuration
file, you can bind a section within a configuration file to a
class with just one line of code. Create a class named
AppSettings and add a property that maps to each name in
the configuration file. In the following code, there’s a sole
property named ApplicationName that maps to the
"ApplicationName" property in the appsettings.json file
shown in Listing 6.

namespace JsonSamples;

public class AppSettings
{
    public string ApplicationName { get; set; }
      = string.Empty;
}

To perform the binding operation, add the package
Microsoft.Extensions.Configuration.Binder to your project
using the NuGet Package Manager. Add the following code
to the Program.cs file and run the application to see the
application name displayed in the console window:


using JsonSamples;
using Microsoft.Extensions.Configuration;

AppSettings entity = new();

IConfiguration config =
  new ConfigurationBuilder()
    .AddJsonFile("appsettings.json")
    .Build();

config.Bind("AdvWorksAppSettings", entity);

Console.WriteLine(entity.ApplicationName);
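As a related convenience, the configuration packages also
supply a GetConnectionString() extension method that reads
from the ConnectionStrings section for you. This one-line
sketch assumes the config object built above:

// Shorthand for reading
// ConnectionStrings:DefaultConnection
string? connect =
  config.GetConnectionString("DefaultConnection");
Console.WriteLine(connect);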
return $"{Name}, Age={Age},
SSN={SSN}, IsActive={IsActive}";
Serialize C# Object to JSON }
So far, everything you’ve done manipulates JSON using C# }
code. Another excellent feature of .NET is that you may se-
rialize your C# objects into JSON using just a few lines of After creating the Person class, replace all the code in the
code. This is very handy for sending C# objects over the Program.cs file with the following:
internet via Web API calls. In fact, the code you’re going
to learn about now is exactly how ASP.NET sends your data using JsonSamples;
across the internet when writing Web API calls. To illustrate using System.Text.Json;
this concept, right mouse-click on the project and add a new
class named Person, as shown in the following code snippet: Person entity = new() {
Name = "John Smith",
namespace JsonSamples; Age = 31,
SSN = null,
IsActive = true
};
Listing 11: Add a JsonSerializerOptions object to control how the JSON is formatted
using JsonSamples; Console.WriteLine(
using System.Text.Json; JsonSerializer.Serialize(entity));

Person entity = new() {


Name = "John Smith", This code uses the Serialize() method of the JsonSerializer
Age = 31, class from the System.Text.Json namespace. Pass in the in-
SSN = null, stance of your C# object to the Serialize() method and a string
IsActive = true of JSON is returned. Run the application and you should see
};
the following string of JSON appear in your console window:
JsonSerializerOptions options = new() {
PropertyNamingPolicy = {"Name":"John Smith","Age":31,
JsonNamingPolicy.CamelCase,
WriteIndented = true "SSN":null,"IsActive":true}
};
Notice that there’s no indentation or spaces between the
Console.WriteLine(
JsonSerializer.Serialize(entity, options)); values. Also notice that the names are the exact same
case as your C# class property names. JSON usually uses
camel case for names (first letter is lower-case), whereas
C# uses Pascal case (first letter is upper-case).
Listing 12: Add JSON attributes to your C# class properties to control serialization
using System.Text.Json.Serialization; Change Casing of Property Names
If you wish to change the casing of the property names,
namespace JsonSamples;
create an instance of a JsonSerializerOptions object and
public class PersonWithAttributes set the PropertyNamingPolicy to the enumeration value
{ of CamelCase (as seen in Listing 11). Change the for-
[JsonPropertyName("personName")]
[JsonPropertyOrder(1)] matting of the JSON to become indented by setting the
public string? Name { get; set; } WriteIndented property to true.
[JsonPropertyName("personAge")]
[JsonPropertyOrder(2)]
public int Age { get; set; } Type the code in Listing 11 into the Program.cs file and
public string? SSN { get; set; } run the application to display the following JSON in the
public bool IsActive { get; set; } console window:
[JsonIgnore]
public DateTime? CreateDate { get; set; } {
[JsonIgnore(Condition = "name": "John Smith",
JsonIgnoreCondition.WhenWritingNull)] "age": 31,
public DateTime? ModifiedDate { get; set; }
"ssn": null,
public override string ToString() "isActive": true
{ }
return $"{Name}, Age={Age},
SSN={SSN}, IsActive={IsActive}";
} Go back to the Program.cs file and change the Proper-
} tyNamingPolicy property to JsonNamingPolicy.Snake-



{
  "NAME": "John Smith",
  "AGE": 31,
  "SSN": null,
  "IS_ACTIVE": true
}

Go back to the Program.cs file and change the
PropertyNamingPolicy property to
JsonNamingPolicy.SnakeCaseLower to produce each property
as lower-case with each word in the property name separated
by an underscore, as shown in the following output:

{
  "name": "John Smith",
  "age": 31,
  "ssn": null,
  "is_active": true
}

The other enumeration values you may set the
PropertyNamingPolicy to are JsonNamingPolicy.KebabCaseUpper
or JsonNamingPolicy.KebabCaseLower. These policies separate
each word in the property name with a dash instead of an
underscore, as shown below:

{
  "name": "John Smith",
  "age": 31,
  "ssn": null,
  "is-active": true
}

Control Serialization Using JSON Attributes
In the System.Text.Json.Serialization namespace are some
attributes you may use to decorate C# class properties to
help you control how each property is serialized. There are
several attributes you may use, but the most used are
JsonPropertyName, JsonIgnore, and JsonPropertyOrder.
Listing 12 shows a PersonWithAttributes class with these
attributes applied to different properties.

Listing 12: Add JSON attributes to your C# class properties to control serialization

using System.Text.Json.Serialization;

namespace JsonSamples;

public class PersonWithAttributes
{
    [JsonPropertyName("personName")]
    [JsonPropertyOrder(1)]
    public string? Name { get; set; }
    [JsonPropertyName("personAge")]
    [JsonPropertyOrder(2)]
    public int Age { get; set; }
    public string? SSN { get; set; }
    public bool IsActive { get; set; }
    [JsonIgnore]
    public DateTime? CreateDate { get; set; }
    [JsonIgnore(Condition =
      JsonIgnoreCondition.WhenWritingNull)]
    public DateTime? ModifiedDate { get; set; }

    public override string ToString()
    {
        return $"{Name}, Age={Age}, " +
          $"SSN={SSN}, IsActive={IsActive}";
    }
}

When you decorate a C# property with the JsonPropertyName
attribute, you pass to the attribute the JSON name to use
when this property is serialized. When a C# class is
serialized, the order of the properties is the order in which
they appear in the class. To change the order in which the
properties are serialized, add a JsonPropertyOrder attribute
to each property and set the order in which you want them
to appear. If the JsonPropertyOrder attribute is not applied
to a property, the default number is zero (0). To never have
a property serialized into JSON, apply the JsonIgnore
attribute with no parameters. You may also set the Condition
property of the JsonIgnore attribute to not serialize the
data when the value is null.

After creating the PersonWithAttributes class, write the
code shown in Listing 13 in the Program.cs file. In this
listing, notice that the CreateDate and ModifiedDate
properties are both set to the current date and time. If
you run the code shown in Listing 13, the following output
is displayed in the console window:

Listing 13: Serialize the C# object with the JSON attributes applied

using JsonSamples;
using System.Text.Json;

PersonWithAttributes entity = new() {
  Name = "John Smith",
  Age = 31,
  SSN = null,
  IsActive = true,
  // This property is never serialized
  CreateDate = DateTime.Now,
  // Comment this property to
  // remove from serialization
  ModifiedDate = DateTime.Now
};

JsonSerializerOptions options = new() {
  PropertyNamingPolicy =
    JsonNamingPolicy.CamelCase,
  WriteIndented = true
};
Console.WriteLine(
  JsonSerializer.Serialize(entity, options));

{
  "ssn": null,
  "isActive": true,
  "modifiedDate": "2024-01-28T12:48:53",
  "personName": "John Smith",
  "personAge": 31
}

There are a few things to notice about this output. The
modifiedDate appears in the output because the value is
not null. The Name and Age C# properties are emitted as
personName and personAge in JSON. These two properties
also appear at the end of the object because their order
was set to one (1) and two (2) respectively. If you go back
to the code in the Program.cs file, comment out the
ModifiedDate property, and rerun the application, the
ModifiedDate value will not display in the output.

Getting the Sample Code
You can download the sample code for this article by
visiting www.CODEMag.com under the issue and article, or
by visiting www.pdsa.com/downloads. Select “Articles” from
the Category drop-down. Then select “XML Serialization and
Validation in .NET 6/7” from the Item drop-down.

Serializing Objects with Enumerations
Another common scenario is that you have a class with an
enumeration as one of the property types. When you
serialize that object, you can either emit the numeric value
of the enumeration or use the string representation of the
enumeration itself. Create a new enumeration named
PersonTypeEnum.

namespace JsonSamples;

public enum PersonTypeEnum
{
    Employee = 1,
    Customer = 2,
    Supervisor = 3
}

Create a class named PersonWithEnum that has a PersonType
property that is of the type PersonTypeEnum.



namespace JsonSamples;

public class PersonWithEnum
{
    public string? Name { get; set; }
    public PersonTypeEnum PersonType { get; set; }

    public override string ToString()
    {
        return $"{Name}, Type={PersonType}";
    }
}

Open the Program.cs file and write the code shown in the
code snippet below:

using JsonSamples;
using System.Text.Json;

PersonWithEnum entity = new() {
  Name = "John Smith",
  PersonType = PersonTypeEnum.Supervisor
};

JsonSerializerOptions options = new() {
  PropertyNamingPolicy = JsonNamingPolicy.CamelCase,
  WriteIndented = true
};

Console.WriteLine(
  JsonSerializer.Serialize(entity, options));

When you run the application, the following output is
displayed in the console window.

{
  "name": "John Smith",
  "personType": 3
}

Notice that the personType property has a value of 3,
which equates to the Supervisor enumeration value. Go
back to the Program.cs file and add the following using
statement at the top of the file:

using System.Text.Json.Serialization;

Set the Converters property in the JsonSerializerOptions
object to use an instance of the JsonStringEnumConverter
class. This class works with the serializer and, instead of
emitting the numeric value of enumeration properties, it
emits the string representation of the enumeration.

JsonSerializerOptions options = new() {
  PropertyNamingPolicy =
    JsonNamingPolicy.CamelCase,
  WriteIndented = true,
  Converters =
  {
    new JsonStringEnumConverter()
  }
};

After adding the JsonStringEnumConverter object, run the
application and the following should now display in the
console window:

{
  "name": "John Smith",
  "personType": "Supervisor"
}
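If you’d rather not remember to pass the converter in the
options every time, System.Text.Json also lets you attach
it declaratively with the JsonConverter attribute. The
following is a minimal sketch that uses a hypothetical
PersonWithEnumAttribute class so it doesn’t collide with
the class above:

using System.Text.Json.Serialization;

namespace JsonSamples;

public class PersonWithEnumAttribute
{
    public string? Name { get; set; }

    // The converter now travels with the class, so no
    // JsonSerializerOptions entry is needed for it
    [JsonConverter(typeof(JsonStringEnumConverter))]
    public PersonTypeEnum PersonType { get; set; }
}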

Serialize a Nested Object
Often you have a property in a class that is itself another
class. Don't worry, the JSON serialization process handles
this situation just fine. To illustrate, create a class called
JwtSettings, as shown in Listing 14.

Listing 14: Create a JwtSettings class to be nested within an AppSettings class

namespace JsonSamples;

public class JwtSettings
{
    public string Key { get; set; }
      = string.Empty;
    public string Issuer { get; set; }
      = string.Empty;
    public string Audience { get; set; }
      = string.Empty;
    public int MinutesToExpiration
      { get; set; }
    public string[] AllowedIPAddresses
      { get; set; } = [];

    #region ToString Override
    public override string ToString()
    {
        return $"{Key} - {Issuer} - " +
          $"{Audience} - {MinutesToExpiration}";
    }
    #endregion
}

Next, create a class named AppSettingsNested that has
two properties: ApplicationName and JWTSettings. The data
type for the JWTSettings property is the JwtSettings class
you just created.

namespace JsonSamples;

public class AppSettingsNested
{
    public string ApplicationName
      { get; set; } = string.Empty;

    public JwtSettings JWTSettings
      { get; set; } = new();
}


Now that you have a nested class, write the code shown in
Listing 15 into the Program.cs file. In this code, you fill in
the ApplicationName property, create a new instance of a
JwtSettings class for the JWTSettings property, and then
fill in each property in that class too. Run the application
and you should see the following displayed in your console
window:

Listing 15: Write code to serialize a nested object to view the JSON output

using JsonSamples;
using System.Text.Json;

AppSettingsNested entity = new() {
  ApplicationName = "JSON Samples",
  JWTSettings = new() {
    Key = "ALongKeyForASymmetricAlgorithm",
    Issuer = "JsonSamplesAPI",
    Audience = "PDSCJsonSamples",
    MinutesToExpiration = 60
  }
};

JsonSerializerOptions options = new() {
  PropertyNamingPolicy = JsonNamingPolicy.CamelCase,
  WriteIndented = true
};

Console.WriteLine(
  JsonSerializer.Serialize(entity, options));

{
  "applicationName": "JSON Samples",
  "jwtSettings": {
    "key": "ALongKeyForASymmetricAlgorithm",
    "issuer": "JsonSamplesAPI",
    "audience": "PDSCJsonSamples",
    "minutesToExpiration": 60,
    "allowedIPAddresses": []
  }
}

Serialize a Dictionary
If you have a generic Dictionary<TKey, TValue> object, or
a KeyValuePair object filled with data, the JSON serializer
emits those as a single object. Open the Program.cs file
and add the code shown below.

using System.Text.Json;

Dictionary<string, object?> dict = new()
{
  {"name", "John Smith"},
  {"age", 31},
  {"isActive", true}
};

JsonSerializerOptions options = new() {
  WriteIndented = true
};

Console.WriteLine(
  JsonSerializer.Serialize(dict, options));

When you run the application, the console window displays
the following JSON object:

{
  "name": "John Smith",
  "age": 31,
  "isActive": true
}

Serialize a List
To write a JSON array, you can use any of the IEnumerable
objects in .NET such as an array or a List<T>. To illustrate,
use the PersonWithEnum class and create a generic list of
two PersonWithEnum objects, as shown in Listing 16.

Listing 16: Create a generic List<T> and serialize it to create a JSON array

using JsonSamples;
using System.Text.Json;

List<PersonWithEnum> list = new()
{
  new PersonWithEnum {
    Name = "John Smith",
    PersonType = PersonTypeEnum.Supervisor
  },
  new PersonWithEnum {
    Name = "Sally Jones",
    PersonType = PersonTypeEnum.Employee
  }
};

JsonSerializerOptions options = new() {
  PropertyNamingPolicy =
    JsonNamingPolicy.CamelCase,
  WriteIndented = true
};

Console.WriteLine(
  JsonSerializer.Serialize(list, options));

Type the code shown in Listing 16 into the Program.cs file
and run the application to display the following output in
the console window.

[
  {
    "name": "John Smith",
    "personType": 3
  },
  {
    "name": "Sally Jones",
    "personType": 1
  }
]

Deserialize JSON into a C# Object
Now that you’ve seen how to serialize a C# object into
JSON, let's look at reversing the process. To illustrate,
create a JSON string with JSON property names that exactly
match the C# property names in the class you wish to
deserialize this JSON into. You use the JsonSerializer class
(Listing 17) like you did for serializing, but call the
Deserialize() method, passing in the data type you wish
to deserialize the JSON string into and the string itself.

Listing 17: Use the Deserialize() method to convert a JSON string into a C# object

using JsonSamples;
using System.Text.Json;

// JSON names must match
// C# properties by default
string json = @"{
  ""Name"": ""John Smith"",
  ""Age"": 31,
  ""SSN"": null,
  ""IsActive"": true
}";

// Deserialize JSON string into Person
Person? entity = JsonSerializer
  .Deserialize<Person>(json);

Console.WriteLine(entity);



Type the code in Listing 17 into the Program.cs file and
run the application to see the following results displayed
in the console window:

John Smith, Age=31, SSN=, IsActive=True

Use Serialization Options
Just like you did when serializing, if the case of the JSON
property names doesn’t match the C# property names, you
may set the PropertyNameCaseInsensitive property to true
in the options and pass those options to the Deserialize()
method, as shown in Listing 18. Notice that in this listing
the JSON property names start with lower-case. If you
forget to use the options and the property names have
different casing, then no data is mapped from the JSON
to the C# object, so an empty object is returned from the
Deserialize() method.

Listing 18: Pass in options to control the deserialization process

using JsonSamples;
using System.Text.Json;

string json = @"{
  ""name"": ""John Smith"",
  ""age"": 31,
  ""ssn"": null,
  ""isActive"": true
}";

// Override case matching
JsonSerializerOptions options = new() {
  PropertyNameCaseInsensitive = true
};

// Deserialize JSON string into Person
Person? entity =
  JsonSerializer.Deserialize<Person>(json, options);

Console.WriteLine(entity);
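Another option worth knowing about: instead of setting
PropertyNameCaseInsensitive yourself, you can construct
the options from JsonSerializerDefaults.Web, which turns
on case-insensitive matching (and camel case naming on
output) in one step. A minimal sketch:

using JsonSamples;
using System.Text.Json;

// The Web defaults enable camelCase naming and
// case-insensitive property matching
JsonSerializerOptions webOptions =
  new(JsonSerializerDefaults.Web);

Person? person = JsonSerializer.Deserialize<Person>(
  @"{ ""name"": ""John Smith"", ""age"": 31 }",
  webOptions);

Console.WriteLine(person);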
Deserialize Using the JsonNode Object
Another option for deserializing JSON into a C# object is
to use either the JsonNode or the JsonDocument classes.
The code shown in Listing 19 uses the JsonNode object to
illustrate. The JsonDocument class looks very similar to
that of the JsonNode. Type this code into the Program.cs
file and run the application to get the same output as you
saw in the last example.

Listing 19: Use the JsonNode object to deserialize a JSON string into a C# object

using JsonSamples;
using System.Text.Json.Nodes;
using System.Text.Json;

string json = @"{
  ""name"": ""John Smith"",
  ""age"": 31,
  ""ssn"": null,
  ""isActive"": true
}";

JsonSerializerOptions options = new() {
  PropertyNameCaseInsensitive = true,
};

// Parse string into a JsonNode object
JsonNode? jn = JsonNode.Parse(json);

// Deserialize JSON string into Person
// using the JsonNode object
Person? entity =
  jn.Deserialize<Person>(options);

Console.WriteLine(entity);

Deserializing Enumeration Values
If you know that the JSON object is going to have the
string representation of a C# enumeration, set the
Converters property to a new instance of the
JsonStringEnumConverter class in the JsonSerializerOptions
object, as shown in Listing 20. If you forget to include the
Converters property on the JsonSerializerOptions, a
JsonException is thrown. Type the code shown in Listing 20
into the Program.cs file and run the application to see the
following output appear in the console window:

John Smith, Type=Supervisor

Listing 20: If you use enumerations, be sure to include the JsonStringEnumConverter in the serialization options

using JsonSamples;
using System.Text.Json.Serialization;
using System.Text.Json;

string json = @"{
  ""name"": ""John Smith"",
  ""personType"": ""Supervisor""
}";

JsonSerializerOptions options = new() {
  PropertyNameCaseInsensitive = true,
  // Avoid an exception being
  // thrown on Deserialize()
  Converters =
  {
    new JsonStringEnumConverter()
  }
};

// Deserialize JSON string
PersonWithEnum? entity = JsonSerializer
  .Deserialize<PersonWithEnum>(json, options);

Console.WriteLine(entity);

Convert JSON File to a Person Object
Right mouse-click on the JsonSamples console application
and create a new folder named JsonSampleFiles. Right
mouse-click on the JsonSampleFiles folder and add a new
file named person.json. Place the following JSON object
into this file:

{
  "name": "Sally Jones",
  "age": 39,
  "ssn": "555-55-5555",
  "isActive": true
}

Open the Program.cs file and replace the contents of the
file with the code shown in Listing 21. In this sample,
you’re using a FileStream object to open the file and stream
it to the Deserialize() method. Thus, you should use the
Using statement in front of the FileStream declaration so
it will be disposed of properly. Run the application and you
should see the following output displayed in the console
window:

Listing 21: Read a JSON file and convert the JSON object in the file into a C# object

using JsonSamples;
using System.Text.Json;

string fileName =
  $"{AppDomain.CurrentDomain.BaseDirectory}JsonSampleFiles\\person.json";

using FileStream stream = File.OpenRead(fileName);

JsonSerializerOptions options = new() {
  PropertyNameCaseInsensitive = true,
};

// Deserialize JSON string into Person
Person? entity = JsonSerializer
  .Deserialize<Person>(stream, options);

Console.WriteLine(entity);

Sally Jones, Age=39,
SSN=555-55-5555, IsActive=True
bility for errors or omissions. Sally Jones, Age=39,
SSN=555-55-5555, IsActive=True



Convert a JSON Array in a File to a List of Person Objects
Right mouse-click on the JsonSampleFiles folder and add
a new file named persons.json. Place the following JSON
array into this file:

[
  {
    "name": "John Smith",
    "age": 31,
    "ssn": null,
    "isActive": true
  },
  {
    "name": "Sally Jones",
    "age": 39,
    "ssn": "555-55-5555",
    "isActive": true
  }
]

Open the Program.cs file and type in the code shown in
Listing 22. This code is almost the same as the code you
wrote to deserialize a single person object; the only
difference is that you pass the data type List<Person> to
the Deserialize() method. Once you have the collection of
Person objects, iterate over the collection and display each
person on the console window. Run the application and you
should see the following output displayed in the console
window:

John Smith, Age=31,
SSN=, IsActive=True
Sally Jones, Age=39,
SSN=555-55-5555, IsActive=True

Listing 22: Read an array of JSON objects from a file and convert to a list of person objects

using JsonSamples;
using System.Text.Json;

string fileName =
  $"{AppDomain.CurrentDomain.BaseDirectory}JsonSampleFiles\\persons.json";

using FileStream stream = File.OpenRead(fileName);

JsonSerializerOptions options = new() {
  PropertyNameCaseInsensitive = true,
};

// Deserialize JSON string into List<Person>
List<Person>? list = JsonSerializer
  .Deserialize<List<Person>>(stream, options);

if (list != null) {
  foreach (var item in list) {
    Console.WriteLine(item);
  }
}

Get Maximum Age from List of Person Objects
After reading in a list of objects, you may now use LINQ
operations or any Enumerable methods such as Min, Max,
Sum, and Average on that list. In the code shown in
Listing 23, the Max() method is applied to the list and the
maximum value found in the Age property is displayed on
the console window. When you run this application, the
value reported back should be thirty-nine (39).

Listing 23: After deserializing a list of person objects, apply the Max() method to calculate the largest numeric value

using JsonSamples;
using System.Text.Json;

string fileName =
  $"{AppDomain.CurrentDomain.BaseDirectory}JsonSampleFiles\\persons.json";

using FileStream stream = File.OpenRead(fileName);

JsonSerializerOptions options = new() {
  PropertyNameCaseInsensitive = true,
};

// Deserialize JSON string
List<Person>? list = JsonSerializer
  .Deserialize<List<Person>>(stream, options);

// Calculate maximum age
int maxAge = list?.Max(row => row.Age) ?? 0;

Console.WriteLine(maxAge);
The Utf8JsonWriter class is a high-performance, forward- WriteEndObject() or the WriteEndArray() method to close
only, non-cached method of writing JSON documents. the JSON document. Type the code shown in Listing 24
Just like with serialization, you can control the output of into the Program.cs file and run the application to display
the JSON to include white space, and indentation. List- the output shown below in the console window:
ing 24 shows how to write a single JSON object into a
MemoryStream object. Note that both the MemoryStream {
and the Utf8JsonWriter objects implement the IDispos- "name": "John Smith",

codemag.com Manipulating JSON Documents in .NET 8 31


Listing 23: After deserializing a list of person objects, apply the Max() method to calculate the largest numeric value
using JsonSamples; };
using System.Text.Json;
// Deserialize JSON string
string fileName = List<Person>? list = JsonSerializer
$"{AppDomain.CurrentDomain.BaseDirectory} .Deserialize<List<Person>>(stream, options);
JsonSampleFiles\\persons.json";
// Calculate maximum age
using FileStream stream = File.OpenRead(fileName); int maxAge = list?.Max(row => row.Age) ?? 0;

JsonSerializerOptions options = new() { Console.WriteLine(maxAge);


PropertyNameCaseInsensitive = true,

Listing 24: The Utf8JsonWriter object is a forward-only cursor for emitting JSON quickly
using System.Text.Json; writer.WriteStartObject();
using System.Text; writer.WriteString("name", "John Smith");
writer.WriteNumber("age", 31);
JsonWriterOptions options = new() { writer.WriteBoolean("isActive", true);
Indented = true writer.WriteEndObject();
}; writer.Flush();

using MemoryStream ms = new(); string json = Encoding.UTF8


using Utf8JsonWriter writer = .GetString(ms.ToArray());
new(ms, options);
Console.WriteLine(json);

Listing 25: The Utf8JsonWriter object can write arrays as well as single objects
using System.Text.Json; writer.WriteBoolean("isActive", true);
using System.Text; writer.WriteEndObject();
writer.WriteStartObject();
JsonWriterOptions options = new() { writer.WriteString("name", "Sally Jones");
Indented = true writer.WriteNumber("age", 39);
}; writer.WriteBoolean("isActive", true);
writer.WriteEndObject();
using MemoryStream ms = new(); writer.WriteEndArray();
using Utf8JsonWriter writer = writer.Flush();
new(ms, options);
string json = Encoding.UTF8
writer.WriteStartArray(); .GetString(ms.ToArray());
writer.WriteStartObject();
writer.WriteString("name", "John Smith"); Console.WriteLine(json);
writer.WriteNumber("age", 31);

SPONSORED SIDEBAR "age": 31, "age": 39,


"isActive": true "isActive": true
Write a JSON Array
As just mentioned, you may also write a JSON array of data
using the Utf8JsonWriter class. The only difference is that
you start writing using the WriteStartArray() method, and
then call the WriteStartObject() method to create your first
JSON object in the array. You then continue this process
until all your array elements are written. Finish the JSON
array document by calling the WriteEndArray() method, as
shown in Listing 25. Type the code shown in Listing 25 into
the Program.cs file and run the application to display the
output shown below in the console window:

Listing 25: The Utf8JsonWriter object can write arrays as well as single objects

using System.Text.Json;
using System.Text;

JsonWriterOptions options = new() {
  Indented = true
};

using MemoryStream ms = new();
using Utf8JsonWriter writer =
  new(ms, options);

writer.WriteStartArray();
writer.WriteStartObject();
writer.WriteString("name", "John Smith");
writer.WriteNumber("age", 31);
writer.WriteBoolean("isActive", true);
writer.WriteEndObject();
writer.WriteStartObject();
writer.WriteString("name", "Sally Jones");
writer.WriteNumber("age", 39);
writer.WriteBoolean("isActive", true);
writer.WriteEndObject();
writer.WriteEndArray();
writer.Flush();

string json = Encoding.UTF8
  .GetString(ms.ToArray());

Console.WriteLine(json);

[
  {
    "name": "John Smith",
    "age": 31,
    "isActive": true
  },
  {
    "name": "Sally Jones",
    "age": 39,
    "isActive": true
  }
]

Summary
In this article, you were introduced to the many different
classes in .NET that are used to manipulate JSON documents.
If you need fast, read-only access to JSON documents, the
JsonDocument class is what you should use. If you need the
ability to modify JSON in your application, use the JsonNode
class. Serialization is accomplished using the JsonSerializer,
JsonDocument, or the JsonNode classes. When you’re dealing
with configuration files such as the appsettings.json file in
your application, take advantage of the IConfiguration
interface and the ConfigurationBuilder class. Finally, to write
JSON documents one name/value pair at a time, the
Utf8JsonWriter class gives you the most control over how
your document is formatted.

Paul D. Sheriff
32 Manipulating JSON Documents in .NET 8 codemag.com


ONLINE QUICK ID 2405041

Value Object’s New Mapping:


EF Core 8 ComplexProperty
EF Core 8 was released late in 2023 and, if you haven’t kept up, there are some important and interesting things to be aware of.
There were over 100 tweaks and additions and another 125 bug fixes. I’ll be highlighting those that are most important and a
few that piqued my interest or just my curiosity. If you want to explore all of the enhancements and new features on GitHub,

here's a link for those issues: https://bit.ly/EFCore8Features. The list of issues for the fixes can be perused at https://bit.ly/EFCore8Fixes.

If you've followed my tech wanderings over the years, it may be no surprise that my A#1 favorite new feature is the ComplexProperty mapping, an alternative to using Owned Entities to map complex types and value objects. The new ComplexProperty mapping provides a far superior way to map complex types (and therefore, value objects) than the Owned Entity mapping we've been using since the beginning of EF Core. To be clear, there are still some scenarios that are not yet supported, so you may end up using a mix of the two mappings until ComplexProperty is complete. It's the team's intention for this to eventually replace Owned Entities in their entirety. ComplexProperty is a big deal and it was a big deal for the team to execute. It's at the top of their "what's new" lists as well.

Although the OwnsOne and OwnsMany mappings have fulfilled the basic need to map classes that are used as properties of entities, the work that they were doing under the covers was complicated and led to numerous side effects. The team had tweaked the inner logic a number of times across versions, creating breaking changes along the way, but never really solved the problem properly. They have been contemplating a replacement for some time and have finally pulled it off.

There are some caveats, however, which are a few capabilities that didn't make it into EF Core 8 but will be ready for EF Core 9. In those cases, we just continue using the owned entity mappings. I'll explain the caveats after I allow you to feast your eyes on the new ComplexProperty mapping.

Julie Lerman
@julielerman
thedatafarm.com/contact

Julie Lerman is a Microsoft Regional Director, Docker Captain, and a long-time Microsoft MVP who now counts her years as a coder in decades. She makes her living as a coach and consultant to software teams around the world. You can find Julie presenting on Entity Framework, Domain-Driven Design and other topics at user groups and conferences around the world. Julie blogs at thedatafarm.com/blog, is the author of the highly acclaimed "Programming Entity Framework" books, and many popular videos on Pluralsight.com.

TLDR Complex Types and Value Objects
Let's be sure we're all on the same page. A complex type is a class that doesn't have any identity and is used as a property of another class. An easy example is if you have a first name property and a last name property in a Customer class.

public class Customer
{
    public int CustomerId { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateOnly FirstPurchase { get; set; }
}

Instead of using two string types for every class that needs a person's name, you can create a new class that only has those two strings.

public class Customer
{
    public int CustomerId { get; set; }
    public PersonName Name { get; set; }
    public DateOnly FirstPurchase { get; set; }
}
public class PersonName
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

PersonName is a complex type and can be used as a property of any other class, such as this ShipLabel class, which is obviously missing an address but that's only to keep this explanation simple.

public class ShipLabel
{
    public int Id { get; set; }
    public DateOnly Printed { get; set; }
    public PersonName Name { get; set; }
}

The most important attribute of PersonName is that it has no identity.

Listing 1: PersonName class implemented as a value object


public class PersonName
{
    public PersonName(string firstName,
        string lastName)
    {
        FirstName = firstName;
        LastName = lastName;
    }
    public string FirstName { get; init; }
    public string LastName { get; init; }

    public override bool Equals(object? obj)
    {
        return obj is PersonName name &&
            FirstName == name.FirstName &&
            LastName == name.LastName;
    }

    public override int GetHashCode()
    {
        return HashCode.Combine(FirstName,
            LastName);
    }
}



A value object is a critical Domain-Driven Design construct that enhances a complex type by ensuring that it's immutable and that its equality is always based on the values of every property in the type by overriding the Equals and GetHashCode methods. Listing 1 shows a version of PersonName that's defined as a value object.

Whether PersonName is a simple complex type or a value object, EF Core will see that type in its model when it's used as a property of another entity, such as Customer. However, it will balk because it can only assume that it's another entity but can't figure out what to do with it because it has no key property. This results in a runtime exception when EF Core is trying to work out the data model.

The Problem(s) with Owned Entities
Owned entities originally came into EF Core to enable this mapping. They were a new paradigm for handling complex types, different from EF6 and earlier. By mapping the Name property of Customer as an owned entity (using the OwnsOne mapping), EF Core knows that it's okay that there's no key. However, in order to track and persist it, EF Core, under the covers, treats that PersonName object as an entity in a relationship with Customer. It does so by using a shadow property to infer a key in memory but ensures that the values are stored as individual fields in the Customers table in the database. That is the default behavior. You can configure the mapping to store the values in a separate database table.

It was a very clever solution that leveraged the existing behavior of EF Core. However, because of the complexity of faking the key, there were problematic side effects. For example, EF Core couldn't comprehend if you left the property null or needed to edit it. Some of those problems were resolved but there are others still. For example, you can't copy an owned object from one entity instance to multiple other classes. Listing 2 shows logic that attempts to create two separate shipping label objects for one person. It will fail.

Why? When this code assigns a person.Name to the second label, EF Core moves it from the label it was already assigned to. The first label will no longer have a Name—the property is now null. In the docs, the EF Core team explains other common use cases that will fail as well. Keep in mind that in unit tests that don't involve EF Core, the code poses no problem and is sensible. It's EF Core's tracking that fails. And as I said earlier, the team tried variations on how to handle these types of problems across versions of EF Core, all the while pondering how to implement a better mapping, rather than continuing to try to make owned entities work across the various needed scenarios.

Hello, or Welcome Back, Complex Properties
Entity Framework, and I mean pre-EF Core, had a concept of complex property mappings that were more natural than owned entities. EF Core 8 harkens back to that concept, although with a different implementation. EF Core still won't make an assumption that a complex type is anything but a malformed entity that needs a key property. But we have a new way to map it with the ComplexProperty mapping.

Whereas previously you'd have used the OwnsOne method for the owned property, now you use the ComplexProperty method.

protected override void
OnModelCreating(ModelBuilder modelBuilder)
{
    //modelBuilder.Entity<Customer>()
    //    .OwnsOne(c => c.Name);
    modelBuilder.Entity<Customer>()
        .ComplexProperty(c => c.Name);
}

Like OwnsOne, ComplexProperty has its own methods where you can further define the property, for example, tying it to a backing field or specifying that it's required.

But what's most important is that EF Core just treats this as a property and when storing it, explodes the FirstName and LastName properties out to fields in the Customer's table (by default). It doesn't set up a fake relationship or fake key and then have to tangle with those every time you track, save, or retrieve data. Because it's not being treated as a separate entity, you can also share it among instances as needed. The logic in Listing 2 will succeed.

It's interesting to compare the visualizations of the model as well (using the wonderful EF Core Power Tool's diagram tool) shown in Figure 1. As an Owned Entity, the DbContext sees the PersonName as its own entity in a one-to-one

Listing 2: Retrieving a Customer and Using its Name for a new Label

var storedCustomer = ctx.Customers.First();
var label = new ShipLabel
{
    Name = storedCustomer.Name,
};
var label2 = new ShipLabel
{
    Name = storedCustomer.Name,
};
ctx.AddRange(label, label2);
ctx.SaveChanges();

Figure 1: Data Model with Person mapped as an Owned Entity vs. a Complex Property



relationship with Customer. Notice that there's even a CustomerId shadow property. As a ComplexProperty, EF Core comprehends that it's just another property of Customer.

In addition to noting the differences between the model in Figure 1, it's also interesting to see the DebugView for the Customer and ShipLabel entities as they're seen by the change tracker prior to calling SaveChanges in Listing 2.

First is the view for the OwnsOne mapping, and there's so much going on here that I'm only showing the ShortView.

Customer {CustomerId: 1} Unchanged
Customer.Name#PersonName {CustomerId: 1}
  Unchanged FK {CustomerId: 1}
ShipLabel {Id: -2147482647} Added
ShipLabel {Id: -2147482646} Added
ShipLabel.Name#PersonName
  {ShipLabelId: -2147482646} Added FK
  {ShipLabelId: -2147482646}

With the owned entity mapping, the Customer's Name and the Name of only one of the ShipLabels (remember, EF Core moved it, not copied it), are tracked separately and it's a bit convoluted.

Listing 3 shows the DebugView when PersonName is mapped as a ComplexProperty: This time, I'm sharing the LongView with more details because it's so easy to read. The details look just as you would expect. And EF Core is doing a lot less work to manage the PersonName data. It's much simpler and the side effects of the fake entities just disappear.

Listing 3: DebugView (LongView) with Name mapped as a ComplexProperty in Customer and ShipLabel

Customer {CustomerId: 1} Unchanged
  CustomerId: 1 PK
  FirstPurchase: '1/1/2021'
  Name (Complex: PersonName)
    FirstName: 'John'
    LastName: 'Doe'
ShipLabel {Id: -2147482647} Added
  Id: -2147482647 PK Temporary
  Name (Complex: PersonName)
    FirstName: 'John'
    LastName: 'Doe'
ShipLabel {Id: -2147482646} Added
  Id: -2147482646 PK Temporary
  Name (Complex: PersonName)
    FirstName: 'John'
    LastName: 'Doe'
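If you want to print these views yourself, the change tracker exposes them in code. Here's a minimal sketch that reuses the ctx, label, and label2 variables from Listing 2:

// Inspect what the change tracker sees before SaveChanges is called
ctx.AddRange(label, label2);
Console.WriteLine(ctx.ChangeTracker.DebugView.ShortView);
Console.WriteLine(ctx.ChangeTracker.DebugView.LongView);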

Classes, Records, and Value Objects, Oh My!
The PersonName value object shown in Listing 1 includes a primary constructor to make it simpler to instantiate. This leads us to the first caveat of ComplexProperty. In EF Core 8, it won't work with a class that has a primary constructor—emphasis on class. If you want to map this class with ComplexProperty, you need to remove that constructor and use an object initializer to instantiate a new PersonName like this:

new PersonName{FirstName="John",LastName="Doe"}

Otherwise, you'll need to go back to mapping with OwnsOne.

What about records instead of a class? I recall first seeing the exploration that the C# team was doing on records at an MVP summit quite a few years ago. Because of how they simplified creating value objects, I was definitely eager to see them come into the language. Record types internalize equality comparison so you don't need to override the Equals or GetHashCode methods every single time.

However, records did not play very well with owned entities and again, there were side effects to worry about. Therefore, I never used records until EF Core 8 brought us the ComplexProperty mapping and I had a bit of catching up to do to learn about records because there are many ways to express them.

A Quick Records Overview
Records have a number of formats. I spent a lot of time understanding the various ways to express a record to choose the correct flavor. The documentation was very helpful (https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/builtin-types/record), but it still took a few read throughs for me. I have encapsulated some of the important details in Table 1 for a quick reference.

To begin with, a record, by default, is a reference type. But a record struct is a value type—the correct choice for a value object. There are more decisions to make.

You can define a record struct as a positional record, which has nothing more than a primary constructor and looks like this:

public record struct PersonName (
    string FirstName, string LastName);

That's the entire implementation! Internally, C# infers the FirstName and LastName string properties.

Currently my PersonName record has no logic and is a good candidate for a positional record. If you have no need for logic or further constraints in the object, the streamlined positional syntax is awesome.

But—and this is a big but—on its own, a record struct is not, I repeat, not, immutable. Therefore, it fails the requirement of a value object. Luckily, C# 10 added the capability to make a record struct read only.

public readonly record struct PersonName (
    string FirstName, string LastName);

That's a very succinctly expressed and simple value object.

If you do need additional logic, you can express the record more like a class with properties and other logic explicitly defined, as I'm doing here, using init accessors to ensure that it's still immutable. This is an example of a read-only record struct without positional properties.



public readonly record struct PersonName
{
    public string FirstName { get; init; }
    public string LastName { get; init; }

    public string FullName =>
        $"{FirstName} {LastName}";
}

There's an interesting capability of records that you should consider, which is that it's possible to create a new instance of a record with new values. This feature uses a with expression to replace property values.

For example, I might have instantiated a PersonName using:

var jazzgreat = new PersonName("Ella",
    "Fitzgeralde");

Then I discover the typo of the "e" at the end of her name. Of course, with only two properties, I could easily create a new instance from scratch. But if you have a lot of properties, you could use the with expression syntax:

jazzgreat = jazzgreat with
    { LastName = "Fitzgerald" };

There are a lot of other nuances of records that you can learn about in the docs at https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/builtin-types/record.

Because I found it confusing to sort out all of the behaviors of the various flavors of record types, I've listed the critical aspects of each (as well as class for comparison) in Table 1.

Declaration            | Type           | Mutability
class                  | Reference type | Mutable unless designed otherwise
record                 | Reference type | Immutable
record struct          | Value type     | Mutable unless designed otherwise
readonly record struct | Value type     | Immutable

Table 1: How classes and records are interpreted

Null Value Object Properties
The most important caveat about EF Core 8's implementation of ComplexProperty, which is a deal breaker for some, is that it doesn't currently support null objects. We saw this same problem with owned entities in an earlier rendition, but owned entities now support null properties.

As an example, let's say I made PersonName nullable in the customer type. Perhaps this isn't a logical change, but it serves my demonstration purpose.

public class Customer
{
    public int CustomerId { get; set; }
    public PersonName? Name { get; set; }
    public DateOnly FirstPurchase { get; set; }
}

Mapped as an OwnedEntity, EF Core can sort this out. The default mapping results in Name_FirstName and Name_LastName columns in the Customers table both being nullable.

The OwnedEntity mapping comprehends the null property when I create and store a customer without a name.

var customer = new Customer
{
    FirstPurchase = new DateOnly(2021, 1, 1)
};
ctx.Customers.Add(customer);
ctx.SaveChanges();

When Name is null, the Name_FirstName and Name_LastName database fields are both null as well. When I retrieve the customer, EF Core returns a Customer with a null Name property.

At some point, ComplexProperty will have the same behavior. But currently (in EF Core 8), specifying Name as a nullable type results in a runtime exception. The exception you get depends on how the value object is defined.

If the value object is a class, you get a message about the fact that it can't be optional when EF Core is attempting to build the data model based on the DbContext mappings.

System.InvalidOperationException: 'Configuring the complex property 'Customer.Name' as optional is not supported, call 'IsRequired()'. See https://github.com/dotnet/efcore/issues/31376 for more information.'

Making it required just so EF Core is happy is not a pleasing solution. It should only be required if your domain invariants specify that Name should be required. If it's required, you can use ComplexProperty. If not, you're stuck with OwnsOne.

If PersonName is a record struct (with or without positional properties), you'll trigger a different exception. EF Core configures the database fields from the value object properties (Customer_FirstName and Customer_LastName) as non-nullable fields. At runtime, the database will throw an exception saying that it can't insert a null value into a non-nullable column.

Note: If you look into the GitHub issue referenced in the exception message, please add your vote to make sure the EF Core team addresses this. I do expect it to be supported in EF Core 9 but every vote from the community helps them prioritize.

What Else Is and Isn't Supported?
Nullability of complex types, whether or not they are value objects, is an important topic, indeed. But there are a few other points to be aware of: some limitations as well as things that are supported. Let's start with the good news about primary constructors.

Records and Record Structs with Primary Constructors
I said above that you can't map a class with primary constructors using ComplexProperty. Well, happily, it works with the records! You can map a ComplexProperty with



the positional records (which are declared in their entirety by a primary constructor) and non-positional records that have a primary constructor. I also successfully tested this with positional record structs, positional read-only record structs, and non-positional record structs. I definitely prefer primary constructors over expression builders to instantiate an object.
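As a minimal sketch of that combination (reusing this article's Customer and PersonName names; the CustomerContext class is just an illustration), a positional record declared entirely by its primary constructor maps with the same ComplexProperty call shown earlier:

using Microsoft.EntityFrameworkCore;

public record PersonName(string FirstName, string LastName);

public class Customer
{
    public int CustomerId { get; set; }
    public PersonName Name { get; set; }
    public DateOnly FirstPurchase { get; set; }
}

// A hypothetical DbContext showing the mapping
public class CustomerContext : DbContext
{
    public DbSet<Customer> Customers => Set<Customer>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Customer>()
            .ComplexProperty(c => c.Name);
    }
}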
Class with Primary CTOR No Yes
JSON Support, or Lack Thereof, for Now Collections No (EF Core 9) Yes (OwnsMany)
Although JSON support has been improving in EF Core, Nested No (EF Core 9) Yes
some of it thanks to Owned Entities, it does not exist yet Data Annotation available Yes Yes
for ComplexProperty.
Store in its own table No Yes
For example, if PersonName is mapped as an owned en- Map to JSON column No (EF Core 9?) Yes
tity, you can append the ToJson() method to the OwnsOne Seeding via DbContext/Migrations No Yes
mapping resulting in the object being stored in some type
Supported in Cosmos provider No Yes
of char field in your relational database table. A Person-
Name is stored as Table 2: EF Core 8 support for ComplexProperty and OwnedEntity mappings

{"FirstName ":"John","LastName":"Doe"}
ComplexProperty does not yet support this capability. Additionally, its inability to transform complex types to JSON also means that you cannot use ComplexProperty with the CosmosDb provider that stores all its data as JSON. Not yet. This is another feature that is tagged in the GitHub repo as "consider for current release," so hopefully that means EF Core 9.

Collections of Complex Types: Coming Soon
Owned Entity not only provides the OwnsOne mapping, but also OwnsMany. Therefore, it's possible to have a property in your entity that's a collection of the owned types. ComplexProperty doesn't yet support this, but the team has said it will be in EF Core 9. Keep in mind that value object collections are a disputed topic. Some call them an anti-pattern. But I've found some edge cases where they are quite useful. In fact, I have a collection of Author value objects in my EF Core and Domain-Driven Design course on Pluralsight. And even though I'm using EF Core 8 in the course, I still had to map that particular value object as an owned entity. Happily (and intentionally), the sample application in that course has another value object that provided a great example of using a record and ComplexProperty mapping.

Nested Complex Types: Also Coming Soon
That Pluralsight course also demonstrates nesting value objects. The Author value object has a PersonName property similar to the one I've been using in this article. Because Author is already mapped as an owned entity, I had to tack on its PersonName property as an owned entity as well. You definitely can't combine Owned Entities and ComplexProperty when nesting.

More importantly, even if Author was a ComplexProperty, nesting is not yet supported in EF Core 8. In the end, not only was I forced to use Owned Entity mappings for those two value objects, because they were owned entities, I had to declare them as classes, not records.

Is ComplexProperty Ready for Your Software?
I'm very happy to see and use ComplexProperty. Although it's not perfect yet, it already does solve many scenarios. The question becomes (for some): Should you mix and match the two mappings? I think the answer is yes. I don't think it needs to be seen as a maintenance problem. What ComplexProperty currently solves, it does very well—and does so better than OwnedEntity. For the cases that you still need to use Owned Entity, continue to use them. But keep an eye on those cases because you'll be able to replace more (or all?) of those mappings when EF Core 9 comes out.

Table 2 provides you with a list of possible ways to define complex types and value objects and whether or not that expression is supported with ComplexProperty and Owned Entity mappings in EF Core 8. The EF Core team absolutely plans for a near future when we can completely eliminate OwnsOne and OwnsMany from our code. Until then, take advantage of the tool that works best for each scenario. And test, test, test.

                                 | ComplexProperty | Owned Entity
Class                            | Yes             | Yes
Record                           | Yes             | With side effects
Record Struct                    | Yes             | No (must be a ref type!)
Record with Primary CTOR         | Yes             | Yes
Class with Primary CTOR          | No              | Yes
Collections                      | No (EF Core 9)  | Yes (OwnsMany)
Nested                           | No (EF Core 9)  | Yes
Data Annotation available        | Yes             | Yes
Store in its own table           | No              | Yes
Map to JSON column               | No (EF Core 9?) | Yes
Seeding via DbContext/Migrations | No              | Yes
Supported in Cosmos provider     | No              | Yes

Table 2: EF Core 8 support for ComplexProperty and OwnedEntity mappings

The limitations are listed in the docs at this link (https://learn.microsoft.com/en-us/ef/core/what-is-new/ef-core-8.0/whatsnew#current-limitations). Each has a link to the relevant issue on GitHub and you can let the team know which are important to you by voting for these issues in GitHub.

Julie Lerman



ONLINE QUICK ID 2405051

Preparing for Azure with


Azure Migrate Application and
Code Assessment
There are many advantages to hosting ASP.NET and ASP.NET Core applications in Azure. Platform-as-a-Service (PaaS) environments
like Azure App Service, Azure Kubernetes Service, and Azure Container Apps provide easy and automatic scalability, reliability,
and availability in multiple geographies. What’s more, these environments allow you to focus on the apps themselves without

the overhead of maintaining underlying infrastructure. But for apps that are already deployed on-premises, it can be difficult to know how to get started re-platforming to one of these environments. Even though .NET applications can often be deployed to Azure App Service with minimal changes, there are usually some changes required and discovering what those are can take some trial and error.

This article introduces a new feature: Azure Migrate application and code assessment. This new feature allows analyzing the source code, configuration, and binaries of an application to discover upfront what changes will be needed for the app to work in Azure. Azure Migrate application and code assessment makes it easy to plan re-platforming to Azure and highlights what work will be needed along the way. Azure Migrate application and code assessment is a developer-focused experience available as both a Visual Studio extension and a command line .NET SDK tool. The tool scans ASP.NET and ASP.NET Core solutions (and, optionally, their binary dependencies) for a wide variety of potential issues that need to be addressed prior to running in Azure PaaS environments.

Although this article focuses on the experience of using Azure Migrate application and code assessment for .NET, there is also a Java version of the tool available. To learn more about the Java experience, please visit https://learn.microsoft.com/azure/developer/java/migration/appcat.

Mike Rousos
[email protected]

Mike Rousos is a Principal Software Engineer on the .NET Customer Engagement Team. A member of the .NET team since 2004, he has worked on a wide variety of feature areas and contributed content to the .NET team blog, .NET Conf sessions, Channel 9 videos, and .NET development e-books like ".NET Microservices: Architecture for Containerized .NET Applications." Outside of work, Mike is involved in his church and enjoys reading, writing, and games of all sorts. His primary hobby, though, is spending time with his four kids.

Figure 1: Azure Migrate application and code assessment extension download page

Relationship to Other Azure Migrate Features
Using Azure Migrate to prepare for a migration to the cloud isn't new, of course. Azure Migrate has helped users discover and assess on-premises infrastructure for some time. Azure Migrate also includes the Data Migration Assistant to help users assess SQL Server databases for migration to Azure SQL DB, Azure SQL Managed Instance,
or SQL Server on an Azure VM. Azure Migrate can even
discover web apps hosted on-premises and (if no blocking
issues are detected) automate migrating them to Azure
with its App Service Migration Assistant tool.

Up until now, though, Azure Migrate hasn’t had the ability


to look at the source code of the applications it detected.
The App Service Migration Assistant tool can provide in-
sight into whether there are likely to be issues migrating
to Azure based on IIS configuration, but it doesn’t have
visibility into what’s happening inside the code of the
application. The Azure Migrate application and code as-
sessment feature fills this gap. The Visual Studio exten-
sion or command line tool can assess the source code of
your solution and identify potentially problematic APIs
and code patterns. This assessment is a useful follow-up
to the discovery by other Azure Migrate features. It is
recommended to use Azure Migrate’s existing application
discovery features to gain insight into the complete set
of applications to be migrated. Then, after the applications have been reviewed and prioritized, Azure Migrate application and code assessment can be used by developers to dive deeply into the applications that will be moved to learn more details about potential issues and to plan the migration.

Figure 2: The Re-platform to Azure command opens the new tool UI.

Installing Azure Migrate Application


and Code Assessment
As a Visual Studio extension, the Azure Migrate applica-
tion and code assessment feature is easy to install. It’s
available on the Visual Studio Marketplace at
https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.appcat,
as shown in Figure 1. From that site,
use the download button to download the .vsix installer,
and then open the installer and wait while it installs the
Visual Studio extension.

Note that the Azure Migrate application and code assess-


ment Visual Studio extension requires Visual Studio 2022.
If you don't have Visual Studio 2022, the free community edition is available at https://visualstudio.microsoft.com/downloads.

Figure 3: Selecting projects to be analyzed for Azure migration readiness
also binary dependencies, as shown in Figure 4. Choos-
ing to analyze source code will look at the source code
Analyzing a Solution of the selected projects—C# or Visual Basic code files,
To begin analyzing a solution, open the solution in Visual project files, config files, and static content. This op-
Studio and right-click on it in the solution explorer. With tion should typically be checked. If you also check the
the Azure Migrate application and code assessment exten- Binary dependencies option, the tool also analyzes
sion installed, there will be a new command available: binaries that the projects depend on. This includes ref-
Re-platform to Azure, as shown in Figure 2. Choosing erences to loose DLLs, references to assemblies (other
this option opens a new user interface for managing than .NET Framework components) in the global assem-
Azure Migrate application and code assessment reports. bly cache, or referenced NuGet packages. Choosing to
analyze binaries produces the most comprehensive set
After clicking New Report, you will be able to choose of issues the application may run into while migrating,
which projects in the solution you wish to assess, as but it also includes more issues than analyzing only
shown in Figure 3. Note that the Azure Migrate applica- source code and may include issues that you’re not able
tion and code assessment feature automatically analyzes to fix directly (because they exist in components you
dependencies of selected projects, so you only need to don’t have source code for) or that aren’t relevant if
choose top-level projects you’re interested in. All recur- they exist in code paths not used from your application.
sive project-to-project dependencies of the selected proj- A useful strategy is to begin only by analyzing source
ects will be included automatically. code and later consider producing another report with
binary analysis enabled if there are binary dependencies
After clicking next, the following UI will allow you to whose Azure-readiness you’re unsure of and would like
choose whether to scan only source code and settings or to learn more about. Because issues in binary dependen-

codemag.com Preparing for Azure with Azure Migrate Application and Code Assessment 39
can't be fixed directly by updating source code, they are typically addressed by finding updated versions of the binaries, working with partners who can change the source code, or finding alternative solutions that work better in Azure.

Once you click the Analyze button, the extension analyzes the selected projects for any potential issues re-platforming to Azure. This analysis will take anywhere from a few seconds to a few minutes, depending on the size of the projects. When the analysis is complete, you'll be shown a report summarizing the results. (See Figure 5.) This report can be saved to disk and returned to later using the save icon (or common shortcuts like Ctrl+S).

Figure 4: Choosing whether to analyze source code or binaries

Figure 5: The Azure Migrate application and code assessment report dashboard

The report's dashboard includes the total number of projects scanned, the number of incidents discovered, and graphics showing the incidents by category and severity. The report includes both a number of issues that are the types of problems detected and a number of incidents that are the individual occurrences of the issues.

Azure Migrate application and code assessment issues are each assigned one of four severities:

1. Mandatory: Mandatory issues are those that likely need to be addressed before the application will work in Azure. An example of a mandatory issue is using Windows authentication to authenticate web app users. Because that authentication mechanism depends on the on-premises Active Directory environment, it will likely need to be updated in the cloud to use Azure AD or some other authentication alternative.
2. Optional: Optional issues are opportunities to improve the application when it's running in Azure, but they aren't blocking issues. As an example, storing app settings or secrets in a web.config file is considered an optional issue. That pattern will continue to work when deployed in Azure, just like on-premises, so no changes are required. Apps that are hosted in Azure can take advantage of services like Azure App Configuration and Azure Key Vault to store settings in ways that are easier to share and update and that are more secure. So, there's an optional issue to begin taking advantage of these services as part of the Azure re-platform.
3. Potential: Potential issues represent situations where there might need to be a change made for the app to work in Azure, but it's also possible that no change is needed, depending on the details of the scenario. This severity is common and requires an engineer to review
the incidents. As an example, connecting to a SQL Server database is a potential issue because whether a change is needed depends on whether the database that's used is accessible from Azure. If the database is already hosted in Azure or is accessible from Azure (via Express Route, for example), no changes are needed. On the other hand, if the database used exists on-premises without a way for the app to connect to it once it's running in Azure, thought will need to be given to how this dependency will work after the app is re-platformed. Perhaps the database will need to be migrated alongside the app or perhaps a solution like Express Route or Hybrid Connections will be needed to make the database accessible.
4. Information: Information issues are useful pieces of information for the developer to know but don't require any action. As of the time of this writing, there aren't any information issues in Azure Migrate application and code assessment for .NET (although there are a couple in the Java version of the tool).

In addition to severity, each incident in the report includes a story point number. This is a unitless number representing the relative effort estimated to address the incident (if, in fact, it needs to be addressed). These shouldn't be used to estimate the precise amount of work in terms of hours or days but can be used as a rough estimate for comparing two projects. If one solution has 500 story points worth of issues and another has 200 story points worth of issues, it's probably true that the solution with fewer story points of issues will be simpler and easier to re-platform.

From the initial dashboard, you can navigate to views displaying aggregate issues (all incidents organized by issue type) or projects (incidents organized by project). When viewing incidents for a particular project, you can choose whether to view all incidents for the project or to view incidents per component (a single source file or binary dependency is considered a component).

In incident detail views, there will be a state drop-down box indicating whether each incident is Active, Resolved, or Not Applicable (N/A). All incidents begin as Active and you can change the state as you investigate. Reports can be saved (using the save icon in the top right of the report) and the state will be persisted so that you can return to the same report in the future and continue to review remaining issues and further update the state. As you review the incidents in the report, mark incidents that don't need to be addressed in your solution as not applicable and those that you've fixed as resolved. The incident detail pages also give descriptions of why the incidents were identified, why they matter, and how you can address them. These detail views, shown in Figure 6, include links to documentation and links to the locations in source code where the issues were detected.

Figure 6: Azure Migrate application and code assessment issue details

In addition to working with reports in the Visual Studio IDE, it's possible to export the reports to share with others. The Export button in the top right of the report interface allows you to export the report in three different formats:

• Export as HTML produces the most readable report for sharing with others. The HTML report, shown in Figure 7, has all the same dashboards and views as the Visual Studio UI and includes snapshots of the latest state of all incidents. This report is best for sharing with others for viewing issues and investigation progress.
• Export as CSV produces a report with the same information but in a spreadsheet format. As seen in Figure 8,
this report format doesn't have the nice charts and graphics of the HTML view but can be useful when you need a simple spreadsheet representation of the issues that you can annotate and update as you discuss and review the incidents.
• Export as JSON produces a machine-readable JSON representation of the incidents. This export option isn't meant for human consumption. Instead, this option produces JSON reports that can be parsed by other applications programmatically.

Figure 7: Azure Migrate application and code assessment HTML report

Figure 8: Azure Migrate application and code assessment CSV report.

Not every incident in the report requires action. It's common for Azure Migrate application and code assessment reports to include many potential issues that don't actually require changes. And some of the issues will be optional. The best way to think about the reports is as a helpful resource listing parts of the application requiring review. Once you've looked at the highlighted parts of the application, you can have confidence that you understand what work (if any) is required prior to re-platforming it to Azure.

Example Walkthrough
As an example, I've used the Azure Migrate application and code assessment feature to analyze the updated eShop Northern Mountains sample application found at https://github.com/dotnet/eshop. eShop is a multi-service ASP.NET Core e-commerce solution. Although it does have a number of external dependencies, the sample was created with cloud deployment in mind so there shouldn't be too many issues to address aside from the identification of the external dependencies.

Because the eShop sample is comprised of multiple services, I needed to make sure to select all entry points when choosing projects to assess, as shown in Figure 9.

Assessing the source code was quick. Even with its multiple projects, the eShop sample isn't large, so analysis finished in just a few seconds. Altogether, 16 projects were scanned and 41 total incidents of nine different types of issues were detected. Of the 41 incidents, eight had mandatory severity, eight had optional severity, and 25 had potential severity. This ratio of issue severities is typical. The report dashboard is shown in Figure 10.

Normally, I like to review incidents in Azure Migrate application and code assessment reports one project at a time. In this case, though, with a relatively low number of total incidents, it's just as easy to use the Aggregate Issues view and review them all at once. Here are the issues that were identified in the eShop sample:

• RabbitMQ usage. The only mandatory incidents are eight instances of RabbitMQ usage detected in the EventBusRabbitMQ project and shown in Figure 11.
As explained in the issue's description, these incidents relate to a dependency on a RabbitMQ queue that will need to be made available in Azure as part of re-platforming efforts. Several potential strategies are presented, either using an alternative messaging system like Azure Service Bus or running a RabbitMQ cluster in Azure. In order for eShop to work properly in the cloud, though, it will be necessary to follow one of these suggestions to make the messaging done in EventBusRabbitMQ work.
• Database usage: The next most common issue is the eight incidents of database usage detected across several different eShop projects. In all these cases, the assessment has found data being read from or written to databases using Entity Framework Core. The incidents' descriptions explain that I need to ensure that the database used will be available from the Azure environment I migrate the eShop solution to. Similar to the RabbitMQ incidents earlier, these relate to an external dependency that needs to be accessible for eShop to work properly.
• Connection strings: The next issue category I looked at was connection strings. Spread across five projects, there were six incidents of connection strings being detected in configuration files. These were strings like EventBus: amqp://localhost and Redis: localhost. Once again, these are indications of dependencies the eShop solution has on services outside its own processes. The connection strings point to localhost but AMQP and Redis services will not be available locally when run in an Azure App Service or AKS environment. As before, these incidents are reminders to make sure you have messaging and caching services available for eShop to use in Azure and that configuration is updated when deploying to Azure to take advantage of those services (a hypothetical configuration sketch follows this list).
• Hardcoded URLs: The next issue type I looked at were the five incidents of hardcoded URLs. In the case of eShop, these were all references to other eShop services that will be invoked via the Aspire framework. URLs included http://catalog-api/health and http://basket-api. Because these services are made accessible via Aspire, the URLs will continue to work in Azure and there aren't any changes needed in the application. I can mark these incidents as N/A to indicate that they're false positives. If there had been external URLs in use, I would have had to review the URLs to make sure that the services they represented would be available in eShop's new environment.
• Caching: The next most common issue type was the five incidents of caching APIs being used. In eShop's case, these were all in the Basket.API project and represented Redis usage that the basket API uses to maintain the user shopping basket state. As explained in the issue description, I need to ensure that a Redis instance is available for the basket API to use in the cloud and should also explore using an external caching solution like Azure Cache for Redis so that cached state can be shared between instances of the basket API service if I need to scale out to multiple instances.
• HTTP usage: The report also includes four incidents of outgoing HTTP calls being made. Much like many of the other incidents, these potential incidents represent an external dependency that I need to review to ensure that the referenced services will be available when deployed to Azure. Looking at the code, I see that these are the same calls that use the hardcoded URLs reviewed earlier. Those incidents were about the hardcoded URL strings whereas these ones are about the HTTP client API usage, but both relate to the same dependency, so these items have already been reviewed.
• Local file usage: The final potential issue reported is a single incident of file IO occurring in the Catalog.API project. I see from the incident details that the project is calling File.ReadAllText. To ensure that the accessed file path will be available from Azure, I need to review the file (CatalogContextSeed.cs) where the call occurs. I can click the link included in the incident report to navigate directly to the relevant code in my solution. Doing so, I see that this call is part of seeding the database with information on first run and that the file read is deployed alongside the application, so everything should work just as well in Azure as it did on-premises. This one, also, can be marked N/A.
• Static content: The last remaining set of issues are three optional incidents about several eShop projects serving static content. These incidents are optional because no action is required. However, the issue description explains that once deployed to Azure, there may be performance and scalability benefits of serving static content such as the files identified here using Azure Blob Storage and Azure CDN.

Figure 9: Assessing the updated .NET eShop sample with Azure Migrate application and code assessment
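As a purely hypothetical sketch of that configuration step (the key names mirror the strings called out in the connection strings bullet, but the endpoint values are placeholders, not from the article), the localhost entries would be swapped for Azure-hosted services at deployment time:

{
  // Hypothetical appsettings values pointing at Azure-hosted services
  "ConnectionStrings": {
    "EventBus": "amqp://<your-messaging-host>",
    "Redis": "<your-cache-name>.redis.cache.windows.net:6380"
  }
}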
Those are all of the issues detected in the eShop sample. It's a larger list than initially expected, perhaps, but, as we thought, the issues were almost entirely about external dependencies that will need to be migrated to Azure along with the eShop solution. There was nothing blocking in the report except for the helpful reminders that the solution will need database, message queue, and caching solutions in the cloud that may look different from on-premises, and provisioning those (and updating the app to use them) needs to be part of the re-platforming plan.

Figure 10: Assessment results for the eShop sample solution

Figure 11: RabbitMQ related issues in eShop

Using the Command Line Interface
In addition to the Visual Studio extension discussed in the rest of this article, the Azure Migrate application and code assessment feature is also available via a .NET command line tool. The Visual Studio extension offers the most functionality (because the reports maintain state for which incidents have been reviewed), but all the same assessment capabilities are available from the command line, meaning that analysis doesn't need to depend on Visual Studio and can be scripted, if needed.

Like all .NET SDK tools, the Azure Migrate application and code assessment CLI tool is installed using a .NET CLI command:

dotnet tool install -g dotnet-appcat

That command pulls down a NuGet package containing the latest version of the dotnet-appcat tool and installs it globally. The tool is run using the appcat command. The simplest way to use the Azure Migrate application and code assessment CLI is to run appcat analyze solution.sln where solution.sln is the path to the solution or project file you want to assess. Running analyze without a project or solution specified causes the tool to search for projects in the current directory. This command starts an interactive command line experience, allowing you to choose which projects from the solution you want to assess, whether you want to analyze only source code or source code and binaries, and what output format you want (HTML, CSV, or JSON)—all the same options as with the Visual Studio extension! See Figure 12.

Figure 12: The Azure Migrate application and code assessment CLI offers the same options as the Visual Studio extension.

Once the appcat CLI tool gathers the necessary data from you, it proceeds with analysis and publishes results to a report using your desired format.

One important note about the CLI experience is that the solution must be able to build without errors. Visual Studio can provide the necessary symbol information for analysis, but when running from the command line instead, the target project must be in a buildable state.

In addition to mimicking the Visual Studio experience, the Azure Migrate application and code assessment command line tool accepts parameters that allow all decisions to be made up-front so that the tool runs completely automatically, allowing for a non-interactive experience suitable for scripting. To do this, use the --non-interactive parameter and make sure to specify report format and components to analyze via the command line. Here's an example command line for assessing a C# project non-interactively:
appcat analyze eShopLegacyMVC.csproj -s HTML
-r OutputPath --code --non-interactive

This command assesses the eShopLegacyMVC.csproj proj-


ect, only considers source code (thanks to --code), and
puts an HTML report in the OutputPath folder. The output is the same as what the Azure Migrate application and code assessment Visual Studio extension produced, and it all runs automatically from the command line as shown in Figure 13.

Figure 13: Azure Migrate application and code assessment command line output
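Because nothing prompts in this mode, it's easy to script several assessments in a row. Here's a hypothetical sketch reusing the flags from the example above (the project paths and report folders are illustrative, not from the article):

appcat analyze src/Catalog.API/Catalog.API.csproj -s HTML -r reports/catalog --code --non-interactive
appcat analyze src/Basket.API/Basket.API.csproj -s HTML -r reports/basket --code --non-interactive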

Road Map
The Azure Migrate application and code assessment feature
already provides insights necessary to plan a successful Azure
migration. In the future, though, there are even more useful
features planned. Key features coming in future versions of
Azure Migrate application and code assessment include:

1. GitHub Copilot Chat integration: Azure Migrate ap-


plication and code assessment is great at identifying
points of interest in an application that should be
reviewed to be prepared for re-platforming to Azure.
It’s only an analysis tool, though, and guidance
that’s given on how to address issues is static. By
partnering with GitHub Copilot Chat, it will be pos-
sible for users to have reports from Azure Migrate ap-
plication and code assessment summarized for them
and to have back-and-forth conversations about the
best way to review and remediate issues. GitHub Co-
pilot Chat will be knowledgeable about the issues
discovered by Azure Migrate application and code
assessment and will be able to provide step-by-step
instructions for configuring Azure resources, updat-
ing code, and more, to address them.
2. Multiple Azure targets: A feature available for the
Java version of Azure Migrate application and code
assessment that isn’t yet available for the .NET ver-
sion of the tool is the ability to specify a desired
Azure target environment during analysis (for exam-
ple, Windows Azure App Service or AKS with a Linux
container). The issues that apply to different Azure
environments overlap a lot and, currently, Azure Mi-
grate application and code assessment for .NET as-
sesses a general Azure PaaS environment that covers
issues of interest in any potential target. But this
feature will be coming soon to Azure Migrate application and code assessment for .NET. In the next
version, you will be able to specify an Azure environ-
ment to target and issue severity levels and descrip-
tions will be updated for that environment.
3. Binary output analysis: Today, Azure Migrate application and code assessment can scan binary dependencies referenced by a solution. But there still has to be a source solution as a starting point. In the next version of the tool, the command line interface will allow scanning compiled binaries in a given folder directly.

Wrap-Up
Azure Migrate has been helping developers migrate solutions from on-premises to Azure for years. With the addition of the Azure Migrate application and code assessment feature, that migration help now extends to understanding the inner workings of your solution and source-level changes that are needed for the application to work in Azure platform-as-a-service environments!

You can learn more about Azure Migrate application and code assessment from documentation available at https://learn.microsoft.com/azure/migrate/appcat/overview. As you use the new feature, please send feedback if you have ideas for how the experience could be improved or if you run into any bugs. The feature is brand new, so there are bound to be opportunities to improve it as customers use the Visual Studio extension and CLI and share their experiences. To get instructions on how to leave feedback, click the Leave Feedback link in the top right corner of the Azure Migrate application and code assessment's Visual Studio UI or use the Q&A tab of the Visual Studio marketplace page at https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.appcat.

Mike Rousos

ONLINE QUICK ID 2405061

Stages of Data: The DNA of a


Database Developer, Part 1
Long before I started in IT, I wanted to teach. The process of learning something, then performing it, crafting it, learning from mistakes, further crafting it, and sharing that journey with others appealed to me as a great career. Rod Paddock (Editor-in-Chief for CODE Magazine) recently told me that the next issue of CODE would deal with databases. At the same time, I've been seeing many questions on LinkedIn that boil down to, "How do I increase my SQL/database skills to get a data analyst/database developer job?" In this industry, that question is complicated, as it means different things to different people. It's arrogant to claim to have all the answers, because doing so would presume that someone knows about every job requirement out there. Having said that, I've worked in this industry for decades, both as an employee and as a contractor. I'd like to share what skills have helped me get and keep a seat at the table.

Kevin S. Goff
www.KevinSGoff.net
@StagesOfData

Kevin S. Goff is a database architect/developer/speaker/author, and has been writing for CODE Magazine since 2004. He was a member of the Microsoft MVP program from 2005 through 2019, when he spoke frequently at community events in the Mid-Atlantic region and also spoke regularly for the VS Live/Live 360 Conference brand from 2012 through 2015.

Opening Remarks: A Rebirth of SQL Server Skills
Over the last few years, there have been intense opportunities and job growth in the general area of data analytics. That's fantastic! There's also been a reality check of something that database professionals warned about last decade: the need for those using self-service BI tools to have some basic SQL and data management skills. As part of research, I spent a substantial amount of time reading LinkedIn posts, and talking to recruiters and other developers about this, and there's one common theme: There's still a booming need for SQL and data handling skills. I want to make a joke about being an old-time SQL person and a "boomer," but I was born one month after the official end of the boomer generation.

I'm a big sports and music fan and I often hear people talk about the "DNA" of great athletes and musicians. In this context, the definition isn't referring to the biological aspects of a person. It's more the traits they carry with them and the habits they've burned into themselves that they leverage regularly to do their jobs and do them well. I certainly hope that everyone who's willing to work hard will get a job, and I'm equally excited that I've seen a resurgence of "what SQL Server skills should a data person have?"

I know people who build great websites and great visualizations in reporting tools, where the available data was very clean and prepared by an existing data team. However, for every one individual job out there like that, there's more than one job where you'll have to put on your SQL/data-handling hat.

I've worn multiple hats in the sense that I've always had work. I make mistakes, I underestimate, I still commit many silly errors that we all, as developers, wish we could avoid. Although no one person can possibly cover every possible SQL/data skill that will make someone successful, I stepped back and thought, "What has helped me to help clients? What has been the difference-maker on a project?" And that's why I decided to write this article. This will be a two-part article. Throughout both, I'm going to mention some topics that I covered in prior CODE Magazine articles where the content is still just as relevant today.

You can find many web articles with titles like, "Here are the 30 best SQL Server interview questions you should be prepared for." There are many good ones and I recommend reviewing them as much as you can. I also recommend getting your proverbial hands dirty inside of Microsoft SQL Server Management Studio. On that note, I'm using Microsoft SQL Server, which means I'll be covering some Microsoft-specific topics. Having said that, many of the topics in this article are relevant to other databases.

I didn't want to call this article "The 13 things you should study before a SQL interview," because I'm going beyond that. I'm covering what I think makes for a good SQL/database developer. Yes, there's overlap, as I want to share some of the specific skills companies are often looking for.

First, Know Basic SQL
"Knowing basic SQL" is really two things: understanding the SQL language (according to the ANSI SQL standard) and understanding specific features in the database product (in this article, Microsoft SQL Server) and some of the physical characteristics of Microsoft databases. There are great books out there, but these topics tend to come up again and again. I'll start with some index basics, even before getting into some language basics.

Know the Different Types of Indexes
A common question is the difference between a clustered index and a non-clustered index. With this and other topics in this article, I'm not going to write out a full definition, because other websites have done a great job. But here are things I think you should know.

You can only create one clustered index per table and that index defines the sorted order of the table. For a sales table that might have millions of rows, a clustered index on either the sale transaction ID or possibly the sales date will help queries that need to scan over many rows in a particular order.

You can have many non-clustered indexes. They serve the purpose for more selective queries: that is, finding sales between two dates, finding sales for specific products, specific geographies, etc. A non-clustered index might contain just one column, or it could contain multiple columns (a composite index) if you'll frequently need to query on a combination of them.



Figure 1: Execution plan where the only operator is a Table Scan, with statistics on the number of rows read

Figure 2: Execution plan with an INDEX SEEK, which is far more efficient (only one row read)

Related: Know When SQL Server Will Use These Indexes: Just because you create an index doesn't mean SQL Server automatically uses it. For instance, you might create indexes that SQL Server won't use, either because the SQL statement you're using isn't search argument optimizable or because you might not realize what something like compound indexes will (and won't) do.

I'm going to take the table Person.Person from the AdventureWorks database and create my own table. I'll also create two indexes: a clustered index on the primary key (BusinessEntityID) and a non-clustered index on the Last Name and the First Name.

drop table if exists dbo.TestPerson
go
select * into dbo.TestPerson from Person.Person

Now I'll use a single query to retrieve the row for a specific Business Entity ID:

select * from TestPerson
where BusinessEntityID = 12563

In the absence of any index, SQL Server must perform a table scan against the table and read all 19,972 rows. Here's what SQL Server returns for an execution plan (Figure 1).

Although the query runs in a split second, SQL Server had to read through all the rows. It's not exactly optimal.

Now let's create a clustered index that drives the sort order of the table:

create clustered index
[ix_BusinessEntityClustered] on TestPerson
(BusinessEntityID)

If I run the same query again and then look at the execution, I'll see a very different story: a CLUSTERED INDEX SEEK where SQL Server only needed to read one row (Figure 2).

Next, I'll query the table for all names where the last name is "Richardson".

select * from testperson
where lastname = 'Richardson'

Does the clustered index help us at all? Unfortunately, not really. Although SQL Server scans a clustered index instead of a row table (heap), SQL Server must scan all 19,972 rows (Figure 3).

To help with queries based on a last name, I'll create a non-clustered index on the Last Name column.

create nonclustered index [ix_LastName]
on TestPerson ( LastName)

After creating the index on Last Name, let's query for a specific last name, and then for a specific last name and first name:

select * from testperson
where lastname = 'Richardson'

select * from testperson
where lastname = 'Richardson' and
firstname = 'Jeremy'

In the case of the first query, SQL Server uses a more efficient INDEX SEEK on the new index. However, it does need to perform what's called a KEY LOOKUP into the clustered index, to retrieve all the columns (because I did a SELECT * to ask for all the columns).
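As an aside, if a query only needs a handful of columns, a covering index can sometimes eliminate that KEY LOOKUP altogether. Here's a minimal sketch (the INCLUDE column list is just an assumption for illustration):

-- Non-key columns in INCLUDE are stored at the leaf level of the index,
-- so SQL Server can satisfy the query from the index alone
create nonclustered index [ix_LastName_Covering]
on TestPerson ( LastName )
include ( FirstName, MiddleName )

-- This query can now be answered without a key lookup:
select LastName, FirstName, MiddleName
from TestPerson
where LastName = 'Richardson'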



Figure 3: Execution Plan, now showing a Clustered Index scan on all 19K rows

What happens when you query both a first and last name? You only retrieve one row, so the server does less work to return the results back to the application. However, under the hood, you'd see something different. Yes, SQL Server still performs an INDEX SEEK but must perform the filtering on the first name when it reads the 99 rows from the clustered index. In this case, the non-clustered index certainly helped to narrow the search, but SQL Server still needed to "scan" through the 99 rows to find all the instances of "Jeremy".

Okay, so what happens if you create a composite index on the Last name and First name?

create nonclustered index [ix_LastNameFirstName]
on TestPerson ( LastName, FirstName)

If you run the query again to retrieve both the last name and the first name, SQL Server performs an INDEX SEEK and reads just one row into the Key Lookup to get all the columns for that single person. You're back in optimization heaven.

Okay, so a composite index further optimizes the query. Here's the last question: Suppose I query only on the first name to retrieve all the people named Jeremy? Will SQL Server use the FirstName column from the index and optimize as well as it did when I used the last name? Figure 4 doesn't give us great news.

Unfortunately, SQL Server won't perform an INDEX SEEK. Although SQL Server uses the LastNameFirstName index, it performs an INDEX SCAN through all 19,972 rows. It only finds one "hit" and performs a key lookup to retrieve the non-key columns.

Figure 4: Execution Plan, and you're back to a scan, because the query couldn't use the index

To achieve the same general optimization, you'd need to create a separate index where the leftmost column (or only column) is the first name. Bottom line: SQL Server reads the leftmost key columns, so keep that in mind when designing indexes and when querying against multiple columns.

You can find many websites that talk about composite indexes. Here's a particularly good one: https://learn.microsoft.com/en-us/answers/questions/820442/sql-server-when-to-go-for-composite-non-cluster-in

Indexes and Fill Factors
Some developers who're very good at SQL queries hesitate to get involved with questions that get closer to what some might normally feel is "the job of a DBA." As an applications developer, I'll freely admit that I don't have the full knowledge of a DBA. However, there are *some* topics that cross over into the application space, and even if you don't use them, it's still important to at least understand why some tasks are not performing well.

As a basic introduction, when you create an index, you can define the percentage of space on each leaf-level index page that SQL Server fills with data. Anything less than 100% represents the remainder that SQL Server reserves for future index growth. If I specify 90% (often a default), SQL Server leaves 10% empty for instances when additional data is added to the table. Setting a value too high (or too low) in conjunction with index fragmentation can lead to "opportunities for improvement."

And Now for Some Basic SQL Syntax
Many SQL interviews start with making sure the person knows the difference between an INNER JOIN, OUTER JOIN, FULL JOIN, etc. I'm going to go over some examples in a moment, even though there are many websites that cover this. Before I start, here's a little secret during SQL assessments: You can put a person's knowledge of JOIN types to the test by presenting them with a scenario along the lines of this:

"Suppose I have 100 customers in a customer master and 100,000 rows in a sales table. You can assume that every sale record is for a valid customer. In other words, there are no orphaned sales rows. If I do an INNER JOIN between the rows based on customer ID, how many rows should I expect to get? If I do a LEFT OUTER JOIN, how many rows should I expect to get?"

Yes, that's an interview question floating out there, I kid you not. The problem is, you don't know if the person is trying to see what other questions you might ask, or maybe the person is trying to see if the two numbers (100 and 100,000) will be distractors. Just like spelling bee contestants will ask, "Can you use the word in a sentence," don't be afraid to ask questions about the nature of the tables/keys/row counts that might be relevant.
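For what it's worth, here's one way to reason through that scenario (my own walkthrough, assuming customer IDs are unique in the customer master and the customer table sits on the left of the LEFT OUTER JOIN): with the INNER JOIN, each of the 100,000 sales rows matches exactly one customer, so you get exactly 100,000 rows, and customers with no sales contribute nothing. With the LEFT OUTER JOIN, you get those same 100,000 matched rows plus one NULL-extended row for each customer who has no sales, so the answer is somewhere between 100,000 and 100,100 rows, depending on how many of the 100 customers never had a sale.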



Want to be able to handle questions on different JOIN types? First, practice with a database. If you don't have one, take some of the data in the Microsoft SQL Server AdventureWorks database or the Contoso retail data. I'll use AdventureWorks for a few examples with INNER and OUTER JOIN.

Suppose I have a Vendor table with 104 vendors. I have a sales table with thousands of sales rows. I have sales for some vendors but not all. Assume the following:

• We have a Vendor table with 104 rows.
• Every PurchaseOrderHeader Vendor ID is a valid value as a Business Entity ID in the Vendor table.
• Each TotalDue row in the OrderHeader table is a positive value (i.e., no negative numbers).

What can I say about the number of rows I'll get back from this query, which will generate one row for each Vendor with sales, and a summary of the Order dollars?

select Vend.Name,
SUM(TotalDue) as VendorTotal
from Purchasing.Vendor as Vend
join Purchasing.PurchaseOrderHeader AS POH
on Vend.BusinessEntityID = POH.VendorID
group by Vend.Name
order by VendorTotal desc

If someone asks you how many specific rows you'll get back, there's no way to answer that. Yes, you'll get back one row for each vendor that had at least one sale. However, because you don't know exactly how many vendors DIDN'T have sales, the row count could be as low as 0 vendors or as high as 104 vendors. You're doing an INNER JOIN (i.e., basic JOIN) that gives you vendors who have sales, will return one row for each vendor (because you specified a GROUP BY), and provides the sum of the Order Dollars.

Okay, let's change the query to this:

select Vend.Name,
SUM(TotalDue) as VendorTotal
from Purchasing.Vendor as Vend
join Purchasing.PurchaseOrderHeader AS POH
on Vend.BusinessEntityID = POH.VendorID
where (OrderDate
BETWEEN '1-1-2012' and '12-31-2012')
group by Vend.Name
order by VendorTotal desc

I'll ask the same question: What can you say about the number of rows you'll get back and the sum of the dollar amounts? All you can say is that it won't be MORE than the first query, because you've set a filter condition in the Where clause to only read the rows with an Order Date in 2012. If all the orders were in 2012 to begin with, I'd generally expect the numbers to be the same as the first query. If there were orders in other years, I'd expect either the count of vendors and/or the sum of the dollar amounts to be lower. Again, I can't predict the specific number of rows: All I can say is that it's somewhere between 0 and 104.

Okay, let's take the original query and turn it into a LEFT OUTER JOIN:

select Vend.Name,
SUM(TotalDue) as VendorTotal
from Purchasing.Vendor as Vend
left outer join
Purchasing.PurchaseOrderHeader AS POH
on Vend.BusinessEntityID = POH.VendorID
group by Vend.Name
order by VendorTotal desc

In this case, I can say with certainty (assuming no orphaned/invalid foreign keys) that SQL Server will return 104 rows. I can say that with confidence because I'm doing a LEFT OUTER JOIN. I'm telling SQL Server the following: Give me ALL the rows from the table to the left of the LEFT statement (the vendor table), and either sum the vendor dollars or just give me a NULL value if the vendor didn't have any sales. Given the conditions, I can say with high confidence that I'll get back 104 rows.

Now, I'm going to throw the proverbial "curveball." Suppose I want all the vendors (all 104), regardless of whether they've had an order. This time, I want the dollar amount to only reflect the sales in 2012. Will this give me 104 rows?

select Vend.Name,
SUM(TotalDue) as VendorTotal
from Purchasing.Vendor as Vend
left outer join
Purchasing.PurchaseOrderHeader AS POH
on Vend.BusinessEntityID = POH.VendorID
where (OrderDate BETWEEN
'1-1-2012' and '12-31-2012' )
group by Vend.Name
order by VendorTotal desc

If your reply is, "Well, I didn't know that, but I'd never write it that way," take a moment to realize that you might inherit code where someone didn't know, but DID write it that way.

Some might say, "Sure, I'll get 104 rows. The dollars could be lower because we're filtering on a year, but we'll still get 104 rows." They might defend the answer of 104 rows, because I've specified a LEFT OUTER JOIN. As one of the Bob contractors said in Office Space, "Hold on a second there, professor." You'll only get the vendors who have an order. (When I run it, I specifically get 79, but the purpose of this discussion is to demonstrate that I'm no longer guaranteed to get all of them.)

Here's why. That WHERE clause works a little *too* well. Even with the LEFT OUTER JOIN, the WHERE clause restricts the number of rows. I've known developers who were surprised to learn this, and I've seen code in production that fell victim to this.



How can you get all vendors (in this case, 104), AND only get the sales dollars for 2012 (or a NULL)? You need to remove the WHERE from the main query. There are several ways. Some will localize the condition in the LEFT OUTER JOIN, or (my preference) use a subquery. Here are both solutions:

select Vend.Name,
SUM(TotalDue) as VendorTotal
from Purchasing.Vendor as Vend
left outer join
Purchasing.PurchaseOrderHeader AS POH
on Vend.BusinessEntityID = POH.VendorID
and (OrderDate BETWEEN
'1-1-2012' and '12-31-2012' )
group by Vend.Name
order by VendorTotal desc

select Vend.Name,
SUM(TotalDue) as VendorTotal
from Purchasing.Vendor as Vend
left outer join
(select VendorID, TotalDue from
Purchasing.PurchaseOrderHeader
where (OrderDate BETWEEN
'1-1-2012' and '12-31-2012' ) ) POH
on Vend.BusinessEntityID = POH.VendorID
group by Vend.Name
order by VendorTotal desc

Too Much Theory and Not Enough Fresh Air
When I was a young developer, a project manager fought against me being able to get my hands on anything other than VERY small test rows. He also argued against me sitting in meetings with the business people. He was convinced that he and his colleague could write the perfect specification, and that the team should be able to write pure and optimized code without ever having a chance to test it against large or actual data. He even scoffed when I randomly generated a large set of test rows. Reading and studying is important, but it's when you get your hands dirty (and get crushed by the weight of hard lessons) that you become a data professional.

In Part 2 of this series, I'll talk more about CROSS JOIN and SELF-JOIN scenarios. Basically, you use CROSS JOIN when you want to create a Cartesian product of the table rows. For instance, say you have 45 employees in a sales table and a calendar table with 52 weeks, and you want to create 2,340 rows, one for each employee/week combination. You use a SELF-JOIN when you need to query a table back into itself (i.e., you query it as Table A, then join to the same table as Table B, usually because the single table has a relationship across columns). Again, I'll cover them more in Part 2, because I want to review the topic of recursion and common table expressions.

Do you know what a UNION does and know the difference between UNION and UNION ALL? Here's how I answer that question. Suppose you have three different tables (or three query results), with the same number of columns. You want to append all of them into one table. If you know for absolute certain that the three tables don't contain duplicated rows (across all the columns), or you don't care if you get duplicate rows, a UNION ALL will give all the rows from all three tables. If Table/Result A had 100 rows, B had 200 rows, and C had 300 rows, a UNION ALL guarantees a result of 600 rows.

If there's any risk of duplicated rows, you should perform a UNION. A UNION will perform a duplicate check based on all the columns (under the hood, SQL Server performs a SORT DISTINCT). This can be expensive for large sets of data with many columns but might be necessary. If someone presents the scenario of three tables with the row counts I described, and asks what a UNION (not a UNION ALL) will generate for a final result count, the answer depends on whether any of the rows are duplicated across the tables.

Do you know the difference between HAVING and WHERE? I know good developers who stumble on this question. Yes, a good developer will surely test out the difference and do the necessary research when writing a query that calls for a HAVING and figure it out through some trial and error. Although it might seem a bit unfair to expect a person to verbalize the answer to every question, the ability to express an answer is sometimes rooted in the amount of actual experience. (This is such an important distinction that I'll cover it a second time in the advanced SQL Server section.)

That's certainly not a full list of basic SQL Server topics, but these are items and nuances that I'd expect database developers to know. Before I move on to some advanced SQL Server topics, I want to stop and bring up a point: *MOST* technical interviews are performed by people who can listen to an answer and judge appropriately. If you can establish a conversational dynamic with an interviewer, you can usually tell if the other person is adept at assessing your responses. I say *most* because there's something that still occurs in this industry, and it personally infuriates me, and that's when a "tech screening" is being done by a person who's given a script. It happens. Sadly, there's no hotline you can call to report such intellectual laziness (and believe me, I can use stronger words). You can try to call it out in the interview process, but obviously you run the risk of ruining your chances. My only advice is to just answer the question as best you can.

Some Advanced SQL Server
Ask ten different SQL developers what constitutes "basic SQL" versus "advanced SQL," and you'll get many different answers. Here's my two cents and it's probably controversial: Anything that a standard ORM or Entity Framework generates as SQL code for an OLTP application is usually basic SQL. Anything that a data framework can't generate (or doesn't do well, so much so, in fact, that it becomes a good training example of how NOT to write something) is advanced. Yes, that's my story and I'm sticking to it.

Anything dealing with JOIN statements on matching keys, WHERE clauses to filter, and basic aggregation (i.e., summing the sales dollars across millions of sales rows and aggregating to one row per Product Brand), all of that falls under basic SQL querying.

Most of the other language features are "non-basic." Okay, I'll acknowledge that the difference between HAVING and WHERE takes less time than explaining the nuances (and shortcomings) of PIVOT because the former is generally an "either/or" and the latter has several parts to it. Still, the next ones are some advanced topics.

Know the difference between HAVING and WHERE. Here's the bottom line: You use HAVING to filter on aggregated amounts, and you use WHERE when you're directly referring to columns in a table. Suppose you're retrieving the customer ID and the Sum of Sales Dollars and grouping by the customer. In the query, you only want Customers with more than one million dollars in sales. You can use HAVING Sum(SalesDollars) > 1000000 after the GROUP BY statement.

Alternatively, you can sum the sales dollars by customer to a temporary result set or derived table. When you query from the temporary result set or derived table, you can use WHERE, because the resulting rows are already aggregated down to specific columns.
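Here's a minimal sketch of both forms, assuming a hypothetical dbo.Sales table with CustomerID and SalesDollars columns:

-- HAVING filters on the aggregated amount, after the GROUP BY
select CustomerID, SUM(SalesDollars) as TotalSales
from dbo.Sales
group by CustomerID
having SUM(SalesDollars) > 1000000

-- Alternatively, aggregate in a derived table and use WHERE,
-- because those rows are already aggregated by customer
select t.CustomerID, t.TotalSales
from ( select CustomerID,
SUM(SalesDollars) as TotalSales
from dbo.Sales
group by CustomerID ) t
where t.TotalSales > 1000000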



Know how the RANKing functions work. These particularly come in handy instead of trying to use TOP (N). For instance, if you want to get the top five selling products by state, the RANK functions are preferable to TOP (N). Also remember that there are three flavors of the general RANK functions: ROW_NUMBER(), RANK(), and DENSE_RANK(). The first doesn't account for ties, the second accounts for ties and leaves gaps, and the third accounts for ties and closes gaps. Recently I caught an interviewer trying to trick me into a TOP N response when the query in question called more for a RANK function.

Select * from (
select PurchaseOrderID, EmployeeID, VendorID,
ShipMethodID, OrderDate, TotalDue,
rank() over (partition by shipmethodid
order by TotalDue desc) as RankNum
from Purchasing.PurchaseOrderHeader ) t
where RankNum <= 3
order by ShipMethodID, ranknum

Understand how CROSS APPLY and OUTER APPLY work. These have been somewhat controversial topics in the SQL Server world. Introduced in SQL Server 2005, they're great for patterns where you need to perform more gymnastics than a regular JOIN (such as accumulating, reading across—or dare I use a COBOL term—"Perform Varying"). In my last article (CODE Magazine, January/February 2024), I wrote that CROSS APPLY can be used to get cumulative results. (Additionally, Microsoft added optional clauses in SUM OVER for ROWS UNBOUNDED PRECEDING in SQL Server 2012.) There can be a performance hit with any of these features, as they potentially need to process and aggregate many rows over many iterations. I typically use these for overnight reporting jobs or any instance where users accept that the results won't be instant.

Over the years, I've discovered that CROSS APPLY has some other interesting uses, such as using a CROSS APPLY as an alternative to UNPIVOT:

create table TestTable
( SaleDate date, Division1Sales money,
Division2Sales money,
Division3Sales money)

insert into testtable values
('1/1/2023',100,200,300),
('1/2/2023',200,300,400),
('1/3/2023',200,300,0)

SELECT SaleDate, Division, Sales
FROM testtable
CROSS APPLY (
VALUES ('Division1', Division1Sales),
('Division2', Division2Sales) ,
('Division3', Division3Sales))
Temp (Division, Sales)
WHERE isnull( Sales,0) <> 0

Know when subqueries are required. I talked about this in my last CODE Magazine article (January/February 2024), using the seemingly simple example of multiple aggregations and when a subquery is needed. Ultimately, almost every developer runs into this situation of aggregating across multiple one-to-many relationships, where the two (or more) child tables have no relation to each other beyond their common relationship with the parent.

Understand Triggers and Capturing Changes. I've written about this in prior CODE articles (most recently, January/February 2024), so I don't want to repeat details. Most environments need (or require) solutions to track changes to data. Whether it's writing automated scripts for triggers (or just hand-coding triggers), or using Change Data Capture, or relying on some third-party tool, it's critical to know how to log changes and how to report on them.

It's 2024 and there's no excuse for not capturing an audit trail of database changes. You have Temporal tables (SQL Server 2016), Change Data Capture (SQL Server 2008), and you still have triggers (from the early days)! Pick one, study it, and implement it.

How do you optimize queries? You have a query that suddenly runs slowly and you need to figure out why. Here's where there's seldom a single right answer. There are several diagnostic philosophies here. One of the things I'll do is to hit the low-hanging fruit first. Have there been new application deployments, did the volume of data spike astronomically overnight, can you run SQL Profiler or use the SQL Query Store to get a bigger picture, etc. The list of the "low-hanging fruit" can be very long and five different people can contribute 10 uniquely valid ideas, but the point is to identify whether some significant event triggered the drop in performance.

Second, I'll review the query, the data, and the execution plan. Again, I might be looking for low-hanging fruit, but at least it's targeted toward the specific query. Over the years, I've become more cognizant that a solution using views (and views calling views) is not unlike blowing air into a balloon: At some point, you provide one more shot of air and the balloon pops. My point is that adding "one more table" or "one more join" to a view five distinct times doesn't mean that the progressive impact will be consistent for all five. This is something I want to devote to a future article, but for right now, keep an eye on solutions that have nested views. They sometimes can lead to "too much air got pushed into the balloon." In Part 2 of this article, I'm going to talk more about this, and how certain execution operators in the execution plan can give you a clue that you have an issue.

If the problem is indeed "views gone wild," what's the solution? Well, "it depends." I'm not a big fan of views calling views calling views to begin with, and so I'll try to delve into the workflow of the desired query. If it's something that only needs to be produced one or two times a day, I'll wonder if a materialized snapshot table with a crafted stored procedure will do the trick. Alternatively, maybe the culprit is a stored procedure that has five queries in it, and maybe the third of the five is the one taking all the time. Ultimately, those who can deal with bad performance are the ones who have gone through it many times. Quite simply, this isn't an easy topic.

Isolation Levels, Snapshot Isolation Levels, and Read Uncommitted ("Dirty Reads")
No one should judge a developer negatively for not knowing about a specific feature. I've known good SQL Server developers who were unaware of a feature, or perhaps got it backwards. Having said that, I'm going to go out on a limb and say that if there's one area of knowledge that can help you stand out as someone with good skills, it's Isolation Levels. Odds are (and I emphasize, "odds are") that someone who can speak to Isolation Levels, read/write



locks, and the SQL Server snapshot isolation level is likely to show a good command of other SQL Server fundamentals. I'd almost call this an intellectual "gateway" feature.

In SQL Server, there are five isolation levels: Read Uncommitted (dirty read), READ COMMITTED (the default isolation level), Repeatable Read, Serializable, and the Snapshot Isolation Level (added in SQL Server 2005; it comes in two flavors). I'm going to cover the first two (read uncommitted and READ COMMITTED) and the last one (snapshots) in this article, and I'll cover Repeatable Read and Serializable in the next article.

Any time I've ever been asked to talk about isolation levels, I explain them in terms of "Task A" and "Task B." Both tasks could be users running an option in an application, or one of them could be automated jobs, etc. If anyone ever wants to test your knowledge of isolation levels, I highly recommend talking in terms of a "Task A" and "Task B" scenario.

I'll start with the default, which is READ COMMITTED. Let's say Task A initiates a transaction that updates thousands of rows in a table. While that transaction is underway, Task B tries to read some of those rows. Because the default isolation level is READ COMMITTED in SQL Server, that means Task B will be locked from reading updates from Task A (or could possibly time out) until Task A finishes. The specific reason is because Task A's update has a write lock on the rows, and that prevents Task B from attaining a shared lock to read the committed version of the rows. Task B needs to wait for Task A to finish and commit the transaction (or for Task A to roll back because of some error in Task A).

There can be problems here, and discussions are not without controversy. Task B might simply need to get an "approximation" of data. In other words, it doesn't matter much if Task A commits or rolls back; it's small in the grand scheme of what Task B wants to query. What Task B can do is set the isolation level to a read uncommitted (dirty read). The benefit is that Task B won't run a risk of timeout waiting for Task A to finish, and a dirty read is sometimes a bit faster than a normal READ COMMITTED, because SQL Server isn't even attempting to put a shared lock on the rows.

Sounds great? Well, dirty reads bring some real risks, and here are three of them.

First, suppose Task A's transaction is updating both a header table and two child detail tables. Task B performs a dirty read between the update of the header table and the child tables. Task B sees the new header row but doesn't yet see the related detail rows. In other words, Task B returns an incomplete picture of what Task A's transaction intended to write. That's not good!

Second, suppose Task A starts and updates one or more of the tables, but performs a rollback at the end (because of some post-validation error, etc.). Suppose Task B read the data using READ UNCOMMITTED just before the rollback. Task B will be returning data that never officially saw the light of day because Task A rolled it back. That's also not good!

These two alone should provide caution to developers who use READ UNCOMMITTED. Again, there can be specific instances (often tied to workflow throughout the course of the day) when dirty reads can help, provided those instances are well-managed. There's also a third risk. This one is covered less often in web articles because it's not very common, but it's still a risk. Suppose Task A's transaction is performing inserts such that a page split occurs in the SQL Server table. If Task B is using a dirty read, it's conceivable that Task B could pull back duplicate rows that came from the page splits.

Okay, so READ COMMITTED means that users might get timeout errors (or wait longer for the results of queries), and a dirty read means that avoiding those timeout situations might lead you to reading "bad/dirty" data. This was a challenge for a long time. Fortunately, Microsoft SQL Server 2005 addressed this with one of the more important features in the history of the database engine: the SNAPSHOT isolation level. (Notice how I've skipped past REPEATABLE READ and SERIALIZABLE. I'll cover them in Part 2 of this series.)

Now I've gotten to the SNAPSHOT isolation level. I'm biased, but this is a fantastic feature. It comes in two flavors, the standard SNAPSHOT (that I call a "static" SNAPSHOT) and READ COMMITTED SNAPSHOT (that I refer to as a "dynamic" SNAPSHOT). This gives us the good benefits of dirty read and READ COMMITTED but without the downsides.

First, to use the standard "static" snapshot, you need to enable it in the database, because SQL Server is going to use the TempDB database more heavily:

-- Code to alter a database to support
-- static snapshots
ALTER DATABASE AdventureWorks2014
SET ALLOW_SNAPSHOT_ISOLATION ON

Here's the scenario for a static snapshot:

• Customer XYZ has a credit rating of "Fair".
• Task A starts a transaction and updates the customer's credit rating to "Good". This transaction could take several seconds or even minutes to complete if it's a long and intensive job.
• You've seen that Task B will have to wait (or might even time out) if you use the default READ COMMITTED. You've also seen that Task B returns a value of "Good" if it uses a READ UNCOMMITTED, which isn't "good" if Task A rolls back.
• Suppose that Task B starts its own read session/read transaction and simply wants to get the last committed version ("Fair"). Task B can initiate a transaction, set the ISOLATION LEVEL to "SNAPSHOT", and then query the table: Task B's query returns "Fair". It won't give you a dirty read and it won't give you a timeout.

Sounds great, doesn't it? Overall, it is, although there's one downfall. Suppose Task A commits its transaction, but Task B is still in the middle of its READ session of SNAPSHOT. If Task B queries the row again (in the same read session), it continues to return "Fair" because it continues to read from the last committed version AT THE TIME its own READ session started. (This is why many DBAs refer to snapshot isolation as versioning.)
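Here's a minimal sketch of that Task B read session (the dbo.Customer table, CreditRating column, and customer ID are hypothetical, purely for illustration):

-- Task B: read the last committed version of the row,
-- without blocking and without reading Task A's in-flight update
set transaction isolation level snapshot

begin transaction
select CreditRating
from dbo.Customer
where CustomerID = 123
-- Returns 'Fair' (the last committed version) even while Task A's
-- update is still running. In a static snapshot session, repeated
-- reads keep returning 'Fair' for the life of this transaction,
-- even after Task A commits.
commit transaction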



Here's what Microsoft did to handle that. In my opinion, this ranks as one of the finest achievements in the history of the SQL Server database engine. They added an optional clause to the SNAPSHOT isolation level. By performing the following setting, you are guaranteed that EVERY default READ COMMITTED read will give you the most recently committed version.

-- Code to alter a database to support
-- READ COMMITTED SNAPSHOT
-- This will configure SQL Server to read
-- versions from TempDB

ALTER DATABASE AdventureWorks2014
set read_committed_snapshot on

Go back to the previous example. If Task B reads the row a second time, it picks up the change from Task A. Again, it's because of the dynamic nature of READ COMMITTED SNAPSHOT.

You don't even need to start read sessions with SNAPSHOT isolation. By turning on READ_COMMITTED_SNAPSHOT in the database, every single READ COMMITTED query (the default level) automatically pulls the most recently committed version of the row right at that time.

Does that mean that READ COMMITTED SNAPSHOT will work wonders right away? It might, but there's one catch. As SQL Server is using TempDB as a version store, DBAs will likely need to step in to deal with the management of TempDB.

Now for Something a Little Bit Different
Here are two more SQL Server tricks and approaches that experienced database developers have up their sleeves.

First, suppose you have a report that shows one row per product with the sum of sales. The company sells the product in different sizes, but the report only shows the summary of sizes.

Now suppose the business has an urgent request. At any one time, maybe two or three of the sizes need to be replenished (which you'd pull from a query). You want to show the list of sizes, except the report row granularity is fixed and you can't add additional row definitions to the report. Also, because there might be 20 sizes, you can't add 20 columns out to the right of the rest of the data, with a checkbox for the sizes you need to replenish, because that would take up too much space.

But wait! There's something you can do. After talking to the users, they just need to know the X number of sizes, nothing else. It's almost like seeing a tooltip on a webpage, except this is a generated report. Maybe all you need to do is just add one more column to the report, called "Sizes to Replenish", and show a comma-separated list of the sizes?

If you've ever read the famous "Design Patterns" book, you'll know that the authors (affectionately known as "The Gang of Four": Erich Gamma, Richard Helm, John Vlissides, and Ralph Johnson) came up with some great "short but sweet" names for patterns. Well, I'm not talented enough to come up with "short but sweet terms," but I have a name for this pattern. I call it the "stuff the X number of variable business values into a single comma-separated list, and then jam that as a single new column in the report" pattern. (Side note: I'd never succeed in product branding.)

I've had business users who were happy with this result, and IT managers who breathed a sigh of relief that there was a simple solution that wouldn't disturb the expected row count.

Okay, so how can you do this? Prior to SQL Server 2017, a common approach was to use the "FOR XML" statement. In SQL Server 2017, Microsoft added a new STRING_AGG function to read over a set of rows and "aggregate" values into a comma-separated string. Take a look at STRING_AGG, which works very nicely. Also, look at STRING_SPLIT, a long overdue feature to easily do the reverse: convert a comma-separated string of values into a result set.

Second, this is one that initially seems like a simple job for MAX and GROUP BY but winds up being a bit more involved. Because I've established myself as a long-winded branding author, I'd like to refer to this pattern as the "I thought I only needed a GROUP BY and MAX, but now I need to go back and pull one other column that's not directly part of the aggregation" pattern. (I'd fail marketing.)

Suppose you have a query that initially shows the single highest transaction by customer. It could be something as simple as this:

SELECT CustomerID, Max(SaleAmount)
From Sales
GROUP BY CustomerID

But now you're asked to show the Sales Person associated with that sale. You can try all day long, but you won't be able to write this as a single query. You'll need a subquery of some type.

Additionally, you could have a tie within a customer. Maybe the customer bought something from Sales Person A for $100 and something else from Sales Person B for $100. There are two sales transactions for the same amount with a different Sales Person.

Here's another situation when spotting this pattern (and writing a correct query) can sometimes indicate a developer's level of experience.

;with tempcte as
( SELECT CustomerID,
max(SaleAmount) as MaxSale
from Sales Group by CustomerID)

Select tempcte.*, OriginalTable.SalesPersonID,
OriginalTable.SaleID, OriginalTable.SaleDate
From Tempcte
Join dbo.Sales OriginalTable
on Tempcte.CustomerID = OriginalTable.CustomerID and
tempcte.MaxSale = OriginalTable.SaleAmount

There are multiple ways to write this. Some might use the Common Table Expression as an in-line subquery, and some might use CROSS APPLY, etc. The point is that when you aggregate across many rows to come up with a SUM or a MAX or whatever, you might need to bring along some additional columns for the ride. (Aha! Maybe I can call this the "Tag-along" aggregation pattern!)



Finally, I want to bring up a pattern that I've only had to deal with twice (to my recollection). It was a bit unusual, and very humbling. I had to write a query that summarized process duration, and mistakenly thought that a few subqueries and aggregations would do the trick. It turned into a hair-pulling afternoon, although I benefitted from it. I came up with a working solution, and then learned later that another SQL Server author (Itzik Ben-Gan) had documented the pattern, called "gaps and islands." Because I already presented the code in a prior CODE Magazine article (January/February 2018), I'm not going to re-paste the code here. If you want to see a more advanced example of where a developer can mistakenly oversimplify a problem, I documented my entire thought process in the January/February 2018 issue of CODE Magazine ("A SQL Programming Puzzle: You never stop learning").

On the Subject of Starting Out
There's no single roadmap in learning a discipline (in this case, being a database developer). As people often say, "It's a journey." Just make sure "journey" is a verb as well as a noun.

Data Profiling: I've Seen Fire (Audits) and I've Seen Rain (More Audits)
I'll admit, I have a bit of a wise-guy side to me, and it nearly came out in a recent conversation. Someone asked if I'd done data profiling and the proverbial little devil on my left shoulder wanted to say, "Well, I don't know that specific term, but in my last project, I had to take a large number of rows from a bad OLTP system and search for patterns."

Obviously, I didn't make that joke, but here's what I DID say: You can't spell "accountability" without "count." To me, accountability isn't about "who can we blame if something goes wrong?" Accountability means "what can we count on" or, in the world of databases, "all rows are present and accounted for in some manner."

For example, I worked on a project where we had some pretty significant cost discrepancies between two systems. We knew the issues stemmed from code between the two systems that needed to be refactored. Before we could dive into that, we had to come up with a plan RIGHT AWAY to fix the data. Of course, you can't fix a problem (or in this case, a myriad of problems) without identifying all the issues, and that was the first order of business: identifying all the different ways data had gone bad.

Without going into specifics, we found four different scenarios. Of those four, two of them had sub-scenarios. Some of these were simple and "low-hanging fruit" and some were more complicated. Here were some of them:

• Rows in the legacy system marked as deleted/archived, but still in the production system
• Rows in the legacy system marked as deleted, but should not have been
• Rows in the legacy system with multiple cost components, where the target system only ever recognized the first component
• Rows in the legacy system with cost values that the target system could not process (dirty data that had never been validated)
• Multiple rows in the target system that were marked as active, when only one row should have been marked as active, based on the granularity

Before I go on, I want to talk about that last word: granularity. It might sound pedantic to invoke that term during a meeting about fixing data, but as it turns out, three different functional areas in the company had different opinions about what constituted "granularity of an active row." The problem is that some business users aren't going to think in the most precise terms about this (nor should they be expected to). They'll simply look at three candidate rows on a screen, point to the second, and say, "THAT ONE! That's the active row, and the others shouldn't be, because of this and this and this."

And that, folks, is how you come one step closer to defining (in this case) what constitutes the definition of an active row.

Okay, back to the original story, one of the first things we provided to management was a summary of what percentage of bad rows fell into each category. We had roughly 6,000 rows identified as "bad" with 20% falling into Category A, 50% into Category B (and then a sub-breakdown of Category B), 10% into Category C, etc.

The queries were a series of COUNT and EXISTS checks:

DECLARE @CategoryACount int , @CategoryBCount int

SET @CategoryACount = (select count(*) from
CostDetailsLegacy outside
where MarkedArchive = true
And exists
(select 1 from CostProduction inside
where outside.<Key1> = inside.<Key1> and
outside.<Key2> = inside.<Key2> and
inside.ConditionForActive = true ) )

SET @CategoryBCount = (select count(*) from
CostDetailsLegacy outside
where MarkedArchive = true
And exists (select 1 from CostProduction inside
where outside.<Key1> = inside.<Key1> And
outside.<Key2> = inside.<Key2> and
inside.ConditionForActive = true ) )

Additionally, management might provide an error factor: maybe if cost rates differ by less than two cents (rounding errors, bad math approaches involving integer division, etc.), they'll elect to tackle those later. Once I saw a manager flip out when they saw the top of the list of discrepancies sorted by variance and the overall row count. They thought that because the top 10 rows were off by a large percentage and we had thousands of rows, that we had a disaster. It turns out that after row 20, the variances dropped to pennies, with a slew of numbers off by just a small amount. So even within your categories, check the deviation among the rows.

I'm not going to devote two pages of code to the specifics (it'll just give me flashbacks and nightmares anyway).



Hopefully you get the idea: You need to identify all the anomaly conditions before you can fix them. Additionally, management might decide to go over a particular scenario first, because it's the most egregious, or easiest to fix, etc.

Obviously, this won't fix any data, but it gives you a good structured starting point for a game plan, as opposed to playing the proverbial whack-a-mole when you see bad data. This is what I mean by data accountability: what rows you can "count on" as being valid, which ones you can't, and why. Trust me, the auditors are trained to think in these terms (and others).

Always Know the Product/Language Shortcomings
I was on an interview team where I asked a question that some liked and some wondered what planet I came from. It was for an ETL position where SSIS and T-SQL would be used heavily, and so I asked, "What three things about SSIS annoy you the most that you wish worked differently?"

I wasn't so much interested in the specific answers as much as I was interested in their thought process and product experiences. At the risk of making an absolute statement, it's unlikely that someone has a strong amount of prior experience with SSIS and has been a happy camper about the product all the time. I love SSIS and I've debated with developers who refused to use it, but that doesn't mean I haven't (out of frustration) come up with alternate definitions for what the letters SSIS stand for! The SSIS Metadata manager in the data flow has gone from frustrating to barely acceptable over the years. SSIS's handling of package-level variables at the script level makes me think of two people talking with plastic cups and a string. I have other pet peeves. Again, I love SSIS, but the product has weak spots, and I'm curious to know what a developer thinks is a weak spot because it usually means they've cut their teeth with it.

When Microsoft first implemented PIVOT in SQL Server 2005, I was mildly excited. I had done many queries involving aging brackets, and looked forward to converting some code I was writing to use PIVOT. The story's ending wasn't quite as happy as I expected. It streamlines some things, but also introduces other "gotchas" to be aware of (dynamic lists, multiple columns, and the need to PIVOT on one pre-filtered/pre-defined result set, as opposed to mixing PIVOT with other T-SQL constructs and with irregular result sets). I was once asked in an interview what new language feature gave me the most frustration. I chose PIVOT: not because I didn't understand it, but because I'd had enough practical experience that I *sometimes* wondered if the pre-SQL Server 2005 approach was better. I'm reasonably confident that the interviewer would have accepted different answers, so long as I said something that demonstrated I'd gotten my hands dirty with it.

Normalization, Denormalization, and Warehousing Basics
There's a classic argument that goes along the lines of this: Person A will ask, "What's a short definition for X and Y, and what are the differences between X and Y?" Person B might very well have worked on projects involving X and Y, but might stumble on a definition, and finally respond, "Well, I can't give a textbook definition, but I've worked on both and know they're different, and I'm sure I can easily look them up on Google." Person A might accept that when push comes to shove, Person B "probably" understands enough.

When developers face questions about the difference between normalization and denormalization and try to describe some common terms in data warehousing, they sometimes can fall a little short, even if they've successfully worked on projects. Yes, I realize that memorizing standard answers might not be the way to go, but....

Let's take normalization and denormalization first. The goal of normalized databases is to reduce/eliminate redundancy, to ensure data integrity, and to optimize getting data into the database as efficiently as possible. You might have to write queries to join data across many tables, but a SQL Server developer shouldn't fear that prospect. Typically, application developers are the only individuals who'll be accessing normalized tables with SQL code.

Denormalized databases are a different breed: Power users and "non-developers" might be more likely to read them because data can be redundant in denormalized tables. One of the major goals of denormalized databases is to optimize getting data OUT of the database as quickly as possible. Fewer complex JOIN statements are needed, which can appeal to power users who know how to use reporting tools but want to query the data with minimal effort.

Finally, something you can use to impress your friends at the dinner table: Some data warehouse initiatives (especially ones with a large number of disparate data sources) shape data into normalized structures FIRST as part of an overall ETL effort, before turning around and flattening them into denormalized structures. For developers who've only ever worked in one type of database, this concept might initially seem like overkill, but it's an invaluable approach for making sure data in the data warehouse is clean.

Yes, the capabilities of certain BI and reporting tools, and even advancements in database features, have created some grey areas, but these two concepts still breathe on their own today, and they're not going away.

The topic of data warehouses is a bit of a different story. There are developers who can cleanly describe the differences between normalized and denormalized databases but might admit to a shortfall of knowledge on data warehousing concepts.

If you plan to work in the data warehousing area, there's one book I absolutely recommend you read: "The Data Warehouse Toolkit: The Complete Guide to Dimensional Modeling" by Ralph Kimball and Margy Ross. There are multiple editions of this book. Even if you can only get access to the first edition, it's still worth reading. I realize that it almost sounds cultish to recommend a book so strongly, but the book actively engages in different business cases where Kimball and Ross use repeatable patterns to handle different scenarios. It's not just a book to quote, it's also a book with approaches to live by.
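Circling back to the normalized-versus-denormalized contrast for a moment, a tiny schema sketch (hypothetical table and column names, purely for illustration) shows the difference in shape:

-- Normalized: no redundancy; getting data IN is cheap,
-- and queries join the pieces back together
create table dbo.Customer
( CustomerID int primary key,
CustomerName varchar(100),
RegionID int )

create table dbo.OrderHeader
( OrderID int primary key,
CustomerID int,
OrderDate date,
OrderTotal money )

-- Denormalized: redundant by design; getting data OUT is cheap,
-- and a power user can query it with few (or no) joins
create table dbo.SalesReporting
( OrderID int,
OrderDate date,
CustomerName varchar(100),
RegionName varchar(100),
OrderTotal money )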



ness cases where Kimball and Ross use repeatable pat- with an answer sheet who are unable to determine if a
terns to handle different scenarios. It’s not just a book to particular response aligns with a predefined answer.
quote, it’s also a book with approaches to live by.
Okay, let’s go back to fact tables. The scenario I described
(a sales record) is the most common type of fact table:
a transactional fact table. I can aggregate all the sales
Next time someone says, “Why dollars (or aggregate by dimensional groupings) and that
measure will mean something. You refer to the measures
should I study the Kimball in a transactional fact table as fully additive.
methodology? Isn’t that
antiquated?” here is a proper reply. Certain businesses want to take a picture of the sum of
transactions over some period. Imagine a bank taking
“Do you think that all databases a picture of all a person’s charges and all their depos-
built on Kimball fundamentals its during the month and coming up with a balance. A
company might, as part of a month ending process, read
have suddenly disappeared?” across all the bank transactions and write out a periodic
Knowing how to implement Kimball snapshot table that holds the account ID, month ID, sum
ideas will be relevant for a LONG of charges, some of deposits, and the balance. Let’s say
they do that for three months and so the snapshot fact
time. It’s not a technology, table holds three rows. Can you sum the numbers for the
it’s not a language: It’s a collection charges and deposits into something meaningful? In this
case, yes. Can you sum the balance for each of the three
of approaches, and an outstanding months? Well, much as you might like that, the balance
one that that. isn’t cumulative. Unlike the first two measures, the bal-
ance isn’t fully additive. Yes, you could take an average
of it, which means that measure is semi-additive, and
not cumulative. Periodic snapshot fact tables are not
There are many different data warehousing concepts, but uncommon.
There's also a third type of fact table, usually involving an activity where you're not looking to sum up numbers (like sales or number of items produced) but merely counting the number of times something happened. For instance, let's say I teach two classes a day. Yesterday, 28 out of 30 attended Class A, and 18 out of 19 attended Class B. The next day, the numbers are 29 out of 30, and 17 out of 19, respectively. You might have a Factless fact table that records nothing more than the number of times something happened (attendance for a particular class by a particular instructor). All you really get out of the Factless fact table is the count, although that count over time (or its average) might have analytic value.

Okay, I'm going to bring up one more topic that's likely to come up during an interview: slowly changing dimensions. Here's how I describe it: Suppose a product was introduced in 2022 and sold in all of 2022 for a manufacturer price of $50. On 1/1/2023, the price increased to $52. Then on 7/15/2023, the price increased to $53.

Here's the big question: Do you care about tracking sales during the time when the product was $50, versus $52, versus $53?

Here's another example. From the time I started paying taxes (i.e., age 18, first job), I've lived at 10 different addresses (kind of scary to think about it). Let's say some government agency tracks information about me. Do they care if I was a resident in one area for X years/months and another area for a different amount of time?

When you care about aggregating fact data not only by an attribute (a product ID or a Social Security Number), but by changes to that attribute that have occurred over time, you refer to that as a Type 2 Slowly Changing Dimension.
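Here's a minimal sketch of what a Type 2 dimension for the product example might look like. The column names are illustrative; teams use different conventions for the effective-date and current-row columns:

CREATE TABLE dbo.DimProduct (
    ProductKey INT IDENTITY PRIMARY KEY, -- surrogate key: one per version
    ProductID INT NOT NULL,              -- business key: stays the same
    ProductName VARCHAR(100) NOT NULL,
    ListPrice DECIMAL(10, 2) NOT NULL,
    EffectiveDate DATE NOT NULL,
    ExpirationDate DATE NULL,            -- NULL for the current version
    IsCurrent BIT NOT NULL
);

The $50/$52/$53 product would occupy three rows, each with its own surrogate key, and every fact row points at the version that was in effect when the transaction occurred.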

Now you're wondering, "Okay, what's a Type 1 Slowly Changing Dimension?" It's simply when some attribute for an entity changes many times and there's no analytic need to aggregate fact data over time by each change.

There are challenges associated with Type 2 Slowly Changing Dimensions: late-arriving fact data and late-arriving dimensional changes. On the former, suppose you receive sales data today (late April 2024), but it represents transactions that occurred in January 2024. When you write out the fact data, you need to make sure the fact row points to the proper VERSION of that dimension at that time. On the latter (and this can be trickier): suppose you receive information today that a product's price changed back on 11/23/2023. You might be tempted to say, "Okay, we'll go back and update the dimensional pointers for those fact sales transactions that occurred after 11/23/2023 to point to the correct version of the product's change." Well, that depends on the database policy regarding "updating" fact tables. In some environments, fact rows are INSERT ONLY—you CANNOT update a fact row. In this instance, you'd find the fact rows that should have been associated with the new version of the product (had you received it in a timely manner), INSERT new rows with reversing sales amounts, and then INSERT the rows the way you would have in the first place. Yes, this happens!

And Speaking of Snapshots…
At some point, many businesses need to see what data looked like based on a point in time. This can take a variety of forms. Suppose you report on daily production every morning at 5 AM. You might want to see what a product looked like between January 10 and January 11 (possibly because of some significant event). Here's another one: maybe you need to see what a customer profile looked like back on February 3. Finally, suppose you want to show the trend of inventory on the first day of the month, the last day of the month, and any monthly averages, over the course of a year.

If it sounds like I'm partly repeating myself (that is, I'm mentioning some of the scenarios from the data warehousing section), you're correct. Snapshot tables, audit trail tables, and slowly changing dimensions all have some overlap. Even if you don't work in a full-fledged data warehousing team, your ability to solve some of these challenges can increase your value. Some people have this impression of data warehouses as a bunch of old backup tapes from data 30 years ago that's kept offsite in some dusty storage area. I thought that way at the beginning of my career, and I could not have been more wrong.

Database professionals have different opinions about the structure of such tables. On one extreme, some might simply use change data capture to log every single change, and then use queries to reconstruct history at any one point in time. Granted, that could be a lot of SQL code, but it can be done. Some create temporal tables (or use the temporal table feature added in SQL Server 2016) to possibly reduce the amount of code needed to reconstruct history. Some create periodic snapshot tables where the periods represent specific established business timelines (beginning of month, end of month, average during the month, etc.). On one occasion, a client was stunned that I was able to reconstruct a timeline to show some very irregular production history, believing that I'd somehow concocted a magic formula to suddenly show what happened on a day three months ago. It wasn't magic, it was basic data warehousing concepts. At risk of repetition, and at risk of our famed editor striking this sentence because it's overkill (smile), I can't stress enough the value of reading the Ralph Kimball methodologies in his Data Warehousing books.
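As a sketch of the temporal-table option (the table and column names here are hypothetical), SQL Server 2016 and later can keep the history for you and then answer point-in-time questions directly:

CREATE TABLE dbo.CustomerProfile (
    CustomerID INT PRIMARY KEY,
    CreditLimit DECIMAL(10, 2) NOT NULL,
    ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON
    (HISTORY_TABLE = dbo.CustomerProfileHistory));

-- What did the profile look like back on February 3?
SELECT *
FROM dbo.CustomerProfile
FOR SYSTEM_TIME AS OF '2024-02-03'
WHERE CustomerID = 42;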
Ever see those TV commercials where two people are debating about whether something happened the day before? Suddenly someone shows up with instant replay equipment to show what happened. You want to be the person who can show up with the necessary replay equipment.

Other Miscellaneous Things
Think globally: Your applications aren't the only ones that might touch the data you manage. When you're looking at audit trail requirements, you can't assume that your processes are the ONLY ONES that will touch data. Once I got into a spirited argument with a SQL Server author that I highly respected. In the argument, the author stated that the new OUTPUT INTO clause that Microsoft added to SQL Server 2005 meant that triggers and other audit trail features would go away. He defended his position by saying he'd never work for a company that didn't pipe ALL their insertions through stored procedures. I told him that it just wasn't practical to think that way, that other jobs might access and modify data.
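For those who haven't seen it, here's roughly what that clause looks like; the tables here are hypothetical:

-- Write the audit row as part of the same INSERT statement.
INSERT INTO dbo.Orders (OrderID, CustomerID, OrderAmount)
OUTPUT inserted.OrderID, inserted.CustomerID,
       inserted.OrderAmount, SYSDATETIME()
INTO dbo.OrdersAudit (OrderID, CustomerID, OrderAmount, AuditDate)
VALUES (1001, 42, 250.00);

It's a useful tool, but it only audits writes that go through statements written to use it. A trigger sees every write, no matter which job or tool performed it, which was exactly my point.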
In the database world, there are a few RED rules (rules you always follow) and lots of BLUE rules (rules you try to follow but might need to bend). At the end of the day, the pragmatists usually prevail—so long as they also have a plan for tomorrow and the next day.

Here's another topic: using PowerShell. I'll freely admit, I don't jump with joy when I use PowerShell. However, it can be an invaluable tool for "piecing together" different processes, especially database processes.

Suppose you need to run a standard ETL data job, execute some custom API out in the operating system, and then run a final database job. You'd like to use SQL Server Agent: you could easily add Steps 1 and 3 as job steps, and then place the PowerShell script in between the steps.

Here are some ways I've used PowerShell:

• To deploy reports to an SSRS or Power BI report server area under a different service account
• When I had a specific custom process where I needed to tap into ADO.NET in between job steps, and SSIS and .NET modules were not a deployment option

Recommended Reading and Experimenting
There are many fantastic SQL authors out there, such as Itzik Ben-Gan, Greg Low, and Brent Ozar. They have content on multiple websites. That's just three, and there are many other great ones as well.

I've written on SQL Server, T-SQL, and Data Warehousing concepts in many CODE Magazine articles. You can go to https://codemag.com/People/Bio/Kevin.Goff and see a listing of all my articles.

Here are ones I'd like to call out in particular:

• Two T-SQL articles:
  • https://codemag.com/Article/2401051/Stages-of-Data-Some-Basic-SQL-Server-Patterns-and-Practices
  • https://codemag.com/Article/1801051/A-SQL-Programming-Puzzle-You-Never-Stop-Learning
• A Power BI article:
  • Stages of Data: COVID Data, Summary Dashboards, and Source Data Tips (codemag.com)
• Four articles on SQL Server Reporting Services:
  • https://codemag.com/Article/1805051/Refactoring-a-Reporting-Services-Report-with-Some-SQL-Magic
  • https://codemag.com/Article/1711061/SQL-Server-Reporting-Services-Eight-Power-Tips
  • https://codemag.com/Article/1705061/SQL-Server-Reporting-Services-Seven-Power-Tips
  • https://codemag.com/Article/1605111/SQL-Server-Reporting-Services-Eight-Power-Tips

I've created data projects and dashboard pages from personal data for everything from my weekly health stats to personal finances. The more you practice, the better!

It's great to read and absorb information from websites and books. Yes, sometimes it's because you're trying to solve a specific problem at work, so you already know you're getting your hands dirty and you just need to know how to use your hands. Other times, you might be researching or learning a topic where you haven't gotten your hands dirty. All learning is kinetic in some way—a person could read a book on how to perform open heart surgery 100 times and be able to quote each line in the book, for instance. Well, I'm not saying that implementing a Type 2 changing dimension is open heart surgery, but the more you can demonstrate to others that you CAN do something…. As someone who's interviewed people, I might find someone's personal example (a good example) of implementing a Type 2 SCD, or someone being able to open two query windows with a test table to demonstrate READ COMMITTED SNAPSHOT, to be very compelling.

One Thing I Won't Talk About (But One Final Thing That I Will)
There are other great tools and technologies that database developers use. One that comes to mind is Python. Database developers who also work on the .NET side will sometimes use Entity Framework. Those who work on the ETL side might use third-party tools such as COZYROC and possibly different Master Data Management tools. The list goes on and on. I tried to focus more on core skills in this article. And trust me, a month after this issue is printed, I'll say to myself, "Rats, I forgot to mention some approach that could have made this article better." For that reason, I'm splitting this article into multiple parts.

But here's something I do want to say. Those who follow sports might recall a player who was criticized for not giving full effort during practice sessions, and reacted during a press conference by repeatedly saying, "C'mon, we're talking about practice. Not a game, but practice." Nearly all of us can think back to some time in our lives when we greatly improved our skills in some area. I'm betting that more focus and more practice and more studying played a role.

Reading SQL Server books and blogs is great, but what's even greater is taking some of those skills and trying them out on your own databases. Microsoft has AdventureWorks and Contoso Retail demo databases. When I wrote articles on COVID data, I found many Excel/CSV files with statistics. Yes, it took some work to assemble those into meaningful databases, but it was worth it. If you're starting out at a new job, or even applying for a new job, you want people to watch you do something and say, "Wow, that person has obviously done this before."

Summary: What I Almost Called This Article
I've been working on this article for over four months. As most authors can attest, what you start with and what you finish with can be different things. As I look back over this, the content itself didn't change much, but the reasons I wrote it evolved. As I mentioned earlier, I've seen many LinkedIn questions where something like this would be helpful. I also wrote this because I wanted to share what things I've seen many times. I'll never claim to have all the answers on what makes a good database developer, but I've instructed people at a tech school whose mission was to help people get jobs (or get better jobs) and I've mentored other developers. I always wanted to take an inventory of what fundamentals I think others will find important, just to make sure I hadn't forgotten anything (and I'll freely admit that I'd forgotten the specifics of Fill Factor). I also know someone who's considering a career in this industry. I've been successful in my career: I've made many mistakes, and I've learned hard lessons as well! I wanted to look back on what areas of knowledge have helped me to be successful.

I've been talking to a developer that I mentored for a few years. Their first response was, "Wow, you're really trying to expose interviewers!!!" As Eric Idle said to John Cleese during the famous "Nudge Nudge Wink Wink" skit: "Oh no, no, no, (pause), YES!"

There are topics I covered in this article in more detail than others. I went into a fair amount of detail on the Snapshot Isolation Level, but only briefly talked about Change Data Capture and logging. There are other web articles out there, including some from me. As I've linked in this article, there were some topics that I've previously covered in CODE Magazine and didn't want to repeat.

Kevin S. Goff


From SOAP to REST to GraphQL


Simple Object Access Protocol (SOAP) can be used to build web services that support interoperability between different platforms and technologies. REST (an acronym for Representational State Transfer) is another popular way of building lightweight APIs that can run over HTTP. As an open-source query language, GraphQL promises a more potent way of accessing information than SOAP or REST alone. This article aims to provide a comprehensive overview of the evolution of web APIs, exploring the transition from SOAP to REST, and finally to GraphQL. It delves into the motivation behind each architectural style, along with their characteristics, benefits, and drawbacks. By understanding the progression from SOAP to REST and the emergence of GraphQL, developers can make informed decisions when choosing the right API design for their projects.

Joydip Kanjilal
joydipkanjilal@yahoo.com
Joydip Kanjilal is an MVP (2007-2012), software architect, author, and speaker with more than 20 years of experience. He has more than 16 years of experience in Microsoft .NET and its related technologies. Joydip has authored eight books, more than 500 articles, and has reviewed more than a dozen books.

If you want to work with the code examples discussed in this article, you need the following installed on your system:

• Visual Studio 2022
• .NET 8.0
• ASP.NET 8.0 Runtime

If you don't already have Visual Studio 2022 installed on your computer, you can download it from here: https://visualstudio.microsoft.com/downloads/.

In this article, I'll examine the following points:

• SOAP, REST, and GraphQL and their benefits
• The key differences between SOAP, REST, and GraphQL
• The benefits and drawbacks of SOAP, REST, and GraphQL
• How to use each of these tools in enterprise apps

After gaining this knowledge, you'll build three applications: one each using SOAP, REST, and GraphQL.

What Is Simple Object Access Protocol (SOAP)?
Simple Object Access Protocol (SOAP) is a communication protocol for data exchange in a distributed environment that allows different applications to interact with one another over a network by leveraging XML as the message format. You can take advantage of SOAP to build interoperable web services that work with disparate technologies and platforms. The structure and content of XML messages, as well as a set of communication guidelines, are outlined in a SOAP document.

Note that ASP.NET Core doesn't have any built-in support for SOAP. Rather, the .NET Framework provides built-in support for working with ASMX and WCF. Using third-party libraries, you can still build applications that leverage SOAP in ASP.NET Core. Figure 1 demonstrates how SOAP works.

Figure 1: Simple Object Access Protocol (SOAP) at work

Anatomy of a SOAP Message
A SOAP (Simple Object Access Protocol) message is an XML-based structure used to exchange information between web services across diverse networks and platforms. A typical SOAP message comprises several elements that define the message's structure, content, and optional features. SOAP messages are designed to be extensible, neutral, and independent of any specific programming model or transport protocol, typically HTTP or HTTPS.

A typical SOAP message comprises four key elements, as shown in Figure 2:

• Envelope
• Header (optional)
• Body
• Fault

Figure 2: A SOAP message

The next snippet shows the structure of a typical SOAP message:
<?xml version="1.0"?>

<soap:Envelope
xmlns:soap=
"http://www.w3.org/2003/05/soap-envelope/"
soap:encodingStyle=
"http://www.w3.org/2003/05/soap-encoding">

  <soap:Header>
    <!-- SOAP Header (Optional) -->
  </soap:Header>

  <soap:Body>
    <!-- SOAP Body -->
    <soap:Fault>
      <!-- SOAP Fault -->
    </soap:Fault>
  </soap:Body>

</soap:Envelope>

SOAP Envelope
Every SOAP message is encapsulated inside a root element called the SOAP Envelope. It defines the XML namespaces and contains two child elements: an optional SOAP Header and a mandatory SOAP Body.

<soap:Envelope
xmlns:soap=
"http://www.w3.org/2003/05/soap-envelope/"
soap:encodingStyle=
"http://www.w3.org/2003/05/soap-encoding">

SOAP Header
The SOAP Header is an optional element that contains additional information or metadata about the SOAP message, such as authentication credentials, security tokens, or routing instructions.

<soap:Header>
  <!-- Optional SOAP Header elements -->
</soap:Header>

SOAP Body
The SOAP Body represents the main body of the SOAP message, which contains the data or parameters for the method being sent to the web service. It should be noted that the SOAP Body element is mandatory. It can have one or more child elements that represent the actual payload, and one or more fault elements in the event of an error.

<soap:Body>
  <!-- SOAP Body content -->
</soap:Body>

SOAP Fault
During the processing of a SOAP message, an optional element called SOAP Fault can be used to convey error or fault information back to the client when errors or exceptions occur during processing.

<soap:Fault>
  <!-- Fault details -->
</soap:Fault>
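To make that concrete, here's a sketch of what a populated SOAP 1.2 fault might look like (the reason text is illustrative):

<soap:Fault>
  <soap:Code>
    <soap:Value>soap:Sender</soap:Value>
  </soap:Code>
  <soap:Reason>
    <soap:Text xml:lang="en">Unknown product code</soap:Text>
  </soap:Reason>
</soap:Fault>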
Example SOAP Request and Response
The format of a typical SOAP request looks like this:

POST /<host>:<port>/<context>/
<database ID> HTTP/1.0
Content-Type: text/xml; charset=utf-8

<?xml version="1.0"?>
<env:Envelope xmlns:env=
"http://schemas.xmlsoap.org/soap/envelope/">

  <env:Header>
  </env:Header>

  <env:Body>
  </env:Body>

</env:Envelope>

The following code snippet illustrates how a typical SOAP request looks:

POST /ProductPrice HTTP/1.1
Host: www.example.org
Content-Type: application/soap+xml;
charset=utf-8
Content-Length: nnn

<?xml version="1.0"?>

<soap:Envelope
xmlns:soap=
"http://www.w3.org/2003/05/soap-envelope/"
soap:encodingStyle=
"http://www.w3.org/2003/05/soap-encoding">

  <soap:Body xmlns:m=
  "http://www.abcxyz.org/product">
    <m:GetProductPrice>
      <m:ProductCode>HP_Envy_i9</m:ProductCode>
    </m:GetProductPrice>
  </soap:Body>

</soap:Envelope>

The format of a typical SOAP response looks like this:

HTTP/1.0 200 OK
Content-Type: text/xml; charset=utf-8

<?xml version="1.0"?>
<env:Envelope xmlns:env=
"http://schemas.xmlsoap.org/soap/envelope/">

  <env:Header>
  </env:Header>

  <env:Body>
  </env:Body>

</env:Envelope>

And here's how a SOAP response to the above SOAP request looks:

HTTP/1.1 200 OK
Content-Type: application/soap+xml;
charset=utf-8
Content-Length: nnn

<?xml version="1.0"?>

<soap:Envelope
xmlns:soap=
"http://www.w3.org/2003/05/soap-envelope/"
soap:encodingStyle=
"http://www.w3.org/2003/05/soap-encoding">

  <soap:Body
  xmlns:m="http://www.abcxyz.org/product">
    <m:GetProductPriceResponse>
      <m:Price>6500.00</m:Price>
    </m:GetProductPriceResponse>
  </soap:Body>
</soap:Envelope>

WSDL, UDDI, and Binding
This section provides a brief overview of WSDL and UDDI. It also briefly discusses SOAP binding.

Web Services Description Language (WSDL)
Web Services Description Language, or WSDL, is an XML-based language used for describing and locating web services. A standard format is used to describe an interface to a web service that includes operations, input and output parameters, and the message format.
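As a rough sketch, reusing the GetProductPrice example from earlier (the names are placeholders, and the binding and service sections are elided), a WSDL 1.1 document has this overall shape:

<definitions name="ProductService"
    targetNamespace="http://www.abcxyz.org/product.wsdl"
    xmlns="http://schemas.xmlsoap.org/wsdl/"
    xmlns:tns="http://www.abcxyz.org/product.wsdl"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema">

  <types>
    <!-- XSD definitions for the message payloads -->
  </types>

  <message name="GetProductPriceRequest">
    <part name="ProductCode" type="xsd:string"/>
  </message>
  <message name="GetProductPriceResponse">
    <part name="Price" type="xsd:decimal"/>
  </message>

  <portType name="ProductPortType">
    <operation name="GetProductPrice">
      <input message="tns:GetProductPriceRequest"/>
      <output message="tns:GetProductPriceResponse"/>
    </operation>
  </portType>

  <!-- binding: ties the portType to SOAP over HTTP -->
  <!-- service: publishes the endpoint address -->
</definitions>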
Universal Description, Discovery, and Integration (UDDI)
Universal Description, Discovery, and Integration, or UDDI, represents a platform-independent registry intended to provide a standard mechanism for service discovery, registration, and versioning, enabling organizations to publish and discover web services on the internet.

SOAP Binding
SOAP binding defines how SOAP messages are transmitted using transport protocols, such as HTTP, SMTP, or TCP. The binding specifies the message format, SOAP encoding style, and the rules that must be followed when sending messages over SOAP.

SOAP Web Services Architectural Components
The key components of the SOAP architecture work together to define a standardized message format for exchanging structured data between applications using different transport protocols. The SOAP web services architecture thrives on the synergy among the following three components, as shown in Figure 3:

• Service provider: This is the component that provides the web service and encompasses the application itself, the platform on which the application executes, and the middleware.
• Service requester or the service consumer: This component is responsible for requesting a resource from the service provider and, similar to the service provider, it encompasses the application, the platform, and the middleware.
• Service registry: This is an optional component that resides in a central location so that service providers can publish service descriptions and the service requesters or service consumers can look up those service descriptions.

Figure 3: Interactions between components in a web service

Strategies for Building a SOAP Service
To build a SOAP service, you can use any of the following: a WCF connected service, third-party libraries, or a manual implementation. Let's have a look.

WCF Connected Service
A Connected Service in Windows Communication Foundation (WCF) represents a feature in Visual Studio that facilitates the use of and access to web services, including SOAP-based services. Using it, you can create client code capable of communicating with a web service seamlessly.

Third-Party Libraries
You can also create SOAP services using third-party libraries such as Apache CXF, JAX-WS (Java API for XML Web Services), and Spring-WS (Spring Web Services). You can also leverage Windows Communication Foundation (WCF) for building SOAP services in a .NET environment.

Manual Implementation
SOAP messages can be manually constructed and processed using programming languages that support XML parsing and SOAP protocols, such as C# or Java. To do this, follow these steps (a C# sketch follows the list):

1. Define the service interface, operations, messages, and bindings in the WSDL document.
2. Use the programming language of your choice to implement the service logic.
3. Use XML parsing libraries, or manually handle XML serialization and deserialization of SOAP messages.
4. Listen for SOAP requests on the HTTP server and respond appropriately.
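As a sketch of the manual route in C# (the endpoint URL, SOAPAction, and payload are illustrative, reusing the product-price example from earlier in this article):

using System.Text;

// Build the SOAP envelope by hand.
var soapEnvelope = """
    <?xml version="1.0"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <GetProductPrice xmlns="http://www.abcxyz.org/product">
          <ProductCode>HP_Envy_i9</ProductCode>
        </GetProductPrice>
      </soap:Body>
    </soap:Envelope>
    """;

using var client = new HttpClient();
client.DefaultRequestHeaders.Add("SOAPAction",
    "http://www.abcxyz.org/product/GetProductPrice");

var content = new StringContent(soapEnvelope, Encoding.UTF8, "text/xml");
var response = await client.PostAsync(
    "http://www.example.org/ProductPrice", content);

// The response body is itself a SOAP envelope; parse it with any XML API.
Console.WriteLine(await response.Content.ReadAsStringAsync());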
DataContract and ServiceContract
In SOAP, DataContract and ServiceContract are key concepts used to define the structure of data and the operations supported by the service.
DataContract
In a DataContract, the terms and conditions used in the data exchange between data providers and consumers are outlined, along with the format, structure, quality, semantics, etc. It's typically defined using XML Schema Definition (XSD) or a similar language in SOAP. These could be complex types that are passed between service operations. You should decorate your classes or structs with the DataContract attribute to indicate that they are serializable and understandable by the service provider and the service consumer. Leverage the DataMember attribute to decorate the properties or methods of classes or structs that should take part in the serialization process.

Here's an example of a typical data contract:

[DataContract]
public class MyClass
{
    [DataMember]
    public int Id { get; set; }

    [DataMember]
    public string Text { get; set; }
}

ServiceContract
SOAP web services expose operations and methods through the ServiceContract, which specifies the set of available operations, their input parameters, and return types (i.e., the type of the return values). A ServiceContract consists of an interface or class implemented in the server-side code and acts as a blueprint for service implementation, specifying the operations clients can invoke.

An OperationContract represents an individual operation within a ServiceContract and specifies the operation name, input parameters, and output types. You must apply the OperationContract attribute to a service operation to indicate that clients can access the operation.

The following code snippet illustrates a service contract:

[ServiceContract]
public interface IMyDemoService
{
    [OperationContract]
    string GetText(int id);
}

Implement a SOAP Service in ASP.NET Core
In this section, I'll examine how to build a SOAP service in ASP.NET Core. The section that follows outlines the series of steps needed to create a new ASP.NET Core Web API project in Visual Studio.

Create a New ASP.NET Core 8 Project in Visual Studio 2022
You can create a project in Visual Studio 2022 in several ways. When you launch Visual Studio 2022, you'll see the Start window. You can choose "Continue without code" to launch the main screen of the Visual Studio 2022 IDE.

To create a new ASP.NET Core 8 project in Visual Studio 2022:

1. Start the Visual Studio 2022 IDE.
2. In the "Create a new project" window, select "ASP.NET Core Web API" and click Next to move on.
3. Specify the project name as SOAP_Demo and the path where it should be created in the "Configure your new project" window.
4. If you want the solution file and project to be created in the same directory, you can optionally check the "Place solution and project in the same directory" checkbox. Click Next to move on.
5. In the next screen, specify the target framework and authentication type as well. Ensure that the "Configure for HTTPS," "Enable Docker Support," "Do not use top-level statements," and "Enable OpenAPI support" checkboxes are unchecked because you won't use any of these in this example.
6. Remember to leave the Use controllers checkbox checked because you won't use minimal APIs in this example.
7. Click Create to complete the process.

A new ASP.NET Core Web API project is created. You'll use this project to build the SOAP service in ASP.NET Core.

Install NuGet Package(s)
So far so good. The next step is to install the necessary NuGet package(s). To install the required package into your project, right-click on the solution and then select Manage NuGet Packages for Solution…. Now search for the package named SoapCore in the search box and install it. Alternatively, you can type the command shown below at the NuGet Package Manager Command Prompt:

PM> Install-Package SoapCore

You can also install this package by executing the following command at the Windows Shell:

dotnet add package SoapCore

Create the Data Contract
To create the data contract, create a new class named Customer in a file called Customer.cs and write the following code in there:

[DataContract]
public class Customer
{
    [DataMember]
    public int Customer_Id { get; set; }
    [DataMember]
    public string FirstName { get; set; }
    [DataMember]
    public string LastName { get; set; }
    [DataMember]
    public string Address { get; set; }
}

Create the CustomerRepository
The CustomerRepository class extends the ICustomerRepository interface and implements its methods, as shown in the code given in Listing 1.

Create the Service Contract
To create a service contract, create an interface called ICustomerService. Then create a class named
CustomerService that extends the ICustomerService interface and implements its methods. Both are given in Listing 2.

Configure the SOAP Service
Lastly, you should configure your SOAP service to run it. To do this, add ICustomerRepository and ICustomerService instances to the container using the following code snippet:

builder.Services.AddScoped
    <ICustomerService,
    CustomerService>();
builder.Services.AddScoped
    <ICustomerRepository,
    CustomerRepository>();

This enables you to use dependency injection to retrieve these instances at runtime. The complete source code of the Program.cs file is given in Listing 3 for your reference.

Create the SOAP Client
You'll now create a simple Console application to consume the SOAP service you created earlier. The SOAP client application consumes the SOAP service and displays the retrieved data in the Console window.

You can create a project in Visual Studio 2022 in several ways. When you launch Visual Studio 2022, you'll see the Start window. You can choose "Continue without code" to launch the main screen of the Visual Studio 2022 IDE.

To create a new Console Application project in Visual Studio 2022:

1. Start the Visual Studio 2022 IDE.
2. In the Create a new project window, select Console App, and click Next to move on.

Listing 1: The ICustomerRepository interface

public interface ICustomerRepository {
    public Task<List<Customer>> GetCustomers();
    public Task<Customer> GetCustomer(int id);
}

public class CustomerRepository : ICustomerRepository {
    private readonly List<Customer> customers =
        new List<Customer>() {
            new Customer() {
                Customer_Id = 1, FirstName = "Rob",
                LastName = "Miles", Address = "Boston, USA"
            },
            new Customer() {
                Customer_Id = 2, FirstName = "Lewis",
                LastName = "Walker", Address = "London, UK"
            },
            new Customer() {
                Customer_Id = 3, FirstName = "Carlton",
                LastName = "Kramer", Address = "New York, USA"
            }
        };

    public async Task<List<Customer>> GetCustomers() {
        return await Task.FromResult(customers);
    }
    public async Task<Customer> GetCustomer(int id) {
        return await Task.FromResult(
            customers.FirstOrDefault(x => x.Customer_Id == id));
    }
}

Listing 2: The ICustomerService interface

[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    Task<List<Customer>> GetCustomers();

    [OperationContract]
    Task<Customer> GetCustomer(int id);
}

public class CustomerService : ICustomerService
{
    private readonly ICustomerRepository _customerRepository;
    public CustomerService
        (ICustomerRepository customerRepository)
    {
        _customerRepository = customerRepository;
    }
    public async Task<List<Customer>> GetCustomers()
    {
        return await _customerRepository.GetCustomers();
    }
    public async Task<Customer> GetCustomer(int id)
    {
        return await _customerRepository.GetCustomer(id);
    }
}

Listing 3: Configuring SOAP endpoints in the Program.cs file

using SOAP_Demo;
using SoapCore;

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.

builder.Services.AddScoped
    <ICustomerService, CustomerService>();
builder.Services.AddScoped
    <ICustomerRepository, CustomerRepository>();
builder.Services.AddControllers();

var app = builder.Build();

// Configure the HTTP request pipeline.
app.UseRouting();
app.UseAuthorization();
app.MapControllers();

app.UseEndpoints(endpoints =>
{
    _ = endpoints.UseSoapEndpoint
        <ICustomerService>("/CustomerService.asmx",
        new SoapEncoderOptions(),
        SoapSerializer.XmlSerializer);
});

app.Run();

3. Specify the project name as SOAP_Client and the path where it should be created in the Configure your new project window.
4. If you want the solution file and project to be created in the same directory, you can optionally check the Place solution and project in the same directory checkbox. Click Next to move on.
5. In the next screen, specify the target framework you'd like to use for your console application.
6. Click Create to complete the process.

Once your Console application is ready, you can follow these steps to add a reference to your SOAP service into the client application:

1. Right-click on the client application and select Add -> Service Reference, as shown in Figure 4.
2. Click Next three times and let the default selections be used for this example.
3. Click Submit to initiate the add-reference process.

Figure 4: Adding a new Service Reference

Once the reference to the SOAP service has been added successfully, the following screen will be displayed, as shown in Figure 5.

Figure 5: SOAP service reference added

You'll observe that a file named References.cs has been added to the client application. You can now write the following piece of code to invoke the GetCustomers service operation asynchronously:

ICustomerService soapServiceChannel =
    new CustomerServiceClient
    (CustomerServiceClient.EndpointConfiguration.
    BasicHttpBinding_ICustomerService_soap);
var response = await
    soapServiceChannel.GetCustomersAsync();

The following code snippet shows the complete source code of the Program.cs file of the client application:

using CustomerServiceReference;

ICustomerService soapServiceChannel =
    new CustomerServiceClient
    (CustomerServiceClient.
    EndpointConfiguration.
    BasicHttpBinding_ICustomerService_soap);
var response = await soapServiceChannel.GetCustomersAsync();

foreach (Customer customer in response)
    Console.WriteLine(customer.FirstName);

When you run the application, the first names of the customers will be displayed in the console window.

Call the SOAP Service from Postman
You can also call the GetCustomers service operation using Postman, as shown in Figure 6.

Figure 6: The SOAP response of the GetCustomers service operation in Postman

From SOAP to REST
Representational State Transfer (REST) is a lightweight and scalable approach to building web services that can run over HTTP. It's often used for building distributed systems and APIs due to its simplicity, flexibility, and wide adoption among developers. REST is a set of architectural constraints.

RESTful systems adhere to a set of constraints that standardize the communication between components, enhancing scalability and performance. By leveraging HTTP methods such as GET, PUT, POST, and DELETE, REST enables the creation of well-defined and predictable APIs. This approach allows loose coupling between client and server, promoting flexibility and ease of maintenance in distributed systems.

This not only promotes reliability and resilience, but also allows scalability and efficient communication between components. Figure 7 shows a typical REST-based application at work. By understanding the principles of REST, developers can design robust and flexible systems that are scalable and performant.

Figure 7: REST application at work

Resources
A fundamental tenet of REST is resources, which are distinguished by unique URLs and may be managed via conventional HTTP methods such as GET, POST, PUT, and DELETE. This approach allows clients to access and modify representations of these resources through well-defined interfaces. Resources can be represented in various
formats, allowing more flexibility in how data is manipulated and transferred.

Common Misconceptions about REST
In this section, I'll examine a few common misconceptions related to REST among the developer community.

REST Is a Protocol
It should be noted that REST is not a standard or a protocol. Representational State Transfer (REST) refers to an architectural style and a set of architectural constraints used for developing networked applications. It defines a set of guidelines and principles for developing web services that are scalable, maintainable, and loosely coupled.

REST Is Only Used for Web Services
Although REST was originally designed for creating web services, it can also be used for other types of applications, such as mobile apps or IoT devices. As long as the principles of statelessness, client-server architecture, and resource-based communication are followed, any type of application can be built using REST.

REST Requires the Use of HTTP
Although HTTP is commonly used in conjunction with REST due to its widespread adoption and support for various request methods, it's not a requirement. The principles of resource identification and manipulation are applicable to any network protocol. Although RESTful APIs typically use HTTP as the underlying protocol, REST itself is not tied to any specific protocol and can be implemented over other protocols like CoAP or WebSocket.

Every API that Uses HTTP Is Automatically Considered RESTful
No, not actually. Just because an API uses HTTP doesn't mean it follows the principles of REST. A genuinely RESTful API should adhere to all the constraints set out by the architectural style, including statelessness, caching, uniform interface, etc.

Key Principles of REST
There are several key principles that underpin REST architecture, which are listed in this section. By adhering to these key principles, developers can design scalable, reliable, and efficient web services that meet the demands of today's applications as far as performance and flexibility are concerned.
Resource-Based
In REST, a resource is an object, data, or service that a client may access across a network, usually the internet, using a specified URL. Resource identifiers in a RESTful system are known as URIs (Uniform Resource Identifiers), and they are conceptual entities or data representations. In a RESTful architecture, you identify resources using URIs. Resources are concepts that can be represented in any format, such as HTML, XML, JSON, etc.

Code on Demand
A RESTful architecture provides support for an optional feature that allows code to be downloaded and executed as applets or scripts to extend client functionality. The number of features that need to be pre-implemented is reduced, simplifying the client experience. The ability to download features after deployment improves the extensibility of the system. By delivering executable code to the client, the server may enhance the client's capability via the code-on-demand feature. For example, when you fill in a registration form on a site, your browser can display errors while you type, such as malformed SSNs or email addresses, using validation code supplied by the server.

Stateless
REST is stateless, requiring each client request to provide all information needed for processing as part of query parameters, request headers, or the URI. It should be noted that in a typical RESTful architecture, the server doesn't retain any client-specific information between requests, thus enabling improved scalability and load balancing by allowing any server to process any client request.

Layered Architecture
Requests and responses to REST APIs are routed through several layers spread across multiple tiers. A layered system architecture is one in which the application is split across various layers to isolate presentation, application processing, and data management. These layers work together to fulfill client requests while clients are unaware of them. It helps if you design your RESTful architecture to split your RESTful services across multiple layers, thereby fostering the separation of concerns in your application.

Uniform Interface
RESTful architecture encourages uniform and standardized interfaces for interaction with a resource as part of its basic principles. The interface typically consists of four main HTTP methods: GET, POST, PUT, and DELETE. In a RESTful architecture, the server transmits information to a client in a standard format; a formatted resource is known as a representation. Note that the format of a resource may differ from its internal representation on the server. A server can, for instance, store data in text and send it in HTML format. In a RESTful architecture, by decoupling implementations from the services they provide, they can evolve independently.

Client-Server Architecture
As a rule of thumb, a RESTful architecture should be based on a client-server architecture. The client requests resources from the server, and the server provides resources to authorized clients as appropriate. In a typical client-server architecture, the client and the server are decoupled and don't have any knowledge of each other, thereby enabling them to grow and evolve independently of each other.

Cacheable
Resources should be cacheable to improve network efficiency. Each response should state whether it is cacheable on the client side and for how long. When the client requests the same data in the future, it retrieves the data from its cache, eliminating the need to send the request to the server again. When managed effectively, caching reduces client and server traffic, enhancing availability and performance. However, you should take proper measures to ensure that clients don't have stale data. The server can include caching-related headers (e.g., Cache-Control or Last-Modified) in the response to indicate to the client how long the response can be cached.
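In ASP.NET Core minimal APIs, for example, a handler can set that header explicitly. This is a sketch; the route, payload, and 60-second lifetime are arbitrary:

app.MapGet("/products", (HttpContext context) =>
{
    // Clients and shared caches may reuse this response for 60 seconds.
    context.Response.Headers["Cache-Control"] = "public, max-age=60";
    return Results.Ok(new[] { "HP Envy Laptop", "Lenovo Legion Laptop" });
});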
Benefits of Using REST
The following are the benefits of REST at a glance:

• Scalability: RESTful architectures are inherently scalable due to their stateless nature, allowing for easy scaling of services to meet changing demands and making it easier to handle a large number of requests.
• Simplicity: With its emphasis on standard HTTP methods and status codes, REST simplifies the communication process between clients and servers.
• Flexibility: The ability to work with different data representations and to support various client types adds flexibility to RESTful services.
• Performance: By caching resources, you can reduce client-server interactions, which can greatly improve the performance of your application.
• Interoperability: REST APIs work over standard protocols like HTTP, enabling seamless communication between different systems regardless of their implementation details.
• Technology Agnostic: REST APIs are technology agnostic, enabling you to write both server and client applications in different programming languages. The underlying technology can also be changed on either side of the communication, if needed, without affecting the functionality.

How Does REST Work?
Initially, the client sends a request to the server over HTTP in order to initiate communication with the server. The server acknowledges and processes the request and then produces a response, as appropriate. Responses are usually in the form of data, such as JSON or XML, which the client can display or use. REST APIs use predefined methods like GET, PUT, POST, and DELETE to perform diverse operations on the server. These methods correspond to various actions, such as retrieving data, updating old data, creating new data, and deleting old data.

What Are REST APIs? How Do They Work?
REST APIs communicate data between client and server using HTTP requests. Once a client sends a request, the
server acknowledges and processes it and then sends an appropriate response to the client. Responses are usually in the form of data, such as JSON or XML, which the client can display or use. REST APIs use predefined methods like GET, POST, PUT, and DELETE to perform diverse operations on the server. The methods apply to various actions, such as retrieving data, creating new data, updating old data, and deleting old data. Using these principles and methods, REST APIs can effectively communicate and transfer data between applications. A REST API encompasses a collection of guidelines and standards that can help you build applications that can interact and share information over the internet. It adheres to REST principles, a stateless architectural approach, to develop scalable and adaptable web services.

Challenges of REST
Although REST offers many advantages, some challenges exist: maintaining statelessness can increase network overhead, and designing consistent and meaningful URI structures can be complex in large-scale applications.

Here are the key challenges of RESTful architecture:

• Limited support for performing complex operations: It's important to note that the REST API can only be used for CRUD (Create, Read, Update, Delete) operations through a limited set of HTTP methods (GET, PUT, POST, DELETE). Performing complex operations may require multiple requests or custom endpoints.
• No standard error handling mechanism: There's no standardized error handling in REST. Implementations vary widely in how they convey error codes, error messages, and error formats.
• Over-fetching and under-fetching: REST APIs typically return a fixed representation of resources. When clients only require a subset of the resource data or need additional data not included in the response, it can result in inefficiencies.
• Versioning: As REST APIs evolve, introducing changes can break existing client implementations. When many clients consume the API at the same time, maintaining backward compatibility and versioning can be challenging.
• Security issues: Security challenges exist when using REST APIs, such as data exposure, unauthorized access, and protection against XSS and CSRF attacks. Implementing proper security measures, such as authentication, authorization, and encryption, is imperative.

The Future of REST
With technology advancing at such a lightning-fast pace, REST seems to have a bright future. With its simplicity and scalability, REST is expected to remain a fundamental architectural style for designing networked applications. As more businesses embrace cloud computing and microservices architecture, REST will play a crucial role in enabling seamless communication between various services and systems. Furthermore, with the rise of Internet of Things (IoT) devices and mobile applications, RESTful APIs will be essential in facilitating data exchange between these interconnected devices. As the web industry continues to evolve, REST's flexibility and compatibility with different programming languages make it well-positioned to meet the demands of modern web development practices.

Implementing a RESTful Application in ASP.NET Core
Let's now build a RESTful web application in ASP.NET Core. Follow the steps outlined in the previous section where you built an ASP.NET Core Web API application in Visual Studio. The only difference is that this time you'll build a minimal API application by leaving the Use controllers checkbox unchecked. You'll use this project to build the RESTful application in ASP.NET Core.

Create the Model Class
Assuming that the minimal API application has been created, create a new class named Product in a file having the same name with a .cs extension and write the following code in there:

public class Product
{
    public int Product_Id { get; set; }
    public string Product_Name { get; set; }
    public string Description { get; set; }
    public string SKU { get; set; }
    public int Quantity { get; set; }
    public decimal Price { get; set; }
}

For the sake of simplicity and brevity, I'll skip other model classes here.

Create the ProductRepository
The ProductRepository abstracts all calls to the database. The ProductRepository class extends the IProductRepository interface and implements its methods, as shown in Listing 4.

Create the RESTful API
Finally, let's create the API endpoints in the Program.cs file. Remember, this is a Minimal API application, so you don't use any APIController class here. The following code snippet shows how you can create the API endpoints:

app.MapGet("/getproducts", async
    (IProductRepository productRepository)
    => await productRepository.GetProducts());

app.MapGet("/getproduct/{id:int}", async
    (IProductRepository productRepository, int id)
    => await productRepository.GetProduct(id)
    is Product product ?
    Results.Ok(product) : Results.NotFound());

app.MapPost("/addproduct", async
    (IProductRepository productRepository,
    Product product) =>
{
    await productRepository.AddProduct(product);
    return Results.Created
        ($"/addproduct/{product.Product_Id}", product);
});
Listing 4: The IProductRepository interface

public interface IProductRepository
{
    public Task<List<Product>> GetProducts();
    public Task<Product> GetProduct(int id);
    // Members used by the endpoints in Listing 5.
    public Task AddProduct(Product product);
    public Task DeleteProduct(int id);
}

public class ProductRepository : IProductRepository
{
    private readonly List<Product> products =
        new List<Product> {
            new Product {
                Product_Id = 1,
                Product_Name = "HP Envy Laptop",
                SKU = "HPL/i9/1TB",
                Description =
                    "HP Envy Laptop i9 32 GB RAM 1 TB SSD",
                Quantity = 100,
                Price = 6500.00m
            },
            new Product {
                Product_Id = 2,
                Product_Name = "Lenovo Legion Laptop",
                Description =
                    "Lenovo Legion Laptop i7 16 GB RAM 1 TB SSD",
                SKU = "Len/i9/1TB/SSD",
                Quantity = 150,
                Price = 6000.00m
            },
            new Product {
                Product_Id = 3,
                Product_Name = "DELL XPS Laptop",
                Description =
                    "Dell XPS 9730 Laptop, Intel Core i9 32GB 1TB SSD",
                SKU = "DEL/i9/1TB",
                Quantity = 50,
                Price = 7500.00m
            }
        };

    public async Task<List<Product>> GetProducts()
    {
        return await Task.FromResult(products);
    }
    public async Task<Product> GetProduct(int id)
    {
        return await Task.FromResult(
            products.FirstOrDefault(
                x => x.Product_Id == id));
    }
    public Task AddProduct(Product product)
    {
        products.Add(product);
        return Task.CompletedTask;
    }
    public Task DeleteProduct(int id)
    {
        products.RemoveAll(x => x.Product_Id == id);
        return Task.CompletedTask;
    }
}

app.MapDelete("/deleteproduct/{id}", async
    (int id, IProductRepository productRepository) =>
{
    await productRepository.DeleteProduct(id);
});

Although the MapGet method has been used to create the HttpGet endpoints, the MapPost method has been used to create the HttpPost endpoint. Similarly, the MapDelete method has been used here to create the HttpDelete endpoint.

Replace the content of the Program.cs file with the source code given in Listing 5. You can now execute the application and then use Postman to launch the endpoints.

From REST to GraphQL
It was Facebook's desire to find a method of accessing data that was both more efficient and more elegant that led to the development of GraphQL in the year 2012. GraphQL achieves this by adopting a declarative approach toward retrieving and manipulating data, enabling API clients to request only the data they require, thereby improving performance and efficiency while reducing unnecessary data transfer. In order to retrieve data from different endpoints using REST APIs, the client must make multiple requests, which can be inefficient and time intensive. If your API is proficient in retrieving all the data your application requires, you won't encounter any issues with over-fetching or under-fetching.

The primary goal of GraphQL is to simplify the process of querying data by enabling clients to request exactly the format and structure of data they need in a single call. Remember, you need to make several calls to your server to retrieve data split across multiple data stores, as shown in Figure 8.

Figure 8: A typical GraphQL API Architecture

Here are the key reasons why GraphQL is considered a better alternative to REST in some use cases:

• Data Retrieval: REST often necessitates accessing various endpoints to collect different pieces of information, potentially leading to over-fetching or under-fetching data. GraphQL enables clients to specify the precise data they require from a single endpoint, minimizing the volume of data transmitted across the network. Over-fetching is a situation in which an API returns more information than is necessary for your application. For instance, a client might request Order ID and Order Date and receive Order ID, Order Date, and Product ID instead. Under-fetching occurs when an API doesn't provide all the data your application requests. A client may request Order ID and Order Date, but only receive Order ID. As a consequence, the client may experience
Listing 5: Configuring the REST endpoints in the Program.cs file

using REST_Demo;

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.

builder.Services.AddScoped<IProductRepository,
    ProductRepository>();

var app = builder.Build();

// Configure the HTTP request pipeline.

app.MapGet("/getproducts", async
    (IProductRepository productRepository)
    => await productRepository.GetProducts());

app.MapGet("/getproduct/{id:int}", async
    (IProductRepository productRepository, int id)
    => await productRepository.GetProduct(id)
    is Product product ?
    Results.Ok(product) : Results.NotFound());

app.MapPost("/addproduct", async
    (IProductRepository productRepository,
    Product product) =>
{
    await productRepository.AddProduct(product);
    return Results.Created
        ($"/addproduct/{product.Product_Id}", product);
});

app.MapDelete("/deleteproduct/{id}", async
    (int id, IProductRepository productRepository) =>
{
    await productRepository.DeleteProduct(id);
});

app.Run();

reduced performance and improper or inefficient use of memory, CPU, and network resources.
• Versioning: Versioning in REST APIs manages changes to the APIs by assigning different versions, such as v1, v2, and so on. GraphQL eliminates the necessity for version control by allowing clients to specify the data they need in the query, making it easier for APIs to evolve without breaking existing queries. APIs built with GraphQL don't require separate versioning because clients or API consumers can define their requirements in the query and fetch the required data without breaking the existing queries.
• Type System: GraphQL employs a strongly typed schema specifying the data format you can request. This schema functions as a consensus between the client and the server, thereby enabling the early detection of errors. By recognizing potential errors up front, you can resolve the errors in a planned way before they impact your clients.

Benefits and Downsides of GraphQL
Here are the key benefits of GraphQL (a sample schema follows the list):

• Efficient data querying: With GraphQL, clients can query multiple resources and retrieve related data in a single request. They can traverse the data graph and retrieve only the required data, avoiding the over-fetching of unnecessary fields or nested objects.
• Reduced network traffic: GraphQL reduces the network traffic and bandwidth consumption by minimizing the payload size of the responses. This explains why applications that leverage GraphQL often exhibit better performance compared to RESTful applications.
• Versioning and evolution: With GraphQL, deprecated fields or types can be marked to signal clients for migration, allowing for smooth API evolution without breaking existing clients.
• Support for real-time data: With GraphQL subscriptions, clients can subscribe to specific data changes in real-time. Once subscribed, the clients are notified using events about any changes made to the data in real-time.
• Strongly typed schema: GraphQL enforces a robust typing system and a well-defined schema, providing clarity for the available data types and fields. The GraphQL schema defines the structure of the data and the types of operations (queries and mutations) that can be performed, thereby helping with validation and introspection.
• Improved performance: For applications that require complex queries combining multiple resources, GraphQL can be more efficient than REST because it can gather all data in a single request.
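For illustration, a minimal schema for the employee example used later in this article might look like this in GraphQL's schema definition language (the exact fields are illustrative):

type Employee {
  id: ID!
  name: String!
  address: Address
}

type Address {
  street: String!
  city: String!
  country: String!
}

type Query {
  employee(id: ID!): Employee
}

The exclamation mark marks a field as non-nullable; tooling can validate every query and response against these types before any resolver runs.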
Here are the key benefits of GraphQL: might incur additional effort.
• Rate limiting: Implementing rate limiting in
• Efficient data querying: With GraphQL, clients can GraphQL is more complex than in REST because it's
query multiple resources and retrieve related data in harder to predict the cost of a query due to its flex-
a single request. They can traverse the data graph ible nature.
and retrieve only the required data, avoiding the • Security considerations: GraphQL APIs must be
over-fetching of unnecessary fields or nested objects. carefully designed to avoid potential vulnerabilities.
• Reduced network traffic: GraphQL reduces the net- Exposing too much data or functionality through the
work traffic and bandwidth consumption by minimiz- API can increase the attack surface, making proper
ing the payload size of the responses. This explains authentication and authorization crucial.
why applications that leverage GraphQL often exhibit
better performance compared to RESTful applications.
• Versioning and evolution: With GraphQL, depre- GraphQL vs. REST
cated fields or types can be marked to signal clients Although REST and GraphQL are two of the most popular
for migration, allowing for smooth API evolution approaches for building APIs, there are subtle differences
without breaking existing clients. between the two:
• Support for real-time data: With GraphQL subscrip-
tions, clients can subscribe to specific data changes • Request format: Each endpoint in REST specifies a
in real-time. Once subscribed, the clients are noti- set of resources and operations, and the client can
fied using events about any changes made to the typically retrieve or modify all resources using HTTP
data in real-time. methods, such as GET, POST, PUT, or DELETE. With
• Strongly typed schema: GraphQL enforces a robust GraphQL, clients request data based on a specific
typing system and a well-defined schema, providing structure that matches the server's schema.
clarity for the available data types and fields. The • Data Retrieval: Each resource in REST can only
GraphQL schema defines the structure of the data be accessed through a particular endpoint, mean-
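As a minimal sketch of the versioning point above: in HotChocolate (the GraphQL server library used later in this article), a member can be flagged with the GraphQLDeprecated attribute. The field names here are hypothetical, purely for illustration.

using HotChocolate;

public class Store
{
    // The deprecated field stays queryable, but the schema advertises
    // the deprecation so clients can migrate at their own pace.
    [GraphQLDeprecated("Use postalCode instead.")]
    public string Zip { get; set; }

    public string PostalCode { get; set; }
}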



There are certain downsides as well:

• Learning curve: Despite its apparent simplicity, GraphQL can be challenging for those who are unfamiliar with its concepts. The learning curve for designing schemas, resolving queries, and securing GraphQL APIs can be quite steep.
• Caching challenges: Due to the dynamic nature of GraphQL queries, client-side and server-side caching can be more challenging compared to REST, where URLs can easily serve as cache keys.
• Increased complexity: GraphQL adds more complexity to the server-side implementation in contrast to conventional REST APIs. You should use resolvers to get the data you need. Managing complex queries might incur additional effort.
• Rate limiting: Implementing rate limiting in GraphQL is more complex than in REST because it's harder to predict the cost of a query due to its flexible nature.
• Security considerations: GraphQL APIs must be carefully designed to avoid potential vulnerabilities. Exposing too much data or functionality through the API can increase the attack surface, making proper authentication and authorization crucial.

GraphQL vs. REST
Although REST and GraphQL are two of the most popular approaches for building APIs, there are subtle differences between the two:

• Request format: Each endpoint in REST specifies a set of resources and operations, and the client can typically retrieve or modify all resources using HTTP methods, such as GET, POST, PUT, or DELETE. With GraphQL, clients request data based on a specific structure that matches the server's schema.
• Data Retrieval: Each resource in REST can only be accessed through a particular endpoint, meaning the client needs to make multiple requests to retrieve related data or complex object structures. On the contrary, GraphQL enables the client to retrieve the exact data they need with a single query, thereby reducing the number of requests required to fetch the data and minimizing network bandwidth consumption. This can lead to fewer data transfers and more efficient API performance.
• Type System: In GraphQL, there's a strongly typed schema system that defines the types, fields, and relationships between them. The client and server have a clear contract using this approach, and advanced tooling capabilities, such as schema introspection and auto generation, are also available. On the other hand, REST APIs do not typically include formal type systems, which makes them flexible but also challenging to handle.
• Caching: REST enables you to cache frequently accessed data for faster access during subsequent calls to the same piece of data, thereby reducing network traffic and improving application performance. REST APIs can use HTTP caching techniques to minimize the data sent between clients and servers. Native caching mechanisms, on the other hand, are not supported by GraphQL APIs. A GraphQL API relies on client-side caching mechanisms to optimize performance because query parameters may affect the responses.
• Protocol: Although REST works only with the HTTP protocol, there are no protocol constraints in GraphQL. In other words, GraphQL is agnostic of the transport layer: you can use it with any transport layer protocol.
• Partial Responses: GraphQL allows clients to retrieve specific information in a single query, minimizing data transmission. This is beneficial for sluggish networks or when accessing APIs via mobile apps.

Applications with complex queries, a large number of data sources, or unpredictable data needs may benefit from GraphQL. REST may be more appropriate for simple CRUD applications when a mapping exists between the resources and their endpoints. However, in REST, resources are typically returned in their entirety, which can lead to over-fetching.
Comparing REST and GraphQL
Here's how REST and GraphQL compare against each other in a typical request/response scenario. Consider two entities, Employee and Address. The former stores employee details and the latter contains address details of the employees. The following code snippets illustrate the requests and responses in REST to retrieve the details (including address information) of an employee.

Request:

GET /api/employee?id=1

Response:

{
  "id": 1,
  "name": "Joydip"
}

Request:

GET /api/address?employee_id=1

Response:

{
  "street": "Banjara Hills",
  "city": "Hyderabad",
  "country": "India"
}

You can do the same in GraphQL in a much more elegant way.

Request:

query {
  employee (id: 1) {
    id
    name
    address {
      street
      city
      country
    }
  }
}

Here's the GraphQL response to the preceding query:

{
  "employee": {
    "id": 1,
    "name": "Joydip",
    "address": {
      "street": "Banjara Hills",
      "city": "Hyderabad",
      "country": "India"
    }
  }
}

Note that you could retrieve the entire data in just one call.
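To make the difference concrete in code, here's a minimal C# sketch of both interaction styles. The base address and the /graphql path are assumptions for illustration; the routes follow the examples above.

using System.Net.Http.Json;

var client = new HttpClient
{
    BaseAddress = new Uri("https://localhost:5001")
};

// REST: two round trips to assemble the employee and the address.
var employeeJson = await client.GetStringAsync("/api/employee?id=1");
var addressJson = await client.GetStringAsync("/api/address?employee_id=1");

// GraphQL: a single POST to one endpoint; the query selects both.
var response = await client.PostAsJsonAsync("/graphql", new
{
    query = "{ employee(id: 1) { id name address { street city country } } }"
});
Console.WriteLine(await response.Content.ReadAsStringAsync());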

Building a Simple Application Using GraphQL
It's time to write some code. Let's now examine how to build a simple ASP.NET Core 8 Web API application using GraphQL. A typical Order Processing System comprises several entities such as Store, Supplier, Order, Product, Customer, etc. In this example, you'll implement only the Store part of it for simplicity.

Let's now examine how to create an ASP.NET Core 8 project in Visual Studio 2022.

Create a New ASP.NET Core 8 Project in Visual Studio 2022
You can create a project in Visual Studio 2022 in several ways. When you launch Visual Studio 2022, you'll see the Start window. You can choose "Continue without code" to launch the main screen of the Visual Studio 2022 IDE.



To create a new ASP.NET Core 8 project in Visual Studio 2022:

1. Start the Visual Studio 2022 IDE.
2. In the "Create a new project" window, select "ASP.NET Core Web API" and click Next to move on.
3. Specify the project name as GraphQL_Demo and the path where it should be created in the "Configure your new project" window.
4. If you want the solution file and project to be created in the same directory, you can optionally check the "Place solution and project in the same directory" checkbox. Click Next to move on.
5. In the next screen, specify the target framework and authentication type as well. Ensure that the "Configure for HTTPS," "Enable Docker Support," and "Enable OpenAPI support" checkboxes are unchecked because you won't use any of these in this example.
6. Because you won't leverage minimal APIs in this example, leave the "Use controllers" checkbox checked.
7. Click Create to complete the process.
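If you prefer the command line, the same project can be scaffolded with the .NET CLI. The flags below reflect the .NET 8 SDK's webapi template as I understand it, so verify them with dotnet new webapi --help if your SDK version differs:

dotnet new webapi --name GraphQL_Demo --use-controllers --no-openapi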
Next, create a new class named StoreRepository in a file
In this application, you’ll take advantage of HotChocolate having the same name with a .cs extension. Now write the
to generate GraphQL schemas. With HotChocolate, you following code in there:
can build an extra layer on top of your application layer
that uses GraphQL. It’s easy to set up and configure, and public class StoreRepository : IStoreRepository
it eliminates the clutter of generating schemas. {

Install NuGet Package(s) }


So far so good. The next step is to install the neces-
sary NuGet Package(s). To install the required packages The StoreRepository class illustrated in the code snippet
into your project, right-click on the solution and the se- below implements the methods of the IStoreRepository
lect Manage NuGet Packages for Solution…. Now search interface:
for the packages named HotChocolate.AspNetCore, and
HotChocolate.AspNetCore.Playground in the search box public async Task<List<Store>> GetStores()
and install them one after the other. Alternatively, you {
can type the commands shown below at the NuGet Package return await Task.FromResult(stores);
Manager Command Prompt: }

PM> Install-Package public async Task<Store> GetStore(int Id)


HotChocolate.AspNetCore {
PM> Install-Package return await Task.FromResult(stores.
HotChocolate.AspNetCore.Playground FirstOrDefault(x => x.Id == Id));
}
You can also install these packages by executing the fol-
lowing commands at the Windows Shell: The complete source code of the StoreRepository class is
given in Listing 6.
dotnet add package
HotChocolate.AspNetCore Register the StoreRepository instance
dotnet add package The following code snippet illustrates how an instance of
HotChocolate.AspNetCore.Playground type IStoreRepository is added as a scoped service to the
IServiceCollection.
Create the Model Class
Create a new class named Order in a file having the same builder.Services.AddScoped
name with a .cs extension and write the following code <IStoreRepository, StoreRepository>();
in there:
Create the GraphQL Query Class
public class Store A GraphQL query is defined as a request sent by the cli-
{ ent to the server. In GraphQL, clients request data from
public int Id { get; set; } the server using queries that adhere to a specific struc-
public string Name { get; set; } ture and syntax per the GraphQL specification. When us-
public string Address { get; set; } ing GraphQL queries, clients may specify data and the
public string City { get; set; } response format.
public string State { get; set; }
public string Country { get; set; } There are several fields in the query that represent the
public string Zip { get; set; } data that was retrieved from the API. The data contained



Listing 6: The StoreRepository class

public class StoreRepository : IStoreRepository
{
    private readonly List<Store> stores =
        new List<Store>
    {
        new Store
        {
            Id = 1,
            Name = "Walmart",
            Address = "274 Reagan Apt. 919",
            City = "Huntington",
            State = "West Virginia",
            Country = "USA",
            Zip = "25049",
            Phone = "1111111111",
            Email = "[email protected]"
        },
        new Store
        {
            Id = 2,
            Name = "Amazon",
            Address = "57526 Michelle Ferry Suite 714",
            City = "Edmond",
            State = "Oklahoma",
            Country = "USA",
            Zip = "66347",
            Phone = "1111111111",
            Email = "[email protected]"
        },
        new Store
        {
            Id = 3,
            Name = "Harrods",
            Address = "Flat 60 Davis Road",
            City = "Bradford",
            State = "Yorkshire",
            Country = "UK",
            Zip = "BD1 1BL",
            Phone = "1111111111",
            Email = "[email protected]"
        }
    };

    public async Task<List<Store>> GetStores()
    {
        return await Task.FromResult(stores);
    }

    public async Task<Store> GetStore(int Id)
    {
        return await Task.FromResult(
            stores.FirstOrDefault(x => x.Id == Id));
    }
}

Listing 7: The StoreQuery class

using HotChocolate.Subscriptions;

namespace GraphQL_Demo
{
    public class StoreQuery
    {
        public async Task<List<Store>> GetAllStores(
            [Service] IStoreRepository storeRepository,
            [Service] ITopicEventSender eventSender)
        {
            List<Store> stores =
                await storeRepository.GetStores();
            await eventSender.SendAsync
                ("Returned a List of Stores", stores);
            return stores;
        }
    }
}
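A note on naming: HotChocolate's convention-based schema generation typically strips the Get prefix from resolver methods, so GetAllStores would surface as an allStores field. Assuming those defaults, a client could fetch stores from the playground with a query like this:

query {
  allStores {
    id
    name
    city
  }
}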

Listing 8: The StoreType class

using HotChocolate.Types;

namespace GraphQL_Demo
{
    public class StoreType : ObjectType<Store>
    {
        protected override void Configure
            (IObjectTypeDescriptor<Store> descriptor)
        {
            descriptor.Field(s => s.Id).Type<IdType>();
            descriptor.Field(s => s.Name).Type<StringType>();
            descriptor.Field(s => s.Address).Type<StringType>();
            descriptor.Field(s => s.City).Type<StringType>();
            descriptor.Field(s => s.State).Type<StringType>();
            descriptor.Field(s => s.Country).Type<StringType>();
            descriptor.Field(s => s.Zip).Type<StringType>();
            descriptor.Field(s => s.Email).Type<StringType>();
            descriptor.Field(s => s.Phone).Type<StringType>();
        }
    }
}

There are several fields in the query that represent the data retrieved from the API. The data contained in each of these fields can be traversed and retrieved by nesting them. Create a new .cs file named StoreQuery in your project and replace the default generated code with the code given in Listing 7.

Create the GraphQL Object Type
In GraphQL, Object Types are used to describe the type of data fetched using your API. With HotChocolate, they are represented by creating a class that derives from the ObjectType<T> class, as the StoreType class does. Create a new file named StoreType.cs in your project and replace the default code with the code given in Listing 8.

Create a GraphQL Subscription
You should also create a subscription to enable your GraphQL server to notify all subscribed clients when an event occurs. Create a new class named StoreSubscription and replace the default generated code with the source code given in Listing 9.

Configure GraphQL Server in ASP.NET Core
Once you've created the Query type to expose the data you need, you should configure the GraphQL server in the Program.cs file using the following code snippet:



builder.Services.AddGraphQLServer()
    .AddType<StoreType>()
    .AddQueryType<StoreQuery>()
    .AddSubscriptionType<StoreSubscription>()
    .AddInMemorySubscriptions();

You can then call the MapGraphQL method to register the middleware:

app.MapGraphQL();

When you register this middleware, the GraphQL server will be available at /graphql by default. You can also customize the endpoint where the GraphQL server will be hosted by specifying the following code in the Program.cs file:

app.MapGraphQL("/graphql/mycustomendpoint");

Listing 9: The StoreSubscription class

using HotChocolate.Execution;
using HotChocolate.Subscriptions;

namespace GraphQL_Demo
{
    public class StoreSubscription
    {
        [SubscribeAndResolve]
        public async ValueTask<ISourceStream<List<Store>>>
            OnStoreGet([Service] ITopicEventReceiver eventReceiver,
            CancellationToken cancellationToken)
        {
            // The topic name must match the one passed to
            // ITopicEventSender.SendAsync in StoreQuery (Listing 7);
            // otherwise, subscribers are never notified.
            return await eventReceiver.SubscribeAsync<List<Store>>
                ("Returned a List of Stores", cancellationToken);
        }
    }
}

Listing 10: Configuring GraphQL in the Program.cs file

using GraphQL_Demo;
using HotChocolate.AspNetCore;
using HotChocolate.AspNetCore.Playground;

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
builder.Services.AddScoped
    <IStoreRepository, StoreRepository>();

builder.Services.AddGraphQLServer()
    .AddType<StoreType>()
    .AddQueryType<StoreQuery>()
    .AddSubscriptionType<StoreSubscription>()
    .AddInMemorySubscriptions();

builder.Services.AddControllers();

var app = builder.Build();

// Configure the HTTP request pipeline.
app.UseAuthorization();

app.MapControllers();

app.UsePlayground(new PlaygroundOptions
{
    QueryPath = "/graphql",
    Path = "/playground"
});

app.MapGraphQL();

app.Run();



Listing 11: The StoreController class

using Microsoft.AspNetCore.Mvc;

namespace GraphQL_Demo.Controllers
{
    [Route("api/[controller]")]
    [ApiController]
    public class StoreController : ControllerBase
    {
        private IStoreRepository _storeRepository;

        public StoreController
            (IStoreRepository storeRepository)
        {
            _storeRepository = storeRepository;
        }

        [HttpGet("{id}")]
        public async Task<Store> GetStore(int id)
        {
            return await _storeRepository.GetStore(id);
        }

        [HttpGet("GetStores")]
        public async Task<List<Store>> GetStores()
        {
            return await _storeRepository.GetStores();
        }
    }
}
Graphic Layout
Listing 10 shows the complete source of the Program.cs file.

Now execute the application and browse the /playground endpoint. Next, execute the following query:

query
{
  storeById (id: 1)
  {
    id
    name
    address
  }
}

Figure 9 shows the output on execution of the application.

Figure 9: The storeById query in execution!
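Note that the storeById field used above isn't defined in the StoreQuery class from Listing 7, which only exposes GetAllStores. A minimal sketch of the resolver it implies, consistent with HotChocolate's Get-prefix naming convention and the repository's GetStore method, could be added to StoreQuery like this:

// Conventionally surfaced as the storeById field in the schema.
public async Task<Store> GetStoreById(
    [Service] IStoreRepository storeRepository, int id)
{
    return await storeRepository.GetStore(id);
}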
[email protected]
Create the StoreController Class
US subscriptions are $29.99 USD for one year.
Finally, you need to build the controller class to expose Subscriptions outside the US are $50.99 USD.
the endpoints to the outside world so that they can be Payments should be made in US dollars drawn
consumed by the authenticated clients or consumers of on a US bank. American Express, MasterCard,
Visa and Discover credit cards accepted.
the API. To do this, create a new API Controller in your Back issues are available. For subscription
project named StoreController and write the code given in information, email [email protected]
Listing 11 in there. or contact customer service at 832-717-4445 ext. 9.
Subscribe online at
www.code-magazine.com
Conclusion
CODE Developer Magazine
The requirements and constraints of your project will de- EPS Software Corporation / Publishing Division
termine the most appropriate choice between SOAP, REST, 6605 Cypresswood Drive, Ste 425, Spring, Texas 77379 USA
or GraphQL. SOAP is a good choice for enterprise-level Phone: 832-717-4445
applications because of better support for security and
its robust contract definitions. REST can significantly
enhance a resource-oriented application because of bet-
ter performance, enhanced scalability, and simplicity.
GraphQL offers a flexible, efficient approach to data re-
trieval and manipulation, making it an excellent choice
for applications that require strong typing, real-time data
updates, and efficient data retrieval capabilities.

 Joydip Kanjilal


74 From SOAP to REST to GraphQL codemag.com

