CODE 3-2024 Web
MAY/JUN 2024
codemag.com - THE LEADING INDEPENDENT DEVELOPER MAGAZINE - US $8.95 Can $11.95
Cover story: The DNA of a Database Developer
Cover AI generated - Markus Egger
TABLE OF CONTENTS

Features

7   CODE: 20 Years Ago
    Markus continues his reflection on what the company, the magazine, and the industry have been up to for the last three decades.
    Markus Egger

10  Async Programming in JavaScript
    Sahil shows you how to coordinate the multiple processors that you need to do anything in today's high-paced computing world.
    Sahil Malik

16  Manipulating JSON Documents in .NET 8
    JavaScript Object Notation (JSON) can help you configure settings and transfer data, but it really shines when it comes to creating and manipulating documents in .NET 8. Paul shows you how.
    Paul Sheriff

46  Stages of Data: The DNA of a Database Developer, Part 1
    Whether you're going to an interview as the applicant or the interviewer, you'll be glad that Kevin came up with this collection of the things you ought to know if you want to succeed.
    Kevin Goff

59  From SOAP to REST to GraphQL
    If you need to store, move, or access data, you'll need to know how to make sure that all of your systems talk to each other. Joydip explains how SOAP, REST, and GraphQL combine to make that a smooth process.
    Joydip Kanjilal

Departments
US subscriptions are US $29.99 for one year. Subscriptions outside the US pay
$50.99 USD. Payments should be made in US dollars drawn on a US bank. American
Express, MasterCard, Visa, and Discover credit cards are accepted. Back issues are
available. For subscription information, send e-mail to [email protected]
or contact Customer Service at 832-717-4445 ext. 9.
Subscribe online at www.codemag.com
CODE Component Developer Magazine (ISSN # 1547-5166) is published bimonthly
by EPS Software Corporation, 6605 Cypresswood Drive, Suite 425, Spring, TX
77379 U.S.A. POSTMASTER: Send address changes to CODE Component Developer
Magazine, 6605 Cypresswood Drive, Suite 425, Spring, TX 77379 U.S.A.
Unfinished Paintings

I spent last weekend in rainy (normally sunny) Southern California. During this trip, I managed to corral the kids into going to the Academy Awards Museum. There, we came across a set of drawings from the original Disney animated film The Little Mermaid (Figure 1). From what I could deduce from the drawings, I believe they are what are called key frames.

In animation, the artists draw key frames, which represent different transitions in the animation, and the drawings are then passed on to other artists who fill in the frames between the keys. As I described to my kids (and a random onlooker) what these frames were for, I realized why I'm fascinated with such works. I'm fascinated by them because they're unfinished. I find that these artifacts of the creative process give me insights into the mind of the artist creating them.

I love the rough drawings, erasure marks, rough lines, and all the things that show how the artist works. This took me back to another museum visit.

Last year I was lucky enough to attend an exhibition of the works of Keith Haring at the Broad Museum in downtown Los Angeles. I've been a follower of Keith's work for decades now and don't miss any chance to see collected exhibits of his work. This exhibit stood out to me because I saw works and materials he'd used that I'd never read about or witnessed. I saw full-blown murals painted on camping tarps (yes, camping tarps like you can buy at Walmart), and I saw street posters, statues, and other paintings from his extensive set of works. One of these works stood out to me. Figure 2 is an image of an unfinished work started not long before his untimely death. I spent time studying the image, looking at it from different angles and vantage points. I tried to picture Keith working on this painting in his studio in the flow of artistic creativity. This image took me into a creative flow state. All from an unfinished painting.

For some reason, I've always loved artists' incomplete works over their completed ones. This is my inner creator coming out. It takes me back to the earliest part of my career, where I was the "lone wolf" developer/network admin for a small resort in Central Oregon. This was back in the late 80s, when there was no internet like we have now. There were no GitHub repositories, no code blogs, no www.stackoverflow.com. Nope. There were crappy 1200 or 2400 baud modems by which we reached out into the world to gather our knowledge from forums like CompuServe and GEnie. It was the dark ages. LOL.

So, being a fresh-faced coder, how could I learn to better my craft? I did this several ways. I read every book I could get my hands on, I went to the "big" city of Portland, where I discovered knowledge heaven in the form of Powell's Books, and, finally, I read every computer magazine I could get my hands on. The articles that I really took a liking to were the ones where the authors explained the process of how they achieved a solution. Using art as a metaphor, they took me through the rough drawings, pencil sketches, rough demos, base coats, detail work, and, finally, a finished working solution. I was watching techno artists take me through their process from unfinished to finished work. This is my process to this day.

Many of the unique solutions I've built over the years come from a process like this one. I'm tasked with seeing if an idea might work. For instance, many years ago, I was tasked with building a solution where we embedded code in Microsoft Word documents that gave users the ability to create dynamic scripts for call centers. I started this process to see if I could embed code into a Word document. This was my rough sketch. I then took the output from that document and built an HTML-based script using the metadata embedded in the document, another sketch. I then combined these two together into a rough demonstration for the client. The client liked what they saw, and we went through the process repeatedly until we had a good working solution. This code lasted nearly 10 years until a new solution was implemented. It enjoyed many years of success, all originating from a rough sketch of code.

Rod Paddock

Figure 1: A key frame
Figure 2: An unfinished work by Keith Haring
ONLINE QUICK ID 2405011
CODE: 20 Years Ago

It was a turbulent time. The dotcom bubble had burst, and the glory days of technology seemed to be over. Was the internet really all it was supposed to be, or was it just a passing fad? It was hard to say.

The Aftermath of the Dotcom Bubble
One of the main catalysts for the dotcom bubble bursting was the overvaluation of many internet-based companies that had little or no profits but huge expectations. Investors poured money into these ventures, hoping to cash in on the next big thing, but many of them turned out to be unsustainable or unprofitable. Some of the most notorious examples of dotcom failures were Pets.com, Webvan, eToys, and Boo.com, which burned through millions of dollars in a matter of months before going bankrupt. The collapse of these and other companies sent shockwaves through the stock market, wiping out billions of dollars in value and causing many investors to lose confidence in the sector.

The world of software development wasn't immune to the effects of the dotcom bubble bursting. Many software developers who'd been hired by dotcom startups found themselves out of work when their employers went under. Some of them had to accept lower salaries or switch careers, and others tried to start their own businesses or join more established companies. The demand for web development skills decreased, as many companies scaled back or canceled their online projects. The failure of many dotcoms also raised questions about the viability and quality of some of the emerging web technologies and standards, such as Java, XML, and HTML. Some critics argued that these technologies were overhyped and underdelivered, and that they weren't suitable for building complex and reliable applications. Others defended these technologies and claimed that they were still evolving and improving, and that they would eventually prove their worth.

Despite the challenges and setbacks that the dotcom bubble bursting posed for the software industry, it also had some positive effects. It forced many companies to rethink their business models and strategies, and to focus more on customer needs and satisfaction rather than on growth and hype. It encouraged more innovation and experimentation as some developers sought to create new and better solutions for the web. It also paved the way for the emergence of new players and platforms, such as Google, Amazon, eBay, and PayPal, which took advantage of the opportunities and gaps in the market that the dotcom crash had left behind. These companies would go on to become some of the most successful and influential in the history of the internet and to shape the future of software development.

The Impact for CODE
Luckily for us at the CODE Group, the dotcom turbulences were less severe than for other companies. Most of the projects we were working on in the consulting and custom app dev side of the business weren't classic dotcom companies. Also, we'd started CODE Magazine in the Spring of 2000 and focused primarily on the new world of software development that Microsoft was generating. The Java programming language was of interest to a lot of people but had some issues that were, as of then, unaddressed, and one way to fix it was Microsoft's approach of re-inventing the language in a top-secret project headed up by language guru Anders Hejlsberg, codenamed "Cool." (This became C#, and yes, C# is still cool. You may have seen the T-shirt.)

C# became a key component of the then-nascent .NET ecosystem, which did away with the concept of the programming language driving everything and instead created a development framework that could be used equally from various languages. This was a concept that jived very much with what we believed a modern software development magazine should be talking about, and thus CODE Magazine found itself in a sweet spot of sorts. Other magazines, like Visual Basic Programmer's Journal, FoxPro Advisor, and many more, suddenly didn't look so hot anymore. A lot of this wasn't a coincidence. After all, we'd long been partnering very closely with Microsoft (I worked for the Visual Studio team as a contractor on various projects), and we were strong believers in these new concepts.

All these goings-on meant that we were somewhat protected from the dotcom mess. Yes, we also lost some customers, and the pool of potential new customers shrank. We had to tighten our belts a bit, but overall, we came through it all reasonably well. I remember it as a time that was painful for us, but not to an existential level. And despite all the internet disillusionment, we remained stout believers that it wasn't the internet that was the problem, but rather the idea that the internet made economic fundamentals obsolete. In other words, we considered it crucially important to push forward with internet-related technologies. As a Microsoft-focused organization (and a Microsoft partner), this meant mainly focusing on ASP.NET as the backbone of almost all web applications that we wrote. We had largely ignored earlier versions of ASP, but then there was this young kid of a program manager straight out of college with a vision of a better web development environment. I was very impressed with his early demos. He was a funny and rather likable kid, and he always wore red shirts. His name was Scott someone or another. I think he still works at Microsoft today. <grin>

And before you ask, most of the web applications we wrote in those days were built for Internet Explorer, the de facto standard browser of the time. Netscape had faded in importance as they lost the "great browser wars" against Microsoft, and Firefox wasn't a thing yet.

Apple was also not a real player in mobile computing yet. Yes, Apple had the Newton years earlier, but that was ahead of its time and was soon discontinued. Palm Pilots were also a thing of the past. But RIM's (Research In Motion) BlackBerry was all the rage for mobile enthusiasts (Figure 2). It may have later gotten Hillary Clinton into trouble as her email device of choice, but it was the state-of-the-art mobile business solution for quite some time. It seems quaint today, but it was considered unthinkable that a device without a physical keyboard could be feasible in business scenarios. This was an idea that Microsoft CEO Steve Ballmer held onto way too long, in the process killing Microsoft's phone business. Today, most people don't even remember that Microsoft had a strong position in that market segment, with Windows Mobile and Windows CE.

Figure 2: The BlackBerry 6000, released in 2003
Figure 3: The first Xbox was released for the Christmas 2001 season.

Markus Egger
[email protected]

Markus, the dynamic founder of CODE Group and CODE Magazine's publisher, is a celebrated Microsoft RD (Regional Director) and multi-award-winning MVP (Most Valuable Professional). A prolific coder, he's influenced and advised top Fortune 500 companies and has been a Microsoft contractor, including on the Visual Studio team. Globally recognized as a speaker and author, Markus's expertise spans Artificial Intelligence (AI), .NET, client-side web, and cloud development, focusing on user interfaces, productivity, and maintainable systems. Away from work, he's an enthusiastic windsurfer, scuba diver, ice hockey player, golfer, and globetrotter who loves gaming on PC or Xbox during rainy days.
Async Programming in JavaScript

Computers became a lot more powerful, and our customers demanded that we build skyscrapers. You might have heard of something called Moore's Law, which is the observation that the number of transistors on an integrated circuit will double every two years with a minimal rise in cost. Yes, computers have become very powerful, but due to basic physics and energy-density issues, we've also hit a wall in the absolute computing power that we can pack within a single processor. As a result, the world started moving toward multiple cores and multiple processors; even your phone, or maybe even your watch, now has multiple cores or multiple processors inside it.

When these multiple cores or multiple processors try to work together, adding two processors doesn't always equal 2X the performance. Sometimes it can even be less than 1X because the competing processors might be working against each other. For sure, the benefit you get will be less than 2X because some overhead is spent on coordination. Now imagine if you have a 64-core processor: how would that look? And how would you write code for it? There will always be that one really smart guy on your team who understands the difference between mutexes and semaphores, and that smart guy will act like the cow that gives one can of milk and tips over two. His smarts will make the rest of the team unproductive because, let's be honest, these concepts can be hard to understand, harder to write, and very hard to debug.

It's no surprise that as we're building more complex software, our platforms and languages have also evolved to help us deal with this complexity, so the entire team of mere mortals is productive. Languages have evolved to support more complex paradigms, and JavaScript is no exception.

In this article, I'm going to explore a back-to-basics approach by explaining asynchronous programming in JavaScript. Let's get started.

A Little Pretext
Before I get started, there's a little challenge I must deal with. Demonstrating asynchronous concepts through text and images as they appear in this article can be difficult. So I'm going to describe the various concepts, but you should also grab the associated code for this article at the following URL: https://github.com/maliksahil/asyncjavascript. I recommend running the code side-by-side as you read this article, as that will help cement the concepts.

Let me first start by describing the code you've cloned here.

Code Structure
The code you've cloned is a simple Node.js project. It uses ExpressJS to serve a static website from the public folder, as can be seen in Figure 1.

To run it, just follow the instructions in readme.md. At a high level, it's a matter of running npm install and hitting F5 in VSCode. Additionally, you'll see that in index.js, as seen in Listing 1, in addition to serving the public folder as a static website, I'm also exposing an API at the "/random" URL. This API is quite simple; it waits for five seconds and returns a random number. I have a wait here to demonstrate what effect blocking processes, such as this wait, can have on your browser's UI. The reason I've written this code in Node.js is that I could use identical code for the wait on both client and server, although this isn't a hard requirement.

Let's also briefly examine the client-side code. The index.html file is quite simple. It references jQuery to help simplify some of the JavaScript I'll write. It has a button called btnRandom that calls a JavaScript function. It has a div called "output" where the JavaScript can show messages to communicate to the user. The idea is that I'll call a function that blocks for five seconds, and I'll show a "start" message when the function starts and the random-number output when it's done.

Additionally, I've placed a text area where users can type freely. The function takes five seconds to complete, so what I'd like you to try is, within those five seconds, to type in that text area. If you can type in that text area while the function is executing, that's a non-blocking UI, which is a good user interface. But if the UI is frozen and you cannot type in that textbox while your function runs, that's a bad user experience.

The user interface of my simple HTML file looks like Figure 2. The index.html file can be seen in Listing 2.

A Synchronous Call
Let's first start by writing a simple JavaScript function that takes five seconds to execute. At the end of five seconds, it simply returns a random number. This function is basically the same function you see in index.js called "waitForMilliSeconds", except that for now, I'll just run it client side, and the function itself will return a random number.

Sahil Malik
www.winsmarts.com
@sahilmalik

Sahil Malik is a Microsoft MVP, INETA speaker, a .NET author, consultant, and trainer. Sahil loves interacting with fellow geeks in real time. His talks and trainings are full of humor and practical nuggets. His areas of expertise are cross-platform mobile app development, Microsoft anything, and security and identity.
Listing 1: index.js
const express = require('express');
const app = express();

app.use(express.static('public'));

app.get("/random", (request, response) => {
    waitForMilliSeconds(5000);
    const random = {
        "random": Math.floor(Math.random() * 100)
    };
    response.send(random);
});

Listing 2: index.html
<html>
<head>
    <script
        src="https://code.jquery.com/jquery-3.7.1.min.js"
        integrity=".."
        crossorigin="anonymous"></script>
</head>
<body>
    Press button to make async call:
    <button type="button" id="btnRandom">Click</button>
    <br />
    <div id="output"></div>
    <script src="scripts/1.sync.js"></script>
    <br/>
    <textarea rows="5" cols="40">Try typing here
while the long running call is running ..</textarea>
</body>
</html>

Listing 3: 1.sync.js client side synchronous JS
$("#btnRandom").on("click", function () {
    $("#output").text("Start");
    randomNum = waitForMilliSeconds(5000);
    $("#output").text(randomNum);
});

function waitForMilliSeconds(milliSeconds) {
    var date = new Date();
    var curDate = null;
    do { curDate = new Date(); }
    while (curDate - date < milliSeconds);
    return Math.floor(Math.random() * 100);
}

You'll notice that until the call completes and the random number is shown in the output div, the page is essentially frozen. It accepts no input from the user. In fact, the page is dead: It accepts or responds to no events. This is certainly a bad user experience, but it may also lead to inexplicable bugs.

We've learned from other languages that you could pass in a function pointer to waitForMilliSeconds. Wouldn't it be nice if waitForMilliSeconds could call that function pointer when it's done with its five seconds of blocking work?

To facilitate that, modify your waitForMilliSeconds function as shown in the next snippet. The logic has been trimmed for brevity, and the only change is that instead of sending back a return value, you're now accepting a parameter called callbackFunc. When you're done with your work, you simply call this callback function and pass in the result.

function waitForMilliSeconds(
    milliSeconds, callbackFunc) {
    var date = new Date();
    ..
    random = Math.floor(Math.random() * 100);
    callbackFunc(random);
}

Accordingly, how you call this method also changes. This can be seen below.

$("#btnRandom").on("click", function () {
    $("#output").text("Start");
    waitForMilliSeconds(5000, (random) => {
        $("#output").text(random);
    });
});

As you can see, you're now calling waitForMilliSeconds with two input parameters. The first parameter instructs the function to wait for five seconds, and the second is an anonymous function. That anonymous function gets invoked, via the callbackFunc parameter, once waitForMilliSeconds is done.
Before you run this, what do you expect the behavior will be? Will it block the UI or not? Let's find out. Go ahead and run this. You'll notice that although the code seems to have a different structure, the callback has no effect on the single-threaded nature of the code. The UI still blocks.

Bummer!

Well, at least you learned a new concept here: Such callbacks have no effect on the single-threaded nature of execution.

Promise
JavaScript has yet another way of structuring your code, which is Promises. You may have seen them when writing AJAX code, where your code can make an HTTP call to the server without refreshing the whole page. This is how complex apps such as Google Maps were born. Before Google Maps, navigating a map required you to refresh the whole page. It was a horrible user experience, until someone showed us a better way. Technically speaking, Outlook for the web was leveraging this technique already, but hey, this isn't a race.
The idea behind a JavaScript Promise is that the function doesn't return a value, but instead returns a Promise. The Promise will either resolve (succeed) or reject (fail). When it resolves, it can send back a success output. If it fails, it can send back an error.

Your caller then uses standard paradigms around Promises to handle success with a then() method.

Let's modify the waitForMilliSeconds method to now return a Promise and resolve it on success. You can see this method in Listing 4.

This allows you to write the calling code as follows:

waitForMilliSeconds(5000).then( (random) => {
    $("#output").text(random);
});

Let's run this code again and hit the click button. What do you see?

Ah! Yet again, although the code is functionally accurate, it still blocks the UI thread. The code is still single threaded. It responds to no input while the function is running and suddenly reacts to the keystrokes queued up in those five seconds.

You can see the final code that puts all this together in Listing 5.

Remember from Listing 1, the server-side code is basically the same code you've been using, except that now, instead of running on the client, it's running on the server. It waits five seconds and sends back a random number.

Now go ahead and run this by referencing this script, pressing F5 to refresh the browser, and clicking the button.

Very interestingly, now the UI doesn't block. How odd is that?

Although this is great, wouldn't it be nice if complex client-side code could be afforded the luxury of being multi-threaded? This XHR-based code feels so complicated. My example was simple, but imagine how this could look with multiple dependencies, inputs that depend on whether other XHR calls succeeded, timing issues, etc. Ugh!

Promises and XHR
Let's start by cleaning this code up a bit. You've already seen Promises in action. Can you combine XHR and Promises together to help write code that's simpler? Sure!
Now go ahead and run this code. It runs just like before and it doesn't block the UI thread. Is this because you're using a Promise or because you're using an XHR? Well, you did use a Promise on a loop that was entirely on the client side, and that did block the UI. This non-UI-blocking magic is built into XHR.

Async Await
Recently, JavaScript introduced support for the async await keywords. The Promise code looks cleaner than pure XHR code, but it still feels a bit inside out. Imagine if you had three Promises you needed to wait for, and those inputs go into two more Promises, which finally go into another AJAX call? Luckily, Promises do have helpers such as Promise.all, which do help. They're an improvement over pure XHR code. However, the code becomes severely indented and you're stuck in a hell hole of brace matching and keeping your code under 80 characters width.

Async await helps you tackle that problem. Look at the sync code example from Listing 3. To convert it from sync to async, all you have to do is add the async keyword in front of the function.
Figure 2: JSON arrays can have just simple values as their elements.
Figure 3: Each element in a JSON array can be a JSON object.
Class       Description
JsonObject  This class represents a mutable (read/write) JSON document. This class is like the JsonDocument class from the System.Text.Json namespace.
JsonNode    This class represents a mutable (read/write) node within a JSON document. This class is like the JsonProperty class from the System.Text.Json namespace.
JsonValue   This class represents a mutable (read/write) JSON value. This class is like the JsonElement class from the System.Text.Json namespace.
JsonArray   This class represents a mutable (read/write) JSON array.

Table 3: The System.Text.Json.Nodes namespace contains classes for manipulating in-memory JSON objects as a document object model
means that once they've been instantiated with data, they cannot be modified in any way.

The System.Text.Json.Nodes Namespace
The classes in this namespace (Table 3) are for creating and manipulating in-memory JSON documents using a DOM. These classes provide random access to JSON elements, allow adding, editing, and deleting elements, and can convert dictionary and key value pairs into JSON documents. Throughout this section, you'll build and manipulate the following simple JSON document:

{
  "name": "John Smith",
  "age": 31
}

Add a using statement for the namespace at the top of the Program.cs file:

using System.Text.Json.Nodes;

Below the Using statement, create a variable named jo that is an instance of a JsonObject. The JsonObject provides the ability to add, edit, and delete nodes within the JSON document.

JsonObject jo = new();

Figure 4: Each element in a JSON object can be another object, or an array.
You can create the JSON document and its two properties in a single statement by using index initializer syntax:

using System.Text.Json.Nodes;

JsonObject jo = new() {
    ["name"] = "John Smith",
    ["age"] = 31
};

Write out the JSON document using the ToString() method on the JsonObject.

Console.WriteLine(jo.ToString());

The output from this statement is the JSON object shown earlier. Notice how the JSON is nicely formatted. If you wish to remove all of the carriage returns, line feeds, and whitespace between all the characters, change the call from the ToString() method to the ToJsonString() method instead. You should then see the following JSON appear in the console window.

{"name":"John Smith","age":31}

Use a New C# 12 Feature
In C# 12 (.NET 8) you can create the JsonObject using the following syntax. Note that the square brackets used in the code are a new C# 12 feature (a collection expression) that allows you to initialize the new JsonObject without using the new keyword.

JsonObject jo =
[
    new KeyValuePair<string,
        JsonNode?>("name", "John Smith"),
    new KeyValuePair<string,
        JsonNode?>("age", 31)
];

Create Nested JSON Objects
Not all JSON objects are simple name/value pairs. Sometimes you need one of the properties to be another JSON object. The "address" property is another JSON object that has its own set of name/value pairs, as shown in the following snippet:

{
  "name": "John Smith",
  "age": "31",
  "ssn": null,
  "isActive": true,
  "address": {
    "street": "1 Main Street",
    "city": "Nashville",
    "stateProvince": "TN",
    "postalCode": "37011"
  }
}
Using a Dictionary Class
You can pass an instance of a Dictionary<string, JsonNode?> to the constructor of the JsonObject to create your JSON document. Replace the code in the Program.cs file with the following:

using System.Text.Json.Nodes;

Dictionary<string, JsonNode?> dict = new() {
    ["name"] = "John Smith",
    ["age"] = 31
};

JsonObject jo = new(dict);

Console.WriteLine(jo.ToString());

To create the above JSON object, create a new instance of a JsonObject and, using a Dictionary object, build the structure you need, as shown in the following code snippet:

using System.Text.Json.Nodes;

JsonObject jo = new() {
    ["customer"] = "Acme",
    ["IsActive"] = true,
    ["address"] = new JsonObject() {
        ["street"] = "123 Main Street",
        ["city"] = "Walla Walla",
        ["stateProvince"] = "WA",
        ["postalCode"] = "99362",
        ["country"] = "USA"
    }
};

Console.WriteLine(jo.ToString());

Using a JsonValue Object
The Add() method on the JsonObject class also allows you to pass in the name and a JsonValue object. Pass the value to the static Create() method on the JsonValue class to create a new JsonValue object.

using System.Text.Json.Nodes;

JsonObject jo = new() {
    { "name", JsonValue.Create("John Smith") },
    { "age", JsonValue.Create(31) }
};

Console.WriteLine(jo.ToString());

Parse JSON Strings into Objects
JSON documents are commonly stored as strings in a file or in memory. Instead of attempting to read specific values in the JSON using File IO or string parsing, you can parse the string into a JsonNode object. Once in this object, it's very easy to retrieve single values, or entire nodes.
Listing 2: Retrieve values from the JSON using the RootElement property
using JsonSamples;
using System.Text.Json;

// Parse string into a JsonDocument object
using JsonDocument jd =
    JsonDocument.Parse(JsonStrings.PERSON);

// Get a specific property from JSON
JsonElement je =
    jd.RootElement!.GetProperty("name");

// Get the string value from the JsonElement
Console.WriteLine(
    $"Name={je!.GetString()}");

// Get the numeric value from the JsonElement
Console.WriteLine(
    $"Age={jd.RootElement!.GetProperty("age")!.GetInt32()}");

// Parse string into a JsonDocument object
using JsonDocument jd =
    JsonDocument.Parse(JsonStrings.PERSON_ADDRESS);

// Get a specific property from JSON
JsonElement je = jd.RootElement
    .GetProperty("address").GetProperty("city");

// Get the string value from the JsonElement
Console.WriteLine($"City={je!.GetString()}");

After parsing the string into the JsonDocument object, access the RootElement property and call GetProperty("address") to get to the "address" property, and then call GetProperty("city") to get to the "city" property. Once you have this element in a JsonElement object, call the GetString() method to retrieve the value for the "city" property.
// Parse string into a JsonNode object
JsonNode? jn =
    JsonNode.Parse(JsonStrings.PERSON);

// Get the age node
JsonNode? node = jn!["age"];

With this new JsonNode object, node, retrieve the value as a JsonValue using the AsValue() method. With the JsonValue object, you can report the path of where this value came from, the type (string, number, Boolean, etc.), and get the value itself, as shown in the following code:

// Get the value as a JsonValue
JsonValue value = node!.AsValue();
Console.WriteLine($"Path={value.GetPath()}");
Console.WriteLine($"Type={value.GetValueKind()}");
Console.WriteLine($"Age={value}");

Another option is to retrieve the value using the GetValue<T>() method.
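A minimal sketch, reusing the node object from the previous snippet:

// Read the age node as a strongly typed int
int age = node!.GetValue<int>();
Console.WriteLine($"Age={age}");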
Add, Edit, and Delete Nodes
To add a new name/value pair to a JSON document, create a JsonObject object out of the PERSON JSON string constant and convert it to a JsonObject using the AsObject() method. Once you have a JsonObject, use the Add() method to create a new name/value pair, in this case "hairColor": "Brown".

using JsonSamples;
using System.Text.Json.Nodes;

// Parse string into a JsonObject
JsonObject? jo = JsonNode.Parse(
    JsonStrings.PERSON)?.AsObject();

jo?.Add("hairColor", JsonValue.Create("Brown"));

Console.WriteLine(jo?.ToString());

Deleting a node is just as easy. Parse the PERSON string into a JsonObject again and call Remove() with the name of the node to delete:

// Parse string into a JsonObject
JsonObject? jo = JsonNode.Parse(
    JsonStrings.PERSON)?.AsObject();

jo?.Remove("age");

Console.WriteLine(jo?.ToString());

Replace the code in the Program.cs file with the code listed above and run the application to see the following displayed in the console window. The "age": 31 name/value pair has been removed from the JSON document.

{
  "name": "John Smith",
  "ssn": null,
  "isActive": true
}
Working with Arrays
In addition to a simple object, JSON can contain arrays of strings, numbers, Booleans, and JSON objects. Instead of using the JsonObject to represent a JSON document, use the JsonArray class to represent a list of items. For example, to create an array of string values, replace the code in the Program.cs file with the following:

using System.Text.Json.Nodes;

JsonArray ja = [ "John", "Sally", "Charlie" ];

Manipulate an Array
Like most arrays in .NET, you can easily add and remove elements within the array. Given the previous JsonArray object declaration, you can insert a new entry into the array by adding the following code after the declaration. The Insert() method lets you specify where in the array you wish to add the new object. In this case, you are adding a new element into the first position of the array.

ja.Insert(0, new JsonObject() {
    ["name"] = "Charlie Chaplin",
    ["age"] = "50"
});

In addition to simple values, each element of a JsonArray can be a complete JSON object. An array of two person objects renders like this:

[
  {
    "name": "John Smith",
    "age": 31
  },
  {
    "name": "Sally Jones",
    "age": 33
  }
]
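A minimal sketch that builds and prints that two-person array (the names and ages are taken from the output shown above):

using System.Text.Json.Nodes;

JsonArray people = new() {
    new JsonObject() {
        ["name"] = "John Smith",
        ["age"] = 31
    },
    new JsonObject() {
        ["name"] = "Sally Jones",
        ["age"] = 33
    }
};

// ToString() pretty-prints the array as shown above
Console.WriteLine(people.ToString());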
using JsonSamples;
using System.Text.Json.Nodes;

// Parse string into a JsonNode object
JsonNode? jn = JsonNode.Parse(
    JsonStrings.PHONE_NUMBERS);

JsonArray? nodes = jn!.AsArray();

foreach (JsonNode? node in nodes) {
    Console.WriteLine(
        $"Type={node!["type"]}, Phone Number={node!["number"]}");
}

Listing 4: A sample runtime configuration file
{
  "runtimeOptions": {
    "tfm": "net8.0",
    "framework": {
      "name": "Microsoft.NETCore.App",
      "version": "8.0.0"
    },
    "configProperties": {
      "System.Runtime...": false
    }
  }
}
• Microsoft.Extensions.Configuration
• Microsoft.Extensions.Configuration.Json

After adding these two packages to your project, you can write the code shown in Listing 9. In this code, you pass in the runtime configuration file name (see Listing 4) to the AddJsonFile() method on the ConfigurationBuilder. The Build() method is called to create the configuration builder object, which reads the JSON file into memory and converts the text into a JSON document. Use the GetSection() method to retrieve a specific section within the JSON file. In this case, you're asking for the runtimeOptions section. From the section variable, you can now retrieve the framework version number. Type the code in Listing 9 into the Program.cs file, run the application, and you should see the version number appear in the console window.

Listing 9: Use the ConfigurationBuilder class to read in a JSON file
using Microsoft.Extensions.Configuration;

string? value = string.Empty;
string fileName =
    $"{AppDomain.CurrentDomain.FriendlyName}.runtimeconfig.json";

IConfiguration config =
    new ConfigurationBuilder()
        .AddJsonFile(fileName)
        .Build();

IConfigurationSection section =
    config.GetSection("runtimeOptions");
value = section["framework:version"]
    ?? string.Empty;

Console.WriteLine(value);
You can read a connection string from the "ConnectionStrings" section in the same way:

IConfigurationSection section =
    config.GetSection("ConnectionStrings");
string connectString =
    section["DefaultConnection"]
    ?? string.Empty;

Console.WriteLine(connectString);

Bind Settings to a Class
Instead of reading values one at a time from a configuration file, you can bind a section within a configuration file to a class with just one line of code. Create a class named AppSettings and add a property that maps to each name in the configuration file. In the following code, there's a sole property named ApplicationName that maps to the "ApplicationName" property in the appsettings.json file shown in Listing 6.

namespace JsonSamples;

public class AppSettings
{
    public string ApplicationName { get; set; }
        = string.Empty;
}

To perform the binding operation, add the package Microsoft.Extensions.Configuration.Binder to your project using the NuGet Package Manager. Add the following code to the Program.cs file and run the application to see the application name displayed in the console window:

using JsonSamples;
using Microsoft.Extensions.Configuration;

AppSettings entity = new();
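The remainder of the snippet follows the same pattern as Listing 9. A sketch, assuming an appsettings.json file that contains the "ApplicationName" property from Listing 6:

IConfiguration config =
    new ConfigurationBuilder()
        .AddJsonFile("appsettings.json")
        .Build();

// Copy matching configuration values onto the AppSettings instance
config.Bind(entity);

Console.WriteLine(entity.ApplicationName);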
The PersonWithEnum class overrides ToString() to make its values easy to display:

public override string ToString()
{
    return $"{Name}, Type={PersonType}";
}

Open the Program.cs file and write the code shown in the code snippet below:

using JsonSamples;
using System.Text.Json;

PersonWithEnum entity = new() {
    Name = "John Smith",
    PersonType = PersonTypeEnum.Supervisor
};

JsonSerializerOptions options = new() {
    PropertyNamingPolicy = JsonNamingPolicy.CamelCase,
    WriteIndented = true
};

Console.WriteLine(
    JsonSerializer.Serialize(entity, options));

When you run the application, the following output is displayed in the console window.

{
  "name": "John Smith",
  "personType": 3
}

Notice that the personType property has a value of 3, which equates to the Supervisor enumeration value. Go back to the Program.cs file and add the following using statement at the top of the file:

using System.Text.Json.Serialization;

Set the Converters property in the JsonSerializerOptions object to use an instance of the JsonStringEnumConverter class. This class works with the serializer and, instead of emitting the numeric value of enumeration properties, it emits the string representation of the enumeration.

JsonSerializerOptions options = new() {
    PropertyNamingPolicy =
        JsonNamingPolicy.CamelCase,
    WriteIndented = true,
    Converters =
    {
        new JsonStringEnumConverter()
    }
};

After adding the JsonStringEnumConverter object, run the application and the following should now display in the console window:

{
  "name": "John Smith",
  "personType": "Supervisor"
}

Serialize a Nested Object
Often you have a property in a class that is itself another class. Don't worry, the JSON serialization process handles this situation just fine. To illustrate, create a class called JwtSettings, as shown in Listing 14. Next, create a class named AppSettingsNested that has two properties: ApplicationName and JWTSettings. The data type for the JWTSettings property is the JwtSettings class you just created. Listing 15 shows the code to serialize this nested object and view the JSON output.

namespace JsonSamples;

public class AppSettingsNested
{
    public string ApplicationName
        { get; set; } = string.Empty;

    public JwtSettings JWTSettings
        { get; set; } = new();
}
Console.WriteLine(
    JsonSerializer.Serialize(dict, options));

When you run the application, the console window displays the following JSON object:

{
  "name": "John Smith",
  "age": 31,
  "isActive": true
}

Serialize a List
To write a JSON array, you can use any of the IEnumerable objects in .NET, such as an array or a List<T>. To illustrate, use the PersonWithEnum class and create a generic list of two PersonWithEnum objects, as shown in Listing 16. Type the code shown in Listing 16 into the Program.cs file and run the application to display the following output in the console window.

[
  {
    "name": "John Smith",
    "personType": 3
  },
  {
    "name": "Sally Jones",
    "personType": 1
  }
]
Deserialize JSON into a C# Object
Now that you've seen how to serialize a C# object into JSON, let's look at reversing the process. To illustrate, create a JSON string with JSON property names that exactly match the C# property names in the class you wish to deserialize this JSON into. You use the JsonSerializer class (Listing 17) like you did for serializing, but call the Deserialize() method, passing in the data type you wish to deserialize the JSON string into and the string itself. Type the code in Listing 17 into the Program.cs file and run the application to see the following results displayed in the console window:
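Listing 17 boils down to a sketch like this, assuming the Person properties are named Name, Age, SSN, and IsActive, as the output suggests:

using JsonSamples;
using System.Text.Json;

string json = @"{
  ""Name"": ""John Smith"",
  ""Age"": 31,
  ""SSN"": null,
  ""IsActive"": true
}";

// The casing matches the C# properties, so no options are required
Person? entity =
    JsonSerializer.Deserialize<Person>(json);

Console.WriteLine(entity);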
John Smith, Age=31, SSN=, IsActive=True

Use Serialization Options
Just like you did when serializing, if the case of the JSON property names doesn't match the C# property names, you may set the PropertyNameCaseInsensitive property to true in the options and pass those options to the Deserialize() method, as shown in Listing 18. Notice that in this listing the JSON property names start with lowercase. If you forget to use the options and the property names have different casing, then no data is mapped from the JSON to the C# object, so an empty object is returned from the Deserialize() method.

Listing 18: Pass in options to control the deserialization process
using JsonSamples;
using System.Text.Json;

string json = @"{
  ""name"": ""John Smith"",
  ""age"": 31,
  ""ssn"": null,
  ""isActive"": true
}";

// Override case matching
JsonSerializerOptions options = new() {
    PropertyNameCaseInsensitive = true
};

// Deserialize JSON string into Person
Person? entity =
    JsonSerializer.Deserialize<Person>(json, options);

Console.WriteLine(entity);

Deserialize Using the JsonNode Object
Another option for deserializing JSON into a C# object is to use either the JsonNode or the JsonDocument classes. The code shown in Listing 19 uses the JsonNode object to illustrate. The JsonDocument class looks very similar to that of the JsonNode. Type this code into the Program.cs file and run the application to get the same output as you saw in the last example.
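The heart of Listing 19 is presumably a sketch like this, with json and options defined as in Listing 18:

using System.Text.Json;
using System.Text.Json.Nodes;

// Parse the string into a JSON DOM first
JsonNode? node = JsonNode.Parse(json);

// Then deserialize the node into a Person
Person? entity =
    JsonSerializer.Deserialize<Person>(node!, options);

Console.WriteLine(entity);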
Deserializing Enumeration Values
If you know that the JSON object is going to have the string representation of a C# enumeration, set the Converters property to a new instance of the JsonStringEnumConverter class in the JsonSerializerOptions object, as shown in Listing 20. If you forget to include the Converters property on the JsonSerializerOptions, a JsonException is thrown. Type the code shown in Listing 20 into the Program.cs file and run the application to see the following output appear in the console window:

John Smith, Type=Supervisor
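In essence, Listing 20 pairs the converter with the Deserialize() method; a sketch, with the JSON carrying the enumeration as a string:

using JsonSamples;
using System.Text.Json;
using System.Text.Json.Serialization;

string json = @"{
  ""Name"": ""John Smith"",
  ""PersonType"": ""Supervisor""
}";

JsonSerializerOptions options = new() {
    // Without this converter, a JsonException is thrown
    Converters = { new JsonStringEnumConverter() }
};

PersonWithEnum? entity = JsonSerializer
    .Deserialize<PersonWithEnum>(json, options);

Console.WriteLine(entity);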
Convert a JSON Array in a File to a List of Person Objects
Right mouse-click on the JsonSampleFiles folder and add a new file named persons.json. Place the following JSON array into this file:

[
  {
    "name": "John Smith",
    "age": 31,
    "ssn": null,
    "isActive": true
  },
  {
    "name": "Sally Jones",
    "age": 39,
    "ssn": "555-55-5555",
    "isActive": true
  }
]

Listing 21: Read a JSON file and convert the JSON object in the file into a C# object
using JsonSamples;
using System.Text.Json;

string fileName =
    $"{AppDomain.CurrentDomain.BaseDirectory}JsonSampleFiles\\person.json";

using FileStream stream = File.OpenRead(fileName);

JsonSerializerOptions options = new() {
    PropertyNameCaseInsensitive = true,
};

// Deserialize JSON string into Person
Person? entity = JsonSerializer
    .Deserialize<Person>(stream, options);

Console.WriteLine(entity);

Listing 22: Read an array of JSON objects from a file and convert to a list of person objects
using JsonSamples;
using System.Text.Json;

string fileName =
    $"{AppDomain.CurrentDomain.BaseDirectory}JsonSampleFiles\\persons.json";

using FileStream stream = File.OpenRead(fileName);

JsonSerializerOptions options = new() {
    PropertyNameCaseInsensitive = true,
};

// Deserialize JSON string into List<Person>
List<Person>? list = JsonSerializer
    .Deserialize<List<Person>>(stream, options);

if (list != null) {
    foreach (var item in list) {
        Console.WriteLine(item);
    }
}

Open the Program.cs file and type in the code shown in Listing 22. This code is almost the same as the code you wrote to deserialize a single person object; the only difference is that you pass the data type List<Person> to the Deserialize() method. Once you have the collection of Person objects, iterate over the collection and display each person on the console window. Run the application and you should see the following output displayed in the console window:

John Smith, Age=31, SSN=, IsActive=True
Sally Jones, Age=39, SSN=555-55-5555, IsActive=True
Get Maximum Age from List of Person Objects
After reading in a list of objects, you may now use LINQ operations or any Enumerable methods, such as Min, Max, Sum, and Average, on that list. In the code shown in Listing 23, the Max() method is applied to the list and the maximum value found in the Age property is displayed on the console window. When you run this application, the value reported back should be thirty-nine (39).
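In essence, Listing 23 applies LINQ's Max() method to the list produced by Listing 22. A sketch:

// Compute the largest Age value across all Person objects
int maxAge = list!.Max(p => p.Age);
Console.WriteLine($"Maximum Age={maxAge}");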
Using the Utf8JsonWriter Class
The Utf8JsonWriter class is a high-performance, forward-only, non-cached method of writing JSON documents. Just like with serialization, you can control the output of the JSON to include white space and indentation. Listing 24 shows how to write a single JSON object into a MemoryStream object. Note that both the MemoryStream and the Utf8JsonWriter objects implement the IDisposable interface, so you need to prefix them with the using statement. You must start each JSON document by calling the WriteStartObject() or the WriteStartArray() method. You then call the appropriate method to write a string, a number, a Boolean, a null, or a comment. Finally, call the WriteEndObject() or the WriteEndArray() method to close the JSON document. Type the code shown in Listing 24 into the Program.cs file and run the application to display the output shown below in the console window:

{
  "name": "John Smith",
  "age": 31,
  "isActive": true
}

Listing 24: The Utf8JsonWriter object is a forward-only cursor for emitting JSON quickly
using System.Text.Json;
using System.Text;

JsonWriterOptions options = new() {
    Indented = true
};

using MemoryStream ms = new();
using Utf8JsonWriter writer = new(ms, options);

writer.WriteStartObject();
writer.WriteString("name", "John Smith");
writer.WriteNumber("age", 31);
writer.WriteBoolean("isActive", true);
writer.WriteEndObject();
writer.Flush();

string json = Encoding.UTF8
    .GetString(ms.ToArray());

Console.WriteLine(json);

Listing 25: The Utf8JsonWriter object can write arrays as well as single objects
using System.Text.Json;
using System.Text;

JsonWriterOptions options = new() {
    Indented = true
};

using MemoryStream ms = new();
using Utf8JsonWriter writer = new(ms, options);

writer.WriteStartArray();
writer.WriteStartObject();
writer.WriteString("name", "John Smith");
writer.WriteNumber("age", 31);
writer.WriteBoolean("isActive", true);
writer.WriteEndObject();
writer.WriteStartObject();
writer.WriteString("name", "Sally Jones");
writer.WriteNumber("age", 39);
writer.WriteBoolean("isActive", true);
writer.WriteEndObject();
writer.WriteEndArray();
writer.Flush();

string json = Encoding.UTF8
    .GetString(ms.ToArray());

Console.WriteLine(json);
Here's a link for those issues: https://bit.ly/EFCore8Features. The list of issues for the fixes can be perused at https://bit.ly/EFCore8Fixes.

If you've followed my tech wanderings over the years, it may be no surprise that my A#1 favorite new feature is the ComplexProperty mapping, an alternative to using Owned Entities to map complex types and value objects. The new ComplexProperty mapping provides a far superior way to map complex types (and therefore, value objects) than the Owned Entity mapping we've been using since the beginning of EF Core. To be clear, there are still some scenarios that are not yet supported, so you may end up using a mix of the two mappings until ComplexProperty is complete. It's the team's intention for this to eventually replace Owned Entities in their entirety. ComplexProperty is a big deal, and it was a big deal for the team to execute. It's at the top of their "what's new" lists as well.

Although the OwnsOne and OwnsMany mappings have fulfilled the basic need to map classes that are used as properties of entities, the work that they were doing under the covers was complicated and led to numerous side effects. The team had tweaked the inner logic a number of times across versions, creating breaking changes along the way, but never really solved the problem properly. They have been contemplating a replacement for some time and have finally pulled it off.

There are some caveats, however, which are a few capabilities that didn't make it into EF Core 8 but will be ready for EF Core 9. In those cases, we just continue using the owned entity mappings. I'll explain the caveats after I allow you to feast your eyes on the new ComplexProperty mapping.

TLDR Complex Types and Value Objects
Let's be sure we're all on the same page. A complex type is a class that doesn't have any identity and is used as a property of another class. An easy example is if you have a first name property and a last name property in a Customer class.

public class Customer
{
    public int CustomerId { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateOnly FirstPurchase { get; set; }
}

Instead of using two string types for every class that needs a person's name, you can create a new class that only has those two strings.

public class Customer
{
    public int CustomerId { get; set; }
    public PersonName Name { get; set; }
    public DateOnly FirstPurchase { get; set; }
}

public class PersonName
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

PersonName is a complex type and can be used as a property of any other class, such as this ShipLabel class, which is obviously missing an address, but that's only to keep this explanation simple.

public class ShipLabel
{
    public int Id { get; set; }
    public DateOnly Printed { get; set; }
    public PersonName Name { get; set; }
}

The most important attribute of PersonName is that it has no identity.

Julie Lerman
@julielerman
thedatafarm.com/contact

Julie Lerman is a Microsoft Regional Director, Docker Captain, and a long-time Microsoft MVP who now counts her years as a coder in decades. She makes her living as a coach and consultant to software teams around the world. You can find Julie presenting on Entity Framework, Domain-Driven Design, and other topics at user groups and conferences around the world. Julie blogs at thedatafarm.com/blog, is the author of the highly acclaimed "Programming Entity Framework" books, and has many popular videos on Pluralsight.com.
Listing 2: Retrieving a Customer and Using its Name for a new Label
var storedCustomer = ctx.Customers.First();
var label = new ShipLabel
{
    Name = storedCustomer.Name,
};
var label2 = new ShipLabel
{
    Name = storedCustomer.Name,
};
ctx.AddRange(label, label2);
ctx.SaveChanges();

Figure 1: Data Model with Person mapped as an Owned Entity vs. a Complex Property
With the owned entity mapping, the Customer's Name and the Name of only one of the ShipLabels (remember, EF Core moved it, not copied it) are tracked separately, and it's a bit convoluted.

Listing 3 shows the DebugView when PersonName is mapped as a ComplexProperty. This time, I'm sharing the LongView with more details because it's so easy to read. The details look just as you would expect. And EF Core is doing a lot less work to manage the PersonName data. It's much simpler, and the side effects of the fake entities just disappear.
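You can dump that tracking state yourself through the change tracker's debug view. A minimal sketch, where ctx is the context from Listing 2:

// Print the change tracker's detailed view of everything it's tracking
Console.WriteLine(
    ctx.ChangeTracker.DebugView.LongView);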
Otherwise, you'll need to go back to mapping with OwnsOne.

What about records instead of a class? I recall first seeing the exploration that the C# team was doing on records at an MVP summit quite a few years ago. Because of how they simplified creating value objects, I was definitely eager to see them come into the language. Record types internalize equality comparison, so you don't need to override the Equals or GetHashCode methods every single time.

However, records did not play very well with owned entities and, again, there were side effects to worry about. Therefore, I never used records until EF Core 8 brought us the ComplexProperty mapping, and I had a bit of catching up to do.

A Quick Records Overview
Records have a number of formats. I spent a lot of time understanding the various ways to express a record to choose the correct flavor. The documentation was very helpful (https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/builtin-types/record), but it still took a few read-throughs for me. I have encapsulated some of the important details in Table 1 for a quick reference.

To begin with, a record, by default, is a reference type. But a record struct is a value type: the correct choice for a value object. There are more decisions to make.

But, and this is a big but, on its own, a record struct is not, I repeat, not, immutable. Therefore, it fails the requirement of a value object. Luckily, C# 10 added the capability to make a record struct read-only.

public readonly record struct PersonName (
    string FirstName, string LastName);

That's a very succinctly expressed and simple value object.
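And because it's a record, value-based equality comes along for free. A quick sketch:

var a = new PersonName("Ella", "Fitzgerald");
var b = new PersonName("Ella", "Fitzgerald");

// Records compare by value, not by reference
Console.WriteLine(a == b);      // True
Console.WriteLine(a.Equals(b)); // True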
If you do need additional logic, you can express the record more like a class, with properties and other logic explicitly defined, as I'm doing here, using init accessors to ensure that it's still immutable. This is an example of a read-only record struct without positional properties.

public readonly record struct PersonName
{
    public string FirstName { get; init; }
    public string LastName { get; init; }

    public string FullName =>
        $"{FirstName} {LastName}";
}
There’s an interesting capability of records that you
should consider, which is that it’s possible to create a new At some point, ComplexProperty will have the same be-
instance of a record with new values. This feature uses a havior. But currently (in EF Core 8), specifying Name as a
with expression to replace property values. nullable type results in a runtime exception. The exception
you get is dependent on how the value object is defined.
For example, I might have instantiated a PersonName using:
If the value object is a class, you get a message about the
var jazzgreat=new PersonName("Ella", fact that it can’t be optional when EF Core is attempting
"Fitzgeralde"); to build the data model based on the DbContext mappings.
Then I discover the typo of the “e” at the end of her System.InvalidOperationException: 'Configuring the complex
name. Of course, with only two properties, I could eas- property 'Customer.Name' as optional is not supported,
ily create a new instance from scratch. But if you have call 'IsRequired()'. See https://github.com/dotnet/efcore/
a lot of properties, you could use the with expression issues/31376 for more information.'
syntax:
Making it required just so EF Core is happy is not a pleas-
jazzgreat=jazzgreat with ing solution. It should only be required if your domain
{LastName = "Fitzgerald"}; invariants specify that Name should be required. If it’s re-
quired, you can use ComplexProperty. If not, you’re stuck
There are a lot of other nuances of records that you can with OwnsOne.
learn about in the docs at https://learn.microsoft.com/
en-us/dotnet/csharp/language-reference/builtin-types/ If PersonName is a record struct (with or without posi-
record. tional properties), you’ll trigger a different exception. EF
Core configures the database fields from the value ob-
Because I found it confusing to sort out all of the behav- ject properties (Customer_FirstName, and Customer_Last-
iors of the various flavors of record types, I’ve listed the Name) as non-nullable fields. At runtime, the database
critical aspects of each (as well as class for comparison) will throw an exception saying that it can’t insert a null
in Table 1. value into a non-nullable column.
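For reference, here's a minimal sketch of the kind of mapping that triggers those errors. The Customer entity shape here is my assumption; the only part that matters is the nullable Name property:

public class Customer
{
    public int Id { get; set; }
    // Nullable value object: OwnsOne tolerates this,
    // but ComplexProperty throws in EF Core 8
    public PersonName? Name { get; set; }
}

// In the DbContext:
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Customer>()
        .ComplexProperty(c => c.Name);
}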
{"FirstName ":"John","LastName":"Doe"}
The question becomes (for some): Should you mix and SPONSORED SIDEBAR
ComplexProperty does not yet support this capability. match the two mappings? I think the answer is yes. I
Additionally, its inability to transform complex types to don’t think it needs to be seen as a maintenance prob- AI Executive
JSON also means that you cannot use ComplexProperty lem. What ComplexProperty currently solves, it does very Briefing
with the CosmosDb provider that stores all its data as well—and does so better than OwnedEntity. For the cases
JSON. Not yet. This is another feature that is tagged that you still need to use Owned Entity, continue to use Experience the game-
in the GitHub repo as “consider for current release,” so them. But keep an eye on those cases because you’ll be changing impact of
hopefully that means EF Core 9. able to replace more (or all?) of those mappings when EF AI through CODE
Core 9 comes out. Consulting’s Executive
Collections of Complex Types: Coming Soon Briefing service. Uncover
Owned Entity not only provides the OwnsOne mapping, Table 2 provides you with a list of possible ways to define the immense potential
but also OwnsMany. Therefore, it’s possible to have a complex types and value objects and whether or not that and wide-ranging
property in your entity that’s a collection of the owned expression is supported with ComplexProperty and Owned benefits of AI in every
types. ComplexProperty doesn’t yet support this, but the Entity mappings in EF Core 8. The EF Core team absolutely industry. Our briefing
team has said it will be in EF Core 9. Keep in mind that plans for a near future when we can completely eliminate provides strategic
value object collections are a disputed topic. Some call OwnsOne and OwnsMany from our code. Until then, take guidance for seamless
implementation,
them an anti-pattern. But I’ve found some edge cases advantage of the tool that works best for each scenario.
covering crucial aspects
where they are quite useful. In fact, I have a collection And test, test, test.
such as infrastructure,
of Author value objects in my EF Core and Domain-Driven talent acquisition, and
Design course on Pluralsight. And even though I’m using The limitations are listed in the docs at this link (https:// leadership.
EF Core 8 in the course, I still had to map that particular learn.microsoft.com/en-us/ef/core/what-is-new/ef-
value object as an owned entity. Happily (and intention- core-8.0/whatsnew#current-limitations). Each has a link Discover how to
ally), the sample application in that course has another to the relevant issue on GitHub and you can let the team effectively integrate
value object that provided a great example of using a know which are important to you by voting for these is- AI and propel your
record and ComplexProperty mapping. sues in GitHub. organization into future
success.
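An OwnsMany mapping for a collection like that looks roughly like the following sketch. The Book entity and its Authors property are hypothetical stand-ins, not the course's actual types:

// Maps a collection of Author value objects as owned types
modelBuilder.Entity<Book>()
    .OwnsMany(b => b.Authors);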
Nested Complex Types: Also Coming Soon
That Pluralsight course also demonstrates nesting value objects. The Author value object has a PersonName property similar to the one I've been using in this article. Because Author is already mapped as an owned entity, I had to tack on its PersonName property as an owned entity as well. You definitely can't combine Owned Entities and ComplexProperty when nesting.

The question becomes (for some): Should you mix and match the two mappings? I think the answer is yes. I don't think it needs to be seen as a maintenance problem. What ComplexProperty currently solves, it does very well—and does so better than OwnedEntity. For the cases that you still need to use Owned Entity, continue to use them. But keep an eye on those cases because you'll be able to replace more (or all?) of those mappings when EF Core 9 comes out.

Table 2 provides you with a list of possible ways to define complex types and value objects and whether or not that expression is supported with ComplexProperty and Owned Entity mappings in EF Core 8. The EF Core team absolutely plans for a near future when we can completely eliminate OwnsOne and OwnsMany from our code. Until then, take advantage of the tool that works best for each scenario. And test, test, test.

The limitations are listed in the docs at this link (https://learn.microsoft.com/en-us/ef/core/what-is-new/ef-core-8.0/whatsnew#current-limitations). Each has a link to the relevant issue on GitHub and you can let the team know which are important to you by voting for these issues in GitHub.

Julie Lerman
the overhead of maintaining underlying infrastructure. But for apps that are already deployed on-premises, it can be difficult to know how to get started re-platforming to one of these environments. Even though .NET applications can often be deployed to Azure App Service with minimal changes, there are usually some changes required, and discovering what those are can take some trial and error.

This article introduces a new feature: Azure Migrate application and code assessment. This new feature allows analyzing the source code, configuration, and binaries of an application to discover upfront what changes will be needed for the app to work in Azure. Azure Migrate application and code assessment makes it easy to plan re-platforming to Azure and highlights what work will be needed along the way. Azure Migrate application and code assessment is a developer-focused experience available as both a Visual Studio extension and a command line .NET SDK tool. The tool scans ASP.NET and ASP.NET Core solutions (and, optionally, their binary dependencies) for a wide variety of potential issues that need to be addressed prior to running in Azure PaaS environments.

Although this article focuses on the experience of using Azure Migrate application and code assessment for .NET, there is also a Java version of the tool available. To learn more about the Java experience, please visit https://learn.microsoft.com/azure/developer/java/migration/appcat.

Mike Rousos
[email protected]
Mike Rousos is a Principal Software Engineer on the .NET Customer Engagement Team. A member of the .NET team since 2004, he has worked on a wide variety of feature areas and contributed content to the .NET team blog, .NET Conf sessions, Channel 9 videos, and .NET development e-books like ".NET Microservices: Architecture for Containerized .NET Applications." Outside of work, Mike is involved in his church and enjoys reading, writing, and games of all sorts. His primary hobby, though, is spending time with his four kids.

Figure 1: Azure Migrate application and code assessment extension download page

Relationship to Other Azure Migrate Features
Using Azure Migrate to prepare for a migration to the cloud isn't new, of course. Azure Migrate has helped users discover and assess on-premises infrastructure for some time. Azure Migrate also includes the Data Migration Assistant to help users assess SQL Server databases for migration to Azure SQL DB, Azure SQL Managed Instance, or SQL Server on an Azure VM. Azure Migrate can even discover web apps hosted on-premises and (if no blocking issues are detected) automate migrating them to Azure with its App Service Migration Assistant tool.
cies can’t be fixed directly by updating source code, they The report’s dashboard includes the total number of proj-
are typically addressed by finding updated versions of ects scanned, the number of incidents discovered, and
the binaries, working with partners who can change the graphics showing the incidents by category and severity.
source code, or finding alternative solutions that work The report include both a number of issues that are the
better in Azure. types of problems detected and a number of incidents
that are the individual occurrences of the issues.
Once you click the Analyze button, the extension analyzes
the selected projects for any potential issues re-platform- Azure Migrate application and code assessment issues are
ing to Azure. This analysis will take anywhere from a few each assigned one of four severities:
seconds to a few minutes, depending on the size of the
projects. When the analysis is complete, you’ll be shown 1. Mandatory: Mandatory issues are those that likely
a report summarizing the results. (See Figure 5.) This re- need to be addressed before the application will work
port can be saved to disk and returned to later using the in Azure. An example of a mandatory issue is using
save icon (or common shortcuts like Ctrl+S). Windows authentication to authenticate web app us-
ers. Because that authentication mechanism depends
on the on-premises Active Directory environment, it
will likely need to be updated in the cloud to use
Azure AD or some other authentication alternative.
2. Optional: Optional issues are opportunities to im-
prove the application when it’s running in Azure, but
they aren’t blocking issues. As an example, storing
app settings or secrets in a web.config file is consid-
ered an optional issue. That pattern will continue to
work when deployed in Azure, just like on-premises,
so no changes are required. Apps that are hosted in
Azure can take advantage of services like Azure App
Configuration and Azure Key Vault to store settings
in ways that are easier to share and update and that
are more secure. So, there’s an optional issue to be-
gin taking advantage of these services as part of the
Azure re-platform.
3. Potential: Potential issues represent situations where
there might need to be a change made for the app to
work in Azure, but it’s also possible that no change is
needed, depending on the details of the scenario. This
Figure 4: Choosing whether to analyze source code or binaries severity is common and requires an engineer to review
Figure 5: The Azure Migrate application and code assessment report dashboard
the incidents. As an example, connecting to a SQL Server database is a potential issue because whether a change is needed depends on whether the database that's used is accessible from Azure. If the database is already hosted in Azure or is accessible from Azure (via Express Route, for example), no changes are needed. On the other hand, if the database used exists on-premises without a way for the app to connect to it once it's running in Azure, thought will need to be given to how this dependency will work after the app is re-platformed. Perhaps the database will need to be migrated alongside the app or perhaps a solution like Express Route or Hybrid Connections will be needed to make the database accessible.
4. Information: Information issues are useful pieces of information for the developer to know but don't require any action. As of the time of this writing, there aren't any information issues in Azure Migrate application and code assessment for .NET (although there are a couple in the Java version of the tool).

In addition to severity, each incident in the report includes a story point number. This is a unitless number representing the relative effort estimated to address the incident (if, in fact, it needs to be addressed). These shouldn't be used to estimate the precise amount of work in terms of hours or days but can be used as a rough estimate for comparing two projects. If one solution has 500 story points worth of issues and another has 200 story points worth of issues, it's probably true that the solution with fewer story points of issues will be simpler and easier to re-platform.

From the initial dashboard, you can navigate to views displaying aggregate issues (all incidents organized by issue type) or projects (incidents organized by project). When viewing incidents for a particular project, you can choose whether to view all incidents for the project or to view incidents per component (a single source file or binary dependency is considered a component).

In incident detail views, there will be a state drop-down box indicating whether each incident is Active, Resolved, or Not Applicable (N/A). All incidents begin as Active and you can change the state as you investigate. Reports can be saved (using the save icon in the top right of the report) and the state will be persisted so that you can return to the same report in the future and continue to review remaining issues and further update the state. As you review the incidents in the report, mark incidents that don't need to be addressed in your solution as not applicable and those that you've fixed as resolved. The incident detail pages also give descriptions of why the incidents were identified, why they matter, and how you can address them. These detail views, shown in Figure 6, include links to documentation and links to the locations in source code where the issues were detected.

In addition to working with reports in the Visual Studio IDE, it's possible to export the reports to share with others. The Export button in the top right of the report interface allows you to export the report in three different formats:

• Export as HTML produces the most readable report for sharing with others. The HTML report, shown in Figure 7, has all the same dashboards and views as the Visual Studio UI and includes snapshots of the latest state of all incidents. This report is best for sharing with others for viewing issues and investigation progress.
• Export as CSV produces a report with the same information but in a spreadsheet format. As seen in
Figure 8, this report format doesn't have the nice charts and graphics of the HTML view but can be useful when you need a simple spreadsheet representation of the issues that you can annotate and update as you discuss and review the incidents.
• Export as JSON produces a machine-readable JSON representation of the incidents. This export option isn't meant for human consumption. Instead, this option produces JSON reports that can be parsed by other applications programmatically.

Not every incident in the report requires action. It's common for Azure Migrate application and code assessment reports to include many potential issues that don't actually require changes. And some of the issues will be optional. The best way to think about the reports is as a helpful resource listing parts of the application requiring review. Once you've looked at the highlighted parts of the application, you can have confidence that you understand what work (if any) is required prior to re-platforming it to Azure.

Example Walkthrough
As an example, I've used the Azure Migrate application and code assessment feature to analyze the updated eShop Northern Mountains sample application found at https://github.com/dotnet/eshop. eShop is a multi-service ASP.NET Core e-commerce solution. Although it does have a number of external dependencies, the sample was created with cloud deployment in mind so there shouldn't be too many issues to address aside from the identification of the external dependencies.

Figure 7: Azure Migrate application and code assessment HTML report

Assessing the source code was quick. Even with its multiple projects, the eShop sample isn't large, so analysis finished in just a few seconds. Altogether, 16 projects were scanned and 41 total incidents of nine different types of issues were detected. Of the 41 incidents, eight had mandatory severity, eight had optional severity, and 25 had potential severity. This ratio of issue severities is typical. The report dashboard is shown in Figure 10.

Normally, I like to review incidents in Azure Migrate application and code assessment reports one project at a time. In this case, though, with a relatively low number of total incidents, it's just as easy to use the Aggregate Issues view and review them all at once. Here are the issues that were identified in the eShop sample:

• RabbitMQ usage: The only mandatory incidents are eight instances of RabbitMQ usage detected in the EventBusRabbitMQ project and shown in Figure 11.
As explained in the issue's description, these incidents relate to a dependency on a RabbitMQ queue that will need to be made available in Azure as part of re-platforming efforts. Several potential strategies are presented, either using an alternative messaging system like Azure Service Bus or running a RabbitMQ cluster in Azure. In order for eShop to work properly in the cloud, though, it will be necessary to follow one of these suggestions to make the messaging done in EventBusRabbitMQ work.
• Database usage: The next most common issue is the eight incidents of database usage detected across several different eShop projects. In all these cases, the assessment has found data being read from or written to databases using Entity Framework Core. The incidents' descriptions explain that I need to ensure that the database used will be available from the Azure environment I migrate the eShop solution to. Similar to the RabbitMQ incidents earlier, these relate to an external dependency that needs to be accessible for eShop to work properly.
• Connection strings: The next issue category I looked at was connection strings. Spread across five projects, there were six incidents of connection strings being detected in configuration files. These were strings like EventBus: amqp://localhost and Redis: localhost. Once again, these are indications of dependencies the eShop solution has on services outside its own processes. The connection strings point to localhost but AMQP and Redis services will not be available locally when run in an Azure App Service or AKS environment. As before, these incidents are reminders to make sure you have messaging and caching services available for eShop to use in Azure and that configuration is updated when deploying to Azure to take advantage of those services.
• Hardcoded URLs: The next issue type I looked at were the five incidents of hardcoded URLs. In the case of eShop, these were all references to other eShop services that will be invoked via the Aspire framework. URLs included http://catalog-api/health and http://basket-api. Because these services are made accessible via Aspire, the URLs will continue to work in Azure and there aren't any changes needed in the application. I can mark these incidents as N/A to indicate that they're false positives. If there had been external URLs in use, I would have had to review the URLs to make sure that the services they represented would be available in eShop's new environment.
• Caching: The next most common issue type was the five incidents of caching APIs being used. In eShop's case, these were all in the Basket.API project and represented Redis usage that the basket API uses to maintain the user shopping basket state. As explained in the issue description, I need to ensure that a Redis instance is available for the basket API to use in the cloud and should also explore using an external caching solution like Azure Cache for Redis so that cached state can be shared between instances of the basket API service if I need to scale out to multiple instances.
• HTTP usage: The report also includes four incidents of outgoing HTTP calls being made. Much like many of the other incidents, these potential incidents represent an external dependency that I need to review to ensure that the referenced services will be available when deployed to Azure. Looking at the code, I see that these are the same calls that use the hardcoded URLs reviewed earlier. Those incidents were about the hardcoded URL strings whereas these ones are about the HTTP client API usage, but both relate to the same dependency, so these items have already been reviewed.
• Local file usage: The final potential issue reported is a single incident of file IO occurring in the Catalog.API project. I see from the incident details that the project is calling File.ReadAllText. To ensure that the accessed file path will be available from Azure, I need to review the file (CatalogContextSeed.cs) where the call occurs. I can click the link included in the incident report to navigate directly to the relevant code in my solution. Doing so, I see that this call is part of seeding the database with information on first run and that the file read is deployed alongside the application, so everything should work just as well in Azure as it did on-premises. This one, also, can be marked N/A.
• Static content: The last remaining set of issues are three optional incidents about several eShop projects serving static content. These incidents are optional because no action is required. However, the issue description explains that once deployed to Azure, there may be performance and scalability benefits of serving static content such as the files identified here using Azure Blob Storage and Azure CDN.

Figure 9: Assessing the updated .NET eShop sample with Azure Migrate application and code assessment
Those are all of the issues detected in the eShop sample. It's a larger list than initially expected, perhaps, but, as we thought, the issues were almost entirely about external dependencies that will need to be migrated to Azure along with the eShop solution. There was nothing blocking in the report except for the helpful reminders that the solution will need database, message queue, and caching solutions in the cloud that may look different from on-premises, and provisioning those (and updating the app to use them) needs to be part of the re-platforming plan.

Using the Command Line Interface
In addition to the Visual Studio extension discussed in the rest of this article, the Azure Migrate application and code assessment feature is also available via a .NET command line tool. The Visual Studio extension offers the most functionality (because the reports maintain state for which incidents have been reviewed), but all the same assessment capabilities are available from the command line, meaning that analysis doesn't need to depend on Visual Studio and can be scripted, if needed.

Like all .NET SDK tools, the Azure Migrate application and code assessment CLI tool is installed using a .NET CLI command:
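Assuming the dotnet-appcat tool ID that the appcat CLI is documented to ship under, that install command is the standard global tool install:

dotnet tool install -g dotnet-appcat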
Once the appcat CLI tool gathers the necessary data from you, it proceeds with analysis and publishes results to a report using your desired format.

One important note about the CLI experience is that the solution must be able to build without errors. Visual Studio can provide the necessary symbol information for analysis, but when running from the command line instead, the target project must be in a buildable state.

In addition to mimicking the Visual Studio experience, the Azure Migrate application and code assessment command line tool accepts parameters that allow all decisions to be made up-front so that the tool runs completely automatically, allowing for a non-interactive experience suitable for scripting. To do this, use the --non-interactive parameter and make sure to specify report format and components to analyze via the command line. Here's an example command line for assessing a C# project non-interactively:
appcat analyze eShopLegacyMVC.csproj -s HTML
-r OutputPath --code --non-interactive
Road Map
The Azure Migrate application and code assessment feature
already provides insights necessary to plan a successful Azure
migration. In the future, though, there are even more useful
features planned. Key features coming in future versions of
Azure Migrate application and code assessment include:
I've been seeing many questions on LinkedIn that boil down to, "How do I increase my SQL/database skills to get a data analyst/database developer job?" In this industry, that question is complicated, as it means different things to different people. It's arrogant to claim to have all the answers, because doing so would presume that someone knows about every job requirement out there. Having said that, I've worked in this industry for decades, both as an employee and as a contractor. I'd like to share what skills have helped me get and keep a seat at the table.

Kevin S. Goff
www.KevinSGoff.net
@StagesOfData
Kevin S. Goff is a database architect/developer/speaker/author, and has been writing for CODE Magazine since 2004. He was a member of the Microsoft MVP program from 2005 through 2019, when he spoke frequently at community events in the Mid-Atlantic region and also spoke regularly for the VS Live/Live 360 Conference brand from 2012 through 2015.

Opening Remarks: A Rebirth of SQL Server Skills
Over the last few years, there have been intense opportunities and job growth in the general area of data analytics. That's fantastic! There's also been a reality check of something that database professionals warned about last decade: the need for those using self-service BI tools to have some basic SQL and data management skills. As part of research, I spent a substantial amount of time reading LinkedIn posts, and talking to recruiters and other developers about this, and there's one common theme: There's still a booming need for SQL and data handling skills. I want to make a joke about being an old-time SQL person and a "boomer," but I was born one month after the official end of the boomer generation.

I'm a big sports and music fan and I often hear people talk about the "DNA" of great athletes and musicians. In this context, the definition isn't referring to the biological aspects of a person. It's more the traits they carry with them and the habits they've burned into themselves that they leverage regularly to do their jobs and do them well. I certainly hope that everyone who's willing to work hard will get a job, and I'm equally excited that I've seen a resurgence of "what SQL Server skills should a data person have?"

I know people who build great websites and great visualizations in reporting tools, where the available data was very clean and prepared by an existing data team. However, for every one individual job out there like that, there's more than one job where you'll have to put on your SQL/data-handling hat.

I've worn multiple hats in the sense that I've always had work. I make mistakes, I underestimate, I still commit many silly errors that we all, as developers, wish we could avoid. Although no one person can possibly cover every possible SQL/data skill that will make someone successful, I stepped back and thought, "What has helped me to help clients? What has been the difference-maker on a project?" And that's why I decided to write this article. This will be a two-part article. Throughout both, I'm going to mention some topics that I covered in prior CODE Magazine articles where the content is still just as relevant today.

You can find many web articles with titles like, "Here are the 30 best SQL Server interview questions you should be prepared for." There are many good ones and I recommend reviewing them as much as you can. I also recommend getting your proverbial hands dirty inside of Microsoft SQL Server Management Studio. On that note, I'm using Microsoft SQL Server, which means I'll be covering some Microsoft-specific topics. Having said that, many of the topics in this article are relevant to other databases.

I didn't want to call this article "The 13 things you should study before a SQL interview," because I'm going beyond that. I'm covering what I think makes for a good SQL/database developer. Yes, there's overlap, as I want to share some of the specific skills companies are often looking for.

First, Know Basic SQL
"Knowing basic SQL" is really two things: understanding the SQL language (according to the ANSI SQL standard) and understanding specific features in the database product (in this article, Microsoft SQL Server) and some of the physical characteristics of Microsoft databases. There are great books out there, but these topics tend to come up again and again. I'll start with some index basics, even before getting into some language basics.

Know the Different Types of Indexes
A common question is the difference between a clustered index and a non-clustered index. With this and other topics in this article, I'm not going to write out a full definition, because other websites have done a great job. But here are things I think you should know.

You can only create one clustered index per table and that index defines the sorted order of the table. For a sales table that might have millions of rows, a clustered index on either the sale transaction ID or possibly the sales date will help queries that need to scan over many rows in a particular order.

You can have many non-clustered indexes. They serve the purpose for more selective queries: that is, finding sales between two dates, finding sales for specific products, specific geographies, etc. A non-clustered index might contain just one column, or it could contain multiple columns (a composite index) if you'll frequently need to query on a combination of them.
Figure 2: Execution plan with an INDEX SEEK, which is far more efficient (only one row read)
Related: Know When SQL Server Will Use These Indexes: Just because you create an index doesn't mean SQL Server automatically uses it. For instance, you might create indexes that SQL Server won't use, either because the SQL statement you're using isn't search argument optimizable or because you might not realize what something like compound indexes will (and won't) do.

I'm going to take the table Person.Person from the AdventureWorks database and create my own table. I'll also create two indexes: a clustered index on the primary key (BusinessEntityID) and a non-clustered index on the Last Name and the First Name.

drop table if exists dbo.TestPerson
go
select * into dbo.TestPerson from Person.Person

Now I'll use a single query to retrieve the row for a specific Business Entity ID:

select * from TestPerson
where BusinessEntityID = 12563

In the absence of any index, SQL Server must perform a table scan against the table and read all 19,972 rows. Here's what SQL Server returns for an execution plan (Figure 1).

Although the query runs in a split second, SQL Server had to read through all the rows. It's not exactly optimal.

Now let's create a clustered index that drives the sort order of the table:

create clustered index
[ix_BusinessEntityClustered] on TestPerson
(BusinessEntityID)

If I run the same query again and then look at the execution plan, I'll see a very different story: a CLUSTERED INDEX SEEK where SQL Server only needed to read one row (Figure 2).

Next, I'll query the table for all names where the last name is "Richardson".

select * from testperson
where lastname = 'Richardson'

Does the clustered index help us at all? Unfortunately, not really. Although SQL Server scans a clustered index instead of a row table (heap), SQL Server must scan all 19,972 rows (Figure 3).

To help with queries based on a last name, I'll create a non-clustered index on the Last Name column.

create nonclustered index [ix_LastName]
on TestPerson (LastName)

After creating the index on Last Name, let's query for a specific last name, and then for a specific last name and first name:

select * from testperson
where lastname = 'Richardson'

select * from testperson
where lastname = 'Richardson' and
firstname = 'Jeremy'

In the case of the first query, SQL Server uses a more efficient INDEX SEEK on the new index. However, it does need to perform what's called a KEY LOOKUP into the clustered index, to retrieve all the columns (because I did a SELECT * to ask for all the columns).
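To optimize the combined query further, the next step is a composite index on both name columns. Judging by the LastNameFirstName index name referenced below, its definition was presumably along these lines (a sketch, as the original statement isn't shown here):

create nonclustered index [ix_LastNameFirstName]
on TestPerson (LastName, FirstName)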
Okay, so a composite index further optimizes the query. Here's the last question: Suppose I query only on the first name to retrieve all the people named Jeremy? Will SQL Server use the FirstName column from the index and optimize as well as it did when I used the last name? Figure 4 doesn't give us great news.

Unfortunately, SQL Server won't perform an INDEX SEEK. Although SQL Server uses the LastNameFirstName index, it performs an INDEX SCAN through all 19,972 rows. It only finds one "hit" and performs a key lookup to retrieve the non-key columns.

"Suppose I have 100 customers in a customer master and 100,000 rows in a sales table. You can assume that every sale record is for a valid customer. In other words, there are no orphaned sales rows. If I do an INNER JOIN between the rows based on customer ID, how many rows should I expect to get? If I do a LEFT OUTER JOIN, how many rows should I expect to get?"
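In query form, the two cases in that question look like this (the table and column names are my assumptions):

-- INNER JOIN: one row per matching customer/sale pair
select c.CustomerID, s.SaleID
from dbo.Customers c
inner join dbo.Sales s
    on s.CustomerID = c.CustomerID

-- LEFT OUTER JOIN: also keeps customers with no sales
select c.CustomerID, s.SaleID
from dbo.Customers c
left outer join dbo.Sales s
    on s.CustomerID = c.CustomerID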
Yes, that's an interview question floating out there, I kid you not. The problem is, you don't know if the person is trying to see what other questions you might ask, or maybe the person is trying to see if the two numbers
Second, suppose Task A starts and updates one or more of the tables, but performs a rollback at the end (because of some post-validation error, etc.). Suppose Task B read the data using READ UNCOMMITTED just before the rollback. Task B will be returning data that never officially saw the light of day because Task A rolled it back. That's also not good!
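Here's a sketch of that second scenario as a two-query-window demo (the table and values are hypothetical):

-- Window 1 (Task A): update, but don't commit yet
begin transaction
update dbo.TestStatus set Rating = 'Poor' where ID = 1

-- Window 2 (Task B): a dirty read sees the uncommitted 'Poor'
set transaction isolation level read uncommitted
select Rating from dbo.TestStatus where ID = 1

-- Window 1 (Task A): roll it back
rollback transaction
-- Task B reported data that never officially existed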
These two alone should provide caution to developers who use READ UNCOMMITTED. Again, there can be specific instances (often tied to workflow throughout the course of a day) where READ UNCOMMITTED is acceptable.

Many DBAs refer to snapshot isolation as versioning.

Sounds great, doesn't it? Overall, it is, although there's one downfall. Suppose Task A commits its transaction, but Task B is still in the middle of its READ session of SNAPSHOT. If Task B queries the row again (in the same read session), it continues to return "Fair" because it's still reading the row version from the time its snapshot session began.
For example, I worked on a project where we had some pretty significant cost discrepancies between two systems. We knew the issues stemmed from code between the two systems that needed to be refactored. Before we could dive into that, we had to come up with a plan RIGHT AWAY to fix the data. Of course, you can't fix a problem (or in this case, a myriad of problems) without identifying all the issues, and that was the first order of business: identifying all the different ways data had gone bad.

Without going into specifics, we found four different scenarios. Of those four, two of them had sub-scenarios. Some of these were simple and "low-hanging fruit" and some were more complicated. Here were some of them:

• Rows in the legacy system marked as deleted/archived, but still in the production system
• Rows in the legacy system marked as deleted, but should not have been
• Rows in the legacy system with multiple cost components, where the target system only ever recognized the first component

SET @CategoryBCount = (select count(*) from
    CostDetailsLegacy outside
    where MarkedArchive = true
    And exists (select 1 from CostProduction inside
        where outside.<Key1> = inside.<Key1> And
        outside.<Key2> = inside.<Key2> and
        inside.ConditionForActive = true))

Additionally, management might provide an error factor: Maybe if cost rates differ by less than two cents (rounding errors, bad math approaches involving integer division, etc.), and *maybe* they'll elect to tackle that later. Once I saw a manager flip out when they saw the top of the list of discrepancies sorted by variance and the overall row count. They thought that because the top 10 rows were off by a large percentage and we had thousands of rows, that we had a disaster. It turns out that after row 20, the variances dropped to pennies, with a slew of numbers off by just a small amount. So even within your categories, check the deviation among the rows.

I'm not going to devote two pages of code to the specifics (it'll just give me flashbacks and nightmares anyway).
• Two T-SQL articles:
  • https://codemag.com/Article/2401051/Stages-of-Data-Some-Basic-SQL-Server-Patterns-and-Practices
  • https://codemag.com/Article/1801051/A-SQL-Programming-Puzzle-You-Never-Stop-Learning
• A Power BI article:
  • Stages of Data: COVID Data, Summary Dashboards, and Source Data Tips (codemag.com)
• Four articles on SQL Server Reporting Services:
  • https://codemag.com/Article/1805051/Refactoring-a-Reporting-Services-Report-with-Some-SQL-Magic
  • https://codemag.com/Article/1711061/SQL-Server-Reporting-Services-Eight-Power-Tips
  • https://codemag.com/Article/1705061/SQL-Server-Reporting-Services-Seven-Power-Tips
  • https://codemag.com/Article/1605111/SQL-Server-Reporting-Services-Eight-Power-Tips

I've created data projects and dashboard pages from personal data for everything from my weekly health stats to personal finances. The more you practice, the better!

It's great to read and absorb information from websites and books. Yes, sometimes it's because you're trying to solve a specific problem at work, so you already know you're getting your hands dirty and you just need to know how to use your hands. Other times, you might be researching or learning a topic where you haven't gotten your hands dirty. All learning is kinetic in some way—a person could read a book on how to perform open heart surgery 100 times and be able to quote each line in the book, for instance. Well, I'm not saying that implementing a Type 2 changing dimension is open heart surgery, but the more you can demonstrate to others that you CAN do something…. As someone who's interviewed people, I might find someone's personal example (a good example) of implementing a Type 2 SCD, or someone being able to open two query windows with a test table to demonstrate READ COMMITTED SNAPSHOT, to be very compelling.

One Thing I Won't Talk About (But One Final Thing That I Will)
There are other great tools and technologies that database developers use. One that comes to mind is Python. Database developers who also work on the .NET side will sometimes use Entity Framework. Those who work on the ETL side might use third-party tools such as COZYROC and possibly different Master Data Management tools. The list goes on and on.

Reading SQL Server books and blogs is great, but what's even greater is taking some of those skills and trying them out on your own databases. Microsoft has AdventureWorks and Contoso Retail demo databases. When I wrote articles on COVID data, I found many Excel/CSV files with statistics. Yes, it took some work to assemble those into meaningful databases, but it was worth it. If you're starting out at a new job, or even applying for a new job, you want people to watch you do something and say, "Wow, that person has obviously done this before."

Summary: What I Almost Called This Article
I've been working on this article for over four months. As most authors can attest, what you start with and what you finish with can be different things. As I look back over this, the content itself didn't change much, but the reasons I wrote it evolved. As I mentioned earlier, I've seen many LinkedIn questions where something like this would be helpful. I also wrote this because I wanted to share what things I've seen many times. I'll never claim to have all the answers on what makes a good database developer, but I've instructed people at a tech school whose mission was to help people get jobs (or get better jobs) and I've mentored other developers. I always wanted to take an inventory of what fundamentals I think others will find important, just to make sure I hadn't forgotten anything (and I'll freely admit that I'd forgotten the specifics of Fill Factor). I also know someone who's considering a career in this industry. I've been successful in my career: I've made many mistakes, and I've learned hard lessons as well! I wanted to look back on what areas of knowledge have helped me to be successful.

I've been talking to a developer that I mentored for a few years. Their first response was, "Wow, you're really trying to expose interviewers!!!" As Eric Idle said to John Cleese during the famous "Nudge Nudge Wink Wink" skit: "Oh no, no, no, (pause), YES!"

There are topics I covered in this article in more detail than others. I went into a fair amount of detail on the Snapshot Isolation Level, but only briefly talked about Change Data Capture and logging. There are other web articles out there, including some from me. As I've linked in this article, there were some topics that I've previously covered in CODE Magazine and didn't want to repeat.

Kevin S. Goff
than SOAP or REST alone. This article aims to provide a comprehensive overview of the evolution of web APIs, exploring the transition from SOAP to REST, and finally to GraphQL. It will delve into the motivation behind each architectural style, and their characteristics, benefits, and drawbacks. By understanding the progression from SOAP to REST and the emergence of GraphQL, developers can make informed decisions when choosing the right API design for their projects.

Joydip Kanjilal
[email protected]
Joydip Kanjilal is an MVP (2007-2012), software architect, author, and speaker with more than 20 years of experience. He has more than 16 years of experience in Microsoft .NET and its related technologies. Joydip has authored eight books, more than 500 articles, and has reviewed more than a dozen books.

If you want to work with the code examples discussed in this article, you need the following installed in your system:

• Visual Studio 2022
• .NET 8.0
• ASP.NET 8.0 Runtime

If you don't already have Visual Studio 2022 installed on your computer, you can download it from here: https://visualstudio.microsoft.com/downloads/.

In this article, I'll examine the following points:

• SOAP, REST, and GraphQL and their benefits
• The key differences between SOAP, REST, and GraphQL
• The benefits and drawbacks of SOAP, REST, and GraphQL
• How to use each of these tools in enterprise apps

After gaining this knowledge, you'll build three applications: one each using SOAP, REST, and GraphQL.

What Is Simple Object Access Protocol (SOAP)?
Simple Object Access Protocol (SOAP) is a communication protocol for data exchange in a distributed environment that allows different applications to interact with one another over a network by leveraging XML as the message format. You can take advantage of SOAP to build interoperable web services that work with disparate technologies and platforms. The structure and content of XML messages, as well as a set of communication guidelines, are outlined in a SOAP document.

Note that ASP.NET Core doesn't have any built-in support for SOAP. Rather, the .NET Framework provides built-in support for working with ASMX and WCF. Using third-party libraries, you can still build applications that leverage SOAP in ASP.NET Core. Figure 1 demonstrates how SOAP works.

Figure 1: Simple Object Access Protocol (SOAP) at work

Anatomy of a SOAP Message
A SOAP (Simple Object Access Protocol) message is an XML-based structure used to exchange information between web services across diverse networks and platforms. A typical SOAP message is comprised of several elements that define the message's structure, content, and optional features. SOAP messages are designed to be extensible, neutral, and independent of any specific programming model or transport protocol, typically HTTP or HTTPS.

A typical SOAP message comprises four key elements, as shown in Figure 2:

• Envelope
• Header (optional)
• Body
• Fault

Figure 2: A SOAP message

This next snippet is how the structure of a typical SOAP message looks:
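A minimal sketch of that structure, assembled from the element list above and the SOAP 1.2 namespace used later in this article, looks like this:

<?xml version="1.0"?>
<soap:Envelope
    xmlns:soap="http://www.w3.org/2003/05/soap-envelope/">
  <soap:Header>
    <!-- Optional header entries -->
  </soap:Header>
  <soap:Body>
    <!-- Payload for the request or response -->
    <soap:Fault>
      <!-- Optional fault details, present only on errors -->
    </soap:Fault>
  </soap:Body>
</soap:Envelope>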
SOAP Body
The SOAP Body represents the main body of the SOAP message, which contains the data or parameters for the method being sent to the web service. It should be noted that the SOAP Body element is mandatory. It can have one or more child elements that represent the actual payload and one or more fault elements in the event of an error.

<soap:Body>
    <!-- SOAP Body content -->
</soap:Body>

SOAP Fault
During the processing of a SOAP message, an optional element called SOAP Fault can be used to convey error or fault information back to the client in case of errors or exceptions that occur during the process.

<soap:Fault>
    <!-- Fault details -->
</soap:Fault>

The format of a typical SOAP response looks like this:

HTTP/1.0 200 OK
Content-Type: text/xml; charset=utf-8

<?xml version="1.0"?>
<env:Envelope xmlns:env=
    "http://schemas.xmlsoap.org/soap/envelope/">
  <env:Header>
  </env:Header>
  <env:Body>
  </env:Body>
</env:Envelope>

And here's how a SOAP response to the above SOAP request looks:

HTTP/1.1 200 OK
Content-Type: application/soap+xml;

<?xml version="1.0"?>
<soap:Envelope
    xmlns:soap=
    "http://www.w3.org/2003/05/soap-envelope/"
    soap:encodingStyle=
    "http://www.w3.org/2003/05/soap-encoding">
• Service provider: This is the component that provides the web service and encompasses the application itself, the platform on which the application executes, and the middleware.

DataContract and ServiceContract
In SOAP, DataContract and ServiceContract are key concepts used to define the structure of data and the operations supported by the service.
[ServiceContract]
public interface IMyDemoService
{
    [OperationContract]
    string GetText(int id);
}

[DataContract]
public class Customer
{
    [DataMember]
    public int Id { get; set; }
    [DataMember]
    public string FirstName { get; set; }
    [DataMember]
    public string LastName { get; set; }
    [DataMember]
    public string Address { get; set; }
}

Implement a SOAP Service in ASP.NET Core
In this section, I'll examine how to build a SOAP service in ASP.NET Core. The section that follows outlines the series of steps needed to create a new ASP.NET Core Web API project in Visual Studio.

Create a New ASP.NET Core 8 Project in Visual Studio 2022
You can create a project in Visual Studio 2022 in several ways. When you launch Visual Studio 2022, you'll see the Start window. You can choose "Continue without code" to launch the main screen of the Visual Studio 2022 IDE.

To create a new ASP.NET Core 8 Project in Visual Studio 2022:

Create the CustomerRepository
The CustomerRepository class implements the ICustomerRepository interface and its methods, as shown in the code given in Listing 1.

Create the Service Contract
To create a service contract, create an interface called ICustomerService and write the code given in Listing 2 in there. The CustomerService class implements the ICustomerService interface.
using CustomerServiceReference;

ICustomerService soapServiceChannel =
    new CustomerServiceClient(
        CustomerServiceClient.EndpointConfiguration.
        BasicHttpBinding_ICustomerService_soap);

var response = await
    soapServiceChannel.GetCustomersAsync();

Figure 4: Adding a new Service Reference

When you run the application, the first names of the customers will be displayed in the console window.
REST Is a Protocol
It should be noted that REST is not a standard or a protocol. Representational State Transfer (REST) refers to an architectural style: a set of architectural constraints, guidelines, and principles for developing networked applications and web services that are scalable, maintainable, and loosely coupled.
Figure 7: REST application at work
REST Is Only Used for Web Services
Although REST was originally designed for creating web services, it can also be used for other types of applications such as mobile apps or IoT devices. As long as the principles of statelessness, client-server architecture, and resource-based communication are followed, any type of application can be built using REST.

REST Requires the Use of HTTP
Although HTTP is commonly used in conjunction with REST due to its widespread adoption and support for various request methods, it's not a requirement. The principles of resource identification and manipulation are applicable to any network protocol.

Every API that Uses HTTP Is Automatically Considered RESTful
No, not actually. Just because an API uses HTTP doesn't mean it follows the principles of REST. A genuinely RESTful API should adhere to all the constraints set out by the architectural style, including statelessness, caching, uniform interface, etc.

URLs Must Contain Nouns Only
Another misconception about REST is that it can only be used with HTTP. Although RESTful APIs typically use HTTP as the underlying protocol, REST itself is not tied to any specific protocol and can be implemented over other protocols like CoAP or WebSocket.

Key Principles of REST
There are several key principles that underpin REST architecture, which are listed in this section. By adhering to these key principles, developers can design scalable, reliable, and efficient web services that meet the demands of today's applications as far as performance and flexibility are concerned.

Client-Server Architecture
As a rule of thumb, a RESTful architecture should be based on a client-server architecture. Although the client requests resources from the server, the server provides resources as appropriate to the authorized clients.

What Are REST APIs? How Do They Work?
REST APIs communicate data between client and server using HTTP requests. Once a client sends a request, the
reduced performance and improper or inefficient use of memory, CPU, and network resources.
• Versioning: Versioning in REST APIs manages changes to the APIs by assigning different versions, such as v1, v2, and so on. GraphQL eliminates the necessity for version control by allowing clients to specify the data they need in the query, making it easier for APIs to evolve without breaking the existing queries. APIs built with GraphQL do not require separate versioning because clients or API consumers can define their requirements in the query and fetch the required data without breaking the existing queries.
• Type System: GraphQL employs a strongly typed schema specifying the data format you can request. This schema functions as a consensus between the client and the server, thereby enabling the early detection of errors. By recognizing potential errors up front, you can resolve the errors in a planned way before they impact your clients.

Benefits and Downsides of GraphQL
Here are the key benefits of GraphQL:

• Efficient data querying: With GraphQL, clients can query multiple resources and retrieve related data in a single request. They can traverse the data graph and retrieve only the required data, avoiding the over-fetching of unnecessary fields or nested objects.
• Reduced network traffic: GraphQL reduces the network traffic and bandwidth consumption by minimizing the payload size of the responses. This explains why applications that leverage GraphQL often exhibit better performance compared to RESTful applications.
• Versioning and evolution: With GraphQL, deprecated fields or types can be marked to signal clients for migration, allowing for smooth API evolution without breaking existing clients.
• Support for real-time data: With GraphQL subscriptions, clients can subscribe to specific data changes in real-time. Once subscribed, the clients are notified using events about any changes made to the data in real-time.
• Strongly typed schema: GraphQL enforces a robust typing system and a well-defined schema, providing clarity for the available data types and fields. The GraphQL schema defines the structure of the data and the types of operations (queries and mutations) that can be performed, thereby helping with validation and introspection.
• Improved performance: For applications that require complex queries combining multiple resources, GraphQL can be more efficient than REST because it can gather all data in a single request.

There are certain downsides as well:

• Learning curve: Despite its simplicity, GraphQL is quite complex for those who are unfamiliar with its concepts. The learning curve for designing schemas, resolving queries, and securing GraphQL APIs can be quite steep.
• Caching challenges: Due to the dynamic nature of GraphQL queries, client-side and server-side caching can be more challenging compared to REST, where URLs can easily serve as cache keys.
• Increased complexity: GraphQL adds more complexity to the server-side implementation in contrast to conventional REST APIs. You should use resolvers to get the data you need. Managing complex queries might incur additional effort.
• Rate limiting: Implementing rate limiting in GraphQL is more complex than in REST because it's harder to predict the cost of a query due to its flexible nature.
• Security considerations: GraphQL APIs must be carefully designed to avoid potential vulnerabilities. Exposing too much data or functionality through the API can increase the attack surface, making proper authentication and authorization crucial.

GraphQL vs. REST
Although REST and GraphQL are two of the most popular approaches for building APIs, there are subtle differences between the two:

• Request format: Each endpoint in REST specifies a set of resources and operations, and the client can typically retrieve or modify all resources using HTTP methods, such as GET, POST, PUT, or DELETE. With GraphQL, clients request data based on a specific structure that matches the server's schema.
• Data Retrieval: Each resource in REST can only be accessed through a particular endpoint, meaning
in each of these fields can be traversed and retrieved by nesting them. Create a new .cs file named StoreQuery in your project and replace the default generated code with the code given in Listing 7.

Create the GraphQL Object Type
In GraphQL, Object Types are used to describe the type of data fetched using your API and they are represented by creating a class that derives from the GraphQL.Types.ObjectGraphType class. Create a new file named StoreType.cs in your project and replace the default code with the code given in Listing 8.
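In skeletal form, such a class looks like the following sketch. Because Listing 8 isn't reproduced here, the Store type and its properties are my assumptions:

using GraphQL.Types;

public record Store(int Id, string Name);

public class StoreType : ObjectGraphType<Store>
{
    public StoreType()
    {
        // Expose the fields clients can query
        Field(s => s.Id);
        Field(s => s.Name);
    }
}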
Create a GraphQL Subscription
You should also create a subscription to enable your GraphQL server to notify all subscribed clients when an event occurs. Create a new class named StoreSubscription and replace the default generated code with the source code given in Listing 9.

Configure GraphQL Server in ASP.NET Core
Once you've created the Query type to expose the data you need, you should configure GraphQL Server in the Program.cs file using the following code snippet:
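The snippet itself would follow the usual GraphQL.NET server registration pattern. Treat this as a sketch: the StoreSchema name and the use of the GraphQL.Server.Transports.AspNetCore middleware are my assumptions, not the article's listing:

var builder = WebApplication.CreateBuilder(args);

// Register the schema and a JSON serializer with the GraphQL server
builder.Services.AddGraphQL(b => b
    .AddSchema<StoreSchema>()
    .AddSystemTextJson());

var app = builder.Build();

// Map the GraphQL endpoint
app.UseGraphQL<StoreSchema>("/graphql");
app.Run();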
Joydip Kanjilal