
FACILITATING THE SPREAD OF KNOWLEDGE AND INNOVATION IN PROFESSIONAL SOFTWARE DEVELOPMENT

.NET Core eMag Issue 68 - Jan 2019

ARTICLE: ASP.NET Core: The Power of Simplicity
ARTICLE: A Quick Tour of the DotNet CLI
ARTICLE: How to Test ASP.NET Core Web API
IN THIS ISSUE

ASP.NET Core: The Power of Simplicity

Performance Is a Key .NET Core Feature

A Quick Tour of the DotNet CLI

Distributed Caching with ASP.NET Core

Azure and .NET Core Are Beautiful Together

.NET Core and DevOps

Advanced Architecture for ASP.NET Core Web APIs

How to Test ASP.NET Core Web API

FOLLOW US: facebook.com/InfoQ · @InfoQ · linkedin.com/company/infoq · youtube.com/MarakanaTechTV

CONTACT US
GENERAL FEEDBACK: [email protected]
ADVERTISING: [email protected]
EDITORIAL: [email protected]
A LETTER FROM THE EDITOR

Chris “Woody” Woodruff


The history of .NET is an interesting journey from the beginnings in the early 2000s to the current implementation called .NET Core. The original vision for the .NET Framework was to give developers in the Microsoft ecosystem a technology stack similar to Java and the JVM. This allowed developers to use Rapid Application Development (RAD) with WinForms and WebForms. Over time, this ecosystem grew but could not expand beyond the Windows desktop and server space. As more developers moved beyond Windows to the web and to other operating systems like Linux, iOS, and Android, the push to get .NET onto those platforms brought the first open-source .NET from the Mono Project and then the Xamarin platform for mobile applications.

There was a movement inside Microsoft and in the .NET community around the early 2010s to bring the next generation of the framework to developers. One need was to shed the legacy technology that had accumulated in the .NET Framework since the beginning. Another was to allow the .NET Framework to run on the three major operating systems as well as in the cloud. The final push came after Satya Nadella was promoted to CEO of Microsoft in 2014, when the open-source wave was sweeping through Redmond with the Roslyn C# compiler and the acquisition of Xamarin.

What was created was .NET Core, which allowed the team inside Microsoft to learn from the past with the .NET Framework, draw on the new ideas in software development since 2000, and finally open-source the .NET Core code to all developers. We now have a rich platform and set of APIs, along with performance and efficiency that rival many of the other application frameworks in the open-source world. We are just at the start of the .NET Core story, but it is one with great minds already pushing it to great heights.

I had a great experience working on the two series of articles collected in this eMag, as well as with the authors of all of the pieces. They are all great community members as well as friends I have come to know better during the process of creating the series. Thanks to all of them for giving their time, knowledge, and wisdom. I want to thank the entire InfoQ staff for their support and patience over the last 16 months it took to publish all of these articles online. I hope you enjoy the articles as much as I have and that they give you the confidence to dive into the .NET Core stack of technologies and create the next great apps.
CONTRIBUTORS
Chris “Woody” Woodruff
has been developing and designing software solutions for over 20
years and has worked with many different platforms and tools. He is a
community leader, helping with events such as GR DevNight, GR DevDay, West
Michigan Day of .NET, and CodeMash. He has been a Microsoft MVP in
Visual C#, Data Platform, and SQL, and was recognized in 2010 as one of
the top-20 MVPs worldwide. Woodruff is a developer advocate for JetBrains
and evangelizes .NET, .NET Core, and JetBrains’ products in North America.

Maarten Balliauw

loves building web and cloud apps. His main interests are in ASP.NET MVC, C#, Microsoft Azure, PHP, and application performance. He co-founded MyGet and is developer advocate at JetBrains. He's an ASP Insider and MVP for Microsoft Azure. Balliauw is a frequent speaker at national and international events and organizes Azure User Group events in Belgium. In his free time, he brews his own beer.

Matthew Groves

is a guy who loves to code. It doesn't matter if it's C#, jQuery, or PHP: he'll submit pull requests for anything. He has been coding professionally ever since he wrote a QuickBASIC point-of-sale app for his parent's pizza shop back in the '90s. He currently works as a developer advocate for Couchbase. He is the author of AOP in .NET (published by Manning), and is also a Microsoft MVP. He has a blog.

Jeremy Miller

takes bits and pieces and tweaks and tunes, and comes up with a creative solution to his needs. Reputation notwithstanding, a shade-tree mechanic knows how to get things running. While Miller doesn't have any particular mechanical ability (despite a degree in mechanical engineering), he likes to think that he is the developer equivalent of a shade-tree mechanic. His hard drive is certainly littered with the detritus of abandoned open-source projects.

Dave Swersky

has been working in IT for over 20 years, in roles from support engineer to software developer to enterprise architect. He is an aspiring polyglot and is passionate about all things DevOps. He has presented on DevOps at conferences including DevOps Enterprise Summit, CodeMash, Stir Trek, and at local meetups in Kansas City, Mo. Swersky has also written a book on DevOps: DevOps Katas: Hands-On DevOps. He can be found on Twitter @dswersky.

Chris Klug

is a software developer and architect at tretton37 in Stockholm, Sweden. He has spent the better part of his life solving problems by writing software, and he loves the creative side of coding as well as the challenges it continuously provides. He spends a significant amount of his time presenting at developer conferences around the world — something that has apparently caught Microsoft's attention as they have awarded him Microsoft MVP for seven years running. When asked, he has no problem admitting that he would much rather be kite-boarding on a nice beach or having one of his tattoos extended than playing with his computer.

Eric D. Boyd

is the founder and CEO of responsiveX. responsiveX is a management and technology consultancy with deep industry and functional expertise. With a strong focus on user experience, innovation, and emerging platforms, responsiveX provides clients with a strategic partner to help guide today's development efforts while preparing for tomorrow's innovation. Boyd is passionate about technology, entrepreneurship, and business growth. He loves developing innovative and disruptive startups, growing existing businesses, and helping others increase the value and return of technology investments.
Read online on InfoQ

ASP.NET CORE: THE POWER OF SIMPLICITY

by Chris Klug

KEY TAKEAWAYS

• OWIN gives developers a nice abstraction of the webserver for building web applications.
• ASP.NET Core steps away from the standard OWIN implementation but does it to simplify things for developers.
• Simplifying a complex thing using an abstraction doesn't make it less powerful, it just makes it easy to use.
• By using middleware instead of high-level frameworks, developers can build lean but still-powerful applications.
• Even if you aren't using ASP.NET Core, you can still get a lot of these features by using Project Katana in older solutions.

When Microsoft decided to reimagine its web development platform ASP.NET, they decided that having it tied to IIS might not be such a great idea. The fact that the original ASP.NET was built on IIS-specific technology not only tied it to Windows, it also made it impossible to self-host — limitations that don't quite cut it in the new cloud-centric world.


Instead, Microsoft decided to go all in on Open Web Interface for .NET (OWIN) and abstract away the webserver completely. This allows the framework, as well as its users, to completely ignore which server is responsible for accepting the incoming HTTP requests and instead allows developers to focus on building the functionality they need.

OWIN isn't a new concept. The OWIN specification has been around for quite a few years and Microsoft has allowed developers to use it while running under IIS for almost as long, through an open-source project called Project Katana. Microsoft hasn't just allowed developers to use it through Katana, it has made OWIN the foundation for all ASP.NET authentication functionality for several years.

OWIN is fairly simple, and the simplicity is what makes it so great. It's an interface that manages to abstract away the webserver using only a predefined delegate and a generic dictionary of string and object. So, instead of having an event-driven architecture in which the webserver raises events that you can attach to, it defines a pipeline of so-called "middleware".

Middleware is a piece of code that interacts with the requests coming from clients. (And, yes, I intentionally kept my definition generic by writing "a piece of code".) You can implement it a couple of different ways. All it needs is a delegate with the right signature, even if a class-based version is preferred in most cases. The code can inspect and modify the request as it arrives from the client as well as the response that is returned to the client. And it can even decide to generate the response that is sent to the client on its own.

Incoming messages pass from middleware to middleware in the order in which they were added to what is called a request pipeline, which evokes an image of the message flowing through a pipe and interacting with middleware as it flows past. However, a pipe would require some form of pump to pass the message from middleware to middleware. In OWIN, the series of middleware is more like a linked list. The incoming request is passed to the first middleware in the list, which then passes it on to the next when it is done. The request is passed from middleware to middleware until one of them generates a response and sends it back up the list again, allowing previous middleware to inspect the outgoing message, at least at a high level.

It's a simple concept, but flexible and powerful.

In reality, the process is a bit more complicated as the response is actually sent to the client as soon as it has been generated, before it has been passed back up the pipeline. This makes it a little more complex to modify the outgoing response but for the sake of simplicity, let's just ignore that.

The OWIN specification defines the incoming request as a dictionary of string and object. The server accepts the incoming HTTP request, splits the request information into usable pieces, and places them in a dictionary before passing the request to the first middleware for processing. It adds things like the request path, a headers collection, a stream containing the body, and so on to the dictionary using predefined keys. Using these well-known keys, the middleware can then read the required information from the dictionary and figure out what it needs to do. However, the server also adds objects used for the response, like a response stream or response headers, to the dictionary, which the middleware can use to generate a response to send to the client. On top of that, the server can expose extra functionality to the middleware by providing delegates in the dictionary. There is, for example, an extension for the OWIN specification that allows the server to provide a specialized way to send files to the client. This is exposed to the middleware by providing a predefined delegate as an object in the dictionary. The middleware can query the dictionary to see if the currently used server supports that feature or not and, if it does, can then get hold of the delegate and call it to send back a file.

The OWIN interface is ridiculously simple but extremely powerful. It offers low-level access to the HTTP requests and responses while abstracting away the nitty-gritty details of handling the actual request. However, it does this without limiting the ability to offer higher-level abstractions through delegates in the dictionary. On the other hand, it is very low level, and it does make some things a little cumbersome, which I guess is why Microsoft decided to embrace OWIN, but in their own way.

When Microsoft added the OWIN pipeline to ASP.NET Core, they made some changes to it. They added their own layer on top of it, to make it a bit easier to work with (at least, that is the way I see it). Instead of supplying the middleware with a generic dictionary, Microsoft added a typed layer on top of it. So, instead of getting a dictionary, you get an HttpContext object, which allows you to more easily interact with the request and response. For example, instead of writing ((Stream)dictionary["owin.ResponseBody"]).Write(…) for a response to the client, you can write httpContext.Response.Body.Write(…). It's not a huge difference, but not having to remember the correct key to use and to cast to the correct type makes it much less error prone.
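To make that comparison concrete, here is a minimal middleware sketch in the raw OWIN style. It is a hedged illustration, not code from the article: it assumes the standard "AppFunc" delegate signature and the well-known "owin.*" environment keys from the OWIN specification, and the class name and "/hello" route are made up. The equivalent ASP.NET Core code would work against the typed HttpContext instead.

using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Threading.Tasks;

using AppFunc = System.Func<System.Collections.Generic.IDictionary<string, object>, System.Threading.Tasks.Task>;

public class HelloMiddleware
{
    private readonly AppFunc _next;

    public HelloMiddleware(AppFunc next)
    {
        _next = next;
    }

    public Task Invoke(IDictionary<string, object> environment)
    {
        // Read the request path using one of the well-known OWIN keys.
        var path = (string)environment["owin.RequestPath"];

        if (path == "/hello")
        {
            // Generate a response directly: set the status code and write
            // to the response stream exposed in the dictionary.
            environment["owin.ResponseStatusCode"] = 200;
            var body = (Stream)environment["owin.ResponseBody"];
            var bytes = Encoding.UTF8.GetBytes("Hello from OWIN middleware");
            return body.WriteAsync(bytes, 0, bytes.Length);
        }

        // Not our request: pass it on to the next middleware in the list.
        return _next(environment);
    }
}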


In Project Katana, HttpContext was just a wrapper on top of the dictionary, which gave typed access to the items in it. In ASP.NET Core, on the other hand, it is a completely custom object. However, there are extensions that will provide a "proper" OWIN API, should you want to use some OWIN-based middleware.

So, are you really going to build applications using a low-level API like this? Of course not! It's fairly easy to build higher-level abstractions on top of this low-level API, which is exactly what Microsoft has done. In ASP.NET Core, ASP.NET MVC is implemented as a middleware. It's supported by several services that are injected into the middleware using .NET Core's dependency-injection functionality. But in the end, it is middleware that is responsible for the interaction with the client requests, and this is where this basic, low-level API shows its potential.

As you can register as much middleware as you want in the request pipeline, you can mix and match the frameworks and features. Want to use ASP.NET MVC for your UI? No worries, just add the MVC middleware and supporting services. Want to use NancyFx for your API? No problem, just add it to the request pipeline. Want to use Facebook authentication? Just add the required middleware and services. How about Twitter authentication with JWT bearer tokens for your API? Just add the middleware. I'm pretty sure you get it….

This simple interface allows you to mix and match the things you need, combine third-party middleware with your own custom-built ones, or create a completely customized request pipeline that fits your needs.

And on top of that, if middleware isn't handling a request, nothing will. For example, if you don't add the static-files middleware to the pipeline, it won't even serve static files from disc. This is very different from hosting ASP.NET in IIS, where some of the functionality comes from ASP.NET and some from IIS.

I can't write an article about the request pipeline in ASP.NET Core and not have a line of code to show how it works. Let's use authentication as an example. It's one of the most common ways to get dirty with OWIN for the first time and it provides a great example of the power that the interface offers, so it seems fitting to try that here.

Adding authentication to an ASP.NET Core web application means adding a couple of things to the application. First of all, you need to add the authentication services to the services collection. And then you need to tell it the different types of authentication that you want to support.

Say that you want to authenticate users using Facebook, and once users have been authenticated, you want to sign them into the application using cookie authentication. That configuration would look something like this:

public void ConfigureServices(IServiceCollection services)
{
    services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
        .AddFacebook(options =>
        {
            options.AppId = "### CLIENT ID ###";
            options.AppSecret = "### CLIENT SECRET ###";
        })
        .AddCookie();
}

Once you have the services in place, you need to add the authentication middleware to the request pipeline like this:

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // Middlewares that don't require authenticated users

    app.UseAuthentication();

    // Middlewares that might require authenticated users
}

And finally, when you want to send a user to Facebook for authentication, you just call:

await HttpContext.ChallengeAsync(
    FacebookDefaults.AuthenticationScheme,
    new AuthenticationProperties { RedirectUri = "/userinfo" });


This call will use the defined authentication scheme to set up the HttpContext to redirect the user to the correct login page. In this case, that means that the Facebook authentication handler sets the returned status code to HTTP 302 to make the response a redirect, and sets the location header to the Facebook URI that the client needs to visit to authenticate. Once Facebook has authenticated the user and redirects the user back to the application with the identity token, the authentication middleware catches that callback and uses the Facebook service to validate the token. If everything is okay, the middleware asks the cookie-authentication service to issue a cookie that will be used to keep the client signed in. In reality, that just means the cookie-authentication service adds a header to the dictionary of response headers on the HttpContext, which the middleware then adds to the response to the client when the authentication service finally redirects the client back to the path defined in the AuthenticationProperties.RedirectUri property.

Using a couple of extension methods to register some services and a middleware, you have set up a fairly complex authentication flow that both redirects the user for authentication and handles the callback from the identity provider when authentication has taken place.

(Note: The previous configuration will by default try to authenticate the user using cookie authentication instead of Facebook. A small configuration change will make it use Facebook as the default instead, but that's a bit beyond the scope of this article.)

I hope that I have managed to convince you that this move to an OWIN-based pipeline in ASP.NET Core is a great thing. It might be a simple and low-level interface but it packs a massive punch when used the right way. Creating an easy interface that allows for both low-level and high-level usage and that doesn't limit what can be done in the future is no easy task — but I do believe this interface does just that. Now it is up to us to make the most out of it! There are very few limitations if you are just a little creative — it's just a question of what problems you need to solve.


Read online on InfoQ

PERFORMANCE IS A KEY .NET CORE FEATURE

by Maarten Balliauw

KEY TAKEAWAYS

• .NET Core is cross platform and runs on Windows, Linux, macOS, and many more operating systems. Its release cycle is much shorter than that of .NET. Most of .NET Core ships in NuGet packages and can be easily released and upgraded.
• The faster release cycle is particularly helpful for working on performance improvement, and a great deal of work is going into improving the performance of language constructs such as SortedSet and LINQ's .ToList() method.
• Faster cycles and easier upgrades also speed development of new improvements to .NET Core performance, such as with types like System.ValueTuple and Span<T>.
• These improvements can then be fed back into the full .NET Framework once proven.

Now that .NET Core is on the streets, Microsoft and the open-source community can develop new features and enhancements in the framework more rapidly. One of the areas of .NET Core that gets continuous attention is performance: .NET Core brings along many optimizations in terms of performance, in execution speed as well as memory allocation.


In this article, we'll go over some of these optimizations and how the continuous stream — or Span<T> — of performance work helps us in our lives as developers.

.NET and .NET Core

Before we dive in, let's first look at the main difference between the full .NET Framework (let's call it .NET for convenience) and .NET Core. To simplify things, let's assume both frameworks respect the .NET Standard — essentially a spec that defines the class-library baseline for all of .NET. That makes both worlds very similar, except for two main differences.

First, the full .NET is mostly a Windows thing, whereas .NET Core is cross platform and runs on Windows, Linux, macOS, and many more operating systems. Second, the release cycle is very different. .NET ships as a full framework installer that is system-wide and often part of a Windows installation, making the release cycle longer. .NET Core is different in that there can be multiple .NET Core installations on one system and there is no long release cycle: most of .NET Core ships in NuGet packages and can be easily released and upgraded.

The big advantage is that the .NET Core world can iterate faster. It can try out new concepts in the wild and eventually feed them back into .NET as part of a future .NET Standard.

Very often (but not always), new features in .NET Core are driven by C# language design. Since the framework can evolve more rapidly, the language can, too. A prime example of both the faster release cycle and a performance enhancement is System.ValueTuple. C# 7 and VB.NET 15 introduced value tuples, which were easy to add to .NET Core due to its faster release cycles. Value tuples became available to full .NET as a NuGet package for .NET 4.5.2 and earlier, and only became part of the full framework in .NET 4.7.

Performance improvements in .NET Core

One of the advantages of the .NET Core effort is that many things had to be either rebuilt or ported from the full framework. Having all of the internals in flux for a while, combined with fast release cycles, provided opportunity for performance improvements in code that was almost considered to be "don't touch, it just works!" before.

Let's start with SortedSet<T> and its Min and Max implementations. A SortedSet<T> is a collection of objects that is maintained in a sorted order through a self-balancing tree structure. Before, getting the Min or Max object from that set required traversing the tree, calling a delegate for every element, and setting the return value as the minimum or maximum to the current element, eventually reaching the top or bottom of the tree. Calling that delegate and passing around objects involved quite some overhead — but one developer saw the tree for what it was and removed the unneeded delegate call as it provided no value. His own benchmarks showed a 30%-50% performance gain.

Another nice example is found in LINQ, more specifically in the commonly used .ToList() method. Most LINQ methods operate as extension methods on top of IEnumerable<T> to provide querying, sorting, and methods like .ToList(). By doing this off IEnumerable<T>, we don't have to care about the implementation of the underlying IEnumerable<T>, as long as we can iterate over it.

A downside is that when calling .ToList(), we don't know what size of list to create, and so we enumerate all objects in the enumerable, doubling the size of the list we're about to return whenever capacity is reached. That's slightly insane as it potentially wastes memory (and CPU cycles). So, a change was made to create a list or array with a known size if the underlying IEnumerable<T> is in fact a List<T> or Array<T> with a known size. Benchmarks from the .NET team show roughly a 4x increase in throughput for these.

When looking through pull requests in the .NET Core Lab repository on GitHub, we can see many performance improvements that both Microsoft and the community have contributed, since .NET Core is open source and anyone can provide performance fixes. Most of these are just that: fixes to existing classes in .NET. But there is more — .NET Core also introduces new concepts for performance and memory that go beyond just fixing existing classes. Let's look at those.

Reducing allocations with System.ValueTuple

Imagine that we want to return more than one value from a method. Previously, we'd have to resort to using out parameters, which are not pleasant to work with and are not supported when writing async methods. Another option was to use System.Tuple<T> as a return type, but this allocates an object and has rather unpleasant property names to work with, like Item1, Item2, etc. A third option would have been to use specific types or


anonymous types, but that introduces overhead when writing the code as we'd need the type to be defined, and it also makes unnecessary allocations in memory if all we need is a value embedded in that object.

Meet tuple return types, backed by System.ValueTuple. Both C# 7 and VB.NET 15 added a language feature to return multiple values from a method. Here's a before and after:

// Before:
private Tuple<string, int> GetNameAndAge()
{
    return new Tuple<string, int>("Maarten", 33);
}

// After:
private (string, int) GetNameAndAge()
{
    return ("Maarten", 33);
}

In the first case, we are allocating a Tuple<string, int>. While the effect will be negligible in this example, the allocation is done on the managed heap and the Garbage Collector (GC) will have to clean it up at some point. In the second case, the compiler-generated code uses the ValueTuple<string, int> type that itself is a struct and is created on the stack — giving us access to the two values we want to work with while making sure the GC won't have to work on the containing data structure.

The difference also becomes visible if we use ReSharper's Intermediate Language (IL) Viewer to look at the code the compiler generates in the above examples. Here are just the two method signatures:

// Before:
.method private hidebysig instance class [System.Runtime]System.Tuple`2<string, int32> GetNameAndAge() cil managed
{
    // ...
}

// After:
.method private hidebysig instance valuetype [System.Runtime]System.ValueTuple`2<string, int32> GetNameAndAge() cil managed
{
    // ...
}

We can clearly see that the first example returns an instance of a class and the second example returns an instance of a value type. The class is allocated in the managed heap (tracked and managed by the Common Language Runtime (CLR), subject to garbage collection, mutable), whereas the value type is allocated on the stack (fast, less overhead, immutable). In short, System.ValueTuple itself is not tracked by the CLR and merely serves as a simple container for the embedded values we care about.
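As a small usage sketch (the PrintNameAndAge method and its console output are my own illustration, not the article's code), the value-tuple version can also carry element names and be deconstructed at the call site:

// Named tuple elements (C# 7); still a ValueTuple<string, int> underneath.
private (string name, int age) GetNameAndAge()
{
    return ("Maarten", 33);
}

public void PrintNameAndAge()
{
    // Deconstruct the returned value tuple directly into two locals.
    var (name, age) = GetNameAndAge();
    Console.WriteLine($"{name} is {age} years old.");
}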
Note that next to their optimized memory usage, features like tuple deconstruction are quite pleasant side effects of making this part of the language as well as the framework.

Allocation-less substrings with Span<T>

We've already touched on stack versus managed heap in the previous section. Most .NET developers use only the managed heap, but .NET has three types of memory we can use, depending on the situation:

• Stack memory — the memory space in which we typically allocate value types like int, double, bool, etc. It's very fast (it frequently lives in the CPU's cache) but limited in size (typically less than 1 MB). The adventurous use the stackalloc keyword to add custom objects but know they are on dangerous territory as a StackOverflowException can occur at any time and crash the entire application.

• Unmanaged memory — the memory space without a GC, in which we have to reserve and free memory ourselves, using methods like Marshal.AllocHGlobal and Marshal.FreeHGlobal.

• Managed memory/managed heap — the memory space where the GC frees up memory that is no longer in use and where most of us live our happy programmer lives with few memory issues.

All three types of memory have their own advantages and disadvantages and have specific use cases. But what if we want to write a library that works with all of these memory types? We'd have to provide multiple methods for them: one that takes a managed object and another one that takes a pointer to an object on the stack or in the unmanaged heap.

A good example would be creating a substring of a string. We would need a method that takes a System.String and returns a new System.String that represents the substring to handle the managed version. The unmanaged/stack version would take a char* (yes, a pointer!) and the length of the string, and would return similar pointers to the result. Unmanageable….

The System.Memory NuGet package (currently still in preview) introduces a new Span<T> construct. It's a value type (so the GC does not track it) that tries to unify access to any underlying memory type. It provides a few methods, but in essence it holds:

• a reference to T;
• an optional start index;
• an optional length; and
• utility functions to grab a slice of the Span<T>, copy the contents, etc.

Think of it as this pseudo-code:

public struct Span<T>
{
    ref T _reference;
    int _length;
    public ref T this[int index] { get {...} }
}

Whether or not we are creating a Span<T> using a string, a char[], or even an unmanaged char*, the Span<T> object provides us with the same functions, such as returning an element at index. Think of it as being a T[], where T can be any type of memory. To write a Substring() method that handles all types of memory, all we have to care about is working with a Span<char> (or its immutable version, ReadOnlySpan<T>):

ReadOnlySpan<char> Substring(ReadOnlySpan<char> source, int startIndex, int length);

The source argument here can be a span that is based on a System.String or on an unmanaged char* — we don't have to care.

But let's forget about the memory-type agnosticism of Span<T> for a bit and focus on performance. If we'd write a Substring() method for System.String, this is probably what we would come up with:

string Substring(string source, int startIndex, int length)
{
    var result = new char[length];

    for (var i = 0; i < length; i++)
    {
        result[i] = source[startIndex + i];
    }

    return new string(result);
}

That's great, but we are in fact creating a copy of the substring. If we call Substring("Hello World!", 0, 5), we'd have two strings in memory, "Hello World!" and "Hello", potentially wasting memory space — and our code still has to copy data from one


array to another to make this happen, consuming CPU cycles. Our implementation is not bad, but it is not ideal either.

Imagine implementing a web framework and having to use the above code to grab the request body from an incoming HTTP request that has headers and a body. We'd have to allocate big chunks of memory that have duplicate data: one that has the entire incoming request and the substring that holds only the body. And then there's the overhead of having to copy data from the original string into our substring.

So let's rewrite that using (ReadOnly)Span<T>:

static ReadOnlySpan<char> Substring(ReadOnlySpan<char> source, int startIndex, int length)
{
    return source.Slice(startIndex, length);
}

That is shorter, but we have benefits beyond that. Due to the way that Span<T> is implemented, our method does not return a copy of the source data. It instead returns a Span<T> that refers to a subset of our source. In the example of splitting an HTTP request into headers and body, we'd have three Span<T>: the incoming HTTP request, one Span<T> pointing to the original data's header, and another Span<T> pointing to the request body. The data would be in memory only once (the data from which the first Span<T> is created), all else would just point to slices of the original. There's no duplicate data and no overhead in copying and duplicating data.
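Here is a quick usage sketch of that span-based Substring(). This is a hedged example: AsSpan() comes with the System.Memory package, and ToString() is called only to print the result.

ReadOnlySpan<char> text = "Hello World!".AsSpan();

// No new string is allocated here; the result is just a view over the original memory.
ReadOnlySpan<char> hello = Substring(text, 0, 5);

Console.WriteLine(hello.ToString()); // materializes a string: "Hello"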

Conclusion

With .NET Core and its faster release cycle, Microsoft and the open-source community can progress faster on new features related to performance. We have seen a lot of work that has gone into improving existing code and constructs in the framework, such as improving LINQ's .ToList() method.

Faster cycles and easier upgrades also speed development of new improvements to .NET Core performance, such as with types like System.ValueTuple and Span<T> that make it more natural for .NET developers to use the different types of memory we have available in the runtime while avoiding their pitfalls.

Imagine that some .NET base classes were reworked to a Span<T> implementation: things like string UTF parsing, crypto operations, web parsing, and other typical CPU- and memory-consuming tasks. That would bring great improvements to the framework, and all of us .NET developers would benefit. It turns out that is precisely what Microsoft is planning to do! .NET Core's performance future is bright!


Read online on InfoQ

A QUICK TOUR OF THE DOTNET CLI


by  Jeremy Miller

Recently, folks who have either been hesitant or unable to switch off
the older, full .NET framework have asked me about the advantages in
moving to .NET Core. In reply, I’ll mention the better performance, the
improved csproj file format, the improved testability of ASP.NET Core,
and that it is cross-platform.

As the author of several OSS tools (including Marten, StructureMap, and Alba), the single biggest advantage to me personally has been the advent of the dotnet command-line interface (CLI). Used in conjunction with the new .NET SDK csproj file format, the dotnet CLI tooling has made it far easier for me to create and maintain build scripts for my projects. It's easier to run tests in build scripts, easier to both consume and publish NuGet packages, and the CLI extensibility mechanism is fantastic for incorporating custom executables distributed through NuGet packages into automated builds.

To get started with the dotnet CLI, let's first install the .NET SDK on our development machine. Once that's done, there's a couple of helpful things to remember:

• The dotnet tools are globally installed in our PATH and are available anywhere in our command-line prompts.


• The dotnet CLI uses the Linux style of command syntax using --word [value] for optional flags in longhand or -w [value] as a shorthand. Anyone used to the Git or Node.js command-line tools will feel right at home with the dotnet CLI.

• dotnet --help will list the installed commands and some basic syntax usage.

• dotnet --info will tell us what version of the dotnet CLI we are using. It's probably a good idea to call this command in our continuous-integration (CI) build for later troubleshooting when something works locally and fails on the build server or vice versa.

• Even though I'm talking about .NET Core in this article, do note that we can use the new SDK project format and the dotnet CLI with previous versions of the full .NET Framework.

Hello World from the command line

To take a little bit of a guided tour through some of the highlights of the dotnet CLI, let's say we want to build a simple Hello World ASP.NET Core application. Just for fun though, let's add a couple twists:

1. There'll be an automated test for our web service in a separate project.

2. We'll deploy our service via a Docker container because that's what all the cool kids do (and it shows off more of the dotnet CLI).

3. And, of course, we'll try to utilize the dotnet CLI as much as possible.

If you want to see the final product of this code, check out this GitHub repository.

First off, let's start with an empty directory named DotNetCliArticle and open our favorite command-line tool to that directory.

We're going to start by using the dotnet new command to generate a solution file and new projects. The .NET SDK comes with several common templates for common project types or files, with other templates available as add-ons (more on this in a later section). To see what templates are available on our machine, start by using the command dotnet new --help, which will list the installed templates along with some basic usage information.

One of the available templates is sln, for an empty solution file. We'll use that template to get started by typing the command dotnet new sln, which will generate this output:

The template "Solution File" was created successfully.

By default, this command will name the solution file after the containing directory. Because we called the root directory DotNetCliArticle, the generated solution file is DotNetCliArticle.sln.


Going farther, let’s add the actual project for our Hello World with this
command:

dotnet new webapi --output HeyWorld

This executes the webapi template to the directory HeyWorld, which we specified through the optional output flag. This template will generate a slimmed-down MVC Core project structure suitable for headless APIs. Again, the default behavior is to name the project file after the containing directory, so we get a file named HeyWorld.csproj in that directory, along with all the basic files for a minimal ASP.NET MVC Core API project. The template also sets up in our new project all the necessary NuGet references to ASP.NET Core that we need to get started.
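As a side note on the flag syntax mentioned earlier, the longhand and shorthand forms are interchangeable. For instance, assuming -o as the documented short form of --output for dotnet new, these two commands do the same thing:

dotnet new webapi --output HeyWorld
dotnet new webapi -o HeyWorld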

Since I just happen to be building this in a small Git repository, after add-
ing any new files with git add, I use git status to see what has been
newly created:

new file: HeyWorld/Controllers/ValuesController.cs


new file: HeyWorld/HeyWorld.csproj
new file: HeyWorld/Program.cs
new file: HeyWorld/Startup.cs
new file: HeyWorld/appsettings.Development.json
new file: HeyWorld/appsettings.json

Now, to add the new project to our empty solution file, we can use the
dotnet sln command like this:

dotnet sln DotNetCliArticle.sln add HeyWorld/HeyWorld.


csproj

We’ve now got the shell of a new ASP.NET Core API service without ever
having to open Visual Studio.NET (or JetBrains Rider, in my case). To go
a little farther and start our testing project before we write any actual
code, issue these commands:

dotnet new xunit --output HeyWorld.Tests


dotnet sln DotNetCliArticle.sln add HeyWorld.Tests/
HeyWorld.Tests.csproj

These commands create a new project using xUnit.NET as the test library
and add that new project to our solution file. The test project needs a
project reference to the HeyWorld project, and fortunately we can add a
project reference with the nifty dotnet add tool like so:

dotnet add HeyWorld.Tests/HeyWorld.Tests.csproj


reference HeyWorld/HeyWorld.csproj

I know, before opening up the solution, of a couple of other NuGet ref-


erences that I’d like to use in the testing project. Shouldly is my assertion
tool of choice, so I’ll add a reference to its latest version by issuing anoth-
er call to the command line:

dotnet add HeyWorld.Tests/HeyWorld.Tests.csproj package


Shouldly



That will give me command-line output like this:

info : Adding PackageReference for package 'Shouldly' into project 'HeyWorld.Tests/HeyWorld.Tests.csproj'.
log : Restoring packages for /Users/jeremydmiller/code/DotNetCliArticle/HeyWorld.Tests/HeyWorld.Tests.csproj...
info : GET https://api.nuget.org/v3-flatcontainer/shouldly/index.json
info : OK https://api.nuget.org/v3-flatcontainer/shouldly/index.json 109ms
info : Package 'Shouldly' is compatible with all the specified frameworks in project 'HeyWorld.Tests/HeyWorld.Tests.csproj'.
info : PackageReference for package 'Shouldly' version '3.0.0' added to file '/Users/jeremydmiller/code/DotNetCliArticle/HeyWorld.Tests/HeyWorld.Tests.csproj'.

Next, I want to add at least one more NuGet reference to the testing project called Alba.AspNetCore2, which I’ll
use to author HTTP contract tests against the new web application:

dotnet add HeyWorld.Tests/HeyWorld.Tests.csproj package Alba.AspNetCore2

Before working with the code, let’s make sure everything compiles just fine by issuing this command to build all
the projects in our solution at the command line:

dotnet build DotNetCliArticle.sln

And, ugh, that didn’t compile because of a diamond-dependency version conflict between Alba.AspNetCore2
and the versions of the ASP.NET Core NuGet references in the HeyWorld project. No worries though, because
that’s easily addressed by fixing the version dependency of the Microsoft.AspNetCore.All NuGet in the
testing project like this:

dotnet add HeyWorld.Tests/HeyWorld.Tests.csproj package Microsoft.AspNetCore.All


--version 2.1.2

In the example above, using the --version flag with the value “2.1.2” will fix the reference to exactly that ver-
sion instead of just using the latest version found from our NuGet feeds.

To doublecheck that our NuGet dependency problems have all gone away, we can use the commands shown
below to do a quicker check than we’d get by recompiling everything:

dotnet clean && dotnet restore DotNetCliArticle.sln

As an experienced .NET developer, I’m paranoid about lingering files in the temporary /obj and /bin folders. I use
the “Clean Solution” command in Visual Studio.NET any time I try to change references, just in case something is
left behind. The dotnet clean command does the exact same thing from a command line.

Likewise, the dotnet restore command will try to resolve all known NuGet dependencies of the solution file
I specified. In this case, using dotnet restore will let us quickly spot any potential conflicts or missing NuGet
references without having to do a complete compilation — and that’s the main way I use this command in my
own work. In the latest versions of the dotnet CLI, NuGet resolution is done automatically (though we can over-
ride that behavior with a flag) in calls to dotnet build/test/pack/etc. that would first require NuGet.



Our call to dotnet restore DotNetCliArticle.sln ran cleanly with no errors, so we’re finally ready to write
some code. Let’s open up the C# editor of our choice and add to the HeyWorld.Tests project a code file that
contains a simple HTTP contract test that will specify the behavior we want from the GET: / route in our new
HeyWorld application:

using System.Threading.Tasks;
using Alba;
using Xunit;

namespace HeyWorld.Tests
{
    public class verify_the_endpoint
    {
        [Fact]
        public async Task check_it_out()
        {
            using (var system = SystemUnderTest.ForStartup<Startup>())
            {
                await system.Scenario(s =>
                {
                    s.Get.Url("/");
                    s.ContentShouldBe("Hey, world.");
                    s.ContentTypeShouldBe("text/plain; charset=utf-8");
                });
            }
        }
    }
}

The resulting file should be saved in the HeyWorld.Tests directory with an appropriate name such as verify_the_endpoint.cs.

Without getting too much into the Alba mechanics, this just specifies that the home route of our new HeyWorld
application should write “Hey, world.” We haven’t actually coded anything real in our HeyWorld application, but
let’s still run this test to see if it’s wired correctly and fails for the right reason.

Back in the command line, we can run all of the tests in the testing project with this command:

dotnet test HeyWorld.Tests/HeyWorld.Tests.csproj


That will fail with our one test because nothing has actually been implemented yet,
giving this output:
Build started, please wait...
Build completed.

Test run for /Users/jeremydmiller/code/DotNetCliArticle/HeyWorld.Tests/bin/Debug/


netcoreapp2.1/HeyWorld.Tests.dll(.NETCoreApp,Version=v2.1)
Microsoft (R) Test Execution Command Line Tool Version 15.7.0
Copyright (c) Microsoft Corporation. All rights reserved.

Starting test execution, please wait...


[xUnit.net 00:00:01.8266290] HeyWorld.Tests.verify_the_endpoint.check_it_out [FAIL]
Failed HeyWorld.Tests.verify_the_endpoint.check_it_out
Error Message:
Alba.ScenarioAssertionException : Expected status code 200, but was 404
Expected the content to be 'Hey, world.'
Expected a single header value of 'content-type'='text/plain', but no values were found on the response



Stack Trace:
at Alba.Scenario.RunAssertions()
at Alba.SystemUnderTestExtensions.Scenario(ISystemUnderTest system, Action`1 configure)
at Alba.SystemUnderTestExtensions.Scenario(ISystemUnderTest system, Action`1 configure)
at HeyWorld.Tests.verify_the_endpoint.check_it_out() in /Users/jeremydmiller/code/
DotNetCliArticle/HeyWorld.Tests/verify_the_endpoint.cs:line 14
--- End of stack trace from previous location where exception was thrown ---

Total tests: 1. Passed: 0. Failed: 1. Skipped: 0.


Test Run Failed.
Test execution time: 2.5446 Seconds

To sum up that output, one test was executed and it failed. We also see the standard xUnit output that gives us
some information about why the test failed. It’s important to note here that the dotnet test command will
return an exit code of zero, denoting success, if all the tests pass and a non-zero exit code, denoting failure, if
any test fails. This is important for CI scripting where most CI tools use the exit code of any commands to know
when the build has failed.
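For example, a hedged sketch of how a shell-based CI step might consume that exit code:

dotnet test HeyWorld.Tests/HeyWorld.Tests.csproj
if [ $? -ne 0 ]; then
  echo "Tests failed; failing the build."
  exit 1
fi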

I’m going to argue that the test above failed for the “right” reason, meaning that the test harness seems to be
able to bootstrap the real application and I expected a 404 response because nothing has been coded yet. Mov-
ing on, let’s implement an MVC Core endpoint for the expected behavior:

public class HomeController : Controller
{
    [HttpGet("/")]
    public string SayHey()
    {
        return "Hey, world.";
    }
}

(Note that the previous code should be appended as an additional class in our HeyWorld\Startup.cs file.)

Returning to the command line, let’s run the previous dotnet test HeyWorld.Tests/HeyWorld.Tests.
csproj command again and hopefully we’ll see results like this:

Build started, please wait...


Build completed.

Test run for /Users/jeremydmiller/code/DotNetCliArticle/HeyWorld.Tests/bin/Debug/


netcoreapp2.1/HeyWorld.Tests.dll(.NETCoreApp,Version=v2.1)
Microsoft (R) Test Execution Command Line Tool Version 15.7.0
Copyright (c) Microsoft Corporation. All rights reserved.

Starting test execution, please wait...

Total tests: 1. Passed: 1. Failed: 0. Skipped: 0.


Test Run Successful.
Test execution time: 2.4565 Seconds



Now that the test passes, let’s run the actual application. Since the “dotnet new webapi” template uses the
in-process Kestrel web server to handle HTTP requests, literally the only thing we need to do to run our new
HeyWorld application is to launch it from the command line with this command:

dotnet run --project HeyWorld/HeyWorld.csproj


Running the command above should give us this:
Using launch settings from HeyWorld/Properties/launchSettings.json...
: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[0]
User profile is available. Using ‘/Users/jeremydmiller/.aspnet/DataProtection-Keys’ as
key repository; keys will not be encrypted at rest.
Hosting environment: Development
Content root path: /Users/jeremydmiller/code/DotNetCliArticle/HeyWorld
Now listening on: https://localhost:5001
Now listening on: http://localhost:5000
Application started. Press Ctrl+C to shut down.

To test out our new application now that it's running, just navigate to http://localhost:5000/ in a browser. (Dealing with HTTPS setup is outside the scope of this article.)

Note that we're assuming all commands are being executed with the current directory set to the solution root folder. If the current directory is a project directory and there is only one *.csproj file in that directory, we can just type dotnet run.
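For example, a hedged illustration of that shorthand:

cd HeyWorld
dotnet run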

Now that we have a tested web API application, let’s take the next step and put HeyWorld into a Docker image.
Using the standard template for dockerizing a .NET Core application, we’ll add a Dockerfile to our HeyWorld
project with this content:

FROM microsoft/dotnet:sdk AS build-env
WORKDIR /app

# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore

# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out

# Build runtime image
FROM microsoft/dotnet:aspnetcore-runtime
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "HeyWorld.dll"]



(Note that the previous text should be saved to a text file called Dockerfile in the
project directory — in this case, HeyWorld\Dockerfile.)

As this article is just about the dotnet CLI, I just want to focus on the two usages of that within the Dockerfile:

1. dotnet restore — As we learned above, this command will resolve any NuGet dependencies of the application.

2. dotnet publish -c Release -o out — The dotnet publish command will build the designated project and copy all the files that make up the application to a given location. In our case, dotnet publish will copy the compiled assembly for HeyWorld itself, all the assemblies referenced from NuGet dependencies, configuration files, and any files that are referenced in the csproj file.

Note that we had to explicitly tell dotnet publish to compile with the Release configuration through the usage of the -c Release flag. Any dotnet CLI command that compiles code (build, publish, or pack, for example) will by default build assemblies with the Debug configuration. Watch this behavior and remember to specify -c Release or --configuration Release if you are publishing a NuGet package or an application that is meant for production usage. You've been warned.

Just to complete the circle, we can now build and deploy our little HeyWorld appli-
cation through Docker with these commands:

docker build -t heyworld .


docker run -d -p 8080:80 --name myapp heyworld

The first command builds and locally publishes a Docker image named “heyworld”
for our application. The second command actually runs our application as a Docker
container named “myapp”. You can verify this by sending your browser to http://
localhost:8080.
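Alternatively, a hedged command-line equivalent of that browser check:

curl http://localhost:8080/
# expected response body: Hey, world.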

Summary
The dotnet CLI makes automating and scripting builds on .NET projects simple,
especially compared to the state of the art in .NET a decade or so ago. In many
cases, you may even eschew any kind of task-based build-script tool (Cake, Fake,
Rake, Psake, etc.) in favor of simple shell scripts that just delegate to the dotnet
CLI. Moreover, the dotnet CLI extensibility model makes it possible to incorporate
external .NET-authored command-line applications distributed via NuGet into your
automated builds.



Read online on InfoQ

DISTRIBUTED CACHING WITH ASP.NET CORE

by Matthew Groves

KEY TAKEAWAYS

• ASP.NET Core has a built-in distributed-caching interface.
• Performance, shared data, and durability are the primary benefits of distributed caching.
• Couchbase Server is a memory-first database that is great for use as a distributed cache.
• NuGet packages make it easy to add Couchbase Server to your application.
• Using the IDistributedCache interface abstracts away the details and makes it easy to interact with cache in your ASP.NET Core controllers.

Caching can help improve the performance of an ASP.NET Core application. Distributed caching is helpful when working with an ASP.NET application that's deployed to a server farm or scalable cloud environment.

Microsoft documentation contains examples of doing this with SQL Server or Redis, but in this post, I'll show you an alternative. Couchbase Server is a distributed database with a memory-first (or optionally memory-only) storage architecture that makes it ideal for caching. Unlike Redis, it has a suite of rich capabilities that you can use later on as your use cases and your product expand. But here, I'm going to focus on its caching capabilities and integration with ASP.NET Core. You can follow along with all the code samples on GitHub.


Benefits of distributed caching

One of the prime benefits of distributed caching is improved performance. A cache stores data in RAM for quick and easy retrieval. It will often be faster to retrieve data from this cache, rather than use the original source every time.

Another is shared cache data. If you are using a multi-server deployment for your ASP.NET Core application, a load balancer could direct your user to any one of your ASP.NET Core servers. If the cached data is on the web servers themselves, then you need to turn on sticky sessions to make sure that user is always directed to the same ASP.NET Core server. This can lead to uneven loads and other networking issues. See this Stack Overflow answer for more details.

There is also durability. If an ASP.NET Core web server goes down or you need to restart it for any reason, this won't affect your cached data. It will still be in the distributed cache after the restart.

No matter which tool you use as a distributed cache (Couchbase, Redis, or SQL Server), ASP.NET Core provides a consistent interface for any caching technology you wish to use.

Installing Couchbase

The first step is to get the distributed-cache server running. Choose the installation method that's most convenient for you. You can use Docker or a cloud provider, or you can install it on your local machine (which is what I did for this article). Couchbase Server is a free download, and you can use the free Couchbase Community edition or the Enterprise Edition. (The Enterprise Edition is free and unlimited for pre-production use.) I'll be using the Community edition here.

When you install Couchbase, you'll open up your web browser and go through a short wizard as shown in figure 1. The default settings are fine for these examples.

Once you've installed Couchbase, create a data bucket. This is where you will store your cached data. I called my bucket "infoqcache". I created an "Ephemeral" bucket (which is a memory-only option). You can also use a "Couchbase" bucket (which will store the data in memory first and persist to disk asynchronously); see figure 2.

The last step in setting up Couchbase is security. Add a Couchbase user with appropriate permissions to that bucket.


I called my user "infoq" and gave it a password of "password" (please use something stronger in production!). The Enterprise Edition offers a lot of roles to choose from but we don't need them for this simple use case. "Bucket Full Access" for infoqcache is enough. (Figure 3)

Make sure you've completed all these installation steps before moving on to ASP.NET Core. Here are the steps with links to more detailed documentation:

1. Install Couchbase (follow the instructions on the downloads page).

2. Set up Couchbase (see the Explore the Server Configuration page).

3. Create a bucket (Creating a Bucket).

4. Create a user with permission to the bucket (Creating and Managing Users with the UI).

Create a new ASP.NET Core application

I'm going to create a sample ASP.NET Core API application to show the distributed-caching capabilities of ASP.NET Core. This will be a small, simple application with two endpoints.

I'm using Visual Studio 2017. From there, I select File→New→Web→ASP.NET Core Web Application. (Figure 4)

The next step is to select what kind of ASP.NET Core project template to use. I'm using a barebones API with no authentication and no Docker support. (Figure 5)

This project has a ValuesController.cs file. I'm going to replace most of the code in this class with my own code. Here is the first endpoint that I will create. It doesn't use any caching and has a Thread.Sleep to simulate high-latency data access (imagine replacing that Thread.Sleep with a call to a slow web service or a complex database query).

[Route("api/get")]
public string Get()
{
    // generate a new string
    var myString = Guid.NewGuid() + " " + DateTime.Now;


// wait 5 seconds
(simulate a slow
operation)
Thread.Sleep(5000);

// return this value


return myString;
}

Start that website (Ctrl+F5 in Vi-


sual Studio). You can use a tool
like  Postman  to interact with the
endpoint but a browser is good
enough for this example. In my
sample project, the site will launch
to  localhost:64921, and I con-
figured the endpoint with a route
of  api/get. So, in a browser I go
to localhost:64921/api/get: Figure 5

This a trivial example, but it shows


that this endpoint is a) getting
some unique string value and b)
taking a long time to do it. Ev-
ery refresh will result in at least
a five-second wait. This would be
a great place to introduce caching
to improve latency and perfor-
mance. (Figure 6)
Figure 6

ASP.NET Core and Couchbase integration

We now have an ASP.NET Core application that needs caching and a Couchbase Server instance that wants to help out. Let's get them to work together.

The first step is to install a package from NuGet. You can use the NuGet UI to search for Couchbase.Extensions.Caching, or you can run this command in the Package Manager Console: Install-Package Couchbase.Extensions.Caching -Version 1.0.1. This is an open-source project and the full source code is available on GitHub.

NuGet will install all the packages you need for your ASP.NET Core application to talk to Couchbase Server and to integrate with ASP.NET Core's built-in distributed-caching capabilities.

Now open up the Startup.cs file in the project. You will need to add some setup code to the ConfigureServices method here.

services.AddCouchbase(opt =>
{
    opt.Servers = new List<Uri>
    {
        new Uri("http://localhost:8091")
    };
    opt.Username = "infoq";
    opt.Password = "password";
});

services.AddDistributedCouchbaseCache("infoqcache", opt => { });

(I also added using Couchbase.Extensions.Caching; and using Couchbase.Extensions.DependencyInjection; at the top of the file, but I use ReSharper to identify and add those for me automatically.)

In the above code, AddCouchbase and AddDistributedCouchbaseCache are extension methods that add to the built-in ASP.NET Core IServiceCollection interface.

With AddCouchbase, I'm telling ASP.NET Core how to connect to Couchbase, giving it the user and password that I chose earlier.



With AddDistributedCouchbaseCache, I'm telling ASP.NET Core how to use Couchbase as a distributed cache, specifying the name of the bucket I created earlier.

Documentation for this extension is available on GitHub. Don't forget to add clean-up/tear-down code in the ConfigureServices method.
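As a sketch of what that tear-down can look like (following the pattern documented for the Couchbase.Extensions.DependencyInjection package; the exact hook shown here is an assumption based on that documentation), the Couchbase connections can be closed when the application stops:

public void Configure(IApplicationBuilder app, IHostingEnvironment env,
    IApplicationLifetime applicationLifetime)
{
    // ... the rest of the pipeline configuration ...

    // Close the Couchbase cluster/bucket connections cleanly on shutdown.
    applicationLifetime.ApplicationStopped.Register(() =>
    {
        app.ApplicationServices
            .GetRequiredService<ICouchbaseLifetimeService>()
            .Close();
    });
}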

Using ASP.NET Core's distributed caching

Now that we've configured ASP.NET Core to know how to cache, let's put it to use in a simple example.

The simplest thing we can do with distributed caching is to inject it into the ValuesController and directly use an IDistributedCache.

First, add IDistributedCache as a parameter to the constructor.

private readonly IDistributedCache _cache;

public ValuesController(IDistributedCache cache)
{
    _cache = cache;
}

Since we already configured the distributed cache in Startup.cs, ASP.NET Core knows how to set this parameter (using dependency injection). Now, _cache is available in ValuesController to get/set values in the cache. I wrote another endpoint called GetUsingCache. This will be just like the Get endpoint earlier, except it will use caching. After the first call, it will store the value and subsequent calls will no longer reach the Thread.Sleep.

[Route("api/getfast")]
public string GetUsingCache()
{
    // is the string already in the cache?
    var myString = _cache.GetString("CachedString1");
    if (myString == null)
    {
        // string is NOT in the cache

        // generate a new string
        myString = Guid.NewGuid() + " " + DateTime.Now;

        // wait 5 seconds (simulate a slow operation)
        Thread.Sleep(5000);

        // put the string in the cache
        _cache.SetString("CachedString1", myString);

        // cache only for 5 minutes
        /*
        _cache.SetString("CachedString1", myString,
            new DistributedCacheEntryOptions {
                SlidingExpiration = TimeSpan.FromMinutes(5)});
        */
    }

    return myString;
}



The first request to /api/getfast will still be  slow — but refresh the
page and the next request will pull from the cache. Switch back to the
Couchbase console, click “Buckets” in the menu, and you’ll see that the
“infoqcache” bucket now contains one item.

One important thing to point out in ValuesController is that none of it is directly coupled to any Couchbase library. It all depends on the ASP.NET Core libraries. This common interface gives you the ability to use Couchbase distributed caching any place that uses the standard Microsoft ASP.NET Core libraries. Also, it's all encapsulated behind the IDistributedCache interface, which makes it easier for you to write tests.
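For instance, a unit test can hand the controller the in-memory IDistributedCache implementation that ships with ASP.NET Core instead of Couchbase. A minimal sketch (the MemoryDistributedCache constructor shown matches the 2.x-era Microsoft.Extensions.Caching.Memory package; earlier versions take an IMemoryCache instead):

// Arrange: swap Couchbase for the framework's in-memory cache
var cache = new MemoryDistributedCache(
    Options.Create(new MemoryDistributedCacheOptions()));
var controller = new ValuesController(cache);

// Act: the first call fills the cache, the second should read from it
var first = controller.GetUsingCache();
var second = controller.GetUsingCache();

// Assert (xUnit): the cached value is returned on the second call
Assert.Equal(first, second);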

In the above example, the cached data will live in the cache indefinitely. But you can also specify an expiration for the cache. In the example below, the endpoint will cache data for five minutes (on a sliding expiration).

_cache.SetString("CachedString1", myString,
    new DistributedCacheEntryOptions {
        SlidingExpiration = TimeSpan.FromMinutes(5)});
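A sliding expiration resets the five-minute countdown every time the entry is read. If you instead want the entry to expire five minutes after it was written, no matter how often it is read, the same options class also supports an absolute expiration (a small sketch):

_cache.SetString("CachedString1", myString,
    new DistributedCacheEntryOptions {
        AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5)});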

Summary

ASP.NET Core can work hand in hand with Couchbase Server for distributed caching. ASP.NET Core's standard distributed-cache interface makes it easy for you to start working with the cache. Next, get your ASP.NET Core distributed applications up to speed with caching.

If you have questions or comments about the Couchbase.Extensions.Caching project, make sure to check out the GitHub repository or the Couchbase .NET SDK forums.




AZURE AND .NET CORE ARE BEAUTIFUL TOGETHER

by Eric Boyd

KEY TAKEAWAYS

• ASP.NET Core is cross platform (Windows/macOS/Linux), which pairs nicely with Azure's hosting of Windows and Linux virtual machines (VMs).

• ASP.NET Core includes a built-in container that supports constructor injection.

• Configure this container's services in ConfigureServices in the ASP.NET Core app's Startup class.

• Azure Resource Manager templates allow for the scripted configuration of Azure VMs and the software they need.

• Azure App Service abstracts away the lower-level details of server maintenance and lets developers deploy ASP.NET projects straight to Azure.

One of the most interesting features of .NET Core is the cross-platform support, both at development time and at runtime. No longer are we limited to Windows for .NET; we can now use both Linux and macOS for development and app runtime. Also, there are no constraints that require the same development and runtime platforms so we can develop our .NET Core apps on a Mac and then deploy to Windows and Linux servers.



Azure, the Microsoft cloud platform

Azure, Microsoft's cloud platform, is a great match for .NET Core apps because of the wide range of infrastructure and platform services for hosting these applications, along with the broad cross-platform support. Azure has a set of foundational infrastructure services that provide compute, storage, and networking capabilities, which enable customers to deploy virtualized servers very much like managing infrastructure in a traditional data center. This approach provides customers with great control of the infrastructure and OS configuration that will host the applications. Azure VMs support multiple versions of Windows Server and multiple distributions of Linux, including Red Hat, CentOS, SUSE, and more.

Before we can deploy our .NET Core apps into Azure, we need to set up an application host or runtime in Azure. There are numerous ways we can deploy infrastructure and services in Azure. The easiest way to get started is using the Azure Portal. From the portal, we can find the services we need in the marketplace and go through a series of guided questions to configure and deploy these services. Once the VM is in a running state, we can remotely manage and configure it by using Remote Desktop if it is running Windows or using SSH if it's running Linux.

If you're a fan of DevOps like me, you probably like to script and automate as much as you can so it's repeatable and streamlined. Azure Resource Manager (ARM) templates allow us to automate the service deployment in Azure. ARM templates are simply JSON files that define the resources that we want to deploy and their relationships to each other. These ARM templates are very popular, and there is a GitHub repo that contains hundreds of pre-built templates for lots of services, platforms, and configurations.

In addition to deploying and configuring Azure services, we can use ARM templates to configure the OS and install other dependencies using VM extensions. For example, if we are setting up a web server on Ubuntu Linux, we will need to deploy the Ubuntu Linux VM and then deploy a web server like Apache. Using the Custom Script Extension, we can execute our custom script after the VM has finished deploying. These custom scripts can do things like install other services and application servers like Apache and PHP. We can see an example of an ARM template that deploys an Ubuntu Linux server with Apache in the Azure Quickstart Templates repo at GitHub. In the rendered README.md file there, we can click the Deploy to Azure button as shown in Figure 1 to start the deployment of the selected template into our Azure subscription. Once we have a web server, we can deploy our ASP.NET Core apps and run them in Azure.

Figure 1: Ubuntu with Apache GitHub README.md.



Creating an ASP.NET Core app

Now it's time to create a .NET Core app to deploy to Azure. Using Visual Studio 2017, I created a simple web API using ASP.NET Core. Since the new "Hello World!" web app seems to be a to-do list, I created a to-do-list web API.

To get started, I created a new project in Visual Studio and selected the Web category and the ASP.NET Core Web Application template as shown in Figure 2.

Figure 2: Choosing a new ASP.NET Core Web Application in Visual Studio 2017.

After creating the project, I added a model class that defines the properties for a to-do-list item using the code shown in Figure 3. I kept it pretty lightweight and only created properties for the ID and name of the to-do-list item and a Boolean to track if the item is completed.

public class TodoItem
{
    public string Id { get; set; }
    public string Name { get; set; }
    public bool IsComplete { get; set; }
}

Figure 3: TodoItem.cs model class.

I like to use the repository pattern when creating data-access classes, so I created an interface for the list repository as shown in Figure 4. This defines all the methods I need for data access including a get method to read an individual to-do-list item, a get method to return a list of all to-do-list items, and methods to add, update, and delete to-do-list items.

public interface ITodoItemRepository
{
    TodoItem Get(string id);
    IList<TodoItem> Get();
    void Add(TodoItem item);
    void Update(TodoItem item);
    void Delete(string id);
}

Figure 4: The ITodoItemRepository.cs list-repository-pattern interface.



I then created the implementation of the list-item repository interface using Entity Framework (EF), as shown in
Figure 5. This includes the EF context class and the repository class that uses the EF context.

public class TodoContext : DbContext
{
    public TodoContext(DbContextOptions<TodoContext> options)
        : base(options)
    {
    }

    public DbSet<TodoItem> TodoItems { get; set; }
}

public class TodoItemRepository : ITodoItemRepository
{
    private readonly TodoContext _context;

    public TodoItemRepository(TodoContext context)
    {
        _context = context;

        if (!_context.TodoItems.Any())
        {
            _context.TodoItems.Add(new TodoItem { Name = "Item1" });
            _context.SaveChanges();
        }
    }

    public TodoItem Get(string id)
    {
        return _context.TodoItems.FirstOrDefault(t => t.Id == id);
    }

    public IList<TodoItem> Get()
    {
        return _context.TodoItems.ToList();
    }

    public void Add(TodoItem item)
    {
        _context.TodoItems.Add(item);
        _context.SaveChanges();
    }

    // (The Update and Delete implementations are cut off in the original
    // figure; they follow the same pattern against _context.)
}

Figure 5: TodoContext.cs and TodoListRepository.cs.



Lastly, I created the controller for the to-do-list web API using the code shown in Figure 6. The controller simply
uses ITodoItemRepository and executes the appropriate data-access methods.

[Produces("application/json")]
[Route("api/Todo")]
public class TodoController : Controller
{
    private ITodoItemRepository _repository;

    public TodoController(ITodoItemRepository repository)
    {
        _repository = repository;
    }

    [HttpGet]
    public IEnumerable<TodoItem> Get()
    {
        return _repository.Get();
    }

    [HttpGet("{id}", Name = "Get")]
    public TodoItem Get(string id)
    {
        return _repository.Get(id);
    }

    [HttpPost]
    public void Post([FromBody]TodoItem value)
    {
        _repository.Add(value);
    }

    [HttpPut("{id}")]
    public void Put(string id, [FromBody]TodoItem value)
    {
        // (id is a string here to match TodoItem.Id; the Delete action is
        // cut off in the original figure.)
        _repository.Update(value);
    }
}

Figure 6: TodoController.cs web API controller.

Now that I have completed the classes that implement the to-do-list web API, I need to configure a couple of things for the web API to work. When I created the web API controller implementation, I mentioned that I'm using the ITodoItemRepository, but after reviewing the code, you may be wondering how that ITodoItemRepository field gets an instance of the TodoItemRepository that implements Entity Framework. ASP.NET Core has built-in dependency-injection container support to inject implementations at runtime, and with a call to an IServiceCollection.Add* method in the Startup.cs as shown in Figure 7, I can associate the TodoItemRepository class with the ITodoItemRepository interface so that whenever a field of type ITodoItemRepository is needed, it can be initialized with an instance of the TodoItemRepository implementation. In this case, I am using the AddScoped() method, which creates a new instance per request and is recommended for Entity Framework. You can read more about the service-lifetime options.
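To make the lifetime choices concrete, here is a small sketch of all three registration styles (IEmailFormatter and IClock are hypothetical services used only for illustration):

// A new instance every time the service is resolved
services.AddTransient<IEmailFormatter, EmailFormatter>();

// One instance per HTTP request (the recommended scope for EF contexts and repositories)
services.AddScoped<ITodoItemRepository, TodoItemRepository>();

// A single instance shared for the lifetime of the application
services.AddSingleton<IClock, SystemClock>();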



public class Startup
{
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
        services.AddDbContext<TodoContext>(opt =>
            opt.UseInMemoryDatabase("TodoList"));

        services.AddScoped<ITodoItemRepository, TodoItemRepository>();
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }

        app.UseMvc();
    }
}

Figure 7: Startup.cs.

One other aspect that I need to configure is the data store for Entity Framework. For my simple "Hello World!" to-do-list API, I chose to use the in-memory database. In the Startup.cs shown in Figure 7, the call to IServiceCollection.AddDbContext configures the in-memory database for the Entity Framework context.

With this, we can now press F5 to run this application locally and start debugging with Visual Studio. In addition to running it on Windows, we could also run this in macOS or Linux OS, and we can deploy it to a Windows or Linux VM in Azure, too.

Deploy to Azure PaaS

As developers, we don't want to worry about deploying, configuring, and managing servers. Instead, we'd rather just focus time and energy on developing our applications. While lower-level infrastructure services are available to us, Azure offers many higher-level platform services that reduce the time spent setting up and configuring infrastructure, letting developers focus on their applications and data. This also eliminates the ongoing patching of the OS and application server and maintenance for the customer.

Azure App Service is one of those higher-level platform services that abstract away and hide the servers and infrastructure and just provide a target for deploying our web applications. In Visual Studio, we can right-click on an ASP.NET project and select the Publish option as shown in Figure 8 to start deploying a web application to Azure App Service.

Figure 8: Visual Studio project-context menu with Publish highlighted.

After we select the Publish option, a screen in Visual Studio will display some deployment target options, and Azure App Service is the first option in that list as shown in Figure 9.



Figure 9: Publish screen in Visual Studio.

If we select Microsoft Azure App Service, we can then select whether to create a brand new web app by selecting the Create New option or to update an existing web app in an Azure subscription by selecting the Select Existing option. After that, we can click the Publish button to start walking through the guided deployment process. This will deploy to the default Azure App Service running on Windows, but now we can deploy to the Azure App Service running on Linux and even use Docker containers.

In Visual Studio, we can easily enable Docker support for an ASP.NET Core application by right-clicking on our ASP.NET Core project, selecting Add in the context menu, then selecting Docker Support as shown in Figure 10.

Figure 10: Adding Docker Support to ASP.NET Core apps.

Visual Studio will then display a dialog that allows us to choose whether we want Windows or Linux Docker containers. After we select a platform, Visual Studio will add a Dockerfile to our project and will place in the Solution Explorer a docker-compose node that contains a couple of docker-compose.yml files.

Now when we right-click on our project and select Publish, we will see some different options, including Microsoft Azure App Service Linux and Container Registry as shown in Figure 11.

Figure 11: The Publish screen in Visual Studio for an ASP.NET Core web application with Docker support.

When we select Microsoft Azure App Service Linux, it will guide us through a deployment process that will set up an Azure App Service running on Linux, with Docker support, and we will also have the option to create or use an existing Docker container registry.

Azure is a large cloud platform with many services, and I've talked about infrastructure services like VMs and platform services like App Service, but there are also other application runtimes for .NET Core apps on Azure, including Azure Functions (serverless), Service Fabric (microservices), and Azure Container Service (Docker containers), to name a few. If you are interested in these other services, I encourage you to visit http://www.azure.com to learn more.

Conclusion

As I mentioned at the beginning of this article, one of my favorite features of .NET Core is the broad platform support that includes Windows, Linux, and macOS. Combined with Azure, this not only provides cross-platform development but also a cloud platform to host and run your applications that supports many OSs, including Windows Server and multiple distributions of Linux, and additionally provides many higher-level platform services like Azure App Service, Docker containers, and serverless computing with Azure Functions. This capability is extremely exciting and opens many possibilities by providing broad ecosystem support and being open.


Sponsored article

The .NET Renaissance is Happening Now.
Here's Why It's a New Era for .NET Developers.

When Shawn Neal set out to bring DevOps tooling to the Windows community, he wasn't sure what it would ultimately lead to. But he knew it was something he had to do.

"Open source is in my DNA. If something is useful and it works, we should share it with others," Neal said.

A simple, yet powerful, sentiment. These days, it rings true in once-unthinkable places: with Windows and .NET communities.

Neal helped bring Vagrant, Packer and Chef to the world of Windows. Since then, he's continued his efforts with open-source .NET at Pivotal. A hefty chunk of his expertise is captured in a new whitepaper he co-authored, "Cloud Native .NET The Right Way."

On the eve of the first Cloud Native .NET track at Cloud Foundry Summit, we sat down with Neal to discuss the state of .NET, and why he's bullish on its future.

How would you describe the state of the .NET Framework over the last 10 years?

The .NET world takes its cues from Microsoft. The community looks to Microsoft to lead the way, to show them where to go, how to change.

To be sure, Microsoft introduced incremental upgrades, but seemed more focused on JavaScript, WPF, and other non-.NET technologies. CIOs and other executives saw .NET apps as a liability. There wasn't a clear path to modernize applications. The anxiety with .NET shops was palpable. It was easy to understand why people would feel this way. When your most important business systems run on .NET, and you're not sure of the tech's future, there's a real risk there.

A few developers started to "forklift" their apps to Azure. That can offer incremental benefits, but doesn't give you cloud-native velocity. Other engineers tried ServiceStack and Nancy for microservices, with mixed results.

And then Microsoft announced .NET Core. This move, I think, was the catalyst for the .NET Renaissance we talk about. It changed everything. Overnight, executives had a way forward. And they could use their existing talent. Engineers had new tools to bring more agility to their business.

Now, you're seeing .NET developers adopt modern practices and good architecture. Engineers are leveling up their skills. And the really interesting thing? Microsoft isn't the only voice. Pivotal jumped in with Steeltoe to bring modern microservices patterns to .NET teams. There are other examples too, like HashiCorp and Chef.

Everything has changed for the better. It's exciting.

Why do you say .NET Core was the catalyst for this?

The executives and operations folks I talk to feel this way for one simple reason: .NET Core isn't tied to Windows. In 2018, the operating system is a commodity. That's true of Windows or Linux. Now, people care about platforms and distributed systems.

When the OS doesn't matter - when Windows doesn't matter - you can break free from the APIs that tethered you to the OS. Deep integration with the OS is now an anti-pattern. It's slow, inefficient, and you have to deal with licensing.

All that said, you are still going to have plenty of Windows Server deployments, and lots of .NET Framework apps for the next decade. That's because of the ASP.NET Webforms module. A huge part of your portfolio uses this component.

Click here to read the full article



.NET CORE AND DEVOPS

by Dave Swersky

KEY TAKEAWAYS

• DevOps is a worthy and rewarding pursuit no matter what technology stack you currently use.

• Closed-source, proprietary software and build processes do not work well with DevOps practices.

• .NET Core is open source, and was conceived and built for DevOps.

• The .NET Core CLI and Roslyn API make the entire delivery process open and adaptable.

• Automation is a large part of DevOps; .NET Core was built from the ground up to support build and deployment automation.

I've been developing software long enough to have been doing it when .NET 1.0 was in beta. I remember thinking that using .NET felt like cheating. Isn't this supposed to be hard, I wondered. Where is my malloc? I don't have to cast any pointer arithmetic spells? What is this Framework Class Library? I spent the first six months thinking it had to be some elaborate trick.

Fast forward to 2018, and we're all still happily writing code on the .NET Framework, without agonizing over memory allocation. Threading was handled for us by System.Thread, then BackgroundWorker, and now Task. FCL classes are marked thread-safe for us, or not, ahead of time. Want to write a web application? Here's a complete framework, batteries included.
So many of the things we had to handcraft ourselves are provided by .NET, on a virtual silver platter. The upshot is that we developers get to spend far more time writing code that provides business value (gasp!). The Assembly/C/C++ hipsters may cry foul, lamenting the lack of hardcore systems-programming knowledge in the average developer, yet here we are. I, for one, am not complaining!

.NET has been through many iterations, including four major versions, since that first beta. Its most recent iteration, .NET Core, is its most significant yet. .NET Core features include true cross-platform targeting, a modern command-line interface (CLI) and build system, and an open-source library, just to name a few. Those things are important, but the promise of .NET Core goes further. That promise goes to the way software is produced and delivered.

I've been writing software for over 20 years, so I'm also old enough to remember when source control was a curiosity reserved for large teams. "Automation" was not really in our lexicon — except to the extent that we automated business processes for our customers. Building/compiling software was something done, ironically, by a human being. A build manager would produce binaries on her own computer (and it would always work on her machine!).

Deploying software to the environments where it would run was (and too often, still is) a fragile, byzantine process of shared drives, FTP, and manual file copy/paste. Integrating the work of development teams was a miserable death march, playing whack-a-mole with one regression after the next. Is the software ready for production? Who knows?

Software was rapidly building its appetite for the world, yet the process of producing, deploying, and operating software-based systems was stuck in the days of Turing and Hopper. A revolution was in the air, which began sometime around 2008, and its name was DevOps.

The years between then and now have seen the rise of a movement. DevOps is a big thing that encompasses and perhaps supersedes the agile movement that came before it. I was introduced to DevOps in 2014 when I was handed a copy of The Phoenix Project at a conference. I made the fateful decision to crack the binding then and there, thinking I would read just a few pages. Silly me. My conference plans that day fell to the wayside as I devoured that book. It spoke to me, as it has to many. If you've been in the IT industry, even for a short time, you've been those characters. You can relate. DevOps has been a career focus for me since then.

DevOps is often presented as having three major legs: culture, process, and technology. This article is about the technology of DevOps. Specifically, it's about the technology that .NET Core brings to modern DevOps practices. .NET Core was conceived during the rise of DevOps. Microsoft clearly has well-defined goals to make .NET Core a DevOps-era platform. This article will cover three major topics of .NET Core and DevOps:

• the .NET Core framework and SDK,

• build automation, and

• application monitoring.

.NET Core Framework and SDK

DevOps doesn't exist in a vacuum. The technologies used to produce and deliver software-based systems can support DevOps practices or hinder them. DevOps is a worthy pursuit regardless of your technology stack. Having said that, the stack you choose will have a significant impact on your DevOps practice.

Closed-source, proprietary build systems are not DevOps-friendly. .NET Core is fully open source, and the file formats used to represent projects and solutions are thoroughly documented. Modern languages and frameworks such as Node.js/JavaScript, Ruby, and Python have been developed with a few common features:

• compact, open-source frameworks;

• command-line interfaces (CLIs);

• well-documented, open build systems; and

• support for all major operating systems.

These features and more have become popular in the DevOps era because they are easy to adapt and automate. The .NET Core CLI, dotnet, is the singular entry point to all build processes for a .NET Core application. The dotnet CLI works on developer workstations and build agents alike, regardless of platform. To wit: all the local development work I'll be demonstrating henceforth will be performed on a MacBook Pro. Try to imagine that, just three years ago!

The first step with .NET Core is to download it. If you're following along, go here and download the SDK. It's a lean, mean 171 MB on my Mac.
Once it's installed, open up your favorite terminal window (I'm partial to PowerShell when I'm in Windows, but it's iTerm2 on my Mac).

If you're already familiar with .NET development, you're used to big framework installations. You're accustomed to using Visual Studio to get work done. If you're new to .NET Core, this is going to feel a little strange, in a good way. We're going to get a lot done with these 171 megabytes before we ever touch an IDE.

Execute: dotnet

This is the new CLI command that allows you to interact with the .NET Core SDK on your system. The output here teases the available CLI options. Let's look deeper.

Execute: dotnet help

This is a list of all the commands supported by the CLI. It's not a long list, and it doesn't have to be. You're looking at everything you need to interact with the .NET Core framework build process, from a fresh canvas to a deployed application.

The first step is to create a new application. Let's look at our options.

Execute: dotnet new

The output will list the available templates. This is the part in Visual Studio where you click File→New Project, only here we're working from the command line. We have quite a few templates to choose from. I'm rather partial to Angular, so let's start there.

Execute: dotnet new angular -o dotnet-angular

This will create a new Angular project in a new directory called "dotnet-angular". You can create the directory manually if you prefer — just don't forget to change to it before you execute dotnet new or the project will be created in your current directory. (I learned that the hard way.)

If you already do Angular development, you'll likely have Node.js installed. If not, take a moment to download and install. If you do need to install Node.js, close and reopen your terminal after installation.

Execute: dotnet run

This command will compile and run the application (you can also compile without running the application by executing dotnet build). It may take a minute or two but then you'll have some output that includes a URL:

** Content root path: /Users/dswersky/dotnet-angular Now listening on: https://localhost:5001**

Copy the URL into a web browser and take a gander. What you should see now is a simple application running ASP.NET Core in the background and Angular on the front end. Let's take a breath for a moment and think about how this experience differs from the .NET development experience of yesteryear.

If you were following along, you created and ran a .NET Core application in a handful of minutes (even if you had to install .NET Core and Node!). A few questions might come to mind….

Where's my IDE?

We didn't need one to get to this point, did we? Obviously, if you want to edit this code, you'll need something to do that.
You will likely want to use a tool that has some understanding of .NET and Angular. No problem, you might think, I'll start Visual Studio Professional and get to work. You could do that… or you could download Visual Studio Code, which is nearly as capable, and free. You could use Visual Studio Community, which is also free. The point here is that it's no longer necessary to invest hundreds of dollars to get started developing with .NET Core. You can start small and grow organically.

Where's IIS?

This is a major difference between legacy (too soon?) .NET web-application development and ASP.NET Core. You can run ASP.NET Core applications in IIS, but you don't have to. The need to de-couple ASP.NET Core from IIS is obvious, considering the fact that .NET Core is truly cross-platform. The commands I listed here, including dotnet run, work equally well and precisely the same way on Windows, Mac, and Linux (there's even an ARM build that will run on a Raspberry Pi!). This Angular application is one of many that you can now "write once, run anywhere."

Hosting .NET applications without IIS has been possible for some time. The Open Web Interface for .NET (OWIN) has supported self-hosted ASP.NET applications for years. That was made possible by code and infrastructure generally referred to as Project Katana. .NET Core uses an HTTPS server called Kestrel. Kestrel is a fast, high-performance, open-source HTTPS server for .NET applications. Kestrel provides HTTPS to ASP.NET Core websites and RESTful services running anywhere, including Windows, Linux, and in container orchestrators. Kestrel makes ASP.NET Core applications completely self-contained, with no external dependencies on a Windows-based HTTPS server.

What does this have to do with DevOps?

Automation is a core tenet and practice of DevOps. The portability, the CLI, and the open-source build system offered by .NET Core are essential to DevOps practices. Most importantly, they make it easy to automate the build and deployment processes. That automation can be accomplished by scripting the CLI or programmatically by automating the build system directly. These features of .NET Core make it not just possible but relatively easy to automate complex build processes. This brings us to build automation and continuous integration.

.NET Core build automation

Back in the days of Visual SourceSafe ("we get it, Dave, you're ancient"), it occurred to me that the code my team was pushing into that repository was there, available and ready to be compiled. An idea tickled the back of my mind: why build deployments from my system when it could be done from there? I wasn't the only one to have that idea, yet I certainly can't claim to be one of the few that did something with it. That claim belongs to the brave souls that embarked on the development of continuous-integration (CI) systems.

The purpose of CI is simple to say, not so simple to achieve: Always have a build ready for deployment.

Software development is a team sport. The average agile/Scrum team has three to five full-time developers actively contributing code. They split the work they do among themselves for the sake of efficiency. The code they produce must be combined, built, and tested as a single unit. That testing must be automated, using a system that does not have developer tools installed.

Ideally, build and test should happen every time new code is merged to a designated branch (this would be master in trunk-based development). CI systems are usually integrated directly with source-control systems, triggering a new build each time the CI branch is changed.

Roslyn is an open-source compiler for .NET, with a wealth of APIs you can access directly. CI system developers use the compiler APIs to build plugins, which in turn automate .NET build processes. .NET Core build tools such as Roslyn provide fine-grained control over the build process. Developers can use them to adapt and extend existing CI system features to cover almost any conceivable build-pipeline use case. The best part is that you don't have to be a CI system developer to build a plugin. Maintainers and vendors of CI systems go to great lengths to make their systems easy to extend.

There are a number of CI systems out there. Here's a brief list that is by no means complete:

• Jenkins,

• Azure DevOps Server/Azure DevOps,

• CircleCI,

• TeamCity, and

• GitLab.

The flexibility .NET Core offers allows it to work with any CI system. That can be as simple as a script working with the CLI or plugins that use the compiler APIs to automate the build directly.
If you currently have a favorite CI system, you can try it out with my sample project. This is the same project that we created with the CLI earlier, with a little extra. The repository includes a Dockerfile. It took me about 10 minutes to create an Azure DevOps (formerly Visual Studio Team Services) build pipeline that pulls the code from GitHub, builds an image, and pushes it to an Azure Container Registry. This would work just as well with a Jenkinsfile or GitLab pipeline, in AWS or Google Cloud. The possibilities are, as they say, nearly endless.
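To make the earlier Roslyn point concrete, here is a minimal sketch of the kind of check a CI plugin or pipeline step can run with the compiler APIs (it assumes the Microsoft.CodeAnalysis.CSharp NuGet package and takes a source-file path as its argument):

using System;
using System.IO;
using System.Linq;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;

class PipelineCheck
{
    static void Main(string[] args)
    {
        // Parse one C# source file into a syntax tree
        var source = File.ReadAllText(args[0]);
        SyntaxTree tree = CSharpSyntaxTree.ParseText(source);

        // Surface parse-level errors, as a CI gate might before a full build
        foreach (Diagnostic diagnostic in tree.GetDiagnostics()
            .Where(d => d.Severity == DiagnosticSeverity.Error))
        {
            Console.WriteLine(diagnostic);
        }
    }
}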

Application monitoring with .NET Core

The care and feeding of software systems is a full-time job; just ask your Ops colleagues. Those systems are like a cranky, colicky baby — constantly needing some kind of attention. Ops staff are often like the confused new parent, at a loss for why the system is screaming for that attention. How do systems scream for attention? That depends on how you watch them or don't!

The worst way to monitor systems is not to monitor them. Whether you monitor or not, one way or another, you'll eventually find out when they break. When your customers call in a rage, or just quit your services altogether, you'll find out after it's too late. The goal of application monitoring is to detect problems before your customers or end users (there's really no practical difference) do. Many companies make the false-economy judgement that application monitoring is too expensive or that "well-made systems don't need monitoring." Don't buy it.

Even the most stable system is only one failure or change away from catastrophe. DevOps practices try to balance safety with velocity, allowing companies to innovate by moving fast and safely at the same time. That balance is maintained by keeping a close watch on the operational parameters of your system.

.NET Core design and architecture is well-suited to application monitoring. ASP.NET Core is an excellent example. It is possible to customize the internal request/response behavior of ASP.NET 3.x/4.x applications, running on IIS, using HTTP modules. ASP.NET Core improves on that model with middleware, which is similar in concept to HTTP modules but quite different in implementation.
Middleware classes are integrated through code, and are much simpler to configure. They form a request/response pipeline chain of modifiers for the response to a request.
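As a sketch of how a piece of that pipeline looks in code, here is a hypothetical RequestTimingMiddleware (not from the article's sample project) with the same shape that monitoring packages use:

using System;
using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

public class RequestTimingMiddleware
{
    private readonly RequestDelegate _next;

    public RequestTimingMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        var stopwatch = Stopwatch.StartNew();
        await _next(context); // hand the request to the next middleware in the chain
        stopwatch.Stop();

        // Report the timing to whatever monitoring sink you use
        Console.WriteLine(
            $"{context.Request.Path} took {stopwatch.ElapsedMilliseconds} ms");
    }
}

// Wired up in Startup.Configure, before app.UseMvc():
// app.UseMiddleware<RequestTimingMiddleware>();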
Injecting middleware into an ASP.NET Core application that performs monitoring is almost trivially easy. I'll demonstrate an example with Azure Application Insights. I created an Application Insights resource in my Azure portal, then edited exactly three files in my repository to enable Application Insights monitoring:

• dotnet-angular.csproj — added a line to reference the Application Insights assembly (this manual step was only necessary because I was using Visual Studio for Mac; details here).

• appsettings.json — added my Application Insights instrumentation key.

• Startup.cs — where middleware is configured. I added the Application Insights middleware here.

Having done these things, I was able to start debugging locally and gather monitoring data from Application Insights. You can try it out yourself — just replace the sample key in appsettings.json with your key.

Application Insights is not your only option for application monitoring. AppMetrics is an open-source monitoring library that integrates with visualization tools such as Grafana. There are also paid options from vendors that offer enterprise features.

All these monitoring options provide transparency: the ability to view the behavior of your application in its runtime environments (especially production!). This is critical to DevOps practice, as it allows you to verify, with hard numbers, that the changes you are making to your system are not degrading its performance. You can then add new features with the confidence that moving fast does not have to break things.

Conclusion

.NET Core was conceived and developed with DevOps practices in mind. The CLI and open build system and libraries make it possible to automate and adapt the software delivery process to just about any imaginable set of requirements. Build automation and continuous integration are achieved through CLI scripting, or deeper programmatic integration if you prefer. Application monitoring with open-source or paid enterprise tools is available to turn your system from a black box to a clean pane of glass. .NET Core, delivered using DevOps practices, is a compelling platform for modern software systems.



ADVANCED ARCHITECTURE FOR ASP.NET CORE WEB APIS

by Chris Woodruff

KEY TAKEAWAYS

• ASP.NET Core's new architecture offers benefits that the legacy ASP.NET technology lacks.

• ASP.NET Core benefits from incorporating support for dependency injection from the start.

• The single-responsibility principle simplifies implementation and design.

• The ports-and-adapters pattern decouples business logic from other dependencies.

• Decoupled architecture makes testing much easier and more robust.

The Internet is a very different place than it was five years ago, let alone 20 years ago, when I first started as a professional developer. Today, web APIs connect the modern Internet and drive both web applications and mobile apps. The skill of creating robust web APIs that other developers can consume is in high demand. APIs that drive most modern web and mobile apps need to have the stability and reliability to stay in service even when traffic hits the performance limits.



This article will describe the architecture of an ASP.NET Core 2.0 web API that uses hexagonal architecture and the ports-and-adapters pattern, starting with a look at the new features of .NET Core and ASP.NET Core that benefit web APIs.

The solution and all code from this article's examples can be found in my GitHub repository ChinookASPNETCoreAPIHex.

.NET Core and ASP.NET Core for web APIs

ASP.NET Core is a new web framework that Microsoft built on top of .NET Core to shed the legacy technology that has been around since .NET 1.0. By comparison, ASP.NET 4.6 still uses the System.Web assembly that contains all the Webform libraries and as a result is still brought into more recent ASP.NET MVC 5 solutions. By shedding these legacy dependencies and developing the framework from scratch, ASP.NET Core 2.0 gives the developer much better performance and executes across multiple platforms. With ASP.NET Core 2.0, our solutions will work as well on Linux as they do on Windows.

There is more about the benefits of .NET Core and ASP.NET Core in the other articles in this collection.

Architecture

Building a great API depends on great architecture. This article will look at many aspects of API design and development, from the built-in functionality of ASP.NET Core to architecture philosophy to design patterns. There is much planning and thought behind this architecture, so let's get started.

Dependency injection

Before we dig into the architecture of our ASP.NET Core web API, I want to discuss dependency injection (DI), which I believe is the greatest benefit for .NET Core developers. I know you will point out that we had DI in .NET Framework and ASP.NET solutions, and I agree, but the DI we used in the past came from third-party commercial providers or maybe open-source libraries. These did a good job, but there was a steep learning curve for a good portion of .NET developers and all DI libraries had their unique ways of handling things. Today, .NET Core has DI built right into the framework. We get it out of the box and moreover it is quite simple to work with.

The reason we need to use DI in our API is that it allows us to easily decouple architecture layers and also lets us mock the data layer or have multiple data sources built for our API.

To use the .NET Core DI framework, our project must reference the Microsoft.AspNetCore.All NuGet package (which contains a dependency on the Microsoft.Extensions.DependencyInjection.Abstractions package). This package gives access to the IServiceCollection interface, which has a System.IServiceProvider interface on which we can call GetService<TService>. To get the services we need from the IServiceCollection interface, we will need to add the services our project needs.

To learn more about .NET Core DI, I recommend reviewing "Dependency injection in ASP.NET Core" at Microsoft Docs.

The design of any architecture should use proven patterns and architectures and allow deep maintainability, and we will keep those aspects in mind.

Maintainability of the API

Maintainability for any engineering process is the ease with which a product can be maintained: finding and correcting defects, preventing malfunctions, maximizing a product's useful life, and coping with future maintenance, requirements, or a changing environment. This can be a difficult road to go down without a well-planned and well-executed architecture.

Maintainability is a long-term issue and should be considered with a long-term vision of the API. We need to make decisions that lead to this future vision and not to short-term shortcuts that seem to make life easier right now. Making hard decisions at the start will allow a project to have a long life and provide benefits that users demand.

What gives a software architecture high maintainability? How do we evaluate if our API can be maintained? Here are some thoughts:

• The architecture should allow for changes that have minimal if not zero impact on other areas of the system.

• Debugging the API should be easy and not require a difficult setup. We should have established patterns and use common methods (such as browser debugging tools).

• Testing should be as automated as possible and be clear and uncomplicated.

Interfaces and implementations

The key to our API architecture is the use of C# interfaces to allow for alternative implementations.



Anyone who has written .NET code with C# probably has used interfaces. We will use interfaces in our API to build a contract in the domain layer that guarantees that any data layer we develop for the API adheres to the contract for data repositories. It also allows the controllers in the API project to adhere to another established contract for getting the correct methods to process the API methods in the domain project's supervisor. Interfaces are very important to .NET Core and anyone who needs a refresher should go here.
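For anyone who wants that refresher inline, a trivial sketch (with hypothetical names) of an interface contract and two alternative implementations:

public interface IGreeter
{
    string Greet(string name);
}

public class EnglishGreeter : IGreeter
{
    public string Greet(string name) => $"Hello, {name}!";
}

public class FrenchGreeter : IGreeter
{
    // A second implementation satisfying the same contract,
    // swappable anywhere an IGreeter is expected
    public string Greet(string name) => $"Bonjour, {name}!";
}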

Ports-and-adapter pattern
We want our objects throughout our API solution to have single responsibilities. This keeps our objects simple and allows us to easily fix bugs or enhance our code. If code is difficult to change, it might be violating the single-responsibility principle. As a general rule, I look at the implementations of the interface contracts for length and complexity. I do not limit the number of lines of code in my methods, but if a method extends past a single view in the IDE, it might be too long. Also, check the cyclomatic complexity of methods to determine the complexity of a project's methods and functions.

The ports-and-adapter pattern (a.k.a. hexagonal architecture) is a way to fix this problem of having business logic coupled too tightly to other dependencies such as data access or API frameworks. Using this pattern will allow our API solution to have clear boundaries, well-named objects with single responsibilities, and easier development and maintainability.

We can imagine the pattern best like an onion, with ports on the outside of the hexagon and the adapters and business logic located in layers closer to the core. I see the external connections of the architecture as the ports. The API endpoints that are consumed or the database connection used by Entity Framework Core 2.0 would be examples of ports while the internal data repositories would be the adapters.



Let's look at the logical segments of our architecture and some examples of code.

Domain layer

We need to explain how we build out the contracts through interfaces and the implementations of our API business logic. Let's look at the domain layer. The domain layer has the following functions:

• It defines the entity objects that will be used throughout the solution. These models will represent the data
layer’s DataModels.
• It defines the ViewModels that the API layer will use for HTTP requests and responses as single objects or
sets of objects.
• It defines the interfaces through which our data layer can implement the data-access logic.
• It implements the supervisor that will contain methods called from the API layer. Each method will represent
an API call and will convert data from the injected data layer to ViewModels to be returned.
Our domain entity objects are a representation of the database that we are using to store and retrieve data used for the API business logic. Each entity will contain the properties represented in the SQL table. As an example, this is the Album entity:

public sealed class Album
{
    public int AlbumId { get; set; }
    public string Title { get; set; }
    public int ArtistId { get; set; }

    public ICollection<Track> Tracks { get; set; } = new HashSet<Track>();

    public Artist Artist { get; set; }
}

The Album table in the SQL database has three columns: AlbumId, Title, and ArtistId. These three properties are part of the Album entity, along with navigation properties for the album's associated tracks and its artist. As we will see in the other layers in the API architecture, we will build upon this entity object's definition for the ViewModels in the project.

The ViewModels are the extensions of the entities and supply more information to the consumers of the
APIs. Let’s look at the Album ViewModel. It is similar to the Album entity but has an additional property. In the
design of my API, I determined that each Album should have the name of the artist in the payload passed back
from the API. This allows the API consumer to have that crucial piece of information about the Album without
having to have the Artist ViewModel passed back in the payload (especially when we are sending back a large
set of albums). An example of our Album ViewModel is below.

public class AlbumViewModel
{
    public int AlbumId { get; set; }
    public string Title { get; set; }
    public int ArtistId { get; set; }
    public string ArtistName { get; set; }

    public ArtistViewModel Artist { get; set; }

    public IList<TrackViewModel> Tracks { get; set; }
}



The other area that is developed into the domain layer is the contracts for each of the entities defined in the
layer. Again, we will use the Album entity to show the interface that is defined: 

public interface IAlbumRepository : IDisposable
{
    Task<List<Album>> GetAllAsync(CancellationToken ct = default(CancellationToken));
    Task<Album> GetByIdAsync(int id, CancellationToken ct = default(CancellationToken));
    Task<List<Album>> GetByArtistIdAsync(int id, CancellationToken ct = default(CancellationToken));
    Task<Album> AddAsync(Album newAlbum, CancellationToken ct = default(CancellationToken));
    Task<bool> UpdateAsync(Album album, CancellationToken ct = default(CancellationToken));
    Task<bool> DeleteAsync(int id, CancellationToken ct = default(CancellationToken));
}

The interface defines the methods needed to implement the data-access methods for the Album entity. Each
entity object and interface is well defined and simple, and that allows the next layer to be well defined.

Finally, the core of the domain layer is the Supervisor class. Its purpose is to translate to and from entities and
ViewModels and to perform business logic away from the API endpoints and the data-access logic. Having the
supervisor handle this also isolates the logic to allow unit testing on the translations and business logic.

Looking at the Supervisor method for acquiring and passing a single Album to the API endpoint, we can see
the logic in connecting the API front end to the data access injected into the supervisor while keeping each
isolated.

public async Task<AlbumViewModel> GetAlbumByIdAsync(int id,
    CancellationToken ct = default(CancellationToken))
{
    var albumViewModel = AlbumCoverter.Convert(
        await _albumRepository.GetByIdAsync(id, ct));
    albumViewModel.Artist = await GetArtistByIdAsync(albumViewModel.ArtistId, ct);
    albumViewModel.Tracks = await GetTrackByAlbumIdAsync(albumViewModel.AlbumId, ct);
    albumViewModel.ArtistName = albumViewModel.Artist.Name;
    return albumViewModel;
}
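The converter itself isn't shown here; a minimal sketch of what AlbumCoverter.Convert could look like (the real class in the sample repository may differ) is a straightforward property-by-property mapping:

public static class AlbumCoverter
{
    public static AlbumViewModel Convert(Album album)
    {
        // Map the stored columns onto the ViewModel; Artist, Tracks, and
        // ArtistName are filled in afterwards by the supervisor (as above).
        return new AlbumViewModel
        {
            AlbumId = album.AlbumId,
            Title = album.Title,
            ArtistId = album.ArtistId
        };
    }
}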

Keeping most of the code and logic in the domain layer will allow every project to adhere to the single-respon-
sibility principle. 

Data layer
We are using Entity Framework Core 2.0 in this example, which means that we have the Entity Framework Core’s
DBContext defined and the data models generated for each entity in the SQL database. The data model for the
Album entity, for example, has three properties stored in the database along with a property that contains a list
of associated tracks to the album and a property that contains the artist object.

While we can have a multitude of data-layer implementations, remember that all must adhere to the requirements documented in the domain layer: each data-layer implementation must work with the ViewModels and repository interfaces detailed there. The architecture we are developing uses the repository pattern to connect the API layer to the data layer. This is done using dependency injection (as discussed earlier) for each of the repository objects we implement (the "API layer" section contains the code for dependency injection). The key to the data layer is the implementation of each entity repository using the interfaces developed in the domain layer. The domain layer's Album repository shows that it implements the IAlbumRepository interface. Each repository will inject the DBContext that will allow access to the SQL database using Entity Framework Core.



public class AlbumRepository : IAlbumRepository
{
    private readonly ChinookContext _context;

    public AlbumRepository(ChinookContext context)
    {
        _context = context;
    }

    private async Task<bool> AlbumExists(int id, CancellationToken ct = default(CancellationToken))
    {
        return await GetByIdAsync(id, ct) != null;
    }

    public void Dispose()
    {
        _context.Dispose();
    }

    public async Task<List<Album>> GetAllAsync(CancellationToken ct = default(CancellationToken))
    {
        return await _context.Album.ToListAsync(ct);
    }

    public async Task<Album> GetByIdAsync(int id, CancellationToken ct = default(CancellationToken))
    {
        return await _context.Album.FindAsync(id);
    }

    public async Task<Album> AddAsync(Album newAlbum, CancellationToken ct = default(CancellationToken))
    {
        _context.Album.Add(newAlbum);
        await _context.SaveChangesAsync(ct);
        return newAlbum;
    }

    public async Task<bool> UpdateAsync(Album album, CancellationToken ct = default(CancellationToken))
    {
        if (!await AlbumExists(album.AlbumId, ct))
            return false;

        _context.Album.Update(album);
        await _context.SaveChangesAsync(ct);
        return true;
    }

    public async Task<bool> DeleteAsync(int id, CancellationToken ct = default(CancellationToken))
    {
        if (!await AlbumExists(id, ct))
            return false;
        var toRemove = _context.Album.Find(id);
        _context.Album.Remove(toRemove);
        await _context.SaveChangesAsync(ct);
        return true;
    }

    public async Task<List<Album>> GetByArtistIdAsync(int id, CancellationToken ct = default(CancellationToken))
    {
        return await _context.Album.Where(a => a.ArtistId == id).ToListAsync(ct);
    }
}



Having the data layer encapsulate all data access facilitates a better testing story for our API. We can build multiple data-access implementations: one for SQL database storage, another for, say, a cloud NoSQL storage model, and finally a mock storage implementation for the unit tests in the solution.
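
As an illustration of that last option, here is a sketch of what an in-memory mock implementation of the same repository interface could look like; the class name and seed data are illustrative, not the actual code from the solution:

using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public class AlbumMockRepository : IAlbumRepository
{
    // A fixed in-memory data set gives the unit tests predictable results.
    private readonly List<Album> _albums = new List<Album>
    {
        new Album { AlbumId = 1, Title = "Sample Album", ArtistId = 1 }
    };

    public Task<List<Album>> GetAllAsync(CancellationToken ct = default(CancellationToken))
        => Task.FromResult(_albums.ToList());

    public Task<Album> GetByIdAsync(int id, CancellationToken ct = default(CancellationToken))
        => Task.FromResult(_albums.FirstOrDefault(a => a.AlbumId == id));

    public Task<Album> AddAsync(Album newAlbum, CancellationToken ct = default(CancellationToken))
    {
        _albums.Add(newAlbum);
        return Task.FromResult(newAlbum);
    }

    public Task<bool> UpdateAsync(Album album, CancellationToken ct = default(CancellationToken))
    {
        var existing = _albums.FirstOrDefault(a => a.AlbumId == album.AlbumId);
        if (existing == null) return Task.FromResult(false);
        existing.Title = album.Title;
        existing.ArtistId = album.ArtistId;
        return Task.FromResult(true);
    }

    public Task<bool> DeleteAsync(int id, CancellationToken ct = default(CancellationToken))
        => Task.FromResult(_albums.RemoveAll(a => a.AlbumId == id) > 0);

    public Task<List<Album>> GetByArtistIdAsync(int id, CancellationToken ct = default(CancellationToken))
        => Task.FromResult(_albums.Where(a => a.ArtistId == id).ToList());

    public void Dispose() { }
}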

API layer
This is where our API consumers will interact. This layer contains the code for the web-API endpoint logic, including the controllers. The API project for the solution has a single responsibility: to handle the HTTP requests received by the web server and to return the HTTP responses with either success or failure. This project will have minimal business logic. We will handle exceptions and errors that have occurred in the domain or data projects and effectively communicate with the API consumer. This communication will use HTTP response codes, and any data to be returned will be located in the HTTP response body.

ASP.NET Core 2.0 handles web-API routing with attribute routing (to learn more about attribute routing in ASP.NET Core, go here). We are also using dependency injection to assign the supervisor to each controller. Each controller's Action method has a corresponding Supervisor method that will handle the logic for the API call. A segment of the Album controller shows these concepts:

[Route("api/[controller]")]
public class AlbumController : Controller
{
    private readonly IChinookSupervisor _chinookSupervisor;

    public AlbumController(IChinookSupervisor chinookSupervisor)
    {
        _chinookSupervisor = chinookSupervisor;
    }

    [HttpGet]
    [Produces(typeof(List<AlbumViewModel>))]
    public async Task<IActionResult> Get(CancellationToken ct = default(CancellationToken))
    {
        try
        {
            return new ObjectResult(await _chinookSupervisor.GetAllAlbumAsync(ct));
        }
        catch (Exception ex)
        {
            return StatusCode(500, ex);
        }
    }

    ...
}
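
The registration side of this dependency injection is not shown above; as a minimal sketch, it could look like the following in the API project's Startup.ConfigureServices (the concrete ChinookSupervisor type name is an assumption):

public void ConfigureServices(IServiceCollection services)
{
    // Register the data layer and the domain supervisor so the
    // controllers receive them through constructor injection.
    // The other entity repositories are registered the same way.
    services.AddScoped<IAlbumRepository, AlbumRepository>();
    services.AddScoped<IChinookSupervisor, ChinookSupervisor>();
    services.AddMvc();
}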

The web API for the solution is simple and thin. I strive to keep as little code as possible in this project, as it could be replaced with another form of interaction in the future.

Conclusion
Designing and developing a great ASP.NET Core 2.0 web API solution takes insight, and it can lead to a decoupled architecture in which each layer follows the single-responsibility principle and is easily testable. I hope this information helps you create and maintain production web APIs that meet your organization's needs.



Read online on InfoQ

HOW TO TEST ASP.NET CORE WEB API

by Chris Woodruff

KEY TAKEAWAYS

Understanding and using unit tests correctly are important for your ASP.NET Core Web API solutions.

Learning about and using mock data for your unit testing will allow you to have stable testing scenarios.

Create mock-data projects in .NET Core 2.1 for use in your ASP.NET Core Web API solutions.

Understand and set up integration testing to test your APIs externally for a fully tested ASP.NET Core 2.1 Web API solution.

When designing and developing a rich set of APIs using ASP.NET Core 2.1 Web API, it is important to remember that this is only the first step to a stable and productive solution. Having a stable environment for your solution is also important. The key to a great solution includes not only soundly building the APIs but also rigorously testing them to ensure that consumers have a great experience.



This article is a continuation of my previous article for InfoQ, titled "Advanced Architecture for ASP.NET Core Web API". You do not have to read that article to get the benefits of testing, but it may provide more insight into how I built the solution I discuss here. I've spent a lot of time thinking about testing while building APIs for clients over the last few years. Knowing the architecture for ASP.NET Core 2.1 Web API solutions may help broaden your understanding.

The solution and all code from this article's examples can be found in my GitHub repository.

Figure 1: Creating a new unit-test project in Visual Studio 2017.

Primer for ASP.NET Core Web API
ASP.NET Core is a new web framework that Microsoft built to shed the legacy technology that has been around since ASP.NET 1.0. By shedding these legacy dependencies and developing the framework from scratch, ASP.NET Core 2.1 gives the developer much better performance and cross-platform execution.

What is unit testing?
Testing your software may be new to some people, but it is quite easy. The rigid definition of unit testing at Wikipedia is "a software testing method by which individual units of source code, sets of one or more computer program modules together with associated control data, usage procedures, and operating procedures, are tested to determine whether they are fit for use." A layman's definition I like to use is that unit testing is used to make sure that your code performs as expected after you add new functionality or fix defects. You test a small sample of code to ensure you match your expectations. Look at a sample unit test:

[Fact]
public async Task AlbumGetAllAsync()
{
    // Arrange

    // Act
    var albums = await _repo.GetAllAsync();

    // Assert
    Assert.Single(albums);
}

There are three parts to a good unit test. The first is the Arrange part, which is used for setting up any resources that your test may need. The example above does not have any setup, so the Arrange part is empty (but I still keep a comment for it). The next part, called Act, is what performs the test. In this example, I have called the data repository for the Album entity type to return the entire set of albums from the repository's data source. The last part of the test, Assert, verifies that the test acted correctly. For this test, I am verifying that I returned a single album from the data repository.

I will be using the xUnit tool for my unit testing throughout this article. xUnit is an open-source package for the .NET Framework and now for .NET Core. The .NET Core version of xUnit is included in the installation of the .NET Core 2.1 SDK, and this article will use that. You can create a new unit-test project either through the .NET Core CLI command dotnet new xunit or through the project template in an IDE such as Visual Studio 2017 (see figure 1), Visual Studio Code, or JetBrains Rider.
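
As a quick sketch, creating a test project and running its tests from the CLI looks like this (the project name is only an example):

dotnet new xunit -n Chinook.UnitTest
cd Chinook.UnitTest
dotnet test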
Now let's dive into unit testing your ASP.NET Core Web API solution.

What should be unit-tested with web APIs?
I am a huge proponent of using unit testing to keep a stable and robust API for consumers. But I keep a healthy perspective on how I use my unit tests and what I test. My philosophy is to unit-test a solution just enough and not any more than necessary. By that, I mean that while I may get a lot of comments about this view, I am not overly concerned with having 100% coverage with tests.



Of course, I think that you need to have tests that cover the important parts of the API solution and isolate each area independently to ensure the contract of each segment of code is kept. I do that, and that is what I want to discuss.

Since my demo Chinook.API project is thin and can be tested using integration testing (discussed later in the article), I find that I concentrate the most on unit tests in my domain and data projects. I am not going to go into detail about how you unit-test (that topic goes beyond this article), but I do want you to test as much of your domain and data projects as you can with data that does not depend on your production database.

Why use mock data/objects with your unit tests?
It is important to know how to correctly unit-test the code for your ASP.NET Core Web API solution. Data is key to testing your APIs. Having a predictable data set that you can test is vital, which is why I would not recommend using production data or any data that can unpredictably change over time. You need a stable set of data to make sure all unit tests run and confirm the contract between the code segment and the test. As an example, when I test the Chinook.Domain project for getting an album with an ID of 42, that album must exist and have the expected details like title and artist. I also want to make sure that when I retrieve a set of albums from the data source, I get the expected shape and size to meet the unit test I coded.

Many in the industry use the term "mock data" to identify this type of unchanging test data. There are many ways to generate mock data for unit tests, and I hope you create as real-world a set of data as possible. The better the data you create for your tests, the better your test will perform. I would suggest that you make sure your data is also clean of privacy issues and does not contain personal or sensitive data.

To meet the need for clean, stable data, I create unique projects that encapsulate the mock data for my unit-test projects. For this demo, I have called my mock-data project Chinook.MockData (as you can see in the demo source). My Chinook.MockData project is almost identical to my normal Chinook.Data project. They have the same number of data repositories and both adhere to the same interfaces. I want the Chinook.MockData project to be stored in the dependency-injection (DI) container so that the Chinook.Domain project can use it just like the Chinook.Data project that is connected to the production data source. That is why I love DI. It allows me to switch data projects through configuration and without any code changes.
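
A minimal sketch of how that configuration-driven switch could be wired up follows; the flag name and namespaces are assumptions, as the article does not show the actual registration code:

public void ConfigureServices(IServiceCollection services)
{
    // "UseMockData" is an assumed configuration flag; the repository
    // namespaces are guesses based on the project names above.
    if (Configuration.GetValue<bool>("UseMockData"))
    {
        services.AddScoped<IAlbumRepository, Chinook.MockData.Repositories.AlbumRepository>();
    }
    else
    {
        services.AddScoped<IAlbumRepository, Chinook.Data.Repositories.AlbumRepository>();
    }
}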
Integration testing: What is this new testing for web APIs?
After performing and verifying the unit tests for our ASP.NET Core Web API solution, I will look at a different type of testing. I use unit testing to verify and confirm expectations on the internal components of the solution. When satisfied with the quality of the internal tests, I can move on to testing the APIs from the external interface, which is called integration testing.

Integration tests will be written and performed at the completion of all components, so APIs can be consumed with the correct HTTP response to verify. I look at unit tests as testing independent and isolated segments of code, while the integration tests are used to test the entire logic for each API on my HTTP endpoint.


This testing will follow the entire workflow of the API from the API project's controllers to the domain project's supervisor, and finally to the data project's repositories (and back the entire way to respond).

Creating the integration test project
To take advantage of your existing knowledge of testing, integration-testing functionality is based on current unit-testing libraries. I will use xUnit for creating my integration tests. After I have created a new xUnit test project named Chinook.IntegrationTest, I need to add the appropriate NuGet package. Add the Microsoft.AspNetCore.TestHost package to the Chinook.IntegrationTest project. This package contains the resources to perform the integration testing. (Figure 2)

Figure 2: Adding the Microsoft.AspNetCore.TestHost NuGet package.
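
As a sketch, the same setup from the .NET Core CLI might look like this:

dotnet new xunit -n Chinook.IntegrationTest
dotnet add Chinook.IntegrationTest package Microsoft.AspNetCore.TestHost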
I can now move on to creating my first integration test to verify my API externally.

Creating your first integration test
To start with the external testing of all of the APIs in my solution, I am going to create a new folder called API to contain my tests. I will also create a new test class for each of the entity types in my API domain. My first integration test will cover the album entity.

Create a new class called AlbumAPITest.cs in the API folder, then add the following namespaces to the file (see figure 3):

using Xunit;
using Chinook.API;
using Microsoft.AspNetCore.TestHost;
using Microsoft.AspNetCore.Hosting;

Figure 3: Integration test using directives.

I now have to set up the class with our TestServer and HttpClient to perform the tests. I need a private variable called _client of type HttpClient that will be created based on the TestServer initialized in the constructor of the AlbumAPITest class. The TestServer is a wrapper around a small web server that is created based on the Chinook.API Startup class and the desired development environment. In this case, I am using the development environment. I now have a web server that is running the API and a client that understands how to call the APIs in the TestServer. I can now write the code for the integration tests. (Figure 4)

Figure 4: Our first integration test to get all albums.

In addition to the constructor code, Figure 4 also shows the code for the first integration test. The AlbumGetAllTestAsync method will verify that the call to get every Album from the API works. Just like unit testing, the logic for my integration testing uses the arrange/act/assert pattern. I first create an HttpRequestMessage object with the HTTP verb supplied as a variable from the InlineData annotation and the URI segment that represents the call for all albums (/api/Album/). I next have the HttpClient _client send an HTTP request, and finally I check to verify that the HTTP response meets my expectation, which in this case is a "200 OK".


I have shown in Figure 4 two ways to verify a call to the API. You can use either, but I prefer the second way as it allows me to use the same pattern for checking responses to specific HTTP response codes.

response.EnsureSuccessStatusCode();
Assert.Equal(HttpStatusCode.OK, response.StatusCode);

I can also create integration tests for specific entity keys from the APIs. For this type of test, I add an additional value to the InlineData annotation that will be passed through the AlbumGetTestAsync method parameters. The new test follows the same logic and uses the same resources as the previous test, but will pass the entity key in the API URI segment for the HttpRequestMessage object. You can see the code in Figure 5.

Figure 5: The second integration test for a single album.
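
A sketch of that second test, based on the description above (the album ID is illustrative):

[Theory]
[InlineData("GET", 42)]
public async Task AlbumGetTestAsync(string method, int id)
{
    // Arrange: request a single album by its entity key.
    var request = new HttpRequestMessage(new HttpMethod(method), $"/api/Album/{id}");

    // Act
    var response = await _client.SendAsync(request);

    // Assert
    response.EnsureSuccessStatusCode();
    Assert.Equal(HttpStatusCode.OK, response.StatusCode);
}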
After you have created all of your integration tests to test your API, you will need to run them through a test runner and make sure they all pass. You can also perform all of the tests you have created during your DevOps CI process to test your API over the entire development and deployment processes. You should now have an execution path for keeping your API well tested and maintained through the development, quality-assurance, and deployment phases, so the consumers of your APIs can have a great experience without incidents.

Figure 6: Running the integration tests in Visual Studio 2017.

Conclusion
Having a well-thought-out test plan using both unit testing for internal testing and integration testing for verifying external API calls is just as important as the architecture you create for the development of your ASP.NET Core Web API solution.



PREVIOUS ISSUES

67 - Chaos Engineering
This eMag will inspire you to dig deeper into your systems, question your mental models, and use chaos engineering to build confidence in your system's behaviors under turbulent conditions.

66 - Tech Ethics
In an ideal world, devs would like to be ethical in their work but they ultimately don't consider it to be part of their responsibilities. This eMag sets out to understand why they might feel that way and whose job it is to take reasonable steps to ensure that tech products don't harm users or anyone else.

65 - Domain-Driven Design in Practice
This eMag highlights some of the experience of real-world DDD practitioners, including the challenges they have faced, missteps they've made, lessons learned, and some success stories.

64 - Testing Your Distributed (Cloud) Systems
Testing is an under-appreciated discipline and I wanted to shine a spotlight on the changing nature of testing in a cloud-driven world. We hope you enjoy what we've put together here, and find a host of thought-provoking, and actionable, ideas.
