
LETTER FROM THE EDITOR

Microsoft Ignite, the company's landmark conference for IT Professionals, Developers and Decision-makers, recently concluded in Florida.

This conference was a testimony to Microsoft's heavy bet on Cloud, AI, as well as improving productivity.

A few key announcements and demos were made around Azure Arc, Azure Synapse Analytics, Azure Quantum, Project Cortex, Whiteboard, Project Silica, HoloLens 2, the relaunched Edge browser and Bing Search for business.

There were some important announcements for Developers as well! In this month's edition, Damir demystifies Ignite for Developers, the announcements made and how you can make use of them.

We also have a bouquet of exclusive articles for you covering ASP.NET Core 3.0, Azure VMs, Patterns and Practices, C# 8.0 and more.

There are plenty of opportunities for developers in the ever-growing Microsoft ecosystem, and we at DotNetCurry are all geared up for 2020. Are you?

How was this edition? Make sure to reach out to me directly with your comments and feedback on Twitter @dotnetcurry or email me at [email protected].

Happy Learning!

Suprotim Agarwal
Editor in Chief

Contributing Authors: Damir Arh, Daniel Jimenez Garcia, Gouri Sohoni, Mahesh Sabnis, Yacoub Massad
Technical Reviewers: Damir Arh, Daniel Jimenez Garcia, Subodh Sohoni, Yacoub Massad
Next Edition: January 2020
Art Director: Minal Agarwal
Editor in Chief: Suprotim Agarwal (suprotimagarwal@dotnetcurry.com)

Copyright @A2Z Knowledge Visuals Pvt. Ltd. Reproductions in whole or part prohibited except by written permission. Email requests to "[email protected]".

Disclaimer: The information in this magazine has been reviewed for accuracy at the time of its publication, however the information is distributed without any warranty expressed or implied. Windows, Visual Studio, ASP.NET, Azure, TFS & other Microsoft products & technologies are trademarks of the Microsoft group of companies. 'DNC Magazine' is an independent publication and is not affiliated with, nor has it been authorized, sponsored, or otherwise approved by Microsoft Corporation. Microsoft is a registered trademark of Microsoft Corporation in the United States and/or other countries.
TABLE OF CONTENTS

DEMYSTIFYING MICROSOFT IGNITE FOR DEVELOPERS ............. 06
DEVELOPING SPA WITH ASP.NET CORE V3.0 .................... 16
THE MAYBE MONAD IN C# - MORE METHODS ..................... 42
MANAGE AZURE VIRTUAL MACHINES WITH ARM TEMPLATES ......... 50
WHICH MAJOR NEW FEATURES DOES C# 8.0 BRING ............... 62
ASP.NET CORE 3.0 APPLICATION USING EF CORE AND COSMOSDB .. 70

Damir Arh

DEMYSTIFYING
MICROSOFT IGNITE
FOR DEVELOPERS
WHAT WERE THE ANNOUNCEMENTS MADE AT MICROSOFT IGNITE
THAT ARE OF INTEREST TO DEVELOPERS? HOW CAN DEVELOPERS
TAKE ADVANTAGE OF THESE ANNOUNCEMENTS? READ ON!

In early November 2019, the Microsoft Ignite conference took place in Orlando, Florida. Although Microsoft Ignite is not as developer-oriented as Microsoft Build, there was still a lot of developer-related information published there. This article provides an overview of the important announcements made during Ignite that are aimed primarily at developers.

6 DNC MAGAZINE ISSUE 45 - NOV - DEC 2019


Development Tools
Microsoft IDEs and text editors share a common Visual Studio brand. Since the latest versions of Visual
Studio 2019 and Visual Studio for Mac were released less than two months ago at the .NET Conf virtual
conference in late September, Microsoft now only previewed what future versions will bring.

Visual Studio 2019


The Visual Studio 2019 version 16.4 Preview primarily aims at improving developer productivity:

• improved Find in Files window for faster code navigation,

• grouping of Find All References results by type and member,

• including results in IntelliSense pop-ups even for symbols which don’t yet have a corresponding using
directive in the current file,

• the option to use vertical document tabs instead of horizontal ones,

Figure 1: Vertical document tabs in Visual Studio 2019

• the ability to pin selected properties in the debugger windows (Autos, Locals and Watch).

An important contribution to developer productivity is also the enhanced IntelliCode feature which
provides AI-powered assistance to programming. In the current version, it’s limited to improving IntelliSense
suggestions, but it's being extended with support for whole-line and argument completion, as well as refactoring. See Figure 2.

Figure 2: IntelliCode based IntelliSense suggestions

The default IntelliCode model is trained on open source code from GitHub. The ability to train the model
from your own codebase is being simplified with the introduction of an Azure DevOps build task for
training the model and support for associating the model with a repository, so that it can be automatically
activated when working with the code from that repository.

XAML tooling in Visual Studio (for WPF/UWP applications) also has many improvements, among others:

• The Create Data Binding dialog now works with UWP and .NET Core based WPF applications.

• The XAML Editor and XAML Designer can now be split into separate windows.

• The Live Visual Tree can be filtered to only show XAML written in the app and hide everything else.

Visual Studio for Mac


The key features of Visual Studio for Mac 8.4 Preview are support for ASP.NET Core Blazor Server mode
and .NET Core 3.1 Preview. There’s also a big focus on accessibility with improved support for VoiceOver
assistive technology and keyboard navigation. The code editing and debugging experience are being
worked on as well.

Visual Studio Live Share


Visual Studio Live Share allows you to share your development environment with another developer remotely. Without having to download and build the code locally, they will be able to edit and debug your code. They will also have access to your terminal as well as the running web application.

All of this is already available as a built-in feature in Visual Studio 2019 and as an extension for Visual Studio Code. Visual Studio 2019 version 16.4 Preview includes an additional Insiders set of features which can be enabled in the Options dialog:

• In addition to having access to a running web application, a running desktop application (UWP, WPF, WinForms, Win32 C++ or console application) can now be cast to the other developer as well. The developer will be able to see its window and interact with it.



• To avoid the need for sharing a link with the other developer to start a Live Share session, a list of
contacts has been added which includes developers who you have recently had a Live Share session
with, and those who recently worked on the same project. Your contacts will get a notification for a new
Live Share session request directly inside their editor.

• Audio calls can now be started directly from inside Visual Studio.

Visual Studio Online


Visual Studio Online was originally the name for Microsoft’s cloud version of Team Foundation Server. It
was later renamed to Visual Studio Team Services and finally rebranded to Azure DevOps Services.

At the Microsoft Build conference in May, Microsoft announced it will be releasing an online development
environment based on Visual Studio Code, named Visual Studio Online.

At Microsoft Ignite, it was announced that Visual Studio Online is now available in public preview. The service allows on-demand creation of managed development environments in the cloud, which can be used for quick tasks like code reviews, or for long-term development in the cloud from a computer which is not configured for development or doesn't have enough processing power.

Figure 3: Visual Studio Online web-based editor

Environments are created automatically with minimal initial configuration, but can be fully customized and
personalized. Development environments run on a Linux machine in the cloud. The pricing depends on
the selected hardware configuration and is different when the environment is actively used and when it is
suspended.

Although there is a web-based editor for the environment available online, you can also connect to it with
your local copy of Visual Studio Code using the Visual Studio Online extension. The ability to use Visual Studio 2019 instead is currently in private preview, along with support for Windows-based environments.

Azure DevOps Services


A couple of new Azure DevOps features were announced. All of them are part of or are closely related to
Azure Pipelines, the CI/CD part of Azure DevOps:

• The introduction of pipeline artifacts and pipeline caching is useful when multiple pipelines contribute
to the final build. A pipeline can now act as a trigger for another pipeline, providing its artifacts as input
for the next pipeline. Thanks to caching, these intermediary results can be reused in later builds if their
dependencies haven’t changed in the meantime.

• Azure Artifacts are repositories for packages (NuGet, npm, Maven or Python) to be used in builds or by
the development team. In addition to previously available organization-scoped package feeds, there’s
now also support for public feeds and project-scoped feeds.

• A Review Apps feature for Azure Pipelines has been made available in public preview. For applications
deployed to Kubernetes, it can create a new environment for each pull request to which the application
gets deployed so that it can be validated live. Support for deployment to other Azure services will be
added in the future.

Libraries and Frameworks


The development of .NET Core continues after the final release of .NET Core 3.0 at the .NET Conf virtual
conference in September. The next version is still in preview, but new versions of some other SDKs have
been released at Ignite.

.NET Core 3.1


.NET Core 3.1 will be the next LTS (Long-Term Support) version of .NET Core after .NET Core 2.1. It’s
planned for release in December. Most of its improvements will be related to Windows desktop application
development and Blazor Server mode. At Microsoft Ignite, Preview 2 was released.

Windows UI Library
Windows UI Library (WinUI) is the name used for the native UI platform for Windows 10. In its current
version (WinUI 2), it brings additional controls and styles on top of UWP (Universal Windows Platform)
and provides support for earlier versions of Windows 10 without having to add version checks to the
application.

At Microsoft Ignite, WinUI 3 Alpha was released. It’s the first pre-release of a major update for Windows UI
library planned for release in 2020. The main change is the decoupling from the UWP SDK. New features
won’t depend on new versions of Windows 10 and will be released more frequently. The framework will be
backward compatible and will still work with .NET, but won't depend on it. This will make it usable from
other environments as well, e.g.:

• unmanaged Windows applications written in standard C++,

• applications written in the next version of React Native for Windows.



Machine Learning for .NET
Machine Learning for .NET (ML.NET) is an open-source cross-platform machine learning framework for .NET
developers. At Microsoft Ignite, version 1.4 was released, featuring the following:

• A new deep-neural-network-based image classification with GPU support, implemented on top of TensorFlow and its .NET bindings. It can be used to train a custom image classification model in just a couple of lines of code. In future versions, additional deep neural network training scenarios such as object detection are planned.

• A database loader for loading training data directly from relational databases by simply providing the
connection string, the SQL query and the model class without writing any custom data access code.

• Support for .NET Core 3.0, taking advantage of hardware intrinsics (processor-specific instructions) to improve performance on modern processors and to improve compatibility with ARM processors.
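The database loader mentioned above can be sketched in a few lines of C#. This is a minimal sketch based on the ML.NET 1.4 CreateDatabaseLoader API; the model class, connection string and SQL query are hypothetical, and running it requires the Microsoft.ML NuGet package and a reachable SQL Server instance:

// Hypothetical model class whose properties match the columns returned by the query.
using System.Data.SqlClient;
using Microsoft.ML;
using Microsoft.ML.Data;

public class HouseData
{
    public float Size { get; set; }
    public float Price { get; set; }
}

public static class TrainingDataLoader
{
    public static IDataView Load(MLContext mlContext, string connectionString)
    {
        // The loader maps query columns to the model class properties by name.
        DatabaseLoader loader = mlContext.Data.CreateDatabaseLoader<HouseData>();

        var dbSource = new DatabaseSource(
            SqlClientFactory.Instance,
            connectionString,                        // e.g. a SQL Server connection string
            "SELECT Size, Price FROM HouseData");    // hypothetical table and columns

        return loader.Load(dbSource);                // no custom data access code needed
    }
}

The returned IDataView can then be fed directly into an ML.NET training pipeline.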

Another ML.NET-related announcement at Microsoft Ignite was support for .NET Core in Jupyter Notebooks. Although .NET Core support is in no way specific to ML.NET, Jupyter Notebook's ability to create documents consisting of text, live code and visualizations lends itself very well to machine learning tasks, such as data exploration and model training.

Figure 4: C# code in a Jupyter notebook

Bot Framework
Version 4.6 of Microsoft Bot Framework SDK was released at Microsoft Ignite. In this version, the framework
for building conversational bots for many different popular services was extended with general availability
of support for Microsoft Teams. Additionally, several other features became available in preview:

• Bot Framework Skills were introduced as re-usable conversations which can be integrated into a
larger bot solution providing a working implementation for common scenarios, such as managing your
calendar or using maps for navigation.

• Adaptive dialogs allow temporary interruptions of the current conversation flow to handle user requests which can't be handled by the current dialog. Once the interruption is handled by another dialog, the current conversation is resumed.

• Language Generation introduces special response templates which can be used to generate variable
bot responses independently of the conversational logic.

As an alternative to code-based development of conversational bots using the Microsoft Bot Framework
SDK, a preview version of Power Virtual Agents was introduced as part of Microsoft’s Power Platform. This
SaaS (software-as-a-service) offering allows creation of conversational bots with a code-free graphical user
interface which can be used even by subject matter experts without any coding skills.

Figure 5: Power Virtual Agents graphical user interface



Azure Services
Most Azure-related announcements at Microsoft Ignite were not of immediate interest to developers. However, some of them were about Azure services which primarily target a developer audience.

Azure Machine Learning

The Azure Machine Learning service was expanded with:

• a new machine learning designer,

• enhancements to automated machine learning, and

• built-in notebooks.

ONNX Runtime 1.0 was released as well. It can run all models based on ONNX (Open Neural Network
Exchange) format 1.2.1 and higher with a big focus on performance. It’s not only available in the cloud but
can also be deployed to IoT and edge devices, as well as to a local computer.

Azure Functions
General availability of the following features was announced for Azure Functions, Microsoft's serverless (Function-as-a-Service, or FaaS) offering:

• The Azure Functions Premium plan provides dedicated hosting to avoid cold starts by pre-warming the
instances.

• Durable Functions are an extension to Azure Functions adding support for stateful functions and
workflow orchestration. In the newly released version 2.0, an actor-like programming model was
introduced.

• Support for developing functions in PowerShell and Python 3.7 was added.

Additionally, support for .NET Core 3 was added in preview.
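The actor-like programming model introduced in Durable Functions 2.0 can be sketched as a function-based durable entity. This is a minimal sketch assuming the Durable Functions 2.0 extension package; the "Counter" entity and its operation names are made up for this example, and the code only runs inside the Azure Functions host:

using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class CounterEntity
{
    // Each entity instance holds durable state (an int here) and processes
    // one operation at a time, similar to an actor processing its mailbox.
    [FunctionName("Counter")]
    public static void Counter([EntityTrigger] IDurableEntityContext ctx)
    {
        switch (ctx.OperationName)
        {
            case "add":
                ctx.SetState(ctx.GetState<int>() + ctx.GetInput<int>());
                break;
            case "get":
                ctx.Return(ctx.GetState<int>());
                break;
        }
    }
}

Orchestrations or clients can then signal the entity by name, and its state survives process restarts.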

Azure Blockchain Service


Azure Blockchain Service is a platform for implementing blockchain technologies in the enterprise. At
Microsoft Ignite, the following new features were announced:

• Azure Blockchain Tokens simplify the creation and management of ledger-based tokens for physical and
digital assets.

• Azure Blockchain Data Manager can capture data from a blockchain ledger, transform it and store it in databases like Azure SQL Database or Azure Cosmos DB for easier integration with existing applications.

• In addition to Ethereum, Corda Enterprise distributed ledger technology is now also supported.
Hyperledger Fabric can be deployed to Azure Kubernetes Service using an Azure Marketplace template.

• To improve developer productivity, the Azure Blockchain Development Kit for Ethereum was released as
an extension for Visual Studio Code.

Azure Quantum
Azure Quantum was announced to become available in private preview in the upcoming months. It's going to be a cloud-based service allowing you to run quantum programs written with Q# and the Quantum Development Kit (QDK) on a variety of hardware: from classical compute services in Azure to quantum simulators and quantum hardware provided by technology partners.

Conclusion

Looking at the announcements at Microsoft Ignite, we can recognize Microsoft’s continuous focus
on providing the best tools for developers, not only on Windows and for .NET, but also on other
operating systems and for other development frameworks.

No matter where and what you're developing, it's worth keeping tabs on Microsoft's tools and evaluating whether they can improve your productivity and development process.

Microsoft is also heavily investing in new technological trends for developers, such as serverless computing,
machine learning, blockchain, and quantum computing. Even in these fields, the effort in making the
technologies more accessible to developers can easily be recognized. This makes their offering interesting
even if you don’t see how these technologies could be used at your current work.

The low barriers to entry make it easier to familiarize yourself with the benefits they can offer you, so that
you can consider them in your future projects!

Damir Arh
Author
Damir Arh has many years of experience with Microsoft development tools; both in
complex enterprise software projects and modern cross-platform mobile applications.
In his drive towards better development processes, he is a proponent of test driven
development, continuous integration and continuous deployment. He shares his
knowledge by speaking at local user groups and conferences, blogging, and answering
questions on Stack Overflow. He has been a Microsoft MVP for .NET since 2012.

Thanks to Daniel Jimenez Garcia for reviewing this article.



DEVELOPING SPA WITH ASP.NET CORE V3.0

Daniel Jimenez Garcia

Single Page Applications (SPAs) have been around ever since the advent of AJAX, combined with JavaScript and browser advances, made them possible. Today, they have become one of the most common ways of building web applications, using frameworks like Angular, React or Vue.js.

It comes as no surprise that ASP.NET Core shipped with SPA templates in its very first release. Since then, new ASP.NET Core releases have maintained and adapted these templates, in no small part due to the fast evolution of the SPA frameworks. These frameworks now provide their own development workflow with CLI (command line interface) tools, build processes and development servers.

During this article, we will take a look at the common basic ideas behind any SPA project template, followed by an overview of the templates provided out of the box in ASP.NET Core 3.0. We will finish by demonstrating that you can apply the same ideas with any other SPA framework not supported out of the box, for which we will use two additional frameworks: Svelte and Vue.

Understanding SPA projects


SPA web frameworks have come a long way. They have evolved from "simple" JavaScript libraries into fully-fledged frameworks that provide their own tooling and development workflow.

When developing a SPA using frameworks like Angular, React, Vue or Svelte, the framework provides you
with the tools that you need to develop, build or configure your SPA. This way, SPA frameworks decouple
your client side from any server-side technology like an ASP.NET Core application.

Most SPA projects are structured as the union of two distinct applications:

• A client-side SPA that is responsible for the code shipped to the browsers, a combination of HTML, CSS
and JS

• A server-side application that provides the API through which the client-side communicates, retrieving
and sending back data


Figure 1, Simplified view of the two applications that make a typical SPA project with ASP.NET Core

For the purposes of this article, we will stick with ASP.NET Core as the server-side application. However,
the same ideas can be followed with any other server-side framework like Flask, Django or Express or even
with serverless architectures.

Nothing prevents you from treating both the SPA and ASP.NET Core applications in a completely separate
manner, with their own development workflow, build process, release cycle, tooling, teams, etc. However,
there are situations and/or teams which might prefer a closer integration between these two applications.
This is what the SPA project templates are designed for.

In the following sections, we will take a deeper look at how SPA project templates typically integrate these
two distinct applications.

As you are all aware, Microsoft now offers Blazor as a C# full-stack SPA alternative. For the purposes of this
article, we will stick with traditional web SPA frameworks, but feel free to consider and investigate Blazor. You can
read more in one of my previous articles about Blazor.

SPA Development setup


As we have already mentioned, SPA frameworks provide their own tooling and development workflow
independent of server-side technologies like ASP.NET Core.

The reason behind this is that SPAs have evolved into complex applications that need a build process of their own. They let you structure your SPA modularly into small components, and put at your disposal a number of modern technologies like TypeScript, CSS preprocessors, template engines and linters, all attempting to increase developer productivity.



However, the source code of these applications isn’t something you can directly execute in the browser.
Instead it needs to be transpiled into vanilla HTML, JavaScript and CSS understandable by the browsers,
and bundled into a small number of files optimized for browser performance.

In a way, it is as if you had to compile the SPA source code into a number of artifacts (the bundled HTML/
JS/CSS files) that your browser can execute. This is where webpack comes into play, letting SPA frameworks
define the build process necessary to generate the bundled files. This build process is typically invoked by a
CLI tool provided by the SPA frameworks, which will configure and execute webpack under the hood.

Figure 2, Building the SPA source code into bundles that can be served to the browser

While there are alternatives to webpack like parcel and rollup (with their own advantages and downsides),
webpack is the one most widely used as of today. It is also the one chosen by most official SPA tooling like the
Angular CLI, the Vue CLI and create-react-app.

As anyone who has worked with compiled languages knows, the build process can get very tedious during
development. Having to re-run the build process after each code change in order to test the updated code,
is not fun!

Luckily, SPA frameworks provide a development server that will automatically run the build process
and refresh the bundles as soon as the source code is modified. The development server also acts as a
web server that serves the generated bundles, giving you a localhost URL on which you can access the
application in the browser.

Since they use webpack to build your code, it is no surprise that they use the webpack-dev-server for these purposes.

Figure 3, SPA development server during the development cycle

Thus, SPA frameworks pre-configure webpack and the webpack-dev-server in order to provide two different
workflows for building your code:

• During development, they offer a development server which generates initial bundles, then monitors
your source code for changes, automatically updating the bundles. It provides a localhost URL which
you can open in the browser to run the SPA, including a websocket used to notify the browser of bundle
updates. These are loaded without requiring a full reload of the page, a feature called hot module
replacement.

• During the build process, they use webpack to generate the final, optimized bundles. It is up to you to
host these bundles in any web server of your choice. All the webpack-based build process does is to
generate these HTML/JS/CSS bundle files. The next section Hosting SPAs will look at this in more detail.

Webpack and webpack-dev-server are tools built with Node.js, meaning you need to have Node.js installed on
your machine in order to run these commands. In general, SPA frameworks rely on Node.js for their tooling. While
having Node.js installed is a must, getting familiar with it is a pretty good idea!

Each SPA framework provides a CLI command to invoke each of the two processes. The following table compares the most popular frameworks:

  Framework          Development server    Production build
  Angular CLI        ng serve              ng build --prod
  create-react-app   npm start             npm run build
  Vue CLI            npm run serve         npm run build

Now let’s add a server-side application into the mix, in our case an ASP.NET Core web application.



The ASP.NET Core application also provides its own development server, so we now have each application
(client-side SPA and server-side ASP.NET Core) being served by its own development server.

• The SPA development server (let's assume it's running on localhost:8080) provides the index.html page and the necessary JS/CSS bundles. This is the URL that you would load in the browser.

• The ASP.NET Core application (let's assume it's running on localhost:5000) provides the REST API used by the SPA.

Figure 4, SPA and ASP.NET Core being run as independent applications during development

This setup completely separates each application during development, even from the browser perspective. Each has its own development server that automatically reloads when the code changes. It works well for teams that like to treat the client and server-side applications completely independently of each other, especially if these are also hosted independently in production.

HTTP requests from the SPA at localhost:8080 to the ASP.NET Core server at localhost:5000 are considered cross-origin requests by the browsers due to the different port, and so CORS support needs to be added. If deployed to different domains like my-site.com and api.my-site.com, CORS also needs to be enabled in production.
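Enabling CORS for this setup might look like the following sketch in the ASP.NET Core Startup class. It assumes the SPA development server runs at http://localhost:8080, and the policy name "SpaPolicy" is made up for this example:

// In Startup.ConfigureServices:
services.AddCors(options =>
{
    options.AddPolicy("SpaPolicy", builder => builder
        .WithOrigins("http://localhost:8080")  // origin of the SPA development server
        .AllowAnyHeader()
        .AllowAnyMethod());
});

// In Startup.Configure, before the endpoints/MVC middleware:
app.UseCors("SpaPolicy");

In production you would replace the localhost origin with the deployed SPA domain.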

A slightly more integrated setup can be achieved by proxying one of the two development servers, in either
direction. This way, from the browser perspective, there is a single server that serves the HTML/JS/CSS files
and the REST API.

• A proxy from the SPA development server to the ASP.NET Core server can be established through the webpack development server's proxy option. This is exposed by all SPA frameworks as part of their options for the development server (see Angular, React, Vue).

• A proxy from the ASP.NET Core server to the SPA development server can be established through the
UseProxyToSpaDevelopmentServer utility.
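The second option can be wired up in the Startup class roughly as follows; the port of the SPA development server is an assumption for this sketch, and the middleware comes from the Microsoft.AspNetCore.SpaServices.Extensions package:

// In Startup.Configure:
app.UseSpa(spa =>
{
    if (env.IsDevelopment())
    {
        // Forward requests not handled by ASP.NET Core (index.html, JS/CSS bundles)
        // to the already-running SPA development server.
        spa.UseProxyToSpaDevelopmentServer("http://localhost:8080");
    }
});

With this in place, the browser only ever talks to the ASP.NET Core URL, avoiding cross-origin requests during development.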

Figure 5, Setting up a proxy between the development servers

This is a good idea when the SPA application bundles will be hosted in production alongside the
ASP.NET Core server from the same domain. This way your development setup reflects the production setup,
simulating the same domain.

The two applications can even be further integrated during development by not just proxying from one of
the applications to the other, but also making it responsible for starting the proxied development server. No
SPA framework provides such an option, but the ASP.NET Core templates do.

• From the developer point of view, this almost feels like there is a single application. However, there is an
important downside in making the ASP.NET Core server responsible for starting the SPA development
server. If the ASP.NET Core source code changes, the server will be restarted which means the SPA
development server also has to be restarted. If you are making frequent changes to the server-side
code, this negates the benefits of the hot module replacement features of the SPA development server,
apart from being slow, since bundles are regenerated from scratch on each server-side code change.

Regardless of which approach you take, it is very likely that you will want to use specific tools and editors to develop each of the applications. A common situation is using Visual Studio Code with various plugins for developing the SPA, and Visual Studio for developing the ASP.NET Core API.

Hosting SPAs in production


So far, we have discussed how SPA web applications can be run during the development process. Let’s now
take a brief look at the different options to host them in production.

No matter which hosting option you end up choosing, you will always need to invoke the SPA build process.
This way you will generate a set of static files from the SPA source code, the bundled HTML/JS/CSS files.
Now we need a way to host and serve these files.

Once you have the bundles generated, you basically have two choices for hosting and serving them:

• Host the static bundles alongside the ASP.NET Core server-side application. This is the simplest
approach, which works well in many situations where the same team is in charge of both client
and server-side applications. During the build process, the bundles are generated and copied to a
preconfigured folder inside the ASP.NET Core application.

• Host the static bundles on their own web server (for example, a simple NGINX one), independent of the ASP.NET Core one. While more complex, this frees the ASP.NET Core application from having to serve the static files, letting it concentrate on simply serving API requests. It also lets you choose the best web server technology for serving those static files, including any cloud offerings.
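The first option, serving the bundles from the ASP.NET Core application itself, might be configured as in the following Startup sketch. It assumes the SPA build copies its output to a ClientApp/dist folder; both the folder name and the setup are an assumption for this example:

// In Startup.ConfigureServices:
services.AddSpaStaticFiles(configuration =>
{
    // Folder where the SPA build process places the generated bundles.
    configuration.RootPath = "ClientApp/dist";
});

// In Startup.Configure:
app.UseSpaStaticFiles();   // serve the bundled HTML/JS/CSS files
app.UseSpa(spa =>
{
    spa.Options.SourcePath = "ClientApp";
});

Requests that don't match an API route fall through to the SPA middleware, which serves index.html so client-side routing keeps working.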

Figure 6, Hosting the SPA generated bundles within the ASP.NET Core server

Figure 7, Hosting the SPA generated bundles on its own web server

Note how in the second approach, the SPA and ASP.NET Core applications are served from different
domains. However, a reverse proxy can be configured in front of both servers, giving the illusion of a single
domain for both applications:

Figure 8, Reverse proxy in front of both web servers



Of course, you could combine the reverse proxy with the SPA web server into a single reverse proxy that
both serves the static bundles and proxies API requests to the ASP.NET Core web server, a typical setup with
NGINX:

Figure 9, Reverse proxy that directly serves static bundles and proxies API requests

Now that we have seen our options, both during development and in production, let's take a look at the specific templates provided by ASP.NET Core.

ASP.NET Core official SPA templates


ASP.NET Core 3.0 provides SPA templates for Angular and React, with an extra variant of React with Redux.
As we will see in this section, they all follow a similar approach.

Figure 10, SPA templates in ASP.NET Core 3.0

Angular
When generating a new project using the Angular SPA template, we get the expected client and server-side
applications:

• The project structure is the expected one for an ASP.NET Core application and provides the REST API
used by the client-side Angular application.

• The ClientApp folder contains an Angular application created using the Angular CLI. This is a standard Angular CLI application that can be treated like any other: any ng/npm/yarn command you are used to will work. You could even delete the contents of the folder and create a new Angular application from scratch using ng new.

Figure 11, The ClientApp folder contains an Angular CLI application

Development setup

If you inspect the contents of the Startup class, you will see the following lines at the end of the
Configure method:

app.UseSpa(spa => {
    // To learn more about options for serving an Angular SPA from ASP.NET Core,
    // see https://go.microsoft.com/fwlink/?linkid=864501
    spa.Options.SourcePath = "ClientApp";

    if (env.IsDevelopment())
    {
        spa.UseAngularCliServer(npmScript: "start");
    }
});

As you can see, during development, the spa.UseAngularCliServer middleware is added. What this
middleware does is set up the project so that:

• The ASP.NET Core development server automatically starts the Angular development server

• A proxy is established between the ASP.NET Core development server and the Angular development
server

That lets you press F5 to debug the project, starting both development servers. Visual Studio is also
configured to open the URL of the ASP.NET Core application in the browser. When SPA files like index.html
or JS/CSS bundles are requested, the ASP.NET Core application defers to the Angular development server
through the established proxy.

If you run the application, you will notice it takes a while to load. That is because the Angular development
server is being started and the bundles are being generated for the first time. This can be seen in the
output window in Figure 12:

Figure 12, ASP.NET Core starts the Angular development server and proxies requests to it

You can even see the proxying in action. The output shows the Angular development server running at
http://localhost:60119, while ASP.NET Core is running at https://localhost:44373. The browser is requesting
SPA files like https://localhost:44373/main.js, which ASP.NET Core internally proxies to the Angular
development server.

You can make a change to the SPA source code (like the home.component.html template). The Angular
development server will update the bundles and the browser is automatically updated.

However, let’s change the ASP.NET Core source code (like the WeatherForecastController). Since you
need to restart the ASP.NET Core server in order to try the changes, you will have to wait again for a full
generation of the bundles. On my laptop, this takes more than 20s, so the convenience of starting both
servers automatically can become a burden very quickly if you make frequent changes to the server-side
code.

Updated development setup

Let’s instead update the project so both development servers are started independently and a proxy is
simply established between them (so from the browser point of view there is still a single server in charge
of both the API and SPA files).

Open the ClientApp folder in your preferred terminal and execute ng serve (or npm start if you don’t
have the Angular CLI installed). This will start the Angular development server; you will notice a message
at the end that tells you which port it is listening on:

** Angular Live Development Server is listening on localhost:4200, open your browser on http://localhost:4200/ **

Figure 13, Running the Angular development server

All we have to do now is replace the call to spa.UseAngularCliServer in the Startup class with
spa.UseProxyToSpaDevelopmentServer, specifying the URL where the Angular development server is
listening:

spa.UseProxyToSpaDevelopmentServer("http://localhost:4200");

If you run the ASP.NET Core project again, everything will work as before. However, if you have to restart the
project, the Angular development server is unaffected, resulting in a much faster restart process.

Inverting the roles of the Angular development server and the ASP.NET Core server

An interesting alternative you might want to consider is to leave the Angular CLI and its development server
in control of the client side and the browser. After all, this is what these tools are designed for.

Note with this approach, you lose the ability to debug the client-side SPA code from within Visual Studio. In my
opinion, browsers in general and Chrome in particular provide a superior debugging experience, particularly
when combined with specific extensions for debugging each SPA framework. However, I understand this won’t be
the case for everyone, so be aware of the fact and decide for yourself!

First, stop Visual Studio from opening the browser window (since it gets closed whenever the ASP.NET Core
server is stopped/restarted). Either manually open the browser window or invoke the Angular development
server with the open option (as in ng serve -o or npm start -- -o).

Figure 14, Disabling browser launch from the project options

Next, we can stop establishing a proxy from the ASP.NET Core server to the Angular development server.
Simply remove the spa.UseProxyToSpaDevelopmentServer line from your Startup class.

Finally, we will setup the proxy from the Angular development server to the ASP.NET Core server.

• Add a new proxy.conf.json file inside the ClientApp/src folder. We need to set up the underlying
webpack-dev-server so it sends all requests it cannot understand to the URL where the ASP.NET Core
server will be listening:

{
  "/": {
    "target": "https://localhost:44373/",
    "secure": false
  }
}

• Then update the ClientApp/angular.json file, adding the proxyConfig option to the server
command:

"serve": {
  "builder": "@angular-devkit/build-angular:dev-server",
  "options": {
    "browserTarget": "AngularSPA:build",
    "proxyConfig": "src/proxy.conf.json"
  },

Make sure the URL matches the one your ASP.NET Core application listens on. This might change
depending on whether it is run from Visual Studio with IISExpress or from the command line with Kestrel.

That’s it, we have now inverted the roles of each application during development. The Angular development
server is now fully responsible for the browser and the client side, while the ASP.NET Core application is
responsible for serving the REST API.

Build and production setup

The project template is configured so the production bundles of the Angular application are generated
during the publish process and hosted alongside the ASP.NET Core project.

If you inspect the generated project file, you will see that:

• It has been configured to run the Angular build process whenever the project is built

• The Angular build output (ClientApp/dist) is included within the published project files

<Target Name="PublishRunWebpack" AfterTargets="ComputeFilesToPublish">
  <!-- As part of publishing, ensure the JS resources are freshly built in production mode -->
  <Exec WorkingDirectory="$(SpaRoot)" Command="npm install" />
  <Exec WorkingDirectory="$(SpaRoot)" Command="npm run build -- --prod" />
  <Exec WorkingDirectory="$(SpaRoot)" Command="npm run build:ssr -- --prod"
        Condition=" '$(BuildServerSideRenderer)' == 'true' " />

  <!-- Include the newly-built files in the publish output -->
  <ItemGroup>
    <DistFiles Include="$(SpaRoot)dist\**; $(SpaRoot)dist-server\**" />
    <DistFiles Include="$(SpaRoot)node_modules\**"
               Condition="'$(BuildServerSideRenderer)' == 'true'" />
    <ResolvedFileToPublish Include="@(DistFiles->'%(FullPath)')" Exclude="@(ResolvedFileToPublish)">
      <RelativePath>%(DistFiles.Identity)</RelativePath>
      <CopyToPublishDirectory>PreserveNewest</CopyToPublishDirectory>
      <ExcludeFromSingleFile>true</ExcludeFromSingleFile>
    </ResolvedFileToPublish>
  </ItemGroup>
</Target>

The only extra bit needed is for these files to be served by the ASP.NET Core application. You can see how
this is configured if you inspect the ConfigureServices method of the Startup class:

services.AddSpaStaticFiles(configuration =>
{
    configuration.RootPath = "ClientApp/dist";
});

In summary, the project template follows the first alternative discussed in the Hosting SPAs in
production section.

React
This project template follows exactly the same approach as the Angular one, replacing the contents of the
ClientApp folder with a React application generated using the create-react-app CLI.

• The same ASP.NET Core application providing the same REST API is included.

• The ClientApp folder contains the create-react-app React application. Any npm/yarn command
you are used to will work. You could even delete the contents of the folder and recreate them from
scratch using create-react-app.

Development setup

The default development setup is exactly the same as in the Angular case. If you inspect the
Configure method of the Startup class, you will notice a familiar setup, this time using
spa.UseReactDevelopmentServer instead of spa.UseAngularCliServer:

app.UseSpa(spa =>
{
    spa.Options.SourcePath = "ClientApp";

    if (env.IsDevelopment())
    {
        spa.UseReactDevelopmentServer(npmScript: "start");
    }
});

Since it uses the same approach as the Angular template, the same caveats discussed there apply.
Modifying server-side code requires restarting the server, which will cause the webpack development
server to be restarted as well, resulting in a very slow restart cycle.

Fortunately, we can modify the default setup in the same way we did in the Angular case. Open the
ClientApp folder in your favorite terminal and type npm start to get the React development server
started independently of the ASP.NET Core server.

Figure 15, Running the React development server independently of the ASP.NET Core server

By default, the React development server will open the URL in the browser. To disable this, set a
BROWSER=none environment variable as per the advanced options of create-react-app.
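For example, assuming the standard create-react-app support for env files, you could add a .env file inside the ClientApp folder containing:

```
BROWSER=none
```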

Updating the default setup to simply establish a proxy to the React development server (without starting it)
is as simple as replacing spa.UseReactDevelopmentServer with:

spa.UseProxyToSpaDevelopmentServer("http://localhost:3000/");

Now you can launch the project, which will open the browser with the URL where the ASP.NET Core
application is listening. The browser is still able to download the SPA files because of the established proxy.

Inverting the roles of the webpack development server and ASP.NET Core server

You might also be interested in applying the same idea we discussed in the Angular case, leaving the React
development server in charge of the browser and the client side, while the ASP.NET Core server is only
responsible for the REST API.

The first steps are the same as in the Angular case. Update the project options in Visual Studio, removing
the option to open a browser window. Then remove the spa.UseProxyToSpaDevelopmentServer line
from the Startup class.

The only difference is that we need to set up the proxy for create-react-app. This is as simple as adding
the following setting to the ClientApp/package.json file:

"proxy": "https://localhost:44381",

Make sure the URL matches the one your ASP.NET Core application listens on. This might change
depending on whether it is run from Visual Studio with IISExpress or from the command line with Kestrel.

As simple as that, you can now start the React development server from the command line independently
of the ASP.NET Core application, proxying any requests other than bundle files to the ASP.NET Core
application.

Build and production setup

This follows exactly the same setup as in the Angular case. When publishing the project, the webpack
bundles for the React application are generated using npm run build, and the bundles are included with
the rest of the project files:

<Target Name="PublishRunWebpack" AfterTargets="ComputeFilesToPublish">
  <!-- As part of publishing, ensure the JS resources are freshly built in production mode -->
  <Exec WorkingDirectory="$(SpaRoot)" Command="npm install" />
  <Exec WorkingDirectory="$(SpaRoot)" Command="npm run build" />

  <!-- Include the newly-built files in the publish output -->
  <ItemGroup>
    <DistFiles Include="$(SpaRoot)build\**" />
    <ResolvedFileToPublish Include="@(DistFiles->'%(FullPath)')" Exclude="@(ResolvedFileToPublish)">
      <RelativePath>%(DistFiles.Identity)</RelativePath>
      <CopyToPublishDirectory>PreserveNewest</CopyToPublishDirectory>
      <ExcludeFromSingleFile>true</ExcludeFromSingleFile>
    </ResolvedFileToPublish>
  </ItemGroup>
</Target>

The project is then configured to serve these files from the ClientApp/build folder, in the same way as in
the Angular case.

React and Redux


This is simply a variant of the React template where the React SPA has been updated and is integrated with
Redux out of the box.

The development and production setups are exactly the same as in the React template. (And the same
tweaks and modifications can be applied).

Creating your own templates


After going through the Angular and React templates, you might start to see how the approach
can work for any client-side SPA as long as it:

• Exposes a command to start a development web server which generates the initial bundles and
updates them whenever the source code changes

• Exposes a command to generate the production bundles which can then be hosted alongside the
ASP.NET Core application.

Since most SPA frameworks today use webpack and webpack-dev-server, all we need to know is the
command to start the development server and the command to run the production build of the bundles.

We can demonstrate how easy it is by adapting the React template for two other different frameworks, Vue
and Svelte.

Vue
Before we begin, make sure you have installed the Vue CLI. We will use the commands it provides to create
our Vue project, start the development server and generate the production bundles.

Now create a new project using the React SPA template. Once the new project is generated, remove the
ClientApp folder. Open your favorite terminal and navigate to the project root, then execute the command
“vue create client-app”

Figure 16, creating a new Vue application with the Vue-CLI

This will generate a new Vue project in the current folder, letting you customize different aspects along the
way. Once the generation process has finished, make sure to rename the client-app folder to ClientApp
(the Vue CLI does not accept capital letters in project names, which it also uses as the root folder name).

Once finished, cd into the ClientApp folder and use the npm run serve command to start the Vue
development server:

Figure 17, Running the Vue development server

Let’s update the HelloWorld.vue component to retrieve and display data from the ASP.NET Core API, so we
can test the integration between the two applications. Add a new data property and a created method like
the following:

<script>
export default {
  name: 'HelloWorld',
  props: {
    msg: String
  },
  data() {
    return { forecasts: [] };
  },
  created() {
    fetch('weatherforecast')
      .then(res => res.json())
      .then(forecasts => { this.forecasts = forecasts; });
  }
}
</script>

…and update the template to display them. For example, simply format them as a code block:

<code style="text-align: left;"><pre>{{ JSON.stringify(forecasts, null, 2) }}</pre></code>

Now all we need to do is decide how to set up the proxy between the two applications:

• If you want to proxy from the ASP.NET Core server, replace the spa.UseReactDevelopmentServer
line with:

spa.UseProxyToSpaDevelopmentServer("http://localhost:8080/");

• If instead you want to set up the proxy from the Vue development server, first disable the
ASP.NET Core project option to open the browser on debug. Then completely remove any of the
spa.UseReactDevelopmentServer or spa.UseProxyToSpaDevelopmentServer lines. Then add a
new vue.config.js file inside the ClientApp folder with the following contents and restart the Vue
development server:

module.exports = {
  devServer: {
    proxy: 'https://localhost:44378/'
  }
}

Make sure the URL matches the location where the ASP.NET Core development server is listening.

Either of the two proxy setups will let you independently start each development server (Vue and ASP.NET
Core), which will appear as a single location from the browser perspective:

Figure 18, Running the Vue application with a proxy between the 2 development servers

If you prefer a setup like the one you get out of the box with the Angular/React templates, where ASP.NET
Core is responsible for starting the Vue development server (with the caveats we already discussed), it is
still possible. All you need is to create your own version of spa.UseAngularCliServer or
spa.UseReactDevelopmentServer. You can find more info in one of my previous articles on DotNetCurry.

Regarding the production setup, the command to generate the production bundles is the same as in React
(npm run build). However, the bundles are generated inside the ClientApp/dist folder instead of
ClientApp/build as in the React template. We can fix this with a couple of changes:

• Update the SPA RootPath defined in the ConfigureServices of the Startup class

services.AddSpaStaticFiles(configuration =>
{
    configuration.RootPath = "ClientApp/dist";
});

• Update the DistFiles element of the PublishRunWebpack target inside the project file:

<DistFiles Include="$(SpaRoot)dist\**" />

After these changes, publishing the project will build and host the production bundles of our Vue
application alongside the ASP.NET Core application.

Svelte
We can further prove how the approach works for most SPAs by modifying the React project template once
more, this time replacing the React SPA with a Svelte SPA.

We will use a Svelte template that uses webpack and the webpack-dev-server, which gives us the
commands npm run dev to start the development server and npm run build to generate the production
bundles.

As we did with Vue, start by creating a new ASP.NET Core application using the React template. Once generated,
remove the ClientApp folder. Then open your favorite terminal, navigate to the project root and execute
the following commands to generate the Svelte client-side application:

npx degit sveltejs/template-webpack ClientApp
cd ClientApp
npm install

Once these commands have run, you will have a Svelte application instead of a React application as the client-side SPA. If
you execute npm run dev, you will get the Svelte development server started.

Figure 19, Running the Svelte development server

Let’s also modify this application so it fetches data from our ASP.NET Core API. Replace the contents of the
App.svelte file with:

<script>
export let name;
import { onMount } from "svelte";
let forecasts = [];
onMount(async function() {
const response = await fetch("/weatherforecast");
const json = await response.json();
forecasts = json;
});
</script>

<style>
h1 {
color: purple;
}
</style>

<h1>Hello {name}!</h1>
<code>
  <pre>{JSON.stringify(forecasts, null, 2)}</pre>
</code>

All we need to do now is decide how we want to proxy the two applications, same as we did in the Vue
case.

• If you want to proxy from the ASP.NET Core server, replace the spa.UseReactDevelopmentServer
line with:

spa.UseProxyToSpaDevelopmentServer("http://localhost:8080/");

• If instead you want to set up the proxy from the Svelte development server, we will need to
manually configure the webpack-dev-server settings, since Svelte does not provide a CLI with
such a proxy option. Start by disabling the ASP.NET Core project option to open the browser
on debug. Then completely remove any of the spa.UseReactDevelopmentServer or
spa.UseProxyToSpaDevelopmentServer lines. Then add the following properties at the end of the
webpack.config.js file:

devServer: {
  proxy: {
    target: 'https://localhost:44330/',
    secure: false,
    context(pathname, req) {
      // See the Vue CLI codebase for a real example
      // https://github.com/vuejs/vue-cli/blob/dev/packages/%40vue/cli-service/lib/util/prepareProxy.js

      // Do not proxy requests to public files or the hot reload web-socket
      if (!mayProxy(pathname)) return false;
      // Directly proxy non-GET requests
      if (req.method !== 'GET') return true;
      // Do not proxy requests to root "/". Let them be handled by
      // webpack-dev-server which will return the index.html
      return pathname !== "/";
    }
  }
}

Where mayProxy is a function defined as:

const fs = require('fs');
const path = require('path');

function mayProxy(pathname) {
  const maybePublicPath = path.resolve(__dirname + '/public', pathname.slice(1));
  const isPublicFileRequest = fs.existsSync(maybePublicPath);
  const isWdsEndpointRequest = pathname.startsWith('/sockjs-node');
  return !(isPublicFileRequest || isWdsEndpointRequest);
}

Make sure the URL matches the location where the ASP.NET Core development server is listening.

Figure 20, Running the Svelte application with a proxy between the 2 development servers

Either of the two proxy setups will let you independently start each development server (Svelte and
ASP.NET Core), which will appear as a single location from the browser perspective. This example was
also interesting because it exposes tools such as webpack and webpack-dev-server, which other SPA
frameworks “hide” behind their CLIs.

Regarding the production setup, the command to generate the production bundles is the same as in the
React and Vue cases (npm run build). However, we have a problem very similar to the one we saw with
Vue. The bundles are generated inside the ClientApp/public folder instead of ClientApp/build, where the
React template expects them. We need to apply the same fixes to correct the path:

• Update the SPA RootPath defined in the ConfigureServices of the Startup class

services.AddSpaStaticFiles(configuration =>
{
    configuration.RootPath = "ClientApp/public";
});

• Update the DistFiles element of the PublishRunWebpack target inside the project file:

<DistFiles Include="$(SpaRoot)public\**" />

After these changes, publishing the project will build and host the production bundles of our Svelte
application alongside the ASP.NET Core application.

Conclusion

We have covered a lot in this article, looking at how to integrate four different SPA
frameworks (Angular, React, Vue and Svelte) with ASP.NET Core. A considerable part of the article was
dedicated to the first section, in order to understand the basic ideas behind any project template combining
a SPA framework and ASP.NET Core. The rest of the article basically demonstrated how these same basic
ideas can be applied with Angular, React, Vue and Svelte.

A good understanding of these basic concepts, plus a minimal understanding of the tooling (such as
webpack) that powers the SPA frameworks, lets us easily use any other SPA framework like Vue and Svelte,
even when there are no official templates for them.

Deciding how each application will be run during development and whether any proxy will be established
between the two applications, has a great impact on your developer experience. The default Angular/React
templates insist on taking control over the SPA development server. While it might seem convenient, there
are downsides that can cause a much slower experience. However, we have seen how easy it is to modify
this initial setup, so you can decide for yourself which approach to follow.

Finally, we have seen how all these templates will generate the production bundles from the SPA source
code and host them alongside the ASP.NET Core application. While we haven’t seen an example of the
alternative hosting models described in the initial section, I hope the article gave you enough information
to find your way!

Daniel Jimenez Garcia


Author

Daniel Jimenez Garcia is a passionate software developer with 10+ years of experience. He started as
a Microsoft developer and learned to love C# in general and ASP MVC in particular. In the latter half
of his career he worked on a broader set of technologies and platforms, while these days he is particularly
interested in .NET Core and Node.js. He is always looking for better practices and can be seen answering
questions on Stack Overflow.

Thanks to Damir Arh for reviewing this article.

PATTERNS & PRACTICES

Yacoub Massad

THE MAYBE MONAD IN C#: MORE METHODS

In this article, I will go through some methods that make working with the Maybe monad easier.

Introduction
In a previous article, The Maybe Monad, I talked about the Maybe Monad: a container that represents a
value that may or may not exist.

In that article, I ended up with an implementation of Maybe that is a struct. Here is an excerpt from the
code:

public struct Maybe<T>
{
    private readonly T value;
    private readonly bool hasValue;

    private Maybe(T value)
    {
        this.value = value;
        hasValue = true;
    }
    //...
}

I also provided a static Maybe class that makes it easier to create instances of Maybe<T>. For example, the
following code creates a Maybe<string> that contains no value, and another one that contains the value
“computer”:

Maybe<string> none = Maybe.None;

Maybe<string> some = Maybe.Some("computer");

I also talked about many other methods that make working with Maybe easier; for example, the Map and
Bind methods.

In this article, I will talk about more useful methods that are related to Maybe.

Using ValueOr and ValueOrMaybe to handle the case where there is no value
The ValueOr method can be used to provide a default value in case the Maybe object does not contain a
value. For example:

static void Test8()
{
    var errorMessage =
        GetErrorDescription(15)
        .ValueOr("Unknown error"); //Signature: T ValueOr(T defaultValue)
}

The GetErrorDescription method (discussed in the previous article) returns a Maybe<string>
representing the description of the error for the specified code. It returns None (Maybe with no value) in
case there is no defined description for the specified error code. The type of the errorMessage variable
here is string, not Maybe<string>.

errorMessage will always have a value. If the GetErrorDescription method returns None, the default
“Unknown error” value will be returned and stored inside errorMessage.

Now consider this code:

var errorMessage =
    GetErrorDescription(15)
    .ValueOr(GetDefaultErrorMessage());

Here, the default value is obtained by calling a method called GetDefaultErrorMessage. The
GetDefaultErrorMessage method will always be called here, even if GetErrorDescription returns
a value. This could be an issue if GetDefaultErrorMessage is expensive in terms of performance, or if it
has side effects that we only want to happen if GetErrorDescription returned None.

There is another overload of ValueOr that allows us to provide a default value factory function that
will only be called if the Maybe has no value:

static void Test9()
{
    var errorMessage =
        GetErrorDescription(15)
        //Signature: T ValueOr(Func<T> defaultValueFactory)
        .ValueOr(() => GetDefaultErrorMessage());
}

There is another variation of ValueOr defined in the Maybe struct. I call it ValueOrMaybe. It is used to
provide an alternative Maybe value if the Maybe object at hand has no value. For example:

static void Test10()
{
    var errorMessage =
        GetErrorDescription(15)
        //Signature: Maybe<T> ValueOrMaybe(Maybe<T> alternativeValue)
        .ValueOrMaybe(GetErrorDescriptionViaWebService(15))
        .ValueOr("Unknown error");
}

static void Test11()
{
    var errorMessage =
        GetErrorDescription(15)
        //Signature: Maybe<T> ValueOrMaybe(Func<Maybe<T>> alternativeValueFactory)
        .ValueOrMaybe(() => GetErrorDescriptionViaWebService(15))
        .ValueOr("Unknown error");
}

In Test10, we first try to get the error description via the GetErrorDescription method, which tries to
find the error description in some file. We invoke ValueOrMaybe on the returned Maybe<string> to obtain
the error description from some web service, in case the first Maybe has no value.

The difference between the overload of ValueOrMaybe used in Test10 and the one used in Test11 is that
in Test11, the GetErrorDescriptionViaWebService method will only be called if the first Maybe has
no value. In Test10, it will always be called, even if we are not going to use its value.
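A possible implementation sketch of both overloads, again assuming the value/hasValue fields of the struct:

```csharp
public Maybe<T> ValueOrMaybe(Maybe<T> alternativeValue)
    => hasValue ? this : alternativeValue;

public Maybe<T> ValueOrMaybe(Func<Maybe<T>> alternativeValueFactory)
    => hasValue ? this : alternativeValueFactory();
```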

Using ValueOrThrow to get the value or throw an exception if there is no value
Consider this example:

static void Test12()
{
    var logContents =
        GetLogContents(1)
        // Signature: T ValueOrThrow(string errorMessage)
        .ValueOrThrow("Unable to get log contents");
}

The ValueOrThrow method above will cause an exception to be thrown if GetLogContents returns None.

Otherwise, the log contents will be returned.
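ValueOrThrow could be sketched like this (the exact exception type used by the real implementation is an assumption here):

```csharp
public T ValueOrThrow(string errorMessage)
{
    if (hasValue)
        return value;

    throw new Exception(errorMessage);
}
```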

Other ValueOr variations


You can create different variations of ValueOr methods based on the type of the value. For example,
consider the ValueOrEmptyArray extension method:

public static T[] ValueOrEmptyArray<T>(this Maybe<T[]> maybe)
{
    return maybe.ValueOr(Array.Empty<T>());
}

Another example is the ValueOrEmptyString extension method:

public static string ValueOrEmptyString(this Maybe<string> maybe)
{
    return maybe.ValueOr(string.Empty);
}

Using GetItemsWithValue
Consider this example:

static void Test14()
{
    List<string> multipleLogContents =
        Enumerable.Range(1, 20)
        .Select(x => GetLogContents(x))
        //Signature: IEnumerable<T> GetItemsWithValue<T>(this IEnumerable<Maybe<T>> enumerable)
        .GetItemsWithValue()
        .ToList();
}

Here, we invoke GetLogContents twenty times. The Select method returns an enumerable of type
IEnumerable<Maybe<string>>. GetItemsWithValue enables us to obtain an IEnumerable<string> that
corresponds to the maybe objects that do have values. The ones without a value will not be included in the
returned enumerable.
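A sketch of how such an extension method could be implemented, assuming the TryGetValue method on Maybe<T> (used later in this article):

```csharp
public static IEnumerable<T> GetItemsWithValue<T>(
    this IEnumerable<Maybe<T>> enumerable)
{
    foreach (var maybe in enumerable)
    {
        // Only yield the values of the Maybe objects that have one
        if (maybe.TryGetValue(out var value))
            yield return value;
    }
}
```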

Using IfAllHaveValues
Consider this example:

static void Test15()
{
    List<string> multipleLogContents =
        Enumerable.Range(1, 20)
        .Select(x => GetLogContents(x))
        //Signature: Maybe<IEnumerable<T>> IfAllHaveValues<T>(this IEnumerable<Maybe<T>> enumerable)
        .IfAllHaveValues()
        .ValueOrThrow("Some logs are not available")
        .ToList();
}

IfAllHaveValues will return None if any item in the enumerable has no value. In this example, if any of
the 20 logs is unavailable, IfAllHaveValues would return None and ValueOrThrow would throw an
exception.

Notice the signature of IfAllHaveValues: it takes an IEnumerable<Maybe<T>> and returns a
Maybe<IEnumerable<T>>.
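One possible sketch of IfAllHaveValues, again relying on TryGetValue and assuming Maybe.None converts implicitly to Maybe<IEnumerable<T>> (as in the Maybe.None usage shown in the introduction):

```csharp
public static Maybe<IEnumerable<T>> IfAllHaveValues<T>(
    this IEnumerable<Maybe<T>> enumerable)
{
    var items = new List<T>();

    foreach (var maybe in enumerable)
    {
        // A single item without a value makes the whole result None
        if (!maybe.TryGetValue(out var value))
            return Maybe.None;

        items.Add(value);
    }

    return Maybe.Some<IEnumerable<T>>(items);
}
```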

Using ToAddIfHasValue
Consider this example:

static void Test16()
{
    Maybe<string> logMaybe = Maybe.Some("entry9");

    var list = new List<string>()
    {
        "entry1",
        logMaybe.ToAddIfHasValue(),
        "entry2"
    };
}

In the above method, we create a list of strings. We want the list to have “entry1” and “entry2”. Also, if logMaybe
has a value, we want its value to be included between “entry1” and “entry2”.

The list variable will contain a list that will either have two or three entries inside it depending on whether
logMaybe has a value. In the code we just saw, we know that it has the value “entry9”.

This is possible in C# because the list initializer syntax is extensible. You can have a value, say of type
TValue, in the initialization list as long as there is a method with a signature similar to:

void Add(this TCollection collection, TValue value)

where TCollection is the type of the list we are trying to initialize.

The following version of Test16 is equivalent to the version displayed earlier:

static void Test16()
{
    Maybe<string> logMaybe = Maybe.Some("entry9");

    var list = new List<string>();

    list.Add("entry1"); //List<T>.Add
    list.Add(logMaybe.ToAddIfHasValue()); //Our extension method
    list.Add("entry2"); //List<T>.Add
}

Take a look at a method called Add that I defined in the ExtensionMethods class. Here is how it looks:

public static void Add<T>(
    this ICollection<T> collection,
    AddIfHasValue<T> addIfHasValue)
{
    if (addIfHasValue.Maybe.TryGetValue(out var value))
    {
        collection.Add(value);
    }
}

The ToAddIfHasValue method allows us to wrap a Maybe object inside a special type, AddIfHasValue<T>.
In the first version of Test16, the value returned by logMaybe.ToAddIfHasValue() is of type
AddIfHasValue<string>. Therefore, our extension method (Add) is called to potentially add the value
inside the Maybe to the list.
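The wrapper type and the ToAddIfHasValue method can be as simple as the following sketch (the names match the article, but the exact definitions may differ in the source code):

public struct AddIfHasValue<T>
{
    public AddIfHasValue(Maybe<T> maybe) => Maybe = maybe;

    // The Add extension method shown earlier reads the Maybe from this property
    public Maybe<T> Maybe { get; }
}

public static AddIfHasValue<T> ToAddIfHasValue<T>(this Maybe<T> maybe)
    => new AddIfHasValue<T>(maybe);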

Note that we could have defined the Add method to work on Maybe<T> instead of AddIfHasValue<T>.
The code in Test16 would look like this in this case:

static void Test16()
{
    Maybe<string> logMaybe = Maybe.Some("entry9");

    var list = new List<string>()
    {
        "entry1",
        logMaybe,
        "entry2"
    };
}

The problem is that a reader of this code would expect the list to always contain three items. Using ToAddIfHasValue makes it clear to the reader that the value will only be added if it exists.

Conclusion

In this article, I talked about some convenient methods that are designed to make it easier to deal with the Maybe type. Whenever I find myself doing the same thing with Maybe over and over again, I add a special method for it. I hope you will find these methods useful!

Download the entire source code from GitHub at bit.ly/dncm45-monadexamples

Yacoub Massad
Author
Yacoub Massad is a software developer who works mainly with Microsoft technologies. Currently, he works
at Zeva International where he uses C#, .NET, and other technologies to create eDiscovery solutions. He
is interested in learning and writing about software design principles that aim at creating maintainable
software. You can view his blog posts at criticalsoftwareblog.com.

Thanks to Damir Arh for reviewing this article.

AZURE

Gouri Sohoni

MANAGE AZURE VIRTUAL MACHINES WITH AN ARM (AZURE RESOURCE MANAGER) TEMPLATE

48 DNC MAGAZINE ISSUE 45 - NOV - DEC 2019


Microsoft Azure is a cloud-based platform which offers services to build, test, deploy and manage
applications. You can find a list of all these services at docs.microsoft.com/en-us/azure/index.

One of these services helps you host your applications.

There are many hosting options available, like App Service, Virtual Machines, Containers etc. For this tutorial, I will be using the option of creating virtual machines, which can be used to deploy, test and manage applications.

We can keep these virtual machines in a desired state (DSC) by using Azure Automation Service. These
virtual machines are based on the Azure Resource Manager API.

Originally, virtual machines were created on Azure using something called the ‘Classic’ deployment model, but this has now been replaced by the ARM (Azure Resource Manager) API. The main advantage of using the ARM API is that many resources can be declared in a single JSON file, called an ARM Template.

These days, development teams, being more agile, need a way to deploy to the cloud repeatedly, consistently, and with a desired state. This Infrastructure as Code (IaC) approach is possible with ARM templates. Infrastructure as Code is suitable where it is difficult to set up machine(s) by relying on human memory, where we want to avoid human error while initializing machines, or where we need to take care of server failure automatically.

There are two ways of declaring templates - declarative and imperative. Declarative is referred to as functional, and imperative as procedural.

In this article, I will discuss:

• what is an ARM template

• how we can create a virtual machine using it

• how we can use Azure DevOps

o to put the template in source control


o build to create artifacts and
o deploy to the required stage using release pipelines.

Azure Resource Manager (ARM)

Azure Resource Manager helps us create different resources in a single group. The resources from the group can be created, deployed and deleted as a group. Management-related activities can be easily handled with the help of Azure PowerShell, Azure CLI, the Azure Portal or REST APIs.

There are two terms we use when working with Azure - resource and resource group. A resource is an
artifact which can be managed like a VM, database etc., whereas a resource group is a container for
related resources. Azure Resource Manager works as a management layer which can be used to automate
deployment and configuration of resources.

ARM Template

An ARM template can define a set of resources like a database server, database, Azure Function or even a website. These objects are declared in JSON format, and we have the option of adding them to source control. Once they are added to source control, we can manage them like any other code, with various versions. In an ARM template, we can add multiple resources. Once the template is available as part of source control, we can use it to deploy to different stages as required in your application life cycle.

An ARM template can have all the objects for a complete resource group, or a single resource from the group. It can either be deployed completely or incrementally. When complete mode is specified, all earlier objects will be deleted from the resource group if they are not part of the template, whereas in incremental mode, Resource Manager will just add the new resources.

The disadvantage of working with an ARM template is that it cannot deploy code. For example, it can create a virtual machine, but cannot deploy an application on it. It cannot use a DACPAC to directly deploy a database to a SQL Server.

Structure of an ARM Template

An ARM template is a JSON (JavaScript Object Notation) file.

• It uses declarative syntax. We can write the objects or resource we want to create or deploy and also
provide the configuration.

• The advantage of working with an ARM template is that it can be repeated across your deployments. The special term used for this is that they are idempotent. This is a mathematical term which states that an operation can be applied ‘n’ number of times without changing the outcome. This means that you can create a single template and use it for DSC (Desired State Configuration).

• The template can either have linked or nested resources. We can provide parallel deployment or serial,
as required.

We can deploy a multi-tier (3 tiered) application using a single template or can have a parent template
with three nested templates in it.

• You can use this template in Azure DevOps with CI/CD pipelines. This will provide the facility of a
continuous build of template and deployment as well.

An ARM template consists of the following parts:

• Parameters: we can provide different values for the same template which can be used in various
deployment scenarios.



• Variables: values which can be used in our template

• Resources: specify the resources to be created or deployed

• Outputs: if there are any return values from the resources

When the template is deployed, it gets converted to REST API calls.
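To make this structure concrete, here is a minimal template skeleton (the storage account resource is just an illustrative placeholder):

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": { "type": "string" }
  },
  "variables": {
    "location": "[resourceGroup().location]"
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2019-04-01",
      "name": "[parameters('storageAccountName')]",
      "location": "[variables('location')]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ],
  "outputs": {
    "storageAccountId": {
      "type": "string",
      "value": "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
    }
  }
}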

We can create ARM templates by using the Azure Portal, Azure PowerShell, Azure CLI or by using clients like
Visual Studio Code or Visual Studio. Let us see how easy it is to create a template with the Azure Portal.
Though we can create multiple kinds of resources, I will be focusing on Virtual Machine for this tutorial.

Create ARM Template for Virtual Machine using Azure Portal

Azure Portal is an easy to use, browser-based UI for creating VMs.

Prerequisites: an Azure Portal account; you can create one by using this link.

Note: it is better to create different resources in different resource groups so as to have a logical grouping. This is helpful when you are done exploring and want to delete a resource. All resources that share a lifetime are advised to be kept in a single resource group, which helps in deploying, updating or deleting them together. Each resource can be in only one resource group. You can add or remove resources, or even move a resource to another resource group, at any time. A resource group can also be used to administer access control. If required, a resource can interact with a resource from another group, but they may not share the same lifecycle (a typical example is a Web Service accessing a database).

1. Sign in to the Azure Portal

2. If you already have any existing Virtual Machine, you can just download the ARM template for it,
otherwise we can create a new Virtual Machine and use it to create a template

3. For existing Virtual Machine, use the Export template option and download the template.

4. If you do not have any Virtual Machine in your subscription, you can create a new one.

5. We need to specify required properties like the Resource Group name, OS for the VM, name for the VM, and the user id and password for the VM



6. Make sure that the size you are selecting is available for the region as well as in the subscription you
are working with.

7. By clicking on Review + Create, the validation for the Virtual Machine will be taken care of. It is advisable to open the port for RDP, as we will need it later to connect to the VM. Also ensure that you set auto-shutdown for the Virtual Machine if you do not need to keep it running 24x7.

8. Once validation is successful, you can create the VM. It will take some time to create, as the OS and
other details need to be configured on the machine.

9. Once the machine is created, use Export Template > Download option to create the ARM template.

10. We now have the ARM template with us.

Let us take a look at the template now!

The template comprises three parts - a list of parameters, variables and the actual resources. There are no variables declared here.

The schema property specifies the location of the JSON schema that describes the template language, and the content version is also mentioned. The same JSON file will be used for resource group deployment.

We can set the default values for the parameters. When we download the template from an existing
resource (Virtual Machine in this case), it automatically fetches the existing values as default values.

We can change these values if required (changing subscription will be possible if you have multiple). We
can change the values of the parameters on the fly as we will find out in the next section.

We can use the ARM template we downloaded earlier and directly put it in version control and continue
with CI/CD for it. We can even use it as a part of the Visual Studio project and then add it to version control.
I am going to create a new project and a new ARM template with it.

Let us figure out CI CD using Azure DevOps for Virtual Machine creation using ARM template. I am going
to create a new ARM template for the Virtual Machine. As it is possible to change the values of parameters
on the fly at the time of deployment, the same ARM template can be used to create another virtual machine
later.

1. Since we are going to create a build definition to copy the .json files and a release definition to do the actual deployment, create a Team Project in Azure DevOps. Use https://azure.microsoft.com/en-in/services/devops/ or https://www.visualstudio.com to create a new organization if you do not have one.

2. Create a new Team Project with the process template you prefer, and Git as the source control

3. Start Visual Studio 2017 or 2019 and make sure that you have installed components for Azure. If not
installed, run the Visual Studio Installer, modify and install them.

4. Go to Team Explorer, connect to the newly created Team Project and clone the repository. Create a new project of type Azure Resource Group (select New from Team Explorer > Solutions) and select Virtual Machine.

We can see the two json files added to the project – one for virtual machine and the other for parameters.

5. Have a look at the files created. There are parameters for the administrator user name, password, DNS name for the IP used for the virtual machine (which needs to be unique), OS for the VM etc. We can set the values for these parameters in the json file, if required. I am going to set the values for these at the time of deployment.

6. Let us commit the json files to Source Control with proper comments for the changes. Ensure that the files are available in the Repos tab in Azure DevOps. We will now create a Build Definition.

7. Select classic editor for Build Definition, as there is no template available. Select an Empty job after you
select the repository.

8. Add two tasks - Copy Files and Publish Build Artifacts. Let us configure both of them.

After a successful trigger of the build, we should get two json files available in the drop folder.

9. Now the question remains - how to deploy and create the virtual machine? Let us create a release
definition. Select New Pipeline – select template for Empty Job.

10. Provide the name for the stage and select the artefact of Build created earlier.

11. Add the Azure resource group deployment task and configure it as follows:



o Select the Azure subscription and Authorize it (make sure that you have popup enabled in your
browser so as to enter the credentials). Observe that this will create or update the resource group. I am
using the resource group created earlier when the Virtual Machine was setup. Select the appropriate
location and for the template, select the file WindowsVirtualMachine.json (which you will find in the
linked drop folder) and the parameters json file for Template Parameters

12. As I already discussed, I will be providing some values for the parameters on the fly, so I will create variables for them. Select the Variables tab and add three variables - user name, password and dns. Remember to make the password secure (by default, it will get stored in Azure Key Vault with Azure DevOps credentials). As I wanted a unique value for dns, I have used other environment variables to compose it. I have concatenated the dns value using a constant, the build id and the build definition name. You can use any other combination.
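For example, the value of the dns variable could be composed like the following (the exact prefix is an assumption; any combination that produces a unique, valid DNS label will do):

dns = dncvm$(Build.DefinitionName)$(Build.BuildId)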

13. Now we need to override the default values for the parameters in the configuration for Azure deployment. Add -adminUsername $(vmUser) -adminPassword $(Pwd) -dnsNameForPublicIP $(dns) to the Override template parameters property.

14. The complete configuration looks as follows:

15. The deployment can be either incremental or complete. The default mode is incremental which
deploys whatever is defined in the template. It does not remove or modify any resource(s) not defined
(if you have already deployed a VM and then renamed it in the template, the first one will still remain).
With complete, all existing resources will be deleted (if they are not in the template). This is ideal in a
production environment.

16. Let us create a release and check if the deployment succeeds. This is going to be a time-consuming job
as the virtual machine with the specified configuration is to be created with the specified OS to it.

17. You can login to the Azure Portal and check the Virtual Machine created. Make sure that you do not
keep the Virtual Machine running 24 X 7. You can apply the Auto-shutdown feature to it to avoid
running the machine when not required.

18. We can change the triggers for build to CI (Continuous Integration) and for release to CD (Continuous
Deployment), do some changes in json file and commit it. The build will be triggered immediately,
followed by release.

Note: Since we are using Incremental mode, make sure you are not renaming the VM! If you do so, you will end
up with two VMs!

Conclusion

In this tutorial, we discussed what Azure Resource Manager and ARM templates are, and the advantages of using them. We also saw that the ARM template structure is in JSON syntax and downloaded the ARM template for a Virtual Machine. I also showed how to create an ARM template for a VM using Visual Studio, followed by copying the required JSON files to artefacts and deploying them as an actual Virtual Machine in Azure.

Gouri Sohoni
Author
Gouri is a Trainer and Consultant specializing in Microsoft Azure DevOps. She has an experience of
over 20 years in training and consulting. She is a Microsoft MVP for Azure DevOps since 2011 and
is a Microsoft Certified Trainer (MCT). She is also certified as an Azure DevOps Engineer Expert and
Azure Developer Associate.

Gouri has conducted several corporate trainings on various Microsoft Technologies. She is a regular
author and has written articles on Azure DevOps (VSTS) and DevOps Server (VS-TFS) on
www.dotnetcurry.com. Gouri also speaks frequently for Azure VidyaPeeth, and gives talks in
conferences and events including Tech-Ed and Pune User Group (PUG).

Thanks to Subodh Sohoni for reviewing this article.



C#

Damir Arh

WHICH MAJOR NEW FEATURES DOES C# 8.0 BRING

The latest major version of C# (8.0) was released in its final form in September 2019.

At the time of release, C# 8.0 could only be used with .NET Core 3.0 and .NET Standard 2.1 projects. In the future, support will probably be added to other runtimes compatible with .NET Standard 2.1 (i.e. Xamarin, UWP, Unity and Mono).

No official support is planned for .NET framework. Although the language version can be manually set to v8.0 in a .NET framework project file, some language features won’t work in this case.

Visual Studio 2019 16.3 or newer is required to create or open .NET Core 3.0 and .NET Standard 2.1 projects. In Visual Studio Code, the latest version of the C# extension is required for full language support.

Ranges and Indices

C# 8.0 introduces new syntax for expressing a range of values:

Range range = 1..5;

The starting index of a range is inclusive, and the ending index is exclusive. Alternatively, the ending can be specified as an offset from the end:

Range range = 1..^1;

The new type can be used as an indexer for arrays. Both ranges specified above will give the same result when used with the following snippet of code:


var array = new[] { 0, 1, 2, 3, 4, 5 };
var subArray = array[range];

Assert.AreEqual(new int[] { 1, 2, 3, 4 }, subArray);

Open-ended ranges are supported as well:

Assert.AreEqual(new int[] { 1, 2, 3, 4, 5 }, array[1..]);

Assert.AreEqual(new int[] { 0, 1, 2, 3, 4 }, array[..^1]);

The syntax for specifying an offset from the end is not limited to ranges. It can also be used to specify an index, again as an offset:

• from the start:

Index index = 5;

• or from the end:

Index index = ^1;

When used as an indexer for arrays, the value at the given offset will be returned. Both the indices shown
above specify the same value in the following array:

var array = new[] { 0, 1, 2, 3, 4, 5 };
Assert.AreEqual(5, array[index]);

There’s no need to add additional indexers for the new Range and Index types to make existing types
usable with the new syntax. The compiler implicitly adds support for the new indexers to the types which
already have the following members:

• For the Index indexer, the int indexer and either the Length or the Count property are required. The
int indexer can then be used instead of the missing Index indexer:

int offset = index.GetOffset(array.Length);
Assert.AreEqual(array[index], array[offset]);

• For the Range indexer, the Slice method and again either the Length or the Count property are
required. The Slice method can then be used instead of the missing Range indexer:

(int offset, int length) = range.GetOffsetAndLength(span.Length);
Assert.AreEqual(span[range].ToArray(), span.Slice(offset, length).ToArray());

The String type is a special case.

It supports the new indexer syntax although it doesn’t have all the required members listed above. For the
Range indexer, the Substring method is used instead of the Slice method:

Assert.AreEqual('5', "012345"[^1]);
Assert.AreEqual("1234", "012345"[1..^1]);

Nullable Reference Types

Nullable reference types were already considered in the early stages of C# 7.0 development but were postponed until the next major version (i.e. till C# 8.0). The goal of this feature is to help developers avoid unhandled NullReferenceException exceptions.

The core idea is to allow variable type definitions to specify whether they can have null value assigned to
them or not:

IWeapon? canBeNull;
IWeapon cantBeNull;

Assigning a null value or a potential null value to a non-nullable variable would result in a compiler
warning (the developer could configure the build to fail in case of such warnings, to be extra safe):

canBeNull = null; // no warning
cantBeNull = null; // warning
cantBeNull = canBeNull; // warning

Similarly, warnings would be generated when dereferencing a nullable variable without checking it for
null value first:

canBeNull.Repair(); // warning
cantBeNull.Repair(); // no warning
if (canBeNull != null)
{
canBeNull.Repair(); // no warning
}

The problem with such a change is that it breaks existing code: the feature assumes that all variables
from before the change, are non-nullable. To cope with it, static analysis for null safety can be selectively
enabled with a compiler switch at the project level.



Developers can opt-in for nullability checking when they are ready to deal with the resulting warnings. Still,
this is in their own best interest, as the warnings might reveal potential bugs in their code.

The switch is implemented as a property in the project file. The feature can be enabled by adding the
following line to the first PropertyGroup element of the project file:

<Nullable>enable</Nullable>

Additionally, the feature can be enabled selectively inside an individual file by using the #nullable
directive:

#nullable enable
// feature enabled
#nullable disable
// feature disabled
#nullable restore
// feature restored to project-level setting

Asynchronous Streams

C# already has support for iterators (see the tutorial “How to implement a method returning an IEnumerable?”) and asynchronous methods (see the tutorial “What is the recommended asynchronous pattern in .NET?”).

In C# 8.0, the two are combined into asynchronous streams. They are based on asynchronous versions of
the IEnumerable and IEnumerator interfaces:

public interface IAsyncEnumerable<out T>
{
    IAsyncEnumerator<T> GetAsyncEnumerator(
        CancellationToken cancellationToken = default);
}

public interface IAsyncEnumerator<out T> : IAsyncDisposable
{
    T Current { get; }

    ValueTask<bool> MoveNextAsync();
}

Additionally, an asynchronous version of the IDisposable interface is required for consuming the
asynchronous iterators:

public interface IAsyncDisposable
{
    ValueTask DisposeAsync();
}

This allows the following code to be used for iterating over the items:

var asyncEnumerator = GetValuesAsync().GetAsyncEnumerator();
try
{
    while (await asyncEnumerator.MoveNextAsync())
    {
        var value = asyncEnumerator.Current;
        // process value
    }
}
finally
{
    await asyncEnumerator.DisposeAsync();
}

It’s very similar to the code we would use for consuming regular synchronous iterators. However, it might not look familiar, because we typically just use the foreach statement instead. An asynchronous version of the statement is available for asynchronous iterators:

await foreach (var value in GetValuesAsync())
{
    // process value
}

Just like with the foreach statement, the compiler generates the required code itself.

It’s also possible to implement asynchronous iterators using the yield keyword, similar to how it can be
done for synchronous iterators:

private async IAsyncEnumerable<int> GetValuesAsync()
{
    for (var i = 0; i < 10; i++)
    {
        await Task.Delay(100);
        yield return i;
    }
}

Cancellation tokens are also supported with this syntax. The EnumeratorCancellation attribute
can be used to annotate the parameter which will receive the CancellationToken passed to the
GetAsyncEnumerator method:

private async IAsyncEnumerable<int> GetValuesCancellableAsync(
    [EnumeratorCancellation] CancellationToken token = default)
{
    for (var i = 0; i < 10; i++)
    {
        await Task.Delay(1000, token);
        yield return i;
    }
}

When using await foreach with such an asynchronous iterator, the CancellationToken can be passed
to the GetAsyncEnumerator method by using the WithCancellation extension method:

await foreach (var value in GetValuesCancellableAsync().WithCancellation(token))
{
    // process value
}

LINQ methods for the new IAsyncEnumerable<T> interface are available in the standalone System.Interactive.Async NuGet package which is a part of the Reactive Extensions project.

Default Interface Methods

Before C# 8.0, interfaces were not allowed to contain method implementations. They were restricted to method declarations:

public interface ISample
{
    void M1(); // allowed in C# 7
    void M2() => Console.WriteLine("ISample.M2"); // not allowed in C# 7
}

To achieve similar functionality, abstract classes could be used instead:

public abstract class SampleBase
{
    public abstract void M1();
    public virtual void M2() => Console.WriteLine("SampleBase.M2");
}

In spite of this, C# 8.0 added support for default interface methods, i.e. method implementations using the
syntax in the first example above. This allows scenarios not supported by abstract classes.

A library author can now extend an existing interface with a default interface method implementation,
instead of a method declaration.

This has the benefit of not breaking existing classes, which implement the old version of the interface. If
they don’t implement the new method, they can still use the default interface method implementation.
When they want to change that behavior, they can override it, but no code change is required just because
the interface has been extended.
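For instance, a library author could extend an interface like this (an invented logging interface, purely for illustration):

public interface ILogger
{
    void Log(string message);

    // Added in a later version of the library; existing implementations
    // keep compiling and inherit this behavior
    void LogError(string message) => Log("ERROR: " + message);
}

public class ConsoleLogger : ILogger
{
    // Written against the old version of the interface;
    // no change is needed for LogError
    public void Log(string message) => Console.WriteLine(message);
}

Note that the default implementation is only accessible through the interface type, e.g. ILogger logger = new ConsoleLogger(); logger.LogError("something failed");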

Since multiple inheritance is not allowed, a class can only derive from a single base abstract class.

In contrast to that limitation, a class can implement multiple interfaces. If these interfaces implement
default interface methods, this effectively allows classes to compose behavior from multiple different
interfaces – this concept is known as traits and is already available in many programming languages.
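A small sketch of such composition (the interfaces and class are made up for the example):

public interface ICanWalk
{
    string Name { get; }
    void Walk() => Console.WriteLine($"{Name} walks.");
}

public interface ICanSwim
{
    string Name { get; }
    void Swim() => Console.WriteLine($"{Name} swims.");
}

// Duck composes behavior from both interfaces without providing
// any method bodies of its own
public class Duck : ICanWalk, ICanSwim
{
    public string Name => "Duck";
}

As before, Walk and Swim can only be invoked through the respective interface types.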

Pattern Matching

Some pattern matching features have already been added to C# in version 7.0. The support has been further extended in C# 8.0.

Three new pattern types have been added:

• Positional patterns allow deconstruction of matched types in a single expression. They depend on the
Deconstruct method implemented by a type (you can read more about the Deconstruct method in my
book “How did tuple support change with C# 7?”):

if (sword is Sword(10, var durability))
{
    // code executes if Damage = 10
    // durability has value of sword.Durability
}
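For the pattern above to compile, the Sword type needs a matching Deconstruct method. A minimal sketch (the property names are assumed from the comments in the example):

public class Sword
{
    public int Damage { get; set; }
    public int Durability { get; set; }

    // Enables positional patterns and deconstruction into (damage, durability)
    public void Deconstruct(out int damage, out int durability)
    {
        damage = Damage;
        durability = Durability;
    }
}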

• Property patterns are similar to positional patterns but don’t require the Deconstruct method. As a result, the syntax to achieve equivalent functionality to the example above is a bit longer, because it must explicitly specify the property names:

if (sword is Sword { Damage: 10, Durability: var durability })
{
    // code executes if Damage = 10
    // durability has value of sword.Durability
}

• Tuple patterns allow matching of more than one value in a single pattern matching expression:

switch (state, transition)
{
    case (State.Running, Transition.Suspend):
        state = State.Suspended;
        break;
}

Additionally, an expression version of the switch statement allows terser syntax when the only result of
pattern matching is assigning a value to a single variable:

state = (state, transition) switch
{
    (State.Running, Transition.Suspend) => State.Suspended,
    (State.Suspended, Transition.Resume) => State.Running,
    (State.Suspended, Transition.Terminate) => State.NotRunning,
    (State.NotRunning, Transition.Activate) => State.Running,
    _ => throw new InvalidOperationException()
};

The discard character (_) is used for the default case. If it’s not specified in the expression and the value
doesn’t match any of the other cases, a SwitchExpressionException will be thrown.

Conclusion

Most of the new language features in C# 8.0 only bring alternative, simpler syntax for existing functionality. Still, some of them (most notably nullable reference types and default interface methods) can change how we design and write our code.

Damir Arh
Author
Damir Arh has many years of experience with Microsoft development tools; both in
complex enterprise software projects and modern cross-platform mobile applications.
In his drive towards better development processes, he is a proponent of test driven
development, continuous integration and continuous deployment. He shares his
knowledge by speaking at local user groups and conferences, blogging, and answering
questions on Stack Overflow. He is an awarded Microsoft MVP for .NET since 2012.

Thanks to Yacoub Massad for reviewing this article.



ASP.NET CORE

Mahesh Sabnis

DEVELOPING AN APPLICATION USING ASP.NET CORE 3.0, EF CORE 3.0, AZURE COSMOS DB AND ANGULAR.JS



For developing modern web applications, a robust application framework as well as a solid runtime are needed, both on the server-side and at the front-end.

ASP.NET Core 3.0 is one such open-source framework which integrates seamlessly with client-side
frameworks and libraries, including Blazor, React, Angular, Vue.js etc.

Editorial Note: If you are new to .NET Core 3.0, read What’s New in .NET Core 3.0?

.NET Core 3.0 introduces various new features, some of them being:

• Single-File Executable

• Assembly linking

• Tiered compilation

• Desktop Applications like WPF and WinForms

• …and many more

All these new features are useful for modern application development.

Web applications often have complex requirements nowadays. Some of these requirements demand that the application must be cross-platform, application data must be stored in relational as well as NoSQL databases, the front-end must be modular and highly responsive, and so on.

.NET Core was created to be cross-platform and releases from .NET Core v2.0 onwards, help to design
solutions to fulfill most of these requirements.

From ASP.NET Core 2.0 onwards, application templates provide an integration with front-end frameworks like Angular, React, and React with Redux. We can make use of these templates to develop applications as per the requirements from users.

Editorial Note: In case you are interested in a Vue.js template, check this tutorial: ASP.NET Core Vue CLI
Templates.

Figure 1 shows a projected application implementation we will be building in this tutorial.

Figure 1: ASP.NET Core application with EF Core, SQL Server, CosmosDB and Angular

As seen in Figure 1, in .NET Core 2.2, EF Core 2.2 could be used only as an ORM for relational databases like SQL Server. So, it was necessary for developers to write a separate data access layer for accessing data from Azure Cosmos DB, which is generally classified as a NoSQL database. This means that our .NET Core 2.2 application would need separate Data Access Layers for relational databases, as well as for a NoSQL one.

Editorial Note: Those new to Cosmos DB can read Azure Cosmos DB - Deep Dive.

In .NET Core 3.0, there is a cool feature provided in EF Core 3.0 which can be used to map the entity classes
to a Cosmos DB NoSQL database and generate the database in the traditional code-first approach.

We can make use of DbContext class to map with the Cosmos DB database collection.

.NET Core 3.0 provides the Microsoft.EntityFrameworkCore.Cosmos package. Its assembly contains the
CosmosDbContextOptionsExtensions class with the overloaded UseCosmos method. UseCosmos is an
extension method for the DbContextOptionsBuilder class, which is used to configure the
Cosmos DB database for the application.

The UseCosmos method accepts the following parameters:

• The Cosmos DB account endpoint - the application connects to Cosmos DB using this endpoint

• The account key - used to authenticate the client application

• The database name - the database to which the application connects

Using EF Core 3.0, we can directly access the Azure Cosmos DB database and perform CRUD operations, and
use the code-first approach of EF Core to create the database and collection. Using the ASP.NET Core
3.0 Angular template and EF Core 3.0 with Cosmos DB, the architecture in Figure 1 changes to the one shown
in Figure 2:

Figure 2: Using EF Core 3.0 as ORM for Cosmos DB

70 DNC MAGAZINE ISSUE 45 - NOV - DEC 2019


Developing an application using ASP.NET Core 3.0, EF Core 3.0 and Azure Cosmos DB
Let’s first create a Cosmos DB database account so that we can have an Endpoint and Key to access Cosmos
DB in our application.

Step 1: Open the Azure portal at portal.azure.com (make sure you have an Azure subscription) and sign in
with your credentials.

Step 2: In the portal, click the Create a resource link on the top left (see Figure 3). In the search
box on this blade, enter Azure Cosmos DB, and the UI will display the Azure Cosmos DB option as shown in
Figure 3.

Figure 3: The Cosmos DB resource option in the portal

Click on the Azure Cosmos DB link that is marked red in the above figure. This will open a new blade for
creating an Azure Cosmos DB Account as shown in Figure 4.

Figure 4: Create an Azure Cosmos DB Account

To create an Azure Cosmos DB Account, you need to enter the Azure Subscription and Resource Group (if
you have not already created a resource group, it can be created using Create new link provided below the
Resource Group combobox).

You can then enter an Account Name of your choice and select the Cosmos DB API. In our case, we
will be using Core (SQL), which stores data as JSON documents. Select a Location for the account
and fill in the other information as per your requirements. To create the account, click on the Review + create button.
Once the Cosmos DB Account is created, we can see its details as shown in Figure 5.

Figure 5: The Azure Cosmos DB Account



Figure 5 shows the Cosmos DB Account details. We can use Data Explorer (marked red) to view all the
databases and their containers. The Keys (marked red) in Figure 5 are authentication keys so that the client
application can connect with Cosmos DB and perform operations like create database, create container, etc.

Creating an ASP.NET Core 3.0 application with the Angular Template
As we have created a Cosmos DB Account, it’s time for us to create an ASP.NET Core 3.0 application with an
Angular template. This template was introduced in ASP.NET Core 2.2.

We will create Web APIs using ASP.NET Core 3.0. These Web APIs will access Cosmos DB. The Angular
application will be the front-end for our application. We will create an Angular application that will capture
the profile information of Students and this profile information will be stored in Cosmos DB as JSON
documents. The overall structure of the application is explained in Figure 6.

Figure 6: The actual application

Step 1: Open Visual Studio 2019 and create a new ASP.NET Core Web Application. Name this application as
ProfileAppNet30. Select the Angular Template for the application. Make sure that you select ASP.NET Core
3.0 as the project version as shown in Figure 7.

Figure 7: The ASP.NET Core 3.0 app with Angular template

Note: Please disable the option “Configure for HTTPS” if you are using Kestrel to avoid CORS errors. Otherwise
you will have to change the protocol to https and port to 5001 in the Angular application.

Open the Solution Explorer to see the project structure with references of assemblies targeted to .NET Core
3.0 as shown in Figure 8.

Figure 8: The ASP.NET Core 3.0 Project structure

The ClientApp folder shows the Angular application structure. If you look in the package.json file, you will
see that the Angular version supported for this template is Angular 8.0.0.

Step 2: Since we need to access Cosmos DB using EF Core, we need to add EF Core package to the project.
(Note that the EF Core package is not present by default in the ASP.NET Core 3.0 Project Template.)

Right click on Dependencies and select Manage NuGet Packages. Search for the
Microsoft.EntityFrameworkCore.Cosmos package. Once the package is found, install it as shown in Figure 9.

Figure 9: Installing Microsoft.EntityFrameworkCore.Cosmos package

Step 3: Modify the appsettings.json file by adding key/value pairs for the Cosmos DB settings: EndPoint,
AccountKey and DatabaseName. The EndPoint and AccountKey can be found on the account's
Settings > Keys blade in the portal; the DatabaseName is the name of the database the application will use.



"CosmosDbSettings": {
"EndPoint": "https://COSMOSDB-ACCOUNT-NAME-HERE.documents.azure.com:443/",
"AccountKey": "YOUR-KEY-HERE",
"DatabaseName": "ProfilesDatabase"
}

Listing 1: appsettings.json for Cosmos DB Settings

Step 4: In the project, add a folder named Models and in this folder, add a new class file and name it as
ModelClasses.cs. In this class file, add the following code:

using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

namespace ProfileAppNet30.Models
{
public class Education
{
[Required(ErrorMessage = "Degree is required")]
public string Degree { get; set; }
[Required(ErrorMessage = "Specialization is required")]
public string Specialization { get; set; }
[Required(ErrorMessage = "College Or School is required")]
public string CollegeOrSchool { get; set; }
[Required(ErrorMessage = "Year Of Admission is required")]
public int YearOfAdmission { get; set; }
[Required(ErrorMessage = "Year Of Passing is required")]
public int YearOfPassing { get; set; }
[Required(ErrorMessage = "Grade is required")]
public string Grade { get; set; }
}
public class WorkExperience
{
public string CompanyName { get; set; }
public string Designation { get; set; }
public DateTime DateOfJoin { get; set; }
public DateTime DateOfLeaving { get; set; }
public int YearsOfExperience { get; set; }
}

public class ProfileMaster
{
public Guid Id { get; set; }
[Required(ErrorMessage ="FirstName is required")]
public string FirstName { get; set; }
public string MiddleName { get; set; }
[Required(ErrorMessage = "LastName is required")]
public string LastName { get; set; }
[Required(ErrorMessage = "Gender is required")]
public char Gender { get; set; }
[Required(ErrorMessage = "ContactNumber is required")]
public int ContactNumber { get; set; }
[Required(ErrorMessage = "MaritalStatus is required")]
public string MaritalStatus { get; set; }
[Required(ErrorMessage = "DateOfBirth is required")]
public DateTime DateOfBirth { get; set; }

public List<Education> Educations { get; set; }
public List<WorkExperience> Experience { get; set; }
}
}

Listing 2: The Model classes. These classes will be used to map with Cosmos DB to create JSON documents

The Education class contains properties for storing the end user's education details. The
WorkExperience class contains properties to store the end user's work experience. The ProfileMaster
class contains properties for storing the end user's personal information, along with a list of
Education details and a list of WorkExperience entries. This models a One-To-Many relationship
between ProfileMaster and each of the Education and WorkExperience classes.

We expect each JSON document in the collection to embed the Education details and WorkExperience
entries for a single profile.
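To make that shape concrete, here is a sketch (in TypeScript, with illustrative values) of how a single profile document with both collections embedded might look. The field names mirror the model classes above; the id value is made up, and the extra metadata fields Cosmos DB and the EF provider add to stored documents are omitted.

```typescript
// A sketch of one embedded profile document; values are illustrative only.
const sampleProfileDocument = {
  Id: "b1a6f3a0-0000-4000-8000-000000000000",
  FirstName: "Jane",
  MiddleName: "",
  LastName: "Doe",
  Gender: "F",
  ContactNumber: 5551234,
  MaritalStatus: "Unmarried",
  DateOfBirth: "1995-06-01T00:00:00Z",
  // One-To-Many data is embedded, not stored in separate documents
  Educations: [
    {
      Degree: "B.E.",
      Specialization: "Computer",
      CollegeOrSchool: "Sample College",
      YearOfAdmission: 2013,
      YearOfPassing: 2017,
      Grade: "First"
    }
  ],
  Experience: [
    {
      CompanyName: "Contoso",
      Designation: "Engineer",
      DateOfJoin: "2017-07-01T00:00:00Z",
      DateOfLeaving: "2019-07-01T00:00:00Z",
      YearsOfExperience: 2
    }
  ]
};

// Round-trip through JSON to confirm the embedded shape survives serialization.
const serialized: string = JSON.stringify(sampleProfileDocument);
const parsed = JSON.parse(serialized);
console.log(parsed.Educations.length, parsed.Experience.length);
```

The point to notice is that the nested arrays live inside the same document, which is what the owned-entity mapping in the next step produces.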

Step 5: In the Models folder, add a new class file and name it as ProfileDbContext.cs. Add the following
code in this file:

using Microsoft.EntityFrameworkCore;

namespace ProfileAppNet30.Models
{
public class ProfileDbContext : DbContext
{
public DbSet<ProfileMaster> Profiles { get; set; }

public ProfileDbContext(DbContextOptions<ProfileDbContext> options) :
base(options)
{ }

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
// the container name
modelBuilder.HasDefaultContainer("Profiles");
// ProfileMaster has many educations and Many Experiences
modelBuilder.Entity<ProfileMaster>().OwnsMany(e => e.Educations);
modelBuilder.Entity<ProfileMaster>().OwnsMany(e=>e.Experience);
}
}
}

Listing 3: The ProfileDbContext class contains code for EF Core mapping with Cosmos DB.

Editorial Note: If you have used EF Core earlier, then you will find the code familiar. If not, here’s an old
albeit useful tutorial.

The ProfileDbContext class is derived from DbContext class. This class is responsible for connection
creation and mapping with the database. The class contains a DbSet property for ProfileMaster model
class. This will map with the container in Cosmos DB.

The OnModelCreating() method sets the container name to Profiles and defines the document-creation
strategy: using OwnsMany, it configures a One-To-Many relationship between ProfileMaster and each of
the Education and WorkExperience classes, so that both collections are embedded in the profile document.

Step 6: In the project, add a new folder and name it as Services. In this folder, add an interface file and name
it as ICosmosDbService.cs. Then add a class file, and name this class file as CosmosDbService.cs. Add the
following code in ICosmosDbService.cs

using ProfileAppNet30.Models;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace ProfileAppNet30.Services
{
public interface ICosmosDbService<TEntity, in TPk> where TEntity: class
{
Task<IEnumerable<TEntity>> GetAsync();
Task<TEntity> GetAsync(TPk id);
Task<TEntity> CreateAsync(TEntity entity);
}
}

Listing 4: The repository interface.

Add the following code in CosmosDbService.cs file

using Microsoft.EntityFrameworkCore;
using ProfileAppNet30.Models;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace ProfileAppNet30.Services
{
public class CosmosDbService : ICosmosDbService<ProfileMaster, string>
{
private readonly ProfileDbContext ctx;

public CosmosDbService(ProfileDbContext ctx)
{
this.ctx = ctx;
// this will make sure that the database is created
ctx.Database.EnsureCreated();
}

public async Task<ProfileMaster> CreateAsync(ProfileMaster entity)
{
entity.Id = Guid.NewGuid();
var response = await ctx.Profiles.AddAsync(entity);
await ctx.SaveChangesAsync();
return response.Entity;
}

public async Task<IEnumerable<ProfileMaster>> GetAsync()
{
var profiles = await ctx.Profiles.ToListAsync();
return profiles;
}

public async Task<ProfileMaster> GetAsync(string id)
{
// the entity key is a Guid, so parse the incoming string id
var profile = await ctx.Profiles.FindAsync(Guid.Parse(id));
return profile;
}
}
}

Listing 5: The repository class.

The ICosmosDbService interface is a generic interface with two type parameters. It defines methods for
reading and writing data. The interface is implemented by the CosmosDbService class with ProfileMaster
as the TEntity parameter and string as the TPk parameter. The class has the ProfileDbContext injected
through its constructor, where EnsureCreated() makes sure that the database is created in Cosmos DB if it
does not already exist. The remaining methods contain familiar EF Core code for performing read and write
operations against the database.
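One benefit of programming against the two-parameter generic interface is that consumers can be exercised without a Cosmos DB account. As a sketch, here is a TypeScript analogue of the interface together with an in-memory implementation backed by a Map (the names CosmosDbService, InMemoryProfileService and the counter-based id are our own stand-ins, not part of the article's code).

```typescript
// TypeScript analogue of the C# ICosmosDbService<TEntity, TPk> interface.
interface CosmosDbService<TEntity, TPk> {
  getAll(): Promise<TEntity[]>;
  getById(id: TPk): Promise<TEntity | undefined>;
  create(entity: TEntity): Promise<TEntity>;
}

interface Profile { Id: string; FirstName: string; LastName: string; }

// In-memory implementation backed by a Map, mirroring CreateAsync's
// behavior of assigning a new id before saving.
class InMemoryProfileService implements CosmosDbService<Profile, string> {
  private store = new Map<string, Profile>();
  private counter = 0;

  async getAll(): Promise<Profile[]> {
    return Array.from(this.store.values());
  }
  async getById(id: string): Promise<Profile | undefined> {
    return this.store.get(id);
  }
  async create(entity: Profile): Promise<Profile> {
    entity.Id = `profile-${++this.counter}`; // stand-in for Guid.NewGuid()
    this.store.set(entity.Id, entity);
    return entity;
  }
}
```

A controller or component written against the interface can be unit-tested with this fake and switched to the real Cosmos-backed service in production.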

Step 7: Modify the Startup.cs file by adding the following code to the ConfigureServices() method of the
Startup class:

// add this line to make sure that controllers can
// suppress the default JSON naming convention policy
services.AddControllers().AddJsonOptions(options => {
options.JsonSerializerOptions.PropertyNamingPolicy = null;
});
// register the ProfileDbContext class in DI Container
services.AddDbContext<ProfileDbContext>(options =>
{
options.UseCosmos(Configuration["CosmosDbSettings:EndPoint"].ToString(),
Configuration["CosmosDbSettings:AccountKey"].ToString(),
Configuration["CosmosDbSettings:DatabaseName"].ToString());
});

services.AddScoped<ICosmosDbService<ProfileMaster,string>,CosmosDbService>();

Listing 6: Registering ProfileDbContext class in DI Container along with CosmosDbService class and code for suppressing the
default JSON serialization naming policy
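To see why setting PropertyNamingPolicy to null matters: the Angular model classes in this tutorial use PascalCase property names, while the server's default JSON policy would emit camelCase. The mismatch can be reproduced in a few lines of TypeScript (toCamelCase here is our own stand-in for the server's default naming policy, not an exact reimplementation of it).

```typescript
// Stand-in for the server's default camelCase naming policy.
function toCamelCase(obj: Record<string, unknown>): Record<string, unknown> {
  const result: Record<string, unknown> = {};
  for (const key of Object.keys(obj)) {
    result[key.charAt(0).toLowerCase() + key.slice(1)] = obj[key];
  }
  return result;
}

const serverEntity = { FirstName: "Jane", LastName: "Doe" };

// With the default policy, the client would receive camelCase keys,
// so lookups against the PascalCase Angular models fail:
const camel = toCamelCase(serverEntity);
console.log(camel["FirstName"]); // undefined

// With PropertyNamingPolicy = null, PascalCase is preserved and the
// Angular models bind as-is:
console.log(serverEntity["FirstName"]); // "Jane"
```

Suppressing the policy keeps the serialized JSON aligned with the client-side classes defined later in Listing 9.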

Step 8: In the Controllers folder, add a new empty Web API controller and name it as ProfilesController.cs. In
this controller, add code as shown in Listing 7:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using ProfileAppNet30.Models;
using ProfileAppNet30.Services;
namespace ProfileAppNet30.Controllers
{
[Route("api/[controller]")]
[ApiController]
public class ProfilesController : ControllerBase
{
private readonly ICosmosDbService<ProfileMaster, string> service;
public ProfilesController(ICosmosDbService<ProfileMaster, string> service)
{
this.service = service;
}

[HttpGet]
public async Task<IActionResult> Get()
{
try
{
var response = await service.GetAsync();
return Ok(response);
}
catch (Exception ex)
{
return BadRequest(ex.Message);
}
}

[HttpGet("{id}")]
public async Task<IActionResult> Get(string id)
{
try
{
var response = await service.GetAsync(id);
return Ok(response);
}
catch (Exception ex)
{
return BadRequest(ex.Message);
}
}

[HttpPost]
public async Task<IActionResult> Post(ProfileMaster profile)
{
try
{
if (ModelState.IsValid)
{
var response = await service.CreateAsync(profile);
return Ok(response);
}
else
{
return BadRequest(ModelState);
}
}
catch (Exception ex)
{
return BadRequest(ex.Message);
}
}
}
}

Listing 7: The ProfilesController code

The controller in Listing 7 has the CosmosDbService injected into its constructor. It contains HTTP GET
and POST methods for returning profile information to, and accepting it from, the Angular client
application.

Creating Angular Client Code

Step 1: Expand the ClientApp folder. In this folder, we have the src folder that contains the app sub-folder.
In the app folder, add three folders named models, profile and services.

Since we will be using the Angular Material library for rich UI elements such as dialog boxes, we need
the @angular/cdk and @angular/material package dependencies in the project. Open a command prompt,
navigate to the ClientApp folder of the current application and run the following command to install these packages.

Note: Node.js must be installed on the machine to run npm command.

npm install --save-dev @angular/cdk @angular/material

Step 2: In the models folder, add a new TypeScript file and name it as app.constants.ts. This file will contain
constant arrays:

export const Degrees = [
'B.Sc.', 'M.Sc.', 'BCS', 'MCS', 'MCA',
'B.E.', 'M.E.', 'B.A.', 'M.A.', 'M.Com.',
'B.Com.', 'MBA', 'MPM'
];

export const Specializations = [
'Computer', 'Mechanical', 'Electrical', 'Civil',
'Petrochemical', 'Chemical', 'Information Technology',
'Mathematics', 'Biology', 'Physics', 'Chemistry',
'Finance', 'Marketing', 'HRD', 'Accounts'
];

export const AdmissionYear = [
1980, 1981, 1982, 1983, 1984, 1985, 1986,
1987, 1988, 1989, 1990, 1991, 1992, 1993,
1994, 1995, 1996, 1997, 1998, 1999, 2000,
2001, 2002, 2003, 2004, 2005, 2006, 2007,
2008, 2009, 2010, 2011, 2012, 2013, 2014,
2015, 2016, 2017, 2018, 2019
];
export const PassingYear = [
1980, 1981, 1982, 1983, 1984, 1985, 1986,
1987, 1988, 1989, 1990, 1991, 1992, 1993,
1994, 1995, 1996, 1997, 1998, 1999, 2000,
2001, 2002, 2003, 2004, 2005, 2006, 2007,
2008, 2009, 2010, 2011, 2012, 2013, 2014,
2015, 2016, 2017, 2018, 2019
];
export const Grades = [
'First', 'Second', 'Third'
];

export const Experiences = [1, 2, 3, 4, 5, 6, 7, 8, 9,
10, 11, 12, 13, 14, 15, 16,
17, 18, 19, 20, 21, 22, 23,
24, 25, 26, 27, 28, 29, 30];

export const Genders = [
'M', 'F'
];

export const MaritalStatusInfo = [
'Married', 'Unmarried'
];

Listing 8: Array constants for showing constant data in Angular View

Listing 8 contains constants for information (e.g. Degree, Specialization, etc.) that we need to capture from
the end user.

Step 3: In the models folder, add a new TypeScript file and name it as app.models.ts. This file will contain
TypeScript classes for ProfileMaster, Education and WorkExperience, corresponding to the server-side
model classes in our ASP.NET Core application.

export class Education {
constructor(
public Degree: string,
public Specialization: string,
public CollegeOrSchool: string,
public YearOfAdmission: number,
public YearOfPassing: number,
public Grade: string,
) { }
}

export class WorkExperience {
constructor(
public CompanyName: string,
public Designation: string,
public DateOfJoin: Date,
public DateOfLeaving: Date,
public YearsOfExperience: number
) { }
}

export class ProfileMaster {
constructor(
public Id: string,
public FirstName: string,
public MiddleName: string,
public LastName: string,
public Gender: string,
public ContactNumber: number,
public MaritalStatus: string,
public DateOfBirth: Date,
public Educations: Array<Education>,
public Experience: Array<WorkExperience>
) { }
}

Listing 9: Model classes on the client
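The constructor-parameter-property style used in Listing 9 makes building a payload concise, because TypeScript turns each constructor parameter marked public into an instance field. A small usage sketch follows; the classes are abridged to a few fields here and the values are illustrative.

```typescript
// Abridged versions of the Listing 9 client-side model classes.
class Education {
  constructor(
    public Degree: string,
    public Specialization: string,
    public CollegeOrSchool: string,
    public YearOfAdmission: number,
    public YearOfPassing: number,
    public Grade: string,
  ) { }
}

class ProfileMaster {
  constructor(
    public Id: string,
    public FirstName: string,
    public LastName: string,
    public Educations: Array<Education>,
  ) { }
}

// Constructor parameter properties become instance fields, so the
// serialized payload keeps the PascalCase names the Web API expects.
const profile = new ProfileMaster(
  "", "Jane", "Doe",
  [new Education("B.E.", "Computer", "Sample College", 2013, 2017, "First")]
);
console.log(JSON.stringify(profile));
```

Because the field names stay PascalCase, this payload lines up with the server-side model classes without any mapping layer, provided the server keeps PropertyNamingPolicy set to null as configured earlier.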

Step 4: We will create an Angular service to make HTTP requests to the Web API. To do so, in the services
folder, add a new TypeScript file and name it as app.profile.service.ts. In this file, add the code as shown in
Listing 10.

Note: Run the application in Kestrel instead of IIS Express (the default used by VS 2019), otherwise you
will get a CORS error. If the application automatically redirects to HTTPS, change the baseUrl value in
the code to https://localhost:5001.

import { Injectable } from '@angular/core';
import { HttpClient, HttpHeaders } from '@angular/common/http';
import { Observable } from 'rxjs';
import { ProfileMaster } from '../models/app.models';

@Injectable({
providedIn:'root'
})
export class ProfileService {
private baseUrl: string
constructor(private http: HttpClient) {
this.baseUrl = 'http://localhost:5000';
}

getProfiles(): Observable<ProfileMaster[]> {
let response: Observable<ProfileMaster[]> = null;
response = this.http.get<ProfileMaster[]>(`${this.baseUrl}/api/Profiles`);
return response;
}

getProfile(id: string): Observable<ProfileMaster> {
let response: Observable<ProfileMaster> = null;
response = this.http.get<ProfileMaster>(`${this.baseUrl}/api/Profiles/${id}`);
return response;
}

postProfile(profile: ProfileMaster): Observable<ProfileMaster> {
let response: Observable<ProfileMaster> = null;
profile.Id = '00000000-0000-0000-0000-000000000000';

const options = {
headers: new HttpHeaders({ 'Content-Type':'application/json'})
};
response = this.http.post<ProfileMaster>(`${this.baseUrl}/api/Profiles`,
profile,options);
return response;
}
}

Listing 10: Angular Service to request Web API

Listing 10 contains the ProfileService class, decorated with @Injectable so that it can be injected
wherever it is required. The HttpClient is injected into the service class for making HTTP calls to the
Web API. Note that postProfile() sets Id to the empty Guid ('00000000-0000-0000-0000-000000000000');
the server-side Id is a Guid, and CreateAsync() overwrites it with a newly generated one before saving.
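The same contract can be exercised without Angular, for a quick smoke test of the Web API. Below is a framework-free sketch; the base URL and routes come from Listing 10, EMPTY_GUID mirrors the placeholder used in postProfile(), and the names profileUrl/postProfileRaw are our own.

```typescript
// Base URL and routes taken from Listing 10; EMPTY_GUID mirrors the
// placeholder postProfile() sends so that server-side Guid binding succeeds.
const BASE_URL = "http://localhost:5000";
const EMPTY_GUID = "00000000-0000-0000-0000-000000000000";

// Builds the same routes the Angular ProfileService targets.
function profileUrl(id?: string): string {
  return id === undefined
    ? `${BASE_URL}/api/Profiles`
    : `${BASE_URL}/api/Profiles/${id}`;
}

// Node 18+ exposes a global fetch; typed loosely here so the sketch
// compiles without DOM type libraries.
const fetchFn: (url: string, init?: object) => Promise<{ json(): Promise<unknown> }> =
  (globalThis as any).fetch;

// POSTs a profile the same way postProfile() does (requires the API to be running).
async function postProfileRaw(profile: { Id: string }): Promise<unknown> {
  profile.Id = EMPTY_GUID; // CreateAsync assigns the real Guid server-side
  const res = await fetchFn(profileUrl(), {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(profile),
  });
  return res.json();
}
```

This is convenient for verifying the controller and Cosmos DB wiring before the Angular views are in place.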

Step 5: It’s time for us to create Angular Views and their logic.

To do so, we need to add components in the application. Since we intend to use Angular Material’s dialog
boxes for WorkExperience and Educational details, we need to add separate components for a dialog box
implementation.

To use a Dialog box in Angular, we need to use the MatDialogRef object. To pass data to this Dialog box,
make use of the MAT_DIALOG_DATA object.



The MAT_DIALOG_DATA object is typed via an interface that describes the data passed to the dialog box.
In our application, we will show dialog boxes for Educational details and Work Experience, which the end
user will use to enter multiple records of each.

In the profile folder, add a new TypeScript file and name it as app.educationinfo.dialog.component.ts.

Add the following code in this file as shown in Listing 11:

import { Component, Inject } from '@angular/core';
import { Education } from '../models/app.models';
import { Degrees, Grades, Specializations, AdmissionYear, PassingYear } from '../models/app.constants';

import { MatDialogRef, MAT_DIALOG_DATA } from '@angular/material/dialog';

export interface EducationDialogData {
educationInfo: Education;
}

@Component({
selector: 'app-educationinfo-dialog-component',
templateUrl: 'app.educationinfo.dialog.view.html'
})
export class EducationInfoDialogComponent {
degrees = Degrees;
specializations = Specializations;
yearOfAdmission = AdmissionYear;
yearOfPassing = PassingYear;
grades = Grades;
constructor(
public dialogRef: MatDialogRef<EducationInfoDialogComponent>,
@Inject(MAT_DIALOG_DATA) public educationData: EducationDialogData
) { }

cancel(): void {
this.educationData.educationInfo = new Education('', '', '', 0, 0, '');
this.dialogRef.close();
}
}

Listing 11: The Educational Details Dialog box

In Listing 11, EducationDialogData defines an object of type Education. This object will be used
as MAT_DIALOG_DATA for the dialog box. The dialog box also uses the various constant arrays declared in
the app.constants.ts file. To show the user interface for the dialog box, add a new HTML file in the
profile folder and name it as app.educationinfo.dialog.view.html.

Add the following markup for this file:

<h2 mat-dialog-title>Educational Information</h2>
<div mat-dialog-content>
<div class="form-group">
<label>Degree</label>
<select matInput [(ngModel)]="educationData.educationInfo.Degree" class="form-control">
<option>Select Degree</option>

<option *ngFor="let d of degrees" value="{{d}}">{{d}}</option>
</select>
</div>
<div class="form-group">
<label>Specialization</label>
<select matInput [(ngModel)]="educationData.educationInfo.Specialization"
class="form-control">
<option>Select Specialization</option>
<option *ngFor="let s of specializations" value="{{s}}">{{s}}</option>
</select>
</div>

<div class="form-group">
<label>College or School</label>
<input matInput type="text" [(ngModel)]="educationData.educationInfo.CollegeOrSchool" class="form-control">
</div>

<div class="form-group">
<label>Year of Admission</label>
<select matInput [(ngModel)]="educationData.educationInfo.YearOfAdmission"
class="form-control">
<option>Select Admission Year</option>
<option *ngFor="let ya of yearOfAdmission" value="{{ya}}">{{ya}}</option>
</select>
</div>

<div class="form-group">
<label>Year of Passing</label>
<select matInput [(ngModel)]="educationData.educationInfo.YearOfPassing"
class="form-control">
<option>Select Passing Year</option>
<option *ngFor="let yp of yearOfPassing" value="{{yp}}">{{yp}}</option>
</select>
</div>

<div class="form-group">
<label>Grade</label>
<select matInput [(ngModel)]="educationData.educationInfo.Grade" class="form-control">
<option>Select Grade</option>
<option *ngFor="let g of grades" value="{{g}}">{{g}}</option>
</select>
</div>

</div>
<div mat-dialog-actions>
<button mat-button [mat-dialog-close]="educationData.educationInfo"
(click)="cancel()">Cancel</button>
<button mat-button [mat-dialog-close]="educationData.educationInfo"
cdkFocusInitial>Ok</button>
</div>

Listing 12: The HTML for the Education Dialog box

Editorial Note: A label can be bound to an element either by using the "for" attribute, or by placing the element
inside the <label> element. Here the author has skipped binding the label with the element as he won't be using
it anywhere, but nevertheless, it should be used to specify the type of form element a label is bound to.

Listing 12 shows markup that contains the following Material attributes:

• mat-dialog-title - represents title of the dialog box.

• mat-dialog-content - represents UI elements shown on the dialog box.

• matInput - represents the UI element which will be used to capture input from the end user.

• mat-dialog-actions - represents action elements, e.g. buttons on the dialog box. The mat-dialog-close
directive is applied to the button elements, so when a button is clicked, the dialog box is closed.
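The close/result flow these attributes drive can be illustrated in isolation. The sketch below uses a tiny callback-based mock (MatDialogRefMock is our own name; the real MatDialogRef exposes afterClosed() as an RxJS Observable, which we simplify here to a plain callback registration).

```typescript
// A tiny stand-in for MatDialogRef, enough to show the close/result flow.
class MatDialogRefMock<R> {
  private listeners: Array<(result: R | undefined) => void> = [];

  // Analogue of afterClosed().subscribe(...): register a callback.
  afterClosed(cb: (result: R | undefined) => void): void {
    this.listeners.push(cb);
  }

  // Analogue of [mat-dialog-close]="value": close and deliver the result.
  close(result?: R): void {
    this.listeners.forEach(cb => cb(result));
  }
}

interface EducationResult { Degree: string; }

const collected: EducationResult[] = [];
const ref = new MatDialogRefMock<EducationResult>();

// The caller reacts the way the profile component will:
// push the result if the user pressed Ok, ignore it on Cancel.
ref.afterClosed(res => {
  if (res !== undefined) {
    collected.push(res);
  }
});
ref.close({ Degree: "MCA" }); // Ok path delivers the entered data
```

Closing with no argument models the Cancel path, in which case the subscriber receives undefined and adds nothing.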

Step 6: Similar to the dialog box for Education details, we need to add a dialog box component for Work
Experience. In the profile folder, add a new TypeScript file and name it as
app.workexperience.dialog.component.ts.

Add the following code in this newly added file:

import { Component, Inject } from '@angular/core';
import { WorkExperience } from '../models/app.models';
import { Experiences } from '../models/app.constants';
import { MatDialogRef, MAT_DIALOG_DATA } from '@angular/material/dialog';

export interface WorkExperienceDialogData {
experienceInfo: WorkExperience;
}

@Component({
selector: 'app-workexperience-dialog',
templateUrl: 'app.workexperience.dialog.view.html'
})

export class WorkExperienceDialogComponent {
experiences = Experiences;
constructor(public dialogRef: MatDialogRef<WorkExperienceDialogComponent>,
@Inject(MAT_DIALOG_DATA) public workexperienceData: WorkExperienceDialogData) {
this.workexperienceData.experienceInfo = new WorkExperience('', '', new Date(),
new Date(), 0);
}

cancel(): void {
this.workexperienceData.experienceInfo = new WorkExperience('', '', new Date(),
new Date(), 0);
this.dialogRef.close();
}
}

Listing 13: The WorkExperience dialog component

To define user interface for the WorkExperience dialog, we need to add a new HTML file in the profile folder
and name it as app.workexperience.dialog.view.html. Add the following markup in the HTML file:

<h2 mat-dialog-title>Work Experience Details</h2>
<div mat-dialog-content>
<div class="form-group">
<label>Company Name</label>
<input matInput type="text" [(ngModel)]="workexperienceData.experienceInfo.CompanyName" class="form-control">
</div>
<div class="form-group">
<label>Designation</label>
<input matInput type="text" [(ngModel)]="workexperienceData.experienceInfo.Designation" class="form-control">
</div>
<div class="form-group">
<label>Date of Join</label>
<input matInput type="date" [(ngModel)]="workexperienceData.experienceInfo.DateOfJoin" class="form-control">
</div>
<div class="form-group">
<label>Date of Leaving</label>
<input matInput type="date" [(ngModel)]="workexperienceData.experienceInfo.DateOfLeaving" class="form-control">
</div>
<div class="form-group">
<label>Years of Experience</label>
<select matInput [(ngModel)]="workexperienceData.experienceInfo.YearsOfExperience" class="form-control">
<option>Select Experience</option>
<option *ngFor="let e of experiences" value="{{e}}">{{e}}</option>
</select>
</div>
</div>
<div mat-dialog-actions>
<button mat-button [mat-dialog-close]="workexperienceData.experienceInfo"
(click)="cancel()">Cancel</button>
<button mat-button [mat-dialog-close]="workexperienceData.experienceInfo"
cdkFocusInitial>Ok</button>
</div>

Listing 14: The HTML for WorkExperience

We have added the dialog boxes! Now it’s time for us to define components that will use these dialog
boxes and also display a user interface for accepting the profile information.

Step 7: In the profile folder, add a new TypeScript file and name it as app.profile.component.ts. In this file,
add the following code:

import { Component, OnInit } from '@angular/core';
import { MatDialog } from '@angular/material/dialog';
import { Education, WorkExperience, ProfileMaster } from '../models/app.models';
import { EducationInfoDialogComponent } from './app.educationinfo.dialog.component';

import { Genders, Experiences, MaritalStatusInfo } from '../models/app.constants';
import { WorkExperienceDialogComponent } from './app.workexperience.dialog.component';
import { ProfileService } from '../services/app.profile.service';

@Component({
selector: 'app-profile-component',
templateUrl: 'app.profile.component.view.html'
})
export class ProfileComponent implements OnInit {
genders = Genders;
maritalStatusInfo = MaritalStatusInfo;

education: Education;
educationDetails: Array<Education>;
educationTableHeaders: Array<string>;

workexperience: WorkExperience;
workexperienceDetails: Array<WorkExperience>;
workexperienceTableHeaders: Array<string>;

profile: ProfileMaster;

constructor(public dialog: MatDialog, private serv: ProfileService) {
this.education = new Education('', '', '', 0, 0, '');
this.educationDetails = new Array<Education>();
this.educationTableHeaders = new Array<string>();
this.workexperience = new WorkExperience('', '', new Date(), new Date(), 0);
this.workexperienceDetails = new Array<WorkExperience>();
this.workexperienceTableHeaders = new Array<string>();
this.profile = new ProfileMaster('', '', '', '', '',0, '', new Date(), [], []);
}

openEducationDialog(): void {
this.education = new Education('', '', '', 0, 0, '');
const educationDialogRef = this.dialog.open(EducationInfoDialogComponent, {
width: '600px',
data: { educationInfo: this.education }
});

educationDialogRef.afterClosed().subscribe(res => {
if (res !== undefined) {
console.log(`In If ${JSON.stringify(res)}`);
this.educationDetails.push(res);
console.log(JSON.stringify(this.educationDetails));
this.education = new Education('', '', '', 0, 0, '');
} else {
console.log('In Else');
this.education = new Education('', '', '', 0, 0, '');
}
});
}
openWorkExperienceDialog(): void {
this.workexperience = new WorkExperience('', '', new Date(), new Date(), 0);
const workExperienceDialogRef = this.dialog.open(WorkExperienceDialogComponent, {
width: '600px',
data: { experienceInfo: this.workexperience }
});

workExperienceDialogRef.afterClosed().subscribe(res => {
if (res !== undefined) {

this.workexperienceDetails.push(res);
console.log(JSON.stringify(this.workexperienceDetails));
this.workexperience = new WorkExperience('', '', new Date(), new Date(), 0);
} else {
this.workexperience = new WorkExperience('', '', new Date(), new Date(), 0);
}
});
}

ngOnInit(): void {
for (const h in this.education) {
this.educationTableHeaders.push(h);
}

for (const h in this.workexperience) {
this.workexperienceTableHeaders.push(h);
}
}

saveProfile(): void {
this.profile.Educations = this.educationDetails;
this.profile.Experience = this.workexperienceDetails;
this.serv.postProfile(this.profile).subscribe(response => {
console.log(JSON.stringify(response));
}, (error) => {
console.log(`${error.status} and ${error.message} ${error.statusText}`);
});
}
}

Listing 15: The ProfileComponent

The ProfileComponent uses Education and WorkExperience dialog boxes. This component has
ProfileService and MatDialog objects injected in the constructor. Using ProfileService, the
component can make HTTP calls to the Web API.

The MatDialog object is used to manage the dialog boxes: its open() method shows a dialog box, and the
returned reference's afterClosed() observable emits the data entered once the dialog box closes. The
saveProfile() method of the component uses the postProfile() method of the ProfileService to post the
profile information to the server.
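The ngOnInit() method above derives the table headers from the model instances themselves via a for-in loop, and the template then renders each cell with e[h]. The same pattern can be seen in isolation; buildHeaders and renderRow are our own names for that inline logic, and the row class is abridged to two fields.

```typescript
// A row type like the client-side Education model (abridged).
class EducationRow {
  constructor(
    public Degree: string,
    public Grade: string,
  ) { }
}

// Mirrors ngOnInit(): enumerate an instance's own keys as table headers.
function buildHeaders(instance: object): string[] {
  const headers: string[] = [];
  for (const h in instance) {
    headers.push(h);
  }
  return headers;
}

// Mirrors the template's nested *ngFor: one cell per header, looked up via e[h].
function renderRow(row: object, headers: string[]): unknown[] {
  return headers.map(h => (row as Record<string, unknown>)[h]);
}

const headers = buildHeaders(new EducationRow("B.E.", "First"));
console.log(headers); // parameter properties enumerate in declaration order
console.log(renderRow(new EducationRow("MCA", "Second"), headers));
```

The upside of this approach is that adding a field to the model automatically adds a column; the trade-off is that the raw property names appear as header text.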

In the profile folder, add a new HTML file and name it as app.profile.component.view.html. In this file, add
the following markup:



<table class="table table-bordered table-striped">
<tr>
<td>
<label>First Name</label>
<input type="text" class="form-control" [(ngModel)]="profile.FirstName">
</td>
<td>
<label>Middle Name</label>
<input type="text" class="form-control" [(ngModel)]="profile.MiddleName">
</td>
<td>
<label>Last Name</label>
<input type="text" class="form-control" [(ngModel)]="profile.LastName">
</td>
</tr>
<tr>
<td>
<label>Gender</label>
<select class="form-control" [(ngModel)]="profile.Gender">
<option>Select Gender</option>
<option *ngFor="let g of genders" value="{{g}}">{{g}}</option>
</select>
</td>
<td>
<label>Contact Number</label>
<input type="text" class="form-control" [(ngModel)]="profile.ContactNumber">
</td>
<td>
<label>Marital Status</label>
<select class="form-control" [(ngModel)]="profile.MaritalStatus">
<option>Select Marital Status</option>
<option *ngFor="let m of maritalStatusInfo" value="{{m}}">{{m}}</option>
</select>
</td>
</tr>
<tr>
<td colspan="3">
<label>Date of Birth</label>
<input type="date" class="form-control" [(ngModel)]="profile.DateOfBirth">
</td>
</tr>
<tr>
<td colspan="3">
<h3>Education Details</h3>
<input type="button" class="btn btn-danger" (click)="openEducationDialog()"
value="Click to Add Education">
<table class="table table-bordered table-striped">
<thead>
<tr>
<td *ngFor="let h of educationTableHeaders">{{h}}</td>
</tr>
</thead>
<tbody>
<tr *ngFor="let e of educationDetails">
<td *ngFor="let h of educationTableHeaders">{{e[h]}}</td>
</tr>
</tbody>

</table>
</td>
</tr>
<tr>
<td colspan="3">
<h3>Experience Details</h3>
<input type="button" class="btn btn-warning" value="Click to Experience Details"
(click)="openWorkExperienceDialog()">
<table class="table table-bordered table-striped">
<thead>
<tr>
<td *ngFor="let h of workexperienceTableHeaders">{{h}}</td>
</tr>
</thead>
<tbody>
<tr *ngFor="let e of workexperienceDetails">
<td *ngFor="let h of workexperienceTableHeaders">{{e[h]}}</td>
</tr>
</tbody>
</table>
</td>
</tr>
<tr>
<td colspan="3">
<input type="button" (click)="saveProfile()" class="btn btn-success"
value="Save">
</td>
</tr>
</table>

Listing 16: The ProfileComponent markup

The markup in Listing 16 contains input and select elements for entering the profile information. These elements are bound to the properties defined in the ProfileMaster class.

Tables display the Education and WorkExperience details, and the button above each table is bound to the component method that opens the corresponding dialog box.

The component class defines the educationTableHeaders and workexperienceTableHeaders arrays. These arrays are populated from the properties of the Education and WorkExperience classes respectively, and the HTML markup uses them to generate the table headers dynamically.
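The header-derivation technique relies on TypeScript's for...in loop, which iterates over the enumerable property names of an object instance. A minimal sketch (the Education shape here is simplified from the article's model, which has more fields):

```typescript
// Simplified sketch of deriving table headers from a model class.
class Education {
  // Parameter properties only become enumerable once assigned, which is why
  // the component instantiates the class with placeholder values.
  constructor(public Degree: string, public University: string) { }
}

const education = new Education('', '');
const educationTableHeaders: string[] = [];

// for...in yields each own enumerable property name of the instance,
// in declaration (assignment) order for string keys.
for (const h in education) {
  educationTableHeaders.push(h);
}

console.log(educationTableHeaders); // ['Degree', 'University']
```

This is why adding a property to the model class automatically adds a column to the rendered table, with no change to the markup.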

Step 8: Modify style.css as shown in the following listing, importing the @angular/material prebuilt theme required to style the dialog boxes.

@import '~@angular/material/prebuilt-themes/deeppurple-amber.css';

Listing 17: The material style

Step 9: It’s time to update the app.module.ts file in the app folder. In this file, we import all the components created for the dialog boxes along with the ProfileComponent. We also need to import several Angular Material modules so that the dialog boxes work correctly. Listing 18 shows the modified app.module.ts:



import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { HttpClientModule, HTTP_INTERCEPTORS } from '@angular/common/http';
import { RouterModule } from '@angular/router';

import { AppComponent } from './app.component';

// removed the default components that are added when the project is created
import { ProfileComponent } from './profile/app.profile.component';
import { MatNativeDateModule } from '@angular/material/core';
import { MatDialogModule } from '@angular/material/dialog';
import { BrowserAnimationsModule } from '@angular/platform-browser/animations';
import { EducationInfoDialogComponent } from './profile/app.eductioninfo.dialog.component';
import { MatInputModule } from '@angular/material/input';
import { MatSelectModule } from '@angular/material/select';
import { WorkExperienceDialogComponent } from './profile/app.workexperience.dialog.component';

@NgModule({
  declarations: [
    AppComponent,
    NavMenuComponent,
    // some component declarations, e.g. HomeComponent, CounterComponent,
    // removed here for brevity
    ProfileComponent,
    EducationInfoDialogComponent,
    WorkExperienceDialogComponent
  ],
  imports: [
    BrowserModule.withServerTransition({ appId: 'ng-cli-universal' }),
    HttpClientModule,
    FormsModule,
    MatDialogModule,
    BrowserAnimationsModule,
    MatInputModule,
    MatSelectModule,
    MatNativeDateModule,
    RouterModule.forRoot([
      { path: '', component: HomeComponent, pathMatch: 'full' },
      { path: 'profile', component: ProfileComponent }
    ])
  ],
  providers: [],
  entryComponents: [ProfileComponent, EducationInfoDialogComponent,
    WorkExperienceDialogComponent],
  bootstrap: [AppComponent]
})
export class AppModule { }

Listing 18: The app.module.ts file with required imports

We have defined the routing for the ProfileComponent using the RouterModule. We have also imported the Angular Material modules required by MatDialog: MatDialogModule, MatInputModule, MatSelectModule and MatNativeDateModule.

So far, so good! We have completed developing both the server side and the front end.

To test the application, run it using F5.

Note: The application needs to run on the Kestrel hosting environment (not IIS Express), since we are using HTTP on port 5000.

Figure 10 shows the application loaded in the browser:

Figure 10: Application loaded in the browser

Update nav-menu.component.html to add the navigation link for the profile page, as shown in Listing 19.

<li class="nav-item" [routerLinkActive]="['link-active']">
  <a class="nav-link text-dark" [routerLink]="['/profile']">Profile</a>
</li>

Listing 19: The navigation link for profile

Click on the Profile link at the top right, and the profile page shown in Figure 11 will be displayed.

Figure 11: The Profile Page

Enter details like the First Name, Middle Name and Last Name, and click on the Click to Add Education button. This brings up the dialog box shown in Figure 12.

Figure 12: The dialog box

Click on the OK button, and the education details will be displayed in the table, as shown in Figure 13.

Figure 13: The Education details table

Now click on the Click to Experience Details button to add experience details. A dialog box will be displayed where you can enter them. Once these details are added, click on the Save button to save the profile information in Cosmos DB. This creates a container named Profiles and adds the profile document to it.

Figure 14 shows the data added in Cosmos DB after creating the profile.

Figure 14: The data added in Cosmos DB by creating Profiles

Conclusion

In this tutorial, we saw how ASP.NET Core 3.0 with EF Core 3.0 provides a convenient mechanism to access Cosmos DB (a NoSQL database) and perform CRUD operations on it, much as we would with a relational database.

Additionally, the Angular template built into ASP.NET Core provided a rich experience for front-end development. ASP.NET Core thus offers a unified platform for building web UIs and Web APIs backed by either a relational or a NoSQL database.

Download the entire source code from GitHub at bit.ly/dncm45-efcoreapp

Mahesh Sabnis
Author
Mahesh Sabnis is a DotNetCurry author and ex-Microsoft MVP with over 19 years of experience in IT education and development. He has been a Microsoft Certified Trainer (MCT) since 2005 and has conducted various corporate training programs for .NET, Cloud and JavaScript technologies (all versions). Follow him on Twitter @maheshdotnet

Thanks to Damir Arh for reviewing this article.



Thank You
for the 45th Edition

@dani_djg @damirarh @gouri_sohoni

@subodhsohoni @yacoubmassad @maheshdotnet

@suprotimagarwal @saffronstroke

Write for us - mailto: [email protected]
