DNCMag Issue 45
THE EDITOR
We also have a bouquet of exclusive articles for you covering ASP.NET Core 3.0, Azure VMs, Patterns and Practices, C# v8.0 and more.

There are plenty of opportunities for developers in the ever growing Microsoft Ecosystem, and we at DotNetCurry are all geared up for 2020. Are you?

How was this edition?

Make sure to reach out to me directly with your comments and feedback on twitter @dotnetcurry or email me at suprotimagarwal@dotnetcurry.com.

Happy Learning!

Suprotim Agarwal
Editor in Chief

Next Edition: January 2020
Copyright @A2Z Knowledge Visuals Pvt. Ltd.
Art Director: Minal Agarwal
Editor in Chief: Suprotim Agarwal (suprotimagarwal@dotnetcurry.com)
Damir Arh
DEMYSTIFYING
MICROSOFT IGNITE
FOR DEVELOPERS
WHAT WERE THE ANNOUNCEMENTS MADE AT MICROSOFT IGNITE
THAT ARE OF INTEREST TO DEVELOPERS? HOW CAN DEVELOPERS
TAKE ADVANTAGE OF THESE ANNOUNCEMENTS? READ ON!
In early November 2019, the Microsoft Ignite conference took place in Orlando, Florida. Although
Microsoft Ignite is not as developer-oriented as Microsoft Build, there was still a lot of developer-related
information published there. This article provides an overview of the important announcements made
during Ignite that are aimed primarily at developers.
Several improvements to developer productivity in Visual Studio were announced, among others:

• including results in IntelliSense pop-ups even for symbols which don't yet have a corresponding using
directive in the current file,

• the ability to pin selected properties in the debugger windows (Autos, Locals and Watch).
An important contribution to developer productivity is also the enhanced IntelliCode feature, which
provides AI-powered assistance while programming. In the current version, it's limited to improving IntelliSense
suggestions, but it's being extended with support for whole-line and argument completion, as well as
refactoring. See Figure 2.
The default IntelliCode model is trained on open source code from GitHub. The ability to train the model
from your own codebase is being simplified with the introduction of an Azure DevOps build task for
training the model and support for associating the model with a repository, so that it can be automatically
activated when working with the code from that repository.
XAML tooling in Visual Studio (for WPF/UWP applications) also has many improvements, among others:
• The Create Data Binding dialog now works with UWP and .NET Core based WPF applications.
• The XAML Editor and XAML Designer can now be split into separate windows.
• The Live Visual Tree can be filtered to only show XAML written in the app and hide everything else.
Visual Studio Live Share is already available as a built-in feature in Visual Studio 2019 and as an extension for Visual
Studio Code. Visual Studio 2019 version 16.4 Preview includes an additional set of Insiders features which can
be enabled in the Options dialog:
• In addition to having access to a running web application, a running desktop application (UWP,
WPF, WinForms, Win32 C++ or console application) can now be cast to the other developer as well. The
developer will be able to see its window and interact with it.
• Audio calls can now be started directly from inside Visual Studio.
At the Microsoft Build conference in May, Microsoft announced it will be releasing an online development
environment based on Visual Studio Code, named Visual Studio Online.
At Microsoft Ignite, it was announced that Visual Studio Online is now available in public preview. The
service allows on-demand creation of managed development environments in the cloud which can be used
for quick tasks like code reviews or for long-term development in the cloud from a computer which is not
configured for development or doesn't have enough processing power.
Environments are created automatically with minimal initial configuration, but can be fully customized and
personalized. Development environments run on a Linux machine in the cloud. The pricing depends on
the selected hardware configuration and is different when the environment is actively used and when it is
suspended.
Although there is a web-based editor for the environment available online, you can also connect to it with
your local copy of Visual Studio Code using the Visual Studio Online extension. The ability to use Visual
Studio 2019 instead is currently in private preview, along with support for Windows-based environments.
Several new features were also announced for Azure DevOps:

• The introduction of pipeline artifacts and pipeline caching is useful when multiple pipelines contribute
to the final build. A pipeline can now act as a trigger for another pipeline, providing its artifacts as input
for the next pipeline. Thanks to caching, these intermediary results can be reused in later builds if their
dependencies haven’t changed in the meantime.
• Azure Artifacts are repositories for packages (NuGet, npm, Maven or Python) to be used in builds or by
the development team. In addition to previously available organization-scoped package feeds, there’s
now also support for public feeds and project-scoped feeds.
• A Review Apps feature for Azure Pipelines has been made available in public preview. For applications
deployed to Kubernetes, it can create a new environment for each pull request to which the application
gets deployed so that it can be validated live. Support for deployment to other Azure services will be
added in the future.
Windows UI Library
Windows UI Library (WinUI) is the name used for the native UI platform for Windows 10. In its current
version (WinUI 2), it brings additional controls and styles on top of UWP (Universal Windows Platform)
and provides support for earlier versions of Windows 10 without having to add version checks to the
application.
At Microsoft Ignite, WinUI 3 Alpha was released. It’s the first pre-release of a major update for Windows UI
library planned for release in 2020. The main change is the decoupling from the UWP SDK. New features
won’t depend on new versions of Windows 10 and will be released more frequently. The framework will be
backward compatible and will still work with .NET, but won't depend on it. This will make it usable from
other environments as well, e.g. from C++ applications.

ML.NET

New features were also announced for ML.NET, Microsoft's open-source machine learning framework for
.NET (a sketch of the database loader follows this list):
• A database loader for loading training data directly from relational databases by simply providing the
connection string, the SQL query and the model class without writing any custom data access code.
• Support for .NET Core 3.0 taking advantage of hardware intrinsics by using processor specific
instructions to improve performance on modern processors and to improve compatibility with ARM
processors.
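As a sketch of the database loader, the code below loads training data from SQL Server; the model class, connection string and query are assumptions for illustration:

using System.Data.SqlClient;
using Microsoft.ML;
using Microsoft.ML.Data;

public class HouseData
{
    // Hypothetical columns returned by the SQL query
    public float Size { get; set; }
    public float Price { get; set; }
}

public static class TrainingData
{
    public static IDataView Load()
    {
        var mlContext = new MLContext();

        // Create a loader typed after the model class; no custom data access code is needed
        DatabaseLoader loader = mlContext.Data.CreateDatabaseLoader<HouseData>();

        var dbSource = new DatabaseSource(
            SqlClientFactory.Instance,
            "Server=.;Database=Housing;Integrated Security=True",
            "SELECT Size, Price FROM HouseData");

        return loader.Load(dbSource);
    }
}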
Another ML.NET related announcement at Microsoft Ignite was support for .NET Core in Jupyter Notebooks.
Although .NET Core support is in no way specific to ML.NET, the Jupyter Notebook ability to create
documents consisting of text, live code and visualizations lends itself very well to machine learning tasks,
such as data exploration and model training.
Bot Framework
Version 4.6 of Microsoft Bot Framework SDK was released at Microsoft Ignite. In this version, the framework
for building conversational bots for many different popular services was extended with general availability
of support for Microsoft Teams. Additionally, several other features became available in preview:
• Bot Framework Skills were introduced as re-usable conversations which can be integrated into a
larger bot solution providing a working implementation for common scenarios, such as managing your
calendar or using maps for navigation.
• Adaptive dialogs allow temporary interruptions of a current conversation flow to handle user’s requests
which can’t be handled by the current dialog. Once the interruption is handled by another dialog, the
current conversation is resumed.
• Language Generation introduces special response templates which can be used to generate variable
bot responses independently of the conversational logic.
As an alternative to code-based development of conversational bots using the Microsoft Bot Framework
SDK, a preview version of Power Virtual Agents was introduced as part of Microsoft’s Power Platform. This
SaaS (software-as-a-service) offering allows creation of conversational bots with a code-free graphical user
interface which can be used even by subject matter experts without any coding skills.
Other machine learning related announcements included built-in notebooks in Azure Machine Learning
studio.
ONNX Runtime 1.0 was released as well. It can run all models based on ONNX (Open Neural Network
Exchange) format 1.2.1 and higher with a big focus on performance. It’s not only available in the cloud but
can also be deployed to IoT and edge devices, as well as to a local computer.
Azure Functions
General availability of the following features was announced for Azure Functions, Microsoft's serverless
(Function-as-a-Service, or FaaS) offering:
• The Azure Functions Premium plan provides dedicated hosting to avoid cold starts by pre-warming the
instances.
• Durable Functions are an extension to Azure Functions adding support for stateful functions and
workflow orchestration. In the newly released version 2.0, an actor-like programming model was
introduced (a sketch follows this list).
• Support for developing functions in PowerShell and Python 3.7 was added.
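To give a feel for the actor-like model, here is a minimal sketch of a durable entity holding a counter, based on the entity functions pattern of Durable Functions 2.0; the entity and operation names are illustrative:

using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class CounterEntity
{
    [FunctionName("Counter")]
    public static void Counter([EntityTrigger] IDurableEntityContext ctx)
    {
        switch (ctx.OperationName.ToLowerInvariant())
        {
            case "add":
                // Update the entity's durable state with the operation input
                ctx.SetState(ctx.GetState<int>() + ctx.GetInput<int>());
                break;
            case "get":
                // Return the current state to the caller
                ctx.Return(ctx.GetState<int>());
                break;
        }
    }
}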
Azure Blockchain

Several announcements were also related to Microsoft's blockchain offerings:

• Azure Blockchain Tokens simplify the creation and management of ledger-based tokens for physical and
digital assets.
• Azure Blockchain Data Manager can capture data from a blockchain ledger, transform it and store it in
databases like Azure SQL Database or Azure Cosmos DB for easier integration with existing applications.
• In addition to Ethereum, Corda Enterprise distributed ledger technology is now also supported.
Hyperledger Fabric can be deployed to Azure Kubernetes Service using an Azure Marketplace template.
• To improve developer productivity, the Azure Blockchain Development Kit for Ethereum was released as
an extension for Visual Studio Code.
Azure Quantum
Azure Quantum was announced and will become available in private preview in the upcoming months. It's going
to be a cloud-based service allowing you to run quantum programs written with Q# and the Quantum
Development Kit (QDK) on a variety of hardware: from classical compute services in Azure to quantum
simulators and quantum hardware provided by technology partners.
Conclusion
Looking at the announcements at Microsoft Ignite, we can recognize Microsoft’s continuous focus
on providing the best tools for developers, not only on Windows and for .NET, but also on other
operating systems and for other development frameworks.
No matter where and what you're developing, it's worth keeping tabs on Microsoft's tools and evaluating
whether they can improve your productivity and development process.
Microsoft is also heavily investing in new technological trends for developers, such as serverless computing,
machine learning, blockchain, and quantum computing. Even in these fields, the effort in making the
technologies more accessible to developers can easily be recognized. This makes their offering interesting
even if you don’t see how these technologies could be used at your current work.
The low barriers to entry make it easier to familiarize yourself with the benefits they can offer you, so that
you can consider them in your future projects!
Damir Arh
Author
Damir Arh has many years of experience with Microsoft development tools; both in
complex enterprise software projects and modern cross-platform mobile applications.
In his drive towards better development processes, he is a proponent of test driven
development, continuous integration and continuous deployment. He shares his
knowledge by speaking at local user groups and conferences, blogging, and answering
questions on Stack Overflow. He has been a Microsoft MVP for .NET since 2012.
DEVELOPING SPAs
WITH
ASP.NET CORE 3.0
Single Page Applications (SPAs) have been around ever since the advent of AJAX, combined with JavaScript
and browser advances, made them possible. Today, they have become one of the most common ways of
building web applications, using frameworks like Angular, React or Vue.js.

It comes as no surprise that ASP.NET Core shipped with SPA templates in its very first release. Since then,
new ASP.NET Core releases have maintained and adapted these templates, in no small part due to the fast
evolution of the SPA frameworks. These frameworks now provide their own development workflow with CLI
(command line interface) tools, build processes and development servers.

During this article, we will take a look at the common basic ideas behind any SPA project template,
followed by an overview of the templates provided out of the box in ASP.NET Core 3.0. We will finish by
demonstrating that you can apply the same ideas with any other SPA framework not supported out of the
box, for which we will use two additional frameworks: Svelte and Vue.
When developing a SPA using frameworks like Angular, React, Vue or Svelte, the framework provides you
with the tools that you need to develop, build or configure your SPA. This way, SPA frameworks decouple
your client side from any server-side technology like an ASP.NET Core application.
Most SPA projects are structured as the union of two distinct applications:
• A client-side SPA that is responsible for the code shipped to the browsers, a combination of HTML, CSS
and JS
• A server-side application that provides the API through which the client-side communicates, retrieving
and sending back data
Figure 1, Simplified view of the two applications that make a typical SPA project with ASP.NET Core
For the purposes of this article, we will stick with ASP.NET Core as the server-side application. However,
the same ideas can be followed with any other server-side framework like Flask, Django or Express or even
with serverless architectures.
Nothing prevents you from treating both the SPA and ASP.NET Core applications in a completely separate
manner, with their own development workflow, build process, release cycle, tooling, teams, etc. However,
there are situations and/or teams which might prefer a closer integration between these two applications.
This is what the SPA project templates are designed for.
In the following sections, we will take a deeper look at how SPA project templates typically integrate these
two distinct applications.
As you are all aware, Microsoft now offers Blazor as a C# full-stack SPA alternative. For the purposes of this
article, we will stick with traditional web SPA frameworks, but feel free to consider and investigate Blazor. You can
read more in one of my previous articles about Blazor.
The reason SPA frameworks provide their own tooling is that SPAs have evolved into complex applications
that need a build process of their own. They let you structure your SPA modularly into small components
and put at your disposal a number of modern technologies like TypeScript, CSS preprocessors, template
engines, linters and many others; all attempting to increase developer productivity.
In a way, it is as if you had to compile the SPA source code into a number of artifacts (the bundled HTML/
JS/CSS files) that your browser can execute. This is where webpack comes into play, letting SPA frameworks
define the build process necessary to generate the bundled files. This build process is typically invoked by a
CLI tool provided by the SPA frameworks, which will configure and execute webpack under the hood.
Figure 2, Building the SPA source code into bundles that can be served to the browser
While there are alternatives to webpack like parcel and rollup (with their own advantages and downsides),
webpack is the one most widely used as of today. It is also the one chosen by most official SPA tooling like the
Angular CLI, the Vue CLI and create-react-app.
As anyone who has worked with compiled languages knows, the build process can get very tedious during
development. Having to re-run the build process after each code change in order to test the updated code,
is not fun!
Luckily, SPA frameworks provide a development server that will automatically run the build process
and refresh the bundles as soon as the source code is modified. The development server also acts as a
web server that serves the generated bundles, giving you a localhost URL on which you can access the
application in the browser.
Since they use webpack to build your code, it is no surprise then that they use the webpack-dev-server
for these purposes.
Figure 3, SPA development server during the development cycle
Thus, SPA frameworks pre-configure webpack and the webpack-dev-server in order to provide two different
workflows for building your code:
• During development, they offer a development server which generates initial bundles, then monitors
your source code for changes, automatically updating the bundles. It provides a localhost URL which
you can open in the browser to run the SPA, including a websocket used to notify the browser of bundle
updates. These are loaded without requiring a full reload of the page, a feature called hot module
replacement.
• During the build process, they use webpack to generate the final, optimized bundles. It is up to you to
host these bundles in any web server of your choice. All the webpack-based build process does is to
generate these HTML/JS/CSS bundle files. The next section Hosting SPAs will look at this in more detail.
Webpack and webpack-dev-server are tools built with Node.js, meaning you need to have Node.js installed on
your machine in order to run these commands. In general, SPA frameworks rely on Node.js for their tooling. While
having Node.js installed is a must, getting familiar with it is a pretty good idea!
Each SPA framework provides a CLI command to invoke each of the two processes. The following table
compares the most popular frameworks:

Framework            Start development server    Generate production bundles
Angular CLI          npm start (ng serve)        npm run build (ng build)
create-react-app     npm start                   npm run build
Vue CLI              npm run serve               npm run build
Now let’s add a server-side application into the mix, in our case an ASP.NET Core web application.
• The SPA development server (let's assume it's running on localhost:8080) provides the index.html page
and the necessary JS/CSS bundles. This is the URL that you would load in the browser.

• The ASP.NET Core application (let's assume it's running on localhost:5000) provides the REST API used
by the SPA.
Figure 4, SPA and ASP.NET Core being run as independent applications during development
This setup completely separates each application during development, even from the browser perspective.
Each has its own development server that automatically reloads when the code changes. It works well
for teams that like to treat client and server-side applications completely independent from each other,
especially if these are also hosted independently in production.
HTTP requests from the SPA at localhost:8080 to the ASP.NET Core server at localhost:5000 are considered cross-
origin requests by the browsers due to the different port, and so CORS support needs to be added. If deployed to
different domains like my-site.com and api.my-site.com, CORS also needs to be enabled in production.
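For reference, a minimal sketch of enabling CORS in the ASP.NET Core application for this setup (the policy name and origin URL are assumptions matching the example above):

// In Startup.ConfigureServices
services.AddCors(options =>
{
    options.AddPolicy("AllowSpaDevServer", policy =>
        policy.WithOrigins("http://localhost:8080")
              .AllowAnyHeader()
              .AllowAnyMethod());
});

// In Startup.Configure, after UseRouting and before UseEndpoints
app.UseCors("AllowSpaDevServer");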
A slightly more integrated setup can be achieved by proxying one of the two development servers, in either
direction. This way, from the browser perspective, there is a single server that serves the HTML/JS/CSS files
and the REST API.
• A proxy from the SPA development server to the ASP.NET Core server can be established through the
webpack development server's proxy option. This is exposed by all SPA frameworks as part of their options
for the development server (See Angular, React, Vue)
• A proxy from the ASP.NET Core server to the SPA development server can be established through the
UseProxyToSpaDevelopmentServer utility.
Figure 5, Setting up a proxy between the development servers
This is a good idea when the SPA application bundles will be hosted in production alongside the
ASP.NET Core server from the same domain. This way your development setup reflects the production setup,
simulating the same domain.
The two applications can even be further integrated during development by not just proxying from one of
the applications to the other, but also making it responsible for starting the proxied development server. No
SPA framework provides such an option, but the ASP.NET Core templates do.
• From the developer point of view, this almost feels like there is a single application. However, there is an
important downside in making the ASP.NET Core server responsible for starting the SPA development
server. If the ASP.NET Core source code changes, the server will be restarted which means the SPA
development server also has to be restarted. If you are making frequent changes to the server-side
code, this negates the benefits of the hot module replacement features of the SPA development server,
apart from being slow, since bundles are regenerated from scratch on each server-side code change.
Regardless of which approach you take, it is very likely that you will want to use specific tools and editors
for each of the two applications.

Hosting SPAs in production

No matter which hosting option you end up choosing, you will always need to invoke the SPA build process.
This way you will generate a set of static files from the SPA source code, the bundled HTML/JS/CSS files.
Now we need a way to host and serve these files.
Once you have the bundles generated, you basically have two choices for hosting and serving them:
• Host the static bundles alongside the ASP.NET Core server-side application. This is the simplest
approach, which works well in many situations where the same team is in charge of both client
and server-side applications. During the build process, the bundles are generated and copied to a
preconfigured folder inside the ASP.NET Core application.
• Host the static bundles on their own web server (for example a simple NGINX one), independent of the
ASP.NET Core one. While more complex, this frees up the ASP.NET Core application from having to serve
the static files, which can now concentrate on simply serving API requests. It also lets you choose the
best web server technology for serving those static files, including any cloud offerings.
Figure 6, Hosting the SPA generated bundles within the ASP.NET Core server
Figure 7, Hosting the SPA generated bundles on its own web server
Note how in the second approach, the SPA and ASP.NET Core applications are served from different
domains. However, a reverse proxy can be configured in front of both servers, giving the illusion of a single
domain for both applications:
Figure 9, Reverse proxy that directly serves static bundles and proxies API requests
Now that we have seen our options, both during development and production, let's take a look at the specific
templates provided by ASP.NET Core.
Figure 10, SPA templates in ASP.NET Core 3.0
Angular
When generating a new project using the Angular SPA template, we get the expected client and server-side
applications:
• The project structure is the expected one for an ASP.NET Core application and provides the REST API
used by the client-side Angular application.
• The ClientApp folder contains an Angular application created using the Angular CLI. This is a standard
Angular CLI application, that can be treated like any other Angular CLI application. Any ng/npm/yarn
command you are used to, will work. You could even delete the contents of the folder and create a new
Angular application from scratch using ng new.
If you inspect the contents of the Startup class, you will see the following lines at the end of the
Configure method:
app.UseSpa(spa =>
{
    // To learn more about options for serving an Angular SPA from ASP.NET Core,
    // see https://go.microsoft.com/fwlink/?linkid=864501
    spa.Options.SourcePath = "ClientApp";

    if (env.IsDevelopment())
    {
        spa.UseAngularCliServer(npmScript: "start");
    }
});
As you can see, during development, the spa.UseAngularCliServer middleware is added. What this
middleware does is set up the project so that:
• The ASP.NET Core development server automatically starts the Angular development server
• A proxy is established between the ASP.NET Core development server and the Angular development
server
That lets you press F5 to debug the project, starting both development servers. Visual Studio is also
configured to open the URL of the ASP.NET Core application in the browser. When SPA files like index.html
or JS/CSS bundles are requested, the ASP.NET Core application defers to the Angular development server
through the established proxy.
If you run the application, you will notice it takes a while to load. That is because the Angular development
server is being started and the bundles are being generated for the first time. This can be seen in the
output window in Figure 12:
Figure 12, ASP.NET Core starts the Angular development server and proxies requests to it
You can even see the proxying in action. The output shows the Angular development server running at
http://localhost:60119, while ASP.NET Core is running at https://localhost:44373. The browser is requesting
SPA files like https://localhost:44373/main.js, which ASP.NET Core internally proxies to the Angular
development server.
You can make a change to the SPA source code (like the home.component.html template). The Angular
development server will update the bundles and the browser is automatically updated.
However, let’s change the ASP.NET Core source code (like the WeatherForecastController). Since you
need to restart the ASP.NET Core server in order to try the changes, you will have to wait again for a full
generation of the bundles. On my laptop, this takes more than 20s, so the convenience of starting both
servers automatically can become a burden very quickly if you make frequent changes to the server-side
code.
Let’s instead update the project so both development servers are started independently and a proxy is
simply established between them (so from the browser point of view there is still a single server in charge
of both the API and SPA files).
Open the ClientApp folder in your preferred terminal and execute ng serve (or npm start if you don't
have the Angular CLI installed). This will start the Angular development server; you will notice a message
at the end that indicates the port it is listening on:
All we have to do now is replace the call to spa.UseAngularCliServer in the Startup class with
spa.UseProxyToSpaDevelopmentServer, specifying the URL where the Angular development server is
listening.
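A sketch of the replacement, assuming the Angular development server reported port 4200 (the ng serve default; adjust it to the port shown in your terminal):

spa.UseProxyToSpaDevelopmentServer("http://localhost:4200/");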
If you run the ASP.NET Core project again, everything will work as before. However, if you have to restart the
project, the Angular development server is unaffected, resulting in a much faster restart process.
Inverting the roles of the Angular development server and the ASP.NET Core server
An interesting alternative you might want to consider is to let the Angular CLI and its development server
be in control of the client-side and the browser. After all, this is what these tools are designed for.
Note with this approach, you lose the ability to debug the client-side SPA code from within Visual Studio. In my
opinion, browsers in general and Chrome in particular provide a superior debugging experience, particularly
when combined with specific extensions for debugging each SPA framework. However, I understand this won’t be
the case for everyone, so be aware of the fact and decide for yourself!
First, stop Visual Studio from opening the browser window (since it gets closed whenever the ASP.NET Core
server is stopped/restarted). Either manually open the browser window or invoke the Angular development
server with the open option (as in ng serve -o or npm start -- -o).
Next, we can stop establishing a proxy from the ASP.NET Core server to the Angular development server.
Simply remove the spa.UseProxyToSpaDevelopmentServer line from your Startup class.
Finally, we will setup the proxy from the Angular development server to the ASP.NET Core server.
• Add a new proxy.conf.json file inside the ClientApp/src folder. We need to setup the underlying
webpack-dev-server so it sends all requests it cannot understand to the URL where the ASP.NET Core
server will be listening:
{
  "/": {
    "target": "https://localhost:44373/",
    "secure": false
  }
}
• Then update the ClientApp/angular.json file, adding the proxyConfig option to the server
command:
"serve": {
"builder": "@angular-devkit/build-angular:dev-server",
"options": {
"browserTarget": "AngularSPA:build",
"proxyConfig": "src/proxy.conf.json"
},
Make sure the URL matches the one where your ASP.NET Core application listens to. This might change
depending on whether it is run from Visual Studio with IISExpress or from the command line with Kestrel.
That’s it, we have now inverted the roles during development of each application. The Angular development
server is now fully responsible for the browser and client-side, while the ASP.NET Core application is
responsible for serving the REST API.
The project template is configured so the production bundles of the Angular application are generated
during the publish process and hosted alongside the ASP.NET Core project.
If you inspect the generated project file, you will see that:
• It has been configured to run the Angular build process whenever the project is built
• The Angular build output (ClientApp/dist) is included within the published project files
The only extra bit needed is for these files to be served by the ASP.NET Core application. You can see how
this is configured if you inspect the ConfigureServices method of the Startup class:
services.AddSpaStaticFiles(configuration =>
{
    configuration.RootPath = "ClientApp/dist";
});
In summary, the project template follows the first alternative discussed during the Hosting SPAs in
production section.
React
This project template follows exactly the same approach as the Angular one, replacing the contents of the
ClientApp folder with a React application generated using the create-react-app CLI.
• The same ASP.NET Core application providing the same REST API is included.
• The ClientApp folder contains the create-react-app React application. Any npm/yarn command
you are used to will work. You could even delete the contents of the folder and recreate them from
scratch using create-react-app.
Development setup
The default development setup is exactly the same as in the Angular case. If you inspect the
Configure method of the Startup class, you will notice a familiar setup, this time using spa.
UseReactDevelopmentServer instead of spa.UseAngularCliServer:
app.UseSpa(spa =>
{
    spa.Options.SourcePath = "ClientApp";

    if (env.IsDevelopment())
    {
        spa.UseReactDevelopmentServer(npmScript: "start");
    }
});
Since it uses the same approach as the Angular template, the same caveats discussed there apply.
Modifying server-side code requires restarting the server, which will cause the webpack development
server to be restarted as well, resulting in a very slow restart cycle.
Fortunately, we can modify the default setup in the same way we did in the Angular case. Open the
ClientApp folder in your favorite terminal and type npm start to get the React development server
started independently of the ASP.NET Core server.
Figure 15, Running the React development server independently of the ASP.NET Core server
By default, the React development server will open the URL in the browser. To disable this, set up a
BROWSER=none environment variable as per the advanced options of create-react-app.
Updating the default setup to simply establish a proxy to the react development server (without starting it)
is as simple as replacing spa.UseReactDevelopmentServer with:
spa.UseProxyToSpaDevelopmentServer("http://localhost:3000/");
Now you can launch the project, which will open the browser with the URL where the ASP.NET Core
application is listening. The browser is still able to download the SPA files because of the established proxy.
Inverting the roles of the webpack development server and ASP.NET Core server
You might also be interested in applying the same idea we discussed in the Angular case, leaving the React
development server in charge of the browser and the client side, while the ASP.NET Core server is only
responsible for the REST API.
The first steps are the same as in the Angular case. Update the project options in Visual Studio, removing
the option to open a browser window. Then remove the spa.UseProxyToSpaDevelopmentServer line
from the Startup class.
The only difference is that we need to setup the proxy for the create-react-app. This is as simple as adding
the following setting to the ClientApp/package.json file:
"proxy": "http://localhost:44381",
Make sure the URL matches the one where your ASP.NET Core application listens to. This might change
depending on whether it is run from Visual Studio with IISExpress or from the command line with Kestrel.
As simple as that, you can now start the React development server from the command line independently
of the ASP.NET Core application, proxying any requests other than bundle files to the ASP.NET Core
application.
This follows exactly the same setup as in the Angular case. When publishing the project, the webpack
bundles for the React application are generated using npm run build, and the bundles are included with
the rest of the project files.

The project is then configured to serve these files from the ClientApp/build folder in the same way as in
the Angular case.
The development and production setups are exactly the same as in the React template. (And the same
tweaks and modifications can be applied).
In essence, all that is needed from a SPA framework to integrate it in the same way is that it:

• Exposes a command to start a development web server which generates the initial bundles and
updates them whenever the source code changes
• Exposes a command to generate the production bundles which can then be hosted alongside the
ASP.NET Core application.
Since most SPA frameworks today use webpack and webpack-dev-server, all we need to know is the
command to start the development server and the command to run the production build of the bundles.
We can demonstrate how easy it is by adapting the React template for two other different frameworks, Vue
and Svelte.
Vue
Before we begin, make sure you have installed the Vue CLI. We will use the commands it provides to create
our Vue project, start the development server and generate the production bundles.

Now create a new project using the React SPA template. Once the new project is generated, remove the
ClientApp folder. Open your favorite terminal and navigate to the project root, then execute the command
"vue create client-app".
This will generate a new Vue project in the client-app folder, letting you customize different aspects along the
way. Once the generation process has finished, make sure to rename the client-app folder to ClientApp.
(The Vue CLI does not accept capital letters in project names, which it also uses as the root folder name.)
Once finished, cd into the ClientApp folder and use the npm run serve command to start the Vue
development server:
Let’s update the HelloWorld.vue component to retrieve and display data from the ASP.NET Core API, so we
can test the integration between the two applications. Add a new data property and a created method like
the following:
..and update the template to display them. For example, simply format as a code block:
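A sketch of the updated component, assuming the default weatherforecast endpoint exposed by the template's ASP.NET Core project:

<script>
export default {
  name: "HelloWorld",
  props: {
    msg: String
  },
  data() {
    return {
      forecasts: []
    };
  },
  async created() {
    // During development the proxy makes this a same-origin request
    const response = await fetch("/weatherforecast");
    this.forecasts = await response.json();
  }
};
</script>

And inside the component's template, for example:

<code><pre>{{ JSON.stringify(forecasts, null, 2) }}</pre></code>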
Now all we need to do is decide how to setup the proxy between the two applications:
• If you want to proxy from the ASP.NET Core server, replace the spa.UseReactDevelopmentServer
line with:
spa.UseProxyToSpaDevelopmentServer("http://localhost:8080/");
• If instead you want to setup the proxy from the Vue development server, first disable the
ASP.NET Core project option to open the browser on debug. Then completely remove any of the
spa.UseReactDevelopmentServer or spa.UseProxyToSpaDevelopmentServer lines. Then add a
new vue.config.js file inside the ClientApp folder with the following contents and restart the Vue
development server:
module.exports = {
  devServer: {
    proxy: 'https://localhost:44378/'
  }
}
Make sure the URL matches the location where the ASP.NET Core development server is listening.
Any of the two proxy setups will let you independently start each development server (Vue and ASP.NET
Core), which will appear as a single location from the browser perspective:
Figure 18, Running the Vue application with a proxy between the 2 development servers
If you prefer a setup like the one you get out of the box with the Angular/React templates, where ASP.NET
Core is responsible for starting the Vue development server (with the caveats we already discussed), it is
still possible. All you need is to create your own version of spa.UseAngularCliServer/spa.
UseReactDevelopmentServer. You can find more info in one of my previous articles on DotNetCurry.
Regarding the production setup, the command to generate the production bundles is the same as in React
(npm run build). However, the bundles are generated inside the ClientApp/dist folder instead of
ClientApp/build as in the React template. We can fix this with a couple of changes:
• Update the SPA RootPath defined in the ConfigureServices of the Startup class
services.AddSpaStaticFiles(configuration =>
{
    configuration.RootPath = "ClientApp/dist";
});
• Update the DistFiles element of the PublishRunWebpack target inside the project file:
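Following the pattern used by the React template's project file (which references the build folder), the updated element might look like this sketch:

<DistFiles Include="$(SpaRoot)dist\**" />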
Svelte
We can further prove how the approach works for most SPAs by modifying the React project template once
more, this time replacing the React SPA with a Svelte SPA.
We will use a Svelte template that uses webpack and the webpack-dev-server, which gives us the
commands npm run dev to start the development server and npm run build to generate the production
bundles.
As we did with Vue, start by creating a new ASP.NET Core application using the React template. Once generated,
remove the ClientApp folder. Then open your favorite terminal, navigate to the project root and execute
the following commands to generate the Svelte client-side application.
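A sketch of those commands, assuming the official webpack-based Svelte template (sveltejs/template-webpack) is used:

npx degit sveltejs/template-webpack ClientApp
cd ClientApp
npm install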
Once they are run, you will have a Svelte application instead of a React application as the client-side SPA. If
you execute npm run dev, you will get the Svelte development server started.
Let’s also modify this application so it fetches data from our ASP.NET Core API. Replace the contents of the
App.svelte file with:
<script>
  export let name;
  import { onMount } from "svelte";

  let forecasts = [];

  onMount(async function() {
    const response = await fetch("/weatherforecast");
    const json = await response.json();
    forecasts = json;
  });
</script>

<style>
  h1 {
    color: purple;
  }
</style>

<h1>Hello {name}!</h1>
<code>
  <pre>{JSON.stringify(forecasts, null, 2)}</pre>
</code>
All we need to do now is decide how we want to proxy the two applications, same as we did in the Vue
case.
• If you want to proxy from the ASP.NET Core server, replace the spa.UseReactDevelopmentServer
line with:
spa.UseProxyToSpaDevelopmentServer("http://localhost:8080/");
• If instead you want to set up the proxy from the Svelte development server, we will need to
manually configure the webpack-dev-server settings, since Svelte does not provide a CLI with
such a proxy option. Start by disabling the ASP.NET Core project option to open the browser
on debug. Then completely remove any of the spa.UseReactDevelopmentServer or
spa.UseProxyToSpaDevelopmentServer lines. Then add the following properties at the end of the
webpack.config.js file:
// Assumes const path = require('path') at the top of webpack.config.js
devServer: {
  proxy: [{
    target: 'https://localhost:44330/',
    secure: false,
    context(pathname, req) {
      // See Vue-cli codebase for a real example
      // https://github.com/vuejs/vue-cli/blob/dev/packages/%40vue/cli-service/lib/util/prepareProxy.js
      const fs = require('fs');
      function mayProxy(pathname) {
        // Proxy everything except public files and the webpack-dev-server endpoints
        const maybePublicPath = path.resolve(__dirname + '/public', pathname.slice(1));
        const isPublicFileRequest = fs.existsSync(maybePublicPath);
        const isWdsEndpointRequest = pathname.startsWith('/sockjs-node');
        return !(isPublicFileRequest || isWdsEndpointRequest);
      }
      return mayProxy(pathname);
    }
  }]
}
Make sure the URL matches the location where the ASP.NET Core development server is listening.
Figure 20, Running the Svelte application with a proxy between the 2 development servers
Any of the two proxy setups will let you independently start each development server (Svelte and
ASP.NET Core), which will appear as a single location from the browser perspective. This example was
also interesting because it makes visible the tools, such as webpack and webpack-dev-server, that other SPA
frameworks "hide" behind their CLI.
Regarding the production setup, the command to generate the production bundles is the same as in the
React and Vue cases (npm run build). However, we have a very similar problem as the one we saw with
Vue. The bundles are generated inside the ClientApp/public folder instead of ClientApp/build, where the
React template expects them. We need to apply the same fixes to correct the path:
• Update the SPA RootPath defined in the ConfigureServices of the Startup class
services.AddSpaStaticFiles(configuration =>
{
    configuration.RootPath = "ClientApp/public";
});
• Update the DistFiles element of the PublishRunWebpack target inside the project file:
<DistFiles Include="$(SpaRoot)public\**" />
After these changes, publishing the project will build and host the production bundles of our Svelte
application alongside the ASP.NET Core application.
Conclusion
There has been a lot covered in this article, considering how to integrate four different SPA
frameworks (Angular, React, Vue and Svelte) within ASP.NET Core. A considerable part of the article was
dedicated to the first section, in order to understand the basic ideas behind any project template combining
a SPA framework and ASP.NET Core. The rest of the article basically demonstrated how these same basic
ideas can be applied with Angular, React, Vue and Svelte.

Having a good understanding of these basic concepts, and a minimum understanding of the tooling such as
webpack that enables the SPA frameworks, lets us easily use any other SPA framework like Vue and Svelte even
when there are no official templates for them.
Deciding how each application will be run during development and whether any proxy will be established
between the two applications, has a great impact on your developer experience. The default Angular/React
templates insist on taking control over the SPA development server. While it might seem convenient, there
are downsides that can cause a much slower experience. However, we have seen how easy it is to modify
this initial setup, so you can decide for yourself which approach to follow.
Finally, we have seen how all these templates will generate the production bundles from the SPA source
code and host them alongside the ASP.NET Core application. While we haven’t seen an example of the
alternative hosting models described in the initial section, I hope the article gave you enough information
to find your way!
Daniel Jimenez Garcia is a passionate software developer with 10+ years of experience. He started as
a Microsoft developer and learned to love C# in general and ASP MVC in particular. In the latter half
of his career he worked on a broader set of technologies and platforms, while these days he is particularly
interested in .NET Core and Node.js. He is always looking for better practices and can be seen answering
questions on Stack Overflow.
PATTERNS & PRACTICES
Yacoub Massad
THE MAYBE MONAD IN C#: MORE METHODS
In this article, I will go through some methods that make working with the Maybe monad easier.
Introduction
In a previous article, The Maybe Monad, I talked about the Maybe Monad: a container that represents a
value that may or may not exist.
In that article, I ended up with an implementation of Maybe that is a struct. Here is an excerpt from the
code:
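(A minimal sketch of the struct's shape; the member names are assumptions, the full implementation is in the original article.)

public struct Maybe<T>
{
    private readonly T value;
    private readonly bool hasValue;

    private Maybe(T value)
    {
        this.value = value;
        this.hasValue = true;
    }

    public bool HasValue => hasValue;

    // ...
}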
I also provided a static Maybe class that makes it easier to create instances of Maybe<T>. For example, the
following code creates a Maybe<string> that contains no value, and another one that contains the value
“computer”:
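(A sketch of that creation code; the None and Some factory method names are assumptions.)

Maybe<string> noValue = Maybe.None<string>();
Maybe<string> someValue = Maybe.Some("computer");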
I also talked about many other methods that make working with Maybe easier; for example, the Map and
Bind methods.
In this article, I will talk about more useful methods that are related to Maybe.
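Using ValueOr

The ValueOr method returns the value inside the Maybe if there is one, or the provided default value otherwise. A minimal sketch of its usage, assuming a GetErrorDescription method that returns a Maybe<string>:

var errorMessage =
    GetErrorDescription(15)
        .ValueOr("Unknown error");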
errorMessage will always have a value. If the GetErrorDescription method returns None, the default
"Unknown error" value will be returned and stored inside errorMessage.

Now consider this variation:

var errorMessage =
    GetErrorDescription(15)
        .ValueOr(GetDefaultErrorMessage());
Here, the default value is obtained by calling a method called GetDefaultErrorMessage. The
GetDefaultErrorMessage method will always be called here, even if GetErrorDescription returns
a value. This could be an issue if GetDefaultErrorMessage is expensive in terms of performance or if it
has side effects that we only want to have if GetErrorDescription returned None.
There is another overload of ValueOr defined that allows us to provide a default value factory function that
will only be called if the Maybe has no value:
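(A sketch of that overload in use.)

var errorMessage =
    GetErrorDescription(15)
        .ValueOr(() => GetDefaultErrorMessage());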
There is another variation of ValueOr defined in the Maybe struct. I call it ValueOrMaybe. It is used to
provide an alternative Maybe value if the Maybe object at hand has no value. For example:
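(A sketch of the two tests referenced below; the method bodies are reconstructed from the description.)

// Test10: the alternative Maybe is computed eagerly
var errorMessage =
    GetErrorDescription(15)
        .ValueOrMaybe(GetErrorDescriptionViaWebService(15))
        .ValueOr("Unknown error");

// Test11: the factory function is only invoked when the first Maybe has no value
var errorMessage2 =
    GetErrorDescription(15)
        .ValueOrMaybe(() => GetErrorDescriptionViaWebService(15))
        .ValueOr("Unknown error");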
In Test10, we first try to get the error description via the GetErrorDescription method which tries to
find the error description in some file. We invoke ValueOrMaybe on the returned Maybe<string> to obtain
the error description from some web service to use it in the case where the first Maybe has no value.
The difference between the overload of ValueOrMaybe used in Test10 and the one used in Test11 is that
in Test11, the GetErrorDescriptionViaWebService method will only be called if the first Maybe has
no value. In Test10, it will always be called, even if we are not going to use its value.
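Using ValueOrThrow

A sketch of the ValueOrThrow usage discussed next, assuming an overload that accepts the exception message:

var logContents =
    GetLogContents(1)
        .ValueOrThrow("Unable to obtain the log contents");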
The ValueOrThrow method above will cause an exception to be thrown if GetLogContents returns None.
Using GetItemsWithValue
Consider this example:
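(A sketch; GetLogContents is assumed to return Maybe<string>.)

IEnumerable<string> availableLogs =
    Enumerable.Range(1, 20)
        .Select(day => GetLogContents(day))
        .GetItemsWithValue();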
Here, we invoke GetLogContents twenty times. The Select method returns an enumerable of type
IEnumerable<Maybe<string>>. GetItemsWithValue enables us to obtain an IEnumerable<string> that
corresponds to the maybe objects that do have values. The ones without a value will not be included in the
returned enumerable.
Using IfAllHaveValues
Consider this example:
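(A sketch reconstructed from the description below.)

Maybe<IEnumerable<string>> allLogsMaybe =
    Enumerable.Range(1, 20)
        .Select(day => GetLogContents(day))
        .IfAllHaveValues();

IEnumerable<string> allLogs =
    allLogsMaybe.ValueOrThrow("At least one log is unavailable");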
IfAllHaveValues will return None if any item in the enumerable has no value. In this example, if any of
the 20 logs is unavailable, IfAllHaveValues would return None and ValueOrThrow would throw an
exception.
Using ToAddIfHasValue
Consider this example:
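(A sketch of the example; Maybe.Some is an assumed factory method.)

var logMaybe = Maybe.Some("entry9");

var list = new List<string>
{
    "entry1",
    logMaybe.ToAddIfHasValue(),
    "entry2"
};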
In the above method, we create a list of strings. We want the list to have "entry1" and "entry2". Also, if logMaybe
has a value, we want its value to be included between "entry1" and "entry2".

The list variable will contain a list that will either have two or three entries inside it, depending on whether
logMaybe has a value. In the code we just saw, we know that it has the value "entry9".
This is possible in C# because the list initializer syntax is extensible. You can have a value, say of type
TValue, in the initialization list as long as there is a method with a signature similar to:
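void Add(TValue value)

The method can be a regular member of the collection type or an extension method; the compiler calls it for each item in the initializer.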
list.Add("entry1"); //List<T>.Add
list.Add(logMaybe.ToAddIfHasValue()); //Our extension method
list.Add("entry2"); //List<T>.Add
}
Take a look at a method called Add I defined in the ExtensionMethods class. Here is how it looks:
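(A sketch of its shape; the AddIfHasValue<T> member names are assumptions.)

public static void Add<T>(this List<T> list, AddIfHasValue<T> addIfHasValue)
{
    // Only add an item to the list when the wrapped Maybe contains a value
    if (addIfHasValue.Maybe.HasValue)
        list.Add(addIfHasValue.Maybe.GetValue());
}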
The ToAddIfHasValue method allows us to wrap a Maybe object inside a special type, AddIfHasValue<T>.
In the first version of Test16, the value returned by logMaybe.ToAddIfHasValue() is of type
AddIfHasValue<string>. Therefore, our extension method (Add) is called to potentially add the value
inside the Maybe to the list.
Note that we could have defined the Add method to work on Maybe<T> instead of AddIfHasValue<T>.
The code in Test16 would look like this in this case:
The problem would be that a reader of this code would expect that there are going to be three items in the
list. Adding ToAddIfHasValue would make it easier for the reader to understand that the value will only
be added if it exists.
Conclusion
In this article, I talked about some nice methods that are designed to make it easier to deal with the Maybe
type. I always find myself doing something over and over again with Maybe, and then I decide to add a
special method to do it. I hope you will find these methods useful!
Yacoub Massad
Author
Yacoub Massad is a software developer who works mainly with Microsoft technologies. Currently, he works
at Zeva International where he uses C#, .NET, and other technologies to create eDiscovery solutions. He
is interested in learning and writing about software design principles that aim at creating maintainable
software. You can view his blog posts at criticalsoftwareblog.com.
AZURE
Gouri Sohoni
MANAGE AZURE VIRTUAL MACHINES WITH ARM (AZURE RESOURCE MANAGER) TEMPLATES
There are many hosting options available, like App Service, Virtual Machines, Containers etc. For this tutorial,
I will be using the option of creating virtual machines, which can be used to deploy, test and manage
applications.

We can keep these virtual machines in a desired state, via Desired State Configuration (DSC), by using the
Azure Automation Service. These virtual machines are based on the Azure Resource Manager API.
Originally, virtual machines were created on Azure using something called the ‘Classic’ deployment, but now
these are replaced by the ARM (Azure Resource Manager) API. The main advantage of using the ARM API is
that many resources can be declared in a single JSON file, called an ARM Template.
These days, development teams, being more agile, need a way to deploy to the cloud repeatedly,
consistently and with a desired state. This Infrastructure as Code (IaC) is possible with ARM templates.
Infrastructure as Code is suitable in situations where it is difficult to set up machine(s) by relying on human
memory, where we want to avoid human error while initializing machines, or where we need to take care
of server failure automatically.

There are two ways of declaring the templates - declarative and imperative. Declarative is referred to as
functional, and imperative as procedural.
Azure Resource Manager helps us create different resources in a single group. The resources from the group
can be created, deployed and deleted as a group. The management related activities can be easily handled
with the help of Azure PowerShell, Azure CLI, the Azure Portal or REST APIs.
There are two terms we use when working with Azure - resource and resource group. A resource is an
artifact which can be managed like a VM, database etc., whereas a resource group is a container for
related resources. Azure Resource Manager works as a management layer which can be used to automate
deployment and configuration of resources.
ARM Template
An ARM template can define a set of resources like a database server, database, Azure Function or even a
website. These objects are declared in JSON format and we have the option of adding them to source
control. Once they are added to source control, we can manage them like any other code with various
versions. In an ARM template, we can add multiple resources. Once the template is available as part of
source control, we can use it to deploy to different stages as required in your application life cycle.
An ARM template can have all the objects for a complete resource group, or a resource from the group. It
can be deployed either completely or incrementally. When complete mode is specified, all the earlier objects
will be deleted from the resource group if they are not part of the template, whereas in incremental mode,
Resource Manager will just add the new functionality.
The disadvantage of working with ARM templates is that they cannot deploy code. For example, a template
can create a virtual machine, but cannot deploy an application on it. It cannot use a DACPAC to directly
deploy a database to SQL Server.
• It uses declarative syntax. We can declare the objects or resources we want to create or deploy and also
provide the configuration.

• The advantage of working with ARM templates is that they can be repeated across your deployments. The
special term used for this is that they are idempotent. This is a typical mathematical expression which states
that an operation can be applied 'n' number of times without changing the outcome. This typically means
that you can create a single template and use it for DSC (Desired State Configuration).

• The template can either have linked or nested resources. We can provide parallel or serial deployment,
as required.
We can deploy a multi-tier (3 tiered) application using a single template or can have a parent template
with three nested templates in it.
• You can use this template in Azure DevOps with CI/CD pipelines. This will provide the facility of a
continuous build of template and deployment as well.
• Parameters: we can provide different values for the same template which can be used in various
deployment scenarios.
We can create ARM templates by using the Azure Portal, Azure PowerShell, Azure CLI or by using clients like
Visual Studio Code or Visual Studio. Let us see how easy it is to create a template with the Azure Portal.
Though we can create multiple kinds of resources, I will be focusing on Virtual Machine for this tutorial.
Prerequisites: Azure Portal account, you can create one by using this link.
Note: it is a better option to create different resources in different resource groups so as to have logical grouping.
This option will be helpful when you have finished exploring and want to delete a resource. All the resources which
share a lifetime are advised to be kept in a single resource group which will help in deploying, updating or
deleting them together. Each resource can be in only one resource group. You can add or remove resources or
even move one to another resource group, any time. It can also be used to administer access control. If required,
one resource can interact with a resource from another group but they may not share the same lifecycle (typical
example will be a Web Service accessing database).
2. If you already have any existing Virtual Machine, you can just download the ARM template for it,
otherwise we can create a new Virtual Machine and use it to create a template
3. For existing Virtual Machine, use the Export template option and download the template.
4. If you do not have any Virtual Machine in your subscription, you can create a new one.
5. We need to specify required properties like Resource Groups name, OS for VM, name for VM, user id and
password for VM
7. By clicking on Review + Create, the validation for the Virtual Machine will be taken care of. It is
advisable to open ports for RDP, as we will need them later to connect to the VM. Also ensure that you
set auto-shutdown for the Virtual Machine if you do not need to keep it running 24x7.
8. Once validation is successful, you can create the VM. It will take some time to create, as the OS and
other details need to be configured on the machine.
9. Once the machine is created, use Export Template > Download option to create the ARM template.
The template comprises three parts - a list of parameters, variables and the actual resources. There are no
variables declared here.

The schema specifies the location of the JSON schema used, and the content version is also mentioned. The
same JSON file will be used for resource group deployment.
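As an illustration, a minimal skeleton of such a template might look like the following (the parameter name and default value are placeholders):

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "adminUsername": {
      "type": "string",
      "defaultValue": "azureuser"
    }
  },
  "variables": {},
  "resources": []
}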
We can set the default values for the parameters. When we download the template from an existing
resource (Virtual Machine in this case), it automatically fetches the existing values as default values.
We can change these values if required (changing subscription will be possible if you have multiple). We
can change the values of the parameters on the fly as we will find out in the next section.
We can use the ARM template we downloaded earlier and directly put it in version control and continue
with CI/CD for it. We can even use it as a part of the Visual Studio project and then add it to version control.
I am going to create a new project and a new ARM template with it.
Let us figure out CI/CD using Azure DevOps for Virtual Machine creation using an ARM template. I am going
to create a new ARM template for the Virtual Machine. As it is possible to change the values of parameters
on the fly at the time of deployment, the same ARM template can be used to create another virtual machine
later.
1. Since we are going to create a build definition to copy the .json files and a release definition to do the
actual deployment, create a Team Project in Azure DevOps. Use https://azure.microsoft.com/en-in/
services/devops/ or https://www.visualstudio.com to create a new organization if you do not have one.
2. Create a new Team Project with the process template you prefer, and Git as the source control.
3. Start Visual Studio 2017 or 2019 and make sure that you have installed the components for Azure
development. If they are not installed, run the Visual Studio Installer and add them by modifying the installation.
4. Go to Team Explorer, connect to the newly created Team Project and clone the repository. Create a new
Azure Resource Group project with the Windows Virtual Machine template. We can see the two json files added
to the project – one for the virtual machine and the other for the parameters.
5. Have a look at the files created. There are parameters for the administrator user name, the password, the
DNS name for the IP used by the virtual machine (this needs to be unique), the OS for the VM etc. We can set
values for these parameters in the json file if required. I am going to set the values at the time of
deployment.
6. Let us commit the json files to the source control with proper comments, and then create a Build
Definition. Ensure that the files are available in the Repos tab in Azure DevOps.
7. Select the classic editor for the Build Definition, as there is no suitable template available. Select an
Empty job after you select the repository.
8. Add two tasks - Copy Files and Publish Build Artifacts. Let us configure both of them.
After a successful trigger of the build, we should get two json files available in the drop folder.
9. Now the question remains - how do we deploy and create the virtual machine? Let us create a release
definition. Select New Pipeline, and select the template for an Empty Job.
10. Provide a name for the stage and select the artefact of the Build created earlier.
11. Add the Azure resource group deployment task and configure it as follows:
12. As already discussed, I will be providing some values for the parameters on the fly, so I will create
variables for them. Select the Variables tab and add three variables - user name, password
and dns. Remember to mark the password as a secret so that it is stored securely by Azure DevOps. As I
wanted a unique value for dns, I have used other predefined variables to build it: I concatenated a
constant with the build id and the build definition name. You can use any other combination.
13. Now we need to override the default values of the parameters in the configuration of the Azure resource
group deployment task.
15. The deployment can be either incremental or complete. The default mode is incremental, which
deploys whatever is defined in the template. It does not remove or modify any resources that are not defined
in the template (if you have already deployed a VM and then renamed it in the template, the first one will
still remain). With complete, all existing resources that are not in the template will be deleted. This is
ideal in a production environment.
16. Let us create a release and check if the deployment succeeds. This is going to be a time-consuming job,
as a virtual machine with the specified configuration and OS has to be created.
17. You can log in to the Azure Portal and check the Virtual Machine that was created. Make sure that you do
not keep the Virtual Machine running 24x7. You can apply the Auto-shutdown feature to it to avoid
running the machine when it is not required.
18. We can change the trigger for the build to CI (Continuous Integration) and for the release to CD
(Continuous Deployment), make some changes in the json file and commit them. The build will be triggered
immediately, followed by the release.
Note: Since we are using Incremental mode, make sure you are not renaming the VM! If you do so, you will end
up with two VMs!
Conclusion
In this tutorial, we discussed what Azure Resource Manager and ARM templates are, and the advantages of using
them. We also looked at how the ARM template is structured in JSON syntax, and downloaded the ARM
template for a Virtual Machine. I also showed how to create an ARM template for a VM using Visual Studio,
followed by copying the required JSON files to artefacts and deploying them as an actual Virtual Machine in
Azure.
Gouri Sohoni
Author
Gouri is a Trainer and Consultant specializing in Microsoft Azure DevOps. She has over 20 years of
experience in training and consulting. She has been a Microsoft MVP for Azure DevOps since 2011 and
is a Microsoft Certified Trainer (MCT). She is also certified as an Azure DevOps Engineer Expert and
an Azure Developer Associate.
Gouri has conducted several corporate trainings on various Microsoft Technologies. She is a regular
author and has written articles on Azure DevOps (VSTS) and DevOps Server (VS-TFS) on
www.dotnetcurry.com. Gouri also speaks frequently for Azure VidyaPeeth, and gives talks in
conferences and events including Tech-Ed and Pune User Group (PUG).
Damir Arh
The syntax for specifying offset from the end is not limited to ranges. It can also be used to specify an
index, again as an offset:
Index index = 5;
Index indexFromEnd = ^1;
When used as an indexer for arrays, the value at the given offset will be returned. Both the indices shown
above specify the same value in the following array:
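var array = new[] { 0, 1, 2, 3, 4, 5 };
Assert.AreEqual(5, array[index]);        // 6th value from the start
Assert.AreEqual(5, array[indexFromEnd]); // 1st value from the end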
There’s no need to add additional indexers for the new Range and Index types to make existing types
usable with the new syntax. The compiler implicitly adds support for the new indexers to the types which
already have the following members:
• For the Index indexer, the int indexer and either the Length or the Count property are required. The
int indexer can then be used instead of the missing Index indexer:
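// List<T> has an int indexer and a Count property, so the Index syntax works:
var list = new List<int> { 0, 1, 2, 3, 4, 5 };
Assert.AreEqual(5, list[^1]); // the compiler rewrites this as list[list.Count - 1]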
• For the Range indexer, the Slice method and again either the Length or the Count property are
required. The Slice method can then be used instead of the missing Range indexer:
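// Span<T> has a Slice method and a Length property, so the Range syntax works:
Span<int> span = new[] { 0, 1, 2, 3, 4, 5 };
var middle = span[1..^1]; // the compiler rewrites this as span.Slice(1, span.Length - 2)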
The string type is treated specially: it supports the new indexer syntax although it doesn’t have all the
required members listed above. For the Range indexer, the Substring method is used instead of the Slice method:
Assert.AreEqual('5', "012345"[^1]);
Assert.AreEqual("1234", "012345"[1..^1]);
Nullable reference types were already considered in the early stages of C# 7.0 development but were
postponed until the next major version (i.e. till C# 8.0). The goal of this feature is to help developers avoid
unhandled NullReferenceException exceptions.
The core idea is to allow variable type definitions to specify whether they can have null value assigned to
them or not:
IWeapon? canBeNull;
IWeapon cantBeNull;
Assigning a null value or a potential null value to a non-nullable variable would result in a compiler
warning (the developer could configure the build to fail in case of such warnings, to be extra safe):
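canBeNull = null;        // no warning
cantBeNull = null;       // warning
cantBeNull = canBeNull;  // warning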
Similarly, warnings would be generated when dereferencing a nullable variable without checking it for
null value first:
canBeNull.Repair(); // warning
cantBeNull.Repair(); // no warning
if (canBeNull != null)
{
canBeNull.Repair(); // no warning
}
The problem with such a change is that it breaks existing code: the feature assumes that all variables
from before the change are non-nullable. To cope with this, static analysis for null safety can be selectively
enabled with a compiler switch at the project level.
The switch is implemented as a property in the project file. The feature can be enabled by adding the
following line to the first PropertyGroup element of the project file:
<Nullable>enable</Nullable>
Additionally, the feature can be enabled selectively inside an individual file by using the #nullable
directive:
#nullable enable
// feature enabled
#nullable disable
// feature disabled
#nullable restore
// feature restored to project-level setting
C# already has support for iterators (see the tutorial “How to implement a method returning an
IEnumerable?”) and asynchronous methods (see the tutorial “What is the recommended asynchronous
pattern in .NET?”).
In C# 8.0, the two are combined into asynchronous streams. They are based on asynchronous versions of
the IEnumerable and IEnumerator interfaces:
public interface IAsyncEnumerable<out T>
{
    IAsyncEnumerator<T> GetAsyncEnumerator(
        CancellationToken cancellationToken = default);
}

public interface IAsyncEnumerator<out T> : IAsyncDisposable
{
    T Current { get; }
    ValueTask<bool> MoveNextAsync();
}
Additionally, an asynchronous version of the IDisposable interface is required for consuming the
asynchronous iterators:
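public interface IAsyncDisposable
{
    ValueTask DisposeAsync();
}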
This allows the following code to be used for iterating over the items:
var asyncEnumerator = asyncEnumerable.GetAsyncEnumerator();
try
{
    while (await asyncEnumerator.MoveNextAsync())
    {
        var value = asyncEnumerator.Current;
        // process value
    }
}
finally
{
    await asyncEnumerator.DisposeAsync();
}
It’s very similar to the code we’re using for consuming regular synchronous iterators. However, it might not
look familiar, because we typically just use the foreach statement instead. An asynchronous version of the
statement is available for asynchronous iterators:
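await foreach (var value in asyncEnumerable)
{
    // process value
}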
Just like with the foreach statement, the compiler generates the required code itself.
It’s also possible to implement asynchronous iterators using the yield keyword, similar to how it can be
done for synchronous iterators:
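// an illustrative asynchronous iterator; Task.Delay stands in for real asynchronous work
async IAsyncEnumerable<int> GetValuesAsync()
{
    for (var i = 1; i <= 10; i++)
    {
        await Task.Delay(100);
        yield return i;
    }
}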
Cancellation tokens are also supported with this syntax. The EnumeratorCancellation attribute
can be used to annotate the parameter which will receive the CancellationToken passed to the
GetAsyncEnumerator method:
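async IAsyncEnumerable<int> GetValuesAsync(
    [EnumeratorCancellation] CancellationToken cancellationToken = default)
{
    for (var i = 1; i <= 10; i++)
    {
        await Task.Delay(100, cancellationToken);
        yield return i;
    }
}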
When using await foreach with such an asynchronous iterator, the CancellationToken can be passed
to the GetAsyncEnumerator method by using the WithCancellation extension method:
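await foreach (var value in GetValuesAsync().WithCancellation(cancellationToken))
{
    // process value
}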
LINQ methods for the new IAsyncEnumerable<T> interface are available in the standalone System.Linq.Async
NuGet package.
Before C# 8.0, interfaces were not allowed to contain method implementations; they were restricted to
method declarations. The two examples below show a method implementation and a method declaration:
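// a method implementation: rejected by the compiler before C# 8.0
interface IWeapon
{
    void Repair() => Console.WriteLine("Repaired.");
}

// a method declaration: allowed in any version of C#
interface IWeapon
{
    void Repair();
}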
In spite of this, C# 8.0 added support for default interface methods, i.e. method implementations using the
syntax in the first example above. This allows scenarios not supported by abstract classes.
A library author can now extend an existing interface with a default interface method implementation,
instead of a method declaration.
This has the benefit of not breaking existing classes, which implement the old version of the interface. If
they don’t implement the new method, they can still use the default interface method implementation.
When they want to change that behavior, they can override it, but no code change is required just because
the interface has been extended.
Since multiple inheritance is not allowed, a class can only derive from a single base abstract class.
In contrast to that limitation, a class can implement multiple interfaces. If these interfaces implement
default interface methods, this effectively allows classes to compose behavior from multiple different
interfaces – this concept is known as traits and is already available in many programming languages.
Some pattern matching features have already been added to C# in version 7.0. The support has been
further extended in C# 8.0.
• Positional patterns allow deconstruction of matched types in a single expression. They depend on the
Deconstruct method implemented by a type (you can read more about the Deconstruct method in my
book “How did tuple support change with C# 7?”):
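// assumes Sword implements: public void Deconstruct(out int damage, out int durability)
if (sword is Sword(10, var durability))
{
    // code executes if Damage = 10
    // durability has value of sword.Durability
}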
• Property patterns are similar to positional patterns, but don’t require the Deconstruct method. As a
result, the syntax to achieve functionality equivalent to the example above is a bit longer, because it
must explicitly specify the property names:
if (sword is Sword { Damage: 10, Durability: var durability }) {
// code executes if Damage = 10
// durability has value of sword.Durability
}
• Tuple patterns allow matching of more than one value in a single pattern matching expression:
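// matches two values at once; assumes the Damage and Durability properties used above
switch (sword.Damage, sword.Durability)
{
    case (10, 100):
        // code executes if Damage = 10 and Durability = 100
        break;
}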
Additionally, an expression version of the switch statement allows terser syntax when the only result of
pattern matching is assigning a value to a single variable:
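// an illustrative switch expression over the Durability property used above
var condition = sword.Durability switch
{
    100 => "new",
    var d when d > 50 => "used",
    _ => "broken" // default case
};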
The discard character (_) is used for the default case. If it’s not specified in the expression and the value
doesn’t match any of the other cases, a SwitchExpressionException will be thrown.
Conclusion
Most of the new language features in C# 8 only bring an alternative, simpler syntax for functionality that
could already be achieved before.
Damir Arh
Author
Damir Arh has many years of experience with Microsoft development tools; both in
complex enterprise software projects and modern cross-platform mobile applications.
In his drive towards better development processes, he is a proponent of test driven
development, continuous integration and continuous deployment. He shares his
knowledge by speaking at local user groups and conferences, blogging, and answering
questions on Stack Overflow. He has been a Microsoft MVP for .NET since 2012.
Mahesh Sabnis
ASP.NET Core 3.0 is one such open-source framework which integrates seamlessly with client-side
frameworks and libraries, including Blazor, React, Angular, Vue.js etc.
Editorial Note: If you are new to .NET Core 3.0, read What’s New in .NET Core 3.0?
.NET Core 3.0 introduces various new features, some of them being:
• Single-File Executable
• Assembly linking
• Tiered compilation
All these new features are useful for modern application development.
Web Applications often have complex requirements nowadays. Some of these requirements demand that
the application must be cross-platform, that application data must be stored in relational as well as NoSQL
databases, that the front-end must be modular and highly responsive, and so on.
.NET Core was created to be cross-platform and releases from .NET Core v2.0 onwards, help to design
solutions to fulfill most of these requirements.
In ASP.NET Core 2.0 onwards, application templates provide an integration with front-end frameworks
like Angular, React, React-Redux. We can make use of these templates to develop applications as per the
requirements from users.
Editorial Note: In case you are interested in a Vue.js template, check this tutorial: ASP.NET Core Vue CLI
Templates.
Figure 1: ASP.NET Core application with EF Core, SQL Server, CosmosDB and Angular
As seen in Figure 1, in .NET Core 2.2, EF Core 2.2 was used only as an ORM for relational databases like
SQL Server. So, it was necessary for developers to write a separate data access layer for accessing
data from Azure Cosmos DB, which is generally classified as a NoSQL database. This means that our .NET Core 2.2
application would need separate Data Access Layers for relational databases as well as for a NoSQL one.
Editorial Note: Those new to Cosmos DB can read Azure Cosmos DB - Deep Dive.
In .NET Core 3.0, there is a cool feature provided in EF Core 3.0 which can be used to map entity classes
to a Cosmos DB NoSQL database and generate the database using the traditional code-first approach.
We can make use of the DbContext class to map to the Cosmos DB database collection.
To connect, the application needs a few pieces of information from the Cosmos DB account:
• The Cosmos DB database account endpoint - the application connects to Cosmos DB using this endpoint
• The account key used to authenticate the application
• The name of the database
Using EF Core 3.0, one can directly access the Azure Cosmos DB database and perform CRUD operations.
You can use the Code-First approach of EF Core to create a database and collection. Using the ASP.NET Core
3.0 Angular Template and EF Core 3.0 with Cosmos DB, we can modify Figure 1 to the one shown in Figure
2:
Step 1: Open the Azure portal at portal.azure.com. Make sure that you have an Azure subscription. Once you
log in with your credentials, you are inside the portal.
Step 2: In the portal, click on the Create a resource link on the top left (see Figure 3). In the search
box on this blade, enter Azure Cosmos DB, and the UI will display the Azure Cosmos DB option as shown in
Figure 3.
Click on the Azure Cosmos DB link that is marked red in the above figure. This will open a new blade for
creating an Azure Cosmos DB Account as shown in Figure 4.
Figure 4: Create an Azure Cosmos DB Account
To create an Azure Cosmos DB Account, you need to enter the Azure Subscription and Resource Group (if
you have not already created a resource group, it can be created using Create new link provided below the
Resource Group combobox).
You can then enter an Account Name as per your choice and then select the Cosmos DB API. In our case, we
will be using Core (SQL) which is a JSON document storage. You need to select a Location for the Account
and other information as per your requirement. To create the account, click on the Review + create button.
Once the Cosmos DB Account is created, we can see its details as shown in Figure 5.
We will create Web APIs using ASP.NET Core 3.0. These Web APIs will access Cosmos DB. The Angular
application will be the front-end for our application. We will create an Angular application that will capture
the profile information of Students and this profile information will be stored in Cosmos DB as JSON
documents. The overall structure of the application is explained in Figure 6.
Step 1: Open Visual Studio 2019 and create a new ASP.NET Core Web Application. Name this application as
ProfileAppNet30. Select the Angular Template for the application. Make sure that you select ASP.NET Core
3.0 as the project version as shown in Figure 7.
Note: Please disable the option “Configure for HTTPS” if you are using Kestrel to avoid CORS errors. Otherwise
you will have to change the protocol to https and port to 5001 in the Angular application.
Open the Solution Explorer to see the project structure with references of assemblies targeted to .NET Core
3.0 as shown in Figure 8.
The ClientApp folder shows the Angular application structure. If you look in the package.json file, you will
see that the Angular version supported for this template is Angular 8.0.0.
Step 2: Since we need to access Cosmos DB using EF Core, we need to add EF Core package to the project.
(Note that the EF Core package is not present by default in the ASP.NET Core 3.0 Project Template.)
Right click on Dependencies and select Manage NuGet Packages. Search for the
Microsoft.EntityFrameworkCore.Cosmos package. Once the package is found, install it as shown in Figure 9.
Step 3: Modify the appsettings.json file by adding key/value pairs for Cosmos DB settings like EndPoint,
AccountKey and DatabaseName. The EndPoint, AccountKey and DatabaseName can be found from the
Settings > ConnectionString blade.
Step 4: In the project, add a folder named Models and in this folder, add a new class file and name it as
ModelClasses.cs. In this class file, add the following code:
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
namespace ProfileAppNet30.Models
{
public class Education
{
[Required(ErrorMessage = "Degree is required")]
public string Degree { get; set; }
[Required(ErrorMessage = "Specialization is required")]
public string Specialization { get; set; }
[Required(ErrorMessage = "College Or School is required")]
public string CollegeOrSchool { get; set; }
[Required(ErrorMessage = "Year Of Admission is required")]
public int YearOfAdmission { get; set; }
[Required(ErrorMessage = "Year Of Passing is required")]
public int YearOfPassing { get; set; }
[Required(ErrorMessage = "Grade is required")]
public string Grade { get; set; }
}
public class WorkExperience
{
public string CompanyName { get; set; }
public string Designation { get; set; }
public DateTime DateOfJoin { get; set; }
public DateTime DateOfLeaving { get; set; }
public int YearsOfExperience { get; set; }
}
public class ProfileMaster
{
    // note: the original listing also defines personal information properties
    // (first, middle and last name, gender, marital status etc.); only a
    // representative subset is shown here
    [Key]
    public string Id { get; set; }
    [Required(ErrorMessage = "First Name is required")]
    public string FirstName { get; set; }
    public string MiddleName { get; set; }
    [Required(ErrorMessage = "Last Name is required")]
    public string LastName { get; set; }
public List<Education> Educations { get; set; }
public List<WorkExperience> Experience { get; set; }
}
}
Listing 2: The Model classes. These classes will be used to map with Cosmos DB to create JSON documents
The Education class contains properties for storing the education details of the end user. The
WorkExperience class contains properties to store the work experience of the end user. The ProfileMaster
class contains properties for storing the personal information of the end user. This class also contains a
list of Education details and a list of WorkExperiences of the end user. This establishes a One-To-Many
relationship between ProfileMaster and the Education and WorkExperience classes.
We expect the collection to contain a JSON document with the collection of Education details and
WorkExperiences embedded in a single Profile document.
Step 5: In the Models folder, add a new class file and name it as ProfileDbContext.cs. Add the following
code in this file:
using Microsoft.EntityFrameworkCore;
namespace ProfileAppNet30.Models
{
public class ProfileDbContext : DbContext
{
public DbSet<ProfileMaster> Profiles { get; set; }

public ProfileDbContext(DbContextOptions<ProfileDbContext> options)
    : base(options)
{ }

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // map the entity to a container named Profiles and embed the related
    // Education and WorkExperience collections in the same JSON document
    modelBuilder.Entity<ProfileMaster>().ToContainer("Profiles");
    modelBuilder.Entity<ProfileMaster>().OwnsMany(p => p.Educations);
    modelBuilder.Entity<ProfileMaster>().OwnsMany(p => p.Experience);
}
}
}
Listing 3: The ProfileDbContext class contains code for EF Core mapping with Cosmos DB.
Editorial Note: If you have used EF Core earlier, then you will find the code familiar. If not, here’s an old
albeit useful tutorial.
The ProfileDbContext class is derived from DbContext class. This class is responsible for connection
creation and mapping with the database. The class contains a DbSet property for ProfileMaster model
class. This will map with the container in Cosmos DB.
The OnModelCreating() method defines the container name as Profiles and defines the strategy for creating
the document with the relationships between the ProfileMaster, Education and WorkExperience classes.
Step 6: In the project, add a new folder and name it as Services. In this folder, add an interface file and name
it as ICosmosDbService.cs. Then add a class file, and name this class file as CosmosDbService.cs. Add the
following code in ICosmosDbService.cs
using ProfileAppNet30.Models;
using System.Collections.Generic;
using System.Threading.Tasks;
namespace ProfileAppNet30.Services
{
public interface ICosmosDbService<TEntity, in TPk> where TEntity: class
{
Task<IEnumerable<TEntity>> GetAsync();
Task<TEntity> GetAsync(TPk id);
Task<TEntity> CreateAsync(TEntity entity);
}
}
using Microsoft.EntityFrameworkCore;
using ProfileAppNet30.Models;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
namespace ProfileAppNet30.Services
{
public class CosmosDbService : ICosmosDbService<ProfileMaster, string>
{
private readonly ProfileDbContext ctx;
public CosmosDbService(ProfileDbContext ctx)
{
    this.ctx = ctx;
    // make sure that the database is created in Cosmos DB
    // if it has not already been created
    ctx.Database.EnsureCreated();
}

public async Task<IEnumerable<ProfileMaster>> GetAsync()
{
    return await ctx.Profiles.ToListAsync();
}

public async Task<ProfileMaster> CreateAsync(ProfileMaster entity)
{
    entity.Id = Guid.NewGuid().ToString(); // assumes the Id key shown in Listing 2
    await ctx.Profiles.AddAsync(entity);
    await ctx.SaveChangesAsync();
    return entity;
}
public async Task<ProfileMaster> GetAsync(string id)
{
var profile = await ctx.Profiles.FindAsync(id);
return profile;
}
}
}
The ICosmosDbService interface is a multi-type generic interface. This interface defines methods for
reading and writing data. This interface is implemented by the CosmosDbService class with TEntity
parameter as ProfileMaster and TPk parameter as string. The class has a constructor injected with
ProfileDbContext class. The constructor contains code to make sure that the database is created in
Cosmos DB, if it has not already been created. The other methods of the class contain familiar code for
performing read and write operations against the database using EF Core.
Step 7: Modify the Startup.cs file by adding the following code in the ConfigureServices() method of the
Startup class:
// the configuration key names below are assumed to match the entries
// added to appsettings.json in Step 3
services.AddDbContext<ProfileDbContext>(options => options.UseCosmos(
    Configuration["CosmosSettings:EndPoint"],
    Configuration["CosmosSettings:AccountKey"],
    Configuration["CosmosSettings:DatabaseName"]));
services.AddScoped<ICosmosDbService<ProfileMaster,string>,CosmosDbService>();
services.AddControllersWithViews().AddJsonOptions(options =>
    options.JsonSerializerOptions.PropertyNamingPolicy = null);
Listing 6: Registering ProfileDbContext class in DI Container along with CosmosDbService class and code for suppressing the
default JSON serialization naming policy
Step 8: In the Controllers folder, add a new empty Web API controller and name it as ProfilesController.cs. In
this controller, add code as shown in Listing 7:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using ProfileAppNet30.Models;
using ProfileAppNet30.Services;
namespace ProfileAppNet30.Controllers
{
[Route("api/[controller]")]
[ApiController]
public class ProfilesController : ControllerBase
{
private readonly ICosmosDbService<ProfileMaster, string> service;
public ProfilesController(ICosmosDbService<ProfileMaster, string> service)
{
    this.service = service;
}
[HttpGet]
public async Task<IActionResult> Get()
{
try
{
var response = await service.GetAsync();
return Ok(response);
}
catch (Exception ex)
{
return BadRequest(ex.Message);
}
}
[HttpGet("{id}")]
public async Task<IActionResult> Get(string id)
{
try
{
var response = await service.GetAsync(id);
return Ok(response);
}
catch (Exception ex)
{
return BadRequest(ex.Message);
}
}
[HttpPost]
public async Task<IActionResult> Post(ProfileMaster profile)
{
try
{
if (ModelState.IsValid)
{
var response = await service.CreateAsync(profile);
return Ok(response);
}
else
{
return BadRequest(ModelState);
}
}
catch (Exception ex)
{
return BadRequest(ex.Message);
}
}
}
}
The controller we just saw in Listing 7 has the CosmosDbService class injected in the constructor. The
controller contains HTTP Get and Post methods for returning and accepting profile information from an
Angular client application.
Creating Angular Client Code
Step 1: Expand the ClientApp folder. In this folder, we have the src folder that contains the app sub-folder.
In the app folder, add three folders named models, profile and services.
Since we will be using Angular material library for a rich UI like dialog-box, we need @angular/cdk and
@angular/material package dependencies in the project. Open the Command Prompt and navigate to the
ClientApp folder of the current application and run the following command to install these packages.
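npm install @angular/material @angular/cdk --save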
Step 2: In the models folder, add a new TypeScript file and name it as app.constants.ts. This file will contain
constant arrays:
Listing 8 contains constants for information (e.g. Degree, Specialization, etc.) that we need to capture from
the end user.
Step 3: In the models folder, add a new TypeScript file and name it as app.models.ts. This file will contain
TypeScript classes for ProfileMaster, Education and WorkExperience, corresponding to the server-side Model
classes in our ASP.NET Core application.
Step 4: We will create an Angular Service to make HTTP requests to the Web API. To do so, in the services
folder, add a new TypeScript file and name it as app.profile.service.ts. In this file, add the code as shown in
Listing 10.
Note: Run the application in Kestrel instead of IIS Express (the default used by VS 2019) otherwise, you will
get a CORS error. If the application automatically redirects to HTTPS, change the baseUrl value in the code
to https://localhost:5001.
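// note: the import paths below assume the folder layout created in Step 1
import { Injectable } from '@angular/core';
import { HttpClient, HttpHeaders } from '@angular/common/http';
import { Observable } from 'rxjs';
import { ProfileMaster } from '../models/app.models';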
@Injectable({
providedIn:'root'
})
export class ProfileService {
private baseUrl: string
constructor(private http: HttpClient) {
this.baseUrl = 'http://localhost:5000';
}
getProfiles(): Observable<ProfileMaster[]> {
let response: Observable<ProfileMaster[]> = null;
response = this.http.get<ProfileMaster[]>(`${this.baseUrl}/api/Profiles`);
return response;
}
getProfile(id: string): Observable<ProfileMaster> {
  let response: Observable<ProfileMaster> = null;
  response = this.http.get<ProfileMaster>(`${this.baseUrl}/api/Profiles/${id}`);
  return response;
}
postProfile(profile: ProfileMaster): Observable<ProfileMaster> {
  let response: Observable<ProfileMaster> = null;
  const options = {
    headers: new HttpHeaders({ 'Content-Type': 'application/json' })
  };
response = this.http.post<ProfileMaster>(`${this.baseUrl}/api/Profiles`,
profile,options);
return response;
}
}
Listing 10 contains the ProfileService class decorated as @Injectable. This means that the class can
be injected wherever it is required. The HttpClient is injected into the service class for making HTTP
calls to the Web API.
Step 5: It’s time for us to create Angular Views and their logic.
To do so, we need to add components in the application. Since we intend to use Angular Material’s dialog
boxes for WorkExperience and Educational details, we need to add separate components for a dialog box
implementation.
To use a Dialog box in Angular, we need to use the MatDialogRef object. To pass data to this Dialog box,
make use of the MAT_DIALOG_DATA object.
In the profile folder, add a new TypeScript file and name it as app.educationinfo.dialog.component.ts.
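// note: the import paths below assume the folder layout created in Step 1
import { Component, Inject } from '@angular/core';
import { MatDialogRef, MAT_DIALOG_DATA } from '@angular/material';
import { Education } from '../models/app.models';
import { Degrees, Specializations, AdmissionYear, PassingYear, Grades } from '../models/app.constants';

export interface EducationDialogData {
  educationInfo: Education;
}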
@Component({
selector: 'app-educationinfo-dialog-component',
templateUrl: 'app.educationinfo.dialog.view.html'
})
export class EducationInfoDialogComponent {
degrees = Degrees;
specializations = Specializations;
yearOfAdmission = AdmissionYear;
yearOfPassing = PassingYear;
grades = Grades;
constructor(
public dialogRef: MatDialogRef<EducationInfoDialogComponent>,
@Inject(MAT_DIALOG_DATA) public educationData: EducationDialogData
) { }
cancel(): void {
this.educationData.educationInfo = new Education('', '', '', 0, 0, '');
this.dialogRef.close();
}
}
In Listing 11, the EducationDialogData interface defines an object of the type Education. This object will be
used as the MAT_DIALOG_DATA for the dialog box. The dialog box also uses various constant arrays declared in
the app.constants.ts file. To show the user interface for the dialog box, add a new HTML file in the profile
folder and name it as app.educationinfo.dialog.view.html.
<h2 mat-dialog-title>Education Details</h2>
<div mat-dialog-content>
<div class="form-group">
<label>Degree</label>
<select matInput [(ngModel)]="educationData.educationInfo.Degree" class="form-control">
<option>Select Degree</option>
<option *ngFor="let d of degrees" value="{{d}}">{{d}}</option>
</select>
</div>
<div class="form-group">
<label>Specialization</label>
<select matInput [(ngModel)]="educationData.educationInfo.Specialization"
class="form-control">
<option>Select Specialization</option>
<option *ngFor="let s of specializations" value="{{s}}">{{s}}</option>
</select>
</div>
<div class="form-group">
<label>College or School</label>
<input matInput type="text" [(ngModel)]="educationData.educationInfo.
CollegeOrSchool" class="form-control">
</div>
<div class="form-group">
<label>Year of Admission</label>
<select matInput [(ngModel)]="educationData.educationInfo.YearOfAdmission"
class="form-control">
<option>Select Admission Year</option>
<option *ngFor="let ya of yearOfAdmission" value="{{ya}}">{{ya}}</option>
</select>
</div>
<div class="form-group">
<label>Year of Passing</label>
<select matInput [(ngModel)]="educationData.educationInfo.YearOfPassing"
class="form-control">
<option>Select Passing Year</option>
<option *ngFor="let yp of yearOfPassing" value="{{yp}}">{{yp}}</option>
</select>
</div>
<div class="form-group">
<label>Grade</label>
<select matInput [(ngModel)]="educationData.educationInfo.Grade" class="form-
control">
<option>Select Grade</option>
<option *ngFor="let g of grades" value="{{g}}">{{g}}</option>
</select>
</div>
</div>
<div mat-dialog-actions>
<button mat-button [mat-dialog-close]="educationData.educationInfo"
(click)="cancel()">Cancel</button>
<button mat-button [mat-dialog-close]="educationData.educationInfo"
cdkFocusInitial>Ok</button>
</div>
Editorial Note: A label can be bound to an element either by using the "for" attribute, or by placing the element
inside the <label> element. Here the author has skipped binding the label with the element, as he won’t be using
the labels programmatically in this example.
• matInput - represents the UI element which will be used to capture input from the end user.
• mat-dialog-close - applied on the button elements inside mat-dialog-actions, so that when a button is
clicked, the dialog box is closed.
Step 6: Similar to dialog boxes for Education details, we need to add a dialog box component for Work
Experience also. In the profile folder, add a new TypeScript file and name it as app.workexperience.dialog.
component.ts.
@Component({
selector: 'app-workexperience-dialog',
templateUrl: 'app.workexperience.dialog.view.html'
})
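export class WorkExperienceDialogComponent {
  // WorkExperienceDialogData mirrors EducationDialogData and wraps a WorkExperience
  experiences = Experiences;

  constructor(
    public dialogRef: MatDialogRef<WorkExperienceDialogComponent>,
    @Inject(MAT_DIALOG_DATA) public workexperienceData: WorkExperienceDialogData
  ) { }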
cancel(): void {
this.workexperienceData.experienceInfo = new WorkExperience('', '', new Date(),
new Date(), 0);
this.dialogRef.close();
}
}
To define user interface for the WorkExperience dialog, we need to add a new HTML file in the profile folder
and name it as app.workexperience.dialog.view.html. Add the following markup in the HTML file:
<h2 mat-dialog-title>Work Experience Details</h2>
<div mat-dialog-content>
<div class="form-group">
<label>Company Name</label>
<input matInput type="text" [(ngModel)]="workexperienceData.experienceInfo.
CompanyName" class="form-control">
</div>
<div class="form-group">
<label>Designation</label>
<input matInput type="text" [(ngModel)]="workexperienceData.experienceInfo.
Designation" class="form-control">
</div>
<div class="form-group">
<label>Date of Join</label>
<input matInput type="date" [(ngModel)]="workexperienceData.experienceInfo.
DateOfJoin" class="form-control">
</div>
<div class="form-group">
<label>Date of Leaving</label>
<input matInput type="date" [(ngModel)]="workexperienceData.experienceInfo.
DateOfLeaving" class="form-control">
</div>
<div class="form-group">
<label>Years of Experience</label>
<select matInput [(ngModel)]="workexperienceData.experienceInfo.
YearsOfExperience" class="form-control">
<option>Select Experience</option>
<option *ngFor="let e of experiences" value="{{e}}">{{e}}</option>
</select>
</div>
</div>
<div mat-dialog-actions>
<button mat-button [mat-dialog-close]="workexperienceData.experienceInfo"
(click)="cancel()">Cancel</button>
<button mat-button [mat-dialog-close]="workexperienceData.experienceInfo"
cdkFocusInitial>Ok</button>
</div>
We have added the dialog boxes! Now it’s time for us to define components that will use these dialog
boxes and also display a user interface for accepting the profile information.
Step 7: In the profile folder, add a new TypeScript file and name it as app.profile.component.ts. In this file,
add the following code:
// note: the import paths below assume the folder layout created in Step 1
import { Component, OnInit } from '@angular/core';
import { MatDialog } from '@angular/material';
import { Education, WorkExperience, ProfileMaster } from '../models/app.models';
import { ProfileService } from '../services/app.profile.service';
import { EducationInfoDialogComponent } from './app.educationinfo.dialog.component';
import { WorkExperienceDialogComponent } from './app.workexperience.dialog.component';
import {
  Genders,
  Experiences, MaritalStatusInfo
} from '../models/app.constants';
@Component({
selector: 'app-profile-component',
templateUrl: 'app.profile.component.view.html'
})
export class ProfileComponent implements OnInit {
genders = Genders;
maritalStatusInfo = MaritalStatusInfo;
education: Education;
educationDetails: Array<Education>;
educationTableHeaders: Array<string>;
workexperience: WorkExperience;
workexperienceDetails: Array<WorkExperience>;
workexperienceTableHeaders: Array<string>;
profile: ProfileMaster;
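// ProfileService and MatDialog are injected; the names match their usage below
constructor(private serv: ProfileService, private dialog: MatDialog) { }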
openEducationDialog(): void {
this.education = new Education('', '', '', 0, 0, '');
const educationDialogRef = this.dialog.open(EducationInfoDialogComponent, {
width: '600px',
data: { educationInfo: this.education }
});
educationDialogRef.afterClosed().subscribe(res => {
if (res !== undefined) {
console.log(`In If ${JSON.stringify(res)}`);
this.educationDetails.push(res);
console.log(JSON.stringify(this.educationDetails));
this.education = new Education('', '', '', 0, 0, '');
} else {
console.log('In Else');
this.education = new Education('', '', '', 0, 0, '');
}
});
}
openWorkExperienceDialog(): void {
this.workexperience = new WorkExperience('', '', new Date(), new Date(), 0);
const workExperienceDialogRef = this.dialog.open(WorkExperienceDialogComponent,
{
width: '600px',
data: { experienceInfo: this.workexperience }
});
workExperienceDialogRef.afterClosed().subscribe(res => {
if (res !== undefined) {
this.workexperienceDetails.push(res);
console.log(JSON.stringify(this.workexperienceDetails));
this.workexperience = new WorkExperience('', '', new Date(), new Date(),
0);
} else {
  this.workexperience = new WorkExperience('', '', new Date(), new Date(), 0);
}
});
}
ngOnInit(): void {
  this.education = new Education('', '', '', 0, 0, '');
  this.workexperience = new WorkExperience('', '', new Date(), new Date(), 0);
  this.profile = new ProfileMaster(); // assumes default construction; see app.models.ts
  this.educationDetails = new Array<Education>();
  this.workexperienceDetails = new Array<WorkExperience>();
  this.educationTableHeaders = new Array<string>();
  this.workexperienceTableHeaders = new Array<string>();
  for (const h in this.education) {
    this.educationTableHeaders.push(h);
  }
  for (const h in this.workexperience) {
    this.workexperienceTableHeaders.push(h);
  }
}
saveProfile(): void {
this.profile.Educations = this.educationDetails;
this.profile.Experience = this.workexperienceDetails;
this.serv.postProfile(this.profile).subscribe(response => {
console.log(JSON.stringify(response));
}, (error) => {
console.log(`${error.status} and ${error.message} ${error.statusText}`);
});
}
}
The ProfileComponent uses Education and WorkExperience dialog boxes. This component has
ProfileService and MatDialog objects injected in the constructor. Using ProfileService, the
component can make HTTP calls to the Web API.
The MatDialog object is used to manage the dialog box. The MatDialog object contains a method to open
the dialog box and a method to read data entered in the dialog box after the close event of the dialog box
is fired. The saveProfile() method of the component will be used to access postProfile() method of the
ProfileService to post the profile information to the server.
In the profile folder, add a new HTML file and name it as app.profile.component.view.html. In this file, add
the following markup:
<!-- the opening markup of this listing (input and select elements for the personal
information, bound to ProfileMaster properties, and the Education details table)
follows the same pattern as the Experience Details table below -->
</table>
</td>
</tr>
<tr>
<td colspan="3">
<h3>Experience Details</h3>
<input type="button" class="btn btn-warning" value="Click to Experience Details"
(click)="openWorkExperienceDialog()">
<table class="table table-bordered table-striped">
<thead>
<tr>
<td *ngFor="let h of workexperienceTableHeaders">{{h}}</td>
</tr>
</thead>
<tbody>
<tr *ngFor="let e of workexperienceDetails">
<td *ngFor="let h of workexperienceTableHeaders">{{e[h]}}</td>
</tr>
</tbody>
</table>
</td>
</tr>
<tr>
<td colspan="3">
<input type="button" (click)="saveProfile()" class="btn btn-success"
value="Save">
</td>
</tr>
</table>
The markup in Listing 16 contains input elements and select elements for entering the Profile
information. These elements are bound with the properties defined in the ProfileMaster class.
We have tables for showing Education details and WorkExperience details. We have button elements on
the top of these tables. These buttons are bound with the methods from the component class to open
Education and WorkExperience dialog boxes.
Step 8: Modify styles.css as shown in the following listing to import the @angular/material theme needed to
show the dialog box.
@import '~@angular/material/prebuilt-themes/deeppurple-amber.css';
Step 9: It’s time for us to update app.module.ts file from the app folder. In this file, we will import all
components created for dialog boxes and ProfileComponent. We need to import various material modules
so that dialog boxes will be executed successfully. Listing 18 shows the modified app.module.ts:
We have defined the routing for the profile component using RouterModule. We have also imported various
Material modules like MatDialogModule, MatInputModule, MatSelectModule and MatNativeDateModule, which are
needed to execute the MatDialog.
So far, so good! We have completed developing both the Server-Side as well as the Front end.
To test the application, run it using F5.
Note: The Application needs to run on Kestrel hosting environment (not IIS Express) since we are using http with
port 5000.
Update nav-menu.component.html to add the navigation for the profile page as shown in listing 19.
Click on the Profile link on the top right, and the profile page shown in Figure 11 will be displayed.
Enter details like the First Name, Middle Name, Last Name etc., and click on the Click to Add Education
details button to enter education details in the dialog box that appears.
Click on the OK button, and the education details, as shown in Figure 13, will be displayed in the table.
Now click on the Click to Experience Details button to add experience details. A dialog box will be displayed
where you can enter the experience details. Once these details are added, click on the Save button to add
the profile and save the information in Cosmos DB. This will create a container named Profiles and add the
document to it.
Figure 14: The data added in Cosmos DB by creating Profiles
Conclusion
In this tutorial, we saw how ASP.NET Core 3.0 with EF Core 3.0 provides a cool mechanism to access Cosmos
DB (classified as a NoSQL database) and perform CRUD operations against it, very similar to working with a
relational database. Additionally, the built-in Angular template of ASP.NET Core provided a rich experience
for front-end development. ASP.NET Core provides a unified platform for building web UIs and web APIs using
a relational or a NoSQL database.
Mahesh Sabnis
Author
Mahesh Sabnis is a DotNetCurry author and ex-Microsoft MVP with over 19 years of
experience in IT education and development. He has been a Microsoft Certified Trainer (MCT)
since 2005 and has conducted various Corporate Training programs for .NET, Cloud and
JavaScript Technologies (all versions). Follow him on twitter @maheshdotnet