DNC Magazine Issue 47

In this issue:

Progressive Web Applications: from zero to hero
Getting started with Application Architecture
Source Control in Azure DevOps (Best Practices)
Product Development in Azure DevOps
Hello .NET MAUI
Dynamic Class Creation in C#
Memoization in JavaScript, Angular and React
Machine Learning for Everybody
The End of Innovation outside the Cloud
116 Coding Practices - My Top Ones
122 Digital Transformation with MSFT during COVID-19
142 Creating APIs using Azure Functions
174 Architecting Desktop and Mobile Applications in .NET
186 Angular 9 & 10 Cheatsheet
EDITOR'S NOTE

Time flies! The DotNetCurry (DNC) Magazine, a digital publication dedicated to Cloud, .NET and JavaScript professionals, is 8 years old!!

This new milestone gives us a point of focus, and a springboard to jump upward and forward from here. Although, this time, during this ongoing pandemic, we will need to focus our efforts towards technologies that boost productivity and help us stay relevant.

I believe that to stay relevant, we will need three key elements: strengthening our fundamentals, awareness of new and disruptive technologies, and a plan to learn these technologies and make the most of them. In short, we will need to Skill, Reskill and Upskill.

We at DotNetCurry will do our best to cover topics that are relevant to the current situation, and we hope you will do your best to learn as much as you can!

On that note, I want to take this opportunity to thank my extraordinary team of Authors and Experts, whom I am humbled to be a part of. I also want to thank our sponsors, and our patrons who have helped us so far to keep this magazine free of cost. A special thanks to YOU, the reader, for without you, this magazine wouldn't exist.

Enjoy this 8th Anniversary Edition, and stay in touch via LinkedIn or Twitter. You can also email me your feedback at [email protected].

Suprotim Agarwal
Editor in Chief

THE TEAM

Editor in Chief: Suprotim Agarwal ([email protected])
Art Director: Minal Agarwal

Contributing Authors: Yacoub Massad, Vikram Pendse, Subodh Sohoni, Mahesh Sabnis, Klaus Haller, Keerti Kotaru, José Manuel Redondo López, Gouri Sohoni, Gerald Versluis, Daniel Jimenez Garcia, Damir Arh, Benjamin Jakobus

Technical Reviewers: Benjamin Jakobus, Damir Arh, Gouri Sohoni, Keerti Kotaru, Ravi Kiran, Subodh Sohoni, Suprotim Agarwal, Vikram Pendse, Yacoub Massad

Next Edition: October 2020

Copyright @A2Z Knowledge Visuals Pvt. Ltd. Reproductions in whole or part prohibited except by written permission. Email requests to "[email protected]". The information in this magazine has been reviewed for accuracy at the time of its publication, however the information is distributed without any warranty expressed or implied.

Windows, Visual Studio, ASP.NET, Azure, TFS & other Microsoft products & technologies are trademarks of the Microsoft group of companies. 'DNC Magazine' is an independent publication and is not affiliated with, nor has it been authorized, sponsored, or otherwise approved by Microsoft Corporation. Microsoft is a registered trademark of Microsoft corporation in the United States and/or other countries.

ASP.NET CORE

Daniel Jimenez Garcia

Progressive Web Applications: from zero to hero

Progressive Web Application (PWA) is a name coined in 2015 for describing web applications that take advantage of the latest browser APIs to provide an experience similar to a native application.

With Google pushing hard for it in Android, other vendors followed suit. By 2017, they were finally supported by iOS, and in 2019, Windows 10 did the same. This means that starting 2019, when you create a web application using well known languages and technologies, you can turn it into a PWA that feels like a native application on either mobile or desktop platforms.

PWA is a great opportunity for web developers, which also comes with its own set of challenges, like dealing with offline behavior. Luckily for us, as we will see through the article, there is plenty of support for PWA in the common web application frameworks you might already be used to.



What is a Progressive Web Application?
Defining Progressive Web Applications

Progressive web applications (PWAs) can be described as a set of techniques that take advantage of
modern browser APIs and OS support to provide an experience similar to a native application.

While this started as a way for web applications to offer an experience closer to traditional iOS/Android applications, it has expanded to traditional desktop applications as well. For example, Windows 10 now provides ample support for PWAs!

Unfortunately, there is no single formalized standard that defines what Progressive Web Applications are,
nor which functionality must be implemented by platforms that support them. This means you can read
different definitions depending on where you look. See for example:

• Mozilla MDN

• Google developers

• Wikipedia

Of these, I feel the Mozilla MDN definition summarizes it best, even though their page is a draft!

Progressive Web Apps are web apps that use emerging web browser APIs
and features along with traditional progressive enhancement strategy to
bring a native app-like user experience to cross-platform web applications.
Progressive Web Apps are a useful design pattern, though they aren't a
formalized standard. PWA can be thought of as similar to AJAX or other
similar patterns that encompass a set of application attributes, including
use of specific web technologies and techniques.

We will also go through the minimum technical requirements for a web application to be considered a PWA. This is important, since otherwise operating systems like iOS, Android or Windows 10 won't consider it a PWA, and thus won't allow installing it!

These requirements are:

• Usage of HTTPS, since they have to be secure.

• Usage of service workers, which lets them be fast and provide an experience closer to native
applications, like push notifications or offline mode.


• Described through a manifest file, so that at the time of installation, the OS knows about the name, icon
and other useful metadata.

Note that these requirements don’t mean apps have to work offline or provide push notifications. It also
doesn’t mean you have to implement them using JavaScript SPA frameworks. As long as you use HTTPS, a
service worker and a manifest, you have a PWA.

How you implement it and the functionality it offers, is entirely up to you!

A Progressive Web Application example: Vue.js docs site

Enough with the theory, let's look at an example using the Vue.js docs site: https://vuejs.org/. I am going to
use the latest Edge for Windows, but feel free to try it on your Android/iOS phone as well!

When you open a PWA in your browser, the browser will notice and provide an option for installing it. For
example - using Edge on Windows 10 (you can also install from Chrome; steps might vary slightly):

Figure 1, install the Vue.js docs as an application in Windows 10 using Edge

You can also use the browser developer tools (Ctrl + Shift + I) to inspect both the manifest and the service
worker of the PWA.

If you open developer tools on Microsoft's Edge browser, go to the Application tab as shown in Figures 2 and 3:

Figure 2, inspecting the manifest.json file of the Vue.js docs site


Figure 3, inspecting the service worker of the Vue.js docs site
Once you install the PWA, notice it shows up in your start menu in Windows. You can manage it as any
other app, pinning it to the start menu, pinning to the taskbar or inspecting its properties.

Interestingly, if you inspect the application properties, you can see how this is a shortcut for a web
application that runs on Edge:

"C:\Program Files (x86)\Microsoft\Edge\Application\msedge_proxy.exe" --profile-directory=Default --app-id=ooalagpdddfhdaoiahpgohglonkmabgf

If you launch the application, it feels like a regular Windows 10 application, even though it’s just a web
application running inside a sandbox environment:

Figure 4, running the Vue.js docs as a Windows 10 app

Let’s now see if it works offline as promised!

Launch the application from the Windows 10 start menu (or from the installed apps in your iOS/Android
phone). Once the app has launched, disconnect from your network. Notice how the application keeps
working, thanks to the service worker.



If you want, you can even open the developer tools for the app and inspect the network tab while navigating the docs (you can also try this in your browser to test the offline mode). Notice how the requests are being served from cache by the service worker, while requests such as ads or the Vimeo player will fail.

Figure 5, the service worker intercepts the network requests and serves them from cache

We can go a bit further and see how the manifest and service worker are actually registered in a web
application. If you open the developer tools and navigate to https://vuejs.org, you can inspect the HTML
index document.

• Within the <head> element you will see the manifest file being registered:

<link rel="manifest" href="/manifest.json">

• At the end of the <body> element you will see the service worker being installed:

<script>
'use strict';
'serviceWorker'in navigator&&navigator.serviceWorker.register('service-worker.js').
then( // rest of the code omitted
</script>

The way these are registered is a perfect example of what the term progressive in PWA means. The manifest is just a link with a special attribute that browsers which don't provide PWA support will simply ignore.

The script that registers the service worker first checks that the browser actually supports the service worker API. This way, browsers/platforms that support the latest APIs get all the features, while older browsers ignore them.
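For reference, an unminified version of that registration pattern, with basic logging added, would look roughly like the following sketch (the service worker file name is just a placeholder):

if ('serviceWorker' in navigator) {
  // Only browsers that support the Service Worker API will run this block.
  navigator.serviceWorker.register('/service-worker.js')
    .then(registration => console.log('Service worker registered with scope:', registration.scope))
    .catch(error => console.error('Service worker registration failed:', error));
}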

If you have never looked at PWAs before, hopefully this sneak peek has been enough to pique your interest. Let's now see how we can create PWAs using different frameworks.


Developing Progressive Web Applications
In order to create a PWA, all you have to do is add a manifest file and a service worker to your web application, and serve it over HTTPS. It's no surprise then that many of the frameworks used to create web applications help developers create these.

This is particularly helpful for the service worker. Armed with knowledge about the inner workings of each framework, these tools can tailor the service worker with suitable default behavior.

You can find the code for each of these examples on GitHub.
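Before looking at each framework, here is a minimal, hand-written service worker following a cache first strategy, just to illustrate the kind of thing these tools generate for you. The asset paths below are placeholders, and real generated workers are considerably more robust:

const CACHE_NAME = 'static-cache-v1';
// Placeholder list of static assets to pre-cache during installation.
const ASSETS = ['/', '/index.html', '/css/site.css', '/js/app.js'];

self.addEventListener('install', event => {
  // Download and cache the static assets before the service worker takes over.
  event.waitUntil(caches.open(CACHE_NAME).then(cache => cache.addAll(ASSETS)));
});

self.addEventListener('fetch', event => {
  // Cache first: serve from the cache when possible, fall back to the network otherwise.
  event.respondWith(
    caches.match(event.request).then(cached => cached || fetch(event.request))
  );
});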

Developing PWAs using Blazor WebAssembly

Now that Blazor WebAssembly is officially released, it is a good time to explore how you can take advantage of it and convert it into a PWA.

Why not use server-side Blazor, which has been officially available for a while? Since server-side Blazor relies on a permanent SignalR connection with the server, it's harder to find a use case for PWA together with server-side Blazor. However, if you don't care about the offline functionality and you just like the idea of launching and executing it as a native app, it is technically possible! See this article.


Blazor WebAssembly was officially released with .NET Core 3.1.300, and a project template came alongside it. This project template allows you to initialize your Blazor application with PWA support.

If you use the dotnet CLI, you just need to use the --pwa flag as in:

dotnet new blazorwasm --pwa --hosted -o BlazorPWATest

Alternatively, make sure to check the PWA option when creating a new Blazor WebAssembly project using Visual Studio:



Figure 6, creating a new Blazor Web Assembly project with PWA support

In both cases, make sure you use the right version of .NET Core. Also note the selection of the hosted deployment
model, which makes it easier to test the PWA functionality.

When the project is generated, you will notice a manifest.json file located inside the Client’s wwwroot
folder.

{
  "name": "BlazorPWATest",
  "short_name": "BlazorPWATest",
  "start_url": "./",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#03173d",
  "icons": [
    {
      "src": "icon-512.png",
      "type": "image/png",
      "sizes": "512x512"
    }
  ]
}

There are also two service workers:

• One is used during development and is called service-worker.js. As explained in the comments,
reflecting code changes would be harder if we are using a real service worker during development.

• The other one is called service-worker.published.js and contains the real service worker used when published. It is the service worker that provides the caching of static resources and offline support.

Figure 7, inspecting the service worker added by the Blazor WebAssembly template

The offline support that the template provides is described in great detail in the official documentation. To
summarize the main points:

• The service worker applies a cache first strategy. If an item is found in the cache, it is returned from it,
otherwise it will try to make the request through the network.

• The service worker caches the assets listed in a file named service-worker-assets.js which is generated at build time. This file lists all the WASM modules and static assets such as JS/CSS/JSON/image files that are part of your Blazor application, including the ones added via NuGet packages.

• The published list of assets is also critical for ensuring the new content is refreshed. Each of the items
in the list includes its contents hash and the service worker will work through the latest version of the
list every time the application starts.

• Non-AJAX HTTP GET requests for HTML documents other than index.html are intercepted and
interpreted as a request to index.html. What this tries to do is intercept full requests for pages of the
Blazor SPA (Single Page Application), so the browser loads the index.html alongside the JS/WASM that
initialize the Blazor application which then renders the required page.

It's also worth highlighting that the offline working logic is under the control of the developer. You can either modify the provided service worker file, and/or use the ServiceWorkerAssetsManifestItem MSBuild elements (see the official documentation for more info).
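For instance, a common customization is to keep API calls out of the cache first behavior so that they always hit the network. A purely illustrative fetch handler for this (not the actual code generated by the Blazor template, and assuming a hypothetical /api/ prefix) could look like this:

self.addEventListener('fetch', event => {
  const url = new URL(event.request.url);
  // Hypothetical rule: API endpoints are never served from the cache.
  const isApiCall = url.pathname.startsWith('/api/');
  event.respondWith(
    isApiCall
      ? fetch(event.request)
      : caches.match(event.request).then(cached => cached || fetch(event.request))
  );
});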

Let’s see it working in action. After generating your Blazor project, publish it to a folder as in:



dotnet publish -c Release

This will publish both the client and server projects (remember we chose the hosted model when initializing the project) to corresponding subfolders. The server should be published to a folder like Server/bin/Release/netcoreapp3.1/publish. If you navigate to that folder, you can then start the published server by running the command dotnet BlazorPWATest.Server.dll:

Figure 8, running the published Blazor application

Now open the address in the browser and install it as an app:

Figure 9, Installing the Blazor PWA

In the terminal where you were running the published server, stop the server. Notice how you can still launch the installed application! If you open the developer tools, you can see how the required files are being loaded from the service worker cache:

Figure 10, testing the offline capabilities of the installed Blazor PWA

If you run into trouble, feel free to check the sample project on GitHub.

This concludes the brief overview of the PWA support available out of the box for Blazor WebAssembly
projects. For more information, you can check this excellent article from Jon Galloway and the official
Blazor documentation.

Developing PWAs using ASP.NET Core

When writing traditional ASP.NET Core applications, there is no Microsoft provided PWA support. Creating a
manifest is just about providing some metadata, but a useful service worker is a different matter altogether.
You could attempt to create your own, but the caching policy for offline mode can be surprisingly tricky to
get right.

After all, caching is one of the hard problems!

Luckily for everyone, there are community driven NuGet packages to help you turn your ASP.NET Core
application into a PWA. Let’s take a look at WebEssentials.AspNetCore.PWA by Mads Kristensen.

Note the package hasn’t been fully updated to ASP.NET Core 3.1 at the time of writing. While I had no trouble
getting it to work on a brand new MVC project, your mileage might vary (I’ve noticed issues in brand new Razor
pages projects). See the status of this GitHub issue for updates.

Start by creating a new ASP.NET Core MVC application using:

dotnet new mvc -o ASPCorePWATest



Then install the NuGet package with:
dotnet add package WebEssentials.AspNetCore.PWA

A minor inconvenience is that you need to write the manifest and add any icons it references! You can easily write a manifest yourself, either by following the NuGet package instructions, taking inspiration from the earlier Blazor example, or just searching online. For example:

{
  "name": "ASPCorePWATest",
  "short_name": "ASPCorePWATest",
  "start_url": "./",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#03173d",
  "icons": [
    {
      "src": "icon-192.png",
      "type": "image/png",
      "sizes": "192x192"
    },
    {
      "src": "icon-512.png",
      "type": "image/png",
      "sizes": "512x512"
    }
  ]
}

Note that you will also need to provide the icons referenced in the manifest, and that at the very least, you should provide both a 192x192 and a 512x512 version of the icon. The easiest way is to generate your own icons using websites such as favicon.io (which will also provide a code snippet to be added to your manifest file).

The last thing you need to do is to register the package by adding the following line inside the
ConfigureServices method of the Startup class.

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews();
    services.AddProgressiveWebApp();
}

Now that everything is wired up, launch the application with the familiar dotnet run command (or use Visual Studio, whichever you prefer).

You will see the now familiar interface to install it as an application (see Figure 11). You can also see in the browser developer tools that the manifest and the service worker have been correctly recognized.

Figure 11, adding PWA support to a traditional ASP.NET Core MVC application

If you run into trouble, feel free to check the sample project on GitHub.

If you use dotnet run and have tried the earlier Blazor example, it is possible that all the Blazor files are still cached by the browser. If you still see the previous Blazor site when you open the ASP.NET Core MVC site, clear the browser cache or force a reload (Ctrl+F5 on PC, Cmd+R on Mac).

Feel free to take a look at the library documentation on GitHub. Beyond the library installation and basic setup, it details the options provided to customize behavior such as the service worker and/or the strategy for caching assets.

Developing PWAs using SPA JavaScript frameworks

By now it should be clear that in order to create a PWA, you do not need to use a SPA framework. All you
need is HTTPS, a manifest file and a service worker.



However, in order to provide a native-like user experience, many developers will choose one of the main
SPA JavaScript frameworks.

I recently wrote about how to develop SPA applications using these frameworks and ASP.NET Core (see Developing SPAs with ASP.NET Core v3.0). Let's now take a quick look at how each of these frameworks lets you add PWA support to your application.

Don’t be surprised if it starts getting a bit repetitive. After all, in order to provide basic PWA functionality,
you really just need to add the manifest and service worker!

If you run into trouble, feel free to check the sample projects on GitHub.

Adding PWA support to a SPA created using the Angular CLI

When you create an ASP.NET Core application using the template provided by Microsoft, the ClientApp
folder contains nothing but a regular Angular CLI application. This means you can use the Angular CLI in
order to enable additional Angular features such as PWA support.

Begin by creating a new Angular SPA project as in:

dotnet new angular -o AngularPWATest

Once initialized, make sure to build the project using dotnet build, or at the very least, install the NPM
dependencies inside the ClientApp folder

cd ClientApp && npm install

You will also need the Angular CLI installed. If you haven't installed it yet, follow the official instructions. It
comes down to:

npm install -g @angular/cli

At this point, you should have your Angular project ready and all the necessary tooling installed.

Now for the interesting part!

In order to add PWA support, all you need to do now is to run a single command using the Angular CLI:

ng add @angular/pwa

That's it! Now run your project using dotnet run and verify that in fact you have a PWA that can be installed.

Figure 12, adding PWA support to an Angular SPA application

I won’t go into more detail as part of this article. If you want to know more, check out this article on how
to get started with PWA in Angular, and also read through the excellent official documentation on service
worker support.

Adding PWA support to a SPA created using create-react-app

Let’s switch our attention to React and the Microsoft provided React templates.

As discussed in my previous article, these templates have a standard create-react-app inside the ClientApp folder. This means we can take advantage of the existing PWA support already provided by create-react-app.

In fact, the React application initialized by the React template already has the required PWA support!



Start by creating a new application using the React template:

dotnet new react -o ReactPWATest

Take a moment to inspect the contents of your ClientApp folder. Notice how there is a manifest file
inside ClientApp/public/manifest.json. There is also a script prepared to install a service worker, the
ClientApp/src/registerServiceWorker.js file.

You will need to make a few small fixes to the provided manifest. If you inspect the manifest in Chrome, you
will notice it complains about the icon and start URL. For the purposes of this article, you can copy both the
manifest and icon from the Blazor example. If you want to fix by hand, make sure you set "start_url":
"./", and provide at least a 512x512 icon.

{
  "short_name": "ReactPWATest",
  "name": "ReactPWATest",
  "icons": [
    {
      "src": "icon-512.png",
      "type": "image/png",
      "sizes": "512x512"
    }
  ],
  "start_url": "./",
  "display": "standalone",
  "theme_color": "#000000",
  "background_color": "#ffffff"
}

Unlike the service worker generated with the Angular template, the one generated with the React template won't be registered during development. It is only registered during Release builds. This is due to the following guard located inside the registerServiceWorker.js file:

export default function register() {
  if (process.env.NODE_ENV === 'production' && 'serviceWorker' in navigator) {
    // ... service worker registration logic omitted for brevity ...
  }
}

So, in order to see the PWA support in action, we just need to run it in production mode. Publish the project
to a folder using the command:

dotnet publish -c Release

Then navigate to the published folder and run the published version:

cd .\bin\Release\netcoreapp3.1\publish\
dotnet ReactPWATest.dll


Now when you open the site in the browser, you will notice the browser recognizes the manifest, that a
service worker is registered and that you have the familiar option to install it.

Figure 13, adding PWA support to a React SPA

As you can see, PWA support is pretty much built into the React template. Take a look at the create-react-app documentation for more information.

Adding PWA support to a SPA created using the Vue CLI

Other SPA frameworks like Vue also provide good PWA support. In the case of Vue, the Vue CLI
provides a plugin that enables the PWA support in the application.

Let’s create a new project using the React template and convert it into a Vue application as described in my
previous article. You can add the PWA plugin while initializing the Vue application by manually selecting
the features:



Figure 14, selecting the PWA feature when initializing a Vue CLI application
Otherwise, you can also install it later by running the following command inside the ClientApp folder:

vue add pwa

Note that the PWA support works similarly to the one provided by create-react-app. The service worker is only registered during Release builds.

Remember there are a few settings to adapt from the React template in order to get Release builds working with a Vue CLI app. See production builds for Vue in my previous article.

You can build the application for release and run it using the same commands as in the React SPA example
before. Once you load the production site in the browser, notice the manifest and service worker are
recognized, and the browser offers the option to install the application.

Figure 15, adding PWA support to a Vue SPA application

For more information, check the official plugin documentation.


Developing PWAs using static site generators

A very interesting use case for PWAs is that of static sites such as documentation, portfolios or personal blogs.

Over the last few years, frameworks such as Jekyll, Vuepress and Gatsby.js have become a popular choice for building these types of applications. They take care of most of the heavy lifting needed to create a web application, letting the developer concentrate on the content of the site.

It's no surprise then that PWA support is an out-of-the-box feature that developers can simply enable. Let's take a look at a quick example using Vuepress, by creating a documentation site for a library or a project you have created.

Let's quickly set up a Vuepress project by running the following commands in a terminal:

mkdir vuepress-pwa
cd vuepress-pwa
npm init
npm install -D vuepress

Edit the generated package.json file and replace the scripts property with the scripts needed to run the site in development mode and generate the production build:

"scripts": {
"dev": "vuepress dev docs",
"build": "vuepress build docs"
},

That way, you will be able to run the command npm run dev to start the development server, and npm
run build to generate the production package.

Let’s add some contents to the site.

Create a new folder named docs, and inside create a new file named README.md. This will serve as the
home page of your documentation site:

---
home: true
heroText: My awesome project
tagline: Sample docs site showcasing vuepress and PWA
actionText: Getting Started
actionLink: /getting-started/
---

This is _actually_ markdown


You can include any markdown contents, like a code block:
```bash
# start development server
npm run dev
# generate production build
npm run build
```
Links across files work as expected. For example [getting started](/getting-started)



Note how vuepress (and similar static site generators) has great out of the box support for markdown, which will
be converted into static HTML and CSS at build time. This makes them a great fit for blogs and documentation.

Inside docs, create a new getting-started folder, containing another README.md file:

# Getting started
This normally contains the easiest way to get started with your library.

Finally, add a subfolder named .vuepress inside the docs folder. Inside, place a new config.js file. This
is where you can configure the vuepress settings, which we will use to define the navigation. Add these
contents to the .vuepress/config.js file:

module.exports = {
  title: 'My awesome project',
  description: 'Documentation for my awesome project',
  themeConfig: {
    nav: [
      { text: 'Home', link: '/' },
      { text: 'Getting Started', link: '/getting-started/' },
    ]
  }
}

After all these steps, you should now have a minimal documentation site built using Vuepress. Run the npm
run dev command and navigate to the address shown in the browser (normally http://localhost:8080). You
will see something like this:

Figure 16, a sample documentation site built using vuepress

If you run into trouble, feel free to check the sample project in GitHub.

Let's now see how we can add PWA support using the official PWA plugin. Install it by running the command:

npm install -D @vuepress/plugin-pwa

And configure it by adding these entries inside the default export of docs/.vuepress/config.js:

module.exports = {
  // previous contents omitted
  head: [
    ['link', { rel: 'manifest', href: '/manifest.json' }],
  ],
  plugins: ['@vuepress/pwa'],
}

You will then add the manifest and icon files inside a new public folder, added to the existing .vuepress
folder, as in docs/.vuepress/public/manifest.json. You could again lift the ones from the initial
Blazor example or create your own.

That’s the minimum configuration needed to make your Vuepress documentation site PWA compliant. Now
you just need to deploy it using HTTPS.

A simple and nice alternative for public documentation sites is to use GitHub pages, but there are other
options as per the official docs.

Figure 17, testing PWA support after deploying to github pages

Remember the tour of the Vue documentation site at the beginning of this article? That is a perfect real-life
example of a Vuepress application with PWA enabled!



This barely scratches the surface of what can be done with a framework such as Vuepress. If you are
interested, check out the official docs and the awesome-vuepress site. Gatsby.js is also a popular alternative
for those who prefer React, and is designed with a broader scope than static sites.

Testing and Tooling


If you plan on developing PWAs, there are a few tools worth having in your toolbox. If you have built web applications focused on mobile devices and/or native applications, you might have come across these. Let's have a brief look at them.

Lighthouse
Google Chrome (and other Chromium based browsers such as the new Edge) provides an excellent auditing
tool, Lighthouse.

What Lighthouse does is audit your site across various categories, including PWA support. It then generates a report with a score, findings and recommended fixes/improvements.

When it comes to PWA, you should make sure to audit your production builds. As we have seen through the
article, in many cases the service worker is not enabled during development, or a dummy version of it might be
used instead.

Let’s try it for example with the Blazor project.

Make sure to publish it for Release and run the resulting site (You can check the earlier section about
Blazor). Once up and running, navigate to the site with your browser and open the developer tools. Go to the
Lighthouse tab:

Figure 18, launching a Lighthouse audit



Make sure you select the Progressive Web App category on the right, and mobile as the device type. Then
click the Generate report button.

Figure 19, Lighthouse PWA report

Lighthouse is a great tool to improve your web applications, not just for PWA purposes. Make sure to check
it out!

Android/iPhone simulators
Whenever the focus of a PWA is mobile devices, testing them on a simulator can be of great help during
development and/or debugging.

Be aware that the iPhone simulator does not support WebAssembly, so the Blazor WebAssembly example would
not work.

For example, let's test how our Angular example project behaves in the iPhone simulator. Once we have the application running in Release mode on a Mac, we then fire up the simulator and navigate to the app:



Figure 20, testing the Angular example in the iPhone simulator

You can then try to install the application, which will add it to the list of applications:

Figure 21, installing the Angular example in the iPhone simulator


As in the case of Lighthouse, when testing the PWA functionality, you will want to test the published
version of your application. That’s not to say the simulator cannot be useful during development to test
how your app behaves on a phone.

ngrok
While simulators can be really handy, testing on a real device is much better. ngrok is a tool that lets you expose a site running on your laptop securely over the Internet.

This means you can get your site running on localhost exposed with an HTTPS URL that you can then test
on a real device. In certain situations, this can be a life saver!

Remember that one of the main requirements for PWA is the usage of HTTPS. Using ngrok is a great way to
get an HTTPS tunnel to your localhost site that can be trusted by real devices and/or simulators.

You can sign up for a free account, and follow the instructions to download and set up ngrok on your machine. Once it is installed and configured, let's see what we can do.

Start the published version of the Blazor example on your local machine, which by default will be running at https://localhost:5001. Then run the following ngrok command:

# windows
ngrok.exe http https://localhost:5001
# mac/unix
ngrok http https://localhost:5001

This will create a tunnel and expose your localhost site over the Internet:

Figure 22, exposing a site running in localhost over Internet with ngrok

You can then open the site on any device: your laptop, a simulator, or a real phone. As long as the device has access to the Internet, it will be able to reach the established tunnel!



For example, Figures 23 and 24 show the Blazor application on my phone:

Figure 23, loading the Blazor example on a real phone with ngrok

Figure 24, installing the Blazor example on a real phone with ngrok


I have found ngrok a brilliant tool to be aware of. For more information, check their official documentation.

Limitations and gotchas of PWAs


As with every other technique, PWAs work better in certain contexts than others. It is good to be aware of certain limitations and gotchas inherent to PWAs, independently of the framework you use to build them.

I will briefly go through some of the most important ones, but if you are seriously considering PWAs, it will be worth spending some time doing your own research on these topics.

They are not native applications

This might seem obvious, but after all, PWAs are not native applications. While they offer a similar experience, this might not be close enough for your use case or it might have too many drawbacks. For example:

• PWAs are not distributed through app stores. You lose certain benefits that you normally get from simply being in the app stores - like discoverability, in-app purchases or insights.

• You don't have access to the same functionality that native applications do. You can only use whatever functionality is available in the latest browser APIs and, while that is plenty, there are capabilities, like custom URL schemes or access to certain elements of the hardware, that you won't have.

• Performance and battery consumption differ when compared to native applications.

• Android, iOS and Windows offer different support levels for PWAs, with Google and Android being particular champions of them.

Offline mode requires careful consideration

Being able to add offline support via service workers is great, and works extremely well in the case of static
sites such as blogs or documentation.

But what about more interactive sites?

One simple approach could be that your site becomes read-only until the user is back online. If your site is
mostly static data, or interactions need to happen in a timely manner, this might be a decent approach that
saves you a lot of effort.

A harder alternative is to consider the usage of local storage such as IndexedDB to store the data while offline and sync with the server once the user is back online. This opens a whole new set of problems and user flows that you need to consider.

• What if there is a data conflict once the data is sent to the server?

• How about data that does not make sense after certain time elapses?



• Is your system designed to correctly interpret a sudden rush of changes from the same user? For
example, how would that data be merged alongside a data stream window that you have already
processed?

Of course, you can consider a mix and match approach, and allow certain changes to happen offline while
others become unavailable. In any case, offline work does not happen for free and needs to be carefully
considered.
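To make the second approach a little more tangible, here is a rough sketch that queues changes in IndexedDB while offline and pushes them to the server when the browser raises the online event. All names and the /api/changes endpoint are hypothetical, and conflict handling is left out entirely:

const DB_NAME = 'offline-queue';
const STORE = 'pending-changes';

// Open (or create) the IndexedDB database holding the queued changes.
function openDb() {
  return new Promise((resolve, reject) => {
    const request = indexedDB.open(DB_NAME, 1);
    request.onupgradeneeded = () =>
      request.result.createObjectStore(STORE, { autoIncrement: true });
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

// Store a change locally while the user is offline.
async function queueChange(change) {
  const db = await openDb();
  return new Promise((resolve, reject) => {
    const tx = db.transaction(STORE, 'readwrite');
    tx.objectStore(STORE).add(change);
    tx.oncomplete = resolve;
    tx.onerror = () => reject(tx.error);
  });
}

// Read every queued change in a single transaction.
async function readQueuedChanges() {
  const db = await openDb();
  return new Promise((resolve, reject) => {
    const request = db.transaction(STORE).objectStore(STORE).getAll();
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

// Push the queued changes to a hypothetical API and clear the local queue.
async function flushQueue() {
  const pending = await readQueuedChanges();
  for (const change of pending) {
    await fetch('/api/changes', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(change)
    });
  }
  const db = await openDb();
  db.transaction(STORE, 'readwrite').objectStore(STORE).clear();
}

// Sync whenever connectivity returns.
window.addEventListener('online', flushQueue);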

Caching is hard

As seen during the article, pretty much every major web application framework provides some form of PWA
support that lets you add a service worker.

This is great!

You then have a service worker that caches your HTML/JS/CSS files so your application can start and
function while offline.

Let’s leave the caching of data aside, which we briefly discussed over the previous point. Consider only the
caching of HTML/JS/CSS files. Now you need to be careful with invalidating that cache and updating those
files!

That is likely going to need a server that can tell you what the latest versions are, and some process that
requests the latest versions and reconciles with what you have installed (as we briefly saw Blazor doing).

Combined with offline mode, you might need to assume that there can be multiple versions of your
application in the wild. This is significantly more challenging than a traditional web application where you
can easily ensure that everyone uses the latest version of your website!
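A common building block here, regardless of the framework, is to version the cache and clean up stale versions in the service worker's activate event, so that a new deployment replaces the files cached by the previous one. A minimal sketch (with a hypothetical cache name) looks like this:

// Bump this identifier whenever the deployed assets change.
const CACHE_VERSION = 'my-app-v2';

self.addEventListener('activate', event => {
  // Delete any caches left behind by previous versions of the service worker.
  event.waitUntil(
    caches.keys().then(keys =>
      Promise.all(
        keys.filter(key => key !== CACHE_VERSION).map(key => caches.delete(key))
      )
    )
  );
});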

Development experience

Developing a PWA can be more challenging than developing a plain web application. While testing with device simulators or real devices is a good practice even when developing web applications, a PWA introduces further challenges.

In addition, the need for HTTPS and the fact that many frameworks don't even enable the service worker during development introduce additional friction in the development experience.

Conclusion

Progressive Web Applications (PWA) have become a very interesting choice for web developers. You can
leverage most of the same skills and tools you are used to, in order to provide a native-like experience
across desktop and mobile.

That is not to say every web application should become a PWA!


Certain applications will benefit more from the capabilities offered by PWAs than others. There will be
teams who might not find it worth dealing with the extra challenges they bring, so they might not want
to convert their application into a PWA. Or they realize a PWA doesn’t yet provide the functionality or
experience they need and will prefer to stick with native applications.

However, for those who adopt them, there is no shortage of tooling and support.

Most of the popular frameworks for writing web applications now offer support for PWAs, making developers' lives easier. Teams might even partially embrace PWAs, for example making their application installable while still requiring an online connection.

Like many other things in technology, the only right answer is "it depends".

Download the entire source code from GitHub at


github.com/DaniJG/pwa-examples

Daniel Jimenez Garcia


Author
Daniel Jimenez Garcia is a passionate software developer with 10+ years of experience who likes to share his knowledge and has been publishing articles since 2016. He started his career as a Microsoft developer focused mainly on .NET, C# and SQL Server. In the latter half of his career he has worked on a broader set of technologies and platforms, with a special interest in .NET Core, Node.js, Vue, Python, Docker and Kubernetes. You can check out his repos.

Technical Review: Damir Arh
Editorial Review: Suprotim Agarwal

ARCHITECTURE

Damir Arh

Getting started with Application Architecture

This article is an introduction to application architecture for the Microsoft technology stack.

What is Application Architecture?


It’s difficult to find an exact definition for application architecture.

One could argue that it’s a subset of software architecture that primarily
focuses on individual applications in contrast to, for example, enterprise
architecture, which encompasses all the software inside a company including
the interactions between different applications.

Application architecture is responsible for the high-level structure of an application, which serves as a guideline or a blueprint for all the development work on that application.

Often people compare it to building architecture. However, there is a very important difference between the two.

Unlike buildings, software changes a lot and is never really 'done'. This is reflected in its architecture, which also needs to constantly adapt to changing requirements and further development.



An application architecture should always be based on the requirements as specified by the stakeholders
(business owners, users, etc.).

It's important that this doesn't include only functional requirements (i.e. what the application needs to do), but also the non-functional ones. The latter are concerned with application performance, reliability, maintainability and other aspects that don't directly affect its functionality but have an impact on how the application is experienced by its users, developers and other stakeholders.

Editorial Note: Also read Software is Not a Building.

Aspects of application architecture

Architectural decisions are made at different levels.

Figure 1: Architectural decisions at different levels

Typically, the application architecture starts with the choice of the type of application to develop.

Is it going to be a web application, or will it run locally on a device (mobile or desktop)?

The choice will depend on multiple factors:

• Will the application be used on a computer, on a mobile phone, or both?

• Will the users always have internet connectivity, or will they use it in offline mode as well?

• Does the application take advantage of, or even require, the use of services and sensors available on devices, such as notifications, geolocation, camera, etc.?

• …and so on.

Closely related to the choice of application type is the choice of technology. Even if you've already decided on the .NET technology stack, there are still a lot of choices for all types of applications: desktop, mobile, and web.

You can read more about them in my previous articles for DNC Magazine:

• Developing Desktop Applications in .NET

• Developing Mobile Applications in .NET

• Developing Web Applications in .NET

• Developing Cloud Applications in .NET

Once you have decided on the application type and the technology stack, it’s time to look at the
architectural patterns.

Architectural Patterns

They have a role similar to design patterns, only that they are applicable at a higher level. They are proven
reusable solutions to common scenarios in software architecture.

Here’s a selection of well-known architectural patterns:

• In Multitier architecture, applications are structured in multiple tiers which have separate
responsibilities and are usually also physically separated.

Probably the most common application of this architectural pattern is the three-tier architecture that
consists of a data access tier (including data storage), a business tier (implementing the business logic)
and a presentation tier (i.e. the application user interface).

• In Service-Oriented architecture (SOA), application components are implemented as independent standalone services that communicate over a standardized communication protocol.

A standardized service contract describes the functionality exposed by each service. This allows loose
coupling between the services and high level of their reusability. When the architecture pattern was
first introduced, the most commonly used communication protocol was SOAP.

• Microservices are a more modern subset or an evolution of the Service-Oriented architecture (SOA).

As the name implies, they are usually more lightweight, including the communication protocols which
are most often REST and gRPC. Each service can use whatever technology works best for it, but they
should all be independently deployable with a high level of automation.

• In Event-driven architecture, there’s even more emphasis on loose coupling and asynchronous
communication between the components. Although it’s not a requirement, the system can still be
distributed, i.e. it can consist of independent services (or microservices).

The main distinguishing factor is that the components don't communicate with imperative calls. Instead, all the communication takes place using events (or messages) which are posted by components and then listened to by other components that are interested in them.

The component posting the event doesn't receive any direct response to it. However, it might receive an indirect response by listening to a different message. This makes the communication more asynchronous when compared with the other patterns.

Within a selected high-level architectural pattern, there are still many lower level architectural decisions to be made, which specify in even more detail how the application code will be structured.

There are further patterns available for these decisions, such as the Model-View-Controller (MVC) pattern
for web applications and the Model-View-Viewmodel (MVVM) pattern for desktop applications.

As the scope of these patterns becomes even smaller, we sometimes name them design patterns instead of
architectural patterns. Examples of those are dependency injection (read more about it in Yacoub Massad’s
article Clean Composition Roots with Pure Dependency Injection), singleton (read more about it in my
article Singleton in C# – Pattern or Anti-pattern), and others.

After the initial application architecture is defined, it’s necessary to evaluate how it satisfies the list of
requirements it’s based on. This is important because in most cases there is no single correct choice for the
given requirements.

There are always compromises and every choice has its own set of advantages and disadvantages. A certain
architectural choice could, for example, improve the application performance, but also increase operating
costs and reduce maintainability. Depending on the relative importance of the affected requirements, such
an architectural choice might make sense or not.

Figure 2: Continuous evolution of application architecture

The application architecture is never set in stone.

It needs to evolve as the application is being developed and new knowledge and experience is gathered.

If a certain pattern doesn’t work as expected in the given circumstances, it should be reevaluated and
considered for a change. Requirements (functional and non-functional) might change and affect previous
architectural decisions as well.

Of course, some architectural patterns are easier to change than others. For example, it’s much easier to
introduce dependency injection into an application or change its implementation than to change the
application type or core architectural pattern such as MVC.


Official resources for .NET developers
If you’re focusing on the .NET and Azure technology stack, Microsoft offers a lot of official resources to help
you get started with architecting applications.

The .NET Architecture Guides website is probably the best starting point. The resources on the site are
organized based on the type of application you want to develop. After you select one (e.g. ASP.NET apps
or UWP desktop apps), you get a list of free e-books to download that are applicable to the selected
application type. The same e-book might be listed under multiple application types if the guidance it
contains applies to more than one.

Figure 3: Entry page of the .NET Architecture Guides website

In general, the e-books follow a similar approach and do a good job at introducing the reader to the topic
they cover. Typically, they include the following:
• An introduction to the technologies involved.

• An overview of available application sub-types (e.g. MVC vs. single page web applications) and the
reasoning behind choosing one.

• A list of common architectural patterns for the application type with explanation.

• A tutorial covering the key parts of an application including sample code.

The books primarily target readers with no prior experience in the subject they cover. Their content is suitable both for software architects and developers. They might still be a valuable read even if you have some prior knowledge. In that case, you might skip some parts of the book or skim over them, but nevertheless you could still pick up a few things or get a more thorough overview of the topic.

Many books are accompanied by working sample applications with full source code available on GitHub.
These showcase many of the concepts explained in the books and are regularly updated to the latest
versions of technologies they use. Often the samples are also updated with features introduced in new
versions of those technologies. It’s a good idea to look at the code before starting a new similar project or
implementing a pattern that’s featured in a sample.

Azure Architecture Center is another good resource.

As the name implies, it’s mostly focused on applications being hosted in Microsoft Azure, but many of the
patterns are also useful in other scenarios. In my opinion, the most useful part of this site is the Cloud Design Patterns section. It consists of a rather large list of architectural patterns, grouped into categories based
on their intended use. Similar to how software design patterns are usually documented, each pattern has a
description of the problem it addresses and the solution it proposes. Many include sample code and links to
related patterns.

While this might not be enough information on its own to fully implement a pattern from scratch,
it’s a good basis to compare and evaluate different architectural patterns in the context of your own
requirements.

Conclusion

In this article, I explained the concept of application architecture and provided some guidelines on how
to approach it. I described the importance of architecture evaluation and evolution. I included links to
Microsoft’s official resources about application architecture and suggested how they can be used effectively.

This is the first article in the series about application architecture. In the following articles, I will focus on a
selection of architectural patterns for different application types.

Check Page 174 of this magazine where I talk about Architecture of Desktop and Mobile Applications in
.NET.

Damir Arh
Author
Damir Arh has many years of experience with Microsoft development tools;
both in complex enterprise software projects and modern cross-platform
mobile applications. In his drive towards better development processes,
he is a proponent of test driven development, continuous integration and
continuous deployment. He shares his knowledge by speaking at local
user groups and conferences, blogging, and answering questions on Stack
Overflow. He has been a Microsoft MVP for .NET since 2012.

Technical Review: Yacoub Massad
Editorial Review: Suprotim Agarwal
AZURE DEVOPS

Gouri Sohoni

Most organizations are aware of the importance of Source Control. However, many of them do not necessarily work with Source Control in a methodical manner. In my honest opinion, everyone working in software development will benefit if they are made aware of some best practices of Source Control. Keeping this in mind, I have listed and explained various best practices related to Version Control.

Source Control in Azure DevOps (Best Practices)
Why Source Control?

Software development requires writing code.

Once the code is written, it needs to be kept safe (so code is not deleted or corrupted) and for that, we
maintain a copy of it. Sometimes, we make fixes to code that does not work as expected. As a precaution,
the original code is kept along with the changed code. If the changed code works, we do not need the
original code, but in case it does not work, we can always use the original code to start fresh and remove
bugs.

This entire process creates many copies of our code. Even if we name or timestamp these copies, it becomes very difficult to keep track of them.

There is also a chance that our machine, on which the code is created, may crash and we may end up losing
all the code that was written. For team members, this becomes a bigger challenge if multiple developers
are creating, maintaining and working on separate copies of the code.

There should be a way in which all team members are able to collaborate and work with the same
codebase.

Enter Source Control which helps in removing all the aforementioned problems.

What is Source Control?

• Version Control is a term used interchangeably with revision control or source control. So going forward,
I will use the two terms, source control and version control, interchangeably.

• Source Control is a way to keep a common repository of source code for multiple developers in a team.
Multiple team members can work on the same code, so sharing of code becomes easier.

• Source Control helps in tracking and managing changes made to the code by different team members.
This ensures that Team members work with the latest version of code.

• A complete history of code can be viewed with Source Control. We may have to resolve conflicts when multiple developers try to change the same file. History is maintained for all the changes, including any conflict resolution. As mentioned, the terms Source Control and Version Control are used interchangeably, but Version Control also takes care of large binary files.

• With Source Control it becomes easier to keep track of the version of software which has certain bugs.
These different versions can be labelled and kept separate.


There are many tools for source control. They mainly fall into two types - centralized or distributed. Examples of centralized version control tools are:

• Azure DevOps – TFVC (Team Foundation Version Control)

• SVN - Subversion

• CVS

• SourceSafe

• ClearCase

• Perforce

Examples of Distributed version control tools:

• Azure DevOps – Git

• GitHub

• GitHub enterprise

• Bitbucket

• Mercurial

Best Practices of Source Control

Check in/Commit early and commit frequently


Ensure that the files you are working on have the latest code. You also have a rollback facility for any commit you make.

Check in/Commit logical code change


A commit should represent a single logical change. Any logical problem that is encountered and fixed needs to be committed so that other team members become aware of it.

Provide descriptive and useful messages with check ins/commits


Providing useful messages will ultimately result in understanding the code better.

Avoid checking in or committing of incomplete work


If incomplete code is checked-in or committed, there is always a chance that some team member may use
the code and build some functionality on top of it. It may result in a cascading effect for a bug or issue.

Always unit test before check in/commit


Unit testing helps in verifying that the functionality is 'actually' working as expected and is not merely free of syntax errors.

Use proper branching strategy


Branches help in team development, but if we clutter the source control with a lot of them, we may end up with more problems than solutions. It is recommended that you design a branching strategy which suits your needs (e.g., the GitFlow branching strategy).

Do not check in/commit code without review


Any developer may inadvertently end up making some mistakes which can be discovered during code
review. This will result in fewer bugs at a later stage. The compliance policies can also be checked and
applied at this stage (like naming conventions, using specific names for classes and methods etc).

Database related artefacts should also be versioned


Stored procedures and user defined functions (UDFs) change continuously and must be versioned so that they can be rolled back if required.

Source Control Management by Microsoft


Microsoft provides both types of source control management with Azure DevOps or Azure DevOps Server
2019 - Centralized as well as Distributed.

Centralized version control comes with a tool called Team Foundation Version Control (TFVC) and for
Distributed, we either have Git implemented with Azure DevOps or can even use GitHub with Azure DevOps.

Team Foundation Version Control (TFVC)

In TFVC, all the team members work with only one version of the file(s) on their machines. There is a history of
code only on the server-side. All the branches get automatically created on the server.

TFVC has two kinds of workspaces, server and local. With server workspace, the code can be checked out to
developers’ machines, but a connection to the server is required for all operations. This helps in situations
where the codebase is very large. With local workspaces, a team member can have a copy and work offline
if required. In both the cases, developers can check in code and resolve conflicts.

Git

Each developer has his/her own copy of the code to work with on their local machine. All version control operations are available on the local copy and execute quickly, as no network access is required.

The code can be committed to a shared repository when required. Branches can remain local and thus can be lightweight. We can keep a minimum number of branches on the server so as to keep it less cluttered. A local branch can be reviewed later using a Pull Request and can be merged on the server. Following is a comparison of the two:

• Code history: kept on the server (TFVC) vs. kept on each team member’s machine (Git)

• Check-in/commit: a team member needs a connection to the server for check-in (TFVC) vs. a team member can commit code locally (Git)

• Code base size: better for a large code base (TFVC) vs. better for a relatively smaller code base, but have a look at this link in case you have a big repo (Git)

• Branches: branches are heavier (TFVC) vs. branches can be very lightweight and exist only in the local repo (Git)

Just having different source control tools is not enough, we also need to know how to use them optimally.

Source Control related Features in Azure DevOps

Before we get into details of the best practices of source control in Azure DevOps, let us have a look at
various other tooling features available which will help in many ways.

Azure DevOps provides us with work item tracking. If used properly, this feature gives us traceability from requirement - to code - to test case - to the bug raised. In short, for a bug, we will be able to find out the requirement associated with it.

• To achieve this, we can break requirements down into multiple tasks

• Provide one or more test cases to test the requirements

• Create a bug associated with the test case when the test case fails

• Associate code with a work item at the time of checking in or committing code

Visual Studio provides us with an excellent UI for writing code. It also supports various test frameworks other than the MS framework.

• We can use this UI support to write unit tests

• Along with the MS Framework, developers can write unit tests with NUnit, xUnit etc.

Azure DevOps provides pipelines with which Continuous Integration (CI) and Continuous Deployment (CD)
is possible with ease.



Best Practices of Source Control in Azure DevOps

#1 It is always a good practice to associate code with work item at the time of check in/commit. This
will help in getting traceability from requirement to tasks and the tasks in turn will be associated with code.

Visual Studio provides us an interface for doing the same.

This way, when the test case is tested and a bug filed, it can be traced back to the requirement.


#2 Provide meaningful and useful comments with check in/commits

• “Fixed it” or “done” are not helpful in the long run.

• A comment should be able to help in maintaining code.

#3 Use Code Review with TFVC check-in and Pull Request with git commit

• Code Review will ensure that the code will not be checked in before reviewing it.

• Pull Request (PR) will help in committing quality code to the repository.

#4 Visual Studio provides support for various testing frameworks along with the MS framework. Use them.

#5 Use Build Triggers effectively

• With TFVC, Gated Check-in will ensure that the code which cannot be merged with new code on the
server, will not be checked in:

o As the name suggests, the code will not be directly checked in, but first a private build will
execute on the server with the code to check-in, and the already available new/latest code.

o If the private build completes without any issues, code gets checked-in.

o If the build is not successful, the developer gets a message in the IDE (like Visual Studio) that the code cannot be checked in.

o The developer takes care of the problem and tries to check in the code again.

• Using a Pull Request with Git will help in achieving the same result as a Gated check-in.

• Using Continuous Integration (CI) trigger will take care of automatically triggering the server-side
build and will also take care of Build Verification Tests (BVTs).

#6 Use Release Pipelines Triggers effectively

• With the help of Release pipeline, we can provide a Continuous Deployment (CD) trigger

o This will automatically deploy artefacts to specified environment once the build gets
completed successfully and artefacts are available.

• We can define multi stage pipelines where there can be two separate targets for deployment

o These can be virtual machines, web servers, on-premises machines, or deployment groups

o The stages can have different triggers and pre-deployment conditions for deployment. Deployment can be automated or manual (after a group or person gives the go-ahead).

o The stages can be cloned to keep similar tasks but the configuration can be different.

• The release can be associated directly with source control and multiple branches. In this scenario, we can eliminate the build artefact and directly use a source control branch to trigger the release and go ahead with deployment. A filter can be used to select a specific branch.

• A Pull Request can directly trigger release

o Use a PR as an artefact in the release pipeline


o Set up branch policy for the release pipeline

Conclusion

In this article, I wrote about the importance of Source Control, as well as different types of Source Control.

I also covered the various Best Practices of working with Source Control by using tools available in Azure
DevOps.

Gouri Sohoni
Author
Gouri is a Trainer and Consultant specializing in Microsoft Azure
DevOps. She has experience of over 20 years in training and consulting.
She is a Microsoft Most Valuable Professional (MVP) for Azure
DevOps since year 2011 and Microsoft Certified Trainer (MCT). She
is a Microsoft Certified Azure DevOps Engineer Expert and Microsoft
Certified Azure Developer Associate. She has conducted corporate
training on various Microsoft technologies. Gouri has written articles
on Azure DevOps (VSTS), Azure DevOps Server (VS-TFS) on DotNetCurry
along with DNC Magazine. Gouri is a regular speaker for Microsoft Azure
VidyaPeeth, Microsoft events including Tech-Ed as well as Conferences
arranged by Pune User Group (PUG).

Technical Review: Subodh Sohoni
Editorial Review: Suprotim Agarwal


JavaScript

Keerti Kotaru

Demonstrating Memoization via three implementations - JavaScript, Angular and React.

MEMOIZATION in JavaScript, Angular and React
Memoization (also spelled memoisation) is a caching technique for
function calls.

Many of us have written code that caches expensive database calls, network calls, I/O etc. But how about caching the return value of a function that takes time to execute?

We can use the cached value as long as the arguments to the function
calls do not change. This way we can execute the function only for new
invocation with unique arguments.

Wikipedia defines memoization as following:

In computing, memoization or memoisation is an optimization technique used primarily to speed up computer programs by storing the results of expensive function calls and returning the cached result when the same inputs occur again.
This article demonstrates Memoization in JavaScript. We begin with a barebone JavaScript implementation
of Memoization. Next, we look at usage in Redux, an application state management framework. We will
explore two implementations of Redux - in Angular and in React.

This article is an attempt to provide a holistic view of the concept across the JavaScript landscape.

Memoization in JavaScript
Memoization is a caching technique for functions. We can add behavior on top of a JavaScript function to
cache results for every unique set of input parameters.

Let’s dive into the details with a very simple example: a multiplication function.

As you can guess, there is no performance incentive for caching the result of multiplication. However, we
use this as a simplistic example in order to illustrate the concept of Memoization.

Consider the following code in Listing 1. We have a function that takes two arguments p1 and p2. It multiplies the two arguments and returns a value. To demonstrate memoization, we will write a wrapper that memoizes the function call. We can use this wrapper for a simple example (as is the case here), or for a more complex and expensive algorithm.

const multiply = memoizeAny((p1, p2) => {
  console.log("invoked original function", p1, p2);
  return p1 * p2;
});

Listing 1: Wrap multiply function for memoization

Notice the console log in Listing 1 that prints every time the multiplication function is invoked. The log doesn’t appear when the result is returned from cache.

Save our JavaScript code in a file called memoize.js, and run it using node memoize.js (Refer to the Code
Samples section at the end of the article for a link to the complete code sample).

Consider the following function calls and results.

console.log("multiply", multiply(10,30));// A new set of params. Performs


multiplication and caches the result
console.log("multiply", multiply(10,20));// A new set of params. Performs
multiplication and caches the result
console.log("multiply", multiply(10,20));/* *** RETURNS CACHED VALUE *** */
console.log("multiply", multiply(10,10));// A new set of params. Performs
multiplication and caches the result
console.log("multiply", multiply(10,30));/* *** RETURNS CACHED VALUE *** */
console.log("multiply", multiply(10,10));/* *** RETURNS CACHED VALUE *** */

Listing 2: Invoke memoized function


Figure 1: Multiply() calls to demonstrate memoization

See Figure 1. For every unique set of arguments (say 10 and 20), the multiplication function is invoked. All
repeated calls are returned from the cache.

Note: In the above sample, we are making an assumption that the function is a pure function. That is, the
return value stays the same for a set of input arguments. If there are additional dependencies, other than the
arguments, this technique does not work.

Memoization: the implementation

The following memoization wrapper is one simple way to implement memoization. However, as we will see,
it is possible to optimize this implementation further.

const memoizeAny = (func) => {
  // Use this variable memoizedKeyValues to save results.
  // Identify each result with its input.
  // If the memoized function is called with the same input, use the existing value.
  const memoizedKeyValues = [
    /* {
         args: stringified_input_parameters,
         result: result
       } */
  ];

  // Return a function. When we memoized multiply below, we called this function
  // for each invocation of multiplication.
  return (...args) => {
    // for the given input (params), check if there is a cached result
    let result = memoizedKeyValues.find(x => x.args === JSON.stringify(args));

    // YES, there is a cached result
    if (result) {
      console.log("from cache", args);
      return result.result; // return cached result
    }

    // control comes to this line only if there is no cached result
    result = func.apply(this, args); // invoke the function

    // cache the result
    memoizedKeyValues.push({
      args: JSON.stringify(args),
      result: result
    });

    // return the result
    return result;
  };
};

Listing 3: Memoization decorator

Notice, memoizeAny accepts a function as a parameter. In a way, it acts as a decorator, enhancing the
behavior of the provided function (multiply() in the sample).

The variable memoizedKeyValues maintains:

a. a key for every unique invocation of the function
b. the value, i.e. the result after the function is invoked.

The memoizeAny function returns another function. In Listing 1, the constant multiply is assigned to this function object. Notice that the arguments provided to multiply are passed to this function.

This function converts arguments to a string creating a unique key. For every repeated function call, this
value stays the same. For the first invocation of the function, we run the actual multiplication function and
store the result in cache (array variable memoizedKeyValues). For every new invocation, we can query if
the cache has a value with the given key. If it’s a repeated invocation, there will be a match. The value is
returned from cache.

Notice, the actual function is invoked by functionObject.apply(). The control reaches this statement
only if the cache doesn’t have a key for the given arguments.

Notice the console.log that appears when the return value comes from the cache (to demonstrate memoization). It logs the key - the stringified arguments. See Figure 1 for the result.
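
As mentioned earlier, this implementation can be optimized further. The array lookup with find() scans every cached entry; a minimal sketch of one possible optimization (the name memoizeFast below is ours, not part of the original sample) uses a Map keyed by the stringified arguments, giving constant-time lookups:

const memoizeFast = (func) => {
  // Map gives O(1) lookups by key, instead of scanning an array with find()
  const cache = new Map();

  return (...args) => {
    const key = JSON.stringify(args);

    // repeated invocation with the same arguments: return the cached result
    if (cache.has(key)) {
      return cache.get(key);
    }

    // first invocation: run the original function and cache the result
    const result = func.apply(this, args);
    cache.set(key, result);
    return result;
  };
};

The behavior is the same as memoizeAny(); only the data structure used for the cache changes.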

Pure function

A pure function consistently returns the same value for a given set of input arguments. Pure functions do
not have side effects which modify the state outside its scope. Imagine, the multiply function depending on
a global variable. For the sake of an example, call it a multiplication factor. The memoization logic we saw
earlier doesn’t work anymore. Consider the following code in Listing 4:

var globalFactor=10;
const multiply = memoizeAny( (p1, p2) => p1 * p2 * globalFactor);

Listing 4: Need for pure functions

The value of globalFactor could be modified by a different function in the program. Let’s say we
memoized a result 2000 for input arguments 10 and 20 (10 * 20 * 10). Say, a different function changes
the value of global factor to 5. If the next invocation returns value from cache, it’s incorrect. Hence, it is


important that the function is a pure function for memoization to work.

Note: As a work around, we may include globalFactor while generating the key. However, it will be
cumbersome (but not impossible) to make such logic generic.
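
For illustration only, such a work around could look like the sketch below. The names memoizeWithDeps and getDeps are hypothetical and not part of the earlier sample; the idea is simply that the caller supplies the external dependencies explicitly, so they become part of the cache key:

const memoizeWithDeps = (func, getDeps) => {
  const cache = new Map();

  return (...args) => {
    // the key now includes the current value of the external dependencies
    const key = JSON.stringify([args, getDeps()]);

    if (cache.has(key)) {
      return cache.get(key); // same arguments AND same dependencies
    }

    const result = func.apply(this, args);
    cache.set(key, result);
    return result;
  };
};

var globalFactor = 10;
const multiplyWithFactor = memoizeWithDeps((p1, p2) => p1 * p2 * globalFactor,
  () => [globalFactor]);

console.log(multiplyWithFactor(10, 20)); // 2000, computed and cached
globalFactor = 5;
console.log(multiplyWithFactor(10, 20)); // 1000, recomputed because the key changed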

Scope of memoizeAny()

Every invocation of memoizeAny() (in Listing 4) creates a separate instance of the returned function. The
array used for cache (variable memoizedKeyValues) is local to each instance of the function. Hence, we
have separate function objects and cache objects for two different memoized functions. Consider add()
and multiply() functions memoized. If the same set of arguments are passed to add and multiply, they
do not interfere.

In Listing 5, multiply(10,20) is cached separate from add(10,20). The result 200 for the former does
not interfere with result 30 for the latter.

const multiply = memoizeAny(function(p1, p2) {
  console.log("invoked original function", p1, p2);
  return p1 * p2;
});

const sum = memoizeAny((p1, p2) => {
  console.log("invoked original function", p1, p2);
  return p1 + p2;
});

console.log("multiply", multiply(10, 30)); // A new set of params. Performs multiplication and caches the result
console.log("multiply", multiply(10, 20)); // A new set of params. Performs multiplication and caches the result
console.log("sum", sum(10, 20)); // A new set of params (for sum). Performs addition and caches the result
console.log("sum", sum(10, 20)); /* *** RETURNS CACHED VALUE *** */
console.log("multiply", multiply(10, 20)); /* *** RETURNS CACHED VALUE *** */
console.log("multiply", multiply(10, 10)); // A new set of params. Performs multiplication and caches the result
console.log("sum", sum(10, 10)); // A new set of params (for sum). Performs addition and caches the result
console.log("multiply", multiply(10, 30)); /* *** RETURNS CACHED VALUE *** */
console.log("multiply", multiply(10, 10)); /* *** RETURNS CACHED VALUE *** */
console.log("sum", sum(10, 10)); /* *** RETURNS CACHED VALUE *** */

Listing 5: Separate instance for each invocation of memoizeAny()

However, if we call memoizeAny() on multiply repeatedly, they create separate instances as well. It might
result in an unexpected behavior. One way to solve this problem would be to create a factory function
which creates and returns a singleton instance of a memoized function. We may compare function object
passed-in as an argument to determine if there is a singleton memoized function available, already. If not,
wrap with memoizeAny().
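
A minimal sketch of such a factory (the names memoizeOnce and memoizedInstances are illustrative, not part of the sample) could keep a Map from the original function object to its memoized wrapper. Repeated calls with the same function object then return the same wrapper and, therefore, share the same cache - note that this only works if the caller passes the same function object each time, not a freshly created inline function:

const memoizedInstances = new Map(); // original function -> memoized wrapper

const memoizeOnce = (func) => {
  // reuse the existing wrapper (and its cache) if func was already memoized
  if (memoizedInstances.has(func)) {
    return memoizedInstances.get(func);
  }

  const wrapper = memoizeAny(func); // wrap only once
  memoizedInstances.set(func, wrapper);
  return wrapper;
};

// usage: both calls return the same singleton instance
const multiplyFn = (p1, p2) => p1 * p2;
const m1 = memoizeOnce(multiplyFn);
const m2 = memoizeOnce(multiplyFn);
console.log(m1 === m2); // true - same instance, shared cache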

Redux
Redux is an application state management framework. In this article, we will discuss the pattern and two
popular implementations.



a. NgRx for Angular applications
b. Redux and Reselect libraries for React applications

In the context of memoization, consider Selectors (reselect in React). It is one piece of Redux
implementation. Let’s consider a sample Todo application for ease of understanding the use case.

Say, it provides three features.


a. Create a todo
b. Mark a todo complete
c. Filter todos based on their completed status.

See Figure 2 for a redux implementation of the same.

Figure 2: Redux data flow in a Todo application

Notice a component CreateTodo for creating todos. Another component TodoList for listing, filtering and
marking the todos as complete. The components dispatch actions, which invoke the pure functions called
reducers, which return application level state.

Memoized selectors are used before returning the state to the component.

A typical redux store maintains a large centralized application state object. If busy components continuously
select (retrieve) data from the redux store, it can quickly become a bottle-neck. Depending on the size of the
application, retrieving data from the store could be a costly operation.


It is a good use case for memoization. For repeated retrieval of state, components use memoized selectors.
As long as state is not modified, arguments (input) for the selectors stay the same (see Figure 2 for input/
output to the selector).

Hence, the selector can return cached results to the components.

Angular Implementation
Let’s begin by visiting an Angular implementation of selectors and memoization with NgRx. If you are
interested in React, jump to the next section.

This section describes the memoization implementation with NgRx selectors.

Components make use of data returned by the NgRx selectors. They receive data from the NgRx store a few times; cached results are returned at other times. This section aims to demonstrate the same and explain how the optimization is achieved.

Selectors in NgRx are a use case for memoization. While the JavaScript section we saw earlier was
theoretical with basic examples like multiplication and addition, the following code sample is realistic for
caching results of a function in JavaScript.

Consider the following Angular folder structure. Refer to Code Samples section at the end of the article for
links to the complete code sample.

src/app/ngrx/todo.reducer.ts - Todo feature specific reducer function. Returns todo state.


src/app/ngrx/todo.actions.ts - Todo actions to retrieve todo state, toggle a todo complete, create a new
todo.
src/app/ngrx/todo.selector - Retrieve all todos or active/incomplete todos.
src/app/create-todo/ (folder) - Component to create a todo
src/app/todo-list/ (folder) - Component to list todos, filter and mark a todo complete.

In Figure 3, notice the CreateTodo component for creating todos. Alongside, notice a TodoList component
for listing, filtering and marking todos complete.

Figure 3: Angular Todo Sample Application



Now look at Listing 6 for memoized selectors. The createSelector API creates the two selectors
getActiveTodos and getAllTodos. It’s imported from the module ‘@ngrx/store’.

export const getActiveTodos = createSelector(
  TODOS,
  state => {
    console.log("%cInvoked getActiveTodos()", "background: skyblue; color: purple; font-size: 14pt");
    return state.filter(item => item.isComplete === false);
  }
);

export const getAllTodos = createSelector(
  TODOS,
  state => {
    console.log("%cInvoked getAllTodos()", "background: lightgreen; color: purple; font-size: 14pt");
    return state;
  }
);

Listing 6: NgRx Selectors

These selectors are invoked from the todo list component. If you look at Listing 7, the function
onShowAllClick() is invoked on toggling “Include completed tasks”. Depending on the toggle switch
status, it either uses the getAllTodos() selector or the getActiveTodos() selector.

onShowAllClick() {
  // Toggle show all switch
  this.isAllSelected = !this.isAllSelected;

  if (this.isAllSelected) {
    this.todos$ = this.store
      .pipe(
        select(selectors.getAllTodos)
      );
  } else {
    this.todos$ = this.store
      .pipe(
        select(selectors.getActiveTodos),
      );
  }
}

Listing 7: Component making use of the selectors

Notice the console.log statements in Listing 6. They print a log every time the selector is invoked, which is when state is retrieved from the application store. In case the state - the input argument for the selector - doesn’t change, the memoized result is returned.

Say the user creates a new todo: the state is updated and the selector function is invoked; the result is then cached again until the state/argument changes.

As we can see in Figure 4, toggling “include complete tasks” repeatedly uses cached results. It doesn’t
retrieve state from the NgRx store (Redux store). Using the memoized results avoids querying a large NgRx
store every time components request data.


And hence, using cached results is effective. The performance gains could vary based on the application size, the NgRx store size and even the efficiency of the browser and the client machine.

Figure 4: Console log on invoking the selector.

Follow the link for a complete Angular - NgRx code sample, https://github.com/kvkirthy/todo-samples/tree/
master/memoization-demo

React Implementation
In this section, we cover another example of a Redux implementation - specifically, the React
implementation. It describes how memoization works with Reselect.

Reselect is a selector library, primarily for Redux. The library and the pattern aim to simplify working with the Redux store.

Components consume application state. The state (or data) is created by components; for example, forms in
components. It is also displayed and used in presentation logic etc. The selector functions calculate state as
needed by the components. In addition to memoization, selectors can be composed. A selector can be used
as an input to another selector.

This section aims to demonstrate memoization with reselect.

Redux applications access a large application store quite often. Without memoization, any change to the state tree - be it in an area relevant to the component or not - leads to the Redux store being accessed, and expensive filtering and state calculations can occur.

Reselect’s memoization optimizes the same by accessing the store only when state being requested is
updated. Otherwise, cached results are returned.

Consider the following structure for a React project with Redux and Reselect. The sample demonstrates
memoization.

App.js - root component of the application


store.js - combines all reducers in the application to create a store

Consider the todo application in the React project:



CreateTodo.js - a component that allows the user to type-in a todo text and click create.

Todo.js - It lists all todos, allowing filtering in/out completed todos. In the code sample, CreateTodo is a child
component of the Todo component.

todoSlice.js - It creates a slice that encapsulates initial state of the todos, reducers and actions. In the
sample application, this file includes selectors as well.

We focus on memoization with reselect. To demonstrate the feature, consider the following Todo sample.
The application provides three functionalities:
a. Create a todo
b. Mark a todo complete
c. Filter todos based on their completed status.

Figure 5: React Todo Sample Application

Notice the buttons (on top of the screen) - Include completed todos and Exclude completed todos. Clicking on
the former shows all todos, even the completed items; and the latter excludes the completed items.

To create a memoized selector, use the createSelector API in the reselect module as shown in Listing 8.

// a selector function returns todos in the application state.
export const getAllTodos = state => state.todo.todos;

/* a selector function returns state indicating if completed todos are to be included
   or excluded from the result */
export const showActiveTodos = state => state.todo.shouldIncludeComplete;

// The memoized selector
export const getTodos = createSelector([getAllTodos, showActiveTodos], (todos, showCompletedTodos) => {
  return showCompletedTodos ? todos : todos.filter(item => !item.isComplete);
});
Listing 8: Selectors

The selector getTodos is memoized and composed using two other selector functions getAllTodos and
showActiveTodos. The selector function getAllTodos returns todos, which is only a part of the state tree.
Typically, each feature is represented by a separate object in the Redux store. The application may have


many other features and the application state is expected to include additional objects. The getTodos
selector focuses only on todos.

The getTodos selector has a condition to return all todos including completed items or just the active
items. When the selector is invoked the first time, it accesses the state tree and returns the data from the
store. As application state is expected to be a large object, it could be a costly operation to repeatedly
access the redux store.

When memoized, for repeated calls, the store is not accessed till the arguments change. Look at Figure 5. If the user clicks either of the filter buttons (“Include completed todos” or “Exclude completed todos”), the cached (or memoized) result is returned. Let’s add a console log to demonstrate invocation of the selector function in Listing 9.

export const getTodos = createSelector([getAllTodos, showActiveTodos], (todos, showCompletedTodos) => {
  console.log(`%cInvoked selector. Show Completed Todos: ${showCompletedTodos ? 'YES' : 'NO'}`,
    "background: skyblue; color: purple; font-size: 14pt");
  return showCompletedTodos ? todos : todos.filter(item => !item.isComplete);
});

Listing 9: Selector with a console log statement

Notice Figure 6. During the initial page load, the selector function is invoked. The argument,
showActiveTodos is defaulted to false. I clicked on “Include completed todos”. It invokes an action that
sets state showActiveTodos to true. Notice, this state value is one of the arguments to the selector,
getTodos().

As there is a change in argument value, the selector is invoked the next time. To demonstrate that the value
is returned from cache when arguments don’t change, I continuously clicked the button “include complete
todos” many times. The console log does not print anymore. It does not access the state tree, till the
selector function arguments change.

As described in the earlier section, a memoized selector avoids recalculating state every time a change occurs in an unrelated section of a large Redux store. It optimizes a React-Redux application by invoking the selector only when the relevant part of the state tree (accessed by the selector) is updated. At other times, cached results are used.

Figure-6: Console log on the React application

Consider the following code snippet for Todo list component. Follow this link for a complete React, Redux
and Reselect sample - https://github.com/kvkirthy/todo-samples/tree/master/react-demo



// Removed code for brevity
export function Todo() {
  // useSelector() hook for using the Redux-reselect selector.
  let todos = useSelector(getTodos);

  return (<div className={styles.padded}>
    {/* Removed code for brevity */}
    {todos.map(element => renderTodo(element))}
  </div>);
}

Code Samples

Memoization JavaScript code sample- https://github.com/kvkirthy/todo-samples/blob/master/memoization-


demo/memoize.js

Angular NgRx code sample- https://github.com/kvkirthy/todo-samples/tree/master/memoization-demo

React- Redux and Reselect sample- https://github.com/kvkirthy/todo-samples/tree/master/react-demo

Resources
Wikipedia article on Memoization, https://en.wikipedia.org/wiki/Memoization
Angular NgRx documentation, https://ngrx.io/docs
Getting Started with Redux, https://redux.js.org/introduction/getting-started
Reselect (React) GitHub page, https://github.com/reduxjs/reselect

My past articles on memoization,


Part-1: Memoization
Part-2: Memoization with Selectors in NgRx

Download the entire source code from GitHub at


github.com/kvkirthy/todo-samples

Keerti Kotaru
Author
V Keerti Kotaru is an author and a blogger. He has authored two
books on Angular and Material Design. He was a Microsoft MVP
(2016 - 2019) and a frequent contributor to the developer community.
Subscribe to V Keerti Kotaru's thoughts at http://twitter.com/
keertikotaru. Checkout his past blogs, books and contributions at
http://kvkirthy.github.io/showcase.

Technical Review: Benjamin Jakobus
Editorial Review: Suprotim Agarwal

CLOUD

Klaus Haller

The End of
Innovation outside
the Cloud?
Why PaaS will shape our future.
Only one or two of the articles I read per year are real eye-openers. My favorite one in 2019 was Cusumano’s “The Cloud as an Innovation Platform for Software Development” [Cus19].

I realized: The cloud unlocks a gigantic innovation potential for the business if used effectively by
developers.

Cloud computing builds on three main pillars:

• Infrastructure as a Service (IaaS)

• Platform as a Service (PaaS)

• Software as a Service (SaaS)

Best known are IaaS and SaaS.

Everyone has already been using SaaS for years: Google Docs, Salesforce, Hotmail etc. They are convenient
for users – and for the IT department. SaaS takes away the burden of installing, maintaining, and upgrading
software.

IaaS is thriving as well. Many IT departments have migrated some or all their servers to the cloud. They
rent computing and storage capacity from a cloud vendor in the cloud provider’s data center. Thus, IT
departments do not have to buy, install, and run servers anymore – or care about data center buildings and
electrical installations.

Companies and IT departments benefit from IaaS and SaaS in many ways. Costs shrink. There are no
big upfront investments anymore. They get immediate and unlimited scalability, high reliability, or any
combination of that. This is convenient for developers, engineers, and IT managers.

However, PaaS – Platform as a Service – and not IaaS and SaaS, are the drivers for business innovations.

IaaS and SaaS revolutionize the IT service delivery. You save time because you do not have to wait for
hardware. You save money because you do not have to install software updates.

If your CIO is a fast mover, he or she has some competitive advantages for one, two, or three years before all your competitors are on the same cost level. However, IaaS and SaaS do not enable developers to build revolutionary new features, services, and products for their customers.

Only PaaS (Platform as a Service), provides this opportunity.

With PaaS, software developers can assemble and run applications and application landscape, using
fundamental services such as databases, data pipes, or tools for development, integration, and deployment.

[Cus19] M. Cusumano: The Cloud as an Innovation Platform for Software Development, Communications of the ACM, Oct 2019

The real game changer is the massive investment of big cloud service providers into Artificial Intelligence
(AI) services - from computer vision to text extraction and into making such AI technology usable by
developers without an AI background.

Furthermore, they have exotic, obscure offerings such as a ground station for satellite data or services for building augmented reality applications.

This is the real power of PaaS.

Sadly, I do not see how I can get involved in a project using satellite data and augmented reality at the same time (or at least one of these technologies). However, a broad variety of niche features enables developers to build innovative products and services for their customers which were unthinkable a few years ago. The cloud providers foster this even further by allowing 3rd party providers to make their software available, hoping for a network effect - as seen earlier in the form of the App Store or the Play Store.

Certainly, 3rd party services require a second look. Many are easy to integrate and do not need maintenance afterwards – and then there is what I call Potemkin integration: there is a façade on the marketplace with easy installation (e.g., using images). Afterwards, however, you have the same amount of work as for any on-premises open source or standard software installation. You need a dedicated operations team and a detailed technical understanding of how things work. Obviously, this is less of an issue for services developed and offered by the cloud vendors themselves.

The beauty of the (niche) cloud services and PaaS offerings is that developers and companies can integrate
the newest research and technology in their products without needing research labs or dozens of Ph.D.s in
A.I. and computer vision.

For example, using AWS’s Rekognition service, developers need as much time for writing a printf
command as for programming a service call to determine whether a person smiles on a picture and
whether there are cars on the same image.

I want to illustrate this power of PaaS with a short example.

Assume your company wants to build a revolutionary app for photographers. The app identifies whether
there are celebrities on the picture and generates an automatic description and pushes these pictures to
readers interested in a specific celebrity.

As a developer, you have four work packages:

1. User interface for journalists, photographers, and readers

2. Integration with photo feeds and for push messages to the readers

3. AI solution to detect celebrities, how their mood is, and other objects on the photos

4. Annotation feature that generates a one sentence description to the photo based on what the AI
detected (e.g., A stressed Prince Harry and a stoic Queen with a dog)



So, how do developers benefit from IaaS, PaaS, and SaaS in such a project?

IaaS can speed up the time you need to get a virtual machine. That is nice at the beginning of the project,
but not a game changer for development.

SaaS does also not help much. If there is a suitable solution in the market, there is no need for your project.
Nobody invests money to get on a level everyone else in the industry already is.

PaaS, however, has a big influence on the development project. It solves work package #3 for you.

You do not have to train neural networks for object and person detection. Especially, you do not have to collect and annotate thousands of pictures of celebrities for your training data set. You just use AWS Rekognition. You can focus on developing your real innovation (work package #4), including the overall idea of the app (work packages #1 and #2). PaaS does not always provide such perfect services as in this illustrative example, but the more AI your application contains, the more development time PaaS saves you.

Editorial Note: Similar to AWS Rekognition, Microsoft provides Cognitive Services using which you can extract information from images to categorize visual data, process it, detect and analyze faces, and recognize emotions in photos. There’s also a new ML.NET offering that can be explored in this tutorial.

PaaS Strategy
Certainly, when looking at the cloud vendors’ strategies, IT departments should not be too naïve. They need
a basic PaaS strategy to avoid bad surprises.

First, cloud services are not free. Business cases are still needed, especially if you are in a low-margin
market and need to call many expensive cloud services.

Second, the market power changes. Some companies pressed their smaller IT suppliers for discounts every
year. Cloud vendors play in a different league.

Third, using specific niche services – the exotic ones which really help you to design unique products and
services to beat your competitors – result in a cloud vendor lock-in.

The cloud-vendor lock-in for platform as a service cannot be avoided. However, a simple architectural
measure reduces potential negative impacts: separating solutions and components based on their expected
lifetime.

“Boring” backend components often run for decades. It is important that they can be moved to a different
cloud vendor with limited effort. In contrast, companies develop new mobile and web-front-ends every
2-3 years – plus every time the head of marketing changes. Highly innovative services and products are
reworked and extended frequently.


In the area of shorter-lived components, applications, and solutions, the vendor lock-in is of no concern.
You change the technology platform every few years. Thus, you just switch the cloud vendor next time
if someone else offers more innovative solutions. It becomes even less of a threat if you consider the alternative: being attacked by more innovative competitors using the power of PaaS.

Conclusion

My rule of thumb is: you might be able to miss out on the cost-savings the cloud can bring with IaaS and SaaS, but your business will be hit hard if the IT department cannot co-innovate with the business by delivering innovations using PaaS, with its many ready-to-use and highly innovative services.

Klaus Haller
Author
Klaus Haller is a Senior IT Project Manager with in-depth business
analysis, solution architecture, and consulting know-how. His experience
covers Data Management, Analytics & AI, Information Security and
Compliance, and Test Management. He enjoys applying his analytical skills
and technical creativity to deliver solutions for complex projects with high
levels of uncertainty. Typically, he manages projects consisting of 5-10
engineers. You can connect with him on LinkedIn.

Technical Review: Subodh Sohoni
Editorial Review: Suprotim Agarwal



Machine Learning

Benjamin Jakobus

Machine Learning
for Everybody

"The unreal is more powerful than the real, because nothing is as perfect as you can
imagine it. because it’s only intangible ideas, concepts, beliefs, fantasies that last. Stone
crumbles. Wood rots. People, well, they die. But things as fragile as a thought, a dream,
a legend, they can go on and on.”

- Chuck Palahniuk



Introduction
Like a giant nervous system, computers lie at the centre of human civilization, driving our very lives.

No single technology has so affected the modern world as the digital computer. No invention conjured such
endless possibilities.

If you go to the buzzing business district of Rio de Janeiro, the crowded terminals at Heathrow or the
immense port at Rotterdam, you will have a vague idea of the scale to which computers influence our
world. Cars, planes, trains, subway systems, border controls, banks, stock markets and even the records of our
very existence, are controlled by computers.

If you were born after 1990, then chances are that most things you ever experienced, used or owned are a
mere fabrication of computational omnipotence.

The production, extraction, and distribution of oil, gas, coal, spices, sugar or wheat all depend, in one way or
the other, on computational power. The silicon chip migrated into nearly every home, monitoring our time,
entertaining our children, recording our lives, storing personal information, allowing us to keep in touch
with loved ones, monitoring our home and alarming the police in the event of unwanted intruders.

And recently, computers have begun taking over the most fundamental tasks of our brains: pattern
recognition, learning and decision making. Theories, techniques and concepts from the field of artificial
intelligence have resulted in scientists and engineers building ever more complex and “smarter” systems,
that, at times, outperform even the brightest humans.

In recent times, one subfield of artificial intelligence in particular has contributed to these massive
advances: machine learning.

Figure 1: Abstract image depicting Machine Learning


” It’s ridiculous to live 100 years and only be able to remember 30 million
bytes. You know, less than a compact disc. The human condition is really
becoming more obsolete every minute.”

- Marvin Minsky

Each subfield of artificial intelligence is concerned with representing the world in a certain way, and
then solving the problem using the techniques that this model supports. Some subfields, for example,
might model problems in graph form so that the computer can easily “search” for a solution by finding the
shortest path from one point in a graph to another.

Other techniques might represent problems, such as solving a sudoku puzzle, using a matrix of numbers
and imposing restrictions or “constraints” on some points in that matrix which the computer can not violate
(this subfield of AI is known as “constraint programming”).

A solution to a problem expressed in such a way would be a combination of numbers so that none of these
numbers violate the given constraints. Similarly, the field of machine learning looks at the world through its
special lens: that of using large amounts of data to solve problems by means of classification.

That is, machine learning is all about getting computers to learn from historical data, and classifying new,
unseen data, by using the “knowledge” gathered from having looked at this past data. Whilst this may sound
complex, the essence of it is achieved by using techniques borrowed from statistics to write programs that
are good at pattern recognition. This, in turn, is the fundamental philosophy behind machine learning, one
eloquently expressed by the fictional character Maximillian Cohen in the cult classic “Pi” when he describes
his three assumptions about the world:

1. Mathematics is the language of nature.

2. Everything around us can be represented and understood through numbers.

3. If you graph the numbers of any system, patterns emerge.

Therefore: There are patterns everywhere in nature.


Evidence: The cycling of disease epidemics; the wax and wane of caribou populations; sun spot cycles; the rise
and fall of the Nile.

Although from a philosophical point of view, looking at the world in terms of numbers and patterns may
seem rather cold-hearted, this way of thinking works well for solving the types of problems for which
i) large amounts of data are available, ii) the problem itself is best understood through patterns, and iii) the
information needed to solve a new problem in the domain constantly varies.

The first requirement - that of data - is pretty much self-explanatory. Given that machine learning is
concerned with pattern recognition, then the more data we have, the easier it becomes to identify certain
patterns in the data.

Take, for example, the problem of object recognition in images. Or more specifically, the problem of
differentiating cats from dogs in digital images: The problem itself is well suited to pattern recognition,



since i) it is relatively easy to collect a large number of digital photos of both cats and dogs, and ii) the images
themselves tend to be similar enough in order for patterns to emerge across differences.

If we only ever had a single image of each animal - one of a dog, and one of a cat - then recognizing
patterns between the two would be impossible. If, however, we had a thousand such images, the similarities
and differences across these images will become apparent, allowing them to be categorized and labelled.

Whilst the need for data is easy to understand, the second requirement - that "solving problems using pattern recognition requires that the problem itself is best understood through patterns" - may seem silly at first. Nevertheless, it is an important one!

Contrary to what the fictional movie character Maximillian Cohen would like us to think, not everything in the world can be best understood through patterns.

Take for example, the problem of multiplying two numbers. The multiplication follows a predefined set of
mathematical and accounting rules. Creating software that allows users to calculate the result, based on
user input, will be easier if we simply write down the rules for multiplication (for example, 2x2) in computer
code, instead of collecting thousands of multiplications and building up a statistical model to try and find
patterns across these formulas.

Similarly, the problem of finding the shortest path from one point on a map to another might be better
suited by modelling the map as a graph, instead of collecting the paths that thousands of citizens take to
work every day, and trying to find some sort of pattern for each of the different points in the graph.

The third requirement - that is, the variation of information - refers to the inconsistencies in the input data
when presenting the machine with a new problem to solve.

Returning to our example of distinguishing cats from dogs, it is easy to see what is meant by this statement:
every cat and every dog is slightly different.

Although the general characteristics of each animal are the same, minor variations exist both in body shape,
colour and size of each animal. Therefore, any new image that we ask the machine to classify, will differ
slightly to the previous images that it has seen. The information needed to solve a new problem therefore,
constantly varies.

This stands in stark contrast to the problem of finding the shortest path between two points on a map:
whilst people might try to find the shortest paths between different points on the map, neither the points
nor the paths themselves are likely to change (only in exceptional circumstances, such as when a new road
or new building is being constructed). In this case, the information provided by the user to the machine will
never change drastically. The user might ask to find the path between A and B, or B and C, or A and C, but the
points themselves (A, B and C) will never change.

What to expect from this new Machine Learning Series?

We, therefore, see that machine learning, far from being a magic pill, is a powerful problem-solving
technique that works well for certain types of problems. That is, problems that are defined by strong
variations in input, patterns and lots of data. What these problems look like, and exactly the different
techniques that machine learning relies on to solve these problems, is the sole purpose of this tutorial
series.


Your knowledge of the field will be built from the ground up, using practical examples and easy to
understand explanations - without mathematics or complex technical terms.

We will start our journey by first examining what computer science and artificial intelligence are, and what
they try to achieve. Then we will move on to focus specifically on machine learning, exploring the different
concepts, how they work, and what their advantages and limitations are. By the end of this series, you will
have a solid understanding of the topic itself, will be able to understand advanced technical concepts and
differentiate fact from fiction, marketing hype from reality.

Here's a list of topics that you’ll see in the forthcoming editions:

What is Computer Science? (Current Edition)


What is Artificial Intelligence?
What is Machine Learning?
Decision Trees Explained
Neural Networks Explained
CBC, Logic, Programming, Instance-based learning
Deep Learning
The Bigger Picture

What is Computer Science?


”If people do not believe that mathematics is simple, it is only because they do not realize how complicated life
is” - von Neumann

Before we can start discussing the various concepts and techniques used in machine learning, we must
understand what artificial intelligence actually is. And before we can understand artificial intelligence, we
must first understand how computers work and what computer science actually is. We therefore begin this
series about Machine Learning by introducing Computer Science.

Figure 2: Computer Science Mind map (Image Credit: Harrio)

The advent of Computer Science



Ever since their domestication, the workings of the computer have often been rendered inexplicable.

It is often difficult to comprehend that the computer is just a big calculator, a progression of the dry
sciences of electronics and mathematics, performing addition, subtraction, multiplication and division.

While pure mathematics and computer science have diverged over the past few decades, mathematics
plays a fundamental role in the development of computational systems. It was really mathematics that
provided the advent for our digital evolution, allowing us to represent our world in an abstract and logical
fashion.

To truly appreciate just how profound a difference mathematics made to our existence, we need to look
back 12,000 years to the first prehistoric settlements along the banks of the River Jordan.

Among the round houses and plastered storage pits, we find the origins of mathematical and scientific
thought. Prior to their settling, hunter-gatherer communities led a very monolithic existence. Life was
relatively simple and therefore little need arose to introduce abstract thought that could simplify problems.
Anthropologists suggest that hunter-gatherer communities maintained a limited mathematical ability
that did not exceed a vocabulary foregoing the number “ten”. Archaeological discoveries indicate that such
limited knowledge was used by cavemen as early as 70,000 years ago for time measurement or quantifying
belongings.

As communities settled, their needs evolved, as did their problems. With an increasingly sophisticated
lifestyle, and therefore respectively sophisticated volumes of information, people were in need of a way
to reduce chunks of information to essential characteristics. We call this system of summarizing the main
points about something without creating an instance of it, abstract thought.

In addition, people began using symbols or sounds to represent these characteristics, without having to
consider the characteristic itself. This realization was the birth of mathematics: the idea of using one “thing” to represent many “things”, without considering any “thing” in particular.

Figure 3: The Ishango bone, a tally stick, was used to construct a number system. This bone is dated to the Upper Paleolithic era,
around 18000 to 20000 BC (Source: Wikipedia)

Take the number “eight” as an example. “Eight” or ”8” is merely a symbol or sound. It is not a concrete thing
or object in the physical world. Instead it is an abstract thought that we can use to represent an amount of
“things” - be it trees, animals, cars or planes.

Mathematics became a projection of a tribe’s sophistication and a magnification of their own thoughts and
capabilities. As the first civilizations developed, they used such skills to solve many practical problems such
as the measuring of areas of land, calculating the annual flooding of rivers or accounting for government income. The Babylonians were such a civilization and their work on mathematics is used to this day.
Clay tabs found by archaeologists, in what is modern-day Iraq, show squares, cubes, and even quadratic
equations.

Figure 4: The Babylonian mathematical tablet Plimpton 322, dated to 1800 BC (Source: Wikipedia)

To deal with complex arithmetic, the Babylonians developed primitive machines to help them with their
calculations; the counting board. This great-grandfather of the modern computer consisted of a wooden
board into which grooves were cut. These grooves allowed stones to be moved along them, representing
different numbers. Such counting boards proved to be building blocks for modern mathematics and were
arguably a considerable step towards academic sophistication.

With the eventual decline of the Mesopotamian civilization, much of the old wisdom was abandoned and
forgotten, and it wasn’t until 500-600 BC that mathematics began to thrive again, this time in Greece.

Unlike the Babylonians, the Greek had little interest in preserving their ancestor’s wills. Their trade and
battles with foreign cultures brought about swift change, as the Greek noblemen embraced the unknown.
Frequent contact with foreign tribes opened their eyes and they soon came to appreciate new knowledge
and wisdom. As they built the first Universities, the history of the world began to change at a great speed.
Many of the world’s most brilliant minds began to gather at these centers of knowledge, exchanging
theories and schooling the young.

One of these disciples was Aristotle, a student of Plato’s. He was to become the teacher of Alexander the
Great and profoundly shaped Western thought by publishing many works on mathematics, physics and
philosophy. One of his greatest feats was the incorporation of logic which, as we will see later on, led to
the development of the first digital computer. The notion of logic will be discussed in one of the upcoming
tutorials in this series; however, the basic idea is that, given two statements, one can draw a conclusion
based on these statements. For example:

1. All Greeks are mortal

2. Aristotle is Greek

3. Therefore, Aristotle is mortal



This may seem rather obvious now, but at the time, such formalization of thought was an exceptional
breakthrough in the way people viewed their surroundings. In fact, it was so significant to western
culture that, over two thousand years later, a mathematician at Queen’s College Cork (now University College Cork) devised an algebraic system of logic that expressed Aristotle’s logic in the form of algebraic
equations. Known as Boolean Logic, the system turned out to provide the basis for digital electronics, and,
without it, computers as we know them today would probably have never come into existence. In his
memory, the University installed a stained-glass window in its Aula Maxima, showing Boole at a table with
Aristotle and Plato in the background. In later years, the University named its new library after him and in
addition launched an annual mathematics competition for students, allowing them to obtain the “Boole
Award”.

Electrifying abstract thought

With an appreciation of the power of abstract thought and boolean logic, we now know that, given a
number, we can manipulate it to illustrate anything: words, thoughts, sounds, images and even self-
contained intelligence.

What we do not yet know is how to overcome the barrier between the metaphysical number and the
physical world around us, and merge the two. That is, how can we use wires and electrical currents to bring
complex mathematical equations to life?

As computers consist of switches (imagine those switches to be similar to the light switch in your room
- the switch can either be on or off - in other words, it can either allow the flow of electricity or prevent
it from flowing), the key lies in representing mathematics in the form of electric flows. That is, reducing
numbers to two states: on or off. Or, as Aristotle would have put it: true or false.

As our numbering system consists of 10 digits (0, 1, 2, 3, 4, 5, 6, 7, 8, 9 - in mathematical terms: base 10), we
need a notation that allows us to represent a sequence of numbers using only two digits. The fabrication
of such a scheme is accredited to Gottfried Leibniz, a German 17th century mathematician, who, while
constructing a mechanical calculator, realized that using only two digits instead of 10 not only simplifies
the construction of his machine but also requires fewer parts. The concept behind his idea was that any
decimal number can be represented in patterns of 1's and 0's. For example, the decimal number 42 can be
converted to read 101010 (we will examine the mathematics behind these conversions in the next section).
The number 10011110 can be converted to read 158, and so on. This numbering system of base two (1's
and 0's) is referred to as binary.

Leibniz, in addition to being a mathematician and multilingual philosopher, was also among the first
Europeans to study Chinese civilization. It was this passion for Chinese society that caused Leibniz to
realize that the Chinese had preceded him. Having availed of the binary code for centuries, the Chinese
used a different notation: broken and unbroken lines instead of Leibniz’s 0 and 1. Staggered by this
discovery, Leibniz outlined his newly found knowledge in ”Explication de l’arithmetique binaire”, in which
he stated that ”this arithmetic by 0 and 1 happens to contain the secret of the lines of an ancient Chinese
king and philosopher named Fohy (Fu Xi), who is believed to have lived more than 4,000 years ago, and
whom the Chinese regard as the founder of their empire and their sciences”.

What Leibniz was ultimately referring to in his writings was the yin-yang - a complex system based on
the principles of what we call binary arithmetic; a two-symbol representation of the world around us.
The broken (yin) and unbroken (yang) lines represent opposites: male and female, day and night, fire and
water, negatives and positives. Accordingly, the principle of the yin-yang is the building block of Chinese
civilization and is at the heart of its society. Despite its significance, Leibniz’s (re)discovery was soon to be
forgotten. It wasn’t until the 19th century that the logic upon which we have become so reliant was once

again revived and elaborated upon with the emergence of George Boole’s boolean algebra.

The astute reader will have noticed a similarity between the ”on” and ”off” of the switch and the 1’s and 0’s
in boolean logic; if we combine boolean logic and the idea of the switch, then we can see that any decimal
numerical combination can be stored and processed in patterns of 1’s and 0’s using silicon, some switches
and transistors. If then we would have a “controlling chip” that could perform simple arithmetic on these
binary numbers, then we can represent any type of mathematical formula, functions or logical operations.
If we can represent mathematics in the form of electric flows, then we can create entire models of actions
and representations of different states. This, in essence, is how the digital computer works: translating man-
made mathematical models that represent the real world into electric signals using boolean algebra.
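
As a toy illustration of this idea (a sketch of my own, not code from the article), C#'s boolean operators can
be combined into a "half adder", the simplest circuit that adds two one-bit numbers - exactly the kind of job
a handful of switches performs in hardware:

using System;

class HalfAdderDemo
{
    // sum = a XOR b, carry = a AND b: two boolean operations are enough
    // to add a pair of single bits, just like two logic gates in silicon.
    static (bool sum, bool carry) HalfAdder(bool a, bool b) => (a ^ b, a && b);

    static void Main()
    {
        var (sum, carry) = HalfAdder(true, true); // adding 1 + 1
        Console.WriteLine($"carry={(carry ? 1 : 0)}, sum={(sum ? 1 : 0)}"); // carry=1, sum=0, i.e. binary 10
    }
}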

From the “switch” to Computer Science

Now that we have a rough idea as to how computers work, it’s time to answer a fundamental question:
What is Computer Science? The Oxford Dictionary defines computer science as “the study of the principles
and use of computers.” But what exactly does that mean? And what can one expect to do when signing up to
study computer science?

Most people think that computer science is all about learning how to program. Whilst programming
certainly is an integral part of computer science, the field itself is about much, much more than just that.
Rather than just teaching you how to write computer code, computer science lies at the crossroads between
mathematics and engineering, drawing upon both fields to teach you how to analyse and solve problems
using technology. It teaches you how to break down complex problems and how to describe solutions in a
formal way, so that a solution's correctness can be validated and proven, and in a way that a computer can
understand and execute these solutions. To allow you to do
this, Computer Science will give you a fundamental understanding of how information technology works,
from bottom to top.

Figure 5: Computer science in one picture.

And as a result of this, you will be learning how to use certain technologies - although learning how to use
a technology is not the focal point of computer science.

That is, students of computer science will learn how to solve complex problems in such a way that they
can be solved by a computer. To do so, the problem itself is first abstracted, and a general solution to this
abstracted problem is created. This solution is then implemented, using a specific technology. For example,



using the Java programming language whose instructions are executed using a virtual machine installed
on a laptop running Microsoft Windows or Linux. The focus here is to become capable of abstracting and
solving problems, and technologically competent enough to implement solutions to these problems. The
aim is not to become an expert user of a specific technology (although this is usually the side-effect of
repetitively solving and implementing solutions to problems).

...so what do Computer Scientists do?

We previously talked about “what Computer Science is” and discovered that computer science is, in essence,
concerned with solving complex problems using technology.

But, if you were a computer scientist or software engineer, how exactly would you go about doing that?

In essence, there really are only 4 things that computer scientists do:

1. When you encounter a problem, the first thing you do is analyse the problem, and see whether you can
decompose it into smaller problems that are easier to solve. That is, a computer scientist would spend
a large amount of time thinking about how to break a problem down into smaller problems, and
how to determine just how difficult a problem can be to solve. Abstraction forms an important part of
this process (in their book “Artificial Intelligence: A modern approach”, Stuart Russell and Peter Norvig,
give a concise definition of abstraction as the “process of removing detail from a representation”).

2. Once computer scientists have thought about, and analysed a problem, they can begin to develop
solutions to the analysed problem. Each solution, they will need to describe formally. That is, they
typically write down an abstract set of instructions (called algorithms) that solve the said problem.
Such instructions must be universally understandable, concise and free of ambiguity. Above all, these
solutions must be free of implementation details.

3. Once they have come up with a solution, computer scientists must prove the correctness of their
solutions to all the other computer scientists out there. That is, they must demonstrate that their
solution correctly solves the given problem, and any other instances of the problem. For example,
consider the problem of sorting an unordered list of numbers: 3, 1, 2. Your solution must correctly
sort this list of numbers in ascending order: 1, 2, 3. However, not only must you demonstrate that your
solution is capable of sorting the specific numbers 3, 1, and 2; you must also demonstrate that your
solution can sort any other unordered sequence of numbers. For example, 192,384, 928, 48, 3, 1,294,857 (a
toy version of this idea is sketched in the code right after this list).

4. Last but not least, computer scientists must implement the given solution. This is the part where
programming comes into play, and where you really get your hands dirty. As such, when people think
about computer science, this is the part that they tend to think about. Programming is all about learning
how to write instructions that a computer can understand and execute, and is a huge field in itself.
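
As a rough illustration of points 3 and 4 (a minimal sketch of my own, not code from the article), the
following C# program implements a simple sorting algorithm and a check that its output is ordered. Note
that running such a check on a few inputs is only testing; actually proving correctness for every possible
input requires a mathematical argument.

using System;
using System.Linq;

class SortingDemo
{
    // A deliberately simple sorting algorithm: selection sort.
    static int[] Sort(int[] numbers)
    {
        var result = (int[])numbers.Clone();
        for (int i = 0; i < result.Length - 1; i++)
        {
            int minIndex = i;
            for (int j = i + 1; j < result.Length; j++)
                if (result[j] < result[minIndex]) minIndex = j;
            (result[i], result[minIndex]) = (result[minIndex], result[i]);
        }
        return result;
    }

    // Checks one property of a correct solution: the output is in ascending order.
    static bool IsSorted(int[] numbers) =>
        numbers.Zip(numbers.Skip(1), (a, b) => a <= b).All(ok => ok);

    static void Main()
    {
        Console.WriteLine(string.Join(", ", Sort(new[] { 3, 1, 2 })));            // 1, 2, 3
        Console.WriteLine(IsSorted(Sort(new[] { 928, 48, 3, 192384, 1294857 }))); // True
    }
}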

In order to become competent at analysing problems and developing solutions to these problems, you
will need to understand the fundamental principles behind information and computer systems. As such, a
computer science course will typically cover topics such as:

1. Logic and boolean algebra. These are huge, fundamental aspects of computer science. We already
touched upon this in earlier sections of this book. Understanding boolean algebra will help you
understand how silicon chips combined with some electrical wiring, can work together to perform
incredibly complex tasks such as connecting to a massive network of other computers and playing
YouTube videos.

2. Theoretical foundations behind information, computer architectures, algorithms and data structures.

The latter are, in essence, constructs that hold and organize data and give rise to efficient ways of
processing, searching and retrieving data. A very simple type of data structure which every reader
should be familiar with is a list, which represents data, such as numbers, in a linear sequence (for
example: 1, 2, 3). There exist, however, many more complex data structures, such as trees, which organize
data not in a sequence but in the form of nodes, branches and leaves (imagine this just like a Christmas
tree).

3. Theories behind data organisation, storage and the processing of different types of data. These theories
give rise to things like databases (which are pieces of software that focus on storing data in the most
effective form so that the data can be recovered quickly from disk) and multimedia (which is concerned
with things like image file formats and recording, storing and reproducing sound and video).

4. Networking and communication. This, in essence, is the study of “how computers talk to each other”.

5. Theories governing secure systems. That is, how to prevent attacks or unauthorized access on computer
systems, how to communicate securely and how to encode data in a secure fashion (cryptography).

6. Programming and software engineering. This teaches you how to actually turn abstract solutions to
problems (i.e. algorithms) into actual programs that can be executed by a computer. It is also concerned
with how to best design and implement said programs, as well as with the development of new
programming languages.

7. Advanced, domain-specific topics such as how to create programs that learn by themselves or exhibit
intelligent behaviour, a subfield of which (machine learning) is the focal point of this book.

A Sample Problem

At this point, we know:

i) how computers work (electric switches that can perform logical operations using boolean algebra),

ii) what computer science is (the study of translating real world problems and solutions into
abstractions that can be solved using boolean algebra), and

iii) what computer scientists do (analyse, solve and translate problems and solutions).

Let’s now look at how we would actually go about studying a real-world problem, simplifying and
abstracting it, and producing an algorithm to solve it. We won’t dive into writing an actual implementation
of our algorithm - after all, learning to program is far beyond the scope of this series!

Furthermore, the process and fundamentals outlined in this section might seem complex at first and are
designed to give you an appreciation and understanding of the entire field of computer science. A thorough
understanding of the problem itself won’t be necessary to understand the future articles of this series, but
will of course help you appreciate the complexity behind artificial intelligence and machine learning in
general.

The sample problem that we will be focusing on in this section is that of finding the shortest route
between two points on a map.

It is a problem that has been well-studied in computer science and is one that we have all faced in real-
life. Its solution transformed logistics and travel worldwide, and has been applied to a wide range of other



problems that are unrelated to maps or transportation. Many excellent manifestations and implementations
of the solution to the shortest-path problem exist in products such as Google Maps, Waze or Uber. As such,
the problem itself shouldn’t require much explanation: given a map, and two points on this map, we want to
find the shortest path between these two points. For example, given that we are at the town hall, we might
want to find the shortest route to the local library.

The first thing we should do is find a way to abstract the problem. That is, we should forget everything
about physical maps, buildings, cars and the city. Instead, we should think of a way in which we can
represent the map itself in the simplest form possible, capturing its essence and removing everything else
that is not needed. One way to do this is to represent the map as a graph.

We have all seen a graph at one point or another in life, and we know that it basically is just an abstract
diagram. The nodes in the graph represent the different buildings on our map, whilst the edges (lines
connecting the nodes) represent the streets between these different buildings. For the sake of simplicity, we
will ignore the length of streets or the amount of traffic on them and just consider the number of points (or
nodes) that a driver would need to traverse in order to reach his or her destination.


Using this graph representation, we have simplified or abstracted our map, since we now:

1. Removed the unnecessary pieces of information, such as colours, gradients, trees and shapes of the
streets.

2. Maintained only the essence - that is, the locations and paths between the different locations - of the
map.

3. Represented the map in such a way that we can reason about it more easily and can represent it
digitally (as we will see shortly).

Now that we have represented our problem in a more abstract form, we must think about solving it. How
exactly would we determine the shortest path from one node to another?

Well, to provide an answer to this question, we must first look at what exactly the shortest path is. Given
our representation of the world (or map) in graph form, the definition of the shortest path between two
points is simply the path on the graph that traverses the least number of nodes. Looking at the figure,
we see that path B is shorter than path A, since path B involves only travelling across 2 nodes, whilst path A
involves travelling across 3 nodes.

Starting at a given node M, we therefore know the shortest path to T to be the path that passes through R
and S. This is easy to figure out by just looking at the graph and then counting the number of nodes in our
path. But how would we write a formal set of instructions (an algorithm) to describe a
solution?

Well, let’s think about simplifying the problem a bit: by definition, starting at node M, the shortest path
to node T would be the shortest path to node T from either node N or node R. Likewise, from node R, the
shortest path to node T would be the shortest from node S to node T; from node N, the shortest path to



node T would be the shortest path from node O to node T. From node O, the shortest path to node T would
be the shortest path from node P to node T. And so on, so forth. Therefore, if we were starting at node M and
wanted to find the shortest path to node T, then we would only need to know the length of the path from
node R to node T and the length of the path from node N to node T. We would then need to choose the
shortest of the two.

We could therefore write a piece of code that calculates the length of the path between two nodes. We
would then invoke this piece of code for all the immediate neighbours of our starting node, and then
choose the smallest number (i.e. shortest path length) returned.

Since we need the actual path, and not just the length of the path, we would somehow keep track of the
nodes with the minimum path length, and then use the node with the minimum path length as our new
starting point.

Let’s call the piece of code that calculates the length of the path from one node to another pathLength(a,
b). Don’t worry about how this length is being calculated for now. Just assume that some dark magic returns
the length of the path, if we want to get from node a to node b, where node a and b can vary to refer to any
node that we like.

When starting off at node M, we therefore use the pathLength code twice. Once to get the length from R
to T, and once to get the length from N to T:

length from R to T = pathLength(R, T)
length from N to T = pathLength(N, T)

By looking at the graph, we know that pathLength(R, T) will return 1, whilst pathLength(N, T) will return 2
(since it takes two nodes to get to T).

Since 1 is less than 2, we know that the shortest path from M to T will involve us travelling through node R.
We therefore record R by adding it to our list of nodes that we use to keep track of our shortest path (let’s
call this list path).

Our new starting point is now R, so we will need to execute pathLength for all of R’s immediate
neighbours: U and S.

length from U to T = pathLength(U, T)
length from S to T = pathLength(S, T)

Since S is directly connected to T (and hence passes through no intermediate node), the length of the path
from S to T is 0. The length of the path from U to T is 1, since we need to pass through P to arrive at T.

We therefore choose S and add it to our path list. Our path list now contains the nodes R and S, and hence
our shortest path from M to T, which passes through R and S. And that’s it! We have discovered a simple
way of calculating the shortest path from any node in a network or graph to any other. Don’t worry if this
seems a bit complex. You won’t need to memorize or fully understand every fact of this solution in order to
understand future chapters.


We now want to put this solution down on paper in a way that it is easy to understand and can later be
converted into code by a programmer. Let’s do this now:

shortestPath(from, to):
    minPathLength = infinity
    bestNode = nothing

    for neighbour n in getNeighbours(from):
        if n is to:
            return [n]
        length = pathLength(n, to)
        if length < minPathLength:
            minPathLength = length
            bestNode = n

    return [bestNode] + shortestPath(bestNode, to)

Does this look overwhelmingly complex? Don’t worry, let’s break things down. shortestPath is the name
of the piece of code that will calculate the shortest path from one node in the graph, to the other. It will
return the actual shortest path in the form of a list. For example, shortestPath(M, T) will return [R, S, T].

The first two lines in the shortestPath algorithm simply say that the value for minPathLength is set to
infinity (or just some really really large value); and that bestNode is empty (or nothing). Next, we will go
through all of the neighbours of our starting node. If a given neighbour is our destination node (to), then
we terminate, returning a list containing only the destination node. Otherwise, we perform our path length
calculation by executing the
pathLength code which returns the length from n to our destination node.
Once we know the length, we check it against the minimum length known to date: If it is less,
then we set the minPathLength to the new minimum length, and bestNode to the node containing the
new minimum length.

We then terminate by merging the list containing our bestNode with that of the list returned by a new
invocation of shortestPath from bestNode to the destination node.
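
Although the article deliberately stops short of a real implementation, a minimal C# translation of the
pseudocode may help readers who want to experiment. The graph edges below are an assumption inferred
from the description of the map, and pathLength is implemented here with a simple breadth-first search;
neither detail comes from the original text.

using System;
using System.Collections.Generic;

class ShortestPathDemo
{
    // The map modelled as an adjacency list. The exact edges are an assumption
    // based on the text: M-N, M-R, N-O, O-P, P-T, R-S, R-U, U-P, S-T.
    static readonly Dictionary<string, List<string>> Graph = new Dictionary<string, List<string>>
    {
        ["M"] = new List<string> { "N", "R" },
        ["N"] = new List<string> { "O" },
        ["O"] = new List<string> { "P" },
        ["P"] = new List<string> { "T" },
        ["R"] = new List<string> { "S", "U" },
        ["U"] = new List<string> { "P" },
        ["S"] = new List<string> { "T" },
        ["T"] = new List<string>()
    };

    static List<string> GetNeighbours(string node) => Graph[node];

    // The "dark magic" helper: the number of intermediate nodes on the shortest
    // route from 'from' to 'to', found with a breadth-first search.
    static int PathLength(string from, string to)
    {
        var visited = new HashSet<string> { from };
        var queue = new Queue<(string node, int depth)>();
        queue.Enqueue((from, 0));
        while (queue.Count > 0)
        {
            var (node, depth) = queue.Dequeue();
            if (node == to) return Math.Max(0, depth - 1); // count intermediate nodes only
            foreach (var n in GetNeighbours(node))
                if (visited.Add(n)) queue.Enqueue((n, depth + 1));
        }
        return int.MaxValue; // destination not reachable
    }

    // A direct translation of the pseudocode above (assumes the destination is reachable).
    static List<string> ShortestPath(string from, string to)
    {
        int minPathLength = int.MaxValue;
        string bestNode = null;
        foreach (var n in GetNeighbours(from))
        {
            if (n == to) return new List<string> { n };
            int length = PathLength(n, to);
            if (length < minPathLength) { minPathLength = length; bestNode = n; }
        }
        var path = new List<string> { bestNode };
        path.AddRange(ShortestPath(bestNode, to));
        return path;
    }

    static void Main() =>
        Console.WriteLine(string.Join(" -> ", ShortestPath("M", "T"))); // R -> S -> T
}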

If this explanation and the algorithm itself is confusing, try to spend some time thinking about it. Don’t
worry if you didn’t fully understand it: as long as you understand the overall idea behind:

1) abstracting a problem
2) formalizing a solution to an abstract problem

...then you understand the very essence of computer science, and are well equipped to move onto the
next chapter. To the reader more familiar with abstract reasoning: We greatly simplified the problem and
solution to the shortest path problem in this section. Many different, more complex and more efficient
solutions to this problem exist. For example, we could use the same approach (modelling the map as a graph
and traversing the nodes and edges of this graph to find the shortest path) to also factor in things like
traffic or the length of the roads by applying weights to the different edges, and then finding the path with
the minimum sum. But this is far beyond the scope of this article.

Conclusion

In this article, we explained the very essence of computer science. We first bridged the gap between real
world problems and silicon chips, giving an indication as to how, using logic and abstract thought, we can
model real problems and have them solved by a computer. The take home message here is that we can
use abstract thought to summarize or represent our world in such a way that we can reason about it and
produce general solutions to concrete problems. That is, using abstract thought, logic and mathematics, we
can take a problem and formulate a solution to the problem in such a way that the solution can be applied
to other instances of the same problem.



By formulating solutions in a certain way, we can use computers to solve them for us by expressing the
solutions using a precise set of instructions. Under the hood, the computer translates these instructions,
using boolean algebra, in such a way that we can use electrical currents, silicon chips and wiring, to
calculate a result for us.

In the forthcoming articles of this series, we will see:

What is Artificial Intelligence?
What is Machine Learning?
Decision Trees Explained
Neural Networks Explained
CBC, Logic, Programming, Instance-based learning
Deep Learning
The Bigger Picture

Stay tuned!

Benjamin Jakobus
Author
Benjamin Jakobus is a senior software engineer based in Rio de
Janeiro. He graduated with a BSc in Computer Science from University
College Cork and obtained an MSc in Advanced Computing from
Imperial College, London. For over 10 years he has worked on a wide
range of products across Europe, the United States and Brazil. You
can connect with him on LinkedIn

Technical and Editorial Reviewer


Suprotim Agarwal


AZURE DEVOPS

Subodh Sohoni

Using
AZURE DEVOPS
for Product Development
(Multiple Teams)


In this article, I am going to drill deep into Azure Boards, an Azure DevOps service, to unearth its features
for portfolio management.
The portfolio we are considering here is the development of a product that is so large that it cannot be done
by a single team in a viable length of time. It has to be done by multiple teams as a joint effort.

Azure Boards provides Teams, Areas, Shared backlogs and Team backlogs features for doing portfolio
management. In this article, we will explore why these tools are necessary and how to use them.

Defining the context of portfolio

Scrum sets a limit on the size of the team to a maximum of 9 team members.

For a large product, it will take an inordinate length of time for a team of up to 9 team members to develop it.
Such a timeframe for delivery is not acceptable to the customers and the management. It is but natural to
give the task of the product development to two or more teams.

When these multiple teams are working to develop the same product, there are some rules that those
teams will have to follow to remain Scrum teams. Each team has to work within the framework of Scrum.

Keeping that in mind, let’s list the rules, constraints and artifacts that are to be created.

1. Product Backlog – Multiple teams are going to develop a Single product. Hence all product
requirements, regardless of which team will fulfill those, will be put into one product backlog. That
product backlog will be shared by all teams.

When we want to view the product backlog, the entire list of ordered set of Product Backlog Items
(PBIs) should be visible.

2. Delivery Cadence – Usually customers are not interested in the delivery of a feature that depends upon
another feature that may not be available, yet.

Since the customers are interested in the delivery of an entire Product Increment, the delivery cadence
of all the teams should be the same. That can be at the end of each sprint, but is not always so. The delivery
of an increment of the product should be planned at a time that is convenient to all the teams and the
customers. We will call it a Release.

All the teams should have a plan to release an increment that is synchronized with each other. Within
a release of multiple sprints, it is possible that each team may have different duration, start dates and
end dates of the sprints, as long as they follow the limits set by the release. See Figure 1.

Figure 1: Unsynchronized sprints of multiple teams in a release


3. Synchronized sprints - As mentioned earlier, the delivery cadence of all teams should be the same with
a freedom to schedule the sprints at the team level. After accepting that fact, we should also consider
the needs of the organization to have a comparison of status of development teams, and deliveries of
dependencies at an appropriate time.

From this point of view, it is suggested to have synchronized sprints for all the teams as shown in
Figure 2.

Figure 2: Synchronized sprints of multiple teams in a release

4. Sprint Backlog – Each team will have its own sprint backlog. As a default, the team should be able to
view and focus on their own sprint backlog.

5. Transparency about Dependencies – It is natural that if multiple teams are developing a feature, there
will be dependencies within the (Product Backlog Items) PBIs that are taken up by different teams in
their respective sprint backlogs. It should be possible to always view the status of dependencies across
the teams, in one view.

Demystifying Terms associated with Product Development

Having defined these shared rules and constraints, we will now disambiguate some of the terms related to
portfolio.

Product Backlog Item (PBI) – A PBI is any abstract entity in the context of the product on which the team will
spend effort. It may be a User Story or a Non-Functional Requirement which needs to be fulfilled. It may be
a Bug that needs to be fixed. Every PBI is expected to be scoped to at most one sprint and one team. If it is
started for implementation in a sprint, it should be completed in that same sprint.

Feature – Features are sets of User Stories or PBIs which go together. They are required to be implemented
as a unit to give a coherent experience to the user. Features are created from top-down and then refined in
the bottom-up approach.

Let me explain this statement.

Once the overall direction of Product development is finalized, the product is split into user experiences
that are required for the product to work. These user experiences, which we call features, are large enough
to be implemented over multiple sprints.

Each feature is then split into multiple PBIs to be implemented. It is added to the product backlog so that it
becomes one of the PBIs. This is the top-down part of the approach.

If a PBI is so large that it can take efforts of all the team members in a sprint and still cannot be
implemented in a sprint, then it is elevated to the level of feature. It then is split into multiple PBIs, each of



which can be implemented within a sprint. This part is the bottom-up approach. Each feature should
be implemented during a release. A feature can be scoped to multiple teams.

Epic – These entities define the general direction of product development and the initiatives that the
organization adopts. An epic is split into multiple features which are implemented over a number of releases.

Area – This term is not a generic term but is specific to Azure DevOps and some other DevOps toolsets like
IBM Rational Team Concert.

Conceptually, it is a way to classify our PBIs on a criterion other than time.

As we all know, one of the Agile practices is to do incremental development using an iterative development
model. An iteration is a timebox in which the entire development process is carried out for a defined
increment to the product. PBIs and their derivatives like tasks are classified into iterations to define what is being
developed in each iteration.

We may want to classify PBIs on a criterion other than time. This is what Areas help us do. For example, if a
team is developing a feature called “Employee UI”, then all the PBIs that are part of that feature can
be added to the area named “Employee UI”. There can be a one-to-one relationship between a team and its
area.

Now that we have clarity about the terms that we are going to use, we can check how those are
implemented using a case study.

Case Study: SSGS Employee Management System (EMS)

SSGS IT EDUCON Services Pvt. Ltd. is a consulting firm that provides consulting related to DevOps.
SSGS Consultants go on-site for providing support related to various DevOps tools and processes. When an
enquiry comes for support related to DevOps, the profiles of the consultants are sent to the customer over
emails. Customer selects the consultant of their choice and the contract is drawn between the customer and
SSGS.

When consultants upgrade to new technologies and / or have gained new experience, updated profiles
of the consultants are sent. When there are a number of repeat enquiries, it is observed that some
customers do not request new profiles, but check the old profiles of these consultants to find if they fit their
requirement.

To eliminate this issue, SSGS needs a software application to be created which will allow customers to
search and view the latest profiles of consultants. Profiles are initially created by the HR Person in the
organization when the consultant joins the organization. Those profiles cannot be downloaded for offline
use. The Customer can view skill set, certifications and experience of the consultants, as well as their rates
and availability. The customer then can select one or multiple consultants with whom the contract is drawn.

Once the consulting project is over, the consultants update their profile in the same software that was used
by the customer and submit it for approval to the manager. The Manager may accept the updates or may
suggest some more changes. Once the manager approves the updates in a profile, that profile becomes
available to the next customer if the search criterion matches. If the consultant leaves the organization, the
HR Person archives the profile and may also delete it, if needed.


Analysis of the case study

Since the application is to be accessed by users external as well as internal to the organization, it needs
to be a browser-based web application. It should have separate interfaces for HR, Consultant (Employee),
Manager and Customers. We can call each of them as features.

Each of these interfaces will allow many interactions between the user and the application. These
interactions are gathered as PBIs. The Application needs to be created in the shortest possible time.

Approach to address the issues in the case

1. Since there are multiple interfaces which make a product, we will create multiple teams to handle each
of those. Each interface will be treated as a feature and an area will be created for it. The created area
will be assigned to the respective team.

2. PBIs created as children of each feature, will also be added to the same area.

3. Each team will have iterations with the same start dates and end dates.

4. There will be only one product backlog, common to all feature teams. This backlog will consist of
features and their child PBIs.

5. Each team will have a separate sprint backlog to which the PBIs of that sprint will be added.

6. It should be possible to view the distribution of the entire product backlog across teams, sprints and the
release on a timescale, in a single screen.

Implementation

Create Team Project and Teams

We start the implementation by creating a new team project on Azure DevOps. When a team project is
created, it automatically adds a team with the same name as the team project.

When we add a team project named “SSGS EMS”, a team named “SSGS EMS Team” is automatically added
to that team project. This team will have all the members working on the product. We will now create the
teams and their respective areas for different interfaces. We will create the following teams:

1. Employee Interface

2. HR Person Interface

3. Manager Interface

4. Customer Interface

We can create new teams by clicking the “New Team” button on the page of Project Settings – Teams. Figure
3 shows a form for creating the new team named “Employee Interface”.



Observe that:

1. I have selected team members for the team

2. I am the administrator of the team and am also a part of the team.

3. All the other members are assigned the role of “Contributor” so that they can add and edit any entity in
the project.

4. The check-box to create an area path with the name of the team is checked.

Figure 3: Create a new team

On that page itself we can view all the created teams as seen in Figure 4. Each team is going to be a Scrum
Team.


Figure 4: Created Teams

From Project Settings – Boards – Project Configuration – Areas we can view the created areas.

Figure 5: Areas Created with Teams

To set the properties for each team, let us select the teams one by one from the top (dropdown) – see
Figure 6.

Figure 6: Default Area for the Team

When a team, for example “Customer Interface” is selected, we can see that “Default Area” property of the
team is selected to the area with a name that is same as the team name.



Configure Sprints in Project Settings

For each team, we can set the sprint (Iteration) that will appear in the Sprint hub of that team. As a
preparation for the same, we should have at least as many sprints with the same dates defined in the
project as there are feature teams.

Since we have 4 feature teams, we can set a minimum of 4 sprints having the same dates. Set the dates of
the sprint from Project Configuration – Iterations page.

Figure 7: Sprint dates

Set Sprints for Teams

Let’s now set sprints for the teams. Open the page Team Configuration from Project Settings – Boards. Select
the team “Employee Interface” from the top-level dropdown on that page. Click the Iterations tab on the
team properties page that is shown. Add the iteration “SSGS EMS\Sprint 1” to the list of iterations of this
team by selecting that iteration.

Figure 8: Set Sprints for the Team



Add iterations to various teams as shown in the following table:

Create Product Backlog

We will now create Product Backlog. Open the page Boards – Backlog and ensure that the team that is
selected by default is “SSGS EMS” (the default team of the project). Select the Feature from the dropdown
that appears on the top-right corner of the page (Figure 9).

Figure 9: Product Backlog Creation

Let’s add features to the product backlog. Features are parents of PBIs. We will not assign features to any
specific team although we will keep their names same as the team names.

Figure 10: Created Features

In the next step, we will change the view to Backlog Items and add the setting to view Parents.

Once the Features are visible in the product backlog, we will add the PBIs. Each PBI is added as a child to
the respective feature. A sample set of PBIs which is a complete product backlog at this moment can be
seen in Figure 11:



Figure 11: Snapshot of Product Backlog

Set Area Path and Iteration Path to PBIs as per team

Now we will add PBIs to the area path of the respective team.

For example, let’s first add the first PBI that is under the feature “HR Person Interface” to area path with the
name “HR Person Interface”. As soon as we do that, it appears in the backlog of “HR Person Interface” as seen
in Figure 12.

Figure 12: Team Backlog
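
Everything we have done through the web UI can also be queried programmatically. As a hedged
illustration (not part of the original walkthrough), the following sketch uses the Azure DevOps .NET client
libraries (the Microsoft.TeamFoundationServer.Client NuGet package) and a WIQL query to list the PBIs
classified under a team's area path; the organization URL and the personal access token are placeholders
you would substitute with your own values.

using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.TeamFoundation.WorkItemTracking.WebApi;
using Microsoft.TeamFoundation.WorkItemTracking.WebApi.Models;
using Microsoft.VisualStudio.Services.Common;
using Microsoft.VisualStudio.Services.WebApi;

class AreaPathQuery
{
    static async Task Main()
    {
        // Placeholder organization URL and personal access token
        var connection = new VssConnection(
            new Uri("https://dev.azure.com/your-organization"),
            new VssBasicCredential(string.Empty, "your-personal-access-token"));

        var client = connection.GetClient<WorkItemTrackingHttpClient>();

        // WIQL: all PBIs classified under the team's area path
        var wiql = new Wiql
        {
            Query = @"SELECT [System.Id]
                      FROM WorkItems
                      WHERE [System.TeamProject] = 'SSGS EMS'
                        AND [System.WorkItemType] = 'Product Backlog Item'
                        AND [System.AreaPath] UNDER 'SSGS EMS\HR Person Interface'"
        };

        var result = await client.QueryByWiqlAsync(wiql);
        var ids = result.WorkItems.Select(w => w.Id).ToArray();
        if (ids.Length > 0)
        {
            var items = await client.GetWorkItemsAsync(ids);
            foreach (var item in items)
                Console.WriteLine($"{item.Id}: {item.Fields["System.Title"]}");
        }
    }
}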

A side effect of this step is that the PBI disappears from the Product Backlog of the Default Team. In most
cases, we still want it to appear in the Product Backlog.

For that, we have to change a setting.

We will open the page Project Settings – Team Configuration – Areas – SSGS EMS (Default area of the
project) and then click the Context Menu Button (…) as shown in Figure 13.


Figure 13: Include Sub Areas for Default Team

After accepting the Warning that appears, we will be able to see the PBIs that are under a different team
backlog, under the product backlog too.

Now we can drag and drop the PBI under Sprint 2 which is the Current Sprint of the team “HR Person
Interface”.

Figure 14: Creating the Sprint Backlog

PBIs that are put under Sprint 2 can be seen under the Task Board of Sprint 2.

Figure 15: Task Board



Figure 16 shows the product backlog at default team level after distributing the various PBIs to team
sprints.

As a pure agile practice, we should not put any PBIs in sprints other than the current sprint of the team. But
I have gone against that practice to show the Release Plan view that is part of the Delivery Plan extension.

Figure 16: Product Backlog after Distribution

You may observe in Figure 16 that I have deliberately added one of the PBIs that is a child of “Employee
Interface” to Sprint 6, which is not a sprint of “Employee Interface”. This goes to show that it is possible to
work across different teams on the same feature.

Working with Delivery Plan Extension

We can install the Delivery Plan extension from Visual Studio Marketplace. It is directly installed on our
Azure DevOps account from the page https://marketplace.visualstudio.com/items?itemName=ms.vss-plans.
This extension is created by Microsoft and is free of cost.

Once it is installed, we can view it under the Boards section of the team project as a node named “Plan”. On
that page, we will create a new plan and call it Release Plan.

Figure 17: Creating new Delivery Plan


We can add all our feature teams under this plan and set the backlog level to “Product Backlog Item”.

Figure 18: Add the Teams and Backlogs to the Delivery Plan

The plan is created when we click the Create button. In Figure 18, you may be able to see the plan that has
some additional PBIs. We can add such PBIs directly from the plan if we find any important PBIs missing.

On the plan, we can set markers as milestones as seen in Figure 19. You may be able to see the marker for
Release 1 which is when the first increment of the product is to be released.

We can also set the fields that appear on the cards of the PBIs. I have added the Parent ID of the PBI so
that we may be able to trace the feature that the PBI is a child of.

The Delivery Plan extension can be used in the following ways:

1. View the planned iterations and PBIs under them for all teams on a single screen.

2. Check the status of dependencies that are being created by other teams.

3. Use it as a basis for periodic communication between teams.



Figure 19: Viewing the Delivery Plan for the Entire Release

Conclusion

In Azure DevOps, it is possible to use the same tools provided for managing agile development of one team,
to manage parts of a single product being developed by multiple teams.

In this article, we have seen how we can keep a common product backlog for all teams working on a single
product but maintain a separate sprint backlog for each of them.

We also saw how to use the Delivery Plan extension for viewing the entire release consisting of multiple
iterations and multiple teams.

Subodh Sohoni
Author
Subodh is a Trainer and consultant on Azure DevOps and Scrum. He has an experience of over
33 years. He is an engineer from Pune University and has done his post-graduation from IIT,
Madras. He is a Microsoft Most Valuable Professional (MVP) - Developer Technologies (Azure
DevOps), Microsoft Certified Trainer (MCT), Microsoft Certified Azure DevOps Engineer Expert,
Professional Scrum Developer and Professional Scrum Master (II). He has conducted more
than 300 corporate trainings on Microsoft technologies in India, USA, Malaysia, Australia,
New Zealand, Singapore, UAE, Philippines and Sri Lanka. He has also completed over 50
consulting assignments - some of which included entire Azure DevOps implementation for the
organizations. Subodh is a regular speaker at Microsoft events including Partner Leadership
Conclave.You can connect with him on LinkedIn

Technical Review: Gouri Sohoni
Editorial Review: Suprotim Agarwal
C#

Jose Manuel Redondo Lopez

Dynamic Class
Creation
Preserving Type Safety
in C# with Roslyn

C# is gradually incorporating additional features typical of dynamic languages - thanks to the increased
support for them that's integrated into its underlying platform.

One of these features is the Roslyn Compiler-as-a-Service (CaaS) API, which allows any program to use
compiler services to manipulate code at runtime.

In this article, I will discuss an approach to take advantage of Roslyn to increase the number of dynamic
language features we can use in our programs. This approach preserves type safety if required and
enhances the performance of dynamically added code when compared with using ExpandoObject instances.



Introduction
Dynamic languages like Python, Ruby or JavaScript are very popular in web development, rapid prototyping,
game scripting, interactive programming, and other scenarios requiring dynamic adaptiveness [1]. They
enable rapid development of database-backed web applications and implement the Convention over
Configuration and DRY (Do not Repeat Yourself) principles.

A dynamic language provides high-level features at runtime that other languages only provide statically by
modifying the source code before compilation[2]. They support runtime metaprogramming features, which
allow programs to dynamically write and manipulate other programs (and themselves) as data. Examples of
these features are:

• Fields and methods can be added and removed dynamically from classes and objects.

• New pieces of code can be generated and evaluated at runtime, without stopping the application
execution, adding new classes, replacing method bodies, or even changing inheritance trees.

• Dynamic code containing expressions may be executed from strings, evaluating their results at runtime.

These metaprogramming features can be classified in different levels of reflection (the capability of a
computational system to reason about and act upon itself, adjusting itself to changing conditions), and they
are a key topic of our research work [1] [2].

Dynamic languages are frequently interpreted, and generally check types at runtime [5]. The lack of compile-
time type information involves fewer opportunities for compiler optimizations.

Additionally, runtime type checking increases both execution time and memory consumption, due to the
type checks performed during program execution and the additional data structures needed to be able to
perform them[1] [2] [3].

Because of this, there are approaches to optimize dynamic languages [3] or to add their features to existing
statically typed ones [4]. These are based on modifying the language runtime [8] or using platform features
to instrument the code at load time. In either case, runtime modification of programs is normally supported
via an external API using compiler services that can be accessed programmatically [4] .

These approaches were successful, proving that dynamic language features can be incorporated to
statically typed languages and benefit from both static type checking and runtime flexibility.

However, they require the development of a supporting API or a custom virtual machine [3]. This requires
a lot of effort and, as the new dynamic features will not be part of the platform, they must be distributed
along with the programs that use them.

However, the modern .NET ecosystem enables us to obtain a lot of these features just with standard
platform services or modules.

The main contribution of this article is to demonstrate how the Roslyn Compiler-as-a-Service (CaaS) API can
be used to achieve runtime flexibility and type safety, while implementing most of the described dynamic
language features.

We’ll demonstrate how a class can be fully created at runtime, incorporating custom methods and

properties. The compiler services compile the dynamic code ensuring that no type errors exist [4].

You’ll also see how this dynamic class can comply with a known interface so type safety can be
preserved in the program, at least with part of the dynamically created code. Finally, we will also check if
this allows a substantial performance gain when compared to an older, non-statically typed approach that
creates dynamic objects via the ExpandoObject [9] class.

All code shown in the following sections have been successfully tested on Visual Studio Enterprise 2019
using a .NET Core 3.1 solution.

Using ExpandoObject to create dynamic code


The ExpandoObject class has been part of the .NET Framework since version 4.0, in the
System.Dynamic namespace. This class implements the IDictionary<string, object> interface.

Thanks to the syntactic sugar embedded into the language and the dynamic type, it can be used to create
new objects whose content can be fully defined at runtime.

The following code snippet shows how a programmer can add the ClassName property to an
ExpandoObject instance by just writing a value to it. Instead of giving an error for writing to a property
that does not exist, the property is created and initialized inside the ExpandoObject instance.

public static ExpandoObject BuildDynamicExpando(string className,
  Dictionary<string, object> fields) {
  dynamic obj = new ExpandoObject();
  obj.ClassName = className;
  AssignProperties(obj, fields);
  return obj;
}

Also, ExpandoObject instances behave like dictionaries, and AssignProperties takes full advantage of
this to dynamically add multiple properties.

foreach (var item in dict)
  ((IDictionary<string, object>)myObj)[item.Key] = item.Value;
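
The article only shows the loop inside AssignProperties; a complete version of the helper could look like the
sketch below. The method name and parameters are taken from the call shown earlier, the rest is an assumption.

private static void AssignProperties(ExpandoObject myObj, Dictionary<string, object> dict) {
  // ExpandoObject implements IDictionary<string, object>, so writing to a
  // missing key creates that member on the instance at runtime.
  var expando = (IDictionary<string, object>)myObj;
  foreach (var item in dict)
    expando[item.Key] = item.Value;
}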

The following code snippet shows how easy it is to create an object instance whose contents can be fully
customized using this approach.

The C# compiler does not perform any static type checking on variables of dynamic type. This allows the
code to compile, even if the compiler lacks information about the static structure of an instance.

As a tradeoff, every operation that reads or invokes members dynamically, loses the type safety provided by
the compiler: every type error is detected and thrown at runtime.

This next example will be type checked at runtime and found valid, as all accessed members have been
already defined, and the dynamic state of the object determines the operations that can be applied to it 
(duck typing [10]).



var properties = new Dictionary<string, object> {
  { "Name", "John Wick" },
  { "Pet", "Dog" },
  { "BirthYear", 1978 },
  { "BirthMonth", 5 },
  { "BirthDay", 8 }
};

dynamic person = DynamicExpandoCreator.BuildDynamicExpando("Employee", properties);

//Print dynamically added properties
Console.WriteLine("{0}, born in {3}/{2}/{1}.",
  person.Name, person.BirthYear,
  person.BirthMonth, person.BirthDay);

Methods can also be added to ExpandoObject instances, but normally they have to be bound to a
particular one, which is the instance that holds the properties these methods need to perform calculations
on, as shown in the following code snippet.

Func<int> getAge = () => (int)DateTime.Now.Subtract(
  new DateTime((int)person.BirthYear,
               (int)person.BirthMonth,
               (int)person.BirthDay)).TotalDays / 365;
person.GetAge = getAge; // Add a method

Console.WriteLine("{0} is {1} years old.",
  person.Name, person.GetAge());

This way, to add this method to multiple ExpandoObject instances, we need to create one per instance,
using each instance variable name. The ExpandoObject approach does not follow the traditional class-
based language behavior, as individual instances can be modified and evolved independently. It is closer to
the model of prototype-based object-oriented languages [11].

Using Roslyn to create dynamic code


What happens if we need a more “traditional” class-based approach?

ExpandoObject is not suitable here, as ExpandoObject instances cannot work as runtime-modifiable classes
that contain the structure of all their instances. However, we can achieve this with the Roslyn module from
Microsoft [12] [8], that can be added to any project via the Microsoft.CodeAnalysis.CSharp.Scripting
NuGet package:


We can create source code and ask Roslyn to compile it at runtime. To do so, data structures to hold the
necessary member information must be created.

We have created a simple demo program to demonstrate this, using the DynamicProperty class to
hold just new property names and types. A more complete implementation could use FieldInfo and
MethodInfo classes from System.Reflection to obtain data from existing class members.
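
The DynamicProperty and DynamicMethod holder classes are not listed in the article; based on how they
are used below, minimal versions might look like this sketch (an assumption on my part):

//Assumed minimal shape of the demo's data holders, inferred from their usage
public class DynamicProperty {
  public string Name { get; set; }  // property name, e.g. "BirthYear"
  public Type FType { get; set; }   // property type, e.g. typeof(int)
}

public class DynamicMethod {
  public string Signature { get; set; } // e.g. "public string GetPet()"
  public string Body { get; set; }      // e.g. "{ return Pet; }"
}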

//Dynamic properties to match the demo interface
var p1 = new DynamicProperty { Name = "BirthYear", FType = typeof(int) };
var p2 = new DynamicProperty { Name = "BirthMonth", FType = typeof(int) };
var p3 = new DynamicProperty { Name = "BirthDay", FType = typeof(int) };

//Other properties
var p4 = new DynamicProperty { Name = "Pet", FType = typeof(string) };
var p5 = new DynamicProperty { Name = "Name", FType = typeof(string) };

The same can be done to dynamically build the source code of the methods to add. This can be achieved
using Roslyn SyntaxTree analysis features. It allows us to read any source code file (.cs) into a syntax tree
and do several operations, including obtaining the actual source code of any method we want. This may
seem like a limitation, as source code might not be available because only the binaries were distributed,
but source code can be obtained by an assembly decompiler (such as dotPeek [14]) from compiled files.

In our demo, we just read a method source code from a sample code file.

//Obtain the dynamic method source from other source files
var scf = new SourceCodeFromFile(@"..\..\..\MethodSource.cs");
var m1 = new DynamicMethod {
  Signature = scf.GetMethodSignature("GetAge"),
  Body = scf.GetMethodSourceCode("GetAge")
};

However, this is just a convenient feature to “reuse” existing source code. Nothing prevents us from fully
providing source code as a string, the same way as dynamic code evaluation features from typical dynamic
languages [7].

//New methods created from source code strings
var m2 = new DynamicMethod {
  Signature = "public string GetPet()",
  Body = "{ return Pet; }"
};

Once we have appropriate data structures to hold the dynamic class information, we can create it. This
is when we can achieve a certain degree of type safety by forcing the newly created class to implement
elements known by the compiler: inherit from an existing class or implement multiple interfaces. This way,
access to part of the structure of the dynamically created class can be type checked statically, as shown in
the following code snippet that forces the class to implement the IHasAge interface we created.
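
Like the holder classes above, the IHasAge interface itself is not listed in the article. Judging from how the
returned instance is used below, it presumably looks something like this sketch (the exact member list is an
assumption):

//Assumed shape of the demo interface, inferred from its usage below
public interface IHasAge {
  int BirthYear { get; set; }
  int BirthMonth { get; set; }
  int BirthDay { get; set; }
  int GetAge();
}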

//Build a dynamic class with a known interface. Return an instance of this class.
IHasAge obj = null;
try {
  obj = DynamicClassCreator.BuildDynamicClass<IHasAge>("MyDynamicEmployee",
    new DynamicProperty[] { p1, p2, p3, p4, p5 },
    new DynamicMethod[] { m1, m2 });
}
catch (Exception ex) {
  Console.WriteLine(ex);
  return;
}

//Test the instance
obj.BirthYear = 1978;
obj.BirthMonth = 5;
obj.BirthDay = 8;
Console.WriteLine("This person is " + obj.GetAge() + " old.");

We do not know the full structure of the newly created class at this point, but we know that it implements
the IHasAge interface. Therefore, we can access its members normally, and when using them, type errors
will be detected by the compiler. We can still use a dynamic typing approach to access the rest of the class
structure. The difference is that now both static and dynamic type checking can be used in the same class if
we want.

//Of course dynamic typing works too
((dynamic)obj).Name = "John Wick";
((dynamic)obj).Pet = "Dog";
((dynamic)obj).Pet = "Dog";

Console.WriteLine("{0} has a {1}.", ((dynamic)obj).Name, ((dynamic)obj).GetPet());

How do we achieve this using Roslyn?

We built the source code of the new class from the classes we created to hold member information. Later,
we used our Exec dynamic code execution function to both compile the class and create a new instance
of it (our demo only supports dynamic classes with a non-parameter constructor). Exec is the function that
really uses the Roslyn API.

Once we create a new dynamic type like this, accessing it may be hard because we did not specify a
namespace or class visibility. To facilitate this, we create a single instance of the new type and return it in
CreatedObj, a dynamic property of a custom Globals class we pass to the Exec method. Finally, we return
this object converted to the passed generic type:

public static T BuildDynamicClass<T>(string className,
  IEnumerable<DynamicProperty> fields,
  IEnumerable<DynamicMethod> methods = null) {

  var dt = new DynamicType { Name = className, Implements = typeof(T) };

  foreach (var field in fields) dt.Fields.Add(field);
  foreach (var method in methods) dt.Methods.Add(method);

  // This demo only supports non-parameter constructors
  var globals = new Globals { };

  //If this functionality is extended, extra references / imports could be provided
  //as parameters. This demo just uses a default configuration of both
  dynamic ret2 = Exec(dt.ToString() + ";\n CreatedObj = new " + dt.Name + "();",
    globals: globals,
    references: new string[] { "System", typeof(T).Assembly.ToString() },
    imports: typeof(T).Namespace).Result;
  if (ret2 != null) throw new Exception("Compilation error: \n" + ret2);

  return (T)globals.CreatedObj;
}


The source code of the new dynamic class is created via the ToString method of the instance dt of the
utility class I created for this demo, DynamicType.

Please note that this code is just a demonstration of how dynamic class creation can be achieved.

If this code is extended to support a dynamic class creation framework, additional features should
be considered, such as allowing multiple instances of dynamically created classes. All this is possible by
instructing Roslyn to perform the compilation of source code via its CSharpScript.EvaluateAsync
method, that accepts:

• The source code of the class to be created.

• Global variables that this code may use. In our implementation this is the Globals class.

• Assembly references potentially used by the code. We include System by default, plus any other assembly containing the parent class of the one that is going to be created (or the interface it implements).

• Finally, the imports parameter is used to add any namespace that the source code needs (same as the using keyword).

As a result, this task may return a compiler error if the code does not compile. We use this information to
compose the full compiler error and return it to the caller. This way, the reported errors have the same level
of detail as the ones the compiler outputs before a program is run.

private static async Task<object> Exec(string code, object globals = null,
    string[] references = null, string imports = null) {
    try {
        object result = null;
        ScriptOptions options = null;
        if (references != null) {
            options = ScriptOptions.Default.WithReferences(references);
        }
        if (imports != null) {
            if (options == null)
                options = ScriptOptions.Default.WithImports(imports);
            else options = options.WithImports(imports);
        }
        result = await CSharpScript.EvaluateAsync(code,
            options, globals: globals);
        //Evaluation result
        return result;
    }
    catch (CompilationErrorException e) {
        //Returns full compilation error
        return string.Join(Environment.NewLine, e.Diagnostics);
    }
}

Therefore, this approach allows us to build a fully dynamic class, compile it, and return an instance whose
static type may be partially known, providing type safety and a traditional class-based approach.

The same code, with small variations, can also be used to evaluate expressions dynamically, thus implementing another typical dynamic language feature.
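For instance, here is a minimal sketch (not taken from the demo project; the class and method names are just illustrative) of evaluating a plain C# expression at runtime with the same Roslyn scripting API:

using System;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis.CSharp.Scripting;

public static class ExpressionEvaluationDemo
{
    public static async Task Main()
    {
        // EvaluateAsync compiles the snippet and returns the value of the expression.
        object result = await CSharpScript.EvaluateAsync("3 * (4 + 5)");
        Console.WriteLine(result); // prints 27
    }
}

Wrapping this call in the same try/catch used by Exec would give the evaluated expression the same detailed compilation error reporting.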



The main disadvantage of this approach is that, once the new class is created and compiled, it cannot be modified again, as compilation "closes" it.

Further modification requires creating a new type and re-creating all the instances of the old type as instances of the new one. For the same reason, statically compiled classes cannot be modified either. However, the .NET platform already has features to modify method bodies of already compiled classes at runtime [15], which partially covers this.

Performance Analysis – Dynamic Typing


As we said, using dynamic typing usually imposes a performance penalty. Our approach combines flexibility with the advantages of static typing, but does it hold any performance advantage over the ExpandoObject approach?

Reading from a dictionary is very efficient, but an actual method call could be faster. To test this, we compared 2,000,000 calls to the GetAge method on both the ExpandoObject and our dynamic class instances, with both methods sharing the same implementation.

Performance measurement has been done at steady state, using a C# port of a statistically rigorous measurement procedure [16][15] we used in the past [3] with excellent results. This procedure runs a maximum of 30 iterations of 30 executions of the code to be measured, stopping prematurely if the coefficient of variation (CV, the ratio of the standard deviation to the mean of the collected data) falls below 2.

This ensures that the effect of external events on the measured times is minimized. Execution was performed in Release mode, on an AMD Ryzen 1700 with 64 GB of 2,400 MHz RAM. The results appear in the following table:

Our measurements show that the Roslyn approach holds an average performance advantage of 15% over the ExpandoObject alternative, thus being significantly faster. This means that if full dynamic typing is not really needed to implement a feature, the Roslyn approach provides convenient type safety and performance advantages when using the dynamically created types throughout the program code.

There is also an initial performance cost when creating a new type via Roslyn (the cost of compilation). Programmers do not need to choose between static and dynamic typing: hybrid approaches obtain advantages from both.

Conclusion

Roslyn CaaS can be effectively used to build dynamic code, enabling flexibility and type safety, if the class to be built has at least a partially known structure.

Measurements show that this hybrid approach holds a significant performance advantage over the fully dynamic ExpandoObject. This approach also maintains a more traditional class-based behavior that may be easier to use by programmers who need to dynamically incorporate pieces of code.

Acknowledgments

This work has been partially funded by the Spanish Department of Science, Innovation and Universities: project RTI2018-099235-B-I00. It has also been partially funded by project GR-2011-0040 from the University of Oviedo.

Source code shown in this article can be found at: bit.ly/dncm47-roslyn

References

[1] L. D. Paulson, "Developers shift to dynamic programming languages," IEEE Computer, vol. 40, no. 2, pp.
12-15, 2007.

[2] O. Callau, R. Robbes, E. Tanter and D. Röthlisberger, "How (and why) developers use the dynamic
features of programming languages: the case of Smalltalk," Empirical Software Engineering, vol. 18, no. 6, pp.
1156-1194, 2013.

[3] F. Ortin, J. Redondo and J. B. G. Perez-Schofield, "Efficient virtual machine support of runtime
structural reflection," Science of Computer Programming, vol. 74, no. 10, pp. 836-860, 2009.

[4] F. Ortin, M. Labrador and J. Redondo, "A hybrid class- and prototype-based object model to support
language-neutral structural intercession," Information and Software Technology, vol. 44, no. 1, pp. 199-219,
2014.

[5] L. Tratt, "Dynamically typed languages," Advances in Computers, vol. 77, pp. 149-184, 2009.

[6] J. Redondo and F. Ortin, "A comprehensive evaluation of common Python implementations," IEEE Software, vol. 32, no. 4, pp. 76-84, 2014.

[7] I. Lagartos, J. Redondo and F. Ortin, "Efficient Runtime Metaprogramming Services for Java," Journal of
Systems and Software, vol. 153, pp. 220-237, 2019.

[8] J. M. Redondo, F. Ortin and J. M. Cueva, "Optimizing reflective primitives of dynamic languages,"
International Journal of Software Engineering and Knowledge Engineering, vol. 18, no. 6, pp. 759-783, 2008.

[9] Microsoft, "ExpandoObject class," Microsoft, 2020. [Online]. Available: https://docs.microsoft.com/en-


us/dotnet/api/system.dynamic.expandoobject?view=netcore-3.1. [Accessed 30 4 2020].

[10] D. Thomas, C. Fowler and A. Hunt, Programming Ruby, 2nd ed, Chicago (Illinois): Addison-Wesley,
2004.

[11] J. M. Redondo and F. Ortin, "Efficient support of dynamic inheritance for class-and prototype-based
languages," Journal of Systems and Software, vol. 86, no. 2, pp. 278-301, 2013.

[12] Microsoft, "Overview of source code analyzers," MIcrosoft, 2020. [Online]. Available: https://docs.
microsoft.com/en-us/visualstudio/code-quality/roslyn-analyzers-overview?view=vs-2019. [Accessed 30 4
2020].



[13] J. Redondo, "New features of C# 8 and beyond," 2019. [Online]. Available: www.researchgate.net/publication/330514620_New_Features_of_CSharp_8_and_beyond. [Accessed 25 10 2019].

[14] JetBrains, "dotPeek: Free .Net Decompiler and Assembly Browser," JetBrains, 2020. [Online]. Available:
https://www.jetbrains.com/decompiler/. [Accessed 30 4 2020].

[15] T. Solarin-Sodara, "POSE: Replace any .NET method," GitHub, 8 1 2018. [Online]. Available: https://
github.com/tonerdo/pose. [Accessed 30 4 2020].

[16] A. Georges, D. Buytaert and L. Eeckhout, "Statistically rigorous Java performance evaluation," in Object-Oriented Programming, Systems, Languages and Applications (OOPSLA '07), New York, NY, USA, 2007.

Download the entire source code from GitHub at bit.ly/dncm47-roslyn

José Manuel Redondo López


Author
Jose Manuel Redondo is an Assistant Professor at the University of
Oviedo, Spain since November 2003. He received his B.Sc., M.Sc., and
Ph.D. degrees in computer engineering from the same university in
2000, 2002, and 2007, respectively. He has participated in various
research projects funded by Microsoft Research and the Spanish
Department of Science and Innovation. He has authored three books
and over twenty articles. His research interests include Dynamic
Languages, Computational reflection, and Computer Security.

Technical Review: Yacoub Massad
Editorial Review: Suprotim Agarwal


XAMARIN

Gerald Versluis

Goodbye Xamarin.Forms,
Hello .NET MAUI!

If you are interested in the mobile development space, you must've heard by now: Xamarin.Forms is evolving into .NET Multi-platform App User Interface (MAUI).

In this tutorial, I will tell you all about the ins and outs
of this change and what it might mean for you.

Don’t worry, nothing is going away!

Everything will just get faster, better and simpler for you
- the developer.



A Little History Lesson
For those who are not familiar with what Xamarin and Xamarin.Forms is all about, let me quickly refresh
your memory.

Before Xamarin was Xamarin, it had a different name and was owned by several different companies, but
that is not relevant to this story.

In 2011, Xamarin, in its current form, was founded by Miguel de Icaza and Nat Friedman. With Xamarin,
they built a solution with which you can develop cross-platform applications on iOS, Android and Windows,
based on .NET and C#. Nowadays you can even run it on macOS, Tizen, Linux and more!!

Since developing with Xamarin was all based on the same language, you could share your code across all
supported platforms, and thus reuse quite a bit.

The last piece that wasn’t reusable was the user interface (UI) of each platform.

In 2014, Xamarin.Forms was released as a solution to overcome that problem. With Forms, Xamarin now
introduced an abstraction layer above the different platforms’ UI concepts. By the means of C# or XAML, you
were now able to declare a Button, and Xamarin.Forms would then know how to render that button on iOS,
and that same button on Android as well.

With this in place, you would be able to reuse up to 99% of your code across all platforms.

In 2016 Xamarin was acquired by Microsoft. Together with this acquisition, most of the Xamarin code
became open-source and free for anyone to use under the MIT license.

If you want to learn more about the technical side of Xamarin, please have a look at the documentation
here: https://docs.microsoft.com/xamarin/get-started/what-is-xamarin

Xamarin.Forms Today
As already mentioned, Xamarin and Forms are free and open source today!

This means a lot of people are happily using it to build their apps - both for personal development of apps
as well as to create Line of business (LOB) enterprise apps. Over the years, new tooling was introduced:
Visual Studio for Mac allows you to develop cross-platform solutions on Mac hardware for Xamarin apps,
and also for ASP.NET Core or Azure Functions solutions.

And of course, all the Xamarin SDKs got updated with all the latest features all the way up to iOS 14 and
Android 11 which have just been announced at the time of writing.

Xamarin.Forms is no different: it has seen a lot of development over the years. New features are introduced
with every new version.

Not just new features; even new controls are now "in the box". While earlier Forms would only render the abstraction to its native counterpart, it has now introduced some controls that are composed from other UI elements.


Effectively, that means Forms now has several custom controls. Currently, those include CheckBox, RadioButton and Expander, for instance.

Xamarin.Forms Internals: Renderers

If we go a little deeper into how Xamarin.Forms works, we quickly find something called renderers.

Each VisualElement, which is basically each element that has a visual representation (so pages and
controls mostly), has a renderer. For instance, if we look at the Button again, Button is the abstract
Xamarin.Forms component which will be translated into a UIButton for iOS and an Android.Button on
Android.

To do this translation, Forms uses a renderer. In this case, the ButtonRenderer. Inside of that renderer, two
things happen basically:

1. Whenever a new Button (or other control) is created, all the properties are mapped to their native
controls’ counterparts. i.e.: the text on a Button is mapped to the right property on the targeted
platform so it shows up the right way.

2. Whenever a property changes on the Button (or other control), the native control is updated as well.

The renderer controls the lifecycle of that control. You might decide that you need things to look or act a
bit different, or that maybe a platform-specific feature is not implemented in Forms. For those scenarios,
you can create a custom renderer. A custom renderer allows you to inherit from the default renderer and
make changes to how the control is rendered on a specific platform.

If you want to learn more about renderers and custom renderers, this Docs page is a good starting point:
https://docs.microsoft.com/xamarin/xamarin-forms/app-fundamentals/custom-renderer/
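To make this concrete, here is a minimal sketch of an Android custom renderer that tweaks how the default Button is drawn. It is not taken from an official sample; the class name GreenButtonRenderer and the MyApp.Droid.Renderers namespace are purely illustrative.

using Android.Content;
using Xamarin.Forms;
using Xamarin.Forms.Platform.Android;

// Tell Xamarin.Forms to use this renderer for every Button on Android.
[assembly: ExportRenderer(typeof(Button), typeof(MyApp.Droid.Renderers.GreenButtonRenderer))]

namespace MyApp.Droid.Renderers
{
    public class GreenButtonRenderer : ButtonRenderer
    {
        public GreenButtonRenderer(Context context) : base(context) { }

        protected override void OnElementChanged(ElementChangedEventArgs<Button> e)
        {
            base.OnElementChanged(e);

            if (Control != null)
            {
                // Control is the native Android button created by the default renderer.
                Control.SetBackgroundColor(Android.Graphics.Color.Green);
            }
        }
    }
}

At runtime, Forms picks this renderer up through the ExportRenderer attribute and uses it in place of the built-in ButtonRenderer.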

Introducing .NET MAUI


In May 2020, the Build 2020 conference was held. Because of the current situation around the world, this
was the first time this event was completely virtual, just like a lot of events. Amongst a lot of other great
announcements, there was also the news that Xamarin.Forms will evolve into something called .NET
MAUI. If you want to (re)watch the announcement from Build, you can see the session here on Channel 9:
https://channel9.msdn.com/Events/Build/2020/BOD107.

Editorial Note: Here’s a quick recap of Build 2020 for Developers.

Notice how they are using the word evolve.

This means a couple of things.

First and most importantly: nothing will be taken away from you. Everything that is in Forms today, will be
available in .NET MAUI.

Second: while everything will still be available for you, things will definitely change. The team has taken all
the learnings over the past few years from Forms and will incorporate that into .NET MAUI.



There will be some breaking changes. Everything that is marked deprecated today or until .NET MAUI
is released, will be removed. Also, and probably most importantly, the architecture of the renderers will
change, and the namespace will change.

Figure 1: .NET MAUI overview slide from the MS Build presentation by David Ortinau and Maddy Leger

Slim Renderers

In .NET MAUI, the renderers that are available right now will evolve to so-called slim renderers. The
renderers will be reengineered and built from the ground up to be more performant. Again, this will be
done in a way so that they should be useable in your existing projects without too much hassle.

The benefit you will get is faster apps out of the box.

You might wonder: what will happen to your custom renderers? Well, those should just keep working. There will probably be exceptions where this causes some issues, but the goal here, again, is to keep everything as compatible as possible.

If you are wondering about some of the details that are shaping up as we speak, please have a look at the
official Slim Renderers spec on GitHub: https://github.com/dotnet/maui/issues/28

Namespace Change

Microsoft is showing its dedication to Xamarin.Forms.

With .NET MAUI, Forms is taken into the .NET ecosystem as a first-class citizen. The new namespace will be
System.Maui. By the way, Xamarin.Essentials, the other popular library, will take the same route and you
can find that in the System.Devices namespace.


As you can imagine, this is quite the change and even a breaking change. The team has every intention of
providing you with a transition path or tool that will make the switch from Forms to .NET MAUI, as pain
free as possible.

Single Project

If you have worked with Xamarin.Forms today, you know that you will typically have at least three projects:
the shared library where you want all your code to be so it can be reused, an iOS project and an Android
project. For each other platform that you want to run on, you will have to add a bootstrap project in your
solution.

While this is technically not a feature of .NET MAUI, .NET MAUI is the perfect candidate for this. In the
future, you will be able to run all the apps from a single project.

Figure 2: Screenshots of how the single project structure might look like

With the single project structure, you will be able to handle resources like images and fonts from a single
place instead of per platform. Platform-specific metadata like in the info.plist file will still be available.
Writing platform-specific code will happen the same way as you would write multi-targeting libraries today.

See the bottom-right most screenshot in Figure 2.
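As a rough illustration, platform-specific code in such a project can be isolated behind conditional compilation symbols, just like in today's Xamarin multi-targeting libraries. The helper below is purely hypothetical and uses the existing __ANDROID__ and __IOS__ symbols:

public static class DeviceInfo
{
    // Returns a platform-specific device description from shared code.
    public static string GetDeviceDescription()
    {
#if __ANDROID__
        return "Android device: " + Android.OS.Build.Model;
#elif __IOS__
        return "iOS device: " + UIKit.UIDevice.CurrentDevice.Name;
#else
        return "Unknown platform";
#endif
    }
}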

Visual Studio Code Support

Another thing that has been announced is that .NET MAUI will be supported in Visual Studio Code (VS
Code). This has been a long-standing wish from a lot of developers, and it will finally happen. Additionally,
everything will be available in the command-line tooling as well, so you can also spin up your projects and
builds from there if you wish.



Support for Multiple Design Patterns

Xamarin.Forms, and other Microsoft products for that matter, have mostly been designed to work with the
Model-View-ViewModel (MVVM) pattern.

With .NET MAUI, this will change.

While MVVM will still be supported (again, nothing is taken away), because of the new renderer
architecture, other patterns can be implemented now.

For instance, the popular Model View Update (MVU) pattern will now also be implemented. If you are
curious what that looks like, have a look at the code below.

readonly State<int> count = 0;

[Body]
View body() => new StackLayout
{
    new Label("Welcome to .NET MAUI!"),
    new Button(
        () => $"You clicked {count} times.",
        () => count.Value++)
};

This can even open the door to completely drawn controls with SkiaSharp, for instance. This is not in the plans right now, but it's certainly a possibility, even if it comes from the community.

Get Involved, Today!


After all the good news, you’re probably excited to get started, right now!

Unfortunately, it will be a while before the evolution is complete. The first preview is expected together with the .NET 6 preview, which should happen in Q4 2020. The first release of .NET MAUI will happen a year after that, again with the final release of .NET 6 in November 2021. For a more detailed roadmap, have a look at the wiki on the repository: https://github.com/dotnet/maui/wiki/Roadmap.

However, you can already be involved today. All the new plans, features, enhancements and everything will
be out in the open. You can head over to the repository right now and let the team know what is important
to you.

There are already lively discussions happening about all kinds of exciting new things. Also, the code is
there too, so you can follow progress and even start contributing to be amongst the first contributors of this
new product.

You can find the repository here: https://github.com/dotnet/maui

What Happens to Xamarin.Forms?

After the release of .NET MAUI, Forms will be supported for another year. That means it will still get bugfixes and support until November 2022. That should give you enough time to transition to .NET MAUI with your apps.

There is also a big community supporting Xamarin and Forms, so this will also give library authors all the
time they need to adapt to this new major version.

As you might have already gotten from all the new names and namespaces, the brand Xamarin is bound to
disappear. Also, the iOS and Android SDKs will be renamed to .NET for iOS and .NET for Android.

I think this was always expected from the beginning when Microsoft took over. It’s just that these
transitions take time.

Of course, this is very sad: the monkeys, the logo and everything that belongs to the Xamarin name will be history. But I think it's for the best, and it means that the Xamarin framework has grown up to be a technology that is here to stay - backed by Microsoft, incorporated into .NET, your one-stop solution for everything cross-platform.

I’m very excited to see what the future will bring, and I hope you are too!

Gerald Versluis
Author
Gerald Versluis (@jfversluis) is a Software Engineer at
Microsoft. He has been working with the Xamarin.Forms
team for a year, and now works on Visual Studio Codespaces.
Not only does he like to code - but spreading knowledge, as
well as gaining it, is part of his daytime job. He does so by
speaking at conferences, live streaming and writing blogs
(https://blog.verslu.is) or tutorials.
Twitter: @jfversluis | Website: https://jfversluis.dev

Technical Review: Damir Arh
Editorial Review: Suprotim Agarwal

PATTERNS AND PRACTICES

Yacoub Massad

In this article, I will continue talking about the most important coding practices based on my experience. In this part, I will talk about data modeling, and making state or impurities in general visible.

CODING PRACTICES:
MY MOST
IMPORTANT ONES -
PART 3

Introduction

In this article series, I talk about the coding practices that I found to be the most beneficial in my experience. In the last two parts, I talked about the following practices:

1. Automated testing

2. Separation of data and behavior

3. Immutability

In this part, I will talk about data modeling and making state and other impurities visible.

Note: In this article, I give you advice based on my 9+ years of experience working with applications. Although I have worked with many kinds of applications, there are probably kinds that I did not work with. Software development is more of an art than a science, so ensure that the advice I give you here makes sense in your case before using it.



Practice #4: Model your data accurately
Make sure that the code units that model data are doing it accurately.

In practice #2, I recommended that you separate data from behavior. If you do that, then there will be units
of code in your application whose only job is to model data.

I call these data objects.

In the Designing Data Objects in C# and F# article, I went in depth about how to design data objects. In
another article, I gave examples of suboptimal data object designs and suggested improvements. I also
wrote another article—called Function parameters in C# and the flattened sum type anti-pattern—where I
talked about how easy it is for function inputs to become confusing as functions evolve.

In my opinion, modeling data objects is much more important than modeling behavior. If you are accurate
with modeling your data objects, the behavior code that you write will be guided by the restrictions
imposed by the data object types.

When I start writing a function, before writing any code in the function itself, I think about the inputs and
outputs of the function. Except for the simple cases where built-in data types (e.g. string, int) are enough, I
create special data objects to model the function inputs and outputs. I spend enough time on these.

Of course, I do not always create new data types. The reason is that many functions deal with the same
data types. Once you write a few functions, you have already created data types that are likely to be used
as-is in the functions that you will write next. This is true because functions in the same application are
solving a single problem.

Once the data types are defined, my mission of writing the function itself (the behavior) becomes easier.
Designing the data objects would have already made me think about the different possible input values
that the function can take and the possible output values it will produce. More concretely, this allows me to
split the process of writing the function into:

Step 1: think about the inputs and outputs of the function regardless of how the function will convert
inputs to outputs.

Step 2: write code to convert the inputs to outputs. This is of course a simplification, but you get the idea.

Of course, such separation is not always done perfectly. It happens sometimes that once you are in Step 2,
you realize that there are cases that are not modeled in the input and output data types.

Still, there is a lot of value in such separation.

Not only will well designed input and output data objects help you with writing the behavior code of
your functions, they will also make it easier for people (you included) to understand your functions. We
developers spend much more time reading functions than we spend writing them.
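As a small illustration of the idea (not taken from any of the referenced articles; the type names are made up), here is what modeling a function's inputs and outputs with dedicated data objects, instead of loose primitives, might look like:

using System;
using System.Collections.Generic;

// A customer identifier that cannot hold an invalid value.
public sealed class CustomerId
{
    public int Value { get; }

    public CustomerId(int value)
    {
        if (value <= 0)
            throw new ArgumentOutOfRangeException(nameof(value));
        Value = value;
    }
}

// A date range that is valid by construction.
public sealed class DateRange
{
    public DateTime From { get; }
    public DateTime To { get; }

    public DateRange(DateTime from, DateTime to)
    {
        if (to < from)
            throw new ArgumentException("The range end cannot precede its start.");
        From = from;
        To = to;
    }
}

public sealed class Invoice
{
    public decimal Amount { get; }
    public DateTime IssuedOn { get; }

    public Invoice(decimal amount, DateTime issuedOn)
    {
        Amount = amount;
        IssuedOn = issuedOn;
    }
}

// The signature now documents exactly which inputs the function needs
// and which outputs it produces.
public interface IInvoiceFinder
{
    IReadOnlyList<Invoice> FindInvoices(CustomerId customer, DateRange period);
}

Compared to a signature like FindInvoices(int, DateTime, DateTime), the restrictions imposed by these types already rule out many invalid calls before any behavior code is written.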


Practice #5: Make impurities visible
Make sure that the impure parts of your code are visible to readers of your code. Let me explain.

What are pure functions?


I talked about pure (and impure) functions in the following articles:

Writing Honest Methods in C#

Writing Pure Code in C#

Composing Honest Methods in C#

A pure function is a function whose output depends solely on the arguments passed to it. If we invoke a
pure function twice using the same input values, we are guaranteed to get the same output. Also, a pure
function has no side effects.

All of this means that a pure function cannot mutate a parameter, mutate or read global state, read a file,
write to a file, etc. Also, a pure function cannot call another function that is impure.

How to make impurities visible?


By impurities, I mean the things that make an otherwise pure function impure.

One kind of impurity is state. In practice #3, I talked about making data objects immutable. If you do that,
then you minimize the state in your applications.

Still, we sometimes require some state in applications. In this practice, I am advising you to keep that state
visible.

For example, instead of having global variables that multiple functions use to read and write state, create
state parameters (e.g. ref parameters) and pass them to the functions that need them. This way you have
made visible the fact that these functions might read or update state. This makes it easier for developers to
understand your code. I talk about this in detail in the Global State in C# Applications article.
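Here is a minimal sketch of that idea (the names Counter, Order and totalProcessed are made up for illustration):

public sealed class Order { /* details omitted */ }

public static class Counter
{
    // The ref parameter makes it visible to callers that this method
    // reads and updates state, instead of touching a hidden global variable.
    public static void Process(Order order, ref int totalProcessed)
    {
        // ... process the order ...
        totalProcessed++;
    }
}

A caller now has to write Counter.Process(order, ref total), so the state dependency can be seen at every call site.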

Another kind of impurity I want to talk about is related to I/O.

A few examples are reading from or writing to a file, reading the system timer, and writing to or reading from a database. If a function does any of these, extract the code that does the I/O into a dependency and make that dependency visible in your code. Let me show you an example. Let's say you have this code:

public class ReportGenerator : IReportGenerator
{
    public Report Generate(int customerId)
    {
        //..

        File.WriteAllText(reportCopyPath, reportText);

        //..
    }
}

This class generates reports for customers. You give the Generate method a customerId, and it gives you
back a Report object. Somewhere in the middle of the method, there is a statement that writes a copy of
the report to some folder that holds copies of all generated reports.

What you can do is extract the call to File.WriteAllText into a dependency like this:

public class ReportGenerator : IReportGenerator
{
    private readonly IFileAllTextWriter fileAllTextWriter;

    public ReportGenerator(IFileAllTextWriter fileAllTextWriter)
    {
        this.fileAllTextWriter = fileAllTextWriter;
    }

    public Report Generate(int customerId)
    {
        //..

        fileAllTextWriter.WriteAllText(reportCopyPath, reportText);

        //..
    }
}

public interface IFileAllTextWriter
{
    void WriteAllText(string path, string contents);
}

public class FileAllTextWriter : IFileAllTextWriter
{
    public void WriteAllText(string path, string contents)
    {
        File.WriteAllText(path, contents);
    }
}

We have now made the fact that the ReportGenerator might write to the file system, more visible. The
ReportGenerator class is now more honest.

In the Composition Root, when you construct the ReportGenerator class, you explicitly give it an instance
of the FileAllTextWriter class like this:

var reportGenerator =
new ReportGenerator(
new FileAllTextWriter());

It is important for the developers who read your Composition Roots to see the impure behavior that your
classes use.


In the above example, I used objects and interfaces to model behavior. The same thing can be done in other
coding styles. For example, you can do the same thing with static methods and delegates.
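As a rough sketch of the delegate-based variant (mirroring the abbreviated ReportGenerator example above, so the omitted details behind the //.. markers are the same):

public class ReportGenerator : IReportGenerator
{
    // The impure file write is injected as a delegate, so it is still
    // visible where the class is composed.
    private readonly Action<string, string> writeAllText;

    public ReportGenerator(Action<string, string> writeAllText)
    {
        this.writeAllText = writeAllText;
    }

    public Report Generate(int customerId)
    {
        //..

        writeAllText(reportCopyPath, reportText);

        //..
    }
}

In the Composition Root, the impure behavior remains visible: var reportGenerator = new ReportGenerator(File.WriteAllText);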

In the specific case of the example we just saw, a good idea might be to have a single FileSystem class
that contains not just a WriteAllText method, but other file system related methods.

Another thing to note about the Generate method is that it might have multiple responsibilities: it generates reports for customers, and it also saves copies of these reports to a special folder. This might not be the best way to model the behavior. However, as far as making impurities visible is concerned, extracting the WriteAllText call into a special dependency is enough.

Conclusion:

This article series is about the coding practices that I found to be the most beneficial during my work in
software development.

In this part, Part 3, I talk about the #4 and #5 most important practices: modeling data accurately and
making impurities visible.

Spend enough time on the data objects in your programs.

Not only will well-designed data objects make it easier for the readers of your code (including yourself) to
understand the code, they also make it easier for you to write your behavior code (your functions) since they
put restrictions on the inputs the functions receive and the outputs they produce.

You make impurities visible by explicitly passing state to functions instead of having functions access
global state under the hood. You also make impurities visible by extracting impure behavior (e.g. I/O
access) into special dependencies (e.g. classes) and making visible the fact that your code is using these
dependencies.

Yacoub Massad
Author
Yacoub Massad is a software developer and works mainly on
Microsoft technologies. Currently, he works at Zeva International
where he uses C#, .NET, and other technologies to create eDiscovery
solutions. He is interested in learning and writing about software
design principles that aim at creating maintainable software. You can
view his blog posts at criticalsoftwareblog.com. Recently he started a
YouTube channel about Roslyn, the .NET compiler.

Technical Review: Damir Arh
Editorial Review: Suprotim Agarwal
DIGITAL TRANSFORMATION

Vikram Pendse

DIGITAL
TRANSFORMATION
using Microsoft Technologies
during and post COVID-19

Not too long ago, everyone was super excited to enter 2020.

Businesses were chalking out plans on adopting Cloud, AI, Machine Learning etc. – all encapsulated
in a single capsule called “Digital Transformation”.

Globally, many “Digital Transformation” conferences were taking shape where people were to share
their views and visions for helping their customers and other businesses achieve goals.

..and then came the COVID-19 pandemic.

The entire world went into a lockdown. Most businesses came to a halt with no clarity on re-opening and revival. As I write this article, many countries and businesses are barely keeping their heads above water. Some are trying to recover by going "Virtual".

This article is a personal overview of how this situation will affect the journey, as well as some roadmaps defined earlier. We will also talk about the post-pandemic scenario and some new challenges that will emerge.



Disclaimer: Please note that this article talks about technical aspects and not economic, social or political
changes foreseen as a result of COVID-19. Due to my association with Microsoft for over two decades, this
article is more aligned to the Microsoft Technology domain.

What is Digital Transformation?


People have already spent thousands (and in some cases millions) of dollars on defining and building a
roadmap for their “Digital Transformation”. A few have already begun the journey.

However, a large group consisting of non-CXO audiences like Developers, Testers, Back Office, IT, HR, Finance
and Operations don’t know what’s in this blackbox called “Digital Transformation”, what’s in it for them, and
the role they will be playing in it.

Some people have made an assumption that Digital Transformation is some kind of an “Automation” which
will take away jobs, while there are a few who claim the contrary – they feel Digital Transformation will
create new jobs.

Let us go through and understand the definition of “Digital Transformation”. We are using Wikipedia here
which has a very generic definition which most of us will understand.

“Digital Transformation is the use of new, fast and frequently changing digital technology to solve problems.”

This figure shows the core objectives and outcomes of your “Digital Transformation”. Your “Digital
Transformation” should align to most of the objectives at the top level.

We will now see how CoE (Center of Excellence) can drive these objectives to achieve “Digital
Transformation” within the organization and for the customer. We will also discuss how Microsoft
technologies enable us to achieve these objectives.

For COVID-19, social distancing and wearing a mask are among the measures recommended by the WHO to contain the spread of transmission. This forces many businesses to work from home (except the Gov. and other related agencies). Because of remote work, there can be an impact on operations, delivery and meeting timelines.

So, what are the roadblocks, and how do we tackle them?


Capturing Business Requirements virtually due to No/Limited
Travel or Travel restrictions

With the current pandemic situation followed by the emerging new normal, businesses need to define unified, optimal mechanisms to collaborate with end customers/stakeholders and employees within the organization.

Since the pandemic will pose challenges to doing Physical Assessments, Face to Face meetings, Industry/Plant Visits to observe operations and record notes, talking face to face with clients etc., everything will go virtual. There is a need for Tools/Software that allow us to capture these requirements and perform the required analysis. Think of it as a Virtual Business Analyst (VBA) rather than an actual physical BA who will use these Tools/Software to capture data.

So which tools can be used for capturing the requirements virtually?

Here are some tools which are economical and commonly used (there are plenty of third-party tools which
individual enterprise/company can evaluate based on their nature of business, meeting expectations and
budget). I am avoiding mentioning Word, Excel and PowerPoint, which are in use for over two decades now.

Microsoft Forms

Microsoft Forms is part of your O365 suite of applications. It is quite easy to use and captures information quickly. It can also integrate with other applications for automation, and data captured in Forms can be exported.

MS Forms can be used for taking various satisfaction surveys and product feedbacks.

We can use this within the organization for various registrations for different business operations. So, for all
small requirements, feedback etc. this tool is quite helpful.

Microsoft Forms provides some basic business templates and allows you to customize them as seen in
Figure 1. You can share these Forms and export the output to MS Excel. Microsoft Forms also provide some
analytics of the survey/feedbacks you took from your users, as shown in Figure 2.

Figure 1 – Build customized Forms using Microsoft Forms Template



Figure 2 – Microsoft Forms gives you analytics over the response

Azure DevOps Product Backlog

Previously known as VSTS (Visual Studio Team Services) and now called "Azure DevOps" (the on-premise TFS is now "Azure DevOps Server"), Azure DevOps is license based and comes with different pricing models. It is FREE for up to 5 developers; beyond that, it is a paid tool. More details here - https://azure.microsoft.com/en-in/pricing/details/devops/azure-devops-services/

Azure DevOps allows you to capture “Product Backlog” and write “Features” and “User Stories” as part of
requirements (it can be requirements for building a new product or services project).

It also allows you to put down "Acceptance Criteria" for each User Story to bring transparency and mutual agreement between you and your client - since both can access Azure DevOps with different Roles and Access controls.

This becomes a one-stop tool/service for your organization and the end customer, and gives complete visibility into the progress. Azure DevOps is very popular since it not only allows you to manage code repositories, but also lets you build an end-to-end Integration and Deployment pipeline, along with Project Dashboards, Charts and Sprints if you are following the Agile-Scrum method.

So you don't have to purchase different tools, spend time integrating them, and exhaust resources on each. The Azure DevOps tool/service is a one-stop shop for all your DevOps and Product Management needs.

Collaboration

One key challenge people realized during the pandemic is the difference between physically interacting (face to face) in an office workspace and interacting over Virtual Meeting Tools.

But are Virtual Meeting Tools a new thing? Not at all!

We have been using Virtual Meeting Tools for almost over two decades, but it was limited for several hours


125
www.dotnetcurry.com/magazine |
or number of people per day. We never saw a weeklong or monthlong full time usage of these Virtual
Meeting Tools (except for certain business domains).

Most companies use multiple collaboration tools like Skype, Skype for Business (or Microsoft Teams now), Zoom, WebEx (around since 1999), GoTo Meeting, etc.

However, many companies have still not standardized the collaboration platform for communication. Each
department has their own choice of tools!

Now with an increased usage, it is time to enforce certain platforms for enabling Digital transformation.

So what features should your collaboration tool have?

1. Allow members to Chat (One to one, In a Group, In a Closed/Secret Group)

2. Allow members to Video Call (One to One, One to Many)

3. Allow members to conduct a Webinar/Seminar publicly

4. Allow members to share Content (Files, Pictures etc.)

5. Show Usage Analytics

6. Integrate with other applications

7. Allow Plugins

8. Safe Authentication/Authorization

9. Allowing Login or switching to multiple Organizations (e.g. Your Own Organization, Customer/Partner
Organization)

10. Minimum to no outage with better Audio/Video quality

11. Availability on different OS Platform and Devices

12. Whiteboards

“Microsoft Teams” has all the above listed capabilities. It is a revolutionary collaboration tool with a vast
number of features compared to instant messaging platforms like Skype and Skype for Business (Lync).

More insights on Teams capabilities can be found here: https://www.microsoft.com/en-us/microsoft-365/microsoft-teams/group-chat-software

Also if you are using Skype for Business as your existing Group Chat software, then
you can see a migration path to Teams here https://docs.microsoft.com/en-us/
microsoftteams/upgrade-and-coexistence-of-skypeforbusiness-and-teams

As per Microsoft, by 31st March 2020, overall Teams adoption went to 72M users.



Source Control Repository and CI/CD

It may surprise you what a Source Control Repository and CI/CD have to do with "Digital Transformation".

Traditionally, many businesses have been building and keeping their source code within their corporate
network because of compliance, security and intellectual property reasons. People used to get nightly
builds and would deploy them to production environments manually or with some deployment tools/
scripts, mostly in some data center.

With the embrace of the Cloud (Azure) and the current remote working scenario because of COVID-19, people are pushed to think of a strategy to bring automation to this process, improve GTM (Go To Market) time and align with the organizational strategy.

Note: Read more about Agile Development and activities here https://www.dotnetcurry.com/devops/1529/
devops-timeline

Azure DevOps makes this job easy!

We briefly discussed some of the features of Azure DevOps, including the capability of storing and
managing Product Backlogs. CI/CD (Continuous Integration and Continuous Delivery) is another feature
of Azure DevOps as a product. CT (Continuous Testing/Test Automation) can also be done using Azure
DevOps. Although it is a Microsoft product, it supports integration with Non-Microsoft/Open Source tools
as well. Azure DevOps supports integration with GitHub and GitHub Enterprise, Bitbucket Cloud, TFVC and
Subversion. Read more about Git and DevOps integration here https://www.dotnetcurry.com/devops/1542/
github-integration-azure-devops

Beside Azure VMs and Azure Services, Azure DevOps can target container registries and on-premises. Azure
DevOps Server is the on-premise offering by Microsoft.

Are you worried about the learning curve for your teams for Azure DevOps?

It is very much possible that CXOs and decision makers may think that along with the learning curve that
comes with Azure, Azure DevOps will be another overhead.

Well, as we discussed above, with Azure DevOps you can continue using your existing source control and pipelines, and only target your efforts at integration with Azure DevOps. If you are implementing it from scratch, Azure DevOps Labs is a one-stop shop solution for your teams to gear up with systematic learning documentation and sample Proof-Of-Concept (PoC) projects to reduce the learning curve, boost confidence and gain expertise over Azure DevOps. Refer to figure 3 for additional information.

These free labs are so well designed that a new joiner or a trainee at your company can also build Azure DevOps CI/CD. Check out more here: https://www.azuredevopslabs.com/. Additionally, we have a plethora of Azure DevOps tutorials at https://www.dotnetcurry.com/tutorials/devops to get you started!


Figure 3 – Azure DevOps Labs dashboard showcasing different DevOps Labs

Development Editor for Cross-platform Developers

Visual Studio is and has been the primary IDE in enterprises for all Microsoft technologies-based application development for the past many years, and it has seen unparalleled growth over the years.

However, for medium sized to startup organizations, the entire Visual Studio suite might get heavy in terms
of budget and licensing. Also, Visual Studio is bound to run on Windows environments only and hence there
has been a barrier in adoption of Visual Studio for many organizations.

Visual Studio Code solves this hurdle!

Visual Studio Code does not have all the features of Visual Studio (Community, Professional or Enterprise
Edition), but it is a lightweight version of Visual Studio. It is a powerful Source Code Editor which works on
Windows, Linux and macOS. It supports a diverse collection of programming languages like C++, C#, Java,
Python, TypeScript etc. and supports different versions of .NET frameworks and Unity frameworks too.

A step ahead in the offerings of Visual Studio Code is "Visual Studio Codespaces" (formerly known as Visual Studio Online), which launched during the Microsoft //Build 2020 virtual event. Visual Studio Codespaces is a Cloud based IDE and people can access it from anywhere using their browser, provided they have created a Visual Studio Codespaces environment for themselves.

It supports Git repos and its built-in command-line interface allows you to edit, run and deploy your
application from anywhere and from any device without installing Visual Studio Code editor on it, as it is
purely cloud based and accessible over the browser as shown in figure 4.



Figure 4 – Visual Studio Codespaces interface – Accessible from anywhere in a Browser

Given the situation, a large majority of developers and technical experts are working from home, and thus
the collaboration between them becomes a challenge.

But Visual Studio Codespaces overcomes this challenge as well because of its built-in Live Code sharing
and IntelliCode features. Visual Studio Codespaces is in Public Preview. For pricing information please visit
https://azure.microsoft.com/en-us/pricing/details/visual-studio-online/

Development and Testing Environment (Dev/Test)

Organizations initially adopting cloud usually try to push their Dev and Test environments as a part of cloud
roadmap, before the actual production environment.

On-premise Dev and Test environments usually take time to set up, even though they are not so complex.

Installing software, configuring Dev environments, adding tools, setting up Test environments, and checking readiness are the common activities for on-premise Dev/Test environments.

When using the Cloud (Microsoft Azure in this case), building new Azure VMs (IaaS components) is the most preferred way to build Dev and Test environments. This can be done manually if the environments are small. Azure DevOps CD using an Infrastructure-as-Code approach can help spin up large environments using scripts, and many organizations prefer to use ARM (Azure Resource Manager) or Terraform templates from HashiCorp to automate the overall process.

Azure CLI and PowerShell are commonly used mechanisms for building quick Dev/Test environments. Azure
also provides an SDK if you wish to build IaaS components via REST APIs. Most Azure related products use
the API approach.


Another proven way of building a rapid Dev/Test environment on Azure is Azure DevTest Labs. Here are some key features of DevTest Labs (please note that DevTest Labs requires a Visual Studio Subscription):

• Rapid rollout of Environment

• Support for Windows, Linux and related tooling ecosystem

• Flexible and simple cost model

• Higher degree of Automation built-in

• Reusability

You can find more features here https://azure.microsoft.com/en-us/services/devtest-lab/#features


On our DotNetCurry platform, we have published a whitepaper on Dev/Test labs in the past which can be
used as a reference https://www.dotnetcurry.com/windows-azure/1296/devtest-labs-windows-azure.

Azure Dev Test Subscription

Many organizations get started by first putting up their Dev and Test environments on the Cloud (in our
case Azure). This way they get a first-hand experience and then move forward by putting up production
workload.

Microsoft, being a leader in the Cloud Platform space, understands this trend and has launched Azure Dev/Test Subscriptions with discounted rates compared to other subscription types, specifically targeted at customers who wish to host only their Dev and Test environments. This is not recommended for production workloads.

So, this not only nudges customers to try out Azure, but eases adoption as well since the cost impact is low
as it is designed only for non-production workloads. You can find more details here https://azure.microsoft.
com/en-us/pricing/dev-test/

How to Ensure a safe environment for remote working in this COVID-19 pandemic situation?

Although Microsoft provides a swift approach to spin up Dev/Test environments in no time, the following are some concerns organizations have while working remotely:

1. Secure connection from remote location to office network

2. Protecting the overall environment

3. Protecting the source code, IPs (Intellectual Property), Customer’s code and software assets etc. to
ensure no source code breach happens

Well, for these concerns, you can use Azure AD (Premium), Azure IaaS and Azure DevOps together. Also, make sure to check out the article titled "Prevent Code Access for Developers Working Remotely using Azure DevOps (Protecting Code and IP during Lockdown)", which we encourage you to read: https://www.dotnetcurry.com/devops/1533/prevent-code-access-azure-devops



Azure Bastion – Modern and secure way to do RDP and SSH to your Azure
Virtual Machines

The RDP/SSH method of connecting to your environments has been used on Azure for a few years. Over a period of time, and with growing concerns and incidents of compromised RDP/SSH connections, there was a need for a secure and seamless alternative to connect to an Azure environment, as shown in Figure 5.

Microsoft has been improving its Azure services and security continuously, and has launched a service called Azure Bastion.

Azure Bastion is a fully managed Platform-as-a-Service (PaaS) offering which enables you to do RDP and SSH securely, without exposing a public IP address. This service is also worth adding to your Azure Architecture, especially for Azure IaaS workloads.

Figure 5 – Azure Bastion based Architecture for accessing Azure IaaS resources using Azure Bastion

Reference Architecture diagram - https://azure.microsoft.com/en-in/services/azure-bastion/#features

Some key features of Azure Bastion:



1. Fully managed service which works per VNET level

2. Easy deployment of Bastion host in VNET

3. Secure RDP/SSH via Azure Portal (You don’t need any separate RDP or Tool to connect)

4. You don’t need to expose Public IPs of your Azure VMs and thus it protects from Port scanning since
there is no direct exposure of your Azure VMs, hence making it more secure

5. Azure ensures hardening of Bastion as it is a fully managed platform, hence no additional security
measures are required

AI and Automation for Citizen Developers

Microsoft is innovating as well as investing in AI services.

From Automation to Cognitive Services to Machine Learning, Microsoft is ensuring a fusion of its AI services into enterprise apps and consumer-centric apps.

With Microsoft Cognitive Services, Microsoft has already offered APIs for Speech Recognition, Face
Detection, Language, Text Analytics, Anomaly Detection, Sentiment Analytics, Azure Cognitive Search,
Personalizer etc.

Azure also offers rich Data (both Relational and Non-Relational data stores) services and Big Data Services
for Machine Learning along with Machine Learning studio supporting R and Python, so that data scientists
can easily build predictive analytical solutions.

In Conversational AI, Microsoft enables organizations to build their own chatbots using Bot Framework and
Power Virtual Agents. You can check more on Azure Cognitive Services and relate them with your business
scenarios over here https://azure.microsoft.com/en-in/services/cognitive-services/.

DSVM for Data Scientists

With Dev Test Labs, Microsoft is enabling organizations to rapidly spin up their Dev Test environments.
Similarly, Microsoft has been enabling AI Developers and Data Scientists who are working on different AI
solutions like Building Models, Training Models, Building Predictive Analysis, Churning Data with Microsoft
and Open Source tools with DSVM (Data Science Virtual Machine).

What is DSVM?

DSVM – Data Science Virtual Machine is a pre-configured environment installed with frequently needed
common AI, ML tools for Data Scientists available in both Windows and Linux OS flavors. This environment
is optimized and designed for Data Science and AI, ML work. It supports GPU based hardware allowing you
to build, run and test your Deep Learning scenarios as well.

Since DSVM is hosted in Azure, it also allows you to connect to different Services and Data resources in
Azure as shown in Figure 6.

Figure 6 – Azure DSVM offering at Macro Level



This reference Diagram has been taken from Microsoft Documentation for visual representation of DSVM
capabilities here https://azure.microsoft.com/en-us/services/virtual-machines/data-science-virtual-
machines/

In the existing COVID-19 situation, the need of the hour is rapid data processing and analysis, and building apps with minimum effort and minimum resources.

In the past few months, there has been huge adoption of the Low Code/No Code platform, Microsoft Power Platform, for rapid app development.

Power Apps is a no-code approach to building apps. It does not require knowledge of any programming language, and hence it is suitable for Citizen Developers.

With Power Apps you can quickly design, build and deploy apps which are adaptable, secure and scalable. Let us quickly understand the Power Platform offerings.

Note that Power Platform offerings are specific to certain O365 Subscription. Hence some features might
not be available to you. Also, some features like AI Builder are region specific and might not be available in
your region.

Power BI

Power BI is mostly used for building self-service analytics and visualizations securely.

Traditionally, enterprises have been using SQL Server Reporting Services (SSRS) as the primary Reporting Tool. Power BI, in my opinion, is a far better and easier reporting platform compared to SSRS, and gives a lot of freedom to connect to No-SQL data sources. Power BI also has Power BI Embedded services on Azure.

Power BI can be accessed on Windows Desktop and Mobile Devices with Power BI Apps. Developers can also
leverage the embedding feature of Power BI to show dashboards and data widgets in their applications. You
can start building your first data driven visualization as shown in Figure 7. More information can be sought
over here https://powerbi.microsoft.com/en-us/.

Figure 7 – Typical Power BI Dashboard showing dynamic data driven visualizations on Desktop
(Reference Dashboard from Microsoft’s COVID-19 US Tracking Sample)

Common Use Cases for Power BI

• Dashboards from multiple data sources

• Data Visualization in different format (Styles and Layouts of Charts)

• Easy and Secure access on Desktop, Web and Mobile

• Easy integration with Apps

Power Apps

Power Apps is mostly used for building Data driven No Code apps.

Power Apps is a No Code offering from Microsoft. It allows you to build apps with no prior knowledge of
any programming language or framework. It is purely a web-based studio in which you can choose any
template or build one from scratch.

Power Apps allows you to connect to different data sources and even allows you to access peripherals like
a Camera. It is the quickest way to build apps for your internal processes or even for your customers outside
your organization. See an example shown in figure 8. You can start building your apps here
https://powerapps.microsoft.com/en-us/

Figure 8 – Power App dashboard allowing Citizen developers to build apps without code

To enrich the no-code app experience in Power Apps and Power Automate (Flow) for organizations and citizen developers, Microsoft also allows you to consume AI services within Power Apps to make apps more intelligent. The "AI Builder" within Power Apps and Flow (Power Automate) enables you to integrate some common AI modules like Entity Extraction, Object Detection, Form Processing etc. along with your custom models.



Common Use Cases for Power Apps

• No-Code way of building Apps. Apps can be built without complex frameworks and programming skills.

• Rapid Data driven Application building.

• Ideal for organizations having frequent changing processes and complex sub processes.

• Less complex and can be easily built and deployed by Citizen Developers.


Flow (Power Automate)

Flow helps you create automation by using pre-defined automation templates which are generic for
all businesses. You can also create customized flows for your business with the additional services and
connectors available. You can start building your flows/automation scenarios here: https://powerapps.
microsoft.com/en-us/

Common Use Cases for Flow

• Automation of common business processes using predefined workflow templates

• Highly customized workflows connecting different applications

• Increase efficiency of business processes

• Add Artificial Intelligence capabilities to your Workflows

Power Virtual Agents

Although Microsoft already has a robust Bot Framework, Power Virtual Agents allows you to create
actionable, performance centric no-code chatbots very easily.

You can start building your agents in no time. Figure 9 shows what one looks like. More information on
Power Virtual Agents can be found here: https://powervirtualagents.microsoft.com/en-us/.



Figure 9 – Power Virtual Agent allows citizen developers to build Bots with No Code approach

Common Use Cases for Power Virtual Agents

• Rapid Bot Development and Deployment

• Customizable templates and Subjects

• Supports interactive cards and workflow

• No deep knowledge of Bot Framework or any programming language required

Role of Bots and Power Apps in COVID-19 situation

As seen in the past few months, Power Apps has enabled many hospitals, organizations and government
agencies to quickly build apps based on incoming data, as well as on data already in persistence.

With huge amounts of data handled with ease by Power Apps and Power BI, people were able to get
real-time, seamless visualization of data when it mattered the most. Power Apps, with its no-code and
rapid app building features, helped many hospitals and NGOs collect patient data of positive COVID-19
cases, map them, identify hospital staff and service availability, and so on.

Since response times during COVID-19 are short and resources are scarce in many areas, applications
with long development cycles were dropped in favour of solutions providing a no-code approach. With many
IT companies partially shut down and remote working enabled, Power Apps came to the rescue due to the
simplicity of building apps with no code, backed by solid Microsoft AI and Azure services. In no time, it
has become the first choice of Citizen Developers.

With Power Virtual Agents and the Microsoft Health Bot, Microsoft is serving as a boon during the ongoing
COVID-19 pandemic, and has helped many large hospitals and government agencies build a rapid virtual
agent experience.

With the Microsoft Health Bot, enrolling COVID-19 patients, collecting their data, and answering COVID-19
related FAQs on health websites for a wider audience became easier, helping manage expectations during the
pandemic.

More information on Microsoft Health Bot and for building similar solutions can be obtained here
https://www.microsoft.com/en-us/garage/wall-of-fame/microsoft-health-bot-service/

Learning Curve of Cloud and AI

Microsoft has already announced a “Role based Certification” for Azure. It also covers Certification on AI.

During COVID-19, since business growth in certain sectors is limited and overall velocity of growth is slow,
organizations have planned to re-skill their workforce for cloud to ensure they ride the wave of Cloud
adoption and Migration, followed by Artificial Intelligence and Automation. There is a plethora of training
content available online including prime quality communities like DotNetCurry and the DNC Magazine.

Similarly, for Ethics in AI and some fundamental training of AI services, Microsoft is offering an online
self-learn facility titled “AI School” for Devs and “AI School for Business” for business leaders and decision
makers. Here are some useful resources for your teams during and post COVID-19.

• Azure Fundamentals - https://docs.microsoft.com/en-us/learn/paths/azure-fundamentals/

• AI School - https://aischool.microsoft.com/en-us/home

• Microsoft Role Based Certifications - https://docs.microsoft.com/en-us/learn/certifications/

Some additional useful Non-Microsoft tools for Azure

CloudSkew

Many Architects and Tech Leads usually spend a lot of time in Visio and PowerPoint to draw Azure
architectural diagrams. Sometimes it becomes complex to draw these diagrams in PPT, especially for
pre-sales templates. For smaller organizations, Visio is also an overhead.

CloudSkew is a free tool that can draw and export diagrams for Azure and other public clouds.



More info here https://www.cloudskew.com/.

Azure Heat Map (aka Azure Charts)

Although designed and developed by a Microsoft FTE Alexey Polkovnikov, it is still not an official tool from
Microsoft. Azure Charts is a single dashboard for Azure and widely popular because of the simplicity of
accessing many critical Azure sites like Azure Status, SLAs, Timelines etc.

It is a very handy website for all developers, tech leads and architects working on Azure.
Here is a glimpse of Azure Heat Map.

https://azurecharts.com/


Both CloudSkew and Azure Heat Map (Azure Charts) are currently FREE for individual use. You can visit
their websites for additional information and for privacy and security related statements. As per the
CloudSkew website, they may add licenses/plans/slabs in the near future.

Conclusion:

Many industries, including the IT industry, are trying to cope with the abnormalities introduced by the
COVID-19 pandemic. Unforeseen circumstances created by the pandemic have forced companies and
individuals to work remotely to maintain social distancing and adhere to WHO guidelines.

Remote working is the new normal and Virtual Business is the new strategy.

This is the right time to understand, accept, adopt and implement Digital Transformation within the
organization, and at your customer end.

Cloud and AI adoption is at its peak because of COVID-19, and it's time to identify the right Cloud, AI and
Collaboration services to fit this new virtual workspace.

This article presented some views and best practices to enable and ease your journey of Digital
Transformation, to generate awareness and present an opportunity to revisit business plans and strategies.

With the vision stated by Microsoft CEO Satya Nadella, “Empower every person on earth to achieve more”,
Microsoft is doing its best to enable businesses to run smoothly and adapt to the new working
situations forced upon them by COVID-19.

Vikram Pendse
Author

Vikram Pendse is currently working as a Cloud Solution Architect from India.


He has 13+ years of IT experience spanning a diverse mix of clients and
geographies in the Microsoft Technologies Domain. He is an active Microsoft
MVP since year 2008. Vikram enables Customers & Communities globally to
design, build and deploy Microsoft Azure Cloud and Microsoft AI Solutions. He
is a member of many Microsoft Communities and participates as a Speaker in
many key Microsoft events like Microsoft Ignite, TechEd, etc.
You can follow him on Twitter @VikramPendse and connect at LinkedIn.

Technical Review Editorial Review


Subodh Sohoni Suprotim Agarwal




AZURE

Mahesh Sabnis

CREATING API USING AZURE FUNCTION WITH HTTP TRIGGER



Azure Functions is a service provided on Microsoft Azure to run event-triggered code without having to
think about or manage the application deployment infrastructure.

An Azure Function is triggered by a specific type of event, e.g. a timer, an HTTP request, etc. The trigger
executes the function logic, which performs operations like changing data, running a schedule, responding to
an HTTP request, etc.

Some benefits of Azure functions are as follows:

1. We can write Azure Functions using languages like C#, Java, JavaScript, Python and PowerShell.

2. Azure Functions has a Pay-per-use pricing model. This means we pay only for the time spent running the
code. Azure Functions pricing is per-second, based on the consumption plan chosen.

3. We can integrate it easily with Azure Services and SaaS offerings.

4. We can use additional libraries in Azure Functions using NuGet packages.

5. We can use HTTP-triggered functions with OAuth providers like Azure Active Directory, Google, Facebook,
Twitter and Microsoft Account.

6. The Azure Functions runtime is open-source. This means the runtime is portable, so a function can run
anywhere, i.e. on the Azure Portal, in the organization's datacenter, etc.

We can use Azure Functions in the following scenarios:

1. Run code based on HTTP requests

2. Time based code execution

3. Process CosmosDB documents

4. Process BLOB Storage

5. Process messages from Queue Storage

6. Respond to Event Grid events

7. Respond to Event Hub events

8. Work with Service Bus queues and Service Bus topics

An important aspect to consider when using Azure Functions is its pricing.

Azure Functions is used for serverless application development in Azure. This means that Azure Functions
allows us to develop an application without thinking about the deployment infrastructure in Azure. This
approach increases developer productivity by providing a means of faster development using
developer-friendly APIs, low-code services, etc. It also helps boost team performance, which goes a long
way in benefiting the organization on the revenue front.


Implementing Azure Function for HTTP Trigger using
Visual Studio 2019
This article uses Visual Studio 2019 (version 16.6) and .NET Core 3.1 to develop the Azure Function.

The Azure Function gets triggered by HTTP requests of type GET/POST/PUT/DELETE and, upon triggering, it
performs operations that read and write data from and to an Azure SQL database.

To create the Azure SQL Database and deploy the function on Azure, an Azure subscription is needed. Please
visit this link to register for free and get a subscription.

Once an Azure subscription is created, an Azure Resource group must be created so that we can create the
Azure SQL Database, Azure Functions, etc. Create a resource group and an Azure SQL database with the name
ProductCatalog, and a Product table in it. The Product table will have the following columns, as shown in
Listing 1.

CREATE TABLE [dbo].[Product](
[ProductRowId] [int] IDENTITY(1,1) NOT NULL,
[ProductId] [varchar](50) NOT NULL,
[ProductName] [varchar](100) NOT NULL,
[CategoryName] [varchar](100) NOT NULL,
[Manufacturer] [varchar](100) NOT NULL,
[Description] [varchar](200) NOT NULL,
[BasePrice] [int] NOT NULL,
PRIMARY KEY CLUSTERED
(
[ProductRowId] ASC
)WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]

Listing 1: The Table creation script

Step 1: Open Visual Studio 2019 and create a new Azure function project as shown in Figure 1. The Azure
Functions application is developed using Azure SDK 2.9.

Figure 1: Create a new Azure Function Project



Name this project ApiAppFunction. Click on the Next button to bring up a window as shown in Figure 2.
This window shows the various triggers available for the Azure Function. Select the HTTP Trigger from this list.

Figure 2: The Triggers

The project is created with a file named Function1.cs containing the default HTTP trigger code. This code
contains a class with a Run() method. The method has the FunctionName attribute; this name is used to
make HTTP calls to the function. The Run() method accepts a parameter decorated with the HttpTrigger
attribute, which has the following settings (a sketch of the generated function follows this list):

• AuthorizationLevel: This is an enumeration used to determine the authorization level required to access
the function using HTTP requests. Its values indicate whether the HTTP request should contain keys to
invoke the function. The enumeration has the following values:

o Anonymous: No API key is required.
o Function: A function specific key is required. This is the default value.
o Admin: The master key (aka host key) is required.
o User: Needs a valid authentication token.
o System: Needs a master key.

• methods: This is a params array parameter. It represents the HTTP methods to which the function will
respond.

• route: This is used to define the route template. It represents the HTTP URL used to access the function.
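
For context, the Run() method and its HttpTrigger parameter come together in the generated Function1.cs
roughly as shown below. This is only a sketch of the default template; the exact code generated by Visual
Studio differs slightly between Azure Functions SDK versions.

// Approximate shape of the default HTTP trigger template (not the exact generated code)
public static class Function1
{
  [FunctionName("Function1")]
  public static async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
    ILogger log)
  {
    log.LogInformation("C# HTTP trigger function processed a request.");

    // Read the 'name' value from the query string or the request body and echo it back
    string name = req.Query["name"];
    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    dynamic data = JsonConvert.DeserializeObject(requestBody);
    name = name ?? data?.name;

    return new OkObjectResult($"Hello, {name}");
  }
}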

Step 2: Rename Function1.cs to ApiAppFunction.cs. This will change the class name to ApiAppFunction.
Please make sure that you remove the static keyword for the class.

Step 3: We will use Entity Framework Core to access the Azure SQL database. To use Entity Framework Core,
we need to install the packages shown in Figure 3.


Figure 3: Installing packages in the project

Step 4: We will be using the Database First approach for generating model classes from the database. To
generate models, we need to run the scaffolding command. Open the command prompt and navigate to the
folder path as shown here:

...\ApiAppFunction\ApiAppFunction

Run the following command from the command prompt

dotnet ef dbcontext scaffold "Server=tcp:msit.database.windows.net,1433;Initial Catalog=ProductCatalog;Persist Security Info=False;User ID={your_user_name};Password={your_password};MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;" Microsoft.EntityFrameworkCore.SqlServer -o Models -t Product --context-dir Context -c ProductDbContext

Note: Use the UserName and Password for accessing an Azure SQL database. These are credentials you set while
creating a database server for Azure SQL database.

Once the command is run, you will receive an error as shown in Figure 4.

Figure 4: The database scaffolding error



The reason behind the error shown in Figure 4 is that the <APPNAME>.deps.json file is not created
with dependencies. To rectify this error, we need to modify the .csproj file with the settings shown in
Listing 2.

<Target AfterTargets="PostBuildEvent" Name="PostBuild">
  <Exec Command="copy /Y &quot;$(TargetDir)bin\*.dll&quot; &quot;$(TargetDir)*.dll&quot;" />
  <Exec Command="copy $(TargetDir)$(ProjectName).deps.json $(TargetDir)bin\function.deps.json" />
</Target>

Listing 2: Modification on project file

These settings will make sure that the .deps.json file is created. Now run the scaffold command again;
this time it will run successfully and generate the DbContext class and the Product class.

Check Visual Studio and we can see that the project now contains Context and Models folders. The Context
folder contains the DbContext class and the Models folder contains the Product class.

Open the ProductDbContext class from the Context folder and comment out the default constructor. We
need to do this because we will be instantiating this class using dependency injection. Also comment out the
OnConfiguring() method to avoid using the database connection string from code; instead, we will copy this
connection string and paste it in local.settings.json as shown in Listing 3. A sketch of the resulting
DbContext class follows the listing.

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "SqlConnectionString": "Server=tcp:<your-database-server>.database.windows.net,1433;Initial Catalog=ProductCatalog;Persist Security Info=False;User ID={your-user-name};Password={your-password};MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;"
  }
}

Listing 3: The local settings for the Azure function project
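
The scaffolded ProductDbContext itself is not reproduced in this article. After removing the default
constructor and the OnConfiguring() method as described above, it would look roughly like the sketch below
(the exact scaffolded code may differ slightly):

using ApiAppFunction.Models;
using Microsoft.EntityFrameworkCore;

namespace ApiAppFunction.Context
{
  public partial class ProductDbContext : DbContext
  {
    // The options (including the connection string) are supplied through dependency injection
    public ProductDbContext(DbContextOptions<ProductDbContext> options)
      : base(options)
    {
    }

    public virtual DbSet<Product> Product { get; set; }
  }
}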

Step 5: We will now instantiate ProductDbContext using dependency injection. To use the DI container, we
need a Startup class like in ASP.NET Core applications. We need to use FunctionsStartup class as a base
class for the Startup class. To use this class, install the following package in the project:

Microsoft.Azure.Functions.Extensions

Once this package is installed, add a new class file to the project and name it Startup.cs. Add the
code shown in Listing 4 to this file:

using ApiAppFunction.Context;
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;
using System;

[assembly: FunctionsStartup(typeof(ApiAppFunction.Startup))]
namespace ApiAppFunction
{
  public class Startup : FunctionsStartup
  {
    public override void Configure(IFunctionsHostBuilder builder)
    {
      string connectionString = Environment.GetEnvironmentVariable("SqlConnectionString");
      builder.Services.AddDbContext<ProductDbContext>(
        options => SqlServerDbContextOptionsExtensions.UseSqlServer(options, connectionString));
    }
  }
}

Listing 4: The Startup class

The code in Listing 4 shows the Startup class derived from the FunctionsStartup class and contains a
Configure() method for registering the ProductDbContext class in Services. Please make sure that you do
not add long running logic code in the Configure() method.

Step 6: Let's modify the Product class as shown in Listing 5 to define the JSON property serialization:

using Newtonsoft.Json;
namespace ApiAppFunction.Models
{
public partial class Product
{
[JsonProperty(PropertyName = "ProductRowId")]
public int ProductRowId { get; set; }
[JsonProperty(PropertyName = "ProductId")]
public string ProductId { get; set; }
[JsonProperty(PropertyName = "ProductName")]
public string ProductName { get; set; }
[JsonProperty(PropertyName = "CategoryName")]
public string CategoryName { get; set; }
[JsonProperty(PropertyName = "Manufacturer")]
public string Manufacturer { get; set; }
[JsonProperty(PropertyName = "Description")]
public string Description { get; set; }
[JsonProperty(PropertyName = "BasePrice")]
public int BasePrice { get; set; }
}
}

Listing 5: The JSON property serialization for the Product class

Step 7: Now let’s create an Azure Function to perform CRUD operations on Azure SQL database using the
HTTP request methods. Add the code in the ApiAppFunction.cs file as shown in Listing 6.

using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;
using System.Collections.Generic;
using Microsoft.EntityFrameworkCore;



using ApiAppFunction.Context;
using ApiAppFunction.Models;
namespace ApiAppFunction
{
public class ApiAppFunction
{
private readonly ProductDbContext _context;
public ApiAppFunction(ProductDbContext context)
{
_context = context;
}

[FunctionName("Get")]
public async Task<IActionResult> Get(
[HttpTrigger(AuthorizationLevel.Function, "get", Route = "products")]
HttpRequest req, ILogger log)
{
try
{
// check for the querystring count for keys
if (req.Query.Keys.Count > 0)
{
// read the 'id' value from the querystring
int id = Convert.ToInt32(req.Query["id"]);
if (id > 0)
{
// read data based in 'id'
Product product = new Product();
product = await _context.Product.FindAsync(id);
return new OkObjectResult(product);
}
else
{
// return all records
List<Product> products = new List<Product>();
products = await _context.Product.ToListAsync();
return new OkObjectResult(products);
}
}
else
{
List<Product> products = new List<Product>();
products = await _context.Product.ToListAsync();
return new OkObjectResult(products);
}
}
catch (Exception ex)
{
  return new OkObjectResult(ex.Message);
}
} // closing brace of the Get method

[FunctionName("Post")]
public async Task<IActionResult> Post(
  [HttpTrigger(AuthorizationLevel.Function, "post", Route = "products")]
  HttpRequest req, ILogger log)
{
try
{
string requestBody = await new StreamReader(req.Body).ReadToEndAsync();

Product product = JsonConvert.DeserializeObject<Product>(requestBody);
var prd = await _context.Product.AddAsync(product);
await _context.SaveChangesAsync();
return new OkObjectResult(prd.Entity);
}
catch (Exception ex)
{
return new OkObjectResult($"{ex.Message} {ex.InnerException}");
}
}

[FunctionName("Put")]
public async Task<IActionResult> Put(
[HttpTrigger(AuthorizationLevel.Function, "put", Route = "products/
{id:int}")] HttpRequest req, int id,
ILogger log)
{
try
{
string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
Product product = JsonConvert.DeserializeObject<Product>(requestBody);
if (product.ProductRowId == id)
{
_context.Entry<Product>(product).State = EntityState.Modified;
await _context.SaveChangesAsync();
return new OkObjectResult(product);
}
else
{
return new OkObjectResult($"Record is not found against the Product Row
Id as {id}");
}

}
catch (Exception ex)
{
return new OkObjectResult($"{ex.Message} {ex.InnerException}");
}
}

[FunctionName("Delete")]
public async Task<IActionResult> Delete(
[HttpTrigger(AuthorizationLevel.Function, "delete", Route = "products/
{id:int}")] HttpRequest req, int id,
ILogger log)
{
try
{
var prd = await _context.Product.FindAsync(id);
if (prd == null)
{
return new OkObjectResult($"Record is not found against the Product Row
Id as {id}");
}
else
{
_context.Product.Remove(prd);
await _context.SaveChangesAsync();
return new OkObjectResult($"Record deleted successfully based on
Product Row Id {id}");
}

}
catch (Exception ex)
{
return new OkObjectResult($"{ex.Message} {ex.InnerException}");
}
}
}
}

Listing 6: The Azure Function for CRUD operations based on HTTP request method

The code in Listing 6 shows that the ApiAppFunction class has the ProductDbContext class injected through
its constructor.

The ApiAppFunction class contains Get/Post/Put and Delete methods. All these methods accept an
HttpRequest parameter decorated with the HttpTrigger attribute and represent Azure Functions methods
with the authorization level set to Function.

The route parameter defined by all these methods is named products. This route parameter is used
in the URL template to invoke these methods in HTTP requests. These methods perform CRUD operations
on the Azure SQL database using the ProductDbContext class. All these methods return a Task<IActionResult>
object, which means they return an HTTP status code as per the execution status. We will
be using the OkObjectResult object to generate responses for the HTTP requests.

That's it. Now let's test the function by running it and making requests to the function methods using
Postman. Use F5 to run the function. The function URLs are displayed in Figure 5.

Figure 5: Function method URLs

Open Postman and make a HTTP POST Request as shown in Figure 6.

Figure 6: The Postman HTTP Post request


Click on the Send button to post a record successfully to a SQL Azure database. We can test it by making a
HTTP Get request to the function.

So far, we have successfully created an Azure Function and tested it in the local environment. Now it is time
to publish this on Azure so that we can make it available publicly.

Step 8: Right click on the project and select the Publish option; this will start the publish wizard. In the first
step, we will have to choose the publish target as shown in Figure 7.

Figure 7: Publish target

Select Azure and click on the Next button. It will display a publish specific target window as seen in Figure
8. In this window select either a Windows, Linux, Azure Function App Container or Azure Container Registry
target for publishing the Azure function (see Figure 8).

Figure 8: Publish Specific Target



Click on Next to display a new window where you can choose the Azure Subscription, Resource Group, etc.
Click on Create a new Azure function as shown in Figure 9.

Figure 9: Create new function for publish

As shown in Figure 9, click on the Create a new Azure Function link to display a new window. Enter Azure
Function details like Name, Subscription, Resource group, Plan Type, Location and Azure storage as shown in
Figure 10.

Figure 10: Enter Azure Function details to create new Azure Function

Click on the Create button, and a new function will be created. Now we have to keep one important thing in
mind - we have already run the Azure Function in the local environment successfully by defining the Azure SQL
Database connection string in the local.settings.json file.

But the settings in this file will not work for the Azure Function deployment in Azure cloud environment.
So, to make sure that the Azure Function works successfully from the Azure cloud environment, we need to
modify Azure app service settings from the published window as shown in Figure 11.

Figure 11: Link for editing App Service Settings

Once we click on the Manage Azure App Service settings link, the Application settings window will be
displayed as shown in Figure 12.

Figure 12: The Application Settings

The SqlConnectionString value for Local is set to the Azure SQL database, but this value will work only for
the local environment. To make sure that this works in the Azure cloud environment, we need to add the same
SQL connection string for the Remote setting as shown in Figure 13.

Figure 13: Setting the Azure SQL connection string for the Remote setting

This setting will make sure that the Azure SQL database can be accessed from the Azure Function after it
is deployed on the Azure cloud. Click on the Ok button and then click on the Publish button to publish the
function as shown in Figure 14.

Figure 14: Publish the Azure Function

The Azure function is published successfully.

Step 9: Visit the Azure portal and log in. Once the login is successful, from All Resources, we
can see the published Azure Function with the name ApiHttpTrigger as shown in Figure 15.

Figure 15: The ApiHttpTrigger Azure Function


On the portal, click on the ApiHttpTrigger function to display the Azure Function’s details. Click on the
Functions link of the Azure Functions blade. This will show all Azure Functions methods which we have
published. See Figure 16.

Figure 16: The Azure Functions methods

One of the most important things is that unlike local execution of Azure Functions, we cannot access
published Azure Functions methods directly. To access these methods, we need App Keys to authenticate
the client to access these function methods. We can find these keys in the App keys link from the Azure
Functions details blade. As shown in Figure 16, we have Get/Post/Put/Delete function methods which can
be accessed via Postman to make HTTP calls to them.

Step 10: Click on the Get method of Azure Function. We can see the Get function method details as shown
in Figure 17.

Figure 17: Get Function method details

To access the Get method, you need the function URL. To get the URL, click on the Get Function Url
link as shown in Figure 17. Clicking the link will bring up a new dialog box with the function URL, as shown in
Figure 18.

Figure 18: The Get function URL



As you can see in Figure 18, we need a function key/host key/masterKey to access the function method.
Please note that function key is different for each function method, but the masterKey is the same for all
function methods.

We can use host keys to access all HTTP methods, whereas the master key grants admin access to the
Functions runtime APIs. So choose wisely which key to use when accessing these function methods.

If the client application has separate application roles for Read/Write access, make sure that you use the
function keys or host keys to access these methods.

For demo purposes, in this article, we will use the Host Key. Copy the function URL and using Postman
make a get call as shown in Figure 19.

Figure 19: Get call using Postman with URL containing Host key

Likewise, we can also test the POST/PUT/DELETE calls.
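
Outside of Postman, the same Get call can be made from any HTTP client. As an illustration only, a minimal
C# console client could look like the sketch below; the function app host name and <HOST-KEY> are
placeholders that you must replace with your own values:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class Program
{
  static async Task Main()
  {
    using var client = new HttpClient();

    // The key is passed in the 'code' query string parameter
    // (it can also be sent in the 'x-functions-key' request header)
    var url = "https://<your-function-app>.azurewebsites.net/api/products?code=<HOST-KEY>";

    var json = await client.GetStringAsync(url);
    Console.WriteLine(json);
  }
}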

Step 11: In Step 9, we have seen the Azure Functions deployment and tested it using the Postman tool.

Alternatively, we can test Azure Functions using the portal.

As shown in Figure 20, the Code + Test option allows testing the Azure Function using the portal without
requiring any other tools.

Figure 20: The Code + Test option


Before testing via the portal, let's add Application Insights to monitor the function behavior details, as
shown in Figure 21.

Figure 21: The Monitor feature for the function

Click on the Code + Test link, and the function details in function.json will get displayed.

Figure 22: The function details for Code + test

As shown in Figure 22, click on the Test link. It will show the Test blade where we select the HTTP Method,
Key, Query Parameter, Headers and Body (see Figure 23).

Figure 23: Code + Test page for the function



To test the Get function, enter details as shown in Figure 24 and click on the Run button.

Figure 24: Testing the Get function

Once the Run button is clicked, the Get function method will be executed and the response will be shown
with data in Output and the Logs messages. See Figure 25.

Figure 25: The Response

The monitor page shows the Invocation Details log as shown in Figure 26.

Figure 26: The Monitor log



Similarly, you can test the Post function method as shown in Figure 27.

Figure 27: The Post function method

Step 12: To access the function methods, we need to make sure that the client can only access the function
using HTTPS instead of HTTP traffic. In the Custom domains blade, we have to make sure that
HTTPS Only is set to On, as shown in Figure 28.

Figure 28: The Custom Domains

Step 13: Once we have deployed the Azure Function in the portal and tested it, it’s time to consume the
Azure Functions API in a client application. We will be creating an Angular client application to consume
the Azure Functions API.

Consume Azure Functions API using an Angular client application

To consume the Azure Functions WEB API, we must enable CORS as shown in Figure 29:



Figure 29: CORS settings for Azure Function

As Figure 29 indicates, if we want to allow all origins to access the Azure Function API, we should use * and
remove all other origins from the list. See Figure 30.

Figure 30: The CORS Configuration

Step 14: We will perform the GET/POST/PUT/DELETE operations from the Angular client application using
the HTTP URLs of the Azure Functions. We will have to use the function URLs for performing these operations.
Table 1 contains a list of URLs for performing the HTTP operations.

Table 1: The HTTP Operation Urls


These URLs are used to make HTTP calls from the Angular client application to the Azure Functions API.

To develop an Angular client application, we need Visual Studio Code (VS Code) and Node.js. Download and
install them on your machine. We will create the Angular application using the Angular CLI.

Step 15: Open Node.js command prompt and run the command to install Angular CLI:

npm install -g @angular/cli

Listing 7: Installing Angular CLI

To create an Angular application, run the following command:

ng new <NAME>

Listing 8: Creating Angular application

We will name the application NgClientApp.

We will be using Bootstrap for rich UI. To install Bootstrap, run the following command as shown in Listing
9:

npm install --save bootstrap

Listing 9: Installing Bootstrap

Step 16: Open NgClientApp in VS Code. In the app folder, add three folders named component, models
and service.

Step 17: In the models folder, add a new file named app.product.model.ts. In this file, add the code
shown in Listing 10.

export class Product {


constructor(
public ProductRowId: number,
public ProductId: string,
public ProductName: string,
public CategoryName: string,
public Manufacturer: string,
public Description: string,
public BasePrice: number
){}
}
export const Categories = ['Electronics', 'Electrical', 'Food'];
export const Manufacturers = ['MS-Electronics', 'LS-Electrical',
'TS-Foods', 'VB-Electronics', 'PB-Electrical','AB-Food'];

Listing 10: The model class and constants

The above code contains the Model class. This class will be used to bind with the Angular component for
performing CRUD operations. The code also contains constant arrays for Categories and Manufacturers.
These arrays will be used to bind with HTML templates in the component class.



Step 18: In the Service folder, add a new file and name it as app.http.service.ts and add the code in this file
as shown in Listing 11.

import { Injectable } from '@angular/core';


import { HttpClient, HttpHeaders } from '@angular/common/http';
import { Product } from './../models/app.product.model';
import { Observable } from 'rxjs';
@Injectable({
providedIn: 'root'
})
export class ProductService {
  // Products endpoint of the published Azure Function, used by postProduct();
  // <KEY-CODE> must be replaced with the Function/Host key
  private url = 'https://apihttptrigger.azurewebsites.net/api/products?code=<KEY-CODE>';

  constructor(private http: HttpClient) {
  }

  getProducts(): Observable<Product[]> {
    let response: Observable<Product[]> = null;
    response = this.http.get<Product[]>('https://apihttptrigger.azurewebsites.net/api/products?code=<KEY-CODE>&clientId=default');
    return response;
  }

  getProductById(id: number): Observable<Product> {
    let response: Observable<Product> = null;
    response = this.http.get<Product>(`https://apihttptrigger.azurewebsites.net/api/products?code=<KEY-CODE>&clientId=default&id=${id}`);
    return response;
  }

postProduct(prd: Product): Observable<Product> {


let response: Observable<Product> = null;
response = this.http.post<Product>(this.url, prd, {
headers: new HttpHeaders({
'Content-Type': 'application/json'
})
});
return response;
}
  putProduct(id: number, prd: Product): Observable<Product> {
    let response: Observable<Product> = null;
    response = this.http.put<Product>(`https://apihttptrigger.azurewebsites.net/api/products/${id}?code=<KEY-CODE>`, prd, {
      headers: new HttpHeaders({
        'Content-Type': 'application/json'
      })
    });
    return response;
  }
  deleteProduct(id: number): Observable<string> {
    let response: Observable<string> = null;
    response = this.http.delete<string>(`https://apihttptrigger.azurewebsites.net/api/products/${id}?code=<KEY-CODE>`);
    return response;
  }
}

Listing 11: The Angular Service

The code in Listing 11 is for an Angular service. This service is used to perform HTTP calls to the Azure
Functions API. In the URLs, <KEY-CODE> must be replaced by the Function Key. The Function Key can be
copied from the function published in the Azure Portal. See Figure 31.

Figure 31: The Function Key

This Function Key authorizes the client application to access the Azure Functions and perform HTTP
operations.

Step 19: In the component folder, add a new file and name it as app.listproducts.component.ts and add the
code in this file as shown in Listing 12.

import { Component, OnInit } from '@angular/core';


import { Product } from '../models/app.product.model';
import { ProductService } from '../service/app.http.service';
import { Router } from '@angular/router';

@Component({
selector: 'app-listproducts-component',
templateUrl: './app.listproduct.view.html'
})
export class ListProductsComponent implements OnInit {
products: Array<Product>;
status: string;
product: Product;
headers: Array<string>;
constructor(private serv: ProductService, private router: Router) {
this.products = new Array<Product>();
this.product = new Product(0, '', '', '', '', '', 0);
this.headers = new Array<string>();
}

ngOnInit(): void {
for (let p in this.product) {
this.headers.push(p);
}
this.loadData();
}

private loadData(): void {


this.serv.getProducts().subscribe((response) => {
this.products = response;
// console.log(JSON.stringify(response));
}, (error) => {
this.status = `Error occured ${error}`;
});
}



edit(id: number): void {
this.router.navigate(['edit', id]);
}

delete(id: number): void {


this.serv.deleteProduct(id).subscribe((response) => {
this.status = response;
}, (error) => {
this.status = `Error occured ${error}`;
});
}
}

Listing 12: The ListProducts component

The code in listing 12 contains the ListProductsComponent class.

This class is constructor injected with the ProductService and Router classes. The class contains the
loadData() method. This method invokes the getProducts() method from the service class and makes an HTTP
Get request to the Azure Function to read data from the Azure SQL database.

The edit() method is used to route to the EditProductComponent. The delete() method invokes the
deleteProduct() method of the service class, which makes an HTTP Delete call to the Azure Function to
delete data from the Azure SQL database.

The ListProductsComponent declares Products array, Product object and headers array. The headers array
is used to bind with HTML table headers to generate table columns and rows based on Product class
properties.

Step 20: In the component folder, add a new file and name it as app.listproduct.view.html. In this file, add
some HTML code as shown in Listing 13.

<h2>List of Products</h2>
<div class="container">{{status}}</div>

<table class="table table-bordered table-striped table-dark">


<thead>
<th *ngFor="let h of headers">{{h}}</th>
<th></th>
</thead>
<tbody>
<tr *ngFor="let prd of products">
<td *ngFor="let h of headers">{{prd[h]}}</td>
<td>
<input type="button" value="Edit" (click)="edit(prd.ProductRowId)"
class="btn btn-warning">
</td>
<td>
<input type="button" value="Delete" (click)="delete(prd.
ProductRowId)" class="btn btn-danger">
</td>
</tr>
</tbody>
</table>

Listing 13: The HTML template for List Product Component

The above HTML markup generates an HTML table based on the products array. The table contains buttons
for Edit and Delete operations.

Step 21: In the component folder, add a new file of the name app.createproduct.component.ts and add the
code in this file as shown in Listing 14.

import { Component, OnInit } from '@angular/core';


import { Product, Categories, Manufacturers } from '../models/app.product.model';
import { ProductService } from '../service/app.http.service';
import { Router } from '@angular/router';

@Component({
selector: 'app-createproduct-component',
templateUrl: './app.createproduct.view.html'
})
export class CreateProductComponent implements OnInit {
product: Product;
status: string;
categories = Categories;
manufacturers = Manufacturers;
constructor(private serv: ProductService, private router: Router) {
this.product = new Product(0, '', '', '', '', '', 0);
}

ngOnInit(): void { }
save(): void {
this.serv.postProduct(this.product).subscribe((response) => {
this.product = response;
this.router.navigate(['']);
}, (error) => {
this.status = `Error occured ${error}`;
});
}
clear(): void {
this.product = new Product(0, '', '', '', '', '', 0);
}
}

Listing 14: The Create Product Component

The code in Listing 14 contains the CreateProductComponent class. This class uses categories and
manufacturers arrays. The class is constructor injected with ProductService and Router classes.

The save() method of the class invokes the postProduct() method of the service class. If the product is
posted successfully, the application navigates to the default page using the Router class.

Step 22: In the component folder, add a new file and name it as app.createproduct.view.html. In this HTML
file, add the following HTML markup:

<h2>Create new Product</h2>


<div class="container">{{status}}</div>
<div class="container">
<div class="form-group">
<label>Product Id</label>
<input type="text" [(ngModel)]="product.ProductId" class="form-control">
</div>
<div class="form-group">
<label>Product Name</label>
<input type="text" [(ngModel)]="product.ProductName" class="form-control">



</div>
<div class="form-group">
<label>Category Name</label>
<select class="form-control" [(ngModel)]="product.CategoryName">
<option *ngFor="let c of categories" value={{c}}>{{c}}</option>
</select>
</div>
<div class="form-group">
<label>Manufacturer</label>
<select class="form-control" [(ngModel)]="product.Manufacturer">
<option *ngFor="let m of manufacturers" value={{m}}>{{m}}</option>
</select>
</div>
<div class="form-group">
<label>Description</label>
<textarea class="form-control" [(ngModel)]="product.Description"></textarea>
</div>
<div class="form-group">
<label>Base Price</label>
<input type="text" class="form-control" [(ngModel)]="product.BasePrice">
</div>
<div class="form-group">
<input type="button" value="Clear" (click)="clear()" class="btn btn-warning">
<input type="button" value="Save" (click)="save()" class="btn btn-success">
</div>
</div>

Listing 15: The CreateProduct HTML Template

The HTML markup in Listing 15 contains HTML input elements which are bound with properties of the
Product class. The HTML select elements are bound with categories and manufacturers arrays of the
CreateProductComponent class. The HTML buttons are bound with clear() and save() methods of the
CreateProductComponent class.

Step 23: In the component folder, add a new file and name it as app.editproduct.component.ts. In this file
add code as shown in Listing 16.

import { Component, OnInit } from '@angular/core';


import { Product, Categories, Manufacturers } from '../models/app.product.model';
import { ProductService } from '../service/app.http.service';
import { Router, ActivatedRoute } from '@angular/router';

@Component({
selector: 'app-editproduct-component',
templateUrl: './app.editproduct.view.html'
})
export class EditProductComponent implements OnInit {
product: Product;
status: string;
id: number;
categories = Categories;
manufacturers = Manufacturers;
constructor(private serv: ProductService, private router: Router, private act:
ActivatedRoute) {
this.product = new Product(0, '', '', '', '', '', 0);
}

ngOnInit(): void {

this.act.params.subscribe((param) => {
this.id = param.id;
});
this.serv.getProductById(this.id).subscribe((response) => {
this.product = response;
console.log(`Edit init ${JSON.stringify(response)}`);
}, (error) => {
this.status = `Error occured ${error}`;
});
}
save(): void {
this.serv.putProduct(this.id, this.product).subscribe((response) => {
this.product = response;
this.router.navigate(['']);
}, (error) => {
this.status = `Error occured ${error}`;
});
}
clear(): void {
this.product = new Product(0, '', '', '', '', '', 0);
}
}

Listing 16: The EditProductComponent class

The code in Listing 16 contains the EditProductComponent class. This class is constructor injected with the
ProductService, Router and ActivatedRoute classes. The ngOnInit() method of the class reads the route
parameter, based on which the Product data is retrieved so that it can be edited. The save() method of the
class invokes the putProduct() method of the service to update the record.

Step 24: In the component folder, add a new file and name it as app.editproduct.view.html. Add the
following HTML markup in this file as shown in listing 17.

<h2>Update Product</h2>
<div class="container">{{status}}</div>
<div class="container">
<div class="form-group">
<label>Product Id</label>
<input type="text" [(ngModel)]="product.ProductId" class="form-control">
</div>
<div class="form-group">
<label>Product Name</label>
<input type="text" [(ngModel)]="product.ProductName" class="form-control">
</div>
<div class="form-group">
<label>Category Name</label>
<select class="form-control" [(ngModel)]="product.CategoryName">
<option *ngFor="let c of categories" value={{c}}>{{c}}</option>
</select>
</div>
<div class="form-group">
<label>Manufacturer</label>
<select class="form-control" [(ngModel)]="product.Manufacturer">
<option *ngFor="let m of manufacturers" value={{m}}>{{m}}</option>
</select>
</div>
<div class="form-group">
<label>Description</label>
<textarea class="form-control" [(ngModel)]="product.Description"></textarea>
</div>
<div class="form-group">
<label>Base Price</label>
<input type="text" class="form-control" [(ngModel)]="product.BasePrice">
</div>
<div class="form-group">
<input type="button" value="Clear" (click)="clear()" class="btn btn-
warning">
<input type="button" value="Save" (click)="save()" class="btn btn-success">
</div>
</div>

Listing 17: The Edit Component Template

In Listing 17, all the HTML input elements are bound to properties of the product object from the
EditProductComponent class. The HTML select elements are bound to the categories and manufacturers arrays
of the component class. The HTML buttons are bound to the clear() and save() methods of the component class.

The Save button calls the save() method from the component and updates the Product by accessing Azure
Functions API.

Step 25: To complete the application, let's modify app-routing.module.ts to define the route table as shown
in Listing 18.

….
const routes: Routes = [
{path: '', component: ListProductsComponent},
{path: 'create', component: CreateProductComponent},
{path: 'edit/:id', component: EditProductComponent}
];

Listing 18: Route table declaration

Modify the app.module.ts to declare all components and to import modules like FormsModule,
HttpClientModule, etc. as shown in Listing 19.
……
@NgModule({
declarations: [
AppComponent,
ListProductsComponent, CreateProductComponent, EditProductComponent
],
imports: [
BrowserModule, FormsModule, HttpClientModule,
AppRoutingModule
],
providers: [],
bootstrap: [AppComponent]
})
……

Listing 19: The app.module.ts modification

Let's modify app.component.html to render the router links and the router outlet so that routing can be
implemented across ListProductsComponent, CreateProductComponent and EditProductComponent, as
shown in Listing 20.

<h1>Angular Client of Azure Function Based APIs</h1>


<table class="table table-bordered table-striped table-dark">
<tbody>
<tr>
<td>
<a [routerLink]="['']">Product List</a>
</td>
<td>
<a [routerLink]="['create']">Create</a>
</td>
</tr>
</tbody>
</table>
<br>
<router-outlet></router-outlet>

Listing 20: The app.component.html

Step 26: Open the command prompt and execute the following command to run the Angular application:

ng serve

This will host the angular application on port 4200 by default. Open the browser and enter the following
URL in the address bar.

http://localhost:4200

The Angular application will be loaded in browser as shown in Figure 32.

Figure 32: The Angular application landing page

This page will display a list of available products. Click on the Create link and the CreateProductComponent
view will be rendered as shown in Figure 33.

Figure 33: The Create Product Component View



Enter product details and click on the Save button, as shown in Figure 34.

Figure 34: New product data

After clicking on the Save button, the Products List view will get displayed as shown in Figure 35.

Figure 35: Products list after added new Product

Click on the Edit button in a table row and the Edit view gets displayed as seen in Figure 36 (Note: here, the
Edit button for ProductRowId 7 is clicked).

Figure 36: Edit Product Component

Let's modify the Base Price to 14000 and click on the Save button; the record will be updated and the Products
List view will be displayed as shown in Figure 37.


Figure 37: Product List after updating the Product record

Likewise, click on the Delete button to delete a record.

Conclusion

Azure Functions with HTTP triggers provide a facility to develop and publish API apps that can be used as
REST APIs and made available to client applications.

Download the entire source code from GitHub at


bit.ly/dncm47-azurefunctionapi

Mahesh Sabnis
Author
Mahesh Sabnis is a DotNetCurry author and an ex-Microsoft MVP having
over two decades of experience in IT education and development. He is a
Microsoft Certified Trainer (MCT) since 2005 and has conducted various
Corporate Training programs for .NET Technologies (all versions), and
Front-end technologies like Angular and React. Follow him on twitter @maheshdotnet
or connect with him on LinkedIn.

Technical Review Editorial Review


Vikram Pendse Suprotim Agarwal




ARCHITECTURE

Damir Arh

ARCHITECTING DESKTOP AND MOBILE APPLICATIONS
The article introduces several architectural and design patterns that can be
used to implement common scenarios in desktop and mobile applications.



Although it might not seem so at first glance, desktop and mobile applications have a lot in common,
which is the reason why I decided to cover the application architecture for both types of applications in the
same article.

Desktop applications used to be always connected fat clients which implemented all the business logic
locally and directly communicated with the data store (typically a relational database management system,
such as SQL Server).

Today, most of them are much more like mobile applications:

• Instead of having direct access to the data store, they communicate through intermediary services
which can be securely exposed over public networks.

• A significant part of the business logic is implemented in the services so that it can be shared between
different client applications (desktop, mobile and web). Local processing in desktop applications is
often limited only to local data validation, presentation and other highly interactive tasks that benefit
from quick response times.

• They don’t rely on being always connected to services. Public networks are not as reliable as internal
ones. Increasingly, applications are expected to work on laptops in locations where there might be no
connectivity at all.

If you look carefully, haven’t these been the properties of mobile applications since their beginnings?

On the other hand, the performance of mobile devices is getting better and better. Today, they are mostly
at par with desktop and laptop computers, allowing mobile applications to do local processing that’s
comparable to desktop applications.

The most notable differences between the two application types today are the size of the screen (typically
smaller for mobile apps) and the input methods (typically mouse and keyboard for desktop applications,
and touch for mobile applications). These mostly affect the application user interface, not its architecture.

In addition to that, at least in the .NET ecosystem, the technologies and frameworks used to develop
desktop and mobile applications are very similar, and parts of them are at times the same.

Decoupling application code from UI


Loose coupling is a key property of well-architected applications. When dealing with the user interface, this
means that no application code must directly depend on the user interface. Instead of being placed in the
user interface code-behind files, it should be organized in standalone classes that can be fully used and
tested without the user interface.

The architectural patterns used to achieve that depend on the UI framework.

In this article, I’m going to focus on the MVVM (model-view-view model) pattern which works well with all
XAML-based UI frameworks: WPF (Windows Presentation Foundation), UWP (Universal Windows Platform)
and Xamarin.Forms (cross-platform UI framework for Xamarin mobile applications).

You can read more about other UI frameworks for desktop and mobile applications in two of my

DNC Magazine articles:

• Developing Desktop Applications in .NET

• Developing Mobile Applications in .NET

As the name implies, the MVVM pattern distinguishes between three types of classes:

• Models implement the domain logic and are in no way affected by the application user interface.

• Views implement the application user interface and consist of the declarative description of the user
interface in the XAML markup language and the imperative code in the corresponding code-behind file.
Depending on the UI framework, these are either Window or Page classes.

• View models are the intermediate classes that orchestrate the interaction between models and views.
They expose the model functionality in a way that can be easily consumed by the views through data
binding.

Figure 1: Types of classes in the MVVM pattern

By default, bindings in the view refer to properties of the object assigned to a control property named
DataContext (or BindingContext in Xamarin.Forms). Although each UI control has its own DataContext
property, the value is inherited from the parent control when it isn’t set.

Because of this, the DataContext property only needs to be set at the view level when using MVVM.
Typically, a view model object should be assigned to it. This can be done in different ways:

• Declaratively in XAML:

<Window.DataContext>
<local:MainWindowViewModel />
</Window.DataContext>

• In the view code-behind file in the constructor:

this.DataContext = new MainWindowViewModel();

• In the external code constructing the view:

var view = new MainWindow();


view.DataContext = new MainWindowViewModel();

With the view model set as the view’s DataContext, its properties can be bound to properties of controls,
as in the following example with InputValue and InputDisabled view model properties:

<TextBox Text="{Binding InputValue}" IsReadOnly="{Binding InputDisabled}" />

The control will read the values when it initializes. Typically, we want the values to be re-read
when they change in the view model. For this to work, the view model must implement the
INotifyPropertyChanged interface, so that the PropertyChanged event is raised whenever a property
value changes:

public class MainWindowViewModel : System.ComponentModel.INotifyPropertyChanged


{
public event PropertyChangedEventHandler PropertyChanged;
private void OnPropertyChanged([CallerMemberName] string name = null)
{
PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name));
}

private string inputValue;


public string InputValue {
get => inputValue;
set
{
var oldValue = inputValue;
inputValue = value;
if (oldValue != value) OnPropertyChanged();
}
}
// ...
}

The binding subscribes to the event and updates the control when the event is raised for the bound view
model property.

Bindings can either be one-way (i.e. controls only display the value) or two-way (i.e. controls also write
values that changed because of user interaction back to the view model). Each control property has its own
default value for binding mode which can be changed in the binding declaration:

<TextBox Text="{Binding InputValue, Mode=TwoWay}" IsReadOnly="{Binding


InputDisabled}" />

Apart from displaying and modifying values of the view model properties, the view also needs to invoke
methods on the view model (e.g. when a button is clicked). This can be achieved by binding to a command
that invokes that method.

<Button Command="{Binding ClearCommand}">Clear</Button>

The command exposed by the view model must implement the ICommand interface:

public ICommand ClearCommand { get; }

public MainWindowViewModel()
{
ClearCommand = new DelegateCommand(_ => Clear());
}

private void Clear()


{
InputValue = string.Empty;
}

In the view model above, the DelegateCommand class accepts an action to specify the view model method
to be invoked. The code below is a simplistic implementation of such a DelegateCommand:

public class DelegateCommand : ICommand
{
public event EventHandler CanExecuteChanged;
private Action<object> Action { get; }
public bool CanExecute(object parameter) => true;
public void Execute(object parameter) => Action(parameter);

public DelegateCommand(Action<object> action)


{
Action = action;
}
}

Not all components support binding to a command for every type of interaction. Many only raise events
which can’t be directly bound to a command.

An example of such a control is the ListView, which only raises an event when the selected item changes. Behaviors (originally introduced by the Microsoft Expression Blend design tool for XAML) can be used to bind events to commands. In WPF, the InvokeCommandAction from the Microsoft.Xaml.Behaviors.Wpf NuGet package can be used today:

<ListView ItemsSource="{Binding ListItems}">
  <i:Interaction.Triggers>
    <i:EventTrigger EventName="SelectionChanged">
      <i:InvokeCommandAction Command="{Binding SelectionChangedCommand}" />
    </i:EventTrigger>
  </i:Interaction.Triggers>
</ListView>

There’s one more type of interaction between the UI and the view model: opening other windows (or
pages). This can’t be achieved through binding because it doesn’t affect an existing UI control but requires
a new view to be created instead, e.g.:

var view = new SecondWindow();
view.DataContext = new SecondWindowViewModel(InputValue);
view.ShowDialog();

The above code would have to be placed in a command action to be executed in response to an event - for
example a button click. This allows it to pass a value from the current window (i.e. the view model itself)
to the new window as a parameter of the new view model constructor. Although this works, it requires the
view model to directly interact with the view (i.e. create a new view instance corresponding to the new
window) which the MVVM pattern is trying to avoid.

To keep decoupling between the two classes, a separate class is typically introduced to encapsulate the
imperative interaction with the UI framework. This would be a minimalistic implementation of such a class:

public class NavigationService
{
public void OpenDialog<TWindow, TViewModel>(object parameter)
where TWindow: Window, new()
where TViewModel: IViewModel, new()
{
var view = new TWindow();
var viewModel = new TViewModel();
viewModel.Init(parameter);
view.DataContext = viewModel;
view.ShowDialog();
}
}

The view model could then simply call the method on the NavigationService class with the correct arguments (view and view model types, and the view model parameter):

NavigationService.OpenDialog<SecondWindow, SecondWindowViewModel>(InputValue);

To get rid of the only remaining coupling between the View Model and the View (i.e. the type of the view
to open), there’s usually a convention on how to determine the type of the view from the type of the view
model. In our case, the view type could match the view model type with its ViewModel postfix removed.
This would allow the view to be instantiated using reflection:

public void OpenDialog<TViewModel>(object parameter)
    where TViewModel: IViewModel, new()
{
var viewModelType = typeof(TViewModel);
var viewTypeName = viewModelType.FullName.Replace("ViewModel", string.Empty);
var viewType = Assembly.GetExecutingAssembly().GetType(viewTypeName);

var view = (Window)Activator.CreateInstance(viewType);
var viewModel = new TViewModel();
viewModel.Init(parameter);
view.DataContext = viewModel;
view.ShowDialog();
}

Now, a window can be opened without being referenced in the originating view model at all. Only its view model is required, since the NavigationService class finds the corresponding view based on a naming convention:

NavigationService.OpenDialog<SecondWindowViewModel>(InputValue);

Although there’s a lot of interaction between the view and the view model, all of it is in the same direction. The view is fully aware of the view model, but the view model is not aware of the view at all.

Figure 2: Interaction between the view and the view model


As you can see, implementing the MVVM pattern involves some plumbing code that could easily be shared between applications.

This is where MVVM frameworks come into play.

They include all the necessary plumbing so that you don’t have to develop your own, although they also have their own conventions which you will need to adopt. The most popular MVVM frameworks are (in no particular order): Caliburn.Micro, MvvmCross, MVVM Light, and Prism.

Dependency injection
To decouple the view model from the UI framework, we moved the code that interacts with the UI
framework into a separate class. However, the newly created NavigationService class for that purpose is
still instantiated inside the view model:

public NavigationService NavigationService { get; set; } = new NavigationService();

This makes the view model fully aware of it. The property has a public setter on purpose so that the
NavigationService class could be replaced with a mock or a different implementation in unit tests, but
this doesn’t really solve the issue.

To make it easier to replace the NavigationService class in tests, we should first introduce an interface,
that the NavigationService class and any potential replacement classes will implement:

public interface INavigationService
{
void OpenDialog<TViewModel>(object parameter)
where TViewModel : IViewModel, new();
}

Now, we can change the type of the NavigationService property to the INavigationService interface
so that the replacement implementations can also be assigned to it:

public INavigationService NavigationService { get; set; } = new NavigationService();

One last step to fully decouple the view model from the NavigationService class is to avoid creating the
instance inside the view model:

public INavigationService NavigationService { get; set; }

Instead, we will create the instance of the NavigationService in the code responsible for creating the
instance of the view model and assign it to the property there:

var viewModel = new MainWindowViewModel();
viewModel.NavigationService = new NavigationService();

This pattern is called dependency injection because the dependencies (the NavigationService class in
our case) of a class (the view model in our case) are injected into that class from the outside so that the
class doesn’t depend on the concrete implementation.



The approach of injecting the dependencies through property assignment is called property injection. A
more common approach is to inject the dependencies as constructor parameters instead. This is called
constructor injection:

private readonly INavigationService navigationService;

public MainWindowViewModel(INavigationService navigationService)
{
this.navigationService = navigationService;

// ...
}
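With constructor injection in place, the view model can also be unit tested without touching a real UI. The following is a minimal sketch, assuming a hand-rolled fake instead of a mocking library; the commented-out command invocation and assertion are placeholders that depend on your view model and test framework:

// A fake implementation used only in tests; it records what was requested.
public class FakeNavigationService : INavigationService
{
    public object LastParameter { get; private set; }

    public void OpenDialog<TViewModel>(object parameter)
        where TViewModel : IViewModel, new()
    {
        LastParameter = parameter;
    }
}

// Inside a test method:
var fakeNavigationService = new FakeNavigationService();
var viewModel = new MainWindowViewModel(fakeNavigationService);
// ... invoke the command that is expected to open the second window ...
// Assert.AreEqual(expectedParameter, fakeNavigationService.LastParameter);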

In a typical view model, a class responsible for navigating between the views will not be the only injected
dependency. Other common dependencies to be injected can be grouped into the following categories:

• Domain layer dependencies, such as classes responsible for remote service calls and communication
with the data store.

• Application diagnostics services, such as logging, error reporting and performance measurement.

• OS or device-level services, such as file system, camera, time, geolocation, and similar.

Also, view models aren’t the only classes with external dependencies that should be injected so that the class doesn’t directly depend on other concrete implementations. The same approach is used for most classes, even for the services that are injected into the view model. For example, a class responsible for calling a specific remote service could depend on a more generic class responsible for performing HTTP calls, which in turn could depend on a logging class.

Figure 3: Hierarchical nature of class dependencies

There are libraries available to make the process of dependency injection in an application easier to
manage. In addition to automatically providing all the required dependencies when a class is initialized,
they also include features for controlling the lifetime of an instance of a dependency.


The common choices for the lifetime of a dependency are:

• A new instance can be created every time a dependency is required.

• The same instance could be used throughout the application as long as it is running, making the class
effectively a singleton (you can read more about singletons in my DNC Magazine article: Singleton in
C# – Pattern or Anti-pattern).

• Other custom lifetimes can be defined, e.g. for a duration of a save operation to ensure transactional
consistency in the data store.

There’s an abundance of dependency injection libraries available for .NET. Their APIs are slightly different,
but they all have the same core feature set. Based on the NuGet download statistics, the most popular ones
at the time of writing were: Autofac, Unity, and Ninject.
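To illustrate how these lifetimes are typically expressed, here is a rough sketch using Microsoft.Extensions.DependencyInjection (not one of the containers listed above, but its registration API is representative of all of them); IUnitOfWork and UnitOfWork are hypothetical placeholder types:

using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();

// A new instance is created every time the dependency is resolved.
services.AddTransient<MainWindowViewModel>();

// A single instance is shared for as long as the application is running.
services.AddSingleton<INavigationService, NavigationService>();

// One instance per explicitly created scope, e.g. for the duration of a save operation.
services.AddScoped<IUnitOfWork, UnitOfWork>();

var provider = services.BuildServiceProvider();
var viewModel = provider.GetRequiredService<MainWindowViewModel>();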

Since dependency injection is an integral part of creating new instances of view models which the MVVM
frameworks are responsible for, all MVVM frameworks include a built-in dependency injection framework
which you can use. In Prism, for example, you would register your dependencies and view models with calls
to its own dependency injection container instance:

containerRegistry.RegisterSingleton<INavigationService, NavigationService>();

This configuration would then automatically be used when instantiating the view model classes for views.
All the necessary plumbing for that is provided by the framework.

Editorial Note: You can read more about dependency injection in general in Craig Berntson’s DNC Magazine
article Dependency Injection - SOLID Principles.

Communicating with remote services


Many desktop and mobile applications today primarily communicate with a remote service to get and manipulate data. Most commonly, these are REST (representational state transfer) services which communicate over the HTTP protocol.

To interact with these services, the client-side code typically uses wrapper methods that map to individual
REST endpoints and hide the details of the underlying HTTP requests. This is an implementation of the
proxy design pattern, a remote proxy to be exact.

Usually, a single class will contain methods for all endpoints of a single REST service. Each method will
serialize any parameters as query parameters or request body and use them to make an HTTP request to
the remote service URL endpoint using an HTTP client.

The response received from the remote service will usually be serialized in JSON format. The proxy method
will deserialize it into local DTO (data-transfer object) classes.
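As a rough illustration of what such a wrapper method looks like, here is a minimal sketch using HttpClient and System.Text.Json; the TodoDto class and the /api/todos endpoint are assumptions made up for the example:

using System.Collections.Generic;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

public class TodoDto
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public class TodoServiceProxy
{
    private readonly HttpClient httpClient;

    // Assumes httpClient.BaseAddress points at the remote service.
    public TodoServiceProxy(HttpClient httpClient)
    {
        this.httpClient = httpClient;
    }

    // Wraps GET /api/todos and deserializes the JSON response into DTOs.
    public async Task<List<TodoDto>> GetTodosAsync()
    {
        var response = await httpClient.GetAsync("/api/todos");
        response.EnsureSuccessStatusCode();
        var json = await response.Content.ReadAsStringAsync();
        return JsonSerializer.Deserialize<List<TodoDto>>(json,
            new JsonSerializerOptions { PropertyNameCaseInsensitive = true });
    }
}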



Figure 4: Communication with a REST service using a remote proxy

These wrapper methods are very similar to each other and are very error-prone to write because of the
repetitive code they contain. This makes them a good candidate for code generation.

If a REST service is documented using the OpenAPI specification (originally called Swagger specification),
there are several tools available for doing that:

• OpenAPI Generator is the official highly configurable command line tool with support for many
programming languages and runtime environments. It has 4 different built-in templates for generating
C# code and supports custom templates as well.

• AutoRest is Microsoft’s command line tool which can also generate code for multiple programming
languages. However, it is much more focused on the Microsoft ecosystem and has very good
documentation for the generated C# code.

• NSwag only supports C# and TypeScript code generation but can also generate OpenAPI specification
files for ASP.NET Web API services. Its primary use case is to generate both the server-side specification
files and the client-side proxy classes, allowing it to better support C# and .NET specifics.

Of course, all tools generate not only the client-side proxies but also the DTO classes describing the data
returned by the remote service. Especially in cases when the REST service is maintained by a different team
or company and changes frequently, automated code generation with these tools can save a lot of time.

In applications which not only display data from remote services but also allow data manipulation,
validation of data input is an important topic.

At a minimum, the remote services will validate any posted data and return potential validation errors they encounter. These validation errors can then be exposed to the views through the view models so that they are presented to the user.

However, a lot of validations that are done by the remote service could also be performed on the client
before the data is sent to the remote service (e.g. to check if a piece of data is required or if it matches the
requested format).


Of course, the remote service would still have to repeat the same validations because it can’t rely on the
data sent from the clients. But the users would nevertheless benefit from shorter response times for errors
which are detected without a roundtrip to the remote service.

When implementing data validation, the visitor pattern is often used. It allows the validation logic to be kept separate from the data transfer objects, which keeps the concerns nicely separated.

It also makes it easier to have multiple different types of validators for the same type of data, based on
how it is going to be used.

Figure 5: Data validation implemented using the visitor pattern

The validator interface will have the role of the visitor with a Validate method accepting the data
transfer object to validate and returning the validation result including any potential validation errors:

public interface IValidator<T>
{
ValidationResult Validate(T data);
}

To strictly follow the visitor pattern, the object being validated could have its own Validate method
accepting a concrete validator. To keep validation decoupled from the data transfer object, this method
could also be implemented as an extension method:

public static ValidationResult Validate<T>(this T data, IValidator<T> validator)
{
return validator.Validate(data);
}

But such a method is not required, and a validator can be efficiently used without it:

var validator = new FormValidator();
var result = validator.Validate(formData);
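The FormValidator used above is not shown in the article; a minimal sketch might look like the following, assuming a hypothetical FormData DTO with a Name property and a ValidationResult type with an AddError method (the exact shape of ValidationResult is an assumption):

public class FormData
{
    public string Name { get; set; }
}

public class FormValidator : IValidator<FormData>
{
    public ValidationResult Validate(FormData data)
    {
        var result = new ValidationResult();

        // A typical client-side rule: the field is required.
        if (string.IsNullOrWhiteSpace(data.Name))
        {
            result.AddError(nameof(data.Name), "Name is required.");
        }

        return result;
    }
}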

Although validation doesn’t require as much plumbing code as some of the other concerns covered earlier
in the article, a good third-party library can still make a developer’s life easier.

The most popular library for validation in the .NET ecosystem is FluentValidation. Its main advantages are a well-thought-out API and a large collection of built-in rules to be used in your validators.
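For comparison, the same kind of rule expressed with FluentValidation might look like this (FormData is the same hypothetical DTO as in the sketch above):

using FluentValidation;

public class FormDataValidator : AbstractValidator<FormData>
{
    public FormDataValidator()
    {
        // Built-in rules replace the hand-written null/empty checks.
        RuleFor(x => x.Name)
            .NotEmpty()
            .MaximumLength(100);
    }
}

// Usage:
// var result = new FormDataValidator().Validate(formData);
// bool isValid = result.IsValid;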

Conclusion

In this article, I have described common architecture patterns to be used in desktop and mobile
applications.

I started with the role of the MVVM pattern in decoupling the application code from the user interface. In
the next part, I explained how dependency injection can further help to decouple the view model classes
from the rest of the application logic.

I concluded with patterns used for interaction with remote data services: the remote proxy for
communicating with the remote service and the visitor pattern for implementing local data validation. I
accompanied each pattern with popular libraries and tools in the .NET ecosystem which can make their
implementation easier.

Damir Arh
Author
Damir Arh has many years of experience with Microsoft development tools;
both in complex enterprise software projects and modern cross-platform
mobile applications. In his drive towards better development processes,
he is a proponent of test driven development, continuous integration and
continuous deployment. He shares his knowledge by speaking at local
user groups and conferences, blogging, and answering questions on Stack
Overflow. He has been a Microsoft MVP for .NET since 2012.

Technical Review Editorial Review


Yacoub Massad Suprotim Agarwal


ANGULAR

Keerti Kotaru

Angular v9 & v10 Development Cheat Sheet

Details significant changes with version 9 and some new features introduced in version 10.



Angular is a popular framework for JavaScript/TypeScript application development. This open source framework by Google is continuously enhanced, and new features are added on a regular basis.

Angular 9 is special.

It’s a key release in the last three years. Having said that, for most applications, the upgrade is smooth with
minimal impact and few changes that need developer intervention.

However, Angular 10 is a relatively smaller update (compared to other major releases).

Angular’s semantic versioning includes three parts - major, minor and patch version numbers. As an example, v10.0.1 refers to major version 10, minor version 0 and patch version 1.

Angular has a predictable release schedule. There has been a major release every six months.

Angular 9, being a major version, comes with a few breaking changes. Developer intervention is required while upgrading a project from Angular 8 to Angular 9, and then to version 10.

Angular best practices state that a deprecated feature will continue to be available for two major releases. This gives enough breathing room for teams and developers to update their code.

Additionally, a warning on the console in dev mode highlights the need to refactor code (the production build does not show the warning). Follow this link to see a complete list of deprecations: https://angular.io/guide/deprecations

At the time of writing this article, the minor version of Angular 10 is 0 and the patch version is 1. A minor version upgrade doesn't need developer intervention; as the name indicates, these are minor upgrades to framework features. Patch releases are frequent, often weekly, and include bug fixes.

#1. How to upgrade


To upgrade your project to Angular 10 using Angular CLI, use the following command.

ng update @angular/cli @angular/core

While the above command upgrades the CLI and Angular core, we need to run the update command for other packages individually. In this example, my application uses Angular Material and the CDK. Run the following Angular CLI command to upgrade them to version 10.

ng update @angular/material @angular/cdk

An application, https://update.angular.io/, lists all changes required while upgrading the framework. It lets us select the current Angular version and the target version.

It’s always advisable to upgrade to the latest Angular version. However, it is recommended to move step by step between major versions. That is, if your project is on Angular 8, migrate first to version 9, commit and test the changes. Next, upgrade to Angular 10 (the latest version as of this writing). To upgrade to a specific version, say version 9 (as opposed to the latest version 10), use the following command:

ng update @angular/cli@9 @angular/core@9



#2. Ivy
First and foremost, along with Angular 9, a new and better compiler and run time, Ivy, is introduced. It’s the default for Angular 9 applications. View Engine, the older compiler and run time, is still available and actively used by many projects and libraries.

An Angular 9 application using Ivy can have dependencies (libraries and packages) that are built with the View Engine. A tool, ngcc (Angular Compatibility Compiler), helps with backward compatibility. It works on node modules and produces an Ivy-compatible version. The Angular CLI runs the ngcc command as required.

It’s preferred to use View Engine for Angular libraries. This is because libraries need to stay compatible with applications that use View Engine. Moreover, a library built with View Engine is still compatible with Ivy (it is compiled by ngcc).

Note: Angular 9.1.x improved ngcc speed. The tool runs in parallel, compiling multiple libraries at a time.

Let’s have a look at what changed with the introduction of Ivy and Angular 9.

2.1 Entry Components

Angular no longer needs imperatively loaded components to be specified as entry components. Ivy discovers and compiles such components automatically.

What are entry components? Angular loads components in one of two ways: declaratively or imperatively. Let’s elaborate.

Declarative:
The HTML template is declarative. When we include a component as a child in the HTML template of
another component using a selector, it is declarative.

For example, consider the following todo application’s code snippet. An AppComponent template uses two
other components - create-todo and todo-list. They are included in the application declaratively.

--- app.component.html ---

<mat-toolbar color="primary"><h1>Your Todos</h1></mat-toolbar>


<div>
<app-create-todo></app-create-todo>
<app-todo-list></app-todo-list>
</div>

Imperative
A few components are included in the application imperatively. They are not included in an HTML template.

For example, the AppComponent is bootstrapped at the time of loading the application and the root module. Even though its selector is in index.html, it’s not part of a component template. The Router loads a few other components. These are not in a template of any kind. Check out the following code snippet:

--- app.module.ts ---


@NgModule({
  declarations: [ /* Removed code for brevity */ ],
  imports: [ /* Removed code for brevity */ ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

--- app-routing.module.ts ---

const routes: Routes = [ { path: '', component: TodoListComponent } ];

The Router and the bootstrap process automatically add the above components to the entry components list. As developers, we don’t have to do anything special.

However, with Angular 8 (and below), if we create components using ViewContainerRef and load them programmatically (instead of via an HTML template), we need to explicitly identify them as entry components. We may add them to entryComponents[], one of the fields of the NgModule() decorator. We may also use ANALYZE_FOR_ENTRY_COMPONENTS, a DI token that creates entry components from the list of references in the useValue property. The framework used the entry components list for tree shaking.
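As a rough sketch of what this looked like before Ivy (ConfirmDialogComponent and HostComponent are hypothetical examples made up for illustration):

import {
  Component, ComponentFactoryResolver, NgModule, ViewChild, ViewContainerRef
} from '@angular/core';

@Component({ selector: 'app-confirm-dialog', template: '<p>Are you sure?</p>' })
export class ConfirmDialogComponent { }

@Component({
  selector: 'app-host',
  template: '<ng-container #host></ng-container>'
})
export class HostComponent {
  @ViewChild('host', { read: ViewContainerRef, static: true }) host: ViewContainerRef;

  constructor(private resolver: ComponentFactoryResolver) { }

  open() {
    // Imperative creation: the component never appears in an HTML template.
    const factory = this.resolver.resolveComponentFactory(ConfirmDialogComponent);
    this.host.createComponent(factory);
  }
}

@NgModule({
  declarations: [ConfirmDialogComponent, HostComponent],
  // Required in Angular 8 and below; no longer needed with Ivy.
  entryComponents: [ConfirmDialogComponent]
})
export class DialogModule { }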

This is no longer required with Ivy. The compiler and the run time identify the dynamic components. Tree
shaking continues to work without explicitly creating entry components.

For an elaborate read on Entry Components, use the links in the References section at the end of the article.

2.2 Bundle size improvement with Ivy

Tree shaking is better with Ivy.

Tree shaking is a process of eliminating dead code (unused code) from the bundles. With better tree shaking, the Angular release notes describe an improvement in the range of 2% to 40%, depending on the nature of the application.

Most applications don't use every Angular framework feature. With better tree shaking, unnecessary functions, classes and other code units are excluded from the generated bundles. With Ivy, there is also an improvement in the size of the factories generated by the framework.

See Figure 1, which depicts the bundle size improvement.

Figure 1: Reference: Angular Blog (https://blog.angular.io/version-9-of-angular-now-available-project-ivy-has-arrived-23c97b63cfa3)

2.3 Better Debugging with Ivy

With the framework, a global object "ng" is available in the application. This object was available even before Ivy. However, from version 9 onwards, the API is easier to use. We can use it for debugging while in development mode.

Consider the following example. In the to-do sample, we can select the CreateTodo component using the getComponent() API on the ng object. It takes a DOM element as an argument.

In Figure 2, we used document.getElementsByTagName('component-selector-name') to obtain the element. Notice that as we type a value into the text field, the component state is shown on the Google Chrome console.

Figure 2: The ng object prints component state on the browser console

Note: Another way to retrieve the component is to select the element in the DOM (Elements panel) and use $0 (instead of document.getElementsByTagName()), which refers to the currently selected element.
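For example, the following can be typed directly into the browser console of a running dev build (the selector name is taken from the sample to-do application and is an assumption about your markup):

// Grab the component's DOM element and ask Angular for the component instance.
const element = document.getElementsByTagName('app-create-todo')[0];
const component = ng.getComponent(element);
console.log(component);

// Or, with the element selected in the Elements panel:
// ng.getComponent($0);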

See Figure 3 for other useful debugging APIs on the ng object.



Figure 3: Useful Debugging APIs with ng

ng.applyChanges() – trigger change detection on the component

ng.getContext() – Returns the context, i.e. the objects and variables available in an embedded view such as *ngFor. Figure 4 shows getContext() on the second item in the todo list (see the Google Chrome console).

Figure 4: getContext() API on the 'ng' object in action

ng.getDirectives() – Similar to getComponent(), this method returns directives associated with the selected
element.

ng.getHostElement() – Returns the host DOM element of the selected component or directive.

ng.getInjector() – Returns the injector associated with the element, component or directive.



ng.getListeners() – Returns Angular event listeners associated with the DOM element.

ng.getOwningComponent() – Returns the parent component to the selected DOM element.

ng.getRootComponents() – Returns a list of root components associated with the selected DOM element.
See figure 5.

Figure 5: Result of getRootComponents() API on the 'ng' object

2.4 Better template type checking and error messages

Angular version 9 and above supports three modes for type checking.

Strict mode: Supported only with Ivy. The strict mode helps identify the maximum number of type problems in templates. To enable strict mode, set the strictTemplates flag to true in the TypeScript configuration file, tsconfig.json.

The strict mode enables:

a. Type checking for @Input() fields on components and directives
b. Type checking of $event in components and directives
c. Type checking that includes generics
d. Type checking of context variables while using *ngFor

Note that the strict mode honours the strictNullChecks flag in templates. The strictNullChecks mode needs to be enabled in tsconfig.json. strictNullChecks also constrains TypeScript code to assign null and undefined only to variables declared with a union type.

Look at Figure 6. For the first two variables, aNumericVariable and aNumericVariable2, we can't assign null or undefined; the compiler rightly reports an error. Consider the last two lines: when a union type is used, we may assign null or undefined to the variable.

Figure 6: Errors in strict mode
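In code, the behaviour shown in Figure 6 looks roughly like this (the names of the union-typed variables are illustrative):

// With strictNullChecks enabled in tsconfig.json:
let aNumericVariable: number = null;          // error: null is not assignable to number
let aNumericVariable2: number = undefined;    // error: undefined is not assignable to number

// Declaring a union type makes null/undefined legal values again.
let aNullableNumber: number | null = null;                // ok
let anOptionalNumber: number | undefined = undefined;     // ok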



fullTemplateTypeCheck: It’s less constrained than the strict mode and is a subset of the latter. In addition to Ivy, it works on View Engine as well. It type checks the context for *ngFor along with the embedded views of *ngIf and <ng-template>. It does not type check generics.

To enable fullTemplateTypeCheck, set its value to true in tsconfig.json. Please note, if both fullTemplateTypeCheck and the strict mode are enabled, strict mode takes precedence.

Figure 7: Configuration to toggle full template type check or strict mode

Basic mode: This is basic type checking in HTML templates. It checks the object structure used in data binding. Consider the following sample: a sampleTodo object (of type Todo) has the fields id, title and isComplete. The compiler type checks the template and returns an error stating that a dummy field doesn't exist on sampleTodo.

Figure 8: Template type check in basic mode

An aspect you might have noticed in Figure 8 is that the message highlights the error better than before. Angular 9 has improved the formatting and presentation of compiler error messages. The following is another example with the build command (the build npm script).

Figure 9: Better error messages with Angular 9 in component templates



#3. Angular CLI Strict Mode
Angular 10 supports creating a new project in strict mode. While creating a new project, run the CLI
command with the --strict option. See the following:

ng new --strict

The strict mode helps the maintainability of the project by:

• Turning on strict mode for TypeScript and Angular template type checks.

• Adding a lint rule that flags usage of the any type.

• Enabling better tree shaking and improvements in bundle size.

#4. Additional options with providedIn


Some new values were added for the providedIn option of the @Injectable() decorator, which is used with Angular services.

platform: Creates a singleton service instance available to all applications on the page.
any: Creates a singleton instance per Angular module.

Earlier, the framework already allowed the value 'root' with providedIn. With it, a singleton instance is created for the root module; hence, one instance for the whole application.

Use the @Injectable() decorator on an Angular service so that the compiler generates the required metadata. The injector creates an instance of the service using this metadata, which includes the service's dependencies.
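A sketch of the three options on service classes (the class names are placeholders):

import { Injectable } from '@angular/core';

// One shared instance for the whole application (the previously available option).
@Injectable({ providedIn: 'root' })
export class TodoService { }

// One shared instance for all Angular applications running on the same page.
@Injectable({ providedIn: 'platform' })
export class PlatformWideService { }

// A separate instance per module injector.
@Injectable({ providedIn: 'any' })
export class PerModuleService { }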

#5. Lazy load modules


The recent ECMAScript update, ES2020, introduced dynamic import of JavaScript modules. This feature is now at Stage 4 and is supported by all major browsers. Follow the link to read more about this ES2020 feature. See the following code snippet for an example:

import('./create-todo.js').then((module) => {
  // access any exported functions from the lazy loaded module
  // using the module object
});

Angular modules are loaded eagerly by default. To delay loading a module until it is needed, Angular traditionally used a string syntax (e.g. ./path-to-lazy-module/a-lazy.module#ALazyModule). This string was specified as the value of the loadChildren field in the route configuration, and the module got lazy loaded when a user navigated to the route.



This syntax was deprecated in Angular 8. Use the new ES2020 syntax in the route configuration instead; see an example in the code snippet below. It is important to change the syntax with Ivy, because ngc depends on JavaScript dynamic imports to discover lazy loaded modules.

const routes: Routes = [{
  path: 'create-todo',
  loadChildren: () => import('./todo/create-todo.module')
    .then(m => m.CreateTodoModule)
}];

#6. Angular Material Date Range

Angular Material 10 adds support for a date range. In addition to the existing functionality to select a single date, a user can now select a range between from and to dates.

mat-date-range-input: The MatDateRangeInput component is a form field that shows the from and to dates selected by the user.
mat-date-range-picker: The MatDateRangePicker component allows choosing the date range visually. See the figure below.
MatStartDate: A new directive to be used on the input element for the start date of the range.
MatEndDate: A new directive to be used on the input element for the end date of the range.

Consider the following code snippet that shows the date range in the sample to-do application. Follow the link for a complete code sample.

<mat-form-field >
<!-- The date field label -->
<mat-label>Complete todo in the timeframe</mat-label>
<!-- Form field that shows from date and to date-->
<mat-date-range-input [rangePicker]="todoWindow">
<input matStartDate placeholder="Start date">
<input matEndDate placeholder="End date">
</mat-date-range-input>

<!-- a button that toggles date range picker -->
<mat-datepicker-toggle matSuffix [for]="todoWindow"></mat-datepicker-toggle>

<!-- date range picker allows selecting from and to date -->
<mat-date-range-picker #todoWindow></mat-date-range-picker>
</mat-form-field>


Figure 10: The new Date Range component in Angular Material

Other Miscellaneous Improvements


#7. Improvement in build time:
We have seen an improvement in bundle size with Ivy. It also helps improve build times. Of course, this depends on various factors, including application size and developer machine configuration. Angular documented a build time improvement of 40% for its angular.io site.

With Ivy, better build times mean that even dev builds can use AoT. Using AoT is beneficial as it eliminates any differences with the production build.

#8. Improvement in Unit Test run time:


TestBed, which configures the unit testing environment in Angular, has been revamped. It no longer recompiles all components between test runs, resulting in improved unit test run time.

#9. Support for newer versions of TypeScript:


Angular 10 supports TypeScript 3.9. Earlier, Angular 9 added support for TypeScript versions 3.6 and above.



Consider the following two interesting features in TypeScript 3.8:

• ES private fields: TypeScript has always allowed creating private fields in a class with the private access modifier. ECMAScript has a private class fields proposal at Stage 3, in which the field is prefixed with # and is scoped at the class level. Consider the following example. Follow the link to read more on this topic.

class Todo {
  #title: string;

  constructor(todoTitle: string) {
    this.#title = todoTitle;
  }

  greet() {
    // Works okay – private field used within the class
    console.log(`The todo is ${this.#title}!`);
  }
}

let newTodo = new Todo("Buy Milk");

newTodo.#title // error – can’t access private field outside the class

• Top-level await: We may use await with a promise to avoid then callbacks. However, a function using await needs to be declared async. With top-level await, await can be used directly at the JavaScript module level. This eliminates the need to wrap such code in functions marked async; a sketch follows below.
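A sketch of the difference (the URL is a placeholder, and top-level await is subject to the module and target settings in tsconfig.json):

// Before: module-level code had to be wrapped in an async function.
async function loadTodos() {
  const response = await fetch('/api/todos');
  return response.json();
}

// With top-level await (ES modules, TypeScript 3.8+): await directly at module level.
const response = await fetch('/api/todos');
export const todos = await response.json();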

#10. CLI to generate displayBlock components


By default, Angular components use the CSS display property with the value inline. The component does not add a line break before or after itself.

The Angular CLI in Angular 9.1.x (notice minor version 1) can generate a component with the display CSS property set to block. Such a component will add a line break before and after itself.

Run the generate component command with the flag --displayBlock:

ng g component todo-component --displayBlock

#11. Dependencies with CommonJS Imports


Angular 10 shows a warning if a project dependency uses CommonJS. The ECMAScript Module system (ESM) is more efficient, and given how long ESM has been available, browser-based JavaScript projects should move to this better and newer option. Since CommonJS was built with server-side Node.js applications in mind, bundle size is not a priority and tree shaking is inefficient; hence, the bundle sizes are larger compared to ESM. Follow this link to read more on how CommonJS increases bundle size.

#12. Deprecated Support for older browsers


Angular 10 deprecates support for Internet Explorer Mobile, and for Internet Explorer 10 and below.

#13. Using View Engine with an Angular 9 project
Ivy is the default compiler and runtime with Angular 9. To disable it and use View Engine, consider the
following configuration. Change the field in tsconfig.json at the root of the project.

Figure 11: Configuration to toggle Ivy

Conclusion

Angular is evolving continuously. Angular 9 is a key release, among the most significant of the past few years. Ivy has been in the making, and actively discussed in the Angular developer community, for a while. The new runtime and compiler improve the framework in multiple aspects. Angular 10 is another recent major version upgrade (released in June 2020). Version 10 has relatively fewer updates (as far as major version releases go).

At the beginning, the article introduced Angular versioning and the significance of Angular 9. The article described how removing entry components simplified using imperative components. The article then discussed bundle size improvements with Ivy. Next, it discussed improved debugging with the better API on the "ng" object.

The article also described strict mode with the Angular CLI. It provided a sample implementation of the Angular Material Date Range component.

The article also mentioned improved type checking in component and directive template files, and discussed the additional options for providedIn.

A JavaScript feature, 'dynamic imports', introduced with ES2020, has implications for Angular 9 and Ivy: ngc depends on the new dynamic imports as opposed to the old string syntax.

We concluded by showcasing the configuration to opt out of Ivy and use View Engine, after describing a few miscellaneous Angular 9 and Ivy features.



Code Sample

Check out the code sample at the following GitHub repo. Clone it, then run npm install and npm start to run the sample.

https://github.com/kvkirthy/todo-samples/tree/master/memoization-demo

References
- Entry components documentation
- Bye bye entry components by Nishu Goel
- Dynamic Imports in JavaScript
- Christian Liebel’s blog - Angular & TypeScript: Writing Safer Code with strictNullChecks
- How to create private fields and functions in JavaScript class
- TypeScript 3.8
- How CommonJS is making your bundles larger by Minko Gechev.
- Angular blogs
o Angular 9
o Angular 9.1
o Angular 10
- Angular, deprecated APIs and features

Download the entire source code from GitHub at


github.com/kvkirthy/todo-samples

Keerti Kotaru
Author
V Keerti Kotaru is an author and a blogger. He has authored two
books on Angular and Material Design. He was a Microsoft MVP
(2016 - 2019) and a frequent contributor to the developer community.
Subscribe to V Keerti Kotaru's thoughts at http://twitter.com/keertikotaru. Check out his past blogs, books and contributions at http://kvkirthy.github.io/showcase.

Technical Reviewer Editorial Reviewer


Ravi Kiran Suprotim Agarwal


Thank You
for the Anniversary Edition
fo e @yacoubmassad

@keertikotaru

@vikrampendse
@subodhsohoni

@gouri_sohoni

@suprotimagarwal

benjamij
@maheshdotnet

José R López

@saffronstroke

@jfversluis
@damirarh

@klaushaller4

@dani_djg

@sravi_kiran

WRITE FOR US
mailto: [email protected]
