DNCMag Issue 47

In this issue:
Progressive Web Applications
Getting Started with Application Architecture
Memoization in JavaScript, Angular and React
Machine Learning for Everybody
The End of Innovation outside the Cloud

THE TEAM

Editor in Chief: Suprotim Agarwal ([email protected])
Art Director: Minal Agarwal

Contributing Authors: Yacoub Massad, Vikram Pendse, Subodh Sohoni, Mahesh Sabnis, Klaus Haller, Keerti Kotaru, José Manuel Redondo López, Gouri Sohoni, Gerald Versluis, Daniel Jimenez Garcia, Damir Arh, Benjamin Jakobus

Technical Reviewers: Benjamin Jakobus, Damir Arh, Gouri Sohoni, Keerti Kotaru, Ravi Kiran, Subodh Sohoni, Suprotim Agarwal, Vikram Pendse, Yacoub Massad

Next Edition: October 2020

Copyright @A2Z Knowledge Visuals Pvt. Ltd. Reproductions in whole or part prohibited except by written permission. Email requests to "[email protected]". The information in this magazine has been reviewed for accuracy at the time of its publication; however, the information is distributed without any warranty expressed or implied.

Windows, Visual Studio, ASP.NET, Azure, TFS & other Microsoft products & technologies are trademarks of the Microsoft group of companies. 'DNC Magazine' is an independent publication and is not affiliated with, nor has it been authorized, sponsored, or otherwise approved by Microsoft Corporation. Microsoft is a registered trademark of Microsoft Corporation in the United States and/or other countries.

EDITOR'S NOTE

Time flies! The DotNetCurry (DNC) magazine, a digital publication dedicated to Cloud, .NET and JavaScript professionals, is 8 years old!!

This new milestone gives us a point of focus, and a springboard to jump upward and forward from here. Although, this time, during this ongoing pandemic, we will need to focus our efforts towards technologies that boost productivity and help us stay relevant.

I believe that to stay relevant, we will need three key elements: strengthening our fundamentals, awareness of new and disruptive technologies, and a plan to learn these technologies and make the most of them. In short, we will need to Skill, Reskill and Upskill.

We at DotNetCurry will do our best to cover topics that are relevant to the current situation, and we hope you will do your best to learn as much as you can!

On that note, I want to take this opportunity to thank my extraordinary team of Authors and Experts, who I am humbled to be a part of. I also want to thank our sponsors, and our patrons who have helped us so far to keep this magazine free of cost. A special thanks to YOU, the reader, for without you, this magazine won't exist.

Enjoy this 8th Anniversary Edition, and stay in touch via LinkedIn or Twitter. You can also email me your feedback at [email protected].

Suprotim Agarwal
Editor in Chief
Progressive Web Applications
From Zero to Hero
Progressive web applications (PWAs) can be described as a set of techniques that take advantage of
modern browser APIs and OS support to provide an experience similar to a native application.
While this started as a way for web applications to offer an experience closer to traditional iOS/Android applications, it has expanded to cover traditional desktop applications as well. For example, Windows 10 now provides ample support for PWAs!
Unfortunately, there is no single formalized standard that defines what Progressive Web Applications are,
nor which functionality must be implemented by platforms that support them. This means you can read
different definitions depending on where you look. See for example:
• Mozilla MDN
• Google developers
• Wikipedia
Of these, I feel Mozilla MDN is the one that summarizes it best, even though their page is a draft!
Progressive Web Apps are web apps that use emerging web browser APIs
and features along with traditional progressive enhancement strategy to
bring a native app-like user experience to cross-platform web applications.
Progressive Web Apps are a useful design pattern, though they aren't a
formalized standard. PWA can be thought of as similar to AJAX or other
similar patterns that encompass a set of application attributes, including
use of specific web technologies and techniques.
We will also go through the minimum technical requirements for a web application to be considered a PWA. This is important, since otherwise operating systems like iOS, Android or Windows 10 won't consider it a PWA, and thus won't allow installing it!
• Served over HTTPS.
• Usage of service workers, which lets them be fast and provide an experience closer to native applications, like push notifications or offline mode.
• Described through a manifest file, so that at the time of installation, the OS knows about the name, icon
and other useful metadata.
Note that these requirements don’t mean apps have to work offline or provide push notifications. It also
doesn’t mean you have to implement them using JavaScript SPA frameworks. As long as you use HTTPS, a
service worker and a manifest, you have a PWA.
Enough with the theory, let’s look at an example using the Vue.js docs site: https://vuejs.org/. I am going to
use the latest Edge for Windows, but feel free to try it on your Android/iOS phone as well!
When you open a PWA in your browser, the browser will notice and provide an option for installing it. For
example - using Edge on Windows 10 (you can also install from Chrome; steps might vary slightly):
You can also use the browser developer tools (Ctrl + Shift + I) to inspect both the manifest and the service
worker of the PWA.
If you open the developer tools in Microsoft's Edge browser, go to the Application tab, as shown in Figure 2 and Figure 3.
Figure 3, inspecting the service worker of the Vue.js docs site
Once you install the PWA, notice it shows up in your start menu in Windows. You can manage it as any
other app, pinning it to the start menu, pinning to the taskbar or inspecting its properties.
Interestingly, if you inspect the application properties, you can see how this is a shortcut for a web
application that runs on Edge:
If you launch the application, it feels like a regular Windows 10 application, even though it’s just a web
application running inside a sandbox environment:
Launch the application from the Windows 10 start menu (or from the installed apps in your iOS/Android
phone). Once the app has launched, disconnect from your network. Notice how the application keeps
working, thanks to the service worker.
Figure 5, the service worker intercepts the network requests and serves them from cache
We can go a bit further and see how the manifest and service worker are actually registered in a web
application. If you open the developer tools and navigate to https://vuejs.org, you can inspect the HTML
index document.
• Within the <head> element you will see the manifest file being registered:
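(The exact markup isn't reproduced in this extract; a typical manifest registration looks along these lines, with the href pointing at the site's own manifest file.)

<link rel="manifest" href="/manifest.json">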
• At the end of the <body> element you will see the service worker being installed:
<script>
'use strict';
'serviceWorker' in navigator && navigator.serviceWorker.register('service-worker.js')
  .then( // rest of the code omitted
</script>
The way these are registered is a perfect example of what the term progressive in PWA means. The manifest is just a link with a special attribute that browsers without PWA support will simply ignore.
The script that registers the service worker first checks that the browser actually supports the service
workers API. This way browsers/platforms that support the latest APIs get all the features, while older
browsers ignore them.
If you have never looked at PWAs before, hopefully this sneak peek has been enough to pique your interest.
Let’s now see how we can create PWA using different frameworks.
Developing Progressive Web Applications
In order to create a PWA, all you have to do is add a manifest file and a service worker to your web application, and serve it over HTTPS. It's no surprise then that many of the frameworks used to create web applications help developers create these.
This is particularly helpful for the service workers: armed with knowledge about the inner workings of each framework, the framework authors can tailor the service worker with suitable default behavior.
You can find the code for each of these examples on GitHub.
Blazor WebAssembly
Now that Blazor WebAssembly is officially released, it is a good time to explore how you can take advantage of the PWA support its project template provides out of the box.
Why not use server-side Blazor, which has been officially available
for a while?
Since server-side Blazor relies on a permanent SignalR connection with the server, it's harder to find a use case for PWAs together with server-side Blazor. However, if you don't care about the offline
functionality and you just like the idea of launching and executing it
as a native app, it is technically possible! See this article.
If you use the dotnet CLI, you just need to use the --pwa flag as in:
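(The command isn't reproduced in this extract; assuming the project name used throughout this section, it would look something like this.)

dotnet new blazorwasm --pwa --hosted -o BlazorPWATest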
Alternatively, make sure to check the PWA option when creating a new Blazor Web Assembly project using
Visual Studio:
In both cases, make sure you use the right version of .NET Core. Also note the selection of the hosted deployment
model, which makes it easier to test the PWA functionality.
When the project is generated, you will notice a manifest.json file located inside the Client’s wwwroot
folder.
{
"name": "BlazorPWATest",
"short_name": "BlazorPWATest",
"start_url": "./",
"display": "standalone",
"background_color": "#ffffff",
"theme_color": "#03173d",
"icons": [
{
"src": "icon-512.png",
"type": "image/png",
"sizes": "512x512"
}
]
}
Alongside the manifest, you will also find two service worker files:
• One is used during development and is called service-worker.js. As explained in its comments, reflecting code changes would be harder if we were using a real service worker during development.
• The other one is called service-worker.published.js and contains the real service worker used when published. It is the service worker that provides the caching of static resources and offline support.
Figure 7, inspecting the service worker added by the Blazor WebAssembly template
The offline support that the template provides is described in great detail in the official documentation. To
summarize the main points:
• The service worker applies a cache first strategy. If an item is found in the cache, it is returned from it,
otherwise it will try to make the request through the network.
• The service worker caches the assets listed in a file named service-worker-assets.js which is generated
during build time. This file lists all the WASM modules and static assets such as JS/CSS/JSON/image
files part of your Blazor application, including the ones added via NuGet packages.
• The published list of assets is also critical for ensuring the new content is refreshed. Each of the items
in the list includes its contents hash and the service worker will work through the latest version of the
list every time the application starts.
• Non-AJAX HTTP GET requests for HTML documents other than index.html are intercepted and
interpreted as a request to index.html. What this tries to do is intercept full requests for pages of the
Blazor SPA (Single Page Application), so the browser loads the index.html alongside the JS/WASM that
initialize the Blazor application which then renders the required page.
It's also worth highlighting that the offline logic is under the control of the developer. You can modify the provided service worker file, and/or use the ServiceWorkerAssetsManifestItem MSBuild elements (see the official documentation for more info).
Let’s see it working in action. After generating your Blazor project, publish it to a folder as in:
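(The publish command isn't reproduced in this extract; a standard Release publish from the solution folder would be:)

dotnet publish -c Release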
This will publish both client and server projects (remember we chose the hosted model when initializing
the project) to corresponding subfolders. The server should be published to a folder like Server/bin/
Release/netcoreapp3.1/publish. If you navigate to that folder, you can then start the published server
by running the command dotnet BlazorPWATest.Server.dll:
In the terminal where you were running the published server, stop the server. Notice how you can still
launch the installed application! If you open the developer tools, you can see how the required files are
being loaded from the service worker cache:
Figure 10, testing the offline capabilities of the installed Blazor PWA
If you run into trouble, feel free to check the sample project on GitHub.
This concludes the brief overview of the PWA support available out of the box for Blazor WebAssembly
projects. For more information, you can check this excellent article from Jon Galloway and the official
Blazor documentation.
ASP.NET Core
When writing traditional ASP.NET Core applications, there is no Microsoft-provided PWA support. Creating a manifest is just about providing some metadata, but a useful service worker is a different matter altogether. You could attempt to create your own, but the caching policy for offline mode can be surprisingly tricky to get right.
Luckily for everyone, there are community driven NuGet packages to help you turn your ASP.NET Core
application into a PWA. Let’s take a look at WebEssentials.AspNetCore.PWA by Mads Kristensen.
Note the package hasn’t been fully updated to ASP.NET Core 3.1 at the time of writing. While I had no trouble
getting it to work on a brand new MVC project, your mileage might vary (I’ve noticed issues in brand new Razor
pages projects). See the status of this GitHub issue for updates.
A minor inconvenience is that you need to write the manifest yourself, and add any icons it references! You can easily write a manifest by following the NuGet package's instructions, taking inspiration from the earlier Blazor example, or just searching online. For example:
{
"name": "ASPCorePWATest",
"short_name": "ASPCorePWATest",
"start_url": "./",
"display": "standalone",
"background_color": "#ffffff",
"theme_color": "#03173d",
"icons": [
{
"src": "icon-192.png",
"type": "image/png",
"sizes": "192x192"
},
{
"src": "icon-512.png",
"type": "image/png",
"sizes": "512x512"
}
]
}
Note that you will also need to provide the icons referenced in the manifest, and that at the very least, you
should provide both a 192x192 and a 512x512 version of the icon. The easiest way is to generate your
own icons using websites such as favicon.io (Which will also provide a code snippet to be added in your
manifest file).
The last thing you need to do is to register the package by adding the following line inside the
ConfigureServices method of the Startup class.
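(The line itself isn't reproduced in this extract; based on the package's documentation, it is the service registration shown below.)

services.AddProgressiveWebApp();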
Now that everything is wired, launch the application with the familiar dotnet run command (Or use
Visual Studio, whatever you prefer).
You will see the now familiar interface to install it as an application (see Figure 11). You can also see in the
browser developer tools that the manifest and the service worker have been correctly recognized.
Figure 11, adding PWA support to a traditional ASP.NET Core MVC application
If you run into trouble, feel free to check the sample project on GitHub.
If you use dotnet run and have tried the earlier Blazor example, it is possible that all the Blazor files are still cached by the browser. If you still see the previous Blazor site when you open the ASP.NET Core MVC site, clear the browser cache or force a reload (Ctrl+F5 on PC, Cmd+Shift+R on Mac).
Feel free to take a look at the library documentation on GitHub. Beyond the library installation and basic setup, it details the options provided to customize behavior such as the service worker and/or the strategy for caching assets.
Developing PWAs using SPA frameworks
By now it should be clear that in order to create a PWA, you do not need to use a SPA framework. All you need is HTTPS, a manifest file and a service worker.
I recently wrote about how to develop SPA applications using frameworks such as Angular, React and Vue with ASP.NET Core (see Developing SPAs with ASP.NET Core v3.0). Let's now take a quick look at how each of these frameworks lets you add PWA support to your application.
Don’t be surprised if it starts getting a bit repetitive. After all, in order to provide basic PWA functionality,
you really just need to add the manifest and service worker!
If you run into trouble, feel free to check the sample projects on GitHub.
Angular
When you create an ASP.NET Core Angular application using the template provided by Microsoft, the ClientApp folder contains nothing but a regular Angular CLI application. This means you can use the Angular CLI to enable additional Angular features such as PWA support.
Once initialized, make sure to build the project using dotnet build, or at the very least, install the NPM dependencies inside the ClientApp folder:
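(The command isn't reproduced in this extract; it is simply an npm install run from inside the ClientApp folder.)

cd ClientApp
npm install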
You will also need the Angular CLI installed. If you haven't installed it yet, follow the official instructions. It
comes down to:
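(The command isn't shown in this extract; the official instructions boil down to installing the CLI globally via npm.)

npm install -g @angular/cli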
At this point, you should have your Angular project ready and all the necessary tooling installed.
In order to add PWA support, all you need to do now is to run a single command using the Angular CLI:
ng add @angular/pwa
That’s it! Now run your project using dotnet run and verify that in fact you have a PWA that can be
installed.
I won’t go into more detail as part of this article. If you want to know more, check out this article on how
to get started with PWA in Angular, and also read through the excellent official documentation on service
worker support.
React
Let's switch our attention to React and the Microsoft-provided React templates.
As discussed in my previous article, these templates have a standard create-react-app inside the ClientApp folder. This means we can take advantage of the existing PWA support already provided by create-react-app.
In fact, the React application initialized by the React template already has the required PWA support!
Take a moment to inspect the contents of your ClientApp folder. Notice how there is a manifest file
inside ClientApp/public/manifest.json. There is also a script prepared to install a service worker, the
ClientApp/src/registerServiceWorker.js file.
You will need to make a few small fixes to the provided manifest. If you inspect the manifest in Chrome, you will notice it complains about the icon and start URL. For the purposes of this article, you can copy both the manifest and icon from the Blazor example. If you want to fix it by hand, make sure you set "start_url": "./", and provide at least a 512x512 icon.
{
"short_name": "ReactPWATest",
"name": "ReactPWATest",
"icons": [
{
"src": "icon-512.png",
"type": "image/png",
"sizes": "512x512"
}
],
"start_url": "./",
"display": "standalone",
"theme_color": "#000000",
"background_color": "#ffffff"
}
Unlike the service worker generated by the Angular template, the one generated by the React template won't be registered during development. It is only registered during Release builds. This is due to the following guard located inside the registerServiceWorker.js file:
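(The guard isn't reproduced in this extract; in a standard create-react-app registration script it looks roughly like the following, simplified.)

if (process.env.NODE_ENV === 'production' && 'serviceWorker' in navigator) {
  // the service worker is only registered for production builds
  // ... registration code omitted
}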
So, in order to see the PWA support in action, we just need to run it in production mode. Publish the project
to a folder using the command:
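(The command isn't shown in this extract; as with the Blazor example, a standard Release publish works:)

dotnet publish -c Release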
Then navigate to the published folder and run the published version:
cd .\bin\Release\netcoreapp3.1\publish\
dotnet ReactPWATest.dll
Now when you open the site in the browser, you will notice the browser recognizes the manifest, that a
service worker is registered and that you have the familiar option to install it.
As you can see, PWA support is pretty much built into the React template. Take a look at the create-react-app documentation for more information.
Vue
Other SPA frameworks like Vue also provide good PWA support. In the case of Vue, the Vue CLI provides a plugin that enables PWA support in the application.
Let's create a new project using the React template and convert it into a Vue application as described in my previous article. You can add the PWA plugin while initializing the Vue application by manually selecting the features:
Note that the PWA support works similarly to that provided by create-react-app: the service worker is only registered during Release builds.
Remember there are a few settings to adapt from the React template in order to get Release builds working with a Vue CLI app. See production builds for Vue in my previous article.
You can build the application for release and run it using the same commands as in the React SPA example
before. Once you load the production site in the browser, notice the manifest and service worker are
recognized, and the browser offers the option to install the application.
Developing PWAs using static site generators
A very interesting use case for PWAs is that of static sites such as documentation, portfolios or personal blogs.
Over the last few years, frameworks such as Jekyll, Vuepress and Gatsby.js have become a popular choice for building these types of applications. They take care of most of the heavy lifting needed to create a web application, letting the developer concentrate on the content of the site.
It’s no surprise then that enabling PWA support is an out of the box feature that developers can simply
enable. Let’s take a look at a quick example using Vuepress, by creating a documentation site for a library or
a project you have created.
Let's quickly set up a vuepress project by running the following commands in a terminal:
mkdir vuepress-pwa
cd vuepress-pwa
npm init
npm install -D vuepress
Edit the generated package.json file and replace the scripts property. You will replace it with the
scripts needed so you can run the site in development mode and generate the production build:
"scripts": {
"dev": "vuepress dev docs",
"build": "vuepress build docs"
},
That way, you will be able to run the command npm run dev to start the development server, and npm
run build to generate the production package.
Create a new folder named docs, and inside create a new file named README.md. This will serve as the
home page of your documentation site:
---
home: true
heroText: My awesome project
tagline: Sample docs site showcasing vuepress and PWA
actionText: Getting Started
actionLink: /getting-started/
---
Inside docs, create a new getting-started folder, containing another README.md file:
# Getting started
This normally contains the easiest way to get started with your library.
Finally, add a subfolder named .vuepress inside the docs folder. Inside, place a new config.js file. This
is where you can configure the vuepress settings, which we will use to define the navigation. Add these
contents to the .vuepress/config.js file:
module.exports = {
title: 'My awesome project',
description: 'Documentation for my awesome project',
themeConfig: {
nav: [
{ text: 'Home', link: '/' },
{ text: 'Getting Started', link: '/getting-started/' },
]
}
}
After all these steps, you should now have a minimal documentation site built using Vuepress. Run the npm
run dev command and navigate to the shown address in the browser (Normally http://localhost:8080). You
will see something like this:
If you run into trouble, feel free to check the sample project in GitHub.
Let’s now see how we can add PWA support using the official PWA plugin. Install it by running the
command:
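(The install command isn't reproduced in this extract; assuming the official VuePress 1.x plugin package, it would be:)

npm install -D @vuepress/plugin-pwa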
And configure it by adding the following inside the default export of docs/.vuepress/config.js:
module.exports = {
// previous contents omitted
head: [
['link', { rel: 'manifest', href: '/manifest.json' }],
],
plugins: ['@vuepress/pwa'],
}
You will then add the manifest and icon files inside a new public folder, added to the existing .vuepress
folder, as in docs/.vuepress/public/manifest.json. You could again lift the ones from the initial
Blazor example or create your own.
That’s the minimum configuration needed to make your Vuepress documentation site PWA compliant. Now
you just need to deploy it using HTTPS.
A simple and nice alternative for public documentation sites is to use GitHub pages, but there are other
options as per the official docs.
Remember the tour of the Vue documentation site at the beginning of this article? That is a perfect real-life
example of a Vuepress application with PWA enabled!
Lighthouse
Google Chrome (and other Chromium based browsers such as the new Edge) provides an excellent auditing
tool, Lighthouse.
What Lighthouse does is to audit your site on various categories, including PWA support. It then generates a
report with a score, findings and recommended fixes/improvements.
When it comes to PWAs, you should make sure to audit your production builds. As we have seen throughout the article, in many cases the service worker is not enabled during development, or a dummy version of it might be used instead.
Make sure to publish it for Release and run the resulting site (You can check the earlier section about
Blazor). Once up and running, navigate to the site with your browser and open the developer tools. Go to the
Lighthouse tab:
Lighthouse is a great tool to improve your web applications, not just for PWA purposes. Make sure to check
it out!
Android/iPhone simulators
Whenever the focus of a PWA is mobile devices, testing them on a simulator can be of great help during
development and/or debugging.
Be aware that the iPhone simulator does not support WebAssembly, so the Blazor WebAssembly example would
not work.
For example, let's test how our Angular example project behaves in the iPhone simulator. Once we have the application running in Release mode on a Mac, we then fire up the simulator and navigate to the app:
You can then try to install the application, which will add it to the list of applications:
As in the case of Lighthouse, when testing the PWA functionality, you will want to test the published
version of your application. That’s not to say the simulator cannot be useful during development to test
how your app behaves on a phone.
ngrok
While simulators can be really handy, testing on a real device is much better. ngrok is a tool that lets you securely expose a site running on your laptop over the Internet.
This means you can get your site running on localhost exposed with an HTTPS URL that you can then test on a real device. In certain situations, this can be a lifesaver!
Remember that one of the main requirements for PWA is the usage of HTTPS. Using ngrok is a great way to
get an HTTPS tunnel to your localhost site that can be trusted by real devices and/or simulators.
You can sign up for a free account, and follow the instructions to download and set up ngrok on your machine. Once installed and configured, let's see what we can do.
Start the published version of the Blazor example on your local machine, which by default will be running at https://localhost:5001. Then run the following ngrok command:
# windows
ngrok.exe http https://localhost:5001
# mac/unix
ngrok http https://localhost:5001
This will create a tunnel and expose your localhost site over the Internet:
Figure 22, exposing a site running in localhost over Internet with ngrok
You can then open the site on any device: your laptop, a simulator, or a real phone. As long as the device has access to the Internet, it will be able to reach the established tunnel!
Figure 23, loading the Blazor example on a real phone with ngrok
Figure 24, installing the Blazor example on a real phone with ngrok
I have found ngrok a brilliant tool to be aware of. For more information, check their official documentation.
PWA drawbacks and challenges
PWAs also come with drawbacks and challenges. I will briefly go through some of the most important ones, but if you are seriously considering PWAs, it will be worth spending some time doing your own research on these topics.
PWAs are not native applications
This might seem obvious but, after all, PWAs are not native applications. While they offer a similar experience, it might not be close enough for your use case, or it might have too many drawbacks. For example:
• PWAs are not distributed through app stores. You lose certain benefits that you normally get simply from being in the app stores, like discoverability, in-app purchases or insights.
• You don't have access to the same functionality that native applications do. You can only use whatever functionality is available in the latest browser APIs, and while that is plenty, there are examples, like custom URL schemes or access to certain elements of the hardware, that you won't have.
• Android, iOS and Windows offer different levels of support for PWAs, with Google and Android being particular champions of the technology.
Offline is hard
Being able to add offline support via service workers is great, and it works extremely well in the case of static sites such as blogs or documentation. It is much harder for applications where users create or modify data.
One simple approach could be that your site becomes read-only until the user is back online. If your site is
mostly static data, or interactions need to happen in a timely manner, this might be a decent approach that
saves you a lot of effort.
A harder alternative is considering the usage of a local storage such as IndexedDB to store the data while
offline and sync with the server once the user is back online. This opens a whole new set of problems and
user flows that you need to consider.
• What if there is a data conflict once the data is sent to the server?
• How about data that does not make sense after a certain amount of time elapses?
Of course, you can consider a mix and match approach, and allow certain changes to happen offline while
others become unavailable. In any case, offline work does not happen for free and needs to be carefully
considered.
Caching is hard
As seen during the article, pretty much every major web application framework provides some form of PWA
support that lets you add a service worker.
This is great!
You then have a service worker that caches your HTML/JS/CSS files so your application can start and
function while offline.
Let’s leave the caching of data aside, which we briefly discussed over the previous point. Consider only the
caching of HTML/JS/CSS files. Now you need to be careful with invalidating that cache and updating those
files!
That is likely going to need a server that can tell you what the latest versions are, and some process that
requests the latest versions and reconciles with what you have installed (as we briefly saw Blazor doing).
Combined with offline mode, you might need to assume that there can be multiple versions of your
application in the wild. This is significantly more challenging than a traditional web application where you
can easily ensure that everyone uses the latest version of your website!
Development experience
Developing a PWA can be more challenging than developing a web application. While testing with device
simulators or real devices is a good practice even when developing web applications, a PWA introduces
further challenges.
In addition, the need for HTTPS and the fact that many frameworks don’t even enable the service worker
during development, introduces additional friction in the development experience.
Conclusion
Progressive Web Applications (PWA) have become a very interesting choice for web developers. You can
leverage most of the same skills and tools you are used to, in order to provide a native-like experience
across desktop and mobile.
Certain applications will benefit more from the capabilities offered by PWAs than others. There will be
teams who might not find it worth dealing with the extra challenges they bring, so they might not want
to convert their application into a PWA. Or they realize a PWA doesn’t yet provide the functionality or
experience they need and will prefer to stick with native applications.
However, for those who adopt them, there is no shortage of tooling and support.
Most of the popular frameworks for writing web applications now offer support for PWAs, making developers' lives easier. Teams might even embrace PWAs only partially, for example making their application installable while still requiring an online connection.
Like many other things in technology, the only right answer is It depends.
Getting Started with Application Architecture
Damir Arh
One could argue that application architecture is a subset of software architecture that primarily focuses on individual applications, in contrast to, for example, enterprise architecture, which encompasses all the software inside a company including the interactions between different applications.
Unlike buildings, software changes a lot and is never really 'done'. This is reflected in its architecture, which also needs to constantly adapt to changing requirements and further development.
It's important that the architecture doesn't consider only the functional requirements (i.e. what the application needs to do), but also the non-functional ones. The latter are concerned with application performance, reliability, maintainability and other aspects that don't directly affect its functionality but have an impact on how the application is experienced by its users, developers and other stakeholders.
Typically, the application architecture starts with the choice of the type of application to develop, driven by questions such as:
• Will the users always have internet connectivity, or will they use it in offline mode as well?
• Does the application take advantage of or even require the use of services and sensors available on
devices, such as notifications, geolocation, camera, etc.
• …and so on.
Closely related to the choice of application type is the choice of technology. Even if you've already decided on the .NET technology stack, there are still a lot of choices for all types of applications: desktop, mobile, and web.
You can read more about them in my previous articles for DNC Magazine.
Once you have decided on the application type and the technology stack, it’s time to look at the
architectural patterns.
Architectural Patterns
They have a role similar to design patterns, only that they are applicable at a higher level. They are proven
reusable solutions to common scenarios in software architecture.
• In Multitier architecture, applications are structured in multiple tiers which have separate responsibilities and are usually also physically separated.
Probably the most common application of this architectural pattern is the three-tier architecture that consists of a data access tier (including data storage), a business tier (implementing the business logic) and a presentation tier (i.e. the application user interface).
• In Service-Oriented architecture (SOA), the application is composed of multiple services. A standardized service contract describes the functionality exposed by each service. This allows loose coupling between the services and a high level of reusability. When the architecture pattern was first introduced, the most commonly used communication protocol was SOAP.
• Microservices are a more modern subset or an evolution of the Service-Oriented architecture (SOA).
As the name implies, they are usually more lightweight, including the communication protocols which
are most often REST and gRPC. Each service can use whatever technology works best for it, but they
should all be independently deployable with a high level of automation.
• In Event-driven architecture, there’s even more emphasis on loose coupling and asynchronous
communication between the components. Although it’s not a requirement, the system can still be
distributed, i.e. it can consist of independent services (or microservices).
The main distinguishing factor is that the components don’t communicate with imperative calls.
Instead, all the communication takes place using events (or messages) which are posted by components
and then listened to by other components that have interest in it.
The component posting the event doesn't receive any direct response to it. However, it might receive an indirect response by listening to a different message. This makes the communication more asynchronous.
Within a selected high-level architectural pattern, there are still many lower level architectural decisions to be made which specify, in even more detail, how the application code will be structured.
There are further patterns available for these decisions, such as the Model-View-Controller (MVC) pattern
for web applications and the Model-View-Viewmodel (MVVM) pattern for desktop applications.
As the scope of these patterns becomes even smaller, we sometimes name them design patterns instead of
architectural patterns. Examples of those are dependency injection (read more about it in Yacoub Massad’s
article Clean Composition Roots with Pure Dependency Injection), singleton (read more about it in my
article Singleton in C# – Pattern or Anti-pattern), and others.
After the initial application architecture is defined, it’s necessary to evaluate how it satisfies the list of
requirements it’s based on. This is important because in most cases there is no single correct choice for the
given requirements.
There are always compromises and every choice has its own set of advantages and disadvantages. A certain
architectural choice could, for example, improve the application performance, but also increase operating
costs and reduce maintainability. Depending on the relative importance of the affected requirements, such
an architectural choice might make sense or not.
The architecture needs to evolve as the application is being developed and new knowledge and experience is gathered.
If a certain pattern doesn’t work as expected in the given circumstances, it should be reevaluated and
considered for a change. Requirements (functional and non-functional) might change and affect previous
architectural decisions as well.
Of course, some architectural patterns are easier to change than others. For example, it’s much easier to
introduce dependency injection into an application or change its implementation than to change the
application type or core architectural pattern such as MVC.
39
www.dotnetcurry.com/magazine |
Official resources for .NET developers
If you’re focusing on the .NET and Azure technology stack, Microsoft offers a lot of official resources to help
you get started with architecting applications.
The .NET Architecture Guides website is probably the best starting point. The resources on the site are
organized based on the type of application you want to develop. After you select one (e.g. ASP.NET apps
or UWP desktop apps), you get a list of free e-books to download that are applicable to the selected
application type. The same e-book might be listed under multiple application types if the guidance it
contains applies to more than one.
In general, the e-books follow a similar approach and do a good job at introducing the reader to the topic
they cover. Typically, they include the following:
• An introduction to the technologies involved.
• An overview of available application sub-types (e.g. MVC vs. single page web applications) and the
reasoning behind choosing one.
• A list of common architectural patterns for the application type with explanation.
The books primarily target readers with no prior experience in the subject they cover. Their content is suitable both for software architects and developers. They might still be a valuable read even if you have some prior knowledge; in that case, you might skip some parts of the book or skim over them, but the rest will likely still be worth your time.
Many books are accompanied by working sample applications with full source code available on GitHub.
These showcase many of the concepts explained in the books and are regularly updated to the latest
versions of technologies they use. Often the samples are also updated with features introduced in new
versions of those technologies. It’s a good idea to look at the code before starting a new similar project or
implementing a pattern that’s featured in a sample.
Another useful official resource is the Azure Architecture Center. As the name implies, it's mostly focused on applications being hosted in Microsoft Azure, but many of the
patterns are also useful in other scenarios. In my opinion, the most useful part of this site are the Cloud
Design Patterns. It consists of a rather large list of architectural patterns, grouped into categories based
on their intended use. Similar to how software design patterns are usually documented, each pattern has a
description of the problem it addresses and the solution it proposes. Many include sample code and links to
related patterns.
While this might not be enough information on its own to fully implement a pattern from scratch,
it’s a good basis to compare and evaluate different architectural patterns in the context of your own
requirements.
Conclusion
In this article, I explained the concept of application architecture and provided some guidelines on how
to approach it. I described the importance of architecture evaluation and evolution. I included links to
Microsoft’s official resources about application architecture and suggested how they can be used effectively.
This is the first article in the series about application architecture. In the following articles, I will focus on a
selection of architectural patterns for different application types.
Check Page 174 of this magazine where I talk about Architecture of Desktop and Mobile Applications in
.NET.
Damir Arh
Author
Damir Arh has many years of experience with Microsoft development tools;
both in complex enterprise software projects and modern cross-platform
mobile applications. In his drive towards better development processes,
he is a proponent of test driven development, continuous integration and
continuous deployment. He shares his knowledge by speaking at local
user groups and conferences, blogging, and answering questions on Stack
Overflow. He is an awarded Microsoft MVP for .NET since 2012.
AZURE DEVOPS
Source Control in Azure DevOps (Best Practices)
Gouri Sohoni
Why Source Control?
Once the code is written, it needs to be kept safe (so code is not deleted or corrupted) and for that, we
maintain a copy of it. Sometimes, we make fixes to code that does not work as expected. As a precaution,
the original code is kept along with the changed code. If the changed code works, we do not need the
original code, but in case it does not work, we can always use the original code to start fresh and remove
bugs.
This entire process creates many copies of our code. Even if we name these copies (or timestamp), it
becomes very difficult to keep track of them.
There is also a chance that our machine, on which the code is created, may crash and we may end up losing
all the code that was written. For team members, this becomes a bigger challenge if multiple developers
are creating, maintaining and working on separate copies of the code.
There should be a way in which all team members are able to collaborate and work with the same
codebase.
Enter Source Control which helps in removing all the aforementioned problems.
• Version Control is a term used interchangeably with revision control or source control. So going forward,
I will use the two terms, source control and version control, interchangeably.
• Source Control is a way to keep a common repository of source code for multiple developers in a team.
Multiple team members can work on the same code, so sharing of code becomes easier.
• Source Control helps in tracking and managing changes made to the code by different team members.
This ensures that Team members work with the latest version of code.
• A complete history of the code can be viewed with Source Control. We may have to resolve conflicts when multiple developers try to change the same file; history is maintained for all the changes, including any conflict resolution.
• Although the terms Source Control and Version Control are used interchangeably, Version Control also takes care of large binary files.
• With Source Control it becomes easier to keep track of the version of software which has certain bugs.
These different versions can be labelled and kept separate.
There are many tools for source control. They mainly fall into two types - centralized and distributed. Examples of Centralized version control tools are:
• SVN - Subversion
• CVS
• SourceSafe
• ClearCase
• Perforce
Examples of Distributed version control tools are:
• GitHub
• GitHub enterprise
• Bitbucket
• Mercurial
In Azure DevOps, centralized version control comes with a tool called Team Foundation Version Control (TFVC), while for distributed version control, we either have Git implemented with Azure DevOps or can even use GitHub with Azure DevOps.
TFVC
In TFVC, all the team members work with only one version of file(s) on their machines. There is a history of
code only on the server-side. All the branches get automatically created on the server.
TFVC has two kinds of workspaces, server and local. With server workspace, the code can be checked out to
developers’ machines, but a connection to the server is required for all operations. This helps in situations
where the codebase is very large. With local workspaces, a team member can have a copy and work offline
if required. In both the cases, developers can check in code and resolve conflicts.
Git
Each developer has his/her own copy of the code to work with on their local machine. All version control operations are available on the local copy and execute quickly, as no network access is required.
The code can be committed to a shared repository when required. Branches can remain local and thus can be lightweight. We can keep a minimum number of branches on the server so as to keep it less cluttered. A local branch can be reviewed later using a Pull Request and merged on the server. Following is a comparison of the two:
• Code history: kept only on the server (TFVC) vs. kept on each team member's machine (Git).
• Check-in/commit: a team member needs a connection to the server for check-in (TFVC) vs. a team member can commit code locally (Git).
• Code base size: better for a large code base (TFVC) vs. better for a relatively smaller code base, but have a look at this link in case you have a big repo (Git).
• Branches: branches are heavier (TFVC) vs. branches can be very lightweight and exist only in the local repo (Git).
Just having different source control tools is not enough; we also need to know how to use them optimally. Before we get into the details of the best practices for source control in Azure DevOps, let us have a look at various other tooling features available which will help in many ways.
Azure DevOps provides us with work item tracking. If used properly, this feature gives us traceability from requirement, to code, to test case, to the bug raised. In short, for a bug, we will be able to find out the requirement associated with it.
• Create bug associated with Test Case when test case fails
• Associate code with a work item at the time of checking in or committing code
Visual Studio provides us with an excellent UI for writing code. It also provides support for various test frameworks other than the MS framework.
• Along with the MS Framework, developers can write unit tests with NUnit, xUnit etc.
Azure DevOps provides pipelines with which Continuous Integration (CI) and Continuous Deployment (CD)
is possible with ease.
#1 It is always a good practice to associate code with a work item at the time of check-in/commit. This will help in getting traceability from requirements to tasks, and the tasks in turn will be associated with code. This way, when the test case is tested and a bug is filed, it can be traced back to the requirement.
#2 Provide meaningful and useful comments with check-ins/commits
#3 Use Code Review with TFVC check-in and Pull Request with git commit
• Code Review will ensure that the code will not be checked in before reviewing it.
• Pull Request (PR) will help in committing quality code to the repository.
#4 Visual Studio provides support for various testing frameworks along with the MS framework. Use them.
#5 Use automated builds and release pipelines:
• With TFVC, Gated Check-in will ensure that code which cannot be merged with the new code on the server will not be checked in:
o As the name suggests, the code will not be directly checked in, but first a private build will
execute on the server with the code to check-in, and the already available new/latest code.
o If the private build completes without any issues, code gets checked-in.
o If the build is not successful in an IDE like Visual Studio, the developer gets a message that
the code cannot be checked in.
o The developer takes care of the problem and tries to check in the code again.
• Using Pull Requests with Git will help in achieving the same result as Gated Check-in.
• Using Continuous Integration (CI) trigger will take care of automatically triggering the server-side
build and will also take care of Build Verification Tests (BVTs).
• With the help of Release pipeline, we can provide a Continuous Deployment (CD) trigger
o This will automatically deploy artefacts to specified environment once the build gets
completed successfully and artefacts are available.
• We can define multi-stage pipelines where there can be two separate targets for deployment:
o These can be Virtual Machines, a Web Server, on-premises machines, or targets defined using deployment groups.
o The stages can have different triggers and pre-deployment conditions for deployment, which can be automated or manual (after a group or an individual approves the deployment).
o The stages can be cloned to keep similar tasks while the configuration can be different.
• The release can be directly associated with multiple branches in source control. In this scenario, we can eliminate the build artefact and directly use a source control branch to trigger the release and go ahead with deployment. A filter can be used to select a specific branch.
Conclusion
In this article, I wrote about the importance of Source Control, as well as different types of Source Control.
I also covered the various Best Practices of working with Source Control by using tools available in Azure
DevOps.
Gouri Sohoni
Author
Gouri is a Trainer and Consultant specializing in Microsoft Azure
DevOps. She has experience of over 20 years in training and consulting.
She is a Microsoft Most Valuable Professional (MVP) for Azure
DevOps since year 2011 and Microsoft Certified Trainer (MCT). She
is a Microsoft Certified Azure DevOps Engineer Expert and Microsoft
Certified Azure Developer Associate. She has conducted corporate
training on various Microsoft technologies. Gouri has written articles
on Azure DevOps (VSTS), Azure DevOps Server (VS-TFS) on DotNetCurry
along with DNC Magazine. Gouri is a regular speaker for Microsoft Azure
VidyaPeeth, Microsoft events including Tech-Ed as well as Conferences
arranged by Pune User Group (PUG).
JAVASCRIPT
Memoization in JavaScript, Angular and React
Keerti Kotaru
Memoization (also spelled memoisation) is a caching technique for function calls. We can use the cached value as long as the arguments to the function call do not change. This way, we execute the function only for new invocations with unique arguments.
This article is an attempt to provide a holistic view of the concept across the JavaScript landscape.
Memoization in JavaScript
Memoization is a caching technique for functions. We can add behavior on top of a JavaScript function to
cache results for every unique set of input parameters.
Let’s dive into the details with a very simple example: a multiplication function.
As you can guess, there is no performance incentive for caching the result of multiplication. However, we
use this as a simplistic example in order to illustrate the concept of Memoization.
Consider the following code in Listing 1. We have a function that takes two arguments, p1 and p2. It multiplies the two arguments and returns the value. To demonstrate memoization, we will write a wrapper that memoizes the function call. We can use this wrapper for a simple example (as is the case here), or for a more complex and expensive algorithm.
Notice the console log in Listing 2 that prints every time the multiplication function is invoked. The log
doesn’t appear when the result is returned from cache.
Save our JavaScript code in a file called memoize.js, and run it using node memoize.js (Refer to the Code
Samples section at the end of the article for a link to the complete code sample).
Figure 1: Multiply() calls to demonstrate memoization
See Figure 1. For every unique set of arguments (say 10 and 20), the multiplication function is invoked. All
repeated calls are returned from the cache.
Note: In the above sample, we are making an assumption that the function is a pure function. That is, the
return value stays the same for a set of input arguments. If there are additional dependencies, other than the
arguments, this technique does not work.
The following memoization wrapper is one simple way to implement memoization. However, as we will see,
it is possible to optimize this implementation further.
// cache result
memoizedKeyValues.push({
args: JSON.stringify(args),
result: result
})
Notice, memoizeAny accepts a function as a parameter. In a way, it acts as a decorator, enhancing the
behavior of the provided function (multiply() in the sample).
The memoizeAny function returns another function. In Listing 1, the constant multiply is assigned to this function object. Notice that the arguments provided to multiply are passed on to this function.
This function converts arguments to a string creating a unique key. For every repeated function call, this
value stays the same. For the first invocation of the function, we run the actual multiplication function and
store the result in cache (array variable memoizedKeyValues). For every new invocation, we can query if
the cache has a value with the given key. If it’s a repeated invocation, there will be a match. The value is
returned from cache.
Notice, the actual function is invoked by functionObject.apply(). The control reaches this statement
only if the cache doesn’t have a key for the given arguments.
Notice the console.log when the value is returned from cache (to demonstrate memoization). It prints the key - the stringified arguments. See Figure 1 for the result.
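The complete listings aren't reproduced in this extract. A minimal sketch of such a wrapper, consistent with the fragment and the description above, could look like this (the exact variable names and log messages are assumptions):

// memoize.js - a minimal memoization wrapper (sketch)
const memoizeAny = (functionObject) => {
  // cache of { args, result } pairs, local to this memoized instance
  const memoizedKeyValues = [];

  return (...args) => {
    const key = JSON.stringify(args);
    const cached = memoizedKeyValues.find(item => item.args === key);

    if (cached) {
      // repeated invocation with the same arguments: return from cache
      console.log(`returning cached result for ${key}`);
      return cached.result;
    }

    // first invocation for these arguments: run the wrapped function
    const result = functionObject.apply(null, args);

    // cache result
    memoizedKeyValues.push({
      args: key,
      result: result
    });

    return result;
  };
};

const multiply = memoizeAny((p1, p2) => {
  console.log('multiply() invoked');
  return p1 * p2;
});

multiply(10, 20); // invokes the function and caches the result
multiply(10, 20); // repeated call, returned from cache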
Pure function
A pure function consistently returns the same value for a given set of input arguments. Pure functions do not have side effects which modify state outside their scope. Imagine the multiply function depending on a global variable; for the sake of an example, call it a multiplication factor. The memoization logic we saw earlier doesn't work anymore. Consider the following code in Listing 4:
var globalFactor = 10;
const multiply = memoizeAny((p1, p2) => p1 * p2 * globalFactor);
The value of globalFactor could be modified by a different function in the program. Let's say we memoized the result 2000 for the input arguments 10 and 20 (10 * 20 * 10). Say a different function then changes the value of globalFactor to 5. If the next invocation returns the value from cache, it is incorrect. Hence, it is important that the function is a pure function for memoization to work.
Note: As a work around, we may include globalFactor while generating the key. However, it will be
cumbersome (but not impossible) to make such logic generic.
Scope of memoizeAny()
Every invocation of memoizeAny() (in Listing 4) creates a separate instance of the returned function. The array used for the cache (variable memoizedKeyValues) is local to each instance of the function. Hence, we have separate function objects and cache objects for two different memoized functions. Consider the add() and multiply() functions, both memoized. If the same set of arguments is passed to add and multiply, they do not interfere.
In Listing 5, multiply(10,20) is cached separately from add(10,20). The result 200 for the former does not interfere with the result 30 for the latter.
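Listing 5 isn't reproduced in this extract; the idea can be sketched as follows, reusing the memoizeAny wrapper from above:

const multiply = memoizeAny((p1, p2) => p1 * p2);
const add = memoizeAny((p1, p2) => p1 + p2);

multiply(10, 20); // invokes the function and caches 200
add(10, 20);      // separate cache instance; caches 30
multiply(10, 20); // returned from multiply's cache
add(10, 20);      // returned from add's cache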
However, if we call memoizeAny() on multiply repeatedly, each call creates a separate instance as well, which might result in unexpected behavior. One way to solve this problem would be to create a factory function which creates and returns a singleton instance of a memoized function. We may compare the function object passed in as an argument to determine if a singleton memoized function is already available; if not, we wrap it with memoizeAny(). A sketch of such a factory follows.
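Such a factory is not part of the original listings; one possible sketch, keyed by the original function object, is shown below.

// returns the same memoized wrapper for the same function object
const memoizedInstances = new Map();

const getMemoized = (functionObject) => {
  if (!memoizedInstances.has(functionObject)) {
    memoizedInstances.set(functionObject, memoizeAny(functionObject));
  }
  return memoizedInstances.get(functionObject);
};

const rawMultiply = (p1, p2) => p1 * p2;
const m1 = getMemoized(rawMultiply);
const m2 = getMemoized(rawMultiply);
// m1 and m2 are the same instance and share the same cache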
Redux
Redux is an application state management framework. In this article, we will discuss the pattern and two
popular implementations.
In the context of memoization, consider Selectors (reselect in React). It is one piece of Redux
implementation. Let’s consider a sample Todo application for ease of understanding the use case.
Notice a component CreateTodo for creating todos, and another component TodoList for listing, filtering and marking the todos as complete. The components dispatch actions, which invoke pure functions called reducers, which return the application-level state.
Memoized selectors are used before returning the state to the component.
A typical Redux store maintains a large, centralized application state object. If busy components continuously select (retrieve) data from the Redux store, it can quickly become a bottleneck. Depending on the size of the application, retrieving data from the store could be a costly operation.
It is a good use case for memoization. For repeated retrieval of state, components use memoized selectors.
As long as state is not modified, arguments (input) for the selectors stay the same (see Figure 2 for input/
output to the selector).
Angular Implementation
Let’s begin by visiting an Angular implementation of selectors and memoization with NgRx. If you are
interested in React, jump to the next section.
Components make use of data returned by the NgRx selectors. Sometimes they receive fresh data from the NgRx store; at other times cached results are returned. This section demonstrates that behavior and explains how the optimization is achieved.
Selectors in NgRx are a practical use case for memoization. While the JavaScript section we saw earlier was theoretical, with basic examples like multiplication and addition, the following code sample is a realistic application of caching a function's results.
Consider the following Angular folder structure. Refer to Code Samples section at the end of the article for
links to the complete code sample.
In Figure 3, notice the CreateTodo component for creating todos. Alongside, notice a TodoList component
for listing, filtering and marking todos complete.
These selectors are invoked from the todo list component. If you look at Listing 7, the function
onShowAllClick() is invoked on toggling “Include completed tasks”. Depending on the toggle switch
status, it either uses the getAllTodos() selector or the getActiveTodos() selector.
onShowAllClick(){
  // Toggle show all switch
  this.isAllSelected = !this.isAllSelected;

  if(this.isAllSelected){
    this.todos$ = this.store
      .pipe(
        select(selectors.getAllTodos)
      );
  } else {
    this.todos$ = this.store
      .pipe(
        select(selectors.getActiveTodos),
      );
  }
}
Notice the console.log statements in Listing 6. They print a log every time the selector function actually runs and retrieves state from the application store. If the state, which is the input argument to the selector, doesn't change, the memoized result is returned instead.
Say the user creates a new todo: the state is updated, so the selector function runs again; after that, cached results are served until the state/argument changes again.
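Listing 6 is not reproduced here, but a minimal sketch of such NgRx selectors might look like the following. The state shape, property names and feature key are assumptions for illustration only.

import { createFeatureSelector, createSelector } from '@ngrx/store';

export interface Todo { id: number; title: string; isComplete: boolean; }
export interface TodoState { todos: Todo[]; }

const selectTodoState = createFeatureSelector<TodoState>('todoState'); // assumed feature key

export const getAllTodos = createSelector(selectTodoState, state => {
  console.log('getAllTodos: reading from the store'); // prints only when the projector actually runs
  return state.todos;
});

export const getActiveTodos = createSelector(getAllTodos, todos => {
  console.log('getActiveTodos: recomputing');
  return todos.filter(todo => !todo.isComplete);
});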
As we can see in Figure 4, toggling “include complete tasks” repeatedly uses cached results. It doesn’t
retrieve state from the NgRx store (Redux store). Using the memoized results avoids querying a large NgRx
store every time components request data.
Hence, using cached results is effective. The performance gains will vary based on application size, NgRx store size, and even the efficiency of the browser and the client machine.
Follow the link for a complete Angular - NgRx code sample, https://github.com/kvkirthy/todo-samples/tree/
master/memoization-demo
React Implementation
In this section, we cover another Redux implementation, this time with React, and describe how memoization works with Reselect.
Reselect is a selector library, primarily for Redux. The library and the selector pattern aim to simplify working with the Redux store.
Components consume application state. The state (or data) is also created by components, for example through forms, and is displayed and used in presentation logic. Selector functions calculate state as needed by the components. In addition to memoization, selectors can be composed: a selector can be used as an input to another selector.
A Redux application accesses its large application store frequently. Without memoization, any change to the state tree, whether or not it is in the area relevant to the component, leads to the Redux store being accessed, and expensive filtering and state calculations can occur.
Reselect's memoization optimizes this by recomputing only when the state being requested has been updated. Otherwise, cached results are returned.
Consider the following structure for a React project with Redux and Reselect. The sample demonstrates
memoization.
Todo.js - It lists all todos, allowing filtering in/out completed todos. In the code sample, CreateTodo is a child
component of the Todo component.
todoSlice.js - It creates a slice that encapsulates initial state of the todos, reducers and actions. In the
sample application, this file includes selectors as well.
We focus on memoization with reselect. To demonstrate the feature, consider the following Todo sample.
The application provides three functionalities:
a. Create a todo
b. Mark a todo complete
c. Filter todos based on their completed status.
Notice the buttons (on top of the screen) - Include completed todos and Exclude completed todos. Clicking on
the former shows all todos, even the completed items; and the latter excludes the completed items.
To create a memoized selector, use the createSelector API in the reselect module as shown in Listing 8.
Listing 8: Selectors
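A rough sketch of what such selectors might look like with createSelector is shown below; the state shape and property names here are assumptions, not the article's exact Listing 8.

import { createSelector } from 'reselect';

const getAllTodos = (state: any) => state.todos.todos;               // input selector: the todos slice
const showActiveTodos = (state: any) => state.todos.showActiveTodos; // input selector: the filter flag

export const getTodos = createSelector(
  [getAllTodos, showActiveTodos],
  (todos, activeOnly) => {
    console.log('getTodos recomputed'); // runs only when one of the input results changes
    return activeOnly ? todos.filter((t: any) => !t.isComplete) : todos;
  }
);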
The selector getTodos is memoized and composed using two other selector functions getAllTodos and
showActiveTodos. The selector function getAllTodos returns todos, which is only a part of the state tree.
Typically, each feature is represented by a separate object in the Redux store. The application may have
many other features and the application state is expected to include additional objects. The getTodos
selector focuses only on todos.
The getTodos selector has a condition to return either all todos, including completed items, or just the active items. When the selector is invoked the first time, it accesses the state tree and returns the data from the store. As the application state is expected to be a large object, repeatedly accessing the Redux store could be a costly operation.
When memoized, for repeated calls the store is not accessed until the arguments change. Look at Figure 5. If the user clicks either of the filter buttons ("Include completed todos" or "Exclude completed todos"), the cached (or memoized) result is returned. Let's add a console log to demonstrate the invocation of the selector function, as shown in Listing 9.
Notice Figure 6. During the initial page load, the selector function is invoked. The argument showActiveTodos defaults to false. I clicked on "Include completed todos", which invokes an action that sets the state value showActiveTodos to true. Notice that this state value is one of the arguments to the selector getTodos().
As the argument value has changed, the selector is invoked the next time. To demonstrate that the value is returned from the cache when the arguments don't change, I then clicked the "Include completed todos" button many times in a row. The console log does not print anymore; the selector does not access the state tree until its arguments change.
As described earlier, a memoized selector avoids recalculating state every time a change occurs in an unrelated section of a large Redux store. It optimizes a React-Redux application by running the selector only when the relevant part of the state tree (the part accessed by the selector) is updated. At other times, cached results are used.
Consider the following code snippet for Todo list component. Follow this link for a complete React, Redux
and Reselect sample - https://github.com/kvkirthy/todo-samples/tree/master/react-demo
Code Samples
Resources
Wikipedia article on Memoization, https://en.wikipedia.org/wiki/Memoization
Angular NgRx documentation, https://ngrx.io/docs
Getting Started with Redux, https://redux.js.org/introduction/getting-started
Reselect (React) GitHub page, https://github.com/reduxjs/reselect
Keerti Kotaru
Author
V Keerti Kotaru is an author and a blogger. He has authored two
books on Angular and Material Design. He was a Microsoft MVP
(2016 - 2019) and a frequent contributor to the developer community.
Subscribe to V Keerti Kotaru's thoughts at http://twitter.com/
keertikotaru. Checkout his past blogs, books and contributions at
http://kvkirthy.github.io/showcase.
Klaus Haller
The End of
Innovation outside
the Cloud?
Why PaaS will shape our future.
Only one or two of the articles I read per year are real eye-openers. My favorite one in 2019 was
Cusumano's "The Cloud as an Innovation Platform for Software Development" [Cus19].
I realized: The cloud unlocks a gigantic innovation potential for the business if used effectively by
developers.
Everyone has already been using SaaS for years: Google Docs, Salesforce, Hotmail etc. They are convenient
for users – and for the IT department. SaaS takes away the burden of installing, maintaining, and upgrading
software.
IaaS is thriving as well. Many IT departments have migrated some or all their servers to the cloud. They
rent computing and storage capacity from a cloud vendor in the cloud provider’s data center. Thus, IT
departments do not have to buy, install, and run servers anymore – or care about data center buildings and
electrical installations.
Companies and IT departments benefit from IaaS and SaaS in many ways. Costs shrink. There are no
big upfront investments anymore. They get immediate and unlimited scalability, high reliability, or any
combination of that. This is convenient for developers, engineers, and IT managers.
However, PaaS – platform as a service – and not IaaS and SaaS are the drivers for business innovations.
IaaS and SaaS revolutionize the IT service delivery. You save time because you do not have to wait for
hardware. You save money because you do not have to install software updates.
If your CIO is a fast mover, he or she has some competitive advantages for one, two, or three years before all
your competitors are on the same cost level. However, IaaS and SaaS do not enable developers to build revolutionary new features, services, and products for their customers.
With PaaS, software developers can assemble and run applications and application landscapes, using fundamental services such as databases, data pipes, or tools for development, integration, and deployment.
[Cus19] M. Cusumano: The Cloud as an Innovation Platform for Software Development, Communications of the ACM, Oct 2019
The real game changer is the massive investment of big cloud service providers into Artificial Intelligence
(AI) services - from computer vision to text extraction and into making such AI technology usable by
developers without an AI background.
Furthermore, they have exotic, obscure offerings such as ground stations for satellite data or services for building augmented reality applications.
Sadly, I do not see how I can get involved in a project using satellite data and augmented reality at the same time (or at least one of these technologies). However, this broad variety of niche features enables developers to build innovative products and services for their customers that were unthinkable a few years ago. The cloud providers foster this even further by allowing 3rd party providers to make their software available, hoping for a network effect - as seen earlier with the App Store or the Play Store.
Certainly, 3rd party services require a second look. Many are easy to integrate and do not need maintenance afterwards – and then there is what I call Potemkin integration: a façade on the marketplace with easy installation (e.g., using images). Afterwards, however, you have the same amount of work as for any on-premise open source or standard software installation. You need a dedicated operations team and a detailed technical understanding of how things work. Obviously, this is less of an issue for services developed and offered by the cloud vendors themselves.
The beauty of the (niche) cloud services and PaaS offerings is that developers and companies can integrate
the newest research and technology in their products without needing research labs or dozens of Ph.D.s in
A.I. and computer vision.
For example, using AWS's Rekognition service, developers need about as much time for writing a printf command as for programming a service call to determine whether a person smiles in a picture and whether there are cars in the same image.
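As a rough illustration (not taken from the article), such a call with the AWS SDK for JavaScript v3 might look as follows; the bucket and object names are placeholders.

import {
  RekognitionClient,
  DetectFacesCommand,
  DetectLabelsCommand,
} from '@aws-sdk/client-rekognition';

const client = new RekognitionClient({ region: 'us-east-1' });
const image = { S3Object: { Bucket: 'my-photo-bucket', Name: 'photo.jpg' } }; // placeholder image reference

async function describePhoto() {
  // Does anyone in the photo smile?
  const faces = await client.send(new DetectFacesCommand({ Image: image, Attributes: ['ALL'] }));
  const smiling = (faces.FaceDetails ?? []).some(face => face.Smile?.Value);

  // Are there cars (or other recognizable objects) in the same image?
  const labels = await client.send(new DetectLabelsCommand({ Image: image, MaxLabels: 10 }));
  const hasCar = (labels.Labels ?? []).some(label => label.Name === 'Car');

  console.log({ smiling, hasCar });
}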
Assume your company wants to build a revolutionary app for photographers. The app identifies whether there are celebrities in a picture, generates an automatic description, and pushes these pictures to readers interested in a specific celebrity.
2. Integration with photo feeds and for push messages to the readers
3. AI solution to detect celebrities, how their mood is, and other objects on the photos
4. Annotation feature that generates a one sentence description to the photo based on what the AI
detected (e.g., A stressed Prince Harry and a stoic Queen with a dog)
IaaS can speed up the time you need to get a virtual machine. That is nice at the beginning of the project,
but not a game changer for development.
SaaS also does not help much. If there is a suitable solution in the market, there is no need for your project: nobody invests money to get to a level where everyone else in the industry already is.
PaaS, however, has a big influence on the development project. It solves work package #3 for you.
You do not have to train neural networks for object and person detection. In particular, you do not have to collect and annotate thousands of pictures of celebrities for your training data set. You just use AWS Rekognition. You can focus on developing your real innovation (work package #4), including the overall idea of the app (work packages #1 and #2). PaaS does not always provide such perfect services as in this illustrative example, but the more AI your application contains, the more development time PaaS saves you.
Editorial Note: Similar to AWS Rekognition, Microsoft provides Cognitive Services, with which you can extract information from images to categorize visual data, process it, detect and analyze faces, and recognize emotions in photos. There's also a new ML.NET offering that can be explored in this tutorial.
PaaS Strategy
Certainly, when looking at the cloud vendors’ strategies, IT departments should not be too naïve. They need
a basic PaaS strategy to avoid bad surprises.
First, cloud services are not free. Business cases are still needed, especially if you are in a low-margin
market and need to call many expensive cloud services.
Second, the market power changes. Some companies pressed their smaller IT suppliers for discounts every
year. Cloud vendors play in a different league.
Third, using specific niche services – the exotic ones which really help you design unique products and services to beat your competitors – results in a cloud vendor lock-in.
The cloud-vendor lock-in for platform as a service cannot be avoided. However, a simple architectural
measure reduces potential negative impacts: separating solutions and components based on their expected
lifetime.
“Boring” backend components often run for decades. It is important that they can be moved to a different
cloud vendor with limited effort. In contrast, companies develop new mobile and web-front-ends every
2-3 years – plus every time the head of marketing changes. Highly innovative services and products are
reworked and extended frequently.
In the area of shorter-lived components, applications, and solutions, the vendor lock-in is of no concern. You change the technology platform every few years anyway. Thus, you just switch the cloud vendor next time if someone else offers more innovative solutions. The lock-in becomes even less of a threat if you consider the alternative: being attacked by more innovative competitors using the power of PaaS.
Conclusion
My rule of thumb is: you might be able to survive missing out on the cost savings the cloud can bring with IaaS and SaaS, but your business will be hit hard if the IT department cannot co-innovate with the business, delivering innovations using PaaS with its many ready-to-use and highly innovative services.
Klaus Haller
Author
Klaus Haller is a Senior IT Project Manager with in-depth business
analysis, solution architecture, and consulting know-how. His experience
covers Data Management, Analytics & AI, Information Security and
Compliance, and Test Management. He enjoys applying his analytical skills
and technical creativity to deliver solutions for complex projects with high
levels of uncertainty. Typically, he manages projects consisting of 5-10
engineers. You can connect with him on LinkedIn.
Machine Learning
Benjamin Jakobus
Machine Learning
for Everybody
"The unreal is more powerful than the real, because nothing is as perfect as you can
imagine it. because it’s only intangible ideas, concepts, beliefs, fantasies that last. Stone
crumbles. Wood rots. People, well, they die. But things as fragile as a thought, a dream,
a legend, they can go on and on.”
- Chuck Palahniuk
No single technology has so affected the modern world as the digital computer. No invention conjured such
endless possibilities.
If you go to the buzzing business district of Rio de Janeiro, the crowded terminals at Heathrow or the
immense port at Rotterdam, you will have a vague idea of the scale to which computers influence our
world. Cars, planes, trains, subway systems, border controls, banks, stock markets and even the records of our
very existence, are controlled by computers.
If you were born after 1990, then chances are that most things you ever experienced, used or owned are a
mere fabrication of computational omnipotence.
The production, extraction, and distribution of oil, gas, coal, spices, sugar or wheat all depend, in one way or
the other, on computational power. The silicon chip migrated into nearly every home, monitoring our time,
entertaining our children, recording our lives, storing personal information, allowing us to keep in touch
with loved ones, monitoring our homes and alerting the police in the event of unwanted intruders.
And recently, computers have begun taking over the most fundamental tasks of our brains: pattern
recognition, learning and decision making. Theories, techniques and concepts from the field of artificial
intelligence have resulted in scientists and engineers building ever more complex and “smarter” systems,
that, at times, outperform even the brightest humans.
In recent times, one subfield of artificial intelligence in particular has contributed to these massive
advances: machine learning.
"It's ridiculous to live 100 years and only be able to remember 30 million bytes. You know, less than a compact disc. The human condition is really becoming more obsolete every minute."
- Marvin Minsky
Each subfield of artificial intelligence is concerned with representing the world in a certain way, and
then solving the problem using the techniques that this model supports. Some subfields, for example,
might model problems in graph form so that the computer can easily “search” for a solution by finding the
shortest path from one point in a graph to another.
Other techniques might represent problems, such as solving a sudoku puzzle, using a matrix of numbers
and imposing restrictions or "constraints" on some points in that matrix which the computer cannot violate
(this subfield of AI is known as “constraint programming”).
A solution to a problem expressed in such a way would be a combination of numbers so that none of these
numbers violate the given constraints. Similarly, the field of machine learning looks at the world through its
special lens: that of using large amounts of data to solve problems by means of classification.
That is, machine learning is all about getting computers to learn from historical data, and classifying new,
unseen data, by using the “knowledge” gathered from having looked at this past data. Whilst this may sound
complex, the essence of it is achieved by using techniques borrowed from statistics to write programs that
are good at pattern recognition. This, in turn, is the fundamental philosophy behind machine learning, one
eloquently expressed by the fictional character Maximillian Cohen in the cult classic “Pi” when he describes
his three assumptions about the world:
Although, from a philosophical point of view, looking at the world in terms of numbers and patterns may
seem rather cold-hearted, this way of thinking works well for solving the types of problems for which
i) large amounts of data are available, ii) the problem itself is best understood through patterns, and iii) the
information needed to solve a new problem in the domain constantly varies.
The first requirement - that of data - is pretty much self-explanatory. Given that machine learning is
concerned with pattern recognition, then the more data we have, the easier it becomes to identify certain
patterns in the data.
Take, for example, the problem of object recognition in images, or more specifically, the problem of differentiating cats from dogs in digital images. The problem itself is well suited to pattern recognition.
If we only ever had a single image of each animal - one of a dog, and one of a cat - then recognizing patterns between the two would be impossible. If, however, we had a thousand such images, the similarities and differences across these images would become apparent, allowing them to be categorized and labelled.
Whilst the need for data is easy to understand, the second requirement, that "solving problems using pattern recognition requires that the problem itself is best understood through patterns", may seem silly at first. Nevertheless, it is an important one!
Unlike what the fictional movie character Maximillian Cohen would like us to think, not everything in the world is best understood through patterns.
Take, for example, the problem of multiplying two numbers. Multiplication follows a predefined set of mathematical rules. Creating software that calculates the result based on user input will be easier if we simply write down the rules for multiplication (for example, 2 x 2) in computer code, instead of collecting thousands of multiplications and building up a statistical model to try to find patterns across these formulas.
Similarly, the problem of finding the shortest path from one point on a map to another might be better
suited by modelling the map as a graph, instead of collecting the paths that thousands of citizens take to
work every day, and trying to find some sort of pattern for each of the different points in the graph.
The third requirement - that is, the variation of information - refers to the inconsistencies in the input data
when presenting the machine with a new problem to solve.
Returning to our example of distinguishing cats from dogs, it is easy to see what is meant by this statement:
every cat and every dog is slightly different.
Although the general characteristics of each animal are the same, minor variations exist in the body shape, colour and size of each animal. Therefore, any new image that we ask the machine to classify will differ slightly from the previous images that it has seen. The information needed to solve a new problem therefore constantly varies.
This stands in stark contrast to the problem of finding the shortest path between two points on a map:
whilst people might try to find the shortest paths between different points on the map, neither the points
nor the paths themselves are likely to change (only in exceptional circumstances, such as when a new road
or new building is being constructed). In this case, the information provided by the user to the machine will
never change drastically. The user might ask to find the path between A and B, or B and C, or A and C, but the
points themselves (A, B and C) will never change.
We, therefore, see that machine learning, far from being a magic pill, is a powerful problem-solving
technique that works well for certain types of problems. That is, problems that are defined by strong
variations in input, patterns and lots of data. What these problems look like, and exactly the different
techniques that machine learning relies on to solve these problems, is the sole purpose of this tutorial
series.
Your knowledge of the field will be built from the ground up, using practical examples and easy to
understand explanations - without mathematics or complex technical terms.
We will start our journey by first examining what computer science and artificial intelligence are, and what
they try to achieve. Then we will move on to focus specifically on machine learning, exploring the different
concepts, how they work, and what their advantages and limitations are. By the end of this series, you will
have a solid understanding of the topic itself, will be able to understand advanced technical concepts and
differentiate fact from fiction, marketing hype from reality.
Before we can start discussing the various concepts and techniques used in machine learning, we must
understand what artificial intelligence actually is. And before we can understand artificial intelligence, we
must first understand how computers work and what computer science actually is. We therefore begin this
series about Machine Learning by introducing Computer Science.
Figure 2: Computer Science mind map (Image Credit: Harrio)
The advent of Computer Science
It is often difficult to comprehend that the computer is just a big calculator, a progression of the dry
sciences of electronics and mathematics, performing addition, subtraction, multiplication and division.
While pure mathematics and computer science have diverged over the past few decades, mathematics plays a fundamental role in the development of computational systems. It was really mathematics that paved the way for our digital evolution, allowing us to represent our world in an abstract and logical fashion.
To truly appreciate just how profound a difference mathematics made to our existence, we need to look
back 12,000 years to the first prehistoric settlements along the banks of the River Jordan.
Among the round houses and plastered storage pits, we find the origins of mathematical and scientific
thought. Prior to their settling, hunter-gatherer communities led a very monolithic existence. Life was
relatively simple and therefore little need arose to introduce abstract thought that could simplify problems.
Anthropologists suggest that hunter-gatherer communities maintained a limited mathematical ability, with a counting vocabulary that did not go beyond the number "ten". Archaeological discoveries indicate that such limited knowledge was used by cavemen as early as 70,000 years ago for time measurement or quantifying belongings.
As communities settled, their needs evolved, as did their problems. With an increasingly sophisticated
lifestyle, and therefore respectively sophisticated volumes of information, people were in need of a way
to reduce chunks of information to essential characteristics. We call this system of summarizing the main
points about something without creating an instance of it, abstract thought.
In addition, people began using symbols or sounds to represent these characteristics, without having to
consider the characteristic itself. This realization was the birth of mathematics. The idea of using one “thing”
to represent many “things”, without considering any “thing” in specific.
Figure 3: The Ishango bone, a tally stick, was used to construct a number system. This bone is dated to the Upper Paleolithic era,
around 18000 to 20000 BC (Source: Wikipedia)
Take the number “eight” as an example. “Eight” or ”8” is merely a symbol or sound. It is not a concrete thing
or object in the physical world. Instead it is an abstract thought that we can use to represent an amount of
“things” - be it trees, animals, cars or planes.
Mathematics became a projection of a tribe’s sophistication and a magnification of their own thoughts and
capabilities. As the first civilizations developed, they used such skills to solve many practical problems such
as the measuring of areas of land, calculating the annual flooding of rivers or accounting for government
income. The Babylonians were such a civilization and their work on mathematics is used to this day.
Clay tabs found by archaeologists, in what is modern-day Iraq, show squares, cubes, and even quadratic
equations.
Figure 4: The Babylonian mathematical tablet Plimpton 322, dated to 1800 BC (Source: Wikipedia)
To deal with complex arithmetic, the Babylonians developed primitive machines to help them with their
calculations: the counting board. This great-grandfather of the modern computer consisted of a wooden
board into which grooves were cut. These grooves allowed stones to be moved along them, representing
different numbers. Such counting boards proved to be building blocks for modern mathematics and were
arguably a considerable step towards academic sophistication.
With the eventual decline of the Mesopotamian civilization, much of the old wisdom was abandoned and
forgotten, and it wasn’t until 500-600 BC that mathematics began to thrive again, this time in Greece.
Unlike the Babylonians, the Greeks had little interest in preserving their ancestors' wills. Their trade and
battles with foreign cultures brought about swift change, as the Greek noblemen embraced the unknown.
Frequent contact with foreign tribes opened their eyes and they soon came to appreciate new knowledge
and wisdom. As they built the first Universities, the history of the world began to change at a great speed.
Many of the world’s most brilliant minds began to gather at these centers of knowledge, exchanging
theories and schooling the young.
One of these disciples was Aristotle, a student of Plato’s. He was to become the teacher of Alexander the
Great and profoundly shaped Western thought by publishing many works on mathematics, physics and
philosophy. One of his greatest feats was the incorporation of logic which, as we will see later on, led to
the development of the first digital computer. The notion of logic will be discussed in one of the upcoming
tutorials in this series; however, the basic idea is that, given two statements, one can draw a conclusion
based on these statements. For example:
1. All Greeks are human
2. Aristotle is Greek
Therefore, Aristotle is human.
With an appreciation of the power of abstract thought and boolean logic, we now know that, given a
number, we can manipulate it to illustrate anything: words, thoughts, sounds, images and even self-
contained intelligence.
What we do not yet know is how to overcome the barrier between the metaphysical number and merge it
with the physical world around us. That is, how can we use wires and electrical currents to bring complex
mathematical equations to life?
As computers consist of switches (imagine those switches to be similar to the light switch in your room
- the switch can either be on or off - in other words, it can either allow the flow of electricity or prevent
it from flowing), the key lies in representing mathematics in the form of electric flows. That is, reducing
numbers to two states; on or off. Or, as Aristotle would have put it; true or false.
As our numbering system consists of 10 digits (0, 1, 2, 3, 4, 5, 6, 7, 8, 9 - in mathematical terms: base 10), we
need a notation that allows us to represent a sequence of numbers using only two digits. The fabrication
of such a scheme is credited to Gottfried Leibniz, a German 17th century mathematician, who, while constructing a mechanical calculator, realized that using only two digits instead of 10 not only simplifies the construction of his machine but also requires fewer parts. The concept behind his idea was that any decimal number can be represented as a pattern of 1's and 0's. For example, the decimal number 42 can be converted to read 101010 (we will examine the mathematics behind these conversions in the next section). The number 10011110 can be converted to read 158, and so on. This numbering system of base two (1's and 0's) is referred to as binary.
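If you want to check these conversions yourself, two lines in a browser console will do it (this snippet is an illustration, not part of the original text):

console.log((42).toString(2));        // "101010"  - decimal 42 written in binary
console.log(parseInt('10011110', 2)); // 158       - binary 10011110 read as a decimal number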
Leibniz, in addition to being a mathematician and multilingual philosopher, was also among the first
Europeans to study Chinese civilization. It was this passion for Chinese society that caused Leibniz to
realize that the Chinese had preceded him. Having availed of the binary code for centuries, the Chinese
used a different notation: broken and unbroken lines instead of Leibniz’s 0 and 1. Staggered by this
discovery, Leibniz outlined his newly found knowledge in ”Explication de l’ arithmetique binaire” in which
he stated that ”this arithmetic by 0 and 1 happens to contain the secret of the lines of an ancient Chinese
king and philosopher named Fohy (Fu Xi), who is believed to have lived more than 4,000 years ago, and
whom the Chinese regard as the founder of their empire and their sciences”.
What Leibniz was ultimately referring to in his writings was the yin-yang - a complex system based on
the principles of what we call binary arithmetic; a two-symbol representation of the world around us.
The broken (yin) and unbroken (yang) lines represent opposites: male and female, day and night, fire and
water, negatives and positives. Coherently, the principle of the yin-yang is the building block of Chinese
civilization and is at the heart of its society. Despite its significance, Leibniz’s (re)discovery was soon to be
forgotten. It wasn’t until the 19th century that the logic upon which we have become so reliant was once
again revived and elaborated upon with the emergence of George Boole’s boolean algebra.
The astute reader will have noticed a similarity between the ”on” and ”off” of the switch and the 1’s and 0’s
in boolean logic; if we combine boolean logic and the idea of the switch, then we can see that any decimal
numerical combination can be stored and processed in patterns of 1’s and 0’s using silicon, some switches
and transistors. If we then had a "controlling chip" that could perform simple arithmetic on these binary numbers, we could represent any type of mathematical formula, function or logical operation.
If we can represent mathematics in the form of electric flows, then we can create entire models of actions
and representations of different states. This, in essence, is how the digital computer works: translating man-
made mathematical models that represent the real world into electric signals using boolean algebra.
Now that we have a rough idea as to how computers work, it’s time to answer a fundamental question:
What is Computer Science? The Oxford Dictionary defines computer science as “the study of the principles
and use of computers.” But what exactly does that mean? And what can one expect to do when signing up to
study computer science?
Most people think that computer science is all about learning how to program. Whilst programming
certainly is an integral part of computer science, the field itself is about much, much more than just that.
Instead of just teaching you how to write computer code, computer science lies at the crossroads between mathematics and engineering, and it draws upon both fields to teach you how to analyse and solve problems using technology. It teaches you how to break down complex problems and how to describe solutions in a formal way, so that a solution's correctness can be validated and proven, and in a way that a computer can understand and execute these solutions. To allow you to do
this, Computer Science will give you a fundamental understanding of how information technology works,
from bottom to top.
And as a result of this, you will be learning how to use certain technologies - although learning how to use
a technology is not the focal point of computer science.
That is, students of computer science will learn how to solve complex problems in such a way that they
can be solved by a computer. To do so, the problem itself is first abstracted, and a general solution to this
abstracted problem is created. This solution is then implemented using a specific technology.
We previously talked about what Computer Science is, and discovered that computer science is, in essence, concerned with solving complex problems using technology.
But, if you were a computer scientist or software engineer, how exactly would you go about doing that?
In essence, there really are only 4 things that computer scientists do:
1. When you encounter a problem, the first thing you do is analyse the problem and see whether you can decompose it into smaller problems that are easier to solve. That is, a computer scientist would spend a large amount of time thinking about how to break a problem down into smaller problems, and how to determine just how difficult a problem is to solve. Abstraction forms an important part of this process (in their book "Artificial Intelligence: A Modern Approach", Stuart Russell and Peter Norvig give a concise definition of abstraction as the "process of removing detail from a representation").
2. Once computer scientists have thought about, and analysed, a problem, they can begin to develop solutions to it. They will need to describe each solution formally. That is, they typically write down an abstract set of instructions (called algorithms) that solve the said problem. Such instructions must be universally understandable, concise and free of ambiguity. Above all, these solutions must be free of implementation details.
3. Once they have come up with a solution, computer scientists must prove the correctness of their
solutions to all the other computer scientists out there. That is, they must demonstrate that their
solution correctly solves the given problem, and any other instances of the problem. For example,
consider the problem of sorting an unordered list of numbers: 3, 1, 2. Your solution must correctly
sort this list of numbers in ascending order: 1, 2, 3. However, not only must you demonstrate that your
solution is capable of sorting the specific numbers 3, 1, and 2; you must also demonstrate that your
solution can sort any other unordered sequence of numbers. For example, 192,384, 928, 48, 3, 1,294,857.
4. Last but not least, computer scientists must implement the given solution. This is the part where programming comes into play, and where you really get your hands dirty. As such, when people think about computer science, this is the part that they tend to think about. Programming is all about learning how to write instructions that a computer can understand and execute, and is a huge field in itself. (A brief sketch of such an implementation, for the sorting example in point 3, follows this list.)
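As an illustrative sketch only (this series itself does not require you to program), here is what step 4 might look like for the sorting example in step 3: a simple insertion sort that orders any list of numbers, not just 3, 1 and 2.

function sortAscending(numbers: number[]): number[] {
  const result = [...numbers]; // work on a copy so the input stays untouched
  for (let i = 1; i < result.length; i++) {
    const current = result[i];
    let j = i - 1;
    while (j >= 0 && result[j] > current) { // shift larger values one slot to the right
      result[j + 1] = result[j];
      j--;
    }
    result[j + 1] = current; // drop the current value into its correct slot
  }
  return result;
}

console.log(sortAscending([3, 1, 2]));                     // [1, 2, 3]
console.log(sortAscending([192384, 928, 48, 3, 1294857])); // works for any unordered sequence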
In order to become competent at analysing problems and developing solutions to these problems, you
will need to understand the fundamental principles behind information and computer systems. As such, a
computer science course will typically cover topics such as:
1. Logic and boolean algebra. These are huge, fundamental aspects of computer science. We already touched upon these in earlier sections of this book. Understanding boolean algebra will help you understand how silicon chips, combined with some electrical wiring, can work together to perform incredibly complex tasks such as connecting to a massive network of other computers and playing YouTube videos.
2. Theoretical foundations behind information, computer architectures, algorithms and data structures.
The latter are, in essence, constructs that hold and organize data and give rise to efficient ways of processing, searching and retrieving data. A very simple type of data structure which every reader should be familiar with is the list, which represents data, such as numbers, in a linear sequence (for example: 1, 2, 3). There exist, however, many more complex data structures, such as trees, which organize data not in a sequence but in the form of nodes, branches and leaves (imagine this just like a Christmas tree).
3. Theories behind data organisation, storage and the processing of different types of data. These theories
give rise to things like databases (which are pieces of software that focus on storing data in the most
effective form so that the data can be recovered quickly from disk) and multimedia (which is concerned
with things like image file formats and recording, storing and reproducing sound and video).
4. Networking and communication. This, in essence, is the study of “how computers talk to each other”.
5. Theories governing secure systems. That is, how to prevent attacks or unauthorized access on computer
systems, how to communicate securely and how to encode data in a secure fashion (cryptography).
6. Programming and software engineering. This teaches you how to actually turn abstract solutions to
problems (i.e. algorithms) into actual programs that can be executed by a computer. It is also concerned
with how to best design and implement said programs, as well as with the development of new
programming languages.
7. Advanced, domain-specific topics such as how to create programs that learn by themselves or exhibit
intelligent behaviour, a subfield of which (machine learning) is the focal point of this book.
A Sample Problem
So far, we have seen:
i) how computers work (electric switches that can perform logical operations using boolean algebra),
ii) what computer science is (the study of translating real world problems and solutions into
abstractions that can be solved using boolean algebra), and
iii) what computer scientists do (analyse, solve and translate problems and solutions).
Let’s now look at how we would actually go about studying a real-world problem, simplifying and
abstracting it, producing an algorithm to solve it. We won’t dive into writing an actual
implementation of our algorithm - after all, learning to program is far beyond the scope of this series!
Furthermore, the process and fundamentals outlined in this section might seem complex at first and are
designed to give you an appreciation and understanding of the entire field of computer science. A thorough
understanding of the problem itself won’t be necessary to understand the future articles of this series, but
will of course help you appreciate the complexity behind artificial intelligence and machine learning in
general.
The sample problem that we will be focusing on in this section is that of finding the shortest route
between two points on a map.
It is a problem that has been well-studied in computer science and is one that we have all faced in real life. Its solution transformed logistics and travel worldwide, and has been applied to a wide range of other problems.
The first thing we should do is find a way to abstract the problem. That is, we should forget everything
about physical maps, buildings, cars and the city. Instead, we should think of a way in which we can
represent the map itself in the simplest form possible, capturing its essence and removing everything else
that is not needed. One way to do this is to represent the map as a graph.
We have all seen a graph at one point or another in life, and we know that it basically is just an abstract
diagram. The nodes in the graph represent the different buildings on our map, whilst the edges (lines
connecting the nodes) represent the streets between these different buildings. For the sake of simplicity, we
will ignore the length of streets or the amount of traffic on them and just consider the number of points (or
nodes) that a driver would need to traverse in order to reach his or her destination.
Using this graph representation, we have simplified or abstracted our map, since we now:
1. Removed the unnecessary pieces of information, such as colours, gradients, trees and shapes of the
streets.
2. Maintained only the essence - that is, the locations and paths between the different locations - of the
map.
3. Represented the map in such a way that we can reason about it more easily and represent it digitally (as we will see shortly).
Now that we have represented our problem in a more abstract form, we must think about solving it. How
exactly would we determine the shortest path from one node to another?
Well, to provide an answer to this question, we must first look at what exactly the shortest path is. Given
our representation of the world (or map) in graph form, the definition of the shortest path between two
points is simply the path on the graph that traverses the least number of nodes. Looking at figure [TODO],
we see that path B is shorter than path A, since path B involves only travelling across 2 nodes, whilst path A
involves travelling across 3 nodes.
Starting at a given node M, we therefore know the shortest path to T to be the path that passes through R
and S. This is easy to figure out by just looking at the graph and then counting the
number of nodes in our path. But how would we write a formal set of instructions (an algorithm) to describe a
solution?
Well, let's think about simplifying the problem a bit: by definition, starting at node M, the shortest path to node T would be the shortest path to node T from either node N or node R. Likewise, from node R, the shortest path to node T would be the shortest path from node S to node T; from node N, the shortest path to node T would be the shortest path from one of N's neighbours to node T.
We could therefore write a piece of code that calculates the length of the path between two nodes. We
would then invoke this piece of code for all the immediate neighbours of our starting node, and then
choose the smallest number (i.e. shortest path length) returned.
Since we need the actual path, and not just the length of the path, we would somehow keep track of the
nodes with the minimum path length, and then use the node with the minimum path length as our new
starting point.
Let’s call the piece of code that calculates the length of the path from one node to another pathLength(a,
b). Don’t worry about how this length is being calculated for now. Just assume that some dark magic returns
the length of the path, if we want to get from node a to node b, where node a and b can vary to refer to any
node that we like.
When starting off at node M, we therefore use the pathLength code twice: once to get the length from R to T, and once to get the length from N to T:
pathLength(R, T) = 1
pathLength(N, T) = 2
Since 1 is less than 2, we know that the shortest path from M to T will involve us travelling through node R.
We therefore record R by adding it to our list of nodes that we use to keep track of our shortest path (let’s
call this list path).
Our new starting point is now R, so we will need to execute pathLength for all of R’s immediate
neighbours: U and S.
Since S is directly connected to T (and hence passes through no intermediate node), the length of the path
from S to T is 0. The length of the path from U to T is 1, since we need to pass through P to arrive at T.
We therefore choose S and add it to our path list. Our path list now contains the nodes R and S, and hence
our shortest path from M to T, which passes through R and S. And that’s it! We have discovered a simple
way of calculating the shortest path from any node in a network or graph to any other. Don’t worry if this
seems a bit complex. You won't need to memorize or fully understand every detail of this solution in order to
understand future chapters.
We now want to put this solution down on paper in a way that it is easy to understand and can later be
converted into code by a programmer. Let’s do this now:
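The original listing is not reproduced in this text, so the following is a sketch of what such an algorithm might look like, written as TypeScript-flavoured pseudocode. It assumes the two helpers described in this section, neighbours(node) and pathLength(a, b), exist elsewhere.

declare function neighbours(node: string): string[];       // assumed helper: nodes directly connected to a node
declare function pathLength(a: string, b: string): number; // assumed helper: the "dark magic" length calculation

function shortestPath(from: string, to: string): string[] {
  let minPathLength = Infinity; // smallest length seen so far
  let bestNode = '';            // neighbour that produced that length

  for (const n of neighbours(from)) {
    if (n === to) {
      return [to]; // the destination is a direct neighbour: the path is just the destination itself
    }
    const length = pathLength(n, to);
    if (length < minPathLength) {
      minPathLength = length;
      bestNode = n;
    }
  }

  // Record the best neighbour and continue the search from there.
  return [bestNode].concat(shortestPath(bestNode, to));
}

// For the graph described in this section, shortestPath('M', 'T') would return ['R', 'S', 'T'].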
Does this look overwhelmingly complex? Don’t worry, let’s break things down. shortestPath is the name
of the piece of code that will calculate the shortest path from one node in the graph, to the other. It will
return the actual shortest path in the form of a list. For example, shortestPath(M, T) will return [R, S, T].
The first two lines in the shortestPath algorithm simply say that the value for minPathLength is set to infinity (or just some really, really large value), and that bestNode is empty (or nothing). Next, we go through all of the neighbours of our starting node. If a given neighbour is our destination node (to), then we terminate, returning a list containing only the destination node. Otherwise, we perform our path length calculation by executing the pathLength code, which returns the length from n to our destination node.
Once we know the length, we check it against the minimum length known to date: if it is less, then we set minPathLength to the new minimum length, and bestNode to the node that produced the new minimum length.
We then terminate by merging the list containing our bestNode with the list returned by a new invocation of shortestPath from bestNode to the destination node.
If this explanation and the algorithm itself are confusing, try to spend some time thinking about them. Don't worry if you didn't fully understand it: as long as you understand the overall idea behind:
1) abstracting a problem
2) formalizing a solution to an abstract problem
...then you understand the very essence of computer science, and are well equipped to move onto the
next chapter. To the reader more familiar with abstract reasoning: We greatly simplified the problem and
solution to the shortest path problem in this section. Many different, more complex and more efficient
solutions to this problem exist. For example, we could use the same approach (modelling the map as a graph
and traversing the nodes and edges of this graph to find the shortest path) to also factor in things like
traffic or the length of the roads by applying weights to the different edges, and then finding the path with
the minimum sum. But this is far beyond the scope of this article.
Conclusion
In this article, we explained the very essence of computer science. We first bridged the gap between real
world problems and silicon chips, giving an indication as to how, using logic and abstract thought, we can
model real problems and have them solved by a computer. The take home message here is that we can
use abstract thought to summarize or represent our world in such a way that we can reason about it and
produce general solutions to concrete problems. That is, using abstract thought, logic and mathematics, we
can take a problem and formulate a solution to the problem in such a way that the solution can be applied
to other instances of the same problem.
Stay tuned!
Benjamin Jakobus
Author
Benjamin Jakobus is a senior software engineer based in Rio de
Janeiro. He graduated with a BSc in Computer Science from University
College Cork and obtained an MSc in Advanced Computing from
Imperial College, London. For over 10 years he has worked on a wide
range of products across Europe, the United States and Brazil. You
can connect with him on LinkedIn
AZURE DEVOPS
Subodh Sohoni
Using
AZURE DEVOPS
for Product Development
(Multiple Teams)
In this article, I am going to drill deep into the Azure Boards service, an Azure DevOps service, to unearth its features for portfolio management.
The portfolio we are considering here is the development of a product that is so large that it cannot be done by a single team in a viable length of time. It has to be done by multiple teams as a joint effort.
Azure Boards provides Teams, Areas, Shared backlogs and Team backlogs features for doing portfolio
management. In this article, we will explore why these tools are necessary and how to use them.
Scrum sets a limit on the size of a team to a maximum of 9 team members.
For a large product, it will take an inordinate length of time for a team of up to 9 members to develop it. Such a timeframe for delivery is not acceptable to the customers and the management. It is only natural, then, to give the task of product development to two or more teams.
When these multiple teams are working to develop the same product, there are some rules that those
teams will have to follow to remain Scrum teams. Each team has to work within the framework of Scrum.
Keeping that in mind, let’s list the rules, constraints and artifacts that are to be created.
1. Product Backlog – Multiple teams are going to develop a Single product. Hence all product
requirements, regardless of which team will fulfill those, will be put into one product backlog. That
product backlog will be shared by all teams.
When we want to view the product backlog, the entire list of ordered set of Product Backlog Items
(PBIs) should be visible.
2. Delivery Cadence – Usually, customers are not interested in the delivery of a feature that depends upon another feature that may not be available yet.
Since the customers are interested in the delivery of an entire Product Increment, the delivery cadence of all the teams should be the same. That can be at the end of each sprint, but is not always so. The delivery of an increment of the product should be planned at a time that is convenient to all the teams and the customers. We will call it a Release.
All the teams should plan to release an increment in a manner that is synchronized with each other. Within a release spanning multiple sprints, it is possible that each team has different sprint durations, start dates and end dates, as long as they follow the limits set by the release. See Figure 1.
From this point of view, it is suggested to have synchronized sprints for all the teams, as shown in Figure 2.
4. Sprint Backlog – Each team will have its own sprint backlog. As a default, the team should be able to
view and focus on their own sprint backlog.
5. Transparency about Dependencies – It is natural that if multiple teams are developing a feature, there will be dependencies between the Product Backlog Items (PBIs) that are taken up by different teams in their respective sprint backlogs. It should be possible to always view the status of dependencies across the teams, in one view.
Having defined these shared rules and constraints, we will now disambiguate some of the terms related to a portfolio.
Product Backlog Item (PBI) – A PBI is any abstract entity in the context of the product on which the team will spend effort. It may be a User Story or a Non-Functional Requirement which needs to be fulfilled. It may be a Bug that needs to be fixed. Every PBI is expected to be scoped to at most one sprint and one team. If its implementation is started in a sprint, it should be completed in that same sprint.
Feature – Features are sets of User Stories or PBIs which go together. They are required to be implemented as a unit to give a coherent experience to the user. Features are created top-down and then refined in a bottom-up approach.
Once the overall direction of product development is finalized, the product is split into the user experiences that are required for the product to work. These user experiences, which we call features, are large enough to be implemented over multiple sprints.
Each feature is then split into multiple PBIs to be implemented, and each of these is added to the product backlog so that it becomes one of the PBIs. This is the top-down part of the approach.
If a PBI is so large that it can take the efforts of all the team members in a sprint and still cannot be implemented in a sprint, then it is elevated to the level of a feature. It is then split into multiple PBIs, each of which can be completed within a single sprint.
Epic – These entities define the general direction of product development and the initiatives that the organization adopts. An epic is split into multiple features, which are implemented over a number of releases.
Area – This term is not a generic term but is specific to Azure DevOps and some other DevOps toolsets like
IBM Rational Team Concert.
As we all know, one of the Agile practices is to do incremental development using an iterative development model. An iteration is a timebox in which the entire development process is carried out for a defined increment to the product. PBIs and their derivatives, like tasks, are assigned to iterations to define what is being developed in each iteration.
We may want to classify PBIs on criteria other than time. This is what Areas help us do. For example, if a team is developing a feature called "Employee UI", then all the PBIs that are part of that feature can be added to the area named "Employee UI". There can be a one-to-one relationship between a team and its area.
Now that we have got a clarity about the terms that we are going to use, we can check how those are
implemented using a case study.
SSGS IT EDUCON Services Pvt. Ltd. is a consulting firm that provides consulting related to DevOps. SSGS consultants go on-site to provide support related to various DevOps tools and processes. When an enquiry for DevOps support comes in, the profiles of the consultants are sent to the customer over email. The customer selects the consultant of their choice and a contract is drawn between the customer and SSGS.
When consultants upgrade to new technologies and/or gain new experience, updated profiles of the consultants are sent. When there are repeat enquiries, it is observed that some customers do not request new profiles, but check the old profiles of these consultants to see if they fit their requirement.
To eliminate this issue, SSGS needs a software application to be created which will allow customers to
search and view the latest profiles of consultants. Profiles are initially created by the HR Person in the
organization when the consultant joins the organization. Those profiles cannot be downloaded for offline
use. The Customer can view skill set, certifications and experience of the consultants, as well as their rates
and availability. The customer then can select one or multiple consultants with whom the contract is drawn.
Once the consulting project is over, the consultants update their profile in the same software that was used
by the customer and submit it for approval to the manager. The Manager may accept the updates or may
suggest some more changes. Once the manager approves the updates in a profile, that profile becomes
available to the next customer if the search criterion matches. If the consultant leaves the organization, the
HR Person archives the profile and may also delete it, if needed.
Analysis of the case study
Since the application is to be accessed by users external as well as internal to the organization, it needs to be a browser-based web application. It should have separate interfaces for the HR Person, Consultant (Employee), Manager and Customer. We can call each of these interfaces a feature.
Each of these interfaces will allow many interactions between the user and the application. These
interactions are gathered as PBIs. The Application needs to be created in the shortest possible time.
1. Since there are multiple interfaces which make up the product, we will create multiple teams to handle each of them. Each interface will be treated as a feature and an area will be created for it. The created area will be assigned to the respective team.
2. PBIs created as children of each feature, will also be added to the same area.
3. Each team will have iterations with the same start dates and end dates.
4. There will be only one product backlog, common to all feature teams. This backlog will consist of features and their child PBIs.
5. Each team will have a separate sprint backlog to which the PBIs of that sprint will be added.
6. It should be possible to view the distribution of the entire product backlog across teams, sprints and the release on a single screen.
Implementation
We start the implementation by creating a new team project on Azure DevOps. When a team project is created, it automatically adds a team with the same name as the team project.
When we add a team project named “SSGS EMS”, a team named “SSGS EMS Team” is automatically added
to that team project. This team will have all the members working on the product. We will now create the
teams and their respective areas for different interfaces. We will create the following teams:
1. Employee Interface
2. HR Person Interface
3. Manager Interface
4. Customer Interface
We can create new teams by clicking the “New Team” button on the page of Project Settings – Teams. Figure
3 shows a form for creating the new team named “Employee Interface”.
3. All the other members are assigned the role of “Contributor” so that they can add and edit any entity in
the project.
4. The check-box to create an area path with the name of the team is checked.
On that page itself we can view all the created teams as seen in Figure 4. Each team is going to be a Scrum
Team.
Figure 4: Created Teams
From Project Settings – Boards – Project Configuration – Areas we can view the created areas.
To set the properties for each team, let us select the teams one by one from the top (dropdown) – see
Figure 6.
When a team, for example “Customer Interface”, is selected, we can see that the “Default Area” property of the team is set to the area whose name is the same as the team name.
For each team, we can set the sprints (Iterations) that will appear in the Sprint hub of that team. In preparation, we should define at least as many sprints with the same dates in the project as there are feature teams.
Since we have 4 feature teams, we define a minimum of 4 sprints having the same dates. Set the dates of each sprint from the Project Configuration – Iterations page.
Let’s now set sprints for the teams. Open the page Team Configuration from Project Settings – Boards. Select
the team “Employee Interface” from the top-level dropdown on that page. Click the Iterations tab on the
team properties page that is shown. Add the iteration “SSGS EMS\Sprint 1” to the list of iterations of this
team by selecting that iteration.
We will now create the Product Backlog. Open the page Boards – Backlog and ensure that the team selected by default is “SSGS EMS” (the default team of the project). Select Feature from the dropdown that appears at the top-right corner of the page (Figure 9).
Let’s add features to the product backlog. Features are parents of PBIs. We will not assign features to any specific team, although we will keep their names the same as the team names.
In the next step, we will change the view to Backlog Items and add the setting to view Parents.
Once the Features are visible in the product backlog, we will add the PBIs. Each PBI is added as a child of the respective feature. A sample set of PBIs, which forms the complete product backlog at this moment, can be seen in Figure 11:
For example, let’s first add the first PBI that is under the feature “HR Person Interface” to the area path with the name “HR Person Interface”. As soon as we do that, it appears in the backlog of “HR Person Interface”, as seen in Figure 12.
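If you prefer to script such an assignment instead of using the web UI, the same change can be made with the Azure DevOps .NET client libraries (the Microsoft.TeamFoundationServer.Client NuGet package). The sketch below is a minimal example; the organization URL and personal access token are placeholders:

using System;
using System.Threading.Tasks;
using Microsoft.TeamFoundation.WorkItemTracking.WebApi;
using Microsoft.VisualStudio.Services.Common;
using Microsoft.VisualStudio.Services.WebApi;
using Microsoft.VisualStudio.Services.WebApi.Patch;
using Microsoft.VisualStudio.Services.WebApi.Patch.Json;

public static class AreaAssignment
{
    // Moves an existing PBI to the "HR Person Interface" area path.
    public static async Task AssignAreaAsync(string organizationUrl, string pat, int workItemId)
    {
        var connection = new VssConnection(new Uri(organizationUrl),
            new VssBasicCredential(string.Empty, pat));
        var client = connection.GetClient<WorkItemTrackingHttpClient>();

        var patch = new JsonPatchDocument
        {
            new JsonPatchOperation
            {
                Operation = Operation.Add,          // "add" also overwrites an existing field value
                Path = "/fields/System.AreaPath",
                Value = @"SSGS EMS\HR Person Interface"
            }
        };

        await client.UpdateWorkItemAsync(patch, workItemId);
    }
}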
A side effect of this step is that the PBI disappears from the Product Backlog of the default team. In most cases, we still want it to appear in the Product Backlog.
We will open the page Project Settings – Team Configuration – Areas – SSGS EMS (Default area of the
project) and then click the Context Menu Button (…) as shown in Figure 13.
Figure 13: Include Sub Areas for Default Team
After accepting the warning that appears, we will be able to see the PBIs that are under a different team’s backlog in the product backlog too.
Now we can drag and drop the PBI under Sprint 2 which is the Current Sprint of the team “HR Person
Interface”.
PBIs that are put under Sprint 2 can be seen under the Task Board of Sprint 2.
As a pure agile practice, we should not put any PBIs in sprints other than the current sprint of the team. I have gone against that practice here to show the Release Plan view that is part of the Delivery Plan extension.
You may observe in Figure 16 that I have deliberately added one of the PBIs that is a child of “Employee Interface” to Sprint 6, which is not a sprint of “Employee Interface”. It goes to show that it is possible for different teams to work on the same feature.
We can install the Delivery Plan extension from Visual Studio Marketplace. It is directly installed on our
Azure DevOps account from the page https://marketplace.visualstudio.com/items?itemName=ms.vss-plans.
This extension is created by Microsoft and is free of cost.
Once it is installed, we can view it under the Boards section of the team project as a node named “Plan”. On that page, we will create a new plan and call it Release Plan.
We can add all our feature teams under this plan and set the backlog level to “Product Backlog Item”.
Figure 18: Add the Teams and Backlogs to the Delivery Plan
The plan is created when we click the Create button. In Figure 18, you may be able to see the plan that has
some additional PBIs. We can add such PBIs directly from the plan if we find any important PBIs missing.
On the plan, we can set markers as milestones as seen in Figure 19. You may be able to see the marker for
Release 1 which is when the first increment of the product is to be released.
We can also set the fields that appear on the cards of the PBIs. I have added the Parent ID of the PBI so
that we may be able to trace the feature that the PBI is a child of.
With this plan in place, we can:
1. View the planned iterations and the PBIs under them for all teams on a single screen.
2. Check the status of dependencies that are being created by other teams.
Conclusion
In Azure DevOps, it is possible to use the same tools provided for managing agile development of one team,
to manage parts of a single product being developed by multiple teams.
In this article, we have seen how we can keep a common product backlog for all teams working on a single
product but maintain a separate sprint backlog for each of them.
We also saw how to use the Delivery Plan extension for viewing the entire release consisting of multiple
iterations and multiple teams.
Subodh Sohoni
Author
Subodh is a Trainer and consultant on Azure DevOps and Scrum. He has an experience of over
33 years. He is an engineer from Pune University and has done his post-graduation from IIT,
Madras. He is a Microsoft Most Valuable Professional (MVP) - Developer Technologies (Azure
DevOps), Microsoft Certified Trainer (MCT), Microsoft Certified Azure DevOps Engineer Expert,
Professional Scrum Developer and Professional Scrum Master (II). He has conducted more
than 300 corporate trainings on Microsoft technologies in India, USA, Malaysia, Australia,
New Zealand, Singapore, UAE, Philippines and Sri Lanka. He has also completed over 50
consulting assignments - some of which included entire Azure DevOps implementation for the
organizations. Subodh is a regular speaker at Microsoft events including Partner Leadership
Conclave. You can connect with him on LinkedIn
Dynamic Class
Creation
Preserving Type Safety
in C# with Roslyn
A dynamic language provides high-level features at runtime that other languages only provide statically by
modifying the source code before compilation[2]. They support runtime metaprogramming features, which
allow programs to dynamically write and manipulate other programs (and themselves) as data. Examples of
these features are:
• Fields and methods can be added and removed dynamically from classes and objects.
• New pieces of code can be generated and evaluated at runtime, without stopping the application
execution, adding new classes, replacing method bodies, or even changing inheritance trees.
• Dynamic code containing expressions may be executed from strings, evaluating their results at runtime.
These metaprogramming features can be classified in different levels of reflection (the capability of a
computational system to reason about and act upon itself, adjusting itself to changing conditions), and they
are a key topic of our research work [1] [2].
Dynamic languages are frequently interpreted, and generally check types at runtime [5]. The lack of compile-time type information means fewer opportunities for compiler optimizations.
Additionally, runtime type checking increases both execution time and memory consumption, due to the type checks performed during program execution and the additional data structures needed to perform them [1][2][3].
Because of this, there are approaches to optimize dynamic languages [3] or to add their features to existing
statically typed ones [4]. These are based on modifying the language runtime [8] or using platform features
to instrument the code at load time. In either case, runtime modification of programs is normally supported
via an external API using compiler services that can be accessed programmatically [4] .
These approaches were successful, proving that dynamic language features can be incorporated into statically typed languages, which then benefit from both static type checking and runtime flexibility.
However, they require the development of a supporting API or a custom virtual machine [3]. This requires
a lot of effort and, as the new dynamic features will not be part of the platform, they must be distributed
along with the programs that use them.
The modern .NET ecosystem, however, enables us to obtain many of these features just with standard platform services or modules.
The main contribution of this article is to demonstrate how the Roslyn Compiler-as-a-Service (CaaS) API can
be used to achieve runtime flexibility and type safety, while implementing most of the described dynamic
language features.
We’ll demonstrate how a class can be fully created at runtime, incorporating custom methods and
properties. The compiler services compile the dynamic code ensuring that no type errors exist [4].
You’ll also see how this dynamic class can comply with a known interface so type safety can be
preserved in the program, at least with part of the dynamically created code. Finally, we will also check if
this allows a substantial performance gain when compared to an older, non-statically typed approach that
creates dynamic objects via the ExpandoObject [9] class.
All the code shown in the following sections has been successfully tested on Visual Studio Enterprise 2019 using a .NET Core 3.1 solution.
Thanks to the syntactic sugar embedded into the language and the dynamic type, the ExpandoObject class can be used to create new objects whose content can be fully defined at runtime.
The following code snippet shows how a programmer can add the ClassName property to an
ExpandoObject instance by just writing a value to it. Instead of giving an error for writing to a property
that does not exist, the property is created and initialized inside the ExpandoObject instance.
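A minimal sketch of such a snippet follows; the method name and the property value are assumptions.

using System.Dynamic;

static dynamic CreateObject()
{
    dynamic obj = new ExpandoObject();

    // Writing to a member that does not exist yet creates and initializes it on the fly.
    obj.ClassName = "Person";

    return obj;
}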
Also, ExpandoObject instances behave like dictionaries, and the AssignProperties method takes full advantage of this to dynamically add multiple properties.
The following code snippet shows how easy it is to create an object instance whose contents can be fully
customized using this approach.
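A sketch of how such an AssignProperties helper might be implemented and used is shown below; the exact signature is an assumption.

using System;
using System.Collections.Generic;
using System.Dynamic;

public static class ExpandoFactory
{
    // Copies a dictionary of name/value pairs into a new ExpandoObject.
    public static dynamic AssignProperties(IDictionary<string, object> values)
    {
        dynamic obj = new ExpandoObject();
        var asDictionary = (IDictionary<string, object>)obj;   // ExpandoObject implements IDictionary<string, object>

        foreach (var pair in values)
            asDictionary[pair.Key] = pair.Value;               // every entry becomes a property

        return obj;
    }

    public static void Main()
    {
        // The contents of the instance are fully defined by runtime data.
        dynamic person = AssignProperties(new Dictionary<string, object>
        {
            ["ClassName"] = "Person",
            ["Name"] = "Ann",
            ["Age"] = 30
        });

        Console.WriteLine(person.Name);   // prints "Ann"
    }
}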
The C# compiler does not perform any static type checking on variables of dynamic type. This allows the
code to compile, even if the compiler lacks information about the static structure of an instance.
As a tradeoff, every operation that reads or invokes members dynamically loses the type safety provided by the compiler: every type error is detected and thrown at runtime.
This next example will be type checked at runtime and found valid, as all the accessed members have already been defined; the dynamic state of the object determines the operations that can be applied to it (duck typing [10]).
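Continuing with the person instance from the previous sketch (the member names are illustrative):

int age = person.Age;               // bound and found at runtime, so this succeeds
string name = person.ClassName;     // also valid: the member was added dynamically
// person.Salary would compile as well, but would throw a RuntimeBinderException when executed.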
Methods can also be added to ExpandoObject instances, but normally they have to be bound to a particular one, i.e. the instance that holds the properties these methods need to perform calculations on, as shown in the following code snippet.
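A sketch with illustrative member names; GetAge matches the method used in the measurements later in this article.

using System;
using System.Dynamic;

dynamic employee = new ExpandoObject();
employee.BirthYear = 1990;

// The delegate closes over the 'employee' variable, so this GetAge is bound to this particular
// instance and has to be re-created (with a different variable) for every other ExpandoObject.
employee.GetAge = (Func<int>)(() => DateTime.Now.Year - employee.BirthYear);

Console.WriteLine(employee.GetAge());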
This way, to add this method to multiple ExpandoObject instances, we need to create one delegate per instance, each referring to its own instance variable. The ExpandoObject approach does not follow the traditional class-based language behavior, as individual instances can be modified and evolved independently. It is closer to the model of prototype-based object-oriented languages [11].
Using ExpandoObject is therefore not suitable if we want runtime-modifiable classes that define the structure of all their instances. However, we can achieve this with the Roslyn module from Microsoft [12] [8], which can be added to any project via the Microsoft.CodeAnalysis.CSharp.Scripting NuGet package.
We can create source code and ask Roslyn to compile it at runtime. To do so, data structures to hold the
necessary member information must be created.
We have created a simple demo program to demonstrate this, using the DynamicProperty class to
hold just new property names and types. A more complete implementation could use FieldInfo and
MethodInfo classes from System.Reflection to obtain data from existing class members.
//Other properties
var p4 = new DynamicProperty { Name = "Pet", FType = typeof(string) };
var p5 = new DynamicProperty { Name = "Name", FType = typeof(string) };
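A minimal shape for DynamicProperty consistent with the snippet above; the ToString helper that emits the property source is an assumption.

using System;

public class DynamicProperty
{
    public string Name { get; set; }
    public Type FType { get; set; }

    // Emits the C# source of an auto-implemented property, e.g. "public System.String Pet { get; set; }".
    public override string ToString() =>
        $"public {FType.FullName} {Name} {{ get; set; }}";
}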
The same can be done to dynamically build the source code of the methods to add. This can be achieved using the Roslyn SyntaxTree analysis features. These allow us to read any source code file (.cs) into a syntax tree and perform several operations on it, including obtaining the actual source code of any method we want. This may seem like a limitation, as source code might not be available because only the binaries were distributed, but source code can be obtained from compiled files by an assembly decompiler (such as dotPeek [14]).
In our demo, we just read a method source code from a sample code file.
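A sketch of such an extraction; the file path and method name are illustrative.

using System.IO;
using System.Linq;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

public static class MethodSourceReader
{
    // Parses a .cs file and returns the source text of the first method with the given name.
    public static string GetMethodSource(string csFilePath, string methodName)
    {
        var tree = CSharpSyntaxTree.ParseText(File.ReadAllText(csFilePath));

        var method = tree.GetRoot()
                         .DescendantNodes()
                         .OfType<MethodDeclarationSyntax>()
                         .First(m => m.Identifier.Text == methodName);

        return method.ToFullString();   // the method exactly as written in the file, body included
    }
}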
However, this is just a convenient feature to “reuse” existing source code. Nothing prevents us from fully
providing source code as a string, the same way as dynamic code evaluation features from typical dynamic
languages [7].
Once we have appropriate data structures to hold the dynamic class information, we can create it. This
is when we can achieve a certain degree of type safety by forcing the newly created class to implement
elements known by the compiler: inherit from an existing class or implement multiple interfaces. This way,
access to part of the structure of the dynamically created class can be type checked statically, as shown in
the following code snippet that forces the class to implement the IHasAge interface we created.
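For illustration, the known interface and the composed class source might look like this; the concrete member list is an assumption, since in the demo it is built from the DynamicProperty holders and the extracted method text.

// Known at compile time, so access through it is statically type checked.
public interface IHasAge
{
    int Age { get; set; }
    int GetAge();
}

// Source of the dynamic class, composed as a string and forced to implement IHasAge.
string classSource = @"
public class DynamicPerson : IHasAge
{
    public int Age { get; set; }
    public string Name { get; set; }
    public string Pet { get; set; }
    public int GetAge() { return Age; }
}";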
We do not know the full structure of the newly created class at this point, but we know that it implements
the IHasAge interface. Therefore, we can access its members normally, and when using them, type errors
will be detected by the compiler. We can still use a dynamic typing approach to access the rest of the class
structure. The difference is that now both static and dynamic type checking can be used in the same class if
we want.
We built the source code of the new class from the classes we created to hold member information. Later,
we used our Exec dynamic code execution function to both compile the class and create a new instance
of it (our demo only supports dynamic classes with a parameterless constructor). Exec is the function that
really uses the Roslyn API.
Once we create a new dynamic type like this, accessing it may be hard because we did not specify a
namespace or class visibility. To facilitate this, we create a single instance of the new type and return it in
CreatedObj, a dynamic property of a custom Globals class we pass to the Exec method. Finally, we return
this object converted to the passed generic type:
return (T)globals.CreatedObj;
}
The source code of the new dynamic class is created via the ToString method of the instance dt of the
utility class I created for this demo, DynamicType.
Please note that this code is just a demonstration of how dynamic class creation can be achieved.
If this code is extended to support a dynamic class creation framework, additional features should
be considered, such as allowing multiple instances of dynamically created classes. All this is possible by instructing Roslyn to perform the compilation of the source code via its CSharpScript.EvaluateAsync method (a sketch of such a call follows the list below), which accepts:
• Global variables that this code may use. In our implementation this is the Globals class.
• Assembly references potentially used by the code. We include System by default, and any other
assembly containing the parent class of the one that is going to be created ( or the interface it
implements).
• Finally, the import parameter is used to add any namespace that the source code needs (same as the
using keyword).
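A minimal sketch of such an Exec helper is shown below. The Globals/CreatedObj names follow the description above; everything else, including the error handling, is an assumption.

using System;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis.CSharp.Scripting;
using Microsoft.CodeAnalysis.Scripting;

public class Globals
{
    public object CreatedObj { get; set; }
}

public static class DynamicTypeFactory
{
    // Compiles 'classSource', creates one instance of 'className' and returns it as T,
    // where T is the statically known parent class or interface (e.g. IHasAge).
    public static async Task<T> Exec<T>(string classSource, string className)
    {
        var globals = new Globals();

        var options = ScriptOptions.Default
            .WithReferences(typeof(object).Assembly, typeof(T).Assembly)   // System plus the assembly of T
            .WithImports("System");                                        // T's namespace may need to be added too

        // The script declares the class and then assigns a new instance to the global property.
        string script = classSource + Environment.NewLine + "CreatedObj = new " + className + "();";

        try
        {
            await CSharpScript.EvaluateAsync(script, options, globals);
        }
        catch (CompilationErrorException e)
        {
            // Report the compiler diagnostics with the same level of detail as a normal build error.
            throw new InvalidOperationException(string.Join(Environment.NewLine, e.Diagnostics));
        }

        return (T)globals.CreatedObj;
    }
}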
As a result, this task may return a compiler error if the code does not compile. We use this information to
compose the full compiler error and return it to the caller. This way, the reported errors have the same level
of detail as the ones the compiler outputs before a program is run.
Therefore, this approach allows us to build a fully dynamic class, compile it, and return an instance whose
static type may be partially known, providing type safety and a traditional class-based approach.
The same code with little variations can also be used to evaluate expressions dynamically, thus
implementing another typical dynamic language feature.
Further modification of a created class requires creating a new type and re-creating all the instances of the old type as instances of the new one. For the same reason, statically compiled classes also cannot be modified. However, the .NET platform already has features to modify the method bodies of already compiled classes at runtime [15], which partially covers this.
Reading from a dictionary is very efficient, but an actual method call could be faster. To test this, we compared 2,000,000 calls to the GetAge method on both the ExpandoObject and our dynamic class instances, with both methods sharing the same implementation.
Performance measurement has been done at steady state, using a C# port of a statistically rigorous measurement procedure [16][15] that we have used in the past [3] with excellent results. This procedure runs a maximum of 30 iterations of 30 executions of the code to be measured, stopping prematurely if the coefficient of variation (CV, the ratio of the standard deviation to the mean of the collected data) falls below 2%.
This ensures that the effect of external events on the measured times is minimized. Execution was performed in Release mode on an AMD Ryzen 1700 with 64 GB of 2400 MHz RAM. The results appear in the following table:
Our measurements show that the Roslyn approach holds an average performance advantage of 15% over
the ExpandoObject alternative, thus being significantly faster. This means that if full dynamic typing
is not really needed to implement a feature, the Roslyn approach provides convenient type safety and
performance advantages when using the dynamically created types through the program code.
There is also an initial performance cost when creating a new type via Roslyn (the cost of compilation).
Programmers do not need to choose between static or dynamic typing; hybrid approaches obtain advantages from both.
Conclusion
Roslyn CaaS can be effectively used to build dynamic code, enabling flexibility and type safety, if the class to be built has at least a partially known structure.
Measurements show that this hybrid approach holds a significant performance advantage over the fully
dynamic ExpandoObject. This approach also maintains a more traditional class-based behavior that may be
easier to use by programmers who need to dynamically incorporate pieces of code.
Acknowledgments
This work has been partially funded by the Spanish Department of Science, Innovation and Universities:
project RTI2018-099235-B-I00. This has also been partially funded by the project GR-2011-0040 from the
University of Oviedo.
References
[1] L. D. Paulson, "Developers shift to dynamic programming languages," IEEE Computer, vol. 40, no. 2, pp.
12-15, 2007.
[2] O. Callau, R. Robbes, E. Tanter and D. Röthlisberger, "How (and why) developers use the dynamic
features of programming languages: the case of Smalltalk," Empirical Software Engineering, vol. 18, no. 6, pp.
1156-1194, 2013.
[3] F. Ortin, J. Redondo and J. B. G. Perez-Schofield, "Efficient virtual machine support of runtime
structural reflection," Science of Computer Programming, vol. 74, no. 10, pp. 836-860, 2009.
[4] F. Ortin, M. Labrador and J. Redondo, "A hybrid class- and prototype-based object model to support
language-neutral structural intercession," Information and Software Technology, vol. 44, no. 1, pp. 199-219,
2014.
[5] L. Tratt, "Dynamically typed languages," Advances in Computers, vol. 77, pp. 149-184, 2009.
[6] J. Redondo and F. Ortin, "A comprehensive evaluation of common python implementations," IEEE
Software, vol. 32, no. 4, pp. 76-84, 2014.
[7] I. Lagartos, J. Redondo and F. Ortin, "Efficient Runtime Metaprogramming Services for Java," Journal of
Systems and Software, vol. 153, pp. 220-237, 2019.
[8] J. M. Redondo, F. Ortin and J. M. Cueva, "Optimizing reflective primitives of dynamic languages,"
International Journal of Software Engineering and Knowledge Engineering, vol. 18, no. 6, pp. 759-783, 2008.
[10] D. Thomas, C. Fowler and A. Hunt, Programming Ruby, 2nd ed, Chicago (Illinois): Addison-Wesley,
2004.
[11] J. M. Redondo and F. Ortin, "Efficient support of dynamic inheritance for class-and prototype-based
languages," Journal of Systems and Software, vol. 86, no. 2, pp. 278-301, 2013.
[12] Microsoft, "Overview of source code analyzers," MIcrosoft, 2020. [Online]. Available: https://docs.
microsoft.com/en-us/visualstudio/code-quality/roslyn-analyzers-overview?view=vs-2019. [Accessed 30 4
2020].
[14] JetBrains, "dotPeek: Free .Net Decompiler and Assembly Browser," JetBrains, 2020. [Online]. Available:
https://www.jetbrains.com/decompiler/. [Accessed 30 4 2020].
[15] T. Solarin-Sodara, "POSE: Replace any .NET method," GitHub, 8 1 2018. [Online]. Available: https://
github.com/tonerdo/pose. [Accessed 30 4 2020].
[16] A. Georges, D. Buytaert and L. Eeckhout, "Statistically rigorous java performance evaluation. In:
Object-oriented Programming Systems and Applications," in OOPSLA’ 07, New York, NY, USA, 2007.
XAMARIN
Gerald Versluis
Goodbye Xamarin.Forms,
Hello .NET MAUI!
In this tutorial, I will tell you all about the ins and outs
of this change and what it might mean for you.
Everything will just get faster, better and simpler for you
- the developer.
Before Xamarin was Xamarin, it had a different name and was owned by several different companies, but
that is not relevant to this story.
In 2011, Xamarin, in its current form, was founded by Miguel de Icaza and Nat Friedman. With Xamarin,
they built a solution with which you can develop cross-platform applications on iOS, Android and Windows,
based on .NET and C#. Nowadays you can even run it on macOS, Tizen, Linux and more!!
Since developing with Xamarin was all based on the same language, you could share your code across all
supported platforms, and thus reuse quite a bit.
The last piece that wasn’t reusable was the user interface (UI) of each platform.
In 2014, Xamarin.Forms was released as a solution to overcome that problem. With Forms, Xamarin now
introduced an abstraction layer above the different platforms’ UI concepts. By the means of C# or XAML, you
were now able to declare a Button, and Xamarin.Forms would then know how to render that button on iOS,
and that same button on Android as well.
With this in place, you would be able to reuse up to 99% of your code across all platforms.
In 2016 Xamarin was acquired by Microsoft. Together with this acquisition, most of the Xamarin code
became open-source and free for anyone to use under the MIT license.
If you want to learn more about the technical side of Xamarin, please have a look at the documentation
here: https://docs.microsoft.com/xamarin/get-started/what-is-xamarin
Xamarin.Forms Today
As already mentioned, Xamarin and Forms are free and open source today!
This means a lot of people are happily using it to build their apps - both for personal development of apps
as well as to create Line of business (LOB) enterprise apps. Over the years, new tooling was introduced:
Visual Studio for Mac allows you to develop cross-platform solutions on Mac hardware for Xamarin apps,
and also for ASP.NET Core or Azure Functions solutions.
And of course, all the Xamarin SDKs got updated with all the latest features all the way up to iOS 14 and
Android 11 which have just been announced at the time of writing.
Xamarin.Forms is no different: it has seen a lot of development over the years. New features are introduced
with every new version.
Not just new features; even new controls are now “in the box”. While earlier Forms would only render the abstraction to its native counterpart, the team has now introduced some controls that are composed from other UI elements.
Effectively, that means Forms now has several custom controls. Currently, these include CheckBox, RadioButton and Expander.
If we go a little deeper into how Xamarin.Forms works, we quickly find something called renderers.
Each VisualElement, which is basically each element that has a visual representation (so pages and
controls mostly), has a renderer. For instance, if we look at the Button again, Button is the abstract
Xamarin.Forms component which will be translated into a UIButton for iOS and an Android.Button on
Android.
To do this translation, Forms uses a renderer. In this case, the ButtonRenderer. Inside of that renderer, two
things happen basically:
1. Whenever a new Button (or other control) is created, all the properties are mapped to their native
controls’ counterparts. i.e.: the text on a Button is mapped to the right property on the targeted
platform so it shows up the right way.
2. Whenever a property changes on the Button (or other control), the native control is updated as well.
The renderer controls the lifecycle of that control. You might decide that you need things to look or act a
bit different, or that maybe a platform-specific feature is not implemented in Forms. For those scenarios,
you can create a custom renderer. A custom renderer allows you to inherit from the default renderer and
make changes to how the control is rendered on a specific platform.
If you want to learn more about renderers and custom renderers, this Docs page is a good starting point:
https://docs.microsoft.com/xamarin/xamarin-forms/app-fundamentals/custom-renderer/
First and most importantly: nothing will be taken away from you. Everything that is in Forms today, will be
available in .NET MAUI.
Second: while everything will still be available for you, things will definitely change. The team has taken all
the learnings over the past few years from Forms and will incorporate that into .NET MAUI.
Figure 1: .NET MAUI overview slide from the MS Build presentation by David Ortinau and Maddy Leger
Slim Renderers
In .NET MAUI, the renderers that are available right now will evolve into so-called slim renderers. The renderers will be re-engineered and built from the ground up to be more performant. Again, this will be done in a way so that they should be usable in your existing projects without too much hassle.
The benefit you will get is faster apps out of the box.
You might wonder what will happen to your custom renderers. Well, those should just keep working. There will probably be exceptions where this causes some issues, but the goal here, again, is to keep everything as compatible as possible.
If you are wondering about some of the details that are shaping up as we speak, please have a look at the
official Slim Renderers spec on GitHub: https://github.com/dotnet/maui/issues/28
Namespace Change
With .NET MAUI, Forms is taken into the .NET ecosystem as a first-class citizen. The new namespace will be
System.Maui. By the way, Xamarin.Essentials, the other popular library, will take the same route and you
can find that in the System.Devices namespace.
As you can imagine, this is quite the change, and even a breaking one. The team has every intention of providing you with a transition path or tool that will make the switch from Forms to .NET MAUI as pain-free as possible.
Single Project
If you have worked with Xamarin.Forms today, you know that you will typically have at least three projects:
the shared library where you want all your code to be so it can be reused, an iOS project and an Android
project. For each other platform that you want to run on, you will have to add a bootstrap project in your
solution.
While this is technically not a feature of .NET MAUI, .NET MAUI is the perfect candidate for this. In the
future, you will be able to run all the apps from a single project.
Figure 2: Screenshots of how the single project structure might look
With the single project structure, you will be able to handle resources like images and fonts from a single
place instead of per platform. Platform-specific metadata like in the info.plist file will still be available.
Writing platform-specific code will happen the same way as you would write multi-targeting libraries today.
Another thing that has been announced is that .NET MAUI will be supported in Visual Studio Code (VS
Code). This has been a long-standing wish from a lot of developers, and it will finally happen. Additionally,
everything will be available in the command-line tooling as well, so you can also spin up your projects and
builds from there if you wish.
Xamarin.Forms, and other Microsoft products for that matter, have mostly been designed to work with the
Model-View-ViewModel (MVVM) pattern.
While MVVM will still be supported (again, nothing is taken away), because of the new renderer
architecture, other patterns can be implemented now.
For instance, the popular Model View Update (MVU) pattern will now also be implemented. If you are
curious what that looks like, have a look at the code below.
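For illustration only, the MVU-style preview shown around the announcement looked roughly like the following; the exact API is not final and may well change:

readonly State<int> count = 0;

[Body]
View body() => new StackLayout
{
    new Label("Welcome to .NET MAUI!"),
    new Button(
        () => $"You clicked {count} times",
        () => count.Value++)
};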
This can even open the door to completely drawn controls with SkiaSharp for instance. This is not in any
plans right now, but it’s certainly a possibility, even if it comes from the community.
Unfortunately, it will be a while before the evolution is complete. The first preview is expected together with the .NET 6 preview, which should happen in Q4 2020. The first release of .NET MAUI will happen a year after that, with the final release of .NET 6 in November 2021. For a more detailed roadmap, have a look at the wiki on the repository: https://github.com/dotnet/maui/wiki/Roadmap.
However, you can already be involved today. All the new plans, features, enhancements and everything will
be out in the open. You can head over to the repository right now and let the team know what is important
to you.
There are already lively discussions happening about all kinds of exciting new things. Also, the code is
there too, so you can follow progress and even start contributing to be amongst the first contributors of this
new product.
After the release of .NET MAUI, Forms will be supported for another year. That means it will still get bug fixes and support until November 2022. That should give you enough time to transition to .NET MAUI
with your apps.
There is also a big community supporting Xamarin and Forms, so this will also give library authors all the
time they need to adapt to this new major version.
As you might have already gotten from all the new names and namespaces, the brand Xamarin is bound to
disappear. Also, the iOS and Android SDKs will be renamed to .NET for iOS and .NET for Android.
I think this was always expected from the beginning when Microsoft took over. It’s just that these
transitions take time.
Of course, this is a little sad: the monkeys, the logo and everything that belongs to the Xamarin name will be history. Still, I think it’s for the best, and it means that the Xamarin framework has grown up to be a technology that is here to stay - backed by Microsoft, incorporated into .NET, your one-stop solution for everything cross-platform.
I’m very excited to see what the future will bring, and I hope you are too!
Gerald Versluis
Author
Gerald Versluis (@jfversluis) is a Software Engineer at
Microsoft. He has been working with the Xamarin.Forms
team for a year, and now works on Visual Studio Codespaces.
Not only does he like to code - but spreading knowledge, as
well as gaining it, is part of his daytime job. He does so by
speaking at conferences, live streaming and writing blogs
(https://blog.verslu.is) or tutorials.
Twitter: @jfversluis | Website: https://jfversluis.dev
Yacoub Massad
CODING PRACTICES:
MY MOST
IMPORTANT ONES -
PART 3
Introduction
In this article series, I talk about the coding practices that I found to be the most beneficial in my experience. In the last two parts, I talked about the following practices:
1. Automated testing
2. Separation of data and behavior
3. Immutability
In this part, I will talk about data modeling and making state and other impurities visible.
Note: In this article, I give you advice based on my 9+ years of experience working with applications. Although I have worked with many kinds of applications, there are probably kinds that I did not work with. Software development is more of an art than a science, so ensure that the advice I give you here makes sense in your case before using it.
Practice #4: Model data accurately
In practice #2, I recommended that you separate data from behavior. If you do that, then there will be units of code in your application whose only job is to model data.
In the Designing Data Objects in C# and F# article, I went in depth about how to design data objects. In
another article, I gave examples of suboptimal data object designs and suggested improvements. I also
wrote another article—called Function parameters in C# and the flattened sum type anti-pattern—where I
talked about how easy it is for function inputs to become confusing as functions evolve.
In my opinion, modeling data objects is much more important than modeling behavior. If you are accurate
with modeling your data objects, the behavior code that you write will be guided by the restrictions
imposed by the data object types.
When I start writing a function, before writing any code in the function itself, I think about the inputs and
outputs of the function. Except for the simple cases where built-in data types (e.g. string, int) are enough, I
create special data objects to model the function inputs and outputs. I spend enough time on these.
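As a small illustration (all the type names here are invented for the example), the inputs and outputs of a report-generating function might be modeled before a single line of behavior is written:

using System;

public sealed class CustomerReportRequest
{
    public CustomerReportRequest(int customerId, DateTime from, DateTime to)
    {
        CustomerId = customerId;
        From = from;
        To = to;
    }

    public int CustomerId { get; }
    public DateTime From { get; }
    public DateTime To { get; }
}

public sealed class CustomerReport
{
    public CustomerReport(int customerId, string text)
    {
        CustomerId = customerId;
        Text = text;
    }

    public int CustomerId { get; }
    public string Text { get; }
}

public static class CustomerReports
{
    // Step 2 then becomes a mapping from CustomerReportRequest to CustomerReport.
    public static CustomerReport Generate(CustomerReportRequest request) =>
        new CustomerReport(request.CustomerId,
            $"Report for customer {request.CustomerId} ({request.From:d} - {request.To:d})");
}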
Of course, I do not always create new data types. The reason is that many functions deal with the same
data types. Once you write a few functions, you have already created data types that are likely to be used
as-is in the functions that you will write next. This is true because functions in the same application are
solving a single problem.
Once the data types are defined, my mission of writing the function itself (the behavior) becomes easier.
Designing the data objects would have already made me think about the different possible input values
that the function can take and the possible output values it will produce. More concretely, this allows me to
split the process of writing the function to:
Step 1: think about the inputs and outputs of the function regardless of how the function will convert
inputs to outputs.
Step 2: write code to convert the inputs to outputs. This is of course a simplification, but you get the idea.
Of course, such separation is not always done perfectly. It happens sometimes that once you are in Step 2,
you realize that there are cases that are not modeled in the input and output data types.
Not only will well designed input and output data objects help you with writing the behavior code of
your functions, they will also make it easier for people (you included) to understand your functions. We
developers spend much more time reading functions than we spend writing them.
Practice #5: Make impurities visible
Make sure that the impure parts of your code are visible to readers of your code. Let me explain.
A pure function is a function whose output depends solely on the arguments passed to it. If we invoke a
pure function twice using the same input values, we are guaranteed to get the same output. Also, a pure
function has no side effects.
All of this means that a pure function cannot mutate a parameter, mutate or read global state, read a file,
write to a file, etc. Also, a pure function cannot call another function that is impure.
One kind of impurity is state. In practice #3, I talked about making data objects immutable. If you do that,
then you minimize the state in your applications.
Still, we sometimes require some state in applications. In this practice, I am advising you to keep that state
visible.
For example, instead of having global variables that multiple functions use to read and write state, create
state parameters (e.g. ref parameters) and pass them to the functions that need them. This way you have
made visible the fact that these functions might read or update state. This makes it easier for developers to understand your code. I talk about this in detail in the Global State in C# Applications article.
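A tiny sketch of the difference (the names are invented for the example):

// Hidden state: nothing in the signature tells the reader that a counter is mutated.
public static class HiddenState
{
    private static int requestCount;

    public static void HandleRequest() => requestCount++;
}

// Visible state: the caller has to pass the counter in, so the mutation is explicit.
public static class VisibleState
{
    public static void HandleRequest(ref int requestCount) => requestCount++;
}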
Another kind of impurity is I/O. A few examples are reading from or writing to a file, reading the system timer, writing to or reading from a database, etc. If a function does any of these, extract the code that does the I/O into a dependency and make that dependency visible in your code. Let me show you an example. Let’s say you have this code:
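(The report-generation details below are invented for illustration; the point is the File.WriteAllText call buried in the middle of the method.)

using System.IO;

public class Report
{
    public Report(string text) => Text = text;
    public string Text { get; }
}

public class ReportGenerator
{
    public Report Generate(int customerId)
    {
        //.. generate the report for the customer
        string reportText = "report contents for customer " + customerId;
        string reportCopyPath = $@"C:\ReportCopies\{customerId}.txt";

        // Impure I/O hidden inside the method: callers and the Composition Root cannot see it.
        File.WriteAllText(reportCopyPath, reportText);

        //..
        return new Report(reportText);
    }
}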
This class generates reports for customers. You give the Generate method a customerId, and it gives you
back a Report object. Somewhere in the middle of the method, there is a statement that writes a copy of
the report to some folder that holds copies of all generated reports.
What you can do is extract the call to File.WriteAllText into a dependency like this:
fileAllTextWriter.WriteAllText(reportCopyPath, reportText);
//..
}
}
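A fuller sketch of the extracted dependency and the updated class might look like this (the IFileAllTextWriter interface name is an assumption, modeled on the FileAllTextWriter class mentioned below; Report is the same type as in the previous sketch):

using System.IO;

public interface IFileAllTextWriter
{
    void WriteAllText(string path, string contents);
}

public class FileAllTextWriter : IFileAllTextWriter
{
    public void WriteAllText(string path, string contents) => File.WriteAllText(path, contents);
}

public class ReportGenerator
{
    private readonly IFileAllTextWriter fileAllTextWriter;

    public ReportGenerator(IFileAllTextWriter fileAllTextWriter) =>
        this.fileAllTextWriter = fileAllTextWriter;

    public Report Generate(int customerId)
    {
        //.. generate the report for the customer
        string reportText = "report contents for customer " + customerId;
        string reportCopyPath = $@"C:\ReportCopies\{customerId}.txt";

        // The impure write now goes through a visible, injected dependency.
        fileAllTextWriter.WriteAllText(reportCopyPath, reportText);

        //..
        return new Report(reportText);
    }
}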
We have now made the fact that the ReportGenerator might write to the file system, more visible. The
ReportGenerator class is now more honest.
In the Composition Root, when you construct the ReportGenerator class, you explicitly give it an instance
of the FileAllTextWriter class like this:
var reportGenerator =
new ReportGenerator(
new FileAllTextWriter());
It is important for the developers who read your Composition Roots to see the impure behavior that your
classes use.
In the above example, I used objects and interfaces to model behavior. The same thing can be done in other
coding styles. For example, you can do the same thing with static methods and delegates.
In the specific case of the example we just saw, a good idea might be to have a single FileSystem class
that contains not just a WriteAllText method, but other file system related methods.
Another thing to note about the Generate method is that it might have multiple responsibilities: it generates reports for customers, and it also saves copies of these reports to a special
folder. This might not be the best way to model the behavior. However, as far as making impurities visible,
extracting the WriteAllText method to a special dependency is enough.
Conclusion
This article series is about the coding practices that I found to be the most beneficial during my work in
software development.
In this part, Part 3, I talk about the #4 and #5 most important practices: modeling data accurately and
making impurities visible.
Not only will well-designed data objects make it easier for the readers of your code (including yourself) to
understand the code, they also make it easier for you to write your behavior code (your functions) since they
put restrictions on the inputs the functions receive and the outputs they produce.
You make impurities visible by explicitly passing state to functions instead of having functions access
global state under the hood. You also make impurities visible by extracting impure behavior (e.g. I/O
access) into special dependencies (e.g. classes) and making visible the fact that your code is using these
dependencies.
Yacoub Massad
Author
Yacoub Massad is a software developer and works mainly on
Microsoft technologies. Currently, he works at Zeva International
where he uses C#, .NET, and other technologies to create eDiscovery
solutions. He is interested in learning and writing about software
design principles that aim at creating maintainable software. You can
view his blog posts at criticalsoftwareblog.com. Recently he started a
YouTube channel about Roslyn, the .NET compiler.
DIGITAL TRANSFORMATION
Vikram Pendse
DIGITAL
TRANSFORMATION
using Microsoft Technologies
during and post COVID-19
Not too long ago, everyone was super excited to enter 2020.
Businesses were chalking out plans on adopting Cloud, AI, Machine Learning etc. – all encapsulated
in a single capsule called “Digital Transformation”.
Globally, many “Digital Transformation” conferences were taking shape where people were to share
their views and visions for helping their customers and other businesses achieve goals.
Then the COVID-19 pandemic struck. The entire world went into a lockdown. Most businesses came to a halt with no clarity on re-opening and revival. As I write this article, many countries and businesses are barely keeping their heads above water. Some are trying to recover by going “Virtual”.
This article is a personal overview of how this situation will affect the Digital Transformation journey, as well as some roadmaps defined earlier. We will also talk about the post-pandemic scenario and some new challenges that will emerge.
However, a large group consisting of non-CXO audiences like Developers, Testers, Back Office, IT, HR, Finance
and Operations don’t know what’s in this blackbox called “Digital Transformation”, what’s in it for them, and
the role they will be playing in it.
Some people have made an assumption that Digital Transformation is some kind of an “Automation” which
will take away jobs, while there are a few who claim the contrary – they feel Digital Transformation will
create new jobs.
Let us go through and understand the definition of “Digital Transformation”. We are using Wikipedia here
which has a very generic definition which most of us will understand.
“Digital Transformation is the use of new, fast and frequently changing digital technology to solve problems.”
This figure shows the core objectives and outcomes of your “Digital Transformation”. Your “Digital
Transformation” should align to most of the objectives at the top level.
We will now see how CoE (Center of Excellence) can drive these objectives to achieve “Digital
Transformation” within the organization and for the customer. We will also discuss how Microsoft
technologies enable us to achieve these objectives.
For COVID-19, social distancing and wearing a mask are among the measures recommended by the WHO to contain the spread of transmission. This forces many businesses to work from home (except the government and other related agencies). Because of remote work, there can be an impact on operations, delivery and meeting timelines.
Capturing Business Requirements virtually due to No/Limited
Travel or Travel restrictions
With the current pandemic situation followed by the emerging new normal, businesses need to define unified, optimal mechanisms to collaborate with end customers/stakeholders and employees within the organization.
Since the pandemic will pose challenges to physical assessments, face-to-face meetings, industry/plant visits to observe operations and record notes, talking face to face with clients etc., everything will go virtual. There is a need for tools/software which allow us to capture these requirements and perform the required analysis. Think of it as a Virtual Business Analyst (VBA) rather than an actual physical BA who will use these tools/software to capture data.
Here are some tools which are economical and commonly used (there are plenty of third-party tools which each enterprise/company can evaluate based on the nature of their business, meeting expectations and budget). I am avoiding mentioning Word, Excel and PowerPoint, which have been in use for over two decades now.
Microsoft Forms
Microsoft Forms is part of your O365 suite of applications. It is quite easy to use and handy for capturing quick information. It can also integrate with other applications for automation, and the data captured in Forms can be exported.
MS Forms can be used for conducting various satisfaction surveys and collecting product feedback.
We can use it within the organization for various registrations for different business operations. So, for all small requirements, feedback etc., this tool is quite helpful.
Microsoft Forms provides some basic business templates and allows you to customize them as seen in
Figure 1. You can share these Forms and export the output to MS Excel. Microsoft Forms also provide some
analytics of the survey/feedbacks you took from your users, as shown in Figure 2.
Previously known as VSTS (Visual Studio Team Services) and now called “Azure DevOps” (on-premises TFS is now “Azure DevOps Server”), Azure DevOps is license-based and comes with different pricing models. It is free for up to 5 developers; thereafter, it is a paid tool. More details here - https://azure.microsoft.com/en-in/pricing/details/devops/azure-devops-services/
Azure DevOps allows you to capture “Product Backlog” and write “Features” and “User Stories” as part of
requirements (it can be requirements for building a new product or services project).
It also allows you to put down “Acceptance Criteria” for each User Story to bring transparency and mutual
agreement between you and your client - since both can access Azure DevOps in different Roles and Access
controls.
This becomes a one-stop tool/service for your organization and the end customer, and gives complete visibility into the progress. Azure DevOps is very popular since it not only allows you to manage code repositories, but also lets you build an end-to-end integration and deployment pipeline, along with project dashboards, charts and sprints if you are following the Agile-Scrum method.
So you don’t have to purchase different tools and spend time and resources integrating each of them. The Azure DevOps tool/service is a one-stop shop for all your DevOps and Product Management needs.
Collaboration
One key challenge people realized during the pandemic is the difference between physically interacting (face to face) in an office workspace and interacting over virtual meeting tools.
We have been using virtual meeting tools for almost two decades, but usage was limited to a few hours or a certain number of people per day. We never saw week-long or month-long full-time usage of these virtual meeting tools (except in certain business domains).
Most companies use multiple collaboration tools like Skype, Skype for Business (or Microsoft Teams now), Zoom, WebEx (around since 1999), GoTo Meeting, etc.
However, many companies have still not standardized the collaboration platform for communication. Each department has its own choice of tools!
Now, with increased usage, it is time to standardize on certain platforms to enable Digital Transformation. Capabilities to look for in such a platform include:
7. Plugin support
8. Safe authentication/authorization
9. Login to, or switching between, multiple organizations (e.g. your own organization, a customer/partner organization)
12. Whiteboards
“Microsoft Teams” has all the above listed capabilities. It is a revolutionary collaboration tool with a vast
number of features compared to instant messaging platforms like Skype and Skype for Business (Lync).
Also if you are using Skype for Business as your existing Group Chat software, then
you can see a migration path to Teams here https://docs.microsoft.com/en-us/
microsoftteams/upgrade-and-coexistence-of-skypeforbusiness-and-teams
As per Microsoft, by 31st March 2020, overall Teams adoption had grown to 72M users.
It may surprise you what source control repositories and CI/CD have to do with “Digital Transformation”.
Traditionally, many businesses have been building and keeping their source code within their corporate
network because of compliance, security and intellectual property reasons. People used to get nightly
builds and would deploy them to production environments manually or with some deployment tools/
scripts, mostly in some data center.
With the embrace of the Cloud (Azure) and the current remote working scenario because of COVID-19, people are pushed to think of a strategy to bring automation to this process, improve GTM (Go To Market) time and align with the organizational strategy.
Note: Read more about Agile Development and activities here https://www.dotnetcurry.com/devops/1529/
devops-timeline
We briefly discussed some of the features of Azure DevOps, including the capability of storing and
managing Product Backlogs. CI/CD (Continuous Integration and Continuous Delivery) is another feature
of Azure DevOps as a product. CT (Continuous Testing/Test Automation) can also be done using Azure
DevOps. Although it is a Microsoft product, it supports integration with Non-Microsoft/Open Source tools
as well. Azure DevOps supports integration with GitHub and GitHub Enterprise, Bitbucket Cloud, TFVC and
Subversion. Read more about Git and DevOps integration here https://www.dotnetcurry.com/devops/1542/
github-integration-azure-devops
Besides Azure VMs and Azure services, Azure DevOps can target container registries and on-premises environments. Azure DevOps Server is the on-premises offering from Microsoft.
Are you worried about the learning curve for your teams for Azure DevOps?
It is very much possible that CXOs and decision makers may think that along with the learning curve that
comes with Azure, Azure DevOps will be another overhead.
Well, as we discussed above, with Azure DevOps you can continue using your existing source control and pipelines, and only target your efforts at integration with Azure DevOps. If you are implementing it from scratch, Azure DevOps Labs is a one-stop solution for your teams to gear up with systematic learning documentation and sample Proof-of-Concept (PoC) projects to reduce the learning curve, boost confidence and gain expertise over Azure DevOps. Refer to Figure 3 for additional information.
These free labs are so well designed that a new joiner or a trainee at your company can also build Azure DevOps CI/CD pipelines. Check out more here: https://www.azuredevopslabs.com/. Additionally, we have a plethora of Azure DevOps tutorials at https://www.dotnetcurry.com/tutorials/devops to get you started!
Figure 3 – Azure DevOps Labs dashboard showcasing different DevOps Labs
Visual Studio has been the primary IDE in enterprises for Microsoft technologies-based application development for many years, and it has seen unparalleled growth. However, for medium-sized organizations and startups, the entire Visual Studio suite might get heavy in terms of budget and licensing. Also, Visual Studio runs only on Windows environments, and hence there has been a barrier to the adoption of Visual Studio for many organizations.
Visual Studio Code does not have all the features of Visual Studio (Community, Professional or Enterprise
Edition), but it is a lightweight version of Visual Studio. It is a powerful Source Code Editor which works on
Windows, Linux and macOS. It supports a diverse collection of programming languages like C++, C#, Java,
Python, TypeScript etc. and supports different versions of .NET frameworks and Unity frameworks too.
A step ahead in the offerings of Visual Studio Code is “Visual Studio Codespaces” (formerly known as Visual Studio Online), which launched during the Microsoft //Build 2020 virtual event. Visual Studio Codespaces is a cloud-based IDE that people can access from anywhere using their browser, provided they have created a Codespace for themselves.
It supports Git repos and its built-in command-line interface allows you to edit, run and deploy your
application from anywhere and from any device without installing Visual Studio Code editor on it, as it is
purely cloud based and accessible over the browser as shown in figure 4.
Given the situation, a large majority of developers and technical experts are working from home, and thus
the collaboration between them becomes a challenge.
But Visual Studio Codespaces overcomes this challenge as well because of its built-in Live Code sharing
and IntelliCode features. Visual Studio Codespaces is in Public Preview. For pricing information please visit
https://azure.microsoft.com/en-us/pricing/details/visual-studio-online/
Organizations initially adopting the cloud usually try to push their Dev and Test environments as part of the cloud roadmap, before the actual production environment.
An on-premises Dev and Test environment usually takes time to set up even though it’s not so complex. Installing software, configuring Dev environments, adding tools, setting up Test environments and checking readiness are the common activities for on-premises Dev/Test environments.
When using the Cloud (Microsoft Azure in this case), building new Azure VMs (IaaS components) is the most
preferred way to build Dev and Test environment. This can be done manually if the environments are small.
Azure DevOps CD using IaaS as Code approach can help to spin large environments using Scripts and many
organizations prefer to use ARM (Azure Resource Manager) or Terraform Templates from Hashi Corp. to
automate the overall process.
Azure CLI and PowerShell are commonly used mechanisms for building quick Dev/Test environments. Azure
also provides SDKs if you wish to build IaaS components via REST APIs; most Azure-related products use
this API approach.
Another proven way of rapidly building Dev/Test environments on Azure is Azure DevTest Labs. Here are
some of its key features (please note that DevTest Labs requires a Visual Studio subscription):
• Reusability
Many organizations get started by first putting up their Dev and Test environments on the Cloud (in our
case Azure). This way they get a first-hand experience and then move forward by putting up production
workload.
Microsoft, being a leader in the cloud platform space, understands this trend and has launched Azure Dev/Test
subscriptions at discounted rates compared to other subscription types, specifically targeted at customers
who wish to host only their Dev and Test environments. These subscriptions are not recommended for
production workloads.
So, this not only nudges customers to try out Azure, but eases adoption as well, since the cost impact is low
as it is designed only for non-production workloads. You can find more details here
https://azure.microsoft.com/en-us/pricing/dev-test/
Although Microsoft provides a swift approach to spin up Dev/Test environments in no time, the following are
some concerns organizations have while working remotely:
3. Protecting the source code, IP (Intellectual Property), customers' code, software assets etc., to
ensure no source code breach happens
Well, to address these concerns, you can use Azure AD (Premium), Azure IaaS and Azure DevOps together.
Also make sure to check out the article titled "Prevent Code Access for Developers Working Remotely
using Azure DevOps (Protecting Code and IP during Lockdown)", which we encourage you to
read: https://www.dotnetcurry.com/devops/1533/prevent-code-access-azure-devops
For years, RDP/SSH has been the method used to connect to environments on Azure. Over time, with the growing
concerns and incidents of compromised RDP/SSH connections, there was a need for a secure and seamless
alternative to connect to an Azure environment, as shown in Figure 5.
Microsoft has been improving Azure services and security continuously and has launched a service
called Azure Bastion.
Azure Bastion is a fully managed Platform-as-a-Service (PaaS) offering which enables you to do RDP
and SSH securely without exposing a public IP address. This service is also worth adding to your Azure
architecture, especially for Azure IaaS workloads.
Figure 5 – Azure Bastion based Architecture for accessing Azure IaaS resources using Azure Bastion
3. Secure RDP/SSH via the Azure Portal (you don't need a separate RDP client or tool to connect)
4. You don't need to expose the public IPs of your Azure VMs, which protects against port scanning since
there is no direct exposure of your Azure VMs, making them more secure
5. Azure ensures hardening of Bastion as it is a fully managed platform, hence no additional security
measures are required
AI and Automation for Citizen Developers
From Automation to Cognitive Services to Machine Learning, Microsoft is ensuring a fusion of its
AI services into enterprise apps and consumer-centric apps.
With Microsoft Cognitive Services, Microsoft has already offered APIs for Speech Recognition, Face
Detection, Language, Text Analytics, Anomaly Detection, Sentiment Analytics, Azure Cognitive Search,
Personalizer etc.
Azure also offers rich Data (both Relational and Non-Relational data stores) services and Big Data Services
for Machine Learning along with Machine Learning studio supporting R and Python, so that data scientists
can easily build predictive analytical solutions.
In Conversational AI, Microsoft enables organizations to build their own chatbots using Bot Framework and
Power Virtual Agents. You can check more on Azure Cognitive Services and relate them with your business
scenarios over here https://azure.microsoft.com/en-in/services/cognitive-services/.
With DevTest Labs, Microsoft enables organizations to rapidly spin up their Dev/Test environments.
Similarly, with the DSVM (Data Science Virtual Machine), Microsoft has been enabling AI developers and data
scientists working on different AI tasks like building and training models, building predictive analyses, and
churning data with Microsoft and open source tools.
What is DSVM?
DSVM – the Data Science Virtual Machine – is a pre-configured environment with the common AI and ML tools
data scientists frequently need, available in both Windows and Linux flavors. The environment is optimized
and designed for Data Science and AI/ML work. It supports GPU-based hardware, allowing you to build, run and
test your Deep Learning scenarios as well.
Since the DSVM is hosted in Azure, it also allows you to connect to different Azure services and data
resources, as shown in Figure 6.
In the current COVID-19 situation, the need of the hour is rapid data processing, analysis and app building
with minimum effort and minimum resources.
Over the past few months, there has been huge adoption of Microsoft's Low Code/No Code offering, the Power
Platform, for rapid app development.
Power Apps takes a no-code approach to building apps and does not require knowledge of any programming
language, which makes it suitable for Citizen Developers.
With Power Apps you can quickly design, build and deploy apps that are adaptable, secure and scalable.
Let us quickly understand the Power Platform offerings.
Note that Power Platform offerings are tied to specific O365 subscriptions, hence some features might
not be available to you. Also, some features like AI Builder are region specific and might not be available in
your region.
Power BI
Power BI is mostly used for building self-service analytics and visualizations securely.
Traditionally, enterprises have been using SQL Server Reporting Services (SSRS) as their primary reporting
tool. Power BI, in my opinion, is a far better and easier reporting platform than SSRS and gives a lot more
freedom to connect to No-SQL data sources. Power BI also has the Power BI Embedded service on Azure.
Power BI can be accessed on Windows desktops and mobile devices with the Power BI apps. Developers can also
leverage the embedding feature of Power BI to show dashboards and data widgets in their own applications. You
can start building your first data driven visualization as shown in Figure 7. More information can be found
here https://powerbi.microsoft.com/en-us/.
Figure 7 – Typical Power BI Dashboard showing dynamic data driven visualizations on Desktop
(Reference Dashboard from Microsoft’s COVID-19 US Tracking Sample)
Common Use Cases for Power BI
Power Apps
Power Apps is mostly used for building Data driven No Code apps.
Power Apps is a No Code offering from Microsoft. It allows you to build apps with no prior knowledge of
any programming language or framework. It is purely a web-based studio in which you can choose any
template or build one from scratch.
Power Apps allows you to connect to different data sources and even allows you to access peripherals like
a Camera. It is the quickest way to build apps for your internal processes or even for your customers outside
your organization. See an example shown in figure 8. You can start building your apps here
https://powerapps.microsoft.com/en-us/
Figure 8 – Power App dashboard allowing Citizen developers to build apps without code
To enrich the no-code app experience in Power Apps and Power Automate (Flow) for organizations and
citizen developers, Microsoft also allows you to consume AI services within Power Apps to make apps more
intelligent. The "AI Builder" within Power Apps and Flow (Power Automate) enables you to integrate
common AI modules like entity extraction, object detection, form processing etc. along with your custom
models.
• No-code way of building apps: apps can be built without complex frameworks and programming skills.
• Ideal for organizations with frequently changing processes and complex sub-processes.
• Less complex, and apps can be easily built and deployed by Citizen Developers.
Flow (Power Automate)
Flow helps you create automations by using pre-defined automation templates which are generic across
businesses. You can also create customized flows for your business with the additional services and
connectors available. You can start building your flows/automation scenarios here
https://powerapps.microsoft.com/en-us/
Although Microsoft already has a robust Bot Framework, Power Virtual Agents allows you to create
actionable, performance centric no-code chatbots very easily.
You can start building your agents in no time; Figure 9 shows how one looks. More information on Power
Virtual Agents can be found here https://powervirtualagents.microsoft.com/en-us/.
As seen in the past few months, Power Apps has enabled many hospitals, organizations and government
agencies to quickly build apps based on incoming data, as well as data already in persistence.
With the humongous amount of data handled with ease by Power Apps and Power BI, people were able to get
real-time, seamless visualization of data – when it mattered the most. Thanks to its no-code, rapid app
building features, Power Apps helped many hospitals and NGOs collect patient data for positive COVID-19
cases, map them, identify hospital staff and service availability, and so on.
Since response times during COVID-19 must be short and resources are scant in many areas, applications
with a long development cycle were dropped in favour of solutions providing a no-code approach. With many
IT companies partially shut down and remote working enabled, Power Apps came to the rescue due to its
simplicity of building apps with no code, backed by solid Microsoft AI and Azure services. It has in no
time become the first choice of Citizen Developers.
Microsoft, with Power Virtual Agents and the Microsoft Health Bot, has been serving as a boon during the
ongoing COVID-19 pandemic and has helped many large hospitals and government agencies build a rapid virtual
agent experience.
With the Microsoft Health Bot, enrolling COVID-19 patients, collecting their data, and answering COVID-19
related FAQs on health websites for a wider audience became much easier, helping to manage expectations
during the pandemic.
More information on Microsoft Health Bot and for building similar solutions can be obtained here
https://www.microsoft.com/en-us/garage/wall-of-fame/microsoft-health-bot-service/
Microsoft has already announced "Role based Certification" for Azure, which also covers certifications on AI.
During COVID-19, since business growth in certain sectors is limited and the overall velocity of growth is
slow, organizations have planned to re-skill their workforce for the cloud, to ensure they ride the wave of
cloud adoption and migration, followed by Artificial Intelligence and Automation. There is a plethora of
training content available online, including prime quality communities like DotNetCurry and the DNC Magazine.
Similarly, for Ethics in AI and some fundamental training of AI services, Microsoft is offering an online
self-learn facility titled “AI School” for Devs and “AI School for Business” for business leaders and decision
makers. Here are some useful resources for your teams during and post COVID-19.
• AI School - https://aischool.microsoft.com/en-us/home
CloudSkew
Many architects and tech leads spend a lot of time in Visio and PowerPoint drawing Azure architectural
diagrams. It sometimes becomes complex to draw these diagrams in PPT, especially for pre-sales templates.
For smaller organizations, Visio is also an overhead.
CloudSkew is a free tool that can draw and export diagrams for Azure and other public clouds.
Although designed and developed by Microsoft FTE Alexey Polkovnikov, it is still not an official tool from
Microsoft. Azure Charts is a single dashboard for Azure, widely popular because of the simplicity of
accessing many critical Azure sites like Azure Status, SLAs, Timelines etc.
It is a very handy website for all developers, tech leads and architects working on Azure.
Here is a glimpse of Azure Heat Map.
https://azurecharts.com/
Both CloudSkew and Azure Heat Maps (Azure Charts) are currently free for individual use. You can visit
their websites for additional information and privacy/security related statements. As per the CloudSkew
website, pricing plans may be added in the near future.
Conclusion:
Many industries, including the IT industry, are trying to cope with the disruption introduced by the
COVID-19 pandemic. The unforeseen circumstances created by the pandemic have forced companies and
individuals to work remotely, to maintain social distancing and adhere to WHO guidelines.
Remote working is the new normal and Virtual Business is the new strategy.
This is the right time to understand, accept, adopt and implement Digital Transformation within the
organization, and at your customer end.
Cloud and AI adoption is at its peak because of COVID-19, and it's time to identify the right Cloud, AI and
collaboration services to fit this new virtual workspace.
This article presented some views and best practices to enable and ease your journey of Digital
Transformation, to generate awareness and present an opportunity to revisit business plans and strategies.
With the vision stated by Microsoft CEO Satya Nadella – "Empower every person on earth to achieve more" –
Microsoft is doing its best to enable businesses to run smoothly and adapt to these new working situations
forced upon us by COVID-19.
Vikram Pendse
Author
Mahesh Sabnis
CREATING API
USING
AZURE FUNCTION
WITH
HTTP TRIGGER
An Azure Function is triggered by a specific type of event, e.g. a timer, an HTTP request, etc. The trigger
executes the function logic, which performs operations like changing data, running a schedule, handling an
HTTP request, etc.
1. We can write Azure Functions using languages like C#, Java, JavaScript, Python and PowerShell.
2. Azure Functions has a pay-per-use pricing model. This means we pay for the time spent running the code.
Azure Functions pricing is per-second, based on the consumption plan chosen.
5. HTTP-triggered functions can be used with OAuth providers like Azure Active Directory, Google, Facebook,
Twitter and Microsoft Account.
6. The Azure Functions runtime is open source. This means the runtime is portable, so a function can run
anywhere, i.e. on the Azure Portal, in the organization's datacenter, etc.
Azure Functions is used for serverless application development in Azure. This means that Azure Functions
allows you to develop an application without thinking about the deployment infrastructure in Azure. This
approach increases developer productivity by providing a means of faster development using developer-friendly
APIs, low-code services, etc., and helps boost team performance, which goes a long way to benefit the
organization on the revenue front.
Implementing Azure Function for HTTP Trigger using
Visual Studio 2019
This article uses Visual Studio 2019 (Update 16.6) and .NET Core 3.1 to develop the Azure Function.
The Azure Function gets triggered by HTTP requests of type GET/POST/PUT/DELETE and, upon triggering, performs
operations for reading and writing data from and to an Azure SQL database.
To create the Azure SQL database and deploy the function on Azure, an Azure subscription is needed. Please
visit this link to register for free and get a subscription.
Once an Azure subscription is created, an Azure resource group must be created so that we can create the
Azure SQL database, Azure Functions, etc. Create a resource group and an Azure SQL database with the name
ProductCatalog and a Product table in it. The Product table will have the columns shown in Listing 1.
Step 1: Open Visual Studio 2019 and create a new Azure function project as shown in Figure 1. The Azure
Functions application is developed using Azure SDK 2.9.
The project is created with a file named Function1.cs containing the default HTTP trigger code. This code
contains a class with a Run() method. The method has the FunctionName attribute; this name is used to make
HTTP calls to the function. The Run() method accepts a parameter decorated with the HttpTrigger attribute,
which takes the following arguments (a sketch of the generated method follows the list):
• AuthorizationLevel: an enumeration used to determine the authorization level required to access the
function over HTTP. Its values indicate whether the HTTP request should contain keys to invoke the function.
The authorization level enumeration has the following values:
• methods: a params array parameter representing the HTTP methods to which the function will respond.
• route: used to define the route template. This represents the HTTP URL used to access the function.
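For reference, the HTTP-triggered method generated by the template looks roughly like the following sketch
(the exact boilerplate varies by SDK version, so treat it as an approximation rather than the generated code
verbatim):

[FunctionName("Function1")]
public static async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
    ILogger log)
{
    // AuthorizationLevel.Function requires a function key, "get"/"post" list the allowed
    // HTTP methods, and Route = null falls back to the default api/Function1 route.
    log.LogInformation("C# HTTP trigger function processed a request.");
    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    return new OkObjectResult($"Request body length: {requestBody?.Length ?? 0}");
}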
Step 2: Rename Function1.cs to ApiAppFunction.cs. This will change the class name to ApiAppFunction.
Please make sure that you remove the static keyword for the class.
Step 3: We will use EntityFrameworkCore to access Azure SQL database. To use EntityFrameworkCore we
need to install packages as shown in Figure 3.
Figure 3: Installing packages in the project
Step 4: We will be using the Database First approach to generate model classes from the database. To
generate the models, we need to run the scaffolding command. Open the command prompt and navigate to the
folder path as shown here:
...\ApiAppFunction\ApiAppFunction
Note: Use the user name and password for accessing the Azure SQL database. These are the credentials you set
while creating the database server for the Azure SQL database.
Once the command is run, you will receive an error as shown in Figure 4.
These settings will make sure that the .deps.json file is created. Now run the scaffold command again; this
time the command will run successfully and generate the DbContext class and the Product class.
Check Visual Studio and you will see that Context and Models folders have been added to the project. The
Context folder contains the DbContext class and the Models folder contains the Product class.
Open the ProductDbContext class from the Context folder and comment out the default constructor. We need to
do this because we will be instantiating this class using dependency injection. Also comment out the
OnConfiguring() method to avoid using the database connection string from code; instead, we will copy the
connection string and paste it into local.settings.json as shown in Listing 3.
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "SqlConnectionString": "Server=tcp:<your-database-server>.database.windows.net,1433;Initial Catalog=ProductCatalog;Persist Security Info=False;User ID={your-user-name};Password={your-password};MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;"
  }
}
Step 5: We will now instantiate ProductDbContext using dependency injection. To use the DI container, we
need a Startup class like in ASP.NET Core applications. We need to use FunctionsStartup class as a base
class for the Startup class. To use this class, install the following package in the project:
Microsoft.Azure.Functions.Extensions
Once this package is installed, add a new class file to the project and name this file as Startup.cs. Add some
code to this file as shown in the Listing 4:
using ApiAppFunction.Context;
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;
using System;

[assembly: FunctionsStartup(typeof(ApiAppFunction.Startup))]
namespace ApiAppFunction
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            string connectionString =
                Environment.GetEnvironmentVariable("SqlConnectionString");

            builder.Services.AddDbContext<ProductDbContext>(
                options => SqlServerDbContextOptionsExtensions.UseSqlServer(
                    options, connectionString));
        }
    }
}
The code in Listing 4 shows the Startup class derived from the FunctionsStartup class and contains a
Configure() method for registering the ProductDbContext class in Services. Please make sure that you do
not add long running logic code in the Configure() method.
Step 6: Let's modify the Product class as shown in Listing 5 to define JSON property serialization:
using Newtonsoft.Json;
namespace ApiAppFunction.Models
{
public partial class Product
{
[JsonProperty(PropertyName = "ProductRowId")]
public int ProductRowId { get; set; }
[JsonProperty(PropertyName = "ProductId")]
public string ProductId { get; set; }
[JsonProperty(PropertyName = "ProductName")]
public string ProductName { get; set; }
[JsonProperty(PropertyName = "CategoryName")]
public string CategoryName { get; set; }
[JsonProperty(PropertyName = "Manufacturer")]
public string Manufacturer { get; set; }
[JsonProperty(PropertyName = "Description")]
public string Description { get; set; }
[JsonProperty(PropertyName = "BasePrice")]
public int BasePrice { get; set; }
}
}
Step 7: Now let’s create an Azure Function to perform CRUD operations on Azure SQL database using the
HTTP request methods. Add the code in the ApiAppFunction.cs file as shown in Listing 6.
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;
using System.Collections.Generic;
using Microsoft.EntityFrameworkCore;
[FunctionName("Get")]
public async Task<IActionResult> Get(
[HttpTrigger(AuthorizationLevel.Function, "get", Route = "products")]
HttpRequest req, ILogger log)
{
try
{
// check for the querystring count for keys
if (req.Query.Keys.Count > 0)
{
// read the 'id' value from the querystring
int id = Convert.ToInt32(req.Query["id"]);
if (id > 0)
{
// read data based on 'id'
Product product = new Product();
product = await _context.Product.FindAsync(id);
return new OkObjectResult(product);
}
else
{
// return all records
List<Product> products = new List<Product>();
products = await _context.Product.ToListAsync();
return new OkObjectResult(products);
}
}
else
{
List<Product> products = new List<Product>();
products = await _context.Product.ToListAsync();
return new OkObjectResult(products);
}
}
catch (Exception ex)
{
return new OkObjectResult(ex.Message);
  }
}

[FunctionName("Post")]
public async Task<IActionResult> Post(
    [HttpTrigger(AuthorizationLevel.Function, "post", Route = "products")]
    HttpRequest req, ILogger log)
{
try
{
string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
Product product = JsonConvert.DeserializeObject<Product>(requestBody);
var prd = await _context.Product.AddAsync(product);
await _context.SaveChangesAsync();
return new OkObjectResult(prd.Entity);
}
catch (Exception ex)
{
return new OkObjectResult($"{ex.Message} {ex.InnerException}");
}
}
[FunctionName("Put")]
public async Task<IActionResult> Put(
[HttpTrigger(AuthorizationLevel.Function, "put", Route = "products/
{id:int}")] HttpRequest req, int id,
ILogger log)
{
try
{
string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
Product product = JsonConvert.DeserializeObject<Product>(requestBody);
if (product.ProductRowId == id)
{
_context.Entry<Product>(product).State = EntityState.Modified;
await _context.SaveChangesAsync();
return new OkObjectResult(product);
}
else
{
  return new OkObjectResult($"Record not found for Product Row Id {id}");
}
}
catch (Exception ex)
{
return new OkObjectResult($"{ex.Message} {ex.InnerException}");
}
}
[FunctionName("Delete")]
public async Task<IActionResult> Delete(
[HttpTrigger(AuthorizationLevel.Function, "delete", Route = "products/
{id:int}")] HttpRequest req, int id,
ILogger log)
{
try
{
var prd = await _context.Product.FindAsync(id);
if (prd == null)
{
return new OkObjectResult($"Record is not found against the Product Row
Id as {id}");
}
else
{
_context.Product.Remove(prd);
await _context.SaveChangesAsync();
return new OkObjectResult($"Record deleted successfully based on
Product Row Id {id}");
}
Listing 6: The Azure Function for CRUD operations based on HTTP request method
The code in Listing 6 shows that the ApiAppFunction class is constructor injected with the
ProductDbContext class.
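Based on that description, a minimal sketch of the constructor (matching the _context field used by the
Get/Post/Put/Delete methods in Listing 6) could look like this:

public class ApiAppFunction
{
    private readonly ProductDbContext _context;

    // ProductDbContext is resolved by the DI container configured in Startup.cs
    public ApiAppFunction(ProductDbContext context)
    {
        _context = context;
    }

    // the Get/Post/Put/Delete function methods from Listing 6 follow here...
}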
The ApiAppFunction class contains Get, Post, Put and Delete methods. All of these methods take an
HttpTrigger-attributed HttpRequest parameter and represent Azure Function methods with the authorization
level set to Function.
The route defined by all these methods is named products. This route is used in the URL template to invoke
the methods in HTTP requests. The methods perform CRUD operations on the Azure SQL database using the
ProductDbContext class. All of these methods return a Task<IActionResult> object, which means they return an
HTTP status code according to the execution status. We use OkObjectResult to generate the response for HTTP
requests.
That's it. Now let's test the function by running it and making requests to the function methods using
Postman. Use F5 to run the function. The function URLs are displayed in Figure 5.
Click on the Send button to post a record to the Azure SQL database. We can verify it by making an
HTTP GET request to the function.
So far, we have successfully created an Azure Function and tested it in the local environment. Now it is time
to publish this on Azure so that we can make it available publicly.
Step 8: Right click on the project and select the Publish option, this will start the publish wizard. In the first
step, we will have to choose the publish target as shown in Figure 7.
Select Azure and click on the Next button. It will display a publish specific target window as seen in Figure
8. In this window select either a Windows, Linux, Azure Function App Container or Azure Container Registry
target for publishing the Azure function (see Figure 8).
As shown in Figure 9, click on the Create a new Azure Function link to display a new window. Enter Azure
Function details like Name, Subscription, Resource group, Plan Type, Location and Azure storage as shown in
Figure 10.
Figure 10: Enter Azure Function details to create new Azure Function
Click on the Create button, and a new function will be created. Now we have to keep one important thing in
mind - we have already run the Azure Function in local environment successfully by defining the Azure SQL
Database connection string in local.settings.json file.
But the settings in this file will not work for the Azure Function deployment in Azure cloud environment.
So, to make sure that the Azure Function works successfully from the Azure cloud environment, we need to
modify Azure app service settings from the published window as shown in Figure 11.
Once we click on the Manage Azure App Service settings link, the Application settings window will be
displayed as shown in Figure 12.
The SqlConnectionString value for Local is set to the Azure SQL database but this value will work only for
the local environment. To make sure that this works on Azure cloud environment, we need to add the same
SQL Connection string for Remote setting as shown in Figure 13.
Figure 13: Settings SQL Azure Connection string for Remote settings
This setting will make sure that the Azure SQL database can be accessed from the Azure Function after it
is deployed on the Azure cloud. Click on the Ok button and then click on the Publish button to publish the
function as shown in Figure 14.
Step 9: Visit the Azure portal and login on to the portal. Once the login is successful, from All Resources, we
can see the published Azure function with the name ApiHttpTrigger as shown in Figure 15.
On the portal, click on the ApiHttpTrigger function to display the Azure Function’s details. Click on the
Functions link of the Azure Functions blade. This will show all Azure Functions methods which we have
published. See Figure 16.
One important point: unlike local execution of Azure Functions, we cannot access the published Azure Function
methods directly. To access these methods, we need app keys to authenticate the client. We can find these
keys via the App keys link on the Azure Functions details blade. As shown in Figure 16, we have
Get/Post/Put/Delete function methods which can be accessed via Postman to make HTTP calls to them.
Step 10: Click on the Get method of Azure Function. We can see the Get function method details as shown
in Figure 17.
To access the Get method, you need the function URL. To access the URL, click on the Get Function Url
link as shown in Figure 17. Clicking the link will bring up a new dialog box with the Function as shown in
Figure 18.
We can use host keys to access all HTTP methods, whereas the master key additionally grants admin access to
the function runtime APIs, so make a considered decision about which keys you use to access these function
methods. If the client application has separate application roles for read and write access, make sure that
you use function keys or host keys to access these methods.
For demo purposes, in this article, we will use the Host Key. Copy the function URL and using Postman
make a get call as shown in Figure 19.
Figure 19: Get call using Postman with URL containing Host key
Step 11: In Step 9, we saw the Azure Function deployment and tested it using the Postman tool.
As shown in Figure 19, the Code + Test option allows you to test the Azure Function from the portal without
requiring any other tools.
Before testing via the portal, let's add Application Insights to monitor the function's behavior, as shown
in Figure 21.
Click on Code + Test link, and the function details in function.json will get displayed.
As shown in Figure 22, click on the Test link. It will show the Test blade where we select the HTTP Method,
Key, Query Parameter, Headers and Body (see Figure 23).
Once the Run button is clicked, the Get function method will be executed and the response will be shown
with data in Output and the Logs messages. See Figure 25.
The monitor page shows the Invocation Details log as shown in Figure 26.
Step 12: To access the function methods, we need to make sure that clients can only use HTTPS instead of
plain HTTP traffic. Under Custom domains, make sure HTTPS Only is set to On, as shown in Figure 28.
Step 13: Once we have deployed the Azure Function in the portal and tested it, it’s time to consume the
Azure Functions API in a client application. We will be creating an Angular client application to consume
the Azure Functions API.
To consume the Azure Functions Web API, we must enable CORS as shown in Figure 29.
As Figure 29 shows, if we want to allow all origins to access the Azure Function API, we use * and remove all
other origins from the list. See Figure 30.
Step 14: We will perform the GET/POST/PUT/DELETE operations from the Angular client application using the
HTTP URLs of the Azure Functions. Table 1 contains a list of the URLs used for these HTTP operations.
These URLs are used to make HTTP calls from the Angular client application to the Azure Function API.
To develop the Angular client application, we need Visual Studio Code (VS Code) and Node.js. Download and
install them on your machine. We will create the Angular application using the Angular CLI.
Step 15: Open the Node.js command prompt and run the following command to install the Angular CLI:
npm install -g @angular/cli
Then create a new application:
ng new NgClientApp
We will be using Bootstrap for a rich UI. To install Bootstrap, run the command shown in Listing 9:
Step 16: Open the NgClientApp in VS Code. In the app folder, add three folders named component, models
and service.
Step 17: In the models folder, add a new file named app.product.model.ts and add the code shown in
Listing 10.
The above code contains the Model class. This class will be used to bind with the Angular component for
performing CRUD operations. The code also contains constant arrays for Categories and Manufacturers.
These arrays will be used to bind with HTML templates in the component class.
getProducts(): Observable<Product[]> {
  let response: Observable<Product[]> = null;
  response = this.http.get<Product[]>('https://apihttptrigger.azurewebsites.net/api/products?code=<KEY-CODE>&clientId=default');
  return response;
}
The code in Listing 11 is an Angular service. This service is used to perform HTTP calls to the Azure
Functions API. In the URL, <KEY-CODE> must be replaced with the Function key, which can be copied from the
published function in the Azure Portal. See Figure 31.
Figure 31: The Function Key
This Function Key authorizes the client application to access the Azure Functions and perform HTTP
operations.
Step 19: In the component folder, add a new file and name it as app.listproducts.component.ts and add the
code in this file as shown in Listing 12.
@Component({
selector: 'app-listproducts-component',
templateUrl: './app.listproduct.view.html'
})
export class ListProductsComponent implements OnInit {
products: Array<Product>;
status: string;
product: Product;
headers: Array<string>;
constructor(private serv: ProductService, private router: Router) {
this.products = new Array<Product>();
this.product = new Product(0, '', '', '', '', '', 0);
this.headers = new Array<string>();
}
ngOnInit(): void {
for (let p in this.product) {
this.headers.push(p);
}
this.loadData();
}
This class is constructor injected with the ProductService and Router classes. The class contains the
loadData() method, which invokes the getProducts() method of the service class and makes an HTTP GET request
to the Azure Function to read data from the Azure SQL database.
The edit() method is used to route to the EditProductComponent. The delete() method invokes the
deleteProduct() method of the service class, which makes an HTTP DELETE call to the Azure Function to delete
data from the Azure SQL database.
The ListProductsComponent declares a products array, a product object and a headers array. The headers array
is bound to the HTML table headers to generate table columns and rows based on the Product class properties.
Step 20: In the component folder, add a new file and name it as app.listproduct.view.html. In this file, add
some HTML code as shown in Listing 13.
<h2>List of Products</h2>
<div class="container">{{status}}</div>
The above HTML markup generates an HTML table based on the products array. The table contains buttons
for Edit and Delete operations.
Step 21: In the component folder, add a new file of the name app.createproduct.component.ts and add the
code in this file as shown in Listing 14.
@Component({
selector: 'app-createproduct-component',
templateUrl: './app.createproduct.view.html'
})
export class CreateProductComponent implements OnInit {
product: Product;
status: string;
categories = Categories;
manufacturers = Manufacturers;
constructor(private serv: ProductService, private router: Router) {
this.product = new Product(0, '', '', '', '', '', 0);
}
ngOnInit(): void { }
save(): void {
this.serv.postProduct(this.product).subscribe((response) => {
this.product = response;
this.router.navigate(['']);
}, (error) => {
this.status = `Error occured ${error}`;
});
}
clear(): void {
this.product = new Product(0, '', '', '', '', '', 0);
}
}
The code in Listing 14 contains the CreateProductComponent class. This class uses categories and
manufacturers arrays. The class is constructor injected with ProductService and Router classes.
The save() method of the class invokes the postProduct() method of the service class. If the product is
posted successfully, the router navigates back to the default page.
Step 22: In the component folder, add a new file and name it as app.createproduct.view.html. In this HTML
file, add the following HTML markup:
The HTML markup in Listing 15 contains HTML input elements which are bound with properties of the
Product class. The HTML select elements are bound with categories and manufacturers arrays of the
CreateProductComponent class. The HTML buttons are bound with clear() and save() methods of the
CreateProductComponent class.
Step 23: In the component folder, add a new file and name it as app.editproduct.component.ts. In this file
add code as shown in Listing 16.
@Component({
selector: 'app-editproduct-component',
templateUrl: './app.editproduct.view.html'
})
export class EditProductComponent implements OnInit {
product: Product;
status: string;
id: number;
categories = Categories;
manufacturers = Manufacturers;
constructor(private serv: ProductService, private router: Router, private act:
ActivatedRoute) {
this.product = new Product(0, '', '', '', '', '', 0);
}
ngOnInit(): void {
this.act.params.subscribe((param) => {
this.id = param.id;
});
this.serv.getProductById(this.id).subscribe((response) => {
this.product = response;
console.log(`Edit init ${JSON.stringify(response)}`);
}, (error) => {
this.status = `Error occured ${error}`;
});
}
save(): void {
this.serv.putProduct(this.id, this.product).subscribe((response) => {
this.product = response;
this.router.navigate(['']);
}, (error) => {
this.status = `Error occured ${error}`;
});
}
clear(): void {
this.product = new Product(0, '', '', '', '', '', 0);
}
}
The code in Listing 16 contains the EditProductComponent class. This class is constructor injected with the
ProductService, Router and ActivatedRoute classes. The ngOnInit() method of the class reads the route
parameter based on which the Product data is retrieved so that it can be edited. The save() method of the
class invokes the putProduct() method of the service to update the record.
Step 24: In the component folder, add a new file and name it as app.editproduct.view.html. Add the
following HTML markup in this file as shown in listing 17.
<h2>Update Product</h2>
<div class="container">{{status}}</div>
<div class="container">
<div class="form-group">
<label>Product Id</label>
<input type="text" [(ngModel)]="product.ProductId" class="form-control">
</div>
<div class="form-group">
<label>Product Name</label>
<input type="text" [(ngModel)]="product.ProductName" class="form-control">
</div>
<div class="form-group">
<label>Category Name</label>
<select class="form-control" [(ngModel)]="product.CategoryName">
<option *ngFor="let c of categories" value={{c}}>{{c}}</option>
</select>
</div>
<div class="form-group">
<label>Manufacturer</label>
<select class="form-control" [(ngModel)]="product.Manufacturer">
<option *ngFor="let m of manufacturers" value={{m}}>{{m}}</option>
</select>
</div>
<div class="form-group">
<label>Description</label>
<textarea class="form-control" [(ngModel)]="product.Description"></
In Listing 17, all the HTML input elements are bound to the properties of the product object from the
EditProductComponent class. The HTML select elements are bound to the categories and manufacturers arrays of
the component class. The HTML buttons are bound to the clear() and save() methods of the component class.
The Save button calls the save() method from the component and updates the Product by accessing Azure
Functions API.
Step 25: To complete the application, let's modify app-routing.module.ts to define the route table as shown
in Listing 18.
….
const routes: Routes = [
{path: '', component: ListProductsComponent},
{path: 'create', component: CreateProductComponent},
{path: 'edit/:id', component: EditProductComponent}
];
…
Modify the app.module.ts to declare all components and to import modules like FormsModule,
HttpClientModule, etc. as shown in Listing 19.
……
@NgModule({
declarations: [
AppComponent,
ListProductsComponent, CreateProductComponent, EditProductComponent
],
imports: [
BrowserModule, FormsModule, HttpClientModule,
AppRoutingModule
],
providers: [],
bootstrap: [AppComponent]
})
……
Let’s modify app.component.html to render router links and router outlet so that routing can be
implemented across ListProductsComponent, CreateProductComponent and EditProductComponent as
shown in Listing 20.
Step 26: Open the command prompt and execute the following command to run the Angular application:
ng serve
This will host the angular application on port 4200 by default. Open the browser and enter the following
URL in the address bar.
http://localhost:4200
This page will display a list of available products. Click on the Create link, the CreateProductComponent
view will be rendered as shown in Figure 33.
After clicking on the Save button, the Products List view will get displayed as shown in Figure 35.
Click on the Edit button on the table row, an Edit View gets displayed as seen in Figure 36 (Note: Here the
Edit button on ProductRowId 7 is clicked)
Let's modify the Base Price to 14000 and click on the Save button; the record will be updated and the
Products List view will be displayed as shown in Figure 37.
Figure 37: Product List after updating the Product record
Conclusion
Azure Functions with HTTP triggers provide a facility to develop and publish API apps that can be used as
REST APIs and made available to client applications.
Mahesh Sabnis
Author
Mahesh Sabnis is a DotNetCurry author and an ex-Microsoft MVP having
over two decades of experience in IT education and development. He is a
Microsoft Certified Trainer (MCT) since 2005 and has conducted various
Corporate Training programs for .NET Technologies (all versions), and
Front-end technologies like Angular and React. Follow him on Twitter @maheshdotnet or connect with him on
LinkedIn.
Damir Arh
ARCHITECTING
DESKTOP
AND MOBILE
APPLICATIONS
The article introduces several architectural and design patterns that can be
used to implement common scenarios in desktop and mobile applications.
Desktop applications used to be always-connected fat clients which implemented all the business logic
locally and directly communicated with the data store (typically a relational database management system,
such as SQL Server). Modern desktop applications are different in several ways:
• Instead of having direct access to the data store, they communicate through intermediary services
which can be securely exposed over public networks.
• A significant part of the business logic is implemented in the services so that it can be shared between
different client applications (desktop, mobile and web). Local processing in desktop applications is
often limited only to local data validation, presentation and other highly interactive tasks that benefit
from quick response times.
• They don’t rely on being always connected to services. Public networks are not as reliable as internal
ones. Increasingly, applications are expected to work on laptops in locations where there might be no
connectivity at all.
If you look carefully, haven’t these been the properties of mobile applications since their beginnings?
On the other hand, the performance of mobile devices is getting better and better. Today, they are mostly
on par with desktop and laptop computers, allowing mobile applications to do local processing that's
comparable to desktop applications.
The most notable differences between the two application types today are the size of the screen (typically
smaller for mobile apps) and the input methods (typically mouse and keyboard for desktop applications,
and touch for mobile applications). These mostly affect the application user interface, not its architecture.
In addition to that, at least in the .NET ecosystem, the technologies and frameworks used to develop
desktop and mobile applications are very similar, and parts of them are at times the same.
In this article, I’m going to focus on the MVVM (model-view-view model) pattern which works well with all
XAML-based UI frameworks: WPF (Windows Presentation Foundation), UWP (Universal Windows Platform)
and Xamarin.Forms (cross-platform UI framework for Xamarin mobile applications).
You can read more about other UI frameworks for desktop and mobile applications in two of my
DNC Magazine articles:
As the name implies, the MVVM pattern distinguishes between three types of classes:
• Models implement the domain logic and are in no way affected by the application user interface.
• Views implement the application user interface and consist of the declarative description of the user
interface in the XAML markup language and the imperative code in the corresponding code-behind file.
Depending on the UI framework, these are either Window or Page classes.
• View models are the intermediate classes that orchestrate the interaction between models and views.
They expose the model functionality in a way that can be easily consumed by the views through data
binding.
By default, bindings in the view refer to properties of the object assigned to a control property named
DataContext (or BindingContext in Xamarin.Forms). Although each UI control has its own DataContext
property, the value is inherited from the parent control when it isn’t set.
Because of this, the DataContext property only needs to be set at the view level when using MVVM.
Typically, a view model object should be assigned to it. This can be done in different ways:
• Declaratively in XAML:
<Window.DataContext>
<local:MainWindowViewModel />
</Window.DataContext>
With the view model set as the view’s DataContext, its properties can be bound to properties of controls,
as in the following example with InputValue and InputDisabled view model properties:
The control will read the values when it initializes. Typically, we want the values to be re-read whenever
they change. To achieve that, the view model implements the INotifyPropertyChanged interface and raises its
PropertyChanged event whenever a property value changes. The binding subscribes to this event and updates
the control when the event is raised for the bound view model property.
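As a minimal sketch (using the InputValue property from the binding example above; the backing field name is
illustrative), such a bindable property could look like this:

using System.ComponentModel;

public class MainWindowViewModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private string inputValue;

    public string InputValue
    {
        get => inputValue;
        set
        {
            if (inputValue != value)
            {
                inputValue = value;
                // notify bound controls that the value has changed
                PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(InputValue)));
            }
        }
    }
}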
Bindings can either be one-way (i.e. controls only display the value) or two-way (i.e. controls also write
values that changed because of user interaction back to the view model). Each control property has its own
default value for binding mode which can be changed in the binding declaration:
Apart from displaying and modifying values of the view model properties, the view also needs to invoke
methods on the view model (e.g. when a button is clicked). This can be achieved by binding to a command
that invokes that method.
The command exposed by the view model must implement the ICommand interface:
public MainWindowViewModel()
{
ClearCommand = new DelegateCommand(_ => Clear());
}
In the view model above, the DelegateCommand class accepts an action to specify the view model method
to be invoked. The code below is a simplistic implementation of such a DelegateCommand:
public class DelegateCommand : ICommand
{
    public event EventHandler CanExecuteChanged;
    private Action<object> Action { get; }
    public DelegateCommand(Action<object> action) => Action = action;
    public bool CanExecute(object parameter) => true;
    public void Execute(object parameter) => Action(parameter);
}
Not all components support binding to a command for every type of interaction. Many only raise events
which can’t be directly bound to a command.
An example of such a control is the ListView which only raises an event when the selected item changes.
Behaviors (originally introduced by Microsoft Expression Blend design tool for XAML) can be used to bind
events to commands. In WPF, the InvokeCommandAction from the Microsoft.Xaml.Behaviors.Wpf NuGet
package can be used today:
There’s one more type of interaction between the UI and the view model: opening other windows (or
pages). This can’t be achieved through binding because it doesn’t affect an existing UI control but requires
a new view to be created instead, e.g.:
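As a sketch, reusing the SecondWindow and SecondWindowViewModel types referenced later in this section, the
naive approach could look like this:

var secondWindow = new SecondWindow
{
    // pass a value from the current view model to the new one
    DataContext = new SecondWindowViewModel(InputValue)
};
secondWindow.ShowDialog();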
The above code would have to be placed in a command action to be executed in response to an event - for
example a button click. This allows it to pass a value from the current window (i.e. the view model itself)
to the new window as a parameter of the new view model constructor. Although this works, it requires the
view model to directly interact with the view (i.e. create a new view instance corresponding to the new
window) which the MVVM pattern is trying to avoid.
To keep decoupling between the two classes, a separate class is typically introduced to encapsulate the
imperative interaction with the UI framework. This would be a minimalistic implementation of such a class:
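A minimal sketch of such a class, assuming view models receive their parameter through the constructor and
that the service is exposed to view models as a NavigationService property, could be:

public class NavigationService
{
    public void OpenDialog<TView, TViewModel>(object parameter)
        where TView : Window, new()
    {
        // create the view model, passing the parameter to its constructor
        var viewModel = Activator.CreateInstance(typeof(TViewModel), parameter);

        // create the view and connect the two via the DataContext property
        var view = new TView { DataContext = viewModel };
        view.ShowDialog();
    }
}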
The view model could then simply call the method on the NavigationService class with correct
arguments (view and view model types, view model parameter):
NavigationService.OpenDialog<SecondWindow, SecondWindowViewModel>(InputValue);
To get rid of the only remaining coupling between the View Model and the View (i.e. the type of the view
to open), there’s usually a convention on how to determine the type of the view from the type of the view
model. In our case, the view type could match the view model type with its ViewModel postfix removed.
This would allow the view to be instantiated using reflection:
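A sketch of an OpenDialog overload that only takes the view model type and derives the view type by this
convention (it assumes the view lives in the same namespace and assembly as the view model):

public void OpenDialog<TViewModel>(object parameter)
{
    var viewModelType = typeof(TViewModel);

    // convention: SecondWindowViewModel -> SecondWindow
    var viewTypeName = viewModelType.FullName.Substring(
        0, viewModelType.FullName.Length - "ViewModel".Length);
    var viewType = viewModelType.Assembly.GetType(viewTypeName);

    var viewModel = Activator.CreateInstance(viewModelType, parameter);
    var view = (Window)Activator.CreateInstance(viewType);
    view.DataContext = viewModel;
    view.ShowDialog();
}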
Now, a window can be opened without being referenced in the originating view model at all. Only its view
model is required, since the NavigationService class finds the corresponding view based on the naming
convention:
NavigationService.OpenDialog<SecondWindowViewModel>(InputValue);
Although there’s a lot of interaction between the view and the view model, all of it is in the same direction.
View is fully aware of the view model, but the view model is not aware of the view at all.
As you can see, there's some plumbing code involved in implementing the MVVM pattern, and that code could
easily be shared between applications. This is exactly what MVVM frameworks provide: they include all the
necessary plumbing so that you don't have to develop your own, although they also have their own conventions
which you will need to adopt. The most popular MVVM frameworks are (in no particular order): Caliburn.Micro,
MvvmCross, MVVM Light, and Prism.
Dependency injection
To decouple the view model from the UI framework, we moved the code that interacts with the UI
framework into a separate class. However, the newly created NavigationService class for that purpose is
still instantiated inside the view model:
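Based on that description, the offending code is essentially a property initialized with a concrete instance,
along these lines:

public NavigationService NavigationService { get; set; } = new NavigationService();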
This makes the view model fully aware of it. The property has a public setter on purpose so that the
NavigationService class could be replaced with a mock or a different implementation in unit tests, but
this doesn’t really solve the issue.
To make it easier to replace the NavigationService class in tests, we should first introduce an interface,
that the NavigationService class and any potential replacement classes will implement:
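A minimal version of such an interface, matching the OpenDialog method sketched earlier, could be:

public interface INavigationService
{
    void OpenDialog<TViewModel>(object parameter);
}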
Now, we can change the type of the NavigationService property to the INavigationService interface
so that the replacement implementations can also be assigned to it:
One last step to fully decouple the view model from the NavigationService class is to avoid creating the
instance inside the view model:
Instead, we will create the instance of the NavigationService in the code responsible for creating the
instance of the view model and assign it to the property there:
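As a sketch, with the property now typed as the interface, the calling code could provide the instance like
this:

// inside the view model: only the abstraction is referenced
public INavigationService NavigationService { get; set; }

// in the code responsible for creating the view model:
var viewModel = new MainWindowViewModel
{
    NavigationService = new NavigationService()
};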
This pattern is called dependency injection because the dependencies (the NavigationService class in
our case) of a class (the view model in our case) are injected into that class from the outside so that the
class doesn’t depend on the concrete implementation.
In a typical view model, a class responsible for navigating between the views will not be the only injected
dependency. Other common dependencies to be injected can be grouped into the following categories:
• Domain layer dependencies, such as classes responsible for remote service calls and communication
with the data store.
• Application diagnostics services, such as logging, error reporting and performance measurement.
• OS or device-level services, such as file system, camera, time, geolocation, and similar.
Also, view models aren’t the only classes with external dependencies that should be injected so that the
class doesn’t directly depend on other concrete implementations. The same approach is used for most
classes, even for services that are injected into the view model. For example, a class responsible for calling a
specific remote service could depend on a more generic class responsible for performing HTTP calls which
in turn could depend on a logging class.
There are libraries available to make the process of dependency injection in an application easier to
manage. In addition to automatically providing all the required dependencies when a class is initialized,
they also include features for controlling the lifetime of an instance of a dependency.
The common choices for the lifetime of a dependency are:
• The same instance could be used throughout the application as long as it is running, making the class
effectively a singleton (you can read more about singletons in my DNC Magazine article: Singleton in
C# – Pattern or Anti-pattern).
• Other custom lifetimes can be defined, e.g. for a duration of a save operation to ensure transactional
consistency in the data store.
There’s an abundance of dependency injection libraries available for .NET. Their APIs are slightly different,
but they all have the same core feature set. Based on the NuGet download statistics, the most popular ones
at the time of writing were: Autofac, Unity, and Ninject.
Since dependency injection is an integral part of creating new instances of view models which the MVVM
frameworks are responsible for, all MVVM frameworks include a built-in dependency injection framework
which you can use. In Prism, for example, you would register your dependencies and view models with calls
to its own dependency injection container instance:
containerRegistry.RegisterSingleton<INavigationService, NavigationService>();
This configuration would then automatically be used when instantiating the view model classes for views.
All the necessary plumbing for that is provided by the framework.
Editorial Note: You can read more about dependency injection in general in Craig Berntson’s DNC Magazine
article Dependency Injection - SOLID Principles.
To interact with remote (typically REST) services, the client-side code usually uses wrapper methods that map to individual
REST endpoints and hide the details of the underlying HTTP requests. This is an implementation of the
proxy design pattern, a remote proxy to be exact.
Usually, a single class will contain methods for all endpoints of a single REST service. Each method will
serialize any parameters as query parameters or request body and use them to make an HTTP request to
the remote service URL endpoint using an HTTP client.
The response received from the remote service will usually be serialized in JSON format. The proxy method
will deserialize it into local DTO (data-transfer object) classes.
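As a sketch of what such a hand-written proxy method typically looks like (the ProductsProxy class, the
api/products/{id} endpoint and the ProductDto type are illustrative assumptions, not part of the article's
sample; Newtonsoft.Json is used for deserialization):

public class ProductsProxy
{
    private readonly HttpClient httpClient;

    public ProductsProxy(HttpClient httpClient)
    {
        this.httpClient = httpClient;
    }

    public async Task<ProductDto> GetProductAsync(int id)
    {
        // the parameter becomes part of the request URL
        var response = await httpClient.GetAsync($"api/products/{id}");
        response.EnsureSuccessStatusCode();

        // the JSON response is deserialized into a local DTO class
        var json = await response.Content.ReadAsStringAsync();
        return JsonConvert.DeserializeObject<ProductDto>(json);
    }
}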
These wrapper methods are very similar to each other and are very error-prone to write because of the
repetitive code they contain. This makes them a good candidate for code generation.
If a REST service is documented using the OpenAPI specification (originally called Swagger specification),
there are several tools available for doing that:
• OpenAPI Generator is the official highly configurable command line tool with support for many
programming languages and runtime environments. It has 4 different built-in templates for generating
C# code and supports custom templates as well.
• AutoRest is Microsoft’s command line tool which can also generate code for multiple programming
languages. However, it is much more focused on the Microsoft ecosystem and has very good
documentation for the generated C# code.
• NSwag only supports C# and TypeScript code generation but can also generate OpenAPI specification
files for ASP.NET Web API services. Its primary use case is to generate both the server-side specification
files and the client-side proxy classes, allowing it to better support C# and .NET specifics.
Of course, all tools generate not only the client-side proxies but also the DTO classes describing the data
returned by the remote service. Especially in cases when the REST service is maintained by a different team
or company and changes frequently, automated code generation with these tools can save a lot of time.
In applications which not only display data from remote services but also allow data manipulation,
validation of data input is an important topic.
At a minimum, the remote services will validate any posted data and return any validation errors they
encounter. These validation errors can then be exposed to the views through the view models so that they
are presented to the user.
However, a lot of validations that are done by the remote service could also be performed on the client
before the data is sent to the remote service (e.g. to check if a piece of data is required or if it matches the
requested format).
Of course, the remote service would still have to repeat the same validations because it can’t rely on the
data sent from the clients. But the users would nevertheless benefit from shorter response times for errors
which are detected without a roundtrip to the remote service.
When implementing data validation, the visitor pattern is often used. This keeps the validation logic
separate from the data transfer objects, which is a good separation of concerns.
It also makes it easier to have multiple different types of validators for the same type of data, based on
how it is going to be used.
The validator interface will have the role of the visitor with a Validate method accepting the data
transfer object to validate and returning the validation result including any potential validation errors:
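A minimal sketch of such an interface and a concrete validator could look like this (the IValidator, ValidationResult and TodoValidator names are assumptions for illustration; TodoDto is the hypothetical DTO from the earlier sketch):

using System.Collections.Generic;

public interface IValidator<T>
{
    ValidationResult Validate(T objectToValidate);
}

public class ValidationResult
{
    public IReadOnlyList<string> Errors { get; }
    public bool IsValid => Errors.Count == 0;

    public ValidationResult(IReadOnlyList<string> errors) => Errors = errors;
}

public class TodoValidator : IValidator<TodoDto>
{
    public ValidationResult Validate(TodoDto todo)
    {
        var errors = new List<string>();
        if (string.IsNullOrWhiteSpace(todo.Title))
        {
            errors.Add("Title is required.");
        }
        return new ValidationResult(errors);
    }
}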
To strictly follow the visitor pattern, the object being validated could have its own Validate method
accepting a concrete validator. To keep validation decoupled from the data transfer object, this method
could also be implemented as an extension method:
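For example, the extension method could be a thin wrapper like the following sketch, which adds no validation knowledge to the DTO itself:

public static class ValidationExtensions
{
    // Lets callers write dto.Validate(validator) instead of validator.Validate(dto).
    public static ValidationResult Validate<T>(this T objectToValidate, IValidator<T> validator)
        => validator.Validate(objectToValidate);
}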
But such a method is not required, and a validator can be efficiently used without it:
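A usage sketch, assuming the hypothetical types from above (todoDto stands for the object received from the view):

var validator = new TodoValidator();
var result = validator.Validate(todoDto);

if (!result.IsValid)
{
    // Expose result.Errors through the view model so the view can show them to the user.
}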
Although validation doesn’t require as much plumbing code as some of the other concerns covered earlier
in the article, a good third-party library can still make a developer’s life easier.
The most popular library for validation in the .NET ecosystem is FluentValidation. Its main advantages are a fluent API for declaring strongly typed validation rules and the fact that those rules live in dedicated validator classes, separate from the objects being validated.
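As a rough illustration, a FluentValidation validator for the hypothetical TodoDto used in the sketches above could look like this:

using FluentValidation;

public class TodoDtoValidator : AbstractValidator<TodoDto>
{
    public TodoDtoValidator()
    {
        // Rules are declared fluently, separately from the DTO itself.
        RuleFor(todo => todo.Title)
            .NotEmpty()
            .MaximumLength(100);
    }
}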
Conclusion
In this article, I have described common architecture patterns to be used in desktop and mobile
applications.
I started with the role of the MVVM pattern in decoupling the application code from the user interface. In
the next part, I explained how dependency injection can further help to decouple the view model classes
from the rest of the application logic.
I concluded with patterns used for interaction with remote data services: the remote proxy for
communicating with the remote service and the visitor pattern for implementing local data validation. I
accompanied each pattern with popular libraries and tools in the .NET ecosystem which can make their
implementation easier.
Damir Arh
Author
Damir Arh has many years of experience with Microsoft development tools;
both in complex enterprise software projects and modern cross-platform
mobile applications. In his drive towards better development processes,
he is a proponent of test driven development, continuous integration and
continuous deployment. He shares his knowledge by speaking at local
user groups and conferences, blogging, and answering questions on Stack
Overflow. He has been a Microsoft MVP for .NET since 2012.
ANGULAR
Keerti Kotaru
Angular v9 & Angular v10 Development Cheat Sheet
Details significant changes with version 9 and some new features introduced in version 10
Angular 9 is special. It's the key release of the last three years. Having said that, for most applications the upgrade is
smooth, with minimal impact and only a few changes that need developer intervention.
Angular’s semantic versioning includes three parts - major, minor and patch version numbers. As an
example, v10.0.1 refers to major version 10, minor version 0 and patch version 11.
Angular has a predictable release schedule. There has been a major release every six months.
Angular 9 being a major version, comes with a few breaking changes. Developer intervention is required
while upgrading a project from Angular 8 to Angular 9, and to version 10.
Angular's best practices state that a deprecated feature will continue to be available for two major releases.
This gives teams and developers enough breathing room to update their code.
Additionally, a warning on the console in dev mode highlights the need to refactor code (the production build
does not show the warning). Follow this link for the complete list of deprecations: https://angular.io/guide/deprecations
At the time of writing this article, the minor version of Angular 10 is 0 and the patch version is 1. A minor
version upgrade doesn't need developer intervention; as the name indicates, minor releases contain a few
smaller upgrades to framework features. Patch releases are frequent, often weekly, and include bug fixes.
While the command mentioned above (ng update @angular/cli @angular/core) upgrades the CLI and Angular core,
we need to run the update command separately for other packages. In this example, my application uses Angular
Material and the CDK. Run the following Angular CLI command to upgrade them to version 10.
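The command referred to here is presumably the following; it updates Angular Material along with the CDK it depends on:

ng update @angular/material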
A web application at https://update.angular.io/ lists all the changes required while upgrading the framework. It lets
us select the current Angular version and the target version.
It’s always advisable to upgrade to the latest Angular version. However, it is recommended to move step by
step between major versions. That is, if your project is on Angular 8, migrate first to the version 9 commit
and test the changes. Next, upgrade to Angular 10 (latest version as of this writing). To upgrade to a specific
version, say version 9 (as opposed to the latest version 10) use the following command
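The command was presumably of the following form - ng update accepts a target version with the package@version syntax (the exact packages to list depend on your project):

ng update @angular/cli@9 @angular/core@9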
An Angular 9 application using Ivy can have dependencies (libraries and packages) that are built with the
View Engine. A tool called ngcc (Angular Compatibility Compiler) helps with backward compatibility: it works
on the packages in node_modules and produces Ivy-compatible versions of them. The Angular CLI runs the ngcc
command as required.
It's still preferred to build Angular libraries with View Engine, because libraries need to stay compatible with
applications that use View Engine. Moreover, a library built with View Engine is still compatible with Ivy
(once compiled by ngcc).
Note: Angular 9.1.x improved ngcc speed. The tool now runs in parallel, compiling multiple libraries at a time.
Let’s have a look at what changed with the introduction of Ivy and Angular 9.
Angular doesn’t need imperative components specified as entry components. Ivy discovers and compiles
components automatically.
What are entry components? Angular loads components in one of two ways: declaratively or
imperatively. Let's elaborate.
Declarative:
The HTML template is declarative. When we include a component as a child in the HTML template of
another component using its selector, it is loaded declaratively.
For example, consider the following code snippet from a todo application. The AppComponent template uses two
other components - create-todo and todo-list. They are included in the application declaratively.
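The snippet presumably looked something like the following AppComponent template; the app-create-todo and app-todo-list selectors are assumptions based on the component names mentioned above:

<!-- app.component.html: both components are loaded declaratively through their selectors -->
<app-create-todo></app-create-todo>
<app-todo-list></app-todo-list>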
Imperative
A few components are included in the application imperatively; they are not referenced in any HTML template.
For example, the AppComponent is bootstrapped at the time of loading the application and the root
module. Even though it's in index.html, it's not part of a component template. The Router loads a few other
components. These are not in a template of any kind either. Check out the following code snippet:
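A sketch of what such a snippet typically contains - the bootstrapped root component and a routed component, neither of which appears in another component's template (all names and paths are illustrative):

import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { RouterModule } from '@angular/router';
import { AppComponent } from './app.component';
import { TodoListComponent } from './todo-list/todo-list.component';

@NgModule({
  declarations: [AppComponent, TodoListComponent],
  imports: [
    BrowserModule,
    // TodoListComponent is loaded imperatively by the Router when the user navigates to /todos
    RouterModule.forRoot([{ path: 'todos', component: TodoListComponent }])
  ],
  // AppComponent is loaded imperatively when the root module is bootstrapped
  bootstrap: [AppComponent]
})
export class AppModule { }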
The Router and the bootstrap process automatically add these components to the entry components list. As
developers, we don't have to do anything special.
However, with Angular 8 (and below), if we create a component using ViewContainerRef and load it
programmatically (instead of through an HTML template), we need to explicitly identify it as an entry component.
We may add it to entryComponents[] - one of the fields of the @NgModule() decorator. We may also use
ANALYZE_FOR_ENTRY_COMPONENTS, a DI token that creates entry components from the list of references in the
useValue property. The framework used the entry components list for tree shaking.
This is no longer required with Ivy. The compiler and the runtime identify dynamically loaded components, and
tree shaking continues to work without explicitly listing entry components.
For an elaborate read on entry components, use the links in the References section at the end of the article.
Tree shaking is the process of eliminating dead (unused) code from the bundles. With better tree shaking, the
Angular release notes describe a bundle size improvement in the range of 2% to 40%, depending on the nature
of the application. Most applications do not use every Angular framework feature; with better tree shaking,
unnecessary functions, classes and other code units are excluded from the generated bundles. With Ivy, there is
also an improvement in the size of the factories generated by the framework.
A global object, “ng”, is available in the application. This object was available even before Ivy; however, from
version 9 onwards its API is easier to use. We can use it for debugging while in development mode.
Consider the following example. In the to-do sample, we can select the CreateTodo component using the
getComponent() API on the ng object. It takes a DOM element as an argument.
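In the browser console of a development build, this looks roughly as follows (the app-create-todo selector is an assumption):

// Grab the component's host DOM element and ask Angular for the component instance attached to it
const element = document.getElementsByTagName('app-create-todo')[0];
ng.getComponent(element);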
Note: Another way to retrieve the component is to select the element in the browser's Elements panel and pass $0
(the currently selected DOM element) to ng.getComponent(), instead of using document.getElementsByTagName().
ng.getContext() – returns the context associated with an embedded view, e.g. the objects and variables of an
*ngFor row. Figure 4 shows getContext() on the second item in the todo list (see the Google Chrome console).
ng.getDirectives() – similar to getComponent(), this method returns the directives associated with the selected
element.
ng.getHostElement() – returns the host DOM element of the selected component or directive.
ng.getRootComponents() – returns the list of root components associated with the selected DOM element.
See Figure 5.
Angular version 9 and above support three modes of template type checking.
Strict mode: supported only with Ivy, the strict mode helps identify the maximum number of type problems
in the templates. To enable it, set the strictTemplates flag to true in the TypeScript configuration file,
tsconfig.json.
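The flag lives under the angularCompilerOptions section of tsconfig.json; a minimal sketch:

{
  "angularCompilerOptions": {
    "strictTemplates": true
  }
}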
Note that the strict mode honors the strictNullChecks flag in templates as well. The strictNullChecks flag needs
to be enabled in tsconfig.json; it constrains TypeScript code so that null and undefined can only be assigned to
a variable when they are part of its declared (union) type.
Look at Figure 6. For the first two variables, aNumericVariable and aNumericVariable2, we can't assign null or
undefined - the compiler throws an error, rightly so. Now consider the last two lines: when declared with a union
type, we may assign null or undefined to the variable.
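The code behind Figure 6 was roughly of the following shape (the first two variable names come from the text above; the union-typed variables are assumed for illustration):

// With strictNullChecks enabled in tsconfig.json:
let aNumericVariable: number = null;          // error: null is not assignable to number
let aNumericVariable2: number = undefined;    // error: undefined is not assignable to number

let nullableNumber: number | null = null;               // allowed: null is part of the union type
let optionalNumber: number | undefined = undefined;     // allowed: undefined is part of the union type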
Figure 6: Errors in strict mode
Full mode: to enable fullTemplateTypeCheck, set its value to true in tsconfig.json. Please note that if both
fullTemplateTypeCheck and strictTemplates are enabled, strict mode takes precedence.
Basic mode: this is basic type checking of HTML templates. It checks the object structure used in data bindings.
Consider the following sample: a sampleTodo object (of type Todo) has the fields id, title and isComplete. The type
check of the template returns an error if a field that doesn't exist on sampleTodo is used.
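For illustration, a binding like the following would be flagged by the template type check (the dummy field name is an assumption):

<!-- Todo only has id, title and isComplete, so referencing a non-existent field is reported as an error -->
<span>{{ sampleTodo.dummy }}</span>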
An aspect you might have noticed in Figure 8 is that the message highlights the error better than before.
Angular 9 has improved the formatting of compiler error messages. The output of the build command (the build
npm script) is another example.
Strict settings can also be enabled right when a project is created:
ng new --strict
• Turning on strict mode for TypeScript and Angular templates' type checks.
Angular 9 adds two new values for the providedIn option of the @Injectable() decorator:
platform: creates a singleton service instance available to all applications on the page.
any: creates a singleton instance per module injector (lazily loaded modules get their own instance).
Earlier, the framework already allowed the value 'root' with providedIn. With it, a singleton instance is created
at the root injector - hence one instance for the whole application.
Use the @Injectable() decorator on an Angular service so that the compiler generates the required metadata. The
injector uses this metadata, including the service's dependencies, to create an instance of the service.
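As a sketch, a service using one of the new providedIn values could look like this (TodoService and its method are illustrative names):

import { Injectable } from '@angular/core';

// 'any' gives each module injector its own instance; 'root' or 'platform' could be used instead.
@Injectable({ providedIn: 'any' })
export class TodoService {
  getTodos(): string[] {
    return ['Write article', 'Review code'];
  }
}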
Angular modules are loaded eagerly by default. To delay loading a module until it is needed, Angular traditionally
used a string syntax (e.g. ./path-to-lazy-module/a-lazy.module#ALazyModule) as the value of the loadChildren
field in the route configuration. When a user navigated to the route, the module got lazily loaded. With Ivy,
loadChildren takes a dynamic import instead.
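For comparison, a sketch of the two syntaxes side by side (module and path names are illustrative):

import { Routes } from '@angular/router';

const routes: Routes = [
  // Angular 8 and earlier: a magic string pointing at the module file and class
  // { path: 'todos', loadChildren: './todos/todos.module#TodosModule' },

  // Angular 9 with Ivy: a standard dynamic import, statically analyzable by build tooling
  { path: 'todos', loadChildren: () => import('./todos/todos.module').then(m => m.TodosModule) }
];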
Angular Material 10 adds a new date range picker, built from the following pieces:
mat-date-range-input: the MatDateRangeInput component is a form field that shows the from date and to date
selected by the user.
mat-date-range-picker: the MatDateRangePicker component allows choosing the date range visually. See the
figure below.
MatStartDate: a new directive to be used on the input element for the start date of the range.
MatEndDate: a new directive to be used on the input element for the end date of the range.
Consider the following code snippet, which shows a date range in the sample to-do application. Follow the link
for a complete code sample.
<mat-form-field >
<!-- The date field label -->
<mat-label>Complete todo in the timeframe</mat-label>
<!-- Form field that shows from date and to date-->
<mat-date-range-input [rangePicker]="todoWindow">
<input matStartDate placeholder="Start date">
<input matEndDate placeholder="End date">
</mat-date-range-input>
<!-- date range picker allows selecting from and to date -->
<mat-date-range-picker #todoWindow></mat-date-range-picker>
</mat-form-field>
Figure 10: The new Date Range component in Angular Material
With Ivy, better build times mean that even dev builds can use AoT compilation. Using AoT in development is
beneficial, as it eliminates differences between the development and production builds.
• ES private fields: TypeScript has long allowed private fields in a class via the private access modifier.
ECMAScript has a class fields proposal (Stage 3 at the time of writing) in which a private field is prefixed
with # and is scoped to the class. Consider the following example. Follow the link to read more on this topic.
class Todo {
#title: string
constructor(todoTitle: string) {
this.#title = todoTitle;
}
greet() {
// Works okay – private field used within the class
console.log(`The todo is ${this.#title}!`);
}
}
• Top-level await: we may use await with a promise to avoid then callbacks. However, a function that uses
await normally needs to be declared async. With top-level await, await can be used directly at the top level
of a JavaScript module, which eliminates the need to wrap such code in async functions.
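A sketch of what top-level await looks like (the URL is illustrative; the file must be an ES module built with a toolchain that supports top-level await):

// No wrapping async function is needed at the top level of the module.
const response = await fetch('https://example.com/api/todos');
const todos: unknown = await response.json();
console.log(todos);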
The Angular CLI in Angular 9.1.x (notice the minor version 1) can generate components with the CSS display
property set to block. Such a component behaves like a block-level element, adding a line break before and
after itself.
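The corresponding CLI option is --displayBlock; for example (the component name is illustrative):

ng generate component todo-summary --displayBlock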
Conclusion
Angular is evolving continuously. Angular 9 is the most significant release of the past few years. Ivy had been in
the making and actively discussed in the Angular developer community for a while, and its new runtime and
compiler improve the framework in multiple aspects. Angular 10 is another recent major version upgrade (released
in June 2020); it has relatively fewer updates (as far as major version releases go).
At the beginning, the article introduced Angular versioning and the significance of Angular 9. It described how
removing entry components simplified using imperative components. The article then discussed bundle size
improvements with Ivy. Next, it discussed improved debugging with the better API on the “ng” object.
The article also described strict mode with the Angular CLI and provided a sample implementation of the Angular
Material Date Range component.
It also mentioned improved type checking in component and directive templates, and discussed the additional
options for providedIn.
A JavaScript feature, dynamic imports, introduced with ES2020, has implications for Angular 9 and Ivy: the
compiler now relies on dynamic imports for lazy loading, as opposed to the old string syntax.
After describing a few miscellaneous Angular 9 and Ivy features, we concluded by showcasing the configuration to
opt out of Ivy and fall back to the View Engine.
Check out the code sample at the following GitHub repo. Clone it, then run npm install and npm start to run the sample.
https://github.com/kvkirthy/todo-samples/tree/master/memoization-demo
References
- Entry components documentation
- Bye bye entry components by Nishu Goel
- Dynamic Imports in JavaScript
- Christian Liebel’s blog - Angular & TypeScript: Writing Safer Code with strictNullChecks
- How to create private fields and functions in JavaScript class
- TypeScript 3.8
- How CommonJS is making your bundles larger by Minko Gechev.
- Angular blogs
o Angular 9
o Angular 9.1
o Angular 10
- Angular, deprecated APIs and features
Keerti Kotaru
Author
V Keerti Kotaru is an author and a blogger. He has authored two
books on Angular and Material Design. He was a Microsoft MVP
(2016 - 2019) and a frequent contributor to the developer community.
Subscribe to V Keerti Kotaru's thoughts at http://twitter.com/keertikotaru. Check out his past blogs, books and
contributions at http://kvkirthy.github.io/showcase.
Thank You for the Anniversary Edition!
@yacoubmassad
@keertikotaru
@vikrampendse
@subodhsohoni
@gouri_sohoni
@suprotimagarwal
benjamij
@maheshdotnet
José R López
@saffronstroke
@jfversluis
@damirarh
@klaushaller4
@dani_djg
mailto: [email protected]