
CompTIA Server+ (SK0-005): Authentication & Authorization
Controlling access to network resources is a vital security concern. Resource access is
controlled first by authentication, then by specific permissions for authorized resource
use. Physical security strategies allow controlled access to IT computing equipment at
the user and server levels. In this course, you will first list physical security strategies, then determine how to harden specific systems and limit the use of USB removable media through Group Policy. Next, you will manage users and groups in Active
Directory, Linux, and the AWS and Azure clouds. You’ll then configure multi-factor
authentication (MFA) in AWS and Azure. Finally, you’ll define how identity
federation works and enable access control through cloud-based RBAC roles and
attribute-based access control. This course is part of a collection that prepares you for
the CompTIA Server+ SK0-005 certification exam.

Course Overview
Topic title: Course Overview

Hi, I'm Dan Lachance. I've worked in various IT roles since the early 1990s, including as a technical trainer, a programmer, a consultant, and an IT tech author and editor.

Your host for this session is Dan Lachance. He is an IT trainer and a consultant.

I've held and still hold IT certifications related to Linux, Novell, Lotus, CompTIA,
and Microsoft.

Some of my specialties over the years have included networking, IT security, cloud
solutions, Linux management and configuration, and troubleshooting across a wide
array of Microsoft products. Resource access is controlled first by authentication, followed by specific permissions for authorized resource use. Physical security controls govern access to IT computing equipment at the user and server levels. In this course, I'll discuss physical security strategies, then determine how to harden specific systems and limit the use of USB removable media through Group Policy.

Next, I'll manage users and groups in Active Directory, Linux as well as the AWS and
Microsoft Azure clouds. I'll then configure MFA or multi-factor authentication in
AWS and Azure. Moving on, I'll define how identity federation works and enable
access control through cloud-based RBAC roles and attribute-based access control.
This course is part of a collection that prepares you for the CompTIA Server+ SK0-
005 certification exam.

Physical Security
Topic title: Physical Security. Your host for this session is Dan Lachance.

It's easy to get caught up with all of the technicalities in security solutions related to
IT. We have to think about physical security first because at the end of the day, all of
our IT equipment whether it's servers, storage arrays, laptops, desktops, routers,
switches, all of that stuff, has to physically exist somewhere, and so it needs to be
protected, just as physical documents need to be protected if they are, for example, printouts of very sensitive or confidential information.

So, with physical security, we're talking about restricted access to IT infrastructure: equipment, documents, or any other assets that might be considered sensitive. Now, this is a big deal when it comes to things like server rooms or even data centers. Think of a data center building: hundreds or even thousands of different customers' personal or business data might be stored on servers in the data center, and it might even be replicated to other data centers for fault tolerance. This is why you'll find some providers are reluctant to disclose the physical addresses of their data centers.

Now, there are many aspects to physical security, one of which is perimeter fencing of a certain height, along with the use of bollard posts. Bollard posts are used to protect buildings from vehicular attacks, like ramming a truck into a building to destroy a wall. Another aspect of physical security would be proper lighting around the facility, the perimeter, the property, parking lots, and parking garages. There are also locked gates, perhaps with a guard that determines who should or should not be allowed onto the property in the first place, and access cards to allow access to a building or perhaps a restricted floor in a building.

Security guards, guard dogs, even motion-sensing security systems. Now, we're not suggesting that all of these things are needed at once, but they could all be present in a high-security facility, or some combination of these items might be relevant in a particular situation. Continuing with this, let's think about things like data center camouflage, such as an unmarked building, maybe located near a densely forested area; it might be a plain concrete building, or a building that's painted green to blend in with the landscape.

That kind of thing can actually be part of physical security; it certainly is at the military level. Then we have interior and exterior reflective glass windows to prevent any visual access to what's happening within a facility. There's having a clean desk policy, which means that people will not leave sensitive documents, or devices containing them, easily accessible on their desks when they go home at night. And there are proper document shredding techniques to ensure that sensitive data cannot be retrieved after documents have been shredded. Other commonly used security mechanisms in office environments include privacy screen filters.

Here we have a screenshot showing that you could search online and purchase privacy screen filters that are designed to be placed over devices like a laptop, a tablet, or even a smartphone. The idea is that if you're close and directly in front of the screen, you can easily see what's happening, but if you're at an angle or a bit further away, you cannot. So, it provides confidentiality for data presented on a screen.

Access control vestibules otherwise called mantraps are very common in secure
buildings. Often what will happen is the outer door of a facility provides access to an
inner chamber where a second door will open only after the first door completely
closes. The idea is we want to prevent tailgating, we don't want a malicious user
sneaking in behind someone because the door hasn't fully closed yet.

And of course these doors would be controlled through some kind of security
mechanism like an access card or some kind of a key lock. Another technique related
to physical security is something called air-gapping, and this is commonly done for
very sensitive or critical networks. Simply stated, air-gapping means that we don't have a connection to any external network, not wirelessly and not through a wired link. It's completely separated, hence air-gapped, from other networks. It is a network unto itself, serving its own internal use. So, in our diagram as an example, we have
some kind of an industrial environment, whether it's for water management, whether
it's for the electrical power grid. Whatever the case is we would have industrial
monitoring stations that are connected to a network with PLCs. PLCs are
programmable logic controllers.

These allow for the connection to and monitoring of robotics in a manufacturing environment, or for controlling sensors for pressure, or opening and closing valves, that type of thing. You might also have a database transaction server that would be accessible, in this example, by plant data historians. But the idea is that while all of these items are connected together on a network, the network itself is air-gapped; there is no external network connection. Now, you might wonder, how do we remotely manage this type of network? Well, that's where you have to be very careful, because if you allow remote management for technicians that might be working from home or from another location, or maybe a managed service provider, an MSP, that takes care of and monitors these servers from another location, you are then introducing an entry point into the network. That is a potential security risk; you're increasing the attack surface. And when you do that, you really don't have an air-gapped network.

So, air-gapped networks can keep a network secure, but they can be inconvenient from a remote management standpoint; somebody has to make the decision as to which is more important. Finally, another aspect of physical security is the protection of data at rest. This means encrypting data, whether individual files or entire disk volumes, and then having restricted access to the facility and server room where the storage devices themselves exist, along with locked equipment racks in a server room or data center where storage arrays might exist. We want to prevent people from being able to physically steal storage devices, and if they do get their hands on one, we want to make sure strong encryption is used so that any sensitive data on it is completely inaccessible.

Hardening To Reduce the Attack Surface


Topic title: Hardening To Reduce the Attack Surface. Your host for this session is
Dan Lachance.

In the IT world, when we talk about hardening, what we're really talking about is securing or locking down hardware and/or software, ideally both, by doing things like applying firmware updates as well as software updates. Hardening is designed to reduce the attack surface. From a security standpoint, we sometimes have to think like a malicious actor: what would a malicious actor try to do to break into a smartphone or a network, or how would they pick up the phone, call somebody, and try to trick them into divulging sensitive information?

We have to think that way because once we identify these potential attack vectors, we can do our best to either completely remove them or at least mitigate them with some kind of security control. The human element is always the biggest concern, even when it comes to IT security. It is absolutely critical that there be periodic user security training, and that can come in many forms: in many cases it might be physical classroom training, it might be done through a Zoom call, or it might be done through some other online platform.

Whatever the case, employees in an organization must be made aware of current security threats; it's all about user awareness and training. So, that means that people must be educated on secure computing practices. An example would be awareness of scam phone calls, where callers might say they are from a government agency like the tax office, or from law enforcement, and that something has been discovered where, if a fine isn't paid in 24 hours, then imprisonment will follow.

Those things don't happen that way; people need to be aware of that and of many other items related to secure computing practices, such as not leaving mobile devices like USB flash drives, laptops, or tablets unattended in public spaces. So, awareness of social engineering, in other words, people trying to deceive you into revealing sensitive information, is important. Then there are phishing scams, spelled with a ph. Phishing, as an example, might be in the form of an email that looks like it legitimately comes from, let's say, Amazon: click here to view your invoice or to collect your refund.

But in fact, when you click that link, you're actually infecting your computer. So, it's some kind of scam to trick people into clicking on things or performing some action they otherwise wouldn't, usually related to sensitive information. General hardening techniques begin with removing unnecessary components from servers and disabling services that aren't needed, like SSH if you're not using it; maybe you have an out-of-band hardware remote management solution that you use exclusively. So, why would you leave SSH running if it's never used?

All it serves to do is introduce another entry point for potential malicious attacks. Disable or remove unused user accounts. Adhere to secure configuration standards, whether they're industry best practices or whether they're outlined by organizational security policies. And that shouldn't be a one-time thing; there needs to be a periodic review of security controls.

A security control is some kind of solution that reduces, eliminates, or mitigates a security threat, whether it's applying software patches, configuring firewall rules, or keeping virus scanners up to date on all devices. Those controls need to be reviewed periodically, because a security control that was once effective at mitigating a threat might no longer be effective at the current point in time. And so
you need periodic reviews of these things. Server hardening would include many
different tasks including enabling multi-factor authentication or MFA for admin
accounts. That means not allowing just a username and password. Now, that's two items, but both are in the same category of something you know, which could even be used remotely by someone who could somehow figure out what that combination is. So, instead, maybe require a smart card to be inserted into a laptop, if it's got a smart card slot, in addition to a username and password before an admin account can be used to manage a server.

Consider having a dedicated management network interface on a separate VLAN. Most servers have more than one network interface, and one of those interfaces could be plugged into a separate VLAN that only allows remote management traffic and nothing else. And if you're talking about server-class hardware, it probably already has a dedicated interface for remote hardware out-of-band management, so you might already have that solution built in, but it's definitely something to consider. Then there's frequent patching, with confirmation that patches were applied successfully, including date and time stamps and, of course, the version numbers of the individual patches; that is always going to be important to make sure servers are protected, and also for troubleshooting, since sometimes the application of a patch can break something else, and you want to be able to track that update history easily.

Periodic vulnerability assessments. There are plenty of tools, some of which are free,
that allow security technicians to scan either an entire network or individual hosts,
even running tests against a web app looking for vulnerabilities. So, we can identify
any weaknesses and then address them. And again, this needs to be periodic, because a security assessment might not have revealed something at one point in time, maybe because it ran prior to a server having been infected; later, that infection might show up as suspicious, anomalous behavior or as a configuration change that really shouldn't have occurred.

Also, consider using a host-based intrusion detection system, an IDS. An intrusion detection system, in this case, is configured for the specific host, and it's designed to look for what's abnormal. That means you have to know what is normal: you need a baseline of normal activity to identify what's strange. So, an intrusion detection system can be configured to look for suspicious activity, log it, and perhaps send a notification to admins, that type of thing; that's part of server hardening. Then, if you're dealing with virtualization for server virtual machines, you have to think about the security of any virtual machine backups or snapshots, and the security of virtual machine hard disks, because they exist as files on a storage device. The VM guests themselves must be hardened: the operating system running in a virtual machine needs to be hardened the same way you would harden a server OS running on a physical server; there's really no difference at that level.

Make sure that any storage area network traffic is encrypted, especially if you're using something like iSCSI, which doesn't necessarily require authentication or encryption because it uses standard network equipment to provide servers access to storage over the network. Another thing to think about is that on a hypervisor with a bunch of virtual machines, all it takes is one compromised virtual machine guest, perhaps infected with a bitcoin miner or something like that, to hog resources on that hypervisor, thus depriving other virtual machine guests of being able to function properly.

You might also consider configuring your hypervisor so that it's got multiple physical NICs connected to different VLANs, so virtual switch and VLAN isolation can be another factor. And then there's separating hypervisor management from virtual machine management: you can set permissions to allow someone to manage the hypervisor OS itself but not have access to the data inside of virtual machines. We also want to be careful to watch out for VM sprawl; this can be a security issue because having many unnecessary virtual machines running increases the attack surface. Then we have the option of encrypting a virtual machine. This is a screenshot from VMware Workstation, where we can set a password which essentially serves as a seed to generate an encryption key to encrypt the virtual machine. It means the virtual machine can't be started or managed unless you know the decryption password. Finally, on the mobile device hardening side, we have to think about using strong authentication, whether it's VPN authentication or requiring certificates before authenticating certain types of apps, and maybe restricting app installation by users, including sideloading.

Sideloading means installing an app from the source files as opposed to from an app store for a mobile device. There's enabling app geofencing, so maybe apps are only active when they're in a certain geographical region. There's encrypting mobile devices and enabling remote wipe, so that if a device is lost or stolen and it contains sensitive apps or info, it can be wiped remotely by admins. And just like with a regular server, there's disabling unneeded components like the camera, perhaps enabling airplane mode, turning off location services, and enabling or disabling near field communication (NFC) and Bluetooth.

All of this is part of mobile device hardening. Now, you might wonder why that's relevant for servers: a mobile device could allow access, let's say through a VPN app, into a network where servers exist, and those servers could potentially be targets for malicious users.

Disabling USB Storage Using Microsoft Group Policy


Topic title: Disabling USB Storage Using Microsoft Group Policy. Your host for this
session is Dan Lachance.
There are many strategies related to hardening a Windows server. One way is to harden the server's environment: if we have an infected client machine that's joined to the Active Directory domain, that could potentially also lead to a compromise of the server. And so there are many things that we can do, not only on the server but for domain-joined machines as well. One of those things is to prevent the use of removable storage; in other words, preventing users from plugging in things like USB drives that might be infected, and not only that, but perhaps making sure that users can't copy sensitive data to plugged-in USB drives.

Maybe we prevent users from inserting CDs or DVDs, if machines have those types of drives. We could do all of this individually on each machine, one at a time, or we can do it through Group Policy. Let's start with the one-at-a-time example,

The Windows desktop screen displays.

so from the start menu I'm going to type gpedit.msc, the Group Policy Editor Microsoft Management Console snap-in. But when I click on it, I get nothing. Well, that's because this is a Domain Controller, and there's a different tool to edit Group Policy for the Active Directory domain.

So, what I'm going to do is go into the start menu and go under Windows
Administrative Tools, where if I scroll down to the GS, I'll

A list of options appear below.

come across Group Policy Management.

The Group Policy Management window opens. The menu bar contains the following
options, namely: File, Action, View, Window, and Help. The left pane contains the
following options, namely: Group Policy Management, and Sites. The following sub
options are present below the Group Policy Management, namely: Domains, HQ,
Group Policy Objects, WMI Filters and Starter GPOs. The right pane displays three sections. The first section is titled as: HQ_Security_Settings. It contains four tabs, namely: Scope, Details, Settings, and Delegation. Currently, the Scope tab is active.
The second section is titled as: Security Filtering. The third section is titled as: WMI
Filtering.

This allows me to configure Group Policy Objects, or GPOs. We've got a few of them here that contain settings we want to apply to users or computers in the domain, either all users and computers in the domain or perhaps a subset, depending on where the GPO is linked, such as to a specific OU. But let's flip over to a domain-joined server to see what happens when we open the start menu and try to run gpedit.msc.

OK, so I'm on a different Windows Server now that is not a Domain Controller.
Remember, Domain Controllers contain a copy of the Active Directory database. So,
here I'm going to go to the start menu and, just like I did before, I'll type in gpedit.msc, and I'm going to go ahead and select it. On this server it runs, and it says Local Computer Policy, and

The Local Group Policy Editor window opens. The left pane contains two options,
namely: Local Computer Policy, and User Configuration. The first option contains
the following sub options, namely: Software Settings, Windows Settings, and
Administrative Templates. The second option contains the following sub options,
namely: Software Settings, Windows Settings, and Administrative Templates. The
right pane contains a section titled: Select an item to view its description.

I've got computer settings and user settings, but the point is I could configure limited
access to things like removable USB storage through the local group policy editor on
individual Windows devices. But of course if we've got an Active Directory domain
with domain joined devices, it makes more sense perhaps to do that centrally with one
configuration.

OK, so back here on the Domain Controller, we had left the Group Policy Management tool open. The interesting thing about this is that it is showing us our OUs, our organizational units: Domain Controllers, which is a default OU and of course contains Domain Controller accounts, and HQ, which I've created, and that's pretty much it. If I were to go to the start menu under Windows Administrative Tools and open, let's say, Active Directory Users and Computers, it looks like there's a lot more than just Domain Controllers and HQ.

The Active Directory Users and Computers screen displays. The left pane contains the
following options, namely: Builtin, Computers, Domain Controllers and HQ. The
right pane contains a table with the following column headers, namely: Name, Type,
and Description.

But Domain Controllers and HQ are the only OUs. The other items here, like Builtin, Computers, ForeignSecurityPrincipals, and Users, are containers, and in the Active Directory world, a container and an organizational unit, or OU, are not synonymous, because you can't apply Group Policy to a container, but you can apply Group Policy to an OU. That's why only the OUs are showing up here in the Group Policy Management tool. So let's get to it. I want to prevent certain users from doing something specific. For example, if I drill down under HQ, we've already got a GPO there called HQ_Security_Settings.

So, perhaps what I want to do is prevent users in headquarters from accessing removable media that they just plug in, such as through USB. OK, so in order to do that I need a Group Policy Object. Well, I've got one, and it's linked to HQ; it's indented under it, so that means it will only apply to users and computers under HQ. So, I'm going to right-click on that GPO and choose Edit.

The Group Policy Management Editor window opens. The menu bar contains the
following options, namely: File, Action, View, and Help. The left pane contains the
following options, namely: Computer Configuration, and User Configuration. Both
the options contains the following sub options, namely: Policies, and Preferences.
The right pane contains a section titled: Select an item to view its description.

Now the question is, what is the setting? Because remember, when you're working with Group Policy there are thousands of settings. So how do I find what I need? Well, you could filter things, but the other thing I want to point out here is that when we looked at gpedit.msc on a single computer that was not a Domain Controller, it said Local Computer Policy, whereas here we've got the name of the GPO in the upper left. You're going to find that most security settings are somewhere under Computer Configuration, so they apply to a computer regardless of who's logged into it. So, under Computer Configuration, I could expand Policies, and I'm

Three options appear, namely: Software Settings, Windows Settings, and


Administrative Templates.

interested in Administrative Templates.

I can right click on that and choose Filter Options and

A dialog box titled: Filter Options opens. It contains the following fields, namely:
Select the type of policy settings to display, Enable Keyword Filters, Enable
Requirements Filters, and Filter for word(s).

Enable Keyword Filters. Here, I've already added the word removable, so I'm
searching in group policy

He enters the value in the Filter for word(s) field.

titles and Help Text and Comments


He highlights the checkboxes present under the Filter for word(s) field.

for the word removable, because I'm interested in removable media. So, I go ahead and click OK, and the little filter icon shows up. What I can now do is just expand that and go all the way down to All Settings, and I'm only looking at things that have removable in their name.

The right pane displays a table with the following column headers, namely: Setting,
State, Comment, and Path.

So, for example, CD and DVD: Deny read access. Well, if we still use DVD and CD drives, then that might be something to consider. If we have no need for them, and we want to prevent them from being read, because maybe that would allow an installation of some malware or something along those lines, we could do that. So, I could double-click on that and enable that restriction.

The dialog box titled: CD and DVD: Deny read access opens. It contains three radio
buttons, namely: Not Configured, Disabled, and Enabled. The following fields are
also present in the dialog box, namely: Comment, Supported on. He selects the
Enabled radio button.

The same goes for write access: if we have CD and DVD burners, perhaps we want to prevent things like copying data, so we could do that. Strictly on the executable side, such as executing malware, we could also Deny execute access, so I could do that as well. OK, so that's something we would consider. The other thing we could do is deny access to All Removable Storage classes. That would take care of our USB items. Now, if you do have some that you want to allow, you can permit the use of the ones that match a particular ID or a certain class. Maybe your company has issued USB thumb drives of a certain type to employees, and you want to allow those.

The other thing you might consider when hardening removable media access is whether you want to allow access to removable drives that aren't encrypted. For example, Deny write access to removable drives not protected by BitLocker. BitLocker affords you some protection because it's encryption of data at rest. So, what you can do is say, look, we will not allow writing files to removable storage unless it's encrypted with BitLocker. I'm going to go ahead and turn that option on. Now, these are just a subset of things to consider when hardening Windows.
It could be a lot more in depth than just dealing with preventing access to reading or
writing CDs and DVDs and removable media. But this gives us a sense of how to
filter for what we're looking for and then how to start configuring it within a GPO.
Now remember, in order for that to be put into effect, you could go to an individual machine, in this case one that is in HQ, and from the Command Prompt run gpupdate /force. That's fine for testing, or for just two or three computers, but do you really want to do that for dozens or hundreds or even thousands of computers? Of course not, in which case we would just wait for Group Policy to take effect, because domain-joined devices periodically contact the nearest Domain Controller to pull down things like the Group Policy settings that should apply to that machine.

And sometimes that might happen within a few minutes, depending on where you are in the refresh cycle when the change was made. Or it might take 90 minutes, or even a few hours if you're spread across a WAN with a single Active Directory domain, so there are many considerations.
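For admins who prefer scripting, the same removable storage restriction can be set with the GroupPolicy PowerShell module on a Domain Controller. This is a hedged sketch: it assumes the HQ_Security_Settings GPO already exists, and that the All Removable Storage classes: Deny all access setting maps to the registry value shown, which is a commonly documented mapping you should verify in your environment.

# Set "All Removable Storage classes: Deny all access" in an existing GPO.
Import-Module GroupPolicy
Set-GPRegistryValue -Name 'HQ_Security_Settings' `
    -Key 'HKLM\Software\Policies\Microsoft\Windows\RemovableStorageDevices' `
    -ValueName 'Deny_All' -Type DWord -Value 1

# Then, on a test machine in HQ, force an immediate Group Policy refresh.
gpupdate /force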

Defining Authentication and Authorization


Topic title: Defining Authentication and Authorization. Your host for this session is
Dan Lachance.

Authentication is an important part of server security. With authentication, we are


talking about the proof of one's identity prior to granting access to a resource, like
managing a server and perhaps its file shares. We can authenticate users, devices or
even software to allow access to resources. Now understanding how to authenticate
users and perhaps even devices might make sense, but what is the software side of
authentication?

Well, imagine that we have custom code running in a virtual machine in the cloud, and that code needs to call upon some items in cloud storage. What we could do is create an entity, for example, a managed identity or some kind of service principal, that would allow access to cloud storage, and we would associate that identity with the virtual machine where the code is running. Thus, we're really granting access to the software running in that virtual machine.
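As a hedged illustration of that idea, not something shown in this course, here is roughly how it might look with the Azure CLI; the resource group, VM name, role, and scope values are all placeholders:

# Give the VM a system-assigned managed identity.
az vm identity assign --resource-group MyRG --name MyVM

# Grant that identity read access to a storage account (role and scope are examples).
az role assignment create --assignee <principal-id-from-previous-output> \
    --role "Storage Blob Data Reader" \
    --scope <storage-account-resource-id>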

Now, authentication, specifically successful authentication, is required before resource authorization is given. Another aspect of authentication is MFA, multi-factor authentication, which uses multiple authentication categories, such as something you know; that would be perhaps a username, a password, or a PIN. There's something you are: biometric authentication. What's very common these days, even on consumer-grade electronics, is fingerprint scanning or facial recognition.

Another category of authentication is something you have in your possession, like a smart card, a token device, or even a smartphone with an authenticator app installed on it. It could be something you do, in other words, gesture-based authentication. It could be somewhere you are geographically, where you have to be in a certain area before an app is activated and can be used. We then have to think about password policies as they relate to authentication, such as the length of the password, and certainly the complexity: the number and types of characters required in a password, such as uppercase and lowercase characters, a certain number of symbols or numbers, and any combination thereof.

The minimum password age. Now, you might not want to set that, for example, to one day, because it means that after one day people can reset their password, and if you're not tracking password history, it's possible that people could keep resetting their password back to an old known one. So, a lot of these settings work in conjunction with one another: a minimum password age before a change is allowed, and a password history to prevent password reuse. And, of course, a maximum password age, whereby you force password changes periodically. Also, configure an account lockout policy so that after repeated failed logon attempts, the account might be locked for a period of time.

It could be an hour, it could be two hours, it could be 30 minutes; it's really dependent upon how it's configured.
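Settings like these can also be scripted. Here's a hedged PowerShell sketch using the ActiveDirectory module; the domain name and every value shown are illustrative assumptions, not recommendations:

# Apply password and lockout settings to the default domain password policy.
# 'corp.local' is a placeholder domain; align the values with your own policy.
Set-ADDefaultDomainPasswordPolicy -Identity corp.local `
    -MinPasswordLength 14 -ComplexityEnabled $true `
    -MinPasswordAge '1.00:00:00' -MaxPasswordAge '90.00:00:00' `
    -PasswordHistoryCount 24 `
    -LockoutThreshold 5 -LockoutDuration '00:30:00' -LockoutObservationWindow '00:30:00'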
And then there's the notion of password or credential management. For example, in Windows, you have Credential Manager, which can store things like web passwords or other app passwords in a central location. However, you can also use third-party password manager apps like LastPass or 1Password, to name but a few. Here we've got a screenshot of creating an account to use LastPass.

The idea with the password manager is you have one central place where you can
have it generate complex passwords that are different for all of the web apps or websites that you visit. However, of course, you have to have a master password for the password manager tool itself. Then there's the notion of one-time passwords, or OTPs. For example, we've got a screenshot here from Amazon.

It shows a 6-digit verification code. Often this will work in conjunction with a forgotten password mechanism: you try to sign into an app, you click the forgot password link, and you might have to supply a phone number or an email address to which the reset code is sent. And, of course, if you enter the correct reset code, or one-time password, it might allow you to reset your password, but it's only good for one sign-in, and usually only within a very limited amount of time. 2-step
verification requires additional authentication factors. For example, when you sign
into an app or a website, you might be asked for a username and a password, and then
you might be asked for a verification code that was perhaps SMS texted to your smart
phone device. Or maybe you have to have configured an authenticator app on your
smartphone like the Google or Microsoft Authenticator app

where you would then look at the code, which normally changes every 30 seconds. Perhaps it's a 6-digit code that you would enter here, and the combination of that verification code along with a username and a password, for instance, would allow you to successfully authenticate. The idea with 2-step verification is that it makes it more difficult for malicious actors to break into the account. It doesn't make it impossible, but malicious actors normally tend to go for the lower-hanging fruit, what's easier to break into, as opposed to jumping through additional hoops for 2-step verification systems. That doesn't mean it can't ever be compromised, but it's less likely than single-factor username and password authentication.

So, we know that authorization can only occur after successful authentication, whether that's access to a database, access to certain functionality within an app, or perhaps even access to a network in the case of network access control, or NAC. Authentication and authorization, then, are an important part of securing server ecosystems.

Managing Microsoft Active Directory Users and Groups


Topic title: Managing Microsoft Active Directory Users and Groups. Your host for
this session is Dan Lachance.

In this demonstration, I'll be creating and working with Active Directory users and
groups. So, user identities are obviously a crucial part of authentication and also
authorization, and even auditing to track who did what when.

The Windows Desktop screen displays.

So, in an Active Directory environment, we have a number of ways that we can work with user accounts. For example, let's go here on our Domain Controller to Windows PowerShell. Now, that's not to say you have to be at the server,

He selects the Windows PowerShell option from the start menu. The Administrator:
Windows PowerShell window opens.
of course, you could always use remote desktop to get into it. But you can also run
PowerShell commandlets remotely from one machine and have it target another
machine over the network.

That's kind of beyond the scope of what we're doing, but what we will mention here is that in PowerShell there are plenty of commandlets for managing Active Directory. As an example, I can use get-command as a discovery method to learn what commandlet names exist. For example, let's say I'm interested in aduser; I'm guessing that aduser, as in Active Directory user, might be part of the name of a commandlet.

It's just a guess, turns out it's a good guess.

The command reads as: get-command *aduser*.

I can run Get-ADUser to retrieve Active Directory user accounts, make a new user with New-ADUser, remove a user with Remove-ADUser, or set some attribute of a user, modify them, basically, with Set-ADUser. If I were to run, let's say, Get-ADUser (PowerShell, of course, is not case sensitive), it asks for a filter. Well, I want to see every user, so I could just type in an asterisk, which is a wildcard, and all of my user accounts are returned. OK, that's fine.
However, let's work in Active Directory users and computers.

I'm going to open up the start menu and go into Windows Administrative Tools where
I'm going to run Active Directory Users and Computers.

A list of options appear below.

The Active Directory Users and Computers page opens. He highlights the options
present in the left pane.

Here, of course, we have our Active Directory domain listed, along with built-in containers like Users, Computers, and one literally named Builtin, as well as some OUs. Now, the Domain Controllers OU is built in; it's automatically there when you create an Active Directory domain. But I've created one called HQ, headquarters. What we could also do is create other OUs; for example, I've got a button up here in the bar that I could click to create a new Organizational Unit.

A dialog box titled: New Object - Organizational Unit opens. It contains a field,
titled: Name. The OK, Cancel and Help buttons display towards the bottom.

And I'm going to create one called Eastern_Region, and I'll click OK; now that shows up. So, from here I could begin to work with user accounts, computer accounts, and groups. What is common is that many organizations, at least larger enterprise ones that use Active Directory, will create subordinate OUs. So, for example, under Eastern_Region, this time maybe I'll just right-click,

Multiple options appear in a context menu, namely: Delegate Control, Move, Find,
New, and All Tasks.

I can choose to create a New Organizational Unit.

Multiple sub options appear, namely: Computer, Contact, Group, Organizational


Unit, and Printer. He selects the option: Organizational Unit.

Maybe, I'll make a general one under Eastern_Region called Users and another one
perhaps called Computers. You don't have to do this, but it's just a way to organize
Computers and Users. So, maybe in the Users OU under Eastern_Region within my
Active Directory domain, I'll create a new user. Specifically,

He selects the new user creation option from the top header. A dialog box titled: New
Object - User opens. It contains the following fields, namely: First name, Initials,
Last name, Full name, and User logon name.

let's say, a user MBishop. So, that's going to be Max Bishop. Let me just change the
nomenclature here a little bit.

The User logon name will be mbishop and again, we want to make sure we adhere to
organizational naming standards when it comes to determining the User logon name.

He selects the Next button. The following fields appear, namely: Password, Confirm
password, and three check boxes.

I'll specify and confirm a default password for that account. From a security perspective, stay away from using the same default initial password for every newly created user, because it's not very hard to find that out within the organization, and the last thing we want is accounts being hacked before a user gets a chance to sign in for the first time and change that password. User must change password at next logon is turned on; that's a good thing, I like it, I'll leave it. Next, Finish. We now have user Max Bishop. We could double-click on that user account here, and if we wanted to, we could fill out all

A dialog box titled: Max Bishop Properties opens. It contains multiple tabs, namely:
General, Address, Account, and Organization. Currently, the General tab is active.
of these other attributes: the Description, the Office, E-mail, and Web page. Under Organization, I could fill in Job Title and Department. So, for example, Job Title might be Sales Manager and Department might be Sales. Now, you don't have to
fill all these in, but it can be useful, it can actually also be used by security access
control mechanisms. An example of that would be something in Microsoft called
Dynamic Access Control, which controls access to the file system based on conditions
such as someone being in the Sales Department without having to add them to a
group. So, sometimes it can be very important that we fill in all of these details.

If I go under the Account tab, here's where we could specify things like Logon Hours, where logon is permitted versus when logon is denied. So, maybe every day of the week after 8:00 PM, what we want to do is deny logon. We could select that for all the days of the week, if for some reason that was the case in the organization, and maybe we only want to allow work to begin at six or seven in the morning, so you could specify another block of time where you wanted to prohibit logon.

So, there we go, we've got some of those options available here as well. Then we've got other options, like password never expires; that's not a good one. But you might temporarily disable an account if a user is away, perhaps on parental leave, that type of thing. So, there are a number of things that we can do here. You can also expire an account if you know, for example, you're hiring a contractor for a specific period of time, or perhaps a summer student that's only going to work for a month or two.

It might make sense to set the account expiry right away when you create it, if you know when that date will be, so you don't forget. The last thing we want is extra user accounts hanging around that are not being used. OK, I am going to click OK. The next thing I want to do is create a group, and I am going to call the group Sales.

He enters the group name in the New Object - Group dialog box. The Sales group
display on the right pane.

I'll click OK. Now, within the Sales group, I can go to the Members tab and I can
click Add and from here, I can select users

He double clicks the Sales group option. A dialog box titled: Sales Properties opens.
It contains the following tabs, namely: General, Members, Member Of, and Managed
By. The Add, Remove, OK, Cancel, and Apply buttons display towards the bottom.
The Select Users, Contacts, Computers, Service Accounts, or Groups dialog box
opens. It contains the following fields, namely: Select this object type, From this
location, and Enter the object names to select (examples). The Object Type, Location,
and Check Names button is also present on the dialog box.

that I want to be a member of this group. I'm going to search for max.

I'll click Check Names. There's Max Bishop. Excellent, so we've just added that
account. Now, I'm also going to go back one level in the OU structure to
Eastern_Region because here I'm going to create a group called Eastern_Users.

The New Object - Group dialog box opens.

The reason I'm doing this is I want to point out that groups can be members of groups
which can really work well if it's planned properly. So, I'm going to open up the
Eastern_Users group and I'm going to go to Members. I'm going to click Add, then
this is where I'm going to search for sales.

The Select Users, Contacts, Computers, Service Accounts, or Groups dialog box
opens.

I click Check Names, and the Sales group is now something that will be a member of this group. That can be useful because it allows me to manage things on a broader scale, such as through the Eastern_Users group, or I can get a little more specific and assign permissions, for example, to just the Sales group: maybe permissions to files, or databases, or web apps, whatever the case might be.
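For reference, the same steps can be scripted with the ActiveDirectory PowerShell module. This is a hedged sketch; the DC= components of the distinguished names below are placeholders for a hypothetical corp.local domain, so adjust them to your own:

# Create the OUs (distinguished names assume a hypothetical corp.local domain).
New-ADOrganizationalUnit -Name 'Eastern_Region' -Path 'DC=corp,DC=local'
New-ADOrganizationalUnit -Name 'Users' -Path 'OU=Eastern_Region,DC=corp,DC=local'

# Create the user, prompting for the initial password, and force a change at next logon.
New-ADUser -Name 'Max Bishop' -SamAccountName 'mbishop' `
    -Path 'OU=Users,OU=Eastern_Region,DC=corp,DC=local' `
    -AccountPassword (Read-Host -AsSecureString 'Initial password') `
    -ChangePasswordAtLogon $true -Enabled $true

# Create the groups and nest Sales inside Eastern_Users, as in the demo.
New-ADGroup -Name 'Sales' -GroupScope Global -Path 'OU=Eastern_Region,DC=corp,DC=local'
New-ADGroup -Name 'Eastern_Users' -GroupScope Global -Path 'OU=Eastern_Region,DC=corp,DC=local'
Add-ADGroupMember -Identity 'Sales' -Members 'mbishop'
Add-ADGroupMember -Identity 'Eastern_Users' -Members 'Sales'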

Managing Linux Users and Groups


Topic title: Managing Linux Users and Groups. Your host for this session is Dan
Lachance.

For the longest time Unix and Linux systems have had a way of dealing with local
users and groups, by storing those definitions in text files in a protected part of the file
system. Now, of course, depending on what your specific implementation is, you
might use a directory service like an LDAP compliant directory service for
authentication to Linux or Unix. But in this case, we're going to stick with the old
school local files. So, to get started let's explore that a bit here in Ubuntu Linux.

The terminal window displays.


So, what I want to do here is change directory to /etc. And what I want to do is run cat
against a file in here called password,

He enters the command: cd /etc.

but it's spelled passwd.

Now, in the passwd file, what we have are user accounts listed. So, for example, let's add our own new user account and see how it gets added to the /etc/passwd file. I can use the useradd command with -m, because I want to add a home directory for the user; let's add user mbishop. Now, at this point it says, well, can't do that, access denied. That's because if I run whoami, I'm not logged in as root; I'm logged in as a user by the name of cblackwell. So, I'll use the Up arrow key to go back to my useradd command, and I will prefix it with sudo to run it with elevated privileges. It asks me for the password for cblackwell, which is normal when you run sudo, so I'll go ahead and specify that and press Enter.

And at this point, no news is good news; it should have added user mbishop. One way we can find out is by using the tail command to read the last 10 lines of a file such as passwd. Sure enough, when we do this, the last entry in that file is for

The command reads: tail passwd.

a user by the name of mbishop. This is a colon-delimited file; colons separate the fields. The x in the second position reflects the fact that a hash of the password is stored elsewhere; we'll examine that in a moment. Then we have the user ID and group ID numbers, and some other details such as the home directory, in this case /home/mbishop, and then the shell to launch by default when that user signs in, /bin/sh. OK.

So indeed, if I do an ls of /home, we do have a home directory for mbishop. But we did say that the x in the second position in the passwd file indicates that there is a password hash for that user in a separate file, and that separate file is called shadow. I'm in the /etc directory already; if I cat the shadow file, or attempt to, I get permission denied. So, I need to prefix that with sudo; it's a sensitive file.

The command reads: sudo cat shadow.

When I do that, I get a list back of the details in this file.


I'm primarily interested in mbishop, who is at the very bottom, and in the second position what we have is an exclamation mark. That's because we haven't yet set the password for that account. And, of course, we want to set a password in alignment with organizational security policies. So, what we can do is run sudo passwd (the password command is abbreviated passwd) and give it the username, in this case mbishop, because I want to set a password for that account.

The New password message prompts.

So, I'm going to go ahead and specify a password, and as you might imagine it wants
me to confirm it or retype it to make sure I know what it is. OK, so I'm going to go
ahead and do that and then it says password updated successfully. Let's go back and
check the shadow file a second time. I'm going to clear the screen, use the Up arrow
key so that we can take a look at the shadow file. Now, it looks very different for the
mbishop line, which actually wraps now to the second line after that. But instead of
just an exclamation mark, we now actually have a password hash.

Now, a password hash means that the password was fed through a one-way cryptographic algorithm that resulted in this unique value. When user mbishop signs in, he will presumably type in the correct password, which will be fed once again through the same hashing algorithm and should result in this exact same hash. If it does result in the same hash, that means Max Bishop knows the password, so he will be authenticated and can then presumably access the resources to which he has been granted permissions.
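If you're curious what such a hash looks like, here's a quick, hedged illustration that isn't part of the course demo; it assumes the openssl tool is installed, and the salt is random, so your output will differ:

# Generate a SHA-512 crypt-style hash like the ones stored in /etc/shadow.
openssl passwd -6 'SomePassw0rd!'
# Output resembles: $6$<salt>$<hash> -- the leading $6$ marks the SHA-512 scheme.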

So, that's just a little bit about the user side of things. Now, to test that account if we really wanted to, we could stay logged in as who we currently are, but use the su, or switch user, command with a dash, which means I want to run a full login for a specific user, including running login scripts and so on. And that account, in this case, is going to be mbishop.

So it says, OK, well, what's the password in this case

The Password message prompts.

for mbishop, since that's who you're trying to sign in as. OK, well, I'll go ahead and specify the password, and I'm now in as user mbishop. I can prove this by typing whoami: I'm logged in as mbishop. If I were to type pwd, print working directory, I'd see that when mbishop signs in, he is automatically placed in his home directory, to which he has access. If I type exit, I'm exiting out of that switch user session, so I'm back to being cblackwell, and if I type whoami, of course, that's evidenced by the output of the command.
Now, we can also create groups in Linux. So, for example, let's say I want to add a group with groupadd, and let's call it sales. Now again, I get permission denied, because in order to add a group you're going to have to run the command either logged in as root or with elevated privileges, meaning prefixed with sudo. So, at this point, we've added a group called sales. Let's take a look at where that was written. I'm going to run the tail command as opposed to cat; it doesn't really matter, tail just shows only the last 10 lines.

I'm going to go into /etc, which I suppose we're already in, and the name of the file I'm interested in is called group.

The command reads: tail /etc/group.

So, we have our sales group shown here. The x in the second position means that we don't have a password set; you can actually have group passwords, but they're not used very often. The group ID that was assigned by Linux is 1002. What I'd like to do is make sure that mbishop is a member of the sales group.

So, if I were to tail /etc/passwd and look at mbishop, the default group for that account is the group with ID 1001; I want it to be 1002. Now, I could just edit that text file if I really wanted to, or I could run sudo usermod with the user I want to modify, mbishop, and -g to set the primary group to 1002. So, now I'll tail /etc/passwd again.

The command reads: tail /etc/passwd.

Well, it now looks like 1002 is the primary group for mbishop, and again, if I tail /etc/group, 1002 is the ID of the sales group. So, let's do an su once again, su - for mbishop, to sign in as that account.

And we'll specify the password for that account, because what I want to do is just create a file and take a look at what happens. We can use the touch command in Linux to create a file, a 0-byte empty file, and, of course, mbishop can do that in his own home directory.

The command reads: touch file1.txt.

If I clear the screen and do an ls -l, notice that mbishop is the owning user of the file that was just created, so he gets the user permissions, which here are read and write. And notice that sales was set as the group automatically, because of our change to the primary group association for that account. So, the sales group gets the second set of permissions, which in this case is just read. That's just a little bit of how users and groups work in a Linux environment.
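To recap, here are the commands from this demo gathered in one place, plus one hedged alternative (the -aG form) that wasn't shown in the demo but is worth knowing:

# Create the user with a home directory, then set an initial password.
sudo useradd -m mbishop
sudo passwd mbishop

# Create the group and make it mbishop's primary group, as in the demo.
sudo groupadd sales
sudo usermod -g sales mbishop

# Alternative (not shown above): keep the original primary group and append
# sales as a supplementary group instead.
# sudo usermod -aG sales mbishop

# Verify the uid, gid, and group memberships.
id mbishop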
Managing AWS Users and Groups
Topic title: Managing AWS Users and Groups. Your host for this session is Dan
Lachance.

In this demonstration, I'm going to be managing AWS users and groups. So, you can
configure user accounts, add them to groups, and control permissions to cloud
resources by creating those users and groups directly in the cloud, which is what I'll be
doing here in AWS.

The AWS Management Console window opens. It contains the following options,
namely: AWS services, Build a solution, Stay connected to your AWS resources on-
the-go, and Explore AWS. It contains a search bar at the top.

There are also ways that you could link a directory service similar to Active Directory in the cloud to your on-premises directory service, so that you could reuse your existing user accounts. But in this case, we're going to be building them from scratch directly here in the AWS Management Console. So, to get started, I'm going to search in the field at the top for IAM, identity and access management,
A list of Services opens.

and then I'll click on that search result to open the IAM Management Console, where
over on the left, I can click Users.

The Identity and Access Management (IAM) console window displays. The left pane
contains the following options, namely: Dashboard, User groups, Users, Roles, and
Policies. The right pane contains a section titled: Security recommendations.

The right pane displays a section titled: Users. This section contains a search bar and
a table. The Delete and Add users buttons are present above the table.

Now, certainly I am signed in with a user account, shown here in the upper right; this is called the AWS root account. But I can create additional IAM users, and I might do this because I want to allow access to certain cloud-based apps, or perhaps I would create additional AWS user accounts for other cloud technicians and give them limited access to manage AWS resources.

So, I'm going to click the Add users button here over on the right

Two sections display, namely: Set user details, and Select AWS access type. The first
section contains a field, titled: User name. The second section contains a field, titled:
Select AWS credential type.

and I'm going to fill in the name. I'm going to start by creating user Codey Blackwell. Now, when I spell out the full name, notice it complains: we're not looking for the person's name, we're looking for the user account name. So, I have a message about an invalid name; it has to be alphanumeric characters, underscores, and so on. OK, so I'm going to follow my organizational naming standard and just put in cblackwell. Down below, in AWS, I have to determine whether this is an account for programmatic access, like for software developers, where I can specify that I want an access key ID and a secret access key that developers would need in order to use the AWS API (the application programming interface), the AWS CLI command-line tool set, or the software development kit, the SDK, or whether this is just for AWS Management Console access, which is where I am right now.

So, in this particular case, I'm going to go ahead and create a Password based AWS
credential type,

The Console password field appears. It contains two radio buttons, namely:
Autogenerated password, and Custom password. A text box display below the Custom
password radio button.

because perhaps I'm doing this for user cblackwell, who is an additional cloud technician, so they need access to the console. Down below, I can have an Autogenerated password, or I can specify a Custom password, which I will do here. I would need, of course, to communicate this to the user, and ideally we're not using the same password to initialize each new IAM user that we create, because that would be a security risk. Do we require a password reset upon next sign-in? Sure, let's leave that turned on. Then I'll click Next for permissions.

The Set permissions section displays. It contains three tabs, namely: Add user to
group, Copy permissions from existing user, and Attach existing policies directly.

So, at this point, for this new user account, we can Add the user to a group, and if the group has permissions, then the user, by extension, will get those permissions. There's even a convenient button here to Create a group if I don't have any. I can also instead Copy permissions from an existing user that might already have the same permissions this user will need, or I can Attach policies directly to this user account, where policies provide permissions. I'm not going to do any of this right now; essentially, I'm going to choose attach policies directly, but I'm not going to attach any, and I'll click Next: Tags.
The Add tags (optional) section displays. It contains a table with the following column
headers, namely: Key, Value (optional), and Remove. Text boxes display below the
column headers.

I can add a Key and Value pair, or numerous Key and Value pairs, up to 50 of them,
to tag this user. I'm not going to do that; I'm going to click Next: Review

The Review section displays. It contains multiple details.

and I'm going to Create the user account. OK, it says, Success, you have successfully
created the user, and I could even Send an email with login instructions to

The Download.csv button displays. Below the button, a table displays with the
following column headers, namely: User, and Email login instructions. It contains
one row.

user cblackwell. I could expand that account to see some of the details.

So, it created the user. It automatically attached a policy even though I didn't
specifically select one, called the IAMUserChangePassword policy, to allow that
password reset at next sign-in, and it created the login profile, or the account, for that
user. So, I'm going to click Close, and if I go back to my list of Users, cblackwell now
shows up. If I click to open up cblackwell, then I get into the properties of that user account.
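
As an aside, everything we just did in the console can also be scripted. Here's a minimal sketch using the AWS SDK for Python, boto3, assuming credentials with IAM permissions are already configured; the user name and temporary password are just this demo's example values.

```python
# Minimal sketch: create an IAM user with console access using boto3.
# Assumes AWS credentials with IAM permissions are already configured;
# the user name and password below are just this demo's example values.
import boto3

iam = boto3.client("iam")

# Create the user account itself.
iam.create_user(UserName="cblackwell")

# Give the user a console password and force a reset at next sign-in,
# mirroring the "Require password reset" option in the console.
iam.create_login_profile(
    UserName="cblackwell",
    Password="Temp-Passw0rd!",  # communicate this to the user out of band
    PasswordResetRequired=True,
)
```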

The Summary page opens. It displays the following tabs, namely: Permissions,
Groups, Tags, Security credentials, and Access Advisor. Currently, the Permissions
tab is active. It contains a button labelled as: Add permissions.

So, this is where I can click Add permissions if I want to again add a member to a
group

He highlights the tab present in the Grant permissions section, namely: Add user to
group, Copy permissions from existing user, and Attach existing policies directly.

or maybe copy permissions from another user to this user, or attach policies directly,
whatever the case is. But normally, it's done at the group level.

He moves back to the Summary page.

Also, while I'm in here, I can modify the Group membership, modify Tags, and, if
the password is forgotten, for example, under Security credentials, I could go ahead
and Manage that password. I could assign a multi-factor authentication or MFA
device, and I could create access keys here for programmatic access, which, as we
saw, was an option when we initially created the user. But what I want to do here is
create a user group, so over on the left, I'm going to click User groups and I'm going
to choose Create group

A table and a button labelled as: Create group displays.

The Create user group page opens. It contains the following sections, namely: Name
the group, and Add users to the group - Optional.

and I am going to call this group East_Admins.

Now what I can do down below is Add users to the group right away. So, I want to
add user cblackwell to the East_Admins group and I can attach permissions policies.

He scrolls down in the page. The Attach permissions policies - Optional section
displays. It contains a table with the following column headers, namely: Policy name,
Type, and Description. The table contains multiple rows. A search bar displays above
the table.

So, each of these policies is a collection of related permissions, such as for access to
things like Glacier, Amazon's archiving solution; for example, ReadOnlyAccess to
Glacier archives. What I'm going to do is search, let's say, for s3, as in S3 buckets,
because what I want to do is allow S3 read-only access for this group, and by
extension this user account; that is, to be able to read files that are stored in an S3
storage bucket. Then I'll choose Create group. So, now the group is created, and, of
course, I could click on the group to open up its properties and modify or manage the
user membership of this group.
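
For completeness, here's a minimal boto3 sketch of that same group workflow, creating the group, attaching the AWS-managed AmazonS3ReadOnlyAccess policy, and adding the user; the names are this demo's examples.

```python
# Minimal sketch: create the group, attach the AWS-managed read-only S3
# policy, and add the user to it -- the same steps as the console demo.
import boto3

iam = boto3.client("iam")

iam.create_group(GroupName="East_Admins")

# AWS-managed policies live under the aws: ARN namespace.
iam.attach_group_policy(
    GroupName="East_Admins",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)

# Group membership gives cblackwell the group's permissions by extension.
iam.add_user_to_group(GroupName="East_Admins", UserName="cblackwell")
```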

The East_Admins page opens.

And as you might guess, if I go back to my Users view and go into cblackwell.

The Summary page opens.

If I go into Groups now, well, of course, we see that this user is a member of the
East_Admins group and under Permissions we're also going to notice that we have a
group attachment for permissions policies.
If I click the Show 1 more, it's the AmazonS3ReadOnlyAccess policy that stems from
the fact that this user is a member of the East_Admins group. OK, well, that's fine. If I
go into the Security credentials tab for this user at the top, there is a Console sign-in
link. I'm going to go ahead and copy that; we're going to test signing in as user
cblackwell.

The Amazon Web Services Sign-in page displays. It contains the following fields,
namely: Account ID (12 digits) or account alias, IAM user name, and Password.

So, when I follow that link, it automatically fills in the Amazon Account ID, it wants
the IAM user name. So, I'm going to go ahead and fill that in and I'll also specify the
password that we specified when we created that account.

The screen displays the following fields, namely: AWS account, IAM user name, Old
password, New password, Retype new password.

And it knows that we have to change that password, so I'll go ahead and enter the old
one. Then I'll enter and confirm a new one, and I'll choose Confirm password
change. And now I'm signed in, as evidenced in the upper right, as user cblackwell.

The AWS Management Console window displays.

So, user cblackwell might try to go into EC2 instances and perhaps try to Launch an
instance. But even before doing that,

The New EC2 Experience console window opens. The left pane contains multiple
options. The right pane displays a table and the following buttons, labelled as:
Instance state, Actions, and Launch instances.

The page with section: Step 1: Choose an Amazon Machine Image (AMI) opens.

notice, it says You are not authorized to perform this operation, not even to view the
Instances. But if I go into S3, that would be S3 storage buckets, and if I had any, then
this account would have read-only access.

He searches S3 in the search bar.

The Amazon S3 console window opens.

So, it looks like we do actually have one bucket, so if we were to open that up, well, it
looks like it's letting us in and we have the ability to start browsing and seeing the
content. So, that's how you can start to work with AWS users and groups.

Managing Microsoft Azure Users and Groups


Topic title: Managing Microsoft Azure Users and Groups. Your host for this session
is Dan Lachance.

In this demonstration, I'm going to be using Microsoft Azure to create Azure users
and groups. Now, just like with other public cloud solutions, like

The left pane displays the following options, namely: Create a resource, Home,
Dashboard, All services, All resources, Resource groups, and SQL databases. The
right pane contains three sections, namely: Azure services, Recent resources, and
Navigate.

Amazon Web Services or AWS, you could link a cloud-based directory service to
your on-premises directory service, like Microsoft Active Directory, and reuse
existing on-premises user accounts to allow users, for example, to sign in with
credentials they already know to access cloud apps. But you can also create user
credentials and groups directly and solely in the cloud, and that's going to be our
focus in this particular demonstration. First things first, in Microsoft Azure we have
the concept of Azure AD. An Azure AD or Azure Active Directory tenant is created
by default when you sign up for Azure, but you can create additional Azure AD
tenants. Kind of like creating additional on-premises Active Directory domains to
keep things separated for administrative purposes or maybe based on cities, states,
countries, whatever the case is.

In the upper right here, because I've signed into the Azure portal, I can click and
choose Switch directory

He selects the profile icon. The My Microsoft account, and Switch directory links
display.

The Portal settings | Directories + subscriptions page displays. It contains a search
bar and some text. Two tabs display below the text, namely: Favorites, and All
Directories.

and I can choose All Directories to get a list of all of the Azure AD tenants, which are
similar to different Active Directory domains, although Azure AD doesn't support all
the features Active Directory does, such as Group Policy; at any rate, it is a user
credential store. So, I could switch to any of these specific Azure AD tenants; for
example, I'm going to Switch to Twidale Investments, which is a separate Azure AD tenant.

The Welcome to Azure! page opens. It displays three cards, labelled as: Start with an
Azure free trial, Manage Azure Active Directory, and Access student benefits.

And if I were to open the menu over on the left, I could go to Azure Active Directory
where I could click on Users.

The Twidale Investments | Overview page opens. He selects the Users option in the
left pane.

The Users | All users (Preview) page opens. The left pane displays multiple options,
and the right pane displays a table and the following buttons, labelled as: New user,
New guest user, Refresh, and Bulk operations.

Any existing user accounts will be shown here. I want to create a new user. Now,
notice I have a New user button at the top as well as a New guest user button.
Actually, that option is also available if I just click New user; I can choose whether
it's a regular user in Azure AD or

The New user page opens. It displays the following cards, namely: Create user, and
Invite user. Below the cards, a section titled: Identity displays. This section displays
the following fields, namely: User name, and Name.

I can invite a user through email, which is a guest user. Down below, I'm going to
specify a User name of mbishop. And what happens in Azure AD is it uses the name of
my Azure AD tenant as the DNS suffix, which in this case is twidaleinvestments, and
then it tacks on .onmicrosoft.com at the end of the DNS suffix.

And I could change that if I really wanted to, but I'm going to go with what I have.
The Name, then spelled out, is going to be, in this example, Max Bishop. I can fill out
as many details as I choose here, although down below I have to deal with the
password. Will it be Auto-generated? Should we specify it? I'm going to let it be
Auto-generated, and I'm going to copy it, because then I can communicate that
password to the user. Down below, we can also add the user to a Group during user
account creation or at any time thereafter. I can also specify a Usage location; for
example, I'm going to specify the United States, because there are some licenses that
you might use for tools like Microsoft 365 and so on that might require you to have a
usage location set in the user account before you can assign licenses.
Then I can fill in other attributes, like Job title, Department, Company and so on. I'm
OK with my selections here, so I'm going to Create user Max Bishop.
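
If you'd rather script this, the same user can be created against the Microsoft Graph API. Here's a minimal sketch using Python's requests library; acquiring the access token (for example, with the msal library) is omitted, and the token and password values are placeholders.

```python
# Minimal sketch: create the same Azure AD user via the Microsoft Graph API.
# Assumes you already hold a Graph access token with User.ReadWrite.All;
# token acquisition (e.g., with the msal library) is omitted here.
import requests

token = "<graph-access-token>"  # placeholder, not a real token

user = {
    "accountEnabled": True,
    "displayName": "Max Bishop",
    "mailNickname": "mbishop",
    "userPrincipalName": "mbishop@twidaleinvestments.onmicrosoft.com",
    "passwordProfile": {
        "password": "Temp-Passw0rd!",           # placeholder; share out of band
        "forceChangePasswordNextSignIn": True,  # same as the portal default
    },
    "usageLocation": "US",  # required before assigning some licenses
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/users",
    headers={"Authorization": f"Bearer {token}"},
    json=user,
)
resp.raise_for_status()
```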

The Users | All users (Preview) page displays.

And so, Max Bishop here, now exists in Azure AD and if I click on the link for that
account,

The Max Bishop | Profile page displays.

the full sign in name for that account shows up here. mbishop@, in this case,
twidaleinvestments.onmicrosoft.com. So that's the sign in name. Of course, there is a
Reset password button up at the top, if for some reason the password is forgotten for
that user. But what we can also do when we're looking at a User's Profile, which is
what we're doing here, we can click on Groups on the left to see if that user is a
member of any Groups. So, let's go back and create a group in Azure AD. So, I'm
going to follow my breadcrumb trail, so to speak, in the upper left and click on the
Twidale Investments link to go back to Azure AD, where I'm going to click Groups
on the left and this is where the group definitions in Azure AD exist.

The Groups | All groups page displays.

Now I don't have any groups, but I will in a moment. I'm going to click New group.

The New Group page opens. It contains multiple fields, namely: Group type, Group
name, Group description, Membership type, and Owners.

And it's going to be a Security group, not a Microsoft 365 group. The name of this
group is going to be Western_Region_Users. Now, down below, for the Membership
type, I can have Assigned members; that's what we're probably all used to, where we
manually add members to the group. Then notice I have Dynamic User and Dynamic
Device membership, so if I were to choose Dynamic User, then I could Add a dynamic query,

He selects the Add dynamic query link present below the Owners field. The Dynamic
membership rules page opens. It displays the following tabs, namely: Configure rules,
and Validate Rules (Preview). Currently, the Configure Rules tab is active. It displays
a table with the following column headers, namely: And/Or, Property, Operator, and
Value.

where I could Choose a Property, so, for example, maybe department, and if the
department Equals a certain value, you know, such as Sales, then perhaps that's
how we determine members of the group. So, we could do that, but I won't; I'm going
to click Discard and Yes.

The Discard button is present above the table. He selects Discard. A Discard changes
message appears.

But it's very important, though, to understand that we can have dynamic groups, as sketched below.
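
As a rough sketch of that idea, a dynamic security group can also be created through the Microsoft Graph API with the rule attached; the group name here is hypothetical, and the rule mirrors the department Equals Sales example above.

```python
# Minimal sketch: a dynamic Azure AD security group via the Graph API.
# The membershipRule syntax is Azure AD's own; this rule matches users
# whose department attribute equals "Sales". Token acquisition is omitted.
import requests

token = "<graph-access-token>"  # placeholder

group = {
    "displayName": "Sales_Dynamic",  # hypothetical group name
    "mailEnabled": False,
    "mailNickname": "salesdynamic",
    "securityEnabled": True,
    "groupTypes": ["DynamicMembership"],
    "membershipRule": 'user.department -eq "Sales"',
    "membershipRuleProcessingState": "On",  # evaluate the rule right away
}

requests.post(
    "https://graph.microsoft.com/v1.0/groups",
    headers={"Authorization": f"Bearer {token}"},
    json=group,
).raise_for_status()
```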

In this case, I'm just going to go back to Assigned,

He moves back to the New Group page.

I can select a group owner that can manage members of the group, so I'll click the No
owners selected link, it's going to be Codey Blackwell.

The Add owners page appears on the right. It contains a search bar and a list.

That's the owner of the group. Then, under Members, I'll click the No members
selected link, because I want to add a member here; it's going to be user Max Bishop,
so I'll Select Max Bishop and click Create. Now, the way that we can use this group is really
multifaceted. If I go back to my Azure AD tenant in the breadcrumb trail in the upper
left, one of the things I could do is go to Enterprise applications.

The Enterprise applications | All applications page opens. The left pane contains
multiple options. The right pane contains few buttons and a table. The table contains
multiple rows.

These are applications that are accessible by authenticating through Azure AD; they
also show up on the myapps.microsoft.com page when a user signs in. So, for example,
with Skype for Business Online, one of the things I could do is click on that app to
open up its Properties here in Azure AD. And I could click Users and groups on the left

The Skype for Business Online | Overview page opens. He selects the Add user/group
button on the right pane.

and I could Add a user or group that should have permissions to use this app. It will
show up for them on the myapps.microsoft.com page.

The Add Assignment page displays. It contains the following fields, namely: Users
and groups, and Select a role. A window titled: Users and groups displays towards the
right. It contains a search bar and a list.
So, instead of specifying an individual user, I could specify a group like our
Western_Region_Users group. And there's a note that says When you assign a group
to an app, only users directly in that group will have access, not nested groups,
meaning groups within groups. That's fine, I'll click Assign and the Application
assignment succeeded. So that gives us a sense then of how we would begin to use an
Azure AD user account and group. The last thing I'll do is sign in as that user. So, in
a new browser window, I'll go to myapps.microsoft.com, which will redirect the
URL, and this is where I'm going to sign in as user Max Bishop. So, I'll put in the
entire email address and then I'll click Next, where it will prompt me for the password
for that account. I'll specify that password and Sign in, at which point it will ask me to
specify the Current password and set a new one. After which, when I've specified the
New password, I'll click Sign in. And so once we've signed in here as Max Bishop on
the myapps.microsoft.com page, we now have the Skype for Business Online app
available.

Enabling AWS User Multi-factor Authentication (MFA)


Topic title: Enabling AWS User Multi-factor Authentication (MFA). Your host for this
session is Dan Lachance.

In this demonstration, I'm going to be enabling multi-factor authentication or MFA for
an AWS user account in the cloud. So, I've already signed into the AWS Management
Console. Now, your organization might require multi-factor authentication for some or
all user accounts. It enhances user sign-in security because, instead of just something
you know, like a username and a password, there would be an additional
authentication factor, maybe having to possess some kind of hardware token device
with a changing code that you also have to enter in to authenticate. So that would be
something you have along with something you know, hence multi-factor or two-factor
authentication. Sometimes organizations will perhaps only require that on
powerful administrative accounts, but really, it's a good practice for all users. Yes, it
might be a bit inconvenient, but that's always the case with higher security levels.
Anyways, let's go ahead and enable MFA for an AWS user.

So, here from the console, I'm going to search for IAM, Identity and Access
Management, and click on that; that's where we manage users and groups in Amazon
Web Services. Now, I can Add MFA for the root user account,

The Identity And Access Management (IAM) console window displays. The left pane
contains the following options, namely: Dashboard, User groups, Users, Roles, and
Policies. The right pane contains a section titled: Security recommendations. The
Security recommendations section contains a field, titled: Add MFA for root user. It
contains a button, labelled as: Add MFA.

that's the account I'm using to sign in here for admin purposes. That's
what you get when you sign up for Amazon Web Services. So, I could click the Add
MFA button and continue on to Activate it.

The Your Security Credentials page opens. The right pane displays the following
sections, namely: Password, Multi-factor authentication (MFA), CloudFront key
pairs, and Account identifiers. The Multi-factor authentication (MFA) section
contains a button titled: Activate MFA.

We can also do it for individual IAM users beyond the AWS initial root account, so I
could click Users over on the left, here I've got a user by the name of cblackwell

The right pane displays a section titled: Users. This section contains a search bar and
a table. The Delete and Add users buttons are present above the table.

and what I'd like to do is enable MFA for it. But the first thing we should do is see
whether or not it's already been enabled. That's pretty easy to figure out: if you click
on the user to open up their profile, one of the things you can do is go to the Security
credentials tab

The Summary page opens. It displays the following tabs, namely: Permissions,
Groups, Tags, Security credentials, and Access Advisor. Currently, the Permissions
tab is active. It contains a button labelled as: Add permissions.

and there are plenty of interesting things here. This is where we have the Console
sign-in link, if that user needs to sign-in to the AWS Management Console, because
perhaps they're an assistant cloud technician.

The Console password is Enabled, but if it's forgotten we could Manage it. And what
we're here to do is Assign an MFA device, currently, it states Not assigned. So, I'm
going to click the Manage link next to Assigned MFA device.

The Manage MFA device dialog box opens. It contains a section titled: Choose the
type of MFA device to assign. It contains three radio buttons, namely: Virtual MFA
device, U2F security key, and Other hardware MFA device.

Now I can use a Virtual MFA device. What this means is I can install an
Authenticator app, the Google Authenticator app, Microsoft Authenticator app, that
type of thing. I can install that on my computer, on my smartphone, or tablet, and so I
would have to have that device with me to sign-in with MFA. Or maybe I've got some
kind of a hardware device that can be used, like a U2F security key, like a YubiKey or
some other type of hardware MFA device, some kind of hardware token. So, in this
particular case, I'm going to go with Virtual MFA device. I've already got the
Microsoft Authenticator app installed on my smartphone, so I'm going to go with that.
I'm going to click Continue and I'm going to choose Show QR code.

The Set up virtual MFA device section opens.

So, in the Authenticator app on my smartphone, I need to add a new account; that's
going to activate the camera on my phone, which will allow me to scan in this QR
code, which I'm going to do now. This QR code is unique to my user account. When I
say my user account, I mean the user that we are setting up here for MFA.

Now, the moment I scan that QR code, it adds my account, in this case for
cblackwell, to the Authenticator app on my smartphone. So that's good. But down
below, what I have to do is enter two of these six-digit codes, which time out after 30
seconds on my Authenticator app.

The two fields are labelled as: MFA code 1, and MFA code 2.

So, I'll enter the first one, and then once it switches over, within 30 seconds, it's just
about there now, I'll enter the second unique six-digit code. I have to have this
code in addition to knowing the username and password for sign-in. OK, I've entered
two cycles of the code, so I'll click Assign MFA. We're good to go, it worked, so I'm
going to Close out. So, let's see what happens then, when we sign in as user
cblackwell. If I scroll up here, this is my user account cblackwell. So, I'm going to
copy the sign-in console link and we're going to test it in another web browser.

The Amazon Web Services Sign-in page displays. It contains the following fields,
namely: Account ID (12 digits) or account alias, IAM user name, and Password.

OK, so I'm going to specify cblackwell as the user name. I still need to know the
password, this does not negate the need for that, so I'm going to enter that.

The Multi-factor Authentication section appears. It contains a field, titled: MFA Code.

Now, it wants my MFA Code, and this is where, in my example, I need to have my
smartphone near me with the Authenticator app, with the account added and the code.
So, I'm going to go ahead and enter that before it times out, and then I'll click Submit,
because remember, those codes change every 30 seconds. And I'm now in, using
multi-factor authentication, as user cblackwell.
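
For reference, this virtual MFA enrollment can also be scripted with boto3. In the sketch below, the pyotp library stands in for the authenticator app to generate the two consecutive 30-second codes; in real life those codes come from the user's phone.

```python
# Minimal sketch: assign and enable a virtual MFA device with boto3.
# pyotp (pip install pyotp) plays the authenticator app's role here.
import time

import boto3
import pyotp

iam = boto3.client("iam")

# Create the virtual MFA device; AWS returns the TOTP seed (and a QR PNG).
device = iam.create_virtual_mfa_device(
    VirtualMFADeviceName="cblackwell"
)["VirtualMFADevice"]

# Enabling the device requires two consecutive 30-second codes, just like
# typing MFA code 1 and MFA code 2 in the console.
totp = pyotp.TOTP(device["Base32StringSeed"].decode())
now = int(time.time())
iam.enable_mfa_device(
    UserName="cblackwell",
    SerialNumber=device["SerialNumber"],
    AuthenticationCode1=totp.at(now),
    AuthenticationCode2=totp.at(now + 30),
)
```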

Enabling Microsoft Azure User MFA


Topic title: Enabling Microsoft Azure User MFA. Your host for this session is Dan
Lachance.

You can enable multi-factor authentication or MFA for user accounts in Microsoft
Azure.

The Welcome to Azure! page displays.

The reason you would do this is because it's considered tighter security than just
username and password, and that's because username and password are simply
something you know; they're two items that fall into one category. Multi-factor
authentication means that we have multiple factors, such as something you know,
perhaps a username and a password, and also something you have, perhaps a
device with an Authenticator app on it, like a smartphone that changes a numeric code
every 30 seconds, where you would have to enter that code in addition to a
username and a password. That's one type of multi-factor authentication.

So, to get started here in Azure, I'm going to first make sure I've switched to the
correct Azure AD tenant, because in Microsoft Azure you can have multiple AD
tenants, kind of like having multiple Active Directory domains. You can see which
one you're connected to in the upper right, I'm connected to one called TWIDALE
INVESTMENTS. But I could also choose Switch directory

The Portal settings | Directories + subscriptions page displays.

and click the All Directories link and then from there I could switch to any one of my
Azure AD tenants, if I have more than one, I don't have to have more than one. So,
I'm already in the right one, so I'm going to click to open my left-hand navigator
towards the upper left and I'm going to navigate to Azure Active Directory and Users.
I've got a few users here, and my goal is to enable multi-factor authentication for
user Max Bishop. If I click to open up user Max Bishop, we've got the full sign-in
email address.

We can also Reset the password if it's forgotten and work with it that way. But what
I'd like to do is go back to my list of Users and I'd like to click the Per-user MFA
button up at the top. When I click that button, I can enable MFA for individual users.
That's going to open up a new web browser window, where I can

This window contains a multi-factor authentication section. This section displays a
table. The table has the following column headers, namely: DISPLAY NAME, USER
NAME, and MULTI-FACTOR AUTH STATUS. The table has details for three users.

put a check mark next to the users where I want to enable multi-factor authentication.
For user Max Bishop, currently, the MULTI-FACTOR AUTHENTICATION
STATUS is showing as Disabled, so having that selected over on the right, I'll click
Enable,

A dialog box titled: About enabling multi-factor auth opens. It contains some text and
two buttons, namely: enable multi-factor auth, and cancel.

then I'll click the enable multi-factor auth button. And it says Updates were
successful, so I'll close out of that. So now the MULTI-FACTOR AUTHENTICATION
STATUS for Max Bishop is set to Enabled. What we're going to do now is sign in as
user Max Bishop, so I'm going to go to myapps.microsoft.com in a new web browser;
that's the URL.

And here's where I'm going to sign in as user Max Bishop, and I'll specify the
Password for that account. I'm taken to a message about helping protect my account, so
I'll go ahead and click Next. And now it talks about using the Microsoft Authenticator
app installed on my smartphone, so that's fine; I've already got that installed on my
smartphone. It says after you've installed that on your device, choose Next; OK, I'll
choose Next. And now to set up the account, I'm going to click Next. And what it
wants me to do, from the Microsoft Authenticator app on my smartphone, is add a
new account, which will activate the camera on my smartphone so that I can
scan this QR code, which is unique to my sign-in account for user Max Bishop.

So, I'm going to go ahead and get my smartphone ready and scan in that code from the
Microsoft Authenticator app. Now, once I've done that on my smartphone, it will
automatically add my Azure account and there will be a six-digit numeric code that
changes every 30 seconds, which I will use to sign in. So, when I click Next. It sends
an approval to my smartphone and when that pops up on my smartphone, I'll just tap
on approve, that should be reflected here and it is Notification approved. I'll click
Next and then I'll click Done. When it asks me if I want to Stay signed in, I'll choose
Yes. Now what I want to do is sign out and sign back in as the same user to test multi-
factor authentication.
So, in the upper right, where I have MB for Max Bishop, I'm going to choose Sign out,
and I'll select that account to sign out from. And I'm going to sign right back in, so I'll
select the Max Bishop username. I'll pop in the password, so I still have to know the
password. Then, on my smartphone, I'm going to have an Approve sign-in request, so I
can just go ahead and tap on the approve link for that. And once I've done that, I'm
asked if I want to Stay signed in; I'll choose Yes, and I'm in. So, I don't necessarily
have to enter the unique six-digit code that gets generated. Instead, I can just
approve the sign-in on my device, so I have to have that device. Something you have,
along with something you know, like a username and a password, constitutes multi-
factor authentication, which is often also called 2-step verification.

Identity Federation
Topic title: Identity Federation. Your host for this session is Dan Lachance.

These days, identity federation is a big deal. What we're really talking about is
centralized and trusted authentication. Now, compare that with old-school
authentication for apps, which had authentication built directly into themselves
instead of relying on an external, trusted, centralized authentication provider, which is
what identity federation provides.

But let's go into more detail, because there really is more to it, both when you're
understanding the concepts and certainly when you configure and use an identity-
federated environment. So, consider this example where we have a sign-in screen for
a web application and we are prompted to put in our Email and Password.

We've got the standard Forgot your password link or what we could do is Continue as
a Facebook user or Continue as a Google user. So, this is a web application that trusts
a third-party identity provider. Those third-party identity providers in this example are
Facebook and Google. So, this means then that you don't have to sign-up directly
within that website or that app. You can just use your existing credentials from
Facebook or Google to sign-in to this one.

That is an example of identity federation. It's a two-way street and what I mean by
that is the web app or the website needs to trust that third-party identity provider. And
that means that the third-party identity provider has to have a secure way to
authenticate users such as Facebook or Google users. So, identity federation has some
specific terminology. We've used some already like a Trusted Identity Provider or an
IdP.

This is normally some kind of an LDAP directory service, like Microsoft Active
Directory. It could be on-premises or in the cloud; as we've seen, it could be Google or
Facebook, both of which are cloud-based identity providers. The Resource Provider is
another part of the identity federation ecosystem; often, it's just referred to as an RP.
This is the actual application or website. Sometimes it's also called a Service Provider
or an SP.

So, essentially we're going to say that a Resource Provider is just a web app that trusts
the identity provider. And that identity provider again, could be an LDAP directory
service like Microsoft Active Directory, it could be Google, it could be a Microsoft
account, it could be Facebook, it could be an Instagram account. Anything like that
that the app is configured to trust.

Then we have the notion of an identity federation claim. A claim, generally speaking,
is an assertion about a user or device. Examples of this would include the date of birth
of a user that might be part of what's in a claim. Or the type of device a user is signing
in from, the subnet IP address they're signing in from, a security clearance level for a
given user. Some or any combination of these types of attributes and many other
possibilities can be stored in what's called a claim. And a claim can be part of a digital
security token.

So, the digital security token is digitally signed by the identity provider upon
successful authentication; that security token is signed with the identity provider's
private key. That means that apps that trust the identity provider, in other words,
resource or Service Providers, would be configured with the identity provider's
related public key, which can verify the signature of that security token and make
sure it's valid. And then, of course, the app might consume some of those claims.
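
To make that verification step concrete, here's a minimal sketch of what the resource provider side might look like in a JWT-based federation protocol such as OpenID Connect, using the PyJWT library; the public key and audience values are placeholders.

```python
# Minimal sketch: a resource provider verifying the identity provider's
# signed token in a JWT-based federation setup (e.g., OpenID Connect).
import jwt  # pip install pyjwt


def verify_token(token: str, idp_public_key: str) -> dict:
    # The IdP signed the token with its private key; its public key verifies
    # that signature. A tampered token raises an exception here instead of
    # returning claims.
    claims = jwt.decode(
        token,
        idp_public_key,
        algorithms=["RS256"],   # assumes an RSA-signed token
        audience="my-web-app",  # placeholder: this app's identifier
    )
    # The app can now consume claims, e.g., claims.get("birthdate")
    # for an attribute-based check.
    return claims
```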

Maybe when a user signs in to the identity provider and there's a claim with the date of
birth, that is used and passed off to an application to determine if the user, for
example, should be able to sign up for a driver's license. If we were to look at a data
flow diagram for identity federation, it might look like this. And the reason I say
might is that it really depends on the solution being used and how it's configured, but this
is a generic example. In step 1, we have a user accessing a web app. Now, the web
app itself does not have authentication built in; that's what identity federation is about.

So, instead in step 2, the app will redirect the user to an identity provider for
authentication. That could be automatic if the app is configured to trust an identity
provider. Or maybe the user is given the choice upon sign-in as we saw. Remember,
sign-in with your Facebook account, sign-in with your Google account, that type of
thing. Either way, step 2 means that we have external authentication that begins. So in
step 3, we assume that the user will authenticate successfully with that external third-
party identity provider, and that server would digitally sign a token, an authentication
token which might contain claims.

In step 4, the user is then ultimately authorized to use the app. Now, when an app is
consuming those claims, what this means is that we have a form of Attribute-based
Access Control or ABAC, which might look at a date of birth, an attribute, to
determine if a user is allowed to do something within an app. Single sign-on is often
coupled with identity federation, but it doesn't have to be; it can be its own
configuration. What it means is that it allows automatic user sign-in to apps, given
that they've already authenticated at least once.

So, it is very convenient for users because they're not continually prompted for the
same credentials, but at the same time, we have to play devil's advocate, look at both
sides of the coin, and think about the fact that a compromised user account could
provide access to multiple apps. As always, it's about striking the correct balance
between security and convenience, in alignment with organizational security policies.

Resource Access Control


Topic title: Resource Access Control. Your host for this session is Dan Lachance.

Server specialists need to be aware of the different types of access control models
which control access to resources, because you might be required to support or
configure one or more of these access control models that might be in use.

So, we're really saying that access control determines the level of access that an entity
can gain to a resource after successful authentication; we're really talking about
authorization, then. The first thing we'll talk about is data roles: when it comes to
managing and accessing data, we have the data owner role, the data custodian role,
and the data processor role. Now, a data owner would be the one that actually has
ownership of the data itself. And that could be a customer, if it's sensitive private
data, if it's essentially personally identifiable information or protected health
information.

But the data owner in some cases could also be an organization that sets the policy
that controls how that data is protected. The data custodian manages the data in
accordance with the rules or policies set forth by the data owner or data owners. The
data processor as the name implies, will take in data and process it. And again, in
accordance with any laws or regulations that determine how that data is to be treated.
So, if we're talking about the General Data Protection Regulation or GDPR for
sensitive data related to European Union citizens.
Then those data sensitivity rules would apply regardless of where the organization is
in the world that is processing that data. Discretionary Access Control or DAC, means
that the data custodian can at their discretion, set permissions in alignment with policy
set forth by the data owner. So, this screenshot comes from setting NTFS permissions
on a Windows machine. So what's happened is somebody has right-clicked on a file
called Project_A.txt. They've clicked on the Security tab, where they can then select
users or groups or add users and groups, if they click the Edit button to control
permissions to files and folders on an NTFS formatted disk volume.

Mandatory Access Control or MAC is another model. Here, the resources that users,
groups, or software can be given permissions to access, and those resources would be
files, apps, network sockets, whatever it happens to be, get assigned a label, and that's
set by the system administrator. Users can then be assigned a clearance level which
allows access to labelled resources. So, imagine that we have a label called Protected
Health Information (PHI).

We can then determine which users will have a clearance level that allows them
perhaps to only read items that are labelled as Protected Health Information. So,
Mandatory Access Control, then, is one of those things that is controlled by the
operating system, such as Security-Enhanced Linux, or SELinux.

We also have Security-Enhanced Android, or SEAndroid. This allows us to sandbox
applications on an Android device. Sandboxing is important because it means if we do
have some kind of a security breach in the app, it's only within the app and not outside
of it. It's the same kind of concept if you ever looked at your running processes on a
computer where you have a web browser with multiple tabs open. You'll notice that
each tab is a separate running process and yes, it consumes more resources.

However, it means that if whatever you're doing within that single web tab infects that
part of the web browser, it's only affecting that process, that tab. It will not be able to
go in and infect other tabs which could be other web applications like your secure
online banking and so on, or at least that's how the theory goes. So, app sandboxing,
then, prevents app privilege escalation outside of that app. And if you wanted to enable
this type of thing, you would configure it on an SEAndroid device in what's called the
kernel policy file. You'd have to set security-enhanced mode to enforcing so that it's
actually being used.

So, this means that the Android operating system would then control access to
resources like files on the device, access to the device itself, and, of course, to
sockets. Network sockets really combine names or IP addresses with a listening port;
that's really what a socket is. So, this would be a form, then, of Mandatory Access
Control. Another access control model is Role-based Access Control, otherwise called
RBAC. Here we have a screenshot of the Microsoft Azure cloud where what's
happened is somebody has clicked on a Subscription. The subscription is called Pay-
As-You-Go and in the left-hand navigator they've selected Access control (IAM).

Where, on the right, they can add role assignments. So, as an example, if we take
a look at the Contributor assignments shown on the right, there is a user by the name of
User Two that was given the Contributor role to this subscription, and what that means
in this particular case is that User Two would be able to contribute or add or create
cloud resources in this Pay-As-You-Go subscription. So, as long as the user occupies
the role, they have the permissions of the role because a role is really just a collection
of related permissions. And it could even be much more granular than this.

It could be a role that only allows the management of things like cloud-based storage
accounts or cloud-based virtual machines. Now, in the Microsoft Azure cloud, there is
a hierarchy: the subscription, under which we can have resource groups. A resource
group is just a way to group related cloud resources, like virtual machines, databases,
web apps, and so on. Much like you would organize files in a directory on a storage
device. So, we could assign a role at the subscription level; in this example, the
Virtual Machine Contributor role is assigned to a group called EastGroup1.

That means members of EastGroup1 would have Virtual Machine Contributor
permissions at the subscription level and everywhere underneath it; in all resource
groups, they'd be able to create virtual machines. But alternatively, you might assign
that just to a resource group, like resource group 2. So, members of EastGroup1
would only be able to create virtual machines in resource group 2. Or you could set
that on a specific resource.

Now, of course, Virtual Machine Contributor is probably not a great example if you
already have a virtual machine. But you might assign a role to a specific resource, like
a virtual machine, to allow its management, and only for that single virtual machine,
not even for the resource group, which would then apply to all virtual machines in
that resource group. We then have attribute-based access control or ABAC as another
access control method.

So, an attribute is a property of something like a user: a user's department, location,
or security clearance level. Or it could be a device attribute: the operating system,
the subnet that a device is on, the device health state. So basically, with attribute-based
access control, if certain conditions are met, then resource access is allowed, such as
being allowed to sign in to the cloud or being given access to a database or an app of
some kind.
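
Conceptually, an ABAC decision boils down to evaluating attributes against conditions before granting access. Here's a small illustrative sketch; the attribute names and conditions are made up for the example.

```python
# Illustrative sketch only: the essence of an ABAC decision -- compare
# attributes of the user/device against conditions before allowing access.
def abac_allow(attributes: dict) -> bool:
    return (
        attributes.get("department") == "HR"
        and attributes.get("device_os") == "Windows"
        and attributes.get("subnet") == "10.1.0.0/16"
    )

# A sign-in attempt with these attributes would be allowed.
print(abac_allow({"department": "HR",
                  "device_os": "Windows",
                  "subnet": "10.1.0.0/16"}))  # True
```
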
Working with Cloud Role-based Access Control (RBAC)
Topic title: Working with Cloud Role-based Access Control (RBAC). Your host for
this session is Dan Lachance.

Role-based Access Control, or RBAC for short, means that you have roles that users
can occupy, and those roles are assigned some kind of permission to some
kind of resource. And so, by extension, when users occupy a role, they get the
permissions that the role has. One form of this, you might say, is adding users to
groups; depending on how you name your groups and use them for assigning
permissions, that could be true, it could be a form of Role-based Access Control.

We're going to take a look at this really quickly in Microsoft Azure as well as in
Amazon Web Services, so in the public cloud. So, here in Microsoft Azure, I'm
already signed into the Azure portal.

The Microsoft Azure page is open on the screen.

So, on the left, I'm going to go all the way down to Azure Active Directory and I'm
going to click on Users where we have a number of users that are listed,

The Azure DevOps | Overview page opens. The middle pane contains multiple
options, namely: Overview, Preview features, Users, and Groups. The right pane
displays corresponding details.

The Users | All users (Preview) page is open on the screen. The right pane displays a
table with multiple rows.

one of which has the name of Codey Blackwell.

Now, I can also go back to my Azure AD tenant and go to Groups and get a list of
groups that I could assign to roles. And this is where the distinction between a group
and a role matters. In the Microsoft Azure cloud, a role is a collection of related
permissions, and that role might be assigned to a group or to individual users, and so
on, and it might work like this. So, on the left, in my Azure navigator panel, I'm going
to go all the way down to Resource groups.

A Resource group in Azure allows you to organize related resources such as, all of the
items you might need to support a web app or all of the virtual machines related to a
project, something like that.
The right pane displays a section titled: Resource groups. It contains a table with the
following column headers, namely: Name, Subscription, Location, and Type. It
contains multiple rows.

I have a resource group called Rg1, and what I want to do is click on it to open up its
properties, because to assign role permissions to this Resource group and everything
in it, I would go to Access control (IAM) over on the left.

The middle pane contains the following options, namely: Overview, Activity log,
Access Control (IAM), Tags, and Deployments. The right pane displays the following
fields, namely: Subscription, Deployments, Subscription ID, and Location. It also
contains two tabs, namely: Resources, and Recommendations.

When I do that, I can then click the Add button and then

He selects the Add button in the right pane. Three options appear, namely: Add role
assignment, Add co-administrator, and Add custom role.

choose Add role assignment and this is where

The Add role assignment section displays on the right pane. It contains the following
tabs, namely: Role, Members, Review + assign. Currently, the Role tab is active. It
contains a table with the following column headers, namely: Name, Description,
Type, Category, and Details. A search bar is present above the table.

I could go ahead and start to manage role assignments. So, the first thing I would do
is select the role I'm interested in.

So, I'm just going to search for virt, for virtual machine, and down below I now have a
filtered list of roles that I'm interested in, one of which is called the Virtual Machine
Contributor role, which lets you manage virtual machines. That's the role that I
want to select, so I'm going to go ahead and click on it and then click Next. So,
Virtual Machine Contributor.

The Members tab is now active. It contains the following fields, namely: Selected role,
Assign access to, and Members. A table displays below the Members field. The Assign
access to field contains two radio buttons, namely: User, group, or service principal,
and Managed identity. The Members field contains a link titled: Select members.

I want to assign this to a User, group, or service principal. Service principal is related
to assigning role permissions to a piece of software. So, I'm going to click Select
members down below

The window titled: Select members appears. It contains a search bar.

and I've got a number of groups that are available here. For example, I'm going to
choose the group Central_Region_Canada. So members of that group will have the
Virtual Machine Contributor role assigned to the resource group called Rg1. So, they
can create and manage virtual machines only in that resource group, not in the entire
Azure subscription.

So, I'll select that group, click Select. And I'll click Review and assign. And it's done.

The Rg1 | Access control (IAM) page displays. The right pane displays the following
tabs, namely: Check access, Role assignments, Roles, Deny assignments, and Classic
administrators.

So, let's look at the Role assignments at this level, at the resource group; I've
clicked Role assignments. If I scroll down through the roles, I'll eventually come
across the

A table with the following column headers displays, namely: Name, Type, Role, Scope,
and Condition.

Virtual Machine Contributor role, where the Central_Region_Canada group is now showing.
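
For those automating this, the same role assignment can be made with the Azure SDK for Python (azure-mgmt-authorization). This is only a sketch: the subscription ID, the group's object ID, and the role definition GUID are placeholders, and parameter shapes can vary slightly between SDK versions.

```python
# Sketch: assign a built-in role at resource group scope with the Azure SDK.
# All angle-bracket values are placeholders.
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

sub_id = "<subscription-id>"
client = AuthorizationManagementClient(DefaultAzureCredential(), sub_id)

# Scope the assignment to the Rg1 resource group, as in the demo.
scope = f"/subscriptions/{sub_id}/resourceGroups/Rg1"

client.role_assignments.create(
    scope,
    str(uuid.uuid4()),  # each role assignment gets its own GUID name
    {
        "properties": {
            # Role definition ID for Virtual Machine Contributor; the GUID
            # is discoverable via client.role_definitions.list(scope).
            "role_definition_id": (
                f"/subscriptions/{sub_id}/providers/"
                "Microsoft.Authorization/roleDefinitions/<role-guid>"
            ),
            # Azure AD object ID of the Central_Region_Canada group.
            "principal_id": "<group-object-id>",
        }
    },
)
```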

Now, that's in Azure. Let's look at doing the same type of thing in the AWS
Management Console. OK, so here in AWS, I'm going to go ahead and search for
IAM, or Identity and Access Management, and click on it. And then what I want to do is just
we have a list of Policies on the right. Many of them are built-in but you can build
your own.

A policy is similar to a role in Azure; it's a collection of related permissions. So, for
example, if I were to search for s3, which represents a storage bucket or storage
location in the cloud, and filter on it, I then have only policies related to
working with Amazon S3. For example, AmazonS3FullAccess contains the necessary
permissions to have full access to S3 buckets. And so, I can assign that policy to a
group or to an individual user. So, if I go look at my Users here in AWS, I have a user
called cblackwell, and if I click on that user and go to Groups, we'll see all of the
groups,

The Summary page opens.

that that user is a member of. The user is only a member of one group called
East_Admins and I could either click the link here or I could click User groups on the
left to get to that group and open up its properties.

What I'm interested in doing for the East_Admins group

The East_Admins page is open on the screen. It contains a section titled: Summary.
Below the Summary section, the following three tabs display, namely: Users,
Permissions, and Access Advisor.

is clicking the Permissions tab at the top here. And down below, I currently have a
policy, but I want to select it and Remove it, so I'll click Delete. What I'd like to do
instead is add S3 bucket full access. So I'm going to click Add
permissions and I'll choose Attach policy. And once again, I'll filter for s3 in this
particular example,

The Attach permission policies to East_Admins section displays on the right pane. It
contains two sub-sections, namely: Current permissions policies, and Other
permission policies. The Other permission policies sub-section contains a search bar and a table.

and I'm going to select AmazonS3FullAccess and I will add the permission. So, if I
were to go back to a User that's in that group, we know cblackwell is a member of the
group.

The Summary page opens.

And if I were to view the Permissions tab here for the user down below, under
Attached from group AmazonS3FullAccess is now applicable to this user by being a
member of the East_Admins group. So, that gives us a sense of how we might work
with Role-based Access Control in both the Microsoft Azure cloud and the Amazon
Web Services cloud.
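
To show what a policy like AmazonS3FullAccess boils down to, here's a boto3 sketch that creates a custom policy with an equivalent-in-spirit JSON document and attaches it to the group; the policy name is hypothetical.

```python
# Sketch: a custom policy that, in spirit, grants full S3 access like
# AmazonS3FullAccess does, attached to the East_Admins group.
import json

import boto3

iam = boto3.client("iam")

policy_doc = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:*", "Resource": "*"}
    ],
}

policy = iam.create_policy(
    PolicyName="CustomS3FullAccess",  # hypothetical name
    PolicyDocument=json.dumps(policy_doc),
)

iam.attach_group_policy(
    GroupName="East_Admins",
    PolicyArn=policy["Policy"]["Arn"],
)
```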

Configuring Attribute-based Access Control (ABAC)


Topic title: Configuring Attribute-based Access Control (ABAC). Your host for this
session is Dan Lachance.
Server technicians need to be aware of the various access control models that are out
there, including Dynamic Access Control, which is a Windows feature that uses
attribute-based access control. Attributes are properties or characteristics of something,
like a user's location or city, the department a user is in, or the IP address that a
user's device is on.

We can have those specific attributes compared against some conditions to determine
whether access should be granted or not to a resource. Now with Windows Dynamic
Access Control, we use Active Directory user and device attributes to control access
to the file system, instead of just adding users to groups where groups have access to
the file system.

That is traditionally what has been done, is still being done, and is also just fine. So,
let's get started with this. The first thing we have to do is make a few
changes in the Active Directory Administrative Center. So, I'm on a domain controller
changes in the Active Directory Administrative Center. So, I'm on a domain controller
server and I'm going to go to the start menu and under Windows Administrative
Tools, I need to use the Active Directory Administrative Center Tool. I can't use the
Active Directory Users and Computers tool to configure Dynamic Access Control.

So, in the Active Directory Administrative Center,

The Active Directory Administrative Center window opens. The left pane contains the
following options, namely: Overview, Dynamic Access Control, Authentication, and
Global Search. The right pane displays a section titled: WELCOME TO ACTIVE
DIRECTORY ADMINISTRATIVE CENTER.

I'm just going to go ahead and expand it. And on the left, I'm going to click Dynamic
Access Control. Now, the next thing I want to do is right-click Claim Types over on the
right and

The right pane displays a table with the following column headers, namely: Name,
Type, and Description. The table contains rows with the following Names: Central
Access Policies, Central Access Rules, Claim Types, Resource Properties, and
Resource Property Lists.

choose New Claim Type. Now what is a claim?

The Create Claim Type: accountExpires window opens. The left pane contains the
following options, namely: Source Attribute, and Suggested Values. The right pane
contains a section titled: Source Attribute. This section contains a table with the
following column headers, namely: Display Name, Value Type, Belongs to, and ID. It
also displays the following fields, namely: Display name, Description, User, and
Computer.

A claim is an assertion or a statement of truth about something like a user or a device.
For example, an identity provider like Active Directory will state a claim that says
this user has successfully authenticated and they are in the HR department.

That's a claim, and that type of thing is necessary when we want to use attribute-
based access control. So, when I add a new claim type here, it's going to be for a
User, but it could also be for a Computer. What I want to do is go down and select an
existing Active Directory attribute; the one I want to select is department.

That's what I want to use. And then, further down, I want to add suggested values
for department. It says No values are suggested, and that's fine.

The Suggested Values section contains two radio buttons, labelled as: No values are
suggested, and The following values are suggested. The Add, Edit and Remove
buttons are also present in this section.

But I'm going to click The following values are suggested, and I'll click Add; I'm
going to add, let's say, HR as the Value and the Display Name.

An Add a suggested value dialog box opens. It contains the following fields, namely:
Value, Display name, and Description.

So, that's one possible value for the department attribute, and I'm also going to Add,
let's say, Exec.

We could go in and add all of the departments that we use in our organization that
would apply to assigning file system permissions, but I'll just go with these two for
now. And that's all I'm going to do here. OK, so if I go into Claim Types, I now have
a department claim type; that's one of the things I need to do. Now, back under
Dynamic Access Control, I'm going to go into Resource Properties.

A resource in this context is simply a file or a directory in the file system.

A table with the following column headers displays, namely: Display name, ID,
Referenced, Value Type, Type, and Description. The table contains multiple rows.

So, what we're doing here is we want to be able to associate the Department attribute
in the file system with the Department attribute in Active Directory user accounts.
We'll talk about how that gets applied in the file system. But for now, what I want to
do is go into that and just take a look, OK. So, it looks like we can select Values here,

He selects Department from the table. A window titled: Department (Disabled) opens.
The left pane contains three tabs, namely: General, Suggested Values, and
Extensions. The right pane displays two sections, namely: General, and Suggested
Values. The first section contains the following fields, namely: Display name, Value
type, Description, and ID. The second section contains a table having column
headers, namely: Value, Display Name, and Description, and three buttons, labelled
as: Add, Edit, and Remove.

such as, and this one is filled in already, but this is for the file system.

So, we've got Sales and so on. But, well, do I have Human Resources? I do have
Human Resources, but ours was called HR; it really doesn't matter that they have the
same name, but we are going to have to tie them together. I don't see Exec here, so
I'm going to Add

The Add a suggested value dialog box opens.

Exec, or I could spell out executive; it doesn't really matter that they be exactly the
same name.

But the reason we're doing this twice is because we're going to be comparing how
files and folders are flagged on file servers in the domain, with a Department of Exec
or whatever, against the signed-in user and the department in their user account. It
doesn't have to be spelled out exactly the same way, but when we add conditions, we
can link them together, as you'll see. OK, so
we've got the resource properties configured for Department. However, I want to
right-click on it and make sure it's enabled, and when I enable it, notice the icon
changes. The other ones are disabled, but this attribute is enabled, so Department is
good to go.
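
Conceptually, what we're building toward compares a resource's Department label against the signed-in user's department claim. The operating system evaluates that condition, not your own code, but a small illustrative sketch of the comparison might look like this; the label-to-claim mapping is made up for the example.

```python
# Illustrative sketch only: how a resource property (the file system label)
# is compared against a user claim (from Active Directory). Windows itself
# evaluates the real condition; this just models the idea.
def access_allowed(resource_label: str, user_department_claim: str) -> bool:
    # The two sides don't have to be spelled identically; the rule (here,
    # a mapping) is what ties a resource value to a claim value.
    label_to_claim = {"Human Resources": "HR", "Exec": "Exec"}
    return label_to_claim.get(resource_label) == user_department_claim

print(access_allowed("Human Resources", "HR"))  # True
```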

OK, the next thing I have to do is install a server component on my file servers. So,
let's assume I'm doing this on a file server; it will let me classify files and
directories, in other words, label them with a department attribute and a value. We
can do that by clicking Add roles and features and going through the
The Server Manager Dashboard displays. The left pane contains the following
options, namely: Dashboard, Local Server, All Servers, DNS, and File and Storage
Services. The right pane contains two sections, namely: WELCOME TO SERVER
MANAGER, and ROLES AND SERVER GROUPS. The first section contains the
following links, namely: Add roles and features, Add other servers to manage, Create
a server group, and Connect this server to cloud services. The second section contains
three cards, namely: AD DS, DNS, and File and Storage Services.

Wizard until we get to the roles screen and under File and Storage Services, and under
File and iSCSI Services, we are interested in something called File Server Resource
Manager; that's a role service, otherwise known as FSRM.

We want it, so I'm going to turn it on. We're going to click Add Features

The Add Roles and Features Wizard opens.

for the admin tools and we're just going to continue on through the Wizard and just do
the installation. Now, by installing the file server resource manager role service, we're
going to get an admin tool that will be under our start menu as well as under the Tools
menu here in Server Manager. Plus, it'll add a new tab when we look at the properties
of a file or directory on file servers where this is installed and that's important,
because we want to be able to work with things labeled as department of sales or exec
or HR, whatever the case is, to assign permissions.
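
Incidentally, if you'd rather skip the wizard clicks, the same role service and its admin tools can be installed with one line of PowerShell; this is the standard Install-WindowsFeature approach:

# Install FSRM along with its management tools
Install-WindowsFeature -Name FS-Resource-Manager -IncludeManagementTools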

So, I'm going to click Close. The next thing I'm going to do is go into the file system on the server
and make a sample file. So, in the root of drive C: I've got a Data folder and a Projects
folder with some sample files. So, I could right-click on the Projects folder and go to
Properties, and because we've got FSRM installed, we now have a classification tab.

But it says no properties. Where's Department? I want to flag this folder as being for the HR department. I could also do the same thing for individual files. So, what we need to do
then is open up the FSRM tool, which I can do as we've mentioned from the start
menu, there it is, File Server Resource Manager. I'm going to expand Classification
Management

The File Server Resource Manager window opens. The left pane contains the
following options, namely: Quota Management, File Screening Management, and
File Configuration Rules. The right pane contains a section titled: Name.

on the left and Classification Properties.

Now, what I don't see here is department.

A table with the following column headers displays, namely: Name, Scope, Usage, Type, and Possible Values.

I'm just going to go ahead and click Refresh. It was just a timing thing. Department now shows as a Global attribute that we can use for tagging items in the file system. And that's global in the sense that it's not local to this server; it's in Active Directory and can be applied to all file servers that are joined to that Active Directory domain.
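
That Refresh click has a scripted counterpart, by the way. On a file server with FSRM installed, the following cmdlet from the FileServerResourceManager module pulls the global classification properties down from Active Directory:

# Sync AD-defined resource properties to this file server
Update-FsrmClassificationPropertyDefinition
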
OK, so if we go back in the file system, back into the Projects folder, back under
Properties Classification.

Now, Department is here. And all of the Values are shown down below. These are the resource property values; remember, I had to add Exec, and we had Human Resources. I'm going to flag this folder as Human Resources. Now, if I go
into the files within that folder and go to the Properties of one of them and go into
Classification, they too have

He opens the Properties of Project C file.

the department. And, of course, notice it's currently set with the value of Human Resources, because I did it at the folder level and so it flows down to subordinate items.

All I've done is flag these project files as Department equals Human Resources. I
haven't assigned permissions, so that's all that I've really done. Now, let's go back into
the Active Directory Administrative Center because the next thing I want to do, I'm
just going to click on Dynamic Access Control again because I have to create a
Central Access Rule. So, I'm going to right-click on that and choose New Central
Access Rule. Basically, to link the resource or file system department property to the
Active Directory user department property.

We're going to call this Rule1

The Create Central Access Rule page opens. The left pane contains the following
sections, namely: General, Resources, and Permissions. The right pane contains the
following three sections, namely: General, Target Resources, and Permissions. The
first section contains the following fields, namely: Name, and Description. The second
section contains a text box and an Edit button. The third section contains two radio
buttons, namely: Use following permissions as proposed permissions, and Use
following permissions as current permissions. A table displays below these radio
buttons. This section also contains an Edit button.

and down below for Target Resources, I'm going to click Edit and I'm going to click Add a condition.

The Central Access Rule window opens. It contains an Add a condition link, and OK and Cancel buttons.

So Resource, a Resource is just a file or directory in the file system.

It displays five drop-down fields. The first four are, namely: Resource, Department, Equals, and Value. The fifth drop-down is blank.

This does not apply to Active Directory user accounts or anything like that; it's for file system stuff. So, Resource, Department, Equals, OK, and a Value. Now this is where
I'm going to say, well, OK, if files in the file system or directories are flagged with
Department Equals Human Resources, that's what we've done so far.

Then I want to set current permissions, and I'm going to go down and click Edit to do
that, I'm going to click Add, I'm going to choose Select a principal.

The Advanced Security Settings for Permissions dialog box opens. It contains a table and Add, Remove, View, and Restore defaults buttons.

The Permission Entry for Permissions dialog box opens. It contains the following
fields, namely: Principal, Type, Basic permissions, and Add a condition. The Basic
permissions field contains check boxes, labelled: Full Control, Modify, Read and
Execute, Read, and Write.

Maybe I'll use the built-in Domain Users group here in Active Directory, which includes everyone in the domain. So, for all domain users, I want to give Read and Execute, Read, and let's say Write, but only if their user account is also in the HR department. And this is where I'm going to Add a condition. But this time it's for Active Directory, not for a resource in the file system; we already did that.

Six drop-down fields appear, namely: User, Group, Member of each, Value, Click Add items, and Add items.

Now, we're mapping it to a user in Active Directory and I'm going to select the
department attribute, Equals, Value.

He selects department in the Group drop-down. As he selects the department value, the number of drop-down fields present on the screen is now five. He selects a value in the fifth drop-down.

And here's what I filled in earlier when I made a claim type: HR. OK, so now I've linked HR, the Active Directory user department attribute, to Human Resources up above in the Target Resources for the file system, excellent. And then we've set the permissions for members of the Domain Users group that happen to have a department equal to HR. OK, now that we've done that, we have some stuff to do in
group policy, so I'm going to open up my start menu and under the Windows
Administrative Tools heading, I'll go down to Group Policy Management.

If I want this applied to all file server computers joined to this domain,

The Group Policy Management window opens.

well, I could go into the Default Domain Policy and configure the following settings.
So, I'll right-click on the Default Domain Policy and I will choose Edit.

He right clicks on the Default Domain option in the left pane. A context menu
appears. The context menu contains the following options, namely: Edit, Enforced,
Link Enabled, and View.

And the first thing we'll do here

The Group Policy Management Editor window opens.

is we're going to go under Computer Configuration. We're going to go under Policies, Administrative Templates, System, and KDC, the key distribution center for Active Directory. And we've got an option here called KDC support for claims.

The right pane displays a table with the following column headers, namely: Setting,
State, and Comment. It contains multiple rows.

I'm going to turn that on and say, Enabled.

The KDC support for claims, compound authentication and Kerberos armoring
dialog box opens. It contains three radio buttons, labelled as: Not Configured,
Enabled, and Disabled. It also contains a drop down with the following options,
namely: Supported, Not supported, Always provide claims, and Fail unarmored
authentication requests.

And I'm going to leave it on Supported. So, if needed, claims will be provided. Now, the next thing we have to do while I'm in here... actually, let's see, there's one step I forgot before I go further here. I need to go back to the Active Directory Administrative Center; we created a Central Access Rule called Rule1. What I need to do is add that to what's called a Central Access Policy, that's the step I forgot. So, I'm going to right-click on Central Access Policy.

We'll call it Policy1.

The Create Central Access Policy window opens. The right pane contains the
following fields, namely: Name, Description. Two buttons Add and Remove are also
present. He selects the Add button. A dialog box titled: Add Central Access Rules
opens. It contains a table with Name as Rule1.

All I do here is Add the rules I've created. I only have one Rule, that's it, just Add the
rule to it. OK, now if we go back to group policy, what I really want to do is deploy
that central access policy. So, in order to do that, I need to make sure that we go into
the correct place here. So, right now we're under Computer Configuration, Policies, Administrative Templates, whereas I need to be under Windows Settings, Security Settings. Let's see, File System, there it is, Central Access Policy. And on the right, I'm
going to right-click and choose Manage Central Access Policy. There it is, Policy1,
Add it, OK.

The Central Access Policies Configuration dialog box opens.
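
For reference, the rule and policy we just built can also be created with the ActiveDirectory module. This is a hedged sketch: the resource condition uses the documented condition syntax against the built-in Department_MS property, and I've left the permissions out, since the conditional ACE that the -CurrentAcl parameter expects is an SDDL string that's much easier to build in the GUI, as we did:

# Create the central access rule targeting resources flagged as Human Resources
New-ADCentralAccessRule -Name "Rule1" `
  -ResourceCondition '(@RESOURCE.Department_MS == "Human Resources")'
# Create the central access policy and add the rule to it
New-ADCentralAccessPolicy -Name "Policy1"
Add-ADCentralAccessPolicyMember -Identity "Policy1" -Members "Rule1"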

So that's it: once group policy refreshes for file servers that are joined to this domain, those file servers will receive Policy1, which says that for anything in the file system flagged with a Department of Human Resources, domain users that have the HR department attribute on their user account will have, basically, read, write, and execute permissions. Of course, on each file server in the domain, File Server Resource Manager has to have been installed, and we have to have flagged files and directories for the HR department. So, that is an example of attribute-based access control.

Course Summary
Topic title: Course Summary

So, in this course we've examined how to implement strong authentication and authorization for resource access through techniques such as MFA, identity federation, and Role-based Access Control. We began by exploring physical security strategies and common hardening techniques, and we used group policy to disable USB storage. We examined the relationship between authentication and authorization. As well, we managed users and groups in Microsoft Azure, Active Directory, Linux, and AWS. Following that, we configured MFA in AWS and Azure. We discussed how identity federation is used and we also covered access control types. And finally, we configured cloud Role-based Access Control and also configured Attribute-based Access Control. In our next course, we'll move on to explore Public Key Infrastructure, or PKI.

Physical Security
Topic title: Physical Security. Your host for this session is Dan Lachance.

It's easy to get caught up with all of the technicalities in security solutions related to
IT. We have to think about physical security first because at the end of the day, all of
our IT equipment whether it's servers, storage arrays, laptops, desktops, routers,
switches, all of that stuff, has to physically exist somewhere, and so it needs to be
protected. Just like physical documents would need to be protected if they are, for
example, print offs of very sensitive or confidential information.

So, with physical security we're talking about having restricted access to the IT
infrastructure equipment documents or any other assets that might be considered to be
sensitive. Now this is a big deal when it comes to things like server rooms or even
data centers. Think of a data center building because you might have hundreds or even
thousands of different customers, personal or business data, might be stored on servers
in the data center. It might even be replicated to other data centers for fault tolerance.
But we have to think about the fact that when it comes to data centers, this is why
you'll find some providers are reluctant to provide their physical address of where
their data centers are located.

So, physical security now there are many aspects to physical security, one of which is
perimeter fencing of a certain height. And also the use of bollard posts. Now, bollard
posts are used to protect buildings from vehicular attacks like ramming a truck into a
building to destroy the wall. Another aspect of physical security would be to have
proper lighting around the facility, and around the perimeter, around the property, in
parking lots, parking garages. And also dealing with locked gates, perhaps with a guard that determines who should or should not be allowed onto the property in the first place. Access cards to allow access to a building or perhaps a restricted floor in a
building.

Security guards, guard dogs, even motion-sensing security systems. Now we're not suggesting that all of these things are needed at once, but they could be in a high-security facility, or some combination of these items might be relevant given a particular situation. Now, continuing with this, let's think about things like data center camouflage, such as having an unmarked building, maybe located near a densely forested area; or it might be a concrete building, or a building that's painted green to blend in with the landscape.

That kind of thing can actually be a part of physical security, certainly it is at the
military level. Then we have interior and exterior reflective glass windows to prevent
any type of visual access to what's happening within a facility. Having a clean desk
policy, which means that people will not leave sensitive documents or sensitive
devices containing sensitive documents easily accessible on their desk when they go
home at night from work. And also proper document shredding techniques to ensure
that sensitive data cannot be retrieved after documents have been shredded. Other
commonly used security mechanisms in office environments would include privacy
screen filters.

Here we have a screenshot where you could search online and purchase privacy
screen filters that are designed to be placed over devices like a laptop or a tablet even
a smartphone device. The idea is that if you're close and if you're directly in front of
the screen, you can easily see what's happening. But if you're at an angle or a bit
further away, you cannot see what's happening on the screen. So, it provides confidentiality for data that's presented on a screen.

Access control vestibules otherwise called mantraps are very common in secure
buildings. Often what will happen is the outer door of a facility provides access to an
inner chamber where a second door will open only after the first door completely
closes. The idea is we want to prevent tailgating, we don't want a malicious user
sneaking in behind someone because the door hasn't fully closed yet.

And of course these doors would be controlled through some kind of security
mechanism like an access card or some kind of a key lock. Another technique related
to physical security is something called air-gapping, and this is commonly done for
very sensitive or critical networks. Simply stated, air-gapping means that we don't
have a connection to another external network, not wirelessly not through a wired
link. It's completely separated hence air-gapped from other networks. It is a network
unto itself, but it's got its own internal use. So, in our diagram as an example, we have
some kind of an industrial environment, whether it's for water management, whether
it's for the electrical power grid. Whatever the case is we would have industrial
monitoring stations that are connected to a network with PLCs. PLCs are
programmable logic controllers.

These allow for the connection to and monitoring of robotics in a manufacturing environment, or controlling sensors for pressure, or opening and closing valves, that
type of thing. You might also have a database transaction server that would be
accessible in this example by plant data historians. But the idea is that while all of these items are connected together on a network, the network itself is air-gapped; there is no external network connection. Now you might wonder, well, how do we remotely manage this type of network? That's where you have to be very careful, because if you allow remote management for technicians that might be working from home or from another location, or maybe a managed service provider, an MSP, that takes care of and monitors these servers from another location, you are then introducing an entry point into the network, which is a potential security risk; you're increasing the attack surface. And when you do that you really don't have an air-gapped network.

So, air-gapping can keep a network secure, but it can be inconvenient from a remote management standpoint. Somebody has to make the decision as to which is more important. Finally, another aspect of physical security is the protection of data at rest. This means encrypting data, whether individual files or entire disk volumes, and then having restricted access to the facility and the server room where the storage devices themselves exist, as well as locked equipment racks in a server room or data center where storage arrays might exist. We want to prevent people from being able to physically steal storage devices, and if they do get their hands on one, we want to make sure strong encryption is used so that any sensitive data on it is completely inaccessible.
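
On the Windows side, encrypting a volume for data at rest can be a single line. A minimal sketch, assuming server hardware with a TPM and the BitLocker feature installed:

# Encrypt the C: volume with XTS-AES 256, protected by the machine's TPM
Enable-BitLocker -MountPoint "C:" -EncryptionMethod XtsAes256 -UsedSpaceOnly -TpmProtector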

Hardening To Reduce the Attack Surface
Topic title: Hardening To Reduce the Attack Surface. Your host for this session is Dan Lachance.

In the IT world when we talk about hardening, what we're really talking about is securing or locking down hardware and/or software, ideally both; doing things like applying firmware updates as well as applying software updates. So, hardening is designed to reduce the attack surface. From a security standpoint, we sometimes have to think like a malicious actor: what would a malicious actor try to do to break into a smartphone or a network, or how would they pick up the phone, physically call somebody, and try to trick them into divulging sensitive information?

We have to think that way because once we identify these potential attack vectors, we can do our best to either completely remove them or at least mitigate them with some kind of security control. The human element is always the biggest concern, even when it comes to IT security. It is absolutely critical that there be periodic user security training, and that can come in many forms. In many cases it might be physical classroom training, it might be done through a Zoom call, or it might be done through some other online platform.

Whatever the case is, employees in an organization must be made aware of current security threats; it's all about user awareness and training. So, that means that people must be educated on secure computing practices. An example would be being aware of scam phone calls where people might say they are from a government agency like the tax office, or from law enforcement, and that something has been discovered where, if a fine isn't paid in 24 hours, then imprisonment will follow.

Those things don't happen that way; people need to be aware of that and many other items related to secure computing practices, such as not leaving mobile devices unattended in public spaces, like USB flash drives, laptops, or tablets, that type of thing. Also important is awareness of social engineering, in other words, people trying to deceive you into revealing sensitive information; this includes phishing scams, with a PH. Phishing, as an example, might be in the form of an email that looks like it legitimately comes, let's say, from Amazon, saying click here to view your invoice or to collect your refund.

But in fact when you click that link, you're actually infecting your computer. So, it's
some kind of a scam to try to trick people into clicking on things or to performing
some kind of action they otherwise wouldn't that is related to some kind of sensitive
information. So, general hardening techniques begin with removing unnecessary
components from servers and disabling services that aren't needed, like SSH. If you're not using it, maybe you have an out-of-band hardware remote management solution you exclusively use, so why would you leave SSH running if it's never used?

All it serves to do is introduce another entry point for potential malicious attacks.
Disabling or removing unused user accounts. Adhering to secure configuration
standards, whether they're industry best practices or whether they're outlined by organizational security policies. And this shouldn't be a one-time thing; there needs to be a periodic review of security controls.
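
As a quick example of that first point, disabling an unused service such as SSH takes one line on a typical systemd-based Linux server. A small sketch; the unit is named ssh on Ubuntu and sshd on some other distributions:

# Stop the SSH daemon now and prevent it from starting at boot
sudo systemctl disable --now ssh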

A security control is some kind of a solution that either reduces, eliminates, or mitigates some kind of a security threat, whether it's applying software patches,
configuring firewall rules, keeping virus scanners up to date on all devices. So, those
need to be reviewed periodically because a security control that was once effective at
mitigating a threat might no longer be effective at the current point in time. And so
you need periodic reviews of these things. Server hardening would include many
different tasks including enabling multi-factor authentication or MFA for admin
accounts. That means not allowing just username and password. Now that's two items, but they're both in the same category of something you know, which could even be used remotely by someone that could somehow figure out what that combination is. So, instead, maybe requiring a smart card to be inserted into a laptop, if it's got a smart card slot, in addition to a username and password; perhaps that would be required before you could use an admin account to manage a server.
Having a dedicated management network interface on a separate VLAN; most servers have more than one network interface. One of those network interfaces perhaps would be plugged into a separate VLAN that only allows remote management traffic and nothing else. And if you're talking about server-class hardware, it probably already has a dedicated interface for remote hardware out-of-band management. So, you might already have that solution built in, but it's definitely something to consider. Frequent patching, and confirmation that patches were applied successfully, with date and time stamps and of course the version numbers of those individual patches; that is always going to be important to make sure servers are protected, and also for troubleshooting, since sometimes the application of a patch can break something else and you want to be able to track that update history easily.

Periodic vulnerability assessments. There are plenty of tools, some of which are free,
that allow security technicians to scan either an entire network or individual hosts,
even running tests against a web app looking for vulnerabilities. So, we can identify
any weaknesses and then address them. And again this needs to be periodic because a
security assessment might not have revealed something at one point in time. Maybe
because that's prior to a server having been infected, but maybe later it shows up as
being some kind of suspicious anomalous behavior or some kind of configuration
change that really shouldn't have occurred.

Also, using host intrusion detection systems, an IDS. An intrusion detection system, in this case configured for the specific host, is designed to look for what's abnormal. That means you have to know what is normal; you need a baseline of normal activity to identify what's strange. So, an intrusion detection system can be configured to look for suspicious activity, log it, perhaps send a notification to admins, that type of thing, so that's part of server hardening. Then, if you're dealing with virtualization for virtual machine servers, you have to think about the security of any virtual machine backups or snapshots, and the security of virtual machine hard disks, because they would exist as files on a storage device. The VM guests themselves must be hardened; the operating system running in a virtual machine needs to be hardened the same way you would harden a server OS running on a real server, there's really no difference at that level.

Making sure that any storage area network traffic is encrypted, especially if you're
using something like iSCSI, which doesn't necessarily require authentication or
encryption. Because it uses standard network equipment to provide servers access to
storage over the network. Another thing to think about is that on a hypervisor with a
bunch of virtual machines, all it takes is one compromised virtual machine guest to be
perhaps infected with a bitcoin miner or something like that, that would hog resources
on that hypervisor, thus depriving other virtual machine guests from being able to
function properly.

You might also consider configuring your hypervisor so that it's got multiple physical
NICs connected to different VLANs, so virtual switch and VLAN isolation can also
be another factor. And then separating hypervisor management from virtual machine
management. You can set permissions to allow someone to manage the hypervisor OS
itself, but not have access to the data inside of virtual machines. We also want to be careful to watch out for VM sprawl; it can be a security issue because having many unnecessary virtual machines running increases the attack surface. Then we have the option of encrypting a virtual machine. This is a screenshot
from VMware workstation, where we can set a password which essentially serves as a
seed to generate an encryption key to encrypt the virtual machine. It means the virtual
machine can't be started or managed unless you know the decryption password.
Finally, on the mobile device hardening side, we have to think about using strong
authentication, whether it's VPN authentication or requiring certificates before
authenticating certain types of apps. Maybe restricting app installation by users, including sideloading.

Sideloading means installing an app from the source files as opposed to from an App
Store for a mobile device. Enabling App geofencing, so maybe apps are only active
when they're in a certain geographical region. Encrypting mobile devices, enabling
remote wipe so that if the device is lost or stolen and it contains sensitive apps or info,
it can be wiped remotely by admins. And just like with a regular server disabling
unneeded components like the camera perhaps enabling airplane mode, Turning off
location services, Enabling or disabling near field communications or NFC,
Bluetooth.

All of this is part of mobile device hardening. Now, you might wonder why that's relevant for servers: because a mobile device could allow access, let's say through a VPN app, into a network where servers exist, and those servers could potentially be targets for malicious users.

Disabling USB Storage Using Microsoft Group Policy
Topic title: Disabling USB Storage Using Microsoft Group Policy. Your host for this session is Dan Lachance.

There are many strategies related to hardening a Windows server. One way to harden a server is to also harden its environment, because if we have an infected client machine that's joined to the Active Directory domain, that potentially could also lead to a compromise of the server. And so there are many things that we can do, not only on the server but for domain-joined machines as well. One of those things is to prevent the use of removable storage. In other words, preventing users from plugging in things like USB drives that might be infected and, not only that, making sure perhaps that users can't copy sensitive data to plugged-in USB drives.

Maybe preventing users from inserting CDs or DVDs if your machines have those
types of drives. So, we could do all of this individually on each machine one at a time
or we can do it through group policy. Let's start with the one at a time example,

The Windows desktop screen displays.

so from the start menu I'm going to type gpedit.msc, the Group Policy Editor Microsoft console. But when I click on it, I get nothing. Well, that's because this is a Domain
Controller and there's a different tool to edit group policy for the Active Directory
domain.

So, what I'm going to do is go into the start menu and go under Windows
Administrative Tools, where if I scroll down to the GS, I'll

A list of options appear below.

come across Group Policy Management.

The Group Policy Management window opens. The menu bar contains the following
options, namely: File, Action, View, Window, and Help. The left pane contains the
following options, namely: Group Policy Management, and Sites. The following sub
options are present below the Group Policy Management, namely: Domains, HQ,
Group Policy Objects, WMI Filters and Starter GPOs. The right pane displays three sections. The first section is titled as: HQ_Security_Settings. It contains four tabs, namely: Scope, Details, Settings, and Delegation. Currently, the Scope tab is active.
The second section is titled as: Security Filtering. The third section is titled as: WMI
Filtering.

This allows me to configure Group Policy Objects or GPOs. We've got a few of them
here that contain settings that we want to apply to users or computers in the domain.
Either all users and computers in the domain or perhaps a subset depending on where
the GPO is linked, such as to a specific OU. But let's flip over to a domain joined
server to see what happens when we open the start menu and try to run gpedit.msc.
OK, so I'm on a different Windows Server now that is not a Domain Controller.
Remember, Domain Controllers contain a copy of the Active Directory database. So,
here I'm going to go to the start menu and just like I did before, I'll type in gpedit.msc
and I'm going to go ahead and select that. On this server it runs; it says Local Computer Policy and

The Local Group Policy Editor window opens. The left pane contains two options,
namely: Local Computer Policy, and User Configuration. The first option contains
the following sub options, namely: Software Settings, Windows Settings, and
Administrative Templates. The second section contains the following sub options,
namely: Software Settings, Windows Settings, and Administrative Templates. The
right pane contains a section titled: Select an item to view its description.

I've got computer settings and user settings, but the point is I could configure limited
access to things like removable USB storage through the local group policy editor on
individual Windows devices. But of course if we've got an Active Directory domain
with domain joined devices, it makes more sense perhaps to do that centrally with one
configuration.

OK, so back here on the Domain Controller we had left the Group Policy
Management tool open. And the interesting thing about this is it is showing us our
OUs, our organizational units, Domain Controllers, which is a default OU. And of
course that will contain Domain Controller accounts, and HQ, which I've created, and that's pretty much it. If I were to go to the start menu under Windows Administrative Tools
and open let's say Active Directory Users and Computers, it looks like there's a lot
more than just Domain Controllers and HQ.

The Active Directory Users and Computers screen displays. The left pane contains the
following options, namely: Builtin, Computers, Domain Controllers and HQ. The
right pane contains a table with the following column headers, namely: Name, Type,
and Description.

But Domain Controllers and HQ are the only OUs. The other items here like Builtin, Computers, ForeignSecurityPrincipals, Users, those are containers, and in the Active Directory world, container and organizational unit, or OU, are not synonymous, because you can't apply group policy to a container, but you can apply group policy to an OU. And that's therefore why only the OUs are showing up here in the Group Policy Management tool. So let's get to it. I want to make sure that I prevent certain
users. So, for example, if I drill down under HQ, we've already got a GPO there called
HQ_Security_Settings.
So, perhaps what I want to do is prevent users in headquarters from accessing
removable media that they just plug in, such as through USB. OK, so in order to do
that I need a group policy object. Well, I've got one and it's linked to HQ, it's
indented under it, so that means it will only apply to users and computers under HQ.
So, I'm going to right click on that GPO and choose Edit.

The Group Policy Management Editor window opens. The menu bar contains the
following options, namely: File, Action, View, and Help. The left pane contains the
following options, namely: Computer Configuration, and User Configuration. Both
the options contains the following sub options, namely: Policies, and Preferences.
The right pane contains a section titled: Select an item to view its description.

Now the question is, what is the setting? Because remember, when you're working with group policy there are thousands of settings. So how do I find what I need? Well, you could filter things, but the other thing I want to point out here is that when we looked at gpedit.msc on a single computer that was not a Domain Controller, it said Local Computer Policy, but here we've got the name of the GPO in the upper left. You're going to find that most security settings are somewhere under Computer Configuration, so they apply to a computer regardless of who's logged into it. So, under Computer Configuration, I could expand Policies and I'm

Three options appear, namely: Software Settings, Windows Settings, and Administrative Templates.

interested in Administrative Templates.

I can right click on that and choose Filter Options and

A dialog box titled: Filter Options opens. It contains the following fields, namely:
Select the type of policy settings to display, Enable Keyword Filters, Enable
Requirements Filters, and Filter for word(s).

Enable Keyword Filters. Here, I've already added the word removable, so I'm
searching in group policy

He enters the value in the Filter for word(s) field.

titles and Help Text and Comments

He highlights the checkboxes present under the Filter for word(s) field.

for the word removable because I'm interested in removable media. So, I go ahead
and click OK and the little filter icon shows up. So, what I can now do is just expand
that and go all the way down to All Settings and I'm only looking at things that have
removable in their name.

The right pane displays a table with the following column headers, namely: Setting,
State, Comment, and Path.

So, for example CD and DVD Deny read access. Well, if we still use DVD and CD
drives, then that might be something to consider. If we have no need for it and we
want to prevent them from being read because maybe that would allow an installation
of some Malware or something along those lines, we could do that. So, I could double
click on that and I could enable that restriction.

The dialog box titled: CD and DVD: Deny read access opens. It contains three radio
buttons, namely: Not Configured, Disabled, and Enabled. The following fields are
also present in the dialog box, namely: Comment, Supported on. He selects the
Enabled radio button.

Same with write access: if we have CD and DVD burners, perhaps to prevent things like copying data, we could do that too. Now, strictly on the executable side, such as executing malware, we could also Deny execute access, so I could do that as well. OK, so that's something that we would consider. The other thing we could do is actually deny access to All Removable Storage classes.

So, that would take care of our USB items. Now, if you do have some you want to allow, you can permit the use of those that match a particular ID or a certain device class. Maybe your company has issued USB thumb drives of a certain type to employees and you want to allow those.

The other thing you might consider when it comes to removable media, if you're hardening that type of thing, is whether you want to allow access to removable drives that aren't encrypted. For example, Deny write access to removable drives not protected by BitLocker. BitLocker affords you some protection because it's encryption of data at rest. So, what you can do is say, look, we will not allow writing files to removable storage unless it's encrypted with BitLocker. So, I'm going to go ahead and turn that option on. Now, these are just a subset of things to consider when hardening Windows.
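
If you prefer scripting GPO changes, the GroupPolicy PowerShell module can write the registry-backed policy value directly. A hedged sketch, assuming the GPO name from this demo; Deny_All is the registry value that the All Removable Storage classes: Deny all access administrative template is documented to set:

# Set the removable-storage deny-all policy in the HQ_Security_Settings GPO
Set-GPRegistryValue -Name "HQ_Security_Settings" `
  -Key "HKLM\SOFTWARE\Policies\Microsoft\Windows\RemovableStorageDevices" `
  -ValueName "Deny_All" -Type DWord -Value 1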

It could be a lot more in depth than just dealing with preventing access to reading or
writing CDs and DVDs and removable media. But this gives us a sense of how to
filter for what we're looking for and then how to start configuring it within a GPO.
Now remember, in order for that to be put into effect, you could go to an individual machine, in this case one that is in HQ, and from the Command Prompt you could run gpupdate /force. Now, that's fine for testing or for just two or three computers, but do you really want to do that for dozens or hundreds or even thousands of computers? Of course not, in which case we would end up just waiting for group policy to take effect, because domain-joined devices will periodically contact the nearest Domain Controller to pull down things like group policy settings that should apply to that machine.

And sometimes that might happen within a few minutes, depending on when you've
captured the cycle of when the change was made. Or, it might take 90 minutes, it
might take a few hours, if you're spread across a WAN with a single Active Directory
domain so there are many considerations.
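
There is a middle ground, by the way: from a management station you can push a refresh to specific machines with the GroupPolicy module instead of visiting them. The computer name here is hypothetical:

# Remotely trigger an immediate group policy refresh on one domain-joined machine
Invoke-GPUpdate -Computer "HQ-PC01" -RandomDelayInMinutes 0 -Force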

Defining Authentication and Authorization
Topic title: Defining Authentication and Authorization. Your host for this session is Dan Lachance.

Authentication is an important part of server security. With authentication, we are talking about the proof of one's identity prior to granting access to a resource, like managing a server and perhaps its file shares. We can authenticate users, devices or
managing a server and perhaps its file shares. We can authenticate users, devices or
even software to allow access to resources. Now understanding how to authenticate
users and perhaps even devices might make sense, but what is the software side of
authentication?

Well, imagine that we have custom code running in a virtual machine in the cloud,
and that code needs to call upon some items in cloud storage. What we could do is
create an entity, for example, a managed identity or some kind of a service principal
that would allow access to cloud storage and we would associate that identity with the
virtual machine where the code is running. Thus, we're really granting access to the
software running in that virtual machine.

Now, authentication is required before resource authorization is given, specifically successful authentication. Now, another aspect of authentication is MFA, multi-factor authentication, which uses multiple authentication categories, such as something you know; that would be perhaps a username or a password or a PIN. Something you are: biometric authentication. What's very common these days, even on consumer-grade electronics, is fingerprint scanning or facial recognition.

Another category of authentication is something you have in your possession, like a smart card or a token device or even a smartphone with an authenticator app installed on it. It could be something you do, in other words, gesture-based authentication. It could be somewhere you are geographically, where you have to be in a certain area before an app is activated and can be used. We then have to think about password policies as they relate to authentication, such as the length of the password, and certainly the complexity: the number of characters and the types of characters that are required in a password, such as upper- and lowercase characters, a certain number of symbols or numbers, and any combination thereof.

The minimum password age. Now, you might not want to set that, for example, to one
day, because it means after one day people can reset their password. And if you're not
tracking password history, it's possible that people could keep resetting their password to an old known one. So, a lot of these things work in conjunction with
one another. So, a minimum password age before change is allowed, and maintaining
a password history to prevent password reuse. And, of course, a password maximum
age whereby you force password changes periodically. Also, configuring account
lockout policy so that after subsequent failed logon attempts, the account might be
locked for a period of time.
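
To make those knobs concrete, here's a hedged PowerShell sketch applying password and lockout settings to a domain's default policy; the domain name and the specific values are examples, not recommendations:

# Example password and lockout policy for a hypothetical domain
Set-ADDefaultDomainPasswordPolicy -Identity corp.example.com `
  -MinPasswordLength 12 -ComplexityEnabled $true `
  -MinPasswordAge "1.00:00:00" -MaxPasswordAge "90.00:00:00" `
  -PasswordHistoryCount 24 `
  -LockoutThreshold 5 -LockoutDuration "00:30:00" -LockoutObservationWindow "00:30:00"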

It could be an hour, it could be 2 hours, it could be 30 minutes. It's really dependent upon how it's configured. And then there's the notion of password or credential
management. For example, in Windows, you have the credential manager, which can
store things like web passwords or other app passwords in a central location.
However, you can also use third-party password manager apps like LastPass or 1Password, to name but a few. Here we've got a screenshot of creating an account to
use LastPass.

The idea with the password manager is you have one central place where you can
have it generate complex passwords that are different for all of the web apps or
websites that you visit. However, of course, you do have to have a master password for the password manager tool itself. Then there's the notion of one-
time passwords or OTPs. For example, we've got a screenshot here from Amazon.

A verification code, in this case a 6-digit verification code; often this will work in conjunction with a forgotten-password mechanism where you try to sign in to an app and click the forgot password link. You might have to supply a phone number or an email address to which the reset code might be sent. And, of course, if you know the correct reset code or one-time password, it might allow you to reset your password, but it's only good for one sign-in, and usually only within a very limited amount of time. 2-step verification requires additional authentication factors. For example, when you sign
into an app or a website, you might be asked for a username and a password, and then
you might be asked for a verification code that was perhaps SMS texted to your smart
phone device. Or maybe you have to have configured an authenticator app on your smartphone, like the Google or Microsoft Authenticator app, where you would then look at the code which normally changes every 30 seconds.
Perhaps it's a 6-digit code that you would enter here. And the combination of that verification code along with a username and a password, for instance, would allow you to successfully authenticate. The idea with 2-step verification is that it just makes it more difficult for malicious actors to break into the account. It doesn't make it impossible, but normally malicious actors tend to go for the lower-hanging fruit, what's easier to break into, as opposed to jumping through additional hoops for 2-step verification systems. That doesn't mean it can't be compromised or never is, but it's less likely than with just single-factor username and password authentication.
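
Those rotating 6-digit codes are typically time-based one-time passwords, or TOTP. You can reproduce one at the command line; a small sketch assuming the oath-toolkit package is installed, with a made-up base32 secret:

# Compute the current 30-second TOTP code for a demo shared secret
oathtool --totp -b JBSWY3DPEHPK3PXP

The server side holds the same shared secret, computes the code for the current time window, and simply compares.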

So, we know that authorization can only occur after successful authentication, so
accessing perhaps a database or access to certain functionality within an app, perhaps
even access to a network in the case of network access control or NAC. So,
authentication and authorization then are an important part of securing server
ecosystems.

Managing Microsoft Active Directory Users and Groups
Topic title: Managing Microsoft Active Directory Users and Groups. Your host for this session is Dan Lachance.

In this demonstration, I'll be creating and working with Active Directory users and
groups. So, user identities are obviously a crucial part of authentication and also
authorization, and even auditing to track who did what when.

The Windows Desktop screen displays.

So, in an Active Directory environment, then we have a number of ways that we can
work with user accounts. For example, let's go here on our Domain Controller to
Windows PowerShell. Now that's not to say you have to be at the server,

He selects the Windows PowerShell option from the start menu. The Administrator:
Windows PowerShell window opens.

of course, you could always use remote desktop to get into it. But you can also run
PowerShell commandlets remotely from one machine and have it target another
machine over the network.
That's kind of beyond the scope of what we're doing, but what we are going to
mention here is that in PowerShell, there are plenty of commandlets for managing
Active Directory. As an example, in PowerShell I can use get-command, kind of as a
discovery method to learn about what commandlet names are. For example, let's say
I'm interested in aduser, so I'm guessing that aduser Active Directory user might be
part of a name of a commandlet.

It's just a guess, turns out it's a good guess.

The command reads as: get-command *aduser*.

I can run Get-ADUser to retrieve Active Directory user accounts. I can make a new user with New-ADUser, remove a user, or set some attribute of the user; modify them, basically. If I were to run, let's say, Get-ADUser (PowerShell, of course, is not case sensitive), it asks for a filter. Well, I want to see every user, so I could just type in an asterisk, which is a wildcard, and all of my user accounts are returned. OK, that's fine. However, let's work in Active Directory Users and Computers.
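
Before we switch to the GUI, here's roughly what the same work looks like scripted; a hedged sketch in which the OU path and domain name are hypothetical stand-ins:

# Create an OU, then a user inside it who must change password at first logon
New-ADOrganizationalUnit -Name "Eastern_Region" -Path "DC=corp,DC=local"
New-ADUser -Name "Max Bishop" -SamAccountName "mbishop" `
  -Path "OU=Eastern_Region,DC=corp,DC=local" `
  -AccountPassword (Read-Host -AsSecureString "Initial password") `
  -ChangePasswordAtLogon $true -Enabled $true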

I'm going to open up the start menu and go into Windows Administrative Tools where
I'm going to run Active Directory Users and Computers.

A list of options appear below.

The Active Directory Users and Computers page opens. He highlights the options
present in the left pane.

Here, of course, we have our Active Directory domain listed along with Builtin
containers like Users, Computers, and literally Builtin as well as some OUs. Now, the
Domain Controllers OU is Builtin, it's automatically there when you create an Active
Directory domain. But I've created one called HQ, headquarters. Now, what we could also do is create other OUs; for example, I've got a button up here in the bar that I could click to create a new Organizational Unit.

A dialog box titled: New Object - Organizational Unit opens. It contains a field,
titled: Name. The OK, Cancel and Help buttons display towards the bottom.

And I'm going to create one called Eastern_Region and I'll click OK; now that shows up. So, from here I could begin to work with user accounts, computer accounts, and groups. What is common is that many organizations, at least larger ones in the enterprise that use Active Directory, will create subordinate OUs. So for example, under Eastern_Region, and this time maybe I'll just right-click,
Multiple options appear in a context menu, namely: Delegate Control, Move, Find,
New, and All Tasks.

I can choose to create a New Organizational Unit.

Multiple sub options appear, namely: Computer, Contact, Group, Organizational Unit, and Printer. He selects the option: Organizational Unit.

Maybe, I'll make a general one under Eastern_Region called Users and another one
perhaps called Computers. You don't have to do this, but it's just a way to organize
Computers and Users. So, maybe in the Users OU under Eastern_Region within my
Active Directory domain, I'll create a new user. Specifically,

He selects the new user creation option from the top header. A dialog box titled: New
Object - User opens. It contains the following fields, namely: First name, Initials,
Last name, Full name, and User logon name.

let's say, a user MBishop. So, that's going to be Max Bishop. Let me just change the
nomenclature here a little bit.

The User logon name will be mbishop and again, we want to make sure we adhere to
organizational naming standards when it comes to determining the User logon name.

He selects the Next button. The following fields appear, namely: Password, Confirm
password, and three check boxes.

I'll specify and confirm a default password for that account. From a security perspective, stay away from using the same default initial password for all newly created users, because it's not very hard to find that out within the organization, and that's the last thing that we want. We don't want accounts being hacked before a user gets a chance to sign in for the first time and change that password. So, User must change password at next logon is turned on; that's a good thing, I like it, I'll leave it. Next, Finish. We now have user Max Bishop; we could double-click on that user account here and if we wanted to we could fill out all

A dialog box titled: Max Bishop Properties opens. It contains multiple tabs, namely:
General, Address, Account, and Organization. Currently, the General tab is active.

of these other attributes: the Description, the Office, E-mail, Web page. Under Organization I could fill in Job Title, for example, Sales Manager, and Department might be Sales. Now you don't have to fill all these in, but it can be useful; these attributes can actually also be used by security access control mechanisms. An example of that would be something in Microsoft called Dynamic Access Control, which controls access to the file system based on conditions such as someone being in the Sales department, without having to add them to a group. So, sometimes it can be very important that we fill in all of these details.

If I go under the Account tab, here's where we could specify things like Logon Hours, where Logon is Permitted versus when Logon is Denied. So, maybe every day of the week after 8:00 PM, what we want to do is deny logon. We could select that for all the days of the week if for some reason that was the case in the organization, and maybe we only want to allow work to begin at six or seven in the morning, so you could specify another block of time that you wanted to prohibit logon.

So, there we go. We've got some of those options available here as well. Then we've got other options here like Password never expires. That's not a good one. But you might temporarily disable an account if a user is away, perhaps on parental leave, that type of thing. So, there are a number of things that we can do here. You can also expire an account if you know, for example, you're hiring a contractor for a specific period of time, or perhaps a summer student that's only going to work for a month or two.

It might make sense to set the account expiry right away when you create it, if you know what that date will be, so you don't forget. The last thing we want is extra user accounts hanging around that are not being used. OK, I am going to click OK. Next thing I want to do is create a group here. So, I'm going to create a group and I am going to call the group Sales.

He enters the group name in the New Object - Group dialog box. The Sales group
displays on the right pane.

I'll click OK. Now, within the Sales group, I can go to the Members tab and I can
click Add and from here, I can select users

He double clicks the Sales group option. A dialog box titled: Sales Properties opens.
It contains the following tabs, namely: General, Members, Member Of, and Managed
By. The Add, Remove, OK, Cancel, and Apply buttons display towards the bottom.

The Select Users, Contacts, Computers, Service Accounts, or Groups dialog box
opens. It contains the following fields, namely: Select this object type, From this
location, and Enter the object names to select (examples). The Object Type, Location,
and Check Names button is also present on the dialog box.

that I want to be a member of this group. I'm going to search for max.

I'll click Check Names. There's Max Bishop. Excellent, so we've just added that
account. Now, I'm also going to go back one level in the OU structure to
Eastern_Region because here I'm going to create a group called Eastern_Users.

The New Object - Group dialog box opens.

The reason I'm doing this is I want to point out that groups can be members of groups
which can really work well if it's planned properly. So, I'm going to open up the
Eastern_Users group and I'm going to go to Members. I'm going to click Add, then
this is where I'm going to search for sales.

The Select Users, Contacts, Computers, Service Accounts, or Groups dialog box
opens.

I click Check Names. The Sales group is now a member of this group. That can be useful because it allows me to manage things on a broader scale, such as through the Eastern_Users group, or I can get a little more specific and assign permissions, for example, to just the Sales group. Maybe permissions to files or databases, or web apps, or whatever the case might be.
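
The group work in this demo has a scripted equivalent as well; a minimal sketch using the same names and assuming the user created earlier:

# Create the groups, add the user to Sales, then nest Sales inside Eastern_Users
New-ADGroup -Name "Sales" -GroupScope Global
New-ADGroup -Name "Eastern_Users" -GroupScope Global
Add-ADGroupMember -Identity "Sales" -Members "mbishop"
Add-ADGroupMember -Identity "Eastern_Users" -Members "Sales"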

Managing Linux Users and Groups
Topic title: Managing Linux Users and Groups. Your host for this session is Dan Lachance.

For the longest time Unix and Linux systems have had a way of dealing with local
users and groups, by storing those definitions in text files in a protected part of the file
system. Now, of course, depending on what your specific implementation is, you
might use a directory service like an LDAP compliant directory service for
authentication to Linux or Unix. But in this case, we're going to stick with the old
school local files. So, to get started let's explore that a bit here in Ubuntu Linux.

The terminal window displays.

So, what I want to do here is change directory to /etc. And what I want to do is run cat
against a file in here called password,
He enters the command: cd /etc.

but it's spelled passwd.

Now, in the password file what we have are user accounts that get listed here. So, for
example, let's add our own new user account and see how that gets added to the etc
password file. I can use the useradd command with -m because I want to add a home
directory for the user; let's add user mbishop. Now at this point, it says, well, it can't
do that: Permission denied. Well, that's because if I run whoami, I'm not logged in as
root; I'm logged in as a user by the name of cblackwell. So, I'll use the Up arrow key
to go back to my useradd command and I will prefix it with sudo to run this with
elevated privileges. So, it asks me for the password for cblackwell, which is normal
when you run sudo, so I'll go ahead and specify that, and press Enter.

And at this point, no news is good news. It should have added user mbishop. One
way we can find out is by using the tail command to read the last 10 lines of a file
such as passwd. Sure enough, when we do this, the last entry in that file is for

The command reads: tail passwd.

a user by the name of mbishop. This is a colon-delimited file; colons separate the
various details. The x in the second position reflects the fact that a hash of the
password is stored elsewhere. We'll examine that in a moment. And then we have
some user ID and group ID numbers and then some other details, such as the home
directory, in this case /home/mbishop, and then the shell to launch by default when
that user signs in, so /bin/sh.
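
To make those fields concrete, here is a hedged sketch of the entry we just read; the
numeric IDs are whatever your system assigned, so treat these values as an
illustration:

    # Anatomy of an /etc/passwd entry (seven colon-delimited fields):
    # name:password:UID:GID:comment:home:shell
    mbishop:x:1001:1001::/home/mbishop:/bin/sh
    # mbishop       = login name
    # x             = password hash lives in /etc/shadow
    # 1001:1001     = user ID and primary group ID
    # (empty)       = comment/GECOS field
    # /home/mbishop = home directory; /bin/sh = default login shell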

So indeed, if I do an ls of /home, we do have a home directory for mbishop. But we
did say that the x in the second position in the password file indicates that there is a
password hash for that user in a separate file, and that separate file is called shadow.
I'm in the etc directory already; if I cat the shadow file, or rather if I attempt to, I get a
permission denied. So, I need to prefix that with sudo; it's a sensitive file.

The command reads: sudo cat shadow.

When I do that, I get a list back of the details in this file.

I'm primarily interested in mbishop. mbishop is at the very bottom and in the second
position what we have is an exclamation mark. Now that's because we haven't yet set
the password for that account. And, of course, we want to set a password in alignment
with organizational security policies. So, what we can do is run the password
command, abbreviated as passwd, prefixed with sudo, and then we'll give it the
username, in this case mbishop, because I want to set a password for that account.

The New password message prompts.

So, I'm going to go ahead and specify a password, and as you might imagine it wants
me to confirm it or retype it to make sure I know what it is. OK, so I'm going to go
ahead and do that and then it says password updated successfully. Let's go back and
check the shadow file a second time. I'm going to clear the screen, use the Up arrow
key so that we can take a look at the shadow file. Now, it looks very different for the
mbishop line, which actually wraps now to the second line after that. But instead of
just an exclamation mark, we now actually have a password hash.

Now a password hash means that the password was fed through a one-way
cryptographic algorithm that resulted in this unique value. And so when user mbishop
signs in he will type in the correct password presumably, which will be fed once again
through the same hashing algorithm and should result in this exact same hash.
Because if it does result in the same hash, what that means is that Max Bishop knows
the password. And so, he will be authenticated and then presumably can have access
to certain resources to which he has been granted permissions.
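
If you'd like to see that one-way function in action, here's a hedged sketch using the
openssl command, which can reproduce the SHA-512 crypt-style hashes stored in
/etc/shadow; the salt and password are made-up values, and this assumes an OpenSSL
build that supports the -6 option:

    # Generate a SHA-512 crypt-style hash ('examplesalt' and 'P@ssw0rd!'
    # are placeholder values for illustration).
    openssl passwd -6 -salt examplesalt 'P@ssw0rd!'
    # The output starts with $6$ (SHA-512), then the salt, then the hash.
    # Running the same password through with the same salt reproduces the
    # identical hash, which is exactly how sign-in verification works.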

So, that's just a little bit about the user side of things. Now, what we could do to test
that account if we really wanted to, we could stay logged in as who we currently are
logged in as, but we could use su, the switch user command, with a dash, which
means I want to run a full login for a specific user, including running login scripts and
so on. And that account in this case is going to be mbishop.

So it says, OK, well, what's the password in this case

The Password message prompts.

for mbishop, since that's who you're trying to sign in as. OK, well I'll go ahead and
specify the password, and I'm in now as user mbishop. Now I can prove this by
typing whoami; I'm logged in as mbishop. If I were to type pwd, print working
directory, we see that when mbishop signs in, he is automatically placed in his home
directory, to which he has access. So, if I type exit, I'm exiting out of that switch user
session. So I'm back to being cblackwell, and if I type whoami, of course, that's
evidenced by the output of that command.

Now we can also create groups in Linux. So, for example, let's say I want to add a
group, using groupadd, and let's call it sales. Now again, I get Permission denied,
because in order to add a group, well, you're going to have to run it either when you're
logged in as root or with elevated privileges, meaning prefixed with sudo. So, at this
point, we've added a group called sales. Let's take a look at where that was written to.
So I'm going to run the tail command as opposed to cat; it doesn't really matter, tail
just shows the last 10 lines.

I'm going to go into /etc, which I suppose we're already in, and the name of the file
I'm interested in is called group.

The command reads: tail /etc/group.

So, we have our sales group shown here; the x in the second position means that we
don't have a group password yet. You can actually have group passwords, but they're
not used very often, and the group ID that was assigned by Linux is 1002. So, what I'd
like to do is make sure that mbishop is a member of the sales group.

So, if I were to tail /etc/passwd for mbishop, the default group for that account is
group with ID 1001. I want it to be 1002. Now I could just edit that text file if I really
wanted to, or I could run sudo usermod; the user I want to modify is mbishop, with -g
to set the group to 1002. So, now if I tail /etc/passwd again.

The command reads: tail /etc/passwd.

Well, it looks now like 1002 is the primary group for mbishop and again if I tail
/etc/group, 1002 is the ID of the sales group. So, let's do an su once again, su - for
mbishop sign in as that account.

And we'll specify the password for that account, because what I want to do is just
create a file and take a look at what happens here. So, we can use the touch command
in Linux to create a file, a 0-byte empty file, and, of course, mbishop can do that in his
own home directory. The command reads: touch file1.txt.

If I clear the screen and do an ls -l, notice that mbishop is the owning user of the file
that was just created. So, he gets the user permissions, which here are read and write. And
notice that sales was added as the group automatically because of our change for the
primary group association with that account.

And so the sales group gets the second set of permissions, which in this case is just
read. So that's just a little bit of how users and groups work in a Linux environment.
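
If you want a quick way to confirm that membership change, the id command
summarizes a user's UID and groups; the output shown is a hedged illustration based
on the IDs in this demo:

    # Show the user ID, primary group, and supplementary groups.
    id mbishop
    # Example output (IDs will vary from system to system):
    # uid=1001(mbishop) gid=1002(sales) groups=1002(sales)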

Managing AWS Users and Groups


Topic title: Managing AWS Users and Groups. Your host for this session is Dan
Lachance.

In this demonstration, I'm going to be managing AWS users and groups. So, you can
configure user accounts, add them to groups, and control permissions to cloud
resources by creating those users and groups directly in the cloud, which is what I'll be
doing here in AWS.

The AWS Management Console window opens. It contains the following options,
namely: AWS services, Build a solution, Stay connected to your AWS resources on-
the-go, and Explore AWS. It contains a search bar at the top.

There are also ways that you could link a directory service similar to Active Directory
in the cloud to your on-premises directory service, so that you could reuse your
existing user accounts. But in this case, we're going to be building them from scratch
directly here in the AWS Management Console. So, to get started, what I'm going to
do is, I'm going to search the field at the top for iam identity and access management,

A list of Services opens.

and then I'll click on that search result to open the IAM Management Console, where
over on the left, I can click Users.

The Identity and Access Management (IAM) console window displays. The left pane
contains the following options, namely: Dashboard, User groups, Users, Roles, and
Policies. The right pane contains a section titled: Security recommendations.

The right pane displays a section titled: Users. This section contains a search bar and
a table. The Delete and Add users buttons are present above the table.

Now certainly I am signed in with the user account, here in the upper right, this is
called the AWS root account. But I can create additional IAM users and I might do
this because I want to allow access to certain cloud-based apps. Or perhaps I would
create additional AWS user accounts for other cloud technicians and perhaps give
them limited access to manage AWS resources.

So, I'm going to click the Add users button here over on the right

Two sections display, namely: Set user details, and Select AWS access type. The first
section contains a field, titled: User name. The second section contains a field, titled:
Select AWS credential type.
and I'm going to fill in the name. I'm going to start by creating user Codey Blackwell.
Now when I spell out the name, notice it says, in effect, we're not looking for you to
spell out the name, we're looking for the user account name. So, I have a message
about an invalid name; it has to be alphanumeric characters, underscores, and so on.
OK, so I'm going to go ahead then and follow my organizational naming standard and
I'll just put in cblackwell. Now down below in AWS, I have to determine: is this an
account for Programmatic access, like for software developers, where I can specify
that I want an access key ID and a secret access key that developers would need in
order to use the AWS API (the application programming interface), the CLI command
line tool set for AWS, or the software development kit, the SDK? Or is this just AWS
Management Console access? I'm in the console here; is that what I want to do?

So, in this particular case, I'm going to go ahead and create a Password based AWS
credential type,

The Console password field appears. It contains two radio buttons, namely:
Autogenerated password, and Custom password. A text box displays below the
Custom password radio button.

because perhaps I'm doing this for User cblackwell, who is an additional cloud
technician, so they need access to the console. Down below, I can have an
Autogenerated password or I can specify a Custom password, which I will do here. I
would need, of course, to communicate this to the user, and ideally, we're not using
the same password to initialize each new IAM user that we create, because that would
be a security risk. Do we require a password reset upon next sign-in? Sure, let's leave
that turned on. Then I'll click Next for permissions.

The Set permissions section displays. It contains three tabs, namely: Add user to
group, Copy permissions from existing user, and Attach existing policies directly.

So, at this point for this new user account we can Add the user to a group. And if the
group has permissions then the user by extension will get those permissions, of
course. There's even a convenient button here to Create a group, if I don't have any, I
can also instead Copy permissions from an existing user that might already have the
same permissions that this user will need, or I can Attach policies directly to this user
account where policies provide permissions. I'm not going to do any of this right now.
Essentially, I'm going to choose attach policies directly, but I'm not going to attach
any and I'll click Next: Tags.

The Add tags (optional) section displays. It contains a table with the following column
headers, namely: Key, Value (optional), and Remove. Text boxes display below the
column headers.

I can add a Key and Value pair, or numerous Key and Value pairs, up to 50 tags in
total. I'm not going to do that, I'm going to click Next: Review

The Review section displays. It contains multiple details.

and I'm going to Create the user account. OK, it says, Success, you have successfully
created the user and I could even Send an email with login instructions to

The Download.csv button displays. A table displays below the button with the
following column headers, namely: User, and Email login instructions. It contains
one row.

user cblackwell, I could expand that account to see some of the details.

So, it created the user. It automatically attached a policy, even though I didn't
specifically select one: the IAMUserChangePassword policy, to allow that password
reset to happen. And it created the login profile, or the account, for that user. So, I'm
going to Close, and if I go back to my list of Users, cblackwell now shows up. If I
click to open up cblackwell, then I get into the properties of that user account.

The Summary page opens. It displays the following tabs, namely: Permissions,
Groups, Tags, Security credentials, and Access Advisor. Currently, the Permissions
tab is active. It contains a button labelled as: Add permissions.

So, this is where I can click Add permissions if I want to again add a member to a
group

He highlights the tab present in the Grant permissions section, namely: Add user to
group, Copy permissions from existing user, and Attach existing policies directly.

or maybe copy permissions from another user to this user, or attach policies directly,
whatever the case is. But normally, it's done at the group level.

He moves back to the Summary page.

Also, while I'm in here, I can modify the Group membership and modify Tags. If the
password is forgotten, for example, under Security credentials I could go ahead and
Manage that password. I could assign a multi-factor authentication or MFA device,
and I could create access keys here for programmatic access, which, as we saw, was
an option when we initially created the user. But what I want to do here is create a
user group, so over on the left, I'm going to click User groups and I'm going to choose
Create group

A table and a button labelled as: Create group display.

The Create user group page opens. It contains the following sections, namely: Name
the group, and Add users to the group - Optional.

and I am going to call this group East_Admins.

Now what I can do down below is Add users to the group right away. So, I want to
add user cblackwell to the East_Admins group and I can attach permissions policies.

He scrolls down in the page. The Attach permissions policies - Optional section
displays. It contains a table with the following column headers, namely: Policy name,
Type, and Description. The table contains multiple rows. A search bar displays above
the table.

So, each of these policies is a collection of related permissions, such as for access to
things like Glacier, Amazon's archiving solution; for instance, ReadOnlyAccess to
Glacier archives. What I'm going to do is search, let's say, for s3, S3 buckets, because
what I want to do is allow S3 read-only access, let's say, for this group, so that its
members will be able to read files that are stored in an S3 storage bucket. Then I'll
choose Create group. So, now the group is created and, of course, I could click on the
group to open up its properties and I could modify or manage the user membership of
this group.

The East_Admins page opens.

And as you might guess, if I go back to my Users view and go into cblackwell.

The Summary page opens.

If I go into Groups now, well, of course, we see that this user is a member of the
East_Admins group and under Permissions we're also going to notice that we have a
group attachment for permissions policies.

If I click the Show 1 more, it's the AmazonS3ReadOnlyAccess policy that stems from
the fact that this user is a member of the East_Admins group. OK, well, that's fine. If I
go into the Security credentials tab for this user at the top, there is a Console sign-in
link. I'm going to go ahead and copy that we're going to test signing in as user
cblackwell.

The Amazon Web Services Sign-in page displays. It contains the following fields,
namely: Account ID (12 digits) or account alias, IAM user name, and Password.

So, when I follow that link, it automatically fills in the Amazon Account ID, it wants
the IAM user name. So, I'm going to go ahead and fill that in and I'll also specify the
password that we specified when we created that account.

The screen displays the following fields, namely: AWS account, IAM user name, Old
password, New password, Retype new password.

And it knows that we have to change that password, so I'll go ahead and enter the old
one. Then I'll enter and confirm a new one, and I'll choose Confirm password change.
And now I'm signed in, as evidenced in the upper right, as user cblackwell.

The AWS Management Console window displays.

So, if user cblackwell were to try to go into EC2 instances and perhaps go in and try
to Launch an instance, but even before doing that

The New EC2 Experience console window opens. The left pane contains multiple
options. The right pane displays a table and the following buttons, labelled as:
Instance state, Actions, and Launch instances.

The page with section: Step 1: Choose an Amazon Machine Image (AMI) opens.

notice, it says You are not authorized to perform this operation, not even to view the
Instances. But if I go into s3, that would be S3 storage buckets, if I had any, then this
account would have read only access.

He searches for S3 in the search bar.

The Amazon S3 console window opens.

So, it looks like we do actually have one bucket, so if we were to open that up, well, it
looks like it's letting us in and we have the ability to start browsing and seeing the
content. So, that's how you can start to work with AWS users and groups.
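
For reference, the same user and group setup can be scripted with the AWS CLI; this
is a hedged sketch, where the names match this demo but the password is a made-up
placeholder:

    # Create the IAM user with a console password that must be changed at
    # next sign-in ('TempP@ssw0rd1' is a placeholder).
    aws iam create-user --user-name cblackwell
    aws iam create-login-profile --user-name cblackwell \
        --password 'TempP@ssw0rd1' --password-reset-required
    # Create the group, add the user to it, and attach read-only S3 access.
    aws iam create-group --group-name East_Admins
    aws iam add-user-to-group --user-name cblackwell --group-name East_Admins
    aws iam attach-group-policy --group-name East_Admins \
        --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess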

Managing Microsoft Azure Users and Groups


Topic title: Managing Microsoft Azure Users and Groups. Your host for this session
is Dan Lachance.

In this demonstration, I'm going to be using Microsoft Azure to create Azure users
and groups. Now, just like with other public cloud solutions, like

The left pane displays the following options, namely: Create a resource, Home,
Dashboard, All services, All resources, Resource groups, and SQL databases. The
right pane contains three sections, namely: Azure services, Recent resources, and
Navigate.

Amazon Web Services or AWS, you could link a cloud-based directory service to
your on-premises directory service like Microsoft Active Directory. And you could
reuse existing on-premises user accounts to allow, for example, users to sign in with
credentials they already know to access cloud apps. But you can also create user
credentials and groups directly and solely in the cloud, and so that's going to be our
focus in this particular demonstration. First things first, in Microsoft Azure we have
the concept of Azure AD. An Azure AD or Azure Active Directory tenant is created
by default when you sign up for Azure, but you can create additional Azure AD
tenants. Kind of like creating additional on-premises Active Directory domains to
keep things separated for administrative purposes or maybe based on cities, states,
countries, whatever the case is.

In the upper right here, because I've signed into the Azure portal, I can click and
choose Switch directory

He selects the profile icon. The My Microsoft account, and Switch directory links
display.

The Portal settings | Directories + subscriptions page displays. It contains a search
bar and some text. Two tabs display below the text, namely: Favorites, and All
Directories.

and I can choose All Directories to get a list of all of the Azure AD tenants, which are
similar to different Active Directory domains, although Azure AD doesn't support all
the features that Active Directory does, such as Group Policy; at any rate, it is a user
credential store. So, I could switch to any of these specific Azure AD tenants; for
example, I'm going to Switch to Twidale Investments, that's a separate Azure AD
tenant.

The Welcome to Azure! page opens. It displays three cards, labelled as: Start with an
Azure free trial, Manage Azure Active Directory, and Access student benefits.
And if I were to open the menu over on the left, I could go to Azure Active Directory
where I could click on Users.

The Twidale Investments | Overview page opens. He selects the Users option in the
left pane.

The Users | All users (Preview) page opens. The left pane displays multiple options,
and the right pane displays a table and the following buttons, labelled as: New user,
New guest user, Refresh, and Bulk operations.

Any existing user accounts will be shown here. I want to create a new user. Now
notice I have a New user button at the top as well as a New guest user button.
Actually, that option is also available even if I just click New user; I can choose
whether it's a regular user in Azure AD or

The New user page opens. It displays the following cards, namely: Create user, and
Invite user. A section titled: Identity displays below the cards. This section displays
the following fields, namely: User name, and Name.

I can invite a user through email, which is a guest user. Down below, I'm going to
specify a User name of mbishop. And what happens in Azure AD is it uses the name
of my Azure AD tenant for the DNS suffix, which in this case is twidaleinvestments,
and then it tacks on .onmicrosoft.com at the end of the DNS suffix.

And I could change that if I really wanted to, but I'm going to go with what I have.
So, the name spelled out is going to be, in this example, Max Bishop. I can fill out as
many details as I choose here, although down below I have to deal with the password.
Will it be Auto-generated, or should we specify it? I'm going to let it be
auto-generated, and I'm going to copy it, because then I can communicate that
password to the user. Down below, we can also add the user to a
Group during user account creation or at any time thereafter. Down below, I can also
specify a Usage location, for example, I'm going to specify the United States because
there are some licenses that you might use for tools, like Microsoft 365 and so on.
That might require you to have a usage location set in the user account before you can
assign licenses.

Then I can fill in other attributes, like Job title, Department, Company and so on. I'm
OK with my selections here, so I'm going to Create user Max Bishop.

The Users | All users (Preview) page displays.


And so, Max Bishop here, now exists in Azure AD and if I click on the link for that
account,

The Max Bishop | Profile page displays.

the full sign in name for that account shows up here. mbishop@, in this case,
twidaleinvestments.onmicrosoft.com. So that's the sign in name. Of course, there is a
Reset password button up at the top, if for some reason the password is forgotten for
that user. But what we can also do when we're looking at a User's Profile, which is
what we're doing here, we can click on Groups on the left to see if that user is a
member of any Groups. So, let's go back and create a group in Azure AD. So, I'm
going to follow my breadcrumb trail, so to speak, in the upper left and click on the
Twidale Investments link to go back to Azure AD, where I'm going to click Groups
on the left and this is where the group definitions in Azure AD exist.

The Groups | All groups page displays.

Now I don't have any groups, but I will in a moment. I'm going to click New group.

The New Group page opens. It contains multiple fields, namely: Group type, Group
name, Group description, Membership type, and Owners.

And it's going to be a Security group, not a Microsoft 365 group. The name of this
group is going to be Western_Region_Users. Now down below, for the Membership
type, I can assign members; that's what we're probably all used to, where we
manually add members to the group. Then notice I have Dynamic User and Device
Membership options, so if I were to choose Dynamic User, then I could Add a
dynamic query,

He selects the Add dynamic query link present below the Owners field. The Dynamic
membership rules page opens. It displays the following tabs, namely: Configure rules,
and Validate Rules (Preview). Currently, the Configure Rules tab is active. It displays
a table with the following column headers, namely: And/Or, Property, Operator, and
Value.

where I could Choose a Property, so, for example, maybe department, and if the
department Equals a certain value, you know, such as Sales, then perhaps that's how
we determine members of the group. So, we could do that, but I won't; I'm going to
click Discard and Yes.

The Discard button is present above the table. He selects Discard. A Discard changes
message appears.

It's very important, though, to understand that we can have dynamic groups, and the sketch below shows what such a rule looks like.
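
As a hedged illustration, Azure AD dynamic membership rules are written in a simple
property-operator-value syntax; the department value here is an assumption for this
example:

    (user.department -eq "Sales")

With a rule like that in place, any user whose department attribute equals Sales
automatically becomes a member of the group, and is removed again if the attribute
changes.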

In this case, I'm just going to go back to Assigned,

He moves back to the New Group page.

I can select a group owner that can manage members of the group, so I'll click the No
owners selected link, it's going to be Codey Blackwell.

The Add owners page appears on the right. It contains a search bar and a list.

That's the owner of the group and I'll click Members, No members selected, I'll click
that link and I want to add a member here, it's going to be user Max Bishop, so I'll
Select Max Bishop, Create. Now, the way that we can use this group is really
multifaceted. If I go back to my Azure AD tenant in the breadcrumb trail in the upper
left, one of the things I could do is go to Enterprise applications.

The Enterprise applications | All applications page opens. The left pane contains
multiple options. The right pane contains few buttons and a table. The table contains
multiple rows.

These are applications that are accessible by authenticating through Azure AD. It also
shows up on the myapps.microsoft.com page when a user signs in. So, for example,
Skype for Business Online, one of the things I could do is click on that app to open up
its Properties here in Azure AD. And I could click Users and groups on the left

The Skype for Business Online | Overview page opens. He selects the Add user/group
button on the right pane.

and I could Add a user or group that should have permissions to use this app. It will
show up for them on the myapps.microsoft.com page.

The Add Assignment page displays. It contains the following fields, namely: Users
and groups, and Select a role. A window titled: Users and groups displays towards the
right. It contains a search bar and a list.

So, instead of specifying an individual user, I could specify a group like our
Western_Region_Users group. And there's a note that says When you assign a group
to an app, only users directly in that group will have access, not nested groups,
meaning groups within groups. That's fine, I'll click Assign and the Application
assignment succeeded. So that gives us a sense then of how we would begin to use an
Azure AD user account and group. The last thing I'll do is I'm going to sign in as that
user. So, in a new browser window, I'll go to myapps.microsoft.com, it'll change the
URL and this is where I'm going to sign in as user Max Bishop. So, I'll put in the
entire email address and then I'll click Next, where it will prompt me for the password
for that account. I'll specify that password and Sign in, at which point it will ask me to
specify the Current password and set a new one. After which, when I've specified the
New password, I'll click Sign in. And so once we've signed in here as Max Bishop on
the myapps.microsoft.com page, we now have the Skype for Business Online app
available.
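
For reference, a similar user and group setup can be scripted with the Azure CLI;
treat this as a hedged sketch, since flag and property names vary between CLI
versions and the password is a made-up placeholder:

    # Create the Azure AD user ('TempP@ssw0rd1' is a placeholder).
    az ad user create --display-name "Max Bishop" \
        --user-principal-name mbishop@twidaleinvestments.onmicrosoft.com \
        --password 'TempP@ssw0rd1'
    # Create the security group, then add Max Bishop as a member
    # (newer CLI versions expose the object ID under the 'id' property).
    az ad group create --display-name Western_Region_Users \
        --mail-nickname Western_Region_Users
    az ad group member add --group Western_Region_Users \
        --member-id "$(az ad user show \
            --id mbishop@twidaleinvestments.onmicrosoft.com \
            --query id -o tsv)"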

Enabling AWS User Multi-factor Authentication (MFA)


Topic title: Enabling AWS User Multi-factor Authentication (MFA). Your host for this
session is Dan Lachance.

In this demonstration, I'm going to be enabling multi-factor authentication or MFA for
an AWS user account in the cloud. So, I've already signed into the AWS Management
Console. Now your organization might require multi-factor authentication for some or
all user accounts. It enhances user sign-in security because instead of just something
you know, like a username and a password, there would be an additional
authentication factor, maybe having to possess some kind of a hardware token device
with a changing code that you also have to enter in to authenticate. So that would be
something you have along with something you know, hence multi-factor or two-factor
authentication. And sometimes organizations will perhaps only require that on
powerful administrative accounts, but really, it's a good practice for all users. Yes, it
might be a bit inconvenient, but that's always the case with higher security levels.
Anyways, let's go ahead and enable MFA for an AWS user.

So, here from the console, I'm going to search for iam, identity and access
management because I'm going to click on that, that's where we manage users and
groups in Amazon Web Services. Now I can Add MFA for the root user account,

The Identity And Access Management (IAM) console window displays. The left pane
contains the following options, namely: Dashboard, User groups, Users, Roles, and
Policies. The right pane contains a section titled: Security recommendations. The
Security recommendations section contains a field, titled: Add MFA for root user,
with a button labelled as: Add MFA.

that's the account I'm using to sign in here directly, for admin purposes. That's what
you get when you sign up for Amazon Web Services. So, I could click the Add MFA
button and continue on to Activate it.

The Your Security Credentials page opens. The right pane displays the following
sections, namely: Password, Multi-factor authentication (MFA), CloudFront key
pairs, and Account identifiers. The Multi-factor authentication (MFA) section
contains a button titled: Activate MFA.

We can also do it for individual IAM users beyond the AWS initial root account, so I
could click Users over on the left, here I've got a user by the name of cblackwell

The right pane displays a section titled: Users. This section contains a search bar and
a table. The Delete and Add users buttons are present above the table.

and what I'd like to do is enable MFA for it. But first thing we should do is see
whether or not it's already been enabled. Pretty easy to figure out, if you click on the
user to open up their profile then one of the things you can do is go to the Security
credentials tab

The Summary page opens. It displays the following tabs, namely: Permissions,
Groups, Tags, Security credentials, and Access Advisor. Currently, the Permissions
tab is active. It contains a button labelled as: Add permissions.

and there are plenty of interesting things here. This is where we have the Console
sign-in link, if that user needs to sign-in to the AWS Management Console, because
perhaps they're an assistant cloud technician.

The Console password is Enabled, but if it's forgotten we could Manage it. And what
we're here to do is Assign an MFA device, currently, it states Not assigned. So, I'm
going to click the Manage link next to Assigned MFA device.

The Manage MFA device dialog box opens. It contains a section titled: Choose the
type of MFA device to assign. It contains three radio buttons, namely: Virtual MFA
device, U2F security key, and Other hardware MFA device.

Now I can use a Virtual MFA device. What this means is I can install an
Authenticator app, the Google Authenticator app, Microsoft Authenticator app, that
type of thing. I can install that on my computer, on my smartphone, or tablet, and so I
would have to have that device with me to sign-in with MFA. Or maybe I've got some
kind of a hardware device that can be used, like a U2F security key, like a YubiKey or
some other type of hardware MFA device, some kind of hardware token. So, in this
particular case, I'm going to go with Virtual MFA device. I've already got the
Microsoft Authenticator app installed on my smartphone, so I'm going to go with that.
I'm going to click Continue and I'm going to choose Show QR code.

The Set up virtual MFA device section opens.

So, in my Authenticator app on my smartphone, I need to add a new account, which is
going to activate the camera on my phone and allow me to scan in this QR code,
which I'm going to do now. This QR code is unique to my user account. When I say
my user account, I mean the user that we are setting up here for MFA.

Now the moment I scan that QR code, it adds my account, in this case for cblackwell,
to the Authenticator app on my smartphone. So that's good. But down below, what I
have to do is enter two of these six-digit codes, which time out after 30 seconds, from
my Authenticator app.

The two fields are labelled as: MFA code 1, and MFA code 2.

So, I'll enter the first one and then once it switches over within 30 seconds, it's just
about there now, I'll enter the second six-digit code that's unique. I have to have this
code in addition to knowing the username and password sign-in. OK, I've entered in
two cycles of the code, I'll click Assign MFA. We're good to go, it worked, so I'm
going to Close out. So, let's see what happens then, when we sign in as user
cblackwell. If I scroll up here, this is my user account cblackwell. So, I'm going to
copy the sign-in console link and we're going to test it in another web browser.

The Amazon Web Services Sign-in page displays. It contains the following fields,
namely: Account ID (12 digits) or account alias, IAM user name, and Password.

OK, so I'm going to specify cblackwell as the user name. I still need to know the
password, this does not negate the need for that, so I'm going to enter that.

The Multi-factor Authentication section appears. It contains a field, titled: MFA Code.

Now, it wants my MFA Code, and this is where I need to have, in my example, my
smartphone near me with the Authenticator app, the account added, and the code. So,
I'm going to go ahead and enter that before it times out, and then I'll click Submit,
because remember, those codes change every 30 seconds. And I'm now in, using
multi-factor authentication, as user cblackwell.
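
The same enrollment can be done from the AWS CLI; a hedged sketch, where the
account ID and the two six-digit codes are placeholders you'd replace with real values
from your Authenticator app:

    # Create a virtual MFA device and save its enrollment QR code.
    aws iam create-virtual-mfa-device \
        --virtual-mfa-device-name cblackwell-mfa \
        --outfile qrcode.png --bootstrap-method QRCodePNG
    # After scanning qrcode.png in the Authenticator app, activate the
    # device with two consecutive codes (123456 and 654321 are placeholders).
    aws iam enable-mfa-device --user-name cblackwell \
        --serial-number arn:aws:iam::123456789012:mfa/cblackwell-mfa \
        --authentication-code1 123456 --authentication-code2 654321
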
Enabling Microsoft Azure User MFA
Topic title: Enabling Microsoft Azure User MFA. Your host for this session is Dan
Lachance.

You can enable multi-factor authentication or MFA for user accounts in Microsoft
Azure.

The Welcome to Azure! page displays.

The reason you would do this is because it's considered tighter security than just
username and password. And that's because username and password are simply
something you know; they're two items that fall into one category. Multi-factor
authentication means that we have multiple factors, such as something you know,
perhaps a username and a password, and also something you have, perhaps a device
with an Authenticator app on it, like a smartphone that generates a numeric code that
changes every 30 seconds or so, and you would have to enter that code in addition to a
username and a password. That's one type of multi-factor authentication.

So, to get started here in Azure, I'm going to first make sure I've switched to the
correct Azure AD tenant, because in Microsoft Azure you can have multiple AD
tenants, kind of like having multiple Active Directory domains. You can see which
one you're connected to in the upper right, I'm connected to one called TWIDALE
INVESTMENTS. But I could also choose Switch directory

The Portal settings | Directories + subscriptions page displays.

and click the All Directories link and then from there I could switch to any one of my
Azure AD tenants, if I have more than one, I don't have to have more than one. So,
I'm already in the right one, so I'm going to click to open my left-hand navigator
towards the upper left and I'm going to navigate to Azure Active Directory and Users.
I've got a few users here, and my goal is to enable multi-factor authentication for user
Max Bishop. If I click to open up user Max Bishop, we've got the full sign-on email
address.

We can also Reset the password if it's forgotten and work with it that way. But what
I'd like to do is go back to my list of Users and I'd like to click the Per-user MFA
button up at the top. When I click that button, I can enable MFA for individual users.
That's going to open up a new web browser window, where I can

This window contains a multi-factor authentication section. This section displays a
table. The table has the following column headers, namely: DISPLAY NAME, USER
NAME, MULTI-FACTOR AUTH STATUS. The table lists details for three users.

put a check mark next to the users where I want to enable multi-factor authentication.
For user Max Bishop, currently, the MULTI-FACTOR AUTHENTICATION
STATUS is showing as Disabled, so having that selected over on the right, I'll click
Enable,

A dialog box titled: About enabling multi-factor auth opens. It contains some text and
two buttons, namely: enable multi-factor auth, and cancel.

then I'll click the enable multi-factor auth button. And it says Updates were
successful, so I'll close out of that. So now the MULTI-FACTOR
AUTHENTICATION STATUS for Max Bishop is set to Enabled. So, what we're
going to do now is sign in as user Max Bishop. So, I'm going to go to
myapps.microsoft.com in a new web browser; that's the URL.

And here's where I'm going to sign in as user Max Bishop, and I'll specify the
Password for that account. I then get a message about helping protect my account, so
I'll go ahead and click Next. And now it talks about using the Microsoft Authenticator
app installed on my smartphone, so that's fine; I've already got that installed on my
smartphone. It says after you've installed that on your device, choose Next. OK, I'll
choose Next. And now to set up the account, I'm going to click Next. And what it
wants me to do from my smartphone's Microsoft Authenticator app is add a new
account, which will activate the camera on my smartphone so that I can scan this QR
code, which is unique to my sign-in account for user Max Bishop.

So, I'm going to go ahead and get my smartphone ready and scan in that code from the
Microsoft Authenticator app. Now, once I've done that on my smartphone, it will
automatically add my Azure account and there will be a six-digit numeric code that
changes every 30 seconds, which I will use to sign in. So, when I click Next, it sends
an approval to my smartphone, and when that pops up on my smartphone, I'll just tap
on approve; that should be reflected here, and it is: Notification approved. I'll click
Next and then I'll click Done. When it asks me if I want to Stay signed in, I'll choose
Yes. Now what I want to do is sign out and sign back in as the same user to test multi-
factor authentication.

So, in the upper right, where I have MB for Max Bishop, I'm going to choose Sign out
and I'll select that account to sign out from. And I'm going to sign right back in, so I'll
select the Max Bishop username. I'll pop in the password, so I still have to know the
password. So, on my smartphone, I'm going to have an Approve sign in request, so I
can just go ahead and tap on the approve link for that. And once I've done that, I'm
asked if I want to Stay signed in, I'll choose Yes, and I'm in. So, I don't necessarily
have to enter the six-digit unique code that gets generated. Instead, I can just approve
the sign-in on my device. So I have to have that device; something you have, along
with something you know, like a username and a password, constitutes multi-factor
authentication, which is often also called 2-step verification.

Identity Federation
Topic title: Identity Federation. Your host for this session is Dan Lachance.

These days, identity federation is a big deal. What we're really talking about is
centralized and trusted authentication. Compare that with old school authentication for
apps, which had authentication built directly into themselves instead of relying on an
external, trusted, centralized authentication provider, which is what identity federation
gives us.

But let's go into more detail, because there really is more to it, both when you're
understanding the concepts and certainly when you configure and use an identity
federated environment. So, consider this example where we have a sign-in screen for
a web application and we are prompted to put in our Email and Password.

We've got the standard Forgot your password link or what we could do is Continue as
a Facebook user or Continue as a Google user. So, this is a web application that trusts
a third-party identity provider. Those third-party identity providers in this example are
Facebook and Google. So, this means then that you don't have to sign-up directly
within that website or that app. You can just use your existing credentials from
Facebook or Google to sign-in to this one.

That is an example of identity federation. It's a two-way street and what I mean by
that is the web app or the website needs to trust that third-party identity provider. And
that means that the third-party identity provider has to have a secure way to
authenticate users such as Facebook or Google users. So, identity federation has some
specific terminology. We've used some already like a Trusted Identity Provider or an
IdP.

This is normally some kind of an LDAP directory service, like Microsoft Active
Directory. It could be on-premises or in the cloud, as we've seen it could be Google or
Facebook, both of which are cloud-based identity providers. The Resource Provider is
another part of the identity federation ecosystem often it's just referred to as an RP.
This is the actual application or website. Sometimes it's also called a Service Provider
or an SP.
So, essentially we're going to say that a Resource Provider is just a web app that trusts
the identity provider. And that identity provider again, could be an LDAP directory
service like Microsoft Active Directory, it could be Google, it could be a Microsoft
account, it could be Facebook, it could be an Instagram account. Anything like that
that the app is configured to trust.

Then we have the notion of an identity federation claim. A claim, generally speaking,
is an assertion about a user or device. Examples of this would include the date of birth
of a user that might be part of what's in a claim. Or the type of device a user is signing
in from, the subnet IP address they're signing in from, a security clearance level for a
given user. Some or any combination of these types of attributes and many other
possibilities can be stored in what's called a claim. And a claim can be part of a digital
security token.

So, the digital security token is digitally signed by the identity provider upon
successful authentication; that security token is signed with the identity provider's
private key. That means that apps that trust the identity provider, in other words,
resource or service providers, would be configured with the identity provider's related
public key, which can verify the signature of that security token and make sure it's
valid. And then, of course, the app might consume some of those claims.
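
To make that concrete, here's a minimal shell sketch that decodes the claims portion
of a JWT-style security token; TOKEN is a placeholder, and verifying the signature
itself would require the identity provider's public key and a proper JOSE library:

    # Hypothetical token; a real one comes from the identity provider.
    TOKEN="header.payload.signature"
    # Claims live in the middle (payload) segment, base64url-encoded.
    PAYLOAD=$(printf '%s' "$TOKEN" | cut -d '.' -f2 | tr '_-' '/+')
    # Pad to a multiple of four characters so base64 can decode it.
    while [ $(( ${#PAYLOAD} % 4 )) -ne 0 ]; do PAYLOAD="${PAYLOAD}="; done
    printf '%s\n' "$PAYLOAD" | base64 -d
    # The output is the JSON claims, e.g. {"sub":"mbishop","dob":"1990-01-01"}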

Maybe when a user signs in to the identity provider and there's a claim with the date
of birth, that claim is passed off to an application to determine if the user, for
example, should be able to sign up for a driver's license. If we were to look at a data
flow diagram for identity federation, it might look like this. And the reason I say
might, is it really depends on the solution being used and how it's configured but this
is a generic example. In step 1, we have a user accessing a web app. Now, the web
app itself does not have authentication built-in that's what identity federation is about.

So, instead in step 2, the app will redirect the user to an identity provider for
authentication. That could be automatic if the app is configured to trust an identity
provider. Or maybe the user is given the choice upon sign-in as we saw. Remember,
sign-in with your Facebook account, sign-in with your Google account, that type of
thing. Either way, step 2 means that we have external authentication that begins. So in
step 3, we assume that the user will authenticate successfully with that external third-
party identity provider, and that server would digitally sign a token, an authentication
token which might contain claims.

In step 4, the user is then ultimately authorized to use the app. Now, going back to
claims: if an app is consuming those claims, what we have is a form of
Attribute-based Access Control, or ABAC, which might look at a date of birth, an
attribute, to determine if a user is allowed to do something within an app. Single
sign-on is often coupled with identity federation, but it doesn't have to be; it can be its
own configuration. What it means is that it allows automatic user sign-in to apps,
given that they've already authenticated at least once.

So, it is very convenient for users because they're not continually prompted for the
same credentials. But at the same time, we have to play devil's advocate and look at
both sides of the coin, and think about the fact that a compromised user account could
provide access to multiple apps. As always, it's about striking the correct balance
between security and convenience in alignment with organizational security policies.

Resource Access Control


Topic title: Resource Access Control. Your host for this session is Dan Lachance.

Server specialists need to be aware of different types of access control models which
control access to resources. Because you might be required to support or configure
one or more of these access control models that might be in use.

So, we're really saying that access control determines the level of access that an entity
can gain to a resource after successful authentication. So, we're really talking about
authorization, then. The first thing we'll talk about are data roles: when it comes to
managing and accessing data, we have the data owner role, the data custodian role,
and the data processor role. Now, a data owner would be the one that actually has
ownership of the data itself. And that could be a customer, if it's sensitive private
data, if it's essentially personally identifiable information or protected health
information.

But the data owner in some cases could also be an organization that sets the policy
that controls how that data is protected. The data custodian manages the data in
accordance with the rules or policies set forth by the data owner or data owners. The
data processor as the name implies, will take in data and process it. And again, in
accordance with any laws or regulations that determine how that data is to be treated.
So, if we're talking about the General Data Protection Regulation, or GDPR, for
sensitive data related to European Union citizens, then those data sensitivity rules
would apply regardless of where in the world the organization processing that data is
located.

Discretionary Access Control, or DAC, means that the data custodian can, at their
discretion, set permissions in alignment with policy set forth by the data owner. This
screenshot comes from setting NTFS permissions on a Windows machine. What's
happened is somebody has right-clicked on a file called Project_A.txt and clicked on
the Security tab, where they can then select users or groups, or add users and groups
if they click the Edit button, to control permissions to files and folders on an
NTFS-formatted disk volume.

Mandatory Access Control, or MAC, is another model. Here, the resources that users,
groups, or software can be given permission to access, whether files, apps, network
sockets, or whatever it happens to be, get assigned a label, and that label is set by the
system administrator. Users can then be assigned a clearance level which allows
access to labelled resources. So, imagine that we have a label called Protected Health
Information (PHI).

We can then determine which users will have a clearance level that allows them
perhaps to only read items that are labelled as Protected Health Information. So,
Mandatory Access Control, then, is one of those things that is controlled by the
operating system, such as Security-Enhanced Linux, or SELinux.
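
As a quick, hedged illustration of MAC on Linux, SELinux exposes its mode and
labels through standard commands; the file path is just an example:

    # Check whether SELinux is currently enforcing its mandatory policy.
    getenforce
    # List the SELinux security label (context) attached to a file.
    ls -Z /etc/passwd
    # A context like system_u:object_r:passwd_file_t:s0 means the OS, not
    # the file's owner, decides which processes may touch this resource.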

We also have Security-Enhanced Android, or SEAndroid. This allows us to sandbox
applications on an Android device. Sandboxing is important because it means if we do
have some kind of a security breach in the app, it's only within the app and not outside
of it. It's the same kind of concept if you ever looked at your running processes on a
computer where you have a web browser with multiple tabs open. You'll notice that
each tab is a separate running process and yes, it consumes more resources.

However, it means that if whatever you're doing within that single web tab infects that
part of the web browser, it's only affecting that process, that tab. It will not be able to
go in and infect other tabs, which could be running other web applications like your
secure online banking and so on, or at least that's how the theory goes. So, app
sandboxing then prevents app privilege escalation outside of that app. And if you
wanted to enable this type of thing, you would configure it on an SEAndroid device in
what's called the kernel policy file. You'd have to set security-enhanced mode to
enforcing so that it's actually being used.

So, this means that the Android operating system would control access to resources
like files on the device, or access to the device itself, and, of course, to sockets; a
network socket really combines a name or IP address with a listening port. That's
really what a socket is. So, this would be a form of Mandatory Access Control.
Another access control model is Role-based Access Control, otherwise called
RBAC. Here we have a screenshot of the Microsoft Azure cloud where what's
happened is somebody has clicked on a Subscription. The subscription is called Pay-
As-You-Go and in the left-hand navigator they've selected Access control (IAM).

Where on the right, they can add role assignments. So, as an example, if we take a
look at the contributor assignments shown on the right. There is a user by the name of
User Two that was given the contributor role to this subscription and what that means
in this particular case is that User Two would be able to contribute or add or create
cloud resources in this Pay-As-You-Go subscription. So, as long as the user occupies
the role, they have the permissions of the role because a role is really just a collection
of related permissions. And it could even be much more granular than this.

It could be a role that only allows the management of things like cloud-based storage
accounts or cloud-based virtual machines. Now, in the Microsoft Azure cloud, there is
a hierarchy: the subscription, under which we can have resource groups. A resource
group is just a way to group related cloud resources, like virtual machines, databases,
web apps, and so on, much like you would organize files in a directory on a storage
device. So, we could assign a role at the subscription level; in this example, the
Virtual Machine Contributor role is assigned to a group called EastGroup1.

That means members of EastGroup1 would have Virtual Machine Contributor
permissions at the subscription level and everywhere underneath it; in all resource
groups, they'd be able to create virtual machines. But alternatively, you might assign
that just to a resource group, like resource group 2. So, members of EastGroup1
would only be able to create virtual machines in resource group 2. Or you could set
that on a specific resource.

Now, of course, virtual machine contributor is probably not a great example if you
already have a virtual machine. But you might assign a role to a specific resource, like
a virtual machine, to allow its management, and only for that single virtual machine,
not even for the resource group which would be applied then to all virtual machines in
that resource group. We then have attribute-based access control or ABAC as another
access control method.

So, an attribute is a property of something like a user: a user's department, a user's
location, a user's security clearance level. Or it could be a device's operating system,
the subnet that a device is on, or the device's health state. So basically, with
attribute-based access control, if certain conditions are met, then resource access is
allowed, such as being allowed to sign in to the cloud or being given access to a
database or an app of some kind.
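
As a hedged sketch of what ABAC can look like in practice, here's a hypothetical
AWS IAM policy document; the department tag and its Sales value are assumptions
for this example. Access is granted only when the calling principal's attribute matches
the condition:

    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": "s3:GetObject",
        "Resource": "*",
        "Condition": {
          "StringEquals": { "aws:PrincipalTag/department": "Sales" }
        }
      }]
    }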

Working with Cloud Role-based Access Control (RBAC)


Topic title: Working with Cloud Role-based Access Control (RBAC). Your host for
this session is Dan Lachance.
Role-based Access Control, or RBAC for short, means that you have roles that users
can occupy, and those roles are assigned some kind of permission to some kind of
resource. And so, by extension, when users occupy a role, they get the permissions
that the role has. One form of this, you might say, is adding users to groups;
depending on how you name your groups and use them for assigning permissions,
that could be true, it could be a form of Role-based Access Control.

We're going to take a look at this really quickly in Microsoft Azure as well as in
Amazon Web Services, so in the public cloud. So, here in Microsoft Azure, I'm
already signed into the Azure portal.

The Microsoft Azure page is open on the screen.

So, on the left, I'm going to go all the way down to Azure Active Directory and I'm
going to click on Users where we have a number of users that are listed,

The Azure DevOps | Overview page opens. The middle pane contains multiple
options, namely: Overview, Preview features, Users, and Groups. The right pane
displays corresponding details.

The Users | All users (Preview) page is open on the screen. The right pane displays a
table with multiple rows.

one of which has the name of Codey Blackwell.

Now I can also go back to my Azure AD tenant and go to Groups and get a list of
groups that I could assign to roles. And this is where the distinction between a group
and a role matters. In the Microsoft Azure cloud, a role is a collection of related
permissions. And that role might be assigned to a group or to individual users, and so
on, and it might work like this. So, on the left in my Azure navigator panel, I'm going
to go all the way down to Resource groups.

A Resource group in Azure allows you to organize related resources such as, all of the
items you might need to support a web app or all of the virtual machines related to a
project, something like that.

The right pane displays a section titled: Resource groups. It contains a table with the
following column headers, namely: Name, Subscription, Location, and Type. It
contains multiple rows.

I have a resource group called Rg1, and what I want to do is click on it to open up its
properties, because to assign role permissions to this Resource group and everything
in it, I would go to Access control (IAM) over on the left.

The middle pane contains the following options, namely: Overview, Activity log,
Access Control (IAM), Tags, and Deployments. The right pane displays the following
fields, namely: Subscription, Deployments, Subscription ID, and Location. It also
contains two tabs, namely: Resources, and Recommendations.

When I do that, I can then click the Add button and then

He selects the Add button in the right pane. Three options appear, namely: Add role
assignment, Add co-administrator, and Add custom role.

choose Add role assignment and this is where

The Add role assignment section displays on the right pane. It contains the following
tabs, namely: Role, Members, Review + assign. Currently, the Role tab is active. It
contains a table with the following column headers, namely: Name, Description,
Type, Category, and Details. A search bar is present above the table.

I could go ahead and start to manage roles. So, the first thing I would do is select the
role I'm interested in.

So, I'm just going to search for virt for virtual machine and down below I now have a
filtered list of roles that I'm interested in. One of which is called the Virtual Machine
Contributor Role, which lets you manage virtual machines. So, that's the role that I
want to select, so I'm going to go ahead and click on it and then click Next. So,
Virtual Machine Contributor.

The Members tab is now active. It contains the following fields, namely: Selected role,
Assign access to, and Members. A table displays below the Members field. The Assign
access to field contains two radio buttons, namely: User, group, or service principal,
and Managed identity. The Members field contains a link titled: Select members.

I want to assign this to a User, group, or service principal.

A service principal is related to assigning role permissions to a piece of software. So,


I'm going to click Select members down below

The window titled: Select members appears. It contains a search bar.


and I've got a number of groups that are available here. For example, I'm going to
choose the group Central_Region_Canada. So members of that group will have the
Virtual Machine Contributor role assigned to the resource group called Rg1. So, they
can create and manage virtual machines only in that resource group, not in the entire
Azure subscription.

So, I'll select that group, click Select. And I'll click Review and assign. And it's done.

The Rg1 | Access control (IAM) page displays. The right pane displays the following
tabs, namely: Check access, Role assignments, Roles, Deny assignments, and Classic
administrators.

So, let's look at the Role assignments at this level, at the resource group. I've clicked Role assignments, and if I scroll down through the roles, I'll eventually come across the

A table with the following column headers display, namely: Name, Type, Role, Scope,
and Condition.

Virtual Machine Contributor role, where the Central_Region_Canada group is now showing.
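By the way, if you'd rather script this role assignment than click through the portal, here's a minimal Python sketch using the azure-identity and azure-mgmt-authorization packages. This is an illustration under a few assumptions: it assumes a recent SDK version that accepts these flattened parameters, and the subscription ID and group object ID shown are placeholders. The GUID is the well-known role definition ID for Virtual Machine Contributor.

# Sketch: assign the Virtual Machine Contributor role to a group at
# resource group scope. Placeholder values are marked below.
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

subscription_id = "<subscription-id>"  # placeholder
group_object_id = "<group-object-id>"  # placeholder: Central_Region_Canada's object ID

client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

# Scope the assignment to the Rg1 resource group only, not the whole subscription.
scope = f"/subscriptions/{subscription_id}/resourceGroups/Rg1"

# Well-known role definition GUID for Virtual Machine Contributor.
role_definition_id = (
    f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization/"
    "roleDefinitions/9980e02c-c2be-4d73-94e8-173b1dc7cf3c"
)

client.role_assignments.create(
    scope,
    str(uuid.uuid4()),  # each role assignment needs a unique name (a GUID)
    {
        "role_definition_id": role_definition_id,
        "principal_id": group_object_id,
        "principal_type": "Group",
    },
)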

Now that's in Azure. Let's look at doing the same type of thing in the AWS
Management Console. OK, so here in AWS, I'm going to go ahead and search up IAM, or Identity and Access Management, and click on it. And then what I want to do is just
point out that I have what are called Policies. If I click the Policies view on the left,
we have a list of Policies on the right. Many of them are built-in but you can build
your own.

A policy is similar to a role in Azure. It's a collection of related permissions. So, for
example, if I were to search for s3 which represents a storage bucket or storage
location in the cloud. And if I were to filter on it, I then have only policies related to
working with Amazon S3. For example, AmazonS3FullAccess contains the necessary
permissions to have full access to S3 buckets. And so, I can assign that policy to a
group or to an individual user. So, if I go look at my Users here in AWS, I have a user
called cblackwell, and if I click on that user and go to Groups, we'll see all of the
groups,

The Summary page opens.

that that user is a member of. The user is only a member of one group called
East_Admins and I could either click the link here or I could click User groups on the
left to get to that group and open up its properties.

What I'm interested in doing for the East_Admins group

The East_Admins page is open on the screen. It contains a section titled: Summary.
Below the Summary section display the following three tabs, namely: Users,
Permissions, and Access Advisor.

is clicking the Permissions tab at the top here. And down below, I currently have a policy, but I want to select it and remove it, so I'll click Delete. What I'd like to do instead is add S3 bucket full access. So I'm going to click Add permissions and I'll choose Attach policy. And once again, I'll filter it for s3 in this particular example,

The Attach permission policies to East_Admins section displays on the right pane. It contains two sub-sections, namely: Current permissions policies, and Other permission policies. The Other permission policies sub-section contains a search bar and a table.

and I'm going to select AmazonS3FullAccess and I will add the permission. So, if I
were to go back to a User that's in that group, we know cblackwell is a member of the
group.

The Summary page opens.

And if I were to view the Permissions tab here for the user, down below under Attached from group, AmazonS3FullAccess is now applicable to this user by being a member of the East_Admins group. So, that gives us a sense of how we might work
with Role-based Access Control in both the Microsoft Azure cloud and the Amazon
Web Services cloud.
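These AWS steps can be scripted too. Here's a minimal boto3 sketch, assuming AWS credentials are already configured and that the East_Admins group exists:

# Sketch: attach the AWS-managed AmazonS3FullAccess policy to the
# East_Admins group, then list what the group's members inherit.
import boto3

iam = boto3.client("iam")

iam.attach_group_policy(
    GroupName="East_Admins",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3FullAccess",
)

# Every policy attached to the group applies to its members, such as cblackwell.
attached = iam.list_attached_group_policies(GroupName="East_Admins")
for policy in attached["AttachedPolicies"]:
    print(policy["PolicyName"], policy["PolicyArn"])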

Configuring Attribute-based Access Control (ABAC)


Topic title: Configuring Attribute-based Access Control (ABAC). Your host for this
session is Dan Lachance.

Server technicians need to be aware of the various access control models that are out there, including Dynamic Access Control, which is a Windows feature that uses attribute-based access control. Attributes are properties or characteristics of something, like a user's location or city, the department a user is in, or the IP address that a user's device is on.
We can have those specific attributes compared against some conditions to determine
whether access should be granted or not to a resource. Now with Windows Dynamic
Access Control, we use Active Directory user and device attributes to control access
to the file system, instead of just adding users to groups where groups have access to
the file system.

That's traditionally what has been done, is still being done, and is also just fine.
So, let's get started with this. The first thing we have to do is we have to make a few
changes in the Active Directory Administrative Center. So, I'm on a domain controller
server and I'm going to go to the start menu and under Windows Administrative
Tools, I need to use the Active Directory Administrative Center Tool. I can't use the
Active Directory Users and Computers tool to configure Dynamic Access Control.

So, in the Active Directory Administrative Center,

The Active Directory Administrative Center window opens. The left pane contains the
following options, namely: Overview, Dynamic Access Control, Authentication, and
Global Search. The right pane displays a section titled: WELCOME TO ACTIVE
DIRECTORY ADMINISTRATIVE CENTER.

I'm just going to go ahead and expand it. And on the left, I'm going to click Dynamic
Access Control. Now, next thing I want to do is right-click Claim Types over on the
right and

The right pane displays a table with the following column headers, namely: Name, Type, and Description. The table contains the following rows: Central Access Policies, Central Access Rules, Claim Types, Resource Properties, and Resource Property Lists.

choose New Claim Type. Now what is a claim?

The Create Claim Type: accountExpires window opens. The left pane contains the following options, namely: Source Attribute, and Suggested Values. The right pane contains a section titled: Source Attribute. This section contains a table with the following column headers, namely: Display Name, Value Type, Belongs to, and ID. It also displays the following fields, namely: Display name, Description, User, and Computer.

A claim is an assertion or a statement of truth about something like a user or a device.


For example, an identity provider like Active Directory will issue a claim that says this user has successfully authenticated and they are in the HR department. That's a claim, and that type of thing is necessary when we want to use attribute-based access control. So, when I add a new claim type here, it's going to be for a User, but it
also could be for a Computer. But what I want to do is go down and select an existing
Active Directory attribute. The one I want to select is department.

That's what I want to use. And then further down, I want to add suggested values for department. So, what I could do, it says No values are suggested, and that's
fine.

The Suggested Values section contains two radio buttons, labelled as: No values are
suggested, and The following values are suggested. The Add, Edit and Remove
buttons are also present in this section.

But I'm going to click the following values are suggested and I'll click Add and I'm
going to add let's say, HR as the Value and the Display Name.

An Add a suggested value dialog box opens. It contains the following fields, namely:
Value, Display name, and Description.

So, that's one possible value for the department attribute, and I'm also going to Add,
let's say, Exec.

We could go in and add all of the departments that we use in our organization that
would apply to assigning file system permissions, but I'll just go with these two for
now. And that's all I'm going to do here. OK, so if I go into Claim Types, I now have
a department claim type; that's one of the things I need to do. Now, back under
Dynamic Access Control, I'm going to go into Resource Properties.

A resource in this context is simply a file or a directory in the file system.

A table with the following column headers displays, namely: Display name, ID,
Referenced, Value Type, Type, and Description. The table contains multiple rows.

So, what we're doing here is we want to be able to associate the Department attribute
in the file system with the Department attribute in Active Directory user accounts.
We'll talk about how that gets applied in the file system. But for now, what I want to
do is go into that and just take a look, OK. So, it looks like we can select Values here,

He selects Department from the table. A window titled: Department (Disabled) opens.
The left pane contains three tabs, namely: General, Suggested Values, and
Extensions. The right pane displays two sections, namely: General, and Suggested
Values. The first section contains the following fields, namely: Display name, Value
type, Description, and ID. The second section contains a table having column
headers, namely: Value, Display Name, and Description, and three buttons, labelled
as: Add, Edit, and Remove.

such as, and this one is filled in already, but this is for the file system.

So, we've got Sales and so on. But, well, do I have Human Resources? I might have Human Resources, but ours was called HR; really, it doesn't matter that they have the same name. However, we are going to have to tie them together. I don't see Exec here, so I'm going to Add

The Add a suggested value dialog box opens.

Exec, or I could spell out Executive; it doesn't really matter that they be exactly the same name.

But the reason we're doing this twice is because we're going to be comparing how
files and folders are flagged on file servers in the domain with the Department of exec
or whatever, and we're going to compare that against a user that's signed in and their
department in their user account. It doesn't have to be spelled out exactly the same way, but when we add conditions, we can link them together, as you'll see. OK, so
we've got the resource properties configured for Department. However, I want to
right-click on it and make sure it's enabled, and when I enable it, notice the icon
changes. The other ones are disabled, but this attribute is enabled, so Department is
good to go.

OK, next thing I have to do is install a server component here on my file servers. So,
let's assume I'm doing this on a file server that will let me classify files and
directories, in other words, label them with a department attribute and a value. So, we
can do that by clicking Add roles and features and going through the

The Server Manager Dashboard displays. The left pane contains the following
options, namely: Dashboard, Local Server, All Servers, DNS, and File and Storage
Services. The right pane contains two sections, namely: WELCOME TO SERVER
MANAGER, and ROLES AND SERVER GROUPS. The first section contains the
following links, namely: Add roles and features, Add other servers to manage, Create
a server group, and Connect this server to cloud services. The second section contains
three cards, namely: AD DS, DNS, and File and Storage Services.

Wizard until we get to the roles screen and under File and Storage Services, and under
File and iSCSI Services, we are interested in something called File Server Resource
Manager, that's a role service, otherwise known as FSRM.

We want it, so I'm going to turn it on. We're going to click Add Features

The Add Roles and Features Wizard opens.

for the admin tools and we're just going to continue on through the Wizard and just do
the installation. Now, by installing the file server resource manager role service, we're
going to get an admin tool that will be under our start menu as well as under the Tools
menu here in Server Manager. Plus, it'll add a new tab when we look at the properties
of a file or directory on file servers where this is installed and that's important,
because we want to be able to work with things labeled as department of sales or exec
or HR, whatever the case is, to assign permissions.

So, I'm going to click Close. The next thing I'm going to do is go into the file system on the server and work with a sample file. So, in the root of drive C: I've got a Data folder and a Projects
folder with some sample files. So, I could right-click on the Projects folder and go to
Properties, and because we've got FSRM installed, we now have a classification tab.

But it says no properties. Where's Department? I want to flag this as being for the HR department. I could also do the same thing for individual files. So, what we need to do
then is open up the FSRM tool, which I can do as we've mentioned from the start
menu, there it is, File Server Resource Manager. I'm going to expand Classification
Management

The File Server Resource Manager window opens. The left pane contains the
following options, namely: Quota Management, File Screening Management, and
File Configuration Rules. The right pane contains a section titled: Name.

on the left and Classification Properties.

Now, what I don't see here is department.

A table with the following column headers displays, namely: Name, Scope, Usage,
Type, and Possible Values.

I'm just going to go ahead and click Refresh. It was just a timing thing. Department now shows as a Global attribute that we can use for tagging items in the file system. And that's global in the sense that it's not local to this server; it's in Active Directory and can be applied to all file servers that are joined to that Active Directory domain.
OK, so if we go back in the file system, back into the Projects folder, back under
Properties Classification.

Now, Department is here. And all of the Values are shown down below. These are
resource properties and remember, I had to add Exec, and we had Human Resources. I'm going to flag this folder as Human Resources. Now, if I go
into the files within that folder and go to the Properties of one of them and go into
Classification, they too have

He opens the Properties of Project C file.

the department. And, of course, notice it's currently set with the value of Human Resources, because I did it at the folder level and so it flows down to subordinates.

All I've done is flag these project files as Department equals Human Resources. I
haven't assigned permissions, so that's all that I've really done. Now, let's go back into
the Active Directory Administrative Center because the next thing I want to do, I'm
just going to click on Dynamic Access Control again because I have to create a
Central Access Rule. So, I'm going to right-click on that and choose New Central
Access Rule. Basically, to link the resource or file system department property to the
Active Directory user department property.

We're going to call this Rule1

The Create Central Access Rule page opens. The left pane contains the following
sections, namely: General, Resources, and Permissions. The right pane contains the
following three sections, namely: General, Target Resources, and Permissions. The
first section contains the following fields, namely: Name, and Description. The second
section contains a text box and an Edit button. The third section contains two radio
buttons, namely: Use following permissions as proposed permissions, and Use
following permissions as current permissions. A table displays below these radio
buttons. This section also contains an Edit button.

and down below for Target Resources, I'm going to click Edit and I'm going to click
Add a condition.

The Central Access Rule window opens. It contains an Add a condition link, and OK and Cancel buttons.

So Resource, a Resource is just a file or directory in the file system.


It displays five drop-down fields. The first four are: Resource, Department, Equals, and Value. The fifth drop-down is blank.

This does not apply to Active Directory user accounts or anything like that; it's for file system stuff. So: Resource, Department, Equals, OK, and a Value. Now this is where
I'm going to say, well, OK, if files in the file system or directories are flagged with
Department Equals Human Resources, that's what we've done so far.

Then I want to set current permissions, and I'm going to go down and click Edit to do
that, I'm going to click Add, I'm going to choose Select a principal.

The Advanced Security Settings for Permissions dialog box opens. It contains a table and Add, Remove, View, and Restore defaults buttons.

The Permission Entry for Permissions dialog box opens. It contains the following
fields, namely: Principal, Type, Basic permissions, and Add a condition. The Basic
permissions field contains check boxes, labelled: Full Control, Modify, Read and
Execute, Read, and Write.

Maybe I'll use the built-in domain users group here in Active Directory, which is
everyone. So, for all domain users, I want to give Read and Execute, Read, and let's say Write, but only if their user account is also in the HR department. And this is
where I'm going to Add a condition. But this time it's for Active Directory, it's not for
a resource in the file system, we already did that.

Six drop-down fields appear, namely: User, Group, Member of each, Value, Click Add items, and Add items.

Now, we're mapping it to a user in Active Directory and I'm going to select the
department attribute, Equals, Value.

He selects department in the Group drop-down. As he selects the department value, the number of drop-down fields on the screen is now five. He selects a value in the fifth drop-down.

And here's what I filled in earlier when I made a claim type: HR. OK, so now I've linked HR, at the Active Directory user department attribute level, to Human Resources up above in the Target Resources for the file system. Excellent. And then we've
set the permissions for members of the domain users group that happened to have a
department equal to HR. OK, now that we've done that, we have some stuff to do in
group policy, so I'm going to open up my start menu and under the Windows
Administrative Tools heading, I'll go down to Group Policy Management.

If I want this applied to all file server computers joined to this domain,

The Group Policy Management window opens.

well, I could go into the Default Domain Policy and configure the following settings.
So, I'll right-click on the Default Domain Policy and I will choose Edit.

He right-clicks on the Default Domain Policy option in the left pane. A context menu
appears. The context menu contains the following options, namely: Edit, Enforced,
Link Enabled, and View.

And the first thing we'll do here

The Group Policy Management Editor window opens.

is we're going to go under Computer Configuration. We're going to go under Policies,


Administrative Templates, System, and then KDC, the Key Distribution Center for Active Directory. And we've got an option here called KDC support for claims.

The right pane displays a table with the following column headers, namely: Setting,
State, and Comment. It contains multiple rows.

I'm going to turn that on and say, Enabled.

The KDC support for claims, compound authentication and Kerberos armoring
dialog box opens. It contains three radio buttons, labelled as: Not Configured,
Enabled, and Disabled. It also contains a drop down with the following options,
namely: Supported, Not supported, Always provide claims, and Fail unarmored authentication requests.

And I'm going to leave it on Supported. So, if needed, claims will be provided. Actually, there's one step I forgot before I go further here. I need to go back to the Active Directory Administrative Center, where we created a Central Access Rule called Rule1. What I need to do is add that to what's called a Central Access Policy; that's the step I forgot. So, I'm going to right-click on Central Access Policies and create a new one. We'll call it Policy1.

The Create Central Access Policy window opens. The right pane contains the
following fields, namely: Name, Description. Two buttons Add and Remove are also
present. He selects the Add button. A dialog box titled: Add Central Access Rules
opens. It contains a table with Name as Rule1.

All I do here is Add the rules I've created. I only have one Rule, that's it, just Add the
rule to it. OK, now if we go back to group policy, what I really want to do is deploy
that central access policy. So, in order to do that, I need to make sure that we go into
the correct place here. So, right now we're under Computer Configuration, Policies, Administrative Templates, whereas I need to be under Windows Settings, Security Settings. Let's see, File System, there it is: Central Access Policy. And on the right, I'm
going to right-click and choose Manage Central Access Policy. There it is, Policy1,
Add it, OK.

The Central Access Policies Configuration dialog box opens.

So that's it. Once group policy refreshes for file servers that are joined to this domain, those file servers will receive Policy1, which says that for anything in the file system flagged with a department of Human Resources, domain users that have a user attribute for the HR department will have, basically, read, write, and execute permissions. Of course, on each file server in the domain, File Server Resource Manager has to have been installed, and we have to have flagged files and directories for the department of HR. So, that is an example of attribute-based access control.
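Setting the Windows plumbing aside for a moment, the decision Dynamic Access Control makes at access time is easy to express. Here's a toy Python sketch, purely an illustration of the logic rather than the Windows implementation, comparing a user's department claim against a resource's department classification:

# Toy illustration of the ABAC decision: grant access only when the
# user's department claim satisfies the condition tied to the resource's
# department classification. In Windows, the claim value (HR) and the
# resource value (Human Resources) are linked by the condition itself.
DEPARTMENT_MAP = {"HR": "Human Resources"}  # the condition linking the two value sets

def access_granted(user_claims: dict, resource_attrs: dict) -> bool:
    user_dept = user_claims.get("department")
    return DEPARTMENT_MAP.get(user_dept) == resource_attrs.get("department")

user = {"name": "cblackwell", "department": "HR"}
folder = {"path": r"C:\Projects", "department": "Human Resources"}

print(access_granted(user, folder))  # True: HR maps to Human Resources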

Course Summary
Topic title: Course Summary

So, in this course, we've examined how to implement strong authentication and authorization for resource access through techniques such as MFA, identity federation, and Role-based Access Control. We did this by exploring physical security strategies and common hardening techniques. We used group policy to disable USB storage. We examined the relationship between authentication and authorization. As well, we managed users and groups in Active Directory, Linux, Microsoft Azure, and AWS. Following that, we configured MFA in AWS and Azure.

We discussed how identity federation is used and we also covered access control
types. And finally we configured cloud Role-based Access Control and also
configured Attribute-based Access Control. In our next course, we'll move on to
explore Public Key Infrastructure or PKI.

CompTIA Server+ (SK0-005): Configuring Server Components
Virtualization has become a foundation for on-premises and cloud computing. Therefore, server administrators and data center technicians need to know how to
configure virtual machine (VM) CPU and RAM settings. Take this mostly hands-on
course to learn what's involved in virtualization. Start by exploring how hypervisors
relate to virtual machines (VMs). Distinguish between the two types of hypervisors.
And look into hypervisor hardening. Getting hands-on, practice configuring virtual
CPUs for VMware and Microsoft Hyper-V VMs. Manage CPU settings for AWS and
Azure. And configure memory, or RAM, for VMware Workstation, Microsoft Hyper-
V, and AWS and Azure instances in alignment with IT workload requirements. Upon
completion, you'll be able to plan and configure the setup for both on-premises and
cloud VMs. You'll also be more prepared for the CompTIA Server+ SK0-005
certification exam.

Course Overview
Topic title: Course Overview.

On-premises virtualization requires the installation, configuration, and management of a hypervisor. In the cloud, this is the responsibility of the cloud service provider,
where cloud customers simply deploy cloud services that utilize the underlying
virtualized infrastructure. In this course, I'll explore hypervisor types.

You will then configure virtual CPUs for VMware and Microsoft Hyper-V virtual
machines. Next, I'll manage CPU settings for cloud-based virtual machines. I'll then
configure memory, or RAM for on-premises virtual machines and cloud-based virtual
machines in alignment with IT workload requirements, as part of a collection that
prepares you for the CompTIA Server+ SK0-005 certification exam.

Hypervisors and Guest Virtual Machines


Topic title: Hypervisors and Guest Virtual Machines. Your host for this session is
Dan Lachance.
Hypervisors are required for operating system virtualization. What we're really talking
about is running multiple virtual machine or VM guests concurrently on one physical
computer. So the hypervisor then is an operating system. It's not specialized hardware.

And that operating system is designed to regulate virtual machine guest access to
physical hardware resources. Now remember, you will have one set of physical CPUs
and cores, one physical amount of RAM, one physical set of storage, and all of that
has to be shared among the virtual machine guests. So the hypervisor is a specialized
operating system, or it could even be an app, but it is not hardware itself.

So let's distinguish between the hypervisor types. A Type 1 hypervisor is often called bare-metal. And that's because it is the operating system. It is a
streamlined operating system designed to regulate virtual machine guest access to
underlying physical hardware resources. So it's not an all-purpose operating system,
it's designed just to run virtual machines. A Type 2 hypervisor is an app that runs
within an existing operating system.

Now, you can imagine then that a Type 2 hypervisor is not as resilient or as stable as a Type 1 bare-metal hypervisor would be. A Type 1 hypervisor runs on the
physical server hardware. And it is the OS. The OS is lightweight and optimized to do
hypervisor things, and it's great for enterprise use. Now, examples of a Type 1
hypervisor would include Microsoft Hyper-V.

There are variations of Microsoft Hyper-V where it can run, for example, within
Windows 10 computers, but it can also run on the server side as its own bare-metal
type of hypervisor. VMware ESXi is another type of hypervisor that is considered to
be Type 1 or bare-metal, where it itself is the operating system, and it's designed to be
used in the enterprise.

Now, a Type 2 hypervisor runs as an application on top of an existing operating system. So the underlying operating system then is multipurpose. It's designed to do
more than just be a hypervisor. For example, if you're running Windows 10 or
Windows 11, that operating system is designed to do many things beyond acting as a hypervisor, because the OS itself is not a hypervisor.

So the underlying OS then would run the Type 2 hypervisor application. Therefore,
it's not really a lightweight solution. What that means is that the underlying OS has a
lot of stuff built into it for a lot of different multipurpose uses.

Because of the underlying OS, you could say a Type 2 hypervisor has a larger attack surface than Type 1, since a Type 1 hypervisor is a specialized, lightweight operating system while a Type 2 hypervisor runs on a general-purpose one. So Type 2 hypervisors do have their place. They're good for testing or for software developer use.

Examples of a Type 2 hypervisor would include Oracle VM VirtualBox and VMware Workstation. Pictured on the screen, we have an example of a Type 2 hypervisor.
Specifically what we're looking at here is VMware Workstation. On-screen it displays
a screenshot of Task Manager. The CPU reads 0%, the Memory reads 21.2 MB, the
Disk reads 0.1 MB/s, the Network reads 0 Mbps. When you're running VMware
Workstation and not ESXi, VMware Workstation is a Type 2 hypervisor that runs as
an app. And what we're seeing here is the Windows Task Manager, which is opened
up and showing all of the running processes, specifically VMware Workstation.
Because it's a Type 2 hypervisor, it just runs as a process or as an app within the
operating system.

Now what that means then, is that if we have a problem with the underlying OS, so
nothing to do with VMware Workstation, but we have a problem with the underlying
operating system, performance degradation, or maybe it's some kind of a security
problem, that can adversely affect all of the virtual machines running, in this case in
VMware Workstation, our Type 2 hypervisor.

You might also say that a Type 2 hypervisor introduces a lot more overhead only
because we've got the underlying OS, then we've got an app running in it, and within
that app, we would have our virtual machine guests running. As opposed to a Type 1
hypervisor where the virtual machine guests are running directly in the hypervisor OS,
which of course runs directly on top of the physical server hardware.

The other thing to consider is how to secure or harden a hypervisor host. The first
consideration would be to limit network access to that physical host in the first place.
Maybe that would include placing a hypervisor server on an isolated VLAN network
segment with firewall rules controlling access to and from that network.

You can also have a dedicated management interface on a separate VLAN. So, for
example, you might have a hypervisor server, so it's a server running a Type 1
hypervisor, let's say, running on an isolated VLAN for management purposes. It's got
one network interface connected to that dedicated VLAN so that management traffic
of that hypervisor host is kept separate from regular network traffic that is generated
by the virtual machines running on that hypervisor.

So the hypervisor would have at least two network cards, one connected to a
dedicated management VLAN, the other connected to a regular network where you
want regular network traffic for the VMs to be exposed. You need to make sure that
you patch your hypervisor. So if it's a Type 1 hypervisor, you're patching the
hypervisor OS itself.

If it's a Type 2 hypervisor, we're talking about patching everything running in the
underlying OS and all of the other apps running on that machine. Finally, you should
conduct periodic vulnerability assessments and security permissions access reviews
against hypervisor hosts. Running a vulnerability assessment periodically might identify a new vulnerability that wasn't previously discovered, because when
you run vulnerability assessments, you use a tool that has an up-to-date vulnerability
database, and it uses that to run the tests.

Configuring VMware vCPU Settings


Topic title: Configuring VMware vCPU Settings. Your host for this session is Dan
Lachance.

In this demonstration, we're going to take a look at how to manipulate virtual CPU
settings for a VMware virtual machine. Now, first of all, you get to specify some of
those hardware details, that virtual hardware configuration, when you initially create
the virtual machine. Here in VMware Workstation, I already have a Windows Server
2019 virtual machine.

And it's currently set to 1 Processor. So I'm going to go ahead and power on the
virtual machine, and we're going to take a look at what the operating system thinks
about having a single processor. So I'm just going to go into the VM menu and choose
Send Ctrl+Alt+Del and I'm going to sign in to my VM. So here in my virtual machine,
I'm going to head down to the Taskbar, right click on it, and I'm going
to choose Task Manager and I'm going to click More details.

Now, from here, I can go to the Performance tab, and this is one way to view CPU
utilization. So I've got an Intel Core i7, this is virtual hardware, of course, at 2.20GHz
and down below it says that we've got 1 socket, 1 virtual processor, and that L1 cache
is not applicable. Well, that's because it's virtualized.

Let's just take a look at the CPU settings for a real CPU for just a moment. The host
maximizes the Task Manager window. So on a real physical server, again, in this case,
I've got an Intel Core i7 at 2.20 GHz. Well, this is the hypervisor host that's running
our VMware Workstation Type 2 hypervisor.

But down below I've got the Base speed, number of Sockets, but notice that we've got
6 CPU Cores, 12 logical Processors, Virtualization is Enabled, which is allowing us to
run virtual machines on this physical server, and I've got L1 cache and the amount of
384 kilobytes. I've also got L2 and L3 cache listed. Base speed reads 2.21 GHz and
Sockets read 1. L2 cache reads 1.5 MB, and L3 cache reads 9.0 MB. So things will
look a little bit different when you look at performance metrics, when you look at
details and stats for virtual CPUs versus physical CPUs.

So back here in VMware Workstation, I'm just going to go ahead and I'm going to
shut down my guest virtual machine because now what I want to do is edit the virtual
machine settings. So I'll click that link over on the left and I'm going to switch the
Number of processors here to 2. The Number of cores per processor reads 1 and the
Total processor cores read 2.

Now, if I was going to use this virtual machine itself to host other virtual machines
within it, then I would turn on other processor virtualization options like Intel VT or
AMD-V for virtualization. Under Virtualization engine, the other options read
Virtualize CPU performance counters and Virtualize IOMMU (IO memory
management unit). But I'm not going to be running virtual machines within this virtual
machine, so I'm not going to turn that option on.

But I have enabled 2 processors. Let's see what happens here. I'm going to click OK,
I'm going to power on the virtual machine and let's just go ahead and boot it up and
sign in. OK, so I've booted up that virtual machine and I've signed in, so let's go back
into our Task Manager and let's go ahead and take a look at our CPU details.

So I'm going to go to Performance. Notice this time that we have 2 Sockets shown.
OK, Virtual processors is now shown as 2. OK, that makes sense. Let's go ahead and
shut this down and make yet another change. So I'm going to click Edit virtual
machine settings on the left once again, going to select processors, so we've already
set 2 processors.

What I'm going to do now is I'm going to set it to be 2 cores per processor, for a total
of 4 processor cores. Now, some operating systems will not like this type of change
being made. So let's go ahead and click OK, and let's go ahead and fire up that virtual
machine.

So depending on the version of the operating system and the type of operating system
that you've installed, sometimes it's tied to the CPU configuration at the time of
installation. OK, so far so good. Windows liked the change. So I'm going to go ahead
and sign back into the virtual machine once again. The host clicks on the VM tab and
then on Send Ctrl+Alt+Del. Under DOMAIN1\Administrator, the host inserts the
password.
And now let's go ahead and take a look at our CPU settings once again. Let's go back
into the Task Manager. That's what we're using here to check our work. Let's go into
Performance. And sure enough, we've got 2 Sockets and 4 Virtual processors.

So the point is that it is picking up our change to the virtual CPU settings in VMware
Workstation, and that's coming across here in the virtual machine. Now make sure
that you adjust the virtual CPU settings in the virtual machine carefully to
accommodate the workload that it's supporting.

You don't want to assign too many virtual processors or cores if the workload doesn't
require it. So that means monitoring CPU utilization over time. This way, you can
establish a baseline of when things are working well and they're performing well
under normal conditions, that would be the CPU utilization that you would use as a
baseline and gauge your performance from there.

One way to establish a baseline of CPU performance, whether physical or virtual, in Windows is to use the built-in Performance Monitor tool. From here on the left, I can
click Performance Monitor, and what it's charting by default is the % Processor Time
as a whole on this machine.

And what I can also do is configure a Data Collector Set. If I drill down into User
Defined, I can right click and choose a New Data Collector Set. I'm going to call this
Est CPU Baseline. It's going to be created manually and I'll click Next. The other
option reads Create from a template (Recommended). And what I want to do is work
with performance counters.

Specifically, I want to monitor CPU metrics, so I'll click Next, I'll click Add and what
I want to look at from the Local computer, although I could reach out over the
network, I want to select my local computer and under Processor, The host selects:
DESKTOP-UBG7KTS. I'm interested in looking at the percent processor time in total.

Although I could select a specific processor if I really wanted to, but I'm going to
leave it on Total, I'm going to Add and I'm going to click OK and Next. The next tabs
read Create new Data Collector Set, Which performance counters would you like to
log?, and Where would you like the data to be saved?

I'll continue on through this to the point where I'm going to go ahead and finish it. I'm
going to right click on it and then go into its Properties because under Schedule, what
I could do is say Add a schedule, and maybe I'll begin next week and maybe I'll expire
the data collector set a week after I've started it.
So I'll capture CPU utilization for a week, assuming that we'll have normal activity for
the workload running on this host and I'll click OK and OK. The Beginning date reads
2021-11-01 and the Expiration date reads 2021-11-08. Now I could select my Data
Collector Set over on the left, and I could manually start it now to get it collecting. Of
course, this will happen on a schedule the way that we have configured it, but we will
be able to come back here after it has finished collecting that by going down under
Reports, down under User Defined, there's Est CPU Baseline.

And of course, it's in the midst of collecting data. But once it's collected some CPU
performance metrics, we'll then be able to establish a baseline of normal activity,
which means then we can set our virtual CPUs accordingly.
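If you'd like a quick scripted alternative to a data collector set, here's a minimal Python sketch using the third-party psutil package to sample overall CPU utilization, similar in spirit to logging the % Processor Time counter:

# Sketch: sample overall CPU utilization once per minute and append the
# samples to a CSV file, a lightweight stand-in for a Performance
# Monitor data collector set. Assumes psutil is installed (pip install psutil).
import csv
import time

import psutil

with open("cpu_baseline.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for _ in range(60):  # one hour of one-minute samples; extend as needed
        # cpu_percent blocks for the interval, then returns average utilization.
        pct = psutil.cpu_percent(interval=60)
        writer.writerow([time.strftime("%Y-%m-%d %H:%M:%S"), pct])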

Configuring Microsoft Hyper-V vCPU Settings


Topic title: Configuring Microsoft Hyper-V vCPU Settings. Your host for this session
is Dan Lachance.

In this demonstration, I'm going to be making changes to the virtual CPU or vCPU
settings for a Microsoft Hyper-V virtual machine. Now what's interesting in this
example, and this is not the way you have to do it, of course, but I'm going to use a
VMware virtual machine running Windows Server 2019, and within it, I'm going to
enable Hyper-V, where I can run virtual machines within that.

So it's multiple layers of virtualization. Now you shouldn't be doing this if you need
the utmost in performance, but the reason I'm doing it is because sometimes you will
come across this, maybe on a developer workstation, where they need to be able to
test virtualization. And sometimes you need to do that within a virtualized
environment.

So to enable this virtual machine to itself run virtual machines, I'm going to edit the
virtual machine settings and I'm going to make sure I have enough memory allocated
to run virtual machine guests along with the host OS. The Memory for this virtual
machine reads 8192 MB. But I'm also going to go to the Processors section where, on
the right, I'm going to make sure that Virtualization for Intel VT, virtualization
technology, or AMD-V is enabled, which it is. It's turned on.

So I'm going to go ahead and OK this and I'm going to power on the virtual machine.
OK, so now that I've signed into that VM, I'm just going to go into the start menu and
start Server Manager. This isn't the only way to do this, but Server Manager can be
used to install server-based components such as Microsoft Hyper-V. The steps read:
Configure this local server, Add roles and features, Add other servers to manage,
Create a server group, and Connect this server to cloud services.
So I'm going to go ahead and click Add roles and features, and I'm going to proceed
through the Wizard to the point where I get to the Select server roles screen. The previous steps read Before You Begin, Installation Type, and Server Selection. And this is where I'm going to turn on Hyper-V. It'll pop up and ask me if I want some of
the Management Tools for Hyper-V, which I do, so I'll just click Add Features. The
host drills down to Remote Server Administration Tools, Role Administration Tools,
Hyper-V Management Tools. The features are: Hyper-V Module for Windows
PowerShell, and Hyper-V GUI Management Tools.

And at this point it's checking that the underlying processor support will support
virtualization, which it does. Otherwise, we would get an error message at this point.
So I'm going to go ahead and click Next and just continue on through the Wizard
accepting all of the defaults. The tabs read: Features, Hyper-V, Virtual Switches,
Migration, and Default Stores. And then I'll click Install.

So I've restarted the server. Now that Hyper-V is installed, I can go into the Start
Menu and I can use the Hyper-V management tool, so I'm going to start Hyper-V
Manager. Here in Hyper-V Manager, I'm going to right click on the SERVER and I'm
going to choose New, Virtual Machine, as if we're going to install a new VM here in
Hyper-V. The steps read: Before You Begin, Specify Name and Location, Specify
Generation, Assign Memory, Configure Networking, Connect Virtual Hard Disk,
Installation Options, and Summary.

I'm going to click Next. I'm going to call it VM1 and I'll click Next to continue
through the Wizard. Now, in this case, I'm going to choose Generation 2 because I
want the latest feature set, which includes UEFI-based firmware options.

That lets me enable things like Secure Boot and so on. So that's fine. I'm going to go
ahead and click Next. I'm going to leave it on the default here of one gigabyte of
memory, that will vary depending on what you're going to be running as a VM and its
workload.

And I'm going to continue Next. And for network connections, I'm going to leave it on
Not Connected for now, and I'll click Next. And essentially, I'm just going to go and
tell it that I'm going to attach a virtual disk later, I'll click Next and Finish. So this is
not a complete VM, nothing is installed, but I just want to take a look at the virtual
CPU settings. So there's VM1, I'm going to go ahead and right click on it and choose
Settings.

So here in Hyper-V, the interface is a little bit different than what you might get, for
instance, if you're using VMware Workstation and modifying virtual hardware
settings. So if I click Processor over on the left, on the right, I can select the Number
of virtual processors or VCPUs. I can even set resource control settings for CPU
utilization for this VM to ensure it doesn't monopolize the CPU, thus denying it from
being used by other virtual machines.

I can even set a Relative weight to compare it to other virtual machines. If this virtual
machine's relative CPU weight is 100 and another's is 200, then this virtual machine
can only use half the amount of resources as the other, where its relative weight is
200.

But at any rate, this is where we can go to make the necessary changes to our virtual
machine or vCPU settings here in Microsoft Hyper-V. Now, over on the left, I can
also click the + to expand Processor, where it reveals a few other options, specifically,
Compatibility and NUMA. If I click on Compatibility, here I have the option to enable
a checkmark that's labeled: Migrate to a physical computer with a different processor
version.

So if you plan on doing things like live migrations, where you want to be able to
migrate or move a virtual machine running on one Hyper-V host to another one with
minimal downtime, then you normally have to make sure that you have the same
processor versions, that is, the same CPU versions, on the two hypervisor hosts. However, to allow a
bit of flexibility with that, we can enable this option.

If I click on NUMA over on the left, I also have some configuration, further details
related to this CPU, like the Maximum number of processors, the Maximum amount
of addressable memory where the unit of measurement is in MB, and so on. The
Maximum number of processors reads 4, the Maximum amount of memory (MB)
reads 6962, the Maximum NUMA nodes allowed on a socket read 1, and the
Hardware threads per core read 1. So there are a lot of virtual CPU settings then that
must be considered and tweaked carefully, depending on how you plan on using your
Hyper-V virtual machine and also determining the type of workload that will be
running.

And remember, just enabling multiple CPUs at the virtual level is not always the best
practice because it can actually degrade performance. The host returns to the
Processor tab and sets the Number of virtual processors to 3. And also, don't forget
that sometimes software licensing will depend on the number of virtual processors,
and you want to make sure that you remain compliant with license agreements.
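These vCPU settings can also be scripted with the Hyper-V PowerShell module. Here's a hedged Python sketch that simply shells out to the Set-VMProcessor cmdlet; it assumes it's run elevated on the Hyper-V host itself, and VM1 and the values shown are just examples:

# Sketch: set the vCPU count and relative weight for VM1 by invoking
# PowerShell's Set-VMProcessor cmdlet from Python.
import subprocess

subprocess.run(
    [
        "powershell.exe",
        "-Command",
        "Set-VMProcessor -VMName VM1 -Count 2 -RelativeWeight 100",
    ],
    check=True,  # raise an exception if PowerShell reports a failure
)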

Configuring AWS Instance vCPU Settings


Topic title: Configuring AWS Instance vCPU Settings. Your host for this session is
Dan Lachance.
In this demonstration, I'll be configuring CPU settings for an Amazon Web Services
virtual machine instance. And we're going to look at this from two perspectives. The
first is going to be deploying or launching a brand new instance, brand new virtual
machine and setting the CPU settings at that time. After we've done that we'll then
examine how to change it after the fact, basically for an existing virtual machine.

So the first thing to do here is to take a look at the region you're in, in the upper right
here in the AWS Management Console. Currently, I'm set to Oregon US West, but I'm
going to change that to US East North Virginia, since that's nearer where I am
working from. Next in the search bar at the top, I'll search for EC2 and click on that.
EC2 relates to all things compute, like virtual machine instances. So if I go to the
Instances view on the left, I'll see any virtual machine instances in the current region.

Of course, if I switch to a different region, I might see a different list of virtual machines if I've deployed them. I'm going to go ahead and click Launch instances,
and in Step 1, I have to choose an Amazon Machine Image, an AMI. This is the OS image, whether it's Linux-based, like Amazon Linux, Red Hat Enterprise Linux, SUSE Linux Enterprise Server, or Ubuntu, or macOS-based.

And then we get some Windows variations. In this particular example, I'm just going
to leave it on Amazon Linux 2 and I'll click Select. And the next thing we have to do
is choose the Instance Type. And this is the point of interest for us, because the
instance type determines the underlying horsepower for the virtual machine.

So currently, it's the t2 Family that's been selected, specifically t2.micro, which
consists of 1 virtual CPU and 1 GiB of RAM. The Instance Storage (GB) is set
to EBS only, the Network Performance is set to Low to Moderate, and the IPv6
Support reads Yes. So I'm going to leave that selection, although I could change it.
And as I go through the list, of course, we have variations on the number of virtual
CPUs and of course, the amount of RAM. The more CPUs, the more RAM, the more
the cost per second while that virtual machine is running. So I'm going to leave it as it
is.

I'm going to click Next to configure the Instance Details like the Number of instances
and the Network, or VPC where it will be deployed, and a Subnet where it will be
deployed within that VPC network. It's going to auto-assign a public IP address and
for testing purposes, that's OK.

Otherwise, for production virtual machine instances, unless it really requires it, you
shouldn't assign a public IP for security reasons. The Network reads vpc-27ea875a
(default). You might, for example, use a VPN connection into the AWS cloud from
your site, and then through that VPN connection, you would be able to manage the
virtual machine using its private IP address.

So I'm not going to change any of the other settings here. Essentially, I'm just going to
click Review and Launch, and then I'll click the Launch button. Now, the way that it
works in Amazon Web Services is when you launch an instance or deploy a virtual
machine, you have to have a public and private key pair.

The public key is stored in AWS, the private key is stored on your machine, and you need that pair in order to authenticate. So we can either Choose an existing key pair,
which won't work for us, there are no key pairs, or we could Create a new key pair.
Under Key pair type, the option RSA is selected. The other option reads ED25519. So
I'm going to call this LinuxKeyPair1 and I'm going to click Download Key Pair, and
then I'll be able to click Launch Instances.
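For comparison, the same key pair creation and launch can be done with boto3. A minimal sketch, assuming configured credentials; the AMI ID below is a placeholder, so look up the current Amazon Linux 2 AMI for your region:

# Sketch: create a key pair, save the private key locally, and launch a
# t2.micro instance (1 vCPU, 1 GiB RAM).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

key = ec2.create_key_pair(KeyName="LinuxKeyPair1")
with open("LinuxKeyPair1.pem", "w") as f:
    f.write(key["KeyMaterial"])  # AWS stores the public key; this is our private key

resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: current Amazon Linux 2 AMI
    InstanceType="t2.micro",
    KeyName="LinuxKeyPair1",
    MinCount=1,
    MaxCount=1,
)
print(resp["Instances"][0]["InstanceId"])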

So the virtual machine or the instance is now launching. I have a link to it here I can
click on, it filters the view here, and currently the Instance state is pending. The link
reads i-06dc92cef0932a6e2. If I select that instance, down below under the Details, I
will be able to scroll down and see details like the IP configuration, the VPC or the
network and the Subnet where it was deployed.

And as I scroll even further down towards the bottom right, I'll get to the point where
I can see the number of virtual CPUs or vCPUs from the instance type selection. And
in our case, we had selected only 1.

Now, having that instance still selected, I can go to the Actions menu, I can go to
Instance settings and notice that Change instance type is grayed out, I can't select it.
And the reason is because the virtual machine is up and running. It needs to be
stopped in order for me to change the instance type.

Now I might want to change the instance type because the IT workload that is
supported by that virtual machine might require more horsepower. And so I might
change the instance type to increase the amount of virtual CPUs, if that is a
performance metric I've been monitoring that seems to have been sluggish.

So, still having that instance selected, I'm going to go to the Instance state button at the top and I'm going to choose Stop instance and then I'll choose Stop.
So it successfully stopped the virtual machine, I'm just going to click Refresh to make
sure we have the most up-to-date details.
The Instance state now shows Stopped, I'll select the instance, go to the Actions menu
again, Instance settings and this time, Change instance type is available, so I'm going
to go ahead and click on that.

Currently our Instance type is t2.micro. So from here, we can choose from the list to
select a specific instance type that will include more virtual CPUs. Now you might
wonder, how do I know just based on the name of the instance type how many virtual
CPUs it consists of? The host selects m6gd.4xlarge. Well, you can search up AWS
instance types in your favorite search engine and easily find that information.

Let's go ahead and take a look at that. So here I'm looking at Amazon EC2 Instance
Types and down below, if I were to choose, for example, A1, then I can scroll down
and see, for instance, that a1.large consists of 2 vCPUs and 4 GiB of memory.
a1.medium includes 1 vCPU and 2 GiB of memory, a1.xlarge includes 4 vCPUs and 8 GiB, a1.2xlarge includes 8 vCPUs and 16 GiB, a1.4xlarge includes 16 vCPUs and 32 GiB, and a1.metal includes 16 vCPUs and 32 GiB.

So back here, changing the Instance type from the selection list, if I were to choose
a1.large, I now know how many vCPUs that includes, and I could go ahead and apply that change. But in this case, applying it fails. And that's because of the specific
Amazon machine image that I have selected for this particular virtual machine.

It was based on Amazon Linux. Not every instance type can be used with every type
of operating system. So if I were to change this, for example, to t2.large and then click
Apply, the Instance type changes successfully. So be aware of this kind of limitation.
So now if I select that instance and down under the Details tab at the bottom, if I
scroll all the way down and take a look down towards the bottom right, the number of
vCPUs now reflects 2.
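Scripted with boto3, that stop, resize, and restart sequence looks something like this; a sketch where the instance ID is a placeholder for the one launched earlier:

# Sketch: stop the instance, change its instance type to one with 2
# vCPUs, then start it again. The type can only be changed while stopped.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-06dc92cef0932a6e2"  # placeholder instance ID

ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "t2.large"},  # 2 vCPUs
)

ec2.start_instances(InstanceIds=[instance_id])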

Configuring Microsoft Azure Virtual Machine vCPU


Topic title: Configuring Microsoft Azure Virtual Machine vCPU. Your host for this
session is Dan Lachance.

In this demonstration, I'll be manipulating the virtual CPU settings for a Microsoft
Azure virtual machine. And as is the case with pretty much any public cloud provider
or even on-premises virtual machines, you can set the CPU settings when you create
the virtual machine or after the fact.

We're going to examine both of them here in Microsoft Azure. So I've signed into the
Microsoft Azure portal, and what I'm going to do is create a virtual machine instance.
So I'll click Create a resource in the upper left. And I'm going to create a Windows
Server 2019 Datacenter virtual machine. Of course, I could choose other options like
Ubuntu Server, I could actually choose See more in Marketplace, and then from here I
could go through and search for what I want.

So, for example, if I wanted to look for variations on Ubuntu Server Linux, then I
could see essentially the images that I could use as a starting point for that type of
virtual machine. However, I'm just going to go back to our initial creation screen,
where I'm going to go back and choose Windows Server 2019. Now, when you're
creating the virtual machine, you have to fill in a couple of details, like the Resource
group that you're going to deploy it into.

You can create a new resource group. A resource group, like the name implies, is just
a logical way to organize related cloud resources. So maybe for a web app, you would
organize all of the cloud resources that support that app and put them in one resource
group. You don't have to do that, but it keeps things nice and organized. The
Subscription reads Pay-As-You-Go and the Resource group reads Rg1. I'm going to
call this Winsrv2019-1, my first server of that type.

It's going to go in the (US) Central region, and I'm going to accept a lot of the defaults
for this, as we go further down. But what I'm really primarily interested in here is the
Size of the virtual machine. The sizing, which is vertical scaling up or down, controls
the underlying horsepower, like the number of virtual CPUs, or vCPUs.

Here, we're currently set to a size that uses 4 virtual CPUs and 16 GiB of memory or
RAM, and we have an approximation on the cost per month. But of course, we can
open up that dropdown list and select something different. Now, notice if I go all the
way down to 1 vcpu and 3.5 GiB of RAM then we're down at a cheaper cost $91 a
month or so, as opposed to $300.

I'm going to go ahead and select that, but then we're going to come back and change
that after the fact, just to show you can change it after the VM exists. So I'm going to
fill in the Username and the Password that I want to use to sign-in to this Windows
virtual machine remotely, such as when I'm managing it. The Username reads
cblackwell. Under Inbound port rules and Public inbound ports, Allow selected ports
is selected. Select inbound ports is set to RDP (3389). And that's all I'm going to
configure at this point.

I'm going to accept the rest of the defaults, so I'm just going to go ahead and click,
Review + create. And after the validation has passed for my settings, I'm going to
actually deploy the virtual machine by clicking Create in the bottom left. So the
virtual machine deployment is currently in progress, so I'm just going to monitor the
screen for a moment or two until such time that it's completed.

OK, now that the deployment is complete I'll click Go to resource, that just takes me
into the properties of that deployed virtual machine, where while I'm looking at the
Overview blade for that virtual machine, over on the right, the Size is being shown
here with 1 virtual CPU.

Now I'm going to use the remote desktop client on my local Windows machine to
RDP into the Public IP address shown here for the VM so that we can check out the
processor information within the running operating system. The Public IP address
reads 40.77.56.129. OK, so now that I've RDPed into my Windows virtual machine in
Azure, I'm just going to go ahead and right click on the Taskbar down at the bottom.

And I want to choose Task Manager just so we can check out more details in the Task
Manager. And really, I'm interested in Performance up at the top where we'll get some
information about the CPU. And notice that I've got 1 Socket, CPU Base speed is
2.1GHz, 1 Virtual processor.

OK, so that makes sense. That's basically what we've configured at the Azure cloud
level. Therefore, that's what the OS in the VM is seeing. OK. So what I want to do
then is go back to the Azure portal and I want to switch the size so that it's got at least
2 virtual CPUs.
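
If you prefer a command line check over Task Manager, a couple of built-in Windows commands report the same processor details; a quick sketch:

  echo %NUMBER_OF_PROCESSORS%
  wmic cpu get NumberOfCores,NumberOfLogicalProcessors

The environment variable reflects the number of logical processors the OS sees, which should match the vCPU count set at the Azure level.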

OK, so back here in the portal, I'm looking at the properties of our virtual machine
again, and we had gone to the Overview blade where we could see the Size. But if you
want to change it, you have to go down under Settings in the left-hand navigator
where you can click on Size. And from here you'll see the current selection, but you
can also change it.

I'm going to change it to the next listed size that has 2 virtual CPUs and, of course, more RAM: 8 GiB instead of just 3.5. The VM Size reads D2s_v3, the Family reads General purpose, and the Data disks read 4. And I'm going to click the Resize button. Now there's a note at the top that says: if the VM is currently running, resizing it is going to restart the virtual machine.

So if you're running a mission-critical workload or anything like that, you want to be mindful of this fact. After a moment, if I click the bell or notification icon up in the
upper right of the portal, it says it resized the virtual machine and it was successful. So
if I go back to the Overview blade and if I click the Refresh button, it now reflects for
the size that we've got 2 virtual CPUs.
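
For what it's worth, the same resize can be scripted; a hedged Azure CLI sketch, with the same caveat that a resize restarts a running VM:

  az vm list-vm-resize-options --resource-group Rg1 --name Winsrv2019-1 --output table
  az vm resize --resource-group Rg1 --name Winsrv2019-1 --size Standard_D2s_v3
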
So let's RDP back into the virtual machine now to make sure that's reflected within the
VM operating system. OK, so here in the virtual machine, I'm going to go ahead and
like we did previously, I'm going to right click on the Taskbar, go into the Task
Manager and I want to go into the Performance tab, where now it's reflecting that
while we still have 1 CPU Socket, according to the virtual machine, it now shows that
we have 2 Virtual processors.

Now what goes hand-in-hand with that, of course, is that we're going to be paying
more because we have more horsepower. But remember that you want to match your
virtual CPUs with the IT workload that's running in the VM. And what you could
even do here in Azure is in the left-hand navigator, you could scroll all the way down
under Monitoring, where you could choose Metrics. And from here, you can monitor
the metrics that are relevant.

In this case, of course, we'd be looking at the percentage CPU utilization, so I'll
choose Percentage CPU and it's now being charted. So I might monitor this over time
to make sure that the current sizing, specifically with the vCPUs, is in alignment with
the IT workload requirements. The Percentage CPU (Avg) reads 4.3170%.
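
The same metric can be pulled from the command line too; a small sketch, assuming the VM name and resource group used in this demo:

  az monitor metrics list \
    --resource $(az vm show -g Rg1 -n Winsrv2019-1 --query id -o tsv) \
    --metric "Percentage CPU" --output table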

Configuring VMware Virtual Machine Memory


Topic title: Configuring VMware Virtual Machine Memory. Your host for this session
is Dan Lachance.

One of the great things about virtualization, whether it's in the cloud or on-premises,
is that if you need to adjust the underlying hardware resources for your virtual
machine server, it's just a click or two away or even a command or two away if you
prefer to type things in at the command line. So here I'm using VMware Workstation,
where I've got a Windows Server 2019 virtual machine.

It's not running, and neither is my Kali Linux virtual machine. So I've
got Windows and Linux. Now on the Windows side, I've got the Memory currently
set to 7.5 GB of memory, and for Kali Linux, it's set to 4 GB. So what I'm going to do
is start up or power on each of these virtual machines.

And once they're powered up, we're going to go in to each of them, both operating
systems, Windows and Linux, and just take a look at checking out how much RAM is
showing up as being available in the virtualized OS, and then we'll adjust it and check
out the difference.

OK, so here in my Windows machine, I've gone to a command prompt where I'm
going to type in systeminfo, that's one word altogether. This is a standard Windows
command that will give me just that, system information. And what I'm primarily
interested in here is taking note of the Total Physical Memory.

So here it's reporting about 7.6 GB, so 7600 MB of memory. And again, if I go into
the VM menu and look at the Settings for this Windows VM, the memory is set at
about 7.5 GB. So that's correct, that's showing up within the virtualized operating
system.
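
If you only want that one line of systeminfo output, you can filter it with the built-in find command:

  systeminfo | find "Total Physical Memory"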

Now here, I've used the free PuTTY client to open an SSH, or Secure Shell, remote management session with my Linux virtual machine, given its IP address. So in here,
I'm going to go ahead and use the cat command. Now, in Linux, the cat command lets
you view the contents of a text file.

And I have a text file under /proc for processes and that file is called meminfo,
memory information. And if I press Enter on that to view the contents of that file,
because it's dynamically adjusted when your Linux kernel is up and running, the
MemTotal count is what I'm interested in.

Here we've got a listing whose values are reported in kilobytes; the MemTotal figure works out to about 4 GB of total memory. Then, of course, we're being told that we have approximately 2.8 GB of free memory, MemFree, and MemAvailable is about 3 GB.

The difference is that MemAvailable is really about how much memory is available
for starting applications. And that's what we're primarily interested in if we are
gauging: do we have enough to accommodate a given workload that is not yet already
running on this machine? However, I'm going to go ahead and use the clear command
here because I can also run the top command in Linux to show the top running
processes.

And one of the things that's always interesting is the percent of memory utilization. So
I can determine which processes essentially are memory hogs in Linux if I'm having a
problem with depleted amounts of memory, which is not really the case here at all. So
I'm going to go ahead and type q to quit out of that.
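
A few one-liners pull out just the figures we care about rather than the whole file; a small sketch using standard Linux tools (the sort option for top assumes a modern procps version):

  grep -E 'MemTotal|MemFree|MemAvailable' /proc/meminfo
  free -h          # human-readable total, used, free, and available memory
  top -o %MEM      # sort the process list by memory consumption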

OK, so back here in VMware, I've got my Windows virtual machine selected. If I go
to the VM menu and choose Settings, from here, not only do we see the Memory
allocated, but we can change it. So, for instance, if I were to reduce this down to about 4 GB of memory, then as we start dragging, the new value shows up at the top here.

We could also type in the value that we want. Notice that when I drag it down to 4
GB, it keeps going back up to the 8 GB point. Well, that's because down below there's
a note that tells us what's happening. It says the virtual machine must be powered off
in order to reduce the amount of memory. OK.

So if I go back and Shut Down the Guest, so I'm going to shut down the virtual
machine operating system. Now if I go into the virtual machine settings and back to
the memory adjustment area, when I drag it down to 4 GB, it stays.

Let's go ahead and click OK, and let's fire that one back up, and then we'll check out
the amount of memory being reported within the OS. OK, so back here in Windows,
let's go ahead and run the systeminfo command yet again and let's check out our
memory. So we've reduced it down to 4 GB of memory.

Let's see what it's telling us here in this command. So according to the OS, indeed the
Total Physical Memory is now being reported as being approximately 4 GB of
memory. So the operating system just picks up the change immediately.
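
As an aside, VMware Workstation stores this setting as the memsize entry, in MB, in the VM's .vmx configuration file, so with the VM powered off you could make the same change in a text editor; a sketch, where the .vmx file name is an assumption for this demo VM:

  grep memsize "Windows Server 2019.vmx"    # e.g., memsize = "4096"
  vmrun start "Windows Server 2019.vmx"     # power the VM back on after editing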

Now, of course, if I still have that VM running and I go back into the VM Settings,
when it comes to adjusting the amount of memory for the VM up or down, notice that
I can increase the amount of memory without a problem while the VM is running.

It's only when you want to reduce it that it has to be powered off here in VMware
Workstation. Now, if I switch over to my Linux virtual machine and if I go into my
menu system and of course, into the same area, basically into my VM Settings, and if
I attempt to reduce the amount of memory, it must be powered off to reduce it.

But if I want to increase it, let's say we want to increase it to 6.6 GB or something like
that, we can go ahead and click OK to put that into effect. Next to the options of
memory, there are three arrows. The blue one corresponds to the Maximum
recommended memory (Memory swapping may occur beyond this size.), 13.4 GB. The
green one corresponds to the Recommended memory, 4 GB. The yellow one
corresponds to Guest OS recommended minimum, 2 GB.

Now, notice that it's saving and restoring the virtual machine. In other words, there is
an interruption of service, if that is a mission-critical workload that's running. So you
don't want to do that during peak busy times. So if I use the up arrow key to go back
through my previous commands, back to cat /proc/meminfo, and if we kind of scroll
back up to the top of that output, notice in this case, it's not reflecting the new memory
change.

Even other Linux commands like free don't seem to reflect the change in the available memory. So I'm going to go ahead and reboot the operating system. One of many ways to do that is to run an elevated command, so I'll prefix it with sudo: sudo init 6. Switching to runlevel 6 reboots the system, and I'll put in my password in order to use that command. Of course, I'm in a remote session, so it says that the network connection was closed.

That's fine, it's because it's restarting, so I'm going to go ahead and reconnect. OK, so
back here in Linux, we're going to go ahead and go back to our cat /proc/meminfo
command, and let's take a look at what is reporting now that we've rebooted.

OK, so now we've got approximately 6.6 GB of memory being reported. This is important: even though your platform might allow you to adjust memory upwards, that is, to scale up vertically while a VM is running, the VM may not take note of that until it's actually properly and entirely restarted.
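
Instead of sudo init 6, on most modern distributions these equivalents do the same thing and are arguably clearer than the old runlevel syntax:

  sudo reboot            # classic shortcut
  sudo systemctl reboot  # systemd-native equivalent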

Configuring Microsoft Hyper-V Virtual Machine Memory


Topic title: Configuring Microsoft Hyper-V Virtual Machine Memory. Your host for
this session is Dan Lachance.

Setting the amount of RAM or electronic memory for a server is always absolutely
critical, but as we've mentioned before, more RAM is not always better. To a point,
you need a certain amount of RAM for optimal performance of the IT workload
running in the server, but you need to be monitoring performance metrics to find out
what that value is.

So to get started here in Microsoft Hyper-V, I've got a virtual machine named VM1.
What we're going to take a look at is how to adjust the RAM or memory for that
virtual machine. So I'm going to go ahead and right click on it and I'm going to choose
Settings. Now, over on the left, I'll click Memory. Currently, this virtual machine, if I
look to the right, has an amount of RAM set to 1024 MB.

So in other words, 1 GB of RAM. Depending on the server operating system you're running and the workload that you plan on running within it, that might be sufficient, but it also depends on what else you might be running: multiple server roles like DHCP and DNS, a web server, or an Active Directory domain controller, if it's a Windows host.

If you're running a lot of those things together, 1 GB of RAM is definitely not going
to do it. And when it comes to newer versions of the Microsoft Windows Server
operating system, 1 GB RAM is not enough to begin with out of the box anyway. So
what we can do then, is adjust the amount of RAM accordingly. So, for example, if I
were to set it to 8192 MB, that is 8 GB of RAM.

What you want to make sure you're not doing is allocating more RAM to the virtual machine than it needs. Otherwise, it could starve other virtual machines of RAM that they might need while this one isn't using it. But that's
where settings like Dynamic Memory come in. If I enable Dynamic Memory, we can
specify the Minimum amount of RAM that would be allocated to this virtual machine.

So let's say 2048 MB or 2 GB and then a Maximum amount. That means it can
dynamically fluctuate with the amount of RAM that it will assign to this virtual
machine. That's what Hyper-V will do. Bear in mind, though, that when an operating
system starts, it tends to spike the amount of memory utilization and often CPU
utilization also as it's firing up.

Once everything is loaded, it'll eventually idle a little bit, even when it's running the
workload that virtual machine is designed to support. Whether it's a web app or a
database server, or whatever the case might be. The Maximum RAM reads 1048576
MB.

Now, even though you can set Minimum and Maximum RAM levels for this virtual
machine, the RAM listed above, in our case 8 GB, is what will really be reported to
the OS. So if you're, for example, updating the virtual machine software in some way
and it needs to check for memory requirements, this is the value that it will see; 8 GB
of RAM in our particular configuration.

So the virtual machine OS will always see 8 GB of RAM; the Dynamic Memory settings are for Microsoft Hyper-V, so the guest OS running in the VM will never see the Minimum RAM dip very low, such as down to 2 GB. This is only for memory management at the hypervisor host level. The Memory buffer is a percentage value, shown here as 20%, though you can change it; it tells Hyper-V to set aside that much extra memory, relative to what the VM is actually using, and make it available should the virtual machine need it.
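
All of these knobs are also exposed through Hyper-V's PowerShell module; a minimal sketch of the settings just discussed, run while the VM is off. The 16 GB maximum is an example value, and -Priority, a 0 to 100 value, corresponds to the memory weight setting covered next:

  Set-VMMemory -VMName VM1 -DynamicMemoryEnabled $true `
    -StartupBytes 8GB -MinimumBytes 2GB -MaximumBytes 16GB `
    -Buffer 20 -Priority 80
  Get-VMMemory -VMName VM1    # confirm the memory configuration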

The great thing about these Dynamic Memory settings is that you can even change
them while the virtual machine is running and the VM will not need to be restarted.
Then we have a relative memory weight option here, which is compared against other
VMs on this Hyper-V host to determine the priority for memory utilization from the
VM's perspective.

So if this VM is running a mission-critical workload, we can bump up the priority level, such as all the way up to High, to make sure that this virtual machine is assured it's going to get the memory it needs, if it needs more, according to the Dynamic Memory settings, compared to other VMs running on the host.
Now of course, if all of the other VMs are also set to High, it's meaningless because there is no relative difference between the memory weight priority levels. So adjust
these very carefully. Now, if you've set way too much memory for this virtual
machine beyond what is available on the underlying hypervisor, you'll find out pretty
quickly.

For instance, let's go ahead and right click on this Hyper-V VM and try to start it. So
we get an error stating that it failed to start because there's not enough memory in the
system itself to start this virtual machine. Now, that could be because we've exceeded
what's available physically or there are other virtual machines running that are
consuming large amounts of RAM.

But that's not the case here. There are no other listed VMs. So I'm going to go back
into the settings of that VM and I'm going to reduce the amount of memory, let's say,
down to 2 GB of RAM, and I'll click OK, and let's go ahead and start that VM again.
This time it's in the midst of starting up. And notice that we have the Assigned
Memory counter here showing the assigned RAM that we specified.

Remember, if we look at the Settings of this VM, and we go back to memory, Dynamic Memory is available, but our RAM amount up at the top is what is being shown right now as the Assigned Memory. There are no additional memory loads on this VM, so it doesn't need to use more RAM.

So make sure you adjust these accordingly. That means monitoring the utilization of the VM over time, if it's a new VM and you don't already have performance metrics or baselines, so that you know exactly how you should be setting these values.

Configuring AWS Instance Memory Settings


Topic title: Configuring AWS Instance Memory Settings. Your host for this session is
Dan Lachance.

One crucial aspect of working with virtual machines in the cloud is sizing them
correctly. Right sizing means that you are choosing the appropriate underlying
horsepower for your virtual machine server to accommodate the running workload. So
of course, you want to make sure that you've got enough CPU compute power,
enough RAM, enough support for attached data disks and all that type of thing to
accommodate what you're doing properly.

You want to make sure that you are capturing performance metrics over time to establish what is, quote unquote, normal. In other words, setting up a baseline, because then you can select the appropriate instance type, as it's called in Amazon EC2.

EC2 deals with virtual machines, among other things, in the Amazon Web Services
cloud. And the instance type is the sizing aspect. So I've searched up Amazon EC2
Instance Types, and as I go further down, I have the instance types with their names,
the number of virtual CPUs, the amount of memory, the type of Storage, the Network
Performance. So what you need will determine what you select.

Now, if you need ultra performance, the T2 series is probably not what you need. So
if I were to choose, for example, M5, as we scroll down, we have differing numbers of virtual CPUs, all the way up to 96. That's 96 virtual CPUs, with up to 384 GiB of
memory.

Then we have the Network Bandwidth measured in Gbps for each sizing, and the EBS volume Bandwidth in Mbps, which is a measure of disk throughput. So it can be a bit of a balancing act when selecting the size for the virtual machine.
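
You can do the same comparison from the AWS CLI; a small sketch comparing a few of the types mentioned here:

  aws ec2 describe-instance-types \
    --instance-types t2.micro t2.medium m5.24xlarge \
    --query 'InstanceTypes[].[InstanceType,VCpuInfo.DefaultVCpus,MemoryInfo.SizeInMiB]' \
    --output table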

So I'm going to close that. In the AWS Management Console, if I search up ec2 and click on that, it'll take me into the EC2 Management Console, where I can view any virtual machine instances that I might have in the current region. So I can select different regions here in the upper right.

And as I switch from one region to another, it might give me a different listing for
what was deployed in that region. So I don't have any virtual machine instances, but I
will soon. So I'm going to click the Launch instances button and it's going to be
Amazon Linux, so I'll select that Amazon Machine Image. So in Step 2: Choose an
Instance Type, there's a selection that's already been made here, t2.micro, which is
Free tier eligible for my AWS account, 1 vCPU and 1 GiB of memory.

And that might suffice depending on what's going to be running in the virtual
machine. Also, I have to take note of the Network Performance because depending on
what I'm doing, I might need high network performance and I might need support for
IPv6 addressing. The Instance Storage (GB) is set to EBS only, the Network
Performance is set to Low to Moderate, and the IPv6 Support reads Yes. And
remember that you can always change the instance type after you've created the
instance or the virtual machine in the cloud.

This is really pretty neat when you think about it. All we have to do is make a click,
and all of a sudden we're specifying how many virtual CPUs and how much RAM we
want to use. It's such a far cry from changing equipment physically, and sometimes
you still do that for sure as a Server+ tech, but it's great in the cloud where we just
have to make a selection, either in a GUI or by typing in commands for the underlying
horsepower.

So I'm going to go ahead and go with t2.micro here, I'm just going to go ahead and
click Review and Launch, and I'm going to click the Launch button. I've got an
existing key pair, a public and private key pair, that's used to authenticate to the virtual machine here.

So I will acknowledge that I've downloaded the private key file, which I have, and I
will launch the instance. The key pair reads LinuxKeyPair1 | RSA. OK, so the instance is being launched based on the instance type that I've selected, so I'll click the link here for the launching instance. The link reads i-0c4d3b7d64701ea3a. It's in a state of pending, and the Instance type is now shown as t2.micro.
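
Scripted, the launch looks something like this; the AMI ID is a placeholder you'd look up for your region, while the key pair name comes from this demo:

  aws ec2 run-instances \
    --image-id ami-xxxxxxxxxxxxxxxxx \
    --instance-type t2.micro \
    --key-name LinuxKeyPair1 \
    --count 1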

Let's wait a moment for that to be deployed, and then we'll examine how to view the instance type in the details, as well as here in the column, and then also how to change it. OK, so the instance is now running, so I can go ahead and select it in the checkbox to the far left.

And if I look at the Details tab down below, then from here I can scroll down and I
can start to read details about the underlying horsepower. For example, in the bottom
right, the number of CPUs is currently set to a value of 1. Now, of course, with that
virtual machine, or instance, running, if I go to the Actions button and go to Instance
settings, Change instance type is not available because it is running.

So I'm going to go to Instance state and stop it. And once it's stopped, there it is,
successfully stopped. I'll just click refresh, I'll select it again and I will go to the
Actions button, Instance settings, and I'll choose Change instance type. So the current
Instance type is shown in the list as being t2.micro. So if I decide I need a little bit
more horsepower, maybe I would choose t2.medium.

Now if I go back to searching up my EC2 instance types for Amazon Web Services,
and specifically, if I look at the T2 series, so I've gone to the T2 series tab, down
below, t2.medium consists of 2 virtual CPUs and 4 GiB of memory, as opposed to what we've got, which is 1 virtual CPU and 1 GiB of RAM.

OK, well, assuming that will accommodate the workload fine, I would just select it and click Apply. Then I'll select the instance, go to Instance state, and choose Start instance, with the understanding that when I fire it up and it's running, I will be paying more because I've got more underlying horsepower. But really, that's all it takes to change the instance type for your AWS EC2 instance.
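
The whole stop, resize, start cycle is easy to script as well; a hedged AWS CLI sketch using the instance ID from this demo:

  aws ec2 stop-instances --instance-ids i-0c4d3b7d64701ea3a
  aws ec2 wait instance-stopped --instance-ids i-0c4d3b7d64701ea3a
  # The instance type can only be changed while the instance is stopped
  aws ec2 modify-instance-attribute --instance-id i-0c4d3b7d64701ea3a \
    --instance-type Value=t2.medium
  aws ec2 start-instances --instance-ids i-0c4d3b7d64701ea3a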

Configuring Microsoft Azure Instance Memory Settings


Topic title: Configuring Microsoft Azure Instance Memory Settings. Your host for this
session is Dan Lachance.

In this demonstration, I'm going to be focusing on setting the correct amount of RAM
or memory, for an Azure cloud-based virtual machine. So I've already signed into the
Microsoft Azure Portal, the GUI management tool.

So in the left-hand navigator on the far left, I'm going to scroll down until I get to
Virtual machines, which takes me to the Virtual machines view. Now I've got one
virtual machine running Windows Server 2019 that's already been deployed, but it's
Stopped, or deallocated, so it's not running. However, if I click on the link for the
name of the VM, that puts me in the Overview blade, where on the right, the Size is
available.

So in this case, I've got a Size or the underlying horsepower for the VM, which
consists of 2 virtual CPUs and 8 GiB of memory. However, because the virtual
machine is stopped, it's not running, I'm not paying for the use of that resource,
specifically for the memory. However, let's go ahead and click the Start button to start
the virtual machine to get it up and running.

Now, of course, you would do that to accommodate whatever workload needs to run
in the VM, but at the same time, you're going to be paying for the VM every second
that it's running. And the more RAM that you have, the more that you pay, for the
time that the VM is up and running, naturally. Now, while the virtual machine is
running, you can be on the Overview blade like we are just checking out the Size, but
you can also click on Size in the left-hand navigator, where you can actively switch
the virtual machine size.

So what we're looking at here is a current selection for our sizing, which is 2 virtual
CPUs, 8 GiB of RAM. But what we can also do is scroll through the list and choose
something that might be more appropriate for our workload, maybe more virtual
CPUs and RAM, which of course, will translate into more cost when the VM is
running. Or in some cases, you will actually scale down vertically because you might
have too much horsepower assigned for the given IT workload.

And so why would you pay more unnecessarily when you don't need to? So I'm going
to go ahead and select a different size for this virtual machine that includes a lesser
amount of RAM. We're currently at 8 GiB of RAM, I've decided after monitoring the
VM, I don't need that much, 3.5 should do the trick. VM Size reads DS1_v2, Family
reads General purpose, vCPUs read 1, and Data disks read 4.

So I'm going to go ahead and select that and click Resize. Of course, at the top, there's
a message that says when you resize a VM, in this case for RAM, but the same thing
applies for the CPU, that it's going to cause it to be restarted. So you don't want to do
this during peak hours if a mission-critical workload is running in the virtual machine.

So, for an existing VM, we can change the amount of RAM. After
a moment, we'll have a message in the notification area; so that's the bell icon in the
upper right, that the virtual machine was resized. And if we go back to the Overview
blade and if we click the refresh button just to make sure we're looking at the most up-
to-date information, it will now reflect that we've got a new size for this VM that now
only includes 3.5 GiB of RAM instead of the original 8 GiB.

So we resized it while it was running, which restarted it; the change is now in effect, and so we will be paying less when the virtual machine is running. Of course, you can stop the virtual machine at any time by clicking the Stop button if you don't need it to run the
IT workload anymore.
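
If you'd rather script it, resizing for memory is the same CLI call as for vCPUs; a sketch that assumes this is the Winsrv2019-1 VM in resource group Rg1 from the earlier demos:

  az vm list-vm-resize-options --resource-group Rg1 --name Winsrv2019-1 --output table
  az vm resize --resource-group Rg1 --name Winsrv2019-1 --size Standard_DS1_v2
  az vm deallocate --resource-group Rg1 --name Winsrv2019-1   # stop paying for compute when idle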

However, if we click Create a resource in the upper left to create a new virtual machine, we can also set the VM Size at that time, if I scroll down a little bit. So the VM sizing is available here and again, we have to really carefully think
about the workload that we need to accommodate to determine what kind of
horsepower or in our case, specifically, how much RAM we're going to need.

Do we need 3.5 GiB of memory or 16 GiB of memory? You just have a couple of common sizes shown here, but you can click the See all sizes link to get a much larger list, to the point, for example, where we've got sizes with 32 GiB of RAM shown here.

Then we've got a 4th generation E family for high memory needs, and as we go down through the list, we're getting up to 64 GiB of RAM. So what you need to accommodate your workload will determine, of course, what you're selecting here.

Always remember we can monitor performance metrics, such as those related to memory, to make an informed decision when we resize the VM. So let me just go back
to the Virtual machines view, I don't want to create a new VM. Here we've got our
existing VM, so I'm just going to click on it to open it up and in the properties, if I
scroll down on the left under Monitoring, way down at the bottom, I can click Metrics.
And from here I have a number of metrics I can monitor related to the selected virtual
machine. Now, of course, the virtual machine is stopped, so the chart isn't going to show anything meaningful until such time that it's running. But what we're always interested
in is the amount of available memory. So we can chart this over time.

And what you really need to do is establish a baseline: how much memory is normally consumed when our IT workload, whatever that happens to be, is running and performance is satisfactory. We need a standard to measure performance by, and then you'll know exactly how to resize a VM to select the appropriate amount of RAM.
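
From the command line, the equivalent check might look like this; a sketch that assumes the newer Available Memory Bytes host metric is available for your VM (older configurations may need guest-level diagnostics instead):

  az monitor metrics list \
    --resource $(az vm show -g Rg1 -n Winsrv2019-1 --query id -o tsv) \
    --metric "Available Memory Bytes" --output table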

Now, be wary of some high memory utilization configurations, like using in-memory
database caching, which can be great to improve database performance, maybe for
commonly run queries or for indexing operations to make sorting and grouping and
searching through databases much quicker. And while there are still definitely times
as a Server+ technician where you will have to physically replace RAM to increase or
decrease the amount in a server, when working with virtual machines, especially in
the cloud, it's just so quick and easy to select differing amounts of hardware resources
to accommodate the workload.

Course Summary
Topic title: Course Summary.

So in this course, we've examined how to plan for and configure on-premises and
cloud virtual machine settings related to CPUs and RAM. We did this by exploring
how hypervisors run guest virtual machines, and how to configure VMware and Microsoft Hyper-V vCPU settings.

We examined AWS and Microsoft Azure vCPU configurations. We configured VMware Workstation and Microsoft Hyper-V virtual machine memory settings. We also examined AWS and Microsoft Azure instance memory configuration.

In our next course, we'll move on to explore servers and cloud computing.

CompTIA Server+ (SK0-005): Data Privacy & Protection
Data privacy has become ingrained in laws and regulations all over the world. Server
technicians must take the appropriate steps to secure sensitive data in alignment with
applicable laws and regulations. Discover items that constitute personally identifiable
information (PII) and protected health information (PHI) and identify common data
security standards such as GDPR, HIPAA, and PCI DSS. Differentiate between
various types of malware and discover how the art of deception is practiced through
social engineering. Next, examine data loss prevention (DLP) and implement data
discovery and classification on-premises and in the cloud. Lastly, examine key storage
media destruction techniques. Upon course completion, you'll be able to secure data in
alignment with applicable laws and regulations. You'll also be more prepared for the
CompTIA Server+ SK0-005 certification exam.

Course Overview
Topic title: Course Overview

Data privacy has become ingrained in laws and regulations all over the world. Server
technicians must take the appropriate steps to secure sensitive data in alignment with
applicable laws and regulations.

In this course, I'll identify items that constitute PII and PHI. Next, you will list
common data security standards such as GDPR, HIPAA, PCI DSS and the like. I'll
then differentiate between various types of malware, and I'll define how the art of
deception is practiced through social engineering. Next, I'll examine data loss prevention, or DLP, and implement data discovery and classification
on-premises as well as in the cloud. Lastly, I'll examine storage media destruction
techniques. This course is part of a collection that prepares you for the CompTIA
Server+ SK0-005 certification exam.

Personally Identifiable Information (PII)


Topic title: Personally Identifiable Information (PII). Your host for this session is
Dan Lachance.

Data privacy is a very important consideration these days when it comes to server and
data management. So, let's start by talking about personally identifiable information or
PII. When we talk about PII, what we're talking about is anything that can be uniquely traced back to an individual, whether it's one single piece of information like a credit card number or a combination of pieces of information like a last name, an address, and a Social Security number. So, data privacy then means
dealing with how data is collected, how data is transmitted, how it gets stored, and
where it gets stored in different geographical regions, and also how that data is shared
and ultimately used.
Now, what's important about this is that there are different laws and regulations that govern how PII is to be treated through all of the phases that we talked about, from collection through to the usage of the data, and server technicians need to be aware of which laws and regulations apply for data privacy. So, non-technology examples of
PII would include things like mother's maiden name, home street address, credit card
number. But then, when we get to technology, we have to think about things like an IP
address, which could uniquely identify a user or at least a network where that user was
connected. A web browser cookie. Web browser cookies are small text files used by
the web browser for a variety of purposes like storing preferences for a website or
retaining session information for a website. So, there could be confidential
information in a cookie.

A user's social media account is considered to be an example of PII. Another term you
might come across is sensitive personal information or SPI. This is a little bit different and more specialized; it deals with things like political party affiliation of an individual, an individual's sexual orientation or gender, and also membership in various trade unions. Protected health information, or PHI, is similar to PII, but what
makes it different is that it focuses on sensitive medical information. Now that could
be past as well as current health information or even future health details that might
relate to future health care and how it will be paid for. Organizations need to conduct
periodic assessments to ensure the security of sensitive data. And so, there should be a
periodic PII or PHI security control review. This means reviewing related
organizational policies.

And this is important to be done periodically because, over time, laws or regulations do change. And as a result, organizational security policies related to that type of sensitive information need to also change. So, it means evaluating security control
efficacy. A security control is something put in place to reduce, eliminate, or in some way mitigate a security threat. And so, what once might have been an effective
security control to protect data, like a certain form of encryption, may no longer be as
effective because maybe there's a flaw known in that encryption algorithm. It also
means being able to identify security control inadequacies, and that can sometimes
become known over time by reviewing logs or performing a trend analysis. All of this
can serve to eventually implement new security controls or change existing security
controls, so that they're more effective in protecting sensitive information.

It's fine to talk about the need for protecting sensitive data, but how do you find out if
you have any sensitive data and where it is, especially on a large scale in the
enterprise, where you might have many different locations, thousands of different
storage arrays, and even cloud computing environments that might store sensitive
information. The answer is to use a tool that can crawl or go through and discover this
sensitive data for you. And ideally, not only discover it, but label it in the file system
because then your security controls, your backup mechanisms, and so on, can look at
those labels in the file system and treat sensitive data differently than non-sensitive
data.

In the case of a Windows environment, you could even use the File Server Resource
Manager or FSRM role service that is built into Windows Server, but not installed by
default. You can use that to discover and classify sensitive data in the file system. You
can even use Windows dynamic access control to limit access to certain data flagged
in a certain way in the file system. The other thing is in the cloud, you might use
services like Amazon Macie to discover and classify sensitive data. So, yes, it's
important to know what sensitive information is and how it can be traced back to
individuals. But it's also important that we think about how we could actively
implement a solution to discover and identify that data.
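
To make that concrete, here's a rough PowerShell sketch of FSRM content classification. Treat it as illustrative only: the ContainsPII property name, the D:\Shares path, and the regular expression are assumptions, and the cmdlet parameters should be verified against the FileServerResourceManager module on your Windows Server version:

  # Define a Yes/No classification property, then a content rule that sets it
  # when a file under D:\Shares contains a credit-card-like number
  New-FsrmClassificationPropertyDefinition -Name "ContainsPII" -Type YesNo
  New-FsrmClassificationRule -Name "Flag card numbers" `
    -Property "ContainsPII" -PropertyValue "Yes" `
    -Namespace @("D:\Shares") `
    -ClassificationMechanism "Content Classifier" `
    -ContentRegularExpression @('\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}')
  Start-FsrmClassification -Confirm:$false   # run a classification pass now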

Data Privacy Standards


Topic title: Data Privacy Standards. Your host for this session is Dan Lachance.

Remaining compliant with data privacy standards can be a daunting task for any
organization. So, let's take a few minutes and let's discuss data privacy standards and
compliance. The first consideration is to think about how organizational security
policies are influenced by data privacy standards. The nature of your organization, the industry that you're in, and the clients that you deal with can determine
which laws or regulations might apply when it comes to sensitive data that your
organization might collect, that it might process, that it might share, and so on. So,
legal and regulatory compliance are important. There needs to be someone in the
organization that tracks which laws or regulations are applicable, and that can vary
around the world with multinational firms and also with the replication of cloud data
from one geographical region to another.

Somebody has to be watching out for that to find out what is required in terms of laws
and regulations that must be followed and how that will be followed or complied with.
So, security and regulatory compliance then means that we have to think about
sensitive data confidentiality, so making sure only authorized parties can see it. And
that can often be achieved with access control lists of various kinds and also data
encryption, but also data integrity, we have to think about making sure that data is
trustworthy, authentic, and has not been tampered with. That could be done over the
network with network security protocol suites. You can also enable things like file
system hashing to detect any modifications, any unauthorized modifications to files.
Often, regulatory compliance also means retaining certain types of data for a specific
period of time.
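
Coming back to file system hashing for a moment, here's a minimal integrity-check sketch with standard Linux tools (the paths are just examples): record a baseline of checksums, then verify it later, where any FAILED line reveals a modified file.

  sha256sum /srv/records/*.csv > /root/records.sha256   # create the baseline
  sha256sum --check /root/records.sha256                # later: detect changes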

So, if you're in the finance industry, for example, you might be required to retain
financial records for banking, for taxation, for investment transactions, for a number
of years, and so that has to be accounted for. Now, from an IT strategic standpoint, if
you must archive data for a few years, then you might want to make sure that you are
moving that data to a slower access tier. In other words, on slower storage media to
save on costs, since the data doesn't need to be accessed frequently. Then, we have to
think about the disclosure of security incidents, which might be required by some laws
or regulations when a security incident does occur, such as being required to report to customers if their personal information was affected by a security data breach, or notifying investors of the same type of thing.

There are a number of different types of sensitive information. The first being
personally identifiable information, or PII. This, of course, is one or more pieces
of information that could be traced back to an individual like a credit card number, a
bank account number, a tax ID, that type of thing. Then, we have electronic medical
records or EMRs. This is the type of medical information that would be recorded and
stored by your family doctor that might contain things like your blood pressure
details, your blood type details, and so on. Then, we have an electronic health record.
So, this is different than an electronic medical record or an EMR. With an electronic health record, or EHR, this is patient medical information that's stored digitally. However, it's not localized to a single practice, such as a local family doctor's office, the way an EMR is. The EHR is designed to be shared across different systems.

And then, we have a personal health record. Now, this is an entire medical record of
an individual patient. However, it is under the control of the patient or a family
member. So, instead of a health care provider having control of patient medical details, as is the case with EMRs and EHRs, with the PHR it's entirely under the control of the patient or a family member. The Health Insurance Portability and Accountability Act, otherwise called HIPAA, relates to the limited disclosure of
protected health information as well as electronic protected health information, ePHI,
specifically, in the United States. It applies to HIPAA-related entities, and those
entities that are affected by HIPAA would include health care providers in the United
States, also, any health plans that are applicable to the United States.

And HIPAA deals with not only technical safeguards, things like network access
control mechanisms to control access to the network and firewalls and encryption, but,
HIPAA also has details related to physical security, like limited access or restricted
access to facilities or rooms within facilities and equipment racks that might contain
storage devices. In the European Union, we then have the General Data Protection
Regulation or GDPR. Now, while this is active legislation from the EU, it's not limited to being applicable only within EU countries. The idea is that we're talking about personal data for EU citizens that is in the control of the EU citizen: data privacy of their PII, so the collection or retention of the data, the use of it, and the potential sharing of it.

But what's interesting about GDPR is it will apply to organizations that deal with EU
citizen private data anywhere in the world. So, regardless of where the organization is
located, working with this data, GDPR will apply, if it is the private data of a
European Union citizen. So, EU citizens then get the ability to provide consent to their
data being collected in the first place. They have access to that data and the ability to
correct any problems with it, like errors in their personal information. This is all what
falls under the GDPR. Then, we've got the Children's Online Privacy Protection Act,
or COPPA. This is related to the United States. It deals with protecting the personally
identifiable information of children. In other words, it would require parental consent
before any of that information is made available.

So, it would restrict information gathering for children from Internet services like
certain types of children's apps and so on. It requires parental consent. At the
international level, we've got the Payment Card Industry Data Security Standard,
otherwise called PCI DSS. So, this isn't tied to a particular region or a country. What
it deals with is the security of cardholder information, whether it's debit cards or credit
cards like Visa, Mastercard, American Express, so on. So, what's interesting is it deals
with security standards for merchants that will deal with cardholder data, like retail
outlets that would deal with credit card payments or ecommerce sites online that
would deal with credit card payments and what they should be doing to protect that
cardholder information. So, let's look at some examples of PCI DSS requirements and how they are met. One goal in PCI DSS is to build and maintain a secure network.

But PCI DSS doesn't specify technical details on how to achieve these goals. So, a
control for that type of a goal might be to install firewalls and change network
defaults on network appliances. If the goal is to protect cardholder data, well, we
might put in security controls that enable storage data encryption and network
encryption over public networks. If the goal is to maintain a vulnerability management program, maybe the security controls put in place to achieve that would include anti-virus solutions and updates for them, and applying security to all phases of the software development lifecycle, or SDLC, when software is being created. If the goal is to implement strong access control, perhaps we would do that by making sure we have a need-to-know basis only for cardholder data, and that would influence how our ACLs, or access control lists, are crafted. Using unique user accounts, so everyone signs
in with their own account. Having physical security controls in place to protect the
storage of sensitive information like cardholder data.

Defining Malware Types


Topic title: Defining Malware Types. Your host for this session is Dan Lachance.

Malware is a really big problem. Malware is really just malicious software. So, it's software code or scripts written by malicious actors for some nefarious purpose. So,
as an IT security technician, as a server technician, we need to be thinking about
malware questions such as, who are the attackers, what do they want, is it for financial
gain, are they trying to make a point? Is it for bragging rights because it's a young
script kiddie? How will the attackers attack? Will it be a denial-of-service attack to
render a server useless for legitimate activity? Or will it be a ransomware attack to
encrypt data files? And we also then have to think, how can we mitigate these attacks?

Can we put security controls in place to either eliminate or reduce the impact or
somehow mitigate the attack itself? The malware infection process begins with
reconnaissance, so the malicious actors will do their homework to determine who
their target is, how they're going to target them, what they're going to get out of it, but
again, whether it's bragging rights, or whether it's financially motivated, whatever the
case might be. Next, in step 2, the user is tricked in some way into
divulging sensitive information, perhaps by clicking on a link in an innocent looking
email. Maybe there's an email message that comes into someone in accounts payable
that looks like it's a message from a valid vendor that the company has purchased
things from, maybe it's an accounts payable invoice, so the attackers would have done
their reconnaissance to find out what would easily trick users.

And maybe when the user clicks the link in that legitimate-looking but malicious invoice to open the attachment, it infects the machine. There are many ways it could happen; that's only one way it might
happen. So, in step 3, malware would then infect the device. Whatever type of
malware it happens to be, and we'll cover that in a moment, after which data then
might be exfiltrated or copied out of the organization, sensitive information perhaps
like credit card numbers. Or maybe if it's ransomware, data files will be encrypted and a ransom will be demanded in order to potentially receive a decryption key. Or maybe
it's malware that simply deletes important data. It could be that kind of thing as well.
So, let's take a moment to talk about different types of malware, starting with the
Trojan.
A Trojan essentially, you could think of as being a wolf in sheep's clothing. It means
that it looks like perhaps it's a piece of software, or a website or an email address that
looks benign. But really, it's just used to deliver malware of some kind, maybe
through an infected file attachment. Maybe it's a website that pops up that says your
computer is infected and you should scan it now, but when you actually choose to
scan it now, you're actually infecting your computer. That could be another example
of a Trojan. Then, we have a traditional virus. When we use the word virus in terms of
IT malware, we're talking about malicious code that attaches itself to files. It needs a
host. And so, it could be an office productivity type of virus like an Excel macro virus
or it could be an app installer file that's infected.

It could be media, maybe people are illegally trying to download the latest movie
that's just in the theaters, and when they do, it could, in fact, be a copy of the movie.
But when they play the movie, it might also infect their machine. It happens all the
time. So, illegal downloads of music, movies, TV shows, that type of thing, that's
a great way to trick people into downloading something and running it, when, in fact,
they could be infecting their machine. There are also worms. A worm is a type of
malware that does not need to be attached to a file. It doesn't need a host like an actual
virus does. However, a worm is called a worm because it is self-propagating over the
network. Most worms in the past were self-propagating malware that scanned for vulnerable file sharing hosts on the network and then reached out over the network and inserted themselves there to continue worming through the network beyond that point.

Ransomware, as we've briefly mentioned, can do a number of things. It can even prevent system start-up unless a ransom is paid, normally in some anonymous way, such as through Bitcoin. It can encrypt data files; again, unless a ransom is paid, then
the decryption key will not be provided. Bear in mind, though, that there is never a
guarantee that a decryption key is going to be provided when you pay the ransom, so
something to think about. But most law enforcement agencies will firmly state that
ransoms should never be paid because it encourages ransomware to continue in the
future. But on the other hand, if you've got a City Hospital that's been struck by
ransomware and needs access to patient medical information, in some cases, that
could mean life and death, well, then maybe paying the ransom and getting a
decryption key from their perspective, might be the best solution.

So, I'm not offering an opinion one way or another as to whether ransom payments
should be made. But it's something that would have to be considered. It's one of those
things. It's unpleasant, but it has to be planned ahead of time in case your organization
or agency gets hit with ransomware. Now, the other thing to think about with malware
is system compromise isolation. And what this means if we look at our network
diagram is if we have Internet of Things devices, IoT devices, which are notoriously
not as secure as they really should be, usually because they might have default
credentials that can't be changed, or maybe you have no ability to add a PKI certificate
to secure communications to an IoT device, that type of thing.

Because of those reasons and many more, IoT devices you might consider placing on
their own isolated network. So, in our diagram, we might have malicious actors that
came in through the Internet, managed to somehow get through firewalls through a
screened subnet to an IoT device network because they're able to compromise it. But
if those IoT devices are on their own network, it makes it a little more difficult for
malicious actors to get to a production network. Ideally, the firewall separating an IoT
network and a production network will be very restrictive. So, what can we do to
mitigate malware? One, and it's very important, has nothing to do with technology: periodic user training and awareness.

Now, while that user training and awareness might mention technology, it's all about people: people knowing about scams, and knowing the risks of opening email file attachments they didn't specifically ask for. The problem is that the weakest link is people. So, having up-to-date malware scanning software and signatures, yes, those things are also important. Having the best technological solutions for firewalling, configured appropriately for your environment, is also key. Using antivirus behavioral analysis to detect anomalies or suspicious behaviors can also be very valuable. So, let's remember then that when it comes to mitigating malware, a big part of that equation is the human element.

Social Engineering
Topic title: Social Engineering. Your host for this session is Dan Lachance.

Social engineering is all about trickery, and it's also about deception. Social
engineering means that we are trying to fool somebody into either doing something
they normally wouldn't do, or to provide some kind of sensitive information like bank
account information or passwords, or we're tricking them into clicking something that
they otherwise shouldn't click, because if they do, it will infect their computer or their
device. So, the goal here from a malicious user standpoint is the disclosure of
sensitive information from unsuspecting victims. That's why there are so many crazy
scams out there, whether they're perpetrated through the Internet, through email,
through social media links, through telephone calls. People need to be aware that
these things are really actively out there all of the time.
Social engineering techniques include impersonating government officials. You can
imagine if you get a phone call from what appears to be the income tax department for
your country, stating that you're behind on your taxes and imminent arrests are just
around the corner unless you pay a fine. That can scare a lot of people into, for
example, paying a fine when really they're just paying malicious actors, and of course,
not the government tax department. Impersonating law enforcement. I've been
affected by this personally, receiving a phone call stating that there is a warrant out for
your arrest and unless you pay a fine, you are facing jail time. So, of course, some
people that might not know any better and might be scared by this, might pay that
fine.

Again, you're just paying anonymously to malicious actors. Or it could be a phishing scam, phishing spelled with ph at the beginning, not f. A phishing scam means that a
malicious actor is trying to deceive somebody into, for example, clicking on
something or visiting a website they normally wouldn't visit and that can be done
through email. It could be through social media. For example, if you're a Facebook
user, there were plenty of scams in the past where there would all of a sudden be a
notification that someone posted a video and you're in it, here's the link; and people, being curious as we are, will probably follow that link to see what we were doing in
that video.

Social engineering can also include extortion or sextortion, which is common among
younger people these days, unfortunately, where malicious actors might have pictures
or video that are considered inappropriate, or they might claim that they have pictures
or video that are considered inappropriate. And unless more pictures or videos are sent
or money is paid, then that would be made known to the public. So, extortion,
sextortion, blackmail, this is also part of social engineering, and it is a very dark part
of the Internet, unfortunately. But to be on the ball and to prepare for these things and
do the best that we can possibly do to mitigate against these types of things, we have
to have an awareness that they do exist and we have to understand how they work.
But social engineering doesn't have to involve technology. Not at all. It can happen
without technology, without malware. So, it could be the impersonation, let's say, of a
communications provider technician over the phone.

So, the malicious user might call up and say they're from the phone company or from
the Internet service provider company, and they have to come into the server room to
make some changes. So, the attacker would then show up dressed convincingly to fool
someone like at reception. So, the victim then might allow the attacker into a server
room or a wiring closet, and now the attacker has physical access to equipment, which
could be network switches. They might install some kind of a network packet
capturing sensor, or they might install a Wi-Fi router to give them direct access to the
network without being in the office. That type of thing. So, with the phishing email
example, it might work like this. The victim receives an email that looks legitimate.
The email contains a link, the victim clicks the link thinking it's legitimate.

Maybe the link is a file attachment to the email message that looks like an invoice or
maybe it's just an update link to a website that looks benign. So, if the user clicks that
link, for example, maybe that link would download and install malware, or maybe it's
a fake website that's made to look like a payment website or a banking website, the
user puts in their credentials, which are now captured by the attacker on the fake
website, and the user might just receive a message saying we're experiencing technical
difficulties. Please try again later. That type of thing. With the ransomware example,
that might work like this. The victim receives an email that looks legitimate. There's
the common thread again. The email might contain a malicious file attachment. So,
the user opens the file attachment. Yes, in some cases, even just opening a file could
trigger an on open type of macro to execute.

So, the ransomware then might execute on the user device, and perhaps the ransomware encrypts data files. There are many variations on exactly what the ransomware will do; maybe it talks to a command and control, or C&C, server elsewhere to generate a key pair used to encrypt data files on the device. Pictured
here, we have a screenshot of a phishing email message from what appears to be
Commonwealth Bank. There's a message here stating that my account has been locked
due to several unsuccessful login attempts, which is strange considering I don't deal
with Commonwealth Bank.

But sure enough, there's a button here that says Unlock Account that I could click and
it says Notice : It is necessary to validate your netcode after having checked your
information. So, it looks like scammers here are trying to capture my Commonwealth
Bank logon credentials, so they could log on as me and presumably steal money.
Now, the best way to prevent those types of things from happening within the
organization is for users to be aware of these types of things and to be very cautious,
maybe even with a healthy dose of paranoia. So, user awareness and training. This
means having lunch and learn sessions to make people aware of the latest scams.
Make it fun, make it interesting. Feed people; people like to eat. And make sure you
have an effective presenter or communicator to get the point across. It really goes a
long way.

Or have fun security-related posters or videos that are available to employees. Make
it interesting and make it fun, so people will take it seriously and retain the
information. Even gamification: in some environments, if the corporate culture
allows, it might make sense to build a game around awareness of scams, even
awarding points or having prizes, that type of thing. And also, have periodic
employee surveys about what they might do in a given situation, or even surveys
about how effective the lunch and learn sessions were, to make them even better in
the future. So, user awareness and training is probably the most important thing to do
to prevent many attacks and security breaches, which in the end can help the
company's bottom line or the government agency's objectives.

Data Loss Prevention (DLP)


Topic title: Data Loss Prevention (DLP). Your host for this session is Dan Lachance.

With data loss prevention otherwise called DLP, the theme is about preventing
sensitive data exfiltration. So in other words, preventing data from leaving the
organization or the agency and falling into unauthorized hands. That's what we're
trying to prevent with DLP solutions. Now there are digital privacy laws and
regulations that can influence whether we must implement DLP solutions and how
they should be configured. And really, in the end, it's all about protecting proprietary
sensitive data, whether it's company trade secrets or national security items, sensitive
customer information, or a copyrighted piece of work like a piece of music or a book.
All of these things fall under the umbrella of DLP when it comes to ensuring those
items are only available to authorized users.

And there is a certain process that is followed to secure data with DLP solutions.
Generally speaking, the first thing that we have to do is we must first discover
sensitive data. In a large enterprise that might have locations around the world, you
can imagine how many different file servers there would be, how many shared
folders. Also perhaps how many storage locations there might be in the cloud that are
replicated to alternate regions. Basically, there could be a lot of data. How do we go
through it all in a timely fashion to identify what needs to be protected? What is
sensitive or proprietary information? We need an automated solution to discover
sensitive data. Now we might have to tweak the solution to what we think is sensitive.
Maybe we'll configure regular expression masks that say if it looks like a credit card
number, we're going to flag that as being credit card. If it looks like this is a
customer's home address then we're going to go ahead and flag that as customer
information. So, we can configure how it searches and filters out what appears to be
sensitive data.
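
By the way, if you wanted a feel for the kind of pattern matching involved, here's a
minimal sketch using grep on a Linux file server; the /srv/data path and the
simplified card-number pattern are just assumptions for illustration:

# flag files containing a Visa-style 16-digit card number pattern
grep -rlE '\b4[0-9]{3}([ -]?[0-9]{4}){3}\b' /srv/data

# flag files containing something shaped like a US Social Security number
grep -rlE '\b[0-9]{3}-[0-9]{2}-[0-9]{4}\b' /srv/data

A real DLP engine goes further than this, for example validating candidate card
numbers with the Luhn checksum to cut down on false positives, but the principle is
the same: pattern matching against known sensitive-data shapes.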

Now then, we can also customize how data labels are applied. We started to describe
that: data might be classified as customer information or credit card information, or
maybe more generally as PII, personally identifiable information, or maybe as
protected health information, PHI, if it's medical patient type of info. The next thing we
can do is apply DLP policies to that classified data. So, if we've discovered on a large
scale, let's say credit card information for customers, what do we want to do about it?
Maybe our DLP policy would move that to a different server or a different location
with more strict control over who has access to it. Maybe the data would be removed
because it's deemed as not being necessary to store it. Or maybe that information gets
encrypted and gets backed up more frequently. Your DLP policies can pretty much do
anything with any classified data that is deemed as being sensitive in some way.

Now let's consider some of the different types of DLP controls that could be put in
place beyond just encrypting, archiving, backing up, or moving data to another
location. You might also
apply a digital watermark, such as for copyrighted photographs, maybe making sure
there's a little digital watermark in the background unless someone pays for the copy
without the digital watermark. Or maybe a game or a movie or some kind of an IT
solution requires hardware or software Internet activation before it can be used. Or
maybe within an enterprise for sensitive data, we might want to prevent it from being
forwarded through email messages. Or maybe we would require employees to use a
PKI card. Insert a PKI card for example into a laptop before they are allowed to send
sensitive file attachments. Or maybe even before they're allowed to connect to a VPN
that contains sensitive data. Maybe we would prevent document printing for certain
types of sensitive documents that should only be viewed on screen. Or prevent
copying of sensitive files to removable media. So there are many different
possibilities for what your DLP policies might put in place to protect sensitive
information.

Pictured on the screen, we've got a screenshot of something called Amazon Macie.
This is in the Amazon Web Services cloud or AWS cloud.

A slide labeled Amazon Macie Configuration appears. It displays a screenshot of a
page with a heading Choose S3 buckets. The page displays a card titled Select
specific buckets. Down below, it has a field named Select S3 buckets. At the bottom,
the page displays a table. Currently, the following table entry is selected: project-
files.

Amazon Macie is designed to scour storage locations looking for sensitive data and
then to classify or label it as such. So, in our next screenshot we have a result of
Amazon Macie Findings where it is pointing out that we have medium or highly
sensitive information.

A slide labeled Amazon Macie Findings appears. It displays a page named Findings
(2). The page contains two fields: Suppress findings and Saved rules. Below this, it
displays a table with the following column headers: Finding type and Resources
affected.

And it also gives us under the Resources affected column, the actual filename and
where it is being stored. So that we could do something about it. But Amazon Macie
at this point has just scoured vast storage locations to identify what appears to be
sensitive information.
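
Incidentally, if you prefer the command line, the AWS CLI exposes Macie through
the macie2 commands. Here's a rough sketch of the equivalent of what we're seeing
in the console, assuming the AWS CLI v2 is installed and credentials are configured:

# turn the service on for the account
aws macie2 enable-macie

# high-level S3 bucket summary statistics, like we see on the Summary page
aws macie2 get-bucket-statistics

# list the IDs of any sensitive-data findings
aws macie2 list-findings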

Another part of DLP or data loss prevention is planning for what to do when in fact
there is a data breach. As much as we put in a lot of effort to ensure this never
happens, nothing is 100%. So what if there is a sensitive data breach, what then? Well, we then
have to be able to confirm that that actually has occurred, so confirmation that there
was an incident and sensitive data was exfiltrated in some way. And then, of course,
scoping out what data was exfiltrated. And then perhaps adhering to data breach
notification laws. If it's customer data that was affected, then maybe we need to notify
customers that some of their information might have fallen into unauthorized hands.

Labeling Data Using File Server Resource Manager


Topic title: Labeling Data Using File Server Resource Manager. Your host for this
session is Dan Lachance.

Windows Server has a role service called File Server Resource Manager or FSRM
that can be used to discover sensitive data and then label it as such on a Windows file
server. So, we're going to go ahead and use FSRM on Windows Server 2019. To get
started, I need to install that role service. So I'm going to go ahead and open the Start
menu and I'm going to go into Server Manager to install it. So in Server Manager,

The Dashboard page of the Server Manager window appears. The left navigation
pane displays the following components: Dashboard, Local Server, and All Servers.
The main pane contains two sections: WELCOME TO SERVER MANAGER and
ROLES AND SERVER GROUPS. The WELCOME TO SERVER MANAGER section
further contains 3 sub-sections. They are: QUICK START, WHAT'S NEW, and
LEARN MORE. He highlights the first section, which has the following elements: 1.
Configure this local server, 2. Add roles and features, 3. Add other servers to
manage, 4. Create a server group, and 5. Connect this server to cloud services.

once it's initialized, I'm going to go ahead and click the Add roles and features link.
And I'll click Next and I'm going to continue on

A dialog box named Add Roles and Features Wizard appears. The left pane displays
the following options: Before You Begin, Installation Type, Server Selection, Server
Roles, Features, Confirmation, and Results. The right pane contains the subsequent
information about the selected option. Currently, the first option is opened.

through the wizard accepting the defaults. So I'll just keep clicking Next until I get to
the Select server role screen.

He highlights the Server Roles option from the left pane. The right pane displays a list
of Roles and their Description.

What I want to do here is go down under File and Storage Services and I want to
expand File and iSCSI Services and I'm looking for File Server Resource Manager,
FSRM. It's not turned on. So we want to install that component, that role service.

He selects the following option from the Roles list: File and Storage Services. It
expands and displays various sub-options. He selects the File and iSCSI Services sub-
option which further opens a list of options.

So I'm going to turn on the check mark. As usual, it pops up and asks if I would like
the management tools installed as well. Of course, I do. So I'll just click Add Features
and I'll continue through the wizard by clicking Next until the very end, where I can
click the Install button to get the component installed. Once the installation has
succeeded, I'll go ahead and click Close. And I can either start the FSRM management
tool from the Tools menu here in the Server Manager. There it is, File Server
Resource Manager.

He returns to the Dashboard page. The top-right corner contains the Tools drop-
down menu button. He selects it that opens a list of multiple options. Few of the
names are: Active Directory Administrative Center, Active Directory Domains and
Trusts, ADSI Edit, File Server Resource Manager, and so on.

Or I could open up the Start menu on the server and go down under Windows
Administrative Tools, where, as you might expect, I go down to the Fs and there's
File Server Resource Manager. I'm going to go ahead and fire it up, and I'll leave that
running.

A window titled File Server Resource Manager opens. The left navigation pane
displays various components, namely, Quota Management, File Screening
Management, Storage Reports Management, Classification Management, and File
Management Tasks. The main pane displays a similar list. And the right pane displays
various features under the heading Actions.
I'd like to examine the file system for a moment before we continue in this tool.

So let's go over into the File Explorer, Windows Explorer tool. On drive C: on this
server, I have a folder called Data. And I've got a folder called Projects. And I've got a
couple of sample files, one of which is labeled as Project_C.

A notepad file named Project_C appears.

Now, one of the things you'll notice in this sample file Project_C is I've got some
sample text in the form of the word Visa with a capital V. Basically, we're going to
configure FSRM to seek out any files that have Visa in them and flag them as PII,
personally identifiable information. Now, of course, we could base it on the numbers
instead, looking for digits that follow a credit card pattern. It doesn't have to be the
word Visa; it's just an example. Notice that the other two sample files
Project_A as well as Project_B do not contain the word Visa. Also notice that if I
were to right-click on one of these files and if I were to go all the way down to the
Properties of the file, we have a Classification tab. That's there because we just

He opens the Project_B Properties dialog box. It has various tabs, namely, General,
Classification, Security, Details, and Previous Versions.

installed the File Server Resource Manager role service.

If it wasn't installed, there would not be a Classification tab because we're talking
about data classification. And if I click on that, I could manually select a particular
value I want to flag this as, such as a Department of Human Resources or maybe
some other Department like Engineering. This is a resource property that comes from
Active Directory. It was enabled in Active Directory with these values and so it shows
up here on any file servers in the domain that have the File Server Resource Manager
role enabled. But I don't see anything here about PII, PII,

The Classification tab consists of two list boxes. The first list box has two column
headers: Name and Value. The second list box also contains two column headers:
Value and Description.

personally identifiable information. Well, we have yet to define that property. So
let's go ahead and do that. So I'm going to Cancel out of that screen. We're going to go
back into the File Server Resource Manager tool and I'm interested in Classification
Management over on the left. So if I click Classification Properties, there's the
Department
He opens the Classification Management component underneath the File Server
Resource Manager heading. It displays two options: Classification Properties and
Classification Rules.

property we were just looking at; it's showing a Scope of Global. It comes from
Active Directory.

However, what I want to do is I want to be able to work with

As he selects the first option, a table appears in the main pane. It contains the
following column headers: Name, Scope, Usage, Type, and Possible Values.

personally identifiable information. So I could right-click on Classification
Properties and Create a Local Property that would only be usable on this one file
server, which is what I want. Otherwise, I could use the Active Directory
Administrative Center tool to create a centralized or global property. So, I'm going to
choose Create Local Property. It's going to be personally identifiable information or
PII,

The Create Local Classification Property window appears. It displays three fields
under the General tab. The names of the fields are: Name, Description, and Property
type.

and it's going to be Yes/No. Either yes, it is, or no, it is not. But there are other types
here. It could be a Date-time or a Number value.

A drop-down menu appears.

It could be a Multiple Choice List. It could be a String. But we're going to leave it as
Yes/No. OK. So now we've got that property PII. Why don't we go back in the file
system to check out if that now shows up under the Classification tab. So, I'm going to
right-click on one of my files, go into Properties,

He reopens the Project_B Properties dialog box. The Classification tab now displays
a field named Property. It displays three radio button options under the heading
Value. They are: Yes, No, and None.

Classification, and now there's PII. It's a Yes or No value. OK.

So we could manually do this, but you don't want to do that in an enterprise where
you might have thousands upon thousands, even millions of files spread across
multiple servers, and especially where files can be created, deleted, or updated every
day. You don't want to do this manually, and so let's go back and configure this
to be automated. The way we're going to do that is by creating what's called a
Classification Rule. And so I'm going to right-click on Classification Rules. I'm going
to choose Create Classification Rule and I'm going to call it Seek out credit card files.

The Create Classification Rule dialog box appears. It displays various tabs: General,
Scope, Classification, and Evaluation Type. Currently, the General tab is opened. It
contains two fields: Rule name and Description.

And for Scope, I could tell it to look at certain types of files by turning on the
checkmarks. Or I could go down

The Scope tab displays two list boxes. The first box contains 4 files: Application Files,
Backup and Archival Files, Group Files, and User Files. At the bottom, a button
named "Add" displays.

and click Add and tell it to look somewhere.

A dialog box named Browse For Folder appears. It contains a list of folders.

I'm going to tell it to look on drive C: in the Data folder and I'll click OK. Under the
Classification tab, I'm looking at the content. So, Content Classifier.

He now opens the Classification tab. It contains three sections: Classification
method, Property, and Parameters. The first section displays a field named Choose a
method to assign a property to files.

And I want to assign the PII property to files with the value of Yes, but only if it
contains personally identifiable information like credit card information.

The second section named Property displays two fields: Choose a property to assign
to files and Specify a value. The third section contains a button named Configure.

So I'm going to click Configure and what I

A window opens labeled Classification Parameters. It displays a table with the
following column headers: Expression Type, Expression, Minimum Occurrences, and
Maximum Occurrences.

can do is I can look for a specific String. And what I'm looking for here for
Expression is the word Visa.

He opens the drop-down menu which is shown under the Expression Type header. It
has the following menu options: Regular Expression, String (case-sensitive), and
String.

Now for String, it could be case-sensitive or not. It depends on what I want to look
for, but I'm going to look for at least one minimum occurrence of the word Visa. OK,
done and OK. So we now have our Seek out credit card files Classification Rule.

The File Server Resource Manager window reopens. The main pane displays a table
with the following column headers: Rule name, Scope, Folder Usages, and Property
Name. The Actions pane on the right displays various features, namely, Create
Classification Rule, Configure Classification Schedule, Run Classification With All
Rules, and Refresh.

Now we could schedule it. We could create a Classification Schedule, which is
realistically what you would do to have this run periodically. But all I'm going to do is
Run Classification With All Rules right now.

The Run Classification dialog box appears.

So I'm going to click it. And I'm going to say Run classification in the background.
OK. Now, once the Classification Rules have executed and it's scoured all the files,
we can check them out. If I were to go back here to Project_B and go to the Properties
under Classification, PII is set to No. However, if I look at Project_C, which you
might recall contains the word Visa, it's the only one that does. If I look at its
Properties and go to Classification, PII is set to Yes. So that's how you can use File
Server Resource Manager to discover and classify sensitive files.
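
As an aside, on a Linux file server there's no FSRM, but you could approximate the
discovery step of what we just did with standard tools; a minimal sketch, assuming
the same folder layout under /data/Projects:

# list text files under /data/Projects containing at least one
# case-sensitive occurrence of the string Visa
grep -rl --include='*.txt' 'Visa' /data/Projects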

Labeling Data Using Amazon Macie


Topic title: Labeling Data Using Amazon Macie. Your host for this session is Dan
Lachance.

In this demonstration, I will be working with the Amazon Macie service. Now, Macie
is part of the AWS cloud

An aws page labeled AWS Management Console appears. It contains a search bar on
the top. It further consists of 4 sections: AWS services, Build a solution, Stay
connected to your AWS resources on-the-go, and Explore AWS.

and what it lets me do is discover data that's stored, for example, in cloud
storage buckets looking for sensitive information. And when it finds what it thinks
might be sensitive information, it will label it as such. Now, of course, I can tweak
and configure Macie to what I deem as being sensitive information. Or I can just go
with the default filters, which I'll be doing here. So to get started here in the AWS
Management Console, I'm going to search in the search bar up at the top for the word
macie. And then I'm going to click Amazon Macie. And then I'm going to click the

A page labeled Amazon Macie appears. It contains a card named Get started with
Macie. Also, it has a button named Get started.

Get started button. So I'm going to choose the Enable Macie button.

The subsequent page appears. It displays a description under the heading Enable
Amazon Macie. At the bottom, a button named Enable Macie displays.

Now what I need to do at this point is I need to make sure that

A page opens underneath Macie, titled Summary. It contains a section named S3
buckets which displays three sub-sections. They are: Public access, Encryption, and
Sharing.

Macie is going to be looking for sensitive data in S3 storage buckets in the Amazon
Web Services cloud; an S3 bucket is a cloud storage location.

So while Macie is doing its thing, I'm going to go ahead and right-click on my Macie
tab at the top and choose Duplicate to duplicate my browser tab, so that I can search
for s3 at the top and go take a look at what Macie might discover in terms of data.

A window titled S3 Management Console opens. It displays a page named Amazon
S3. The left navigation pane contains the following options: Buckets, Access Points,
Object Lambda Access Points, and Batch Operations. The Buckets option is
highlighted. The main pane displays the corresponding details of the highlighted
option.

So, in my AWS account, I only have a single bucket. It's called bucketappyhz172.
And if I click to open that bucket,

The Buckets section displays a table which has the following column headers: Name,
AWS Region, Access, and Creation date.
The subsequent page opens. It displays various tabs, namely, Objects, Properties,
Permissions, and Access Points. The Objects tab is opened and displays a listed file
named projects/.

there is a projects folder and within the projects folder, I have a
Projects_Sample_Files folder. And finally, I've got a couple of sample files in here,
Project_A, Project_B, Project_C. So, we want Macie then to be able

The Projects_Sample_Files/ page opens. It displays a table under the Objects tab.
The table has the following column headers: Name, Type, and Size.

to scour these types of things, looking for sensitive information. So, the number of
storage buckets you have in AWS and how populated they are with data will really
determine how long it takes for Macie to scour through them looking for what might
be sensitive data. So, I'm in the Amazon Macie interface in the Summary section,
where I have a summary of Publicly accessible versus Not publicly accessible data,
of which we have one item. If I click on Not publicly accessible, it's showing me my
bucket. Well, that's good. It's not publicly accessible and

The S3 buckets page opens.

in some cases, you might want, for example, public website information that's stored
in a bucket, perhaps to be publicly accessible, but other than that,

you might want to keep things protected a little bit more. Then I've got Encryption
and Not required by bucket policy versus Encryption enabled by default. And also,
data that's Shared versus Not shared. So this is just general high level bucket
information. It hasn't yet scoured all of the details or the files stored in those buckets.
Before I create a job to do that, over on the left I'm going to scroll down in the
navigator because I'm interested under Settings in Custom data identifiers.

A page opens in the main pane named Custom data identifiers. It displays an empty
table that has two column headers called Name and Description. Also, the page has a
button named Create.

Now, there are a lot of built-in identifiers that it'll look for, but if you wanted to, you
could come here and click Create, where you could specify details. You could fill out a
Regular expression or a regex that would be used for pattern matching, maybe looking
for certain types of codes or numbers, and you could even enter some sample data that
it could use to learn about what you're looking for. However, I'm going to go with the
built-in filters.
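
If you did want to create a custom identifier without clicking through the console,
the AWS CLI exposes the same capability. Here's a sketch where the name, regex,
and keywords are made-up example values, say for an internal project code like
ABC-12345:

# hypothetical custom identifier for an internal project code format
aws macie2 create-custom-data-identifier \
  --name "InternalProjectCode" \
  --regex "[A-Z]{3}-[0-9]{5}" \
  --keywords "project" "code"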

A new page named Create opens. It displays multiple fields under the heading New
custom data identifier. The names of the fields are: Name, Description, Regular
expression, and Keywords. On the right side, a section named Evaluate displays. It
contains a field named Sample data.

So, I'm going to go back to Get started over here on the left. And for Analyzing
buckets, I'm going to choose Create job.

A page named Get started opens underneath Macie. It contains two cards called
Analyze buckets and Analyze public buckets. Both the cards contain a button named
Create job.

A new page opens with the following bread crumb: Macie > Jobs > Create. Its left
pane displays a process of 7 steps. Currently, the first step is opened. It is named as
Choose S3 buckets. At the top, it has two radio button options: Select specific buckets
and Specify bucket criteria. The second option is selected. Below, the page displays
the corresponding details.

So, down below the bucket criteria would be all of the buckets for my AWS account
ID which has been added and I'm OK with that. If I go down to preview the criteria
results, it'll have my bucket shown.

He opens the Preview the criteria results section under the Specify bucket criteria
option. It displays a table that contains the following bucket file: bucketappyhz172.

I only have one bucket in my AWS account. So, I'm going to go ahead and click Next,
just going to close the help screen

The next step page named Review S3 bucket criteria opens. It displays the following
notification message: Macie and customer managed AWS KMS keys.

on the right. So there we go. We've got our bucket that we're going to scour or that
Macie is going to scour. I'm going to click Next. We can determine how often we
want this to be done.

The "Refine the scope" step page opens. It displays two radio button options under
the heading Sensitive data discovery options. The names of the options are: Scheduled
job and One-time job. The Scheduled job option is selected. It displays a field named
Update frequency and a checkbox named Include existing objects. The page also has
a section named Additional settings.

Do we want it to be a recurring schedule, which is a good idea? Although, since this
is just a demo, I'm going to make it a One-time job. And down under Additional
settings, if I wanted to, I could specify things like File name extensions for things I
want to either Include or Exclude from being examined. But I want everything
examined, so I'm going to go ahead and click Next.

The fourth step page opens titled Select managed data identifiers. It displays the
following radio button options: All, Exclude, Include, and None.

And I want to look for all sensitive types of data. So I'm going to leave it on All and
click Next.

The Select custom data identifiers step page opens. It displays a section named
Custom data identifiers which further displays a table. The names of the column
headers are: Identifier name and Description.

I don't have any Custom data identifiers, I'll click Next. And for this job,

The next step page opens, titled Enter a name and description. It displays two fields:
Job name and Job description.

I'll just call it Job1 and I will click Next. And on the Review

The last step page opens named Review and create.

and create screen, in the bottom right, I will click Submit. So I've submitted a
discovery job for Macie. Currently

The Jobs page opens. It has a table that displays the newly created file. The table has
the following column headers: Job name, Resources, and Status. Above the table, it
displays a button named Create job.

the status is showing as Active Running. If I hang out here and click refresh, at some
point it will be completed. OK, so after a few minutes, the status will change to
Complete. So, I'm going to open up my left-hand navigator again for Amazon Macie.
And I'm really interested in going under Findings, By bucket. But I get a message that
states that there are no

The left pane of the page displays a header named Findings. It contains three options:
By bucket, By type, and By job. He opens the Findings by bucket page.

results that match the current filter criteria. Well, there is no filter. So if I look at the
Findings By type or By job, I get nothing. What does this mean? Because if I go back
to the Jobs list, the Job did complete. So what it means is it didn't find what it thinks is
sensitive data.

But what would it look like if we did have Macie find something that is potentially
sensitive and needs to be protected? Well, let's take a peek at that. OK. So before too
long, if I keep refreshing, eventually the Status will change

The Jobs page reopens. Currently, the Job3 file is selected. On the right side, the page
displays a pane, that has the corresponding details of the selected file. The pane also
has a drop-down menu button named Show results. The menu contains three options:
Show findings, Configure export, and Show CloudWatch logs.

to Complete. So I can select Job3 here and I can click Show results and Show
findings. That's going to open up a new web browser window where, if there are any
indicators of personal or financial data, it will show them with a specific reference to
the resource. In other words, the S3 object, the file that's stored up there in the S3
bucket. And if I select any one of those

The Findings page opens and displays a table that has the following column headers:
Finding type, Resources affected, and Updated.

items and click on it just to go into some further detail, it will even start to show me
exactly what it thinks it is.

The right side of the page displays a pane. The pane comprises multiple headers. The
headers display various details and information of the selected item. Few of the
names are: Overview, Result, Financial information, and Resources affected(S3
bucket).

For example, as we go further down through it, we can see details about that
information like CostCenter information that's being shown here as a Tag. And we
could have filtered on that, but we didn't.

So it found everything, including things tagged that way. But that's not what's
sensitive. If we go back up, it says we've got an occurrence of a credit card number,
along with the line range where it occurs. And it gives me some details; well, this
must be a spreadsheet, so of course it shows the column it's in, and so on. So, this is
the kind of thing that we can do

A dialog box labeled "Occurrences of credit card number no keyword" opens. It
displays an editor pane that has multiple lines of code.

with Macie. And the great thing about it, remember, is that we can also Export our
findings. If I go back to the Jobs level, select Job3, and click Show results, here we
can Configure the export of this information so we have a copy of it that we can work
with. And then, ideally, do something about it: put in the appropriate measures to
protect the data. Normally in S3, that's going to involve S3 bucket encryption.
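
By the way, the same one-time discovery job can be created from the AWS CLI. A
rough sketch; the account ID here is a placeholder, and the bucket name is the one
from this demo:

# create a one-time sensitive data discovery job for one bucket
aws macie2 create-classification-job \
  --job-type ONE_TIME \
  --name Job1 \
  --s3-job-definition '{"bucketDefinitions":[{"accountId":"111111111111","buckets":["bucketappyhz172"]}]}'

# once the job completes, list finding IDs and pull the details
aws macie2 list-findings
aws macie2 get-findings --finding-ids <id-from-list-findings>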

Secure Data Removal Techniques


Topic title: Secure Data Removal Techniques. Your host for this session is Dan
Lachance.

At some point, storage media is no longer usable. And so we need to securely remove
data to make sure there are no traces left behind that might be somehow acquired by
unauthorized parties. So that means that we have to talk about secure data removal
techniques. Now, the first consideration is that just because you delete a file from a
device doesn't mean it can't be recovered; it's very easy to recover deleted files. Even
when entire disk volumes are reformatted, it's possible to retrieve data that was there
prior to the format. Even if a disk is repartitioned, it's possible to reconstruct data that
was there. I've had cases over the years where people have brought me their
computers because their disks were no longer accessible, and all you need is time,
patience, a bit of knowledge of how to use some of these tools and, of course, the
right data recovery tools. You can potentially recover data even from partitions that
were deleted a long time ago; it's possible. And so securely removing data is very
important if you're going to be dealing with sensitive information.

Then there's the notion of storage media scrubbing. So scrubbing here means that we
want to make it as difficult

A slide titled Storage Media Scrubbing appears. It displays a screenshot of the Disk
Scrubber v3.4 window. At the top, it displays a status bar. On the left side, the
window displays two sections: Hard Disk Free Space Scrubbing and Scrub
(Overwrite) Type. The first section has two fields: Drive to Scrub and Priority. It also
has a button named Scrub Drive. Below this, the section has two checkboxes: Write
Log File and Shutdown PC on Completion. The second section has various radio
button options, namely, Normal (Random Pattern Only), Heavy (3-Stage Pattern,
0s+1s+Rnd), and Ultra (9-Stage, DoD recommended spec w/Vfy). On the right side,
the page has a list box that displays a blank table. The table has the following column
headers: Name, In folder, and Modified.

as possible to retrieve previously stored data from the media. So this is normally
done by writing useless random data to the disk multiple times, in multiple passes.
And that's called a hard wipe. So there could be government laws or regulations in
some parts of the world that require specific disk scrubbing techniques to be used in
order to be deemed an acceptable solution for removing data remnants. So pictured
here, we have a tool called Disk Scrubber, which has an Ultra option for scrubbing,
shown selected here in the bottom left, which is even DoD recommended. That's the
American Department of Defense recommendation.

And then we also have to think about paper documents. Many organizations still deal
with paper to some degree, and so sensitive information might appear on paper
documents. So we have to consider whether we will use onsite or third-party
document shredding, and whether industrial shredding is required for large amounts
of documents, in which case we might even have to think about the security of paper
documents being transported to a shredding facility. One thing that attackers can do
for reconnaissance is dumpster diving, which generally means going through paper
documents that might have been discarded in waste bins or garbage bins, looking for
any clues as to usernames, passwords, network configuration settings, that type of
thing. But those are paper documents. What about digital storage media devices, like
disks?

Well, then there's physical destruction, whether that means physically shredding
equipment with industrial-grade metal shredders or degaussing equipment.
Degaussing is a technique that can be applied to hard disk drives, not solid-state
drives. The idea is that a high-intensity magnetic field is applied to the hard disk
drive, which scrambles how the data is stored on the drive so it's not retrievable. So
degaussing will not apply to things like USB flash storage, just hard disk drives.
However, USB drives could be burned, or they could be crushed, or holes could be
drilled through the storage media so it's no longer usable. There are many effective
ways to physically destroy storage media to make sure data cannot be recovered from
it. And this might even all be done after securely wiping the device at the software
level.

So after sensitive data has been securely removed, what next? Well, we need to
document the data destruction techniques that were used and that might be required
for regulatory compliance. We also need to make sure we update asset and change
management inventory systems to reflect that equipment is no longer in use and in
fact that it's been decommissioned. On the mobile device side, we have remote wipe
as an option. So mobile devices then would be configured to be enrolled in a
centralized Mobile Device Management or MDM solution. So in the event that we
have a lost or stolen mobile device like a smartphone, admins would have the ability
to remotely wipe the device, or perhaps just a portion of the device, because the
device might be configured with a logical personal partition and a logical corporate
partition. The corporate partition might contain the app settings and data that are
specific to the enterprise, and maybe that's all that gets wiped remotely in the event of
a lost or stolen device instead of the whole thing. So the corporate partition could
potentially be wiped independently from the personal partition on the device if this is
set up ahead of time. So it's all about preplanning.

Wiping Disk Partitions Securely


Topic title: Wiping Disk Partitions Securely. Your host for this session is Dan
Lachance.

In this demonstration, I will be securely wiping storage media both in Linux as well as
in Windows. So to get started here in Linux,

A Terminal window opens with the following command prompt:
cblackwell@ubuntu1:~$.

one of the things that we can do is verify what we have for storage devices. And I
can do that with sudo. Now, sudo is the prefix I'm going to use because the following
command requires elevated privileges and I'm not logged in as root. Then lsblk, list
block devices, of type scsi. So it wants the password for my current logged-in
username, so I'll go ahead and specify that. So, I've got a couple of disk devices here,
notably device sda and device sdb. Those are two separate disks. So I could run, for
example, sudo fdisk -l /dev/sdb to show me the details of how that disk is carved up.
So that disk doesn't have anything on it. However, let's assume it's partitioned and
perhaps it's got data on it, but the disk has been decommissioned.

So what we want to do is essentially sanitize that storage media by wiping the data
that might be stored on it currently, and there are a number of ways to do that. One of
the ways is to use the built-in dd command in Linux. For example, sudo, then dd,
then the input file or if=. Now if I want to overwrite the disk with zeros, I can refer to
the built-in zero device in Linux, or I could refer to the random device. But here it's
/dev/zero. The output file or of=, where I want to write a bunch of zeros, is going to
be /dev/sdb, the entire second disk device. I'll specify the byte size, bs=4096; this is
the amount of data that the dd command reads and writes at a time. And I'm going to
set status=progress, so we can have a sense of what's going on, and I'm going to press
Enter. So currently, we can see the progress as it begins writing zeros over device
sdb. Now, the laws or regulations that govern your industry will determine whether
this is even an acceptable way to do that.

So what this lets us do then is overwrite a disk device with a bunch of zeros. Let's
take a look at how we might do that in the Windows environment using a tool called
the Hard Disk Scrubber. The Hard Disk Scrubber tool is just one of many, many
tools out there that can be used to wipe disks. This one is a simple free tool

A web page titled Hard Disk Scrubber opens. It has the following URL:
https://www.summitcn.com/hdscrub.html. The left navigation pane displays the
following options: Overview, Scrubber Parts, Scrub Types, Advanced Features, and
Download Scrubber.

that we can download. So I've already downloaded and installed it. So I'm going to go
ahead and run the tool. So here in the Hard Disk Scrubber tool, what I'm going to do,

A window named Disk Scrubber v3.4 opens. On the left side, the window displays two
sections: Hard Disk Free Space Scrubbing and Scrub (Overwrite) Type. The first
section has two fields: Drive to Scrub and Priority. It also has a button named Scrub
Drive. Below this, the section has two checkboxes: Write Log File and Shutdown PC
on Completion. The second section has various radio button options. On the right
side, the page has a list box that displays a blank table. The table has the following
column headers: Name, In folder, and Modified.

we select the appropriate drive to scrub. Now I don't want to overwrite drive C:
because that's the OS drive that I'm running this tool from in the first place. So, I've
plugged in a USB drive that I want to sanitize. So I'm going to select it. Now down
below, we can determine the priority. So I'm going to choose High priority when it
comes to CPU and foreground cycles.

I don't want to Write a Log File and I don't want a Shutdown of the PC upon
Completion. But down below, I can select the Scrub Type such as Ultra Scrubbing 9-
Stage or 9 Passes, DoD recommended. I'm going to leave it on that one and I'm going
to go ahead and choose Scrub Drive. So as this progresses, I'll be able to see what it's
doing, how many bytes are written, and the Pass number that it's on. Overwriting a
storage device multiple times makes it even more difficult to recover any data
fragments that might remain on the storage media.
Course Summary
Topic title: Course Summary

So, in this course, we've examined how to implement automated sensitive data
discovery and classification in order to secure data in alignment with applicable laws
and regulations. We did this by exploring PII, PHI, and common data privacy
standards; common malware types; social engineering; and how DLP is used. We
used FSRM in Windows and Amazon Macie in the cloud to discover and classify
data. We also examined common secure media destruction techniques and how to
wipe data partitions securely. In our next course, we'll move on to explore server
troubleshooting.

CompTIA Server+ (SK0-005): Deploying Cloud IaaS
Infrastructure as a Service (IaaS) concerns storage, networking, firewalls, virtual
machines, and other cloud services. To be at the top of their game, server admins
should know how to deploy various IaaS solutions. Learn how to deploy IaaS
solutions related to storage, networking, and virtual machines in this wide-spanning
course. First, get a theoretical overview of IaaS. Then, practice deploying AWS and
Microsoft Azure cloud storage solutions, virtual networks, and Windows and Linux
virtual machines (VMs). After that, create a custom cloud-based VM. Next, deploy
cloud services using cloud automation templates. And finally, use role-based access
control (RBAC) and cloud policies to limit the cloud administrative activities
available to cloud technicians. When you're done, you'll be able to deploy several IaaS
solutions and restrict cloud server access. You'll also be more prepared for the
CompTIA Server+ SK0-005 certification exam.

Course Overview
Topic title: Course Overview.

Infrastructure as a Service or IaaS, spelled I-a-a-S, refers to cloud services related to
storage, networking, firewalls, virtual machines, and the like. The deployment and
management of IaaS solutions can be done using GUI and command line tools. In this
course, I'll begin by defining IaaS. Next, I'll deploy cloud storage solutions in the
AWS and Microsoft Azure clouds.
I'll then work with cloud-based virtual networks followed by deploying Windows and
Linux cloud-based instances. Continuing on, I'll examine how cloud automation
templates can be used to quickly deploy one or more related cloud services. Finally,
I'll examine how role-based access control or RBAC and cloud policies can be used to
limit what type of cloud administrative activities are available for cloud technicians.
This course is part of a collection that prepares you for the CompTIA Server+ SK0-
005 certification exam.

The Infrastructure as a Service Cloud Service Model


Topic title: The Infrastructure as a Service Cloud Service Model. Your host for this
session is Dan Lachance.

We've already talked about cloud deployment models like public clouds, private
clouds, hybrid clouds, and community clouds, but now we're going to talk about
cloud service models. Let's start by defining Anything as a Service or XaaS; X-a-a-S.

Really, what this means is that it's a catch-all phrase for any type of IT service that is
accessed over a network. Now with cloud computing, we must make sure that the
environment adheres to all five cloud computing characteristics like metered usage,
pooled resources, self-service provisioning, rapid elasticity, and broad access. Now,
there are many different types of cloud services, and they're broken into specific
categories.

We're just going to touch upon some of the most common ones. Specifically, we're
going to focus here on Infrastructure as a Service or IaaS. There are many other types
of categories, but this is going to be our current focus. So when we talk about
Infrastructure as a Service, we're talking about the underlying IT infrastructure,
dealing with things like storage, network components and configurations, like Internet
gateways to allow access out to the Internet from a network.

And also manually managed virtual machines where the word manually is really
operative here; it's a key term because we also have the ability of deploying some
cloud services like databases where the underlying virtual machines are automatically
deployed and configured. That's not Infrastructure as a Service.

Deploying a database that uses underlying VMs where those are managed
automatically, that's called Platform as a Service. It's using a managed service
configuration where the VMs are handled by the cloud service provider. We're talking
here about manually deploying and managing VMs at the infrastructure level.
With Infrastructure as a Service, the cloud service provider or the CSP is responsible
for the underlying physical equipment. Also, the cloud customer, that would be us, we
are responsible for configuring those solutions and when it comes to virtual machines,
we would be responsible for configuring the operating system and patching it and any
apps running within it.

So IaaS virtual machines means that the cloud customer deploys and manages the
virtual machine and remotely accesses it and does everything to it. It can run on either
a shared hypervisor, which is pretty common when it comes to the public cloud and
also the private cloud. You have a hypervisor host that can run virtual machines that are
deployed by different organizations or different departments or different individuals.

However, there are also options where you can have a dedicated hypervisor host, even
in the public cloud. You're going to pay a premium for this, but it means that only
your organization's virtual machines run on that hypervisor host.

And this might be done for isolation, for performance, for security, perhaps even for
compliance with certain regulations. So the cloud customer then is responsible for the
OS configuration in its entirety and applying updates. Infrastructure also means
storage.

In this screenshot, we are in the midst of creating a storage account in the Microsoft
Azure public cloud environment. So we have to give the storage account a unique
name, we have to determine the Region geographically where the account will be
deployed; in this case it's being deployed in the Eastern US Region.

Now, of course we could also configure geo-redundancy to replicate that to another


Region in case the Eastern US Region for some reason is experiencing a disruption of
some kind. The Storage account name reads storacctwebapp172, and the Region
reads (US) East US. The Performance can also be set for the storage account. Here it's
set to Premium, so it says, Recommended for scenarios that require low latency.

Usually, Premium type of storage for high performance means it's using solid state
drives as the underlying storage media, whereas infrequently accessed data would
normally be stored on a cool access tier that uses the older hard disk drive technology.
Premium account type reads Block blobs, and Redundancy reads Zone-redundant
storage (ZRS). Another aspect of infrastructure is networking. In the cloud, when we
configure a virtual network, we simply call it a VNet.

Now that's at the Microsoft Azure level. If you're working specifically in the Amazon
Web Services cloud, then you would call the virtual network a VPC. That stands for
virtual private cloud. Either way, it's all the same thing; you are defining a virtual
network in the cloud computing environment.

So in our diagram, we have two VNets: VNet1 and VNet2, and within each of those
VNets we can deploy services like virtual machines, databases, that type of thing, and
it's all running on someone else's equipment in the cloud. Now what we can also do is
we can enable a peered connection between cloud virtual networks. Now why would
we enable a peered connection? What is that?

It's a direct link between cloud virtual networks, the benefit of which means the VMs
in each of those VNets can talk to each other using their assigned private IP addresses.
You see, if we don't have a peered VNet connection, virtual machines in VNet1 can
communicate with virtual machines in VNet2 using public IP addresses, if they are
assigned.

Now, ideally we don't assign public IPs to virtual machines unless we absolutely need
to for security reasons. Also for cost reasons; using unique public IP addresses costs
more than just sticking with a private IP address that's assigned in the virtual network.

So this way we can talk using private IP addresses through the peered connection.
Also, there's no routing configuration required to make this work, so it's very quick
and easy. And in the case of Microsoft Azure, as an example, the peered VNet traffic
uses the Microsoft global backbone network. In other words, the traffic between the
two peered VNets does not go over the Internet.

Now, to take this a step further when it comes to Infrastructure as a Service at the
network level in the cloud, you can even use a hub and spoke VNet peering topology.
In our diagram at the bottom center, we have a Hub VNet, we are calling it.

Now we're calling it a Hub VNet because we've got two other VNets: VNet1 and
VNet2, that are connected directly to the Hub VNet through a peered connection. We
just talked about peering. So that's fine. Now what we could then do is have a VPN
gateway in that Hub VNet.

A VPN gateway is just a cloud configuration that allows a VPN connection into the
cloud. That means an encrypted tunnel over the Internet into our cloud computing
environment. So we might use that for a site-to-site VPN connection from our on-
premises network directly into the cloud. So all traffic going from our on-premises
network into the public cloud, is protected through an encrypted VPN tunnel.

Now once the traffic arrives at the Hub VNet where the VPN gateway resides,
because we have peered network connections with VNet1 and VNet2, it means that
from on-premises we could refer to the private IP addresses of, let's say virtual
machines running in those cloud VNets and communicate with them that way. We
wouldn't need to have public IP addresses assigned to them or anything like that. So
this is a common type of configuration at the enterprise level as it relates to public
cloud computing.
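
As a sketch of how a peered connection gets created outside the portal, here's what
one side of the hub-and-spoke peering might look like with the Azure CLI; the
resource group and VNet names mirror the diagram and are assumptions:

# peer the hub to a spoke; run the mirror-image command for the reverse
# direction, and repeat for VNet2
az network vnet peering create \
  --name HubToVNet1 \
  --resource-group rg1 \
  --vnet-name HubVNet \
  --remote-vnet VNet1 \
  --allow-vnet-access

# for the VPN gateway scenario, the hub side would also add
# --allow-gateway-transit, and the spoke side --use-remote-gateways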

Deploying an Amazon S3 Bucket


Topic title: Deploying an Amazon S3 Bucket. Your host for this session is Dan
Lachance.

In Amazon Web Services or AWS, a bucket is a storage location in the cloud, and
much like we might work with storage on-premises, we have to account for what type
of data will be stored in that location and whether or not we want to limit access to it,
or whether it should be publicly accessible.

So to get started here in the AWS Management Console, I'm going to search for s3.
The simple storage service or S3 is where we configure our cloud storage buckets. So
I'm going to click to open up the S3 management console.

Any existing S3 buckets will be shown on the right, and from there I would be able to
go in and view the contents of the bucket. I don't have any buckets yet, so I'm going to
go ahead and click Create bucket. Now we have to adhere to organizational naming
standards when it comes to naming buckets, so I'm going to call this a unique name
that maps to an app that might use this bucket. The Bucket name reads
bucketappyhz172.

We can specify the geographical Region or location where we want this bucket to be
deployed. So I'm going to leave it in US East (N. Virginia). Now, if you've already got
an S3 bucket in the AWS cloud that has settings that you want to use for this new
bucket, you can choose to copy the settings from an existing bucket. That's not the
case here. Right now the Block Public Access setting for the bucket is enabled, so all
public access is blocked.

So in other words, you can only access any files, for example stored in this S3 bucket,
if you successfully authenticate to AWS. That's fine. I can change that setting at any
time, just like I can enable or disable any of these settings at anytime like Bucket
Versioning.

Bucket Versioning can be useful if you want to retain multiple versions of files that
are stored in this bucket. So if you have a Word document, let's say Microsoft Word,
you have an original copy stored in the bucket.
But when someone opens it, edits it, and uploads it back to the bucket you can have
the new copy showing as the most recent version, but previous versions are still
accessible. Bucket Versioning is set to Disable. We can also add tags to the bucket.
Tags are metadata information.

You might use this to track projects that deploy multiple resources, like buckets, or
maybe tie it to a cost center for billing. So if I click Add tag for example, I might add
Project abc, if this bucket is being used for that project, and later on we could filter or
search for all resources related to Project abc.

So it's not required, but it can be useful. I can also enable encryption for data stored in
this bucket with an Amazon supplied key. Or if I've already set it up, I can have my
own key set up in the AWS Key Management Service. Sometimes regulations will
dictate that you must have control of an encryption and decryption key.
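
For what it's worth, once a bucket exists, that default encryption setting can also be
applied from the AWS CLI; a sketch using this demo's bucket name and the
Amazon-managed SSE-S3 key:

# enable default server-side encryption with the Amazon S3 managed key
aws s3api put-bucket-encryption \
  --bucket bucketappyhz172 \
  --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'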

I'm not going to go into any of the other settings, I'm just going to go ahead and create
the bucket. So after a moment we have a message about the successful creation of our
new bucket. I'll click the X to close that. So our bucket is shown in the US East
Region and the Access is showing as Bucket and objects not public.

So that looks good. So what I'm going to do then is click on the Name of the bucket to
open it up. The first thing we can do here is create a folder so that we could start
organizing files that are stored here. So I'm going to call this Folder projects.

And then down below we can determine if we want Server-side encryption enabled
for items within this particular Folder. I'm going to choose Enable and I'll leave it on
Amazon S3 key and I'll choose Create folder.

So we've now got a folder here called projects, just like what you would do on
any storage media locally; you create folders to organize related files. So I'm going to
open up the projects folder and from here I can then click Upload to upload content.
So maybe I'll choose to add a local folder with some sample files that I want uploaded
here. So then I've got my files shown in the list, perfect. I'm not going to change
anything here, I'm just going to click Upload. The names of the files are:
Project_A.txt, Project_B.txt, and Project_C.txt. In the Summary screen, the
Destination reads: s3://bucketappyhz172/projects/.

So the size of the files and the speed of your Internet connection will determine how
long that takes. These are three very small files, so it doesn't take very
long. I'll click Close and if I go into my projects folder, let's just back up here to the
bucket level, there's projects.
I uploaded an entire folder containing files, and that folder is called
Projects_Sample_Files. And if I open that up, here are the files themselves. I can click
on any one of these files The host clicks on Project_A.txt. to open up the details about
the file which includes things like a URL or a direct link to this Object, because it is
stored in a cloud storage location. The Object URL reads:
https://bucketappyhz172.s3.amazonaws.com/projects/Projects_Sample_Files/
Project_A.txt.

I can also go to the Permissions where we can determine who has Read or Write
access to this file in the cloud. And if we've got Versioning enabled we can also see if
there are past versions of this particular document. There's a note here that says that
Versioning isn't yet enabled.

We could also Download this file if we wanted a copy on-premises. We could Open it directly; because here it's a text file, it does open directly in the browser. Otherwise we might have to download it and open it with some other app on our device. Under the Object actions button in the upper right we can also Copy or Move the file.

I could also choose Edit storage class. Currently the storage class here is Standard. But I can choose infrequently accessed data as the Storage class, which means that it's on slower underlying storage media, but at a lower cost.

So we can do that, or maybe we're interested in Glacier long-term archiving, where if you want to retrieve any archived data with Glacier Deep Archive, it could take up to 12 hours to get that data back. But it's designed to be stored on cheap media, so your costs go down. So what your needs are will determine how you select the appropriate storage class.

I'm going to turn on Intelligent-Tiering so that AWS automatically determines the appropriate tier. I'll click Save changes, and now it's been done. So if I close out of here and go back to my list of uploaded files, notice the Storage class now reflects Intelligent-Tiering for the first file, the one we just changed.
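Scripted, the upload and the storage class change look roughly like this in boto3; changing an existing object's storage class is done by copying the object over itself with a new StorageClass. This is a sketch; the bucket and key names are the ones from this demo.

```python
import boto3

s3 = boto3.client("s3")

# Upload a local file; S3 has no real folders, the / in the key acts
# as the folder separator shown in the console.
s3.upload_file("Project_A.txt", "bucketappyhz172", "projects/Project_A.txt")

# Change the storage class by copying the object onto itself.
s3.copy_object(
    Bucket="bucketappyhz172",
    Key="projects/Project_A.txt",
    CopySource={"Bucket": "bucketappyhz172", "Key": "projects/Project_A.txt"},
    StorageClass="INTELLIGENT_TIERING",
)
```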

Deploying a Microsoft Azure Storage Account


Topic title: Deploying a Microsoft Azure Storage Account. Your host for this session
is Dan Lachance.

In this demonstration I will be deploying a storage account in the Microsoft Azure cloud. A storage account is used if you want to store things like files, otherwise called blobs, which stands for binary large objects. Or maybe you want to create a message queue to exchange messages between application components.
So you could do that within a storage account as well. If you want to deal with things
like databases, that's a separate type of solution. So we're talking about the underlying
infrastructure in the cloud and a storage account is considered Infrastructure as a
Service. So to get started here in the Azure GUI portal, I'm going to click Create a
resource over on the left. Now I can browse through the Categories, but what I can
also do is simply search for something.

So if I were to type in storage account, two separate words, then it shows up in the list
and I could click on that to select it. Now that gives me a brief Overview of what a
storage account is and I can read these details, The Overview reads: Microsoft Azure
provides scalable, durable cloud storage, backup, and recovery solutions for any
data, big or small. It works with the infrastructure you already have to cost-effectively
enhance your existing applications and business continuity strategy, and provide the
storage required by your cloud applications, including unstructured text or binary
data such as video, audio, and images. but I know this is what I want to deploy. So
I'm going to go ahead and click on Create. And there are some details to be filled out
such as the Subscription to tie this to.

Subscription deals with things like support plans and billing actually occurs at the
Subscription level in the Microsoft Azure Cloud. The Subscription is set to Pay-As-
You-Go. I want to deploy this into a specific Resource group. A Resource group lets
you organize cloud resources, kind of like a folder or directory is used to organize
files on storage media; same type of thing in the cloud. The host selects Rg1. I have to
put in a unique name here. If I were to try to call this, let's say, just store account, it
says that the storage account name is already taken.

It needs to be unique in the cloud. So I'm going to have to specify something that
adheres to my naming standard in my organization. We don't want to just use
randomized names, at least not all the time. The Storage account name reads
storacctappyhz172. And then if it's unique, we'll be good to go. So after it's been
determined that it's unique, there's no message saying it's taken, we then determine the
geographical Region where we want that storage account deployed.

Now, while it's accessible worldwide over the Internet, of course, we want to try to place this nearest where there will be the most activity related to the storage account. That's fine, I'll leave it in East US. The Performance is set to Standard. If we need the utmost in performance, the best possible disk I/O for reads and writes, we could choose Premium, but you're going to pay more for that.

In this case, I only need Standard. For Redundancy, I can also choose, for example,
Geo-zone-redundant storage. The other available options read: Locally-redundant
storage (LRS), Geo-redundant storage (GRS), and Zone-redundant storage (ZRS). So
it will allow me for example, to make sure that the data is available in an alternate or
secondary geographical region, if for some reason the East US Region is inaccessible.

So that's fine. Having done that, I can click the Next: Advanced button down at the
bottom where I can determine if I want to enable things like infrastructure encryption,
or perhaps I don't want to allow public access to blobs, to files in this storage account.
There are two more options: Require secure transfer for REST API operations and
Enable infrastructure encryption. Next for Networking lets me determine from which networks I want to allow access to the storage account.

If I were to choose, for instance, Public endpoint (selected networks), then down
below, if I had any, I could choose a Virtual network and that means that anything in
that Virtual network would be allowed to access this storage account. For instance, I
might have some custom code running in a virtual machine deployed in this VNet.

I don't have a VNet, so I can't select that, so I'm just going to go back for now and just
leave it on Public endpoints (all networks), which also allows access from the
Internet.

It doesn't mean anonymous, but potentially people can authenticate and access the
contents of this storage account even from the Internet. I'll click Next for Data
protection. This is where I can determine if I want to enable features like soft delete so
that if I delete a file, I can recover it or undelete it for up to 7 days. The other options
read: Enable point-in-time restore for containers, Enable soft delete for containers,
Enable soft delete for file shares.

Those types of things are available here. Next for tagging. Tags are metadata, and you
can add multiple tags here. For example, maybe I want this to be tied to a specific
Project; let's say Project abc. And also I can specify more than one, so I could also
specify, let's say a CostCenter tag, and once I've done that, I can go ahead and create
the resource. You don't have to add tags, but I've opted to because you can then filter
based on these tags. You might even be able to organize your billing based on tag
information.

For example: how much has Project abc cost this month for resources tagged that way? The Value of CostCenter is set to yhz. At any rate, I'm going to click Review + create.
It's going to verify that my selections make sense. The validation has passed, so to
create the storage account I'll click Create. And then I'll get a message saying it's in
the midst of deploying the storage account.
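For the scripting-inclined, here's a rough sketch of the same deployment using the Azure SDK for Python. It assumes recent versions of the azure-identity and azure-mgmt-storage packages; the subscription ID is a placeholder, and the resource group, account name, region, SKU, and tags match the choices made in this demo.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

# Sketch: deploy a StorageV2 account with geo-zone-redundant storage.
credential = DefaultAzureCredential()
storage_client = StorageManagementClient(credential, "<subscription-id>")

poller = storage_client.storage_accounts.begin_create(
    "Rg1",
    "storacctappyhz172",
    {
        "location": "eastus",
        "kind": "StorageV2",
        "sku": {"name": "Standard_GZRS"},  # Geo-zone-redundant storage
        "tags": {"Project": "abc", "CostCenter": "yhz"},
    },
)
account = poller.result()  # blocks until the deployment completes
print(account.name, account.primary_location)
```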
At this point I could go and click and do something else in the cloud, configure
existing objects, maybe deploy new objects, or whatever it is that I need to do. Now I
can refer to the notification or the bell icon in the upper right, which will show me
what's been happening, such as the deployment having succeeded for our new storage
account. We could click the Go to resource button here to work with the storage
account.

Or I could go to the All resources view on the left here in the portal, and perhaps I
would filter it, let's say, based on stor, there's our storage account. Of course there is
also a dedicated Storage accounts view. Regardless of how you get to it, when you
click to open up the storage account, you can manage the properties or the
configuration of it.

So for example, if I scroll down and go all the way down under Data management to Geo-replication, you might recall that when we created the storage account we enabled Geo-redundancy, so even though the primary Location where it was deployed is East US, the secondary replica is in West US.

Excellent, so if either of these regions is inaccessible for some reason, we have the data available in the other, secondary or alternate, region. If I scroll up and click Containers on the left, this refers to folders used to organize your files, not application Docker containers or anything like that.

This is where I could click the add Container button, for example maybe add a
projects folder, and if I click to open that folder I could then click the Upload button
and then I could select some on-premises files that I might want to upload. I'll just
upload a couple of sample files.

Once you've selected them, you can just click Upload and they're uploaded. The files
are: Project_A.txt, Project_B.txt, Project_C.txt, ProjectA_Budget.xls,
ProjectB_Budget.xls, ProjectC_Budget.xls, and ProjectD_Budget.xls.

The default is that they're in the Hot Access tier for frequent access, but if something is only accessed infrequently, let's say our Project_A file, I could select it and change the tier. Now, you could have done that back here, actually; you could have just selected it and clicked the Change tier button. It doesn't matter.

I might put this in the Cool Access tier for less frequently accessed data, which uses slower underlying storage media, which means we pay less, and of course that Access tier is now reflected under the Access tier column. You could then click on a file and go into the details, all of the metadata related to it. You could Download the file, Delete it, or even Edit it directly in the browser if it's a standard file type, like a TXT file. So really, working with files in the cloud is not very different from working with files that are stored on storage devices in an on-premises environment.
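Working with the blobs themselves uses the data-plane SDK, azure-storage-blob, rather than the management SDK. A sketch, assuming that package is installed and a connection string has been copied from the storage account's Access keys blade (the connection string is a placeholder):

```python
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")

# Create the projects container and upload a local file into it.
container = service.get_container_client("projects")
container.create_container()
with open("Project_A.txt", "rb") as data:
    container.upload_blob(name="Project_A.txt", data=data)

# Move an infrequently accessed blob from Hot to the Cool access tier.
blob = container.get_blob_client("Project_A.txt")
blob.set_standard_blob_tier("Cool")
```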

Deploying an AWS Virtual Private Cloud Network


Topic title: Deploying an AWS Virtual Private Cloud Network. Your host for this
session is Dan Lachance.

In order for a server to be useful, it needs to be properly configured on a network, so it can communicate with other hosts and potentially devices. So what we're going to do in this example, in the Amazon Web Services or AWS cloud, is configure a virtual network, although in AWS it's actually called a VPC, a virtual private cloud; but it's really just a virtual network.

So here in the AWS Management Console, I'm going to search for vpc in the top
center and I'll click on VPC, which is going to open the VPC management console. If
I click on Your VPCs over on the left, any existing VPCs will be shown. A VPC is
visible on-screen with an ID of vpc-27ea875a. Notice that this type of thing is tied to a
region. Currently in the upper right, the region is selected as US East or N. Virginia.

If I were to choose a different region like US West (Oregon), let's say, it'll switch over my view to show me what's in that region, and notice that we don't have the same VPC there. Now if you look carefully at the ID, this one ends with 5d93. Let's just flip back to US East; this one ends with a different suffix for the unique identifier, so the point is that it's not the same VPC. I want to create a VPC in the US East N. Virginia region.

So I'm in that area here, I'm in that region, so I'm going to click the Create VPC button
and I'm going to call it vpc and I'll use a nomenclature that is standardized in my
organization. The Name tag reads vpcapp1easty172. So I'm going to specify that.
Now I must specify an IPv4 CIDR block. CIDR or Classless Inter-Domain Routing
means that you specify a slash and the number of bits in the network mask.

So I want to set up a network here, 10.1.0.0/16. You will have planned this ahead of
time, what your IP addressing needs are. So I want that CIDR block. I can also specify
that I would like an Amazon-provided IPv6 address block, or I could specify an IPv6
address block that I own. I'm going to leave it on just a single IPv4 address block.

I could also add tagging information for this. So the Name tag is going to be
important. I'm going to call this App1VPC, assuming that it will be used for me to
deploy components related to an app called App1. And then I'll click the Create VPC
button. So our VPC has been successfully created. We can see that with the message
at the top of the screen and I have the Details, or properties, listed here.

But if I go back to the VPCs view over here on the left and we're still in the N.
Virginia region, the VPC shows up in the list and the name of course reflects what
we've just specified. If I put a check mark in the name of that VPC, then it selects it
and down below I have the properties or Details of that VPC. Now it will
automatically be associated with DHCP options and a routing table and a network
ACL. The DHCP options set reads dopt-6d868817, the Main route table reads rtb-
0cada04e573c9c149, and the Main network ACL reads acl-0d127c3b91c734eaf.

So for the DHCP options, if I click on that link and take a look at them, they refer to things like the DNS servers that will be used and the DNS domain name that will be used as a suffix for things like virtual machines that are deployed here. The Domain name reads ec2.internal. But also, if I go back into the Details of that VPC, the routing table of course controls any routes that might be set to control network traffic flow.

And of course, finally, the last item that was associated here automatically was a
network ACL, and that's an access control list, which has a list of inbound and
outbound traffic that is either allowed or denied into or out of subnets within this
VPC. So we've got an Allow for all traffic.

And if I were to take a look at the Outbound rules, we can also determine what traffic
should be allowed or denied out of this VPC and its subnets, although we haven't
defined subnets yet; so let's do that. I'm going to go to the Subnets view on the left and
I've got some existing ones, but I'm going to create a new one, Create subnet, so it has
to be tied to a VPC.

So I'm going to choose my App1VPC that we've just created and down below I'm
going to call it Subnet1 and down below I have to specify a CIDR range, so I'm going
to specify 10.1.1.0/24, so 24-bit subnet mask. Now, my IPv4 CIDR for the VPC is
10.1. So this subnet address range falls within 10.1, which is important, it's valid. So
having done that, I'm going to click Create subnet and after a moment it'll be done.
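The same VPC and subnet can be created with a few boto3 calls. A sketch, using the CIDR blocks planned in this demo and assuming configured AWS credentials:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the VPC with the planned CIDR block.
vpc_id = ec2.create_vpc(CidrBlock="10.1.0.0/16")["Vpc"]["VpcId"]

# A Name in AWS is just an ordinary tag whose key is "Name".
ec2.create_tags(Resources=[vpc_id], Tags=[{"Key": "Name", "Value": "App1VPC"}])

# The subnet's CIDR must fall within the VPC's CIDR block.
subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.1.1.0/24")
ec2.create_tags(
    Resources=[subnet["Subnet"]["SubnetId"]],
    Tags=[{"Key": "Name", "Value": "Subnet1"}],
)
```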

If I go to my Subnets view and scroll down through the list, there's Subnet1, so we've
got that part enabled. So if I go back to the main AWS screen, let's say I search for
ec2, which is where I go to launch or deploy new virtual machine instances, I'll go to
the Instances view. If I go to launch an instance, I'll just choose Amazon Linux 2 as
the image.
Then one of the things I can do under Step 3: Configure Instance is determine the
Network where I want it deployed. There's App1VPC. And of course I've got my
subnet, Subnet1. Now that shows up automatically because we're still in the same
region, US East (N. Virginia), where that VPC and subnet were deployed.

But let's just cancel this first, and let's say we switch to a different region like US West (Oregon) and go to launch an EC2 instance, a virtual machine, in that location. So I'll go to Step 3, Configure Instance. Under Step 1: Choose an Amazon Machine Image (AMI), the host clicks on Select, next to Amazon Linux 2 AMI (HVM), SSD Volume Type. Notice that our VPC for App1 does not show up, and that's because
we're in a different region. So the moral of the story here is that we need to make sure
that we are in the correct region where the VPC was deployed in order for it to be
selectable from the list. And so you have to plan for this ahead of time before you start
deploying or launching EC2 instances, or at least you should. I mean, technically you
could create VPCs and subnets on the fly, but ideally you plan that ahead of time and
deploy those first.

Deploying a Microsoft Azure Virtual Network


Topic title: Deploying a Microsoft Azure Virtual Network. Your host for this session is
Dan Lachance.

A server specialist must also be a network specialist. The server lives in a network
ecosystem and so things like IP addressing and network connectivity to remote
networks, all of that becomes very important for a server technician, whether you're
working with physical or virtual servers. In this case, we're going to deploy a virtual
network in the Microsoft Azure Cloud.

To get started here, I'm going to click Create a resource in the upper left. I'm using the
Azure portal GUI to create this VNet. Now I could go through and select the
appropriate category and then navigate to what I want to create, in this case a virtual
network, or I could type out what I want to search for, for example virtual network,
two separate words, and from here I'll choose Virtual network in the list.

That puts me on the landing page which gives me an Overview about virtual
networks. Well, I'm good with this, so I'm just going to move on by clicking the
Create button. OK, so what do we have to do here? Well, as is the case with any
resource you deploy in the Azure cloud, it's got to be associated with a Subscription
for billing purposes, The Subscription reads Pay-As-You-Go. and also it's got to be
deployed into a Resource group for organizational purposes. The host selects Rg1.
Then we have to give it a Name. Ideally your organization will have some kind of
naming standards that are used for naming things like virtual networks in the cloud.
So I'm going to call this a Name that adheres to my organizational naming standard
The Name is set to vnetyhz172. and I have to determine which Region I want to deploy this in. Now that doesn't have any impact on what can access it; it's just a cloud resource, really a definition for a configuration in a data center somewhere, which is where it originates.

In this case, let's say, I'll choose Canada East. Having done that, I'll click Next for IP
addressing. There's a default private IP address block that's been assigned here, which
is great. Under IPv4 address space, it reads: 10.1.0.0/16 10.1.0.0 - 10.1.255.255
(65536 addresses). I'm going to leave it, but I could click the trash can to remove it,
but I can add additional ones.

Maybe I want to use the 192.168.5.0/24 network. That's CIDR notation, C-I-D-R, which means we use a slash and then a number that represents the number of bits in the subnet mask. 24 bits means that 192.168.5, the first three bytes, identifies our network. So having done that, that's great.

I can also Add IPv6 address spaces along with the IPv4 address spaces if I need that; I don't. Now besides the virtual network, you need subnets within the virtual network, because at the end of the day that's really where you'll be deploying things like virtual machines and so on. There's a default subnet here; I'm going to remove it.

I'm going to select it and choose Remove subnet. I'm going to click Add subnet. Let's
call this first one Subnet1. Here I can specify the IP address range that I want to use.
And the Subnet range must fall within one of the virtual network ranges. So, let's say,
I put in 10.2.0.0/16. It says, well, that's not valid because it's not contained within
10.1.

16 bits means the first two numbers, 10.1, are going to be the prefix no matter what, so that's invalid. But if I were to put in, let's say, 10.1.1.0/24, so 24 bits in the subnet mask, making 10.1.1 our network, it likes it. OK, so I'm going to go ahead and click Add.

I could also add another subnet that might use some or all of the range in the
192.168.5 prefix I've specified above, so maybe I'll call this one Subnet2 and I'll start
filling out the details. You wouldn't do this randomly, you would do it according to
something you had planned ahead of time. So if I want to use the entire subnet, I can
actually do that, like the entire range for this subnet. So I'll go ahead and Add that and
there we go. Under Subnet address range, the host types 192.168.5.0/24.
So I've got two different ranges that can be used within a single virtual network; totally valid. If I click Next down at the bottom for Security, I can enable a Bastion Host. That allows me to remotely manage virtual machines in this virtual network without each of those virtual machines requiring a public IP address, which can present a security risk.

I can also enable enhanced DDoS Protection and the Azure Firewall. I'm not going to do any of those things, and I'm not going to add any tagging or metadata. I'm just going to go to Next: Review + create; it'll validate my selections, and I'll click Create. And our network and its two subnets are in the midst of being created, although we aren't really creating three resources here; subnets are not their own resource in Azure.

We just have the virtual network and within its properties we'll have the subnets. I'll
show you what I mean. So I'm going to click Go to resource to go into our VNet and
one of the things I can do is click Subnets in the left-hand navigator to reveal the
subnets contained within this VNet, so they aren't their own separate objects. They're
just a configuration within the properties of the VNet.
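Scripted with the azure-mgmt-network package, the VNet and both subnets are created in a single call, which also illustrates that point: the subnets are defined as properties inside the VNet rather than as separate resources. A sketch, with the subscription ID as a placeholder and names and ranges matching this demo:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

credential = DefaultAzureCredential()
network_client = NetworkManagementClient(credential, "<subscription-id>")

# One call defines the VNet, its two address spaces, and both subnets.
poller = network_client.virtual_networks.begin_create_or_update(
    "Rg1",
    "vnetyhz172",
    {
        "location": "canadaeast",
        "address_space": {"address_prefixes": ["10.1.0.0/16", "192.168.5.0/24"]},
        "subnets": [
            {"name": "Subnet1", "address_prefix": "10.1.1.0/24"},
            {"name": "Subnet2", "address_prefix": "192.168.5.0/24"},
        ],
    },
)
vnet = poller.result()
print([s.name for s in vnet.subnets])
```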

So we've now got our VNet, and if I go to Overview, I can see it was placed in Canada East. So if I go to deploy, let's say, a virtual machine, I'm going to click Create a resource in the upper left and, let's say, deploy Windows Server 2019. Or rather, I will begin the process; I won't actually complete it. One of the things I have to do here when I deploy a virtual machine is specify the Region. Notice it's set here to US West.

So when I go to Networking and I go to select a Virtual network, notice nothing shows up, because US West is not where we created our VNet. If I go back to Basics and choose the Region where we deployed our virtual network, which I believe was Canada East, then when we go and take a look at the Networking section, all of a sudden our VNet shows up.

Now I'm showing you this because it's important when you plan your VNets in the
cloud, in this case in Azure, think about the regions where you deploy them, because
that will have an impact on the regions where you deploy your virtual machines.

Deploying an AWS Windows Instance


Topic title: Deploying an AWS Windows Instance. Your host for this session is Dan
Lachance.

In this demo I'm going to be deploying a Windows virtual machine instance in the Amazon Web Services cloud, using the AWS Management Console to do it. But the first thing we have to consider is the network into which we want to deploy the virtual machine.

Now, you can create virtual networks, or VPCs (virtual private clouds) as they're called, as you deploy the VM. But you can also do it ahead of time, and I've already done so, so let's explore that first. So here in the AWS search bar
at the top, I'm going to search for vpc, those are virtual networks in the AWS cloud.

I'll click on it in the resultant list, which takes me to the VPC Management Console
and I'll choose Your VPCs. And the thing to watch out for in the AWS Management
Console is which geographical region you are in. So in the upper right there's a
selector.

I'm currently in US East N. Virginia where I have a VPC, a virtual network, already
configured called App1VPC. But if I were to switch to a different region I would get a
different list of VPCs deployed in that region. I don't see my App1VPC and so that's
always important from a planning perspective and also an execution perspective,
when you go to launch your virtual machines.

So let's go back to US East N. Virginia, where I've got my App1VPC. Now if we take
a look at the network range it's set to use 10.1.0.0. Now you don't actually deploy
virtual machine instances directly into a VPC, but rather a subnet associated with the
VPC. So let's do our homework and let's take a look at the Subnets view on the left.

Here I've got a subnet called Subnet1 and it's using a range under the VPC range of
10.1. This subnet is 10.1.1. OK, so that's reasonable. Now I want to go and deploy a
virtual machine instance into Subnet1. So I'm just going to go and search here for ec2.
EC2 relates to things like virtual machines.

I'll click on EC2 to go in to that console, There is an Instance visible on-screen with
an ID of i-0c4d3b7d64701ea3a. and again, I'm already in the US East N. Virginia
region, which is good, I want to deploy my VM there, because I want to deploy it into
the VPC we were looking at. I'm going to choose Launch instances. An instance is
just a virtual machine. In this case, we don't want to deploy Amazon Linux or macOS
or anything like that.

Specifically, in this particular case I am interested in deploying a Windows virtual machine. So I could scroll down and look for Amazon Machine Images or AMIs that use the Windows Server operating system. Now I don't have to scroll through, of course; I could search. So I could search for windows server to filter the list, and
now I've got my variations on some of the same themes like Server 2019 Base, Server
2019 Base with Containers, Server 2019 with SQL Server 2017 and so on.
I just want Windows Server 2019 Base in this example, so I'm going to select that
AMI. The next consideration, so Step 2, is the Instance Type. This is where you
determine the horsepower. The current Type is set to t2.micro which consists of 1
vCPU and 1 GiB of Memory. Well that might be pushing it depending on the
workload you're going to run in Windows Server 2019.

I'm going to choose t2.medium where I have 2 vCPUs and 4 GiB of Memory. Then
I'll click Next to Configure the Instance Details. For both of these Instance Types, the
Instance Storage (GB) is set to EBS only, the Network Performance is set to Low to
Moderate, and the IPv6 Support reads Yes. Now there are many options here, but
what I'm primarily interested in is making sure that this server gets deployed into the
correct VPC, in this case App1VPC and specifically Subnet1 within that VPC.

Now those are available and selectable in the dropdown lists only because we're in the
same region where that VPC and subnet were deployed; US East (N. Virginia), in this
particular case. OK, so that's good. Do we want to assign a Public IP? Well, that's not
recommended for security purposes, however, for testing purposes it's OK. So I'm
going to enable a Public IP for this particular virtual machine.

Then we have numerous other options, such as whether we want to join this to an
existing instance of an Active Directory domain, which I've not configured, so
nothing is in the list, but no, I don't want to do that. The remaining fields read:
Placement group, Capacity Reservation, IAM role, Shutdown behavior, Stop -
Hibernate behavior, Enable termination protection, Monitoring, and Tenancy. I'm
going to click Next for Storage. So we've got an operating system device here, it's 30
GiB.

It's using General Purpose solid state drives or SSD, The Volume Type reads Root, the
Device reads /dev/sda1, the Snapshot is set to snap-09b8797765fb66586, the IOPS
reads 100 / 3000, the Throughput (MB/s) reads N/A, the Delete on Termination
option is checked, and under Encryption, it reads Not Encrypted. but we can also add
additional data volumes here where we could determine, for example, if we need high
performance, so Provisioned IOPS SSD where we can specify the IOPS value. IOPS
stands for input/output operations per second. So a higher IOPS value means much
higher disk performance for reading and writing.

So we can determine if we need to do that and whether or not we want to encrypt this
particular data disk. But I don't want to do that, so I'll click the X to remove it. For
now, I'm just going to go with the operating system disk. I can add data disks at any
point in time. So I'm going to go ahead and click Next. This is where I can add tag
information. I'm going to add a Name tag and I'm going to call this App1WinSrv2019.
I'll click Next for security settings. A security group controls traffic into and out of a
specific virtual machine instance. It knows that it's running the Windows OS because
of the Amazon Machine Image I selected, and so there's a rule to allow RDP traffic
for TCP 3389. Now notice there's not an allow or deny option.

Security groups are specifically allowances. In Amazon Web Services, Network ACLs, which are tied to subnets, are lists of traffic types that are allowed or denied, but for security groups, allow is implied. I'm going to click Review and Launch, and then click Launch, at which point I have to specify a key pair I want to use.
Now a key pair is a public and private key pair where you can download and store the
private key.

The public key is stored in AWS, and you use the key pair to authenticate to the instance. In the case of Windows, you use it to decrypt the built-in administrator account password, and so I'm going to create a new key pair here. The Key pair type
is set to RSA. The other option reads ED25519. I'm going to call it WindowsKeyPair.
However, you can use the same key pair for Windows and Linux.

Really, it doesn't matter. But, in this case, for clarity, I'll do that and I will download
the key pair to my local machine and you want to make sure you back this up. So I'm
going to choose Download Key Pair. So it's downloaded the KeyPair.pem file,
perfect, and that's great. Now what I'm going to do is I'm going to launch the instance.

Now when you launch the instance you're deploying the virtual machine and it will
automatically be started. I'm going to click on the link here for the unique ID that was
assigned to that instance. The Instance ID reads: i-04d03ffdc9bd9c573. The instance
now shows up in the view with an Instance state of Pending.

Now this is a filtered list because I clicked on the link, I can click to remove the filter
to view all instances and their states. Now, once the instance is up and running, you
can select it, and for example, if you no longer need that instance for now, you can
Stop the instance.

You can Reboot it, you can Terminate or Delete the instance if, for example, you were
only using it for testing and you no longer require it. But don't leave it running if you
don't need it, and don't keep it if you don't need it, because otherwise you're paying for
it unnecessarily.
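The console wizard above maps to a handful of boto3 calls. Here's a sketch, assuming configured credentials; the AMI ID is a placeholder, since Windows Server 2019 Base AMI IDs differ per region, and the subnet ID would be the one from your own VPC:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the key pair and save the private key; AWS returns it only once.
key = ec2.create_key_pair(KeyName="WindowsKeyPair")
with open("WindowsKeyPair.pem", "w") as f:
    f.write(key["KeyMaterial"])

# Launch the instance. ImageId and SubnetId are placeholders.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # current Windows Server 2019 Base AMI
    InstanceType="t2.medium",         # 2 vCPUs, 4 GiB, as chosen in the demo
    KeyName="WindowsKeyPair",
    SubnetId="<subnet-id>",           # Subnet1 inside App1VPC
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "App1WinSrv2019"}],
    }],
)
print(response["Instances"][0]["InstanceId"])
```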

So this is what the downloaded KeyPair file looks like, the one I specified when I launched that instance. You don't need a new key pair for each instance; you can just have one and use it for all your instances. But the point is this: it contains the private key information.
And so if I want to decrypt the Windows admin password to remotely manage it, I
need this file with the private key to decrypt that. Let's take a look at that procedure.
So here in the Instances view, if I select my EC2 instance, I can go to the Actions
menu down to Security and then from there I can choose Get Windows password.

And I have to browse to my key pair file, the one we were just looking at that I
downloaded. So you need to safeguard that. I'm going to go ahead and click Browse to
grab it. Once I've done that, the private key is automatically placed in this field and I
can choose Decrypt Password. What I then see is the Administrator User name and
the Password.
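That same retrieval can be scripted: get_password_data returns the encrypted password blob, and the private key from the key pair decrypts it. A sketch assuming boto3 plus the third-party cryptography package; the instance ID is the one from this demo:

```python
import base64

import boto3
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import padding

ec2 = boto3.client("ec2", region_name="us-east-1")

# PasswordData is empty until Windows has finished initializing.
encrypted = ec2.get_password_data(InstanceId="i-04d03ffdc9bd9c573")["PasswordData"]

with open("WindowsKeyPair.pem", "rb") as f:
    private_key = serialization.load_pem_private_key(f.read(), password=None)

# EC2 encrypts the administrator password with the key pair's public key
# using PKCS#1 v1.5 padding, so the private key reverses it.
password = private_key.decrypt(base64.b64decode(encrypted), padding.PKCS1v15())
print(password.decode("utf-8"))
```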

So what I can now do is use any method I would normally use to remotely manage
Windows like Remote Desktop client, and I would enter these credentials, and of
course I'd have to have a public IP address. The private IP will only work for example
if you've VPNed into the AWS Cloud or if you've got some kind of a Bastion host.

But I can also go back to that instance, and if it was assigned a Public IP address that
will be shown when I select it down under Details. The Public IPv4 address reads
34.205.155.225. And we said for testing purposes it's OK to have a public IP with
RDP listening on it, but you don't want that for long and certainly never for
production type of servers.

Deploying an AWS Linux Instance


Topic title: Deploying an AWS Linux Instance. Your host for this session is Dan
Lachance.

In this demonstration I'm going to deploy a Linux server in the Amazon Web Services
cloud. The way I'm going to do that is using the AWS Management Console. So to
start with, I'm going to search for vpc because I want to make sure I know if there are
any existing virtual networks and subnets where I can deploy my Linux server. So I'm
going to go to Your VPCs on the left.

I've got a VPC called App1VPC and the IPv4 range for that network is 10.1. If I look
at my Subnets, I also have a subnet called Subnet1 that is within that range. So it falls
under 10.1, because the subnet range is 10.1.1. So that means that my Linux server
will acquire an IP address within that range.

It might also have a public IP address depending on how I launch or deploy my virtual
machine instance. And all of that addressing can be changed after the fact if I really
want it to. So I'm going to search for ec2 in the search bar at the top and go to the EC2
management console, because that's where I go to work with virtual machine
instances, of which I have a few. I want to launch a new one, so I'll click Launch
instances.

And the first thing I have to determine is which Amazon Machine Image, or AMI, I want to create my virtual machine from. This is like an OS image. So maybe it's Amazon Linux, or perhaps I would search for ubuntu if I want an Ubuntu Linux-based virtual machine. So I'm going to go back to the original screen and use Amazon Linux 2 as the AMI; I'll choose Select.

I know that's a lightweight version of the Linux operating system, so I'm going to
leave it on the t2.micro sizing, which consists of 1 virtual CPU and 1 GiB of RAM. So
I'll click Next to Configure the Instance Details. I want to deploy this in App1VPC
and then there's only one subnet within that VPC, so that's good. The Subnet reads
subnet-04e88f187a7ff43f3 | Subnet 1 | us-east-1d. I then get to determine if I want a
Public IP address assigned.

Now if you don't have another way to remotely manage this virtual machine, such as
from your on-premises environment, so if you don't have a VPN, if you don't have
some kind of a Jumpbox or Bastion Host that you go through to manage virtual
machines through their private IPs, then you're going to have to consider whether you
want to assign a Public IP, which I will here, I will enable it.

Now I would not do this for a production server because it increases the attack
surface. It's not part of hardening a machine. Hardening a machine means reducing
potential attack vectors, and having it publicly accessible with a public IP is a
potential security risk. But because this is for testing, I will Enable that. That's really
all I'm going to do.

There are no other options I'm going to configure here. So I'm going to click Next for
Storage because we know we're going to have an operating system disk. Here, it's
going to be approximately 8 GiB in Size, and I have the option of adding additional
disk volumes where I would fill in the details like the Size of it, and also determine
the Volume Type. So if I need the utmost in disk performance, The options for
Volume Type are: General Purpose SSD (gp2), General Purpose SSD (gp3),
Provisioned IOPS SSD (io1), Provisioned IOPS SSD (io2), Cold HDD (sc1),
Throughput Optimized HDD (st1), Magnetic (standard). I could choose Provisioned
IOPS SSD, so Input/Output Operations Per Second Solid State Drive and then specify
an IOPS value.

A higher IOPS value always provides better performance if you need it for the
workload running in the VM. I don't need to attach any additional data volumes and I
can always do it after the fact if I needed to. Next. Then for tagging, I'm going to click
Add Tag and I'm going to add a Name tag and I'm going to call this AmazonLinux1.
Then I'll click Next.

The security group is a list of what traffic is allowed into or out of the virtual machine.
You can't specify that it's denied. So it's automatically going to allow SSH traffic for
TCP port 22. The default here is, it's going to do it from anywhere. That's what
0.0.0.0/0 means with IPv4.

But I could choose My IP, and it will automatically detect the IP address I'm managing this from right now and fill that in; in my case that makes sense. The IP reads 24.142.51.137/32. So I'm going to go ahead and use that.
When I choose Review and Launch, I have a chance to go back and edit each section
of my virtual machine details, but I'm OK with my choices, so I'm going to click
Launch.

Here, I can choose an existing key pair. This is a public and private key pair. The public key is stored in AWS; the private key I download and store, and I need to safeguard it and back it up. You need it to authenticate to the VM. So I'm going to acknowledge
that I have the corresponding private key and I'll launch the instance.
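A scripted equivalent, as a boto3 sketch: the security group allowing SSH only from a single management address is created first, then referenced when launching the instance. The AMI, VPC, and subnet IDs are placeholders; the /32 address is the one shown in this demo.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Security group: allow SSH (TCP 22) only from my management IP.
sg_id = ec2.create_security_group(
    GroupName="AmazonLinux1-ssh",
    Description="SSH from my IP only",
    VpcId="<vpc-id>",
)["GroupId"]
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "24.142.51.137/32"}],
    }],
)

# Launch the Amazon Linux 2 instance; the same key pair can be reused
# for Windows and Linux, as noted earlier.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # current Amazon Linux 2 AMI placeholder
    InstanceType="t2.micro",
    KeyName="WindowsKeyPair",
    SubnetId="<subnet-id>",
    SecurityGroupIds=[sg_id],
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "AmazonLinux1"}],
    }],
)
```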

So at this point my Amazon Linux virtual machine is in the midst of being launched in the AWS cloud and I've got a link here I can click on to monitor its progress, but I could also click to remove that filter to see all instances, The link reads i-0125866fe258df75b. and I can click the refresh button to track the Status; the Instance state for my Amazon Linux VM is now Pending. So it might take a moment or two before it's initialized and ready to go.

So I can keep clicking the refresh button to track it, and currently it's up and running and ready to go. So if I were to select that Amazon Linux VM, here's where I could view, for example, in our particular case, its Public IP address, The Public IPv4 address reads 3.94.21.254. and I could use any mechanism I would normally use to remotely manage Linux over SSH.

So for example, using the free PuTTY, P-u-T-T-Y, client, I could go ahead and do that. But I would have to have the private key file that was downloaded as part of the key pair, and it must be in the correct format to be consumed by the tool I'm using, like PuTTY. So there you go; that's how quick and easy it is to deploy a Linux VM in the AWS cloud.

Of course, if you don't need it running all of the time, you can go to Instance state and Stop it so you don't pay for it while it's running; and if you don't need it ever again, you can Terminate or Delete it so that you're not paying for its storage either.

Deploying a Microsoft Azure Windows Virtual Machine


Topic title: Deploying a Microsoft Azure Windows Virtual Machine. Your host for this
session is Dan Lachance.

In this demonstration, I'm going to be deploying a Windows virtual machine in the Microsoft Azure public cloud. Now, when you think about deploying a server, even on-premises, you really need to have a network first in which to deploy that server, so it can communicate with other hosts. And so we have to have the same kind of planning in place in the cloud.

And what that means then is we're first going to explore our VNets or virtual networks
here in Azure and then we'll deploy a virtual machine into one of the subnets in the
VNet. So here in the Azure portal on the left, in the menu system, I'm interested in
viewing my virtual networks. So I'm going to click the Virtual networks view and I've
got a virtual network here called vnetyhz172.

Now, you know what's important here? The Location, or the geographical region. That VNet was deployed in Canada East, and that's relevant because when you deploy a virtual machine in the Azure cloud, you have to specify the region in which to deploy it. And if you deploy it in a different region, let's say Canada West, well, then no VNets will show up, because I don't have any VNets deployed in Canada West.

So you see then you have to match up the Location or region where you've deployed
your VNets with where you're deploying your virtual machines. The Resource group
reads Rg1 and the Subscription reads Pay-As-You-Go. OK, so let's go ahead then and
click Create a resource in the upper left. I want to deploy a Windows Server virtual
machine and Windows Server 2019 shows up here, but I can also choose See more in
Marketplace. Maybe I want to see all of the variations of operating systems.

I could of course filter it, like right now it says Operating System: All, but I could
remove that and say, well, maybe I specifically need Windows Server 2016 for some
reason. Maybe there's some kind of a software dependency. And so now my list is
filtered to show me only that. Of course you can also simply search for something
specific. So let's say I'm looking for a Linux Apache, MySQL and Python or PHP
stack, a LAMP Stack Server.
Then I've got images available from which to deploy a virtual machine. But of course we're here to do Windows, so I could also search for windows and see what's available from that perspective. So once I've selected what it is that I need,
let's say I'm going to choose Windows Server and from the dropdown list here, I could
have chosen this before of course, but this is just another way to do it.

I'm going to choose Windows Server 2022 Datacenter, I'm going to click Create, and I
have to deploy this into a Subscription for billing purposes and also a Resource group
for organizational reasons. The Resource group reads Rg1 and the Subscription reads
Pay-As-You-Go. And then I have to give it a name. So if I were to call this
WinSrv2022-1, then down below I can specify the Region. Now currently the Region
is Central US.

Well that could be a problem if I leave it that way because if I just skip ahead here at
the top to the Networking part of the wizard, it allows me to create a new VNet and
I've got this VNet, Rg1-vnet, but I don't see my yhz VNet that we looked at
previously. And that's because the Region doesn't match. So I'm going to go and
specify that I want to deploy this virtual machine in the Canada East Region. Under
Instance details, the other fields visible on-screen read: Available options, Security
type, Image, Azure Spot instance, and Size. The host leaves them as the defaults.

Now if I just skip ahead once again to the Networking part of the wizard then my
vnetyhz shows up as being available. And then from there I can select the Subnet
where I want this virtual machine deployed. But let's go back to Basics because we're
not quite finished here.

So down below we can specify some details such as the Image that we want to use to
deploy this virtual machine. Well, that's what we started this off with. We've got
Windows Server 2022 Datacenter edition. Down below the sizing is crucial, vertical
sizing, the horsepower. It's set to 1 vcpu and 3.5 GiB of memory, and I have an
approximation for the monthly cost if it's running all the time. But I can also select a
different Size if I think I'm going to need more CPU power or more RAM.

I have to specify the Administrator account Username and Password and then confirm
the Password. So let me go ahead and do that. And once I've done that down below,
it's going to allow inbound access to port 3389 for remote desktop protocol for remote
managing. Now I could turn that off and say no public inbound ports if I'm going to use a solution like a Bastion Host, which allows public connectivity for remotely managing virtual machines without those virtual machines needing a public IP or having to allow all kinds of traffic directly from the Internet. I'll click Next
for Disks. Here's where I can determine the extra disks beyond the operating system
disk that I might want to add to this virtual machine. I could attach an existing disk if I
have it, or create and attach one, so a data disk.

The disk type is going to be important too. Premium SSD, well, SSD as we know
means solid state drive as opposed to hard disk drive, the older magnetic moving disk
standard, which is slower, but it's also cheaper. So depending on what you need will
determine what you select here of course. Next for Networking. Well, we've already
determined the network and the subnet where this VM will be deployed. We can also
determine if a Public IP should be assigned.

If you do not assign a Public IP, you're going to need to either VPN into the Azure
cloud, so then you could remotely manage this VM with its private IP, or use a
Bastion Host to do the same type of thing. Because this is only a testing example, I'm
going to leave a Public IP associated with the VM. A Public IP in Azure is actually a
separate cloud resource that gets created and then gets associated with the virtual
machine.

And I can change that association at any time if I so choose. The Public IP reads (new)
winsrv2022-1-ip. The NIC network security group is set to Basic, the Public inbound
ports are set to Allow selected ports, and the Select inbound ports reads RDP
(3389). I'm going to click Next for Management. There's really nothing else I'm going
to change here under Management, Advanced or Tags. I'm just going to go ahead and
go to Review + create. It's going to validate my selections. It's passed, so I'm going to
go ahead and click Create. So at this point I am deploying a virtual machine and you
can imagine how quick this would be even using this GUI if we weren't talking about
the options.
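And if even the GUI feels slow, the deployment can be scripted. As a rough sketch with the azure-mgmt-compute package: note that in Azure the network interface is a separate resource that would be created first (omitted here) and referenced by its ID; the size, image, and names below match this demo, and the subscription ID, NIC ID, and password are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

credential = DefaultAzureCredential()
compute_client = ComputeManagementClient(credential, "<subscription-id>")

poller = compute_client.virtual_machines.begin_create_or_update(
    "Rg1",
    "WinSrv2022-1",
    {
        "location": "canadaeast",
        # Standard_DS1_v2 is 1 vCPU and 3.5 GiB, as selected in the demo.
        "hardware_profile": {"vm_size": "Standard_DS1_v2"},
        "storage_profile": {
            "image_reference": {
                "publisher": "MicrosoftWindowsServer",
                "offer": "WindowsServer",
                "sku": "2022-datacenter",
                "version": "latest",
            }
        },
        "os_profile": {
            "computer_name": "WinSrv2022-1",
            "admin_username": "azureadmin",
            "admin_password": "<strong-password>",
        },
        # The NIC is created separately and referenced by its resource ID.
        "network_profile": {"network_interfaces": [{"id": "<nic-resource-id>"}]},
    },
)
print(poller.result().provisioning_state)
```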

Just a few clicks, and you know, in under a minute you can deploy a very powerful
server. Now compare that with how long it took in the past, say in the early 90s if you
needed to deploy a powerful server. You would have to order the equipment and get it
shipped or go somewhere and buy it and then plug it all in and then configure it and
then install the server operating system after you acquired the software and the
licensing and it just took a lot longer back in the day.

These days in the cloud it's very quick and easy to deploy a server. OK, so before too
long the virtual machine will be deployed. I could click the Go to resource link to
jump right to it, or I could go to the Virtual machines view, there are a bunch of ways
to do it. So our winsrv2022-1, as it's named, VM is now up and running, and so at any
point in time I could click on that virtual machine and go through the property pages
on the left to configure and manage the virtual machine.
There in the Overview blade as they call it I could also Connect to it to remotely
manage it. I could Restart the virtual machine, Stop it, Capture an image of it if I've
customized it, and I could Delete the virtual machine. A lot of those things I could do
back here by selecting the checkbox for the VM and then using the buttons at the top.

So for example, if I click on the extra three dots, the ellipsis button on the far right,
this is where I have the options to Start, Restart, Stop the VM, and so on. The other
options read Delete, Services, Maintenance, Feedback, and Leave preview. Now
what's important is that if you know that you don't need the virtual machine running
all of the time, you don't need the server running, maybe it's only for test purpose,
then you can stop it at any point in time. Now, while you'll still be charged for things
like storage related to the virtual machine, you won't be charged for it running.

Deploying a Microsoft Azure Linux Virtual Machine


Topic title: Deploying a Microsoft Azure Linux Virtual Machine. Your host for this
session is Dan Lachance.

OK, so in this demonstration I'm going to deploy a Linux virtual machine in the
Microsoft Azure cloud, and I'm going to do it using the Azure portal, so the portal is
the GUI interface. I'm going to start here in the portal, which I've already signed in to
by clicking the Create a resource link in the upper left.

Now immediately I see under Popular products that Ubuntu Server 20.04 LTS is
shown, but of course I could click See more in Marketplace because what I might
want to do is search for something. Maybe I'm interested in SUSE Linux, so SUSE
Enterprise Linux. OK, let's select that. The available options read: SUSE Enterprise
Linux for SAP 15 SP3 +24x7 Support and SUSE Enterprise Linux for SAP12 &
15+24x7 Support. So we have a couple of options available for that distribution of
Linux.

Or maybe I'm interested in Red Hat Linux, so maybe I could search for redhat and
choose one of those variations. The available options visible on-screen read: Red Hat
Enterprise Linux 7.4, Red Hat Enterprise Linux 7, Red Hat Enterprise Linux 6, Red
Hat Enterprise Linux, Red Hat Enterprise Linux 8.1, Red Hat Enterprise Linux 7.8,
Red Hat Enterprise Linux 7.6, Red Hat Enterprise Linux 8.0, and Red Hat Enterprise
Linux 6.9. So naturally, the flavor of Linux or even Unix that you're interested in will determine exactly what you're going to search for.

So I'm going to go back to Ubuntu Linux, and I'm interested primarily in Ubuntu
Server let's say version 18.04 LTS. So maybe I have a specific need for that version of
the server OS. So I'm going to select it. That takes me to a brief Overview of that
server where I can create it manually, but there's also a Start with a pre-set
configuration button Under Select a workload environment, there are two options:
Dev/Test and Production. Production is selected by default. where it takes me to a
different page where I can determine if I want this to be a specific type of workload,
Memory optimized workload or general purpose, or maybe Compute optimized, if
there's going to be a lot of calculations that will be done. General purpose (D-Series)
is selected by default.

However, I'm just going to Continue to create a VM, and that puts us on this page
where we have to specify the details. The wizard steps are listed across the top for the
creation of this VM. The steps read Basics, Disks, Networking, Management,
Advanced, Tags, and Review + create. So as usual in Azure we have to tie this to a
Subscription and specify a Resource group to deploy this virtual machine into, and we
have to give it a unique name.

Now, we want to make sure that the name adheres to organizational standards. So let's
say I call this ubuntuserver18-1. I'm going to specify that I want this to go in a
specific Region. I'm going to choose Canada East here, and the reason I'm being
specific about that is because I know I've got a virtual network with subnets deployed
in Canada East where I want this deployed. So that's why I'm choosing Canada East.
Down below, I'm primarily interested in taking a look at the Size.

What is the vertical scaling? What about the number of vCPUs and the amount of RAM available here? So if I have a very minimal workload, I might just go down to 1 vcpu and 3.5 GiB of RAM. So I'm scaling down vertically. And if I find that I
need more horsepower because I've been monitoring the performance of this Linux
VM, then I could always resize it here in Azure.

Now resizing, bear in mind, will require the VM to be restarted. So you don't want to
do that during peak usage time if that VM is running a mission-critical workload.
Down below the default Authentication type for Linux here in the Azure cloud is set
to SSH or secure shell public key. That means that the public key is stored here in the
cloud, but the related private key would be something that you would download and
have on your machine and be required to have that to authenticate to this virtual
machine.

You can also go with Password-based authentication where you could configure the
Username and Password. Arguably, SSH public key authentication is generally
considered to be more secure because you have to possess something, you have to
possess the private key and know the passphrase for it, if it's passphrase-protected. So
I'm just going to fill in a standard Username and Password and confirm it here to
move on with password-based authentication in this particular example.
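Had we gone with SSH public key authentication instead, the relevant portion of a scripted deployment would look roughly like this. The field names follow the azure-mgmt-compute SDK's dictionary form, and the user name and key data are placeholders:

```python
# Sketch of the os_profile section of an Azure Linux VM definition when
# SSH public key authentication is chosen instead of a password.
os_profile = {
    "computer_name": "ubuntuserver18-1",
    "admin_username": "azureuser",
    "linux_configuration": {
        # No password logins at all; possession of the private key is required.
        "disable_password_authentication": True,
        "ssh": {
            "public_keys": [{
                # The public key is written to the admin user's authorized_keys.
                "path": "/home/azureuser/.ssh/authorized_keys",
                "key_data": "ssh-rsa AAAA... user@host",  # placeholder key
            }]
        },
    },
}
```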
Remotely managing a Linux host over SSH normally occurs over TCP port 22, and so
that port has been selected as being allowed. Now you want to make sure that you do
not allow that on a public inbound port, if this is a production server. Instead, just
stick with the private IP address assigned to the VM and either VPN into Azure so
you can manage it with its private IP, or perhaps use a Bastion Host, which serves as
the public connection point, but you're still going through it to manage the private IP
of a backend host.

But because this is only for testing purposes, I'm OK with port 22 for this VM
temporarily. I'll click Next. So we have a Premium solid state drive or SSD operating
system disk here in the cloud. And we could go down and we could attach an existing
data disk or create and attach one if we want to create one or more of them.

The Size of the VM, the sizing along with the vCPUs and the RAM also determines
the maximum number of data disks that you can attach to your VM, so be mindful of
that when you're setting this up. I'll click Next for Networking. Because I specified
Canada East as the region where I want to deploy this VM, it shows me the VNets or
the virtual networks in the cloud that I've already got deployed in Canada East, and
there's the one I wanted; it's vnetyhz172.

And I can specify the Subnet in that VNet where I want this particular VM to be
deployed. The Subnet reads Subnet2 (192.168.5.0/24). The default is that this VM will be assigned a Public IP address, and for testing purposes that's OK; but unless you absolutely need it, we don't like to use Public IPs directly on a VM, even in the cloud, because it's a potential security risk, especially if you're listening on port 22 for SSH connections or 3389 for RDP. If you can see those over the Internet and connect to them, so can potential attackers.

So that's why we want to be careful with public IPs for production VMs. The Public
IP reads (new) ubuntuserver18-1-ip, the NIC network security group is set to Basic,
and the Public inbound ports are set to Allow selected ports. I'm OK with everything
here, so I'm just going to go to Review + create. I'm not going to make any other
config changes here. So the validation has passed. I'm going to go ahead and Create
this Linux-based virtual machine. Before too long, the Linux VM will be deployed. So
I can click the Go to resource link to view its properties.

Of course I could also go to the Virtual machines view over on the left, and once we do that, there's our ubuntuserver VM, which has a Status of Running and has been deployed to the Canada East Location or region. So I can click directly on the name of
that VM to go in and start modifying the various and many properties or configuration
settings available for that VM. And within the Overview page or blade as it's called in
Azure, at the top I have buttons to make a remote SSH connection, to Start, Restart or
Stop or even Delete the virtual machine.

Many of those functions are available back here. If I view the VM and turn the check
mark on to select it, I can click the context or the ellipsis button, the three dots in the
upper right and choose to Start, Restart, Stop, Delete and so on. So we have control of
the virtual machine in its entirety. Now as is always the case if you don't need that
VM running all the time, don't leave it running because you'll be paying for it every
second that it's running. Nobody likes wasting money and so make sure you shut
down or stop that virtual machine if it doesn't need to be up and running.

Creating a Custom Cloud-based Virtual Machine Image


Topic title: Creating a Custom Cloud-based Virtual Machine Image. Your host for
this session is Dan Lachance.

In the AWS cloud, an Amazon Machine Image or AMI, A-M-I, is an OS image that you use when you're launching a new EC2 instance; basically, when you're deploying a virtual machine in the cloud. But you can create your own custom AMIs, so that when you deploy a new virtual machine instance, it's already tweaked and configured the way you specifically need it to be, instead of using a standard Amazon Machine Image.

So let's start this out by examining AMIs. Let's go to the EC2 management console, so
I'm going to search for ec2 in the top search bar and select it. And what I'm going to
do is scroll down on the left and under the Images section, if I click AMIs, I will have
a list of any custom Owned by me AMIs. Now we haven't built any yet, so therefore
nothing is here. Now I could choose Public images, and here we have quite a large list
of Amazon Machine Images, but really that's the type of thing that you
would see when you go to launch an instance.

So if I go to the Instances view on the left and then click the Launch instances button
in the upper right, I then get a list of Amazon Machine Images; by default, Quick start
Amazon Machine Images. We have a variety here: Linux variants, macOS, also
Windows variants, and all kinds of different things that are available.

Also, we could go to the AWS Marketplace on the left and actually search for very
specific types of Amazon Machine Images, based on Juniper firewalls or
Barracuda firewalls and so on. You could also, of course, search for anything specific.
So I could look for, let's say openvpn if I want to quickly deploy an OpenVPN Server
without manually deploying the VM and manually downloading and installing the
OpenVPN software. This is much quicker and easier to do.
So you could just select that AMI and away you go. Any private or custom AMIs that
you build will show up under My AMIs on the left. Of course we don't have any yet,
so let's get to it. How do you create a custom Amazon Machine Image? Well, the first
thing you would do is deploy a virtual machine, so let's Cancel and Exit here. You
deploy a virtual machine, launch an instance, and then sign into it remotely to tweak
and configure it.

Maybe you would tweak the operating system, patch it, maybe you would add
additional software components and files to the file system, whatever you need to do
to customize that as a custom image. Once you've done that, you would capture it as
an image and here's how you do that. So let's say I've got a Windows Server 2019
AWS instance that I've launched, and I've customized and tweaked it.

So what I would do is select it. The Instance ID reads i-04d03ffdc9bd9c573. And then
I can go to the Actions menu, scroll down to Images and templates, and choose Create
image. So it's going to create an image from my selected virtual machine instance. I'm
going to call this CustomWinSrv2019; that's the name of my Amazon Machine Image.

Of course, I could have a description that outlines exactly what is in this custom
image. And then down below, of course, we could add additional data volumes or
disks if we wanted to, but I'm not going to. Under Tags, Tag image and snapshots
together is selected by default. The other option reads Tag image and snapshots
separately. So that's all I'm going to do at this point to create my custom virtual
machine image. So in the bottom right I'll scroll around and I'll choose Create image.

So it states that it successfully created my custom Amazon Machine Image and it
assigned a custom ID to it. The ID reads ami-09f535fc2d78a20c9. So I'm going to
close that message and I'm going to scroll to my AMIs view over on the left, but I
want to go back and filter it for Owned by me. So it's in the midst of being created.

Now the AMI Name is showing up here, so that when I go to launch a new instance, if
I go back to where we normally go to launch instances and click on My AMIs on the
left, any custom Amazon Machine Images will show up. The reason ours is not
showing yet is just timing; nothing more. So if I Cancel and Exit, let's go back and
monitor the progress; it's still in the midst of creating the image.

So I can keep clicking refresh until the Status is no longer pending. OK, so after
refreshing, the Status now shows as available. So if we go to
launch an instance this time, our CustomWinSrv2019 image shows up. We can just
click Select and then go through all the normal steps we would go through to deploy a
virtual machine, the difference being here, the virtual machine will be based on a
custom Amazon Machine Image.
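
For reference, the same AMI capture can be scripted with boto3, the AWS SDK for
Python. This is a hedged sketch, not the only way to do it; the instance ID mirrors the
one in the demo and should be treated as a placeholder for your own instance.

import boto3

ec2 = boto3.client("ec2")

# Capture the tweaked instance as a custom AMI. By default AWS may reboot
# the instance briefly to get a consistent file-system snapshot.
response = ec2.create_image(
    InstanceId="i-04d03ffdc9bd9c573",  # placeholder: your customized instance
    Name="CustomWinSrv2019",
    Description="Windows Server 2019 with our standard tweaks and patches",
)
ami_id = response["ImageId"]

# Poll until the AMI leaves the pending state, mirroring the manual
# refresh-until-available step in the console.
ec2.get_waiter("image_available").wait(ImageIds=[ami_id])
print(f"{ami_id} is available")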

Cloud Automation Templates


Topic title: Cloud Automation Templates. Your host for this session is Dan Lachance.

When you're working with cloud computing, you can deploy resources manually,
resources meaning things like virtual machine instances, dealing with cloud storage,
databases, web applications and so on. But you can also work with cloud templates.

Now, cloud templates are referred to as infrastructure as code, and the reason is that a
template is really just a text file that uses a specific syntax, and that syntax defines
cloud resources that we want to deploy; again, like virtual machine instances, web
apps and whatnot. But a template could also modify resources that already exist, or
even remove them.

So it's not just for creating stuff in a cloud computing environment. Now bear in mind
that when you invoke a cloud template, really it's automation for creating or managing
cloud resources. But that template can only do what you can do as a cloud technician.

So if you're signed in with your cloud account, then the template can only do what
you would normally be able to do given your permission set. Now in the Amazon
Web Services cloud, just to focus on one specific provider, the solution is called
CloudFormation. CloudFormation allows us to work with templates in Amazon Web
Services to deploy and manage resources using infrastructure as code instead of doing
it manually using a GUI or typing in commands to create or modify things.

So with CloudFormation templates, we have a template file. You can have multiple
files depending on the different types of automation templates you want to use. And a
template might simply automate the deployment of a single virtual machine instance,
or it could be very complex and it might deploy an entire web app infrastructure that
includes storage, that includes multiple virtual machines.

It might include a load balancer and so on. So a template file is really just a text file at
the end of the day. Now in AWS CloudFormation, the syntax used within that
CloudFormation template file can be JSON, that's JavaScript object notation, or if you
prefer, you can use the YAML file format. Either one is valid.

The CloudFormation template, given that it's just a text file, can be stored on your
local device and you can refer to it and upload it into the cloud to create or manage
resources or you might store it in an AWS S3 bucket. An S3 bucket is just an
enterprise cloud storage location in the Amazon Web Services environment. So there
are different components or sections within a template.

We'll just mention a few of them that are listed here, one of which would be the
parameters section. Besides standard things like the version of the template, the
description and so on, parameters are very important because they let your template
avoid hardcoding all values. Imagine that you have an automation template that
creates a number of virtual machines, a web app, and a load balancer.

Instead of hardcoding the names of all those things and all of the details, like the
region where those things will be deployed, you could parameterize those items so
that you can reuse this template over and over, and then provide values for those
parameters, like the names of virtual machines and web apps, and also the geographic
location where you want them deployed. So it allows for reuse.

You can also define conditions within your template that determine whether resources
get created or whether they get updated with certain property values. You also define
the resources themselves, such as virtual machine instances, S3 buckets (which are
storage locations), web apps and so on. You can even define the output of a template,
which could be used as the input for a second template.

So you can have a chain of events that are triggered here that use more than one
template for a single type of deployment. AWS CloudFormation stacks are a
collection of related AWS resources that you want to manage as one unit and they are
a result of deploying a template.

You can also use what are called stack change sets to modify stack resources. When
you delete a stack, then naturally what you're doing is deleting all of the resources
within that particular stack. So when you use a CloudFormation template, let's say to
deploy a new web application and all of the infrastructure required for it, that template
might deploy one stack or multiple stacks, depending on how you've configured it.

Pictured on the screen, we've got a screenshot of the AWS CloudFormation Designer.
Under Resource types, it reads: EnclaveCertificateIamRoleAssociation, FlowLog,
GatewayRouteTableAssociation, Host, Instance, InternetGateway, LaunchTemplate,
LocalGatewayRoute, and LocalGatewayRouteTableVPCAssociation. So we've
already stated that a CloudFormation template can be JSON or YAML in terms of its
syntax, but you don't have to type that all out manually. That can become very
complex.
The CloudFormation Designer is a visual design tool where you have an empty
canvas, and in our screenshot on the left, we're looking at Resource types where you
can drag different types of cloud resources like virtual machine instances or S3
buckets or web applications or whatever it is that you need to deal with.

You can drag that resource onto the design canvas and then you can click and drag to
draw links or dependencies between those types of resources. So for example, maybe
you're going to set it so that a public IP address depends on an AWS instance, the
virtual machine it will be assigned to, and the tool will generate the YAML or the
JSON template for you as you work with this graphical CloudFormation Designer.

Now, if we were to look, for example, at the code that gets generated from this tool,
and what we're doing in this screenshot is looking at the JSON. But notice in the upper
right we could also have selected YAML. It's really just a preference. There's no
technical reason you would choose one over the other.

Anyway, if we look at the JSON example here, there is a section here called
"Resources" and we've stated that a CloudFormation template has many different
sections, one of which is called Resources where you define the resources that will be
deployed or managed, or deleted. In this case, the type of resource is an AWS EC2
instance. In AWS, that's a virtual machine. So here we can define all the properties
that go along with that to create or deploy or whatever the case is with this particular
template.
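
To make that concrete, here's a minimal sketch of a parameterized template submitted
through boto3. The template body illustrates the Parameters and Resources sections
discussed above; the stack name and AMI ID are placeholders, and a real template
would usually carry more properties.

import json
import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    # Parameterizing the instance type lets the same template be reused.
    "Parameters": {
        "InstanceType": {"Type": "String", "Default": "t3.micro"}
    },
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-09f535fc2d78a20c9",  # placeholder AMI ID
                "InstanceType": {"Ref": "InstanceType"},
            },
        }
    },
}

cfn = boto3.client("cloudformation")
cfn.create_stack(
    StackName="demo-stack",
    TemplateBody=json.dumps(template),
    Parameters=[{"ParameterKey": "InstanceType", "ParameterValue": "t3.small"}],
)

# Deleting the stack later deletes every resource it created, as one unit:
# cfn.delete_stack(StackName="demo-stack")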

Deploying Cloud Resources Using a Template


Topic title: Deploying Cloud Resources Using a Template. Your host for this session
is Dan Lachance.

Deploying and managing resources in the cloud can be done manually. And what that
means is that you might use a GUI tool such as this, the Azure portal in a web
browser, and you could manage your resources in here. You could use command line
tools.

So for instance you could switch to the cloud shell here in the Azure portal within
your web browser, and you could use PowerShell cmdlets to manage all of your
resources or deploy them, or you could switch over to Bash if you like using a Bash
shell environment, because you're used to Unix or Linux nuances.

But either way, those are still all manual ways of creating and managing cloud
resources. Templates, often referred to as infrastructure as code, are essentially text
documents with instructions on how to create or manage resources. And you can
deploy the template, which in turn deploys maybe one resource, like a virtual
machine, or maybe an entire ecosystem: virtual networks, virtual machines, public
IPs, databases, web apps; a template could do that as well.

So here in Microsoft Azure, I'm going to create what's called a template deployment
and I'm going to do it using the portal, the GUI. So I'm going to click Create a
resource in the upper left and I'm going to search for the word template, and I'm going
to choose Template deployment. You could use command line tools like PowerShell
or the Azure CLI, the command line interface, to also deploy templates. But I'll use
the GUI here.

I'll click Create and on this screen I've got some standard common templates, such as
for creating a Linux VM or a Windows VM, creating a web app or a SQL database,
that type of thing. I could also build my own template in the editor if I want to do it
completely from scratch, which means you have to know the syntax and what the
names of various directives are. Under Template source, the option Quickstart
template is selected by default. The other option reads Template spec.

So down below, I could choose from the Quickstart templates, so I could browse
through and look at all the various templates for Docker containers and for various
Java, Python, and PostgreSQL solutions. I could also type in what I'm interested in;
let's say if I type in loadbalance, then I could choose a load balancer, like an internal
load balancer solution, that type of thing. However, in this case I'm just going to
choose Create a web app, and the template determines whether you have to specify
parameters, like I do here.

Now this would be fewer parameters than would normally be required. Otherwise,
you might as well just manually deploy it here in the portal. So I have to specify a
Subscription, a Resource group, a Region, and the Location is set to the Location of
the Resource group, that's referred to here programmatically as
resourceGroup().location. And that's it. The Subscription is set to Pay-As-You-Go, the
Resource group reads Rg1, and the Region is set to (US) Central US. Now, normally
deploying a web app and all of those related items requires a lot more details to be
specified. Now, I can also edit the template.

If I click Edit template, it takes me into the template editor. This is where we would
have been placed if we had chosen to build the template manually at the beginning.
And here we have all of the JSON, that's JavaScript object notation syntax, for this
template to create various items in the cloud and then of course to configure them
accordingly.
So if we scroll down through this, we'll see there are a lot of different things; for
example, 'webAppName' is a variable whose value will be determined automatically,
and so on. So I'm not going to change that. I'm just going to go ahead, let's say, and
click Save, and then I'll click Review + create, and really, we didn't tell it much of
anything yet. When I click Create, it's now deploying an entire web app type of solution.

Now remember the templates don't have to be used to deploy or create new things.
You might use them to remove things or modify the configuration of something that's
already out there in the cloud. OK, our template deployment is complete. So I'm going
to go to the resource group, in this case, Rg1, where I can view any deployments that
are related to that resource group, so our Microsoft.Template as an example.

But what's interesting is if I go to App Services on the left, here is the Name that was
generated for our web application and it's currently up and running. The Name reads
web-fjaf7mugvg5gq. And if I click on the link to open up the properties of that web
app I can even click on the URL The URL reads: https://web-
fjaf7mugvg5gq.azurewebsites.net. and sure enough we're going to have our sample
web application that is up and running. So we've got the little welcome screen, Hey,
App Service developers!

So that's how incredibly quick and easy it can be to use templates, and there are plenty
of templates available all over the Internet, templates that you can use for free to
deploy and manage cloud-based resources. Normally many organizations will come
up with some templates that they reuse, for instance to set up a sandbox or a testing
environment in the cloud.

So if that testing environment, let's say, consists of a couple of virtual networks, a
VPN, a web app, and a couple of virtual machines, instead of doing all that manually, a
template is perfect for that type of thing. So back here in the portal, there's another
neat aspect of automation templates in the Microsoft Azure cloud, and it's related to
when you manually deploy resources.

So let's say I were to create a resource and let's say it's a virtual machine. Now I'm not
going to fill out all the details, to save time here, but let's say that we did fill
everything out and we get to the Review + create part of the wizard. Of course, here
it's going to say Validation failed, since we haven't filled out what's required, but
that's not the point now. The point is the link in the bottom right on the Review +
create page: Download a template for automation.

So based on your selections, the way you've determined this virtual machine in this
case will be deployed and configured, all of those instructions will have been placed
inside of a template. In Azure, they're called ARM templates, where ARM, A-R-M,
simply stands for Azure Resource Manager. In the end, it's just a template that uses
JSON syntax, but the point is you can create a template from one of your
deployments.

Maybe go back and tweak it and change a few things and maybe parameterize it, if
you want it to accept parameters like virtual machine names and so on. And it really
lends itself nicely to re-usability, which translates to saving time when managing
cloud resources.
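
As a sketch of what that reuse looks like programmatically, a downloaded ARM
template can be redeployed with the Azure SDK for Python. This assumes the
azure-identity and azure-mgmt-resource packages; the subscription ID, resource group,
deployment name, and template file name are placeholders.

import json
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()
client = ResourceManagementClient(credential, "<subscription-id>")

# Load the ARM template you downloaded (or edited) from the portal.
with open("template.json") as f:
    template = json.load(f)

# Incremental mode adds or updates what the template defines and leaves
# other resources in the resource group untouched.
poller = client.deployments.begin_create_or_update(
    "Rg1",
    "webapp-deployment-1",
    {
        "properties": {
            "template": template,
            "parameters": {},
            "mode": "Incremental",
        }
    },
)
result = poller.result()
print("Provisioning state:", result.properties.provisioning_state)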

Limiting Server Deployment & Management in the Cloud


Topic title: Limiting Server Deployment & Management in the Cloud. Your host for
this session is Dan Lachance.

When you're working in a cloud computing environment with other cloud technicians,
there needs to be a way to limit what different technicians can do. In other words,
setting permissions. So we're going to go ahead and restrict cloud server management
using RBAC, role-based access control, and we'll also take a look at policies as well
and how they are different from RBAC.

Let's start with role-based access control. So the first thing I'm going to do here in the
Azure portal is in the search bar I'm going to search for the word sub and I'm going to
go into my Subscription. Now a subscription is part of the Microsoft Azure hierarchy
and you can assign permissions at the subscription level, which flows down to
everything in the subscription.

That would be resource groups which are used to group resources and individual
resources themselves, like virtual machines, storage accounts and so on. Now
whenever you go in to set RBAC permissions, in this case it's at the Subscription
level, you would click Access control (IAM) on the left. IAM stands for Identity and
Access Management. So I can click the Add button to Add a role assignment.

Now when I go to add a role assignment, what I would do is then choose the specific
role that I'm interested in. So I have a number of roles here. Some of them are generic,
like the Reader role which would apply to all classes of cloud objects. So you can read
anything in the cloud, but there are also roles that are specific to different types of
cloud resources. So if I go all the way down, let's say to the V's where we get to the
virtual machine stuff, then we'll have some roles that are specific to working with
virtual machines.

So we've got Virtual Machine Administrator Login, Virtual Machine Contributor, and
so on; a number of different roles. So if I were to select one of these roles, like Virtual
Machine Administrator Login, then we could go to Next and then we could assign that
specific role to a User, a group, or a service principal, or a Managed identity.

A Managed identity really refers to code running in, for example, a virtual machine. So
you're really assigning the permissions to the virtual machine and not an individual
user or a group. So I could click Select members and from here I could go through and
select groups or users that might exist. So for example I have a group here called
Azure_Admins. I'm going to go ahead and select that and go to Next. The Object ID
reads 8512cd3f-3965-4bc2-b820-1d24f15890e2.

So we have assigned a role to the Azure_Admins group and the Scope is the entire
subscription. The Scope reads /subscriptions/00da78ac-9d0e-427b-80da-
e1b07c749f72. So the permissions will flow down for Azure admins to everything. So
I'm going to choose Review + assign and we're still looking at the Access control
settings for the Pay-As-You-Go subscription. So if I look at the Role assignments
here, Azure_Admins will show up for Virtual Machine Administrator Login.

That's what we've just assigned here. Now, what if we went down to an individual
virtual machine in the subscription? Would it reflect that? Because it's supposed to
flow down through the hierarchy, and the answer is yes, it will. Let's verify this. Let's
go to the Virtual machines view on the left. Let's open any virtual machine here in the
subscription, it doesn't matter. The host clicks on ubuntuserver18-1. We're going to
go to the same place we went to set it up at the subscription level, meaning we're
going to click Access control (IAM) on the left, but we're not going to add an
assignment here.

We're just here to check it out. So I'm going to click Role assignments and I'm just
going to filter it just so we don't have to look through everything for admins. And if I
just take a look here, sure enough, Azure_Admins, the Group, has the Virtual
Machine Administrator Login role and it's inherited from the Subscription level.

Now if I wanted to set a role only at this scope, let's say for this particular Ubuntu
Server VM, then this is where I would just click Add role assignment and go through
the exact same steps. It really is no different.
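
For reference, that same kind of role assignment can also be scripted. Here's a hedged
sketch using Python with the azure-mgmt-authorization package; the role definition
GUID and the principal's object ID are placeholders rather than the values from this
demo.

import uuid
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

subscription_id = "<subscription-id>"
client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

# The scope controls where the permissions flow down from: a subscription,
# a resource group, or an individual resource such as a single VM.
scope = f"/subscriptions/{subscription_id}"

# Built-in roles are referenced by GUID-based definition IDs (placeholder).
role_definition_id = (
    f"{scope}/providers/Microsoft.Authorization/roleDefinitions/<role-guid>"
)

client.role_assignments.create(
    scope,
    str(uuid.uuid4()),  # each assignment is named with its own unique GUID
    {
        "role_definition_id": role_definition_id,
        "principal_id": "<object-id>",  # e.g. the Azure_Admins group's object ID
        "principal_type": "Group",
    },
)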

So we can do that at the individual resource level, like a VM, the subscription level, or
if I go to the Resource groups view on the left, I've got a resource group here, let's say
called Rg1, for resource group 1, and I've got all of the resources shown over here on
the right that were deployed in this resource group.

So, as you might imagine, I can click Access Control (IAM) here and add or view role
assignments at the resource group level, and they would apply only to the resources
that are deployed into that resource group, which we were starting to look at over here
on the right. So that's another level at which you can set RBAC roles. The other thing
to think about is policy.

Azure policy is interesting. I'm going to search for policy in the search bar at the top
and I'll click Policy. Now the idea with Azure policy is we can assign policies to
different parts of the Azure hierarchy, maybe to check for compliance. There's already
a policy here to audit virtual machines without disaster recovery configured, and it
looks like we're Compliant, 100%.

So all our VMs have disaster recovery. But you can also restrict capabilities with
policy, not just check compliance. So I'm going to click Assign policy. My goal here
is to limit regions where Azure resources can be deployed. And so I want this to apply
for a specific resource group, so maybe technicians that have access to create items in
the Rg1 resource group.

I want that to be where that is applied, that's the scope. The Policy definition, I'll click
the selector button to the right here and I'm going to search for policies that have the
word location, and in my resultant list down below, I'm going to choose the Allowed
locations Built-in policy and I'm going to select it. Now if I go to Next here for the
Parameters, this is where I can specify Allowed locations where resources can be
deployed, let's say Canada East and Canada Central.

And I can continue on through here. There's nothing I need to do here for
Remediation, we're not checking for compliance, so I can go ahead and create this
policy assignment. OK, so back here on the home page for the portal, let's see what
happens if we attempt to deploy a virtual machine in resource group 1, Rg1, but in a
region that isn't listed as being allowed. So I'm going to go ahead and click Create a
resource.

Let's say it's a Windows Server 2019 VM. I'm going to put this in the Rg1 Resource
group. In the Virtual machine name field, the host types six d's. Immediately it says
Policy enforcement under the Region, because this Region violates our policy; the
value does not meet requirements for our virtual machine deployment.

The field 'Location' with the value '(US) Central US' is denied. Well, what would
happen if we chose something that the policy allows like Canada East? No problem.
So that's how policies play out then when you work with, in this case, restricting
things like regions, where resources can be deployed in the Microsoft Azure cloud.
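
The same guardrail can be assigned from code. Below is a hedged sketch with the
azure-mgmt-resource PolicyClient; the built-in Allowed locations definition is
referenced by a GUID you would look up first, so that ID and the subscription ID are
placeholders.

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import PolicyClient

policy = PolicyClient(DefaultAzureCredential(), "<subscription-id>")

# Scope the assignment to the Rg1 resource group, as in the demo.
scope = "/subscriptions/<subscription-id>/resourceGroups/Rg1"

policy.policy_assignments.create(
    scope,
    "allowed-locations-rg1",
    {
        # Built-in definitions live under this provider path (GUID placeholder).
        "policy_definition_id": (
            "/providers/Microsoft.Authorization/policyDefinitions/<allowed-locations-guid>"
        ),
        # The definition's parameter constrains where resources may be deployed.
        "parameters": {
            "listOfAllowedLocations": {"value": ["canadaeast", "canadacentral"]}
        },
    },
)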

Course Summary
Topic title: Course Summary.

So in this course, we've examined how to deploy cloud Infrastructure as a Service. We
did this by exploring how IaaS differs from other cloud service models. We deployed
an Amazon S3 bucket and an Azure storage account. We examined AWS and Azure
virtual network deployment and we deployed a Windows and Linux VM in AWS and
in the Azure cloud.

Next we configured an Amazon Machine Image or AMI. We worked with cloud
automation templates and how to deploy resources with a template, and we used role-
based access control and policies in the cloud. In our next course, we'll move on to
explore how to deploy cloud Platform as a Service and Software as a Service
offerings.

CompTIA Server+ (SK0-005): Deploying Cloud PaaS & SaaS
Platform as a Service (PaaS) and Software as a Service (SaaS) are two popular and
valuable cloud service models. Both play a unique role in managing certain aspects of
cloud computing. If you're an IT professional working in server environments, you
need to know what these two cloud service models entail. Take this course to learn all
about PaaS and SaaS solutions. Furthermore, practice deploying databases in the
AWS and Microsoft Azure clouds. Configure a SaaS cloud solution. Use an
automation template to deploy a PaaS solution. And use several strategies and tools to
keep cloud computing costs to a minimum. Upon course completion, you'll be able to
deploy PaaS and SaaS solutions and control cloud computing costs. This course also
helps prepare you for the CompTIA Server+ SK0-005 certification exam.

Course Overview
Topic title: Course Overview

Platform as a Service, or PaaS, is a cloud service model related to items such as
managed databases and software development solutions.

Managed cloud services are those that don't require the configuration of the
underlying infrastructure, such as virtual machines or the installation of database
software. Software as a Service or SaaS refers to cloud-based apps. In this course, you
will examine both PaaS and SaaS solutions. First, I'll examine the characteristics that
differentiate PaaS and SaaS. Next, you will deploy databases in the AWS and
Microsoft Azure clouds. Then, I'll configure a SaaS cloud solution. Moving on, you
will then use an automation template to deploy a PaaS solution. Finally, I'll cover a
variety of strategies and tools to keep cloud computing costs to a minimum. This
course is part of a collection that prepares you for the CompTIA Server+ SK0-005
certification exam.

The Platform as a Service (PaaS) Cloud Service Model


Topic title: The Platform as a Service (PaaS) Cloud Service Model. Your host for this
session is Dan Lachance.

In the cloud, there are a number of service delivery models, one of which is Platform
as a Service, often just called PaaS. So with PaaS,
what happens is that the underlying infrastructure, storage, networking, virtual
machine servers, all of that is handled by the cloud service provider. So, the
deployment of those underlying items, the configuration of them, even the
management of them like the application of patches, it's handled by the provider. That
way you can focus on the task at hand, such as focusing on deploying a database and
not worrying about the underlying server and database software. So, this is often then
referred to as a "managed service", because all of the underlying infrastructure is
managed by the provider. It's also sometimes even called "serverless".

Now, of course, there are servers that are making this all happen, but the point is that
you as a cloud customer do not have to concern yourself with those servers, hence,
serverless. With Platform as a Service, we have a number of different service
offerings like web applications. If you deploy a cloud hosted web app, that's
considered to be PaaS. If you deploy a content delivery network for caching content
near users and we'll talk about that in more detail soon, that's also considered Platform
as a Service. If you want to deploy a cloud-based database as a managed service, well,
that's Platform as a Service. And also, there are a number of Software development
platforms for software developers, whether it's actually writing code and hosting the
code and having it triggered and run on a cloud-based server. Even different software
development solutions used by multiple developers for code check-in and code
testing. All of that is considered Platform as a Service. As a cloud customer, we don't
have to worry about the underlying stuff that's required to make that work, like virtual
machines or storage accounts or anything like that.

We just focus on the higher level web app or database or whatever the case might be.
So, a content delivery network or CDN is a PaaS type of cloud service. The idea is
that we want to have web app data cached near users geographically. So, imagine that
you're hosting a web app that actually is running on a server in the Western US. Now,
if a user in Ireland accesses your application, that means that they're going quite a
distance to get to that app content. Without a CDN, each file requested from Ireland
has to travel all the way to the Western US; let's say that normally takes 100
milliseconds. With a content delivery network, with the content replicated or cached
locally for users in Ireland, that same request might be reduced all the way down to
less than 10 milliseconds. That's because the content for the web app in the Western
US can be replicated, for example, to a Dublin edge location in Ireland, which means
Irish users accessing that content will get it much quicker. A CDN, then, is Platform
as a Service. Here, we have a screenshot of deploying a web application in the
Microsoft Azure Cloud, and remember web apps are also considered Platform as a
Service. So, all we concern ourselves with here, are details such as the name of the
web app, whether we're running a certain runtime stack or a codebase like PHP 8.0,
which we have here, could just as well have been .NET or Python. Then we have to
think about where we are deploying this. In this example, the Region is set to Central
US. Then down below, in Microsoft Azure we have to specify the App Service Plan.
The app service plan can be shared by multiple web apps. The app service plan
contains details like the sizing, that's the underlying virtual machines and their power.
Here, it's set to Premium V2 P1v2.

In other words, it's got 3.5 gigabytes of memory. And it's got 210 what they call Azure
compute units or ACUs. That's about as much as we have, in terms of control of the
underlying virtual machines, we don't have access to them, we don't have to worry
about patching them, because remember with Platform as a Service, that's the
responsibility of the cloud service provider. In this screenshot, we have a Microsoft
Azure Database PaaS Deployment. So remember, that databases are considered PaaS,
as long as it's a managed service. You could always deploy a virtual machine in the
cloud manually and manually install database software, in which case, that's really just
infrastructure as a service. However, with PaaS all you do is concern yourself with the
database details. As we see here, we have the Database name being specified. A
database server to host the database, then the compute storage power, and then if we
want any redundancy for backups, such as Geo-redundant backup storage as is being
specified here. Now, because we are talking about Platform as a Service, when we
deploy this database solution in the Azure Cloud, we don't have to concern ourselves
with the underlying virtual machine, its credentials, or patching it. We don't have to
concern ourselves with the underlying database software. All we do is focus on what
we have here: database details.
Finally, when it comes to software developer solutions, we have solutions like
Amazon Web Services or AWS Lambda Functions. Essentially, what software
developers can do in this case, in the Amazon Web Services Cloud, is take any code
that they've written and upload it to Lambda, or they could use the Lambda editor
directly in the web browser to write their code there. But either way, the code is being
hosted in the cloud. You can choose a memory size for the functions; the nature of
what the code does will determine how much memory it will need. Now, remember to
size this carefully. If you allocate way too much
memory, it's not going to cause a performance issue, but you're going to be paying
more unnecessarily. So, that's where the term rightsizing comes in as being relevant,
having the right amount of horsepower for the given workload without accumulating
too much unnecessary cost. Finally, in AWS, Lambda functions are scaled
automatically. So as there are more requests for a piece of code that's being hosted in
the cloud, then the resources are adjusted as required to allow for good performance
of that Lambda function.
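
As a small illustration of that rightsizing point, memory for an existing function can
be adjusted programmatically; here's a hedged boto3 sketch with a hypothetical
function name.

import boto3

lam = boto3.client("lambda")

# Lambda memory (128 MB to 10,240 MB) also scales the CPU share the function
# receives, so undersizing hurts performance and oversizing wastes money.
lam.update_function_configuration(
    FunctionName="order-processor",  # hypothetical function name
    MemorySize=512,
)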

Deploying AWS RDS Databases


Topic title: Deploying AWS RDS Databases. Your host for this session is Dan
Lachance.

Deploying a database in the cloud can either be Infrastructure as a Service-based or
Platform as a Service-based. In order for it to be Platform as a Service, it means that
we're using a managed service. In other words, you deploy the database, but you don't
concern yourself with the underlying virtual machine and software that support it.
That's Platform as a Service when it comes to a managed service, and that's how we're
going to do it here, in Amazon Web Services.

The screen displays an AWS Management Console. The header contains a search bar
and the following tabs, namely: Services, Support. The windows pane contains the
following fields, namely: AWS services, All services, Build a solution, and Explore
AWS.

So, to get started, I'm going to search for database here, in the AWS Console.

A list of Services appears below the search bar.

And I'm interested in the Relational Database Service. Specifically, it's called
Managed Relational Database Service because we don't have to worry about the
underlying virtual machines; they'll be taken care of for us. So, I'm going to click on
RDS, which is going to take me into the RDS Management Console.
The left pane contains the following options, namely: Dashboard, Databases, Query
Editor, Performance Insights, Snapshots and Automated backups. Currently, the
Dashboard option is selected. The right pane contains a button titled: Create
database and two sections, namely: Resources, and Recommended for you.

From here, I'm going to click Create database. So, we can choose either Standard
create,

A Create database window displays. The screen displays two sections, namely:
Choose a database creation method, and Engine options. The first section contains
two radio buttons, labelled as: Standard create and Easy create. The second section
contains three radio buttons, labelled as: Amazon Aurora, MySQL, and MariaDB.

where we have configuration flexibility, or, if we just need to get something deployed
very quickly without dealing with configuration options, we can choose Easy create.
I'm going to leave it on Standard create. Down below, I have to make a selection.
What type of database engine am I interested in? Whether it's Oracle or MySQL or
MariaDB or Microsoft SQL Server. The great thing about this is I just make a click
and it will automatically take care of provisioning the underlying virtual machine and
the selected software, which is great. Then down below, I've selected MySQL. Of
course, I get to choose the specific version of the software that I want available.
What you're going to be doing with the database will really determine whether you in
fact need a specific version.

I'm going to leave it on the version selected here for MySQL. Down below, we can
choose Templates for the deployment of this database, whether it's Production based,
Dev/Test, or for use in the Free tier. I'm going to use the Free tier deployment option.
And down below, for Settings, I have to fill out the details such as the database name;
the database will be called database-1. I'm going to leave that for this example. I then
have to specify the MySQL database credentials. For the Master username, I'll leave it
as admin, but I will generate and confirm my own password. Once I fill that in, I'm
going to continue scrolling down here, where I can see the sizing selected for this
database platform; because I didn't choose anything beyond the Free tier, I have
limited options. But you can size a DB instance so that it uses a certain number of
vCPUs and a certain amount of RAM, much like you would for a virtual machine.
And really, in the backend, there is a virtual machine that will be running this; we just
don't have to concern ourselves with the virtual machine details.
Down below, we then deal with the Storage options. For performance reasons,
we might want to choose something like Provisioned IOPS SSD, for solid state drive,
where we can get the best read and write performance. But I'm going to leave it on
General Purpose. If you're looking to save on cost and performance is not an issue,
you could choose Magnetic, which uses the older hard disk technology. The allocated
storage for the database here is set to 20 GiB. I'll leave that. Storage autoscaling is
also turned on in case we need more storage capacity than that for the database. And as we
go down through, I can specify the VPC where we want this stored or deployed. So,
it's going to put it in our Default VPC. Further down, I'm going to set Public access to
Yes, which means that I want to be able to make a connection, such as from my
on-premises environment, directly into the database. And if you think about it, if
you're going to be deploying a database in the cloud, you're going to want a way to get
into it to manage it. Now, you don't necessarily have to do it by enabling public
access; that could be a security risk. You might first VPN into the AWS Cloud,
thereby giving you access to resources in AWS using private IPs.

However, for testing purposes I'm going to leave this on a public connection. And as I
go further down, I have a number of other options, like Database authentication and so
on. And then I have a breakdown of the Estimated monthly costs. Now, what you
want to watch out for, especially with databases, is that you do not leave them running
if you don't need them to be running. Because it can end up being thousands of dollars
on your monthly bill, and that is quite a shock if you didn't intend to have the
database running that entire time. So, make sure when you're finished testing it, if
that's what you're doing, that you shut down the database, or that you remove it if you
don't need it any longer. So I'm going to go ahead and do this. Scrolling down to
the bottom right, I'm going to click on the Create database button. OK, so the database
creation has been started and we have a message that says the database might take a
few minutes to launch.

The right pane displays a Databases table. It displays the following column headers,
namely: DB identifier, Role, Engine, Region & AZ, Size, and Status. It contains one
row with DB identifier, namely: database-1

So currently, the database is showing here, but the Status is showing Creating. So, we
have a refresh button that we can periodically click and monitor the Status until we
know that the database is available.

But while that's happening, if I click on the database link to open it up, we have a
Summary of our configurations. And one of the things I wanted to point out here, is
some of the dependencies that had to be put in place first for this deployment to work
successfully. So, the Region and Availability Zone or AZ currently is set to us-east-
1b. Let's go ahead and open up another web browser window where we look at our
underlying network Infrastructure, our VPCs and subnets.

He opens the VIRTUAL PRIVATE CLOUD console. The left pane contains the
following options, namely: Your VPCs, Subnets, Route Tables, and Internet
Gateways. The right pane contains a section, titled: Your VPCs. The following tabs
display below this section, namely: Details, CIDRs, Flow logs, and Tags.

Now we've got the VPC that we deployed the database into, and it's our default VPC.
And what's important is that we have to have two subnets in different availability
zones. So, I've got two subnets that are using unique IP address ranges

He selects the Subnets in the left pane. The right pane displays a table with the
following column headers, namely: Name, Subnet ID, State, VPC, and IPv4 CIDR.

amongst themselves, but both of those 172.31 ranges fall within the VPC range. But
what's important is that each of those Subnets is in a different availability zone.

It's not quite the same thing as being in a different data center, but for simplicity we'll
say we've got some Subnets that are spread out in different locations. And for
availability, that's required for some services. So, that has to be in place. If, for
instance, the two Subnets here had been in the same availability zone, let's say both in
us-east-1a, then our database deployment would have failed. So,
Connectivity. Currently we don't have a port number shown here,

He moves back to the Databases tab. He highlights the Connectivity & security
section. It contains the following fields, namely: Endpoint, Port, Availability Zone,
and VPC.

but once everything is deployed, we will. The standard MySQL listening port is 3306.
And so, if you wanted to make a connection into your database once it's ready and
deployed, you'll have to specify the Endpoint URL and then, of course, the connection
would be on the standard port 3306. OK, so after a few minutes the database is
established, the endpoint URL is now showing up and, of course, we have port 3306,
the standard listening port. So the Status is now showing as Available.
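
For reference, the same free-tier-style deployment can be scripted with boto3. This is
a hedged sketch: the identifier and master username mirror the demo, while
db.t3.micro and the password are assumptions you would replace.

import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="database-1",
    Engine="mysql",
    DBInstanceClass="db.t3.micro",           # assumed small instance class
    MasterUsername="admin",
    MasterUserPassword="<strong-password>",  # placeholder
    AllocatedStorage=20,                     # GiB, General Purpose SSD by default
    PubliclyAccessible=True,                 # testing only; prefer VPN/private IPs
)

# Block until the instance leaves "creating", then print the endpoint and
# port (3306 for MySQL) you would use to connect.
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="database-1")
db = rds.describe_db_instances(DBInstanceIdentifier="database-1")["DBInstances"][0]
print(db["Endpoint"]["Address"], db["Endpoint"]["Port"])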

Deploying a Microsoft Azure SQL Database


Topic title: Deploying a Microsoft Azure SQL Database. Your host for this session is
Dan Lachance.
In this demonstration, I'm going to be using the Microsoft Azure portal to deploy a
cloud based database.

The left pane displays the following options, namely: Create a resource, Home,
Dashboard, All services, All resources, Resource groups, and SQL databases. The
right pane contains three sections, namely: Azure services, Recent resources, and
Navigate.

Now I could deploy this manually by deploying a virtual machine myself and then
installing the database software into the virtual machine and then configuring the
database from there. However, I don't want to do that, I just want to focus on the
database itself. I want to use a managed service. So, that's how we're going to
approach this inside of the Azure portal. So really, it's Platform as a Service or PaaS
that we're going to be working with in this particular example. So, I'm going to click
Create a resource in the upper left here in the portal

A Create a resource page displays. It contains a section titled: Categories, having the
list as: AI + Machine Learning, Analytics, Blockchain, Containers, and Databases.
The Popular products section contains a list of multiple options.

and I'm going to go ahead and choose Databases on the left.

Over on the right, we then have a number of different options such as SQL Database.
I'm going to go ahead and click on SQL Database.

A page titled: Create SQL Database displays. It contains the following tabs, namely:
Basics, Networking, Security, Additional settings, Tags, and Review + create.
Currently, the Basics tab is selected. It displays the following sections, namely:
Project details and Database details. The first section contains the following fields,
namely: Subscription, and Resource group. The second section displays the following
fields, namely: Database name, and Server.

From here, we're going to go through a number of wizard steps that will ask us
questions that are really focused primarily around the database, and it will then deploy
it for us in the Azure Cloud. So, as usual in Azure, we tie this to a subscription. We
deploy the database into a Resource group for organizational purposes, and we have
to give the database a name. So I'm going to call this app1db1, but we also
have to tie it to a SQL Server in the cloud.

OK, well if I open the list, I don't have any currently. So I'm going to have to click
Create new
He scrolls down the page. The screen now displays two sections, namely: Server
details, and Authentication. The first section contains two fields, namely: Server
name, and Location. The second section contains the following fields, namely:
Authentication method, Server admin login, Password, and Confirm password.

and I'm going to call this app1sqlserver1. It's going to make sure that the name is
unique and it's going to actually add the .database.windows.net DNS suffix at the end
of the server name that I specify, so the name is good. It checks out. We've got a green
checkmark. The location here will be East US. And standard SQL Server stuff, what
kind of authentication, I'm going to leave it on SQL authentication and I'm going to
specify the server admin credentials, including the password. And then I'll go ahead
and click OK.

So we've taken care of the SQL database server deployment. The next thing we're
going to have to concern ourselves with is the actual database, which is what we
started configuring. The server is now showing up in the list, and you can tie multiple
databases to the server; this is just the first time we've done this, so we had to specify
that. I've got some options down below, such as whether I want to configure a SQL
elastic pool, where we have a collection of databases using the same resources. I'm
going to leave it on No, but then we have the sizing, the Compute + storage options
here; it's set here to 2 vCores and 32 GB of storage.

If we wanted to change that, we could click Configure database, but I'm OK with that.
I could always change it after the fact. We've got backup set to Geo-redundant backup
storage, so that's fine. I'm going to go ahead and click Next for networking, because
from here, we can determine from which networks we want to allow connectivity to
this Microsoft SQL Server database. So if I were to choose Public endpoint, that
would include allowing access

The Networking tab displays the following sections, namely: Network connectivity,
and Connection policy. The first section contains the following radio buttons, namely:
No access, Public endpoint, and Private endpoint.

from the Internet. And the kind of access you want to allow will determine whether
you need to do that. I can leave it on No access, and I could always change that after
the fact if I wanted to. So I'm going to go ahead and click Next. For security, we then
have a number of security options available for the database.


The Security section contains the following sections, namely: Azure Defender for
SQL, and Ledger (preview).

I'm not going to enable the Microsoft Azure Defender service. I'll choose Not now for
that. I'll click Next: Additional settings.

The Additional settings tab contains the following sections, namely: Data source,
Database collation, and Maintenance window. The Data source section contains the
following fields, namely: Use existing data, and Backup.

I can choose to use existing sample data, or I can restore data into this database as I'm
building it from a backup. When I choose Backup, I would have to choose a backup in
Azure, where I've already got database information available. I'm going to continue on
by clicking Next for tags, if I want to tie this, let's say to a specific CostCenter. I could
create a CostCenter tag and maybe this is for the Toronto cost center.

The Tags tab contains the following fields, namely: Name, Value, and Resource.

I'll click Next for review and create. We have an Estimated cost per month,

It displays the following sections, namely: Product details, Terms, and Basics.

so this is one of those services you don't just want to deploy and test and then leave
running and forget about it.

Turn it off when you don't need it or delete it when you are completely finished with
it. So you don't incur unnecessary charges and it can add up very quickly. So I'm
going to go ahead and click Create to create this deployment. And I'm going to be
creating are two things. It's going to be the database server and it's going to be the
database. After a moment, the deployment will be complete. So what I can do over on
the left in the portal here, is I can click SQL databases. And here, I've got my database
that's being shown, and I can click on it to open up its property sheet where I'll have
things like the Server name that it's linked to, which is important if I want to make a
connection into the database to manage it. Then I would have to know the server
name, maybe using, for example, the Microsoft SQL Server Management Studio tool
on-premises. Also, if I actually click on the Server name, it takes me into the details of
that SQL Server and so, I have a configuration available for that as well.

And if I actually look at the SQL databases tied to that server, it shows our online
database, the database that we just deployed. That's how quick and easy it is to get a
database up and running in the Azure Cloud. There's no need to concern ourselves
with setting up a server first and then getting the right version of the SQL Server
software installed and doing all that. That's taken care of when you work with
Platform as a Service. However, I'm going to go to the Overview blade here, and I'm
going to delete this server, and ultimately the database, because for testing purposes I
don't want to leave it running after I'm finished with it and incur unnecessary charges.

A window appears from the right. This window has a field, titled: TYPE THE
SERVER NAME.
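
For reference, the same server-plus-database pair, and the cleanup step, can be
scripted with the azure-mgmt-sql package. A hedged sketch follows; the resource
group, location, admin login, and password are placeholders, while the server and
database names mirror the demo.

from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

sql = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

# The logical server must exist before any database can be attached to it.
sql.servers.begin_create_or_update(
    "<resource-group>",
    "app1sqlserver1",
    {
        "location": "eastus",
        "administrator_login": "<admin-login>",
        "administrator_login_password": "<strong-password>",
    },
).result()

sql.databases.begin_create_or_update(
    "<resource-group>",
    "app1sqlserver1",
    "app1db1",
    {"location": "eastus"},
).result()

# When finished testing, delete the server; its databases go with it:
# sql.servers.begin_delete("<resource-group>", "app1sqlserver1").result()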

The Software as a Service (SaaS) Cloud Service Model


Topic title: The Software as a Service (SaaS) Cloud Service Model. Your host for this
session is Dan Lachance.

Software as a Service, pronounced SaaS, is really just a software delivery model in the
cloud where the software is delivered over a network to end-user devices. What this
really means, then, is that the underlying resources that support that software solution,
things like storage, VMs, databases, and the app software itself, are all managed by
the cloud service provider, the CSP. So, we don't have to worry about it as cloud
customers. Now, examples
of common SaaS solutions would include Gmail, Dropbox, Netflix, Microsoft 365.
Anything that you can access over a network and run as an app, without necessarily
having to install anything locally is considered SaaS. So, with SaaS, we have end-user
productivity software accessible over a network. Now that network doesn't have to be
the Internet. Certainly, with public cloud computing, that's the case. But there's no
reason why that can't also be done on a private network with a private cloud.

SaaS solutions are also accessible over the network using a variety of different types
of thin client devices. Thin client means that we don't have to install a full-fledged
piece of software on the device to use the software. So, whether the device is a
smartphone, a tablet, laptop, desktop, or some other kind of specialized equipment.
The benefit of using SaaS solutions, whether in the public or private cloud, is that the
solution is accessible over a network using a variety of different types of devices.
Now, if you think back, let's say 20 years, if you're working in IT at the time, if you
had to deploy a solution, like Microsoft Office to 200 computers, then you would have
to package up that software and install it over the network for those 200 devices. Each
of those devices would need the full piece of software installed and running locally,
which is a problem when it comes to applying updates or configuration changes
because you have to do it to each device where it's installed. Whereas, if it's running
centrally on a server somewhere, then you can make your changes to the configuration
or apply patches centrally.

So, with SaaS, there is no need to deploy software locally to each device, and the
solution is always up to date. Now, there are some exceptions. While a SaaS solution
doesn't require software on the client device, in many cases you can install a small
thin client, like a helper app. You might install, for example, the Dropbox app
on a smartphone to facilitate using Dropbox in the cloud for storage as opposed to
accessing it directly through a web browser interface. So, the underlying
infrastructure, the storage, the servers, and their configuration are all handled by
someone else, and that someone else is the cloud service provider. Also, when you're
using Software as a Service, you have an ongoing monthly operational expense, or
OPEX, as opposed to CAPEX. CAPEX stands for capital expenditure, where if
you wanted to host a solution entirely yourself, you would have to acquire the
hardware, the software, get everything installed and configured, and maintain all of
that over time. That is a capital expenditure. But with cloud computing we just pay
monthly.

Pictured on the screen, we have a screenshot of Microsoft 365, formerly called Office
365. This is a Software as a Service or a SaaS solution, where we can fire up a web
browser and we can sign into our Microsoft 365 account, after which in the left hand
navigator we can access quite a variety of applications, like Outlook email or
PowerPoint presentations, Excel spreadsheets, Microsoft Word, and so on. Now, you
can in some cases, install client-side apps to use those same solutions, or you can
access it directly through a web browser. But either way, it's Software as a Service.
We don't concern ourselves with installing the servers and the software first, before
we can use things, like Microsoft Outlook or Microsoft PowerPoint in the cloud. So
basically, it's online software applications. Then we have to think about the
performance of our SaaS solution in the cloud. Bear in mind, that your SaaS solution
is running on provider hardware that you do not have direct control of.

But the benefit here is that cloud service providers ensure that they have compute capacity available and that it's always running. Still, you should be monitoring relevant performance metrics for your specific SaaS solution, like the number of concurrent users signed in to check mail in the morning for Outlook, or maybe it's Gmail that you're using. You can also configure performance metric alerts for when certain thresholds are exceeded. So maybe if we have more than 200 users concurrently signed in checking Outlook mail in the morning, we want to be notified through SMS text messaging. Finally, we have to think about the cost of using a SaaS solution. We did say that with cloud computing we have operating expenses, or OPEX, as opposed to capital expenditures, or CAPEX. So normally, with SaaS cloud computing solutions, you have a pay-as-you-go monthly subscription. That means that if you create more user accounts, for example, for use with Gmail or for Outlook, then you will be paying more than if you didn't create those additional users. So, more storage used, more user accounts, and extra features you sign up for can all equate to higher monthly cloud computing charges.
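As a rough illustration of threshold alerting, here's a minimal sketch using Azure PowerShell's Az.Monitor cmdlets, assuming the workload exposes a metric through Azure Monitor. The "SignedInUsers" metric name, the resource IDs, and the action group are hypothetical placeholders, not real Microsoft 365 or Gmail metrics:

# A minimal sketch, assuming the Az.Monitor module and a signed-in session.
# "SignedInUsers" is a hypothetical metric name; substitute a real one.
$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "SignedInUsers" `
    -TimeAggregation Maximum -Operator GreaterThan -Threshold 200

# The action group referenced here would be configured to send SMS notifications.
Add-AzMetricAlertRuleV2 -Name "TooManyConcurrentUsers" -ResourceGroupName "Rg1" `
    -TargetResourceId "/subscriptions/<sub-id>/resourceGroups/Rg1/providers/<app-resource>" `
    -WindowSize 00:05:00 -Frequency 00:05:00 -Condition $criteria `
    -ActionGroupId "/subscriptions/<sub-id>/resourceGroups/Rg1/providers/microsoft.insights/actionGroups/SmsAlerts" `
    -Severity 3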

Deploying SaaS Cloud-based Apps


Topic title: Deploying SaaS Cloud-based Apps. Your host for this session is Dan
Lachance.

When we talk about software as a service, or SaaS, all we're talking about is accessing end user productivity software over a network. In other words, you can use any type of client device and you're running software that's actually being hosted on servers that are handled by somebody else in a data center somewhere; that's software as a service. One example of many would be using Google Docs.

The Gmail account Sign in page displays.

So, I'm going to go ahead and sign in to Google Docs using a sample Gmail account.
So once I've signed in, I've got a welcome screen. I'll just click the X in the upper
right to close it. And from here, we have a number of templates that we can begin
working with. So, Google Docs is what's selected over here in the upper left.

He selects the hamburger icon. A list of options appears, namely: Docs, Sheets, Slides, and Forms.

Of course, we could go to Sheets if we wanted to work with spreadsheet information.


We could go over to Slides if we want to work with presentations. So, this is an example of software as a service. The beauty of it is there's no need to install software locally on my device, whether I'm using a smartphone, a tablet, a desktop PC, a Mac, or anything like that. Instead, it's all centralized on the servers that are hosting it. So if you think about it, that also makes managing the latest updates and the overall configuration of that software easier, because it's all centralized on a server. Clients just access it over a network, and you could go on and on with examples. If you're using outlook.com, if you're using Microsoft 365, if you're using Facebook, if you're using Twitter, if you're using Dropbox, all of those are software as a service solutions, because if you think about it, you don't need to install anything locally.
Now, yes, you could install an app on a smartphone to facilitate using some of the features, but it's still considered software as a service. You're not actually downloading an entire software suite that you're running locally; it's being hosted somewhere else. So the configuration and use of the SaaS solution, and the security of any sensitive documents, whether we're talking about documents, spreadsheets, slide decks, and so on, are the responsibility of the end user. However, the underlying servers, storage, and software that host all of this are, in this case, the responsibility of the provider, which is Google.

Here in the Microsoft Azure portal, we're going to take a look at software as a service,
but more from an enterprise perspective. Microsoft Azure can use Azure Active
Directory as its user store. If I click on Azure Active Directory in the left hand
navigator, it opens up a new menu system. And one of the options here is to go to the
Users view where I have a list of the users in Azure AD, Azure Active Directory.
Now this is not exactly the same as having on-premises Active Directory, for
example, you don't have the same structure with organizational units or group policy,
but it does serve, however, as a centralized authentication store. So I've got a user here, for example, by the name of Codey Blackwell, and we've got the email address for the user, and if the user forgets their password, it can be reset with a temporary password that they would have to change after they next sign in.

But the reason that we're talking about this is if I go back in my breadcrumb trail in
the upper left, back to Azure Active Directory, we can also have enterprise
applications tied to our directory service.

He selects the Enterprise applications blade in the left pane. The right pane displays
corresponding details.

So there are a number of them here, like eBay, Facebook, and Expense Point, and you can add new applications that are tied to your directory service. So if I click the New application button,

A page titled: Browse Azure AD Gallery opens. It contains few buttons, a search bar
and a section titled: Cloud platforms.

here's where I can select, for example, Google Cloud and I can have that made
available here in my environment, in my Azure environment.

He selects the Google Cloud / G Suite Connector by Microsoft option. A window appears on the right.
OK, so all we're doing is using Azure AD to regulate access to these types of
applications. And this Google Cloud, of course, is an example of software as a
service, we're just going to control access to it for our Azure AD users.

So then, if we go back to our Enterprise applications and we look at our alphabetized list, we now have the Google Cloud / G Suite Connector listed here.

He selects Google Cloud / G Suite Connector by Microsoft from the table. The left
pane contains a list of blades. The right pane displays two sections, namely:
Properties and Getting Started.

And there are a number of configuration settings here that you might want to go
through and change

He selects the Properties blade.

things like the Homepage URL and the User access URL, if they're not already populated. But what's interesting here is allowing user access. We can also go to Users and groups,

He selects the Add user/group button in the right pane. A page titled: Add Assignment opens. It contains two fields, namely: Users, and Select a role. He selects the Users field. A window titled: Users appears. It contains a search bar and a section titled: Selected items.

And we can add a user or group, and we can start selecting from our Azure AD. For example, if I look for our user, Codey Blackwell, there he is; we can add that account here. So I'll select that user and assign them to this app. And we would do the same type of thing for other apps that we want to allow access to, for example, Expense Point.

If I go into the configuration for that enterprise app, we can also go to Users and
groups and Codey Blackwell is shown as a user here. So what we're going to do is
sign into the myapps.microsoft.com URL, sign in as user Codey Blackwell, and check
out the SaaS apps that are available. So, now that I've signed in to
myapps.microsoft.com as user Codey Blackwell, the SaaS applications configured for
this user through Azure AD are now showing up, such as Expense Point and the
Google Cloud / G Suite Connector. So this is another aspect of working with software
as a service at the enterprise level in the cloud and making those specific SaaS
offerings available to end users.
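Incidentally, the same user-to-app assignment can be scripted. Here's a minimal sketch using the AzureAD PowerShell module, assuming the enterprise application's service principal already exists; the display names follow this demo, and the app's default role is assigned:

# A minimal sketch, assuming Connect-AzureAD has already been run.
$user = Get-AzureADUser -Filter "displayName eq 'Codey Blackwell'"
$app  = Get-AzureADServicePrincipal -Filter "displayName eq 'Expense Point'"

# Assign the user to the app's default app role ([Guid]::Empty).
New-AzureADUserAppRoleAssignment -ObjectId $user.ObjectId `
    -PrincipalId $user.ObjectId -ResourceId $app.ObjectId -Id ([Guid]::Empty)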

Deploying PaaS Solutions Using Templates


Topic title: Deploying PaaS Solutions Using Templates. Your host for this session is
Dan Lachance.

You can use templates in Microsoft Azure to facilitate the deployment or even the
management of existing resources. So in this particular case, what I've done, is in my
web browser, I've searched up Azure templates and I've navigated then to the Azure
Quickstart Templates page. If I scroll down a bit here, we can search through, well,
over 1000 Quickstart templates in Azure. And in our particular case here, we want to
deploy a PaaS solution, platform as a service, through a template and one example of
that would be deploying a database solution. So if I were to search, for example, for
MySQL and let's just go ahead and do that and see what we have, we have a number
of search results related to deploying MySQL along with potentially other services or
configurations here in Azure.

So, for example, I could choose Web app with Azure database for MySQL. When I click on that, I actually have the option of reading the details for it, so there are parameters here that it will expect to receive values for in order for the deployment to succeed. And I also have the option of deploying it to Azure directly here, by clicking Deploy to Azure. If I've already signed into Azure, then when I get to this point, I'm taken directly to the portal, where I'm in the midst of a deployment from that template, and because it's parameterized, I can fill in the appropriate details that are required for it to work. Now, remember that each template is a little bit different from the others: some of them will have parameters that require values, some will have more parameters than others, and some may have none, because everything is hardcoded directly into the template. It will just vary; it's all over the map.
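For what it's worth, a Quickstart template can also be deployed without the portal at all. Here's a minimal sketch using Azure PowerShell; the template URI and the parameter names are placeholders, since each template defines its own:

# A minimal sketch, assuming the Az module and a signed-in session.
New-AzResourceGroup -Name "Rg1" -Location "centralus"

# Parameter names vary per template; these are placeholders.
$params = @{
    administratorLogin         = "dbadmin"
    administratorLoginPassword = (ConvertTo-SecureString "<password>" -AsPlainText -Force)
}

New-AzResourceGroupDeployment -ResourceGroupName "Rg1" `
    -TemplateUri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/<template-path>/azuredeploy.json" `
    -TemplateParameterObject $params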

Now if I just click Home here in the Azure portal, we can also do this manually from within the portal to begin with.

The Microsoft Azure home page displays.

If I click Create a resource in the portal, we can search for template.

A page titled: Create a resource opens.

And we can create a template deployment from this perspective.

The page titled: Template deployment (deploy using custom templates) page opens. It
contains few tabs and a button titled: Create.

So, I'm just going to go ahead and click on Create.


A page titled: Custom deployment displays. It contains three tabs, namely: Select a
template, Basics, and Review + create. Currently, Select a template tab is active.

So we could build our own template completely from scratch in the online template editor, which means you have to have a solid understanding of JSON syntax and the directives that you need to type in for Azure. There are some common templates here for virtual machines, a web app, and also a SQL database. If I were to click on Create a SQL database,
a SQL database,

A page titled: Provision a SQL Database with TDE opens. It contains the following
tabs, namely: Select a template, Basics, Review + create. Currently, the Basics tab is
active.

then down below, we have a couple of parameters that it requires us to fill in, like a
Resource group, a Region, SQL admin login and password, whether we want to
enable TDE, Transparent Data Encryption, those types of things.

However, up at the top, we could also click Edit template to view the template JSON syntax and specify anything we wanted to at this level. If we really wanted to, we could change it here, and instead of having all of these parameters for things like transparentDataEncryption and the SQL admin account and password, we could actually specify the values. So, we could tweak that and customize it to suit our needs. However, I'm going to go back to Select a template, because here in the portal, down at the bottom, you can also go to the Quickstart template dropdown list. And if I want to deploy a PaaS solution, like a database, for example, I could look for sqlserver and type that in to see if there are already templates.

Well, there are none for that specifically, but if I search for mysql, then we have a number of them available here. So as we go through the list, again, this is just yet another source for deploying a PaaS database solution through a template. So, I'm going to go ahead and select one of these templates. This one is going to be MySQL on Ubuntu, and if I continue with Next: Basics, it then takes me to the parameter screen. So, I'm going to follow through with this one. I'll specify a resource group, I'll specify the admin login and password, and once I've done that, I'll click Next: Review + create, and then I'll click the Create button to actually deploy, in this case, my database solution from this template. After the deployment is complete, I've got the Go to resource group button if I want to go to the resource group where this was deployed. If I click my bell notification icon in the upper right, it states that our template deployment succeeded. And I'm actually going to go to the All resources view, where I've got the results of having used that template.

He selects the All resources blade in the left pane. The right pane displays a section
titled: All resources. It contains few buttons and a table.

I have no other resources deployed other than what we just deployed through the template. So, that would include a SQL server to host our SQL database. And if I click directly on the link for the database, we have the details, such as in the Overview blade, where the Status of the database is Online and the Location it was deployed to is Central US. We have the SQL Server name, and there's a link here for database Connection strings; if I click on that, what's shown depends on how I want to make the connection,

The following tabs display, namely: ADO.NET, JDBC, ODBC, PHP, and Go.

for example, through PHP, here's the database connection information versus
ADO.NET, and so on.
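To illustrate using one of those connection strings, here's a minimal sketch of querying the database from PowerShell with the SqlServer module's Invoke-Sqlcmd cmdlet. The server name and credentials are placeholders, and this assumes your client IP has been allowed through the server firewall, which we'll look at next:

# A minimal sketch, assuming the SqlServer PowerShell module is installed.
Invoke-Sqlcmd -ServerInstance "yourserver.database.windows.net" `
    -Database "sample-db-with-tde" `
    -Username "sqladmin" -Password "<password>" `
    -Query "SELECT @@VERSION;"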

Also, if I close out of that with the X in the upper right, I can open the SQL Server itself, and here we can view the details related to our SQL Server, such as the SQL Server name. This would be important; I would have to copy it if I wanted to make a connection into this server so I could manage the databases that are hosted by the server. Maybe I would do that from my on-premises station where I'm running SQL Server Management Studio or a similar tool. However, in order to make sure I have the permissions to be allowed into the SQL Server over the network, I would have to look at the firewall settings here.

He selects Show firewall settings link. A Firewall and virtual networks page displays.
It displays the following buttons, labelled as: Save, Discard, and Add client IP. The
following fields display below the buttons, namely: Display public network access,
Minimum TLS Version, Connection Policy, and Client IP address.

So down below, we would have to determine whether we were allowed to get in,
which IPs or IP ranges are allowed.

I could click Add client IP to add my current client IP address from which I am
making this connection right now and viewing these details. So it fills in my single IP
as both the starting IP and as we can see, of course, as the ending IP, which would
allow me then to make a connection such as using SQL Server Management Studio
into the server. So, I'm going to go ahead and save that change. And then I could, of
course, because we're still in the server properties, go in and view any SQL databases
that are part of this deployment.

He selects the SQL databases blade on the left. A search bar and a table appears on
the right.

So, here's our sample-db-with-tde, Transparent Data Encryption. Again, if we click on that, it's just a link back to it. So, we can go back and forth between the SQL database and the server to manage the properties, to enable connectivity, and so on.
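That client IP firewall rule can also be added from Azure PowerShell. Here's a minimal sketch, assuming the Az.Sql module; the resource group, server name, and IP address are placeholders:

# A minimal sketch; substitute your own resource group, server, and client IP.
New-AzSqlServerFirewallRule -ResourceGroupName "Rg1" -ServerName "yourserver" `
    -FirewallRuleName "AllowMyClientIP" `
    -StartIpAddress "203.0.113.25" -EndIpAddress "203.0.113.25"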

Managing Cloud Costs


Topic title: Managing Cloud Costs. Your host for this session is Dan Lachance.

While cloud computing might have changed how we deploy and manage servers, we
still have to think about cost. That's one thing that never changes. So here in Microsoft
Azure, we're going to start by examining some costs related to an Azure subscription,
and then we'll take a look at some measures that you might put in place to reduce
monthly charges. So to get started here in the Azure portal, I'm going to search for the
word sub in the field, and I'm going to choose Subscriptions. In Microsoft Azure, you
can have one or more subscriptions and your costs are tied to a subscription.

The Subscriptions page contains a few buttons, namely: Add, Manage Policies, and
View Requests. A table display below the buttons. This table has the following column
headers, namely: Subscription name, Subscription ID, My role, Current cost, and
Status. The table has one row with Subscription name as: Pay-As-You-Go.

So, for example, I have a Pay-As-You-Go subscription where the current cost is listed
at $2.

Now that's not a lot but, of course, that can change depending on how many resources
you deploy in your subscription, what type of resources they are, databases, for
example, can be expensive, especially if you leave them up and running for a period
of time. So I'm going to click directly on my subscription to go into further details, I'm
interested in clicking Cost analysis, over in the left hand navigator.

The right pane displays a few buttons, a graph, and few pie charts. The left pane
contains the following blades, namely: Overview, Activity log, Access Control (IAM),
and Cost Management.

Now when we look at the cost analysis, we can look at the time frame, and then we can get a listing of the costs and, of course, go through and see how they have changed over time, in this case throughout the month. And as we go down further with cost analysis, we can then break down where those costs stem from. The lion's share of the costs here stems from virtual machine deployments. Following that would be sql databases, then azure app services, which are web apps, and so on.

We can then also see it broken down by region where these services were deployed. The bulk of the cost at the regional level comes from us central, and as we go further down, we see the breakdown. We can then also see the resource groups where these things were deployed and where the cost comes from. A resource group is a way to organize multiple resources in the cloud, like virtual machines, databases, and so on, that might be related. So we can go to Cost alerts, and we can click Add to add a budget

He selects Costs alerts blade in the left pane. A page titled: Create budget opens. It
displays two tabs, namely: Create a budget, and Set alerts. Currently, Create a
budget tab is active. It contains multiple fields.

where we specify a dollar amount at which we want to start getting notified about the charges that are adding up in our Azure subscription. So we have the option of working with that as well. And, of course, I can go and view all of the invoices and deal with the payment methods.
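Budgets can be scripted too. Here's a minimal sketch using the Az module's New-AzConsumptionBudget cmdlet; the amount, dates, threshold, and e-mail address are placeholders:

# A minimal sketch, assuming the Az module; the start date must be the
# first day of a month.
New-AzConsumptionBudget -Name "MonthlyBudget" -Amount 100 `
    -Category Cost -TimeGrain Monthly `
    -StartDate "2024-01-01" -EndDate "2024-12-31" `
    -ContactEmail "admin@example.com" `
    -NotificationKey "At80Percent" -NotificationEnabled -NotificationThreshold 80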

Now that's how we take a look at existing charges. But what can we do to reduce
them? One is to work with reserved instances. So if I were to go Home, for example,
and if I were to choose Create a resource, and if I were to search for reserved,

The page titled: Create a resource displays.

then I could choose Reserved VM Instances. Now, when I create a reserved VM instance,

The page titled: Reserved VM Instances opens. It contains some text and a Create
button.

what I'm doing is selecting a specific size of a virtual machine instance. So, for
example, Standard_DS1_v2 and Central US with 1 vCPU

The page titled: Select the product you want to purchase opens. It contains two tabs,
namely: Recommended and All Products. Currently, the Recommended tab is active.
It contains a table. The Add to cart, and Cancel buttons display at the bottom.
and three and a half gigs of RAM. What I can do is take a look at that specific reservation, and it's loading the price here. The idea is that we can make a one- to three-year commitment upfront to using the compute power, in this case for virtual machines, and when you do that, you get savings.

In this case, it's saying the Monthly price per VM of that type, if you were to go with a three-year term, is $17.69 US per month, which is an estimated 66% savings compared to not using a reserved virtual machine instance and just using a standard one, which is Pay-As-You-Go, on demand. If you know you need the compute power ahead of time for a one- to three-year time frame, then it might make sense to look at using reserved instances. What happens is it will match any of the virtual machines that you have running in that region that match this size type, and you will automatically get the discounted savings. Now, if I choose Add to cart for that, what I can also do is specify how many of them I want.

He closes the Select the product you want to purchase page. A page titled: Purchase
reservations opens. It contains two tabs, namely: Products, Review + buy. Currently,
the Products tab is active. It contains a table with the following column headers,
namely: Reservations, Product, Scope, Billing frequency, Unit price, Quantity, and
Subtotal.

So, the Quantity is currently set to 1, but I could set it to 5, so that if I have 5 virtual machines in Central US that use the Standard_DS1_v2 virtual machine size, then they will benefit from reserved instance pricing. When a sixth virtual machine is deployed, well, we're back to using standard on-demand pricing. So, that's one thing to consider. You can also do the same type of thing when you are working with database deployments; you can get reserved database pricing. You can also configure virtual machines to shut down on schedules, such as at the end of the day, if they don't need to be running 24/7. So, there are a lot of considerations then to reduce your monthly cloud computing charges.
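As one example of that last point, here's a minimal sketch of deallocating a resource group's VMs from PowerShell, the kind of script you might run on an evening schedule from Azure Automation; the resource group name is a placeholder:

# A minimal sketch, assuming the Az module; deallocating a VM stops its compute billing.
$vms = Get-AzVM -ResourceGroupName "Rg1"
foreach ($vm in $vms) {
    # -Force skips the confirmation prompt; -NoWait queues the stops without blocking.
    Stop-AzVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name -Force -NoWait
}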

Course Summary
Topic title: Course Summary

So, in this course, we've examined how to differentiate and deploy PaaS and SaaS
solutions. We've deployed PaaS through a template and also learned how to control
cloud computing costs. We did this by exploring how PaaS and SaaS differ from other
cloud service models. We deployed AWS and Azure databases. We deployed SaaS
cloud-based apps and we used a template to deploy a cloud PaaS solution. And
finally, we examined various ways in which you can control cloud computing costs.
In our next course, we'll move on to explore server storage.

CompTIA Server+ (SK0-005): File System Security
Today’s IT security ecosystem requires strong file system security. This is achieved
not only through a robust file system permissions implementation but also controlled
network access. Learn how to handle file system security in this mostly hands-on
course. First, get a theoretical overview of how file system security works in
Windows and Linux environments. Then, practice configuring permissions for
network shared folders and setting NTFS file and folder permissions. Next, use the
Windows built-in Encrypting File System (EFS) option to encrypt files and folders.
Enable disk volume encryption using Microsoft BitLocker. Encrypt files using
OpenSSL. Configure Linux file system permissions. And work with file hashing in
Windows and Linux. Upon course completion, you'll be able to implement and
manage Windows and Linux file system permissions, encryption, and file system
hashes. You'll also be more prepared for the CompTIA Server+ SK0-005 certification
exam.

Course Overview
Topic title: Course Overview

Today's IT security ecosystem requires strong file system security. This is achieved
not only with a strong file system permissions implementation, but also with
controlled network access in the first place. In this course, I'll define how file system
security works in a Windows environment, followed by configuring permissions for
network shared folders as well as setting NTFS file and folder permissions.

Next, I'll use the Windows built-in Encrypting File System or EFS option to encrypt
files and folders. I'll then enable disk volume encryption using Microsoft BitLocker
followed by encrypting files using OpenSSL. Moving on, I'll examine how Linux file
system security is implemented, followed by actually configuring Linux file system
permissions. Lastly, I'll work with file hashing in Windows and in Linux. This course
is part of a collection that prepares you for the CompTIA Server+ SK0-005
certification exam.

Windows File System Security


Topic title: Windows File System Security. Your host for this session is Dan
Lachance.

Security is a big deal these days and not just security over the network. We're going to
take a few moments to talk about Windows file system security. What we're really
talking about here is thinking about file system permissions and making sure they're
assigned properly to users or groups or computer accounts or service accounts. We
want to make sure we follow the principle of least privilege, which really states
simply that we should not assign any permissions beyond what is absolutely required
to perform a specific task. So in the Windows world, that means don't just add people to the administrators group to quickly solve a problem and let them access stuff. Make sure you grant the individual permissions that are required and nothing more. But file system security also includes encryption, basically scrambling data so that only the appropriate parties, with a decryption key of some kind, can decrypt it to view the original source data.

So, let's go into some details. We'll start with shared folders. In Windows, you can
share a folder over the network, not an individual file but a folder. And so the
permissions that you assign at the shared folder level will apply when the folder is
accessed over the network, but not for any technicians that might be locally working
on the server. Now, when we say locally, of course, that also means if they've remoted into the server, for instance using the Remote Desktop client. Shared folder permissions include full control, change, and read, so you don't get really granular permission capabilities at the shared folder level. You do with NTFS, though, the New Technology File System. When you format a file system as NTFS, you get some special options you otherwise wouldn't have if it were formatted, say, with FAT32. So, what happens is any permissions that are applied to the NTFS file system apply locally for any technicians or anyone at the machine locally, which of course also still means remotely through the Remote Desktop client. But also, if you access a shared folder over the network, NTFS permissions are also in play.

Now this also means that permissions can be set, in this case, for folders and for files. Share permissions only apply to the share, and the share is a folder; you can't share an individual file. So folder permissions you might set then get inherited by child objects within that folder structure. Now, there are more NTFS permissions than you would find at the shared folder level. These are standard NTFS permissions, but you can get very granular and even get these broken apart if you really want to. So for example, we've got full control, modify, read and execute, list folder contents, read, write, and special permissions. Now, a lot of these kind of speak for themselves. If you have full control, well, you can do everything. If you have read and execute, you can read what's in a file or execute it if it's a binary program, for example. List folder contents pretty much means what it says. Read is read-only access. And then there's write, but one of the things that's interesting is you might wonder, well, wait a moment, what's the difference between modify and write? Well, the primary difference is that modify also allows the deletion of the file or folder; write does not, it only allows you to write to the entity in the file system.

We might look at this and say, well, how do I, for example, allow someone to only create folders but not files? Well, that's where you would get into things like special permissions. The thing to watch out for is that when you share a folder over the network, the strategy normally employed is that you would give generous permissions, such as change, at the share level, but then restrict things at the NTFS level for that same folder structure or the files within it. So you can combine NTFS file system permissions with shared folder permissions, and when you do, the most restrictive permission applies. So, if someone has full control permissions at the shared folder level but only read permissions at the NTFS level, well, then read is in place; it's the most restrictive. Now remember that file system security also means encryption. The Encrypting File System or EFS is a component available in Windows for NTFS file systems. Here, we've got a screenshot of the EFS GUI.

A page labeled with the heading EFS GUI. It contains various components
underneath: Advanced Attributes. The components are as follows: Choose the settings
you want for this folder, Archive and Index attributes, and Compress or Encrypt
attributes.

Basically, what someone has done here is gone into the properties of a file or folder on an NTFS volume.

And when you go into the properties, you can click on Advanced Attributes, which has been done here. And this is where, down at the bottom, you can turn on the checkmark to encrypt contents to secure data. And so that's how you would encrypt a folder or a file using, essentially, Windows Explorer; that's really what's being used there. Now at the command line, you can also use the cipher command to encrypt or decrypt, or really to work with EFS generally. So if I were to type in cipher c:\projects, it'll show me any details related to that folder, and it says here that new files added to this directory will not be encrypted. That's the current setting, and there's a U next to Projects, which means unencrypted. But if I were to run the command cipher /e c:\projects, for encrypt, we then get a message that it's encrypting files in drive C: and then that folder was encrypted. So, the cipher command is pretty easy to use when you're working with EFS.
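Putting those commands together, a quick sketch from an elevated PowerShell prompt looks like this:

# Show the current EFS state; a U flag means unencrypted, an E flag means encrypted.
cipher c:\projects

# Encrypt the folder so that new files added to it are encrypted automatically.
cipher /e c:\projects

# Decrypt it again if needed.
cipher /d c:\projects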
And then we've got Microsoft BitLocker which, as opposed to EFS, where you can cherry-pick the individual files and folders to be encrypted, encrypts an entire disk volume. So, EFS encrypts individual files and folders; BitLocker does the whole volume. EFS is tied to a specific user account that was signed in when a folder or a file was encrypted. That's not so with BitLocker; BitLocker is tied to a specific machine. And BitLocker can work with some firmware that may or may not be present in the machine, and that firmware is called Trusted Platform Module or TPM. So, if you have TPM firmware in your machine, it can work with BitLocker to generate and store the keys used to encrypt and decrypt entire disk volumes. TPM can also be used for verifying the integrity of the boot process. So, when you're thinking about file system security in Windows, then, think about shared folder and NTFS permissions, and think about encryption using EFS and BitLocker.
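For example, here's a minimal sketch of enabling BitLocker on the operating system volume with the built-in PowerShell cmdlets, assuming the machine has a TPM; the encryption method shown is one of several supported options:

# A minimal sketch, run from an elevated PowerShell session on a machine with a TPM.
# -UsedSpaceOnly encrypts only the used disk space, which is faster on newer volumes.
Enable-BitLocker -MountPoint "C:" -EncryptionMethod XtsAes256 `
    -UsedSpaceOnly -TpmProtector

# Check encryption progress and protection status.
Get-BitLockerVolume -MountPoint "C:"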

Enabling Shared Folder Accessibility


Topic title: Enabling Shared Folder Accessibility. Your host for this session is Dan
Lachance.

Sharing folders over a network is not new. It's been around for many decades, even stemming back to the old Unix days, and it's still available in Unix and Linux and, of course, Windows. We're talking about sharing a folder over the

A webpage labeled Home - Microsoft Azure appears. It contains a search bar on the top. The left pane contains various icon options. The main pane contains 4 sections.
The first section, is titled: Azure services. It contains various elements such as :
Create a resource , Azure Active Directory, Subscriptions, and so on. The second
section, is labeled as: Recent resources. It contains a table with the following column
headers: Name, Type, and Last Viewed. The third section, is labeled as Navigate. It
contains the following options: Subscriptions, Resource groups, All resources, and
Dashboard. The last section is called, Tools. It contains the following options:
Microsoft Learn, Azure Monitor, Security Center, and Cost.

network to allow other computers to access the contents of that folder, and of course we can set permissions. But these days, we can also do that in the public cloud, because there are cloud-based shared folders. So to get started doing that in Microsoft Azure, I've signed into the Azure portal. I'm going to click Create a resource, because the first thing I need is a storage account, so cloud storage, before I can even think about a file share.

Currently, Create a resource page opens. It contains a left pane with the following
elements: Get started, Recently created, and Categories. The right pane appears with
a search bar. It further contains a list of documents along with their icons.

So I'm going to go ahead and create a Storage account. I'll click Create. And I'm going
to deploy this into a specific Resource group.

A page labeled Storage account appears underneath Microsoft Azure. It contains various sections: Overview, Plans, Usage Information + Support, and Reviews.

And down below, I have to give a unique Storage account name.

A page displays with the heading: Create a storage account. It contains the following sections: Basics, Advanced, Networking, Data protection, Tags, Review + create. Currently, the Basics section is selected. It contains two options: Project details and Instance details. The Project details contain the following options: Subscription and Resource group. The Instance details contain the following options: Storage account name, Region, Performance, and Redundancy.

So I'll give this storage account, let's say, some unique name that still adheres to my company's naming standards. We'll leave East US as the Region.

Now if I need the utmost in disk IO performance, I could select Premium. However, you pay more for that. For what I'm doing, Standard as a Performance selection for the storage account will be just fine. It's set to Geo-redundant storage, which makes a replica or copy of the account in an alternate secondary region beyond East US. And that's for high availability, in case for some reason the East US region becomes unavailable. I'll click Next for Advanced, where I can configure various Security options.

Currently, the Advanced section is selected. It contains various Security options.

I'm not really too concerned with that. I can also go Next into the Networking part of the wizard, where I can determine from which networks the storage account should be available. I'm going to leave it on public access for this demo. Really, there's nothing else

The Networking section is highlighted. It contains 2 sections: Network connectivity,


and Network routing.

I'm going to do here other than Review and create, so I'll click that.

Now, The Data protection section is highlighted. It contains the following Recovery
options: Enable point-in-time restore for containers, Enable soft delete for blobs, and
Enable soft delete for containers.

It's going to check my validation to make sure I've specified what's needed. Well, what do you know, the Validation failed.

The Review + create section is now highlighted. It contains the following options:
Basics and Advanced.

Let's click to view details. Looks like the storage account name is already in use.
Well, that happens. It's not a problem.

Let's go back to basics and I'm just going to add a couple of other characters to make
the Storage account name unique. Again, while adhering to company naming
standards, we don't want all technicians just randomly choosing their own naming.
That's not going to work on a larger enterprise scale. OK, I'll click Review and create
again. Let's let it do the validation. This time the Validation passed. Excellent. I'm
going to click Create to create the storage account. So to be clear, I'm not defining a
shared folder yet by doing this, all I'm doing is creating the infrastructure that will
allow me to create a shared folder here in the cloud. And in this case, in Azure that's

A page labeled storacctyhz1727225_1636474817498 | Overview appears. The left pane contains the following options: Overview, Inputs, Outputs, and Template. The right pane contains a toolbar: Delete, Cancel, Redeploy, and Refresh. It further contains the following message: Deployment in progress.

creating a storage account. And before you know it, the deployment is complete. So
I'm going to click Go to resource to go directly to that storage account.

The page with the heading: storacctyhz1727225 appears. The left pane appears with various options: Overview, Activity log, Tags, Diagnose and solve problems, Access Control (IAM), Data migration, Events, and so on. The right pane contains a toolbar with the following sections: Open in Explorer, Delete, Move, Refresh, and Feedback.

What I want to do within this storage account is scroll down in the left-hand navigator
and what I'm interested in is File shares. I'm going to click File shares. We don't have
any yet.

A page labeled storacctyhz1727225 | File share appears. It contains 2 options: File share and Refresh. It further contains a table with the following column headers: Name, Modified, Tier, and Quota.

But I'm going to click the add File share button up at the top.

Let's say we're going to call this file share projects. Then we can specify the
Performance Tier such as Transaction optimized, Hot or Cool. Now I'm going to use
the Hot access tier, because let's assume that this is

The right pane appears with the heading: New file share. It contains 3 sections:
Name, Tier, and Performance.

going to be something that will be used frequently and we need the best performance.
OK. So I'm going to leave that as it is. And down below I'll just click the Create
button to create that file share. So there's projects. And of course, the Hot access Tier
is being shown in the column. The default Quota is 5 terabytes, so that's fine. That's
plenty of space. I'm going to click projects and there's nothing in here, so we need to
get some content in here.

A projects page appears underneath Microsoft Azure. The left pane appears with the
following sections: Overview, Diagnose and solve problems, Access Control(IAM),
Settings, and Operations. The right pane appears with a toolbar with following
options: Connect, Upload, Add directory, Refresh, Delete share, Change tier, and
Edit quota. It contains a table with the following column headers: Name, Type, and
Size.

I'll click the Upload button and I'll click the selector to select some on-premises files
that I want to store here in this cloud-based file share. OK. Then I'll click Upload to
get those files here.

A right pane appears with the heading: Upload files. It contains a search bar with the
label: Files.

So now, we've got some files to work with here in this cloud-based file share. The
other thing that's important here is how do we connect to this file share? Like, where's
the server? We don't have to worry about that.

This is a managed service; we don't know what the server side is. But what we do have to think about is the Connect button at the top, because when I click Connect, I get instructions for various operating systems on how, essentially, to map a drive letter. So if I were to

A right pane appears with the heading: Connect. It contains 3 sections: Windows,
Linux, and macOS.

leave it on Windows and let's say I want the Drive letter here to be Drive letter P
because it's projects. Down below, I then select the Authentication method which
could be through an Active Directory account or Storage account key which is the
default, and I'll leave that. Down below, what we then have is a lot of PowerShell code that essentially tests that we can make a connection on Port 445, which is required for shared folder access, and then eventually maps the drive letter in PowerShell using the New-PSDrive cmdlet. PSDrive stands for PowerShell drive; it's given the name P, mapping drive letter P, and then the UNC path is specified. Now, what's interesting is if we take note of the UNC path, it's double backslash, then the full name of the storage account. After the storage account, it automatically tacks on the DNS suffix of .file.core.windows.net. Then of course, we have \projects, which is the name of our share.

So what we could do is actually copy this information and then use that to map a
Drive letter. Or you could map the Drive letter any other way that you might choose.
So I'm going to go ahead and copy the whole thing. I'll just click Copy to clipboard.
But while we're here, I can scroll up and click on Linux and also get the same type of
information on how to create a Linux mountpoint mapping to this shared folder and
we also have the instructions for the macOS. So, the only thing that you have to keep
in mind is that if you're trying to make a drive mapping connection, let's say from
windows from an on-premises Windows computer, chances are it might fail. And the
reason is because a lot of Internet providers do not allow outbound Port 445 traffic. So
you'd really have to find out the details about your Internet connection. Of course, if
your VPNed into the cloud then that might be different. However, it's something that
we need to think about. Now, if you've got virtual machines deployed in the cloud,
then they certainly should not have a problem connecting to Port 445, unless of course
you have blocked that with a cloud-based firewall solution.
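As a sketch of what that portal-generated PowerShell boils down to, with placeholder storage account and key values:

# A minimal sketch; the storage account name and key are placeholders.
# First, verify that outbound TCP 445 is allowed from this client.
Test-NetConnection -ComputerName "storacct.file.core.windows.net" -Port 445

# Store the storage account credentials, then map drive P: to the file share.
cmdkey /add:storacct.file.core.windows.net /user:"localhost\storacct" /pass:"<storage-account-key>"
New-PSDrive -Name P -PSProvider FileSystem `
    -Root "\\storacct.file.core.windows.net\projects" -Persist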

Configuring Windows Shared Folder Security


Topic title: Configuring Windows Shared Folder Security. Your host for this session
is Dan Lachance.

Being a server technician certainly involves hardware, whether it's physical or virtual.
But it also means being able to configure servers to support specific requirements, like
setting up a file server and setting up things like shared folders and permissions. We're
going to go ahead and do that here in Windows Server 2019. The first thing I'm going
to do here is go into my File Explorer, otherwise called Windows Explorer. If I go to
drive C: on my server, I've already created a sample folder structure. On drive C:, it
begins with Data in which I've got a Projects folder. And in which I've got three
sample Project text files. So there's not really much here, but this is going to be what
we're

A dialog box appears with the heading: Project_A - Notepad. It contains a toolbar: File, Edit, Format, View, and Help. It contains a message: This is Project A.

going to be using to set up our shared folder and permission access. So if I were to
right click on a selected file and go into Properties, notice there is no sharing tab, and
that's because you can't share individual files. What you do is you share folders
instead.

A dialog box appears with the heading: Project_C Properties. It contains 4 sections:
General, Security, Details, and Previous Versions.

So if I go back, let's say up one level to Projects and right click on it and go into
Properties, sure enough we have a Sharing tab. Now we're going to share a folder over
the network. What you need to do is determine what level you need to do that at.
Meaning, do I want to share the entire Data folder? And maybe I would because I
have numerous other folders at that level. Or do I just want to share Projects? Here,
I'm just going to share Projects. So if I right click on Projects

A drop-down box appears with various options: Open, Open in new window, Send to,
Cut, Copy, and so on,

and go into Properties. I can then go into Sharing. It's currently Not Shared. So I'm
going to click the Share button and this is

A dialog box appears with the labeled Projects Properties. It contains the following
sections: General, Sharing, Security, Previous Versions, and Customize.

where I have a basic easy interface to set Permission Levels and Owners. However,
I'm going to cancel that. I could also go down to Advanced Sharing where I have all
options available. So I'm going to choose to Share the folder.

Now, a dialog box appears with the heading: Advanced Sharing. It contains various
settings along with a Comments box.

It automatically suggests that the network share name, what it will be called over the network, be the same as the local folder name, and that's fine here. I can put in Comments, and down below, Permissions; the default here is that the built-in Everyone group has Read access.

Now, of course I can remove that. I could also remove the entry for Everyone.

A dialog box called Permissions for Projects appears. It contains 2 sections: Share
Permissions and Permissions for Everyone.

I can also Add additional security principals. So if I were to click the Add button; this is a Windows server that is part of a Microsoft Active Directory domain, and that Active Directory domain name is shown above, quick24x7.local. So I can click Locations to switch to different sources of security principals, or user and group accounts, but I'm going to leave it there. What I could do is click Advanced, then Find Now, and what it's looking for are Users, Groups, and Built-in security principals.

Another dialog box labeled Select Users, Computers, Service Accounts, or Groups
appears. It contains the following components: Select this object type, From this
location, Common Queries, and Search results. The Search results contain a table
with the following column headers: Name, E-Mail Address, Description, and In
Folder.

And from this list I can then start to select who I want to have shared folder
permissions to the Projects folder. So as I go through the list, it's just a matter of
selecting the account. For example, if it's an individual user, then I could choose
Codey Blackwell or if it's a group, I could also select a group. In this example, I'm just
going to use an individual user, but on a large scale it's usually considered to be easier
to manage on a group level.

So if I know that Codey Blackwell is in a group and members of that group have the same file system requirements, instead of the individual, maybe I would go down and select the group. So I've got a number of groups here; some of them are built-in, of course, but one of the ones that I've created here is called HelpDesk, and the user Codey Blackwell is a member. So I could go ahead and add the HelpDesk group. Now, when you add a security principal with shared folder permissions, before you change anything, the default is to Allow Read. Notice you have an Allow and a Deny column. Deny always takes precedence over Allow if there's some kind of a conflict, maybe because user Codey Blackwell is in two groups here, and one of the groups allows Read and the other denies it. Codey Blackwell is going to be denied; deny always takes precedence. We could also enable Full Control, to have full abilities to create files and folders and to delete them, or maybe just Change, to be able to change the contents of the shared folder.
So I'm going to leave Change and Read selected for members of the HelpDesk group, and I'm going to click OK and OK. So the Network Path is shown here. Now this is a UNC path, a Universal Naming Convention path, which always looks the same in the sense that it starts with a double backslash. Then we have the computer name; it could also be an IP address, that would be valid. Then another backslash followed by the share name, in this case Projects. So I'm going to go ahead and copy that.

The Projects Properties dialog box appears again. The Sharing option is highlighted.
It consists of the following options: Network File and Folder Sharing, and Advanced
sharing. He highlights the following URL: \\WIN-MGCGN1MFJLQ\Projects

And I'm going to click Close. Now, I'm going to test this at the same machine where I'm sharing that folder, but we could do it across the network; it's not going to make a difference, as long as we have valid network configurations and firewalls aren't blocking Port 445, which is what's used by file and print sharing at this level. Then we're good to go. So what could I do then?

One way to make a connection to a shared folder over the network is to click, for example, up in the address bar in the File Explorer tool and specify the UNC path you want to connect to. So I've just pasted what I copied here. Let me just press Enter, and we're in. Here are the files, the Project A, B, and C files that are in the Projects shared folder. We could also use the IP address. Now, the app that you're using to make the connection from the client side will determine whether it needs a name or whether an IP is valid. So let's go to the Command Prompt and find out what the IP address is for this host. The IP address is being shown as 192.168.4.167.

A page labeled Administrator: Command Prompt appears. He enters the following


command: ipconfig. He highlights the IPv4 Address: 192.168.4.167.

OK. So, I'm going to change the UNC path to include the IP address instead of the name. So, 192.168.4.167, and we still need the \Projects; that's still part of the UNC path. And when I press Enter, it's the exact same thing. OK, so that's working. You could also map a drive letter. Of course, we could always just stick with that, meaning a UNC path, and I could drag a shortcut to the desktop.

So in the future I can just double click it on the desktop and it will take me right into
it. However, you can also map a drive letter. So for instance, if I were to right click on
this PC here in the File Explorer, I could choose Map Network Drive. I could choose
the drive letter, maybe P for projects makes sense. And I could paste in the UNC path,
Reconnect at sign-in

A dialog box labeled Map Network Drive appears underneath the Projects folder. It contains a message: What network folder would you like to map? It contains 2 sections: Drive and Folder.

to make it a persistent mapping, and then OK. So this mapping will persist between
reboots. So now, we have a drive letter that's been mapped to that shared folder. Drive
letter P is pointing to that UNC path and we have the contents available here. So that's
just a little bit about configuring shared folder permissions in a Microsoft Windows
environment.
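All of this can be scripted as well. Here's a minimal sketch using the built-in SmbShare cmdlets, assuming QUICK24X7 is the NetBIOS name for the quick24x7.local domain shown in this demo:

# A minimal sketch, run in an elevated PowerShell session on the file server.
# Share the folder and grant the HelpDesk group Change access over the network.
New-SmbShare -Name "Projects" -Path "C:\Data\Projects" `
    -ChangeAccess "QUICK24X7\HelpDesk"

# From a client, map drive P: to the share; /persistent:yes survives reboots.
net use P: \\WIN-MGCGN1MFJLQ\Projects /persistent:yes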

Configuring Windows NTFS File and Folder Security


Topic title: Configuring Windows NTFS File and Folder Security. Your host for this
session is Dan Lachance.

In this demo, I'll be managing Windows NTFS file system permissions. So the first thing to do is to make sure that you've got disk volumes formatted as NTFS. NTFS stands for New Technology File System, although it's not really new; it's been around for a long time, since the 1990s, actually. However, the requirement remains the same: you need an NTFS file system to apply NTFS file or folder permissions. And those permissions are in effect locally, if someone is signed into the server, as well as across the network. So first, let's go into the Windows File Explorer, or Windows Explorer, and let's take a

A page appears with the title: File Explorer. It contains a toolbar with the following
elements: File, Home, Share, and View. The left pane appears with various options
such as: Quick access, Desktop, Downloads, This PC, Network, and so on. The right
pane appears with 2 options: Frequent folders and Recent files.

look at drive C: on our Windows server.

A folder with the heading: Local Disk (C:). The toolbar on the top contains the
following elements: File, Home, Share, View, and Drive Tools. The left pane appears
with various sections: Quick access, This PC, Local disk (C:), and so on. The main
pane contains a table with the following column headers: Name, Date modified, and
Type.

So, I'm going to right click on it and go into Properties and notice that it's showing
that the File system type is NTFS. Good,
A dialog box appears with the heading Local Disk (C:) Properties. It contains the
following sections: Shadow Copies, Previous Versions, Quota, General, Tools,
Hardware, Sharing, and Security.

that means we have NTFS security options available. And as a matter of fact we'll
know this because we have a Security tab right here. We can set NTFS security
permissions at the root of a drive. The default when you configure this is that the
permissions will be inherited by all subordinate items in the file system.

So naturally, of course, we could drill down onto drive C: into a specific folder. We
could right click it, we could go into Properties and then go into Security, and we
could set NTFS permissions at this level. And as you might guess, the permissions
would apply from this point down in the file system hierarchy. And unlike shared
folders where you cannot

A dialog box appears with the label: Projects Properties. It contains the following
options: General, Sharing, Security, Previous Versions, and Customize.

share an individual file; well, unless you have a shared folder with one file in it, but you still don't share the file itself. With NTFS permissions, though, you can set permissions directly on individual files. So, if I were to right click on a specific file and, same old same old, go into Properties and once again Security, here I can set NTFS permissions. OK, so that's where we could set it, and we know that we need an NTFS

A dialog box appears with the heading: Project_C Properties. It contains 4 sections:
General, Security, Details, and Previous Versions. Currently, the Security section is
highlighted. It further contains 2 options: Group or user names and Permissions for
SYSTEM.

formatted file system to do it. Let's experiment a little bit. I'm going to go, let's say
back to the Data folder where we've got Projects. I would like to set permissions here
for a group. Now if I go to my Start menu and go into Windows Administrative Tools
because I'm using a Windows domain controller server to do this.

If I go into Active Directory Users and Computers because I'm connected to Active
Directory with this server, I can go to any OUs that might have been built. I've built
an OU, an organizational unit called HQ where I've got a group called HelpDesk. And
if I go

A page labeled Active Directory Users and Computers appears. It contains a toolbar
with the following options: File, Action, View, and Help. The left pane appears with
the following options: Saved Queries, Builtin, Computers, HQ, Users, and so on.

into that group in Active Directory and look at the Members tab, Codey Blackwell is a
member. OK, well I'm going to set permissions

A dialog box called: HelpDesk Properties appears. It contains various sections:


General, Members, Member Of, Managed By.

at the HelpDesk group level. On a larger scale, it's usually easier to manage
permissions for groups, given that the members of the group have the same
permissions needs. OK. So having done that, I'm going to right click on the Projects
sample folder here. And what I'm going to do is go down into Properties and we know
that we go under Security to work with NTFS permissions. Now, currently we have some default entries in the access control list, the ACL. Often, you'll find that literature refers to it as the discretionary access control list, or the DACL, and that's because server technicians, such as myself, set the permissions at their discretion, usually in alignment with organizational security policies and requirements, of course.

So, we've got CREATOR OWNER, and then, of course, any permissions that are set will show up with a check mark. We've got an Allow as well as a Deny column for each of the permissions shown here. Now, what's interesting is that Deny always overrides Allow if there's a conflict. So, let's say that you are a member of a group that's been added here with Read & execute for Allow, but you're also in a second group that denies Read & execute. Well, you're going to be denied, because deny always takes precedence. Anyhow, Administrators are here, along with SYSTEM and Users, but we can go ahead and add other entries. For example, I'm going to modify this, because I don't want Users, all Active Directory domain users, listed here. So I'm going to click Edit, select Users, then Remove. Now, it says you can't remove that because it's inheriting permissions from its parent.

A Windows Security pop-up box appears. It contains a warning message along with
an OK button.

OK, well, what can I do? I don't want to change permissions anywhere else; I just want to remove the Users group here.

All you have to do is turn off object inheritance. OK, so let's figure this out. What I'm going to do, back here on the Properties of that folder, is click the Advanced button at the bottom, and I'm going to click Disable inheritance. And it says, well, if you're going to do this, should we convert the existing inherited permissions that we were just looking at into explicit permissions, as if they had, for example, been set manually, and treat them that way, or just remove everything? Well, I just want to convert the inherited permissions, because then I can go in

A pop-up box appears with the heading: Block Inheritance. It contains the following
message: What would you like to do with the current inherited permissions?

and remove like I was trying to do initially. So let's go back into Edit, Users, Remove.
This time it's no problem because these are no longer being inherited from above.
They've been converted as if we had manually made the entries at this level in the
Projects folder. OK, but what I want to do is Add.

A dialog box titled Permissions for Projects displays. It contains 2 sections: Security
and Permissions for Administrators.

I want to add my HelpDesk group.

Now I could type in help or anything that makes it unique and click Check Names
and it'll resolve it. You could also have clicked Advanced and Find Now and filter this
list or

Currently, a dialog box called: Select Users, Computers, Service Accounts, or Groups appears. It contains the following components: Select the object type, From this location, and Enter the object names to select.

manually look through it. It really makes no difference how you do it, but I want
HelpDesk here. That's what I want to end up with. OK, so the HelpDesk group has
been added. When you add an entry to the ACL, it has some default NTFS
permissions: Read & execute, to read the contents of files and, if they're scripts or executables, to run them; List folder contents, to see what's in a folder; and Read, to actually read what's in the file. We've got other options like the ability to write to a
file or to modify, which means things like modifying file attributes and also deleting
files. And Full control which encompasses the entire gamut. This is all under the
allow column, but if you have more specific needs you could enable special
permissions. Not through here, but there's a way to do that which we'll see. So I'm
going to go ahead and click OK. HelpDesk has been added. Now if I click Advanced,
this is where I can actually select. For example, HelpDesk group, click Edit,

A dialog box displays with the heading: Advanced Security Settings for Projects. It
contains the following options: Name and Owner. It further contains the following
sections: Permissions, Share, Auditing, and Effective Access.

and this is where I could choose Show advanced permissions.

Now, a new dialog box appears with the title: Permissions Entry for Projects. It
consists of various elements: Principal, Type, Applies to, Advanced permission, and
Add a condition to limit access.

And here we've got some detail. Do we want to allow, for example, only the ability to
Create folders but not Create files or maybe vice versa. So we have those types of
options when we start drilling down under the Advanced button. I'm going to click
OK. So, now that we've set the HelpDesk group as having certain permissions at the
Projects level, that should show up in subordinate objects. Let's check one of the
project files inside the Project folder. Let's right click, go into Properties, Security.
There is HelpDesk, Read & execute, Read. Now, List folder contents isn't shown here
because how can you do that on a file? It's not a folder. So that's why List folder
contents isn't here, but nonetheless, this is working perfectly. Now the other thing
to think about is how to check effective permissions. As you might imagine, this can
get pretty chaotic pretty quickly with a large environment with many groups and
many members and different permissions sets.

So I'm curious what permissions specifically does the user Codey Blackwell in Active
Directory have to this file Project_C? What I can do, while I'm already under the Security tab for that file, is go under Advanced, Effective Access. Then I can Select a
user, so I'm interested in Codey. Search for Codey, Check Names. There it is Codey
Blackwell. We're going to select that user and then I'll click the View effective access
button down below. OK. Traverse folder / execute file, List folder / read data, Read
attributes, Read extended attributes, Read permissions. Excellent, but all the other
stuff like Change permissions, Take ownership, Delete, all of that stuff is turned off.
These are the actual permissions that can be exercised and realized by user Codey
Blackwell.
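
By the way, if you prefer the command line, the built-in icacls tool can do much of what we just did in the GUI. Here's a minimal sketch, assuming the same C:\Data\Projects folder and HelpDesk group from this demo; the QUICK24X7 domain name is just an assumption based on this environment:

    :: View the current ACL on the folder
    icacls C:\Data\Projects

    :: Grant HelpDesk Read & execute, inherited by subfolders (CI) and files (OI)
    icacls C:\Data\Projects /grant "QUICK24X7\HelpDesk:(OI)(CI)RX"

    :: Disable inheritance but keep the inherited entries as explicit ones
    icacls C:\Data\Projects /inheritance:d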

Securing Data at Rest Using Windows EFS


Topic title: Securing Data at Rest Using Windows EFS. Your host for this session is
Dan Lachance.

In Windows, encrypting file system or EFS is a built-in feature. Now there are some
Windows editions like for example Windows 10 Home Edition where EFS is not an
option but on the server-side, EFS is there. So, encrypting file system or EFS lets you
determine which individual folders or files should be encrypted and that encryption
would apply for the user that signed in when encryption is enabled. So, it's encrypted
for the user, whereas other encryption schemes, like Microsoft BitLocker are not tied
to the user. They are tied to the machine though, but that's not what we're talking
about. So, let's work with EFS. So, what I'm going to do here on my server is open up
a folder where I know I've got some files. So, I'm going to go to drive C:, Data and
Projects. So let's say as an IT server technician, I have some files that I want
encrypted with my account. It might be network configuration documents, it could be
IT projects in the future, anything that might be considered sensitive.

A page labeled Data appears. A toolbar appears on the top with the following
sections: File, Home, Share, and View. The left pane appears with several options:
Quick access, Desktop, Downloads, Documents, This PC, Network and so on.

So what I could do is instead of encrypting individual files if they're organized


accordingly into folders, what I could do is enable EFS encryption at the folder level.
For example, I right-click on Projects and go into Properties, I can then go into the
Advanced button at the bottom. Now as long as

A dialog box appears with the label: Projects Properties. It contains the following
options: General, Sharing, Security, Previous Versions, and Customize. Currently,
the General option is highlighted. It consists of the following components: Type,
Location, Size, Size on disk, Contains, Created, and Attributes.

the file system is formatted as NTFS, you'll have these options

A pop-up box appears with the heading: Advanced Attributes. It contains the
following components: Choose the settings you want for this folder, Archive and
Index attributes, and Compress or Encrypt attributes.

such as to Encrypt contents to secure data, which means EFS encryption. Now I could also choose to Compress. Well, actually, not also; instead of encrypting, I could choose to compress. It's one or the other. So I'm choosing to Encrypt contents to secure
data. Notice the details button is greyed out because the file or folder in this case is
not yet encrypted. But one thing at a time, I'm going to click OK and OK, at which
point I'm asked if I would like to apply the change to enable encryption to the folder,
subfolders, and files within or

Now, a pop-up box appears with the label: Confirm Attribute Changes. It contains 2
tabs: OK and Cancel.

to the folder itself.


So, do you want to modify existing items or just the folder itself, which means only new items dropped in the folder would be encrypted? Now, let's apply it to everything. So
I'm going to leave the default selection. Click OK. And then it's done. So if I were to
right-click on the Projects folder again, just so we can check our work. I'll go down
into Properties, go to Advanced, and sure enough, Encrypt contents to secure data is
enabled. In the same way, if I were to drill down into files within that, now the files,
even though it's very hard to see the tiny icon, have a tiny yellow padlock. But if I
right-click on one of those files and go to the same place, if I go to Properties, click
the Advanced button, Encrypt contents to secure data has been enabled. So the file is
encrypted with the user that I'm currently logged in as. If I open up a Command
Prompt, we can verify this quite easily by simply typing whoami. And I'm logged in as the administrator account in the quick24x7 domain.

A page titled Administrator: Command Prompt displays. He enters the following


command: whoami.

So that's who the file's encrypted for.

But remember I pointed out the Details button before, which was greyed out. But now
that we have files that are encrypted, the details button is available. And if I click
Details, I can click Add to add other people that should be able to decrypt this file
beyond just who I'm logged in as right now.

A pop-up box called: Encrypting File System appears. It contains a table with
headings: issued to, Friendly name, and Expiration Date.

So when I do that, what I have to do is I have to have a certificate for the user that I
can select and then add them, so they have decryption capabilities. However, I won't
be doing that here. So that's what we can do at that level for EFS encryption. But, we
can also work with this at the command line. So, let's go back into a Windows
Command Prompt. I guess we already have one open down here and I'll clear the
screen with cls.

The Administrator page displays again. It contains the following command: cls.

I'm going to change directory to data. And if I do a dir, there's the projects folder we

The command now reads: cd \data.

were just working within the GUI.


If I were to type in cipher projects, it tells me that New files added to

He enters the following command: cipher projects.

this directory will not be encrypted, but that's for C:\Data\. For Projects, it's currently
encrypted as denoted with the capital letter E. If I just change directory into projects
and if I were just to type cipher

The command now reads: cd projects.

which is the EFS command line management tool but cipher, no parameters, nothing.

The command now reads: cipher.

Then what it's telling me is for C:\Data\Projects\, New files added to this directory
will be encrypted. Remember, when we enabled EFS at the Projects level, we stated
that we wanted to encrypt the folder, which means new items dropped into it, as well
as everything in it and under it, if there are subordinate sub-directories. The three
Project files are automatically encrypted based on how we configured this, as denoted
again with a capital letter E next to each of them. So I can use the cipher command
line tool, let's say cipher /d for decrypt. And let's say I want to decrypt Project_C.txt.

The following command is added: cipher /d Project_C.txt.

It says it's decrypting it. Ok, says it was decrypted.

But let's really find out for sure: cls to clear the screen, then cipher. Sure enough, Project_C
previously had a capital E which denoted it was encrypted with EFS. But now it has a
capital U because it's been unencrypted, so it's not encrypted with EFS. So you can
manage these types of things using the GUI, or you can manage them using the cipher
command line utility. But either way, what you're doing is adding a layer of protection
for data at rest by encrypting it because if there was somehow a malicious user that
gained remote file access to this server, they would have much more of a problem
getting into Project_A.txt and Project_B.txt, because they're still encrypted, than they would Project_C.
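
As a quick recap sketch of the cipher commands we just used, plus a couple we didn't, here's what that session might look like, assuming we're still in C:\Data\Projects:

    :: Show encryption status for files here (E = encrypted, U = unencrypted)
    cipher

    :: Decrypt a file, then re-encrypt it for the currently signed-in user
    cipher /d Project_C.txt
    cipher /e Project_C.txt

    :: Show who holds certificates that can decrypt the file
    cipher /c Project_C.txt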

Enabling Microsoft BitLocker


Topic title: Enabling Microsoft BitLocker. Your host for this session is Dan
Lachance.
In this demonstration, I'm going to be using Windows Server 2019 to manage
Microsoft BitLocker encryption. And so BitLocker encryption then is designed to
encrypt or protect data at rest at the disk volume level. So with BitLocker, you cannot
select individual files or folders to encrypt like you can with other tools like OpenSSL
or using the built-in Windows Encrypting File System or EFS. So the first thing I'm
going to do here in my Windows Server is, I'm going to go to the Start menu and fire
up the File Explorer or Windows Explorer, whatever you want to call it. I'm going to
right-click on drive C:

A page labeled File Explorer appears. It contains a toolbar with the following
elements: File, Home, Share, and View. The left pane displays various elements such
as: Quick access, Desktop, This PC, Pictures, Videos, and so on. The right pane
contains 2 sections: Frequent folders and Recent files.

and go into Properties. OK, so we've got a large disk here, about 60 gig. We've got
some used and free space. Looks good. Next thing I want to do is I want to go into
BitLocker.

A dialog box appears with the heading Local Disk (C:) Properties. It contains the
following sections: Shadow Copies, Previous versions, Quota, General, Tools,
hardware, Sharing, and Security.

So I'm going to search for bit in the Start menu and I'm going to choose Manage
BitLocker. But as I click it, nothing happens. OK, so on a Windows Server, depending
on the version you're working with, that may happen, and that's because the feature is
not installed.

So I'm going to go to my Start menu and go into Server Manager. Now in Server
Manager, I can add and remove roles and features.

A page called Server Manager- Dashboard appears. The Left pane appears with the
following: Dashboard, Local Server, and All Servers. The right pane appears with the
2 sections: WELCOME TO SERVER MANAGER and ROLES AND SERVER
GROUPS.

I'm going to click Add roles and features and I am going to continue through the
wizard by clicking Next, until such time I'm at the Select features screen where I'm
going to choose

A dialog box appears with the heading Add Roles and Features Wizard. It contains
the following options in the left pane: Before You Begin, Installation Type, Server
Roles, Features, Confirmation, and Results. In the middle pane, it contains a list of
various Features. The right pane contains its subsequent information.

BitLocker Drive Encryption. So I'm going to turn that on. And it's going to prompt me
to install some management tools. I want them, so I'll just click Add Features and I'll
continue on, through the wizard. I'll click Next and Install. Now, this is a good thing.
It is not inconvenient. We want a fresh server installation to be very vanilla with as
little component installation as possible for security reasons and also for the amount
of time it takes to apply updates. The less you have on a server, the smaller the attack
surface, and the less there is to update. So this is a good thing. It then says a restart is
pending on the server, no problem.
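
Incidentally, the same feature installation can be scripted. Here's a minimal PowerShell sketch using the built-in Install-WindowsFeature cmdlet; the -Restart switch reboots automatically, whereas we're restarting manually here:

    # Install the BitLocker feature plus its management tools, then restart
    Install-WindowsFeature BitLocker -IncludeManagementTools -Restart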

Let's actually go ahead and start that. So from the Start menu, I'll click the power
button and I'll choose Restart. OK. So, now we've got a server that has the BitLocker
feature installed. So, let's go back into the Start menu and I'm going to search for
BitLocker. Even the icon appears differently, because it's installed. So I'm going to
choose Manage BitLocker and this time it fires up the tool, that's just perfect. So for
drive C: in my server, it says BitLocker is off, but I do have a link where I could turn
on BitLocker.

A page labeled BitLocker Encryption opens. It contains the following sections:


Operating system drive, Fixed data drives, and Removable data drive-BitLocker To
Go.

Now what it does initially is it verifies that you have TPM firmware, trusted platform
module. BitLocker is designed to work with TPM firmware which is present in most
server motherboards and if it's not, it can be made that way with an add-on expansion
card for TPM or it might even be a chip because some motherboards have a socket,
it's just the chip is absent.

You can also have this available on desktop computers, laptops and so on. However,
I'm going to cancel. I don't want to enable encryption for drive C:, but notice that I do
have a Removable data drive as well. I've plugged an external USB drive into my server
and it's detected here, and I can encrypt it to protect data at rest on it using BitLocker
To Go. Now when you enable BitLocker for a disk that's internal to the server, it's
encrypted and if you have TPM, the decryption key is in the TPM. So in other words,
it's tied to the machine. If someone were to physically steal the drive from the server, then there's not much that they could do. The chances of them getting at the content on that encrypted drive are very, very slim. However, how does that work with a removable drive? It's a little bit different. So if I
go back to turn on BitLocker on drive C:, and if we were to go through as if we were
going to proceed, it asks me where I want to save a recovery key. So if, for example, the TPM firmware fails and you still need to be able to decrypt the
drive contents, you can use a recovery key which can be saved to a file or printed.

I'm going to go ahead and choose the option of printing it. So I'm printing it to a PDF.
And then I can continue on through the wizard. So, the next thing we have to specify
is whether we want to encrypt used space only on the drive or the entire drive. Now, if
you're encrypting the entire drive, including unused space, it's slower. So I could leave it
on Encrypt used disk space only and continue on through the wizard by clicking Next.
We're going to use the new encryption mode and then from here we have a message
stating that BitLocker will restart your computer before encrypting. You could start
encrypting and so on. So I'm not going to do that, I'm going to cancel. What I want to
do is go and enable BitLocker for my removable USB drive. So I'll just expand that
little section and I'll click Turn on BitLocker. OK, so it's just going to initialize the
drive before we get the option. Now this is a different set of options, which is why I
wanted to go through both a fixed drive as well as a removable drive.

We can use a password to unlock the drive. So if we enable BitLocker encryption for
the USB drive on this machine, we can plug it into a different machine and access it,
given that we know the password that will decrypt the drive contents. Or we might use
a smart card. If your servers and desktops and laptops are equipped with smart card
readers and everyone is using a smart card for authentication, then maybe you would
have to have your smart card inserted to unlock a BitLocker encrypted drive. So, that's
considered more secure than just a password. A password is just something you know, whereas a smart card is something you have. Not only that, many smart cards will require the entry of a PIN to be able to use them. But in this case,
I'm going to use a password and I'll enter and confirm a password to be used. And
then I'll click Next.

Same thing about the recovery key, Save to a file or print. So maybe we'll just print to
PDF. I'll just overwrite the other file because this is just a demo machine. You would
never overwrite an existing BitLocker recovery key file in real life. OK, next step I'm
just going to go through and accept the defaults and start encrypting. Now, of course, the size of the drive, how full it is, and the encryption mode you've selected will determine exactly how long it takes. But at this point, encryption of drive E:, just a small USB thumb drive, is complete. And when we do that, notice we
get a number of new options available. We can Back up the recovery key, Change the
password in this case, Remove the password, Add smart card authentication, Turn on
auto-unlock, and even Turn off BitLocker if we've changed our mind.
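
BitLocker can also be driven from an elevated Command Prompt with the built-in manage-bde tool. Here's a minimal sketch; treat the protector switch on the -on example as an assumption to verify on your build:

    :: Show BitLocker status for all volumes
    manage-bde -status

    :: Turn on BitLocker for removable drive E: with a password protector
    manage-bde -on E: -pw

    :: Turn BitLocker off again, which decrypts the volume
    manage-bde -off E: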

Encrypting Linux Data at Rest Using OpenSSL


Topic title: Encrypting Linux Data at Rest Using OpenSSL. Your host for this session
is Dan Lachance.

There are so many tools out there that you can use to encrypt files and you would do
that for protection of data at rest, to provide confidentiality. And as a server
technician, sometimes you also need to be aware of laws and regulations that might
apply to the organization, such as those that are related to data privacy where stored
data that's considered sensitive might need to be encrypted. So what we're going to be
doing here in Linux is we're going to be working with the OpenSSL tool to encrypt
files. Now, the first thing we're going to do is simply type openssl here in Linux and

A Linux page labeled cblackwell@ubuntu1:/data appears. He enters the following


command: openssl.

that takes us into an OpenSSL prompt where we could begin typing in commands. All
we're doing at this point is verifying that OpenSSL is there. And it is, so I'll press
Ctrl+C. The first thing that I really want to do is I want to determine what is supported
by OpenSSL in terms of ciphers.

Now, cipher means that we have a cryptographic function. Now the next thing I'll do
is run openssl, a space, and help.

He enters the following command: openssl help.

Here we have a list of cipher commands. So for instance, if we want to use des, the
Data Encryption Standard, which is really deprecated; that goes way back to the
1970s when it was a U.S. government standard. We're more interested in AES, the
Advanced Encryption Standard, that type of thing. So that's one way to get a bit of
help. You could also use the standard Linux man command to view the manual page.
So, that means we could type man openssl. The reason we're doing this with this
command

He enters the following command: man openssl.

is this is one of those commands that gets very complex very quickly. You can do a
lot with OpenSSL. We're just going to be encrypting and decrypting a file in the file
system. But you can also work with public and private key pairs and PKI certificates.
You can also generate password hashes and so on. So there's a lot that you can do
here.
So here's the man page that we might refer to if we want to get some help on how to
use the openssl command. I'll press Q to get out of there. So the next
thing I'm going to do is start using this. In the data directory where I am currently
looking, I'm going to encrypt the file by the name of file1.txt which we can currently
view the contents of, with the cat command. So sudo cat file1.txt. When we do that,
we see the contents of the file. It's a text file. It

The command reads: sudo cat file1.txt.

says, Changed. This is a sample file. We want to encrypt it so that it can't be read without knowledge of the decryption key or password. What we're going to be doing is using simple symmetric encryption here, where we enter a password and the actual key is derived from that password through multiple iterations of a key derivation function.

So let's do this: sudo openssl. We know sudo gives us elevated privileges. I'm going to use the aes-256-cbc cipher; CBC is a mode of the AES 256-bit encryption cipher. Then -a and -salt. The -salt means I want to add randomness, or salt values. Then the input file, so -in, and that's going to be file1.txt. Then -out, because I want to write out to a file, let's say called file1.txt.encrypted just for clarity's sake. And I'm going to use -iter for iterations of, let's say, 100.

He enters the following lines of command. Line 1 reads: sudo openssl aes-256-cbc -a -salt -in file1.txt -out. Line 2 reads: file1.txt.encrypted -iter 100.

And so I'm prompted then to enter a password from which the encryption key actually
will get derived and I'll then reenter that same password again for verification and it's
done. So we can check our work. We can clear the screen and do an ls where we now
have a file called file1.txt.encrypted. Let's try to run sudo cat against that file to see if
we can make any sense of it.

The command reads: sudo cat file1.txt.encrypted.

Well, it looks like a bunch of gobbledygook because it's encrypted. So let's decrypt
the file. I'm going to run sudo openssl aes-256-cbc. So that's the cipher that's going to be used to decrypt; it's the same one that was used to encrypt. Then -d for decrypt. We also still need -a, and -in for the input file. So in this case, since we're decrypting, the input file is going to be file1.txt.encrypted, and for the -out file, why don't we call it decrypted.txt for clarity's sake. And -iter 100, because 100 iterations were used initially to derive the key from the password through multiple passes, so we have to specify the same value here.
He enters the following lines of command. Line 1 reads: sudo openssl aes-256-cbc -d -a -in file1.txt.encrypted. Line 2 reads: -out decrypted.txt -iter 100.

I will go ahead and enter, of course, the correct password, and we're good. So not only do we have the correct password, we've added the -iter command line switch; life is good.

Let's verify that. If I do just an ls, there's the decrypted.txt file. Well, can we read it
though? Is it encrypted still or not? So, I'll use sudo to cat the decrypted.txt file. We're not reading
not reading

The command reads: sudo cat decrypted.txt.

what we saw in the encrypted file. If you recall, those contents looked like gobbledygook; I think that's the technical term I used. But when
we look at the decrypted file,

The command reads: sudo cat file1.txt.encrypted.

we have the original plaintext that we started with. Now, there are many easier to use
GUI tools these days to work with file encryption and decryption. In the backend,
we're just doing this type of thing, but sometimes working at the command line helps
solidify it in our minds as to exactly what's happening when encrypting, as well as
when decrypting.

Linux File System Security


Topic title: Linux File System Security. Your host for the session is Dan Lachance.

Being a server technician necessitates understanding not only Windows file system
security, but also Linux file system security. Now in Linux file systems, we have both
standard file system permissions that we can assign to files and folders, and we also
have enhanced permissions that can be set using what are called Access Control Lists
or ACLs. ACLs give you more capabilities. For instance, with standard file system
permissions in Linux, you can set permissions for the owning user of that file or
directory or the group that is tied to that file or directory or everybody else. But, not
multiple groups. But, you can do multiple groups with differing permissions if you're
using file system ACLs. Linux user and group account information is stored in text
files in a restricted location in the Linux file system. So for example, under
/etc/passwd which is spelled passwd. This is a colon-separated text file that contains
user and group account information on a standard Linux installation.

And /etc/shadow is also a text file. It's colon-separated that contains user password
hashes and password expiry information. Now a password hash means that when a
user enters a password initially, or when a server technician resets a forgotten user password, that password is fed through a one way
algorithm called a hashing algorithm that results in a unique value. And that's what
actually gets stored in /etc/shadow for a given user. And so when the user
authenticates to the Linux system and enters their password, it's entered through the
same one way hashing algorithm and then compared to the stored password hash. If
it's the same, then the user, of course, knew what the original password was. Now, of
course, Linux and Unix hosts don't have to use authentication from these two standard
files. You could use some kind of a centralized LDAP compliant directory service in
Linux or Unix variants that would have account information for users and groups
stored in the centralized network database. So it doesn't have to be this way. But this
is standard with Unix and Linux, at least with the standard installation.
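
To make that concrete, here's a minimal sketch. The openssl passwd -6 switch, which produces a SHA-512 crypt-style hash, assumes OpenSSL 1.1.1 or later, and the account name is just the one used in this course's demos:

    # View the stored hash and expiry fields for one account (requires root)
    sudo grep cblackwell /etc/shadow

    # Generate the same style of one-way hash from a candidate password
    openssl passwd -6 'MyS@mplePassw0rd'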

Now let's get to the permissions. When we talk about permissions in Linux and Unix
environments, there are primarily three. There are read, write and execute
permissions. Each of them has a numeric value. So, read has a value of 4, write has a
value of 2, and execute has a value of 1. So, if you assign permission 4, as you
might guess, you're assigning read. If you assign 2, it's write. If it's 1, it's execute. If
you add up 4 and 2 for read and write, that equals 6. So, setting a permission of 6
means read and write. Whereas, read and execute would be setting a permission of 5,
which, of course, comes from 4 + 1. Now, besides those three standard permissions in
Linux, there are also three permissions sets for each file system object. And we've
mentioned this briefly.

The owning user could be potentially granted read, write, and execute for a given file
or a folder. And for that same file or folder, a group that's been assigned to it could
also potentially be assigned read, write, and execute. And for that same file or folder,
we could have permissions for everyone else, which potentially could be some
variation of read, write, and execute. Now we can issue the ls -l command in Linux or
Unix and that means long listing. When we do that, it will show us a listing of files
with the permissions and then, of course, we can change those permissions with
chmod. So if I were to run chmod 740 for a file called file1.txt, well, we're setting the
permissions for file1.txt. But what does the 740 mean? Well, the 7 is the sum of read,
write, execute 4 + 2 + 1 and that's for the owning user.

That's the first position of the permission. Then, 4 comes from r that's read, so we
would have read permissions set for the group that's assigned to that file. And finally,
the 0 means no permissions, no permissions for everyone else. Everyone else means
whoever is not the owner of the file or listed as a group assigned to the file. Now you
can use chmod, change mode to set file system permissions using those numbers. But
you can also do it this way. chmod u=rwx. In other words, the owning user gets read,
write, execute; g=rw, so the group assigned to the file will get read, write; and o=r in this
example. Everyone else gets read and, of course, we put in the file or the directory as
the last parameter for the chmod command.

As another example, we might run chmod g-rw. In other words, I want to remove read
and write permissions for the group assigned to the file. And, of course, the last
parameter is the file in question or the directory, in this case file1.txt. We could also
run chmod -R. Remember, Linux is case sensitive. Now this means recursive. So what
we're doing here is setting 744 permissions for everything in and under the /projects
folder. And so, 7, of course, means read, write, execute for the owner. 4 means read
for all the file system objects in and under projects and the last 4 is for everyone else,
they also get read permissions.

Now, we mentioned Linux file system ACLs. These support standard Linux file system permissions and add the ability to have different sets of permissions for
different groups. So, you can set these permissions using the setfacl command, the set file system ACL command, and read those permissions with the getfacl command. So for example: setfacl -m, which means modify, then u: for user. I want to set the permissions u:cblackwell:rwx, so read, write, and execute, and I want that applied to /projects. Then I could use getfacl /projects to retrieve those permissions that were set. As another example, I could run setfacl -m, so modify again, then g: for group, and maybe that's g:salesgroup:r. So what I want to do in this particular case is give read permissions to
the members of the sales group for /projects. So, that gives us a sense then of what we
might do to work with Linux file system permissions.
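
Here's a minimal sketch of those commands together; the /projects directory and the cblackwell and salesgroup accounts are just the examples used above:

    # Owner gets rwx (7), group gets read (4), everyone else gets nothing (0)
    chmod 740 file1.txt

    # The same idea with letters instead of octal numbers
    chmod u=rwx,g=rw,o=r file1.txt

    # Remove read and write from the group; apply 744 recursively to /projects
    chmod g-rw file1.txt
    chmod -R 744 /projects

    # ACLs: extra per-user and per-group permission sets beyond the standard three
    setfacl -m u:cblackwell:rwx /projects
    setfacl -m g:salesgroup:r /projects
    getfacl /projects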

Setting Linux File System Permissions


Topic title: Setting Linux File System Permissions. Your host for this session is Dan
Lachance.

In this demonstration, I'm going to be configuring Linux file system permissions. So


I'm already SSHed into a Linux host. So at this point, we want to look around the file
system a little bit before we start working with permissions. So I'm going to start by
changing directory to a folder on the root

A Linux page labeled cblackwell@ubuntu1:/data appears.


that I've created called data. And if I do an ls, there's one file there called file1.txt.

He adds the command: cd /data.

If I do an ls -l, so a long listing of file entries.

The command reads: ls -l.

The -l allows me to see the permission sets. There are three: one for the owning user, shown here as root; one for the owning group that's assigned to this file, which is also called root; and finally one for everyone else. So, the root user has read and write, the root group has read, and everyone else has read on that file. And that's fine.

Now there are times when you're going to have a script file where permissions might
differ, so let's get a file ready for that. I'm going to run sudo nano. Nano is a built-in
text editor. And I'm just going to call this sample file script1.sh. Now in

He enters the following command: sudo nano script1.sh.

Linux file system extensions do not have the same meaning as they do in Windows
and so you don't need to put sh. That's just one of my personal habits. So all I'm going
to do here is make sure I start my script with a hashtag or number sign and an exclamation mark, then /bin/bash, so the first line reads #!/bin/bash. I'm telling Linux that this is the interpreter that is to be
used to run this script. And from here, I'm just going to echo back.

Currently, a script labeled script1.sh displays underneath cblackwell@ubuntu1:/data


page. The following command is added: #!/bin/bash.

"Hello world", because this is not really about script writing,

The following command reads: echo "Hello world".

it's just about having a script file to apply permissions to. So I'll press Ctrl+X to save the modified buffer, Y for Yes, and I'll just press Enter. Now, I used sudo for that command because otherwise I would not have had permission to write to the data directory, since I'm not logged in as root. Of course, we could change that; that's why we're here, and we know how to look at the permissions for that.
Now let's ls -l again and for the script file, the permissions are set the same. So there is
a default umask as it's called in Linux, which determines default permissions. We're
not going to be really getting into that, but that's how those permissions are
determined. If I were to do a cd .. to go back one level in the file system hierarchy,

The command reads: cd ..

I could also do an ls -l of the data directory or folder. But what's interesting

The command reads: ls -l data.

is it might not be the output that you expected, because what it's doing is basically
showing me what we looked at already up above, which is those two files in
the directory and their permissions. But what if I want to see the permissions for the
directory itself? What you do is you modify your command just slightly, so I'm using
the up arrow key on the keyboard to go up to the previous command. And I'm just
going to add a d after the -l. Now these are lowercase letters. Remember, case
sensitivity matters in Linux.

The command reads: ls -ld data.

Now that's better. What I have is a listing for the data directory itself, with its permission sets: the three permission sets for the owning user, the owning
group, and everyone else. Also because it's a directory, we have a d at the very
beginning. That's all that means. OK, so we've been looking at permissions; how do we set these
things? We can do it using the change mode command or chmod. I'd actually like to
go into the data directory first. OK, I'm going to change directory into the data
directory again and ls -l. So we can see the existing permissions for those two files.
What I want to do is make sure that only the root user has full permissions to file1.txt.
Nobody else should have anything. Currently read is set for the owning group and for
other. We don't want that. So I'm going to run sudo because this requires elevated
privileges and I'm going to run the change mode command. Some people pronounce
that chmod, doesn't make a difference,

The command reads: sudo chmod 700 file1.txt.

as long as you know what you're doing. And I'm going to put in 700 file1.txt.

Now, the 7 is the combination of read, write, and execute. Read has value of 4, write
has a value of 2, and execute 1. So, 4 + 2 + 1 = 7. So I'm setting 7 permissions. That's
the read, write, execute permissions for the owning user. But the two zeros are for the
owning group and for other, they get nothing. Let's press Enter to see if that's true and
I'll just use the up arrow key to go back to ls -l. Indeed, file1 even looks different
visually. It's green. That's because execute is flagged on it, and if we look at the
permissions set indeed, read, write, execute 4, 2 and 1 or 7 has been set for the owning
user, nothing for the owning group. So just a bunch of dashes and same for everyone
else.

Now of course, remember, we can also use letters when we set permissions instead of
the octal numbers like 700. So I could for example, say chmod. Actually I'm going to
have to prefix that with this sudo, aren't I? sudo chmod and what I'm going to do is
I'm going to say g or group +r. Now that means I'm adding it for the group that's
assigned to that file, and of course I have to specify the file. So, file1.txt. If we do an
ls -l again,

The command reads: sudo chmod g+r file1.txt.

indeed, we now have an r in place there for the owning group, which also happens to
be called root. Now, if you want to set permissions for an entire subdirectory
structure, you can do it. Let's go back out one level. So, sudo chmod again, and what I want to do, let's say, is set 740 permissions for /data, and I want to do it recursively with -R: everything in and under data, as well as data itself.

So now if I do an ls -ld of data, what do we have?

The following command reads: sudo chmod 740 /data -R.

Do we have 740? We do: we have 7, we have 4, and we have 0. That worked. So now I'm going to use sudo chmod o+rx /data; that's o for other, adding read and execute. I'm doing that because I'm currently logged in as cblackwell

The following command reads: sudo chmod o+rx /data.

and you might recall that cblackwell is neither the owning user nor the owning group for the data
directory. Now I can go into the data directory and at this point we're back in business
where we can see everything and, of course, we can see that our permissions
recursively were in fact applied to everything in the data directory.
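
One loose end from earlier: the default umask that determines default permissions. Here's a minimal sketch, assuming the common 0022 value:

    umask            # typically prints 0022
    touch newfile
    ls -l newfile    # shows rw-r--r-- (644): new files start at 666, minus the 022 mask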

Verifying Windows File System Hashes


Topic title: Verifying Windows File System Hashes. Your host for this session is Dan
Lachance.
Hashing is a cryptographic function, and you can apply hashing to network messages,
or you can apply it to an entire file system like a disk partition or to an individual
file, it doesn't matter, but the purpose remains the same. Hashing uses a one way
crypto algorithm into which you feed data. What comes out of the other end of that one
way algorithm is a hash, which is sometimes called a message digest. Now the idea is
that it is not reversible. You can't take that hash and feed it back into an algorithm and
get back to the original plaintext like you would with encryption, where if you have
the correct decryption key you can get back to where you started from, that's not how
it works. The way it works with hashing is we have a unique hash value resulting
from original data, and we can compare original data to that hash value again by
feeding it again through the same one way algorithm.

And there are many different types of hashing algorithms. Older ones that you
probably should stay away from because they're crackable include MD5, message
digest 5. But using SHA 256, secure hashing algorithm 256 bit would be considered
much stronger than MD5. Either way, it all works the same. So it's used to verify the
integrity of something that's transmitted over a network or something in the file
system to make sure it's not been tampered with by unauthorized parties because if a
single bit at the binary level changes within a message or within a file, then the next
time you calculate that hash, the hash will be different. And that's how you know
something has changed. So let's put this in motion. Here on my Windows server, I've
got three files. One is a JPEG image, so it's a picture of a beautiful sunrise. The next
thing I have is a Secret_Message text file that says, This is a secret hidden message
and then I've got a program that will allow me to inject my

A notepad page labeled: Secret_Message.txt-Notepad appears. It contains a toolbar


with the following options: File, Edit, Format, View, and Help. It contains the
following message: This is a secret hidden message!

secret message file into the graphic image. This is called steganography, where you
embed something secret in something innocuous, which in this case just looks like a picture of a sunrise.

The way we're going to work with this is we're going to take a hash of the JPEG
image prior to injecting it with the secret message, and then after to see if the hash is
any different. And that's one way that you can actually detect if something has
changed. I'm going to go into my Start menu and we're going to fire up Windows
PowerShell. I want to increase the size of it so that we have a way to see what's
happening here. It's much too small by default. Now, once we're in here, I'm going to
go to the testing directory, which is located on the root of drive C:. I'll do a dir.
A page labeled Windows PowerShell appears. He enters the following command: cd \testing.

There are the three files. I'm interested in generating a hash for the JPEG file.

He now enters the following commands. Command 1 reads: cls. Command 2 reads:
dir.

So I'm going to run Get-FileHash. Remember, PowerShell is not case sensitive, and
I'll start specifying the file name and then I'll press Tab. So it spells out the rest of the
file name, that's great.

The command now reads: Get-FileHash .\20211102_072052.jpg.

So here when we run this, we get the resultant file hash which is unique based on the
binary bits, the 0s and 1s that comprise that JPEG.

And what you could also do if I use the up arrow key to bring up that previous
command is perhaps redirect that output to a file. So how about we call this Sunrise_File_Hashes.txt.

He adds the following to the command: > Sunrise_File_Hashes.txt.

And if I were to run notepad against that file, there it is. OK.

A Notepad labeled Sunrise_File_Hashes.txt appears. It contains toolbar: File, Edit,


Format, View, and Help. It contains 3 columns: Algorithm, Hash, and Path.

So having done that, let's do what we came here to do. Let's inject a secret message into
that file, which is really going to change it at the binary level. But it's not going to
look any different visually. So back here in the file system, I'm going to run my
JPeg_FileHider program

A pop-up box labeled: Welcome to JPHS for Windows appears.

and what I'm going to do is open a JPEG and I'm going to select the JPEG file and
that's great.

A dialog box JPHS for Windows- Freeware version BETA test rev 0.5 displays. It
contains a toolbar with the following options: Exit, Open jpeg, Hide, Seek, Save jpeg,
Save jpeg as, Pass phrase, Options, and Help. It contains 3 sections: Input jpegfile,
Hidden file, Saved jpeg file.
What I want to do is hide a file in it, so I'm going to choose Hide and it wants a pass
phrase. OK, so I'm going to specify a pass phrase and then I'll go ahead and confirm it.
I'll click OK and then it wants me to select what I want to hide.

A pop-up box appears with the heading: Enter the pass phrase and confirmation.

So that's going to be my Secret_Message.txt file.

It's embedded within it. Excellent. So I'm going to choose Save jpeg as. Just so we
have a copy of the original, but also a copy with the message. So I'm going to give the copy the same name as the original file, but I'm going to add _Copy at the end
of the file name. OK, so now we've got two files here. We've got our original. We've
got the copy which has a message embedded within it. And if we were to go back to
PowerShell and I'll just use the up arrow key again to run Get-FileHash on the same
file. Let's actually not write it out to a file initially. Well, if we do a hash of the same
file, the hash is the same, because the file has not changed. But if we were to do a hash of the copy of the file,
the hashes are definitely different.

The updated command reads: Get-FileHash .\20211102_072052_Copy.jpg.

Even though if we go back and look at the images, that first picture looks precisely
like the second one. Visually, we don't know anything's changed. But in fact, because
the hashes are different, we do know something has changed. So from a more
clandestine viewpoint, someone that might be expecting a hidden message in that file
could extract it quite easily, because it won't be detected, really, unless you're looking
at hashes.

So if I were to run that same JPeg_FileHider program again, and if I were to open the
JPEG specifically the copy, the one that we know has the embedded message and if I
were to choose Seek, it asks for the pass phrase that was entered originally. So I'm
going to go ahead and enter that here. And then, I'll click OK. So then we have a
dialog box whose title states Save the hidden file as. So why don't we just call it
Secret_Message_Copy? Just because it's a copy of the original. Not that it makes a difference really. And of course, if we open up the Secret_Message_Copy file, it
reveals the hidden text. So, of course, you would have to know to check file hashes on
that file to see if anything has changed. But two things happened here.

A notepad page labeled: Secret_Message_Copy.txt-Notepad appears. It contains a


toolbar with the following options: File, Edit, Format, View, and Help. It contains the
following message: This is a secret hidden message!
We worked with file hashing and we got to see a little bit of steganography from the
security standpoint.
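
To turn that demo into something reusable, here's a minimal PowerShell sketch that compares before-and-after hashes; the file name is the one from this demo:

    # Capture a baseline hash (Get-FileHash defaults to the SHA256 algorithm)
    $before = (Get-FileHash .\20211102_072052.jpg).Hash

    # ...later, after the file may have been tampered with...
    $after = (Get-FileHash .\20211102_072052.jpg).Hash

    if ($before -eq $after) { 'File unchanged' } else { 'File has changed!' }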

Verifying Linux File System Hashes


Topic title: Verifying Linux File System Hashes. Your host for this session is Dan
Lachance.

File hashing is useful when you want to be able to detect if a change has been made to
a file. And so in Linux, we have a number of command line tools that we can use to
do that. The idea is that you would take a hash or generate a hash for a file and then

A Linux page labeled cblackwell@ubuntu1:/data appears.

later down the road, if you wanted to detect whether that file had been changed, you could
then use the same hashing algorithm, feed the file into it and see if the hash value is
the same. If the hash value is the same, the file is not changed. If it's different, the file
has changed and that's how you can detect that kind of thing. So hashing can be used
to verify file system integrity. Of course, it can also be used to store passwords in a
reasonably safe format. So, consider for example that if I run sudo and if I were to
run, let's say, tail just to retrieve the last couple of lines in a file /etc/shadow. Now the
shadow file in Linux, if you're using local authentication,

He enters the following command: sudo tail /etc/shadow.

will store password hashes. And one of the entries I have here, for user cblackwell, is just that: a password hash value.

So that's also how it's used. That's not going to be our focus, though. Our focus is
going to be on file system hashing. So here in the data folder, I've got two files and
the one we're going to focus on is called file1.txt. We want to be able to generate a
hash for that file, and one way to do it is to use a command called md5sum. Now of
course, when I press Enter, it's expecting me to add

He enters the following command: md5sum.

something; it wants me to type things in to generate a hash from, but I'm just going to
press Ctrl+C. So what I really want to do is run md5sum, but I want to pass on the
command line, the name of a file in this case file1.txt.
He enters the following command: md5sum file1.txt.

Now I get a message that states permission is denied. Well, I'm not logged in as root, as evidenced by my prompt and the resultant output of the whoami command. So I'll
use my up arrow key to go back to that command.

The command reads: whoami.

I'll just prefix it with sudo. So then it returns back an MD5 sum, or hash, for file1.

The command now reads: sudo md5sum file1.txt.

The hash itself is shown here in the command line output. The md5 stands for
message digest version 5. It is an older hashing algorithm that is really considered
deprecated. There are ways it can be compromised, and so what you're probably better
off working with is something like secure hashing algorithm. So if I were to clear the
screen and run sudo sha256sum, that's the secure hashing algorithm, and then as the last
parameter, specify the file, in this case file1.txt. It now returns a sha256sum of that
file. Then normally, what you would need

He enters the following command: sudo sha256sum file1.txt.

to do is store that somewhere, because in the future when you run the same command,
you're going to be looking at whether or not the hash value is the same, which implies
the file hasn't changed, or whether it's different. And when you get a different hash on
the same file, something has changed. So, let's use the up arrow key to bring up the
previous command, and at the end, I'm going to add a space and a greater-than sign.

Now what I want to do is output that to a text file in the file system, so the greater
than sign is the output redirection symbol; it takes the output of the command to the left, and I
want to put that in a file called file1_hashes.txt. Well, we got the Permission denied
message

He now enters the following command: sudo sha256sum file1.txt > file1_hashes.txt.

because we're trying to write to a location where our current account does not have
write permissions. So, for example, let's go back one level, because we're in the data folder, and do ls -ld data. What do we have?

Command 1 reads: cd .. . Command 2 reads: ls -ld data.


Well, it looks like the other permission set shown here is only read and execute. We're going to add write, and this is just another opportunity for us to practice our file system security commands. So sudo chmod, and what I'm going to do is say o+w, that's o for other, adding the write permission, and that's going to be for the
data folder.

The command reads: sudo chmod o+w data.

OK, let's go ahead and do that. And let's do an ls -ld data again, sure enough, the w is
now part of the permission set for other, along with read and execute. OK, now I'm back in
the data directory and we're going to run the same command again where we want to
use the output redirection symbol to write the hash out to a file called file1_hashes.txt.
This time we're good to go. If we do an ls, there's the file hashes file. Let's cat it to
view its contents just to make sure that everything is as we think it is and it is.

The command reads: cat file1_hashes.txt.

When we look at the contents of that text file, there's the hash. There's the file name.
You could get a little fancier and more sophisticated with a shell script and start reading
out date and time information and so on. But for our purposes this will be fine. And
what I want to do is modify that file. So sudo nano for file1.txt.

The command now reads: sudo nano file1.txt.

And I'm going to add, let's say, the text Changed, so the file is changed. Then Ctrl+X to save the modified buffer, the letter Y for yes, and Enter to write to that file.

So if we cat that file itself file1.txt

The command now reads: cat file1.txt.

and we'll just do that by making sure we prefix it with sudo, since we're not logged in as root. The contents do reflect that there is a change. Alright,

He enters the following command: sudo cat file1.txt.

so what I'd like to do is go back through my command history to where we wrote that
out to the file. But instead of just a single greater than sign, I'm going to add two of
them because I want to append to the file1_hashes.txt

The following command reads: sudo sha256sum file1.txt >> file1_hashes.txt.


file. I don't want to overwrite what's already in there. I want to add to it. Go ahead and
do that. Now let's use sudo to cat the file1_hashes.txt file

He enters the following command: sudo cat file1_hashes.txt.

Well, the file name remains the same, but when we generate the hash, the second one
clearly is different from the first one. And so, therefore, we know that something has
changed in that file through hashing.
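
One more trick worth knowing: sha256sum can verify files against a saved checksum file for you, using its -c switch. Here's a minimal sketch with the same file names from this demo:

    # Save a baseline checksum
    sudo sha256sum file1.txt > file1_hashes.txt

    # Later: prints file1.txt: OK if unchanged, or FAILED if it's been modified
    sudo sha256sum -c file1_hashes.txt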

Course Summary
Topic title: Course Summary

So, in this course, we've examined how to plan, implement and manage Windows and
Linux file system permissions, encryption and generating file system hashes. We did
this by exploring how to configure Windows file system security with NTFS, secure data at rest with Windows EFS, and enable Microsoft BitLocker. Following that, we used
OpenSSL for encrypting Linux data and we examined how Linux file system security
works and, of course, how to configure file system security in Linux.

And finally, we verified Windows and Linux file system hashes. In our next course,
we'll move on to explore network communications.

CompTIA Server+ (SK0-005): Network Communications
Learning the various aspects of network communications hardware and software is
vital to anyone working in a server environment. Use this theory and practice-based
course to get a grip on configuring virtual networks and virtual network interface
cards (NICs). Explore how network communications hardware and software map to
the OSI model. Identify different types of communication networks such as LAN and
VLAN. Then, learn how network switching and network routing work. Moving on,
practice deploying a hypervisor virtual network. Next, practice configuring IP routing
in the cloud and virtual network peering. Then, identify various types of NICs and
cables. And finally, practice configuring on-premises and cloud-based virtual machine
NICs. Upon completion, you'll be able to identify various network models and
configure virtual networks and virtual NICs. You'll also be a step closer to being
prepared for the CompTIA Server+ SK0-005 certification exam.
Course Overview
Topic title: Course Overview

Network communications hardware and software can be mapped to the seven-layer


conceptual OSI model. The OSI model defines layers that have specific
functionalities, such as getting bits transmitted through the network, IP routing,
network encryption, and so on.

In this course, I'll first examine each conceptual layer of the OSI model. Next, I'll
differentiate between LANs, WANs, VLANs and Wi-Fi networks. I'll then explore
how network switching works followed by exploring how network IP routing works.
Moving on, I'll configure a virtual network in an on-premises virtualization
environment. Next, I'll define various types of NICs and cables followed by
configuring a virtual NIC. This course is part of a collection that prepares you for the
CompTIA Server+ SK0-005 certification exam.

The Open Systems Interconnection (OSI) Model


Topic title: The Open Systems Interconnection (OSI) Model. Your host for this session
is Dan Lachance.

Server technicians have to support network environments in which servers live and so
all server technicians will benefit from understanding the Open Systems Interconnection
or OSI model. So, what is this anyway? The OSI model is a seven-layer conceptual
model. Now it's designed so that we can take network communication hardware like
switches, routers, network cards and software like various types of TCP/IP protocols
like TCP or SMTP or HTTP. And we can map all of that to this seven-layer model to
help explain its function. Each layer has a defined function in a
network environment. So, if we say for instance, I've got a layer 4 firewall that's going
to be very different than saying, I've got a layer 7 firewall and that's going to be very
relevant if you understand the seven-layer conceptual model. So, let's go through it.

Layer 7 up at the top is the application layer, layer 6 is presentation, 5 is session, 4 is


transport, 3 is the network layer, 2 is the data link layer, and finally, the very bottom
layer 1 is the physical layer. Now, one way to remember this is to take the first letter
of each of these layers starting at the top and come up with some kind of a saying and
one common one that's used in this context is, all people seem to need data
processing, so APSTNDP to go all the way through down to give you a little reminder
of at least with the first letter is for each of the layers in the OSI model. But let's talk
about the role of each of these layers. So, the OSI model has layer 7 which is
otherwise called the application layer. Now this deals with things like high-level
application protocols like HTTP and SMTP. Layer 6 is called the presentation layer.
It's really primarily involved in things like network compression, network encryption,
and differing character sets.

Layer 5 is the session layer, which deals with the establishment, the maintenance, and the eventual tear-down of a network session. Layer 4 of the OSI model is the transport layer. It deals with transport protocols that control end-to-end communication between two nodes on a network. So, think of TCP, the transmission control protocol. This is a reliable, connection-oriented network protocol, meaning that with TCP we have to establish a session before we can transmit something to a target host, and for everything that we send to that target host, we must receive an acknowledgement that it was received. So yes, there's overhead, but it's a careful and reliable protocol. UDP, the user datagram protocol, is the polar opposite. This is an unreliable, connectionless protocol that essentially sends out network transmissions on a best-effort basis.

We hope that the target receives it, because we're not checking. Port numbers, such as TCP port 80 that HTTP web servers normally listen on, are a layer 4 type of number or address. Layer 3 is the network layer, which deals with packet routing between remote networks, and as a result, it deals with things like IP addresses. So, an IP address is a layer 3 address. Next, we have layer 2, the data link layer, which deals with things like how machines can access the transmission media, whether it be wired or wireless, and it also deals with MAC addresses. So, if we have a layer 4 firewall, layer 4 being the transport layer, that means our firewall can do everything implied by the lower layers all the way up to layer 4. The firewall could allow or deny traffic based on things like port numbers, protocol type, IP addresses, MAC addresses, that type of thing.

Finally, we have layer 1, the physical layer. The physical layer of the OSI model really concerns itself with things like cables and connectors, whether you have wired or wireless connectivity, and all those technical details. And what's really interesting here is we can break down the headers in a packet transmitted over the network and map them to the OSI model. So for instance, on an Ethernet network you have an Ethernet header which contains source and destination MAC addresses, so that's layer 2. Then if it's an IP packet, we would have an IP header following that. Well, that's layer 3 because it deals with IP addressing. Now there's more than just a source and destination IP address in the IP header, but we're concerned right now with the addressing part of it. And then after that, the kind of protocol we're using determines whether we're going to see a TCP or a UDP header. When I say protocol we're using, I mean high-level protocol. If you're using DNS from a client perspective to query a DNS server, that uses UDP as a transport, but if you're using HTTP to talk to a web server, well, that uses TCP.

So, whether TCP or UDP is used is really based either on what the software developer chose, or it might be a configuration option, depending on what you're using. The layer 4 TCP or UDP header deals with port addresses. Then you have the protocol header for HTTP, which could include layers 5 through 7, until we get to the eventual packet payload data. A layer 7 firewall, then, can look at everything in the transmission, including the data, maybe the URL that a user typed in or anything like that. So, layer 7 firewalls are much more sophisticated than layer 4 firewalls, and that's where the OSI model can become very important.
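
To make that header stacking concrete, a packet capture tool can display several of these layers at once. Here's a minimal sketch on a Linux host; the interface name eth0 is a placeholder:

# -e prints the Ethernet (layer 2) header, including source and destination MAC addresses
# -n leaves layer 3 IP addresses and layer 4 port numbers unresolved so you see them raw
sudo tcpdump -i eth0 -e -n 'tcp port 80'

Each line of output then shows layer 2 MAC addresses, layer 3 IP addresses, and layer 4 TCP ports for a single packet, mirroring the header order just described.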

Types of Communication Networks


Topic title: Types of Communication Networks. Your host for this session is Dan
Lachance.

A solid understanding of network technologies is crucial for a server technician. Let's take a few minutes to talk about different types of communication networks. Now, the first thing to consider is that servers can communicate, of course, on a wired network, with the server plugged physically into an Ethernet switch, which allows network connectivity with other devices that can reach that switch, either directly or indirectly. Of course, you can also install wireless network adapters directly in server hardware, or if you're running a virtual machine server, it might use the hypervisor host's wired or wireless network connection. So, there are many ways through which servers can communicate on a network. With an on-premises Local Area Network or LAN, the IT technicians that are responsible for network communications have full control and flexibility over that network, because they have physical access to the hardware and therefore can configure the network accordingly.

So pictured in our diagram on the far left, we have a wireless router that has a printer connected to it as well as a smartphone, but the wireless router in turn is plugged into an Ethernet switch. So, it's got a wired connection, and that Ethernet switch also has a station plugged into it. Our diagram consists of two Ethernet switches that are trunked, or linked directly, together. The second switch has a server plugged into it. And then the second switch has a connection to a router, Router A in our diagram, and between Router A and Router B, we have a Demilitarized Zone or a DMZ. This is often called a screened subnet. What it means is that we can place services here in the DMZ that should be accessible, for instance, from the public Internet. So, the DMZ can either be an individual port in a router, or it might actually be a router plugged into an entire dedicated network switch with numerous ports and multiple publicly accessible servers connected. Router B in our diagram would be our perimeter firewall to the Internet, where Router A would be our internal firewall. The point is this: a Local Area Network can consist of many different network infrastructure devices, and being a good server technician means understanding how the network is configured. That's often done through documentation, because troubleshooting in a hurry becomes very difficult if you don't already know how something is configured, such as at the network level. Now a Virtual Local Area Network or a VLAN is called virtual because it can be configured within a single Ethernet switch. Pictured in this diagram, we have two switches, each of them a four-port switch, and the switches are linked together with switch port trunking, so they're wired directly together. Now that can be beneficial, for example, if you want the members of a VLAN to span more than one physical switch.

In our diagram, VLAN 1 is denoted as the two rightmost computers plugged into the switch, whereas VLAN 2 is denoted as the two leftmost computers plugged into the switch. What does this mean? It means that the first two computers on the left are treated as if they were on their own entire physical LAN segment. And the same is true for the two rightmost computers. So, the first two computers are kept separate from the second two. Now, they can communicate if routing has been enabled.

A layer 3 switch not only supports traditional network switching, including VLANs, but it also allows routing between them if you want that to be configured. VLAN membership could be defined by the physical switch port that the device is plugged into, or perhaps VLAN membership might be determined by a MAC address or a configured IP address. But what it means is that we can segment our network without physically having multiple switches dedicated to a single VLAN; you can have multiple VLANs within a single switch. In the cloud, specifically in our screenshot in Microsoft Azure, we can also configure Virtual Networks or VNets.

A screenshot titled Microsoft Azure Cloud Virtual Network (VNet) appears. It


displays a page named Create virtual network. It has 5 tabs: Basics, IP Addresses,
Security, Tags, and Review + create. The IP Addresses tab is opened. It contains a
field named IPv4 address space. The following range is displayed under it:
10.0.0.0/16 10.0.0.0 -10.0.255.255 (65536 addresses). Also, the page displays several
checkboxes, namely, Add IPv6 address space, Subnet name, and default.
In our screenshot, we are configuring the Virtual Network IP address space, the
default of which has been assigned as 10.0.0.0/16.

So, a 16-bit mask means that 10.0 identifies the network. And we could add multiple IPv4 and IPv6 address spaces to be used by this Cloud Virtual Network. Take note that in Microsoft Azure at the top here, you would be defining the IP address range for a VNet, but down at the bottom of the screenshot, you can define one or more subnets that must fall within the IP ranges configured above. The next screenshot we have is configuring a Virtual Network in VMware Workstation. So, when you're working with on-premises hypervisors, you can configure Virtual Networks at that level. You can add networks, and you can specify the IP address ranges they will be using.

A screenshot titled VMware Workstation Virtual Networks appears. It displays a


dialog box labeled Virtual Network Editor. At the top, the page displays a list that has
the following headers: Name, Type, External Connection, Host Connection, DHCP,
and Subnet Address. Below this, the page has three buttons: Add Network, Remove
Network, and Rename Network. Also, the page contains various radio button options.

For example, in our screenshot, one of the VMnets that we have is VMnet5, which is a host-only network with a subnet address of 172.16. So, when you configure hypervisor virtual networks, you then configure the virtual machines themselves, specifically their network cards, to be connected to a specific virtual network. So, when you're configuring networks, whether physical or virtual, there are a number of considerations you must keep in mind.

The first one is, do you need one or more VLANs? For example, for human resources computers, you might want to organize them onto a separate VLAN for security purposes, or if you have an iSCSI SAN, you might want a separate VLAN to optimize performance. You then have to determine if you will be using IPv4 and/or IPv6 addressing, then determine the specific IPv4 and IPv6 address ranges that you will be using. Determine if you'll be using DHCP with central IP configurations delivered to clients over the network, or if you will manually configure TCP/IP settings on individual devices. Then you have to configure things like a default gateway to allow traffic to get out of the Local Area Network to remote networks. You have to determine if you're going to be using, for example, custom DNS name resolution server(s). This way you can resolve names to IP addresses, or you might point to Internet DNS name servers, or you might have a hybrid of both custom internal DNS servers and Internet servers. All of these things are crucial to understand, not only at the technical level, but also in terms of the implementation, so you can properly support the server environment.
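
To tie this back to the Azure VNet screenshot earlier, a virtual network like that one could also be created from the Azure CLI. Here's a minimal sketch, assuming a resource group named MyResourceGroup already exists; the resource names are hypothetical, and flag spellings can vary slightly between CLI versions:

# create a VNet with a 10.0.0.0/16 address space and a default subnet inside it
az network vnet create --resource-group MyResourceGroup --name MyVNet --address-prefixes 10.0.0.0/16 --subnet-name default --subnet-prefixes 10.0.0.0/24

The subnet prefix must fall within the VNet's address space, just as the portal screenshot showed.
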
Network Switching
Topic title: Network Switching. Your host for this session is Dan Lachance.

When we talk about network switching, what we're talking about is a physical central connectivity device that allows devices to communicate with each other on a local area network. And so network switching is really focused on layer 2 addresses. In the OSI model, you might recall that layer 2 is the data link layer, and one of the items that applies to layer 2, the data link layer, is the MAC address: the media access control address of the network interface card. MAC addresses are 48 bits long, and the first half of a MAC address actually identifies the vendor, such as Intel or 3Com or HP or whoever it is that manufactured that network interface card.
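
Before the Windows example below, note that most operating systems can list NIC MAC addresses from the command line; a quick sketch (output layouts vary):

getmac /v
ip link show

The first is a stock Windows command that lists each adapter's physical address; the second is the Linux equivalent, where the link/ether field in the output is the MAC address.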

So it's important to have a solid understanding of MAC addresses. On a Windows Server, what we can do is issue the ipconfig /all command, which was done here, and which results in a listing of configuration items for each and every network interface. In this particular case, it's a wireless LAN adapter called Wi-Fi, and one of the interesting items here is the physical address. That's just another term for MAC address. So the MAC address is 48 bits long, and it's expressed in hexadecimal or hex. Now, hex is base 16, and what that means is the characters that we have to work with include 0 through to 9.

But then instead of 10 we have the letter A, instead of 11 we have B, and so on, all the way up to the letter F, which denotes 15. Now you might say, I thought it was base 16. Well, it is, but because we start counting at 0, to get 16 placeholders we only go up to 15. So 0 through to 9, A through to F. In this particular example, the first three parts of the MAC address, 18-56-80, identify the manufacturer of that NIC. And you can look that up online if you must, though many

A slide titled Viewing a Windows Device MAC Address appears. It contains various
details. The following detail is highlighted: Physical Address18-56-80-C3-68-BA.

common network scanning tools will do that for you automatically out of the box. Network switches supersede the old hubs; back in the 70s, 80s, and 90s we had hubs, which kind of looked the same.

They had multiple ports and they served as central connectivity points on the network where you plug things in, but there is a big difference, and one of the big differences between a switch and a hub is that with a switch, each physical switch port is its own collision domain. Now that has no meaning if we don't know what a collision domain is. So let's take a look at this diagram, where we have a four-port network switch and we have four computers, each plugged into an individual port: computers A, B, C, and D. So, what happens in this particular case? Let's say that computers A and B need to transmit and communicate information over the network.

Well, that can happen simultaneously or concurrently while computers C and D communicate. So we would have two network conversations happening at the exact same time within one network switch, and that's fine. That's what network switches do. That is not possible with a standard network hub. With a network hub, the whole thing is one big giant collision domain, which means that if computers A and B want to communicate, well, they would communicate, but computers C and D would have to wait their turn.

You can't have concurrent conversations within an Ethernet hub. Network switches make that possible. Layer 2, remember, is the OSI data link layer, and layer 2 switches remember which MAC addresses are plugged into which ports. So instead of a machine trying to communicate with another machine on the LAN, which at the end of the day needs the MAC address, sending out a broadcast every time, the switch maintains its own memory table. It learns and remembers where certain MAC addresses are plugged in, so subsequent requests to communicate with a particular MAC address can be serviced from the switch's MAC address table instead of sending out network broadcasts. Layer 3 switches are similar in that they do everything that layer 2 switches do, but they also have routing capabilities built in.
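
On many managed switches, you can view that learned MAC address table directly from the management CLI. As a hedged example, on Cisco-style IOS switches the command is:

show mac address-table dynamic

This lists the dynamically learned MAC addresses along with the port each one was learned on; the exact command differs on other vendors' switches.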

So if you wanted to route between two VLANs within a layer 3 switch, you could do it without the need for any external routing equipment. With network switches, you can configure them locally, as in, you have physical access to a wiring closet, a server room, or a data center location where you can physically plug in a console port cable and connect it to your configuration device, like a laptop computer.

And then you use the appropriate tool to configure the device locally. So that is completely possible, but it requires physical access. Another way to configure a switch is remotely from a different network location, and so remote configuration is possible over HTTP. Modern switches have HTTP web server stacks built into their firmware, and you can configure it for HTTPS with your own custom PKI certificate. The idea being that you can use a web browser to make a remote connection to the switch and configure it using a GUI environment.

Of course, what's been around for a long time and what you can still continue to do is
remotely manage a switch over SSH or secure shell. Secure Shell is designed to be
used for remote command line management. So depending on the type of network
switch you have, you might have to have a local console port connection first in order
to enable SSH, or you might be able to do that through HTTP if the SSH daemon is
not enabled by default.

Now the other consideration is that when you have an Ethernet switch by default all
switch ports are in the same VLAN, the same virtual local area network. If you want
to carve it up further, you can, but that is the default setting that you will find.

Network Routing
Topic title: Network Routing. Your host for this session is Dan Lachance.

Routing is a very important network communication topic. With network routing, we're talking about transmitting packets to remote networks using the most efficient route. Now a router itself is an OSI layer 3 device, layer 3 being the network layer, and a router has to have at least two interfaces that connect to different networks, if not more than two. So, a large company might use dozens of routers internally to interconnect all of its internal networks, or even to interconnect networks in different geographic locations through a private Wide Area Network circuit. The Internet actually consists of millions of interconnected routers. So, we know that routers apply to layer 3, the network layer, which means that routers are not concerned with things like MAC addresses, at least not directly. When it comes to routing, they are concerned with IP addresses, which are layer 3 addresses.

So with network routing, each device, including routers themselves, should be configured with a default route, otherwise called a default gateway. What this means is that when we have network traffic destined for a target remote network, so not the Local Area Network, and there's not an explicitly matching route in the route table, then the default route or the default gateway is where the traffic is sent. So, any non-local traffic gets sent to the default gateway. What's interesting about this is that each individual packet has what's called a time-to-live or TTL value. This is actually a field in the IP header, and this TTL value gets decremented by at least a value of one, if not more, each time the packet goes through a router. The purpose of this is that once the TTL value is depleted, the packet is discarded and is no longer routed. You can also view network routing hops, a hop meaning a router that you traverse to get to another location.

You can view network hops to a target using commands like tracert in Windows, or the spelled-out traceroute in Linux. So, this is different than ping, where ping would only tell you whether or not you're getting a reply from a host you're pinging, but traceroute will also tell you each hop, or each router, that is being traversed to get to that ultimate target. Here we have a screenshot of a Windows device routing table. Every device has a routing table, even Windows client workstations, and that's where this comes from. So, what's happened here is the route print command was issued. And if we look carefully, one of the interesting items in the resultant output is the Network Destination of 0.0.0.0; in IPv4, this means the default route. So, if we look at the Gateway column, we have a value of 192.168.4.1. So, from this computer's perspective, when traffic needs to get out to a remote network and there's not an explicit match in the routing table for the target network address, it's going to be sent to the default gateway configured here.
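
As a quick sketch of those commands on Windows (the hostname below is a placeholder):

route print
tracert www.example.com

The Linux equivalents are ip route show to display the local routing table and traceroute www.example.com to list each router hop on the way to the target.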

Now, routing devices themselves have to share routing information with other routers. You can configure the scope to which that applies, of course, and configure authentication options depending on the routing protocol you're using. But that's really the topic here: route propagation protocols, one of which is the Routing Information Protocol or RIP. Now RIP has a couple of versions, but generally speaking, RIP is an older route propagation protocol configured on routers. The idea is that RIP tracks the next router after itself on the way to each destination. Depending on how RIP is implemented and the version of RIP you're using on your routers, it's generally considered to be chatty: it keeps sending routing updates to routers even if there are no changes.

However, in enterprise-class networks you're probably more likely to find Open Shortest Path First (OSPF) used. So, RIP is good for smaller networks. OSPF works well in larger networks, because OSPF is designed to track the entire network topology in a database, and updates are only sent when there are changes in links. So, Open Shortest Path First tries to find the quickest way to transmit a packet to an ultimate destination. RIP does the same thing, but RIP bases it only on the hop count, in other words, on the number of routers that must be traversed. With network routing, each router maintains a routing path table, and route propagation protocols, some of which we just talked about, are used to share that information among routers dynamically. You could instead statically configure routing tables manually, but that's a lot of work and it doesn't scale well, although it is possible.

So, each router then examines the target network address in the packet's IP header. In other words, it looks at the network portion of the destination IP address to determine the best way to get that packet to the target network. Pictured on the screen, we have a screenshot of configuring a cloud routing table, specifically in the Amazon Web Services or AWS cloud.

A slide titled AWS Cloud Route Table appears. It displays a page that contains 5 tabs.
Their names are: Routes, Subnet Associations, Edge associations, Route propagation,
and Tags. The Routes page is opened. It has a table with 4 column headers:
Destination, Target, Status, and Propagated.

Now down at the bottom we have two routes. The first route has a destination of 172.31.0.0/16, and the target here in the cloud is local. In other words, that is the local IPv4 address range for the subnet that is associated with this route table. Notice that we have a link to the right of the word Routes here, where we could look at the subnets that are associated with, and will use, this route table. The bottom route here is the IPv4 default route of 0.0.0.0/0. The target here is a device called an Internet gateway, which will allow connectivity out to the Internet for any cloud resources deployed into subnets associated with this route table.
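
The same route table can be inspected from the AWS CLI. A minimal sketch; the route table ID below is a hypothetical placeholder:

aws ec2 describe-route-tables --route-table-ids rtb-0123456789abcdef0

The output includes each route's destination CIDR and target, mirroring what the console screenshot shows.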

Deploying a Hypervisor Virtual Network


Topic title: Deploying a Hypervisor Virtual Network. Your host for this session is Dan
Lachance.

Networking is an integral part of server configuration and management. In the physical sense, we have to think about using network switches, wiring them together, and wiring devices to wall jacks. But in the virtual world it's a little bit different; we have to think about configuring virtual networks.

Now, what doesn't change is the fact that we still have to consider things like the IPv4 or IPv6 addressing ranges that we want to assign to servers, and then configuring server network cards to use those IP addresses and connect to the appropriate network.

A window titled VMware Workstation is opened. It displays various file tabs, namely,
Home, Windows Server 2019-2, Windows 10 x64, and Windows Server 2019. The
Home file tab is opened. It displays a title named WORKSTATION 16 PRO.
Underneath, it displays three options: Create a New Virtual Machine, Open a Virtual
Machine, Connect to a Remote Server.

So in this particular case, I'm going to be configuring a hypervisor virtual network. Specifically, I'm using VMware Workstation 16 PRO. Now this is a type 2 hypervisor that runs as an app within an existing operating system, in my case Windows 10. While that may not be suitable for enterprise mission-critical app usage, it certainly is a great idea for testing, for sandbox configurations, for software developers, that type of thing. So to get started here in VMware Workstation, I'm going to go to the Edit menu and I'm going to choose Virtual Network Editor
The Virtual Network Editor consists of a list with the following column headers:
Name, Type, External Connection, Host Connection, and DHCP. Down below, the
page has various radio button and checkbox options.

and then I'm going to click on the Change Settings button in the bottom right, which requires elevated administrative permissions. OK, so I've got a couple of existing virtual networks here. And they are of different types.

As he clicks the Change Settings button, the radio button options under the VMnet Information section become active. Their names are: Bridged (connect VMs directly to the external network), NAT (shared host's IP address with VMs), and Host-only (connect VMs internally in a private network).

A Bridged network means that you can connect virtual machine NICs to it and they will have a connection to whatever network the real network card on your host is pointing to.

So if you're connected to a wired network or wireless network, that is where your virtual machine will actually be treated as another device. It will participate in DHCP on the real network, get an IP address if it's configured to do that, and so on. But what I want to do is add a new network for testing purposes, so I'm going to click the Add Network button and it says, Select network to add. OK, well I can choose from this list.

A drop-down menu list appears. It contains various options, such as, VMnet2,
VMnet3, VMnet4, and so on.

For example, I'm going to choose VMnet5 and I'm going to choose OK. Now that's been added as a Host-only network, and down below, Host-only means that VMs connect internally in a private network. Well, that's actually what I want, perhaps for sandbox testing, to ensure that VMs talk to one another but don't talk out on the real network. So down below I can specify the IP address range. In this case it's defaulted to 192.168.233.0 and the Subnet mask is 255.255.255.0. So what this means is that the network is being identified as 192.168.233.

Now I could change that if we wanted to, but I'm going to leave that setting. There's also an option to Use local DHCP to distribute IP addresses to VMs, so we can click DHCP Settings to specify that.

A dialog box opens named DHCP Settings. It displays various fields, namely,
Network, Subnet IP, Subnet mask, and Broadcast address.

So what this is doing is looking at our subnet range down below and using the VMware DHCP service to hand out IP configuration information to clients connected to this network. OK, that's perfect. I'm OK with that, so I'm going to go ahead and just click OK. And then I am going to click the OK button. So we've just defined VMnet5. It just restarted a couple of services ever so quickly.

So what I want to do is connect one of my virtual machines to that network. I've got a server here that I'm going to select; it's a Windows Server 2019 virtual machine.

He opens the Windows Server 2019-2 tab. The left pane displays two buttons: Power
on this virtual machine and Edit virtual machine settings. Below this, the pane lists
various options under the headings Devices and Description. The main pane displays
a black colour screen.

I'm going to click Edit virtual machine settings. And what I want to do is I want to
select the Network Adapter

The Virtual Machine Settings dialog box opens. It contains 2 sections: Hardware and
Options. The Hardware section contains 2 panes. The left pane displays a list that has
the following column headers: Device and Summary. The right pane contains
information about Memory.

and notice that currently it's connected to Bridged.

He selects an option named Network Adapter. The right pane displays various radio
button options, namely, Bridged, NAT, and Custom: Specific virtual network.

In other words, this network adapter is as if it was actually on the real network that the
host running VMware is on, and that's fine.

But what I want to do is choose Custom: Specific virtual network, and I'm going to select VMnet5, which as we know is a host-only network, and then I'm going to click OK. So now what I want to do is fire up that virtual machine to see if it will acquire an IP address within the range that we designated for VMnet5. So once I've signed into the virtual machine, I can then go into the start menu. I'm going to search for cmd to open a Command Prompt, and I'm going to type ipconfig, and sure enough, this server is configured for DHCP and so it has acquired an IPv4 address from the DHCP service for VMware.

Based on our configuration for VMnet5, the address here starts with 192.168.233, which is perfect. If I were to do an ipconfig /all, we would then get a lot of other details, such as DHCP Enabled being Yes, and then we also have the DHCP Server IP address and details related to that, such as when the IP address lease was obtained and when the IP address lease expires. And now that the virtual machine is running, I'm just going to press Ctrl+Alt so that I can get back to my VMware menu.

If I were to go into the Workstation menu, down under VM, Settings, I can also change the network connection while the virtual machine is running. So if I were to choose my Network Adapter, let's say I go back to Bridged and click OK. If I were then to type ipconfig /renew, it would go through the DHCP process once again, except this time, because we're connected to a different network, Bridged on the real network, the IP address subnet is different. Now we've got 192.168.4 as a network prefix. So interestingly enough, we can easily switch network card connectivity to virtual networks while VMs are running.
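
As a quick sketch of that lease cycle, run from a Command Prompt inside the VM:

ipconfig /release
ipconfig /renew
ipconfig /all

The first command gives up the current DHCP lease, the second requests a fresh lease on the currently attached network, and the third confirms the new address, DHCP server, and lease times.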

Configuring IP Routing in the Cloud


Topic title: Configuring IP Routing in the Cloud. Your host for this session is Dan
Lachance.

As a server technician, you don't need to be an expert in IP routing, but you do need to have a solid grasp of routing fundamentals. That means understanding how to troubleshoot routing issues and how to solve those problems quickly, whether on-premises with physical servers and physical networks, with hypervisors and their related virtual networks, or with virtual networks in the cloud, which is what we're going to focus on here. So here in Amazon Web Services, I have a virtual machine instance called

A page called Instances| EC2 Management Console appears. It contains a search bar
on the top. The left pane appears with the following options: EC2 Dashboard, EC2
Global View, Events, Tags, Limits, Instances, Images, and so on. The right pane
displays the following column headers: Name, Instance ID, Instance state, Instance
type, Status check, and so on.

WinSrv2019 and its state is showing that it's up and Running.


If I select that virtual machine by putting a checkmark in the box, that opens up the properties down below, and what I want to do here is go down to Networking, because down under Networking I will see

The following tabs appear down below: Details, Security, Networking, Storage, and
Tags. The Networking tab displays various details under the following headings:
Public IPv4 address, Private IPv4 addresses, VPC ID, Subnet ID, and so on.

the VPC it is associated with, that is, a virtual network. What I'm really interested in is the subnet that it's associated with, and so we have a link to the subnet. I want to click on that subnet because I want to check out the routes related to it. Here in Amazon Web Services you can configure

The subsequent page opens that displays a table. The table has the following column
headers: Name, Subnet ID, State, VPC, and IPv4 CIDR.

routes at the subnet level and those routes will be made known to the virtual machines
in that subnet.

So here is Subnet2, so I'm going to click on its link to open up its details and what I'm

A page titled subnet-0e20db7b25d9f5e39 / Subnet2 appears. It displays details and information under the following headings: Subnet ID, Subnet ARN, State, Network ACL, Route table, and Outpost ID.

interested in is the route table it is associated with. So we've got a virtual machine, which is a separate cloud resource from a VPC, which is a virtual network, which is a separate resource from a subnet, which is again a separate resource from the routing table. These are separate cloud resources that you can manage in the AWS cloud. So let's click on the link for the routing table. I'm interested in looking at the Routes; what we have here by default is a route for the local subnet address range, but that's it.

A page titled Route tables opens in the main pane. It displays a table with the
following column headers: Destination, Target, Status, and Propagated.

There's only one route; notice there is nothing here about getting traffic out to the Internet.

All we have is a route to the subnet address itself. So what we're going to do is take a look at that virtual machine and see if it can actually get out to the Internet. So let's go back to EC2 here. I'm just going to pop that into the search bar at the top,

He reopens the Instances page. At the top, the page displays various drop-down
buttons, namely, Instance state, Actions, and Launch Instances.

and when you want to connect to a virtual machine you can select it. You can go to
Actions and down in Security you can choose Get Windows password.

The menu displays various options, namely, Connect, View Details, Networking,
Security. The Security option further displays the following sub-options: Change
security groups, Get Windows password, and Modify IAM role.

Remember, in Amazon Web Services, when you initially deploy an instance or virtual machine, you specify a key pair. Amazon keeps the public one, but you have the private one, and so we have to Browse to the key pair file.

So I've got that, and I'm going to go ahead and select it.

The Get Windows password page displays the following label: Key pair associated
with this instance VM_Key_Pair. Below, it displays a Browse button.

OK, so now that I've supplied the PRIVATE KEY part, I can scroll down and decrypt the administrator password, and I can copy it for future reference, because until I change it, it will always be the same.

The page now displays the following headers: Private IP address, User name, and
Password.

OK, let's try to RDP into this virtual machine. Now, I've got its Private IP address here, but I can go back, select that VM, and copy its Public IP address, which I will do, and then try to RDP into it.

The only problem is we can't even remote into that virtual machine to test if it can connect to the Internet. So that either means there's a firewall problem, or the virtual machine isn't running, or routing is an issue; it's routing that's the problem here. So what I'm going to do is go to the VPC Management Console here.

A drop-down menu appears that displays the following options: VPC, AWS Network
Firewall, and Detective.
Because what I want to do is go directly to the Route Tables view; over on the left, there's our route table.

Actually, before we go into that, let's go to the Internet Gateways view.

The Route tables page displays a table. It has the following column headers: Name,
Route table ID, Explicit subnet associations, Edge associations, and VPC. It lists the
following routing table: rtb-94b75ce5.

A page opens in the main pane titled Internet gateways. It has a table that displays a
single entry. It reads: igw-755b8b0f.

I've already created what's called an Internet gateway here in AWS. The Internet gateway is a resource that allows connectivity to the Internet, and so I need to route to it. So I'm going to go to my Route Tables view and open up the routing table. I want to make sure that Subnet2 is associated with it. We already looked at this, but let's just verify here.

Well, it looks like the subnet is not. Let's double-check that: Edit subnet associations. Let's make sure Subnet2 is associated here

A page titled Edit subnet associations appears. It displays a table under the heading
Available subnets. The table lists two entries: Subnet2 and Subnet1.

because that's where our VM is deployed. And back here in the routing table,

He opens the rtb-94b75ce5 page.

I want to click Edit routes because, you might recall, we only have one route, which was for the local subnet itself; that's not going to get us to or from the Internet. So, Edit routes,

The Edit routes page displays a table that has 4 column headers: Destination, Target,
Status, and Propagated. It also displays two buttons named Remove and Add route.

what I want to do is add the default IPv4 route, 0.0.0.0/0.

Now, what do I want to route to? If I click in the Target field, I want to route to an Internet Gateway. That's what we just looked at, so I'm going to select it from the list and click Save changes, and that's it. This is the beauty of software-defined networking or SDN in the cloud: we don't have to have any specialized knowledge of how to configure all of this stuff on underlying routers and so on. We just have to know where to click and what to type in, here in the interface, and away we go. So let's try this again, and what I really mean by that is, let's see if we can even remote into that virtual machine.

Even that would be enough of an indication that things are working. So if I select that VM and go down under Details, I can copy its Public IP. Let's try to do this once again. So, I'm going to pop in that IP address.

A window titled Remote Desktop Connection opens. It displays two fields: Computer
and User name. Also, it has a button named Connect.

And I'm going to click Connect. Now we know it's working immediately because
we're being prompted for credentials that didn't happen before,

A dialog box named Windows Security opens. It displays two fields under the header
Enter your credentials. Their names are User name and Password.

so I'm going to go ahead and pop in the administrator password along with the user
name, and then we'll sign in. OK, now that I'm signed into that virtual machine, let's
just open up a Command Prompt from the start menu. So start menu and I'm going to
run cmd. And we'll just go ahead and enlarge it so it's somewhat legible, so I'll go to
Properties, go into the Font, maybe pump it up to 24.

That's a little better, and now what I want to do is simply ping www.google.com. And we know that we now have a route to the Internet: number one, because we were able to RDP into this virtual machine in the first place, and number two, because here we are in the VM and we're able to go out to Google.com. So once you've got routing set up in the cloud, it's business as usual within virtual machine operating systems. They have no knowledge of whether we've got routing configured on real routers or whether we're working in a virtualized environment on-premises or in the cloud.
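
The same fix could also be scripted with the AWS CLI. A minimal sketch, where the resource IDs below are hypothetical placeholders:

# create an Internet gateway, then attach it to the VPC
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-0abc1234 --vpc-id vpc-0abc1234
# add the IPv4 default route pointing at the Internet gateway
aws ec2 create-route --route-table-id rtb-0abc1234 --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0abc1234

Once the default route is in place, instances in subnets associated with that route table can reach the Internet, exactly as the console demo showed.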

Configuring Cloud Virtual Network Peering


Topic title: Configuring Cloud Virtual Network Peering. Your host for this session is
Dan Lachance.

While the methods that we use to configure networking in the public cloud might differ from traditional network configuration, the underlying concepts are really just the same.

A webpage labeled AWS Management Console appears. It contains a search bar on the top. It further contains 4 sections: AWS services, Build a solution, Stay connected to your AWS resources on-the-go, and Explore AWS.

So in this demonstration, what we're going to be doing is linking, or peering, two cloud-based virtual networks directly together. Why would you do this? Well, when you link networks directly together, it allows resources in one network to communicate with resources in the other using private IP addresses instead of having to use public IP addresses. Plus, in the case of the public cloud, in this case Amazon Web Services, it uses the AWS global backbone network.

So, the traffic between peered virtual networks or VPCs as they're called, does not go
over the Internet. So, to get started here in the AWS Management Console, first we're
going to create two VPCs and then we will link them together. So, to get started, up
here in the search bar I'm going to search for vpc so that we can go to the VPC
Management Console. On the left under VIRTUAL PRIVATE CLOUD.

A page displays with the heading VPC Management Console. The left pane contains the following components: VPC Dashboard, EC2 Global View, VIRTUAL PRIVATE CLOUD, Your VPCs, Route Tables, and so on. The right pane contains 2 tabs: Launch VPC Wizard and Launch EC2 Instances. Currently, Launch VPC Wizard is highlighted. It further contains the following sections: Resources by Region, Service Health, Settings, Additional Information, and Transit Gateway Network Manager.

I'll choose Your VPCs. Remember, in Amazon Web Services, a VPC is just a virtual
network.

The component called Your VPCs is highlighted in the left pane. The right pane now appears with its subsequent information. It further contains a search bar and a table with various column headers: Name, VPC ID, State, IPv4 CIDR, IPv6 CIDR, and so on.

So, I'm going to click the Create VPC button in the upper right.

We're going to call this VPC1, and for the IP address block, why don't we just go with 10.1.0.0/16? Now, we don't make this up randomly as we create VPCs. This has to have been planned carefully by the network infrastructure team. So assuming that that's been done,

The page with the heading Create VPC appears underneath VPC Management Console. It contains 2 sections called VPC settings and Tags. The settings section contains the following subsections: Name tag-optional, IPv4 CIDR block, IPv6 CIDR block, and Tenancy. The Tags section contains the following subsections: Key and Value-optional.

it is just as easy as filling in the blanks, because that's not the hard part. The hard part is the planning.

I'm going to scroll all the way down towards the bottom and just Create the VPC.

A page labeled vpc-08d2ba27f5e138d84 / VPC1 appears. It contains various details such as: VPC ID, State, DNS hostnames, DNS resolution, and so on.

So we've got that VPC created. If I go to the Your VPCs view, there's VPC1, but I want to create a second VPC. So, I'm going to click the Create VPC button, and we'll call this one VPC2. And again, we would have planned the IP addressing for this VPC. So, let's assume I've done that and I'll fill in the IP address range using CIDR notation.

Well, CIDR notation means we've got the slash with the number of bits in the mask, which starts counting from the left. So, in this case, the network is 11.1. OK, there's nothing else I'll change here, so let's go ahead and create the second VPC. Let's go back to the VPCs view. OK, so we're set to go. We've got two VPCs.

What I want to do is peer them or link them together. So, in the left-hand navigator
I'm going to scroll down until I see Peering Connections and I'll select it. Now there
are no peering connections that have been configured.

The component called Peering Connections is highlighted in the left pane. The right pane now appears with its subsequent information. It further contains a search bar and a table with various column headers: Name, Peering connection ID, Status, Requester VPC, Accepter VPC, and so on.

So I'm going to click Create peering connection in the upper right. And I'm going to call this Peer_VPC1_VPC2, and then we're prompted to select the local VPC,

The page with the label Create peering connection displays underneath VPC Management Console. It contains 2 sections called Peering connection settings and Tags. It further contains the following subsections: Name-optional, Select a local VPC to peer with: VPC ID (Requester), Select another VPC to peer with: Account and Region, and VPC ID (Accepter). The Tags section contains the following subsections: Key and Value-optional.

which again is just a network in the cloud to peer with, OK. Well, why don't we
choose VPC1 and then the CIDR IP address range shows up. OK, that's good.

We want that. Now, you can peer VPCs within your same AWS account. You can
even peer VPCs with different AWS accounts. Some larger enterprises that use
Amazon Web Services will have separate AWS accounts perhaps for different
administrative units at the IT level, different countries, different projects, different
subsidiary child companies, that type of thing. I'm going to leave it on My account
and I'm going to leave it in the same region but, of course, you can select another VPC
to peer with in a different AWS region around the world.

So, I'm going to select for the second VPC I want to peer with, VPC2 which we had
created, its IP address range shows up. And that's really all I'm going to do here, so
I'm going to go ahead and click Create peering connection. OK, so it looks like the
peering connection has been requested,

A page labeled pcx-04e8c60270627679e / Peer_VPC1_VPC2 opens. It contains various details such as: Requester owner ID, Accepter owner ID, Peering connection ID, and so on.

Notice the language that was used, and we have some details down below. It automatically assigns a Peering connection ID. I'll go back to the Peering Connections view, and if we just expand this a little bit, we can see what's happening. OK, so we have a peering connection that is Pending acceptance. So, if I were to select that peering connection and go to the Actions menu, we can Accept the request.

A drop-down box appears with the following options underneath the Actions tab: View details,
Accept request, Reject request, Edit DNS settings, Edit ClassicLink settings, Manage
tags, and Delete peering connection.

So, even though VPC1 and VPC2 are within the same region and in the same AWS account, we still have to go ahead and Accept the request. So, I'm going to go ahead and do that. OK, and notice now that the Status is showing as Active. There's a little note at the top that says if you want to send or receive traffic across this VPC peering connection, you need to add a route to the peered VPC in your routing table configuration.

OK, so to continue through with this example, let's create a subnet for each of our two VPCs, VPC1 and VPC2. So, I am going to go to Subnets and choose
The component called Subnets is highlighted from the left pane. The right pane now
appears with its related information. It contains a search bar and table with the
following column headers: Name, Subnet ID, State, VPC, IPv4 CIDR, and so on.

Create subnet. We're going to start by selecting VPC1.

The page now appears with the heading: Create subnet. It contains 2 sections: VPC
and Subnet settings. The VPC section contains: VPC ID and Associated VPC CIDRs.
The Subnet section contains the following subsections: Subnet name, Availability
Zone, and IPv4 CIDR block.

And what I want to do here is create a subnet called VPC1_Subnet1, so there's no confusion about which VPC it is associated with. And the CIDR block for a subnet, remember, has to fall within the CIDR block for the VPC itself, which is 10.1 here.

So I'm going to go ahead and specify 10.1.1.0 with /24 bits in the subnet mask. The first three numbers now identify the subnet. OK, I'm going to go ahead and create that subnet, and then we've got that subnet, VPC1_Subnet1. Let's do one for VPC2. So, from the VPC ID dropdown list, I'll choose VPC2. Remember, it is 11.1, so I'm going to call this VPC2_Subnet1, and we have to make sure that the range for this subnet falls within the range for the VPC.

So in this case, I can specify that as 11.1.1.0/24, where 11.1 is the VPC range. OK, so that's fine. Again, all this would have been planned ahead of time. Create subnet, OK. Let's see about routing tables now. So, for example, if we look at our Subnets view, let's say I go into VPC1_Subnet1. I'll click on the link for it. The subnet is associated with a route table. And if I click on the link for the Route table, this is where, down below, I can click on the Routes tab and then click Edit routes.

A page labeled Route tables appears underneath VPC Management Console. It contains a search bar and a table with the following column headers: Name, Route table ID, Explicit subnet associations, Edge associations, Main, and VPC. It further contains various sections: Details, Routes, Subnet associations, Edge associations, Route propagation, and Tags.

So basically, VPC1, remember, uses the 10.1 range. It's got a route to itself.

A page called Edit routes appears underneath VPC Management Console. It displays a table with the following column headers: Destination, Target, Status, and Propagated.

I want to add a route to the peered network range. VPC2, remember, uses 11.1, and the subnets will fall within that range. I want to add a route to the peered IP range going through the peering connection. So, I'd click Add route; the target subnet, let's say, would be 11.1.1.0/24. And for the target, this is where I'm going to choose a peering connection. So, Peering Connection, here's our peered connection, done.

We have to do the same thing in the reverse direction. So if I go to my Subnets, let's say, this time I go to Subnet2,

A page titled rtb-07a52f2326c406a24 displays. It contains a list of details such as: Route table ID, Main, Explicit subnet associations, Edge associations, VPC, and Owner ID.

That subnet is in VPC2, and again, you always have to take note of the address ranges. So, from Subnet2, I want to be able to route to the 10.1.1 subnet privately, with private IP addresses, without going over the Internet, through the peering connection. So, I'll click on the subnet link; that subnet, again, is associated with a routing table.

I'll click the Routes tab at the bottom and I'll click Edit routes. Again, it only has a route to itself, to its own network range. So, I'll click Add route. I want to be able to route to 10.1.1.0/24 through our peering connection.

I'll choose Peering Connection and I'll select it. Save changes. Done. So, now if I have
virtual machine instances deployed into those subnets, they can communicate with
virtual machines in the peered VPC subnet using private IP addresses.
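
For reference, the whole peering workflow can also be sketched with the AWS CLI; the VPC, peering connection, and route table IDs below are hypothetical placeholders:

# request the peering connection between the two VPCs, then accept it
aws ec2 create-vpc-peering-connection --vpc-id vpc-0aaa1111 --peer-vpc-id vpc-0bbb2222
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-0ccc3333
# add a route on each side pointing at the peering connection
aws ec2 create-route --route-table-id rtb-0ddd4444 --destination-cidr-block 11.1.1.0/24 --vpc-peering-connection-id pcx-0ccc3333
aws ec2 create-route --route-table-id rtb-0eee5555 --destination-cidr-block 10.1.1.0/24 --vpc-peering-connection-id pcx-0ccc3333

As in the console demo, the peering connection only carries traffic once routes exist on both sides.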

Network Cables and Interfaces


Topic title: Network Cables and Interfaces. Your host for this session is Dan
Lachance.
There are many different types of network cables and interfaces, and having an understanding of them can go a long way in helping you configure and troubleshoot servers. The first consideration is the fact that, of course, we have wired and wireless network interface cards or NICs. This means we have to think about the different types of connectors that would apply, certainly for wired networks, such as RJ45. RJ45 is probably what most technicians are used to for a standard network cable. It looks like a wider telephone connector, and the idea is that it allows us to plug a network card into a network switch, through the cable, of course. But then we've got fiber optic connectors as well, such as fiber straight tip or ST. But we also have to think about the network cards themselves.

Now, most server motherboards will have network interface cards built in, normally
more than one. For instance, you might have a rack mount server that's got three
network interfaces built in, two for your use, and the third one that's used for remote
out-of-band management, which means hardware level remote management, even if
the OS isn't installed or running. The other thing to think about is that if you have to
add a network card, then you have to think about the card interface. In other words,
the slot type that it can plug into, such as PCIe. Then there are a number of factors we
have to consider when it comes to selecting cables or a wireless solution. One of
which is thinking about what type of cabling is currently in place, because maybe we
want to just continue with that, if it performs well, if it's secure.

Basically, if it aligns with organizational policies. We have to think about the speed
and reliability of the network throughput. Some cables are rated to transmit at only a
given maximum speed over a given distance. The idea is that when you're transmitting
electrical pulses down conductive wires within a network cable over distance, that
signal degrades. So, we want to make sure we think about distance when it comes to
selection of the cable type. We should also think about electromagnetic interference,
otherwise called EMI. A standard unshielded network cable is susceptible to external
electronic noise. So, if you're stringing unshielded network cables through, false
ceilings in an office environment and you're stringing them over fluorescent lights, it's
possible the fluorescent lights might periodically interfere with the electrical pulses
going down the wires in that unshielded network cable. And so, in that case, maybe
you're better off using shielded cabling, shielded twisted pair or STP, or perhaps using
some kind of fiber optic cable, which is not susceptible to EMI, by the way, because
with fiber optic cable we aren't transmitting electrical pulses, we're transmitting light
pulses.

Eavesdropping is possible when using standard copper wiring. It's possible for someone to tap into the communication and capture it. That's much more difficult if you're using fiber optic networking. So, common copper cable types include Cat5, which is often called Fast Ethernet and supports speeds up to 100 Mbps at 100 m; always remember that distance plays a part here. Now, you're not going to find Cat5 in use in newer networks, but Cat5e or Gigabit Ethernet is fairly common, up to 1,000 Mbps, or in other words 1 Gbps, at 100 m. After 100 m, you either need to terminate the connection, meaning having a device plugged in, or you might consider plugging in another switch or router, or some device that allows the signal to be repeated further down the line. Cat6 supports up to 10 Gbps at 55 m. Cat7 supports the same speed, 10 Gbps, but at a longer range of 100 m.

Now, we've mentioned fiber optic cabling, where we're transmitting light instead of electrical pulses. This means that we end up having greater transmission distance and speed. So, for example, you're not limited to transmitting at high speeds for only 100 meters; the range is many kilometers, depending on the specific fiber optic solution being used. It's also less susceptible to wiretapping and interference from electronic noise. The network interface card or NIC has a unique 48-bit hexadecimal address. Now, depending on the software that you're using, you might also be able to change that MAC address for that network card. So, the MAC address is 48 bits long and it's represented in hexadecimal or base 16, which uses characters 0-9 and A-F, where F means 15, A would be 10, B would be 11, and so on.

So, whether you've got a physical network interface card in a server or a virtual NIC
in a virtual machine, when it comes to configuring it within the OS, there is no
difference. Some network interface cards support extra features like Wake-on-LAN,
otherwise called WOL. Now Wake-on-LAN can be very useful in an enterprise
environment. I'll give you an example. I've had many cases over the years, where I
have had to wake up machines in the middle of the night to deploy updates to them,
because that's when nobody is using them. Now the thing is, if the machine is
powered off, how is that possible? Well, it's possible because there's just enough power still being fed to the network interface card for Wake-on-LAN purposes, so you can send what's called a magic packet to that machine using a specific tool to wake up the machine, apply the updates, and then the machine can go back to sleep or to a powered-off state.
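
Many small utilities can send that magic packet. As one hedged example, from a Linux admin station with the commonly packaged wakeonlan utility installed (the MAC address below is a placeholder):

wakeonlan 18:56:80:c3:68:ba

For this to work, Wake-on-LAN support typically has to be enabled both on the NIC and in the machine's BIOS or UEFI settings.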

Network cards can also support Preboot Execution Environment or PXE. This is one
of those things that has to also be enabled in the BIOS, or the UEFI, so that it's part of
the network boot sequence. But the idea is that we can boot directly from the network
card firmware even without an operating system installed locally, and we might do
that for recovery purposes because the OS won't start. We might use it for imaging
purposes because it's a brand-new piece of hardware and there's no operating system
yet. Or we might use it for malware scanning, if we don't want to boot up an infected
OS. There are many ways to use PXE booting. And then we have the notion of NIC
bonding or teaming. So, with NIC bonding and teaming we have two or more server
NICs plugged into switch ports for more throughput. The thing about this, though,
is that you have to configure NIC bonding or teaming both at the operating system
level and at the switch level. So, let's say we have two network cards plugged into
switch ports and we configure it at the OS and switch level; what then happens is that
the server will advertise a single MAC address for what's called a virtual NIC. It's
treated as a single network card. It also means, from an IP addressing perspective,
that we have a single IP address for the new virtual NIC that results from our
configuration of NIC bonding or teaming.
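
On Windows Server, the OS side of this can be scripted with the built-in NIC teaming
cmdlets. A minimal sketch; the adapter and team names are assumptions, and LACP
teaming mode requires a matching port channel to be configured on the switch side:

# Team two adapters into a single virtual NIC named Team1.
# "Ethernet1" and "Ethernet2" are assumed adapter names; Lacp mode expects
# the switch ports to be configured as an LACP port channel.
New-NetLbfoTeam -Name "Team1" -TeamMembers "Ethernet1", "Ethernet2" `
    -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic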

Configuring On-Premises Virtual Machine NICs


Topic title: Configuring On-Premises Virtual Machine NICs. Your host for this
session is Dan Lachance.

In this demonstration, I'm going to add a second network card or NIC to a virtual
machine running on-premises. So specifically, I'm going to be doing this using
VMware Workstation, a Type 2 hypervisor. So, I've already got a virtual machine up
and running and it's running the Windows Server 2019 operating system. Not that that
really matters when you're adding virtual hardware.

A page labeled Windows Server 2019-2 appears underneath VMware Workstation. It
contains a toolbar with the following elements: File, Edit, View, VM and so on. He
highlights the Windows Server 2019-2 VM in the main pane.

So, I'm going to go to the VM menu and I'm going to go into Settings.

A drop-down box appears underneath the VM element. It contains various options such
as: Removable Devices, Pause, Grab Input, Settings, and so on.

This would be settings for the current active VM and if I select Network Adapter
currently, we have one Network Adapter that is Bridged.

A dialog box appears with the heading Virtual Machine Settings. It contains 2
sections: Hardware and Options. The Hardware section contains 2 panes. The left
pane consists of a table with the following column headers: Device and Summary.
The right pane contains information about Device status and Network connection.

OK, so it's directly connected to the physical network, whether that network is wired
or whether it's wireless makes no difference. But what I want to do is add another
network interface. Now why would you ever want to do that on a server? Plenty of
reasons. One reason might be, because you are configuring your server as a firewall
appliance and so you want it to have a connection to two or more networks. That
might be your reason, or perhaps what you want to do is enable NIC teaming or
bonding to get better throughput, for the network interfaces that will eventually be
advertised as one on the network, even though it might actually be two. And in a
virtual environment you might actually do that by linking it to a physical network
interface.

In other words, your VM would have two virtual NICs, but each of those is linked to a
physical NIC in the underlying server. Other reasons might include wanting a
management NIC connected to a VLAN that only allows remote management, and
maybe you're using iSCSI for a Storage Area Network, so you want to have a separate
NIC for that connectivity, there are a lot of reasons. Either way, let's get started. So, I
want to add another Network Adapter, so I'm going to click the Add button and from
the list I'm going to choose

A dialog box appears with the label Add Hardware Wizard. It contains 3 sections:
Hardware Type, Hardware Types, and Explanation.

Network Adapter and then I'll choose Finish. So, I want it to be connected at power
on, it's showing up here in the list on the left, as Network Adapter 2, but it's set to use
NAT, to share the host's IP address to communicate on the network, and that's fine. Or
we could Bridge it to the network, or we could connect it to a custom virtual network,
which I'm going to do. So, I've got a custom virtual network here called VMnet5, and
it's Host-only, which means it only allows connectivity between VMs.

However, let's say in this particular case, the virtual machine I'm using here, I need a
connection to the Bridged network. But I also at the same time want it to talk to other
VMs on VMnet5, so I can select that, and then I just click OK. So, what it's doing is
saving or restoring the virtual machine state because it was up and running as I made
that change. That's fine. What I'm going to do is just log into the virtual machine so
we can take a look at the network interfaces from the OS perspective. So, I'm going to
go ahead. If you've clicked inside the window, by the way, you can also just move it
up out of the window to get up to your VM menu. I'm going to Send Ctrl+Alt+Del. I
could've pressed Ctrl+Alt+Insert on the keyboard, it really doesn't matter, and I'm going
to go ahead and sign in to my VM. Now when I'm within the VM, one of the first
things we can do here, well, we see the previous command's output.

A page labeled Administrator: Command Prompt appears. It contains several lines of
command. He now enters the following command: c1.

We've got a network adapter called Ethernet0. OK, well, let's see what we have here,
just by running ipconfig by itself.

The command now reads: ipconfig.

So, we still have Ethernet0, but we just added a second network adapter and it's
showing up here as Ethernet1. Notice the IP addressing: our first network interface is
on a different subnet than our second interface.

He highlights the following IP addresses: 192.168.4.156 and 192.168.233.129.

So, the first network interface is 192.168.4, the second one that we just added is one in
192.168.233. Remember, it was connected to VMnet5, the first one is bridged to the
physical network. OK, so you can add and remove network adapters while the virtual
machine is running. Now that's not always recommended, depending on what's
running within the virtual machine and how it's configured, but it is possible. Now the
other thing I want to do here in the virtual machine, I'll just go to View, Full Screen, is
I want to take a look at the network interfaces at the GUI level.

So, from the Start menu, I'll go into the Control Panel, Network and Internet, Network
and Sharing Center, what you would normally do even on a physical server. Change
adapter settings and sure enough, our second network interface is showing up. I could
right-click on it and I could go into the Properties and configure it and do whatever I
need to do for IPv4, IPv6. It's business as usual.

A dialog box appears with the heading Ethernet1 Properties. It contains 2 sections:
Networking and Sharing. Currently, Networking section is highlighted. It contains the
list of connection items.

All we've done is added another network adapter at the virtualization layer to the OS,
it doesn't know the difference. So, we're ready to rock and roll with our newly added
network interface. The other thing that's interesting is if we go back into our
Command Prompt here in our servers, I'll open the Start menu, go back into a
Command Prompt. I'm going to run ipconfig /all, because what I want to point out

The page called Administrator: Command Prompt appears again. It contains several
lines of command. He now enters the following command: ipconfig /all.

is the hardware or the MAC Address, or what Windows calls the Physical Address.
The Physical Address is assigned. It's a 48-bit hexadecimal address, it's been assigned
to our second network interface.
Now, while the VM is running, if I go into my VM menu and go into the Settings and
then select that Network Adapter, over on the right, I can click Advanced where we

A dialog box appears with the label: Network Adapter Advanced Settings. It contains
3 sections: Incoming Transfer, Outgoing Transfer, and MAC Address.

have a number of interesting options. So, for the virtual Network adapter, one of the
things we might do is set a cap on the Bandwidth. In other words, we might throttle it,
so we could specify those details. But also down below, we have the MAC Address
that was assigned, although the Generate button is unavailable, and that's because it's
in use, the virtual machine is running. So, I'm going to cancel that, cancel out of the
settings and I'm going to go ahead and shut down the virtual machine guest.

A drop-down box appears with the following sections: Start Up Guest, Shut Down
Guest, Suspend guest, Restart Guest, Power on, Power Off, Suspend, and Reset.

Now, once that's been done, if I edit the virtual machine settings and go into my
second Network Adapter once again, back under Advanced, notice this time that the
MAC Address field is actually editable. We can even click Generate, if we want it to
generate a different MAC Address. So, either you can specify the MAC Address, if
you have a specific reason to do that, maybe for testing, or you can have it generated
by VMware, but either way that's how you can start working with virtual machine
network cards in a hypervisor.

Configuring Cloud-Based Virtual Machine NICs


Topic title: Configuring Cloud-Based Virtual Machine NICs. Your host for this
session is Dan Lachance.

In Amazon Web Services, a network interface card or a NIC is its own type of cloud
resource, and then you can attach it to a virtual machine or detach it from a virtual
machine as needed.

A webpage labeled AWS Management Console appears. It contains a search bar on
the top. It further contains 4 sections: AWS services, Build a solution, Stay connected
to your AWS resources on-the-go, and Explore AWS.

So, it's pretty easy compared to what we had to do in the past, if we wanted to add a
network card to a machine and it didn't already have it built into the motherboard. So,
to get started here in the AWS Management Console, I'm going to search in the search
bar at the top for ec2 to open up the EC2 Management Console, because that is where
I will have things like network interfaces and virtual machine instances.

A page called Network interfaces | EC2 Management appears. It contains a search
bar on the top. The left pane appears with the following options: EC2 Dashboard,
EC2 Global View, Events, Tags, Limits, Instances, Images, and so on. The right pane
displays the following column headers: Name, Network Interface ID, Subnet ID, VPC
ID, Availability Zone, and so on.

Virtual machine instances are available in the Instances view,

A page called Instances appears. It contains a table with various column headers:
Name, Instance ID, Instance state, Instance type, Status check, Alarm status, and so
on.

and if I scroll down on the left, I can go all the way down under Network & Security
where I'll find Network Interfaces.

So, the network interface is just a separate resource. I can select a network interface and
from the Actions menu, I can choose to Attach it. Not in this particular case, Attach is
grayed out because it's already attached to an instance, but I could detach it from that

The Network Interfaces page appears again. A drop-down box appears with the
following options: Attach, Detach, Delete and Manage IP addresses. The Network
interface details are given underneath the table. The details section contains the
following options: Network interface ID, Name, Description, Network interface
status, Interface type, Security groups, IP addresses, Instance ID, MAC Address, and
so on.

instance and then attach it to another. So, when I select a network interface down
below under the Details tab, we have a lot of very important and interesting
information, such as the fact that the network interface is in use. We can see the VPC
and Subnet that it is attached to, and as I scroll further down, I'll also get to the point
where I have IP address information for this network interface: a Private IP, a
Public IP, and also some DNS names, a Private DNS name and a Public DNS name.
And if I go even down further, I get the MAC address. The 48-bit MAC address or
layer 2 address, that was assigned to this network interface, and then I have a link to
the instance or the virtual machine that this network interface is tied to.

So, if I click on that, it's actually tied to my running WinSrv2019 virtual machine.
What I'd like to do is add a second network interface to that virtual machine. So, I'm
going to go back to the Network Interfaces view on the left where I'm going to choose
Create network interface. I can add a Description. I'm going to call this Second NIC
for Windows Server. And I can specify a Subnet where I want to create it. I'm going
to create it, let's say in this subnet so I'll select it from the list, we can determine
whether the Private IP address is Auto-assigned

A page labeled Create network interface appears. It contains 3 options: Details,
Security groups, and Tags - optional. The Details option contains the following
sections: Description - optional, Subnet, Private IPv4 address, Elastic Fabric
Adapter, and Advanced settings. The Security group option contains a table with the
following table headers: Group ID, Group name, and Description. The Tags option
contains 2 sub-options: Key and Value - optional.

through DHCP in the cloud or Custom. It'll pick up an IP address, if we leave it on
Auto-assign, from the IP range assigned to that Subnet, and that is what I want to do.

Down below, I can also associate this with a security group. A security group you can
think of as being a simple firewall where you can determine what traffic is allowed
into, in this particular case, the network interface or the virtual machine it's attached to.
I'm just going to select the default security group and then I'm going to scroll down
and under Tags, I'll click the Add new tag button. I'm going to add the Name tag here,
and I'm going to call this NIC2. Alright, so now that I've done that, I'll click Create
network interface. And so, NIC2 is now showing up in the list. If I were to actually
select it, well, when we go down below and look at the details for that network
interface, it's not currently in use, instead, its status is showing as being Available.
And so, if I have that selected, I can go to the Actions menu and choose Attach. And
what I'm going to do is select my WinSrv2019 EC2 instance

A pop-up box appears with the heading: Attach network interface. It contains 2
sections: Network interface and Instance.

from the list and then I'll click the Attach button. OK, so it looks like all that stuff
worked just fine.
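
As an aside, those console steps map to a couple of calls in the AWS Tools for
PowerShell. A minimal sketch; the cmdlet names are assumed from the
AWS.Tools.EC2 module, and the subnet and security group IDs are placeholders:

# Create the elastic network interface in a given subnet and security group.
$eni = New-EC2NetworkInterface -SubnetId subnet-0123456789abcdef0 `
    -Description "Second NIC for Windows Server" -Group sg-0123456789abcdef0
# Attach it as the second interface; device index 0 is the primary NIC.
Add-EC2NetworkInterface -NetworkInterfaceId $eni.NetworkInterfaceId `
    -InstanceId i-0fd18332059e6d96d -DeviceIndex 1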

But really, if I go back to the instance level and select my Windows server and then go
into Networking, this is where we can get some interesting information such as IP
addressing

The page labeled Instances displays again. The middle pane appears with the
heading: Instance: i-0fd18332059e6d96d (WinSrv2019). It contains the following
sections: Details, Security, Networking, Storage, Status checks, Monitoring, and
Tags. Currently, the Networking section is highlighted. It consists of the following
sections: Networking details, Network interfaces, and Elastic IP addresses.

but notice interestingly that it has 2 Private IP addresses, of course, because we have 2
network interfaces. And, if I scroll down under the Network Interfaces section, we'll be
able to get a list of the network interfaces that are attached to this virtual machine
instance. We have our Primary network interface that was created automatically when
the virtual machine was launched, and the Second NIC for Windows Server, as we
described it, is now showing up as the second network interface. And so, that means
that when you're in the virtual machine operating system, you're going to have a
second network interface; let's take a look at that. Now, as usual in Amazon Web
Services, you can select a virtual
machine, go to Action, Security, Get Windows password, specify your Private key
file,

A drop-down box appears underneath Actions. It contains the following options:
Connect, View details, Manage instance state, Instance settings, Networking,
Security, Image and templates, and Monitor and troubleshoot. The Security option
contains the following sub-options: Change security groups, Get Windows password,
and Modify IAM role.

and then decrypt the admin password, to make a connection to it.

A page labeled Get Windows password appears. The following message is highlighted
in a box: Key pair associated with this instance - VM_Key_Pair.

Of course, you can also select the VM and then copy its public IP to remote into it.

So, I'm going to do that right now. So, now that I'm into that virtual machine, I'm
going to go ahead and go into the Start menu and let's say we run cmd to open a
Command Prompt because we can do this. We could type ipconfig and it will show us
all network interfaces. And we've got two network interfaces showing up here.

A page labeled Administrator: Command Prompt appears. It contains several lines of
command. He now enters the following command: ipconfig.

Certainly, if we go into the Control Panel on our server, we're also, of course, going
to be able to see in the GUI that we have 2 NICs. If I go into Network and Internet,
Network and Sharing Center, and once that pops up Change adapter settings on the
left, both network interfaces are showing up, Ethernet 3 being the new one. And so, in
this way we have a very easy method by which we can create network interfaces and
attach them to virtual machines in the cloud and even detach them if they're no longer
needed, and we want to attach them to a different VM.

Course Summary
Topic title: Course Summary

So, in this course, we've examined mapping network functions to the OSI model. We
examined various network models, cables, and interfaces and configured a virtual
network and a virtual NIC. We did this by exploring, how the OSI model maps to
network communications as well as examining different types of networks. We
examined network switching and routing, and also how to deploy a hypervisor virtual
network. Next, we configured IP routing in the cloud and virtual network peering. We
discussed NICs and cable types and finally configured on-premises and cloud-based
virtual machine network interface cards. In our next course, we'll move on to explore
working with TCP/IP.

CompTIA Server+ (SK0-005): Network Directory Services
A network directory service uses a distributed database, which contains various
objects, replicated among multiple servers. When working with servers, you need to
know how to manage these directory services. Take this mostly hands-on course to
learn how to achieve this. Discover how Microsoft Active Directory works. Then, try
your hand at creating a new forest and domain in an on-premises virtualized
environment. Join computers to the domain for centralized authentication and
management purposes. And plan Group Policy settings for domain users and
computers. Moving on, inventory servers before configuring a cloud-based directory
service and joining devices to it. When you're done, you'll be able to plan, deploy, and
manage on-premises and cloud-based directory services. You'll also be more prepared
for the CompTIA Server+ SK0-005 certification exam.

Course Overview
Topic title: Course Overview.

A network directory service uses a distributed database that gets replicated among
multiple servers. The database can contain app configuration settings, user and group
accounts, group definitions and PKI certificates, among many other types of objects.
In this course, I'll define how Microsoft Active Directory works by describing the
relationship between forests, domains, domain controllers and the like. Next, I'll
create a new forest and domain in an on-premises virtualized environment, followed
by joining computers to the domain for centralized authentication and management
purposes.

I'll then determine how Group Policy settings will be applied to domain users and
computers, followed by configuring various Group Policy settings. Moving on, I'll
inventory servers, and then configure a cloud-based directory service and join devices
to it. This course is part of a collection that prepares you for the CompTIA Server+
SK0-005 certification exam.

Microsoft Active Directory Domain Services


Topic title: Microsoft Active Directory Domain Services. Your host for this session is
Dan Lachance.

Microsoft Active Directory or AD is an example of a network directory service. But
what does that even mean? So a network directory service you can think of as being a
centralized network directory or database, that will contain user accounts, group
information, computer accounts, app settings.

In the case of Active Directory, it contains many Group Policy settings that can apply
to all or a subset of users, or computers, in the Active Directory domain. So that's the
type of configuration you would have in that type of a database. And this database is
replicated among multiple servers called domain controllers.

Now, AD authentication is done through domain controllers. So whether it's a device
like your computer, that's joined to the domain, that's authenticating, or a user, or
could even be a software component using some kind of a service account to
authenticate to the domain. Either way, after successful authentication, the entity that
has authenticated is now given permissions to access and use a given resource, such as
a shared folder or access to a database or some other server.

It's also important that usage auditing be enabled so that we can track the different
device, user, or software accounts that are being used to authenticate and then
access resources using those permissions. Active Directory supports the Lightweight
Directory Access Protocol, or LDAP. LDAP is a centralized network database access
protocol that works with Active Directory, but it's not exclusive to Active Directory.

It's been around for a very long time and there are many other solutions like
OpenLDAP that use it as a centralized network directory store. LDAP normally
listens on TCP port 389 or 636 if you're using Transport Layer Security or TLS.
Active Directory supports the centralized management of users, groups, computer
accounts, Group Policy settings, app settings.
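
Since LDAP listens on those well-known ports, a quick PowerShell reachability check
against a domain controller can be handy; the hostname here is only a placeholder:

# Test LDAP (389) and LDAP over TLS (636) connectivity to a domain controller.
Test-NetConnection -ComputerName dc1.quick24x7.local -Port 389
Test-NetConnection -ComputerName dc1.quick24x7.local -Port 636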

The idea is we can connect to any domain controller server that has a copy of the
Active Directory database, make a management change, and that change will be
replicated elsewhere, hence centralized. We can also join devices to an AD domain,
like user laptops within the organization. The benefit, is that when a computer is
joined to the domain, it authenticates to Active Directory and can pull down any
Group Policy settings designed for that computer.

Of course, it also allows users to go to a domain-joined computer, and sign in with
their centralized AD user credentials. So it allows AD account logon then from any
device. Now, Group Policy settings, of course, can be applied to the device, regardless
of who is logged into it, and also to users, regardless of which device they're using. So
it kind of works in both directions.

Now, it's important to have an understanding of the Active Directory forest and
domain hierarchy. So as an example, let's say we install a brand new Windows Server
and we establish a brand new Active Directory environment from scratch. So we
would be creating then, in that case, what's called a forest root domain. And we might
call it, for example, quick24x7.local. So it uses DNS nomenclature.

Now, down below that, we could create one or more child domains, such as
east.quick24x7.local, and west.quick24x7.local, and so on. So the forest would consist
of all of these domains together, but we have each separate domain, which really
serves, if you will, kind of as a management boundary. You might have a domain, for
example, for Canadian operations if you're a multinational firm, and might have a
separate domain for the United States.

Or if it's a very large organization, you might have a separate domain for each
province or region or state. There are no rules. It really depends on your management
needs, and of course, it depends on the number of users and devices that you're going
to be managing.

Pictured on the screen, we've got a screenshot of an organizational unit within Active
Directory or AD. Now, organizational units are called OUs, and underneath
Western_Region in this screenshot, which is an OU, we have a subordinate OU within
it, called Users. Two users are listed on the right. There is another subordinate OU
called Computers. So this is kind of like setting up a file system directory structure, a
hierarchy, to organize things in a meaningful way.
Now, not only does it make it easier for technicians to navigate to Users in the
Western_Region, for example, but it also means that you can apply Group Policy
settings to any part of the OU hierarchy. So you could apply Group Policy, maybe
security settings, to the Western_Region, which would differ perhaps than the Group
Policy settings that might be applied to the Eastern_Region.

So we have a number of options available then when we start using organizational
units or OUs. But notice that if you're working with organizational units, the icon has
what looks like a little page within the yellow folder. This is the Active Directory
Users and Computers tool, and if you have an icon that's just a solid yellow folder,
that's not an OU. It's a container.

And they are not the same thing. OUs in the screenshot include: Domain Controllers,
Eastern_Region, Western_Region, and Head_Office. Containers include: Builtin,
Computers, Managed Service Accounts, Users. Because you cannot apply Group
Policy settings to a container, but you can to an OU. So that's kind of what we're
working with here in this screenshot. Now we have mentioned that a domain
controller is a Windows Server, that's part of the domain, that has a copy or a replica
of the Active Directory database. And so, the database replicates among domain
controllers.

It's pretty much immediate on a local area network, but you can configure site settings
for Active Directory to control the replication schedule across a wide area network. So
we're talking about replicating the contents of Active Directory like users and groups
and computer accounts and so on, as well as Group Policy settings. So users then can
log on to any domain-joined device using their centralized account.

And then we have the notion of a cloud-based Active Directory configuration. As you
might guess, it means that this is a service in the cloud and it's managed. Now you
could manually install a virtual machine in the cloud and manually configure it as a
domain controller. But if you use it as a managed service in the cloud, what that really
means is, that is all taken care of for you.

All you're doing is specifying details, like the number of objects you might want to
manage inside your domain. Also, you would then work with things like the domain
name and then actually manage the objects in the domain. But with a managed
service, you don't have to worry about setting up the virtual machine in the cloud and
configuring the Active Directory domain services role, and all that jazz.

So, with a cloud-based Active Directory service, it's really business as usual once
you've got the domain established. You use standard tools, command line and GUI-
based, that you might already be used to using on-premises, to manage the directory
service in the cloud. You can also even link your cloud Active Directory to an on-
premises Active Directory environment that already exists. Now you might wonder,
why would I do that?

Sounds like I'm duplicating everything. Not quite, because what you could do is use
pass-through authentication when you link cloud Active Directory to on-prem Active
Directory. What that means is, if you've got a user, let's say, that needs to access a
cloud service, well, instead of requiring them to authenticate again; or instead of
requiring technicians to build a second account in the cloud, because the cloud AD
would be linked to the on-prem AD, when the cloud gets a request to authenticate a
user for a cloud app, it can just pass it through to the on-prem AD, where users can
continue to use their AD credentials as they're already used to.

So server technicians then have to have an understanding of a server role that might
entail dealing with authentication, and having a centralized network directory
database.

Deploying a Microsoft Active Directory Domain


Topic title: Deploying a Microsoft Active Directory Domain. Your host for this
session is Dan Lachance.

In this demonstration, I'm going to be creating an Active Directory domain within an
on-premises environment. So what I've got running here is a Windows Server 2019
virtual machine. Now let's go into the settings of it for just a moment.

Specifically, I want to go into the Control Panel, and I'm interested in going through
the Network and Internet area, into Network and Sharing Center, and then Change
adapter settings over on the left. I'm going to right click on my Ethernet interface and
go into Properties, because one of the things you have to think about if you're going to
be creating or establishing a brand new Microsoft Active Directory domain, is the IP
configuration of the first server.

So, I'm using IPv4, and if I take a look at it, I have manually specified the IP address,
the Subnet mask, the Default gateway, and two DNS servers. Now, the first Preferred
DNS server is the IP address of this server itself. IP address and Preferred DNS
server read: 192.168.4.167, Subnet mask reads: 255.255.252.0, Default gateway
reads: 192.168.4.1, Alternate DNS server reads: 8.8.8.8. Active Directory relies
100% on DNS to discover domain controllers.

Now remember, domain controllers are servers that hold a copy of the Active
Directory database. And this is absolutely crucial in an Active Directory environment,
even for client devices. If they aren't able to find the appropriate service location
records for domain controllers in DNS, then even clients won't be able to authenticate
or pull down group policy, or even join the domain in the first place.

So it's very important, then, that we have a static IP configuration set up, in
accordance with whatever the network range is where this server exists, and that the
Preferred DNS server points to itself. Now, once we've done that, we're good to go.
And what I mean by that, is we can go, for example, into the Server Manager GUI;
you could do this from PowerShell if you wanted to, I'm going to do it this way.
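
For reference, the same static IP and DNS configuration can be scripted. A minimal
PowerShell sketch, assuming the interface alias is Ethernet0; the /22 prefix length
matches the 255.255.252.0 mask:

# Assign the static address, mask (/22), and default gateway.
New-NetIPAddress -InterfaceAlias "Ethernet0" -IPAddress "192.168.4.167" `
    -PrefixLength 22 -DefaultGateway "192.168.4.1"
# Point DNS at this server first, with a public resolver as the alternate.
Set-DnsClientServerAddress -InterfaceAlias "Ethernet0" `
    -ServerAddresses "192.168.4.167", "8.8.8.8"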

In the Server Manager GUI we can click Add roles and features, because Active
Directory Domain Services, or ADDS, is actually just a server role. The host clicks on
Add roles and Features under the Quick Start menu. This opens a new window that
contains a wizard. The steps of the wizard are listed on the left side. The steps read:
Before You Begin, Installation Type, Server Selection, Server Roles, Features,
Confirmation, and Results. So, I'm going to click Next to continue through the wizard,
until I get to the point where I'm on the Select server roles screen. And what I want is
Active Directory Domain Services, so I'll turn on the checkmark for that. It's going to
pop up and say, well, do you want these tools to be installed along with that role?

I do. So I'm going to click Add Features and I'll click Next. I'm not going to change
anything here on the Select features screen, so I'll click Next again. Under Remote
Server Administration Tools, it reads: Role Administration Tools, AD DS and AD
LDS Tools, Active Directory module for Windows PowerShell, AD DS Tools, Active
Directory Administrative Center, AD DS Snap-Ins and Command-Line Tools. The
feature selected is .NET Framework 4.7 Features. And I'm just going to continue
through the wizard and click Install, which gets us to the point where it's copying the
files. We're just going to wait a moment because naturally, there's going to be more to
configure.

We haven't even specified the name of the forest root domain or anything like that, so,
we'll just wait a moment while the files get copied. Once the files are copied, I'll go
ahead and click Close, and in my notification area in the upper right, I've got a yield
type of symbol here in the Server Manager Dashboard. I'm going to click on it and it
requires some Post-deployment Configuration. And there's a link here that says
Promote this server to a domain controller.

Yes, that sounds like what I want to do so I'm going to click on that. A window opens
that features a wizard. The steps of the wizard are listed on the left side. The steps
read: Deployment Configuration, Domain Controller Options, Additional Options,
Paths, Review Options, Prerequisites Check, Installation, and Results. So we can Add
a domain controller to an existing domain. No, we want to establish a new one. So
Add a new domain to an existing forest, we don't even have a forest. Remember, a
forest is a collection of related domains that use the same type of DNS nomenclature.
No, we don't have that yet.

We want to do it right from the beginning. So, Add a new forest. What do you want to
call it? I'm going to call this quick24x7.local, and then I'm going to go ahead and click
Next. The host moves to the next step that reads: Domain Controller
Options. Naturally, that's the type of thing you would have planned, just like your IP
addressing scheme, long before you sit down and actually start configuring it on the
machines.

So, what it wants to do is, it wants me to specify a recovery password for Directory
Services Restore Mode. So I'm going to go ahead and specify a Password and confirm
it. And then I'll click Next, and I'll continue through, and we'll wait a moment as it
verifies that the name is unique. What it's going to do is come up with a NetBIOS
name. NetBIOS is an older protocol used for connecting to computers by name, and
so it's got the shortened version QUICK24X7. That's fine. I'll click Next to accept
that.

I'll accept the default locations for storage of the database, log files, and the sys
volume for the Domain Controller, and then I'll click Next. At the end, it's going to
verify that the prerequisites have been met on this server, such as having a statically
configured IP address. Now we've got a few notes down here that result from the
prerequisite check, but we're good to go. It says at the top: All prerequisite checks
passed successfully. Click 'Install' to begin installation.

Sure. OK. I'm going to click Install to do that. OK. So after the configuration
completes, the server restarts, and on the login screen, it wants me to log in as the
Administrator account for the domain. Notice the way it's written. QUICK24X7 is the
name of the domain \Administrator. I could alternatively login as
administrator@quick24x7.local, so there are a couple of different ways you can
sign in.

So I'm going to go ahead and then put in the password, and let's check out our work.
So now that I'm signed in, from the Start menu, I'm going to go into Windows
Administrative Tools, where I now have a couple of Active Directory-related tools
available, such as Active Directory Users and Computers.

In here, my domain shows up; quick24x7.local, and if I drill down, I have, for
instance, the default Domain Controllers OU, organizational unit, where I've got my
server, the one I'm sitting at right now, shown here as being a Domain Controller. The
name of the domain controller reads: WIN-M6CGN1MFJLQ. If I go to Users, I've got
the built-in User account, such as the domain Administrator account, and I've also got
the Builtin container. This is a container, not an OU, because it's a solid yellow folder
icon.

Domain Controllers has a little paper icon in it, which implies it is an OU. And you
can apply group policy to OUs, but not containers. Anyway, the Builtin container has
built-in groups like Administrators, Guests, Event Log Readers, and so on. So there
you have it. At this point, we have successfully established a brand new Active
Directory domain.
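
For reference, the same end result, installing the role and creating the forest root
domain, can be scripted. A minimal PowerShell sketch; Install-ADDSForest prompts
for the Directory Services Restore Mode password and then restarts the server:

# Install the AD DS role plus its management tools, then create the forest.
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
Install-ADDSForest -DomainName "quick24x7.local" -InstallDns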

Joining Stations to an Active Directory Domain


Topic title: Joining Stations to an Active Directory Domain. Your host for this session
is Dan Lachance.

The focus of this demonstration is going to be to join a Windows computer to an
existing Microsoft Active Directory domain. Now you could be doing this from a
Windows client operating system, but in this case, I'm going to be using Windows
Server 2019 to connect to an existing Active Directory domain. So the first thing I
have to think about is my network configuration. I have to be able to reach at least one
domain controller from the existing domain over TCP/IP.

So let's go ahead and take a look at our TCP/IP settings here on this computer that
will be joined to a domain. So for that, I'm going to go into my Start menu and then
into the Control Panel. I'll choose Network and Internet, Network and Sharing Center,
and then Change adapter settings, and what I want to do is right click on my network
interface.

And if you've got multiple network interfaces which might be connected to different
VLANs, that type of thing, make sure you're selecting the appropriate network
interface that will be able to contact a domain controller. So I'm going to choose
Properties, I'm interested in Internet Protocol Version 4, so IPv4. The options visible
on-screen are: Client for Microsoft Networks, File and Printer Sharing for Microsoft
Networks, QoS Packet Scheduler, Microsoft Network Adapter Multiplexor Protocol,
Microsoft LLDP Protocol driver, Internet Protocol Version 6 (TCP/IPv6). So
currently this server is set to Obtain an IP address automatically. And that's fine. I
don't have to change that.

But what I really need to do when I want to join a machine to a domain is I need to
make sure I'm pointing to a DNS server that has service location records for domain
controllers. Otherwise, this machine would never find a domain controller, and by
extension, would never be able to join the domain or anything like that. And this is
also important for troubleshooting. Making sure you can reach a domain controller to
authenticate, so to log in, and also to pull down group policy.

So I'm going to put in the IP address of what I know to be a domain controller for the
domain I want to join. The host types 192.168.4.167 in the Preferred DNS server
field. And I'll click OK, and OK. So now that that's done, we can proceed with joining
the machine to the domain, so I can search the Control Panel here for join, and
immediately I have under the System category: Join a domain. So I can go ahead and
do that, click the Change button.

I can also change the Computer name at the same time, if I really want to. So why
don't we call this WindowsServer2019-2, and I want it to be a Member of a Domain.
And now the Domain I have to know the name of. It's quick24x7.local. And that's
where DNS is important.

If we don't point to the correct DNS server, then that will never be resolved correctly.
So I'm going to go ahead and click OK and I'll click OK again. The host clicks OK to
save the changes and a dialog window appears which informs that the NetBIOS name
of the computer is limited to 15 characters. The host clicks OK again to confirm. If
you get to the point where it's asking you to authenticate to the domain, you're in a
good place. So I'm going to go ahead and specify the administrator account for the
domain, and I'll also specify the password for it.

Technically, you don't need an admin account to join the computer to the domain, but
in this example, I'm just going to go ahead and use that. And everything's working. It
says: Welcome to the quick24x7.local domain. Perfect. OK. So what I'm going to do
then is Close and Restart Now.
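
As a side note, the same join can be scripted from an elevated PowerShell prompt. A
minimal sketch; you're prompted for domain credentials and the machine then restarts:

# Rename the computer and join it to the domain in one step.
Add-Computer -DomainName "quick24x7.local" -NewName "WindowsServer2019-2" `
    -Credential (Get-Credential) -Restart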

So when I reboot that machine, now that it's joined to the domain, this machine still
wants to log on with the local Administrator account, but I want to do a domain log
on, so I'm going to click Other user, and down below it says Sign in to: QUICK24X7.
OK, well, that is what I want, so I'm going to go ahead and specify; it doesn't have to
be an Administrator account, but that's the account that I have in that domain, and I'm
going to go ahead and specify those credentials and I'm going to continue on.

And at this point, I am actually signed in to the Active Directory domain. Let's flip to
the domain controller and just check out Active Directory domain Users and
Computers. So here on the domain controller, I'm going to open up the Start menu, go
down under Windows Administrative Tools and run Active Directory Users and
Computers.
What I'm really interested in here is going to the Computers container, where we've
got our newly joined Windows computer. Now the name ends in 20 rather than the 2 I
originally meant it to end with, because the NetBIOS name was truncated to 15
characters, but that's OK. It doesn't make a difference. We could just rename the
computer at any time. But the point is this: we have successfully joined the domain.

Now, while we're here on the domain controller, why don't we open up Windows
Administrative Tools under the Start menu and take a look at DNS? We've
been going on about how DNS is so important. Let's open up the DNS Manager tool
on this server because this server is a DNS server. And if I look at the Forward
Lookup Zones for DNS name resolution, there's one here called, here's a surprise,
quick24x7.local.

That's the name of the Active Directory domain. And if I open that up, I've got a
number of other items like _tcp, where I have Service Location records, SRV records,
which are used to locate domain controllers. So that's what I was talking about when I
mentioned the reliance of Active Directory on DNS. The rest of the items include:
_msdcs, _sites, _udp, DomainDnsZones, ForestDnsZones.
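
A quick way to verify those locator records from any machine that points at this DNS
server is a PowerShell SRV query; the record name below follows the standard
domain controller locator convention for the quick24x7.local domain:

# Query the SRV records that clients use to find domain controllers.
Resolve-DnsName -Type SRV _ldap._tcp.dc._msdcs.quick24x7.local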

Planning Microsoft Group Policy Usage


Topic title: Planning Microsoft Group Policy Usage. Your host for this session is Dan
Lachance.

If you're working in a Microsoft Active Directory domain environment, then it
becomes very important to start thinking about planning how you're going to use
Group Policy. So what is Group Policy? Group Policy you can think of as being
thousands of Windows settings for configuring Windows devices, all available to be
configured centrally in the Active Directory domain.

So, you could configure Group Policy settings on a local machine. So, for example,
even on a Windows client operating system, depending on the edition, you could use
the gpedit.msc command to pop up the local Group Policy editor and configure those
potential thousands of Windows settings on that single machine. Of course, you can
do that on Windows Server operating systems, too. But you could also do it through
Active Directory, as we've described.

Now when you configure Group Policy settings in Active Directory, you have to think
about whether it's computer settings that you're configuring, or user settings, and the
scope to which it should be applied. Should it be applied to an entire Active Directory
site, which could consist of multiple domains or a forest, or just an individual domain
or an OU? So we have to think about the settings and what they will apply to. So
settings can be configured then, to follow users; if you've configured user settings then
it doesn't matter which station the user signs into on the network.

If it's domain joined, the user settings will follow them. Same goes for a device,
meaning, it doesn't matter who logs into a device that's joined to a domain. If you
have device-specific Group Policy settings, they will apply to that device, regardless
of who's sitting at it. So Microsoft Group Policy Objects, or GPOs, are what you
configure Group Policy settings in.

We've got a screenshot here of the Group Policy Management GUI tool. And what's
happening here is on the left a GPO called Security_Baseline_Settings is selected.
And so then on the right, we have some Settings Details, and some date and
timestamps related to the GPO. We don't actually see the potential thousands of
settings, not right here at this level.

What's interesting is you'll notice that the Security_Baseline_Settings GPO is indented
hierarchically underneath an OU, an Active Directory OU, called Western_Region.
And so, this implies then, that that GPO is linked to that OU. So the potential
thousands of user and computer settings that might be configured in that GPO, will
apply to users and computers in the Western_Region.

And the settings will flow down to subordinate OUs under the Western_Region; kind
of like how security flows down through a file system hierarchy. Same
kind of idea. On the right hand side of the screenshot, the Settings tab is selected. The
Details visible on-screen include: Domain, Owner, Created, Modified, User
Revisions, Computer Revisions, Unique ID, and GPO Status. So the Group Policy
hierarchy is important to understand. GPOs can be applied to an Active Directory site,
to a specific domain, and of course, as we've just talked about, to a specific OU.

It really depends on what your requirements are. So the settings flow down through
the Active Directory hierarchy. But although inheritance is enabled by default for
subordinate OUs, you can actually go to a specific OU and block GPO
inheritance. Maybe you've got an OU for administrators where you don't want security
restrictions to be put into effect.

So GPO linking then, means that we associate the GPO with the part of the hierarchy
where the settings will be applied. And so in this particular screenshot, we've got the
same GPO being used twice, and that's perfectly valid. So, the GPO is called
Security_Baseline_Settings, so let's say it's general or generic security settings that we
might want applied in different regions.
So, the GPO has been linked in this particular example to the Eastern_Region OU, as
well as to the Western_Region OU. And so this is an important concept to understand
when it comes to working with Group Policy. Here, we have a specific screenshot of
going into a GPO and managing the settings within it. Remember, there could be
thousands of settings.

So specifically, what's been done here in the left-hand navigator, is somebody has
drilled down under Computer Configuration, so right away, the settings will apply to a
domain-joined computer that is affected by this GPO, regardless of who sits at that
computer and logs in. So under Computer Configuration, here in the Group Policy
settings, someone has gone under Policies, Windows Settings, Security Settings,
Account Policies, and finally, down to the Account Lockout Policy.

And over on the right, things like the Account lockout duration can be set. This is
something you would do if you want to lock out an account if a user enters the
incorrect password, let's say three times in a row, that type of thing. The other options
are: Account lockout threshold, Reset account lockout counter after. One thing to
watch out for, specifically when it comes to password-type policies, is that they only
work in a GPO that is linked to a domain.

Now, even if you're working with GPO settings that are applied to an OU, you will
still see the same thing. You will still see account password settings. So it's
misleading, because you would think that you could apply different domain password
settings at the OU level, but you cannot. It only works at the domain level.

But pretty much most other settings will apply wherever you set the link for the GPO,
so the site, the domain, or the OU level. To apply Group Policy, if you just wait
long enough, it will apply automatically. How long that takes depends on your
network environment: whether you've got a wide area network, your network speed,
how many users are signed in at the same time, and how many devices are joined to
the domain.

All of those play into how long it takes for Group Policy settings to be applied to
users and machines. Generally, we can say between 60 and 90 minutes, depending on
the type of setting. But if you want to, you can force the application of Group Policy
settings on a specific machine on demand, by issuing the gpupdate /force command.
Or you could go to the server-side tools, like where you're looking at GPOs, and you
could manually force a refresh of the Group Policy from that level as well.
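
For example, locally with gpupdate, or remotely with the GroupPolicy PowerShell
module. A minimal sketch; the remote computer name is a placeholder, and the
remote refresh assumes the target allows the required inbound traffic:

# Refresh Group Policy on the local machine.
gpupdate /force
# Or trigger a refresh on a remote domain-joined computer.
Invoke-GPUpdate -Computer "SomeComputer" -Force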

Deploying Configuration Settings Using Group Policy


Topic title: Deploying Configuration Settings Using Group Policy. Your host for this
session is Dan Lachance.
In this demonstration, I'm going to be configuring Group Policy. Now Group Policy
can be configured on a local individual machine, even a client Windows operating
system, if it's the right edition. For instance, if you're using Windows 10 Home
edition, you won't be able to configure Group Policy locally, at least not using tools
like gpedit.msc. But in this case, I'm going to be configuring Group Policy centrally in
Active Directory.

So I'm already at an Active Directory domain controller computer where I'm going to
use the Group Policy Management tools. You can run these tools remotely from a
machine, let's say, that's connected to the Active Directory domain. You don't have to
be at the server itself. However, that's where I am.

So I'm going to go to my Start menu, I'm going to go into Windows Administrative
Tools and in the Gs, I'm interested in Group Policy Management. Now, when I go into
Group Policy Management, the current forest, that I'm authenticated to, will show up,
and I can drill down under that to get to the Domains; there's my domain, and also
Active Directory Sites.

I'm interested in the domain, in this case, it's called quick24x7.local, where I've got a
Default Domain Policy GPO, or Group Policy Object. I'll select it, I'll just say, Don't
show this message again. OK. And so, the fact that this Default Domain Policy, GPO,
is indented under the domain, tells me that it is linked to the domain.

And we know that with Group Policy Objects, GPOs, the settings within them, and
we'll look at those in a moment, flow down through the Active Directory hierarchy. In
this case, it's going to flow down to everything in and under the quick24x7.local
domain.

And if I were to right click on the Default Domain Policy GPO and choose Edit, that
opens up the Group Policy Management Editor tool, and in here is where I get the
categorizations for all of the computer Group Policy settings that apply to a machine;
so regardless of who's logged into it, as long as it's joined to the domain, and also the
settings tied to users, that would follow users around regardless of which station they
sign into.

As long as it's connected to the Active Directory domain. So, that's fine. Let's go
ahead and close that out for now. I'm going to go to the Start menu and under the
Windows Administrative Tools, I'm going to fire up Active Directory Users and
Computers. The only organizational unit or the only OU I have is called Domain
Controllers.
That's there by default, and like it implies, it contains the domain controller server
accounts. It contains one item, labeled: WIN-M6CGN1MFJLQ. And we know it's an
OU because the icon is not just a solid yellow folder, but in the yellow folder it's got a
little, what looks like kind of a piece of paper. So I'm going to build a new OU. I'm
going to right click on my domain, I'm going to choose New and I'm going to choose
Organizational Unit.

And let's say we'll make an OU called HQ for headquarters. I'll OK that. Within that
OU, this is where we could create User accounts. So I'm going to create a User, let's
say here for Codey Blackwell, The host clicks the New Object User button in the tool
menu of the window Active Directory Users and Computers. This opens a window
that reads: New Object - User. It contains fields to specify First name, Initials, Last
name, Full Name, and User Logon Name. Below is the Next button. and the User
logon name will be cblackwell. So I'm just creating a user account and I'll specify and
confirm a Password. And if it's a unique Password that is not known, then I could
remove the requirement for the user to change the password at next logon.

Anyways, I'm going to go ahead and continue to the creation of the User. There's the
User. I could also create a Group. The host clicks the New Object Group button in the
tool menu of the window. Let's say I'll call this HelpDesk, and there's the group,
HelpDesk. So what I could do is select a User and I could click the add user to group
button.

And I want to add this user to the HelpDesk group so I can kind of search here for
help and Check Names. There's HelpDesk group. The User was added to the group.
So the point is I've got a new OU called HQ. Now, back in the Group Policy
Management tool, if I click the Refresh button at the top, the HQ OU now shows up.
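
As an aside, the same OU, user, and group can be created with the Active Directory
PowerShell module. A minimal sketch for the quick24x7.local domain:

# Create the OU, then a user and a group inside it, and add the user to the group.
New-ADOrganizationalUnit -Name "HQ" -Path "DC=quick24x7,DC=local"
New-ADUser -Name "Codey Blackwell" -SamAccountName "cblackwell" `
    -Path "OU=HQ,DC=quick24x7,DC=local" `
    -AccountPassword (Read-Host -Prompt "Password" -AsSecureString) -Enabled $true
New-ADGroup -Name "HelpDesk" -GroupScope Global -Path "OU=HQ,DC=quick24x7,DC=local"
Add-ADGroupMember -Identity "HelpDesk" -Members "cblackwell"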

And so, if I wanted to, I could configure a Group Policy Object or a GPO, that applied
only to HQ, and all the users and computers that might exist under HQ. I'm going to
right click on HQ and I can either create a new GPO and link it here to this OU, or
link an existing one. I want to make a new one. So I'm going to choose Create a GPO
in this domain, and Link it here. And it's going to be called HQ_Security_Settings.
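
As a quick aside, the GroupPolicy PowerShell module can create and link a GPO in
one pipeline; a minimal sketch:

# Create the GPO and link it to the HQ OU in one go.
New-GPO -Name "HQ_Security_Settings" |
    New-GPLink -Target "OU=HQ,DC=quick24x7,DC=local"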

Click OK. And if I drill down now under HQ, there's the GPO:
HQ_Security_Settings. If I right click on it and choose Edit, as we know, that's how
we can go in and start configuring it. So most security settings exist at the Computer
level as opposed to the User level.

So if I go down under Computer Configuration, drill down under Policies, and if I
were to go down under Windows Settings, and then under Security Settings, that's
where we would have a lot of security options. Now be careful. Things like Password
Policy, even though they show up here, won't work, because they only apply at the
domain level. The policies listed are: Enforce password history, Maximum password
age, Minimum password age, Minimum password length, Password must meet
complexity requirements, Store passwords using reversible encryption. This GPO is
linked to an OU, so even though Password Policy shows up here, be careful.

The other items are fine. So, for example, why don't we go and make a configuration
change here? Maybe on the left under Local Policies I'll go to Security Options, I'll
click on that, and for example, over on the right, I'm going to Prevent users from
installing printer drivers, so I'm going to define that setting and enable it.

Also down below for Interactive logon: Don't display the last signed-in username.
OK? That's probably a good idea from a security perspective. I'm going to select that
and enable that. That's especially a good setting in a shared computer environment
where a lot of different people might log on, or certainly in a public kiosk type of
environment.

Now I could also drill down here on the left under Administrative Templates, I can
actually even search it. I could right click and choose Filter Options. Let's say what I
want to do is I want to filter this on removable, as in removable media. So show me
any Group Policy settings under Administrative Templates, that have removable in the
Policy Setting Title, the Help Text or the Comment.

And when we click OK we've got little filter icons here and I can just go to All
Settings, and here we have all of our options related to what we searched for. So, what
I can do here is I can specify which options I'm actually interested in restricting, such
as prevent the installation of removable devices. I'll go ahead and select that and I will
enable that. And for existing removable devices that might already be enabled, we can
deny read access, so I could enable that.

We could also deny write access. Depending on our environment, we might not want
to allow reading and writing to removable media for security purposes, essentially for
data loss prevention. At any rate, there are potentially thousands of things you might
configure in Group Policy. Once these settings are created within the GPO, they will
apply within 60 to 90 minutes, perhaps longer if you've got a wide area network and
different site configurations in Active Directory.

But at this point, there's nothing else we have to do. It's just a matter of waiting for the
Group Policy settings to be applied. If you wanted to, you could go to an individual
machine at its Windows Command Prompt. The gpupdate command with the /force
switch is interesting. Force means it will force the machine to reevaluate all applicable
Group Policy settings for the computer we're running it on and for the user currently
logged on.
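
At the prompt, that looks like this; gpresult is an easy follow-up check to confirm
which GPOs, such as HQ_Security_Settings, actually applied:

    # Force an immediate reevaluation of all applicable Group Policy settings
    gpupdate /force

    # Summarize the applied GPOs for the current computer and user
    gpresult /r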

It'll force it to check all of them, even if they might have been pulled down previously.
So I'm going to go ahead and press Enter. Now, at this point, it needs to talk to the
nearest domain controller. Well, I'm actually doing it right on a domain controller in
this example, but it doesn't have to be done this way. The thing is, you would only be
forcing Group Policy on the individual machine where you type in this command.

So sometimes, if you are testing, this is applicable. But at any rate, now we have a
sense of how we might work with GPOs, Group Policy Objects, and force the update
of Group Policy settings on a specific domain-joined computer.

Enabling Server Inventory


Topic title: Enabling Server Inventory. Your host for this session is Dan Lachance.

There might be some cases where a server technician is responsible only for one
server, maybe a handful of servers, maybe even a dozen. But what about when you've
got many dozens of servers or hundreds or even thousands that you have to track,
whether they be physical or virtual?

You need a way to know what is installed on all of those servers and what versions of
software; you need to be able to inventory them. And I don't mean remoting into each
and every one periodically and manually taking a list of what's there. We need an
automated way. We can do that even in the cloud. So we're going to do that here in
Amazon Web Services or AWS. We're going to be using something called the AWS
Systems Manager, which has a number of different tools.

It's a suite of tools, one of which allows us to work with server inventorying. So here
in the AWS Management Console, I've signed in. In the search field up at the top, I'm
going to search for systems manager, click it in the resulting search list, and that
places me in the AWS Systems Manager console. I've got this navigational menu
system over on the left related to Systems Manager.

And sure enough, if I scroll down under Node Management, we have an Inventory
option. So I'm going to click Inventory and Settings, and then click Setup new
inventory. The panel contains a template to specify inventory details, Targets,
Schedule, and Parameters. It's going to be called Inventory-Association; I'm fine with
that. I can choose to select all instances in the account, and that means EC2 instances,
virtual machines, but what's interesting about AWS is that many of its options allow
you to filter things out and select them by specifying a tag.
For example, what I could do is say, I'd like to inventory all of the servers or instances
that were tagged as Project A. Or it doesn't have to be a project; it could be any
key-value pair. Maybe your company uses CostCenter, and maybe it's CostCenter
Toronto1. Whatever the case, you could use that to filter out which instances you
want to inventory.
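
If you go the tag route, the tags have to exist on the instances first. Here's a small
hedged sketch using the AWS Tools for PowerShell, with an instance ID and tag
values that are purely illustrative:

    # Tag an existing EC2 instance so tag-based inventory targeting can find it
    Import-Module AWS.Tools.EC2
    New-EC2Tag -Resource "i-026729b520e7f9a16" -Tag @{ Key = "CostCenter"; Value = "Toronto1" }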

Of course, when you deploy instances, you have to make sure that you tag them
accordingly; otherwise this won't work as well as it should. Anyway, I'm going to go
ahead and choose to manually select instances. And just bear in mind, only running
instances will be shown here. So I've got a running WinSrv2019 instance, and I'm
going to go ahead and select it. Now, of course, we could select multiple instances.

The idea being, we want a centralized way to get inventory for many servers. But
anyway, that's what we're going to do in this example, just the one. We can specify the
inventory collection interval, which is set here to every 30 minutes, but you could
change that unit to Hour(s) or Day(s).

I'm going to leave it as the default and it's going to inventory Applications, even AWS
software components running in the virtual machine, network configuration, Windows
Updates, and all of the other items that are turned on here by default. And I'm good
with that. I'm not going to change any of that. The rest of the items turned on are:
Instance Detailed Information, Services, Windows Roles, Custom Inventory, Billing
Info, File, Windows Registry. We could also sync inventory logs to an S3 storage
bucket, but I'm not going to do that.
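
For what it's worth, this whole click-through can also be expressed as a State
Manager association against the AWS-GatherSoftwareInventory document. The sketch
below assumes the AWS Tools for PowerShell and enables only a few of the inventory
parameters; the parameter keys and the tag target are assumptions to adapt to your
account:

    Import-Module AWS.Tools.SimpleSystemsManagement

    # Collect application, network, and Windows Update inventory every 30 minutes
    # from instances tagged CostCenter=Toronto1
    New-SSMAssociation -Name "AWS-GatherSoftwareInventory" `
        -AssociationName "Inventory-Association" `
        -Target @{ Key = "tag:CostCenter"; Values = "Toronto1" } `
        -ScheduleExpression "rate(30 minutes)" `
        -Parameter @{ applications = "Enabled"; networkConfig = "Enabled"; windowsUpdates = "Enabled" }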

So I'm going to click Setup Inventory. And it says, Setup inventory request succeeded.
So what that does is it then puts me on the Inventory Dashboard, where we can start to
get some details about what has been inventoried. And it may take some time, of
course, it depends on your collection interval, how many instances you're
inventorying, and whatnot.

What we're looking at discovering are things like the components that we are going
to be inventorying, based on our configuration, and also the installation count it has
determined for our instances; it's showing that we have Windows Server 2019
Datacenter edition. And then it starts showing some of the top applications and the
top server roles, such as File Server, File and iSCSI Services, and so on. The other
roles are: .NET Framework 4.7, .NET Framework 4.7 Features, File and Storage
Services.

Down below are the Top 5 Services, based on the number of installation instances; of
course, we only have one virtual machine, but they're starting to be shown here as
well. And we've got a couple of AWS agents shown as being part of the Top 5
Services installed on the virtual machines being inventoried. The Top 5 Services are:
AWS Lite Guest Agent, Amazon SSM Agent, Background Tasks Infrastructure
Service, Base Filtering Engine, CNG Key Isolation.

Another way to view the inventory information for managed instances or servers is
over on the left: if I scroll down to Node Management and choose Fleet Manager,
then in this tool I'll have a list of managed instances, meaning they're managed by
AWS Systems Manager.

So, if I click on the link for the Instance ID, it gives me some details at the top: the
platform is Windows, we have the OS being shown, the SSM Agent version for
Systems Manager, even the IP address and all that stuff, but down below I can go
into the Inventory area. The Instance ID reads i-026729b520e7f9a16. The next
console page features a Tools menu list on the left. The Instance overview is
selected. OS name reads: Microsoft Windows Server 2019 Datacenter, SSM Agent
version reads 3.1.459.0, IP address reads 172.31.2.4.

If I click Inventory, then I have categories of inventory, such as AWS applications
that are being inventoried on that instance, such as the AWS SSM Agent and its
detailed version, AWS Tools for Windows, and so on. The remaining tabs read from
left to right: Tags, Associations, Patch, Configuration compliance.

But if I change that filter in the list, let's say I go down and choose AWS:Network,
then it filters things out and only shows me details related to the network
configuration. So here I have the MAC address that was assigned to this cloud-based
virtual server, the IPv4 address, and the IPv6 link-local addresses, which are designed
to be used on a local network link and always have an fe80 prefix.

I have the Subnet mask, the Gateway, the DHCP server IP address, which is often
handled automatically in the cloud, and the same with the DNS server. So a lot of that
stuff shows up under the AWS:Network category. If I were to choose
AWS:InstanceDetailedInformation, then I can see things like the CPU model.

Of course it's virtual, but it's showing up here as an Intel Xeon CPU at 2.4 GHz, 2
CPU cores, and so on. So modern server management in the cloud really does deal a
lot with things that might not seem server centric, like working with a specific cloud
provider solution that gives us insight into a server, such as through Inventory.
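
And the same inventory data the console shows can be pulled from PowerShell. Treat
the cmdlet and type names below as assumptions drawn from the SSM module and
verify them locally:

    # Summary of collected inventory across managed instances
    Get-SSMInventory

    # Network inventory entries for one instance, mirroring the AWS:Network filter
    Get-SSMInventoryEntriesList -InstanceId "i-026729b520e7f9a16" -TypeName "AWS:Network"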

Configuring a Cloud-based Directory Service


Topic title: Configuring a Cloud-based Directory Service. Your host for this session is
Dan Lachance.

In this demonstration, I'm going to enable a cloud directory services domain.


Basically, what you can do in the cloud, of course, is always deploy your own
Infrastructure-as-a-Service virtual machines manually, in which you could then install
things like Microsoft Active Directory.

You can completely do that manually. What I'm going to be doing here, though, in the
Microsoft Azure public cloud, is creating a new Azure AD, that's Azure Active
Directory, tenant. That's kind of the loose equivalent of creating a new on-premises
Active Directory domain, because a separate Azure AD tenant is tied to its own
subscription, which allows objects to be created.

It has its own list of user accounts and groups, and registered applications, that type of
thing. But it's not like having full fledged, full-featured Active Directory in the cloud,
because it doesn't support things like Group Policy and so on. So to get started here,
I've signed into the Microsoft Azure Portal, and when you create an Azure account in
the cloud and tie it to a subscription, you always have a default Azure AD directory or
tenant.

Now, if I click here towards the upper right here in the portal, where my user account
information is, I have a Switch directory option. The host is redirected to the page
labeled: Portal settings, Directories + subscriptions. And down below I'll have a list
of the different directory services or Azure AD tenants. The one we're using right now
is named Azure DevOps, and of course, the Domain name is shown kind of like a
URL. The default URL suffix, the DNS suffix, for your Azure AD tenant will be
onmicrosoft.com. The URL reads: stefansammsoutlook.onmicrosoft.com.

Now, of course, you can customize that, but that's the default. But take note that we
are under Favorites. If I click the All Directories link at the bottom, this is where we
truly get a list of all the Azure AD tenants that were created for this Microsoft Azure
account. And the current one is shown here, but we have a Switch button for all of the
different Azure AD tenants. And you might wonder, why would I need multiple
Azure AD tenants?

Well, if you're a Microsoft person, you would also have to ask yourself, why would I
ever need more than a single Microsoft Active Directory domain on-premises? Well,
the answer usually goes something like this: I have a different set of administrators,
maybe for a different project, a different department, maybe a different country, state,
province, business unit, whatever the case is, that might necessitate the creation of a
separate domain, or, in our case, an Azure AD tenant.
So, I can go down and I can select a specific tenant and click the Switch button, to
switch over to that Azure AD tenant. Now that means at this point, we're looking at
things from the perspective of that tenant. If I go to Azure Active Directory over on
the left, and then if I start looking at Users, well, it's going to be a different list of user
accounts, because it's a different Azure AD tenant.

If I click Groups, well, we're going to have a different list of Groups, and so on. The
other thing to think about: if I go Home here and then go to create something, say I
click Create a resource and choose Windows Server 2019, it says I'm currently signed
into a directory, an Azure AD tenant, that doesn't have any subscriptions. The
directory reads: 'FakeCorp356378'. The rest of the message on-screen reads: You
have other directories you can switch to or you can sign up for a new subscription.

So an Azure subscription is tied to an Azure AD tenant, which allows you to create
resources, stuff in the cloud. Subscriptions are also how billing actually works. Let
me switch back: I'll click on my user account name in the upper right, choose Switch
directory, and switch back to the first one we were at, Azure DevOps. Here, if I go to
Create a resource and say Windows Server 2019, it allows me to Create a virtual
machine.

It will let me do it, because a subscription is tied to this tenant. In the Azure portal, in
the top center bar, if I were to search for sub and then choose Subscriptions, here it is:
Pay-As-You-Go. If I click on it, it gives me some details about that Subscription. The
Details include: Subscription ID, Subscription name, Directory, My role, Status,
Plan, Parent management group, Secure score. When I go back to the Subscriptions
view, it is tied to the current tenant, which is shown here in the upper left. So there's
an important relationship, then, between those two items.
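
One quick way to see that relationship for yourself is with Az PowerShell, since every
subscription record carries the tenant it is tied to. A minimal sketch, assuming the Az
module is installed:

    Connect-AzAccount

    # Each subscription lists the TenantId (the Azure AD tenant) it belongs to
    Get-AzSubscription | Select-Object Name, Id, TenantId, State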

However, what about creating a new Azure AD tenant? Isn't that really why we're
here? Yes, it is. So let's start by going to Azure Active Directory on the left, and I'm
going to click the Manage tenants button up at the top. This is where all the tenants
are shown again, but what we're here to do is click Create. So I want to create an
Azure Active Directory or Azure AD tenant.

I'll click Next for Configuration, and for the name I'm going to call it Twidale
Investments, a fictitious company name. And for the initial DNS domain name, how
about twidaleinvestments, all one word? And it's going to tack on the standard DNS
suffix of .onmicrosoft.com. But I can change that, of course, to a custom DNS domain
name if I really want to.
Datacenter location is set to United States, and I'll leave that. I'm going to click Next:
Review + create. Validation has passed, meaning it's a unique name and I filled in
what I needed to fill in. So I'm going to go ahead and Create that Azure AD tenant.
And now it's asking us to prove that we're not a robot.

So let me just get a different CAPTCHA listing here that I can actually read, and I'll
just type in what it wants and we'll click Submit. So it says: Tenant creation in
progress, this will take a few minutes. Now is a good time to grab a coffee. OK, so
before too long, tenant creation is showing as having succeeded. Excellent, it says
Click here to navigate to your new tenant.

We also know how to switch between tenants, so I'm going to go ahead and do just
that. Now remember, of course, there's no subscription tied to this. So if I were to
search for sub and go into Subscriptions now that I've switched to that, it says: You
don't have any subscriptions, but I could click the Add button and select something
appropriate and then continue on with creating resources in the cloud.

But I won't do that. What I want to do is go back to Home. Essentially, what I really
want to do is open up Azure AD for this new tenant and go to Users. So what I'm
going to have is the user account that I'm signed in as, as a member here, but I can
also click New user. So, for example, I can create a user directly here or invite them
through email to participate, but I'm going to create the user directly.

The user here is going to be cblackwell. Now what it's going to do is put an @ symbol
after that, so it looks like an email address for authentication, and then it's going to use
my DNS name for my Azure AD tenant, in this case
twidaleinvestments.onmicrosoft.com.

I can then continue spelling out all of the details for that user account down below,
also filling in a Password; whether I fill it in myself or auto-generate it doesn't make a
difference. I'm going to let it auto-generate it and I'm going to copy it. I'd have to
communicate that to the user, of course, to allow them to use that Password.

Or I could create the password. I could specify a complex password right here, which
is what I'm going to do, actually. Once that's been done, I can opt to add the user to
Groups or Roles so they get permissions, that type of thing. I also have authentication
settings, blocking sign in. I can fill in Usage location.

That can also be important in some cases, depending on the types of licensed products
you might be using in Azure. I'm going to create that user, so user Codey Blackwell
now exists. So that's just a quick idea of how to set up a cloud-based directory service
with a default domain name, with a user account, in Microsoft Azure.
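
Creating that same user can be scripted as well. Here's a hedged sketch with the
Az.Resources module; the UPN and password are illustrative values, not anything
from a real tenant:

    # Create a cloud-only user in the current Azure AD tenant
    $pw = ConvertTo-SecureString "SomeC0mplex!Passw0rd" -AsPlainText -Force
    New-AzADUser -DisplayName "Codey Blackwell" `
        -UserPrincipalName "cblackwell@twidaleinvestments.onmicrosoft.com" `
        -MailNickname "cblackwell" `
        -Password $pw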
Joining Stations to a Cloud-based Directory Service
Topic title: Joining Stations to a Cloud-based Directory Service. Your host for this
session is Dan Lachance.

In this demonstration, I'll be joining a computer to a cloud-based directory service
domain. Now specifically, what I want to do is join a Windows 10 Enterprise
on-premises computer to Microsoft Azure Active Directory, otherwise called Azure AD.

Now, in order for that to take place, we should probably first ensure that the
appropriate items have been configured on the cloud side in Azure. I've signed into
the Microsoft Azure Portal, where, if I click towards where my credentials are in the
upper right, I can see the current directory that is selected.

And of course, I could click Switch directory to switch between all of the different
directories or Azure AD tenants; you could call them Azure AD domains. We could
switch to any one of them by clicking the Switch button next to the respective name.
Now, currently, Twidale Investments is the current Azure AD tenant, and that's the
one I want to join the computer to. Startup directory is Azure DevOps. So in the
left-hand navigator here in the portal, if I scroll down, I can go to Azure Active
Directory, where I can click Users over on the left.

Specifically, I've got a user already created here whose name is Codey Blackwell.
Now we're going to need to know the sign-in name. The sign-in name is
cblackwell@twidaleinvestments.onmicrosoft.com. What we need to do is specify that
as the sign-in name, as well as the password, and if the user password is forgotten,
there's a Reset password button available right here.

While we're here, if we go into the Devices view, notice that we don't have any
Devices that are shown. It says: No devices found. Well, that will change after we've
joined our Windows 10 device to this Azure AD tenant. So back here in Windows 10,
let's join this thing to an Azure AD tenant.

I'm going to go into my Start menu, search for settings, and open Settings. What I
want to do here is go into Accounts, so I'll click that, and over on the left I'm
interested in the Access work or school option, where I can then click Connect on the
right. Now this is going to prompt me to sign in with a work or school account.

This is where I'm going to put in the Email address for Codey Blackwell. Once I've
done that, I'll go ahead and click Next, at which point it will prompt me for the
Password. So it wants me to update my password. That's no problem, I'll go ahead and
do that now, and I'll click the Sign in button. So at this point, it's registering this
device with Azure AD. And if we had any device policies that were configured, which
we don't, they would then be applied.

Before you know it, we get the You're all set message. So now at this point, I'm going
to click Done. And when I'm working here in the Settings part of the Control Panel
area, when I look at Access work or school, we now have our Work or school account
showing here. The account reads: cblackwell@twidaleinvestments.onmicrosoft.com.
Of course, I could click on it at any time.

I could choose Manage your account, and that takes me into a web browser where I
could change some details about my Azure AD account. I could also, of course, select
that and Disconnect from Azure AD. Back here in the Azure Portal, I'm still looking
at my Twidale Investments Azure AD tenant. What I want to do this time in the
left-hand navigator is scroll down and click Devices.

And on the right, we've got a device count that says 1 Total number of devices, so I'm
going to click on that link. And sure enough, our desktop is now showing up here. It's
running the Windows OS, a version of Windows 10, and it is Azure AD registered,
with an Owner shown as Codey Blackwell.
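
By the way, on the Windows 10 device itself you can confirm this state with the
built-in dsregcmd tool. For an Azure AD registered (as opposed to joined) device, I'd
expect the user state section to show WorkplaceJoined as YES, though verify the
exact fields on your build:

    # Report the device's Azure AD join/registration state
    dsregcmd /status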

So if I click directly on that desktop to open up the details, we get some other
attributes like the Device ID, the Object ID here in Azure. The Device ID reads:
ee311a44-00e7-4d81-b731-e7215e9b0126. The Object ID reads: 452867cf-8d64-
4d28-abf8-d1b78a462a1b. And again, we get the details like the Owner, and the User
name, and so on. Now what's interesting about this, too, is if I look at it from the
perspective of the user.

So let's go back to our Azure AD tenant, Users, open up Codey Blackwell again, and
like we started off with initially, if we look at the Devices tied to that user, of course,
the device now shows up there. So the way that this is often used in Azure AD is if I
go back to my Azure AD tenant, and in the left-hand navigator, if I go down to
Security, this is used with Conditional Access policies.

Now, Conditional Access policies can be created here, so I can click New policy.
Now, if that's grayed out when you try it, it usually means you don't have the right
type of feature set with your subscription, like Enterprise Mobility or Azure AD
Premium, assigned. At any rate, when I click New policy, one of the things that I can
specify are Conditions.

And if I go down to Conditions and select that, I can specify the device platform.
Other properties that can be specified are: User risk, Sign-in risk, Locations. It's Not
configured, so I could choose Yes, and then I could specify that it needs to be a
Windows device. Now, of course, as you might imagine, in order for that to be
checked, it means the device needs to be joined to Azure AD. And so we can use that,
then, in those types of policies.

Course Summary
Topic title: Course Summary.

So, in this course, we've examined how to plan, deploy, and manage on-premises and
cloud-based directory services. We did this by exploring deploying Microsoft Active
Directory domains and joining computers to them. We examined Group Policy
planning and configuration. We worked with server inventory gathering and
configuring a cloud-based directory service, followed by joining computers to a
cloud directory service domain.

In our next course, we'll move on to explore file system security.
