Authentication Services


Authentication Services

In today’s lesson we look at RADIUS.


We define what RADIUS is, how it’s used, and discuss the different RADIUS
authentication types for the type of user requesting access.
We take a deeper look at specific types of authentication such as LDAP, Kerberos, and
explore what their purposes are, as well as their relationship to the protocol stack they
engage.
Transcript
We start by looking at RADIUS. RADIUS is the Remote Authentication Dial-In User Service; it allows for authenticating remote users. Periodically we might have remote users working from home or from another client location, and they need access to internal systems and resources. These people have to be authenticated properly, via solutions like RADIUS. RADIUS is often used in dial-up user connections, if you do dial-up networking connecting to the internet using telephone lines, and in enterprise-grade wireless authentication setups.
RADIUS allows for authenticating our users, with internal computers and resources and
other network devices. RADIUS is very important because it facilitates authentication of
users that are connecting remotely to internal network devices. Next we look at TACACS+. TACACS+ is a completely re-written version of TACACS; it provides support for multi-factor or strong authentication, and it works similarly to the RADIUS system. TACACS+ is also built on TCP and encrypts all information exchanged between the client and the server. We also have Kerberos. Kerberos is an authentication standard that employs the Key Distribution Center (the KDC) and the Ticket Granting Service (the TGS), and it uses tickets to prevent replay of credentials. These tickets are usually time stamped so that once they are presented to the servers, the servers are able to track the use of tickets to prevent a replay attack. Kerberos uses symmetric key cryptography and was designed to provide single sign-on in a client-server network environment. Single sign-on, discussed in the previous video, is where our users provide only one set of credentials to log on to every system they need access to that is supported by Kerberos. It is the primary authentication mechanism in both UNIX and Windows environments, and Kerberos supports
mutual authentication between systems and users. For mutual authentication, two entities
must authenticate to each other before communication proceeds. So say for example a user
is trying to access a server the user would authenticate to the server and the server
authenticates to the user. A lot of banks carry out mutual authentication these days; the user is made to select an image on the server, an image they would like to see every time they log on to the server. This is the way by which the server authenticates to the user. So once the user provides their log on, they see that image, and next they provide their credentials. So the server also authenticates to the user and the user authenticates to the server, and this way there can be no man in the middle. Mutual authentication ensures there is no man in the middle, because both entities now know they are communicating with each other and not with a man in the middle. The last item we discuss for section 5.1 is LDAP – the Lightweight Directory Access Protocol. This is the protocol that is used to query directories. In the very early days the phone companies developed these protocols so that they could query databases to find out what the content of the database is. Usually our networks these days are created in Active Directory, and within Active Directory we have objects that could be within the Active Directory design. Our objects could include users, printers, groups and computers. So we use the Lightweight Directory Access Protocol to query the directories, finding where users reside, where objects like printers reside, and where other systems reside within every domain.
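The directory query described above can be illustrated with a short sketch. This is a minimal, hypothetical example using the third-party Python ldap3 library; the server address, service account, and search base are placeholders rather than values from the lesson.

```python
# Minimal sketch of an LDAP query against a directory such as Active Directory.
# All connection details below are hypothetical placeholders.
from ldap3 import Server, Connection, ALL, SUBTREE

server = Server("ldap://dc01.example.local", get_info=ALL)

# Bind (authenticate) to the directory before querying it.
conn = Connection(
    server,
    user="EXAMPLE\\svc_ldap_reader",   # hypothetical read-only service account
    password="change-me",              # never hard-code real credentials
    auto_bind=True,
)

# Ask the directory where a user object resides and pull a few attributes.
conn.search(
    search_base="dc=example,dc=local",
    search_filter="(&(objectClass=user)(sAMAccountName=jdoe))",
    search_scope=SUBTREE,
    attributes=["cn", "mail", "memberOf"],
)

for entry in conn.entries:
    print(entry.entry_dn, entry.mail)

conn.unbind()
```

The same pattern applies to other object types such as printers, groups and computers by changing the search filter.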

Access Control

Access Controls
Welcome to the Access Control lesson of the Chapter on Authentication Services. In today’s
lesson not only do we explore the different types of Access Control, but we also explain
the Access Control life cycle and what that process looks like on both the server and client side.
We’ll define for you what authentication is, how that differs from authorization and what an
Access Control list is.  You’ll learn the difference between mandatory and discretionary
access, what Rule vs. Role Based Access is, if the two ever work together, and why that is
important.
Transcript
Welcome, my name is John Oyeleke, subject matter expert for the CompTIA Security Plus.
Today we will be looking at different types of access controls. First, we start with…to access
a system a user needs to identify themselves to the system and this is the stage called
identification. When you identify, you simply provide your identity to the system. It could be
in the form of a username, a user ID or an email; that is the identification stage, the first process. Next the system has to verify the credentials that you are giving, so you also need to provide an authentication factor. What is authentication? Authentication is a process. It is the process by which the system verifies that you are who you say you are. You have identified yourself as user A, and you also have a password you provide for the authentication step. You provide your username and your password. The system will then compare that to what is in the database. If there is a match, you are granted access. If there is a mismatch, access is denied and you get an error message. After authentication is done the next
step is authorization. Authorization has to do with the system checking your permissions.
The system will check your permission, something like checking the access control list, to see what you are allowed to do or not allowed to do on the system once you are granted access. These three steps must happen in that fashion: one, two and three. We don't want authentication happening before identification, or authorization happening before authentication. They must happen as identification, authentication, then authorization. The access control lists are simply lists of permissions associated with objects. Basically the access control lists specify the type of access that a user or a group of users could have on a specific object or groups of objects. All of these are very important so that we can have accountability: who has had
what access, who did what and when. We are now going to discuss some access control
models. We have mandatory access control. This is an access control model based on security labels. Usually the objects are granted security labels and the subjects are also granted security labels. The subject is the entity trying to access the object, so the system will check the security label of the subject trying to access the object. The system, in effect, simply checks the security labels to determine if access should be granted. In mandatory access control, access control is based on security labels and it is put in effect by the system. Discretionary access control depends on the discretion of the owner of the object. The owner could be, say, the head of the department; that person will approve or deny access to what could be a database or a facility, though in most cases a database. The owner’s discretion will grant access or deny access to the object. The most important thing to remember is that the owner of the resource permits or denies access to the resource. Next we look at rule based access control. This is
access control based on a set of rules. In many cases we implement our firewalls using rule
based access control. You implement the rules, you dictate the rules on the firewall and the
firewall is able to filter traffic based on the rules you set. Where you set no rules your
firewall will allow all traffic. When you set the rules, the firewall will inspect the traffic and, based on the rules you have set, determine whether to drop the traffic or allow the traffic. Next we have role based access control. What role do you play? This type of access control depends on the role you play in the organization. What you access in the database depends on the role you play. Where you can go in the facility depends on the role you play. What you can do on the network depends on the role you play. The role you play dictates what you have access to. Say we have a basic user in the HR department. The user can probably see my date of employment but not my date of birth. The HR director could see my date of birth and everything else about me because of the role they play. The role you play dictates the level of access you have for role based access control. Please bear in mind for the exam: do not confuse rule-based with role-based. It's very often a little trick you could fall for. We have rule based access control and role based access control. The
principles are clearly different from each other. Lastly, we have something called time of day
restrictions. With time of day restrictions, we have the ability to restrict access to a facility,
network devices, PCs based on the day of the week or the time of the day for the individual
users or a collective group of users. We could restrict access to a facility on certain days of
the week, certain time of the day, we could use this on a network to restrict access to
printers, restrict access to computers based on the time of the day or the day of the week
for specific users. We use this as a form of access control. You could implement this on a
server to ensure your users can only sign on a specific date at a specific time. They can
only stay logged on for a specific duration. We refer to this as time of day restriction.
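To tie the three steps together, here is a small, self-contained Python sketch of identification, authentication and authorization against an access control list. The credential store, object name and permissions are invented for the example; a real system would keep hashed passwords in a directory or database.

```python
# A minimal sketch of identification, authentication, then authorization.
# All data below is hypothetical.
import hmac

CREDENTIALS = {"alice": "s3cret-pass"}
ACL = {"payroll.xlsx": {"alice": {"read"}, "hr_admins": {"read", "write"}}}

def identify_and_authenticate(user_id: str, password: str) -> bool:
    """Steps 1 and 2: verify the claimed identity against stored credentials."""
    stored = CREDENTIALS.get(user_id)
    # compare_digest avoids leaking information through timing differences
    return stored is not None and hmac.compare_digest(stored, password)

def authorize(user_id: str, obj: str, action: str) -> bool:
    """Step 3: check the ACL to see what this subject may do to the object."""
    return action in ACL.get(obj, {}).get(user_id, set())

if identify_and_authenticate("alice", "s3cret-pass"):
    print("read allowed:", authorize("alice", "payroll.xlsx", "read"))    # True
    print("write allowed:", authorize("alice", "payroll.xlsx", "write"))  # False
else:
    print("access denied")
```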

Authentication Authorization and Access Control (part 1)


We always need to know who is on our network and why they are gaining access to the system.
In our next lesson, we look at Authentication, Authorization and Access Control in terms of
account management and the administration of rights, permissions and password usage.
We discuss the entire lifecycle, what happens at each stage of those processes, and where
those processes take place during Authentication, Authorization, and then Access Control.
Transcript
We always need to know who did what and when on our systems so for that we must have
every user have an account or every entity have an account because sometimes it might be
another system gaining access to a system or a user gaining access to a system. So every
entity must have an account. If you have an account to get on to the system you have to
identify yourself to the system. So the first step is identification. In the identification stage, which is the first step, the system or the user provides the user ID or an email address. The user ID or email address could also be known by many people, so it is not all that is required to gain access to the system; authentication needs to take place. If we ask ourselves, what
is authentication? Authentication is the process by which the system verifies that you are
who you say you are, so the system will take your ID and the credentials you submitted and check them in the database. If access is permitted, the system will grant you access. If access is denied, the system will prevent access to the resources on the system. So this is how authentication takes place. If you are logged on to a local machine, authentication takes place in what we call the Security Accounts Manager (SAM). If you are logging on to the system on
a domain authentication takes place on the domain controller. When we authenticate to the
system, we use authentication factors i.e. a factor presented to the system to facilitate
authentication. There are several factors that could be used for this. One is something you
know, here you provide a PIN (Personal Identification Number) a password or a pass
phrase. Sometimes we require only one authentication factor, and the most common is something you know: you try to log on to your email, you put in your user ID to identify yourself to the system, then you put in your password; that is single factor authentication. You are only authenticating with one factor and in that case it is a password.
A pass phrase is a much longer form of a password, it is considered much more secure
because there are many more characters in there so a pass phrase is also something you
know. And finally the PIN (Personal Identification Number) is something you know. Another
authentication factor is something you have; this involves the use of tokens. A token is a
physical device in your possession, you press the button in there and it generates a
number, you then key in the number into the system along with some other requirements, and then you have access or you are able to complete the transaction. So it is a device in your possession. We have many types of tokens. Tokens could be cards that are issued: say you go to a hotel, this is your key, it's a token; you need access to a facility, you could use a proximity card, that is a token, something in your possession. A token could also be a USB device, so organizations will put your permissions and limitations or restrictions on that USB device. The token must be connected to the system; otherwise the system will not start. Your phone could also be considered a token; some organizations will send information to the number they have on record for you, so only you, in possession of that device, have access to the information, and that is something in your possession. The cards we use at the ATM are also tokens, something you have. Something you are: you are using
the physical attributes of a person to identify them; we call this biometrics. Biometrics involves using the physical attributes of a person to uniquely identify
that person. We have finger prints, we have hand geometry, we also have pupil pattern,
retina pattern at the back of the eye, facial recognition, DNA and this could be used to
uniquely identify one person from the other. Somewhere you are is location based authentication: we can allow you to authenticate from a regular system in the enterprise but not at strategic systems, maybe at the help desk or at the receptionist or within the server room; that is somewhere you are. It could also be based on IP address, so the location where you are at, by IP address, will determine where you can log on from. Something you do is also used as
authentication factors, we have signature dynamics, using pressure sensitive plates, pads
we can measure the speed at which you write the signature and the pressure you exert on
that pad. So even if someone else was to attempt to fake your signature they could get the
pattern right but they can’t exert the same pressure neither could they get the same speed
so that is for signature dynamics. For keyboard dynamics, we measure your properties on
the keyboard we have sensitive keyboards, specialized keyboards that can capture the time
you move between the keys and how much time you spend on each key. The time with which you move between the keys we call your flight time, and the time you spend on each key is your dwell time. Several words are flashed across the screen and you type in those words; the keyboard will
capture your flight time and your dwell time. So if someone else has access to the system,
they flash the same words yes they can type in the words correctly but they couldn’t match
your flight time and your dwell time so this could be used to uniquely identify you from those
persons. So having these authentication factors, we have to consider what we call single
factor authentication and multi factor authentication. Something you know let's call it A, something you have B, and something you are C. For us to say single factor authentication you could be using A or B or C; you are using only one of them. Where you are using either A or B or C, you are said to be doing single factor authentication. For multi factor authentication, or two factor authentication, you could be doing AB or BC or CA and then it's considered two factor authentication. It's considered three factor authentication, or multi factor authentication, if you are doing A, B and C. It's not two factor authentication if you are doing AA or BB or CC; if the two factors both come from A it's not two factor authentication. They must come from different lines to consider them multi factor authentication. Look out for that on the exam please. So you could be told that I sign in using my PIN and my password; that is still single factor because both items are in the same line. You could be told that I have two different tokens; that is still single factor because both items are in the same line. I could do hand geometry and retina pattern at the back of the eye and that is still single factor because both items are in the same line, so please watch out for that. Next we talk about authorization. Before
the system allows access, your permissions are reviewed: are you a general user or are you an administrator, what should you have access to, what should be blocked? The system will review your permissions before you finally have access to the system. That way, if we need to lock down Control Panel it's locked down; this is usually applied via group policy. So if we need to lock down certain management interfaces they are locked down, and only the things for which you are assigned or allowed would you have access to, based on the principle of least privilege. This principle dictates that our users have only the permissions they need to do their work, no more, no less. So authorization will also take place before you finally have access; then you are able to use the system and you are restricted by the system as to what you should do and what you shouldn't do, and this finally gives room for users to fully use the system.
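The A/B/C rule above lends itself to a short sketch. The factor names and categories below are illustrative; the point is simply that only factors from different lines count toward multi-factor authentication.

```python
# Sketch of the "different lines" rule: factors only count toward MFA
# when they come from different categories. Names are illustrative.
FACTOR_CATEGORIES = {
    "password": "something_you_know",
    "pin": "something_you_know",
    "hardware_token": "something_you_have",
    "smart_card": "something_you_have",
    "fingerprint": "something_you_are",
    "retina_pattern": "something_you_are",
}

def count_factors(presented: list[str]) -> int:
    """Return how many distinct factor categories were presented."""
    return len({FACTOR_CATEGORIES[p] for p in presented})

# PIN + password: both "something you know", so still single factor.
print(count_factors(["pin", "password"]))                         # 1
# Password + hardware token: two different lines, so two-factor.
print(count_factors(["password", "hardware_token"]))              # 2
# Password + smart card + fingerprint: three-factor / multi-factor.
print(count_factors(["password", "smart_card", "fingerprint"]))   # 3
```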

Authentication Authorization and Access Control (part 2)


In this lesson we continue exploring the lifecycle of Authentication, Authorization and Access
Control by looking at access control mechanisms in use today.
For example, we define the Common Access Card (CAC), what it is, how it’s used and what
types of environments deploy it.
We contrast the CAC with the Personal Identification Verification Card and discuss
examples of that method and how it’s used.  Then we take a deeper look at how these
mechanisms are used within the Rule and Role Based Access Control concept to
demonstrate the entire security life cycle as deployed in real time.
Transcript
Next we look at the common access card. The common access card is a DoD-issued card; the Department of Defense issues this card. It is a card that is actually also a smart card; it has a chip on it. It allows you to have your ID, your picture, your information, information that could verify your identity; it also allows your permissions to be stored on that chip. Users could use this as a token; you could have it to authenticate you on a system. You could have it to authenticate you at a facility; in some elevators you insert it into a slot and it grants you access to a certain floor. It also could be used to log on to a system whereby you must insert the card into a card reader on the system before you are able to log on. The common access card is issued by the Department of Defense and is used for authenticating users. We also have the personal identification verification card. A perfect example of this is the driver's license; this is used to verify your identity. It carries a picture of yourself, your name and certain other information to verify your identity. Next we
talk about the smart card. A smart card is simply a card like the ATM card, but this card has a chip on it, and information could be stored on that chip that allows processing to take place on the card when we insert it into a machine. So when the card is inserted into a machine, information can be read off that chip to either allow access to certain resources or establish the permissions of the user on a system. Next we talk about the principle of least privilege. This is a very
important principle in information security. Our users want all the access in the world but if
we follow the principle of least privilege it dictates that our user should only have the exact
permission they need to do their work no more, no less. If you give insufficient permissions
they’ll end up calling the help desk and there is no productivity: I can’t do this, I can’t print, I can’t reach
this server, I can’t do that. But if we give sufficient permissions they can do all their work.
We should not give excessive permissions or otherwise these will be abused. So we should
always follow the principle of least privilege to ensure our users cannot abuse privileges
assigned to them. Next we talk about separation of duties. The principle of separation of duties dictates that critical job processes be broken down into multiple functions. This way you have multiple individuals responsible for these different functions. It allows us to prevent fraud; it allows us to prevent one person carrying a critical job function from the beginning all the way to the end, where there would be no checks and balances. But if we have multiple individuals responsible in multiple roles then we can keep checks and balances. We should be aware of the collusion that could take place amongst individuals to defeat the principle of separation of duties. When collusion takes place, the people who should safeguard the checks and balances on each other might be working together to defeat the system. So we should put measures in place to watch out for collusion. Next is the principle of single sign-on:
within organizations you require passwords to log on. Some organizations require very long
password complexity rules in place for users to log on. So in the course of one day's work a user needs access to several other systems after logging on to their own system, each system requiring a new password. So the user logs on with one password, they need access to several other systems, and at the end of the day they need to know or learn different complex passwords, passwords with uppercase, lowercase, special characters and numbers. The users get frustrated to the point where they start to try to write down these passwords, and we know a best practice with passwords is that we do not write passwords down. The administrators can step in; your administrators could configure single sign-on on your servers such that when your users log on to the system they only require one log on. The passwords are known to the system, and this way subsequent log ons to other machines on the network require no more passwords. So the burden of having to learn multiple passwords is taken off the shoulders of your users. However we should be careful; there is a downside to this: a malicious person knowing the password for one of your users now has access to all the locations that user has access to. So some other measures should be put in place to ensure that if we put single sign-on in place users' passwords could not
easily be compromised. Next we talk about job rotation; this principle helps ensure availability. So if one person is absent from work, maybe they decide not to show up anymore, they are caught up in traffic or some other accident occurs, another individual should be available to do their work. So periodically over time we should ensure that staff are rotated amongst multiple job functions. Some organizations refer to this as cross training. So person A learns person B's job, person B will learn person C's job, and person C will learn person D's job; in the absence of one person another person is available to step into their shoes. So there is no vacuum created by the absence of one individual. Next we talk about mandatory vacations. Mandatory vacations require that our staff periodically go on
vacation, this is to help prevent fraud. So it allows us the opportunity to audit the job
functions of a certain individual: what exactly are they doing? As we often see, individuals that have long resisted going on vacation may be carrying out some malicious activities within the network. So they want to be the first person there every day, they never miss a
day they are also the last person to leave. So if they go on vacation it allows us time to audit
their work, it allows us opportunity for somebody else to step in their shoes and carry out
their functions. So if they’ve been carrying out some malicious activities on the network this
could be discovered and disclosed in the time they’re away. Next we talk about time of day
restrictions. It is possible to limit access to network resources based on a certain day of the week and time of the day. It could be that we want to limit access to network printers, systems, or the facility, so by putting in proper controls, which can be done at the system locally or from the server, we could limit access to certain days of the week and certain times of the day. We call this time of day restrictions. You don't want your users coming in at times they're not required to come in and then claiming overtime, so you could put in time of day restrictions. You don't want certain individuals printing to very sensitive printers unless there are multiple people in the room; you could use time of day restrictions. So we could use time of day restrictions to limit access to printers, workstations, or certain locations in the facility.
Then we talk about implicit deny. The principle of implicit deny requires us to consider everything suspicious. We deny everything until it is explicitly allowed, so if something is not explicitly allowed into the facility, or someone is not explicitly allowed onto the network, the principle of implicit deny dictates that we deny access. So it is not sufficient that they are not on the "do not allow" list; they must also be on the allow list, otherwise no access. Access control lists: these are lists that contain the matrices for what our users can do on the system when they log on and what they can do on the network when they log on. These are lists that we generate to dictate the permissions and authorization of all our users. Our users would want to have all access, but we generate the access control list to limit their access based on their job description. That way we have access control; we're controlling access to our resources, we're controlling access to the facility, so every time you try to access a facility or a resource and you are prompted to authenticate, it means we have access control in place. You are prompted via a lock, you are prompted by a log on dialogue box, or you are prompted by a guard to identify yourself, which means we have access control in place. Best practice should always be followed to ensure access control; access control could also be ensured using solutions like mantraps. A mantrap is a system of two doors where you must identify yourself; there you go into the trap, the door behind will shut before the next one will open, and this way the system ensures that only you are going into
the facility and you are properly identified. Next we have several models of access control.
We have mandatory access control; this is based on security levels. We should do proper data classification, and when we talk about that classification we do classification in terms of secret, top secret, classified, and sensitive but unclassified. We do classification for our personnel, and we also do classification for our data. So using the access matrices the system can enforce the access; the system can control who has access to what. So a subject tries to access the object; the object is the entity being accessed, and examples could be files or resources on the network. If the subject tries to access the object the system will check the security levels to determine if access should be granted or denied. So the access is based on the security labels and it is enforced by the system in mandatory access
control. For discretionary access control we talk about the owner of the resource: if a user decides to place a file on the network in a network share, the user can then dictate who has access to that file and what they can do with it. So access to the file and what they can do to that file is based on the discretion of the owner; the owner is the author or the person creating the document, putting it on the network and making it available for sharing. For rule
based access control, this is access control based on a certain set of rules. It could be access control through our firewalls, it could be access control to the facility; we provide the rules by which we can allow or prevent access. We usually do rule based access control for our firewalls: we set the rules by which the firewall would allow the traffic or block the traffic. We could also have rule based access controls that are put in place and enforced by our security guards. You don't want people coming into the facility with weapons; the security guards sight them or check them for weapons, and if they are found to have these weapons they're not allowed access. If they have no such weapons on them they are allowed access, so this is how we implement rule based access control. Now for role based access control, this is access control based on the role you play. What role do you play? If you are a basic user you might only need specific access to the database; your job description does not require you seeing everything. So using what we call database views we can block your access to certain fields within the database. But a higher level of staff might require full access to everything in the database; then we grant them access to everything they require to see as per their job description. And this is how we implement role based access control: the role you play determines the access you have.
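The database-view idea behind role based access control can be sketched in a few lines of Python. The roles, fields and sample record below are hypothetical; a real deployment would use actual database views or an authorization service.

```python
# Sketch of role based access control with field-level filtering:
# what a user sees depends on the role they play. Data is illustrative.
ROLE_VIEWS = {
    "hr_basic_user": {"name", "date_of_employment"},
    "hr_director": {"name", "date_of_employment", "date_of_birth", "salary"},
}

EMPLOYEE_RECORD = {
    "name": "J. Smith",
    "date_of_employment": "2015-03-02",
    "date_of_birth": "1988-07-14",
    "salary": 55000,
}

def view_record(role: str, record: dict) -> dict:
    """Return only the fields the caller's role is allowed to see."""
    allowed = ROLE_VIEWS.get(role, set())
    return {field: value for field, value in record.items() if field in allowed}

print(view_record("hr_basic_user", EMPLOYEE_RECORD))
# {'name': 'J. Smith', 'date_of_employment': '2015-03-02'}
print(view_record("hr_director", EMPLOYEE_RECORD))
# full record, because of the role they play
```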

Authentication Factors
For this lesson we now look at 5 Authentication Factors, define how they are deployed as a
security strategy, and discuss the pros and cons of each.
For example, you’ll learn why using the “Something You Know” concept may be good for
users when assigning and remembering their own PIN codes and passwords, but why it’s
also an obvious key for a hacker breaking into your computer or accessing a secure
document, account, or other resources.
You’ll learn the benefits of signature authorization for tracking and accessing individuals by
location and what insights that can provide when monitoring, for example, authorized
and unauthorized entry events.
Transcript
We’ll now be looking at authentication factors, factors that facilitate authentication we have
something you know, something you have, something you are, somewhere you are, and
something you do. Something you know could be a PIN, a password or a passphrase: a PIN, a personal identification number, or a password or a passphrase. When you're trying to log on to a system you're required to provide a PIN or a password or a passphrase; that is a something you know, and only you should know that. Something you have refers to tokens; we have different types of tokens. Sometimes you press the token and it generates a code, you key the code into the system, and then you are granted access if there is a perfect match. So it must be a device in your possession; it could be a token, it could be a USB, it could be a key, something you have. And something you are: we say this is biometrics. With something you are we're doing biometrics, and that involves using the physical attributes of a person to uniquely identify that individual. So you're using the physical attributes of human beings to uniquely identify the individuals. This could be a fingerprint, hand geometry, the pupil pattern, or the retina pattern at the back of the eye; those are just some examples of biometrics. Somewhere you are: where are you logging in from? It could be you're working from home, you are at the office, your office is on the third floor and you're trying to log on maybe at the management floor on the fifth floor; you might not have access. Somewhere you are dictates what systems you can log in from. You could have the access to log on from your desk but not at the front desk. Somewhere you are: where are
you in the facility? Where are you authorized to log on from? Something you do has to do
with signature dynamics or keyboard dynamics. With the signature the system will capture
how you provide your signature. So every time you provide your signature we use a special
pressure sensitive plate to capture your signature. Yes, somebody else could try faking your
signature but they couldn't exert the same pressure as you did. For keyboard dynamics we have something called the flight time and the dwell time. So we use specialized keyboards to capture your credentials. When you type your credentials during enrollment several times over, the keyboard will capture your flight time, that's how long it takes to travel between the keys, and your dwell time, how long you actually spend on the keys. Should somebody else happen to know your passwords, they couldn't produce a perfect match between your flight time and your dwell time, so these are the different authentication factors. We also have
what we call one factor authentication, two factor authentication or multi factor authentication. So if you're logging on to any system or at the facility and you're using any one of these authentication factors, it's one factor authentication. So you're logging in to your email and you provide only your password; that is single factor authentication. If you're logging on and you have to provide any two, that is two factor authentication; it's also multi factor authentication. If you have to use more than two, that is you're using three or more, it's multi factor authentication. Please be careful: if you use two from the same line, say two from one category, it's not two factor authentication. It has got to be one from one line and one from another line to make it two factor authentication. The two factors cannot be in the same line for two factor authentication. They would try that on the exam; watch out for that. So we have something you know, something you have, and something you are; these are the basic authentication factors. Then we have somewhere you are and something you do, and that is it for the authentication factors.
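As a rough illustration of keyboard dynamics, here is a small Python sketch that computes dwell and flight times from key press and release timestamps and compares them to an enrolled profile. The timestamps and tolerance are made up; production systems use statistical models rather than a fixed tolerance.

```python
# Sketch of keyboard dynamics: dwell time is how long a key is held,
# flight time is the gap between releasing one key and pressing the next.
# All timings below are invented for the example (milliseconds).

def dwell_and_flight(events):
    """events: list of (key, press_time, release_time) in milliseconds."""
    dwell = [release - press for _, press, release in events]
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return dwell, flight

def matches_profile(sample, enrolled, tolerance_ms=40):
    """Naive comparison: every timing must be within the tolerance."""
    s_dwell, s_flight = dwell_and_flight(sample)
    e_dwell, e_flight = dwell_and_flight(enrolled)
    pairs = zip(s_dwell + s_flight, e_dwell + e_flight)
    return all(abs(a - b) <= tolerance_ms for a, b in pairs)

enrolled = [("p", 0, 95), ("a", 180, 260), ("s", 340, 430), ("s", 500, 590)]
attempt  = [("p", 0, 100), ("a", 170, 255), ("s", 350, 445), ("s", 505, 580)]

print(matches_profile(attempt, enrolled))  # True: timings are close enough
```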

Implementing Appropriate Security Controls When Performing Account Management


In this lesson we discuss the strategies for establishing single vs. multiple entity accounts
when establishing administrative and user accounts to access and/or manage network
resources.
For example, we discuss why it’s an important best practice for network admins to have a separate
administrative account with only specific admin rights and a separate user account with restricted
user rights, and to have separate user accounts with and without admin
rights for users who have different roles within the organization to prevent abuse of power.
Transcript
The first item we look at is to mitigate issues associated with users with multiple accounts or
multiple roles. It is good practice that when you have user accounts on your systems, each account is for only one entity. If you must have multiple users performing the same type of role, you need to have a different account for each entity. Possibly you have 4
administrators, Administrator A, administrator B, administrator C and administrator D. These
4 individuals should log on to the system with different accounts. Yes they are 4
administrators but they should each have their independent account otherwise it becomes
an issue of who did what. Was it person A, B, C or D? There would be no room for accountability. Their accounts have to be different from each other for us to say it was administrator A who did it, or B, C or D. It's bad practice that users can just demand different
accounts periodically. The system will see these accounts as different persons. Unless a
user is performing multiple roles, they should only have one account within the system.
Where a user is performing multiple roles, the user could have 2 accounts provided one
account is only used for administrative purposes and the other account is used for basic
user roles. It’s bad practice to have an administrator periodically changing roles and still
using the same account. There could be abuse of power. The principle of least privilege
dictates that ‘the user should have only the permission required to do their work’, so it would
be inappropriate for an administrator to also have user roles and carry out those user roles
while logged on as the administrator. An administrator that is also performing user roles
should have one account for administrator roles and another account completely different
for the basic user roles. If you must have multiple accounts it is also good practice that the 2
accounts do not share the same password. On the administrator account you could have a
different set of passwords to what you have on the user account. We don’t use the same
password on multiple accounts even if they are used by the same user. Next, we must
perform account policy enforcement. All the rules that apply to the account must be the
same across the board for all the users. Rules for password complexity, and even the
identification nomenclature, meaning the user ID. Some organizations would do first initial,
last name. This has to be the same across the board for all the users. Administrators do not
change the nomenclature as they wish. This would create confusion and result in user
errors and possibly account management issues. So the account policies should be
enforced across the board for all the users. How the users can log on to the system whether
locally or on the domain. All of these have to be considered. Next we talk about password
complexity. When creating the policies for the users, if we implement password complexity it
means that the users must log on using passwords that are very complex. This involves the
use of uppercase, lowercase, special characters and some numbers. All of these have
to be in the password before the password meets the requirements of password
complexity. The use of password complexity makes it tougher for malicious persons to
easily crack our passwords. Usually we have 2 types of basic password attacks. Something
called brute force and another called dictionary attacks. In the brute force attack, a malicious person sits at the system trying to guess one character after the other, hoping to successfully guess the password. In the dictionary attack, the malicious persons will gather a dictionary of words, not necessarily the English dictionary, but it could be words about the user, maybe their hobby or their lifestyle, and these words are used in software to try to attack the password. But if we introduce complex passwords, the malicious persons are kept guessing. It's difficult to tell what characters appear first and
subsequent characters after that because you use uppercase, lowercase, special
characters and numbers and these are mixed up in no particular sequence to further
confuse the malicious persons. It is good practice that we expire passwords periodically.
Users should be encouraged to change their passwords. However, if you encourage your
users to change their passwords some might not change their passwords so best practice is
you implement it as a control. That way it can always be enforced across the board.
Everybody is given a countdown and their passwords are periodically expired. This way they are forced to change the password. You expire the passwords after a period of 30 to 60 days or 90 days based on what the policy dictates. Where your users are also going on vacation, best practice is that you expire their passwords. When you expire the password of an individual going on vacation it means they themselves, and other persons who might know the password, cannot use the password. This way we are able to guarantee that only the user we expect knows the password. For password recovery, periodically our users might forget their passwords. It happens most when users go on vacation; when they return: “Oh I
have forgotten my password. I cannot log in”. There are many strategies by which we could
carry out password recovery. In some cases these users could have a link on the site such
that if they forget their passwords they just make a request and this request is serviced via
email. A link is then provided to the user via email with which they are able to recover the
passwords. The users could also call the help desk, and the administrators can possibly, after identifying the users properly, authenticate the users and perform a password reset, after which the user is prompted to change the password at first log on. This way it ensures that the administrators no longer know the passwords with which the user is accessing the systems.
It’s bad practice that a system’s admin can do password recovery for a user by just looking
in the server. For these purposes, passwords should not be saved with reversible
encryption. If you save your passwords with reversible encryption anybody having access to
the server could reverse the process and thereby decipher what the password is. A malicious
person can also do this. Passwords should be saved using hashing which is a one-way
process. If we provide the data it should be hashed and saved on the servers. This way
nobody else can reverse that process to decipher what the password is. Best practice is
also that the length of our passwords should be at least a minimum of about 8 characters.
Nothing less than that. Passwords of fewer than eight characters are very easy to decipher, very easy to crack and use to gain unauthorized access into our systems. We should do at least a minimum of 8 characters. Passwords could be as many as 127 characters, but as humans we find it
difficult to retain long sets of passwords in memory. A minimum of 8 characters is best
practice over here. It’s also good practice to disable passwords that are no longer in use.
You could disable passwords for accounts. In some cases you might even disable the use
of a password if the account is a non-sensitive account. For a non-sensitive account where you might allow guest users to come and use the system, you could disable the use of a password, whereby no password is required to access the system. This is only for non-sensitive
accounts that have non-sensitive privileges whereby the users could not abuse the system.
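The point about one-way hashing rather than reversible encryption can be illustrated with a short sketch using Python's standard library. The iteration count and sample passwords are arbitrary choices for the example.

```python
# Sketch of storing passwords with a one-way hash plus a random salt,
# instead of reversible encryption. Parameters are illustrative.
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); only these are stored, never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
    """Re-hash the attempt with the same salt and compare; the stored value
    cannot be reversed back into the original password."""
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(attempt, stored_digest)

salt, digest = hash_password("Corr3ct-H0rse!")
print(verify_password("Corr3ct-H0rse!", salt, digest))  # True
print(verify_password("wrong-guess", salt, digest))     # False
```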
We have account lock out. After several failed attempts, the policies dictate that the account
should be locked out. Where a user has failed to log on by not providing the correct credentials, the system should lock the account out: no more access to the system. The number of failed attempts that triggers the lockout is what we call the lockout threshold. Best practice is that we set the lockout threshold to between 3 and 5. Anything lower than that could lock out the real user even if they were just to make a typo, and anything above that, if you make it 7 to 9 or something like that, gives the malicious persons enough opportunities to
guess the passwords right. We want to set it between 3 and 5. How long should the user
stay locked out for if they provided the wrong credentials? This is what we call the lock out
duration. The duration could be 20 to 30 minutes or 40 minutes; it could be any number of
minutes as dictated by the policy. This is set on the servers and implemented through
account policies that are passed through group policy objects. Next we have user assigned
privileges. What our users could do when they log on. These are assigned through
privileges given to the user accounts. The privileges dictate what the users can do on the systems: whether they can read, write, modify or have access to certain directories, as dictated by the privileges assigned to the users. However, if we sometimes have to issue permissions to
multiple users, best practice is that we create a group. This way we assign the permissions
only once, and for multiple users needing access to such permissions, we make them members
of that group. Once the user has become a member of the group, they inherit the
permissions that govern the group, and if we no longer need them to have the membership we simply remove their membership of the group and automatically they lose the privileges that are assigned. This is a very efficient way to manage privileges. It ensures that rather than having to issue 3000 persons the same types of privileges, and having to do it over 3000 times, we simply do it once by assigning the privilege to one group and making the 3000 users members of that group. They automatically inherit the
permissions that govern that group as they become a member.
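Here is a minimal sketch of that group-based approach: the permissions are assigned to the group once and users inherit them through membership. Group names, permissions and user IDs are invented for the example.

```python
# Sketch of group-based privileges: assign to the group once,
# users inherit through membership. All data is illustrative.
GROUP_PERMISSIONS = {
    "sales_staff": {"read:crm", "print:floor2_printer"},
    "sales_managers": {"read:crm", "write:crm", "read:reports"},
}

group_members = {
    "sales_staff": {"user_%04d" % i for i in range(3000)},  # 3000 users, one assignment
    "sales_managers": {"user_0001"},
}

def effective_permissions(user: str) -> set[str]:
    """Union of the permissions of every group the user belongs to."""
    perms = set()
    for group, members in group_members.items():
        if user in members:
            perms |= GROUP_PERMISSIONS[group]
    return perms

print(effective_permissions("user_0001"))    # inherits from both groups
group_members["sales_managers"].discard("user_0001")
print(effective_permissions("user_0001"))    # manager permissions removed instantly
```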

Best Practices for Account Management


As a continuation of our previous lesson addressing security controls for account
management, we now look at best practices for Account Management in general.
We begin with special situations such as multiple accounts, multiple roles, and shared accounts
and how rights and permissions can be granted for each type of account permitting different
types and levels of access and permission to perform certain actions within a level of
access. You’ll also learn when it may be necessary to have a shared account and how it
should be managed.
We’ll discuss Account Policy enforcement and what that entails including why it’s necessary
to have strict password generation, expiration, reuse, and recovery policies.
Transcript
Today we will be looking at best practices for account management. We have best practices that should be followed: we need to mitigate issues associated with users with multiple accounts and multiple roles, and we also need to look at shared accounts. For multiple accounts,
best practice within any enterprise is that we don’t have users just coming up with multiple
accounts every day. Unless there are clear situations where a user needs to have multiple roles, only then should they have multiple accounts. Sometimes you have a person conducting administrative duties and also possibly non-administrative duties; they could have multiple roles so they might need multiple accounts. This also helps to support the principle of least privilege: if you have multiple accounts, the privileges in each account are different.
We should also be concerned about shared accounts. Sometimes in an enterprise it could
be that you have accounts for consultants. These people come and go periodically. These accounts have to be properly monitored to see what sort of privileges they have; shared accounts would allow multiple users to use an account over time. We have to ensure there's proper documentation of who has access to what account for a certain period of time, even if it's a shared account. That's the way we can keep watch over accountability, to know who logged on to the system, who did what and when. For multiple accounts it is bad practice that a user has an account to use in January and maybe another account to use in February, and at the end of the day, all year long, they have multiple accounts within the system. Only under very clear situations should users have multiple accounts, possibly where they're carrying out multiple roles or job functions. We will now discuss
some account policy enforcement. One of the topics we look at under this is credential management. In some organizations or some environments, users usually have more than one password or user ID. It usually becomes a burden trying to remember all your user IDs and the corresponding passwords. So we have solutions in place to store the user credentials within the local system, and these will then be encrypted using a password. So this is how credentials could be managed so that users are not writing down their IDs and passwords on papers that they leave around the desk. Using the solution on the local system, the user can store their user IDs and their passwords and also encrypt that with a password as well to protect the credential management solution. We also have group policy. Group policies can be used to enforce password properties like length, complexity and age. You can use them to enforce account lockout policies, threshold and duration, across the enterprise. So when you implement a group policy, this can be used in Microsoft environments or other computing environments to enforce policies like these: password length enforcement, auditing account events, assigning specific rights, and control over permissions across the enterprise for all our users; group policies are used to achieve this. We also look at password complexity. It is good practice to have
password complexity; what this means is we have passwords that have uppercase, lowercase, special characters and numbers. This gives complexity to the password; it makes it difficult for someone trying to guess your password. So you would have uppercase, lowercase, special characters and numbers all mixed around in your password to defeat the malicious person trying to compromise your password. Another best practice with passwords is that we carry out password expirations: passwords should be made to expire after thirty to sixty days. No user should keep the same password all year around; periodically we make the passwords expire. The best way to achieve this is to automate it, such that the user's account counts down every day until they change
their passwords. Well periodically in any enterprise you would notice that users forget their
passwords, users lose their passwords; when users go on vacation one other thing they forget is their password. So we should have clear cut measures by which we do password recovery. Some organizations offload the burden of password recovery to the users themselves. Probably the user would log on to the server and would click on a button that says forgotten password. A link is then sent to their email where they could reset their passwords. It is bad practice that we use reversible encryption to store passwords in the server, because that way anybody could go into the server to recover the passwords and that defeats the entire purpose of having the password. We also could disable passwords: a staff member has been terminated, a staff member is on probation, we could disable the password. The essence of disabling the password is such that nobody has the use of that account; the user themselves or any other persons that have previous knowledge of the account cannot use the account anymore. A staff member has left the organization, a staff member died, a staff member is
terminated voluntarily or involuntarily you disable the password. Another best practice we
follow with account policy enforcement is something called the lockout policy. What this
policy dictates is that after several failed attempts the user account would lock out. So if a malicious person is trying to gain unauthorized access to a system, after several failed attempts at putting in the right password the account should lock out. How many failed attempts should trigger the lockout? We call that the lockout threshold. The lockout threshold is the number of failed attempts that trigger a lockout. Many organizations will set this between three and five. We set that between three and five because if you make it too low the actual owners could make a mistake and you're locking too many people out of their accounts, and if you make it too high you could give enough opportunities for a malicious person to gain knowledge of the password. So between three and five you set the lockout threshold; if the number of failed attempts gets to that, the account is locked out for a certain duration of time. The duration for which the account is locked out is called the lockout duration; this could be determined by the enterprise and enforced by the administrators. We could have a lockout duration of thirty minutes or forty five minutes; that is
the total duration of time for which the account is locked out. Another best practice with
passwords and account policy enforcement is password history. If we implement password
history, we dictate how many new passwords a user must use before they can reuse a previously used password. So if we say our password history is four, the user must have four new passwords before they can reuse a password they've used already. So if I had a password A, another password B, another password C, another password D, until I've used one, two, three, four, and say I now have a password E, only until I've used these could I repeat A or B or C or D. So if a user has not used four new passwords they cannot reuse a previously used password; this is to make users create randomness in their use of their passwords. Password reuse: the policy could dictate that a user cannot reuse the same password in the same year. So if you have one password you've used already, possibly for the duration of that year you could not reuse that password again; this is also to deter anyone knowing your passwords from being able to compromise your account. Password length is a very important topic; best practice dictates that the password length should be at least a minimum of six to eight characters. If you make the password too short it's very easy to compromise, very easy to crack, but if you make it a minimum of six to eight characters you are increasing the difficulty with which the password could be compromised. If you then add to that password complexity, where you're throwing in numbers, uppercase, lowercase and special characters, it becomes very, very difficult to compromise your passwords.
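The password rules discussed above (minimum length, complexity, and a history of four) can be expressed as a short validation sketch. The thresholds below are policy choices taken from the discussion, not fixed requirements, and a real system would compare hashed values rather than plain text.

```python
# Sketch of password policy enforcement: length, complexity, and history.
import re

MIN_LENGTH = 8
HISTORY_DEPTH = 4

def meets_complexity(password: str) -> bool:
    """Require minimum length plus uppercase, lowercase, a digit, and a special character."""
    return (
        len(password) >= MIN_LENGTH
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[0-9]", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
    )

def allowed_by_history(password: str, previous: list[str]) -> bool:
    """Reject a password reused within the last HISTORY_DEPTH passwords."""
    return password not in previous[-HISTORY_DEPTH:]

history = ["Winter#2023a", "Spring#2023b", "Summer#2023c", "Autumn#2023d"]
candidate = "Winter#2023a"

print(meets_complexity(candidate))             # True: complex enough
print(allowed_by_history(candidate, history))  # False: still within the last four
```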
Another important account policy enforcement to look at is group based privileges and user
assigned privileges. All users on the network having access to resources should be
assigned privileges, and these privileges could include permissions like permission to read or modify; this should be dictated across the enterprise for every user. However, if you're having to manage large groups of users it's much easier if you create groups, such that you create a group, assign the privileges to the group, and if you have multiple users you simply add them as members to the group. This way you assign the permissions once and you add as many members as should join the group; the moment they become a member of the group they inherit the permissions that govern the group, and if you don't need them to have the permissions anymore all you do is remove their membership of the group. This way they lose the permissions from that group. So permissions could be user assigned privileges, which are individual assignments, and you could also have group based privileges. With all
these accounts and privileges being issued we need to do periodic user access reviews.
We need to review which users have access to the system and what level of access they have. Sometimes we add on permissions periodically so that users could achieve certain objectives. Maybe a user has gone on vacation and another user has to relieve them; we give permissions, but are we removing these permissions? So periodically we need to do user access reviews to see who has what permission and why. Otherwise we could end up in a situation where users have excessive permissions, and this defeats the principle of least privilege. Users should only have the permissions they need to do their work, no more, no less. So it is good practice that periodically throughout the year we do access reviews: who has had access? Why do they have access? And we review the permissions for all our users. Why does a user still have access to a certain resource? We need to know. It could be that they had access to a resource in January for a project, but we're talking this is September and they still have access to the same resource; they are no longer working on the project. So those accesses should be revoked. We also should do continuous monitoring; continuous monitoring of user access should be an ongoing process for account management. We need to do reviews of account access, probably for escalation of rights, monitor new user accounts, monitor group memberships, monitor permissions assigned to groups, and monitor failed log ons; we continuously monitor these all year long. We could have everything okay one month, but by the next month or by the next week, issues could arise. So we do continuous monitoring: continuous monitoring of all our accounts, continuous monitoring of group memberships, and monitoring of privileges. These are best practices we need to follow for account policy enforcement.

General Cryptography Concepts


We begin this lesson by looking at Symmetric Key Cryptography, demonstrate how it works, and discuss the importance of encryption key management.
We then contrast it with Asymmetric Key Cryptography, explaining public/private key encryption and demonstrating how to ensure authenticity and confidentiality. So if you want a complete understanding of how Cryptography works or what the Asymmetric encryption life cycle is, you don’t want to miss this lesson!
Transcript
We start off by looking at symmetric key cryptography. With symmetric key encryption, only one key is used. The key that is used to encrypt the message is the same key used to decrypt the message. Some books will refer to symmetric key as “same key encryption” because only one key is used. When users encrypt a message, they must share the key with the recipient of the message, otherwise the recipient cannot decrypt the message. Now key management is a very important issue. How do you share the keys such that they do not fall into the hands of malicious persons? We have to
be very careful to share the keys in an “out of band” method, meaning if you send a
message via email, you don’t send the key via email as well. You could walk over to the
user’s desk and provide them the key or you send the key via another method of
communication. With symmetric key encryption, only one key is involved. We also call this a
“Shared Key encryption” because we have to share the keys with the recipients of the
message. Another type of encryption is Asymmetric key encryption. With Asymmetric key
encryption, 2 keys are involved. We have 2 keys. One is your public key and the other is a
private key. These 2 keys are completely different. However, they are mathematically
related. If the public key is used to encrypt the message, the corresponding private key,
only that corresponding private key can decrypt the message. There are some rules we
need to know here. The public key could be advertised to anybody who cares to know. The
public key could be advertised in a directory or given to anybody who cares to have it.
However, the private key should only be known by the individual who owns the account, the one to whom the public key belongs. Now, there are certain times we would encrypt messages
with the public key and there are certain times we would encrypt messages with the private
key. So if we need to ensure confidentiality, we would encrypt the messages with the public
key of the recipient. That way, only the recipient has the corresponding private key for that.
Only the recipients can open that message. Confidentiality is assured. If we need to ensure
authenticity, the sender of the message could encrypt with their own private key. Anybody
receiving it having access to their public key can decrypt the message. This is only to prove
the message came from the sender. It guarantees authenticity but does not guarantee
confidentiality because anybody could have the public key. The use of these two keys is
what we call asymmetric keys. The keys are completely different but mathematically related.
All right, let us consider this scenario: Dan needs to send Adam a message. To ensure
confidentiality, how should Dan encrypt the message? Confidentiality dictates that
unauthorized disclosures are not allowed. So we put the control in place. The control here is
that encryption has been put in place. We want to ensure that only Adam can decrypt the
message. So should we use Dan’s public key? If we use Dan’s public key, Adam does not
have the corresponding private key so this is not an option. Using Dan’s public key to
encrypt the message is not an option, Adam does not have the corresponding private key.
Should we use Adam’s private key? Dan does not know Adam’s private key. Only Adam
knows his private key. This is not an option. Should we use Dan’s private key? If we use
Dan’s private key, anybody with Dan’s public key can decrypt the message. So that does
not guarantee confidentiality. It only guarantees authenticity. If we use Dan’s private key,
anybody in possession of Dan’s public key can decrypt the message. C only provides
authenticity and not confidentiality. To assure confidentiality, we would encrypt the message
using Adam’s public key such that only Adam has the corresponding private key and only
Adam can decrypt the message. This assures confidentiality and that would be the proper
response to a question like this. Selecting C is only to guarantee authenticity. You use the
sender’s private key and if you use the receiver’s public key, only the receiver has the
corresponding private key to decrypt the message.
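To make the two models concrete, here is a minimal sketch assuming the third-party Python "cryptography" package (pip install cryptography); the key sizes, messages and variable names are illustrative only, not a prescribed implementation.

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Symmetric ("shared key") encryption: the same key encrypts and decrypts.
shared_key = Fernet.generate_key()            # must reach the recipient out of band
f = Fernet(shared_key)
token = f.encrypt(b"payroll figures")
assert f.decrypt(token) == b"payroll figures"

# Asymmetric encryption for confidentiality: encrypt with the RECIPIENT's public key.
adam_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
adam_public = adam_private.public_key()       # may be published to anyone

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

message = b"for Adam's eyes only"
ciphertext = adam_public.encrypt(message, oaep)            # Dan uses Adam's public key
assert adam_private.decrypt(ciphertext, oaep) == message   # only Adam's private key opens it
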
Block Cyphers & Stream Cyphers
Now that we’ve explained and demonstrated the basics of cryptography, let’s talk about
cyphers.  This lesson discusses two types of cyphers, Block Cyphers and Stream Cyphers
and then nicely demonstrates the differences in how they work.
You’ll observe through a very detailed demonstration how a block cypher process works
using a basic sentence.  Using the same sentence, you’ll learn how a stream cypher
process works and you’ll learn an example of a Stream Cypher which you must know for the
Security+ exam.  Good Stuff, don’t miss it.
Transcript
Next we look at the fundamental differences in encryption methods: we look at block cyphers and compare them to stream cyphers. A block cypher will take the information to be processed and process it in specific block sizes. You would be told that the block cypher works in, say, 64-bit blocks or 128-bit blocks. Whatever the block size is, the block cypher will proceed in this fashion. Let’s assume we are using a block size of 4. It would take 4 characters at a time. So we take 1, 2, 3, 4; that is one block. Encrypt that, then take the next block, another 4 characters: 1, 2, 3, 4, and so on until everything in the message is exhausted. This is how block cyphers work. Stream
cyphers on the other hand will take information to be encrypted individually or in bits. So a
stream cypher will work in this fashion. Take that, encrypt it one at a time, individually or in
bits we say. So stream cyphers work this way, encrypting one character at a time until the
entire message is encrypted. For this syllabus, the only stream cypher we have to discuss is RC4. This will be asked on the exam: give an example of a stream cypher. The answer
to that will be RC4. RC4 is the only stream cypher we have for the syllabus and block
cyphers; they process information by encrypting them in specific block sizes.
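The following toy sketch (it is not real cryptography) only illustrates the processing difference described above: a block cypher handles fixed-size chunks of the message, while a stream cypher transforms it one unit at a time against a keystream.

def toy_block_process(message: bytes, block_size: int = 4) -> list[bytes]:
    # Split the message into fixed-size blocks, padding the last one, as a block cypher would.
    padded = message + b"\x00" * (-len(message) % block_size)
    return [padded[i:i + block_size] for i in range(0, len(padded), block_size)]

def toy_stream_process(message: bytes, keystream: bytes) -> bytes:
    # XOR each byte with a keystream byte, one at a time, as a stream cypher would.
    return bytes(m ^ k for m, k in zip(message, keystream))

print(toy_block_process(b"ATTACK AT DAWN"))   # [b'ATTA', b'CK A', b'T DA', b'WN\x00\x00']
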

Transport Encryption
Next we look at Transport Encryption, or the process of securing data as it moves across the
network.
This lesson diagrams a general VPN network and shows the interrelationships of nodes and
users to demonstrate how the Transport Encryption process works and where that
encryption takes place and why.
Transcript
The next topic we look at is Transport encryption. Transport encryption guarantees
confidentiality for data in motion. While we are sending data across the internet, we need to
ensure confidentiality for this data as it moves across the internet. Periodically we could
have remote users working from home or third party or other company’s locations and they
have to communicate with their internal networks. These communications are being sent
across the internet. As we know many people are connected to the internet today. Good
guys, bad guys, everybody has access to the internet. So what we want to do to ensure
confidentiality is to use solutions like a VPN or SSL (Secure Sockets Layer); this guarantees end-to-end security between the remote users and internal network servers. When we use a VPN, a Virtual Private Network, this allows organizations to create a tunnel through the internet, a virtual network, so that the information stays confidential from all the other people on the internet. That way, the information is sent via the VPN to the internal networks. Usually on such architectures we could also have VPN concentrators. Because certain devices in the network cannot process encrypted traffic, all traffic is decrypted as it comes into the network, and the VPN concentrators also ensure that all traffic going out over the VPN is encrypted. That way,
confidentiality is assured as communications move across the internet. So it is best practice
to encrypt our traffic, otherwise people on the internet could eavesdrop on our
communications and thereby have unauthorized access to the content of such
communications. Using a VPN is best practice in this strategy.
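As a small illustration of transport encryption in practice, the sketch below wraps an ordinary TCP socket in TLS using Python's standard ssl module; the host name example.com is only a placeholder, and a real deployment would of course differ.

import socket, ssl

context = ssl.create_default_context()           # verifies the server certificate by default

with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
        print(tls_sock.version())                 # e.g. 'TLSv1.3'
        tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(tls_sock.recv(200))                 # the bytes travel encrypted on the wire
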

Hashing and Steganography


This lesson talks about Hashing and Steganography. Hashing is a mathematical integrity
checking process.  You’ll learn how it’s configured, observe what it reveals and how it
determines the integrity of an object and how it relates to the key encryption process.
Steganography is the process of hiding information.  In the second half of our lesson, you’ll
learn how Steganography works, what tools you can use to hide and reveal data, images,
within other objects as part of the Steganographic process and why it’s challenging to detect
if you are unaware of Steganography technology.
We close out this lesson with a discussion and examples of proven encryption technologies
and what specifics you’ll need to know for the Security+ Exam.
Transcript
The next topic we consider is hashing. A cryptographic hash function takes an arbitrary block of data and produces a fixed-length value which we call the hash value. This can be used to check for integrity because we take input of arbitrary length and we get a fixed-length value, the hash value. Any time there is a modification to the data it will reflect in the hash value; the hash value should change. When we
do hashing we can inform the recipient about the hash value so that before they process the
message they test the message as well. If the hash value is different from what is received,
if the hash value advertised is different from what they get when they receive the message,
they know not to trust the message. If the hash value they get on testing the message is the
same as what was disclosed to them, they know to trust the message. Sometimes vendors of patches also advertise their patches on the internet. The patches are released on the internet and the hash value for each patch is also advertised next to it. As the systems administrator we download the patch. Before we use the patch we would run it through software that does hashing to see the hash value of the patch we’ve just downloaded. The hash value should correspond to the hash value advertised on the internet. If it does, then you know to trust the patch you’ve just downloaded, but if the hash value is different you know that something has probably happened to the download and you know not to trust that download.
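A minimal sketch of that verification step, using Python's standard hashlib module; the file name and the advertised hash value here are made up for illustration.

import hashlib

advertised = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"  # from the vendor's site

with open("patch-1.2.3.bin", "rb") as f:
    actual = hashlib.sha256(f.read()).hexdigest()

if actual == advertised:
    print("Hash matches - trust the download.")
else:
    print("Hash mismatch - do not trust the download.")
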
Let’s give an example. If someone were to change the message, even to alter a single word, it should not give us the same hash value. If we get a different hash value then we say the hashing algorithm is collision resistant: we have changed the message, we should get a different hash value, and if we do, we say the hashing algorithm is collision resistant. However, if we change the message but can still force the hashing algorithm to give us the same hash value as before, we say the hashing algorithm is not collision resistant. A hashing algorithm that is not collision resistant cannot guarantee integrity. There has been a modification to the message, yet you can force it to give you the same hash value as the original message; that hashing algorithm is not collision resistant. This could cause confusion for recipients, trusting a
message that has been compromised. Hashing is also important when we do forensic
analysis. We capture the evidence from the crime scene, and because best practice is to work on the original as little as possible, what we want to do is hash the original. You capture an image of the original. Next you take the hash
again of the image. Here we hashed the original. We get a certain hash value. Say we get a
hash value AB754. You want to capture an image of the original, then you hash the image
of the original again and you should get a corresponding value to the original. This way, you
have the assurance that the image is an exact copy as the original. Then you know to trust
the image and you work on the image. Where you don’t get an exact duplicate of that you
know that you don’t have an exact image of the original. Best practice is that we hash the
original, capture the image of the original then hash the image to give assurance that we
are working on an exact copy as the original. Hashing plays a very important role when we
do forensic analysis as well. Earlier on we discussed the use of asymmetric key encryption,
where we have the public key and private key. Organizations usually have what is called a
key escrow. The key escrow is the trusted 3rd party entity with which we safeguard copies
of our keys for a time of need. It could be that within the organization you have an entity
working with the keys, the private and public key. Maybe that person won’t show up
anymore, maybe they’ll forget the keys or misplace the keys. We have to send someone to
the key escrow to go retrieve these keys so that we still have access to the keys. The key escrow is very important for organizations that use asymmetric key encryption. The key
escrow is a trusted 3rd party entity with which copies of the keys are kept for a time of need.
The next item we talk about is steganography. In steganography we hide the existence of information. Using software, we can hide information within other information. We use what is called the least significant bit. Usually this is done with software. The files that we
are trying to hide are hidden in a carrier file. Using the least significant bit of the carrier file
we are able to hide the carried files which we don’t want other people having access to. By
using the least significant bit, it is possible that we do not distort the carrier files. If we look
at the scenario you have, we call this the carrier file. This is what you allow people to see, so
using the least significant bit of the carrier file we are able to hide our carried files within
there. You could have several files. These could be picture files, audio files, or text files, hidden in this message. These files would be your carried files. Several types of
software exist to do this. The software would ask for your carrier file, using the least
significant bit of the carrier file, it will break down the carried files and you could even use
multiple carrier files. When people have access to your drives, your thumb drives or other
files that is what they see. It could just be casual pictures of someone on vacation, pictures
of me in France for example. Meanwhile, I have other files hidden within that. Users or
malicious persons could employ this technology. The essence of this is to ensure
confidentiality, only people who know about the technology or even have the password, can
access the files provided they key in the right password when they extract these files using
software that can extract it. This is steganography. It is possible to hide audio files in audio
files, picture files in audio files, text files in picture files and even picture files within picture
files can be hidden. It is a very inconspicuous way of moving messages; unless people are aware of the technology, they cannot detect that files are hidden in other files.
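Here is a toy least-significant-bit sketch (far simpler than real steganography tools): one secret byte is spread across the low bits of eight carrier bytes and then recovered. The "pixel" values are invented for the example.

def hide_byte(carrier: bytes, secret: int) -> bytearray:
    out = bytearray(carrier)
    for i in range(8):                        # one secret bit per carrier byte
        bit = (secret >> (7 - i)) & 1
        out[i] = (out[i] & 0xFE) | bit        # overwrite only the least significant bit
    return out

def reveal_byte(carrier: bytes) -> int:
    secret = 0
    for i in range(8):
        secret = (secret << 1) | (carrier[i] & 1)
    return secret

pixels = bytes([0x10, 0x21, 0x32, 0x43, 0x54, 0x65, 0x76, 0x87])   # pretend image bytes
stego = hide_byte(pixels, ord("A"))
assert reveal_byte(stego) == ord("A")         # the carrier bytes barely change
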
The use of proven technologies gives the highest level of assurance that whatever we are trying to encrypt will be very robust: the encryption methods will be robust and cannot be easily decrypted by anyone having unauthorized access. It gives assurance that we have positive results every time we do our encryption. We don’t want to use software that is not tested. We also don’t want to use software that is not trusted, or software that has not been proven to do what it says it does. The use of proven technologies gives that assurance that yes, our information is encrypted and
we have confidence in the encryption. Elliptic curve cryptography involves the use of
mathematics and curves to do the cryptography we have to carry out. It is an approach to public key cryptography that is based on the algebraic structure of elliptic curves over finite fields. Elliptic curve cryptography operates on a smaller set of keys compared to other encryption methods, and it does this for efficient performance. It also offers considerably greater security for a given key size and compact implementations at comparable levels of security. The elliptic curves are calculated and drawn to do the encryption, and they operate on a smaller set of keys. It is very difficult for the exam to discuss elliptic curves without exposing the fact that curves are being used: the mention of mathematics tells you it’s elliptic curve; the mention of curves tells you it’s elliptic curve. The key phrase to look out for on the exam is that elliptic curves use a smaller set of
keys. Next we talk about quantum cryptography. For quantum cryptography we are using
the physical properties of light. Photons of light are used in a fiber network environment to securely exchange messages between users. The photon measurements
are conducted to see if there are any deviations in the measurements, we can then tell that
maybe there has been possible eavesdropping on the traffic moving between a sender and
a recipient. This concludes Section 6.1 of the security plus syllabus.

WEP, WPA, and WPA2


Now we’re ready to explore and discuss wireless technology encryption. WEP, WPA and
WPA2 reflect the evolution of wireless encryption.  We’ll explain the foundation for each
encryption type, their pros and cons and how they were compromised, and what strategies
we currently employ in the wireless environment.
We explain the Access Point process using WPA2 and how encryption in the wireless
environment works when granting user access to the resource behind the access point.
Transcript
We talk about WEP, WPA and WPA2. Starting off, WEP was the early type of encryption mechanism used to secure wireless traffic. It was quickly discovered that WEP relies on RC4, and RC4 is victim to what we call the initialization vector attack. The problem with RC4 is that it employs keys with a small key space, so in many cases keys are easily repeated. This makes it very easy for malicious persons to compromise WEP. Because the keys are constantly being repeated, by reverse engineering malicious persons can attack RC4 and subsequently
compromise WEP. We moved away from WEP to WPA. WPA employs TKIP- the Temporal
key integrity protocol. This brings some solutions to WPA but within a very short time, some
malicious persons were able to compromise WPA. So we moved to WPA2. WPA2 depends
on CCMP. To date it is the strongest encryption mechanism. Organizations and
individuals are advised to check their access points at home or at work and ensure they are
on the strongest form of encryption which is WPA2. WPA2 employs CCMP. With our access
point we have something called PSK, where this stands for the Pre-shared Key. The Pre-
shared Key; the key on the access point is also the key that is shared with other users
seeking to gain access to the access point. So the same key has to be used. In some other
environments they use something called WPA2 enterprise. With the enterprise, the user
attempting to connect would provide some credentials; maybe a user ID and a password.
That is referred to as the enterprise method. We have PSK which is the Pre-shared Key and
enterprise. With PSK-Pre-shared Key, the key on the device is the same key that would be
put into the system attempting to connect and for the enterprise method, the user has to
provide their credentials and also a password with which they are able to log in.
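As an aside on how the pre-shared key actually becomes key material, the sketch below shows the standard WPA/WPA2-PSK derivation (PBKDF2 with HMAC-SHA1, salted with the SSID, 4096 iterations) using Python's standard hashlib module; the SSID and passphrase are made up.

import hashlib

ssid = b"HomeNetwork"                      # illustrative SSID
passphrase = b"correct horse battery staple"

pmk = hashlib.pbkdf2_hmac("sha1", passphrase, ssid, 4096, dklen=32)
print(pmk.hex())   # the same 256-bit value must be derived on the access point and the client
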
MD5, SHA, and AES
For this lesson we look at and discuss three algorithm encryption methods and standards: 
message digest algorithm (MD5), secure hashing algorithm (SHA) and the advanced
encryption standard (AES).
You’ll learn what functions MD5 and SHA achieve, how the integrity of the network can be
compromised, what vulnerabilities these encryption methods have, what’s meant by collision resistant, and which encryption method is the most effective countermeasure that can currently be used as an access control strategy.
Transcript
The next item we look at is the MD5, otherwise known as Message Digest Algorithm 5. This
is a widely used hashing function that produces a 128 bit hash expressed as a 32 character
hexadecimal product. It is commonly used to check the validity of critical system files. It also
is used to obscure our passwords. When we are typing in our passwords on the system, somebody might be stood next to you, and you don’t want them seeing your password. So it obscures our password; any entry we want to obscure can be achieved with MD5. It also
helps to verify downloaded content. Some organizations, when they advertise their software
online, they would also calculate the hash value with MD5. The hash value is advertised
next to the download. When a user downloads the software, you can run it through software that computes MD5 to get the hash value. If the hash value of the downloaded
software is the same as the advertised hash value on the server, you would know to trust
the download. Sometimes malicious persons could upload their malicious payload onto
these sites. So we want to test that the hash value we get for the download is exactly what it
is as advertised on the web servers. However, for MD5 it has been repeatedly demonstrated that it is not collision resistant against a number of attacks. What do we mean by not
collision resistant? When you have a random set of data, you create the hash value. A
different set of data is able to create the same hash value. Malicious persons are able to
force the value that has been changed to still give the original hash value, thereby deceiving
end users or deceiving administrators into trusting information that has been corrupted. So
MD5 has been known to be vulnerable to such attacks, hence we say it is not collision
resistant. The next algorithm here is the Secure Hashing Algorithm, which we call SHA. It is a checksum algorithm capable of producing fixed digest sizes of up to 512 bits. We have variants such as SHA-0, SHA-1, SHA-2 and SHA-3; these are different variants of this hashing algorithm. Stronger hashing algorithms
equate to longer hash values; they may take more time to generate, but this increases the overall overhead for anyone trying to reverse the hash value.
The Advanced Encryption Standard (AES) is one of the best encryption standards in the world. This solution works very fast in either software or hardware implementations, and it requires very little memory for operation. As a result it is preferred
by many environments. It is also an algorithm that went through a five-year rigorous testing process in which many other candidates were tested for fitness, and AES was found to be the best; it is one of the most popular in use today. It operates very fast in either hardware or software implementations and requires very little memory for usage.
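For a concrete feel of AES in use, here is a minimal sketch assuming the third-party Python "cryptography" package; it uses AES in GCM mode with a 256-bit key, and the message is illustrative only.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)                     # a fresh nonce for every message
aesgcm = AESGCM(key)

ciphertext = aesgcm.encrypt(nonce, b"quarterly results", None)
assert aesgcm.decrypt(nonce, ciphertext, None) == b"quarterly results"
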

DES, HMAC and CHAP


Our next lesson explores a series of additional encryption standards including DES, HMAC, RSA and CHAP.  In this lesson, we discuss what each data encryption standard is and how it works within the encryption process.
For example, we discuss how the HMAC key system works and demonstrate how it identifies the missing key for a successful encryption transaction.
In another example, you’ll learn why the more rounds of computation your encryption algorithm has, the more complex your network is to breach and the more secure your environment is.
Transcript
The next algorithm we look at now is the Data Encryption Standard, DES. This standard is a block cipher with a 56-bit key. It is no longer in use; it has been termed obsolete because there have been multiple attacks against DES. It has been compromised, and as a result the inventors of DES came up with triple DES. With triple DES we take DES through three rounds of computation to improve on its security. This is one factor we are going to see going forward: in terms of the rounds of computation, the more rounds of computation, the stronger the algorithm is. With triple DES as a follow-up to DES, the algorithm is strengthened in terms of the number of rounds of computation by ensuring that it goes through 3 rounds of encryption. We will see that DES uses a 56-bit key, but triple DES applies 56-bit keys in 3 places, which gives you a 168-bit key. We also have the hash-based message authentication code, which is HMAC. This uses a secret key with the hashing algorithm to generate a MAC value. The MAC value is then applied to the message. The key is never applied to the message; it is the MAC value that is applied to the message and sent with the message. We should know that the key is never sent with the message, neither is it applied to the message. On the receiving end, the receiver would also know the key. They need knowledge of the key so that they can verify the message that has been sent.
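A minimal HMAC sketch with Python's standard hmac module: the shared secret key is combined with the message through a hash function to produce the MAC value, and only the MAC travels with the message, never the key. The key and message below are illustrative.

import hmac, hashlib

key = b"shared-secret-key"                 # known to sender and receiver, exchanged out of band
message = b"transfer 100 to account 42"

mac = hmac.new(key, message, hashlib.sha256).hexdigest()   # sent along with the message

# The receiver, knowing the same key, recomputes the MAC and compares in constant time.
assert hmac.compare_digest(mac, hmac.new(key, message, hashlib.sha256).hexdigest())
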
RSA is another algorithm; it employs a padding scheme. With this strategy we can take information that has nothing to do with the data we
want to encrypt. We add this information to create further confusion, should the information fall into the wrong hands. By adding information that has no bearing on what you want to
encrypt you increase the complexity of anyone trying to decipher the messages, once
they’ve been encrypted. RC4 is vulnerable to what we call the initialization vector attack. RC4 was widely used with WEP; as a result of the vulnerability to the initialization vector attack, we don’t use RC4 anymore. The key vulnerability in RC4 was that it was limited in its key generation: the likelihood that keys would be reused was very high. This allowed RC4 to be easily compromised, and as a result we moved away from WEP, which depends largely on RC4, to WPA, which depends on TKIP. Even that was easily compromised, and we moved away again to WPA2, which relies on CCMP. RC4 is a stream cipher. The one-time pad encryption algorithm combines plain-text input with a random key, or pad, of equal length that is used only once. The one-time pad provides perfect secrecy whenever the generated key is truly random, is kept secret and is never reused. These are the conditions under which we best use the one-time pad: the key must be truly random; the key should be used only once; there must be only one instance of the key at any point in time; the key
should never be reused.
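A toy one-time pad sketch: a truly random key of the same length as the message, used once, is XORed with the plaintext, and XORing again with the same key recovers the message.

import os

message = b"ATTACK AT DAWN"
key = os.urandom(len(message))             # must be truly random, kept secret, never reused

ciphertext = bytes(m ^ k for m, k in zip(message, key))
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))
assert recovered == message
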
The Challenge Handshake Authentication Protocol, CHAP, is a protocol that allows users to securely connect to a system and is typically used with point-to-
point connections. CHAP was created to replace PAP, which is the password authentication
protocol, due to its vulnerability to eavesdropping and passing of authentication credentials
in clear text. This is the problem we have with the password authentication protocol. It is a
protocol with which our passwords are validated on the servers. The problem with PAP is
that PAP will move the passwords in clear text and this is very vulnerable to eavesdropping.
Malicious persons eavesdropping on our network communications can see what our
passwords are. The next item we look at now is the comparative strength of algorithms.
Actually, there are 2 things to review when we discuss the comparative strength of
algorithms, the first is, the key space. The size of the key space determines the number of
keys that could be generated when you are producing your keys with which you do your
cryptography. If you are using algorithms that have a large key space, the likelihood of
generating more keys is very high. This would be very good because you have many keys
to use before you run out. However, if you use algorithms that have a smaller key space,
the likelihood of you repeating keys will be very high because you only have a few sets of
keys. By the time you’ve sent a couple of messages, you start reusing your keys and the
malicious persons are able to easily detect that keys are being reused. Once they decipher
2 of your keys, it is possible that they will decipher every other message that has been
encrypted with such keys. The next item we consider is the number of rounds of
computation. I typically would give an example here. Say you have your car, you want to
paint your car. The garage on the left hand side says, “For $50 you get one coat of paint.” The garage on the right hand side says, “For $50 you get 3 coats of paint.” We will see that for the
same amount of money they give a different number of coats. 3 coats of paint would give
better protection, the same idea applies to our encryption algorithms, the more rounds of
computation your algorithms take the data, the more complex and the higher the work value
to compromise this information if malicious persons are to try to decipher. The numbers of
rounds of computation increases the complexity or what we call the work value to decipher
this information. Finally we have key stretching. Key stretching techniques strengthen a weak key, usually a password, against brute force attacks by increasing the time required to test each potential key. The malicious persons will have to spend much more time testing these keys, which increases the frustration for the malicious persons. The strategy employs techniques that create enhanced keys, or puts the block ciphers being used in a loop; this would theoretically be impossible to crack if we expand the keys. We have 2 common functions that use key stretching for password-based key derivation: PBKDF2 and bcrypt.
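A short key-stretching sketch with PBKDF2 from Python's standard library: a weak, human-chosen password is run through many iterations of HMAC-SHA256, so every guess an attacker makes becomes proportionally more expensive. The salt and iteration count shown are illustrative choices.

import hashlib, os

password = b"Winter2024!"                  # a weak, human-chosen secret
salt = os.urandom(16)                      # random salt, stored alongside the derived key
derived_key = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000, dklen=32)
print(derived_key.hex())
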

Public Key Infrastructure


Our last lesson for this chapter on General Cryptography Concepts talks about the Public
Key Infrastructure. In this lesson we discuss what the Public Key Infrastructure is and
explain each of its sub components and what role they have.
For example, we explored Symmetric Encryption and the Public/Private key interrelationship of Asymmetric Encryption in an earlier lesson.  Now we cover the Registration Authority, define what a Recovery Agent is and look at other PKI components.
You’ll also learn about Certificate Authorities, how certificates are generated, what is validated, and what the certificate revocation list (CRL) is.
And finally, we’ll dissect CA hashing, demonstrate what it looks like during encryption and
decryption at the browser level and learn why that is important during the encryption
process.
Transcript
Welcome to cybrary.IT. my name is John Oyeleke, subject matter expert for the security +
certification. In this video we will be discussing sections 6.3 and 6.4. These have to do with
public key infrastructure and the associated components. Public key infrastructure entails the use of multiple infrastructure technologies, protocols and cryptography to ensure the secure exchange of messages across the internet. Organizations will use a collection of several technologies. These include your asymmetric keys, that is the public key and the private key, the registration authority, the key escrow, the recovery agent, the certificate authority and the certificate revocation list. All of these are brought together to guarantee that secure messages can be exchanged across the internet between individuals or users who need to share information. With public key infrastructure we use asymmetric keys. Asymmetric key encryption involves 2 keys, the public key and the private key. The public key can be
advertised to anyone but the private key is never disclosed. If we encrypt messages with
the public key, only the corresponding private key could be used to decrypt the message.
When we have the use of these keys, organizations are usually provided keys so that they can verify themselves. When they proceed to get the certificates with which they properly authenticate themselves to other individuals or anybody accessing their sites, the first entity they deal with is the registration authority. The registration authority is responsible for verification of the entity seeking to obtain a certificate. The registration authority will check the address,
the name, and the business documents that verify that business, so that a certificate will
then be individually assigned to that person by the certificate authority. The registration
authority does not do assignment of certificate rather, they do verification of entity. If the
registration authority is satisfied, now the entity or the company is passed over to the
certificate authority. If we are using asymmetric keys, public and private keys, we need to
have something called the key escrow. The key escrow is a trusted 3rd party entity, with
which we keep copies of our keys for a time of need. Should it be that we lose the keys, we forget the keys, or the users that daily work with the keys are no longer members of the organization, we could send the recovery agent to the key escrow. The entity we send to
the key escrow to recover copies of the key is what we call the recovery agent. The key
escrow is the trusted 3rd party entity with which we safeguard copies of our keys for a time
of need. Within the organization we have one entity we send to the key escrow to retrieve
the keys, that entity is called the recovery agent. The recovery agent does not safeguard
the keys. The key escrow safeguards the keys. The recovery agent is only to retrieve the
key from the key escrow. Next we have the certificate authority. The certificate authorities
generate the certificates. They sign the certificates and also issue the certificates.
Certificates are only issued by the certificate authorities. The certificate authorities could
also publish what is called a certificate revocation list. The certificate revocation list is a list
of revoked certificates. Certificates could be revoked for several reasons: they could expire, or certain information could have been false during verification and the certificate needs to be revoked once this is discovered. Certificates that have been tampered with need to be revoked, and certificates that have expired will be revoked; all of these are published on the certificate revocation list. The essence of the certificate
revocation list is so that we have a list of certificates that are no longer in use and not to be
trusted. The certificate revocation list can only be published by the certificate authority. The
certificate authorities would only revoke certificates that were issued by themselves, they do
not revoke certificates issued by other certificate authorities.
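To make the signing idea concrete, here is a small sketch assuming the third-party Python "cryptography" package: a stand-in certificate authority signs some certificate data with its private key, and anyone holding the CA's public key can verify that signature. The certificate data shown is invented for the example.

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

ca_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ca_public = ca_private.public_key()

certificate_data = b"subject=www.example.com; valid until 2026-01-01"

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
signature = ca_private.sign(certificate_data, pss, hashes.SHA256())

# verify() raises InvalidSignature if the data or the signature has been tampered with.
ca_public.verify(signature, certificate_data, pss, hashes.SHA256())
print("signature verified")
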
This concludes Sections 6.3 and 6.4 of the Security+ syllabus.

Welcome to Cybrary.IT, My name is John Oyeleke, subject matter expert for the security
plus-certification. In this video I’m giving an overview of the exam – the security plus exam.
The exam is broken down into one, two, three, four, five, six sections. We start off with
network security. This is 21% of the exam. Then we talk about the compliance and
operational security that takes 18% of the exam, with threats and vulnerability taking 21%.
Application data and host security consist of 16% and access control and identity
management 13% of the exam. The smallest section on the exam is that of cryptography
taking 11%. In total you have 100%. For this exam, the number of questions is put at 90; that is a maximum of 90 questions. Well, in some cases students could get anything between 75 and 80 questions, but the maximum is 90 questions. So, when you prepare for the exam, practice ninety questions at a time: whatever question banks you are preparing from, use 90 questions at a time, and for the time allow yourself 90 minutes. So, test yourself. See that you can do 90 questions in 90 minutes such that even if you are given fewer questions, you can still finish the exam comfortably in the time
provided. Now, what type of questions would we have? We have multiple choice or
performance based questions. For multiple choice, you are given options A, B, C, D, E or F.
You would be asked to select one, or you could be asked to select two or three. Where you
are required to select two, it would also be stated after the question; “Choose two.” Where
you are required to select three; “Choose three.” The performance based questions are
simulation based questions where you have drag and drop, mix and match, click and select,
you know, these are simulation type questions. Usually on the exams, they would present
you the performance based questions first. Students are advised that if you can’t really answer the performance based questions, you click for review and move on to the next. So, you could
click for review so that once you are done with the objectives you could return to review the performance-based questions. You possibly would have a better understanding by the time you have
gone through some objectives or you’ve been able to calm down and review the objective
properly. This way, you could provide better answers to performance based questions. But
please, I advise, do not waste too much time trying to understand the performance based
questions. Rather, if you don’t understand it, mark for review, proceed to the next one. So
you don’t run out of time. Finally, the passing score is 750 points. Anything below that is a
fail. And everything above that is a good pass. You get 750, you pass the exam. You get
above 750, you still pass the exam. However, nobody requires your passing score once you
pass the exam. After you pass the exam CompTIA will send you an email that grants
access to a portal where you could download the PDF copy of the certificate. A couple of
weeks after the exam, you get the actual certificate in the post and a wallet card stating the
certificate for which you have been successful.

CompTIA Security+ Exam Overview


This Exam Review provides an overview of the specifics of what the exam covers, tips on what you should do before, during and after your exam, the percentage of the exam each topic covers, the range and total number of questions you should expect, how you can self-test to prepare yourself, and what you can expect as part of the certification exam
process.
The Exam Module reviews what you need to know and for which particular concepts; it covers logistics and timing, and offers a number of certification exam tips.
You’ll learn techniques for addressing the types of queries you’ll encounter on the CompTIA
Security+ Exam.  For example, note what type of questions are presented first, in this case
performance based queries.  If you’d rather not answer those, the exam overview will
provide guidance on what to do and how you can return to that section and be better
prepared with responses.
We also discuss and provide some examples of simulation type questions, advise on the total number of queries the exam has and provide the passing score benchmark.  And finally, we
advise you on the exam and certification notification process and how you can obtain your
CompTIA Security+ Certification document.
The test-taking tips discussed here apply to any exam or professional evaluation you’d prepare yourself for, not just the CompTIA Security+ certification exam in particular.
The goal of this Exam Review is to help you manage your time wisely, think through the
exam to navigate it strategically, and master each phase of the exam thoroughly.
