Chapter 7 Administering Security
In some cases, a group can be adequately represented by someone who is consulted at appropriate
times, rather than a committee member from each possible constituency being enlisted.
7.2 Risk Analysis
Good, effective security planning includes a careful risk analysis. A risk is a potential problem that the
system or its users may experience. We distinguish a risk from other project events by looking for three
things, as suggested by Rook [ROO93]:
1. A loss associated with an event. The event must generate a negative effect: compromised
security, lost time, diminished quality, lost money, lost control, lost understanding, and so on.
This loss is called the risk impact.
2. The likelihood that the event will occur. The probability of occurrence associated with each risk
is measured from 0 (impossible) to 1 (certain). When the risk probability is 1, we say we have
a problem.
3. The degree to which we can change the outcome. We must determine what, if anything, we
can do to avoid the impact or at least reduce its effects. Risk control involves a set of actions
to reduce or eliminate the risk. Many of the security controls we describe in this book are
examples of risk control.
We usually want to weigh the pros and cons of different actions we can take to address each risk. To that end, we can quantify the effects of a risk by multiplying the risk impact by the risk probability, yielding the risk exposure. Clearly, risk probabilities can change over time, so it is important to track them and plan for events accordingly.
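As a rough illustration of this calculation, here is a minimal sketch in Python; the risks, impact figures, and probabilities are hypothetical values invented only to show the arithmetic, not data from the text.

# Hypothetical risks: impact in dollars, probability of occurrence within a year.
risks = {
    "laptop theft": {"impact": 5_000, "probability": 0.10},
    "web server outage": {"impact": 40_000, "probability": 0.05},
    "customer data breach": {"impact": 250_000, "probability": 0.01},
}

for name, risk in risks.items():
    # risk exposure = risk impact x risk probability
    exposure = risk["impact"] * risk["probability"]
    print(f"{name}: exposure = ${exposure:,.0f}")

Ranking risks by exposure in this way gives a first, crude indication of where risk-control effort is likely to pay off.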
Risk is inevitable in life: Crossing the street is risky but that does not keep us from doing it. We can
identify, limit, avoid, or transfer risk but we can seldom eliminate it. In general, we have three strategies
for dealing with risk:
1. avoiding the risk, by changing requirements for security or other system characteristics
2. transferring the risk, by allocating the risk to other systems, people, organizations, or assets;
or by buying insurance to cover any financial loss should the risk become a reality
3. assuming the risk, by accepting it, controlling it with available resources, and preparing to deal with
the loss if it occurs
Thus, costs are associated not only with the risk's potential impact but also with reducing it. Risk leverage is the difference in risk exposure divided by the cost of reducing the risk. In other words,
risk leverage = (risk exposure before reduction - risk exposure after reduction) / (cost of risk reduction)
If the leverage value of a proposed action is not high enough, then we look for alternative but less costly actions or more effective reduction techniques.
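To make the leverage formula concrete, here is a minimal worked sketch in Python; the impact, probabilities, and control cost are assumed figures chosen only to illustrate the computation.

# Assumed figures for one risk, before and after applying a candidate control.
impact = 250_000          # loss in dollars if the risk materializes
prob_before = 0.01        # yearly probability with no control in place
prob_after = 0.002        # yearly probability once the control is applied
control_cost = 300        # yearly cost of the control, in dollars

exposure_before = impact * prob_before                         # 2,500
exposure_after = impact * prob_after                           # 500
leverage = (exposure_before - exposure_after) / control_cost   # about 6.7

print(f"risk leverage = {leverage:.1f}")

A leverage value well above 1 suggests the control reduces exposure by more than it costs; a value near or below 1 suggests looking for a cheaper or more effective alternative.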
Risk analysis is the process of examining a system and its operational context to determine possible
exposures and the potential harm they can cause. Thus, the first step in a risk analysis is to identify and list all exposures in the computing system of interest. Then, for each exposure, we identify possible controls and their costs. The last step is a cost-benefit analysis: Does it cost less to implement a control or to
accept the expected cost of the loss? In the remainder of this section, we describe risk analysis, present
examples of risk analysis methods, and discuss some of the drawbacks to performing risk analysis.
The Nature of Risk
In our everyday lives, we take risks. In crossing the road, eating oysters, or playing the lottery, we take
the chance that our actions may result in some negative result such as being injured, getting sick, or
losing money. Consciously or unconsciously, we weigh the benefits of taking the action with the possible
losses that might result. Just because there is a risk to a certain act we do not necessarily avoid it; we may
look both ways before crossing the street, but we do cross it. In building and using computing systems,
we must take a more organized and careful approach to assessing our risks. Many of the systems we build
and use can have a dramatic impact on life and health if they fail. For this reason, risk analysis is an
essential part of security planning.
We cannot guarantee that our systems will be risk free; that is why our security plans must address
actions needed should an unexpected risk become a problem. And some risks are simply part of doing
business; for example, as we have seen, we must plan for disaster recovery, even though we take many
steps to avoid disasters in the first place.
When we acknowledge that a significant problem cannot be prevented, we can use controls to reduce the
seriousness of a threat. For example, you can back up files on your computer as a defense against the
possible failure of a file storage device. But as our computing systems become more complex and more distributed, complete risk analysis becomes more difficult and time-consuming, and also more essential.
Steps of a Risk Analysis
Risk analysis is performed in many different contexts; for example, environmental and health risks are
analyzed for activities such as building dams, disposing of nuclear waste, or changing a manufacturing
process. Risk analysis for security is adapted from more general management practices, placing special
emphasis on the kinds of problems likely to arise from security issues. By following well-defined steps,
we can analyze the security risks in a computing system.
The basic steps of risk analysis are listed below.
1. Identify assets.
2. Determine vulnerabilities.
3. Estimate likelihood of exploitation.
4. Compute expected annual loss.
5. Survey applicable controls and their costs.
6. Project annual savings of control.
These steps are described in detail in the following sections.
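Before examining the steps in detail, the arithmetic behind steps 3 through 6 can be sketched briefly. The following Python fragment uses assumed loss, likelihood, and cost figures purely for illustration.

# Assumed figures for one vulnerability and one candidate control.
loss_per_incident = 20_000        # estimated loss each time the vulnerability is exploited
incidents_per_year = 0.5          # step 3: estimated likelihood (about once every two years)
incidents_with_control = 0.1      # estimated likelihood if the control is in place
control_cost_per_year = 4_000     # annual cost of the candidate control

# Step 4: compute expected annual loss without the control.
annual_loss = loss_per_incident * incidents_per_year                       # 10,000
# Step 6: project annual savings of the control.
loss_with_control = loss_per_incident * incidents_with_control             # 2,000
annual_savings = annual_loss - loss_with_control - control_cost_per_year   # 4,000

print(f"expected annual loss: ${annual_loss:,.0f}")
print(f"projected annual savings of control: ${annual_savings:,.0f}")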
Step 1: Identify Assets
Before we can identify vulnerabilities, we must first decide what we need to protect. Thus, the first step
of a risk analysis is to identify the assets of the computing system. The assets can be considered in
categories, as listed below. The first three categories are the hardware, software, and data that make up the computing system itself; the remaining items are not strictly a part of a computing system but are important to its proper functioning.
- hardware: processors, boards, keyboards, monitors, terminals, microcomputers, workstations, tape drives, printers, disks, disk drives, cables, connections, communications controllers, and communications media
- software: source programs, object programs, purchased programs, in-house programs, utility programs, operating systems, systems programs (such as compilers), and maintenance diagnostic programs
- data: data used during execution, stored data on various media, printed data, archival data, update logs, and audit records
- people: skills needed to run the computing system or specific programs
- documentation: on programs, hardware, systems, administrative procedures, and the entire system
- supplies: paper, forms, laser cartridges, magnetic media, and printer fluid
Other published guidelines describe a similar analysis in five steps:
1. Identify the critical information to be protected.
2. Analyze the threats.
3. Analyze the vulnerabilities.
4. Assess the risks.
5. Apply countermeasures.
As you can see, the steps are similar, but their details are always tailored to the particular situation at hand. For this reason, it is useful to use someone else's risk analysis process as a framework, but it is important to change it to match your own situation.
The same applies to the asset list itself: it is essential to tailor it to your own situation. No two organizations will have the same assets to protect, and something that is valuable in one organization may not be as valuable to another.
For example, if a project has one key designer, then that designer is an essential asset; on the other hand, if
a similar project has ten designers, any of whom could do the project's design, then each designer is not as
essential because there are nine easily available replacements. Thus, you must add to the list of assets the
other people, processes, and things that must be protected. For example, RAND Corporation's
Vulnerability Assessment and Mitigation (VAM) methodology includes additional assets, such as the
enabling infrastructure the building or vehicle in which the system will reside the power, water, air, and
other environmental conditions necessary for proper functioning human and social assets, such as policies,
procedures, and training The VAM methodology is a process supported by a tool to help people identify
assets, vulnerabilities, and countermeasures.
We use other aspects of VAM as an example technique in later risk analysis steps.
In a sense, the list of assets is an inventory of the system, including intangibles and human
resource items. For security purposes, this inventory is more comprehensive than the traditional
inventory of hardware and software often performed for configuration management or accounting
purposes. The point is to identify all assets necessary for the system to be usable.
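As a rough sketch of what such an inventory might look like in machine-readable form, the Python fragment below records a few assets by category; the asset names, owners, and value fields are hypothetical choices, not requirements.

# Hypothetical asset inventory; the field names are illustrative only.
inventory = [
    {"category": "hardware", "name": "database server", "owner": "IT operations", "value": 15_000},
    {"category": "software", "name": "in-house billing application", "owner": "finance", "value": 80_000},
    {"category": "data", "name": "customer records", "owner": "sales", "value": 500_000},
    {"category": "people", "name": "lead system designer", "owner": "engineering", "value": None},
    {"category": "documentation", "name": "operations runbook", "owner": "IT operations", "value": None},
    {"category": "supplies", "name": "backup tapes", "owner": "IT operations", "value": 1_000},
]

# Group the inventory by category to spot obvious gaps in coverage.
by_category = {}
for asset in inventory:
    by_category.setdefault(asset["category"], []).append(asset["name"])
for category, names in sorted(by_category.items()):
    print(f"{category}: {', '.join(names)}")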
Step 2: Determine Vulnerabilities
The next step in risk analysis is to determine the vulnerabilities of these assets. This step requires
imagination; we want to predict what damage might occur to the assets and from what sources. We can
enhance our imaginative skills by developing a clear idea of the nature of vulnerabilities. This nature
derives from the need to ensure the three basic goals of computer security: confidentiality, integrity, and
availability. Thus, a vulnerability is any situation that could cause loss of confidentiality, integrity, or
availability. We want to use an organized approach to considering situations that could cause these losses
for a particular object.
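One organized approach is to walk through each asset and ask what could cause it to lose confidentiality, integrity, or availability. The short Python sketch below illustrates such an asset-by-goal matrix; the assets and example vulnerabilities are hypothetical.

# For each hypothetical asset, list candidate vulnerabilities under each security goal.
vulnerabilities = {
    "customer records": {
        "confidentiality": ["stolen backup tape", "disclosure by an insider"],
        "integrity": ["unauthorized modification of a record"],
        "availability": ["database server failure"],
    },
    "billing application": {
        "confidentiality": [],
        "integrity": ["software defect silently corrupts invoices"],
        "availability": ["denial-of-service attack on the web front end"],
    },
}

for asset, goals in vulnerabilities.items():
    for goal, items in goals.items():
        for item in items:
            print(f"{asset} / {goal}: {item}")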
Software engineering offers us several techniques for investigating possible problems. Hazard analysis, for example, explores failures that may occur and the faults that may cause them. These techniques have
been used successfully in analyzing safety-critical systems. However, additional techniques are tailored
specifically to security concerns; we address those techniques in this and following sections.
7.3 Security Policies
A key element of any organization's security planning is an effective security policy. A security policy
must answer three questions: who can access which resources in what manner?
A security policy is a high-level management document to inform all users of the goals of and
constraints on using a system. A policy document is written in broad enough terms that it does not
change frequently. The information security policy is the foundation upon which all protection efforts
are built. It should be a visible representation of priorities of the entire organization, definitively stating
underlying assumptions that drive security activities.The policy should articulate senior management's
decisions regarding security as well as asserting management's commitment to security. To be effective,
the policy must be understood by everyone as the product of a directive from an authoritative and
influential person at the top of the organization.
People sometimes issue other documents, called procedures or guidelines, to define how the policy
translates into specific actions and controls. In this section, we examine how to write a useful and
effective security policy.
Purpose
Security policies are used for several purposes, including the following:
- recognizing sensitive information assets
- clarifying security responsibilities
- promoting awareness for existing employees
- guiding new employees
Audience
A security policy addresses several different audiences with different expectations. That is, each group (users, owners, and beneficiaries) uses the security policy in important but different ways.
Users
Users legitimately expect a certain degree of confidentiality, integrity, and continuous availability in the
computing resources provided to them. Although the degree varies with the situation, a security policy
should reaffirm a commitment to this requirement for service. Users also need to know and appreciate
what is considered acceptable use of their computers, data, and programs. For users, a security policy
should define acceptable use.
Owners
Each piece of computing equipment is owned by someone, and the owner may not be a system user. An
owner provides the equipment to users for a purpose, such as to further education, support commerce, or
enhance productivity. A security policy should also reflect the expectations and needs of owners.
Beneficiaries
A business has paying customers or clients; they are beneficiaries of the products and services
offered by that business. At the same time, the general public may benefit in several ways: as a
source of employment or by provision of infrastructure. In the same way, the government has customers:
the citizens of its country, and "guests" who have visas enabling entry for various purposes and times. A
university's customers include its students and faculty; other beneficiaries include the immediate
community (which can take advantage of lectures and concerts on campus) and often the world
population (enriched by the results of research and service).
To varying degrees, these beneficiaries depend, directly or indirectly, on the existence of or access to
computers, their data and programs, and their computational power. For this set of beneficiaries,
continuity and integrity of computing are very important. In addition, beneficiaries value confidentiality
and correctness of the data involved. Thus, the interests of beneficiaries of a system must be reflected in
the system's security policy.
Balance Among All Parties
A security policy must relate to the needs of users, owners, and beneficiaries. Unfortunately, the
needs of these groups may conflict. A beneficiary might require immediate access to data, but
owners or users might not want to bear the expense or inconvenience of providing access at all
hours. Continuous availability may be a goal for users, but that goal is inconsistent with a need to
perform preventive or emergency maintenance. Thus, the security policy must balance the priorities of
all affected communities.
Contents
A security policy must identify its audiences: the beneficiaries, users, and owners. The policy should
describe the nature of each audience and their security goals. Several other sections are required,
including the purpose of the computing system, the resources needing protection, and the nature of the
protection to be supplied. We discuss each one in turn.
Purpose
The policy should state the purpose of the organization's security functions, reflecting the
requirements of beneficiaries, users, and owners. For example, the policy may state that the system will
"protect customers' confidentiality or preserve a trust relationship," "ensure continual usability," or
"maintain profitability." There are typically three to five goals, such as:
1. Promote efficient business operation.
2. Facilitate sharing of information throughout the organization.
3. Safeguard business and personal information.
4. Ensure that accurate information is available to support business processes.
5. Ensure a safe and productive place to work.
6. Comply with applicable laws and regulations.
The security goals should be related to the overall goal or nature of the organization. It is important that
the system's purpose be stated clearly and completely because subsequent sections of the policy will
relate back to these goals, making the policy a goal-driven product.
Protected Resources
A risk analysis will have identified the assets that are to be protected. These assets should be listed in the
policy, in the sense that the policy lays out which items it addresses. For example, will the policy apply
to all computers or only to those on the network? Will it apply to all data or only to client or
management data? Will security be provided to all programs or only the ones that interact with
customers? If the degree of protection varies from one service, product, or data type to another, the
policy should state the differences. For example, data that uniquely identify clients may be protected
more carefully than the names of cities in which clients reside.
Nature of the Protection
The asset list tells us what should be protected. The policy should also indicate who should have access
to the protected items. It may also indicate how that access will be ensured and how unauthorized people
will be denied access. All the mechanisms described in this book are at your disposal in deciding which
controls should protect which objects. In particular, the security policy should state what degree of
protection should be provided to which kinds of resources.
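To show how such statements can be made concrete enough to implement, here is a minimal Python sketch mapping kinds of resources to the roles allowed to access them and the permitted modes of access; the resource names, roles, and modes are hypothetical examples, not taken from any particular policy.

# Hypothetical policy table: resource kind -> roles allowed and permitted access modes.
policy = {
    "client identifying data": {"roles": {"account manager"}, "modes": {"read"}},
    "client city of residence": {"roles": {"account manager", "analyst"}, "modes": {"read"}},
    "management reports": {"roles": {"manager"}, "modes": {"read", "write"}},
}

def access_allowed(role, resource, mode):
    """Return True if the policy permits this role to access the resource in this mode."""
    entry = policy.get(resource)
    return bool(entry) and role in entry["roles"] and mode in entry["modes"]

print(access_allowed("analyst", "client identifying data", "read"))   # False
print(access_allowed("manager", "management reports", "write"))       # True

A sketch like this answers, for each request, the policy's three questions: who (the role) can access which resources in what manner (the mode).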
Characteristics of a Good Security Policy
If a security policy is written poorly, it cannot guide the developers and users in providing
appropriate security mechanisms to protect important assets. Certain characteristics make a
security policy a good one.
Coverage
A security policy must be comprehensive: It must either apply to or explicitly exclude all possible
situations. Furthermore, a security policy may not be updated as each new situation arises, so it
must be general enough to apply naturally to new cases that occur as the system is used in unusual or
unexpected ways.
Durability
An important key to durability is keeping the policy free from ties to specific data or protection
mechanisms that almost certainly will change.
Realism
The policy must be realistic. That is, it must be possible to implement the stated security
requirements with existing technology. Moreover, the implementation must be beneficial in terms of time,
cost, and convenience; the policy should not recommend a control that works but prevents the system or
its users from performing their activities and functions.
Usefulness
An obscure or incomplete security policy will not be implemented properly, if at all. The policy must be
written in language that can be read, understood, and followed by anyone who must implement it or is
affected by it. For this reason, the policy should be succinct, clear, and direct.
7.4 Cybersecurity
Cybersecurity threats and risks are notoriously hard to quantify and estimate. Some vulnerabilities, such as buffer overflows, are well understood, and we can scrutinize our systems to find and fix them. But other vulnerabilities are less well understood or not yet apparent. How do you estimate the likelihood that a hacker will attack a network, and how do you know the precise value of the assets the hacker will compromise? Even for attacks that have happened (such as widespread virus attacks), estimates of the damage vary widely, so how can we be expected to estimate the cost of attacks that have not happened?
7.5 Ethics
Ethics are the codes or principles of an individual or group that regulate and define acceptable behavior. Attitudes toward the ethics of computer use are affected by many factors other than nationality.
Differences are found among people within the same country, within the same social
class, and within the same company. Key studies reveal that education is the overriding factor
in leveling ethical perceptions within a small population. Employees must be trained and kept
aware of many topics related to information security, not the least of which is the expected
behavior of an ethical employee. This education is especially important in information security, as many
employees may not have the formal technical training to understand that their
behavior is unethical or even illegal. Proper ethical and legal training is vital to creating an
informed and well-prepared system user. Ethics are socially acceptable behavior. The key difference
between laws and ethics is that laws carry the authority of a governing body and ethics do not.