BScSwEng Course Handout - Chapter 2, Chapter 3 and Chapter 4
vulnerabilities or weaknesses that may be present in your system. You should carry out various
types of testing, including functional testing to check if your system functions correctly and
securely according to specifications, non-functional testing to assess how your system behaves
under different conditions and scenarios, and penetration testing to evaluate how it responds to
simulated attacks from malicious actors.
Step 5. Monitor security status
The fifth step to secure your system design and development is to monitor the security status
of your system. This continuous activity allows you to collect and analyze data and information
about your system's security performance, events, and incidents, as well as detect and respond to
any security issues or threats that may arise or escalate.
Step 6. Review security practices
The sixth and final step to secure your system design and development is to review your
security practices regularly and periodically. This iterative activity will allow you to evaluate and
improve your security processes, methods, and outcomes, as well as adapt and update your
security requirements, principles, and controls.
2.1 Understanding Threats
As new businesses take shape, new threats need to be identified and mitigated to allow for the
continued success of those businesses. Over time, new businesses can use additional security
technology to mitigate such threats. As your organization enters new businesses, it may be
worthwhile to consider developing, buying, and deploying new technological solutions that help
mitigate threats that did not exist prior to the organization’s entry into that new business.
Different types of businesses will be more sensitive to different threats, and will have
different security goals to mitigate those threats. Understanding threats is important in
determining a system’s security goals.
2.1.1 Defacement
Defacement is a form of online vandalism in which attackers replace legitimate pages of an
organization’s web site with illegitimate ones. In the years 1999 and 2001, for example, the
White House web site was defaced by supposed anti-NATO activists (Dennis and Gold 1999)
and Chinese hackers (Anderson 2001). In such defacement attacks, the attackers usually replace
the front page of a web site with one of their own choice.
2.1.2 Infiltration
In general, infiltration is an attack in which an unauthorized party gains full access to the
resources of a computer system (including, but not limited to, use of the CPUs, disks, and
network bandwidth). In later chapters, we study how buffer overflow, command injection, and
other software vulnerabilities can be used by attackers to infiltrate and “own” computers.
In some defacement attacks, an attacker may have to infiltrate a web server to conduct the
defacement. But the threat from infiltration can be quite different from that of defacement, depending on the type of web site. In the case of a political web site, you most importantly need to defend against an attacker who attempts to gain write capability, while in the case of a financial web site, it is most important to defend against an attacker who attempts to gain read capability.
The preceding example illustrates that different types of web sites are going to have different
security goals. In the case of a political web site, the integrity of the web site content is the most
significant concern, while in the case of a financial web site, integrity and confidentiality of
customer data are both of high importance. Military web sites have still different security
sensitivities. If a military web site is defaced, it might simply be embarrassing for them.
Infiltration of a military web site, in which confidential or classified data is acquired by the
attacker, however, could be a threat to national security.
2.1.3 Phishing
Phishing is an attack in which an attacker (in this case, a phisher) sets up a spoofed web site that
looks similar to a legitimate web site. The attacker then attempts to lure victims to the spoofed
web site and enter their login credentials, such as their usernames and passwords. In a phishing
attack, attackers typically lure users to the spoofed web site by sending them e-mails suggesting
that there is some problem with their account, and that the user should click a link within the e-
mail to “verify” their account information. The link included in the e-mail, of course, is to the
attacker’s web site, not the legitimate site. When unsuspecting users click the link, they arrive at
the spoofed site and enter their login credentials. The site simply logs the credentials, and either
reports an error to the user or redirects the user to the legitimate site (or both). The attacker later
uses the logged credentials to log into the user’s account and transfer money from the user’s
account to their own.
Why do users fall for clicking such links in e-mails sent by phishers? Phishers use various
techniques to hide the fact that the link is to their illegitimate, spoofed site. Following is an
example.
First, in HTML documents, a link is constructed as follows:
<A HREF='http://www.destination-site.com/'> Click here </A>
When the e-mail is rendered by a browser, the link will look like this: Click here, and the
destination address will not be apparent to an unsuspecting user.
An attacker can use code such as the following in an HTML e-mail sent to the victim:
<A HREF='http://www.evil-site.com/'> http://www.legitimate-site.com/ </A>
The browser displays http://www.legitimate-site.com/, but when the user clicks the link,
the browser loads the front page of www.evil-site.com since that is what is specified by the
hyperlink reference (HREF) in the anchor (A) tag in the HTML e-mail. In real phishing attacks,
the phisher might have the browser display www.paypal.com or www.google.com, and have the
hyperlink reference point to www.paypa1.com (with a “1” instead of a “l”) or www.gogole.com
(“google” misspelled), respectively.
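This display-versus-destination mismatch can also be checked mechanically. The following Python sketch (the class name and the sample e-mail snippet are our own illustrative choices, not part of any standard) uses the standard-library html.parser module to flag anchors whose visible text looks like a URL but whose HREF points to a different host:

# A minimal sketch: flag anchors whose visible text looks like a URL
# but whose HREF points to a different host (a common phishing trick).
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.href = None      # HREF of the anchor currently being parsed
        self.text = ""        # visible text collected inside the anchor
        self.suspicious = []  # (visible_text, actual_href) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.href = dict(attrs).get("href", "")
            self.text = ""

    def handle_data(self, data):
        if self.href is not None:
            self.text += data

    def handle_endtag(self, tag):
        if tag == "a" and self.href is not None:
            shown = self.text.strip()
            # If the visible text itself looks like a URL, compare hosts.
            if shown.startswith("http"):
                if urlparse(shown).netloc != urlparse(self.href).netloc:
                    self.suspicious.append((shown, self.href))
            self.href = None

auditor = LinkAuditor()
auditor.feed("<A HREF='http://www.evil-site.com/'>http://www.legitimate-site.com/</A>")
print(auditor.suspicious)  # [('http://www.legitimate-site.com/', 'http://www.evil-site.com/')]

Real mail clients apply far more elaborate heuristics, but the basic signal is the same one described above: the text the user sees and the destination the browser will visit do not agree.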
2.1.4 Pharming
Pharming is another attack in which a user can be fooled into entering sensitive data into a
spoofed web site. It is different than phishing in that the attacker does not have to rely on the user
clicking a link in an e-mail. With pharming, even if the user correctly enters a URL (uniform
resource locator)—or web address—into a browser’s address bar, the attacker can still redirect
the user to a malicious web site.
In a pharming attack, an attacker interferes with the machine name–to–IP address translation
for which the DNS is responsible. The attacker can do so by, for instance, compromising the
DNS server, and coaxing it into returning the attacker’s IP address instead of the legitimate one.
If the user is browsing via HTTP, the attack can be unnoticeable to the user. However, if a user
connects to a site using SSL, a pharming attack (in most cases) will result in a dialog box from
the browser complaining that it was not able to authenticate the server due to a “certificate
mismatch.”
For this reason, pharming is also known as DNS cache poisoning.
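As a rough illustration only, the following Python sketch compares the address the local resolver returns against an independently obtained known-good address. The host name and the pinned IP are hypothetical, and obtaining a trustworthy known-good address is itself the hard part; in practice, the certificate checks that HTTPS performs automatically are the much stronger version of this test:

import socket

# Hypothetical pinned address, for illustration only (192.0.2.0/24 is a
# documentation range). A real defense would not hard-code addresses.
EXPECTED = {"www.legitimate-site.com": "192.0.2.10"}

def looks_pharmed(host):
    resolved = socket.gethostbyname(host)  # asks the (possibly poisoned) DNS
    return resolved != EXPECTED.get(host, resolved)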
2.1.5 Insider Threats
A surprisingly large percentage of attacks take place with the cooperation of insiders. Insiders
could be, for instance, employees at a corporation who abuse their privileges to carry out
malicious deeds. As such, it is sometimes important to defend a system against the very people
that are responsible for using it on a daily basis. Database administrators, for example, have
traditionally been given the “keys to the entire kingdom,” and have complete access to all
employee and customer data stored in a database. System administrators similarly are given
“superuser” access to all resources and data under the control of an operating system. Additional
features are needed in both database and operating systems to provide for separation of privilege,
the concept that an individual should be given only the privileges that they need, without being given unrestricted access to all data and resources in the system.
2.1.6 Click Fraud
Prior to the advent of pay-per-click advertising, the threat of click fraud never existed. Pay-per-
click advertising is an Internet advertising model in which advertisers provide advertisements to
search engines. Search engines work with web site publishers to insert advertisements not only
on search result pages, but also on publishers' content pages. The idea is that the entire page of
content on a publisher’s site is considered a “query” for the search engine, and relevant ads are
inserted on the publisher’s web page. Advertisers pay the search engine whenever users click on
those advertisements. Web site publishers typically receive a revenue share for clicks on ads that
occur on their site. Advertisers usually set a maximum daily budget for their advertising
campaigns so that their advertising costs do not grow unbounded. Such a pay-per-click advertising
system can be abused in several ways. We will describe two of them.
In one type of click fraud, an advertiser will click a competitor’s ad with the intention of
“maxing out” their competitor’s budget. Once the competitor’s budget has been exhausted, the attacker’s own ads may be shown exclusively to legitimate users. Such an attack ends up wasting the
competitor’s financial resources, and allows the attacker to receive all the legitimate ad clicks
that their competitor might have received.
In another type of click fraud, a web site publisher will click on ads shown on their own web
site in an attempt to receive the revenue share for those clicks. In some cases, the fraudulent
publisher can hire a third-party firm or deploy malware to click on the ads.
Click fraud detection methods currently rely in part on security by obscurity: search engines do not publicly disclose exactly how they distinguish fraudulent clicks from legitimate ones. A deliberately naive detection heuristic is sketched below.
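The following Python sketch flags source IPs that click the same ad an implausible number of times in one day. The threshold, the field layout, and the sample data are illustrative assumptions, not how any real search engine works:

# A deliberately naive click fraud heuristic: flag source IPs that click
# the same ad more than a threshold number of times per day.
from collections import Counter

MAX_CLICKS_PER_IP_PER_DAY = 5

def flag_suspicious_ips(click_log):
    # click_log: iterable of (ip, ad_id) tuples for a single day
    counts = Counter(click_log)
    return {ip for (ip, ad), n in counts.items() if n > MAX_CLICKS_PER_IP_PER_DAY}

clicks = [("203.0.113.9", "ad-42")] * 20 + [("198.51.100.7", "ad-42")]
print(flag_suspicious_ips(clicks))  # {'203.0.113.9'}

A fraudster who rotates IP addresses or hires a click farm defeats this particular check, which is one reason real detection pipelines combine many signals.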
2.1.7 Denial-of-Service (DoS)
Another significant threat that e-commerce and financial institutions face is the DoS attack. In one type of DoS attack, the attacker sends so many packets to a web site that it cannot service the legitimate users that are trying to access it. A financial institution or e-commerce site can end up
losing money and revenue as the result of such a DoS attack because its customers will not be
able to conduct transactions or make online purchases.
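One common building block in DoS mitigation is limiting how many requests any single client may make. The token-bucket sketch below is a minimal illustration under assumed rates; real defenses operate at the network edge, since a large enough flood saturates bandwidth before application code ever runs:

# A minimal token-bucket sketch of per-client request rate limiting.
# The rate and burst capacity below are illustrative values.
import time

class TokenBucket:
    def __init__(self, rate=10, capacity=20):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # drop or delay the request instead of serving it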
2.2 Designing-In Security
At the highest level, designing security into a software application means that one should keep
security in mind while building it, starting with its requirements and design. It is not advisable to
write your code first, and then worry about making it secure afterward. Experience has shown
that it is very hard to add on security later.
An example of how it is very difficult to add security as an afterthought is the design of the
Internet itself. When the Internet was designed, all of the hosts (computers) on the network were
effectively trusted because they were owned by universities or military installations that trusted
each other and wanted to collaborate with one another. (The Internet grew out of a government
project funded by DARPA, the Defense Advanced Research Projects Agency.) In the mid-1990s,
due to the mass commercialization of the Internet, just about everyone started connecting their
computers to the Internet. New hosts were allowed to connect to the existing hosts regardless of
whether the existing parties on the network trusted the newly connected hosts. To protect
themselves, some hosts started deploying firewalls.
2.3 Convenience and Security
Security comes at a price not only to the company that is developing an information system, but
to the users of that system. The system may become less convenient for the users as more
security technology is deployed. For example, if you allow your users to choose whatever
password they like, this may lead to security vulnerabilities since some users may choose
passwords that are easy for attackers to guess. On the other hand, if you deploy a security
technology that assigns complicated passwords to users, your system may seem more secure, but
it will be less convenient to your users, since they may forget the passwords if they’re too
complicated. We say “seem” more secure because if the passwords are so hard to remember that
users start writing them down, this introduces another vulnerability that may end up actually
decreasing the security of the overall system. If those written-down passwords are stored in a
user’s wallet with all of his other credentials, that would involve some risk; but if they’re on a
Post-it note stuck to the side of a monitor in a public office space, that would involve
significantly more risk!
A good security technology can increase both convenience and security—although that may
not always be possible. For example, if you allow users to choose their own passwords, but make
them choose sufficiently complicated ones (e.g., require that users enter one digit or special
character into a password that’s between eight and ten characters), this might significantly
increase security at the cost of only a little inconvenience. A good security technology provides a substantial security benefit at only a slight inconvenience to users. Indeed, it can increase both convenience and security, because whatever slight inconvenience it introduces is offset by the far more significant inconveniences (and damages) that a successful attack would otherwise cause.
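As a small sketch of the policy just described (eight to ten characters, at least one digit or special character; the exact rules are just this example's assumptions):

# A small sketch of the password policy described above.
import re

def acceptable_password(pw):
    return (8 <= len(pw) <= 10
            and re.search(r"[0-9]|[^A-Za-z0-9]", pw) is not None)

print(acceptable_password("sunshine"))        # False: no digit or special character
print(acceptable_password("sunshine7"))       # True
print(acceptable_password("sunshine777777"))  # False: longer than ten characters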
2.4 Security in Software Requirements
From the Internet examples in the preceding section, you learned that it is not advisable to
retrofit software or communications protocols by adding security on afterward. It is advisable to
design for security from the beginning. One way to start “baking” security into your software is
to define concrete, measurable security goals in software requirements documents.
1. Security in Software Requirements
2. Specifying Error Handling Requirements
3. Sharing Requirements with Quality Assurance (QA)
4. Handling Internal Errors Securely
5. Including Validation and Fraud Checks
6. Writing Measurable Security Requirements
2.5 “Good Enough” Security
Every security book (including this one) will tell you to design security in from the beginning.
We definitely agree. At the same time, we do not think it is worth it to spend more than some
reasonable fraction of your time thinking about security. The fraction of time that you spend
thinking about security should be proportional to the number and types of threats that your
software and business faces. So, the message is: design it with security in mind. Design it so that
your alpha version is “secure enough.” Have a discussion with a security person even before you
design the alpha to make sure that your design is reasonable and will not preclude your ability to
strengthen your software against attacks in the future. Design so that there is enough flexibility
for you to build a secure beta version if the alpha version is successful.
Chapter 3 Section II - Secure Design Principles
While the previous chapter was concerned with high-level approaches and trade-offs in
security, this section will focus on security design principles. While there are many specific
security practices, they flow from a small set of well-accepted principles. Understanding the
fundamental principles puts you in the best position to implement specific practices where
needed in your own projects.
Here’s the bad news: it’s impossible to write perfectly secure applications. Bugs are going to
slip through, and if they do, attackers will find them. But that’s the bad news. The good news is
that you can design your applications to minimize the damage those bugs cause. The even better
news is that designing secure applications isn’t complicated or mysterious. The key is to follow a
few key principles during the application design phase. This way, even when bugs rear their ugly
heads, the damage they cause doesn’t lead to attackers obtaining all your valuable data, or the
entire service going down.
In this section, we’ll talk about key security principles that will work in any kind of
application. Following these principles is critical to ensuring that the software you ship is safe
and secure for your customers.
3.1 Principle of Least Privilege
The first principle for secure design is the Principle of Least Privilege. The Principle of Least Privilege means ensuring that people have only the access they need to do their jobs. For instance, if you design a system that holds sensitive customer financial information, it’s good practice to limit who can access that information. The person who answers the phone and schedules meetings probably doesn’t need access to all of the sensitive information. On the other hand, an account manager probably does need access to that information. The key is to ensure that the account manager doesn’t have access to information from accounts they don’t manage.
By ensuring that accounts have only the privileges necessary to do their job, you ensure that
if an attacker compromises an account, you minimize what information they can access. This
limits the damage of the attack.
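A minimal Python sketch of this idea follows. The user names, account IDs, and storage layout are all illustrative; a real system would enforce the check in the database or authorization layer rather than in application code:

# A minimal sketch of Least Privilege: an account manager may read only
# the accounts assigned to them. All names here are illustrative.
MANAGED_ACCOUNTS = {"alice": {"acct-100", "acct-101"}, "bob": {"acct-200"}}

def read_account(user, account_id, accounts_db):
    if account_id not in MANAGED_ACCOUNTS.get(user, set()):
        raise PermissionError(f"{user} does not manage {account_id}")
    return accounts_db[account_id]

accounts_db = {"acct-100": "...", "acct-200": "..."}
print(read_account("alice", "acct-100", accounts_db))  # allowed
# read_account("alice", "acct-200", accounts_db)       # raises PermissionError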
3.4 Principle of Failing Securely
The Principle of Failing Securely says that when a system fails, it should default to denying access rather than granting it. Imagine a building’s electronic badge-access system failing: if the system fails open, every door unlocks, while if it instead Fails Securely, all of the doors lock. Instead of granting access to all of the doors in the building, you don’t have access to any of them. No snooping for you, today!
The same concept applies to software design. A system that’s designed to Fail Securely only
grants access to parts of the system when every step of the process completes successfully.
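In code, that can be as simple as making “deny” the default outcome whenever any step of the check fails. The sketch below is illustrative; policy_store and its lookup/permits interface are assumed names, not a real library:

# A sketch of Fail Securely: if any step of the authorization check
# raises an error, the default outcome is "deny," never "allow."
def is_authorized(user, resource, policy_store):
    try:
        policy = policy_store.lookup(resource)  # may fail: network, bad data...
        return policy.permits(user)
    except Exception:
        return False  # a failure anywhere in the process locks the door

# The insecure variant would return True (or skip the check) when the
# policy lookup fails -- the equivalent of every door unlocking at once.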
3.5 Principle of Open Design
The Principle of Open Design says that your system security shouldn’t rely on the secrecy of
your implementation. This is a particularly important principle for security concepts like
cryptographic implementations. Well-designed cryptography implementations are published
publicly. They’re interrogated by the smartest people in the world before they’re put into
practice.
The same should be true for any software system. For instance, a system which doesn’t Fail
Securely, like before, might rely on the idea that “nobody would ever figure that out.” While it’s
unlikely that an attacker might deduce that a bug grants access to the system, it’s not impossible.
What’s more, if they ever gained access to your source code, they’d figure it out quickly. Instead,
follow the principles for secure design to ensure the system is safe even if someone malicious gains access to your code.
3.6 Principle of Avoiding Security by Obscurity
Security by Obscurity is similar to the principle of Open Design. Imagine software which has a
hard-coded secret username and password combination. When authenticated, this account has
full access to every account in the system. The security of this system relies on the credentials of
this account remaining a secret. Over time, a growing number of users will gain access to this
account. Some will leave the company, but they won’t forget the password. At that point, the
security of your application relies on the good will of those employees.
It’s true that every application’s security relies on secrets. Passwords are a critical part of
most authentication schemes. There’s nothing that you can do about that. However, a better
design for this system is one where the account with full access doesn’t exist in the first place. If
you must include it, don’t hard-code the credentials. Instead, make the account a normal account.
When someone with access to the account leaves the company, change the password.
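A hedged sketch of the difference follows; the hashing scheme and store layout are illustrative choices, and the only point that matters is that no secret is baked into the source code:

import hashlib
import hmac

# Anti-pattern: security rests on one string staying secret forever.
# BACKDOOR = ("admin", "s3cr3t-master-pw")   # don't do this

def hash_password(password, salt):
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def check_admin(username, password, credential_store):
    # Look the privileged account up like any other account; rotate its
    # password when someone with access leaves the company.
    record = credential_store.get(username)  # e.g., {"salt": b"...", "hash": b"..."}
    if record is None:
        return False
    candidate = hash_password(password, record["salt"])
    return hmac.compare_digest(candidate, record["hash"])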
3.7 Principle of Minimizing Attack Surface Area
The Principle of Minimizing Attack Surface Area is all about removing parts of an application to
make it more secure. The classic example doesn’t come from software, but from thinking about
our imaginary data center again. If you’ve ever visited a data center, you likely noticed they
don’t have a lot of windows. Part of this helps with cooling the building. But part of the reason data centers aren’t encased in windows is that windows are easy to break. You might have the
absolute best locks in the business, but they don’t do any good if someone comes in through the
window.
Parts of your application are like windows. They look nice, but they might expose
functionality that leads to bugs. Minimizing Attack Surface Area questions whether a feature is
necessary. Sometimes, by redesigning a feature to make it simpler, the application’s overall
security improves.
Dive Deeper
While these principles aren’t magic, this is still a fairly high-level view. If you’re interested in learning more about designing secure applications, there is much more detail to explore. If you’re about to start a new application design, knowing these and other principles is a great idea.
Many companies find security challenging because they’re unprepared when designing their
application. They bake that insecurity into their application. By the time they realize they have a
problem, a fix is costly and difficult. Instead, follow the principles laid out here. Learn more
about them, and seek to spot them in your application designs. When you do, you’ll improve
what you deliver. Your company and your customers will thank you for it!
Chapter 4 Risk Management Frameworks and Processes
Security is risk management; a continuous risk management process is a necessity. By teasing
apart architectural risk analysis (one of the critical software security touchpoints) and an overall
risk management framework (RMF), we can begin to make more sense of software security risk.
An RMF is at its heart a philosophy for software security. Following the RMF is by
definition a full lifecycle activity, no matter whether you're working on a little project or a huge
corporate application strategy. The key to reasonable risk management is to identify and keep
track of risks over time as a software project unfolds. As touchpoints are applied and risks are
uncovered, for example, an RMF allows us to track them and display information about status.
For the purposes of this chapter, consider the RMF as a high-level approach to iterative risk management that is deeply integrated throughout the software development lifecycle
(SDLC) and unfolds over time. The basic idea is simple: identify, rank, track, and understand
software security risk as it changes over time.
4.1 Putting Risk Management into Practice
The software security touchpoints exist to drive out technical risk. Critical to proper application
of the touchpoints is the notion of keeping track of security risks as they are uncovered and
making sure they are properly dealt with. The RMF is about identifying, tracking, and mitigating
software risk over time.
Software is a fairly geeky domain. It's about arcane technology that business people don't
understand. The question needs to be: How do you get business people to care whether their
software works or not? The answer has to be that software risk must be discussed, understood,
and related in terms of business impact. As a technical person, you need to say something like,
"If the flimflobble in sector four has a floosblozzle failure, that means we will miss the first
quarter number by $2 million" (as opposed to just saying the first part). Business people can
relate to the last part of the statement.
The purpose of an RMF like this is to allow a consistent and repeatable expertise-driven
approach to risk management. Don't forget that the main purpose of the RMF is to consistently
track and handle risks. By grounding the touchpoints in the RMF philosophy, you can develop a
powerful risk management capability.
4.2 The Five Stages of Activity
Figure 4-1 shows the RMF as a closed-loop process with five basic activity stages, each of which is numbered. This section presents a bird’s-eye view of those five fundamental activities, which we refer to as stages. The RMF can also be described in a pyramid fashion.
4.2.1 Stage 1: Understand the Business Context
During this first stage, the analyst extracts and describes the business goals that the software supports. Such goals include but are not limited to increasing revenue, meeting service-level agreements (SLAs),
reducing development costs, and generating high return on investment (ROI). The purpose of
this stage is to gather data to answer the all-important "Who cares?" question.
4.2.2 Stage 2: Identify the Business and Technical Risks
Business risks directly threaten one or more business goals. The identification of such risks helps
to clarify and quantify the possibility that certain events will directly impact business goals.
Business risks have impacts that include direct financial loss, damage to brand or reputation,
violation of customer or regulatory constraints, exposure to liability, and increase in development
costs. The severity of a business risk should be expressed in financial or project management
terms. These include but are not limited to market share (percentage), direct cost, level of
productivity, and cost of rework.
Business risk identification helps to define and steer use of particular technical methods for
extracting, measuring, and mitigating software risk given various software artifacts. The
identification of business risks provides a necessary foundation that allows software risk
(especially impact) to be quantified and described in business terms. This makes impact
statements tangible and spurs action on risk mitigation.
Central to this stage of the RMF is the ability to discover and describe technical risks and
map them (through business risks) to business goals. A technical risk is a situation that runs
counter to the planned design or implementation of the system under consideration. For example,
a technical risk may give rise to the system behaving in an unexpected way, violating its own
design strictures, or failing to perform as required. If the builders do not make proper use of
touchpoints, these kinds of risks may slip by unnoticed. Technical risks can also be related to the
process of building software. The process an organization follows may offer too many
opportunities for mistakes in design or implementation. Technical risks involve impacts such as
unexpected system crashes, avoidance of controls (audit or otherwise), unauthorized data
modification or disclosure, and needless rework of artifacts during development. Technical risk
identification is supported by the software security touchpoints.
4.2.3 Stage 3: Synthesize and Rank the Risks
Through the activities of synthesizing and prioritizing risks, the critical "Who cares?" question can (and must)
be answered. Synthesis and prioritization should be driven to answer questions such as: "What
shall we do first given the current risk situation?" and "What is the best allocation of resources,
especially in terms of risk mitigation activities?" Clearly, the prioritization process must take into
account which business goals are the most important to the organization, which goals are
immediately threatened, and how likely technical risks are to manifest themselves in a way that
impacts the business. This stage creates as its output lists of all the risks and their appropriate
weighting for resolution.
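One simple, commonly used convention for this synthesis (not something the RMF itself prescribes) is to score each risk as likelihood times business impact and sort the list, as in this illustrative Python sketch with made-up risks:

# A simple sketch of stage 3: rank risks by likelihood x business impact.
# The 1-5 scoring scale and the sample risks are illustrative assumptions.
risks = [
    {"id": "R1", "desc": "SQL injection in login", "likelihood": 4, "impact": 5},
    {"id": "R2", "desc": "Verbose error messages", "likelihood": 5, "impact": 2},
    {"id": "R3", "desc": "Unsigned audit logs",    "likelihood": 2, "impact": 4},
]

for r in sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
    print(r["id"], r["likelihood"] * r["impact"], r["desc"])
# R1 scores highest (20), so it gets mitigation resources first.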
4.2.4 Stage 4: Define the Risk Mitigation Strategy
One of the big problems in software security is that technical analysts are pretty good at
finding technical problems and pointing them out, but not so good at determining what to do
about them. Nobody wants to hear about their problems without hearing some suggested fixes. A
risk analysis is only as good as the mitigation strategy it contains.
This inability to determine how to fix the problems holds true for a majority of security
consultants. It is always easier to break something than to design something that can't be broken.
Given a set of risks and their priorities from stage 3, the next stage is to create a coherent
strategy for mitigating the risks in a cost-effective manner. Any suggested mitigation activities
must take into account cost, implementation time, likelihood of success, completeness, and
impact over the entire corpus of risks. A risk mitigation strategy must be constrained by the
business context and should consider what the organization can afford, integrate, and understand.
The strategy must also directly identify validation techniques that can be used to demonstrate
that risks are properly mitigated.
4.2.5 Stage 5: Carry Out Fixes and Validate
Once a mitigation strategy has been defined, it must be executed. Those artifacts where problems
have been identified (e.g., architectural flaws in a design, requirements collisions, coding errors,
or problems in testing) should be rectified. Risk mitigation is carried out according to the
strategy defined in stage 4. Progress at this stage should be measured in terms of completeness
against the risk mitigation strategy. Good status metrics include but are not limited to progress
against risks, open risks remaining, and any artifact quality metrics previously identified.
This stage also involves carrying out the validation techniques previously identified. The
validation stage provides some confidence that risks have been properly mitigated through
artifact improvement and that the risk mitigation strategy is working. Testing can be used to
demonstrate and measure the effectiveness of risk mitigation activities. The central concern at
this stage is to validate that software artifacts and processes no longer bear unacceptable risk.
This stage should define and leave in place a repeatable, measurable, verifiable validation
process that can be run from time to time to continually verify artifact quality.
Typical metrics employed during this stage include artifact quality metrics as well as levels
of risk mitigation effectiveness.
4.3 Measuring and Reporting on Risk
The importance of the central activity of identifying, tracking, storing, measuring, and reporting
software risk information cannot be overemphasized. Successful use of the RMF depends on
continuous and consistent identification and storage of risk information as it changes over time.
A master list of risks should be maintained during all stages of RMF execution and continually
revisited.
In practice, risk information can be tracked in a dedicated tool (one such tool is called the Workbench) or in something as simple as Excel spreadsheets.
Measurements regarding this master list make excellent reporting fodder.
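A master risk list can be modeled very simply. The sketch below is one illustrative Python shape (the field names and statuses are assumptions), including an "open risks remaining" count of the kind stage 5 uses as a status metric:

# A sketch of a master risk list: risks are added as they are identified,
# updated as they are mitigated, and summarized for reporting.
from dataclasses import dataclass, field

@dataclass
class Risk:
    risk_id: str
    description: str
    severity: int          # e.g., likelihood x impact from stage 3
    status: str = "open"   # open -> mitigating -> validated

@dataclass
class RiskRegister:
    risks: dict = field(default_factory=dict)

    def add(self, risk):
        self.risks[risk.risk_id] = risk

    def update_status(self, risk_id, status):
        self.risks[risk_id].status = status

    def open_risks_remaining(self):  # a typical stage 5 status metric
        return sum(1 for r in self.risks.values() if r.status != "validated")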
4.4 The RMF Is a Multilevel Loop
The RMF shown in Figure 4-1 has an obvious loop. This loop is meant to graphically represent
the idea that risk management is a continuous process. That is, identifying risks only once during
a software project is insufficient. The idea of "crossing off a particular stage" once it has been
executed and never doing those activities again is incorrect. Though the five stages are shown in
a particular serial order in Figure 4-1, they may need to be applied over and over again
throughout a project, and their particular ordering may be interleaved in many different ways.
There are two main reasons for this complication. First, risks can crop up at any time during
the software lifecycle. One natural way to apply a cycle of the loop is during each particular
software lifecycle phase. For example, software risks should be identified, ranked, and mitigated
(one loop) during requirements and again during design (another loop). Second, risks can crop up
between stages, regardless of where in the process a project finds itself.
In addition to the issue of continuous looping is a further complication regarding level of
application. Put simply, the RMF is fractal. In other words, the entire process can be applied at
several different levels. The primary level is the project level. Each stage of the loop clearly must
have some representation during a complete project in order for risk management to be effective.
Another level is the software lifecycle phase level. The loop will most likely have a
representation at the requirements phase, the design phase, the architecture phase, the test
planning phase, and so on. A third level is the artifact level. The loop will have a representation
during both requirements analysis and use case analysis, for example. Fortunately, a generic
description of the validation loop as a serial looping process is sufficient to capture critical
aspects at all of these levels at once.
Understanding that the risk management process is by nature cumulative, and at times
arbitrary and difficult to predict (depending on project circumstances), is an important insight.
Given this insight, we acknowledge that the practice of specific RMF stages, tasks, and methods
(as described serially here, for pedagogical reasons) may occur independently of one another, in
parallel, repeatedly, and unsystematically.