CODE REVIEW GUIDE 2.0

RELEASE
The icons below represent what other versions are available in print for this book title: Alpha, Beta, Release.
ALPHA: “Alpha Quality” book content is a working draft. Content is very rough and in development until the next level of publishing.

YOU ARE FREE:
To Share - to copy, distribute and transmit the work
To Remix - to adapt the work

Share Alike
If you alter, transform, or build upon this work, you may distribute the resulting work only under the same, similar or a compatible license.
The Open Web Application Security Project (OWASP) is a worldwide free and open community focused on improving the security of application software. Our mission is to make application security “visible”, so that people and organizations can make informed decisions about application security risks. Everyone is free to participate in OWASP and all of our materials are available under a free and open software license. The OWASP Foundation is a 501c3 not-for-profit charitable organization that ensures the ongoing availability and support for our work.
1 Introduction 3

2 Secure Code Review 9
  Methodology 20

3 Technical Reference For Secure Code Review
  A1 Injection 43
  A2 Broken Authentication And Session Management 58
  A3 Cross-Site Scripting (XSS) 70
  A4 Insecure Direct Object Reference 77
  A5 Security Misconfiguration 82
  A6 Sensitive Data Exposure 117
  A7 Missing Function Level Access Control 133
  A8 Cross-Site Request Forgery (CSRF) 139
  A9 Using Components With Known Vulnerabilities 146
  A10 Unvalidated Redirects And Forwards 149
  HTML5 154
  Same Origin Policy 158
  Reviewing Logging Code 160
  Error Handling 163
  Reviewing Security Alerts 175
  Review For Active Defence 178
  Race Conditions 181
  Buffer Overruns 183
  Client Side JavaScript 188

Appendix
  Code Review Do's And Don'ts 192
  Code Review Checklist 196
  Threat Modeling Example 200
  Code Crawling 206
1 FOREWORD
By Eoin Keary,
Long Serving OWASP Global Board Member
The OWASP Code Review guide was originally born from the OWASP Testing Guide. Initially code review was covered in the Testing Guide, as it seemed like a good idea at the time. However, the topic of security code review is too big and evolved into its own stand-alone guide.

I started the Code Review Project in 2006. This current edition was started in April 2013 via the OWASP Project Reboot initiative and a grant from the United States Department of Homeland Security.
We have seen a disturbing rise in threats and attacks on community institutions through application vulnerabilities. Only by joining forces, and with the unfettered sharing of information, can we help turn back the tide of these threats. The world now runs on software, and that software needs to be trustworthy. Our deepest appreciation and thanks to DHS for helping, and for sharing in this goal.
FEEDBACK
If you have any feedback for the OWASP Code Review team, and/or find any mistakes or
improvements in this Code Review Guide, please contact us at:
[email protected]

VERSION 2.0, 2017
Content Contributors:
Larry Conklin
Gary Robinson
Johanna Curiel
Eoin Keary
Islam Azeddine Mennouchi
Abbas Naderi
Carlos Pantelides
Michael Hidalgo

Reviewers:
David Li
Lawrence J Timmins
Kwok Cheng
Ken Prole
David D'Amico
Robert Ferris
Lenny Halseth
Kenneth F. Belva
Alison Shubert
Fernando Galves
Sytze van Koningsveld
Carolyn Cohen
Helen Gao
Jan Masztal
Welcome to the second edition of the OWASP Code Review Guide Project. The second edition brings the successful OWASP Code Review Guide up to date with current threats and countermeasures. This version also includes new content reflecting the OWASP communities' experiences of secure code review best practices.

The contents and the structure of the book have been carefully designed. Further, all the contributed chapters have been judiciously edited and integrated into a unifying framework that provides uniformity in structure and style.

This book is written to satisfy three different perspectives.

1. Management teams who wish to understand why code reviews are needed and why they are included in best practices for developing secure enterprise software for today's organizations. Senior management should thoroughly read sections one and two of this book. Management needs to consider what is involved if secure coding is going to be part of the organization's software development lifecycle.

2. Software leads who want to give meaningful feedback to peers in code review, with ample empirical artifacts for what to look for, in helping create secure enterprise software for their organizations. They should consider the following.

As a peer code reviewer, to use this book you must first decide on the type of code review you want to accomplish. Let's spend a few minutes going over each type of code review to help you decide how this book can be of assistance to you:

• API/design code reviews. Use this book to understand how architecture designs can lead to security vulnerabilities. Also, if the API is a third-party API, what security controls are in place in the code to prevent security vulnerabilities?

• Maintainability code reviews. These types of code reviews are geared more towards the organization's internal best coding practices. This book does cover code metrics, which can help the code reviewer better understand what code to look at for security vulnerabilities if a section of code is overly complex.

• Integration code reviews. Again, these types of code reviews are geared more towards the organization's internal coding policies. Is the code being integrated into the project fully vetted by IT management and approved? Many security vulnerabilities are now being introduced by using open source libraries which may bring in dependencies that are not secure.

• Testing code reviews. Agile and Test Driven Design is where the programmer creates unit tests to prove code methods work as the programmer intended. This book is not a guide for testing software, but the code reviewer may want to pay attention to unit test cases to make sure all methods have appropriate exceptions and that the code fails in a safe way. If possible, each security control in the code should have the appropriate unit test cases.

3. Secure code reviewers who want an updated guide on how secure code reviews are integrated into the organization's secure software development lifecycle. This book will also work as a reference guide for the code reviewer while code is in the review process. This book provides a complete source of information needed by the code reviewer. It should be read first as a story about code reviews and secondly as a desktop reference guide.

CONTENTS

Overview
This section introduces the reader to secure code review and the advantages it can bring to a development organization. It gives an overview of secure code review techniques and describes how code review compares to other techniques for analyzing secure code.

Methodology
The methodology section goes into more detail on how to integrate secure review techniques into a development organization's S-SDLC and how the personnel reviewing the code can ensure they have the correct context to conduct an effective review. Topics include applying risk-based intelligence to security code reviews, using threat modelling to understand the application being reviewed, and understanding how external business drivers can affect the need for secure code review.

Technical Reference for Secure Code Review
This section can be used to learn the important aspects of the various controls, and as an on-the-job reference when conducting secure code reviews. We start with the OWASP Top 10 issues, describing technical aspects to consider for each of these issues. We then move on to other common application security issues not specific to the OWASP Top 10.
2 SECURE CODE REVIEW

Secure code review is probably the single-most effective technique for identifying security bugs early in the system development lifecycle. When used together with automated and manual penetration testing, code review can significantly increase the cost effectiveness of an application security verification effort.
This guide does not prescribe a process for performing a security code review. Rather, it provides guidance on
how the effort should be structured and executed. The guide also focuses on the mechanics of reviewing code
for certain vulnerabilities.
Manual secure code review provides insight into the “real risk” associated with insecure code. This contextual, white-box approach is the single most important value. A human reviewer can understand the relevance of a bug or vulnerability in code. Context requires human understanding of what is being assessed. With appropriate context we can make a serious risk estimate that accounts for both the likelihood of attack and the business impact of a breach. Correct categorization of vulnerabilities helps with prioritizing remediation and fixing the right things as opposed to wasting time fixing everything.

These problems have become so important in recent years because we continue to increase connectivity and add technologies and protocols at an extremely fast rate. The ability to invent technology has seriously outstripped the ability to secure it. Many of the technologies in use today simply have not received enough (or any) security scrutiny.

There are many reasons why businesses are not spending the appropriate amount of time on security. Ultimately, these reasons stem from an underlying problem in the software market. Because software is essentially a black box, it is extremely difficult for a customer to tell the difference between good code and insecure code. Without this visibility vendors are not encouraged to spend extra effort to produce secure code. Nevertheless, information security experts frequently get pushback when they advocate for security code review, with the following (unjustified) excuses for not putting more effort into security:
“We have a firewall that protects our applications”

“We trust our employees not to attack our applications”

Over the last 10 years, the team involved with the OWASP Code Review Project has performed thousands of application reviews, and found that every non-trivial application has had security vulnerabilities. If code has not been reviewed for security holes, the likelihood that the application has problems is virtually 100%.

Still, there are many organizations that choose not to know about the security of their code. To them, consider Rumsfeld's cryptic explanation of what we actually know:

“...we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns -- the ones we don't know we don't know.” - Donald Rumsfeld

If informed decisions are being made based on a measurement of risk in the enterprise, then that approach will be fully supported. However, if risks are not being understood, the company is not being duly diligent, and is being irresponsible both to shareholders and customers.

5.2 What is Secure Code Review?
Code review aims to identify security flaws in the application related to its features and design, along with the exact root causes. With the increasing complexity of applications and the advent of new technologies, the traditional way of testing may fail to detect all the security flaws present in the applications. One must understand the code of the application, external components, and configurations to have a better chance of finding the flaws. Such a deep dive into the application code also helps in determining the exact mitigation techniques that can be used to avert the security flaws.

It is the process of auditing the source code of an application to verify that the proper security and logical controls are present, that they work as intended, and that they have been invoked in the right places. Code review is a way of helping ensure that the application has been developed so as to be “self-defending” in its given environment.

Secure code review allows a company to assure application developers are following secure development techniques. A general rule of thumb is that a penetration test should not discover any additional application vulnerabilities relating to the developed code after the application has undergone a proper security code review; at the very least, few issues should be discovered.

All security code reviews are a combination of human effort and technology support. At one end of the spectrum is an inexperienced person with a text editor. At the other end of the scale is an expert security team with advanced static analysis (SAST) tools. Unfortunately, it takes a fairly serious level of expertise to use the current application security tools effectively. They also don't understand dynamic data flow or business logic. SAST tools are great for coverage and setting a minimum baseline.

Tools can be used to perform this task but they always need human verification. They do not understand context, which is the keystone of security code review. Tools are good at assessing large amounts of code and pointing out possible issues, but a person needs to verify every result to determine if it is a real issue, if it is actually exploitable, and calculate the risk to the enterprise. Human reviewers are also necessary to fill in the significant blind spots which automated tools simply cannot check.

5.3 What is the difference between Code Review and Secure Code Review?
The Capability Maturity Model (CMM) is a widely recognized process model for measuring the development processes of a software development organization. It ranges from 'level 1', where development processes are ad hoc, unstable and not repeatable, to 'level 5', where the development processes are well organized, documented and continuously improving. It is assumed that a company's development processes would start out at level 1 when starting out (a.k.a. start-up mode) and become more defined, repeatable and generally professional as the organization matures and improves. Introducing the ability to perform code reviews (note this is not dealing with secure code review yet) comes in when an organization has reached level 2 (Repeatable) or level 3 (Defined).

Secure Code Review is an enhancement to the standard code review practice where the structure of the review process places security considerations, such as company security standards, at the forefront of the decision-making. Many of these decisions will be explained in this document; they attempt to ensure that the review process can adequately cover security risks in the code base, for example ensuring high risk code is reviewed in more depth, ensuring reviewers have the correct security context when reviewing the code, and ensuring reviewers have the necessary skills and secure coding knowledge to effectively evaluate the code.

5.4 Determining the Scale of a Secure Source Code Review
The level of secure source code review will vary depending on the business or regulatory needs of the software, the size of the software development organization writing the applications and the skills of the personnel. Similar to other aspects of software development such as performance, scalability and maintainability, security is a measure of maturity in an application. Security is one of the non-functional requirements that should be built into every serious application or tool that is used for commercial or governmental purposes.

If the development environment consists of one person programming as a hobby and writing a program to track their weekly shopping in Visual Basic (CMM level 1), it is unlikely that that programmer will use all of the advice within this document to perform extensive levels of secure code review. On the other extreme, a large organization with thousands of developers writing hundreds of applications will (if they wish to be successful) take security very seriously, just like they would take performance and scalability seriously.

Not every development organization has the necessity, or resources, to follow and implement all of the topics in this document, but all organizations should be able to begin to write their development processes in a way that can accommodate the processes and technical advice most important to them. Those processes should then be extensible to accommodate more of the secure code review considerations as the organization develops and matures.

In a start-up consisting of 3 people in a darkened room, there will not be a 'code review team' to send the code to; instead it'll be the bloke in the corner who read a secure coding book once and now uses it to prop up his monitor.

In a medium sized company there might be 400 developers, some with security as an interest or specialty, however the organization's processes might give the same amount of time to review a 3 line CSS change as it gives to a redesign of the flagship product's authentication code. Here the challenge is to increase the workforce's secure coding knowledge (in general) and improve the processes through things like threat modelling and secure code review.

For some larger companies with many thousands of developers, the need for security in the S-SDLC is at its greatest, but process efficiency has an impact on the bottom line. Take an example of a large company with 5,000 developers. If a change is introduced to the process that results in each developer taking an extra 15 minutes a week to perform a task, suddenly that's 1,250 hours extra each week for the company as a whole. This results in a need for an extra 30 full time developers just to stay on track (assuming a 40 hour week). The challenge here is to ensure the security changes to the lifecycle are efficient and do not impede the developers from performing their task.
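As a quick back-of-the-envelope check of those figures, the minimal snippet below (illustrative only; the 5,000 developers, 15 minutes and 40-hour week are simply the assumptions stated above) reproduces the arithmetic:

public class ReviewOverheadEstimate {
    public static void main(String[] args) {
        int developers = 5000;
        int extraMinutesPerDeveloperPerWeek = 15;

        // 5,000 developers x 15 minutes = 75,000 minutes, i.e. 1,250 hours per week
        double extraHoursPerWeek = developers * extraMinutesPerDeveloperPerWeek / 60.0;

        // 1,250 hours spread across a 40-hour week is roughly 31 full-time developers
        double equivalentFullTimeDevelopers = extraHoursPerWeek / 40.0;

        System.out.printf("Extra effort: %.0f hours per week (~%.0f full-time developers)%n",
                extraHoursPerWeek, equivalentFullTimeDevelopers);
    }
}

Rounding down gives the “extra 30 full time developers” quoted above.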
Skilling a Workforce for Secure Code Review

There seems to be a catch-22 in the following sentiment: since many code developers are not aware of or skilled in security, a company should implement peer secure code reviews amongst developers. How does a workforce introduce the security skills needed to implement a secure code review methodology? Many security maturity models (e.g. BSIMM or OpenSAMM) discuss the concept of a core security team, who are skilled developers and security subject matter experts (SMEs). In the early days of a company rolling out a secure code review process, the security SMEs will be central in the higher risk reviews, using their experience and knowledge to point out aspects of the code that could introduce risk.

As well as the core security team, a further group of developers with an interest in security can act as team local security SMEs, taking part in many secure code reviews. These satellites (as BSIMM calls them) will be guided by the core security team on technical issues, and will help encourage secure coding.

Over time, an organization builds security knowledge within its core and satellite teams, which in turn spreads the security knowledge across all developers, since most code reviews will have a security SME taking part.

The 'on-the-job' training this gives to all developers is very important. Whereas an organization can send their developers on training courses (classroom or CBT) which will introduce them to common security topics and create awareness, no training course can be 100% relevant to a developer's job. In the secure code review process, each developer who submits their code will receive security related feedback that is entirely relevant to them, since the review is of the code they produced.

It must be remembered though, no matter what size the organization, the reason to perform secure code review is to catch more bugs and catch them earlier in the S-SDLC. It is quicker to conduct a secure code review and find bugs that way, compared to finding the bugs in testing or in production. For the 5,000-person organization, how long will it take to find a bug in testing, investigate, re-code, re-review, re-release and re-test? What if the code goes to production, where project management and support will get involved in tracking the issue and communicating with customers? Maybe 15 minutes a week will seem like a bargain.

5.5 We Can't Hack Ourselves Secure
Penetration testing is generally a black-box, point-in-time test and should be repeated on each release (or build) of the source code to find any regressions. Many continuous integration tools (e.g. Jenkins/Hudson) allow repeatable tests, including automated penetration tests, to be run against a built and installed version of a product.

As source code changes, the value of the findings of an unmaintained penetration test degrades with time. There are also privacy, compliance, stability and availability concerns which may not be covered by penetration testing, but can be covered in code reviews. Data information leakage in a cloud environment, for example, may not be discovered, or allowed, via a penetration test. Therefore penetration testing should be seen as an important tool in the arsenal, but alone it will not ensure product software is secure. The main methods for detecting vulnerabilities include:

• Automated Penetration Testing (black/grey box), through penetration testing tools' automatic scans, where the tool is installed on the network with the web site being tested and runs a set of pre-defined tests against the web site URLs.

• Manual Penetration Testing, again using tools, but with the expertise of a penetration tester performing more complicated tests.

• Secure Code Review with a security subject matter expert.

It should be noted that no one method will be able to identify all vulnerabilities that a software project might encounter, however a defense-in-depth approach will reduce the risk of unknown issues being included in production software.

During a survey at AppSec USA 2015 the respondents rated which security method was the most effective in finding:
1) General security vulnerabilities
2) Privacy issues
3) Business logic bugs
4) Compliance issues (such as HIPAA, PCI, etc.)
5) Availability issues

The results are shown in figure 1.

Figure 1: Survey relating detection methods to general vulnerability types (respondents rated each detection method against the categories Vulnerabilities, Privacy, Business Logic, Compliance (HIPAA) and Availability)
Figure 2: Survey relating detection methods to OWASP Top 10 vulnerability types (the detection methods compared are Source Code Scanning Tool, Automated Scan, Manual Pen Test and Manual Code Review, rated against A1 through A10)

These surveys show that manual code review should be a component of a company's secure lifecycle, as in many cases it is as good, or better, than other methods of detecting security issues.

5.6 Coupling Source Code Review and Penetration Testing
The term “360 review” refers to an approach in which the results of a source code review are used to plan and execute a penetration test, and the results of the penetration test are, in turn, used to inform additional source code review.

Figure 3: Code Review and Penetration Testing Interactions

Knowing the internal code structure from the code review, and using that knowledge to form test cases and abuse cases, is known as white box testing (also called clear box and glass box testing). This approach can lead to a more productive penetration test, since testing can be focused on suspected or even known vulnerabilities. Using knowledge of the specific frameworks, libraries and languages used in the web application, the penetration test can concentrate on weaknesses known to exist in those frameworks, libraries and languages.

A white box penetration test can also be used to establish the actual risk posed by a vulnerability discovered through code review. A vulnerability found during code review may turn out not to be exploitable during a penetration test due to the code reviewer(s) not considering a protective measure (input validation, for instance). While the vulnerability in this case is real, the actual risk may be lower due to the lack of exposure. However, there is still an advantage to adding the penetration test in case the protective measure is changed in the future and therefore exposes the vulnerability.

While vulnerabilities exploited during a white box penetration test (based on secure code review) are certainly real, the actual risk of these vulnerabilities should be carefully analyzed. It is unrealistic that an attacker would be given access to the target web application's source code and advice from its developers. Thus, the risk that an outside attacker could exploit the vulnerabilities found by the white box penetration tester is probably lower. However, if the web application organization is concerned with the risk of attackers with inside knowledge (former employees or collusion with current employees or contractors), the real-world risk may be just as high.

The results of the penetration test can then be used to target additional areas for code review. Besides addressing the particular vulnerability exploited in the test, it is a good practice to look for additional places where that same class of vulnerability is present, even if not explicitly exploited in the test. For instance, if output encoding is not used in one area of the application and the penetration test exploited that, it is quite possible that output encoding is also not used elsewhere in the application.

5.7 Implicit Advantages of Code Review to Development Practices
Integrating code review into a company's development processes can have many benefits, which will depend upon the processes and tools used to perform code reviews, how well that data is backed up, and how those tools are used. The days of bringing developers into a room and displaying code on a projector, whilst recording the review results on a printed copy, are long gone; today many tools exist to make code review more efficient and to track the review records and decisions. When the code review process is structured correctly, the act of reviewing code can be efficient and provide educational, auditable and historical benefits to any organization. This section provides a list of benefits that a code review procedure can add to a development organization.
Capturing those review discussions in a review tool automatically and storing them for future reference will provide the development organization with a history of the changes on the module, which can be queried at a later time by new developers. These discussions can also contain links to any architectural/functional/design/test specifications, bug or enhancement numbers.

Verification that the change has been tested
When a developer is about to submit code into the repository, how does the company know they have sufficiently tested it? Adding a description of the tests they have run (manually or automated) against the changed code can give reviewers (and management) confidence that the change will work and not cause any regressions. Also, by declaring the tests the writer has run against their change, the author is allowing reviewers to review the tests and suggest further testing that may have been missed by the author.

In a development scenario where automated unit or component testing exists, the coding guidelines can require that the developer include those unit/component tests in the code review. This again allows reviewers within this environment to ensure the correct unit/component tests are going to be included in the environment, keeping the quality of the continuous integration cycles.

Coding education for junior developers
After an employee learns the basics of a language and reads a few best practice books, how can they get good on-the-job skills to learn more? Besides buddy coding (which rarely happens and is never cost effective) and training sessions (brown bag sessions on coding, tech talks, etc.), the design and code decisions discussed during a code review can be a learning experience for junior developers. Many experienced developers admit to this being a two way street, where new developers can come in with new ideas or tricks that the older developers can learn from. Altogether this cross pollination of experience and ideas can only be beneficial to a development organization.

Familiarization with code base
When a new feature is developed, it is often integrated with the main code base, and here code review can be a conduit for the wider team to learn about the new feature and how its code will impact the product. This helps prevent functional duplication where separate teams end up coding the same small piece of functionality.

This also applies for development environments with siloed teams. Here the code review author can reach out to other teams to gain their insight, and allow those other teams to review their modules, and everyone then learns a bit more about the company's code base.

Pre-warning of integration clashes
In a busy code base there will be times (especially on core code modules) where multiple developers can write code affecting the same module. Many people have had the experience of cutting the code and running the tests, only to discover upon submission that some other change has modified the functionality, requiring the author to recode and retest some aspects of their change. Spreading the word on upcoming changes via code reviews gives a greater chance of a developer learning that a change is about to impact their upcoming commit, and development timelines, etc., can be updated accordingly.

Secure Coding Guidelines Touch Point
Many development environments have coding guidelines which new code must adhere to. Coding guidelines can take many forms. It's worth pointing out that security guidelines can be a particularly relevant touch point within a code review, as unfortunately the secure coding issues are understood only by a subset of the development team. Therefore it can be useful to include people with various technical expertise in the code reviews, i.e. someone from the security team (or that person in the corner who knows all the security stuff) can be invited as a technical subject expert to the review to check the code from their particular angle. This is where the OWASP Top 10 guidelines could be enforced.

5.8 Technical Aspects of Secure Code Review
Security code reviews are very specific to the application being reviewed. They may highlight some flaws that are new or specific to the code implementation of the application, like insecure termination of execution flow, synchronization errors, etc. These flaws can only be uncovered when we understand the application code flow and its logic. Thus, security code review is not just about scanning the code for a set of known insecure code patterns; it also involves understanding the code implementation of the application and enumerating the flaws specific to it.

The application being reviewed might have been designed with some security controls in place, for example a centralized blacklist, input validation, etc. These security controls must be studied carefully to identify if they are fool-proof. According to the implementation of the control, the nature of attack or any specific attack vector that can be used to bypass it must be analyzed. Enumerating the weaknesses in the existing security controls is another important aspect of security code reviews.

There are various reasons why security flaws manifest in the application, like a lack of input validation or parameter mishandling. In the process of a code review the exact root causes of flaws are exposed and the complete data flow is traced. The term 'source to sink analysis' means to determine all possible inputs to the application (source) and how they are being processed by it (sink). A sink could be an insecure code pattern like a dynamic SQL query, a log writer, or a response to a client device.

Consider a scenario where the source is a user input. It flows through the different classes/components of the application and finally falls into a concatenated SQL query (a sink), and there is no proper validation being applied to it in the path. In this case the application will be vulnerable to SQL injection attack, as identified by the source to sink analysis. Such an analysis helps in understanding which vulnerable inputs can lead to the possibility of an exploit in the application.

Once a flaw is identified, the reviewer must enumerate all the possible instances present in the application. This would not be a code review initiated by a code change; this would be a code scan initiated by management, based on a flaw being discovered and resources being committed to find out if that flaw exists in other parts of the product. For example, an application can be vulnerable to XSS because of the use of un-validated inputs in insecure display methods like scriptlets, the 'response.write' method, etc. in several places.
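To make the source-to-sink idea concrete, the sketch below is hypothetical JDBC code written for this guide (the class, method and table names are invented for illustration, and resource handling is omitted for brevity). It shows a user-supplied value reaching a concatenated SQL query, followed by the parameterized form a reviewer would expect to see instead. The same tracing applies to other sinks: writing the same un-encoded value into an HTTP response would be the XSS equivalent of the concatenated query.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class AccountDao {

    // SOURCE: 'customerId' arrives from an HTTP request parameter.
    // SINK: the value is concatenated into a dynamic SQL query with no
    // validation on the path, so this method is vulnerable to SQL injection.
    public ResultSet findAccountVulnerable(Connection con, String customerId) throws SQLException {
        Statement stmt = con.createStatement();
        return stmt.executeQuery(
                "SELECT id, balance FROM accounts WHERE customer_id = '" + customerId + "'");
    }

    // What the reviewer expects to see instead: the untrusted input is bound
    // as a parameter, so it can never change the structure of the query.
    public ResultSet findAccountSafer(Connection con, String customerId) throws SQLException {
        PreparedStatement stmt = con.prepareStatement(
                "SELECT id, balance FROM accounts WHERE customer_id = ?");
        stmt.setString(1, customerId);
        return stmt.executeQuery();
    }
}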
5.9 Code Reviews and Regulatory Compliance
Many organizations with responsibility for safeguarding the integrity, confidentiality and availability of their software and data need to meet regulatory compliance. This compliance is usually mandatory rather than a voluntary step taken by the organization.

Compliance regulations include:
• PCI (Payment Card Industry) standards
• Central bank regulations
To execute proper code reviews that meet compliance rules it is imperative to use an approved methodology. Compliance standards such as PCI contain requirements that apply directly to code review, specifically requirement 6: “Develop and maintain secure systems”. PCI-DSS 3.0, which has been available since November 2013, exposes a series of requirements which apply to the development of software and to identifying vulnerabilities in code. The Payment Card Industry Data Security Standard (PCI-DSS) became a mandatory compliance step for companies processing credit card payments in June 2005. Performing code reviews on custom code has been a requirement since the first version of the standard.

The PCI standard contains several points relating to secure application development, but this guide will focus solely on the points which mandate code reviews. All of the points relating to code reviews can be found in requirement 6, “Develop and maintain secure systems and applications”:

• Code changes are reviewed by individuals other than the originating code author, and by individuals knowledgeable about code review techniques and secure coding practices.

• Code review results are reviewed and approved by management prior to release.

The PCI Council expanded option one to include internal resources performing code reviews. This added weight to an internal code review and should provide an additional reason to ensure this process is performed correctly.

The Payment Application Data Security Standard (PA-DSS) is a set of rules and requirements similar to PCI-DSS. However, PA-DSS applies especially to software vendors and others who develop payment applications that store, process, or transmit cardholder data as part of authorization or settlement, where these payment applications are sold, distributed, or licensed to third parties. Its requirements include:

5.1.4 Review of payment application code prior to release to customers after any significant change, to identify any potential coding vulnerability.

Note: This requirement for code reviews applies to all payment application components (both internal and public-facing web applications), as part of the system development life cycle. Code reviews can be conducted by knowledgeable internal personnel or third parties.
METHODOLOGY

Code review is a systematic examination of computer source code; reviews are done in various forms and can be accomplished at various stages of each organization's S-SDLC. This book does not attempt to tell each organization how to implement code reviews, but this section does cover, in generic terms, the methodology of doing code reviews, from informal walkthroughs and formal inspections to tool-assisted code reviews.

6.1 Factors to Consider when Developing a Code Review Process
When planning to execute a security code review, there are multiple factors to consider, since every code review is unique to its context. In addition to the elements discussed in this section, one must consider any technical or business related factors (business decisions such as deadlines and resources) that impact the analysis, as these factors may ultimately decide the course of the code review and the most effective way to execute it.

Risks
It is impossible to secure everything 100%, therefore it is essential to prioritize what features and components must be securely reviewed with a risk based approach. While this project highlights some of the vital areas of design security, and peer programmers should review all code being submitted to a repository, not all code will receive the attention and scrutiny of a secure code review.

Purpose & Context
Computer programs have different purposes and consequently the grade of security will vary depending on the functionality being implemented. A payment web application will have higher security standards than a promotional website. Stay reminded of what the business wants to protect. In the case of a payment application, data such as credit cards will have the highest priority, however in the case of a promotional website one of the most important things to protect would be the connection credentials to the web servers. This is another way to place context into a risk-based approach. Persons conducting the security review should be aware of these priorities.

Lines of Code
An indicator of the amount of work is the number of lines of code that must be reviewed. IDEs (Integrated Development Environments) such as Visual Studio or Eclipse contain features which allow the amount of lines of code to be calculated, or in Unix/Linux there are simple tools like 'wc' that can count the lines. Programs written in object-oriented languages are divided into classes and each class is equivalent to a page of code. Generally line numbers help pinpoint the exact location of the code that must be corrected and are very useful when reviewing corrections done by a developer (such as the history in a code repository). The more lines of code a program contains, the greater the chances that errors are present in the code.

Programming language
Programs written in type safe languages (such as C# or Java) are less vulnerable to certain security bugs, such as buffer overflows, than others like C and C++. When executing a code review, the kind of language will determine the types of expected bugs. Typically software houses tend towards a few languages that their programmers are experienced in, however when a decision is made to create new code in a language new to the developers, management must be aware of the increased risk of securely reviewing that code due to the lack of in-house experience. Throughout this guide, sections explain the most common issues surrounding the specific programming language of the code to be reviewed; use this as a reference to spot specific security issues in the code.

Resources, Time & Deadlines
As ever, this is a fundamental factor. A proper code review for a complex program will take longer and will need higher analysis skills than a simple one. The risks involved if resources are not properly provided are higher. Make sure that this is clearly assessed when executing a review.

6.2 Integrating Code Reviews in the S-SDLC
Code reviews exist in every formal Secure Software Development Lifecycle (S-SDLC), but code reviews also vary widely in their level of formality. To confuse the subject more, code reviews vary in purpose and in relation to what the code reviewer is looking for, be it security, compliance, programming style, etc. Throughout the S-SDLC (XP, Agile, RAD, BSIMM, CMMI, Microsoft ALM) there are points where an application security SME should be involved. The idea of integrating secure code reviews into an S-SDLC may sound daunting, as it adds another layer of complexity, or additional cost and time, to an already over budget and time constrained project. However it is proven to be cost effective and provides an additional level of security that static analyzers cannot provide.

In some industries the drive for secure enhancements to a company's S-SDLC may not be driven purely by the desire to produce better code; these industries have regulations and laws that demand a level of due care when writing software (e.g. the governmental and financial industries), and the fines levelled at a company who has not attempted to secure their S-SDLC will be far greater than the costs of adding security into the development lifecycle.

When integrating secure code reviews into the S-SDLC the organization should create standards and policies that the secure code reviewer should adhere to. This will give the task the right importance, so it is not just looked at as a project task that needs to be checked off. Project time also needs to be assigned to the task so there is enough time to complete it (and for any remedial tasks that come out of the secure code review). Standards also allow management and security experts (e.g. CISOs, security architects) to direct employees on what secure coding is to be adhered to, and allow the employees to refer to the standard when review arbitration is necessary.

Code Review Reports

A standard report template will provide enough information to enable the code reviewer to classify and prioritize the software vulnerabilities based on the application's threat model. This report does not need to be pages in length; it can be document based or incorporated into many automated code review tools. A report should provide the following information:

• Date of review.

• Application name, code modules reviewed.

• Developers and code reviewer names.

• Task or feature name (TFS, GIT, Subversion, trouble ticket, etc.).
• A brief sentence or two to classify and prioritize any software vulnerability found, and to note what, if any, remedial tasks need to be accomplished or follow up is needed.

• Link to documents related to the task/feature, including requirements, design, testing and threat modeling documents.

• Code Review checklist if used, or link to the organization's Code Review Checklist (see Appendix A).

• Testing the developer has carried out on the code. Preferably the unit or automated tests themselves can be part of the review submission.

• Whether any tools such as FxCop, BinScope Binary Analyzer, etc. were used prior to code review.

Who Should Perform Secure Code Reviews

Some organizations assume secure code review can be a job for a security or risk-analysis team member. However all developers need to understand the exposure points of their applications and what threats exist for their applications.

Many companies have security teams that do not have members with coding backgrounds, which can make interactions with development teams challenging. Because of this, development teams are usually skeptical of security input and guidance. Security teams are usually willing to slow things down to ensure confidentiality and integrity controls are in place, while developers are faced with pressure from the business units they support to create and update code as quickly as possible. Unfortunately, the more critical the application is to operational or business needs, the more pressure there is to deploy the code to production.

It is best to weave secure code reviews into the SDLC processes so that development organizations do not see security as a hindrance, but as an assistance. As mentioned previously, spreading secure coding SMEs throughout an organization (satellites in BSIMM terminology) allows the secure code review tasks to scale and reach more development teams. As the process grows, more of the developers gain awareness of secure coding issues (as they have reviews rejected on secure coding grounds) and the frequency of secure coding issues in code reviews should drop.

6.3 When to Code Review
Once an organization decides to include code reviews as part of their internal development process, the next big question is to determine at what stages of the SDLC the code will be reviewed. This section talks about three possible ways to include code reviews; there are three stages in the SDLC at which code can be reviewed:

When code is about to be checked in (pre-commit)
The development organization can state in their process that all code has to be reviewed before the code can be submitted to the source code repository. This has the disadvantage of slowing the check-in process down, as the review can take time, however it has many advantages in that below standard code is never placed in the code line, and management can be confident that (if processes are being followed) the submitted code is at the quality that has been stipulated.

For example, processes may state that code to be submitted must include links to requirements and design documentation and the necessary unit and automated tests. This way the reviewers will have context on the exact code modification being done (due to the documentation) and they will know how the developer has tested the code (due to the tests). If the peer reviewers do not think the documentation is complete, or the tests are extensive enough, they can reject the review, not because of the code itself, but because the necessary docs or tests are not complete. In an environment using CI with automated tests running nightly, the development team as a whole will know the next day (following check-in) if the submitted code was of enough quality. Also, management knows that once a bug or feature is checked in the developer has finished their task; there's no "I'll finish those tests up next week" scenario, which adds risk to the development task.

When code has just been checked into a code base (post-commit)
Here the developer submits their code change, and then uses the code repository change-lists to send the code diff for review. This has the advantage of being faster for the developer as there's no review gate to pass before they check in their code. The disadvantage is that, in practice, this method can lead to a lesser quality of code. A developer will be less inclined to fix smaller issues once the code has been checked in, usually with a mantra of "Well the code is in now, it'll do.". There is also a risk of timing, as other developers could write other code fixes into the same module before the review is done or changes and tests have been written, meaning the developer not only has to implement the code changes from the peer or security review, but they also have to do so in a way that does not break other subsequent changes. Suddenly the developer has to re-test the subsequent fixes to ensure no regressions.

Some development organizations using the Agile methodology add a 'security sprint' into their processes. During the security sprint the code can be security reviewed, and have security specific test cases (written or automated) added.

When code audits are done
Some organizations have processes to review code at certain intervals (i.e. yearly) or when a vulnerable piece of code is suspected of being repeated throughout the code base. Here static code analyzers, or simple string searches through the code (for specific vulnerability patterns), can speed up the process. This review is not connected to the submission of a feature or bug fix; it is triggered by process considerations and is likely to involve the review of an entire application or code base rather than a review of a single submission.

Today most organizations have modified their S-SDLC process to add agile practices into it. Because of this, the organization is going to need to look at their own internal development practices to best determine where and how often secure code reviews need to happen. If the project is late and over budget then this increases the chance that a software fix could introduce a security vulnerability, since the emphasis is now on getting the project to deployment quicker. Code reviews for code in production may find software vulnerabilities, but understand that there is a race with hackers to find the bug, and the vulnerable software will remain in production while the remedial fix is being worked on.

6.4 Security Code Review for Agile and Waterfall Development
Today agile development is an umbrella term for a lot of practices that include programming, continuous integration, testing, project management, etc. There are many flavors of agile development, perhaps as many flavors as there are practitioners. Agile development is a heterogeneous reference framework where the development team can pick what practices they want to use.

Agile has some practices that could affect how and when code is reviewed, for example agile tries to keep code review and testing as near as possible to the development phase. It is a common practice to define short development cycles (a.k.a. Iterations or Sprints). At the end of each cycle, all the code should be production quality code. It can be incomplete, but it must add some value. That affects the review process, as reviewing should be continuous. From the point of view of secure code review, it shouldn't make a difference whether the development organization uses agile or waterfall development practices.
Code review is aligned to the code submitted, not the order of feature development vs testing, or the time patterns assigned to the coding task. In many organizations the line between waterfall and agile is becoming blurred, with traditional waterfall departments introducing continuous integration (CI) aspects from agile, including nightly builds, automated testing, test driven development, etc.

The intensity of the code review varies based on the perceived risk that the change presents.

In the end, the scale of the code review comes down to the management of resources (skilled persons, company time, machines, etc.). It would not be scalable to bring in multiple security experts for every code change occurring on a product; the resources of those persons or those teams would not be large enough to handle every change. Therefore companies can make a call on which changes are important and need to be closely scrutinized, and which ones can be allowed through with minimal inspection. This will allow management to better size the development cycle; if a change is going to be done in an area which is high risk, management can know to set aside sufficient time for code review and ensure persons with relevant skills will be available. The process of deciding which changes need which level of code review is based on the risk level of the module the change is within.

If the review intensity of code changes is based on the risk level of the module being changed, who should decide the level of risk? Ultimately management is responsible for the output of a company, and thus they are responsible for the risk associated with products sold by the company. Therefore it is up to management (or persons delegated by management) to create a reproducible measure or framework for deciding the risk associated with a code change.

Decisions on the risk of a module or piece of code should be based on solid cost benefit analysis, and it would be irresponsible to decide all modules are high risk. Therefore management should meet with persons who have an understanding of the code base and the security issues faced by the products, and create a measure of risk for the various elements of code. Code could be split up into modules, directories, products, etc., each with a risk level associated with it.

Various methods exist in the realm of risk analysis to assign risk to entities, and many books have been dedicated to this type of discussion. The three main techniques for establishing risk are outlined in table 1 below.

Risk is the chance of something bad happening and the damage that can be caused if it occurs. The criteria for deciding the risk profile of different code modules will be up to the management team responsible for delivering the changes; examples are provided in table 2.

Table 2: Common Criteria For Establishing The Risk Profile Of A Code Module
Value of loss: How much could be lost if the module has a vulnerability introduced? Does the module contain some critical password hashing mechanism, or a simple change to an HTML border on some internal test tool?
Regulatory controls: If a piece of code implements business logic associated with a standard that must be complied with, then these modules can be considered high risk as the penalties for non-conformity can be high.

When levels of risk have been associated with products and modules, then policies can be created determining what level of code review must be conducted. It could be that code changes in a level one risk module must be reviewed by 3 persons including a Security Architect, whereas changes in a level 4 risk module only need a quick one person peer review.

Other options (or criteria) for riskier modules can include demands on automated testing or static analysis, e.g. code changes in high risk code must include 80% code coverage on static analysis tools, and sufficient automated tests to ensure no regressions occur. These criteria can be demanded and checked as part of the code review to ensure they are capable of testing the changed code.

Some companies logically split their code into differing repositories, with more sensitive code appearing in a repository with a limited subset of developers having access. If the code is split in this fashion, then it must be remembered that only developers with access to the riskier code should be able to conduct reviews of that code.

Risk analysis could also be used during the code review to decide how to react to a code change that introduces risk into the product, as in table 3. In a typical risk analysis process, the team needs to decide whether to accept, transfer, avoid or reduce the risks. When it comes to code reviews it is not possible to transfer the risk, as transferring risk normally means taking out insurance to cover the cost of exposure.
and controls. A walkthrough of the actual running application is very helpful to give the reviewers a good idea about how the application is intended to work. Also a brief overview of the structure of the code base and any libraries used can help the reviewers get started.

If the information about the application cannot be gained in any other way, then the reviewers will have to spend some time doing reconnaissance and sharing information about how the application appears to work by examining the code. Preferably this information can then be documented to aid future reviews.

Code Review Checklist

Defining a generic checklist, which the development team can fill out, can give reviewers the desired context. The checklist is a good barometer for the level of security the developers have attempted or thought of. If security code review becomes a common requirement, then this checklist can be incorporated into a development procedure (e.g. document templates) so that the information is always available to code reviewers. See Appendix A for a sample code review checklist.

The checklist should cover the most critical security controls and vulnerability areas, such as:
• Data Validation
• Authentication
• Session Management
• Authorization
• Cryptography
• Error Handling
• Logging
• Security Configuration
• Network Architecture

Security code review is not simply about the code structure. It is important to remember the data; the reason that we review code is to ensure that it adequately protects the information and assets it has been entrusted with, such as money, intellectual property, trade secrets, or lives. The context of the data which the application is intended to process is very important in establishing potential risk. If the application is developed using an inbuilt/well-known design framework, the answers to most of these questions would be pre-defined. But in case it is custom, then this information will surely aid the review process, mainly in capturing the data flow and internal validations. Knowing the architecture of the application goes a long way in understanding the security threats that can be applicable to the application.

A design is a blueprint of an application; it lays a foundation for its development. It illustrates the layout of the application and identifies the different application components needed for it. It is a structure that determines the execution flow of the application. Most application designs are based on the concept of MVC. In such designs different components interact with each other in an ordered sequence to serve any user request. Design review should be an integral part of the secure software development process. Design reviews also help in implementing the security requirements in a better way.

Collect all the required information about the proposed design, including flow charts, sequence diagrams, class diagrams and requirements documents, to understand the objective of the proposed design. The design is thoroughly studied, mainly with respect to the data flow, the different application component interactions and data handling. This is achieved through manual analysis and discussions with the design or technical architect's team. The design and the architecture of the application must be understood thoroughly to analyze vulnerable areas that can lead to security breaches in the application.

After understanding the design, the next phase is to analyze the threats to the design. This involves observing the design from an attacker's perspective and uncovering the backdoors and insecure areas present in it. Table 4 below highlights some questions that can be asked of the architecture and design to aid secure code reviews.
Table 4: Example Design Questions During Secure Code Review

Data Flow:
• Are user inputs used to directly reference business logic?
• Is there potential for data binding flaws?
• Is the execution flow correct in failure cases?

Authentication and access control:
• Does the design implement access control for all resources?
• Are sessions handled correctly?
• What functionality can be accessed without authentication?

Existing security controls:
• Are there any known weaknesses in third-party security controls?
• Is the placement of the security controls correct?

Architecture:
• Are connections to external servers secure?
• Are inputs from external sources validated?

Configuration files and data stores:
• Is there any sensitive data in configuration files?
• Who has access to configuration or data files?

6.8 Static Code Analysis

Static code analysis is carried out during the implementation phase of the S-SDLC. It commonly refers to running static code analysis tools that attempt to highlight possible vulnerabilities within the 'static' (non-running) source code.

Ideally, static code analysis tools would automatically find security flaws with few false positives; that is, they would have a high degree of confidence that the bugs they find are real flaws. However, this ideal is beyond the state of the art for many types of application security flaws. Thus, such tools frequently serve as aids for an analyst to help them zero in on security-relevant portions of code so they can find flaws more efficiently, rather than as tools that find all flaws automatically.

Bugs may exist in the application due to insecure code, design or configuration. Automated analysis can be carried out on the application code to identify bugs through either of the following two options:
1. Static code scanner scripts based on a pattern search (in-house and open source).

2. Static code analyzers (commercial and open source).

Advantages and disadvantages of source code scanners are shown in tables 5 and 6.

Table 5: Advantages To Using Source Code Scanners

Reduction in manual effort: The type of patterns to be scanned for remains common across applications, and computers are better at such scans than humans. In this scenario, scanners play a big role in automating the process of searching for vulnerabilities through large codebases.

Find all the instances of the vulnerabilities: Scanners are very effective in identifying all the instances of a particular vulnerability, with their exact location. This is helpful for larger code bases where tracing flaws through all the files is difficult.

Source to sink analysis: Some analyzers can trace the code and identify vulnerabilities through source to sink analysis. They identify possible inputs to the application and trace them thoroughly throughout the code until they find them to be associated with an insecure code pattern. Such a source to sink analysis helps developers to understand the flaws better, as they get a complete root cause analysis of each flaw.

Elaborate reporting format: Scanners provide a detailed report on the observed vulnerabilities, with exact code snippets, a risk rating and a complete description of the vulnerabilities. This helps the development teams to easily understand the flaws and implement the necessary controls.

Though code scanning scripts and open source tools can be efficient at finding insecure code patterns, they often lack the capability of tracing the data flow. This gap is filled by static code analyzers, which identify the insecure code patterns by partially (or fully) compiling the code and investigating the execution branches, allowing for source to sink analysis. Static code analyzers and scanners are comprehensive options to complement the process of code review.

Table 6: Disadvantages To Using Source Code Scanners

Business logic flaws remain untouched: The flaws that are related to the application's business logic, transactions, and sensitive data remain untouched by the scanners. The security controls that need to be implemented in the application specific to its features and design are often not pointed out by the scanners. This is considered the biggest limitation of static code analyzers.

Limited scope: Static code analyzers are often designed for specific frameworks or languages, and within that scope they can search for a certain set of vulnerable patterns. Outside of this scope they fail to address the issues not covered in their search pattern repository.

Design flaws: Design flaws are not specific to the code structure, and static code analyzers focus on the code. A scanner/analyzer will not spot a design issue when looking at the code, whilst a human can often identify design issues when looking at their implementation.

False positives: Not all of the issues flagged by static code analyzers are truly issues, and thus the results from these tools need to be understood and triaged by an experienced programmer who understands secure coding. Anyone hoping that secure code checking can be automated and run at the end of the build will therefore be disappointed; a good deal of manual intervention is still required with analyzers.

Choosing a static analysis tool

Choosing a static analysis tool is a difficult task, since there are a lot of choices. The comparison charts below could help organizations decide which tool is right for them, although this list is not exhaustive.

Some of the criteria for choosing a tool are:

• Does the tool support the programming language used?
• Is there a preference between commercial or free tools? Usually the commercial tools have more features and are more reliable than the free ones, whilst their usability might differ.
• What type of analysis is being carried out? Is it security, quality, static or dynamic analysis?

The next step requires some work, since it is quite subjective. The best thing to do is to test a few tools to see if the team is satisfied with different aspects such as the user experience, the reporting of vulnerabilities, the level of false positives, the customization and the customer support. The choice should not be based on the number of features, but on the features needed and how they can be integrated into the S-SDLC. Also, before choosing the tool, the expertise of the targeted users should be clearly evaluated in order to choose an appropriate tool.

6.9 Application Threat Modeling

Threat modeling is an in-depth approach for analyzing the security of an application. It is a structured approach that enables employees to identify, quantify, and address the security risks associated with an application. Threat modeling is not an approach to reviewing code, but it complements the secure code review process by providing context and risk analysis of the application.

The inclusion of threat modeling in the S-SDLC can help to ensure that applications are being developed with security built in from the very beginning. This, combined with the documentation produced as part of the threat modeling process, can give the reviewer a greater understanding of the system, and allows the reviewer to see where the entry points to the application are (i.e. the attack surface) and the threats associated with each entry point (i.e. the attack vectors).

The concept of threat modeling is not new, but there has been a clear mind-set change in recent years. Modern threat modeling looks at a system from a potential attacker's perspective, as opposed to a defender's viewpoint. Many companies have been strong advocates of the process over the past number of years, including Microsoft, who have made threat modeling a core component of their S-SDLC, which they claim to be one of the reasons for the increased security of their products in recent years.

When source code analysis is performed outside the S-SDLC, such as on existing applications, the results of the threat modeling help in reducing the complexity of the source code analysis by promoting a risk-based approach. Instead of reviewing all source code with equal focus, a reviewer can prioritize the security code review of the components that the threat modeling has ranked as carrying high risk threats.

The threat modeling process can be decomposed into 3 high level steps:

6.9.1 Step 1: Decompose the Application

The first step in the threat modelling process is concerned with gaining an understanding of the application and how it interacts with external entities. This involves creating use cases to understand how the application is used, identifying entry points to see where a potential attacker could interact with the application, identifying assets (i.e. items/areas that the attacker would be interested in), and identifying trust levels, which represent the access rights that the application will grant to external entities. This information is documented in the threat model document, and it is also used to produce data flow diagrams (DFDs) for the application. The DFDs show the different data paths through the system, highlighting the privilege (trust) boundaries.
Items to consider when decomposing the application include:

External Dependencies
External dependencies are items external to the code of the application that may pose a threat to the application. These items are typically still within the control of the organization, but possibly not within the control of the development team. The first area to look at when investigating external dependencies is how the application will be deployed in a production environment.

This involves looking at how the application is or is not intended to be run. For example, if the application is expected to run on a server that has been hardened to the organization's hardening standard and it is expected to sit behind a firewall, then this information should be documented.

Entry Points
Entry points (aka attack vectors) define the interfaces through which potential attackers can interact with the application or supply it with data. In order for a potential attacker to attack an application, entry points must exist. Entry points in an application can be layered; for example, each web page in a web application may contain multiple entry points.

Assets
The system must have something that the attacker is interested in; these items/areas of interest are defined as assets. Assets are essentially threat targets, i.e. they are the reason threats will exist. Assets can be both physical assets and abstract assets. For example, an asset of an application might be a list of clients and their personal information; this is a physical asset. An abstract asset might be the reputation of an organization.

Determining the Attack Surface
The attack surface is determined by analyzing the inputs, data flows and transactions. A major part of actually performing a security code review is performing an analysis of the attack surface. An application takes inputs and produces output of some kind. The first step is to identify all input to the code.

Inputs to the application may include the bullet points below, and figure 4 describes an example process for identifying an application's input paths:

• Browser input
• Cookies
• Property files
• External processes
• Data feeds
• Service responses
• Flat files
• Command line parameters
• Environment variables

Figure 4: Example process diagram for identifying input paths (initiation and transitional analysis: identify input paths, identify the attack surface for each class of input parameter — config, user, control and backend — then follow each parameter through the code, identifying areas of config file reference and areas of late and dynamic binding)
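To make the list above concrete, the following is a minimal, hypothetical Java sketch (not from the guide; the parameter, header, cookie and property names are placeholders) showing how several of these input types typically surface in code. Each call below is an entry point a reviewer would trace when mapping the attack surface.

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;

public final class InputSurfaceExample {

    // Each call below is an entry point into the application and therefore part of
    // the attack surface that the reviewer needs to trace through the code.
    public static void readInputs(HttpServletRequest request) throws IOException {
        String browserInput = request.getParameter("q");          // browser / form field input
        String header = request.getHeader("X-Forwarded-For");     // HTTP header supplied by the client
        Cookie[] cookies = request.getCookies();                   // cookies

        Properties props = new Properties();                        // property file input
        try (InputStream in = InputSurfaceExample.class.getResourceAsStream("/app.properties")) {
            if (in != null) {
                props.load(in);
            }
        }

        String mode = System.getenv("APP_MODE");                   // environment variable input
        // Command line parameters arrive through main(String[] args); data feeds,
        // flat files and service responses arrive through whatever client or IO
        // library the application uses, and should be traced in the same way.
    }
}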
Trust Levels
Trust levels represent the access rights that the application will grant to external entities. The trust levels are cross-referenced with the entry points and assets. This allows a team to define the access rights or privileges required at each entry point, and those required to interact with each asset.

Transaction Analysis
Transaction analysis is needed to identify and analyze all transactions within the application, along with the relevant security functions invoked.

There are a number of symbols that are used in DFDs for threat modelling, as shown in table 7 below:

Table 7: Threat Modeling Symbols

External Entity: The external entity shape is used to represent any entity outside the application that interacts with the application via an entry point.

Process: The process shape represents a task that handles data within the application. The task may process the data or perform an action based on the data.

Multiple Process: The multiple process shape is used to present a collection of subprocesses. The multiple process can be broken down into its subprocesses in another DFD.

Data Flow: The data flow shape represents data movement within the application. The direction of the data movement is represented by the arrow.

6.9.2 Step 2: Determine and rank threats

The goal of the threat categorization is to help identify threats both from the attacker's perspective (STRIDE) and from the defensive perspective (ASF). The DFDs produced in step 1 help to identify the potential threat targets from the attacker's perspective, such as data sources, processes, data flows, and interactions with users. These threats can be identified further as the roots of threat trees; there is one tree for each threat goal.

From the defensive perspective, ASF categorization helps to identify the threats as weaknesses of the security controls for such threats. Common threat lists with examples can help in the identification of such threats. Use and abuse cases can illustrate how existing protective measures could be bypassed, or where a lack of such protection exists.

The determination of the security risk for each threat can be made using a value-based risk model such as DREAD, or a less subjective qualitative risk model based upon general risk factors (e.g. likelihood and impact).

The first step in the determination of threats is adopting a threat categorization.
A threat categorization provides a set of threat categories with corresponding examples, so that threats can be systematically identified in the application in a structured and repeatable manner.

STRIDE
Threat lists based on the STRIDE model are useful in the identification of threats with regard to the attacker's goals. For example, if the threat scenario is attacking the login, would the attacker brute force the password to break the authentication? If the threat scenario is to try to elevate privileges to gain another user's privileges, would the attacker try to perform forceful browsing?

A threat categorization such as STRIDE is useful in the identification of threats by classifying attacker goals, such as shown in table 8.

Table 8: Explanation Of The STRIDE Attributes

Spoofing: "Identity spoofing" is a key risk for applications that have many users but provide a single execution context at the application and database level. In particular, users should not be able to become any other user or assume the attributes of another user.

Denial of Service: Application designers should be aware that their applications may be subject to a denial of service attack. The use of expensive resources such as large files, complex calculations, heavy-duty searches, or long queries should be reserved for authenticated and authorized users, and not be available to anonymous users.

Elevation of Privilege: If an application provides distinct user and administrative roles, then it is vital to ensure that the user cannot elevate his/her role to a higher privileged one.

Each entry point should be evaluated for potential malicious input intended to exploit vulnerabilities such as SQL injection, cross-site scripting, and buffer overflows. Additionally, the data flow passing through that point has to be used to determine the threats to the entry points of the next components along the flow. If the following components can be regarded as critical (e.g. they hold sensitive data), that entry point can be regarded as more critical as well. In an end-to-end data flow, the input data (i.e. username and password) from a login page, passed on without validation, could be exploited for a SQL injection attack to manipulate a query for breaking the authentication or to modify a table in the database.

Exit points might serve as attack points to the client (e.g. XSS vulnerabilities) as well as for the realization of information disclosure vulnerabilities. In the case of exit points from components handling confidential data (e.g. data access components), any exit points lacking security controls to protect confidentiality and integrity can lead to disclosure of such confidential information to an unauthorized user.

In many cases threats enabled by exit points are related to the threats of the corresponding entry point. In the login example, error messages returned to the user via the exit point might allow for entry point attacks, such as account harvesting (e.g. username not found) or SQL injection (e.g. SQL exception errors). From the defensive perspective, the identification of threats driven by security control categorization such as ASF allows a threat analyst to focus on specific issues related to weaknesses (e.g. vulnerabilities) in security controls. Typically the process of threat identification involves going through iterative cycles where initially all the possible threats in the threat list that apply to each component are evaluated. At the next iteration, threats are further analyzed by exploring the attack paths, the root causes (e.g. vulnerabilities) for the threat to be exploited, and the necessary mitigation controls (e.g. countermeasures).

Once common threats, vulnerabilities, and attacks are assessed, a more focused threat analysis should take into consideration use and abuse cases. By thoroughly analyzing the use scenarios, weaknesses can be identified that could lead to the realization of a threat. Abuse cases should be identified as part of the security requirement engineering activity. These abuse cases can illustrate how existing protective measures could be bypassed, or where a lack of such protection exists. Finally, it is possible to bring all of this together by determining the types of threat to each component of the decomposed system. This can be done by repeating the techniques already discussed on a lower level threat model, again using a threat categorization such as STRIDE or ASF, the use of threat trees to determine how the threat can be exposed by a vulnerability, and use and misuse cases to further validate the lack of a countermeasure to mitigate the threat.

Table 9: Explanation Of The DREAD Attributes

Damage: How big would the damage be if the attack succeeded?
Reproducibility: Can the exploit be automated?
Exploitability: How much time, effort, and expertise is needed to exploit the threat? Does the attacker need to be authenticated?
Affected Users: If a threat were exploited, what percentage of users would be affected? Can an attacker gain administrative access to the system?
Discoverability: How easy is it for an attacker to discover this threat?

The impact mainly depends on the damage potential and the extent of the impact, such as the number of components that are affected by a threat.
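As an illustration only (the guide does not prescribe a scoring scale), DREAD-style ranking is often implemented by rating each attribute on a small numeric scale and averaging the ratings. The sketch below assumes a 1 (low) to 3 (high) scale and hypothetical ratings for the login threat discussed above.

public final class DreadRanking {

    // Average of the five DREAD attribute ratings; higher values rank the threat higher.
    public static double score(int damage, int reproducibility, int exploitability,
                               int affectedUsers, int discoverability) {
        return (damage + reproducibility + exploitability
                + affectedUsers + discoverability) / 5.0;
    }

    public static void main(String[] args) {
        // Hypothetical ratings on a 1-3 scale for a SQL injection threat against the login.
        double loginThreat = score(3, 3, 2, 3, 2);
        System.out.println("DREAD score: " + loginThreat);   // 2.6 -> near the top of the ranked list
    }
}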
Likelihood
A more generic risk model takes into consideration the Likelihood (e.g. the probability of an attack) and the Impact (e.g. the damage potential):

Risk = Likelihood x Impact

Note that this is a conceptual formula and is not expected to use actual values for likelihood and impact. The likelihood or probability is defined by the ease of exploitation, which mainly depends on the type of threat and the system characteristics, and by the possibility of realizing a threat, which is determined by the existence of an appropriate countermeasure.

6.9.3 Step 3: Determine countermeasures and mitigation

A lack of protection against a threat might indicate a vulnerability whose risk exposure could be mitigated with the implementation of a countermeasure. Such countermeasures can be identified using threat-countermeasure mapping lists. Once a risk ranking is assigned to the threats, it is possible to sort threats from the highest to the lowest risk, and to prioritize the mitigation effort, for example by responding to such threats by applying the identified countermeasures.

The risk mitigation strategy might involve evaluating these threats from the business impact that they pose and establishing countermeasures (or design changes) to reduce the risk.

Other options might include accepting the risk, assuming the business impact is acceptable because of compensating controls, informing the user of the threat, removing the risk posed by the threat completely, or the least preferable option, that is, to do nothing. If the risk identified is extreme, the functionality or product could be discontinued, as the risk of something going wrong is greater than the benefit.

The purpose of the countermeasure identification is to determine whether there is some kind of protective measure (e.g. a security control or policy measure) in place that can prevent each threat previously identified via threat analysis from being realized. Vulnerabilities are then those threats that have no countermeasures.

Since each of these threats has been categorized either with STRIDE or ASF, it is possible to find appropriate countermeasures in the application within the given category. Each of the above steps is documented as it is carried out. The resulting set of documents is the threat model for the application. Detailed examples of how to carry out threat modeling are given in Appendix B.

Threat Profile
Once threats and corresponding countermeasures are identified it is possible to derive a threat profile with the following criteria:

Partially mitigated threats: Threats partially mitigated by one or more countermeasures, which represent vulnerabilities that can only partially be exploited and cause a limited impact.

Fully mitigated threats: These threats have appropriate countermeasures in place and do not expose vulnerability or cause impact.

6.10 Metrics and Code Review

Metrics measure the size and complexity of a piece of code. There is a long list of quality and security characteristics that can be considered when reviewing code (such as, but not limited to, correctness, efficiency, portability, maintainability, reliability and securability). No two code review sessions will be the same, so some judgment will be needed to decide the best path. Metrics can help decide the scale of a code review.

Metrics can also be recorded relating to the performance of the code reviewers and the accuracy of the review process, the performance of the code review function, and the efficiency and effectiveness of the code review function.

Figure 5 describes the use of metrics throughout the code review process.

Some of the options for calculating the size of a review task include:

Lines of Code (LOC):
A count of the executable lines of code (commented-out code or blank lines are not counted). This gives a rough estimate but is not particularly scientific.

Function Point:
The estimation of software size by measuring functionality: a combination of a number of statements which perform a specific task, independent of the programming language used or the development methodology. In an object oriented language a class could be a function point.

Defect Density:
The average occurrence of programming faults per lines of code (LOC). This gives a high level view of the code quality but not much more. Fault density on its own does not give rise to a pragmatic metric; defect density covers minor issues as well as major security flaws in the code, and all are treated the same way. The security of code cannot be judged accurately using defect density alone.

Risk Density:
Similar to defect density, but discovered issues are rated by risk (high, medium & low). In doing this we can give insight into the quality of the code being developed via an [X Risk / LoC] or [Y Risk / Function Point] value (X & Y being high, medium or low risks), as defined by internal application development policies and standards.

For example:
4 High Risk Defects per 1000 Lines of Code
2 Medium Risk Defects per 3 Function Points
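A minimal sketch (with hypothetical module size and finding counts, not taken from the guide) of how the defect density and risk density figures above can be calculated from review findings:

import java.util.Map;

public final class ReviewDensity {

    // Findings per 1000 executable lines of code (used for both defect and risk density).
    public static double perKloc(int findings, int linesOfCode) {
        return findings * 1000.0 / linesOfCode;
    }

    public static void main(String[] args) {
        int linesOfCode = 50_000;                          // hypothetical module size
        Map<String, Integer> findingsByRisk =              // hypothetical findings, rated by risk
                Map.of("High", 200, "Medium", 150, "Low", 400);

        findingsByRisk.forEach((risk, count) ->
                System.out.printf("%s risk: %.1f findings per 1000 LOC%n",
                        risk, perKloc(count, linesOfCode)));
        // e.g. "High risk: 4.0 findings per 1000 LOC", matching the 4-per-1000-LOC example above.
    }
}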
Figure 5: The use of metrics throughout the code review process (code is submitted for secure code review; a triage meeting checks whether the context of the code has been defined; if not, guidelines are communicated and criteria are defined, either project or vulnerability based; review results and findings are recorded in a code review database together with previous findings; metrics are developed and persisted; resubmitted code is re-reviewed and code trends are analyzed)

Cyclomatic Complexity (CC):
A static analysis metric used to assist in establishing risk and stability estimations on an item of code, such as a class, method, or even a complete system. It was defined by Thomas McCabe in the 70's, and it is easy to calculate and apply, hence its usefulness.

The McCabe cyclomatic complexity metric is designed to indicate a program's testability, understandability and maintainability. This is accomplished by measuring the control flow structure in order to predict the difficulty of understanding, testing, maintaining, etc. Once the control flow structure is understood, one can gain a sense of the extent to which the program is likely to contain defects. The cyclomatic complexity metric is intended to be independent of language and language format; it measures the number of linearly independent paths through a program module. It is also the minimum number of paths that should be tested.

By knowing the cyclomatic complexity of the product, one can focus on the modules with the highest complexity. These will most likely be on the paths the data takes, and can thus guide one to potentially high risk locations for vulnerabilities. The higher the complexity, the greater the potential for bugs; the more bugs, the higher the probability of security flaws.

Does cyclomatic complexity reveal security risk? One will not know until after a review of the security posture of the module. The cyclomatic complexity metric provides a risk-based approach to where to begin to review and analyze the code. Securing an application is a complex task, and in many ways complexity is an enemy of security, as software complexity can make software bugs hard to detect. The complexity of software increases over time as the product is updated or maintained.

As the decision count increases, so do the complexity and the number of paths. Complex code leads to less stability and maintainability.

The more complex the code, the higher the risk of defects. A company can establish thresholds for the cyclomatic complexity of a module:

0-10: Stable code, acceptable complexity
11-15: Medium risk, more complex
16-20: High risk code, too many decisions for a unit of code

Modules with a very high cyclomatic complexity are extremely complex and could be refactored into smaller methods.

Bad Fix Probability:
This is the probability of an error being accidentally inserted into a program while trying to fix a previous error, known in some companies as a regression.

Cyclomatic Complexity: 1 – 10 == Bad Fix Probability: 5%
Cyclomatic Complexity: 20 – 30 == Bad Fix Probability: 20%
Cyclomatic Complexity: > 50 == Bad Fix Probability: 40%
Cyclomatic Complexity: Approaching 100 == Bad Fix Probability: 60%

As the complexity of software increases, so does the probability of introducing new errors.
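To make the thresholds above concrete, here is a small, hypothetical Java method with its decision points counted in the comments, using the common convention (an assumption, not prescribed by this guide) that cyclomatic complexity equals the number of decision points plus one:

public final class ComplexityExample {

    // Decision points: the two 'if' statements, the '||' operator, the 'for' loop
    // and the two non-default 'case' labels = 6. Using CC = decision points + 1,
    // this method has a cyclomatic complexity of 7, which falls in the 0-10
    // "stable code" band from the thresholds above.
    public static int score(String mode, int[] values) {
        if (mode == null || values == null) {    // 'if' + '||' -> 2 decision points
            return -1;
        }
        int total = 0;
        for (int v : values) {                   // 'for' -> 1 decision point
            if (v > 0) {                         // 'if' -> 1 decision point
                total += v;
            }
        }
        switch (mode) {                          // two case labels -> 2 decision points
            case "double":
                total *= 2;
                break;
            case "half":
                total /= 2;
                break;
            default:
                break;
        }
        return total;
    }
}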
Inspection Rate:
This metric can be used to get a rough idea of the duration required to perform a code review. The inspection rate is the amount of code a reviewer can cover per unit of time; for example, a rate of 250 lines per hour could be a baseline. This rate should not be used as part of a measure of review quality, but simply to determine the duration of the task.
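A small worked example (with a hypothetical code size) of using the inspection rate to estimate review duration:

public final class InspectionEstimate {

    // Estimated review duration in hours = lines of code to review / inspection rate.
    public static double reviewHours(int linesOfCode, int linesPerHour) {
        return (double) linesOfCode / linesPerHour;
    }

    public static void main(String[] args) {
        // At the 250 lines-per-hour baseline mentioned above, a hypothetical
        // 20,000 LOC review works out to roughly 80 hours of reviewer time.
        System.out.println(reviewHours(20_000, 250) + " hours");
    }
}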
Code Crawling

Code crawling is the practice of sweeping a code base for key code pointers where a possible security vulnerability might reside. Certain APIs are related to interfacing with the external world, file IO or user management, which are key areas for an attacker to focus on; in crawling code we look for APIs relating to these areas. We also need to look for business logic areas which may cause security issues, but generally these are bespoke methods with bespoke names that cannot be detected directly, even though we may touch on certain methods due to their relationship with a certain key API. We also need to look for common issues relating to a specific language: issues that may not be security related, but which may affect the stability or availability of the application in extraordinary circumstances.
Other issues when performing a code review include areas such as a simple copyright notice in order to protect one's intellectual property. Generally these issues should be part of a company's coding guidelines (or standard), and should be enforceable during a code review. For example, a reviewer can reject a code review because the code violates something in the coding guidelines, regardless of whether or not the code would work in its current state.
Crawling code can be done manually or in an automated fashion using automated tools. However, working purely manually is probably not effective because, as can be seen below, there are plenty of indicators that can apply to a language. Tools as simple as grep or wingrep can be used. Other tools are available which will search for keywords relating to a specific programming language. If a team is using a particular review tool that allows it to specify strings to be highlighted in a review (e.g. Python-based review tools using the pygments syntax highlighter, or an in-house tool for which the team can change the source code), then they could add the relevant string indicators from the lists below and have them highlighted to reviewers automatically.
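As a rough illustration of this kind of automation (and not a replacement for a real static analysis tool), the following sketch walks a source tree and flags lines containing a handful of assumed indicator strings. The keyword list and the file extension are placeholders that a team would replace with the indicators for their own language (see Appendix C).

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Stream;

public final class CodeCrawler {

    // Hypothetical indicator strings: APIs touching the outside world, file IO and SQL.
    private static final Pattern INDICATORS = Pattern.compile(
            "Runtime\\.exec|ProcessBuilder|createStatement|getParameter|FileInputStream");

    public static void main(String[] args) throws IOException {
        Path root = Paths.get(args.length > 0 ? args[0] : ".");
        try (Stream<Path> files = Files.walk(root)) {
            files.filter(p -> p.toString().endsWith(".java"))
                 .forEach(CodeCrawler::scan);
        }
    }

    private static void scan(Path file) {
        try {
            List<String> lines = Files.readAllLines(file);
            for (int i = 0; i < lines.size(); i++) {
                if (INDICATORS.matcher(lines.get(i)).find()) {
                    // Flag the location for a human reviewer; the tool does not judge the finding.
                    System.out.println(file + ":" + (i + 1) + ": " + lines.get(i).trim());
                }
            }
        } catch (IOException e) {
            System.err.println("Could not read " + file + ": " + e.getMessage());
        }
    }
}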
The basis of the code review is to locate and analyze areas of code which may have application security implications. Assuming the code reviewer has a thorough understanding of the code, what it is intended to do, and the context in which it is to be used, the first step is to sweep the code base for areas of interest.
Appendix C gives practical examples of how to carry out code crawling in the following programming lan-
guages:
• .Net
• Java
• ASP
• C++/Apache
A1 INJECTION

7.2 SQL Injection

The most common injection vulnerability is SQL injection. Injection vulnerabilities are also easy to remediate and protect against. This vulnerability class covers SQL, LDAP, XPath, OS commands and XML parsers.

Injection vulnerabilities can lead to:
1. Disclosure/leaking of sensitive information.
2. Data integrity issues. SQL injection may modify data, add new data, or delete data.
3. Elevation of privileges.
4. Gaining access to the back-end network.

The underlying problem is that the SQL commands are not protected from the untrusted input, so the SQL parser is not able to distinguish between code and data:

String custQuery = "SELECT custName, address1, address2, city, postalCode FROM customers WHERE custID = '" + request.getParameter("id") + "'";

(Code: the quoted SQL text. Data: the value returned by request.getParameter("id").)

Using string concatenation to generate a SQL statement is very common in legacy applications where developers were not considering security. The issue is that this coding technique does not tell the parser which part of the statement is code and which part is data. In situations where user input is concatenated into the SQL statement, an attacker can modify the SQL statement by adding SQL code to the input data.

Effectively, the attacker uses SQL queries to determine what error responses are returned for valid SQL, and which responses are returned for invalid SQL. Then the attacker can probe; for example, check whether a table called "user_password_table" exists. Once they have that information, they could use an attack like the one described above to maliciously delete the table, or attempt to return information from the table (does the username "john" exist?). Blind SQL injections can also use timings instead of error messages, e.g. if invalid SQL takes 2 seconds to respond, but valid SQL returns in 0.5 seconds, the attacker can use this information.

Untrusted input is accepted by the application. There are several ways to mitigate injection vulnerabilities (whitelisting, regular expressions, etc.). The best ways are listed below, and they should be used together for a defense in depth approach.

1. HtmlEncode all user input.

2. Use static analysis tools. Most static analysis tools for languages like .NET, Java and Python are accurate. However, static analysis can become an issue when the injection comes from JavaScript and CSS.

3. Parameterize SQL queries. Use the SQL methods provided by the programming language or framework that parameterize the statements, so that the SQL parser can distinguish between code and data.

4. Use stored procedures. Stored procedures will generally help the SQL parser to differentiate code and data. However, stored procedures can also be used to build dynamic SQL statements, allowing the code and data to become blended together and causing the statement to become vulnerable to injection.

Parameterized SQL Queries
Parameterized SQL queries (sometimes called prepared statements) allow the SQL query string to be defined in such a way that the client input can't be treated as part of the SQL syntax. Take the example in sample 7.1:

Sample 7.1

String query = "SELECT id, firstname, lastname FROM authors WHERE forename = ? and surname = ?";
PreparedStatement pstmt = connection.prepareStatement( query );
pstmt.setString( 1, firstname );
pstmt.setString( 2, lastname );

In this example the string 'query' is constructed in a way that does not rely on any client input, and the 'PreparedStatement' is constructed from that string. When the client input is to be entered into the SQL, the 'setString' function is used: the first question mark "?" is replaced by the string value of 'firstname', and the second question mark by the value of 'lastname'. When 'setString' is called, the supplied value is bound purely as data, so any SQL syntax contained within the string value is not executed. Most prepared statement APIs also allow you to specify the type that should be entered, e.g. 'setInt', 'setBinary', etc.

Safe String Concatenation?
So does this mean you can't use string concatenation at all in your DB handling code? It is possible to use string concatenation safely, but it does increase the risk of an error, even without an attacker attempting to inject SQL syntax into your application.

You should never use string concatenation in combination with the client input value. Take an example where the existence (not the value) of a client input variable "surname" is used to construct the SQL query of the prepared statement:
Sample 7.2

String query = "SELECT id, firstname, lastname FROM authors WHERE forename = ?";
if (lastname != null && lastname.length() != 0) {
    query += " and surname = ?";
}
query += ";";

PreparedStatement pstmt = connection.prepareStatement( query );
pstmt.setString( 1, firstname );
if (lastname != null && lastname.length() != 0) {
    pstmt.setString( 2, lastname );
}

Here the value of 'lastname' is not being used, but its existence is being evaluated. However, there is still a risk when the SQL statement is larger and has more complex business logic involved in creating it. Take the following example, where the function will search based on firstname or lastname:

Sample 7.3

String query = "SELECT id, firstname, lastname FROM authors";

if ((firstname != null && firstname.length() != 0) && (lastname != null && lastname.length() != 0)) {
    query += " WHERE forename = ? AND surname = ?";
}
else if (firstname != null && firstname.length() != 0) {
    query += " WHERE forename = ?";
}
else if (lastname != null && lastname.length() != 0) {
    query += " WHERE surname = ?";
}

query += ";";

PreparedStatement pstmt = connection.prepareStatement( query );

This logic will be fine when either firstname or lastname is given; however, if neither is given then the SQL statement will not have any WHERE clause, and the entire table will be returned. This is not SQL injection (the attacker has done nothing to cause this situation, except not passing two values), but the end result is the same: information has been leaked from the database, despite the fact that a parameterized query was used.

For this reason, the advice is to avoid using string concatenation to create SQL query strings, even when using parameterized queries, especially if the concatenation involves building any items in the WHERE clause.

Using Flexible Parameterized Statements
Functional requirements often need the SQL query being executed to be flexible based on the user input; e.g. if the end user specifies a time span for their transaction search then this should be used, or they might wish to query based on either surname or forename, or both. In this case the safe string concatenation above could be used; however, from a maintenance point of view this could invite future programmers to misunderstand the difference between safe concatenation and the unsafe version (using input string values directly).

One option for flexible parameterized statements is to use 'if' statements to select the correct query based on the input values provided, for example:

Sample 7.4

String query;
PreparedStatement pstmt;

if ((firstname != null && firstname.length() != 0) &&
    (lastname != null && lastname.length() != 0)) {
    query = "SELECT id, firstname, lastname FROM authors WHERE forename = ? and surname = ?";
    pstmt = connection.prepareStatement( query );
    pstmt.setString( 1, firstname );
    pstmt.setString( 2, lastname );
}
else if (firstname != null && firstname.length() != 0) {
    query = "SELECT id, firstname, lastname FROM authors WHERE forename = ?";
    pstmt = connection.prepareStatement( query );
    pstmt.setString( 1, firstname );
}
else if (lastname != null && lastname.length() != 0) {
    query = "SELECT id, firstname, lastname FROM authors WHERE surname = ?";
    pstmt = connection.prepareStatement( query );
    pstmt.setString( 1, lastname );
}
else {
    throw new NameNotSpecifiedException();
}

PHP SQL Injection

An SQL injection attack consists of injecting SQL query fragments into the back-end database system via the client interface of the web application. The consequences of a successful exploitation of an SQL injection vary from just reading data to modifying data or executing system commands. SQL injection in PHP remains the number one attack vector, and also the number one cause of data compromises. A vulnerable example is shown in sample 7.5.

Example 1:

Sample 7.5

<?php
$pass = $_GET["pass"];
$con = mysql_connect('localhost', 'owasp', 'abc123');
mysql_select_db("owasp_php", $con);
$sql = "SELECT card FROM users WHERE password = '".$pass."'";
$result = mysql_query($sql);
?>
The most common ways to prevent SQL injection in PHP are functions such as addslashes() and mysql_real_escape_string(), but those functions can still allow SQL injection in some cases.

addslashes():
You will only avoid SQL injection using addslashes() if you wrap the query string in quotes. The following example would still be vulnerable:

Sample 7.6

$id = addslashes( $_GET['id'] );
$query = 'SELECT title FROM books WHERE id = ' . $id;

mysql_real_escape_string():
mysql_real_escape_string() is a little bit more powerful than addslashes(), as it calls MySQL's library function mysql_real_escape_string, which prepends backslashes to the following characters: \x00, \n, \r, \, ', " and \x1a. As with addslashes(), mysql_real_escape_string() will only work if the query string is wrapped in quotes; a string that is not wrapped in quotes would still be vulnerable to SQL injection.

SQL injections occur when input to a web application is not controlled or sanitized before it is executed against the back-end database. The attacker tries to exploit this by passing SQL commands in his or her input, and thereby creates an undesired response from the database, such as providing information that bypasses the authorization and authentication programmed into the web application. An example of vulnerable Java code is shown in sample 7.7.

Sample 7.7

HttpServletRequest request = ...;
String userName = request.getParameter("name");
Connection con = ...
String query = "SELECT * FROM Users WHERE name = '" + userName + "'";
con.execute(query);

The input parameter "name" is passed into the String query without any proper validation or verification. The query 'SELECT * FROM Users WHERE name = ...' expects the value to be just the username, but it can easily be misused to pass something different than the 'name'. For example, the attacker can pass

' OR '1'='1

instead, turning the query into SELECT * FROM Users WHERE name = '' OR '1'='1', which returns all user records and not only the one the specific user is entitled to.

.NET SQL Injection
.NET Framework 1.0 and 2.0 might be more vulnerable to SQL injection than the later versions of .NET. Thanks to the proper implementation and use of design patterns already embedded in ASP.NET, such as MVC (also depending on the version), it is possible to create applications free from SQL injection; however, there might be times where a developer prefers to use SQL code directly.

Example: a developer creates a webpage with 3 fields and a submit button, to search for employees on the fields 'name', 'lastname' and 'id'. The developer implements a string-concatenated SQL statement or stored procedure in the code, such as in sample 7.8.

Sample 7.8

SqlDataAdapter thisCommand = new SqlDataAdapter(
    "SELECT name, lastname FROM employees WHERE ei_id = '" +
    idNumber.Text + "'", thisConnection);

This code is equivalent to the executed SQL statement in sample 7.9.

Sample 7.9

SqlDataAdapter thisCommand = new SqlDataAdapter(
    "SearchEmployeeSP '" + idNumber.Text + "'", thisConnection);

A hacker can then insert the following employee ID via the web interface, "123';DROP TABLE pubs --", and execute the following code:

SELECT name, lastname FROM employees WHERE ei_id = '123'; DROP TABLE pubs --'

The semicolon ";" provides SQL with a signal that it has reached the end of the SQL statement; however, the hacker uses this to continue the statement with the malicious SQL code:

; DROP TABLE pubs;

Parameter Collections
Parameter collections such as SqlParameterCollection provide type checking and length validation. If you use a parameters collection, input is treated as a literal value, SQL Server does not treat it as executable code, and therefore the payload cannot be injected. Using a parameters collection also lets you enforce type and length checks: values outside of the range trigger an exception, so make sure you handle the exception correctly.

Hibernate Query Language (HQL)
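The original example for this section is not reproduced here; the following is a minimal, hypothetical Hibernate sketch (the entity and property names are assumptions) showing why HQL built by string concatenation is injectable, and the named-parameter form that keeps the user input as data:

import java.util.List;

import org.hibernate.Session;

public final class HqlInjectionExample {

    // Vulnerable: the user-supplied value becomes part of the HQL itself,
    // so input such as  ' or '1'='1  changes the meaning of the query.
    public static List<?> findUnsafe(Session session, String userSuppliedId) {
        return session.createQuery(
                "from Inventory where productId = '" + userSuppliedId + "'").list();
    }

    // Safer: the named parameter is bound as data, not as HQL syntax.
    public static List<?> findSafe(Session session, String userSuppliedId) {
        return session.createQuery("from Inventory where productId = :pid")
                      .setParameter("pid", userSuppliedId)
                      .list();
    }
}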
You should review all code that calls EXECUTE or EXEC, and any SQL calls that can call outside resources or the command line.

Do not allow JSON data to construct dynamic HTML. Always use safe DOM features like innerText or createTextNode(...).

Object/Relational Mapping (ORM)
Object/Relational Mapping (ORM) facilitates the storage and retrieval of domain objects via HQL (Hibernate Query Language) or the .NET Entity Framework. It is a very common misconception that ORM solutions like Hibernate are SQL injection proof. They are not. ORMs allow the use of "native SQL" as well as a proprietary query language called HQL; the former is prone to SQL injection and the latter is prone to HQL (or ORM) injection. LINQ is not SQL, and because of that it is not prone to SQL injection; however, using ExecuteQuery or ExecuteCommand via LINQ causes the program not to use LINQ's protection mechanisms, and it is then vulnerable to SQL injection.

OWASP References
• OWASP SQL Injection Prevention Cheat Sheet: https://www.owasp.org/index.php/SQL_Injection_Prevention_Cheat_Sheet
• OWASP Query Parameterization Cheat Sheet: https://www.owasp.org/index.php/Query_Parameterization_Cheat_Sheet
• OWASP Command Injection Article: https://www.owasp.org/index.php/Command_Injection
• OWASP XML eXternal Entity (XXE) Reference Article: https://www.owasp.org/index.php/XXE
Content-Security-Policy-Report-Only: Like Content-Security-Policy, but only reports. Useful during implementation, tuning and testing efforts (https://www.w3.org/TR/CSP/). Example: Content-Security-Policy-Report-Only: default-src 'self'; report-uri http://loghost.example.com/reports.jsp

Note that the Spring Security library can assist with these headers, see http://docs.spring.io/spring-security/site/docs/current/reference/html/headers.html

Risk
The risk with CSP can have 2 main sources:
• Policies misconfiguration
• Too permissive policies

What to Review
The code reviewer needs to understand what content security policies were required by the application design, and how these policies are tested to ensure they are in use by the application.
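Where a framework feature such as the Spring Security support noted above is not available, one possible approach (a sketch only; the policy string and class name are placeholders) is a servlet filter that adds the header to every response:

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public final class ContentSecurityPolicyFilter implements Filter {

    // Placeholder policy; the reviewer should compare the deployed value
    // against the policy required by the application design.
    private static final String POLICY = "default-src 'self'";

    @Override
    public void init(FilterConfig filterConfig) { }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        if (response instanceof HttpServletResponse) {
            // Content-Security-Policy-Report-Only can be set instead while tuning the policy,
            // as described in the table above.
            ((HttpServletResponse) response).setHeader("Content-Security-Policy", POLICY);
        }
        chain.doFilter(request, response);
    }

    @Override
    public void destroy() { }
}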
References
Apache: http://httpd.apache.org/docs/2.0/mod/mod_headers.html
IIS: http://technet.microsoft.com/pl-pl/library/cc753133(v=ws.10).aspx

All data from users needs to be considered untrusted. Remember that one of the top rules of secure coding is "Don't trust user input". Always validate user data with full knowledge of what your application is trying to accomplish.

Regular expressions can be used to validate user input, but the more complicated the regular expression is, the greater the chance that it is not foolproof and has errors for corner cases. Regular expressions are also very hard for QA to test, and they may make it hard for the code reviewer to do a good review of them.

Data Validation
All external input to the system (and between systems/applications) should undergo input validation. The validation rules are defined by the business requirements for the application. If possible, an exact match validator should be implemented; exact match only permits data that conforms to an expected value. A "known good" approach (white-list), which is a little weaker but more flexible, is common; known good only permits characters/ASCII ranges defined within a white-list.

Such a range is defined by the business requirements of the input field. Another approach to data validation is "known bad", which is a black list of "bad characters"; this approach is not future proof and would need maintenance. "Encode bad" would be very weak, as it would simply encode characters considered "bad" into a format which should not affect the functionality of the application.

Business Validation
Business validation is concerned with business logic. An understanding of the business logic is required prior to reviewing the code which performs such logic. Business validation could be used to limit the value range of a transaction inputted by a user, or to reject input which does not make much business sense. Reviewing code for business validation can also include rounding errors or floating point issues, which may give rise to problems such as integer overflows that can dramatically damage the bottom line.

Canonicalization
Canonicalization is the process by which various equivalent forms of a name can be resolved to a single standard name, or the "canonical" name.

The most popular encodings are UTF-8, UTF-16, and so on (which are described in detail in RFC 2279). A single character, such as a period/full-stop (.), may be represented in many different ways: ASCII 2E, Unicode C0 AE, and many others. With the myriad ways of encoding user input, a web application's filters can be easily circumvented if they're not carefully built.

Bad Example

Sample 7.11

Good Example

Sample 7.12

public static void main(String[] args) throws IOException {
    File x = new File("/cmd/" + args[1]);
    String canonicalPath = x.getCanonicalPath();
}

.NET Request Validation
One solution is to use .NET "Request Validation". Using request validation is a good start on validating user data, and it is useful; the downside is that it is too generic and not specific enough to meet all of our requirements for establishing full trust of user data. You can never rely on request validation alone to secure your application against cross-site scripting attacks.

The following example shows how to use a static method in the Uri class to determine whether the Uri provided by a user is valid:

var isValidUri = Uri.IsWellFormedUriString(passedUri, UriKind.Absolute);

However, to sufficiently verify the Uri, you should also check to make sure it specifies http or https. The following example uses instance methods to verify that the Uri is valid:

var uriToVerify = new Uri(passedUri);
var isValidUri = uriToVerify.IsWellFormedOriginalString();
var isValidScheme = uriToVerify.Scheme == "http" || uriToVerify.Scheme == "https";

Before rendering user input as HTML or including user input in a SQL query, encode the values to ensure malicious code is not included.

You can HTML encode the value in markup with the <%: %> syntax, as shown below:

<span><%: userInput %></span>

Or, in Razor syntax, you can HTML encode with @, as shown below.
<span>@userInput</span>

The next example shows how to HTML encode a value in code-behind:

var encodedInput = Server.HtmlEncode(userInput);

When reviewing data validation, check the following:
• Ensure that all input that can (and will) be modified by a malicious user, such as HTTP headers, input fields, hidden fields, drop down lists, and other web components, is properly validated.
• Ensure that proper length checks exist on all input.
• Ensure that all fields, cookies, HTTP headers/bodies, and form fields are validated.
• Ensure that the data is well formed and contains only known good characters if possible.
• Ensure that the data validation occurs on the server side.
• Examine where data validation occurs and whether a centralized or decentralized model is used.
• Ensure there are no backdoors in the data validation model.
• "Golden Rule: All external input, no matter what it is, will be examined and validated."

Resources:
http://msdn.microsoft.com/en-us/library/vstudio/system.uri

Managed Code and Non-Managed Code
Both Java and .NET have the concept of managed and non-managed code. To offer some of these protections during the invocation of native code, do not declare a native method public. Instead, declare it private and expose the functionality through a public wrapper method. A wrapper can safely perform any necessary input validation prior to the invocation of the native method; a sketch of such a wrapper follows sample 7.13 below.

Java sample code to call a native method with data validation in place:

Sample 7.13

// validate input
// Note offset+len would be subject to integer overflow.
// For instance if offset = 1 and len = Integer.MAX_VALUE,
// then offset+len == Integer.MIN_VALUE which is lower
// than data.length.
// Further, loops of the form
//   for (int i=offset; i<offset+len; ++i) { ... }
// would not throw an exception or cause native code to crash.
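Sample 7.13 above only shows the validation comments; its body is not reproduced here. The following is a sketch (the method and native library names are assumptions, not the guide's original sample) of the wrapper pattern those comments describe, including an overflow-safe bounds check:

public final class NativeLibraryWrapper {

    static {
        System.loadLibrary("examplenative");   // hypothetical native library name
    }

    // Keep the native method private so callers cannot bypass the validation below.
    private native void nativeOperation(byte[] data, int offset, int len);

    // Public wrapper performing the input validation described in the comments of sample 7.13.
    public void doOperation(byte[] data, int offset, int len) {
        if (data == null) {
            throw new NullPointerException("data");
        }
        // Written to avoid the offset + len integer overflow noted above.
        if (offset < 0 || len < 0 || offset > data.length - len) {
            throw new IndexOutOfBoundsException();
        }
        nativeOperation(data, offset, len);
    }
}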
8.1 Overview
Web applications and web services both use authentication as the primary means of access control for logins via user id and passwords. This control is essential to prevent confidential files, data, or web pages from being accessed by hackers or by users who do not have the necessary access control level.
8.2 Description
Authentication is important, as it is the gateway to the functionality you wish to protect. Once a user is authenticated, their requests will be authorized to perform some level of interaction with your application that non-authenticated users will be barred from. You cannot control how users manage their authentication information or tokens, but you can ensure there is no way to perform application functions without proper authentication occurring.
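As a minimal illustration of that gateway idea (the attribute name, paths and class name are assumptions, and most real applications would delegate this to their framework's authentication support), a request filter can refuse to serve application functions unless the session already carries an authenticated user:

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public final class AuthenticationGatewayFilter implements Filter {

    @Override
    public void init(FilterConfig filterConfig) { }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest req = (HttpServletRequest) request;
        HttpSession session = req.getSession(false);

        // No authenticated user in the session: send the client to the login page
        // instead of executing the requested function. In a real deployment the login
        // page and static resources would be excluded from this filter's mapping.
        if (session == null || session.getAttribute("authenticatedUser") == null) {
            ((HttpServletResponse) response).sendRedirect(req.getContextPath() + "/login");
            return;
        }
        chain.doFilter(request, response);
    }

    @Override
    public void destroy() { }
}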
There are many forms of authentication with passwords being the most common. Other forms include client
certificates, biometrics, one time passwords over SMS or special devices, or authentication frameworks such as
Open Authorization (OAUTH) or Single Sign On (SSO).
Typically authentication is done once, when the user logs into a website, and successful authentication results in a web session being set up for the user (see Session Management). Further (and stronger) authentication can subsequently be requested if the user attempts to perform a high risk function; for example, a bank user could be asked to confirm a 6 digit number that was sent to their registered phone number before money is allowed to be transferred.
Authentication is just as important within a company's firewall as outside it. Attackers should not be able to run free on a company's internal applications simply because they found a way in through the firewall. Also, separation of privilege (or duties) means that someone working in accounts should not be able to modify code in a repository, and application managers should not be able to edit the payroll spreadsheets.
• Make sure your usernames/user-ids are case insensitive. Many sites use email addresses for usernames, and email addresses are already case insensitive. Regardless, it would be very strange for user 'smith' and user 'Smith' to be different users; this could result in serious confusion.
• Ensure failure messages for invalid usernames or passwords do not leak information. If the error message
indicates the username was valid, but the password was wrong, then attackers will know that username exists.
If the password was wrong, do not indicate how it was wrong.
• Make sure that every character the user types in is actually included in the password.
• Do not log invalid passwords. Many times an e-mail address is used as the username, and those users will have a few passwords memorized but may forget which one they used on your web site. The first time they may use a password that is invalid for your site but valid for many other sites used by that user (identified by the username). If you log that username and password combination, and that log leaks out, this low level compromise of your site could negatively affect many other sites.

• Longer passwords provide a greater combination of characters and consequently make it more difficult for an attacker to guess. A minimum password length should be enforced by the application. Passwords shorter than 10 characters are considered to be weak ([1]). Passphrases should be encouraged. For more on password lengths see the OWASP Authentication Cheat Sheet.

• To prevent brute force attacks, implement temporary account lockouts or rate limit login responses. If a user fails to provide the correct username and password 5 times, then lock the account for X minutes, or implement logic where login responses take an extra 10 seconds. Be careful though, this could leak the fact that the username is valid to attackers continually trying random usernames, so as an extra measure consider implementing the same logic for invalid usernames.

• For internal systems, consider forcing the users to change passwords after a set period of time, and store a reference (e.g. hash) of the last 5 or more passwords to ensure the user is not simply re-using their old password.

• Password complexity should be enforced by making users choose password strings that include various types of characters (e.g. upper- and lower-case letters, numbers, punctuation, etc.). Ideally, the application would indicate to the user, as they type in their new password, how much of the complexity policy their new password meets. For more on password complexity see the OWASP Authentication Cheat Sheet.

• It is common for an application to have a mechanism that provides a means for a user to regain access to their account in the event they forget their password. This is an example of web site functionality that is invoked by unauthenticated users (as they have not provided their password). Ensure interfaces like this are protected from misuse: if asking for a password reminder results in an e-mail or SMS being sent to the registered user, do not allow the password reset feature to be used by attackers to spam the user by constantly entering the username into this form. Please see the Forgot Password Cheat Sheet for details on this feature.

• It is critical for an application to store a password using the right cryptographic technique. Please see the Password Storage Cheat Sheet for details on this feature.

When reviewing MVC .NET it is important to make sure the application transmits and receives over a secure link. It is recommended that all web pages, not just login pages, use SSL. We need to protect session cookies, which are as useful to an attacker as user credentials.

public static void RegisterGlobalFilters(GlobalFilterCollection filters) {
    filters.Add(new RequireHttpsAttribute());
}

In the global.asax file we can review the RegisterGlobalFilters method. The attribute RequireHttpsAttribute() can be used to make sure the application runs over SSL/TLS. It is recommended that this be enabled for SSL/TLS sites.

• For high risk functions, e.g. banking transactions, user profile updates, etc., utilize multi-factor authentication (MFA). This also mitigates against CSRF and session hijacking attacks. MFA is using more than one authentication factor to logon or process a transaction:
• Something you know (account details or passwords)
• Something you have (tokens or mobile phones)
• Something you are (biometrics)

• If the client machine is in a controlled environment utilize SSL Client Authentication, also known as two-way SSL authentication, which consists of both browser and server sending their respective SSL certificates during the TLS handshake process. This provides stronger authentication as the server administrator can create and issue client certificates, allowing the server to only trust login requests from machines that have this client SSL certificate installed. Secure transmission of the client certificate is important.

References
• https://www.owasp.org/index.php/Authentication_Cheat_Sheet
• http://csrc.nist.gov/publications/nistpubs/800-132/nist-sp800-132.pdf
• http://www.codeproject.com/Articles/326574/An-Introduction-to-Mutual-SSL-Authentication
• https://cwe.mitre.org/data/definitions/287.html Improper Authentication
• OWASP ASVS requirements areas for Authentication (V2)

8.4 Forgot Password
Overview
If your web site needs to have user authentication then most likely it will require a user name and password to authenticate users. However, as computer systems have increased in complexity, so has authenticating users. As a result the code reviewer needs to be aware of the benefits and drawbacks of user authentication, referred to as the "Direct Authentication" pattern in this section. This section emphasises design patterns for when users forget their user id and/or password, what the code reviewer needs to consider when reviewing how user ids and passwords can be retrieved when forgotten by the user, and how to do this in a secure manner.

General considerations
Notify the user by phone, SMS or email with a link that takes them to your site and asks them to enter a new password.

Ask the user to enter login credentials they already have (Facebook, Twitter, Google, Microsoft Live, OpenID etc.) to validate the user before allowing them to change their password.

Send notifications to the registered email address when account information has been changed. Set an appropriate time out value, i.e. if the user does not respond to the email within 48 hours then the user will be frozen out of the system until they re-affirm the password change.

• The identity and shared secret/password must be transferred using encryption to provide data confidentiality. HTTPS should also be used, but in itself should not be the only mechanism used for data confidentiality.
• A shared secret can never be stored in clear text format, even if only for a short time in a message queue.

• A shared secret must always be stored in hashed or encrypted format in a database.

• The organization storing the encrypted shared secret does not need the ability to view or decrypt users' passwords. User passwords must never be sent back to a user.

• If the client must cache the username and password for presentation in subsequent calls to a web service, then a secure cache mechanism needs to be in place to protect the user name and password.

• When reporting an invalid entry back to a user, neither the username nor the password should be identified as being invalid. User feedback/error messages must treat the user name and password as one item, the "user credential", e.g. "The username or password you entered is incorrect."

8.5 CAPTCHA
Overview
CAPTCHA (an acronym for "Completely Automated Public Turing test to tell Computers and Humans Apart") is an access control technique.

CAPTCHA is used to prevent automated software from gaining access to webmail services like Gmail, Hotmail and Yahoo to create e-mail spam, from making automated postings to blogs, forums and wikis for the purpose of promotion (commercial and/or political), harassment or vandalism, and from performing automated account creation.

CAPTCHAs have proved useful and their use has been upheld in court. Circumventing CAPTCHA has been held in US courts to be a violation of the Digital Millennium Copyright Act anti-circumvention section 1201(a)(3) and of European Directive 2001/29/EC.

General considerations
When reviewing CAPTCHA code, the reviewer needs to pay attention to the following rules to make sure the CAPTCHA is built with strong security principles.

• Do not allow the user to enter multiple guesses after an incorrect attempt.

• The software designer and code reviewer need to understand the statistics of guessing. For example, one CAPTCHA design shows four pictures (3 cats and 1 boat), and the user is requested to pick the picture that is not in the same category as the other pictures. Automated software will have a success rate of 25% by always picking the first picture. Second, depending on the fixed pool of CAPTCHA images, over time an attacker can create a database of correct answers and then gain 100% access.

• Consider using a key passed to the server that uses an HMAC (Hash-based message authentication code) of the answer.

Text based CAPTCHAs should adhere to the following security design principles:

1. Randomize the CAPTCHA length: Don't use a fixed length; it gives too much information to the attacker.

2. Randomize the character size: Make sure the attacker can't make educated guesses by using several font sizes / several fonts.

3. Wave the CAPTCHA: Waving the CAPTCHA increases the difficulty for the attacker.

4. Don't use a complex charset: Using a large charset does not significantly improve the CAPTCHA scheme's security and really hurts human accuracy.

5. Use anti-recognition techniques as a means of strengthening CAPTCHA security: Rotation, scaling and rotating some characters and using various font sizes will reduce the recognition efficiency and increase security by making character width less predictable.

6. Keep the line within the CAPTCHA: Lines must cross only some of the CAPTCHA letters, so that it is impossible to tell whether it is a line or a character segment.

7. Use large lines: Using lines that are not as wide as the character segments gives an attacker a robust discriminator and makes the line anti-segmentation technique vulnerable to many attack techniques.

8. CAPTCHA does create issues for web sites that must be ADA (Americans with Disabilities Act of 1990) compliant. The code reviewer may need to be aware of web accessibility as well as security when reviewing a CAPTCHA implementation where the web site is required to be ADA compliant by law.

References
• UNITED STATES of AMERICA vs KENNETH LOWSON, KRISTOFER KIRSCH, LOEL STEVENSON Federal Indictment. February 23, 2010. Retrieved 2012-01-02.
• http://www.google.com/recaptcha/captcha
• http://www.ada.gov/anprm2010/web%20anprm_2010.htm
• Inaccessibility of CAPTCHA - Alternatives to Visual Turing Tests on the Web http://www.w3.org/TR/turingtest/

8.6 Out-of-Band Communication
Overview
The term 'out-of-band' is commonly used when a web application communicates with an end user over a channel separate from the HTTP requests/responses conducted through the user's web browser. Common examples include text/SMS, phone calls, e-mail and regular mail.

Description
The main reason an application would wish to communicate with the end user via these separate channels is for security. A username and password combination could be sufficient authentication to allow a user to browse and use non-sensitive parts of a website, however more sensitive (or risky) functions could require a stronger form of authentication. A username and password could have been stolen through an infected computer, through social engineering, a database leak or other attacks, meaning the web application cannot put too much trust in a web session that provides a valid username and password combination actually belonging to the intended user.

Examples of sensitive operations could include:
• Changing the password
• Changing account details, such as e-mail address, home address, etc.
• Transferring funds in a banking application
• Submitting, modifying or cancelling orders

In these cases many applications will communicate with you via a channel other than the browsing session. Many large on-line stores will send you confirmation e-mails when you change account details or purchase items. This protects in the case where an attacker has the username and password: if they buy something, the legitimate user will get an e-mail and have a chance to cancel the order or alert the website that they did not modify the account.
When out-of-band techniques are used for authentication it is termed two-factor authentication. There are three ways to authenticate:
1. Something you know (e.g. password, passphrase, memorized PIN)
2. Something you have (e.g. mobile phone, cash card, RSA token)
3. Something you are (e.g. iris scan, fingerprint)

If a banking website allows users to initiate transactions online, it could use two-factor authentication by taking 1) the password used to log in and 2) a PIN sent over SMS to the user's registered phone, and then requiring the user to enter the PIN before completing the transaction. This requires something the user knows (the password) and has (the phone that receives the PIN).

A 'chip-and-pin' banking card uses two-factor authentication by requiring users to have the card with them (something they have) and also enter a PIN when performing a purchase (something they know). A 'chip-and-pin' card is of no use without the PIN, and likewise knowing the PIN is useless if you do not have the card.

What to Review
When reviewing code modules which perform out-of-band functions, some common issues to look out for include:

1. Recognize the risk of the system being abused. Attackers would like to flood someone with SMS messages from your site, or e-mails to random people. Ensure the following points are addressed.

2. When possible, only authenticated users can access links that cause an out-of-band feature to be invoked (forgot password being an exception).

3. Rate limit the interface, so users with infected machines or hacked accounts can't use it to flood out-of-band messages to a user.

4. Do not allow the feature to accept the destination from the user; only use registered phone numbers, e-mails and addresses.

5. For high risk sites (e.g. banking) the user's phone number can be registered in person rather than via the web site.

6. Do not send any personal or authentication information in the out-of-band communication.

7. Ensure any PINs or passwords sent over out-of-band channels have a short life-span and are random.

8. A consideration can be to prevent SMS messages being sent to the device currently conducting the browsing session (i.e. where the user is browsing on their iPhone and the SMS is sent to that same phone). However this can be hard to enforce.
a. If possible give users the choice of band they wish to use. For banking sites, Zitmo malware on mobile devices (see references) can intercept the SMS messages; however iOS devices have not been affected by this malware yet, so users could choose to have SMS PINs sent to their Apple devices, and when on Android they could use recorded voice messages to receive the PIN.

9. In typical deployments, specialized hardware/software separate from the web application will handle the out-of-band communication, including the creation of temporary PINs and possibly passwords. In this scenario there is no need to expose the PIN/password to your web application (increasing the risk of exposure); instead the web application should query the specialized hardware/software with the PIN/password supplied by the end user and receive a positive or negative response.

Many sectors, including the banking sector, have regulations requiring the use of two-factor authentication for certain types of transactions. In other cases two-factor authentication can reduce costs due to fraud and re-assure customers of the security of a website.

References
• https://www.owasp.org/index.php/Forgot_Password_Cheat_Sheet
• http://securelist.com/blog/virus-watch/57860/new-zitmo-for-android-and-blackberry/

8.7 Session Management
Overview
A web session is a sequence of network HTTP request and response transactions associated with the same user. Session management, or state, is needed by web applications that require the retention of information or status about each user for the duration of multiple requests. Therefore, sessions provide the ability to establish variables – such as access rights and localization settings – which will apply to each and every interaction a user has with the web application for the duration of the session.

Description
The code reviewer needs to understand what session techniques the developers used, and how to spot vulnerabilities that may create potential security risks. Web applications can create sessions to keep track of anonymous users after the very first user request. An example would be maintaining the user language preference. Additionally, web applications will make use of sessions once the user has authenticated. This ensures the ability to identify the user on any subsequent requests, to apply security access controls, to authorize access to the user's private data, and to increase the usability of the application. Therefore, current web applications can provide session capabilities both pre and post authentication.

The session ID or token binds the user authentication credentials (in the form of a user session) to the user's HTTP traffic and the appropriate access controls enforced by the web application. The complexity of these three components (authentication, session management, and access control) in modern web applications, plus the fact that their implementation and binding rests in the web developer's hands (as web development frameworks do not provide strict relationships between these modules), makes the implementation of a secure session management module very challenging.

The disclosure, capture, prediction, brute force, or fixation of the session ID will lead to session hijacking (or sidejacking) attacks, where an attacker is able to fully impersonate a victim user in the web application. Attackers can perform two types of session hijacking attacks: targeted or generic. In a targeted attack, the attacker's goal is to impersonate a specific (or privileged) web application victim user. For generic attacks, the attacker's goal is to impersonate (or get access as) any valid or legitimate user in the web application.

With the goal of implementing secure session IDs, the generation of identifiers (IDs or tokens) must meet the following properties:

• The name used by the session ID should not be extremely descriptive nor offer unnecessary details about the purpose and meaning of the ID.

• It is recommended to change the default session ID name of the web development framework to a generic name, such as "id".

• The session ID length must be at least 128 bits (16 bytes), and the session ID value must provide at least 64 bits of entropy.
• The session ID content (or value) must be meaningless, to prevent information disclosure attacks where an attacker is able to decode the contents of the ID and extract details of the user, the session, or the inner workings of the web application.

It is recommended to create cryptographically strong session IDs through the use of cryptographic hash functions such as SHA-2 (256 bits).

What to Review
Require cookies when your application includes authentication. The code reviewer needs to understand what information is stored in the application cookies. Risk management is needed to decide whether sensitive information is stored in the cookie, requiring SSL for the cookie.

.Net ASPX web.config

Sample 8.2

<authentication mode="Forms">
  <forms loginUrl="member_login.aspx"
    cookieless="UseCookies"
    requireSSL="true"
    path="/MyApplication" />
</authentication>

Java web.xml

Sample 8.3

<session-config>
  <cookie-config>
    <secure>true</secure>
  </cookie-config>
</session-config>

PHP.INI

Sample 8.4

void session_set_cookie_params ( int $lifetime [, string $path [, string $domain [, bool $secure = true [, bool $httponly = true ]]]] )

Session Expiration
In reviewing session handling code the reviewer needs to understand what expiration timeouts are needed by the web application, or whether default session timeouts are being used. Insufficient session expiration by the web application increases the exposure to other session-based attacks, since for the attacker to be able to reuse a valid session ID and hijack the associated session, it must still be active. Remember that for secure coding one of our goals is to reduce the attack surface of our application.

.Net ASPX

Sample 8.5

<system.web>
  <sessionState
    mode="InProc"
    cookieless="true"
    timeout="15" />
</system.web>

In ASPX the developer can change the default timeout for a session. This code in the web.config file sets the session timeout to 15 minutes; the default timeout for an ASPX session is 30 minutes.

Java

Sample 8.6

<session-config>
  <session-timeout>1</session-timeout>
</session-config>

PHP
PHP does not have a session timeout mechanism; PHP developers will need to create their own custom session timeout.

Session Logout/Ending
Web applications should provide mechanisms that allow security aware users to actively close their session once they have finished using the web application.

In .Net ASPX the Session.Abandon() method destroys all the objects stored in a Session object and releases their resources. If you do not call the Abandon method explicitly, the server destroys these objects when the session times out. You should use it when the user logs out. Session.Clear() removes all keys and values from the session but does not change the session ID; use this command if you don't want the user to re-login but want to reset all the session specific data.

Session Attacks
Generally three sorts of session attacks are possible:

• Session Hijacking: stealing someone's session-id, and using it to impersonate that user.

• Session Fixation: setting someone's session-id to a predefined value, and impersonating them using that known value.

• Session Elevation: when the importance of a session is changed, but its ID is not.
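To make the session ID guidance above concrete (at least 128 bits in length, at least 64 bits of entropy, meaningless content), here is a minimal Java sketch of generating such a token with a cryptographically strong PRNG (the class and method names are illustrative only):

import java.security.SecureRandom;
import java.util.Base64;

public final class SessionTokenGenerator {

    private static final SecureRandom RANDOM = new SecureRandom();

    // Returns a 128-bit (16 byte) random token, URL-safe Base64 encoded.
    // The value carries no user or application meaning, as recommended above.
    public static String newToken() {
        byte[] bytes = new byte[16];
        RANDOM.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }
}

In practice the session management of the framework (ASP.NET, Java EE, PHP) should be used rather than hand-rolled tokens; the sketch only illustrates the size and randomness properties to look for during review.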
• Geographical location checking can help detect simple hijacking scenarios. Advanced hijackers use the same IP (or range) as the victim.

• An active session should be warned when it is accessed from another location.

• An active user should be warned when he or she has an active session somewhere else (if the policy allows multiple sessions for a single user).

Session Fixation
1. All the session-ids should be generated by the application, and then stored in a pool to be checked against later. The application is the sole authority for session generation.

2. If the application sees a new session-id that is not present in the pool, it should be rejected and a new session-id should be advertised. This is the sole method to prevent fixation.

Session Elevation
• Whenever a session is elevated (login, logout, certain authorizations), it should be rolled.

• Many applications create sessions for visitors as well (and not just authenticated users). They should definitely roll the session on elevation, because the user expects the application to treat them securely after they login.

• When a down-elevation occurs, the session information regarding the higher level should be flushed.

• Sessions should be rolled when they are elevated. Rolling means that the session-id should be changed, and the session information should be transferred to the new id.

Java

Sample 8.8

request.getSession(false).invalidate();
// and then create a new session with
// getSession(true) or getSession()

PHP.INI

Sample 8.9

session.use_trans_sid = 0
session.use_only_cookies

References
• https://www.owasp.org/index.php/SecureFlag
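To make the session rolling guidance above concrete, here is a minimal servlet sketch that changes the session ID at login while carrying selected attributes over to the new session (a hypothetical helper, not taken from the guide's samples):

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public final class SessionRoller {

    // Invalidate the old session and create a new one, copying only the
    // attributes the application still needs, so a fixated or pre-login
    // session ID cannot be reused after elevation.
    public static HttpSession rollSession(HttpServletRequest request, String attributeName, Object value) {
        HttpSession oldSession = request.getSession(false);
        if (oldSession != null) {
            oldSession.invalidate();
        }
        HttpSession newSession = request.getSession(true);
        newSession.setAttribute(attributeName, value);
        return newSession;
    }
}

On Servlet 3.1 containers, HttpServletRequest.changeSessionId() offers a simpler way to change the ID while keeping the session contents.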
9.1 Overview
What is Cross-Site Scripting (XSS)?
Cross-site scripting (XSS) is a type of coding vulnerability usually found in web applications. XSS enables attackers to inject malicious scripts into web pages viewed by other users. XSS may allow attackers to bypass access controls such as the same-origin policy. It is one of the most common vulnerabilities according to the OWASP Top 10. Symantec, in its annual threat report, found that XSS was the number two vulnerability found on web servers. The severity/risk of this vulnerability may range from a nuisance to a major security risk, depending on the sensitivity of the data handled by the vulnerable site and the nature of any security mitigation implemented by the site's organization.
Description
There are three types of XSS: Reflected XSS (non-persistent), Stored XSS (persistent), and DOM-based XSS. Each of these types has a different means of delivering a malicious payload to the server. The important takeaway is that the consequences are the same.
What to Review
Cross-site scripting flaws can be difficult to identify and remove from a web application. The best practice to search for flaws is to perform an intense code review and search for all places where user input through an HTTP request could possibly make its way into the HTML output.
2. When data is transmitted from the server to the client, untrusted data must be properly encoded in JSON format
and the HTTP response MUST have a Content-Type of application/json. Do not assume data from the server is safe.
Best practice is to always check data.
3. When introduced into the DOM, untrusted data MUST be introduced using one of the following APIs:
a. Node.textContent
b. document.createTextNode
c. Element.setAttribute (second parameter only)
The code reviewer should also be aware that HTML tags (such as <img src…>, <iframe…>, <bgsound src…>, etc.) can be used to transmit malicious JavaScript.
Web application vulnerability automated tools/scanners can help to find cross-site scripting flaws, but they cannot find them all; this is why manual code reviews are important. Manual code reviews won't catch everything either, but a defense in depth approach is always the best approach, based on your level of risk.

OWASP Zed Attack Proxy (ZAP) is an easy to use integrated penetration-testing tool for finding vulnerabilities in web applications. ZAP provides automated scanners as well as a set of tools that allow you to find security vulnerabilities manually. It acts as a web proxy that you point your browser to, so it can see the traffic going to a site, and allows you to spider, scan, fuzz, and attack the application. There are other scanners available, both open source and commercial.
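As an illustration of the pattern to search for, here is a hypothetical servlet (not from the guide) that echoes a request parameter straight into the HTML output; any occurrence of this shape is a candidate XSS finding unless output encoding is applied:

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class GreetingServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String name = req.getParameter("name"); // untrusted input
        resp.setContentType("text/html");
        // Vulnerable: the parameter reaches the HTML output with no encoding,
        // so name=<script>alert(1)</script> executes in the victim's browser.
        resp.getWriter().println("<p>Hello " + name + "</p>");
    }
}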
2. The .Net framework 4.0 does not allow page validation to be turned off. Hence if the programmer wants to turn off page validation the developer will need to regress back to 2.0 validation mode: <httpRuntime requestValidationMode="2.0" />

3. The code reviewer needs to make sure page validation is never turned off anywhere, and if it is, to understand why and the risks it opens the organization to: <%@ Page Language="C#" ValidateRequest="false" %>

.NET MVC
When MVC web apps are exposed to malicious XSS code, they will throw an error like the following one:

Figure 6: Example .Net XSS Framework Error

http://msdn.microsoft.com/en-us/library/wdek0zbf.aspx

2. OWASP Java Encoder Project
https://www.owasp.org/index.php/OWASP_Java_Encoder_Project

HTML Entity
HTML elements which contain user controlled data or data from untrusted sources should be reviewed for contextual output encoding. In the case of HTML entities we need to help ensure HTML entity encoding is performed:

HTML Entity Encoding is required
& --> &amp;
< --> &lt;
> --> &gt;
" --> &quot;
' --> &#x27;

It is recommended to review where/if untrusted data is placed within entity objects. Searching the source code for the following encoders may help establish whether HTML entity encoding is being done in the application, and in a consistent manner.

OWASP Java Encoder Project
https://www.owasp.org/index.php/OWASP_Java_Encoder_Project

OWASP ESAPI
http://code.google.com/p/owasp-esapi-java/source/browse/trunk/src/main/java/org/owasp/esapi/codecs/HTMLEntityCodec.java

String safe = ESAPI.encoder().encodeForHTML( request.getParameter( "input" ) );

References
• https://cwe.mitre.org/data/definitions/79.html
• http://webblaze.cs.berkeley.edu/papers/scriptgard.pdf
• http://html5sec.org
• https://cve.mitre.org

9.2 HTML Attribute Encoding
HTML attributes may contain untrusted data. It is important to determine if any of the HTML attributes on a given page contain data from outside the trust boundary.

Some HTML attributes are considered safer than others, such as align, alink, alt, bgcolor, border, cellpadding, cellspacing, class, color, cols, colspan, coords, dir, face, height, hspace, ismap, lang, marginheight, marginwidth, multiple, nohref, noresize, noshade, nowrap, ref, rel, rev, rows, rowspan, scrolling, shape, span, summary, tabindex, title, usemap, valign, value, vlink, vspace, width.

When reviewing code for XSS we need to look for HTML attributes such as the following:

<input type="text" name="fname" value="UNTRUSTED DATA">

<script>var currentValue='UNTRUSTED DATA';</script>
<script>someFunction('UNTRUSTED DATA');</script>    attack: ');/* BAD STUFF */

Potential solutions:
OWASP HTML Sanitizer Project
OWASP JSON Sanitizer Project

ESAPI JavaScript escaping can be called in this manner. For example (note this is an example of how NOT to use JavaScript):

<script> window.setInterval('...EVEN IF YOU ESCAPE UNTRUSTED DATA YOU ARE XSSED HERE...'); </script>

eval()

Sample 9.1

var txtField = "A1";
var txtUserInput = "'[email protected]';alert(1);";
eval( "document.forms[0]." + txtField + ".value =" + txtUserInput );

Sample 9.2

var txtAlertMsg = "Hello World: ";
var txtUserInput = "test<script>alert(1)<\/script>";
$("#message").html( txtAlertMsg + "" + txtUserInput + "" );

Using the jQuery .text() method rather than .html() avoids interpreting untrusted input as HTML in cases like Sample 9.2.
Nested Contexts
It is best to avoid nested contexts, such as an element attribute calling a JavaScript function; these contexts can really mess with your mind. When the browser processes an onclick attribute, for example, it will first HTML decode the contents of the attribute and then pass the result to the JavaScript interpreter. So we have two contexts here, HTML and JavaScript (two browser parsers), and we need to apply "layered" encoding in the RIGHT order:
1. JavaScript encode
2. HTML Attribute encode
so that it "unwinds" properly and is not vulnerable.
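A short sketch of the layered encoding order described above, using the OWASP Java Encoder project referenced earlier (the variable names and the doSomething client-side function are illustrative assumptions):

import org.owasp.encoder.Encode;

public class NestedContextExample {

    // Build an onclick attribute value that is first JavaScript-encoded,
    // then HTML-attribute-encoded, so each browser parser "unwinds" one layer.
    public static String onclickAttribute(String untrusted) {
        String jsEncoded = Encode.forJavaScript(untrusted);
        return Encode.forHtmlAttribute("doSomething('" + jsEncoded + "')");
    }
}

Encode.forJavaScript and Encode.forHtmlAttribute are part of the OWASP Java Encoder library; the surrounding markup that would consume this attribute value is omitted.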
A4 INSECURE DIRECT OBJECT REFERENCE

10.1 Overview
Insecure Direct Object Reference is a commonplace vulnerability in web applications that provide varying levels of access or expose an internal object to the user. Examples of what can be exposed are database records, URLs, files, account numbers, or allowing users to bypass web security controls by manipulating URLs. The user may be authorized to access the web application, but not a specific object, such as a database record, specific file or even a URL. Potential threats can come from an authorized user of the web application who alters a parameter value that directly points to an object that the user isn't authorized to access. If the application doesn't verify the user for that specific object, it can result in an insecure direct object reference flaw.

10.2 Description
The source of the problem of this risk is based on the manipulation or updating of data generated previously at the server side.

What to Review

SQL Injection
An example of an attack making use of this vulnerability could be a web application where the user has already been validated. Now the user wants to view his open invoices via another web page. The application passes the account number using the URL string, and then uses the unverified data in a SQL call that accesses account information:

String query = "SELECT * FROM accts WHERE account = ?";
PreparedStatement pstmt = connection.prepareStatement(query , ... );
pstmt.setString( 1, request.getParameter("acct"));
ResultSet results = pstmt.executeQuery();

The attacker simply modifies the 'acct' parameter in their browser to send whatever account number they want. If the application does not perform user verification, the attacker can access any user's account, instead of only the intended customer's account.

HTTP POST requests
A Cyber Security Analyst (Ibrahim Raafat) found an insecure direct object reference vulnerability in Yahoo! Suggestions. By using Live HTTP Headers to check the content of the POST request he could see:

prop=addressbook&fid=367443&crumb=Q4.PSLBfBe.&cid=1236547890&cmd= delete_comment

where parameter 'fid' is the topic id and 'cid' is the respective comment ID. While testing, he found that changing the fid and cid parameter values allowed him to delete comments from the forum that were actually posted by another user.

Next, he used the same method to test the post deletion mechanism and found a similar vulnerability there. A normal HTTP POST request for deleting a post is:

POST cmd=delete_item&crumb=SbWqLz.LDP0

He found that appending the fid (topic id) variable to the URL allowed him to delete the respective posts of other users:

POST cmd=delete_item&crumb=SbWqLz.LDP0&fid=xxxxxxxx

After further analysis he found an attacker could modify the parameters in HTTP POST requests to delete 1.5 million records entered by Yahoo users.

Indirect Reference Maps
Moreover, attackers may find out the internal naming conventions and infer the method names for operational functionality. For instance, if an application has URLs for retrieving detailed information about an object, attackers will try to use similar URLs to perform modifications on the object.

Also, if the web application is returning an object listing part of the directory path or object name, an attacker can modify these:

xyz.com/Customers/Update/2148102445 or xyz.com/Customers/Modify.aspx?ID=2148102445
Or xyz.com/Customers/admin

Data Binding Technique
Another popular feature seen in most design frameworks (JSP/Struts, Spring) is data binding, where HTTP GET request parameters or HTTP POST variables get directly bound to the variables of the corresponding business/command object. Binding here means that the instance variables of such classes get automatically initialized with the request parameter values based on their names. Consider a sample design where the business logic class binds the business object with the request parameters.

The flaw in such a design is that the business objects may have variables that are not dependent on the request parameters. Such variables could be key variables like price, max limit, role, etc., having static values or depending on some server side processing logic. A threat in such scenarios is that an attacker may supply additional parameters in the request and try to bind values for unexposed variables of the business object class.
• In any case the application must accept only desired inputs from the user, and the rest must be rejected or left unbound. Initialization of unexposed variables, if any, must take place after the binding logic.

Review Criteria
Review the application design and check if it incorporates data binding logic. If it does, check whether the business objects/beans that get bound to the request parameters have unexposed variables that are meant to have static values. If such variables are initialized before the binding logic, this attack will work successfully.

What the Code Reviewer needs to do:
The code reviewer needs to map out all locations in the code being reviewed where user input is used to reference objects directly. These locations include where user input is used to access a database row, a file, application pages, etc. The reviewer needs to understand whether modification of the input used to reference objects can result in the retrieval of objects the user is not authorized to view.

If untrusted input is used to access objects on the server side, then proper authorization checks must be employed to ensure data cannot be leaked. Proper input validation will also be required to ensure the untrusted input is properly understood and used by the server side code. Note that this authorization and input validation must be performed on the server side; client side code can be bypassed by the attacker.

Binding issues in MVC .NET
A.K.A. Over-Posting, A.K.A. Mass Assignment
In the MVC framework, mass assignment is a mechanism that allows us to update our models with data coming in a request as HTTP form fields. As the data that needs to be updated comes in a collection of form fields, a user could send a request and modify other fields in the model that may not be in the form and that the developer didn't intend to be updated.

Depending on the models you create, there might be sensitive data that you would not like to be modified. The vulnerability is exploited when a malicious user modifies a model's fields which are not exposed to the user via the view, by adding additional model parameters that change hidden model values.

Sample 10.2

ID: <%= Html.TextBox("ID") %> <br>
Name: <%= Html.TextBox("Name") %> <br>
<-- no isAdmin here!

The corresponding HTML for this model contains 2 fields: ID and Name. If an attacker adds the isAdmin parameter to the form and submits it, they can change the model object above. So a malicious attacker may set isAdmin=true.

Recommendations:
1 - Use a model which does not have values the user should not edit.
2 - Use the bind method and whitelist attributes which can be updated.
3 - Use the controller.UpdateModel method to exclude certain attribute updates.

References
• OWASP Top 10-2007 on Insecure Dir Object References
• ESAPI Access Reference Map API
• ESAPI Access Control API (See isAuthorizedForData(), isAuthorizedForFile(), isAuthorizedForFunction() )
• https://www.owasp.org/index.php/Category:OWASP_Application_Security_Verification_Standard_Project
• https://cwe.mitre.org/data/definitions/639.html
• https://cwe.mitre.org/data/definitions/22.html
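The ESAPI Access Reference Map cited in the references implements the indirect reference map idea discussed above. As a minimal, hand-rolled Java sketch of the same concept (purely illustrative; this class is not part of ESAPI or of the guide's samples):

import java.security.SecureRandom;
import java.util.Base64;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public final class IndirectReferenceMap {

    private final Map<String, Long> indirectToDirect = new ConcurrentHashMap<>();
    private final SecureRandom random = new SecureRandom();

    // Store the real (direct) database key server-side and hand the client
    // an opaque random token instead of the key itself.
    public String addDirectReference(long accountId) {
        byte[] bytes = new byte[16];
        random.nextBytes(bytes);
        String token = Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
        indirectToDirect.put(token, accountId);
        return token;
    }

    // Resolve the token back to the direct reference; an unknown token means
    // the reference was never issued to this user and should be treated as
    // an authorization failure.
    public Long getDirectReference(String token) {
        return indirectToDirect.get(token);
    }
}

In practice the map would be kept per user session, so that tokens issued to one user cannot be replayed by another.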
A5 SECURITY MISCONFIGURATION
Many modern applications are developed on frameworks. These frameworks reduce the amount of work the developer has to do, as the framework handles much of the "housekeeping". The code developed will extend the functionality of the framework. It is here that knowledge of a given framework, and of the language in which the application is implemented, is of paramount importance. Much of the transactional functionality may not be visible in the developer's code, being handled instead in "parent" classes. The reviewer should be aware and knowledgeable of the underlying framework.
Web applications do not execute in isolation, they typically are deployed within an application server frame-
work, running within an operating system on a physical host, within a network.
Secure operating system configuration (also called hardening) is not typically within the scope of code review.
For more information, see the Center for Internet Security operating system benchmarks.
Networks today consist of much more than routers and switches providing transport services. Filtering switch-
es, VLANs (virtual LANs), firewalls, WAFs (Web Application Firewall), and various middle boxes (e.g. reverse
proxies, intrusion detection and prevention systems) all provide critical security services when configured to
do so. This is a big topic, but outside the scope of this web application code review guide. For a good summary,
see the SANS (System Administration, Networking, and Security) Institute Critical Control 10: Secure Configu-
rations for Network Devices such as Firewalls, Routers, and Switches.
Application server frameworks have many security related capabilities. These capabilities are enabled and con-
figured in static configuration files, commonly in XML format, but may also be expressed as annotations within
the code.
The Struts framework has a validator engine, which relies on regular expressions to validate the input data. The beauty of the validator is that no code has to be written for each form bean. (A form bean is the Java object which receives the data from the HTTP request.) The validator is not enabled by default in Struts. To enable the validator, a plug-in must be defined in the <plug-in> section of struts-config.xml. The properties defined tell the Struts framework where the custom validation rules are defined (validation.xml) and where the definitions of the actual rules themselves live (validation-rules.xml).
Without a proper understanding of the Struts framework, and by simply auditing the Java code, one would not see any validation being executed, nor see the relationship between the defined rules and the Java functions.
The action mappings define the action taken by the application upon receiving a request. Here, in sample
11.1, we can see that when the URL contains “/login” the LoginAction shall be called. From the action map-
pings we can see the transactions the application performs when external input is received.
The example web component descriptor in sample 11.2 (included in the "web.xml" file) defines a Catalog servlet, a "manager" role, a SalesInfo resource within the servlet accessible via GET and POST requests, and specifies that only users with the "manager" role, using SSL and successfully using HTTP basic authentication, should be granted access.

<!-- Specify which Users Can Access Protected Resources -->
<auth-constraint>
<role-name>manager</role-name>
</auth-constraint>
<!-- Specify Secure Transport using SSL (confidential guarantee) -->
<user-data-constraint>
<transport-guarantee>CONFIDENTIAL</transport-guarantee>
</user-data-constraint>
</security-constraint>

<!-- Specify HTTP Basic Authentication Method -->
<login-config>
<auth-method>BASIC</auth-method>
<realm-name>file</realm-name>
</login-config>
</web-app>

Security roles can also be declared for enterprise Java beans in the "ejb-jar.xml" file, as seen in sample 11.3.

Sample 11.3

<ejb-jar>
<assembly-descriptor>
<security-role>
<description>The single application role</description>
<role-name>TheApplicationRole</role-name>
</security-role>
</assembly-descriptor>
</ejb-jar>

For beans, however, rather than specifying access to resources within servlets, access to bean methods is specified. The example in sample 11.4 illustrates several types of method access constraints for several beans.

Sample 11.4

<ejb-jar>
<assembly-descriptor>
<method-permission>
<description>The employee and temp-employee roles may access any
method of the EmployeeService bean</description>
<role-name>employee</role-name>
<role-name>temp-employee</role-name>
<method>
<ejb-name>EmployeeService</ejb-name>
<method-name>*</method-name>
</method>
</method-permission>
<method-permission>
<description>The employee role may access the findByPrimaryKey,
getEmployeeInfo, and the updateEmployeeInfo(String) method of
the AardvarkPayroll bean</description>
<role-name>employee</role-name>
<method>
<ejb-name>AardvarkPayroll</ejb-name>
<method-name>findByPrimaryKey</method-name>
</method>
<method>
<ejb-name>AardvarkPayroll</ejb-name>
<method-name>getEmployeeInfo</method-name>
</method>
<method>
<ejb-name>AardvarkPayroll</ejb-name>
<method-name>updateEmployeeInfo</method-name>
<method-params>
<method-param>java.lang.String</method-param>
</method-params>
</method>
</method-permission>
<method-permission>
<description>The admin role may access any method of the
EmployeeServiceAdmin bean</description>
<role-name>admin</role-name>
<method>
<ejb-name>EmployeeServiceAdmin</ejb-name>
<method-name>*</method-name>
</method>
</method-permission>
<method-permission>
<description>Any authenticated user may access any method of the
EmployeeServiceHelp bean</description>
<unchecked/>
<method>
<ejb-name>EmployeeServiceHelp</ejb-name>
<method-name>*</method-name>
</method>
</method-permission>
<exclude-list>
<description>No fireTheCTO methods of the EmployeeFiring bean may be
used in this deployment</description>
<method>
<ejb-name>EmployeeFiring</ejb-name>
<method-name>fireTheCTO</method-name>
</method>
</exclude-list>
</assembly-descriptor>
</ejb-jar>
For example the code in sample 11.5 allows employees and managers to add movies to the persistent store, anyone to list movies, but only managers may delete movies.

Sample 11.5

public class Movies {

    private EntityManager entityManager;

    @RolesAllowed({"Employee", "Manager"})
    public void addMovie(Movie movie) throws Exception {
        entityManager.persist(movie);
    }

    @RolesAllowed({"Manager"})
    public void deleteMovie(Movie movie) throws Exception {
        entityManager.remove(movie);
    }

    @PermitAll
    public List<Movie> getMovies() throws Exception {
        Query query = entityManager.createQuery("SELECT m from Movie as m");
        return query.getResultList();
    }
}

Code review should look for such annotations. If present, ensure they reflect the correct roles and permissions, and are consistent with any declared role permissions in the "ejb-jar.xml" file.

Filters are especially powerful, and a code review should validate they are used unless there is a compelling reason not to.

Jetty
Jetty adds several security enhancements:
• Limiting form content
• Obfuscating passwords

The maximum form content size and number of form keys can be configured at server and web application level in the "jetty-web.xml" file.

Sample 11.6

<Configure class="org.eclipse.jetty.webapp.WebAppContext">
…
<Set name="maxFormContentSize">200000</Set>
<Set name="maxFormKeys">200</Set>
</Configure>

<configure class="org.eclipse.jetty.server.Server">
...
<Call name="setAttribute">
<Arg>org.eclipse.jetty.server.Request.maxFormContentSize</Arg>
<Arg>100000</Arg>
</Call>
<Call name="setAttribute">
<Arg>org.eclipse.jetty.server.Request.maxFormKeys</Arg>
<Arg>2000</Arg>
</Call>
</configure>
Jetty also supports the use of obfuscated passwords in Jetty XML files where a plain text password is usually needed. Sample 11.7 shows example code setting the password for a JDBC Datasource with obfuscation (the obfuscated password is generated by the Jetty org.eclipse.jetty.util.security.Password utility).

Sample 11.7

<Set name="jdbcUrl">jdbc:mysql://localhost:3306/foo</Set>
<Set name="username">dbuser</Set>
<Set name="password">
<Call class="org.eclipse.jetty.util.security.Password" name="deobfuscate">
<Arg>OBF:1ri71v1r1v2n1ri71shq1ri71shs1ri71v1r1v2n1ri7</Arg>
</Call>
</Set>
<Set name="minConnectionsPerPartition">5</Set>
<Set name="maxConnectionsPerPartition">50</Set>
<Set name="acquireIncrement">5</Set>
<Set name="idleConnectionTestPeriod">30</Set>
</New>
</Arg>
</New>

JBoss AS
JBoss Application Server, like Jetty, allows password obfuscation (called password masking in JBoss) in its XML configuration files. After using the JBoss password utility to create the password mask, replace any occurrence of a masked password in XML configuration files with the following annotation.

Sample 11.8

<annotation>@org.jboss.security.integration.password.Password
(securityDomain=MASK_NAME,methodName=setPROPERTY_NAME)
</annotation>

Table 13: WebLogic Security Parameters
• externally-defined: Role to principal mappings are externally defined in the WebLogic Admin Console
• run-as-principal-name: Assign a principal to a role when running as that role
• security-role-assignment: Explicitly assign principals to a role

More information on WebLogic additional deployment descriptors may be found at weblogic.xml Deployment Descriptors.

For general guidelines on securing web applications running within WebLogic, see the Programming WebLogic Security guide and the NSA's BEA WebLogic Platform Security Guide.

11.5 Programmatic Configuration: J2EE
The J2EE API for programmatic security consists of methods of the EJBContext interface and the HttpServletRequest interface. These methods allow components to make business-logic decisions based on the security role of the caller or remote user (there are also methods to authenticate users, but that is outside the scope of secure deployment configuration).

The J2EE APIs that interact with J2EE security configuration include:

• getRemoteUser, which determines the user name with which the client authenticated

• isUserInRole, which determines whether a remote user is in a specific security role

• getUserPrincipal, which determines the principal name of the current user and returns a java.security.Principal object

Use of these programmatic APIs should be reviewed to ensure consistency with the configuration. Specifically, the security-role-ref element should be declared in the deployment descriptor with a role-name subelement containing the role name to be passed to the isUserInRole method.

The code in sample 11.9 demonstrates the use of programmatic security for the purposes of programmatic login and establishing identities and roles. This servlet does the following:

try {
    request.login(userName, password);
} catch(ServletException ex) {
    out.println("Login Failed with a ServletException.." + ex.getMessage());
    return;
}
out.println("After Login..."+"<br><br>");
out.println("IsUserInRole?.."

(Figure: IIS configuration file hierarchy: ApplicationHost.config in windows\system32\inetsrv applies to http://localhost; web.config in c:\inetpub\wwwroot applies to http://localhost; web.config in d:\MyApp applies to http://localhost/MyApp.)

It is possible to provide a file web.config at the root of the virtual directory for a web application. If the file is
absent, the default configuration settings in machine.config will be used. If the file is present, any settings in web.config will override the default settings.

Sample 11.10

<authentication mode="Forms">
<forms name="name"
loginUrl="url"
protection="Encryption"
timeout="30" path="/"
requireSSL="true"
slidingExpiration="false">
<credentials passwordFormat="Clear">
<user name="username" password="password"/>
</credentials>
</forms>
<passport redirectUrl="internal"/>
</authentication>

Many of the important security settings are not set in the code, but in the framework configuration files. Knowledge of the framework is of paramount importance when reviewing framework-based applications. Some examples of framework specific parameters in the web.config file are shown in table 14.

Table 14: Parameters In The Web.config File
• authentication mode: The default authentication mode is ASP.NET forms-based authentication.
• loginUrl: Specifies the URL where the request is redirected for login if no valid authentication cookie is found.
• protection: Specifies that the cookie is encrypted using 3DES or DES, but data validation is not performed on the cookie. Beware of plaintext attacks.

Table 14: IIS Security Parameters
• denyQueryStringSequences: Prohibited query strings
• filteringRules: Custom filtering rules

These parameters are configured in the <system.webServer><security><requestFiltering> section. The example in sample 11.13:

• Denies access to two URL sequences. The first sequence prevents directory traversal and the second sequence prevents access to alternate data streams.

• Sets the maximum length for a URL to 2KB and the maximum length for a query string to 1KB.

• Denies access to unlisted file name extensions and unlisted HTTP verbs.

Sample 11.13

<configuration>
<system.webServer>
<security>
<requestFiltering>
<denyUrlSequences>
<add sequence=".." />
<add sequence=":" />
</denyUrlSequences>
<fileExtensions allowUnlisted="false" />
<requestLimits maxUrl="2048" maxQueryString="1024" />
<verbs allowUnlisted="false" />
</requestFiltering>
</security>
</system.webServer>
</configuration>

For guidelines on securing the overall configuration of Microsoft IIS, see the IIS security configuration guidance referenced below. IIS supports basic, client certificate, digest, IIS client certificate, and Windows authentication methods. They are configured in the <system.webServer><security><authentication> section.

The example in sample 11.11 disables anonymous authentication for a site named MySite, then enables both basic authentication and windows authentication for the site.

Sample 11.11

<location path="MySite">
<system.webServer>
<security>
<authentication>
<anonymousAuthentication enabled="false" />
<basicAuthentication enabled="true" defaultLogonDomain="MySite" />
<windowsAuthentication enabled="true" />
</authentication>
</security>
</system.webServer>
</location>

IIS authorization configuration allows specification of users' access to the site or server, and is configured in the <system.webServer><security><authorization> section.

The configuration in sample 11.12 removes the default IIS authorization settings, which allow all users access to Web site or application content, and then configures an authorization rule that allows only users with administrator privileges to access the content.

Sample 11.12

IIS allows specifying whether SSL is supported, is required, whether client authentication is supported or required, and cipher strength. It is configured in the <system.webServer><security><access> section. The example in figure A5.13 specifies SSL as required for all connections to the site MySite.

IIS allows restrictions on source IP addresses or DNS names. It is configured in the <system.webServer><security><ipSecurity> section, as shown in sample 11.15 where the example configuration denies access to the IP address 192.168.100.1 and to the entire 169.254.0.0 network:

Sample 11.15

<location path="Default Web Site">
<system.webServer>
<security>
<ipSecurity>
<add ipAddress="192.168.100.1" />
<add ipAddress="169.254.0.0" subnetMask="255.255.0.0" />
</ipSecurity>
</security>
</system.webServer>
</location>

Detailed information on IIS security configuration can be found at IIS Security Configuration. Specific security feature configuration information can be found at Authentication, Authorization, SSL, Source IP, Request Filtering, and Custom Request Filtering[12].

11.8 Programmatic Configuration: Microsoft IIS
Microsoft IIS security configuration can also be programmatically set from various languages:

• appcmd.exe set config
• C#
• Visual Basic
• JavaScript

For example, disabling anonymous authentication for a site named MySite, then enabling both basic authentication and windows authentication for the site (as done via configuration in the section above), can be accomplished from the command line using the commands in sample 11.16.
Sample 11.16

appcmd.exe set config "MySite" -section:system.webServer/security/authentication
/anonymousAuthentication /enabled:"False" /commit:apphost
appcmd.exe set config "MySite" -section:system.webServer/security/authentication
/basicAuthentication /enabled:"True" /commit:apphost
appcmd.exe set config "MySite" -section:system.webServer/security/authentication
/windowsAuthentication /enabled:"True" /commit:apphost

Alternatively the same authentication setup can be coded programmatically, as in sample 11.17.

Sample 11.17

using System;
using System.Text;
using Microsoft.Web.Administration;

internal static class Sample {

    private static void Main() {
        using (ServerManager serverManager = new ServerManager()) {
            Configuration config = serverManager.GetApplicationHostConfiguration();
            ConfigurationSection anonymousAuthenticationSection =
                config.GetSection("system.webServer/security/authentication/anonymousAuthentication", "MySite");
            anonymousAuthenticationSection["enabled"] = false;
            ConfigurationSection basicAuthenticationSection =
                config.GetSection("system.webServer/security/authentication/basicAuthentication", "MySite");
            basicAuthenticationSection["enabled"] = true;
            ConfigurationSection windowsAuthenticationSection =
                config.GetSection("system.webServer/security/authentication/windowsAuthentication", "MySite");
            windowsAuthenticationSection["enabled"] = true;
            serverManager.CommitChanges();
        }
    }
}

The IIS built-in request filtering security feature allows undesired URL requests to be filtered out, but it is also possible to configure different kinds of filtering. To begin with, it is important to understand how the IIS pipeline works when a request is made. The following diagram shows the order of these modules.

Figure 8: IIS Request Filtering. The request passes through Request Filtering (high priority), Begin Request, the URL Rewrite module (medium priority), Authorize Request, Resolve Cache and End Request before the HTTP response is returned. (Yakushev, 2008)
When reviewing source code, special attention should be paid to configuration updates in security sections.

Sample 11.19

using System;
using System.Text;
using Microsoft.Web.Administration;

internal static class Sample
{
    private static void Main()
    {
        using (ServerManager serverManager = new ServerManager())
        {
            Configuration config = serverManager.GetWebConfiguration("Default Web Site");
            ConfigurationSection requestFilteringSection = config.GetSection("system.webServer/security/requestFiltering");
            ConfigurationElementCollection denyUrlSequencesCollection = requestFilteringSection.GetCollection("denyUrlSequences");
            ConfigurationElement addElement = denyUrlSequencesCollection.CreateElement("add");
            addElement["sequence"] = @"..";
            denyUrlSequencesCollection.Add(addElement);
            ConfigurationElement addElement1 = denyUrlSequencesCollection.CreateElement("add");
            addElement1["sequence"] = @":";
            denyUrlSequencesCollection.Add(addElement1);
            ConfigurationElement addElement2 = denyUrlSequencesCollection.CreateElement("add");
            addElement2["sequence"] = @"\";
            denyUrlSequencesCollection.Add(addElement2);
            serverManager.CommitChanges();
        }
    }
}

Filter Double-Encoded Requests
For example, the "../" (dot-dot-slash) characters are represented as %2E%2E%2F in hexadecimal. When the % symbol is encoded again, its hexadecimal representation is %25. The result of double encoding "../" (dot-dot-slash) is therefore %252E%252E%252F:

• The hexadecimal encoding of "../" is "%2E%2E%2F"
• Encoding the "%" character then gives "%25"
• The double encoding of "../" is therefore "%252E%252E%252F"

If you do not want IIS to serve double-encoded requests, use the following configuration (IIS Team, 2007):

Sample 11.20

<configuration>
<system.webServer>
<security>
<requestFiltering allowDoubleEscaping="false">
</requestFiltering>
</security>
</system.webServer>
</configuration>
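Double encoding can be made concrete by decoding the sequence twice. The following console sketch (the class name is illustrative, and System.Net.WebUtility is just one convenient decoder) shows how %252E%252E%252F only becomes "../" after a second decode, which is the behaviour the allowDoubleEscaping="false" setting guards against:

using System;
using System.Net;

internal static class DoubleEncodingDemo
{
    private static void Main()
    {
        string doubleEncoded = "%252E%252E%252F";
        // First decode: %25 becomes %, leaving the single-encoded form.
        string decodedOnce = WebUtility.UrlDecode(doubleEncoded);   // "%2E%2E%2F"
        // Second decode: the traversal sequence appears.
        string decodedTwice = WebUtility.UrlDecode(decodedOnce);    // "../"
        Console.WriteLine("{0} -> {1} -> {2}", doubleEncoded, decodedOnce, decodedTwice);
    }
}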
Filter Based on File Extensions
Using this filter you can have IIS allow or deny requests based on the file extension; when IIS rejects a request based on this feature, the error code logged is 404.7. The AllowExtensions and DenyExtensions options are the UrlScan equivalents.

<configuration>
<system.webServer>
<security>
<requestFiltering>
<fileExtensions allowUnlisted="true" >
<add fileExtension=".asp" allowed="false"/>
</fileExtensions>
</requestFiltering>
</security>
</system.webServer>
</configuration>

Filter by Verbs
When IIS rejects a request based on this feature, the error code logged is 404.6. This corresponds to the UseAllowVerbs, AllowVerbs, and DenyVerbs options in UrlScan.

In case you want the application to accept only certain types of verbs, it is necessary to first set allowUnlisted to 'false' and then set the verbs that you would like to allow (see the example, and the sketch after it):

<configuration>
<system.webServer>
<security>
<requestFiltering>
<verbs allowUnlisted="false">
<add verb="GET" allowed="true" />
</verbs>
</requestFiltering>
</security>
</system.webServer>
</configuration>
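Following the pattern of samples 11.17 and 11.19, the same verb filter could also be applied from code; this is only a sketch, and the site name "Default Web Site" is an example:

using Microsoft.Web.Administration;

internal static class VerbFilterSample
{
    private static void Main()
    {
        using (ServerManager serverManager = new ServerManager())
        {
            Configuration config = serverManager.GetWebConfiguration("Default Web Site");
            ConfigurationSection requestFiltering =
                config.GetSection("system.webServer/security/requestFiltering");
            // Reject any verb that is not explicitly allowed, then allow GET.
            ConfigurationElement verbsElement = requestFiltering.GetChildElement("verbs");
            verbsElement["allowUnlisted"] = false;
            ConfigurationElementCollection verbsCollection = verbsElement.GetCollection();
            ConfigurationElement getVerb = verbsCollection.CreateElement("add");
            getVerb["verb"] = "GET";
            getVerb["allowed"] = true;
            verbsCollection.Add(getVerb);
            serverManager.CommitChanges();
        }
    }
}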
Filter by URL Sequences
This configuration rejects requests that contain the ".." sequence in the URL:

<configuration>
<system.webServer>
<security>
<requestFiltering>
<denyUrlSequences>
<add sequence=".."/>
</denyUrlSequences>
</requestFiltering>
</security>
</system.webServer>
</configuration>

Filter Out Hidden Segments
In case you want IIS to serve content in a binary directory but not the binary itself, you can apply this configuration.

Sample 11.25

<configuration>
<system.webServer>
<security>
<requestFiltering>
<hiddenSegments>
<add segment="BIN"/>
</hiddenSegments>
</requestFiltering>
</security>
</system.webServer>
</configuration>

Password protection and sensitive information
The web.config files might include sensitive information in the connection strings, such as database passwords and mail server user names, among others.

Sections that are required to be encrypted are:

• <appSettings>. This section contains custom application settings.
• <connectionStrings>. This section contains connection strings.
• <identity>. This section can contain impersonation credentials.
• <sessionState>. This section contains the connection string for the out-of-process session state provider.

Passwords and user names contained in a <connectionStrings> section should be encrypted. ASP.NET allows you to encrypt this information by using the aspnet_regiis utility. This utility is found in the installed .NET framework under the folder
%windows%\Microsoft.NET\Framework\v2.0.50727

You can specify the section you need to encrypt by using the command:
aspnet_regiis -pef sectionToBeEncrypted

Encrypting sections in Web.Config file
Even though encrypting sections is possible, not all sections can be encrypted, specifically sections that are read before user code is run. The following sections cannot be encrypted:

• <processModel>
• <runtime>
• <mscorlib>
• <startup>
• <system.runtime.remoting>
• <configProtectedData>
• <satelliteassemblies>
• <cryptographySettings>
• <cryptoNameMapping>
• <cryptoClasses>
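Beyond aspnet_regiis, the same protection can be applied from code with the standard configuration API. The sketch below is illustrative only: the application path "/MyApp" is a placeholder, and it uses the RSA protected-configuration provider discussed next.

using System.Configuration;
using System.Web.Configuration;

internal static class EncryptConnectionStrings
{
    private static void Main()
    {
        // "/MyApp" is a placeholder for the IIS application path being reviewed or deployed.
        Configuration config = WebConfigurationManager.OpenWebConfiguration("/MyApp");
        ConfigurationSection section = config.GetSection("connectionStrings");
        if (section != null && !section.SectionInformation.IsProtected)
        {
            // Encrypts the section using an RSA key container.
            section.SectionInformation.ProtectSection("RsaProtectedConfigurationProvider");
            config.Save();
        }
    }
}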
Machine-Level RSA key container or User-Level Key Containers
Encrypting a file using the machine-level RSA key container has its disadvantages when the file is moved to other servers. In that case, a user-level RSA key container is strongly advised. The RSAProtectedConfigurationProvider supports both machine-level and user-level key containers for key storage.

RSA machine key containers are stored in the following folder:
\Documents and Settings\All Users\Application Data\Microsoft\Crypto\RSA\MachineKeys

User Key Container
When the application that needs to be protected is in a shared hosting environment and its sensitive data must not be accessible to other applications, the user key container is strongly recommended. In this case each application should have a separate identity.

RSA user-level key containers are stored in the following folder:
\Documents and Settings\{UserName}\Application Data\Microsoft\Crypto\RSA

IIS configurations
Depending on the version of IIS that must be configured, it is important to revise those of its settings which can compromise security on the server.

Trust level
The trust level is a set of Code Access Security permissions granted to an application within a hosting environment. These are defined using policy files. Depending on the trust level that must be configured, it is possible to grant the FULL, HIGH, MEDIUM, LOW or MINIMAL level. The ASP.NET host does not apply any additional policy to applications that are running at the full-trust level.

Example:

Sample 11.26

<system.web>
<securityPolicy>
<trustLevel name="Full" policyFile="internal"/>
</securityPolicy>
</system.web>
Lock Trust Levels
In the .NET framework web.config file it is possible to lock applications from changing their trust level. This file is found at:

C:\Windows\Microsoft.NET\Framework\{version}\CONFIG

The following example shows how to lock the configured trust levels of two different applications (MSDN, 2013).

Sample 11.27

<configuration>
<location path="application1" allowOverride="false">
<system.web>
<trust level="High" />
</system.web>
</location>
<location path="application2" allowOverride="false">
<system.web>
<trust level="Medium" />
</system.web>
</location>
</configuration>

References
• Yakushev Ruslan, 2008, "IIS 7.0 Request Filtering and URL Rewriting", available at http://www.iis.net/learn/extensions/url-rewrite-module/iis-request-filtering-and-url-rewriting (Last accessed on 14 July, 2013)
• OWASP, 2009, "Double Encoding", available at https://www.owasp.org/index.php/Double_Encoding (Last accessed on 14 July, 2013)
• IIS Team, 2007, "Use Request Filtering", available at http://www.iis.net/learn/manage/configuring-security/use-request-filtering (Last accessed on 14 July, 2013)
• Aguilar Carlos, 2006, "The new Configuration System in IIS 7", available at http://blogs.msdn.com/b/carlosag/archive/2006/04/25/iis7configurationsystem.aspx (Last accessed on 14 July, 2013)
• MSDN, 2013, "How to: Lock ASP.NET Configuration Settings", available at http://msdn.microsoft.com/en-us/library/ms178693.aspx (Last accessed on 14 July, 2013)

11.10 Strongly Named Assemblies
During the build process either QA or developers are going to publish the code in executable formats, usually an exe and one or several DLLs. During the build/publish process a decision needs to be made whether or not to sign the code.

Signing your code is called creating "strong names" by Microsoft. If you create a project using Visual Studio and use Microsoft's "Run code analysis", you will most likely encounter a Microsoft design warning if the code is not strong named: "Warning 1 CA2210 : Microsoft.Design : Sign 'xxx.exe' with a strong name key."

The code reviewer needs to be aware of whether strong naming is being used, its benefits, and what threat vectors strong naming helps prevent, or else understand the reasons for not using strong naming.

A strong name is a method to sign an assembly's identity using its text name, version number, culture information, a public key and a digital signature (Solis, 2012).

• Strong naming guarantees a unique name for that assembly.

• Strong names protect the version lineage of an assembly. A strong name can ensure that no one else can produce a subsequent version of your assembly. Users can be sure that a version of the assembly they are loading comes from the same publisher that created the version the application was built with.

The above two points are very important if you are going to use the Global Assembly Cache (GAC).

• Strong names provide a strong integrity check and prevent spoofing. Passing the .NET Framework security checks guarantees that the contents of the assembly have not been changed since it was built.

Note, however, that strong names in and of themselves do not imply a level of trust like that provided, for example, by a digital signature and supporting certificate. If you use GAC assemblies, remember that the assemblies are not verified each time they load, since the GAC by design is a locked-down, admin-only store.

What strong names cannot prevent is a malicious user stripping the strong name signature entirely, modifying the assembly, or re-signing it with the malicious user's key.

The code reviewer needs to understand how the strong name private key will be kept secure and managed. This is crucial if you decide strong name signatures are a good fit for your organization.

If the principle of least privilege is applied, so that the code is less susceptible to being accessed by an attacker, and the GAC is not being used, strong names provide fewer benefits or no benefits at all.

How to use Strong Naming

Signing tools
In order to create a strong named assembly there are a set of tools and steps that you need to follow.

Using Visual Studio
In order to use Visual Studio to create a strongly named assembly, it is necessary to have a copy of the public/private key pair file. It is also possible to create this key pair in Visual Studio.

In Visual Studio 2005, the C#, Visual Basic, and Visual J# integrated development environments (IDEs) allow you to generate key pairs and sign assemblies without the need to create a key pair using Sn.exe (Strong Name Tool). These IDEs have a Signing tab in the Project Designer. The use of the AssemblyKeyFileAttribute to identify key file pairs has been made obsolete in Visual Studio 2005.

The following figure illustrates the process done by the compiler:
Figure 9: C# Strong Naming

Using Strong Name tool
The Sign Tool is a command-line tool that digitally signs files, verifies signatures in files, or time stamps files. The Sign Tool is not supported on Microsoft Windows NT, Windows Me, Windows 98, or Windows 95.

In case you are not using the "Visual Studio Command Prompt" (Start >> Microsoft Visual Studio 2010 >> Visual Studio Tools >> Visual Studio Command Prompt (2010)), you can locate sn.exe at %ProgramFiles%\Microsoft SDKs\Windows\v7.0A\bin\sn.exe

The following command creates a new, random key pair and stores it in keyPair.snk.
sn -k keyPair.snk

The following command stores the key in keyPair.snk in the container MyContainer in the strong name CSP.
sn -i keyPair.snk MyContainer

The following command extracts the public key from keyPair.snk and stores it in publicKey.snk.
sn -p keyPair.snk publicKey.snk

The following command displays the public key and the token for the public key contained in publicKey.snk.
sn -tp publicKey.snk

The following command deletes MyContainer from the default CSP.
sn -d MyContainer

Using the Assembly Linker (Al.exe)
This tool is automatically installed with Visual Studio and with the Windows SDK. To run the tool, we recommend that you use the Visual Studio Command Prompt or the Windows SDK Command Prompt (CMD Shell). These utilities enable you to run the tool easily, without navigating to the installation folder. For more information, see Visual Studio and Windows SDK Command Prompts.

The following command creates an executable file t2a.exe with an assembly from the t2.netmodule module. The entry point is the Main method in MyClass.
al t2.netmodule /target:exe /out:t2a.exe /main:MyClass.Main

Use Assembly attributes
You can insert the strong name information in the code directly. For this, depending on where the key file is located, you can use AssemblyKeyFileAttribute or AssemblyKeyNameAttribute.

Use Compiler options: /keyfile or /delaysign
Safeguarding the key pair from developers is necessary to maintain and guarantee the integrity of the assemblies. The public key should be accessible, but access to the private key should be restricted to only a few individuals. When developing assemblies with strong names, each assembly that references the strong-named target assembly contains the token of the public key used to give the target assembly a strong name. This requires that the public key be available during the development process.

You can use delayed or partial signing at build time to reserve space in the portable executable (PE) file for the strong name signature, but defer the actual signing until some later stage (typically just before shipping the assembly). You can use /keyfile or /delaysign in C# and VB.NET (MSDN):

• http://msdn.microsoft.com/en-us/library/c405shex(v=vs.110).aspx
• http://msdn.microsoft.com/en-us/library/k5b5tt23(v=vs.80).aspx
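As a small illustration of the assembly-attribute and delay-signing options above, the attributes might look like the following sketch; the key file names simply reuse the sn.exe examples and are not prescribed by this guide.

using System.Reflection;

// Typically placed in AssemblyInfo.cs. Pointing the compiler at the key pair
// produced by "sn -k keyPair.snk" signs the assembly at build time. Note that
// this attribute is reported as obsolete by Visual Studio 2005 and later in
// favour of the Signing tab or the /keyfile compiler option.
[assembly: AssemblyKeyFile("keyPair.snk")]

// For delayed (partial) signing, reference a file holding only the public key
// (for example the output of "sn -p") and enable delay signing; the full
// signature is applied later with sn.exe, just before the assembly ships.
// [assembly: AssemblyKeyFile("publicKey.snk")]
// [assembly: AssemblyDelaySign(true)]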
Input validation
Anything coming from external sources can be considered input to a web application: not only data the user enters through a web form, but also data retrieved from a web service or database, and headers sent by the browser, fall under this concept. A way of defining when input is safe is to outline a trust boundary.

Defining what is known as a trust boundary can help us to visualize all possible untrusted inputs; user input is one of them. ASP.NET has different types of validation depending on the level of control to be applied. By default, web page code is validated against malicious users. Among the types of validation used are (MSDN, 2013):

User-defined (CustomValidator): checks the user's entry using validation logic that you write yourself. This type of validation enables you to check for values derived at run time.
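To make the CustomValidator case concrete, the server-side validation logic is just a method the page supplies; a minimal sketch, in which the page name, control name and age rule are purely illustrative and the control wiring is assumed to be in the .aspx markup, could look like this:

using System;
using System.Web.UI.WebControls;

public partial class RegisterPage : System.Web.UI.Page
{
    // OnServerValidate handler for a CustomValidator control.
    // args.Value holds the value of the control being validated.
    protected void AgeValidator_ServerValidate(object source, ServerValidateEventArgs args)
    {
        int age;
        args.IsValid = int.TryParse(args.Value, out age) && age >= 18 && age <= 120;
    }
}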
References
MSDN, 2013, "Securing ASP.NET Configurations", available at http://msdn.microsoft.com/en-us/library/ms178699%28v=vs.100%29.aspx (Last Viewed, 25th July 2013)

11.11 .NET Authentication Controls
In .NET there are authentication tags in the configuration file. The <authentication> element configures the authentication mode that your applications use. The appropriate authentication mode depends on how your application or Web service has been designed. The default Machine.config setting applies a secure Windows authentication default, as shown below.

authentication Attributes: mode="[Windows|Forms|Passport|None]"

<authentication mode="Windows" />

For forms authentication, secure settings on the <forms> element look like this:

<authentication mode="Forms">
<forms loginUrl="Restricted\login.aspx"
protection="All"
requireSSL="true"
timeout="10"
name="AppNameCookie"
path="/FormsAuth"
slidingExpiration="true" >
</forms>
</authentication>

• loginUrl: login page in an SSL-protected folder
• protection="All": privacy and integrity
• requireSSL="true": prevents the cookie being sent over HTTP
• timeout="10": limited session lifetime
• name and path: unique per-application name and path
• slidingExpiration="true": sliding session lifetime

Classic ASP
For classic ASP pages, authentication is usually performed manually by including the user information in session variables after validation against a DB, so you can look for something like:

Session ("UserId") = UserName
Session ("Roles") = UserRoles

Code Review of .NET Managed Code
.NET managed code is less vulnerable to common vulnerabilities found in unmanaged code, such as buffer overflows and memory corruption; however, there can be issues in the code that affect performance and security. The following is a summary of the recommended practices to look for during a code review. It is also worth mentioning some tools that can make this part of the work easier and help you understand and pinpoint flaws in your code.

Code Access Security
This supports the execution of semi-trusted code, preventing several forms of security threats. The following is a summary of possible vulnerabilities due to improper use of Code Access Security:
Table 15: Code Access Security Vulnerabilities

Vulnerability | Implications
Improper use of link demands or asserts | The code is susceptible to luring attacks.
Code allows untrusted callers | Malicious code can use the code to perform sensitive operations and access resources.

Declarative security

Sample 11.32

[MyPermission(SecurityAction.Demand, Unrestricted = true)]
public class MyClass
{
    public MyClass()
    {
        //The constructor is protected by the security call.
    }

    public void MyMethod()
    {
        //This method is protected by the security call.
    }

    public void YourMethod()
    {
        //This method is protected by the security call.
    }
}

Some of the available rules regarding security are (CodePlex, 2010):

Table 16: FxCop Flags

Rule | Description
EnableEventValidationShouldBeTrue | Verifies whether the EnableEventValidation directive is disabled on a certain page.
ValidateRequestShouldBeEnabled | Verifies whether the ValidateRequest directive is disabled on a certain page.
EnableViewStateShouldBeTrue | Verifies whether the EnableViewState directive is not set to false on a certain page.
ViewStateUserKeyShouldBeUsed | Verifies whether Page.ViewStateUserKey is being used in the application to prevent CSRF (illustrated after this table).
DebugCompilationMustBeDisabled | Verifies that debug compilation is turned off. This eliminates potential performance and security issues related to debug code being enabled and additional extensive error messages being returned.
CustomErrorPageShouldBeSpecified | Verifies that the CustomErrors section is configured to have a default URL for redirecting users in case of error.
FormAuthenticationShouldNotContainFormAuthenticationCredentials | Verifies that no credentials are specified under the form authentication configuration.
EnableCrossAppRedirectsShouldBeTrue | Verifies that system.web.authentication.forms enableCrossAppRedirects is set to true. The setting indicates whether the user should be redirected to another application URL after the authentication process. If the setting is false, the authentication process will not allow redirection to another application or host. This helps prevent an attacker from forcing the user to be redirected to another site during the authentication process. This attack is commonly called open redirect and is used mostly during phishing attacks.
FormAuthenticationProtectionShouldBeAll | Verifies that the protection attribute on system.web.authentication.forms is set to All, which specifies that the application uses both data validation and encryption to help protect the authentication cookie.
AnonymousAccessIsEnabled | Looks in the web.config file to see if the authorization section allows anonymous access.
RoleManagerCookieProtectionShouldBeAll | Verifies that the system.web.rolemanager cookieProtection is set to All, which enforces the cookie to be both encrypted and validated by the server.
RoleManagerCookieRequireSSLShouldBeTrue | Verifies that the system.web.rolemanager cookieRequireSSL attribute is set to True, which forces the role manager cookie to specify the secure attribute. This directs the browser to only provide the cookie over SSL.
RoleManagerCookieSlidingExpirationShouldBeTrue | Verifies that the system.web.rolemanager cookieSlidingExpiration is set to false when the site is being served over HTTP. This will force the authentication cookie to have a fixed timeout value instead of being refreshed by each request. Since the cookie will traverse over a clear-text network and could potentially be intercepted, having a fixed timeout value on the cookie will limit the amount of time the cookie can be replayed. If the cookie is being sent only over HTTPS, it is less likely to be intercepted, and having the slidingExpiration setting set to True will cause the timeout to be refreshed after each request, which gives a better user experience.
HttpRuntimeEnableHeaderCheckingShouldBeTrue | Verifies that the system.web.httpRuntime enableHeaderChecking attribute is set to true. The setting indicates whether ASP.NET should check the request header for potential injection attacks. If an attack is detected, ASP.NET responds with an error. This forces ASP.NET to apply the ValidateRequest protection to headers sent by the client. If an attack is detected the application throws HttpRequestValidationException.
PagesViewStateEncryptionModeShouldBeAlways | Verifies that the viewstate encryption mode is not configured to never encrypt.
CustomErrorsModeShouldBeOn | Verifies that the system.web.customErrors mode is set to On or RemoteOnly. This disables detailed error messages returned by ASP.NET to remote users.
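As an illustration of what the ViewStateUserKeyShouldBeUsed rule looks for, a page can tie its view state to the current session in Page_Init; the page class below is only a sketch:

using System;
using System.Web.UI;

public partial class AccountPage : Page
{
    // Binding the view state to the session makes one-click/CSRF-style attacks
    // against __VIEWSTATE much harder; the key must be set in Page_Init.
    protected void Page_Init(object sender, EventArgs e)
    {
        ViewStateUserKey = Session.SessionID;
    }
}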
A6 SENSITIVE DATA EXPOSURE

Many web applications do not properly protect sensitive data, such as credit cards, tax IDs, and authentication credentials. Attackers may steal or modify such weakly protected data to conduct credit card fraud, identity theft, or other crimes. Sensitive data deserves extra protection such as encryption at rest or in transit, as well as special precautions when exchanged with the browser.

12.1 Cryptographic Controls
Software developers, architects and designers are at the forefront of deciding which category a particular application resides in. Cryptography provides for security of data at rest (via encryption), enforcement of data integrity (via hashing/digesting), and non-repudiation of data (via signing). To ensure this cryptographic code adequately protects the data, all source code must use standard (secure) algorithms with strong key sizes.

Common flaws when implementing cryptographic code include the use of non-standard cryptographic algorithms, custom implementations of cryptographic (standard and non-standard) algorithms, the use of standard algorithms which are cryptographically insecure (e.g. DES), and the use of insecure keys; all of these can weaken the overall security posture of any application. Such flaws enable attackers to use cryptanalytic tools and techniques to decrypt sensitive data.

12.2 Description
Many companies handle sensitive information for their customers, for instance medical details or credit card numbers, and industry regulations dictate that this sensitive information must be encrypted to protect the customers' information. In the medical industry the HIPAA regulations advise businesses what protections must be applied to medical data; in the financial industry many regulations cover PII (personally identifiable information) controls.

Regardless of the financial impact of regulatory penalties, there are many business reasons to protect (through encryption or hashing) the information processed by an application, including privacy and fraud detection/protection.

All sensitive data that the application handles should be identified and encryption should be enforced. Similarly a decision should be made as to whether sensitive data must be encrypted in transit (i.e. being sent from one computer to another) and/or at rest (i.e. stored in a DB, file, keychain, etc.):

1) Protection in transit: this typically means using the SSL/TLS layer to encrypt data travelling on the HTTP protocol, although it can also include FTPS, or even SSL on TCP. Frameworks such as IIS and Apache Struts come with SSL/TLS functionality included, and thus the developer will not be coding the actual TLS encryption, but instead will be configuring a framework to provide TLS security. However the decisions made here, even at an architectural level, need to be well informed, and a discussion on TLS design decisions is covered in section 1.3.

2) Protection at rest: this can include encryption of credit cards in the database, hashing of passwords, and producing message authentication codes (MACs) to ensure a message has not been modified between computers. Where TLS code will come with a framework, code to encrypt or hash data to be stored will typically need to use APIs provided by cryptographic libraries. The developer will not be writing code to implement the AES algorithm (OpenSSL or CryptoAPI will do that); the developer will be writing modules to use an AES implementation in the correct way. Again the correct decisions need to be made regarding up-to-date algorithms, key storage, and other design decisions, which are covered in section 1.4.

Table 17: Cryptographic Definitions

Term | Description
Encoding | Transforming data from one form into another, typically with the aim of making the data easier to work with, for example encoding binary data (which could not be printed to a screen) into a printable ASCII format which can be copied and pasted. Note that encoding does not aim to hide the data; the method to return the encoded data back to its original form will be publicly known.
Entropy | Essentially this is randomness. Cryptographic functions need to work with some form of randomness to allow the source data to be encrypted in such a way that an attacker cannot reverse the encryption without the necessary key. Having a good source of entropy is essential to any cryptographic algorithm.
Hashing | Non-reversible transformation of data into what is called a 'fingerprint' or 'hash value'. Input of any size can be taken and always results in the same size of output (for the algorithm). The aim is not to convert the fingerprint back into the source data at a later time, but to run the hash algorithm over two sets of data to determine if they produce the same fingerprint. This would show that the data has not been tampered with.
Salt | A non-secret value that can be added to a hashing algorithm to modify the fingerprint result. One attack against hashing algorithms is a 'rainbow table attack' where all source values are pre-computed and a table produced. The attacker can then take a fingerprint, look it up in the table, and correspond it to the original data. Using a unique salt value for each piece of data to be hashed protects against rainbow tables, as a rainbow table for each salt value would need to be created, which would greatly extend the time taken by an attacker. The salt is not a secret and can be stored or sent with the fingerprint.
Encryption | Transformation of source data into an encrypted form that can be reversed back to the original source. Typically the algorithms used to encrypt are publicly known, but rely on a secret 'key' to guide the transformation. An attacker without the key should not be able to transform the data back into the original source.
Symmetric Encryption | A form of encryption where the same key is known to both the sender and the receiver. This is a fast form of encryption; however it requires a secure, out-of-band method to pass the symmetric key between the sender and receiver.
Public-Key Encryption (PKI) | A form of encryption using two keys, one to encrypt the data, and one to decrypt the data back to its original form. This is a slower method of encryption; however one of the keys can be publicly known (referred to as a 'public key'). The other key is called a 'private key' and is kept secret. Any data encrypted with the public key can be decrypted back into its original form using the private key. Similarly any data encrypted with the private key can be decrypted back to its original form using the public key.
Certificate | An association between an entity (e.g. person, company) and a public key. Typically this forms part of a public-key infrastructure where certain trusted entities (e.g. Certificate Authorities in internet TLS) perform validation of an entity's credentials and assert (using their own certificate) that a declared public key belongs to the entity.

12.3 What to Review: Protection in Transit
The terms Secure Socket Layer (SSL) and Transport Layer Security (TLS) are often used interchangeably. In fact, SSL v3.1 is equivalent to TLS v1.0. However, different versions of SSL and TLS are supported by modern web browsers and by most modern web frameworks and platforms. Note that since developments in attacks against the SSL protocol have shown it to be weaker, this guide will use the term TLS to refer to transport layer security over the HTTP or TCP protocols.

The primary benefit of transport layer security is the protection of web application data from unauthorized disclosure and modification when it is transmitted between clients (web browsers) and the web application server, and between the web application server and back end and other non-browser based enterprise components.

In theory, the decision to use TLS to protect computer-to-computer communication should be based on the nature of the traffic or functionality available over the interface.
If sensitive information is passing over the interface, TLS will prevent eavesdroppers from being able to view or modify the data. Likewise if the interface allows money to be transferred, or sensitive functions to be initiated, then TLS will protect the associated login or session information authorizing the user to perform those functions. However, with the price of certificates dropping, and TLS configuration within frameworks becoming easier, TLS protection of an interface is not a large endeavor, and many web sites are using TLS protection for their entire site (i.e. there are only HTTPS pages, no HTTP pages are available).

The server validation component of TLS provides authentication of the server to the client. If configured to require client side certificates, TLS can also play a role in client authentication to the server. However, in practice client side certificates are not often used in lieu of username and password based authentication models for clients.

Using Validated Implementations
The US government provides a list of software that has been validated to provide a strong and secure implementation of various cryptographic functions, including those used in TLS. This list is referred to as the FIPS 140-2 validated cryptomodules [insert reference].

A cryptomodule, whether it is a software library or a hardware device, implements cryptographic algorithms (symmetric and asymmetric algorithms, hash algorithms, random number generator algorithms, and message authentication code algorithms). The security of a cryptomodule and its services (and the web applications that call the cryptomodule) depends on the correct implementation and integration of each of these three parts. In addition, the cryptomodule must be used and accessed securely. In order to leverage the benefits of TLS it is important to use a TLS service (e.g. library, web framework, web application server) which has been FIPS 140-2 validated. In addition, the cryptomodule must be installed, configured and operated in either an approved or an allowed mode to provide a high degree of certainty that the FIPS 140-2 validated cryptomodule is providing the expected security services in the expected manner.

When reviewing designs or code that handles TLS encryption, items to look out for include:

• Use TLS for the login pages and any authenticated pages. Failure to utilize TLS for the login landing page allows an attacker to modify the login form action, causing the user's credentials to be posted to an arbitrary location. Failure to utilize TLS for authenticated pages after the login enables an attacker to view the unencrypted session ID and compromise the user's authenticated session.

• Use TLS internally when transmitting sensitive data or exposing authenticated functionality. All networks, both external and internal, which transmit sensitive data must utilize TLS or an equivalent transport layer security mechanism. It is not sufficient to claim that access to the internal network is "restricted to employees". Numerous recent data compromises have shown that the internal network can be breached by attackers. In these attacks, sniffers have been installed to access unencrypted sensitive data sent on the internal network.

• Prefer all interfaces (or pages) being accessible only over HTTPS. All pages which are available over TLS must not be available over a non-TLS connection. A user may inadvertently bookmark or manually type a URL to an HTTP page (e.g. http://example.com/myaccount) within the authenticated portion of the application.

• Use the "secure" and "http-only" cookie flags for authentication cookies (a short sketch follows this list). Failure to use the "secure" flag enables an attacker to access the session cookie by tricking the user's browser into submitting a request to an unencrypted page on the site. The "http-only" flag denies JavaScript functions access to the cookie's contents.

• Do not put sensitive data in the URL. TLS will protect the contents of the traffic on the wire, including the URL, when transported; however remember that URLs are visible in browser history settings and typically written to server logs.

• Prevent the caching of sensitive data. The TLS protocol provides confidentiality only for data in transit, but it does not help with potential data leakage issues at the client or intermediary proxies.

• Use HTTP Strict Transport Security (HSTS) for high risk interfaces. HSTS will prevent any web clients from attempting to connect to your web site over a non-TLS protocol. From a server-side point of view this may seem irrelevant if no non-TLS pages are provided; however a web site setting up HSTS does protect clients from other attacks (e.g. DNS cache poisoning).

• Use 2048 bit key lengths (and above) and SHA-256 (and above). The private key used to generate the cipher key must be sufficiently strong for the anticipated lifetime of the private key and corresponding certificate. The current best practice is to select a key size of at least 2048 bits. Note that attacks against SHA-1 have shown weaknesses, and the current best practice is to use at least SHA-256 or equivalent.

• Only use specified, fully qualified domain names in your certificates. Do not use wildcard certificates, or RFC 1918 addresses (e.g. 10.* or 192.168.*). If you need to support multiple domain names use Subject Alternative Names (SANs), which provide a specific listing of multiple names for which the certificate is valid. For example the certificate could list the subject's CN as example.com, and list two SANs: abc.example.com and xyz.example.com. These certificates are sometimes referred to as "multiple domain certificates".

• Always provide all certificates in the chain. When a user receives a server or host's certificate, the certificate must be validated back to a trusted root certification authority. This is known as path validation. There can be one or more intermediate certificates between the end-entity (server or host) certificate and the root certificate. In addition to validating both endpoints, the client software will also have to validate all intermediate certificates, which can cause failures if the client does not have the certificates. This occurs on many mobile platforms.
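As a brief illustration of the cookie-flag item above, a cookie can carry both flags when it is created in code; the cookie name here is only an example, and in ASP.NET the same flags can also be enforced site-wide through configuration.

using System.Web;

public static class CookieFlagsSketch
{
    // Builds an authentication-related cookie with both flags set, so the
    // browser only sends it over HTTPS and script cannot read it.
    public static HttpCookie CreateAuthCookie(string value)
    {
        return new HttpCookie("AppAuthCookie", value)
        {
            Secure = true,   // the "secure" flag
            HttpOnly = true  // the "http-only" flag
        };
    }
}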
12.4 What to Review: Protection at Rest
As a general recommendation, companies should not create their own custom cryptographic libraries and algorithms. There is a huge distinction between groups, organizations, and individuals developing cryptographic algorithms and those that implement cryptography either in software or in hardware. Using an established cryptographic library that has been developed by experts and tested by the industry is the safest way to implement cryptographic functions in a company's code. Some common examples of libraries used in various languages and environments are covered in the table below.

Table 18: Popular cryptographic implementations according to environment

Language | Libraries | Discussion
C# | .NET class libraries within 'System.Security.Cryptography' | For applications coded in C#.NET there are class libraries and implementations within 'System.Security.Cryptography' that should be used. This namespace within .NET aims to provide a number of wrappers that do not require proficient knowledge of cryptography in order to use them.
C/C++ (Win32) | CryptoAPI and DPAPI | For C/C++ code running on Win32 platforms, the CryptoAPI and DPAPI are recommended.
C/C++ (Linux) | OpenSSL, NSS, boringssl | For C/C++ on Linux/Unix operating systems, use OpenSSL, NSS, or one of the many forks of these libraries.
ASP | CryptoAPI and DPAPI | Classic ASP pages do not have direct access to cryptographic functions, so the only way is to create COM wrappers in Visual C++ or Visual Basic, implementing calls to CryptoAPI or DPAPI, and then call them from ASP pages using the Server.CreateObject method.
Java | Java Cryptography Extension, BouncyCastle, Spring Security | JCE is a standard API that any cryptographic library can implement to provide cryptographic functions to the developer. Oracle provides a list of companies that act as Cryptographic Service Providers and/or offer clean room implementations of the JCE. BouncyCastle is one of the more popular implementations. Spring Security is also popular in applications where Spring is already being utilized.

A secure way to implement robust encryption mechanisms within source code is by using FIPS [7] compliant algorithms with the use of the Microsoft Data Protection API (DPAPI) [4] or the Java Cryptography Extension (JCE) [5].

A company should identify minimum standards for the following when establishing its cryptographic code strategy:

• Which standard algorithms are to be used by applications

• For .NET, check that the classes in System.Security.Cryptography are being used
o Verify no proprietary algorithms are being used
o Check that RNGCryptoServiceProvider is used for PRNG
o Verify key length is at least 128 bits

• For Java, check that the Java Cryptography Extension (JCE) is being used
o Verify no proprietary algorithms are being used
o Check that SecureRandom (or similar) is used for PRNG
o Verify key length is at least 128 bits

Bad Practice: Use of Insecure Cryptographic Algorithms
The DES and SHA-0 algorithms are cryptographically insecure. The example in sample 12.1 outlines a cryptographic module using DES (available via the Java Cryptographic Extensions), which should not be used. Additionally, SHA-1 and MD5 should be avoided in new applications moving forward.

The following example instead uses AES via the JCE:

import java.security.InvalidAlgorithmParameterException;
import java.security.InvalidKeyException;
import java.security.NoSuchAlgorithmException;

import javax.crypto.BadPaddingException;
import javax.crypto.Cipher;
import javax.crypto.IllegalBlockSizeException;
import javax.crypto.KeyGenerator;
import javax.crypto.NoSuchPaddingException;
import javax.crypto.SecretKey;

import sun.misc.BASE64Encoder;

// Example class; the key generation and cipher setup follow the standard JCE pattern.
public class AESExample {

    public static void main(String[] args) {
        String strDataToEncrypt;
        String strCipherText;
        String strDecryptedText;
        try {
            /**
             * Step 1. Generate an AES key using KeyGenerator
             * Step 2. Create and initialize the Cipher for encryption
             */
            KeyGenerator keyGen = KeyGenerator.getInstance("AES");
            keyGen.init(128);
            SecretKey secretKey = keyGen.generateKey();
            Cipher aesCipher = Cipher.getInstance("AES");
            aesCipher.init(Cipher.ENCRYPT_MODE, secretKey);
            /**
             * Step 3. Encrypt the bytes using doFinal method
             */
            strDataToEncrypt = "Hello World of Encryption using AES ";
            byte[] byteDataToEncrypt = strDataToEncrypt.getBytes();
            byte[] byteCipherText = aesCipher.doFinal(byteDataToEncrypt);
            strCipherText = new BASE64Encoder().encode(byteCipherText);
            System.out.println("Cipher Text generated using AES is " + strCipherText);
            /**
             * Step 5. Decrypt the Data
             * 1. Initialize the Cipher for Decryption
             * 2. Decrypt the cipher bytes using doFinal method
             */
            aesCipher.init(Cipher.DECRYPT_MODE, secretKey, aesCipher.getParameters());
            byte[] byteDecryptedText = aesCipher.doFinal(byteCipherText);
            strDecryptedText = new String(byteDecryptedText);
            System.out.println(" Decrypted Text message is " + strDecryptedText);
        }
        catch (NoSuchAlgorithmException noSuchAlgo) {
            System.out.println(" No Such Algorithm exists " + noSuchAlgo);
        }
        catch (NoSuchPaddingException noSuchPad) {
            System.out.println(" No Such Padding exists " + noSuchPad);
        }
        catch (InvalidKeyException invalidKey) {
            System.out.println(" Invalid Key " + invalidKey);
        }
        catch (BadPaddingException badPadding) {
            System.out.println(" Bad Padding " + badPadding);
        }
        catch (IllegalBlockSizeException illegalBlockSize) {
            System.out.println(" Illegal Block Size " + illegalBlockSize);
        }
        catch (InvalidAlgorithmParameterException invalidParam) {
            System.out.println(" Invalid Parameter " + invalidParam);
        }
    }
}

References
[1] Bruce Schneier, Applied Cryptography, John Wiley & Sons, 2nd edition, 1996.
[2] Michael Howard, Steve Lipner, The Security Development Lifecycle, 2006, pp. 251-258.
[3] .NET Framework Developer's Guide, Cryptographic Services, http://msdn2.microsoft.com/en-us/library/93bskf9z.aspx
[4] Microsoft Developer Network, Windows Data Protection, http://msdn2.microsoft.com/en-us/library/ms995355.aspx
[5] Sun Developer Network, Java Cryptography Extension, http://java.sun.com/products/jce/
[6] Sun Developer Network, Cryptographic Service Providers and Clean Room Implementations, http://java.sun.com/products/jce/jce122_providers.html
[7] Federal Information Processing Standards, http://csrc.nist.gov/publications/fips/

12.5 Encryption, Hashing & Salting
A cryptographic hash algorithm, also called a hash function, is a computer algorithm designed to provide a random mapping from an arbitrary block of data (a string of binary data) to a fixed-size bit string known as a "message digest", while achieving certain security properties.

Cryptographic hashing functions are used to create digital signatures, message authentication codes (MACs), other forms of authentication and many other security applications in the information infrastructure. They are also used to store user passwords in databases instead of storing the password in clear text, and they help prevent data leakage in session management for web applications. The actual algorithm used to create the cryptographic function varies per implementation (SHA-256, SHA-512, etc.).

Never accept, in a code review, an algorithm created by the programmer for hashing. Always use cryptographic functions that are provided by the language, framework, or common (trusted) cryptographic libraries. These functions are well vetted and well tested by experienced cryptographers.

In the United States in 2000, the Department of Commerce Bureau of Export revised the encryption export regulations, and the result is that the regulations have been greatly relaxed. However, if the code is to be exported outside of the source country, the current export and import laws of the countries involved should be reviewed for compliance.

A case in point: if the entire message is hashed, instead of a digital signature of the message, the National Security Agency (NSA) considers this quasi-encryption and State controls would apply.

It is always a valid choice to seek legal advice within the organization if the code review is being done to ensure legal compliance.

With security, nothing is secure forever. This is especially true of cryptographic hashing functions. Some hashing algorithms, such as Windows LanMan hashes, are considered completely broken. Others, like MD5, which in the past were considered safe for password hash usage, have known issues like collision attacks (note that collision attacks do not affect password hashes). The code reviewer needs to understand the weaknesses of obsolete hashing functions as well as the current best practices for the choice of cryptographic algorithms.
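For password storage in particular, reviewers will often see (and should generally prefer) a framework-provided, salted and iterated construction such as PBKDF2 rather than a single hash call. The sketch below uses .NET's Rfc2898DeriveBytes; the iteration count and output length are illustrative values only, not recommendations from this guide.

using System;
using System.Security.Cryptography;

internal static class PasswordHashingSketch
{
    // Derives a 256-bit value from a password with PBKDF2.
    private static byte[] HashPassword(string password, byte[] salt, int iterations)
    {
        using (var kdf = new Rfc2898DeriveBytes(password, salt, iterations))
        {
            return kdf.GetBytes(32);
        }
    }

    private static void Main()
    {
        // Generate a random per-user salt.
        byte[] salt = new byte[16];
        using (var rng = new RNGCryptoServiceProvider())
        {
            rng.GetBytes(salt);
        }
        byte[] hash = HashPassword("password", salt, 10000);
        Console.WriteLine(Convert.ToBase64String(hash));
    }
}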
Working with Salts
The most common programmatic issues with hashing are:

• Not using a salt value
• Using a salt value that is too short
• Using the same salt value in multiple hashes

The purpose of a salt is to make it harder for an attacker to perform a pre-computed hashing attack (e.g., using rainbow tables). Take, for example, the SHA-512 hash of 'password', shown in row 1 of table 19; any attacker with a rainbow table will spot the hash value corresponding to 'password'. Taking into consideration that it takes days or weeks to compute a rainbow table for values up to around 8 or 10 characters, the effort to produce this table is worth it when an application is not using any salts.

Now take a scenario where an application adds a salt of 'WindowCleaner' to all passwords entered. Now the hash of 'password' becomes the hash of 'passwordWindowCleaner', which is shown in row 2 of table 19. This is unlikely to be in the attacker's rainbow table; however the attacker can now spend the next 10 days (for example) computing a new rainbow table with 'WindowCleaner' on the end of every 8 to 10 character string, and they can again decode our hashed database of passwords.

As a last step, an application can create a random salt for each entry, and store that salt in the DB with the hashed password. Now for user1, the random salt is 'a0w8hsdfas8ls587uas87', meaning the password to be hashed is 'passworda0w8hsdfas8ls587uas87', shown in row 3 of table 19; for user2, the random salt is '8ash87123klnf9d8dq3w', meaning the password to be hashed is 'password8ash87123klnf9d8dq3w', shown in row 4 of table 19; and so on for all users.

Now an attacker would need a rainbow table for each user's password they mean to decrypt: whereas before it took 10 days to decrypt all of the DB passwords using the same salt, now it takes 10 days to create a rainbow table for user1's password, and another 10 days for user2's password, etc. If there were 100,000 users, that's 100,000 x 10 days = 1,000,000 days, or 2738 years, to create rainbow tables for all the users.

As can be seen, the salt does not need to be secret; it is the fact that unique salts are used for each user that slows down an attacker.

Table 19: Salt usages and associated fingerprints

Method to hash 'password' | Fingerprint
No salt | B109F3BBBC244EB82441917ED06D618B9008DD09B3BEFD1B5E07394C706A8BB980B1D7785E5976EC049B46DF5F1326AF5A2EA6D103FD07C95385FFAB0CACBC86
Salt = 'WindowCleaner' | E6F9DCB1D07E5412135120C0257BAA1A27659D41DC77FE2DE4C345E23CB973415F8DFDFFF6AA7F0AE0BDD61560FB028EFEDF2B5422B40E5EE040A0223D16F06F
Salt = 'a0w8hsdfas8ls587uas87' | 5AA762E7C83CFF223B5A00ADA939FBD186C4A2CD011B0A7FE7AF86B8CA5420C7A47B52AFD2FA6B9BB17222ACF32B3E13F8C436447C36364A5E2BE998416A103A
Salt = '8ash87123klnf9d8dq3w' | 8058D43195B1CF2794D012A86AC809BFE73254A82C8CE6C10256D1C46B9F45700D040A6AC6290746058A63E50AAF8C87ABCD5C3AA00CDBDB31C10BA6D12A1A7

One way to generate a salt value is using a pseudo-random number generator, as shown in sample 12.4 below.

Sample 12.4

private int minSaltSize = 8;
private int maxSaltSize = 24;
private int saltSize = 16; // salt length in bytes; should be at least 16 (128 bits)

private byte[] GetSalt(string input) {
    byte[] data;
    byte[] saltBytes;
    RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider();
    saltBytes = new byte[saltSize];
    rng.GetNonZeroBytes(saltBytes);
    data = Encoding.UTF8.GetBytes(input);
    byte[] dataWithSaltBytes = new byte[data.Length + saltBytes.Length];
    for (int i = 0; i < data.Length; i++)
        dataWithSaltBytes[i] = data[i];
    for (int i = 0; i < saltBytes.Length; i++)
        dataWithSaltBytes[data.Length + i] = saltBytes[i];
    return dataWithSaltBytes;
}

Note that a salt value does not need to possess the quality of cryptographically secure randomness. Nevertheless, best practice is to use a cryptographic function to create the salt, to create a separate salt value for each hash value, and to use a salt of at least 128 bits (16 bytes). The extra bits are not costly, so don't save a few bits thinking you gain something back in performance; using a 256-bit salt value is highly recommended.

Best Practices
Industry-leading cryptographers advise that MD5 and SHA-1 should not be used for any applications. The United States FEDERAL INFORMATION PROCESSING STANDARDS PUBLICATION (FIPS) specifies seven cryptographic hash algorithms approved for federal use: SHA-1, SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224, and SHA-512/256. The code reviewer should consider this standard because FIPS is also widely adopted by the information technology industry.

The code reviewer should raise a red flag if MD5 or SHA-1 is used, and a risk assessment should be done to understand why these functions would be used instead of other better-suited hash functions. FIPS does allow MD5 to be used, but only as part of an approved key transport scheme where no security is provided by the algorithm itself.

Sample 12.5 below shows an example function which could implement a generic hash feature for an application.
Sample 12.5

App Code File:
<add key="HashMethod" value="SHA512"/>

C# Code:
1: HashAlgorithm preferredHash = HashAlgorithm.Create((string)ConfigurationManager.AppSettings["HashMethod"]);
2:
3: string hash = computeHash(preferredHash, testString);
4:
5: private string computeHash(HashAlgorithm myHash, string input) {
6:   byte[] data;
7:   data = myHash.ComputeHash(Encoding.UTF8.GetBytes(input));
8:   StringBuilder sb = new StringBuilder();
9:   for (int i = 0; i < data.Length; i++) {
10:    sb.Append(data[i].ToString("x2"));
11:  }
12:  return sb.ToString();
13: }

Line 1 gets the hashing algorithm we are going to use from the config file. If we used the machine config file, our implementation would be server-wide instead of application-specific.

Line 3 uses the configured value as our choice of hashing function, so the hash produced by computeHash could be SHA-256 or SHA-512.

References
http://valerieaurora.org/hash.html (Lifetimes of cryptographic hash functions)
http://docs.oracle.com/javase/6/docs/api/java/security/SecureRandom.html
http://msdn.microsoft.com/en-us/library/system.security.cryptography.rngcryptoserviceprovider.aspx
http://csrc.nist.gov/publications/fips/fips180-4/fips-180-4.pdf
Ferguson and Schneier (2003) Practical Cryptography (see Chapter 6, section 6.2 Real Hash Functions)

12.6 Reducing the Attack Surface
The Attack Surface of an application is a description of the entry/exit points, the roles/entitlements of the users, and the sensitivity of the data held within the application. For example, entry points such as login screens, HTML forms and file upload screens all introduce a level of risk to the application. Note that the code structure also forms part of the Attack Surface, in that the code checking authentication, or crypto, etc., is exercised by critical functions on the application.

Description
The attack surface of a software environment is a description of the entry points where an attacker can try to manipulate an application; it typically takes the form of a systems diagram where all entry points (interfaces) are pointed out.

Michael Howard (at Microsoft) and other researchers have developed a method for measuring the Attack Surface of an application, and for tracking changes to the Attack Surface over time, called the Relative Attack Surface Quotient (RSQ). It is assumed that the application Attack Surface is already known, probably through some previous threat modeling exercise or Architectural Risk Analysis. Therefore the entry and exit points are known, the sensitivity of the data within the application is understood, and the various users of the system, and their entitlements, have been mapped in relation to the functions and data.

From a code review point of view, the aim would be to ensure the change being reviewed is not unnecessarily increasing the Attack Surface. For example, is the code change suddenly using HTTP where only HTTPS was used before? Is the coder deciding to write their own hash function instead of using the pre-existing (and well exercised/tested) central repository of crypto functions? In some development environments the Attack Surface changes can be checked during the design phase if such detail is captured; however at code review the actual implementation is reflected in the code, and such Attack Surface exposures can be identified.

You can also build up a picture of the Attack Surface by scanning the application. For web apps you can use a tool like the OWASP Zed Attack Proxy Project (ZAP), Arachni, Skipfish, w3af or one of the many commercial dynamic testing and vulnerability scanning tools or services to crawl your app and map the parts of the application that are accessible over the web. Once you have a map of the Attack Surface, identify the high risk areas, then understand what compensating controls you have in place.

Note that backups of code and data (online, and on offline media) are an important but often ignored part of a system's Attack Surface. Protecting your data and IP by writing secure software and hardening the infrastructure will all be wasted if you hand everything over to bad guys by not protecting your backups.
What to Review
When reviewing code modules from an Attack Surface point of view, some common issues to look out for include:

• Does the code change modify the attack surface? By applying the change to the current Attack Surface of the application, does it open new ports or accept new inputs? If it does, could the change be done in a way that does not increase the attack surface? If a better implementation exists then that should be recommended; however if there is no way to implement the code without increasing the Attack Surface, make sure the business knows of the increased risk.

• Is the feature unnecessarily using HTTP instead of HTTPS?

• Is the function going to be available to non-authenticated users? If no authentication is necessary for the function to be invoked, then the risk of attackers using the interface is increased. Does the function invoke a backend task that could be used to deny services to other legitimate users?
o E.g. if the function writes to a file, or sends an SMS, or causes a CPU intensive calculation, could an attacker write a script to call the function many times per second and prevent legitimate users access to that task?

• Are searches controlled? Search is a risky operation as it typically queries the database for some criteria and returns the results; if an attacker can inject SQL into the query then they could access more data than intended.

• Is important data stored separately from trivial data (in DB, file storage, etc.)? Is the change going to allow unauthenticated users to search for publicly available store locations in a database table in the same partition as the username/password table? Should this store location data be put into a different database, or different partition, to reduce the risk to the database information?

• If file uploads are allowed, are they authenticated? Is there rate limiting? Is there a maximum file size for each upload, or an aggregate per user? Does the application restrict the file uploads to certain types of file (by checking MIME data or file suffix)? Is the application going to run virus checking?

• If you have administration users with high privilege, are their actions logged/tracked in such a way that they a) can't erase/modify the log and b) can't deny their actions?
o Are there any alarms or monitoring to spot if they are accessing sensitive data that they shouldn't be? This could apply to all types of users, not only administrators.

• Will changes be compatible with existing countermeasures, or security code, or will new code/countermeasures need to be developed?

• Is the change attempting to introduce some non-centralized security code module, instead of re-using or extending an existing security module?

• Is the change adding unnecessary user levels or entitlements that will complicate the attack surface?

• If the change is storing PII or confidential data, is all of the new information absolutely necessary? There is little value in increasing the risk to an application by storing the social security numbers of millions of people if the data is never used.

• Does application configuration cause the attack surface to vary greatly depending on configuration settings, and is that configuration simple to use and does it alert the administrator when the attack surface is being expanded?

• Could the change be done in a different way that would reduce the attack surface? For example, instead of making help items searchable and storing help item text in a database table beside the main username/password store, providing static help text on HTML pages reduces the risk through the 'help' interface.

References
https://www.owasp.org/index.php/Attack_Surface_Analysis_Cheat_Sheet
https://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project
http://www.cs.cmu.edu/~wing/publications/Howard-Wing03.pdf
A7 MISSING FUNCTION LEVEL ACCESS CONTROL

Most web applications verify function level access rights before making that functionality visible in the UI. However, applications need to perform the same access control checks on the server when each function is accessed. If requests are not verified, attackers will be able to forge requests in order to access functionality without proper authorization.

13.1 Authorization
Authorization is as important as authentication. Access to application functionality and access to all data should be authorized. For data access authorization, application logic should check if the data belongs to the authenticated user, or if the user should be able to access that data.

Placement of security checks is a vital area of review in an application design. Incorrect placement can render the applied security controls useless, thus it is important to review the application design and determine the correctness of such checks. Many web application designs are based on the concept of Model-View-Controller (MVC) and have a central controller which listens to all incoming requests and delegates control to the appropriate form/business processing logic. Ultimately the user is rendered with a view. In such a layered design, when there are many entities involved in processing a request, developers can place the security controls in the incorrect place; for example, some application developers feel the "view" is the right place to have the authorization checks.

Authorization issues cover a wide array of layers in a web application: from the functional authorization of a user to gain access to a particular function of the application at the application layer, to the database access authorization and least privilege issues at the persistence layer.

In most applications the request parameters or the URLs serve as the sole factors to determine the processing logic. In such a scenario the elements in the request which are used for such identification may be subject to manipulation attacks to obtain access to restricted resources or pages in the application.

There are two main design methods to implement authorization: Role Based Access Control (RBAC) and Access Control Lists (ACLs). RBAC is used when assigning users to roles, and then roles to permissions. This is a more logical modeling of actual system authorization. It also allows administrators to fine-grain and re-check role-permission assignments, while making sure that every role has the permissions it is supposed to have (and nothing more or less).

Thus assigning users to roles should reduce the chance of human error. Many web frameworks allow roles to be assigned to logged in users, and custom code can check the session information to authorize functionality based on the current user's role.

13.2 Description
It seems logical that if users are restricted at the page/view level they won't be able to perform any operation in the application. But what if, instead of requesting a page/view, an unauthorized user tries to request an internal action, such as to add/modify any data in the application? It will be processed, but the resultant view will be denied to the user, because the flaw lies in having only view based access control in the application. Much of the business logic processing (for a request) is done before the "view" is executed, so the request to process any action would get processed successfully without authorization.

In an MVC based system this issue is shown in figure 10 below, where the authentication check is present in the view action.

Figure 10: MVC Access Control (request flow: 1. request (URL = action) to the controller servlet; 2. read config file (Config.xml); 3. return action mapping; 4. instantiate action class and call its execute method; 5. return data to be viewed/edited; 6. render view and data, with the authentication check built inside the view; 7. response)

In this example neither the controller servlet (central processing entity) nor the action classes have any access control checks. If the user requests an internal action, such as add user details, without authentication, it will get processed; the only difference is that the user will be shown an error page, as the resultant view will be disallowed to the user. A similar flaw is observed in ASP.NET applications where developers tend to mix the code for handling POSTBACKs and authentication checks. Usually it is observed that the authentication check in the ASP.NET pages is not applied for POSTBACKs, as indicated below. Here, if an attacker tries to access the page without authentication an error page will be rendered. Instead, if the attacker tries to send an internal POSTBACK request directly without authentication it would succeed.

Unused or undeclared actions or functionality can be present in the configuration files. Such configurations that are not exposed (or tested) as valid features in the application expand the attack surface and raise the risk to the application. An unused configuration present in a configuration file is shown in sample 13.1, where the 'TestAction' at the end of the file has been left over from the testing cycle and will be exposed to external users. It is likely this action would not be checked on every release and could expose a vulnerability.

Sample 13.1

<mapping>
<url>/InsecureDesign/action/AddUserDetails</url>
<action>Action.UserAction</action>
<success>JSP_WithDesign/Success.jsp</success>
</mapping>

<mapping>
<url>/InsecureDesign/action/ChangePassword</url>
<action>Action.ChangePasswordAction</action>
<success>JSP_WithDesign/Success.jsp</success>
</mapping>

<mapping>
<url>/InsecureDesign/action/test</url>
<action>Action.TestAction</action>
<success>JSP_WithDesign/Success.jsp</success>
</mapping>

Another popular feature seen in most design frameworks today is data binding, where the request parameters get directly bound to the variables of the corresponding business/command object. Binding here means that the instance variables of such classes get automatically initialized with the request parameter values based on their names. The issue with this design is that the business objects may have variables that are not dependent on the request parameters. Such variables could be key variables like price, max limit, role, etc., having static values or depending on some server-side processing logic. A threat in such scenarios is that an attacker may supply additional parameters in the request and try to bind values for unexposed variables of the business object class. In this case the attacker can send an additional "price" parameter in the request which binds with the unexposed variable "price" in the business object, thereby manipulating business logic.

What to Review
It is imperative to place all validation checks before processing any business logic and, in the case of ASP.NET applications, independent of the POSTBACKs. Security controls such as the authentication check must be placed before processing any request.

The use of filters is recommended when authorization is being implemented in MVC 3 and above, as .NET MVC 3 introduced a method in global.asax called RegisterGlobalFilters which can be used to default deny access to URLs in the application.

Sample 13.2

public static void RegisterGlobalFilters(GlobalFilterCollection filters)
{
filters.Add(new HandleErrorAttribute());
filters.Add(new System.Web.Mvc.AuthorizeAttribute());
}

It is recommended when reviewing MVC3/4 .NET to take a look at how authorization is being implemented. The line above, "filters.Add(new System.Web.Mvc.AuthorizeAttribute());", default denies access to any request without a valid session. If this is implemented we may need to provide anonymous access to certain pages, such as a registration page, public welcome page or a login page.

The directive "AllowAnonymous" is used to provide access to public pages with no valid session required. The code may look like this:

Sample 13.3

[AllowAnonymous]
public ActionResult LogMeIn(string returnUrl)

When reviewing code for authorization, the following considerations can be checked for:

• Every entry point should be authorized. Every function should be authorized.

• Authorization checks should be efficient, and implemented in a central code base such that they can be applied consistently.

• In cases where authorization fails, an HTTP 403 Forbidden page should be returned.

• When using RBAC, there must be some way for the application to report on the currently provisioned users of the system and their associated roles. This allows the business to periodically audit user access to the system and ensure it is accurate. For example, if a user is provisioned as an admin on the system and then changes job to another department, it could be the case that the admin role is no longer appropriate.

• There should be an easy method to change or remove a user's role (in RBAC systems). Adding, modifying or removing a user from a role should result in audit logs.

• For roles that are higher risk, addition, modification and deletion of those roles should involve multiple levels of authorization (e.g. maker/checker); this may be tracked within the application itself, or through some centralized role application. Both the functionality and code of the system controlling roles should be part of the review scope.

• At a design level, attempt to keep the range of roles simple. Applications with multiple permission levels/roles often increase the possibility of conflicting permission sets resulting in unanticipated privileges.

• In application architectures with thick clients (i.e. mobile apps or binaries running on a PC) do not attempt to perform any authorization in the client code, as this could be bypassed by an attacker. In browser based applications do not perform any authorization decisions in JavaScript.

• Never base authorization decisions on untrusted data. For example, do not use a header, or hidden field, from the client request to determine the level of authorization a user will have; again, this can be manipulated by an attacker.

• Follow the principle of 'complete mediation', where authorization is checked at every stage of a function. For example, if an application has four pages to browse through to purchase an item (browse.html, basket.html, inputPayment.html, makePayment.html) then check user authorization at every page, and stage within pages, instead of only performing a check in the first page.
• By default deny access to any page, and then use authorization logic to explicitly allow access based on roles/ACL rules (a minimal filter sketch follows this list).
• The business/form/command objects must have only those instance variables that are dependent on the
user inputs.
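As a rough illustration of the "central check, default deny" approach in the considerations above, the sketch below shows a servlet filter that refuses every request unless a rule explicitly allows it. It assumes the javax.servlet API and a hypothetical session attribute named "roles"; the rule table is deliberately simplistic and is not a drop-in implementation.

import java.io.IOException;
import java.util.Set;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/**
 * Central authorization filter: every request is denied unless a rule
 * explicitly allows it for the roles held in the user's session.
 */
public class AuthorizationFilter implements Filter {

    public void init(FilterConfig config) { }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        // Hypothetical session attribute holding the authenticated user's roles.
        @SuppressWarnings("unchecked")
        Set<String> roles = (Set<String>) request.getSession(true).getAttribute("roles");

        if (isAllowed(request.getRequestURI(), roles)) {
            chain.doFilter(req, res);                                 // explicitly allowed: continue
        } else {
            response.sendError(HttpServletResponse.SC_FORBIDDEN);     // default deny: HTTP 403
        }
    }

    /** Whitelist-style rule table; anything not matched here is denied. */
    private boolean isAllowed(String uri, Set<String> roles) {
        if (uri.equals("/login") || uri.equals("/register")) {
            return true;                                              // public entry points
        }
        if (uri.startsWith("/admin/")) {
            return roles != null && roles.contains("ADMIN");
        }
        return roles != null && roles.contains("USER");               // everything else needs a login
    }

    public void destroy() { }
}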
A8 CROSS-SITE REQUEST FORGERY (CSRF)

A CSRF attack forces a logged-on victim's browser to send a forged HTTP request, including the victim's session cookie and any other automatically included authentication information, to a vulnerable web application. This allows the attacker to force the victim's browser to generate requests the vulnerable application thinks are legitimate requests from the victim.

14.1 Description
CSRF is an attack which forces an end user to execute unwanted actions on a web application in which they are currently authenticated. With a little help of social engineering (like sending a link via email/chat), an attacker may force the users of a web application to execute actions of the attacker's choosing. A successful CSRF exploit can compromise end user data, and protected functionality, in the case of a normal privileged user. If the targeted end user is the administrator account, this can compromise the entire web application.

The impact of a successful cross-site request forgery attack is limited to the capabilities exposed by the vulnerable application. For example, this attack could result in a transfer of funds, changing a password, or purchasing an item in the user's context. In effect, CSRF attacks are used by an attacker to make a target system perform a function (funds transfer, form submission, etc.) via the target's browser without the knowledge of the target user, at least until the unauthorized function has been committed.

CSRF is not the same as XSS (Cross Site Scripting), which forces malicious content to be served by a trusted website to an unsuspecting victim. Cross-Site Request Forgery (CSRF, a.k.a. C-SURF or Confused-Deputy) attacks are considered useful if the attacker knows the target is authenticated to a web based system. They only work if the target is logged into the system, and therefore have a small attack footprint. Other logical weaknesses also need to be present, such as no transaction authorization being required by the user. A primary target is the exploitation of "ease of use" features on web applications (e.g. one-click purchase).

Impacts of successful CSRF exploits vary greatly based on the role of the victim. When targeting a normal user, a successful CSRF attack can compromise end-user data and their associated functions. If the targeted end user is an administrator account, a CSRF attack can compromise the entire web application. The sites that are more likely to be attacked are community websites (social networking, email) or sites that have high dollar value accounts associated with them (banks, stock brokerages, bill pay services). This attack can happen even if the user is logged into a web site using strong encryption (HTTPS). Utilizing social engineering, an attacker will embed malicious HTML or JavaScript code into an email or website to request a specific 'task URL'. The task then executes with or without the user's knowledge, either directly or by utilizing a Cross-Site Scripting flaw (e.g. the Samy MySpace worm).

How They Work
CSRF attacks work by sending a rogue HTTP request from an authenticated user's browser to the application, which then commits a transaction without authorization given by the target user. As long as the user is authenticated and a meaningful HTTP request is sent by the user's browser to a target application, the application does not know if the origin of the request is a valid transaction or a link clicked by the user (that was, say, in an email) while the user is authenticated to the application. The request will be authenticated, as the request from the user's browser will automatically include the 'Cookie' header, which is the basis for authentication. So an attacker makes the victim perform actions that they didn't intend to, such as purchase an item. Sample 14.1 shows an example of an HTTP POST to a ticket vendor to purchase a number of tickets.

Sample 14.1

POST http://TicketMeister.com/Buy_ticket.htm HTTP/1.1
Host: ticketmeister
User-Agent: Mozilla/5.0 (Macintosh; U; PPC Mac OS X Mach-O;) Firefox/1.4.1
Cookie: JSPSESSIONID=34JHURHD894LOP04957HR49I3JE383940123K

ticketId=ATHX1138&to=PO BOX 1198 DUBLIN 2&amount=10&date=11042008

The response of the vendor is to acknowledge the purchase of the tickets:

HTTP/1.0 200 OK
Date: Fri, 02 May 2008 10:01:20 GMT
Server: IBM_HTTP_Server
Content-Type: text/xml;charset=ISO-8859-1
Content-Language: en-US
X-Cache: MISS from app-proxy-2.proxy.ie
Connection: close

<?xml version="1.0" encoding="ISO-8859-1"?>
<pge_data> Ticket Purchased, Thank you for your custom.
</pge_data>

What to Review
This issue is simple to detect, but there may be compensating controls around the functionality of the application which may alert the user to a CSRF attempt. As long as the application accepts a well formed HTTP request and the request adheres to some business logic of the application, CSRF will work.

By checking the page rendering we need to see if any unique identifiers are appended to the links rendered by the application in the user's browser. If there is no unique identifier relating to each HTTP request to tie a HTTP request to the user, we are vulnerable. The session ID is not enough, as the session ID will be sent automatically if a user clicks on a rogue link, because the user is already authenticated.

Prevention Measures That Do NOT Work
Examples of attempted CSRF prevention techniques which attackers can bypass are listed in table 20; these measures should not be used in sensitive applications and should fail code review.

Table 20: Unsuccessful Countermeasures For CSRF Attacks

Using a Secret Cookie
Remember that all cookies, even the secret ones, will be submitted with every request. All authentication tokens will be submitted regardless of whether or not the end-user was tricked into submitting the request. Furthermore, session identifiers are simply used by the application container to associate the request with a specific session object. The session identifier does not verify that the end-user intended to submit the request.
Only Accepting POST Requests
Applications can be developed to only accept POST requests for the execution of business logic. The misconception is that since the attacker cannot construct a malicious link, a CSRF attack cannot be executed. Unfortunately, this logic is incorrect. There are numerous methods in which an attacker can trick a victim into submitting a forged POST request, such as a simple form hosted in an attacker's website with hidden values. This form can be triggered automatically by JavaScript or can be triggered by the victim who thinks the form will do something else.

Multi-Step Transactions
Multi-step transactions are not an adequate prevention of CSRF. As long as an attacker can predict or deduce each step of the completed transaction, then CSRF is possible.

URL Rewriting
This might be seen as a useful CSRF prevention technique as the attacker cannot guess the victim's session ID. However, the user's credential is exposed over the URL … site or on a hacked site). This attack scenario is easy to prevent: the referer will be omitted if the origin of the request is HTTPS. Therefore this attack does not affect web applications that are HTTPS only.

When a web application formulates a request (by generating a link or form that causes a request when submitted or clicked by the user), the application should include a hidden input parameter with a common name such as "CSRFToken". The value of this token must be randomly generated such that it cannot be guessed by an attacker. Consider leveraging the java.security.SecureRandom class for Java applications to generate a sufficiently long random token. Alternative generation algorithms include the use of 256-bit BASE64 encoded hashes. Developers that choose this generation algorithm must make sure that there is randomness and uniqueness utilized in the data that is hashed to generate the random token.

Sample 14.2

Salt = 'a0w8hsdfas8ls587uas87'

The ideal solution is to only include the CSRF token in POST requests and modify server-side actions that have a state-changing effect to only respond to POST requests. This is in fact what RFC 2616 requires for GET requests. If sensitive server-side actions are guaranteed to only ever respond to POST requests, then there is no need to include the token in GET requests.

Viewstate (ASP.NET)
ASP.NET has an option to maintain your ViewState. The ViewState indicates the status of a page when submitted to the server. The status is defined through a hidden field placed on each page with a <form runat="server"> control. Viewstate can be used as a CSRF defense, as it is difficult for an attacker to forge a valid Viewstate. It is not impossible to forge a valid Viewstate, since it is feasible that parameter values could be obtained or guessed by the attacker. However, if the current session ID is added to the ViewState, it then makes each Viewstate unique, and thus immune to CSRF.

To use the ViewStateUserKey property within the Viewstate to protect against spoofed post backs, add the following in the OnInit virtual method of the page-derived class (this property must be set in the Page.Init event):

Sample 14.3

protected override void OnInit(EventArgs e) {
base.OnInit(e);
if (User.Identity.IsAuthenticated)
ViewStateUserKey = Session.SessionID; }

To key the Viewstate to an individual using a unique value of your choice, use "(Page.ViewStateUserKey)". This must be applied in Page_Init because the key has to be provided to ASP.NET before Viewstate is loaded. This option has been available since ASP.NET 1.1. However, there are limitations on this mechanism, such as: ViewState MACs are only checked on POSTback, so any other application requests not using postbacks will happily allow CSRF.

Double Submit Cookies
Double submitting cookies is defined as sending a random value in both a cookie and as a request parameter, with the server verifying if the cookie value and request value are equal.

When a user authenticates to a site, the site should generate a (cryptographically strong) pseudorandom value and set it as a cookie on the user's machine separate from the session ID. The site does not have to save this value in any way. The site should then require every sensitive submission to include this random value as a hidden form value (or other request parameter) and also as a cookie value. An attacker cannot read any data sent from the server or modify cookie values, per the same-origin policy. This means that while an attacker can send any value he wants with a malicious CSRF request, the attacker will be unable to modify or read the value stored in the cookie. Since the cookie value and the request parameter or form value must be the same, the attacker will be unable to successfully submit a form unless he is able to guess the random CSRF value.

The Direct Web Remoting (DWR) Java library version 2.0 has CSRF protection built in, as it implements the double cookie submission transparently. [add reference]

The above CSRF preventions rely on the use of a unique token and the Same-Origin Policy to prevent CSRF by maintaining a secret token to authenticate requests. The following methods can prevent CSRF by relying upon similar rules that CSRF exploits can never break.

Checking The Referer Header
Although it is trivial to spoof the referer header on your own browser, it is impossible to do so in a CSRF attack. Checking the referer is a commonly used method of preventing CSRF on embedded network devices because it does not require a per-user state. This makes the referer a useful method of CSRF prevention when memory is scarce. This method of CSRF mitigation is also commonly used with unauthenticated requests, such as requests made prior to establishing a session state, which is required to keep track of a synchronization token.

However, checking the referer is considered to be a weaker form of CSRF protection. For example, open redirect vulnerabilities can be used to exploit GET-based requests that are protected with a referer check, and some organizations or browser tools remove referrer headers as a form of data protection. There are also common implementation mistakes with referer checks. For example, if the CSRF attack originates from an HTTPS domain then the referer will be omitted. In this case the lack of a referer should be considered to be an attack when the request is performing a state change. Also note that the attacker has limited influence over the referer. For example, if the victim's domain is "site.com" then an attacker could have the CSRF exploit originate from "site.com.attacker.com", which may fool a broken referer check implementation. XSS can be used to bypass a referer check.

In short, referer checking is a reasonable form of CSRF intrusion detection and prevention even though it is not a complete protection. Referer checking can detect some attacks but not stop all attacks. For example, if the HTTP referrer is from a different domain and you are expecting requests from your domain only, you can safely block that request.

Checking The Origin Header
The Origin HTTP header standard [add reference] was introduced as a method of defending against CSRF and other cross-domain attacks. Unlike the referer, the origin will be present in HTTP requests that originate from an HTTPS URL. If the origin header is present, then it should be checked for consistency.

Challenge-Response
Challenge-response is another defense option for CSRF. As mentioned before, it is typically used when the functionality being invoked is high risk. While challenge-response is a very strong defense to CSRF (assuming proper implementation), it does impact the user experience. For applications in need of high security, tokens (transparent) and challenge-response should be used on high risk functions.

The following are some examples of challenge-response options:
• CAPTCHA
• Re-Authentication (password)
• One-time Token

No Cross-Site Scripting (XSS) Vulnerabilities
Cross-Site Scripting is not necessary for CSRF to work. However, any cross-site scripting vulnerability can be used to defeat token, double-submit cookie, referer and origin based CSRF defenses. This is because an XSS payload can simply read any page on the site using an XMLHttpRequest and obtain the generated token from the response, and include that token with a forged request. This technique is exactly how the MySpace (Samy) worm defeated MySpace's anti-CSRF defenses in 2005, which enabled the worm to propagate. XSS cannot defeat challenge-response defenses such as CAPTCHA, re-authentication or one-time passwords. It is imperative that no XSS vulnerabilities are present to ensure that CSRF defenses can't be circumvented.
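To make the token-based prevention described earlier in this section concrete, the following minimal Java sketch generates a per-session token with java.security.SecureRandom (as suggested above) and validates the hidden form field on each state-changing request. The attribute and parameter names ("CSRFToken") are assumptions for illustration; libraries such as OWASP CSRFGuard provide production-ready equivalents.

import java.security.SecureRandom;
import java.util.Base64;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public final class CsrfTokens {

    private static final SecureRandom RANDOM = new SecureRandom();
    private static final String SESSION_KEY = "CSRFToken"; // illustrative attribute name

    /** Generates a 256-bit random token, stores it in the session and returns it for embedding in forms. */
    public static String issueToken(HttpSession session) {
        byte[] bytes = new byte[32];
        RANDOM.nextBytes(bytes);
        String token = Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
        session.setAttribute(SESSION_KEY, token);
        return token;
    }

    /** Validates the hidden form field against the value stored in the session (constant-time compare). */
    public static boolean isValid(HttpServletRequest request) {
        HttpSession session = request.getSession(false);
        if (session == null) {
            return false;
        }
        String expected = (String) session.getAttribute(SESSION_KEY);
        String supplied = request.getParameter("CSRFToken");
        if (expected == null || supplied == null || expected.length() != supplied.length()) {
            return false;
        }
        int diff = 0;
        for (int i = 0; i < expected.length(); i++) {
            diff |= expected.charAt(i) ^ supplied.charAt(i);
        }
        return diff == 0;
    }

    private CsrfTokens() { }
}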
A9 USING COMPONENTS WITH KNOWN VULNERABILITIES

Components, such as libraries, frameworks, and other software modules, almost always run with full privileges. If a vulnerable component is exploited, such an attack can facilitate serious data loss or server takeover. Applications using components with known vulnerabilities may undermine application defenses and enable a range of possible attacks and impacts.
15.1 Description
Today it would be rare for an application or software component to be developed without the re-use of some
open source or paid-for library or framework. This makes a lot of sense as these frameworks and libraries are
already developed and working, and have had a good degree of testing applied. However these third party
components can also be a source of security vulnerabilities when an attacker finds a flaw in the component's code; in fact such a flaw has added attraction, since the attacker knows the exploit will work on everyone who is using the component.
This issue has matured to such a state that flaws/exploits for popular frameworks, libraries and operating sys-
tems are sold on underground markets for large sums of money.
What to Review
There is really no code to review for this topic, unless your organization has taken it upon itself to review the code of
the component (assuming its open source and not a closed source third party library), in which case the code review
would be similar to any other audit review. However code review can be used within the larger company-wide track-
ing or audit mechanisms that lets the organization know what third party code it is using.
Regardless of the size of company, the use of third party components, and their versions, should be tracked to en-
sure the organization can be alerted when any security vulnerabilities are flagged. For smaller companies with 1 or
2 products this tracking could be as easy as a spreadsheet or wiki page, however for larger companies with 100s of
applications or products, the task of tracking developer use of third party frameworks and libraries is equally as large
as the risk posed by those libraries.
If a company has 20 products and each of those products use 5 or 6 third party components (e.g. Apache web servers,
OpenSSL crypto libraries, Java libraries for regex and DB interactions, etc.) that leaves the company with over 100
external sources where security vulnerabilities can come from. If the company suddenly hears of a heartbleed type
vulnerability, it has to be able to react and upgrade those affected applications, or take other countermeasures, to
protect itself and its customers.
This allows management and risk controllers to know their risk profile for vulnerabilities on the market: if a bug appears in Bouncy Castle, they know they are not exposed (i.e. no developer used Bouncy Castle in one of the products, because it's not on the list of crypto libraries to use). On the other hand, if there is a bug in OpenSSL, all their
eggs are in that basket and they need to upgrade immediately.
There will obviously be technical challenges to limiting the choices of third party components, and such a policy
could be unpopular with developers who’ll want to use the latest and greatest framework, but the first step to secur-
ing a product is knowing what ingredients you’ve made it with.
How can such a policy be tracked or enforced? At some point the library or framework, in the form of .dll/.so or as
source code, will be integrated into the codeline.
Such integrations should be subject to code review, and as a task of this code review the reviewer can check:
1. The library is one that can be used in the product suite (or maybe is already used and the developer is simply
unaware, in which case the review should be rejected and the original integration used)
2. Any tracking or auditing software (even a basic spread sheet) is updated to reflect that the product is using the
third party library. This allows for rapid remediation if a vulnerability appears, meaning the product will be patched.
…by the application? This increases the attack surface of the application and can cause unexpected behavior when that extra code opens a port and communicates to the internet.

If the reviewer thinks too much functionality/code is being introduced, they can advise turning off unused functionality or, better still, finding a way to not include that functionality in the product (e.g. by stripping out code, or hardcoding branches so unused functions are never called).
The OWASP project “OWASP Dependency Check” can provide a measure of automation for library checking
(https://www.owasp.org/index.php/OWASP_Dependency_Check)
A10 UNVALIDATED REDIRECTS AND FORWARDS

Web applications frequently redirect and forward users to other pages and websites, and use untrusted data to determine the destination pages. Without proper validation, attackers can redirect victims to phishing or malware sites, or use forwards to access unauthorized pages.

Redirects
Redirect functionality on a web site allows a user's browser to be told to go to a different page on the site. This can be done to improve the user interface or to track how users are navigating the site.

To provide the redirect functionality a site may have a specific URL to perform the redirect:
• http://www.example.com/utility/redirect.cgi

This page will take a parameter (from the URL or POST body) of 'URL' and will send back a message to the user's browser to go to that page, for example:
• http://www.example.com/utility/redirect.cgi?URL=http://www.example.com/viewtxn.html

However this can be abused, as an attacker can attempt to make a valid user click on a link that appears to be for www.example.com but which will invoke the redirect functionality on example.com to cause the user's browser to go to a malicious site (one that could look like example.com and trick the user into entering sensitive or authentication information):
• http://www.example.com/utility/redirect.cgi?URL=http://attacker.com/fakelogin.html

Forwards
Forwards are similar to redirects; however, the new page is not retrieved by the user's browser (as occurred with the redirect) but instead the server framework will obtain the forwarded page and return it to the user's browser. This is achieved by 'forward' commands within Java frameworks (e.g. Struts) or 'Server.Transfer' in .NET. As the forward is performed by the server framework itself, it limits the range of URLs the attacker can exploit to the current web site (i.e. the attacker cannot 'forward' to attacker.com); however, this attack can be used to bypass access controls. For example, where a site sends the forwarded page in the response:

• If purchasing, forward to 'purchase.do'
• If cancelling, forward to 'cancelled.do'

This will then be passed as a parameter to the web site:
• http://www.example.com/txn/acceptpayment.html?FWD=purchase

If instead an attacker used the forward to attempt to access a different page within the web site, e.g. admin.do, then they may access pages that they are not authorized to view, because authorization is being applied on the 'acceptpayment' page, instead of the forwarded page.

What to Review
If any part of the URL being forwarded, or redirected, to is based on user input, then the site could be at risk. Ensure:
• All redirects/forwards are constructed based on a whitelist, or
• All redirects/forwards use relative paths to ensure they stay on the trusted site

The following PHP code obtains a URL from the query string and then redirects the user to that URL.

Sample 16.2

$redirect_url = $_GET['url'];
header("Location: " . $redirect_url);

A similar example of C# .NET vulnerable code:

Sample 16.3

string url = request.QueryString["url"];
Response.Redirect(url);

The above code is vulnerable to an attack if no validation or extra method controls are applied to verify the certainty of the URL. This vulnerability could be used as part of a phishing scam by redirecting users to a malicious site. If user input has to be used as part of the URL, then apply strict validation to the input, ensuring it cannot be used for purposes other than intended.

Note that vulnerable code does not need to explicitly call a 'redirect' function, but instead could directly modify the response to cause the client browser to go to the redirected page. Code to look for is shown in table 21.
Table 21: Redirect Risks
Where an attacker has posted a redirecting URL on a forum, or sent one in an e-mail, the web site can check the referer header to ensure the user is coming from a page within the site, although this countermeasure will not apply if the malicious URL is contained within the site itself.

Consider creating a whitelist of URLs or options that a redirect is allowed to go to, or deny the ability for the user input to determine the scheme or hostname of the redirect. A site could also encode (or encrypt) the URL value to be redirected to, such that an attacker cannot easily create a malicious URL parameter that, when unencoded (or unencrypted), will be considered valid.
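A minimal sketch of the whitelist approach just described, assuming the redirect parameter should only ever point to a small set of relative paths on the same site; the path list and fallback page are illustrative, not prescribed values.

import java.net.URI;
import java.net.URISyntaxException;
import java.util.Arrays;
import java.util.List;

public class RedirectValidator {

    // Illustrative whitelist of pages the redirect parameter may point to.
    private static final List<String> ALLOWED_PATHS =
            Arrays.asList("/viewtxn.html", "/account/home.html", "/help.html");

    /**
     * Returns a safe redirect target: only relative, whitelisted paths are
     * accepted; anything else falls back to a default page.
     */
    public static String safeTarget(String requested) {
        if (requested == null) {
            return "/index.html";
        }
        try {
            URI uri = new URI(requested);
            // Reject absolute URLs (scheme or host present) so the user stays on this site.
            if (uri.getScheme() != null || uri.getHost() != null) {
                return "/index.html";
            }
            return ALLOWED_PATHS.contains(uri.getPath()) ? uri.getPath() : "/index.html";
        } catch (URISyntaxException e) {
            return "/index.html";
        }
    }

    public static void main(String[] args) {
        System.out.println(safeTarget("/viewtxn.html"));                      // allowed
        System.out.println(safeTarget("http://attacker.com/fakelogin.html"));     // rejected
        System.out.println(safeTarget("//attacker.com/fakelogin.html"));      // rejected (protocol-relative)
    }
}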
Forwards
The countermeasure for forwards is to whitelist the range of pages that can be forwarded to (similar to
redirects) and to enforce authentication on the forwarded page as well as the forwarding page. This means
that even if an attacker manages to force a forward to a page they should not have access to, the authentica-
tion check on the forwarded page will deny them access.
Note on J2EE
There is a noted flaw related to the “sendRedirect” method in J2EE applications. For example:
• response.sendRedirect(“home.html”);
This method is used to send a redirection response to the user, who then gets redirected to the desired web component whose URL is passed as an argument to the method. One common misconception is that execution flow in the Servlet/JSP page that is redirecting the user stops after a call to this method. Note that if there is code present after the 'if' condition, it will be executed.

The fact that execution of a servlet or JSP continues even after the sendRedirect() method also applies to the forward method of the RequestDispatcher class. However, the <jsp:forward> tag is an exception: it is observed that the execution flow stops after the use of the <jsp:forward> tag.

After issuing a redirect or forward, terminate code flow using a "return" statement.
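A small illustration of the point above, using a hypothetical servlet: without the return statement, the privileged code after sendRedirect() would still execute for unauthenticated requests.

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CheckoutServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        if (request.getSession(false) == null) {
            // Without the return, execution would continue past sendRedirect()
            // and the privileged code below would still run.
            response.sendRedirect("login.html");
            return;
        }
        processOrder(request); // privileged work, only reached for authenticated sessions
    }

    private void processOrder(HttpServletRequest request) {
        // ... business logic ...
    }
}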
References
• OWASP Article on Open Redirects https://www.owasp.org/index.php/Open_redirect
• CWE Entry 601 on Open Redirects http://cwe.mitre.org/data/definitions/601.html
HTML5

HTML5 was created to replace HTML4, XHTML and the HTML DOM Level 2. The main purpose of this new standard is to provide dynamic content without the use of extra proprietary client side plugins. This allows designers and developers to create exceptional sites providing a great user experience without having to install any additional plug-ins into the browser.

17.1 Description
Ideally users should have the latest web browser installed, but this does not happen as regularly as security experts advise; therefore the website should implement two layers of controls: one layer independent of the browser type, and a second as an additional control.

What to Review: Web Messaging
Web Messaging (also known as Cross Domain Messaging) provides a means of messaging between documents from different origins in a way that is generally safer than the multiple hacks used in the past to accomplish this task. The communication API is as follows:

However, there are still some recommendations to keep in mind:

• When posting a message, explicitly state the expected origin as the second argument to 'postMessage' rather than '*' in order to prevent sending the message to an unknown origin after a redirect or some other means of the target window's origin changing.

• The receiving page should always:
o Check the 'origin' attribute of the sender to verify the data is originating from the expected location.
o Perform input validation on the 'data' attribute of the event to ensure that it's in the desired format.

• Don't assume you have control over the 'data' attribute. A single Cross Site Scripting flaw in the sending page allows an attacker to send messages of any given format.

• Both pages should only interpret the exchanged messages as 'data'. Never evaluate passed messages as code (e.g. via 'eval()') or insert them into the page DOM (e.g. via 'innerHTML'), as that would create a DOM-based XSS vulnerability.

• To assign the data value to an element, instead of using an insecure method like 'element.innerHTML = data', use the safer option: 'element.textContent = data;'

• Check the origin properly to exactly match the FQDN(s) you expect. Note that the following code: 'if(message.origin.indexOf(".owasp.org")!=-1) { /* ... */ }' is very insecure and will not have the desired behavior, as 'www.owasp.org.attacker.com' will match.

• If you need to embed external content/untrusted gadgets and allow user-controlled scripts (which is highly discouraged), consider using a JavaScript rewriting framework such as Google's Caja or check the information on sandboxed frames.

What to Review: Cross Origin Resource Sharing (CORS)
Cross-origin requests have an Origin header that identifies the domain initiating the request and is automatically included by the browser in the request sent to the server. CORS defines the protocol between a web browser and a server that will determine whether a cross-origin request is allowed. In order to accomplish this goal, there are HTTP headers that provide information on the messaging context, including: Origin, Access-Control-Request-Method, Access-Control-Request-Headers, Access-Control-Allow-Origin, Access-Control-Allow-Credentials, Access-Control-Allow-Methods, Access-Control-Allow-Headers.

The CORS specification mandates that for non-simple requests, such as requests other than GET or POST or requests that use credentials, a pre-flight OPTIONS request must be sent in advance to check if the type of request will have a negative impact on the data. The pre-flight request checks the methods and headers allowed by the server, and whether credentials are permitted; based on the result of the OPTIONS request, the browser decides whether the request is allowed or not. More information on CORS requests can be found at reference [x] and [y].

Items to note when reviewing code related to CORS include:

• Ensure that URLs responding with 'Access-Control-Allow-Origin: *' do not include any sensitive content or information that might aid an attacker in further attacks. Use the 'Access-Control-Allow-Origin' header only on chosen URLs that need to be accessed cross-domain. Don't use the header for the whole domain.

• Allow only selected, trusted domains in the 'Access-Control-Allow-Origin' header. Prefer whitelisting domains over blacklisting or allowing any domain (do not use the '*' wildcard nor blindly return the 'Origin' header content without any checks).

• Keep in mind that CORS does not prevent the requested data from going to an unauthenticated location. It's still important for the server to perform usual Cross-Site Request Forgery prevention.

• While the RFC recommends a pre-flight request with the 'OPTIONS' verb, current implementations might not perform this request, so it's important that "ordinary" ('GET' and 'POST') requests perform any access control necessary.

• Discard requests received over plain HTTP with HTTPS origins to prevent mixed content bugs.

• Don't rely only on the Origin header for access control checks. Browsers always send this header in CORS requests, but it may be spoofed outside the browser. Application-level protocols should be used to protect sensitive data.

What to Review: WebSockets
Traditionally the HTTP protocol only allows one request/response per TCP connection. Asynchronous JavaScript and XML (AJAX) allows clients to send and receive data asynchronously (in the background without a page refresh) to the server; however, AJAX requires the client to initiate the requests and wait for the server responses (half-duplex). HTML5 WebSockets allow the client/server to create 'full-duplex' (two-way) communication channels, allowing the client and server to truly communicate asynchronously. WebSockets conduct their initial 'upgrade' handshake over HTTP and from then on all communication is carried out over TCP channels. More information about websockets can be found at [x] and [y].

The following is sample code of an application using Web Sockets:
Sample 17.1

[Constructor(in DOMString url, optional in DOMString protocol)]
interface WebSocket
{ readonly attribute DOMString URL;
// ready state
const unsigned short CONNECTING = 0;
const unsigned short OPEN = 1;
const unsigned short CLOSED = 2;
readonly attribute unsigned short readyState;
readonly attribute unsigned long bufferedAmount;
// networking
attribute Function onopen;
attribute Function onmessage;
attribute Function onclose;
boolean send(in DOMString data);
void close();
};
WebSocket implements EventTarget;

When reviewing code implementing websockets, the following items should be taken into consideration:

• Drop backward compatibility in implemented clients/servers and use only protocol versions above hybi-00. The popular Hixie-76 version (hiby-00) and older are outdated and insecure.

• The recommended version supported in the latest versions of all current browsers is RFC 6455 (supported by Firefox 11+, Chrome 16+, Safari 6, Opera 12.50, and IE10).

• While it's relatively easy to tunnel TCP services through WebSockets (e.g. VNC, FTP), doing so enables access to these tunneled services for the in-browser attacker in the case of a Cross Site Scripting attack. These services might also be called directly from a malicious page or program.

• The protocol doesn't handle authorization and/or authentication. Application-level protocols should handle that separately in case sensitive data is being transferred.

• Process the messages received by the websocket as data. Don't try to assign them directly to the DOM nor evaluate them as code. If the response is JSON, never use the insecure eval() function; use the safe option JSON.parse() instead.

• Endpoints exposed through the 'ws://' protocol are easily reversible to plain text. Only 'wss://' (WebSockets over SSL/TLS) should be used for protection against Man-In-The-Middle attacks.

• Spoofing the client is possible outside a browser, so the WebSockets server should be able to handle incorrect/malicious input. Always validate input coming from the remote site, as it might have been altered.

• When implementing servers, check the 'Origin:' header in the WebSockets handshake. Though it might be spoofed outside a browser, browsers always add the Origin of the page that initiated the WebSockets connection (a server-side sketch follows this section).

• As a WebSockets client in a browser is accessible through JavaScript calls, all WebSockets communication can be spoofed or hijacked through Cross Site Scripting. Always validate data coming through a WebSockets connection.

What to Review: Server-Sent Events
Server-sent events seem similar to WebSockets; however, they do not use a special protocol (they re-use HTTP) and they allow the client browser to solely listen for updates (messages) from the server, thereby removing the need for the client to send any polling or other messages up to the server.

When reviewing code that is handling server-sent events, items to keep in mind are:

• Validate URLs passed to the 'EventSource' constructor, even though only same-origin URLs are allowed.

• As mentioned before, process the messages ('event.data') as data and never evaluate the content as HTML or script code.

• Always check the origin attribute of the message ('event.origin') to ensure the message is coming from a trusted domain. Use a whitelist.
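For the server-side origin check mentioned in the WebSockets list above, the Java API for WebSocket (JSR 356) lets a reviewer look for a configurator that overrides checkOrigin. A minimal sketch with an assumed expected origin is shown below; an endpoint would reference it with something like @ServerEndpoint(value = "/updates", configurator = OriginCheckingConfigurator.class), where the endpoint URL is hypothetical.

import javax.websocket.server.ServerEndpointConfig;

/**
 * Rejects WebSocket handshakes whose Origin header does not exactly match
 * the expected value.
 */
public class OriginCheckingConfigurator extends ServerEndpointConfig.Configurator {

    private static final String EXPECTED_ORIGIN = "https://www.example.com"; // illustrative value

    @Override
    public boolean checkOrigin(String originHeaderValue) {
        // Browsers always send the Origin of the page that opened the connection;
        // a missing or mismatching value fails the handshake.
        return EXPECTED_ORIGIN.equals(originHeaderValue);
    }
}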
• foo://username:[email protected]:8042/over/there/index.dtb?type=animal&name =narwhal#nose
• Make sure authority section should only contain alphanumerics, “-“, and “.” And be followed by “/”, “?”,”#”. The
18.1 Description
risk here an IDN homograph attack.
Internet Explorer has two major exceptions when it comes to same origin policy:
1. Trust Zones: if both domains are in highly trusted zone e.g, corporate domains, then the same origin limita-
• Code reviewer needs to make sure the programmer is not assuming default behavior because the program-
tions are not applied.
mers browser properly escapes a particular character or browser standard says the character will be escaped
properly before allowing any URL-derived values are put inside a database query or the URL is echoed back
2. Port: IE doesn’t include port into Same Origin components, therefore http://yourcompany.com:81/index.
to the user.
html and http://yourcompany.com/index.html are considered from same origin and no restrictions are applied.
• Resources with a MIME type of image/png are treated as images and resources with MIME type of text/html
These exceptions are non-standard and not supported in any of other browser but would be helpful if developing an
are treated as HTML documents. Web applications can limit that content’s authority by restricting its MIME
app for Windows RT (or) IE based web application.
type. For example, serving user-generated content as image/png is less risky than serving user-generated
content as text/html.
The following figure displays the various parts of the URL:
• Privileges on document and resources should grant or withhold privileges from origins as a whole (rath-
Figure 12 er than discriminating between individual documents within an origin). Withholding privileges is ineffective
because the document without the privilege can usually obtain the privilege anyway because SOP does not
isolate documents within an origin.
foo://username:[email protected]:8042/over/there/index.dtb?type=animal&name=narwhal#nose
fragment
interpretable as filename
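The exact-match origin/authority checks called out in the list above (and in the Web Messaging section earlier) can be reviewed against a small helper like the sketch below. The expected host is an illustrative assumption; the point is the use of an exact host comparison rather than indexOf/endsWith style matching.

import java.net.URI;
import java.net.URISyntaxException;

public class OriginMatcher {

    private static final String EXPECTED_HOST = "www.owasp.org"; // illustrative FQDN

    /** Exact host comparison: suffix or substring matching would also accept www.owasp.org.attacker.com. */
    public static boolean isTrustedOrigin(String origin) {
        try {
            URI uri = new URI(origin);
            return "https".equals(uri.getScheme())
                    && EXPECTED_HOST.equalsIgnoreCase(uri.getHost());
        } catch (URISyntaxException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isTrustedOrigin("https://www.owasp.org"));              // true
        System.out.println(isTrustedOrigin("https://www.owasp.org.attacker.com")); // false
        System.out.println(isTrustedOrigin("http://www.owasp.org"));               // false: wrong scheme
    }
}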
Applications log messages of varying intensity and to varying sinks. Many logging APIs allow you to set the granularity of log messages, from a state of logging nearly all messages at level 'trace' or 'debug' to only logging the most important messages at level 'critical'. Where the log message is written to is also a consideration: sometimes it can be written to a local file, other times to a database log table, or it could be written over a network link to a central logging server.

The volume of logging has to be controlled, since the act of writing messages to the log uses CPU cycles; thus writing every small detail to a log will use up more resources (CPU, network bandwidth, disk space). Couple that with the fact that the logs have to be parsed or interpreted by a tool or human in order for them to be useful, and the consumer of the log could have to parse through thousands of lines to find a message of consequence.

All types of applications may send event data to remote systems, either directly over a network connection, or asynchronously through a daily/weekly/monthly secure copy of the log to some centralized log collection and management system (e.g. SIEM or SEM) or another application elsewhere.

If the information in the log is important, and could possibly be used for legal matters, consider how the source (log) can be verified, and how integrity and non-repudiation can be enforced. Log data, temporary debug logs, and backups/copies/extractions must not be destroyed before the end of the required data retention period, and must not be kept beyond this time. Legal, regulatory and contractual obligations may impact these periods.

Server applications commonly write event log data to the file system or a database (SQL or NoSQL); however, logging could also be required on client devices, and applications installed on desktops and mobile devices may use local storage and local databases. Consider how this client logging data is transferred to the server.

What to Review
When reviewing code modules from a logging point of view, some common issues to look out for include:

• When using the file system, it is preferable to use a separate partition than those used by the operating system, other application files and user generated content

• For file-based logs, apply strict permissions concerning which users can access the directories, and the permissions of files within the directories

• In web applications, the logs should not be exposed in web-accessible locations, and if done so, should have restricted access and be configured with a plain text MIME type (not HTML)

• When using a database, it is preferable to utilize a separate database account that is only used for writing log data and which has very restrictive database, table, function and command permissions

o User performing the action

o Action being performed/attempted

o Information on the client, e.g. IP address, source port, user-agent

o External classifications e.g. NIST Security Content Automation Protocol (SCAP), Mitre Common Attack Pattern Enumeration and Classification (CAPEC)

o Perform sanitization on all event data to prevent log injection attacks, e.g. carriage return (CR), line feed (LF) and delimiter characters (and optionally to remove sensitive data) (a sketch of such sanitization follows at the end of this list)

• If writing to databases, read, understand and apply the SQL injection cheat sheet

• Ensure logging is implemented and enabled during application security, fuzz, penetration and performance testing
• Ensure logging cannot be used to deplete system resources, for example by filling up disk space or exceeding
database transaction log space, leading to denial of service
• The logging mechanisms and collected event data must be protected from misuse such as tampering in transit, and unauthorized access, modification and deletion once stored

• Other: Common Log File System (CLFS), Microsoft
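As a sketch of the log-injection sanitization mentioned in the list above: strip CR/LF and any delimiter your log format uses before writing attacker-influenced values. The delimiter and method names here are assumptions, not a prescribed API.

public final class LogSanitizer {

    /**
     * Strips characters that allow forged log entries: carriage returns,
     * line feeds and the delimiter used by this (hypothetical) log format.
     */
    public static String clean(String untrusted) {
        if (untrusted == null) {
            return "";
        }
        return untrusted
                .replace("\r", "")
                .replace("\n", "")
                .replace("|", "_"); // '|' assumed to be the field delimiter
    }

    public static void main(String[] args) {
        String userName = "alice\r\n2017-01-01 00:00:00 INFO fake entry";
        // Logged as one line instead of letting the attacker inject a second entry.
        System.out.println("login failed for user=" + clean(userName));
    }
}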
ERROR HANDLING

For example, SQL injection is much tougher to successfully execute without some healthy error messages. It lessens the attack footprint, and an attacker would have to resort to using "blind SQL injection", which is more difficult and time consuming.
2. A proper centralised error strategy is easier to maintain and reduces the chance of any uncaught errors “bubbling
up” to the front end of an application.
3. Information leakage could lead to social engineering exploits, for example if the hosting company's name is returned, or some employee's name can be seen.

Regardless of whether the development language provides checked exceptions or not, reviewers should
remember:
• Not all errors are exceptions. Don’t rely on exception handling to be your only way of handling errors, handle all
case statement ‘default’ sections, ensure all ‘if’ statements have their ‘else’ clauses covered, ensure that all exits from
a function (e.g. return statements, exceptions, etc.) are covered. RAII concepts (e.g. auto pointers and the like) are
an advantage here. In languages like Java and C#, remember that errors are different from exceptions (different
hierarchy) and should be handled.
• Catching an exception is not automatically handling it. You've caught your exception, so how do you handle it? For many cases this should be obvious enough, based on your business logic, but for some (e.g. out of memory, array index out of bounds, etc.) the handling may not be so simple.

• Don't catch more than you can handle. Catch-all clauses (e.g. 'catch(Exception e)' in Java & C# or 'catch(...)' in C++) should be avoided, as you will not know what type of exception you are handling, and if you don't know the exception type, how do you accurately handle it? It could be that the downstream server is not responding, or a user may have exceeded their quota, or you may be out of memory; these issues should be handled in different ways and thus should be caught in exception clauses that are specific.

When an exception or error is thrown, we also need to log this occurrence. Sometimes this is due to bad development, but it can be the result of an attack or some other service your application relies on failing. This has to be imagined in the production scenario: if your application handles 'failing securely' by returning an error response to the client, and since we don't want to leak information that error will be generic, we need to have some way of identifying why the failure occurred. If your customer reports that thousands of errors occurred last night, you know that customer is going to want to know why. If you don't have proper logging and traceability coded into your application then you will not be able to establish if those errors were due to some attempted hack, or an error in your business logic when handling a particular type of error.

All code paths that can cause an exception to be thrown should check for success in order for the exception not to be thrown. This could be hard to impossible for a manual code review to cover, especially for large bodies of code. However, if there is a debug version of the code, then modules/functions could throw relevant exceptions/errors and an automated tool can ensure the state and error responses from the module are as expected. This then means the code reviewer has the job of ensuring all relevant exceptions/errors are tested in the debug code.

What to Review
When reviewing code it is recommended that you assess the commonality within the application from an error/exception handling perspective. Frameworks have error handling resources which can be exploited to assist in secure programming, and such resources within the framework should be reviewed to assess if the error handling is "wired-up" correctly. A generic error page should be used for all exceptions if possible, as this prevents the attacker from identifying internal responses to error states. This also makes it more difficult for automated tools to identify successful attacks.

For JSP Struts this could be controlled in the struts-config.xml file, a key file when reviewing the wired-up Struts environment:

Sample 20.1

<exception key="bank.error.nowonga"
path="/NoWonga.jsp"
type="mybank.account.NoCashException"/>

Specification can be done for JSP in web.xml in order to handle unhandled exceptions. When unhandled exceptions occur, but are not caught in code, the user is forwarded to a generic error page:

Sample 20.2

<error-page>
<exception-type>UnhandledException</exception-type>
<location>GenericError.jsp</location>
</error-page>

Also in the case of HTTP 404 or HTTP 500 errors, during the review you may find:

Sample 20.3

<error-page>
<error-code>500</error-code>
<location>GenericError.jsp</location>
</error-page>

For IIS development the 'Application_Error()' handler will allow the application to catch all uncaught exceptions and handle them in a consistent way. Note this is important, or else there is a chance your exception information could be sent back to the client in the response.

For Apache development, returning failures from handlers or modules can prevent any further processing by the Apache engine and result in an error response from the server. Response headers, body, etc. can be set by the handler/module or can be configured using the "ErrorDocument" configuration. We should use a localized description string in every exception, a friendly error reason such as "System Error – Please try again later". When the user sees an error message, it will be derived from this description string of the exception that was thrown, and never from the exception class, which may contain a stack trace, line number where the error occurred, class name, or method name.

Do not expose sensitive information like exception messages. Information such as paths on the local file system is considered privileged information; any internal system information should be hidden from the user. As mentioned before, an attacker could use this information to gather private user information from the application or components that make up the app.

Don't put people's names or any internal contact information in error messages. Don't put any "human" information, which would lead to a level of familiarity and a social engineering exploit.

What to Review: Failing Securely
There can be many different reasons why an application may fail, for example:

• The result of business logic conditions not being met.

• The result of the environment wherein the business logic resides failing.

• The result of upstream or downstream systems upon which the application depends failing.

• Technical hardware / physical failure.
Failures are like the Spanish Inquisition: popularly, nobody expected the Spanish Inquisition (see Monty Python), but in real life the Spanish knew when an inquisition was going to occur and were prepared for it. Similarly in an application, though you don't expect errors to occur, your code should be prepared for them to happen. In the event of a failure, it is important not to leave the "doors" of the application open and the keys to other "rooms" within the application sitting on the table. In the course of a logical workflow, which is designed based upon requirements, errors may occur which can be programmatically handled, such as a connection pool not being available, or a downstream server returning a failure.

Such areas of failure should be examined during the course of the code review. It should be examined whether resources should be released, such as memory, connection pools, file handles, etc.

The review of code should also include pinpointing areas where the user session should be terminated or invalidated. Sometimes errors may occur which do not make any logical sense from a business logic perspective or a technical standpoint, for example a logged in user looking to access an account which is not registered to that user. Such conditions reflect possible malicious activity. Here we should review if the code is in any way defensive and kills the user's session object and forwards the user to the login page. (Keep in mind that the session object should be examined upon every HTTP request.)

What to Review: Potentially Vulnerable Code

Java
In Java we have the concept of an error object: the Exception object. This lives in the Java package java.lang and is derived from the Throwable object. Exceptions are thrown when an abnormal occurrence has occurred. Another object derived from Throwable is the Error object, which is thrown when something more serious occurs. The Error object can be caught in a catch clause, but cannot be handled; the best you can do is log some information about the Error and then re-throw it.

Information leakage can occur when developers use some exception methods, which 'bubble' to the user UI due to a poor error handling strategy. The methods are as follows:
• printStackTrace()
• getStackTrace()

Also important to know is that the output of these methods is printed to the System console, the same as System.out.println(e) where there is an Exception. Be sure not to redirect the output stream to the PrintWriter object of the JSP, by convention called "out", for example:
• printStackTrace(out);

Note it is possible to change where System.err and System.out write to (like modifying fd 1 & 2 in bash or C/C++), using the java.lang.System class:
• setErr() for the System.err field, and
• setOut() for the System.out field.

This could be used on a process-wide basis to ensure no output gets written to standard error or standard out (which can be reflected back to the client) but instead written to a configured log file.
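A minimal sketch of that idea (the log file path and append mode are assumptions, not a prescribed configuration):

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.PrintStream;

public class StdStreamRedirect {
    // Redirect System.out and System.err so stray stack traces end up in a
    // log file rather than in output that might be reflected to the client.
    public static void redirectToLog() throws IOException {
        PrintStream log = new PrintStream(
                new FileOutputStream("logs/app-error.log", true), true, "UTF-8");
        System.setOut(log);
        System.setErr(log);
    }
}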
C#.NET
In .NET a System.Exception object exists, and commonly used child objects such as ApplicationException and SystemException are used. It is not recommended that you throw or catch a SystemException; this is thrown by the runtime.

When an error occurs, either the system or the currently executing application reports it by throwing an exception containing information about the error, similar to Java. Once thrown, an exception is handled by the application or by the default exception handler. This Exception object contains similar methods to the Java implementation, such as:
• StackTrace
• Source
• Message
• HelpLink

In .NET we need to look at the error handling strategy from the point of view of global error handling and the handling of unexpected errors. This can be done in many ways, and this is not an exhaustive list. Firstly, an Error Event is thrown when an unhandled exception is thrown. This is part of the TemplateControl class, see reference:
• http://msdn.microsoft.com/library/default.asp?url=/library/enus/cpref/html/frlrfSystemWebUITemplateControlClassErrorTopic.asp

Error handling can be done in three ways in .NET, executed in the following order:
• On the .aspx or associated codebehind page, in Page_Error.
• In the global.asax file's Application_Error (as mentioned before).
• In the web.config file's customErrors section.

It is recommended to look in these areas to understand the error strategy of the application.

Classic ASP
Unlike Java and .NET, classic ASP pages do not have structured error handling in try-catch blocks. Instead they have a specific object called "err". This makes error handling in classic ASP pages hard to do and prone to design errors in error handlers, causing race conditions and information leakage. Also, as ASP uses VBScript (a subset of Visual Basic), statements like "On Error GoTo label" are not available. In classic ASP there are two ways to do error handling: the first is using the err object with "On Error Resume Next" and "On Error GoTo 0".

Sample 20.4

Public Function IsInteger (ByVal Number)
    Dim Res, tNumber, re
    Set re = New RegExp              ' regular expression object used by re.Pattern / re.Test below
    Number = Trim(Number)
    tNumber = Number
    On Error Resume Next             ' If an error occurs continue execution
    Number = CInt(Number)            ' if Number is an alphanumeric string a Type Mismatch error will occur
    Res = (err.number = 0)           ' If there are no errors then return true
    On Error GoTo 0                  ' If an error occurs stop execution and display error
    re.Pattern = "^[\+\-]? *\d+$"    ' only one +/- and digits are allowed
    IsInteger = re.Test(tNumber) And Res
End Function

The second is using an error handler on an error page (http://support.microsoft.com/kb/299981).
What to Review: Error Handling in IIS

Page_Error is page-level handling which is run on the server side in .NET. Below is an example, but the error information it returns is a little too informative and hence bad practice.

Sample 20.6

<script language="VB" runat="server">
Sub Page_Error(Source As Object, E As EventArgs)
    Dim message As String = Request.Url.ToString() & Server.GetLastError().ToString()
    Response.Write(message) ' display message
End Sub
</script>

The example above has a number of issues. Firstly, it displays the HTTP request to the user in the form of Request.Url.ToString(). Assuming there has been no data validation prior to this point, we are vulnerable to cross-site scripting attacks. Secondly, the error message and stack trace are displayed to the user using Server.GetLastError().ToString(), which divulges internal information regarding the application.

After Page_Error is called, the Application_Error sub is called. When an error occurs, the Application_Error function is called; in this method we can log the error and redirect to another page. In fact, catching errors in Application_Error instead of Page_Error would be an example of centralizing errors as described earlier.

Web.config has custom error tags which can be used to handle errors. This is called last: if Page_Error or Application_Error is called and has functionality, that functionality shall be executed first. If the previous two handling mechanisms do not redirect or clear the error (Response.Redirect or Server.ClearError), this will be called and you shall be forwarded to the page defined in web.config in the customErrors section, which is configured as follows:

Sample 20.8

<customErrors mode="<On|Off|RemoteOnly>" defaultRedirect="<default redirect page>">
    <error statusCode="<HTTP status code>" redirect="<specific redirect page for listed status code>"/>
</customErrors>

The "mode" attribute value of "On" means that custom errors are enabled, whilst the "Off" value means that custom errors are disabled. The "mode" attribute can also be set to "RemoteOnly", which specifies that custom errors are shown only to remote clients and ASP.NET errors are shown to requests coming from the local host. If the "mode" attribute is not set then it defaults to "RemoteOnly".

When an error occurs, if the status code of the response matches one of the error elements, then the relevant 'redirect' value is returned as the error page. If the status code does not match, then the error page from the 'defaultRedirect' attribute will be displayed. If no value is set for 'defaultRedirect', then a generic IIS error page is returned.

An example of the customErrors section completed for an application is as follows:
<customErrors mode="On" defaultRedirect="error.html">
    <error statusCode="500" redirect="err500.aspx"/>
    <error statusCode="404" redirect="notHere.aspx"/>
    <error statusCode="403" redirect="notAuthz.aspx"/>
</customErrors>

What to Review: Error Handling in Apache

In Apache you have two choices in how to return error messages to the client:

1. You can write the error status code into the req object and write the response to appear the way you want, then have your handler return 'DONE' (which means the Apache framework will not allow any further handlers/filters to process the request and will send the response to the client).

2. Your handler or filter code can return pre-defined values which will tell the Apache framework the result of your code's processing (essentially the HTTP status code). You can then configure what error pages should be returned for each error code.

In the interest of centralizing all error code handling, option 2 can make more sense. To return a specific pre-defined value from your handler, refer to the Apache documentation for the list of values to use, and then return from the handler function as shown in the following example:

Sample 20.10

static int my_handler(request_rec *r)
{
    if ( problem_processing() )
    {
        return HTTP_INTERNAL_SERVER_ERROR;
    }
    ... continue processing request ...
}

In the httpd.conf file you can then specify which page should be returned for each error code using the 'ErrorDocument' directive. The format of this directive is as follows:

• ErrorDocument <3-digit-code> <action>

... where the 3-digit code is the HTTP response code set by the handler, and the action is a local or external URL to be returned, or specific text to display. The following examples are taken from the Apache ErrorDocument documentation (https://httpd.apache.org/docs/2.4/custom-error.html), which contains more information and options on ErrorDocument directives:

ErrorDocument 500 "Sorry, our script crashed. Oh dear"
ErrorDocument 500 /cgi-bin/crash-recover
ErrorDocument 500 http://error.example.com/server_error.html
ErrorDocument 404 /errors/not_found.html
ErrorDocument 401 /subscription/how_to_subscribe.html

What to Review: Leading Practice for Error Handling

Code that might throw exceptions should be in a try block, and code that handles exceptions in a catch block. The catch block is a series of statements beginning with the keyword catch, followed by an exception type and an action to be taken.

Example: C# (.NET) Try-Catch:

Sample 20.12

public class DoStuff {
    public static void Main() {
        try {
            StreamReader sr = File.OpenText("stuff.txt");
            Console.WriteLine("Reading line {0}", sr.ReadLine());
        }
        catch(MyClassExtendedFromException e) {
            Console.WriteLine("An error occurred. Please leave the room");
            logerror("Error: ", e);
        }
    }
}

Java Try-Catch

Sample 20.13

public void run() {
    while (!stop) {
        try {
            // Perform work here
        } catch (Throwable t) {
            // Log the exception and continue
            WriteToUser("An Error has occurred, put the kettle on");
            logger.log(Level.SEVERE, "Unexpected exception", t);
        }
    }
}
C++ Try–Catch

Sample 20.14

void perform_fn() {
    try {
        // Perform work here
    } catch ( const MyClassExtendedFromStdException& e) {
        // Log the exception and continue
        WriteToUser("An Error has occurred, put the kettle on");
        logger.log(Level.SEVERE, "Unexpected exception", e);
    }
}

In general, it is best practice to catch a specific type of exception rather than use the basic catch(Exception) or, in the case of Java, catch(Throwable) statement.

What to Review: The Order of Catching Exceptions

Keep in mind that many languages will attempt to match the thrown exception to the catch clause even if it means matching the thrown exception to a parent class. Also remember that catch clauses are checked in the order they are coded on the page. This could leave you in the situation where a certain type of exception might never be handled correctly; take the following example, where 'non_even_argument' is a subclass of 'std::invalid_argument':

Sample 20.15

class non_even_argument : public std::invalid_argument {
public:
    explicit non_even_argument (const string& what_arg);
};

void do_fn()
{
    try
    {
        // Perform work that could throw
    }
    catch ( const std::invalid_argument& e )
    {
        // Perform generic invalid argument processing
    }
    catch ( const non_even_argument& e )
    {
        // Never reached: the clause above already matches non_even_argument
        // through its std::invalid_argument parent class
    }
}

If the language in question has a finally method, use it. The finally method is guaranteed to always be called. The finally method can be used to release resources referenced by the method that threw the exception. This is very important. An example would be a method that gained a database connection from a pool of connections: if an exception occurred without finally, the connection object would not be returned to the pool for some time (until the timeout). This can lead to pool exhaustion. finally() is called even if no exception is thrown.

Sample 20.16

Connection conn = null;
try {
    conn = connectionPool.getConnection();
    // Perform work with the connection
} catch (SQLException e) {
    logger.log(Level.SEVERE, "Unexpected exception", e);
} finally {
    if (conn != null) {
        try {
            conn.close(); // always return the connection to the pool
        } catch (SQLException ignored) { }
    }
}

A Java example showing finally being used to release system resources.

What to Review: Releasing Resources and Good Housekeeping

RAII is Resource Acquisition Is Initialization, which is a way of saying that when you first create an instance of a type, it should be fully set up (or as much as possible) so that it is in a good state. Another advantage of RAII is how objects are disposed of: effectively, when an object instance is no longer needed, its resources are automatically returned when the object goes out of scope (C++) or when its 'using' block is finished (the C# 'using' directive, which calls the Dispose method, or Java 7's try-with-resources feature).

RAII has the advantage that programmers (and users of libraries) don't need to explicitly delete objects; the objects will be removed themselves, and in the process of removing themselves (destructor or Dispose) the resources they hold are released.

For Classic ASP pages it is recommended to enclose all clean-up in a function and call it from the error handling code.
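To illustrate the Java 7 try-with-resources feature mentioned above, a minimal sketch (the file name is illustrative only):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ReadFirstLine {
    public static String readFirstLine(String path) throws IOException {
        // The reader is closed automatically when the try block exits,
        // whether normally or via an exception; no finally block is needed.
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            return reader.readLine();
        }
    }
}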
Building an infrastructure for consistent error reporting proves more difficult than error handling. Struts provides the ActionMessages and ActionErrors classes for maintaining a stack of error messages to be reported, which can be used with JSP tags like <html:error> to display these error messages to the user.

To report a different severity of a message in a different manner (like error, warning, or information) the following tasks are required:

1. Register and instantiate the errors under the appropriate severity.

2. Identify these messages and show them in a consistent manner.

The Struts ActionErrors class makes error handling quite easy:

Sample 20.17

ActionErrors errors = new ActionErrors();
errors.add("fatal", new ActionError("...."));
errors.add("error", new ActionError("...."));
errors.add("warning", new ActionError("...."));
errors.add("information", new ActionError("...."));
saveErrors(request, errors); // Important to do this

Now that we have added the errors, we display them by using tags in the HTML page.

Sample 20.18

<logic:messagePresent property="error">
    <html:messages property="error" id="errMsg" >
        <bean:write name="errMsg"/>
    </html:messages>
</logic:messagePresent>

References

• For classic ASP pages you need to do some IIS configuration; follow http://support.microsoft.com/kb/299981 for more information.

• For default HTTP error page handling in Struts (web.xml) see https://software-security.sans.org/blog/2010/08/11/security-misconfigurations-java-webxml-files

REVIEWING SECURITY ALERTS

How will your code and applications react when something has gone wrong? Many companies that follow secure design and coding principles do so to prevent attackers from getting into their network; however, many companies do not consider designing and coding for the scenario where an attacker may have found a vulnerability, or has already exploited it to run code inside a company's firewalls (i.e. within the intranet). Many companies employ SIEM logging technologies to monitor network and OS logs for patterns that detect suspicious activity; this section aims to further encourage application layers and interfaces to do the same.

21.1 Description

This section concentrates on:

1. Design and code that allows the user to react when a system is being attacked.

2. Concepts allowing applications to flag when they have been breached.

When a company implements secure design and coding, it will have the aim of preventing attackers from misusing the software and accessing information they should not have access to. Input validation checks for SQL injection, XSS, CSRF, etc. should prevent attackers from being able to exploit these types of vulnerabilities against the software. However, how should software react when an attacker is attempting to breach the defenses, or the protections have been breached?

For an application to alert to security issues, it needs context on what is 'normal' and what constitutes a security issue. This will differ based on the application and the context within which it is running. In general, applications should not attempt to log every item that occurs, as the excessive logging will slow down the system, fill up disk or DB space, and make it very hard to filter through all the information to find the security issue.

At the same time, if not enough information is monitored or logged, then security alerting will be very hard to do based on the available information. To achieve this balance an application could use its own risk scoring system, monitoring at a system level what risk triggers have been spotted (i.e. invalid inputs, failed passwords, etc.) and using different modes of logging. Take an example of normal usage: in this scenario only critical items are logged. However, if the security risk is perceived to have increased, then major or security level items can be logged and acted upon. This higher security risk could also invoke further security functionality as described later in this section.

Take an example where an online form (post authentication) allows a user to enter a month of the year. Here the UI is designed to give the user a drop down list of the months (January through to December). In this case the logged in user should only ever enter one of 12 values, since they typically should not be entering any text; instead they are simply selecting one of the pre-defined drop down values.

If the server receiving this form has followed secure coding practices, it will typically check that the form field matches one of the 12 allowed values, and then consider it valid. If the form field does not match, it returns an error, and may log a message on the server. This prevents the attacker from exploiting this particular field, however this is unlikely to deter an attacker and they would move on to other form fields.
In this scenario we have more information available to us than we have recorded. We have returned an error back to the user, and maybe logged an error on the server. In fact we know a lot more: an authenticated user has entered an invalid value which they should never have been able to do (as it's a drop down list) in normal usage.

This could be due to a few reasons:

• There's a bug in the software and the user is not malicious.

• An attacker has stolen the user's login credentials and is attempting to attack the system.

• A user has logged in but has a virus/trojan which is attempting to attack the system.

• A user has logged in but is experiencing a man-in-the-middle attack.

• A user is not intending to be malicious but has somehow changed the value with some browser plugin, etc.

If it's the first case above, then the company should know about it to fix their system. If it's case 2, 3 or 4, then the application should take some action to protect itself and the user, such as reducing the functionality available to the user (i.e. no PII viewable, can't change passwords, can't perform financial transactions) or forcing further authentication such as security questions or out-of-band authentication. The system could also alert the user to the fact that the unexpected input was spotted and advise them to run antivirus, etc., thus stopping an attack while it is underway.

Obviously care must be taken in limiting user functionality or alerting users in case it's an honest mistake, so a risk score or session alerts should be used. For example, if everything has been normal in the browsing session and 1 character is out of place, then showing a red pop-up box stating the user has been hacked is not reasonable; however if this is not the usual IP address for the user, they have logged in at an unusual time, and this is the 5th malformed entry with what looks like an SQL injection string, then it would be reasonable for the application to react. This possible reaction would need to be stated in legal documentation.

In another scenario, if an attacker has got through the application defenses and extracted part of the application's customer database, would the company know? Splitting information in the database into separate tables makes sense from an efficiency point of view, but also from a security view; even putting confidential information into a separate partition can make it harder for the attacker. However, if the attacker has the information it can be hard to detect, and applications should take steps to aid alerting software (e.g. SIEM systems). Many financial institutions use risk scoring systems to look at elements of the user's session to give a risk score: if Johnny always logs in at 6pm on a Thursday from the same IP, then we have a trusted pattern. If suddenly Johnny logs in at 2:15am from an IP address on the other side of the world, after getting the password wrong 7 times, then maybe he's jetlagged after a long trip, or perhaps his account has been hacked. Either way, asking him for out-of-band authentication would be reasonable to allow Johnny to log in, or to block an attacker from using Johnny's account.

If the application takes this to a larger view, it can determine that on a normal day 3% of the users log on in what would be considered a riskier way, i.e. different IP address, different time, etc. If on Thursday it sees this number rise to 23%, has something strange happened to the user base, or has the database been hacked? This type of information can be used to enforce a blanket out-of-band authentication (and internal investigation of the logs) for the 23% of 'riskier' users, thereby combining the risk score for the user with the overall risk score for the application.

Another good option is 'honey accounts', which are usernames and passwords that are never given out to users. These accounts are added just like any other user and stored in the DB; however they are also recorded in a special cache and checked on login. Since they are never given to any user, no user should ever log on with them, so if one of those accounts is used, the only way that username and password combination could be known is if an attacker got the database. This information allows the application to move to a more secure state and alert the company that the DB has been hacked.
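A minimal sketch of that login-time check (the decoy account names, the logger-based alert and the method name are illustrative assumptions, not a prescribed design):

import java.util.Set;
import java.util.logging.Level;
import java.util.logging.Logger;

public class HoneyAccountCheck {

    private static final Logger ALERTS = Logger.getLogger("security.alerts");

    // Decoy usernames that exist in the user store but are never issued to anyone.
    private static final Set<String> HONEY_ACCOUNTS = Set.of("jsmith_backup", "svc_reporting2");

    /** Call on every login attempt, before normal authentication. */
    public static boolean isHoneyAccount(String username) {
        if (HONEY_ACCOUNTS.contains(username)) {
            // Only someone holding a copy of the credential store should know these
            // accounts exist, so treat this as a likely database compromise: log,
            // alert operations, and move the application to a more secure state.
            ALERTS.log(Level.SEVERE, "Honey account login attempt: {0}", username);
            return true;
        }
        return false;
    }
}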
What to Review

When reviewing code modules from a security alerting point of view, some common issues to look out for include:

• Will the application know if it's being attacked? Does it ignore invalid inputs, logins, etc., or does it log them and monitor this state to capture a cumulative perception of the current risk to the system?

• Can the application automatically change its logging level to react to security threats? Is changing security levels dynamic, or does it require a restart?

• Do the SDLC requirements or design documentation capture what would constitute a security alert? Has this determination been peer reviewed? Does the testing cycle run through these scenarios?

• Does the system employ 'honey accounts' such that the application will know if the DB has been compromised?

• Is there a risk-based scoring system that records the normal usage of users and allows for determination or reaction if the risk increases? (A sketch of such a scoring system follows this list.)

• If a SIEM system is being used, have appropriate triggers been identified? Have automated tests been created to ensure those trigger log messages are not accidentally modified by future enhancements or bug fixes?

• Does the system track how many failed login attempts a user has experienced? Does the system react to this?

• Does certain functionality (i.e. transaction initiation, changing password, etc.) have different modes of operation based on the current risk score the application is operating within?

• Can the application revert back to 'normal' operation when the security risk score drops to normal levels?

• How are administrators alerted when the security risk score rises, or when a breach has been assumed? At an operational level, is this tested regularly? How are changes of personnel handled?
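As a concrete illustration of the risk-based scoring idea referenced above, a minimal sketch (the event names, weights and threshold are hypothetical):

import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

public class SessionRiskScore {

    private static final int ELEVATED_THRESHOLD = 50;

    // Hypothetical weights for suspicious events observed in a session.
    private static final Map<String, Integer> WEIGHTS = Map.of(
            "FAILED_LOGIN", 10,
            "UNEXPECTED_DROPDOWN_VALUE", 20,   // value the UI could not have produced
            "MALFORMED_INPUT", 15);

    private final AtomicInteger score = new AtomicInteger(0);

    /** Record a suspicious event and report whether the session is now high risk. */
    public boolean recordEvent(String eventType) {
        int weight = WEIGHTS.getOrDefault(eventType, 1);
        return score.addAndGet(weight) >= ELEVATED_THRESHOLD;
    }

    /** When true, the application can log more, restrict functionality,
        or require out-of-band authentication for this session. */
    public boolean isElevated() {
        return score.get() >= ELEVATED_THRESHOLD;
    }
}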
REVIEW FOR ACTIVE DEFENSE

Attack detection undertaken at the application layer has access to the complete context of an interaction and enhanced information about the user. If logic is applied within the code to detect suspicious activity (similar to an application level IPS), then the application will know what is a high-value issue and what is noise. Input data are already decrypted and canonicalized within the application, and therefore application-specific intrusion detection is less susceptible to advanced evasion techniques. This leads to a very low level of attack identification false positives, providing appropriate detection points are selected.

The fundamental requirements are the ability to perform four tasks:

1. Detection of a selection of suspicious and malicious events.

2. Use of this knowledge centrally to identify attacks.

3. Selection of a predefined response.

4. Execution of the response.

22.1 Description

Applications can undertake a range of responses that may include high risk functionality, such as changes to a user's account or other changes to the application's defensive posture. It can be difficult to detect active defense in dynamic analysis, since the responses may be invisible to the tester. Code review is the best method to determine the existence of this defense.

Other application functionality, like authentication failure counts and lock-out, or limits on the rate of file uploads, are 'localized' protection mechanisms. This sort of standalone logic is not an active defense equivalent in the context of this review, unless it is rigged together into an application-wide sensory network and centralized analytical engine.

Active defense is not a bolt-on tool or code library, but instead offers insight into an approach for organizations to specify or develop their own implementations – specific to their own business, applications, environments, and risk profile – building upon existing standard security controls.

What to Review

In the case where a code review is being used to detect the presence of a defense, its absence should be noted as a weakness. Note that active defense cannot defend an application that has known vulnerabilities, and therefore the other parts of this guide are extremely important. The code reviewer should note the absence of active defense as a vulnerability.

The purpose of code review is not necessarily to determine the efficacy of the active defense, but could simply be to determine if such capability exists.

Detection points can be integrated into the presentation, business and data layers of the application. Application-specific intrusion detection does not need to identify all invalid usage to be able to determine an attack. There is no need for "infinite data" or "big data", and therefore the location of "detection points" may be very sparse within source code.

A useful approach for identifying such code is to find the name of a dedicated module for detecting suspicious activity (such as OWASP AppSensor). Additionally, a company can implement a policy of tagging active defense detection points based on Mitre's Common Attack Pattern Enumeration and Classification (CAPEC, http://capec.mitre.org/), using strings such as CAPEC-212, CAPEC-213, etc.

The OWASP AppSensor detection point type identifiers and CAPEC codes will often have been used in configuration values (e.g. in ESAPI for Java: https://code.google.com/p/appsensor/source/browse/trunk/AppSensor/src/test/resources/.esapi/ESAPI.properties?r=53), parameter names and security event classification. Also, examine error logging and security event logging mechanisms, as these may be being used to collect data that can then be used for attack detection. Identify the code or services called that perform this logging and examine the event properties recorded/sent. Then identify all places where these are called from.

An examination of error handling code relating to input and output validation is very likely to reveal the presence of detection points. For example, in a whitelist type of detection point, additional code may have been added adjacent to, or within, the error handling code flow. In some situations attack detection points are looking for blacklisted input, and the test may not exist otherwise, so brand new code is needed. Identification of detection points should also have found the locations where events are recorded (the "event store"). If detection points cannot be found, continue to review the code for execution of responses, as this may provide insight into the existence of active defense.

The event store has to be analysed in real time or very frequently, in order to identify attacks based on predefined criteria. The criteria should be defined in configuration settings (e.g. in configuration files, or read from another source such as a database). A process will examine the event store to determine if an attack is in progress; typically this will be attempting to identify an authenticated user, but it may also consider a single IP address, a range of IP addresses, or groups of users such as one or more roles, users with a particular privilege, or even all users.

Once an attack has been identified, the response will be selected based on predefined criteria. Again, an examination of configuration data should reveal the thresholds related to each detection point, groups of detection points, or overall thresholds.

The most common response actions are user warning messages, log out, account lockout and administrator notification. However, as this approach is connected into the application, the possibilities of response actions are limited only by the coded capabilities of the application.

Search code for any global includes that poll the attack identification/response mechanisms identified above. Response actions (again against a user, IP address, group of users, etc.) will usually be initiated by the application, but in some cases other applications (e.g. altering a fraud setting) or infrastructure components (e.g. blocking an IP address range) may also be involved.

Examine configuration files and any external communication the application performs.
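For illustration, a whitelist detection point adjacent to validation code might look like the following sketch; the logger-based event store and the CAPEC tag in the message are stand-ins for a real module such as OWASP AppSensor, not its API:

import java.util.Set;
import java.util.logging.Logger;

public class MonthParameterValidator {

    private static final Logger SECURITY_EVENTS = Logger.getLogger("security.events");

    private static final Set<String> MONTHS = Set.of(
            "January", "February", "March", "April", "May", "June",
            "July", "August", "September", "October", "November", "December");

    public boolean isValid(String userId, String submittedValue) {
        if (MONTHS.contains(submittedValue)) {
            return true;
        }
        // Detection point: this value cannot be produced by the drop-down UI,
        // so record a security event for centralized, near-real-time analysis.
        SECURITY_EVENTS.warning("CAPEC-212 unexpected month value for user " + userId);
        return false;
    }
}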
RACE CONDITIONS

A race condition occurs when two threads read and update shared data without synchronization. For example, consider a shared value X initialized to 5, with two threads T1 and T2 each reading X and adding 10 to it. The value should actually be 25, as each thread added 10 to the initial value of 5, but the actual value is 15, due to T2 not letting T1 save into X before it takes a value of X for its own addition.

This leads to undefined behavior, where the application is in an unsure state and therefore security cannot be accurately enforced.
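A minimal Java sketch that reproduces the lost update just described (the sleep call is only there to make the unfortunate interleaving easy to observe):

public class RaceConditionDemo {

    private static int x = 5;                 // shared state, no synchronization

    public static void main(String[] args) throws InterruptedException {
        Runnable addTen = () -> {
            int local = x;                    // read the shared value
            try { Thread.sleep(10); } catch (InterruptedException ignored) { }
            x = local + 10;                   // write based on a stale read
        };
        Thread t1 = new Thread(addTen);
        Thread t2 = new Thread(addTen);
        t1.start(); t2.start();
        t1.join();  t2.join();
        // Expected 25, but both threads typically read 5, so this usually prints 15.
        System.out.println("x = " + x);
    }
}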
What to Review

• In C#.NET look for code which uses multithreaded environments:
o Thread
o System.Threading
o ThreadPool
o System.Threading.Interlocked
o wait(), notify(), notifyAll() (the Java equivalents to look for)

• For classic ASP, multithreading is not a directly supported feature, so this kind of race condition could be present only when using COM objects.

• Static methods and variables (one per class, not one per object) are an issue, particularly if there is shared state among multiple threads. For example, in Apache Struts, static members should not be used to store information relating to a particular request. The same instance of a class can be used by multiple threads, and the value of the static member cannot be guaranteed.

• Instances of classes do not need to be thread safe, as one is made per operation/request. Static state must be thread safe:
o References to static variables must be thread locked.
o Releasing a lock in places other than finally{} may cause issues.
o Static methods that alter static state.

References

• http://msdn2.microsoft.com/en-us/library/f857xew0(vs.71).aspx

BUFFER OVERRUNS

A buffer is an amount of contiguous memory set aside for storing information, for example when a program has to remember certain things such as what your shopping cart contains or what data was inputted prior to the current operation. This information is stored in memory in a buffer. Languages like C, C++ (which many operating systems are written in) and Objective-C are extremely efficient; however, they allow code to access process memory directly (through memory allocation and pointers) and intermingle data and control information (e.g. in the process stack). If a programmer makes a mistake with a buffer and allows user input to run past the allocated memory, the user input can overwrite program control information and allow the user to modify the execution of the code.

Note that Java, C#.NET, Python and Ruby are not vulnerable to buffer overflows, due to the way they store their strings in char arrays whose bounds are automatically checked by the frameworks, and the fact that they do not allow the programmer direct access to memory (the virtual machine layer handles memory instead). Therefore this section does not apply to those languages. Note, however, that native code called within those languages (e.g. assembly, C, C++) through interfaces such as JNI, or 'unsafe' C# sections, can be susceptible to buffer overflows.
24.1 Description
To allocate a buffer the code declares a variable of a particular size, for example:
• char myBuffer[100]; // large enough to hold 100 char variables
• Widget myWidgetArray[23]; // an array of 23 Widget objects, valid indices 0 to 22
As there is no automatic bounds checking, code can attempt to add a Widget at array location 23 (which does not exist). When the code does this, the compiler will calculate where the element at index 23 should be placed in memory (by multiplying 23 x sizeof(Widget) and adding this to the location of the 'myWidgetArray' pointer). Any other object, or program control variable/register, that exists at this location will be overwritten.
Arrays, vectors, etc. are indexed starting from 0, meaning the first element in the container is at ‘myBuffer[0]’,
this means the last element in the container is not at array index 100, but at array index 99. This can often lead
to mistakes and the ‘off by one’ error, when loops or programming logic assume objects can be written to the
last index without corrupting memory.
In C, and before the C++ STL became popular, strings were held as arrays of characters:
• char nameString[10];
This means that the ‘nameString’ array of characters is vulnerable to array indexing problems described above,
and when many of the string manipulation functions (such as strcpy, strcat, described later) are used, the
possibility of writing beyond the 10th element allows a buffer overrun and thus memory corruption.
As an example, a program might want to keep track of the days of the week. The programmer tells the
computer to store a space for 7 numbers. This is an example of a buffer. But what happens if an attempt to add
8 numbers is performed? Languages such as C and C++ do not perform bounds checking, and therefore if the program is written in such a language, the 8th piece of data would overwrite the program space of the next program in memory and would result in data corruption. This can cause the program to crash at a minimum, or a carefully crafted overflow can cause malicious code to be executed, as the overflow payload is actual code.

What to Review: Buffer Overruns

Sample 24.1

void copyData(char *userId) {
    char smallBuffer[10]; // size of 10
    strcpy(smallBuffer, userId);
}

int main(int argc, char *argv[]) {
    char *userId = "01234567890"; // payload of 12 bytes when you include the '\0' terminator
                                  // automatically added to the "01234567890" literal
    copyData(userId);             // this shall cause a buffer overflow
}

C library functions such as strcpy(), strcat(), sprintf() and vsprintf() operate on null terminated strings and perform no bounds checking. gets() is another function that reads input (into a buffer) from stdin until a terminating newline or EOF (End of File) is found. The scanf() family of functions may also result in buffer overflows.

Using the strncpy(), strncat() and snprintf() functions allows a third 'length' parameter to be passed, which determines the maximum length of data that will be copied into the destination buffer. If this is correctly set to the size of the buffer being written to, it will prevent the target buffer being overflowed. Also note fgets() is a replacement for gets(). Always check the bounds of an array before writing to a buffer. The Microsoft C runtime also provides additional versions of many functions with an '_s' suffix (strcpy_s, strcat_s, sprintf_s). These functions perform additional checks for error conditions and call an error handler on failure.

Code that performs the copy with strncpy(), passing the length of the destination character array (10) as the third argument, is not vulnerable to this buffer overflow.

Modern day C++ (C++11) programs have access to many STL objects and templates that help prevent security vulnerabilities. The std::string object does not require the calling code to have any access to underlying pointers, and automatically grows the underlying string representation (a character buffer on the heap) to accommodate the operations being performed. Therefore code is unable to cause a buffer overflow on a std::string object.

Regarding pointers (which can be used in other ways to cause overflows), C++11 has smart pointers, which again take away any necessity for the calling code to use the underlying pointer; these types of pointers are automatically allocated and destroyed when the variable goes out of scope. This helps to prevent memory leaks and double-delete errors. Also, STL containers such as std::vector, std::list, etc. all allocate their memory dynamically, meaning normal usage will not result in buffer overflows. Note that it is still possible to access these containers' underlying raw pointers, or to reinterpret_cast the objects, so buffer overflows are possible; however, they are more difficult to cause.

Compilers also help with memory issues: modern compilers provide 'stack canaries', which are subtle elements placed in the compiled code that check for out-of-bounds memory accesses. These can be enabled when compiling the code, or they could be enabled automatically. There are many examples of these stack canaries, and for some systems many choices of stack canary, depending on an organization's appetite for security versus performance. Apple also has stack canaries for iOS code, as Objective-C is also susceptible to buffer overflows.

In general, there are obvious examples of code where a manual code reviewer can spot the potential for overflows and off-by-one errors; however other memory issues can be harder to spot. Therefore manual code review should be backed up by memory checking programs available on the market.

What to Review: Format Function Overruns

A format function is a function within the ANSI C specification that can be used to tailor primitive C data types to human readable form. They are used in nearly all C programs to output information, print error messages, or process strings.

Table 22: Format Function Overruns

Format String | Relevant Input
%x | Hexadecimal values (unsigned int)
%s | Strings ((const) (unsigned) char*)
%n | Pointer to an int; the number of bytes written so far is stored at that address
Through supplying the format string to the format function we are able to control its behaviour. So supplying input as a format string makes our application do things it is not meant to. What exactly are we able to make the application do?

If we supply %x (hex unsigned int) as the input, the printf function shall expect to find an integer relating to that format string, but no argument exists. This cannot be detected at compile time. At runtime this issue shall surface.

For every % in the argument that the printf function finds, it assumes that there is an associated value on the stack. In this way the function walks the stack downwards, reading the corresponding values from the stack and printing them to the user.

Using format strings we can execute some invalid pointer access by using a format string such as:
• printf("%s%s%s%s%s%s%s%s%s%s%s%s");

Worse again is using the '%n' directive in printf(). This directive takes an 'int*' and 'writes' the number of bytes printed so far to that location.

Where to look for this potential vulnerability: this issue is prevalent with the printf() family of functions – printf(), fprintf(), sprintf(), snprintf() – and also syslog() (which writes system log information) and setproctitle(const char *fmt, ...) (which sets the string used to display process identifier information).

What to Review: Integer Overflows

Data representation for integers will have a finite amount of space; for example a short in many languages is a 16-bit two's complement number, which means it can hold a maximum value of 32,767 and a minimum value of -32,768. Two's complement means that the very first bit (of the 16) is a representation of whether the number is positive or negative: if the first bit is '1', then it is a negative number.

The representation of some boundary numbers is given in Table 23.

Table 23: Integer Overflows

Number | Representation
32,766 | 0111111111111110
32,767 | 0111111111111111
-32,768 | 1000000000000000
-1 | 1111111111111111

If you add 1 to 32,766, it adds 1 to the representation, giving the representation for 32,767 shown above. However if you add one more again, it sets the first bit (a.k.a. the most significant bit), which is then interpreted by the system as -32,768.

If you have a loop (or other logic) which is adding or counting values in a short, then the application could experience this overflow. Note also that subtracting values below -32,768 means the number will wrap around to a high positive, which is called underflow.
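The same wrap-around can be demonstrated in a few lines of Java, since a Java short is also a 16-bit two's complement value:

public class ShortOverflowDemo {
    public static void main(String[] args) {
        short counter = 32767;        // Short.MAX_VALUE
        counter++;                    // wraps around the 16-bit representation
        System.out.println(counter);  // prints -32768
    }
}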
Sample 24.3

#include <stdio.h>

int main(void){
    int val;
    val = 0x7fffffff; /* 2147483647 */
    printf("val = %d (0x%x)\n", val, val);
    printf("val + 1 = %d (0x%x)\n", val + 1, val + 1); /* Overflow the int */
    return 0;
}

The binary representation of 0x7fffffff is 1111111111111111111111111111111 (31 ones); this integer is initialized with the highest positive value a signed 32-bit integer can hold.

Here when we add 1 to the hex value of 0x7fffffff, the value of the integer overflows and goes to a negative number (0x7fffffff + 1 = 0x80000000), which in decimal is -2147483648. Think of the problems this may cause. Compilers will not detect this and the application will not notice this issue.

We get these issues when we use signed integers in comparisons or in arithmetic, and also when comparing signed integers with unsigned integers.

Sample 24.4

int myArray[100];

int fillArray(int v1, int v2){
    if(v2 > sizeof(myArray) / sizeof(int) - 1 ){
        return -1; /* Too Big */
    }
    myArray[v2] = v1;
    return 0;
}

Here, if v2 is a massive negative number, the "if" condition shall pass, as it only checks whether v2 is bigger than the array size. Since negative values are not rejected, the line "myArray[v2] = v1" can assign the value v1 to a location out of the bounds of the array, causing unexpected results.

References

• See the OWASP article on Buffer Overflow Attacks.

• See the OWASP article on Buffer Overflow Vulnerabilities.
• See the OWASP Development Guide article on how to Avoid Buffer Overflow Vulnerabilities.

• See the OWASP Testing Guide article on how to Test for Buffer Overflow Vulnerabilities.

• See Security Enhancements in the CRT: http://msdn2.microsoft.com/en-us/library/8ef0s5kh(VS.80).aspx

CLIENT SIDE JAVASCRIPT

JavaScript has several known security vulnerabilities. With HTML5 and JavaScript becoming more prevalent in web sites today, and with more web sites moving to responsive web design with its dependence on JavaScript, the code reviewer needs to understand what vulnerabilities to look for. JavaScript is fast becoming a significant point of entry for hackers into web applications; for that reason we have included it in the A1 Injection subsection.

The most significant vulnerabilities in JavaScript are cross-site scripting (XSS) and Document Object Model (DOM)-based XSS.

Detection of DOM-based XSS can be challenging, for the following reasons:

• JavaScript is often obfuscated to protect intellectual property.

• JavaScript is often compressed out of concern for bandwidth.

In both of these cases it is strongly recommended that the code reviewer be able to review the JavaScript before it has been obfuscated and/or compressed. This is a huge point of contention with QA software professionals, because you are reviewing code that is not in its production state.

Another aspect that makes code review of JavaScript challenging is its reliance on large frameworks such as Microsoft .NET and JavaServer Faces, and the use of JavaScript frameworks such as jQuery, Knockout, Angular and Backbone. These frameworks aggravate the problem because the code can only be fully analyzed given the source code of the framework itself. These frameworks are usually several orders of magnitude larger than the code the code reviewer needs to review.

Because of time and money, most companies simply accept that these frameworks are secure, or that the risks are low and acceptable to the organization.

Because of these challenges we recommend a hybrid analysis for JavaScript: manual source-to-sink validation when necessary, static analysis combined with black-box testing, and taint testing.

First, use static analysis. The code reviewer and the organization need to understand that, because of event-driven behaviors, complex dependencies between the HTML DOM and JavaScript code, and asynchronous communication with the server side, static analysis will always fall short and may produce both false positives and false negatives.

Black-box traditional methods of detecting reflected or stored XSS need to be performed; however this approach will not work for DOM-based XSS vulnerabilities.

Taint analysis needs to be incorporated into the static analysis engine. Taint analysis attempts to identify variables that have been 'tainted' with user-controllable input and traces them to possible vulnerable functions, also known as 'sinks'. If the tainted variable gets passed to a sink without first being sanitized, it is flagged as a vulnerability. Second, the code reviewer needs to be certain the code was tested with JavaScript turned off, to make sure all client-side data validation is also validated on the server side.

Code examples of JavaScript vulnerabilities:

Sample 25.1

<html>
<script type="text/javascript">
var pos=document.URL.indexOf("name=")+5;
document.write( document.URL.substring(pos,document.URL.length));
</script>
</html>

Explanation: An attacker can send a link such as "http://hostname/welcome.html#name=<script>alert(1)</script>" to the victim, resulting in the victim's browser executing the injected client-side code.

Sample 25.2

var url = document.location.href;
var loginIdx = url.indexOf('login');
var loginSuffix = url.substring(loginIdx);
url = 'http://mySite/html/sso/' + loginSuffix;
document.location.href = url;

Line 5 may be a false positive and prove to be safe code, or it may be open to an open redirect attack; with taint analysis the static analysis should be able to correctly identify whether this vulnerability exists. If the static analysis relies only on a black-box component, this code will be flagged as vulnerable, requiring the code reviewer to do a complete source-to-sink review.

Additional examples and potential security risks:

Source: document.URL
Sink: document.write()
Result: document.write("<script>malicious code</script>");

An attacker may be able to control DOM elements including document.URL, document.location, document.referrer and window.location.

Source: document.location
Sink: window.location.href
Result: window.location.href = http://www.BadGuysSite; - client-side open redirect.
Source: document.URL
Storage: window.localStorage
Sink: elem.innerHTML
Result: elem.innerHTML = <value> - stored DOM-based cross-site scripting.

The above, if used, may raise security threats. JavaScript, when used to dynamically evaluate code, creates a potential security risk.
References:
• http://docstore.mik.ua/orelly/web/jscript/ch20_04.html
• https://www.owasp.org/index.php/Static_Code_Analysis
• http://www.cs.tau.ac.il/~omertrip/fse11/paper.pdf
• http://www.jshint.com
At work we are professionals, but we need to make sure that even as professionals, when we do code reviews, besides the technical aspects of the review we also consider the human side of code reviews. Here is a list of discussion points that code reviewers and peer developers need to take into consideration. This list is not comprehensive, but a suggested starting point for an enterprise to make sure code reviews are effective and not disruptive or a source of discord. If code reviews become a source of discord within an organization, the effectiveness of finding security and functional bugs will decline and developers will find a way around the process. Being a good code reviewer requires good social skills, and is a skill that requires practice, just like learning to code.

• You don't have to find fault in the code to do a code review. If you always find something to criticize, your comments will lose credibility.

• Do not rush a code review. Finding security and functionality bugs is important, but other developers or team members are waiting on you, so you need to temper 'do not rush' with the proper amount of urgency.

• When reviewing code you need to know what is expected. Are you reviewing for security, functionality, maintainability, and/or style? Does your organization have tools and documents on code style, or are you using your own coding style? Does your organization give tools to developers to mark unacceptable coding standards per the organization's own coding standards?

• Before beginning a code review, does your organization have a defined way to resolve any conflicts that may come up in the code review between the developer and the code reviewer?

• Does the code reviewer have a defined set of artifacts that need to be produced as the result of the code review?

• What is the process of the code review when code needs to be changed during the code review?

• Is the code reviewer knowledgeable about the domain of the code that is being reviewed? Ample evidence abounds that code reviews are most effective if the code reviewer is knowledgeable about the domain of the code, i.e. compliance regulations for industry and government, business functionality, risks, etc.

[Figure: Agile Software Development Lifecycle - four iteration phases, each cycling through Design, Code, Test and Deploy.]
Integrating security into the Agile SDLC process flow is difficult. The organization will need constant involvement from the security team and/or a dedication to security with well-trained coders on every team.

[Figure: SAST - Ad Hoc Static Analysis, Developer Initiated - numbered workflow covering development, check-in, a static analysis pass/fail decision, fixing code on failure, adjusting the rulebase for false positives, sign-off and recording metrics.]

Continuous Integration and Test Driven Development

The term 'Continuous Integration' originated with the Extreme Programming development process. Today it is one of the best practices of the Agile SDLC. CI requires developers to check code into source control management (SCM) several times a day. An automated build server and application verify each check-in. The advantage is that team members can quickly detect build problems early in the software development process.

The disadvantage of CI for the code reviewer is that, while code may build properly, software security vulnerabilities may still exist. Code review may be part of the check-in process, but the review may only make sure the code meets the minimum standards of the organization. Such a code review is not a secure code review with a risk assessment approach that determines what needs additional review time.

The second disadvantage for the code reviewer is that, because the organization is moving quickly with the Agile process, a design flaw may be introduced that allows a security vulnerability, and the vulnerability may get deployed.

A red flag for the code reviewer is:

1. No user stories talk about security vulnerabilities based on risk.

2. User stories do not openly describe sources and sinks.

3. No risk assessment for the application has been done.

[Figure: Sprint cycle - product backlog feeding a sprint backlog, with stories worked in 24-hour (maximum) iterations during a continuous integration period, finishing with sign-off.]
CODE REVIEW CHECKLIST

General: Are there backdoor/unexposed business logic classes?
Business Logic and Design: Are there unused configurations related to business logic?
Business Logic and Design: If request parameters are used to identify business logic methods, is there a proper mapping of user privileges to the methods/actions allowed to them?
Business Logic and Design: Check whether unexposed instance variables are present in form objects that get bound to user inputs. If present, check whether they have default values.
Business Logic and Design: Check whether unexposed instance variables present in form objects that get bound to user inputs get initialized before form binding.
Authorization: Is the placement of authentication and authorization checks correct?
Authorization: Is execution stopped/terminated for an invalid request, i.e. when an authentication/authorization check fails?
Authorization: Are the checks correctly implemented? Is there any backdoor parameter?
Authorization: Is the check applied to all the required files and folders within the web root directory?
Authorization: Is a password complexity check enforced on the password?
Cryptography: Is the password stored in an encrypted format?
Authorization: Is the password disclosed to the user / written to a file, logs, or the console?
Cryptography: Does the application use custom schemes for hashing and/or cryptography?
Cryptography: Are database credentials stored in an encrypted format?
Cryptography: Is the data sent on an encrypted channel? Does the application use HTTPClient for making external connections?
Business Logic and Design: Does the design support weak data stores like flat files?
Business Logic and Design: Does the centralized validation get applied to all requests and all the inputs?
Business Logic and Design: Does the centralized validation check block all the special characters?
Business Logic and Design: Are there any special kinds of requests skipped from validation?
Business Logic and Design: Does the design maintain an exclusion list of parameters or features that are excluded from validation?
Input Validation: Are all the untrusted inputs validated? Input data is constrained and validated for type, length, format, and range.
Authorization: Does the application support password expiration?
Cryptography: Are the cryptographic functions used by the application the most recent versions, patched, and is a process in place to keep them updated?
General: Are external libraries, tools and plugins used by the application the most recent versions, patched, and is a process in place to keep them updated?
General: Classes that contain security secrets (like passwords) are only accessible through protected APIs.
Cryptography: Keys are not held in code.
General: Plain text secrets are not stored in memory for extended periods of time.
General: Array bounds are checked.
User Management and Authentication: User and role based privileges are documented.
General: All sensitive information used by the application has been identified.
User Management and Authentication: Authorization checks are granular (page and directory level).
User Management and Authentication: Authorization works properly and cannot be circumvented by parameter manipulation.
Session Management: Session inactivity timeouts are enforced.
Data Management: Data is validated on the server side.
Data Management: HTTP headers are validated for each request.
Business Logic and Design: Are all of the entry points and trust boundaries identified by the design and included in the risk analysis report?
Data Management: Is all XML input data validated against an agreed schema?
Data Management: Does output that contains untrusted supplied input have the correct type of encoding applied (URL encoding, HTML encoding)?
Data Management: Has the correct encoding been applied to all data being output by the application?
Web Services: The web services documentation protocol is disabled if the application does not need dynamic generation of WSDL.
Web Services: Web service endpoint addresses in the Web Services Description Language (WSDL) are checked for validity.
Web Services: Web service protocols that are unnecessary are disabled (HTTP GET and HTTP POST).
Figure 13
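To make checklist items such as "input data is constrained and validated for type, length, format, and range" concrete, the sketch below shows one way such a server-side check might look in Java. The field name, pattern and limits are illustrative assumptions only, not values taken from the checklist.

import java.util.regex.Pattern;

// Illustrative server-side validation of an untrusted "quantity" field:
// constrain format and length, then type, then range, before the value is used.
public final class OrderInputValidation {

    private static final Pattern DIGITS_ONLY = Pattern.compile("^[0-9]{1,4}$"); // format + length

    public static int parseQuantity(String raw) {
        if (raw == null || !DIGITS_ONLY.matcher(raw).matches()) {
            throw new IllegalArgumentException("quantity must be 1-4 digits");
        }
        int value = Integer.parseInt(raw);   // type
        if (value < 1 || value > 1000) {     // range
            throw new IllegalArgumentException("quantity out of range");
        }
        return value;
    }

    private OrderInputValidation() { }
}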
THREAT MODELING EXAMPLE

General Information
The first item in the threat model is the general information relating to the threat model. This must include the following:

1. Application Name: The name of the application

Figure 14
INFORMATION
OWNER - DAVID LOWRY
THREAT MODEL PARTICIPANTS - DAVID ROOK
REVIEWER - EOIN KEARY
V2.0

Description
The college library website is the first implementation of a website to provide librarians and library patrons (students and college staff) with online services. As this is the first implementation of the website, the functionality will be limited. There will be three users of the application:

1. Students
2. Staff
3. Librarians

Entry Points
Entry points should be documented as follows:

1. ID
A unique ID assigned to the entry point. This will be used to cross reference the entry point with any threats or vulnerabilities that are identified. In the case of layered entry points, a major.minor notation should be used.

2. Name
A descriptive name identifying the entry point and its purpose.

Figure 16

Assets
Assets are documented in the threat model as follows:

1. ID
A unique ID is assigned to identify each asset. This will be used to cross reference the asset with any threats or vulnerabilities that are identified.

2. Name
A descriptive name that clearly identifies the asset.

3. Description
A textual description of what the asset is and why it needs to be protected.

4. Trust Levels
The level of access required to access the entry point is documented here. These will be cross-referenced with the trust levels defined in the next step.
Figure 17

Trust Levels
Trust levels are documented in the threat model as follows:

1. ID
A unique number is assigned to each trust level. This is used to cross reference the trust level with the entry points and assets.

2. Name
A descriptive name that allows identification of the external entities that have been granted this trust level.

3. Description
A textual description of the trust level detailing the external entity who has been granted the trust level.
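The documentation items above (ID, name, description and trust levels) can also be captured in a small machine-readable structure so that entry points, assets and trust levels stay cross-referenced as the model grows. The sketch below is one possible shape, assuming Java records; all type names and example values are illustrative assumptions, not part of the worked example.

import java.util.List;

// Minimal records mirroring the threat model documentation items.
record TrustLevel(int id, String name, String description) { }

record EntryPoint(String id, String name, String description, List<Integer> trustLevelIds) { }

record Asset(int id, String name, String description, List<Integer> trustLevelIds) { }

class ThreatModelSketch {
    public static void main(String[] args) {
        // Illustrative values only; the "1.2" ID shows the major.minor notation for layered entry points.
        TrustLevel anonymous = new TrustLevel(1, "Anonymous Web User",
                "A user who has connected to the website but has not provided valid credentials.");
        EntryPoint loginPage = new EntryPoint("1.2", "Login Page",
                "The page used by students, staff and librarians to log in.", List.of(anonymous.id()));
        System.out.println(loginPage.name() + " is reachable by trust level: " + anonymous.name());
    }
}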
By using the understanding gained of the college library website architecture and design, the data flow diagram can be created as shown in figure 18.

Figure 18
[Data flow diagram of the college library website: users and librarians interact with the web server (login and web pages) across the user/web server boundary; the web server issues SQL queries to the college library database and reads web pages from data files on disk across the web server/database boundary.]

Specifically, the user login data flow diagram will appear as in figure 19.

Figure 19
Example application threat model of the user login
[Data flow diagram of the user login: users send a login request across the user/web server boundary to the web server login process, which calls Authenticate User(); the authenticate user SQL query and its result cross the web server/database boundary to the college library database, and the login response is returned to the user.]

Stride
A threat list of generic threats organized in these categories with examples and the affected security controls is provided in the following table:
By referring to the college library website it is possible to document sample threats related to the use cases such
as:
Threat: Malicious user views confidential information of students, faculty members and librarians.
1. Damage potential
Threat to reputation as well as financial and legal liability: 8

2. Reproducibility
Fully reproducible: 10

3. Exploitability
Requires being on the same subnet or having compromised a router: 7

4. Affected users
Affects all users: 10

5. Discoverability
Can be found out easily: 10

Overall DREAD score: (8 + 10 + 7 + 10 + 10) / 5 = 9

In this case, a score of 9 on a 10-point scale is certainly a high-risk threat.
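The overall score above is simply the arithmetic mean of the five DREAD ratings; a small sketch of the calculation, using the ratings from this example, is shown below.

// DREAD score = average of the five ratings (0-10 each), as in the example above.
public final class DreadScore {

    public static double score(int damage, int reproducibility, int exploitability,
                               int affectedUsers, int discoverability) {
        return (damage + reproducibility + exploitability + affectedUsers + discoverability) / 5.0;
    }

    public static void main(String[] args) {
        // The "malicious user views confidential information" threat from the example:
        System.out.println(DreadScore.score(8, 10, 7, 10, 10)); // prints 9.0
    }
}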
Figure 20
Provided below is a brief checklist, which is by no means exhaustive, for identifying countermeasures for specific threats.
CODE CRAWLING

This appendix gives practical examples of how to carry out code crawling in the following programming languages:

• .Net
• Java
• ASP
• C++/Apache

Searching for Code in .NET

Firstly one needs to be familiar with the tools one can use in order to perform text searching; following this, one needs to know what to look for.

SQL & Database
Locating where a database may be involved in the code is an important aspect of the code review. Looking at the database code will help determine if the application is vulnerable to SQL injection. One aspect of this is to verify that the code uses either SqlParameter, OleDbParameter, or OdbcParameter (System.Data.SqlClient). These are typed and treat parameters as the literal value and not executable code in the database.

STRING TO SEARCH
exec sp_ select from insert update delete from where delete execute sp_ exec xp_

Input Controls
The input controls below are server classes used to produce and display web application form fields. Looking for such references helps locate entry points into the application.

STRING TO SEARCH
webcontrols.dropdownlist

Logging
Logging can be a source of information leakage. It is important to examine all calls to the logging subsystem and to determine if any sensitive information is being logged. Common mistakes are logging the userID in conjunction with passwords within the authentication functionality, or logging database requests which may contain sensitive data.

machine.config
It is important to note that many variables in machine.config can be overridden in the web.config file for a particular application.

WEB.config
The .NET Framework relies on .config files to define configuration settings. The .config files are text-based XML files. Many .config files can, and typically do, exist on a single system. Web applications refer to a web.config file located in the application's root directory. For ASP.NET applications, web.config contains information about most aspects of the application's operation.

global.asax
Each application has its own global.asax file if one is required. Global.asax sets the event code and values for an application using scripts. One must ensure that application variables do not contain sensitive information, as they are accessible to the whole application and to all users within it.

Class Design
Public and Sealed relate to the design at class level. Classes that are not intended to be derived from should be sealed. Make sure all class fields are Public for a reason. Don't expose anything that is not necessary.

STRING TO SEARCH
ControlDomainPolicy ControlPolicy

Exceptions & Errors
Ensure that the catch blocks do not leak information to the user in the case of an exception. Ensure when dealing with resources that the finally block is used. Having trace enabled is not great from an information leakage perspective. Ensure customized errors are properly implemented.
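As a language-neutral illustration of the pattern the reviewer should look for (sketched here in Java rather than .NET), the detail of an exception should be logged server side, the user should only ever see a generic error, and resources should be released in a finally block or try-with-resources. The class and method names below are assumptions for the sketch.

import java.io.IOException;
import java.io.InputStream;
import java.util.logging.Level;
import java.util.logging.Logger;

// Generic illustration: keep the exception detail in the server log, return a
// neutral result to the caller, and always release the resource.
class SafeErrorHandling {
    private static final Logger LOG = Logger.getLogger(SafeErrorHandling.class.getName());

    String readConfig(InputStream in) {
        try {
            return new String(in.readAllBytes());
        } catch (IOException e) {
            LOG.log(Level.SEVERE, "failed to read configuration", e); // detail stays server side
            return null;                                              // caller shows a generic error page
        } finally {
            try { in.close(); } catch (IOException ignored) { }
        }
    }
}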
Legacy Methods
Some standard functions that should be checked in any context include the following.

STRING TO SEARCH
printf strcpy

Cryptography
If cryptography is used, then is a strong enough cipher used, i.e. AES or 3DES? What size key is used? The larger the better. Where is hashing performed? Are passwords that are being persisted hashed? They should be. How are random numbers generated? Is the PRNG "random enough"?

STRING TO SEARCH
SecureString ProtectedMemory
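For the hashing question above, the sketch below shows the kind of construct a reviewer would hope to find for persisted passwords: a salted, iterated key-derivation function rather than a plain digest, and a cryptographically secure random number generator for the salt. It is written in Java purely for illustration, and the iteration count and key length are example values, not recommendations made by this guide.

import java.security.SecureRandom;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

// Illustrative password hashing: PBKDF2 with a per-user random salt.
final class PasswordHashing {

    static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);   // CSPRNG, not java.util.Random
        return salt;
    }

    static byte[] hash(char[] password, byte[] salt) throws Exception {
        // Example parameters only: 100,000 iterations, 256-bit derived key.
        PBEKeySpec spec = new PBEKeySpec(password, salt, 100_000, 256);
        return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                               .generateSecret(spec)
                               .getEncoded();
    }
}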
Searching for Code in Java
Servlets
These API calls may be avenues for parameter/header/URL/cookie tampering, HTTP response splitting and information leakage. They should be examined closely as many of such APIs obtain the parameters directly from HTTP requests.

Redirection
Any time an application is sending a redirect response, ensure that the logic involved cannot be manipulated by an attacker's input, especially when input is used to determine where the redirect goes to.
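One defensive pattern the reviewer can look for is mapping the user-supplied value onto a fixed set of allowed destinations instead of redirecting to the raw parameter. A minimal sketch follows; the parameter name and the allow-list entries are illustrative assumptions.

import java.io.IOException;
import java.util.Map;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Illustrative allow-list: the "target" parameter selects a known page; anything
// unrecognised (or missing) falls back to a safe default instead of being echoed
// into the Location header.
final class RedirectHelper {

    private static final Map<String, String> ALLOWED = Map.of(
            "home", "/index.jsp",
            "account", "/account/summary.jsp");

    static void redirect(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String target = req.getParameter("target");
        String destination = (target == null) ? null : ALLOWED.get(target);
        resp.sendRedirect(destination != null ? destination : "/index.jsp");
    }
}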
Cross Site Scripting
These API calls should be checked in code review as they could be a source of Cross Site Scripting vulnerabilities.

STRING TO SEARCH
javax.servlet.ServletOutputStream.print
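Where these calls emit user-controlled data, the reviewer should expect contextual output encoding before the write. The sketch below illustrates the idea with a simplified, hand-rolled HTML escaper; a real application should prefer a well-reviewed encoding library, and the servlet and parameter names here are assumptions made for the example.

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Simplified illustration: HTML-encode untrusted input before it is written to the response.
public class GreetingServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String name = req.getParameter("name");            // untrusted input
        resp.setContentType("text/html;charset=UTF-8");
        resp.getWriter().print("<p>Hello " + htmlEncode(name) + "</p>");
    }

    // Minimal escaping for the HTML body context only.
    private static String htmlEncode(String s) {
        if (s == null) return "";
        return s.replace("&", "&amp;").replace("<", "&lt;")
                .replace(">", "&gt;").replace("\"", "&quot;").replace("'", "&#x27;");
    }
}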
SSL
Look for code which utilises SSL as a medium for point to point encryption. The following fragments should indicate where SSL functionality has been developed.

STRING TO SEARCH
com.sun.net.ssl SSLContext SSLSocketFactory TrustManagerFactory HttpsURLConnection KeyManagerFactory
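When these fragments are found, a specific red flag is a custom TrustManager or HostnameVerifier that disables certificate or hostname checks. The sketch below shows the unexceptional case the reviewer should expect: the platform default SSLContext, which validates the server certificate. The URL passed in is assumed to be an https URL.

import java.io.InputStream;
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;

// The platform defaults validate the server certificate and hostname.
// A review red flag is code that installs a trust-all TrustManager or a
// HostnameVerifier that always returns true, silently disabling those checks.
final class TlsClient {

    static byte[] fetch(String httpsUrl) throws Exception {
        SSLContext context = SSLContext.getDefault();                 // default trust store
        HttpsURLConnection conn = (HttpsURLConnection) new URL(httpsUrl).openConnection();
        conn.setSSLSocketFactory(context.getSocketFactory());
        try (InputStream in = conn.getInputStream()) {
            return in.readAllBytes();
        }
    }
}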
Response Splitting
Response splitting allows an attacker to take control of the response body by adding extra CRLFs into headers. In HTTP, the headers and body are separated by two CRLF sequences, and thus if an attacker's input is used in a response header and that input contains two CRLFs, then anything after the CRLFs will be interpreted as the response body. In code review, ensure functionality is sanitizing any information being put into headers.
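A simple control the reviewer can look for is code that rejects (or strips) CR and LF characters before any untrusted value reaches a response header, for example:

// Illustrative check before placing untrusted input into a response header:
// reject CR/LF so the value cannot terminate the header and start a new one.
final class HeaderSanitizer {

    static String sanitizeHeaderValue(String value) {
        if (value == null) return "";
        if (value.indexOf('\r') >= 0 || value.indexOf('\n') >= 0) {
            throw new IllegalArgumentException("CR/LF not allowed in header values");
        }
        return value;
    }
}

A call site would then look like response.setHeader("X-Page", HeaderSanitizer.sanitizeHeaderValue(userInput)); the header name here is purely illustrative.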
Legacy Methods

STRING TO SEARCH
java.lang.Runtime.exec java.lang.Runtime.getRuntime

Session Management
The following APIs should be checked in code review when they control session management.

STRING TO SEARCH
getId
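Around these calls, the reviewer can check that the session identifier is rotated at authentication, that an inactivity timeout is set, and that the session is invalidated at logout. A sketch using the Servlet 3.1+ API (method and attribute names are assumptions for the example):

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

// Illustrative session handling around login and logout.
final class SessionLifecycle {

    static void onLoginSuccess(HttpServletRequest request, String userId) {
        request.changeSessionId();                       // defeats session fixation (Servlet 3.1+)
        HttpSession session = request.getSession(true);
        session.setAttribute("userId", userId);
        session.setMaxInactiveInterval(15 * 60);         // enforce an inactivity timeout (seconds)
    }

    static void onLogout(HttpServletRequest request) {
        HttpSession session = request.getSession(false);
        if (session != null) {
            session.invalidate();                        // destroy server-side session state
        }
    }
}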
Database
These APIs can be used to interact with a database, which can lead to SQL injection attacks. Code review can check that these API calls use sanitized input.

STRING TO SEARCH
JDLabAgent
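In Java the sanitised-input check usually comes down to confirming that values reach the database as bind parameters of a PreparedStatement rather than being concatenated into the SQL string. A minimal sketch follows; the table and column names are illustrative.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Illustrative parameterised query: user input is bound as a value,
// never concatenated into the SQL text.
final class BookLookup {

    static boolean titleExists(Connection conn, String title) throws SQLException {
        String sql = "SELECT 1 FROM books WHERE title = ?";
        try (PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setString(1, title);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next();
            }
        }
    }
}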
Ajax and JavaScript
Look for Ajax usage, and possible JavaScript issues:

STRING TO SEARCH
document.URL
Searching for Code in ASP

DOS Prevention & Logging
The following ASP APIs can help prevent DOS attacks against the application. Leaking information to a log can be of use to an attacker, hence the following API calls can be checked in code review to ensure no sensitive information is being written to logs.

STRING TO SEARCH
server.ScriptTimeout IsClientConnected WriteEntry

Error Handling

STRING TO SEARCH
err. Server.GetLastError On Error Resume Next On Error GoTo 0

Searching for Code in C++ and Apache

Commonly when a C++ developer is building a web service they will build a CGI program to be invoked by a web server (though this is not efficient), or they will use the Apache httpd framework and write a handler or filter to process HTTP requests/responses. To aid these developers, this section deals with generic C/C++ functions used when processing HTTP input and output, along with some of the common Apache APIs that are used in handlers.

Legacy C/C++ Methods
For any C/C++ code interacting with web requests, code that handles strings and outputs should be checked to ensure the logic does not have any flaws.

Cookie Processing
Cookies can be obtained from the list of request headers, or from specialized Apache functions.

STRING TO SEARCH
ap_cookie_read ap_cookie_check_string

Logging
Log messages can be implemented using custom loggers included in the module (e.g. log4cxx), by using the Apache-provided logging API, or by simply writing to standard out or standard error.

HTML Encoding
When the team has got a handle on the HTML input or output in the C/C++ handler, the following methods can be used to ensure/check HTML encoding.

STRING TO SEARCH
ap_escape_path_segment