
Day 3 Session 2: Secure Software Implementation/Programming

Presented by
Engr. Md. Mushfiqur Rahman
CISA, CISSP, OSCP, LPT (Master), PMP, CCISO, CISM, CRISC, CGEIT, CDPSE, CEH, ECSA, CHFI,
CLPTP, ISO 27001 LA, CND, COBIT5, MCP, MCSA, MCSE, MCTS, MCITP, OCP, CCNA, CCNP,
SCSA, Prince2, CTIA, CSA, ITIL, MCT, PCT, ECI, APMGCIT
Common Software Vulnerabilities and Countermeasures
CWE/SANS TOP 25 VULNERABILITY CATEGORIES
Begun by MITRE and supported by the U.S. Department of Homeland Security, the CWE/SANS Top 25 list is the
result of collaboration between many top software security experts worldwide. This list represents the most
widespread and critical errors that can lead to serious vulnerabilities in software. They are often easy to find, and
easy to exploit. Left unmitigated, they are easy targets for attackers and can result in widespread damage to
software, data, and even enterprise security.
The Top 25 list can be used in many ways. It is useful as a tool for development teams to provide education and
awareness about the kinds of vulnerabilities that plague the software industry. The list can be used in software
procurement as a specification of elements that need to be mitigated in purchased software. Although the list has
not been updated since 2011, it is still highly relevant. One could argue over the relative position on the list, but at
the end of the day, all the common vulnerabilities that can be exploited need to be fixed.
The Top 25 list can serve many roles in the secure
development process. For programmers, the list can be used
as a checklist of reminders, or as a source for a custom “Top N”
list that incorporates internal historical data. The data can
also be used to create a master list of mitigations, which
when applied, will reduce occurrence and severity of the
vulnerabilities. Testers can use the list to build a test suite
that can be used to ensure that the issues identified are tested
for before shipping.
OWASP VULNERABILITY CATEGORIES
The Open Web Application Security Project (OWASP) is an open community dedicated to finding and fighting
the causes of insecure web application software. All of the OWASP tools, documents, forums, and chapters are
free and open to anyone interested in improving web application security, and are available at www.owasp.org.
OWASP has published several significant publications associated with building more secure web applications.
Their main treatise, “A Guide to Building Secure Web Applications and Web Services,” provides detailed
information on a wide range of vulnerabilities and how to avoid them. Another commonly used item from
OWASP is their Top 10 list of web application vulnerabilities.

OWASP Top 10 - 2013 (Current)


A1 – Injection
A2 – Broken Authentication and Session Management
A3 – Cross-Site Scripting (XSS)
A4 – Insecure Direct Object References
A5 – Security Misconfiguration
A6 – Sensitive Data Exposure
A7 – Missing Function-Level Access Control
A8 – Cross-Site Request Forgery (CSRF)
A9 – Using Components with Known Vulnerabilities
A10 – Unvalidated Redirects and Forwards
COMMON VULNERABILITIES AND COUNTERMEASURES
The list of Top 25 and the list of Top 10 web application vulnerabilities overlap. All of the Top 10 items are in the
Top 25. This is not unexpected, as web application programming is a subset of programming as a whole
discipline. To examine the best countermeasure strategy, it is easier to group the vulnerabilities into like causes
and apply countermeasures that address several specific issues at once.
Injection Attacks
Injection attacks are some of the most common and severe that are currently being seen in software. These
attacks include SQL Injection, OS Command Injection, Integer Overflow or Wraparound, Path Traversal, Cross-
Site Scripting (XSS), and Cross-Site Request Forgery (CSRF). Injection-type attacks can also be used against
LDAP, XML, and other common protocols.
Injection attacks can be difficult to decode on the fly, as in many cases, the inputs go through a series of parsers
that change the form of the input before use. In these cases, it is better to have previously approved lists of
options and let the user select the option based on a master list as opposed to being defined by input streams.
Using user input in any direct fashion can result in unintended behaviors when malicious users enter code
specifically designed to cause problems. Cleansing or correcting user input streams is difficult, if not impossible,
in some situations, and the prudent course is to never allow users to directly define elements of programmatic
behavior.
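The allow-list approach described above can be sketched in a few lines; the report names and paths here are invented for illustration:

```python
# Hypothetical sketch: the user selects a report by key from a pre-approved
# master list, so raw input never defines program behavior.
APPROVED_REPORTS = {
    "sales": "/reports/sales_summary.rpt",
    "inventory": "/reports/inventory_levels.rpt",
}

def resolve_report(user_choice):
    # Unknown keys are rejected outright; the input string itself is never
    # passed to any interpreter or file API.
    if user_choice not in APPROVED_REPORTS:
        raise ValueError("unrecognized report selection")
    return APPROVED_REPORTS[user_choice]
```

Because only the keys of the master list can ever be matched, a hostile input stream has no path into program logic.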
SQL Injections
Databases are one of the primary methods used to store data, especially large quantities of user data. Access to and
manipulation of the data is done using Structured Query Language (SQL) statements. The SQL injection attack is
performed by an attacker inputting a specific string to manipulate the SQL statement to do something other than
that intended by the programmer or designer. This is a form of improper input validation that results in
unintended behavior. The defense is easy, but it requires that the SQL statements be constructed in a manner that
protects them from manipulation as a result of user input.
The best method to avoid SQL injection is to design database SQL access in a manner that does not allow the SQL
statements to be manipulated by users. The safest method is to use stored procedures for all access, with user input
being in the form of variables to the stored procedure. The stored procedure can validate the input and ensure that
the SQL statements are not manipulated. Another method is to use parameterized queries.
The primary mitigation for SQL injection is developer awareness. SQL injection vulnerabilities can be designed out
of a project, and where exposure is unavoidable, input validation can greatly mitigate the issue. SQL injection can
be easily tested for and caught as part of a normal test cycle. Failure to catch inputs susceptible to SQL injection is
a testing failure. As with all known common attack vectors, SQL injection should be included in test plans.
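A minimal sketch of the difference between string-built SQL and a parameterized query, using Python's built-in sqlite3 module and an invented users table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name):
    # VULNERABLE: user input is concatenated into the SQL statement,
    # so a crafted value can rewrite the WHERE clause.
    return conn.execute(
        "SELECT secret FROM users WHERE name = '" + name + "'").fetchall()

def lookup_safe(name):
    # Parameterized query: the driver binds the value as data, so it
    # cannot alter the statement's structure.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
# lookup_unsafe(payload) returns every row; lookup_safe(payload) returns none.
```

The classic `' OR '1'='1` payload turns the unsafe query's predicate into a tautology, while the parameterized version simply looks for a user literally named that string.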
SQL Injection Attack Methodology
The SQL injection attack has several steps:
1. Test input values to see whether SQL is accessible and can be manipulated.
2. Experiment with SQL inputs, using error messages to enumerate the database and provide exploitation information.
3. Craft a SQL exploit input to achieve the exploitation goal.
Even if the SQL errors are suppressed, a structured form of attack referred to as blind SQL injection can use
Boolean-based SQL statements rather effectively.

Command Injections
A command injection attack is similar to the SQL injection attack, but rather than trying to influence a
dynamic SQL input, the target is a dynamic command-generation element. When a program needs to
perform a function that is normally handled by the operating system, it is common practice to use the
operating system to perform the action. To craft the specific operation, it is common to use some form of
user input in the command to be executed. Using user-supplied input that is not properly validated can
lead to serious consequences.
There are two common methods of using user-supplied input in command injection attacks. The first is
where the end-user input is used as an argument in the command to be executed. This can have several
interesting consequences, from actions on unintended files to additional commands that are appended
to the arguments. The second form of this attack is where the user input includes the command to be
executed. This can be even more risky, as unvalidated or improperly validated input strings can result in
disastrous consequences.
The primary mitigation for command injection vulnerability is developer awareness. Command injection
vulnerabilities can be designed out of a project, and where exposure is unavoidable, input validation can
greatly mitigate the issue. Command injection can be easily tested for and caught as part of a normal test
cycle. Failure to catch inputs susceptible to command injection is a testing failure. As with all known
common attack vectors, command injection should be included in test plans.
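One way to avoid dynamic command generation, sketched in Python: pass arguments as a list so no shell ever interprets the user's input. The hostile filename below is illustrative:

```python
import subprocess
import sys

def run_safe(filename):
    # Passing an argument list (and never shell=True) means shell
    # metacharacters in the input are treated as literal data, not as
    # command syntax. The child program here just echoes its argument.
    result = subprocess.run(
        [sys.executable, "-c", "import sys; print(sys.argv[1])", filename],
        capture_output=True, text=True, check=True)
    return result.stdout.strip()

hostile = "report.txt; rm -rf /"
# The hostile string comes back verbatim; the appended command never runs.
```

When a shell is genuinely unavoidable, the input should additionally be matched against an allow-list before use, as the text recommends.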
Integer Overflow
Computer programs store numbers in variables of a defined size. For integers, these can be 8, 16, 32, and 64 bits,
and in either signed or unsigned forms. This restricts the size of numbers that can be stored in the variable. When
a value is larger than allowed, a variety of errors can ensue. In some cases, the values simply wrap around; in
others, the value saturates at the maximum. The behavior can be processor and language dependent. In many cases,
including the C language, overflows can result in undefined behavior.
Integer overflows can occur in the course of arithmetic operations. Using a web application that dispenses licenses
to users, we can see how this can be manipulated. Once the user enters the application, there are three values: the
number of licenses, a place to enter the number desired, and the number of remaining licenses. Assuming the
program uses 32-bit signed variables and that user input checks verify that all the inputs are the correct size, how
can there be an overflow? Let N = number of licenses held, R = the number requested, and B = the balance after
R is dispensed. After verifying that R is a legitimate value that fits in 32 bits, the program performs the following: B = N –
R. The intent is to check whether B < 0, which would indicate that sufficient licenses did not exist, and to disallow
that transaction. But if the value of N – R does not fit in an int32, then the calculation will overflow, as the internal
operation is to calculate N – R, put the value in a register, and then move to the location of B. The calculation of N
– R is the problem.
Overflows can be resolved in a variety of language-specific methods. The use of the checked directive in C#, for
instance, turns on exception handling that allows for the trapping and management of overflows before the
problem is exposed. Integer overflows can be specifically tested for, using both boundary values and values that
will force internal errors as described earlier. These cases need to be designed and built into the test plan as part
of the regular test plan development.
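The license example above can be reproduced by truncating arithmetic to a signed 32-bit result, as a C program would compute it; the figures below are illustrative:

```python
def int32_sub(n, r):
    # Emulate 32-bit two's-complement subtraction: truncate to 32 bits,
    # then reinterpret as a signed value, as C int32 arithmetic would.
    wrapped = (n - r) & 0xFFFFFFFF
    return wrapped - 0x100000000 if wrapped >= 0x80000000 else wrapped

def can_dispense(n, r):
    # Safe check: compare instead of subtracting, so no overflow can occur.
    return r <= n

N = 5                 # licenses actually held
R = 4_000_000_000     # attacker's request: a legal 32-bit magnitude
B = int32_sub(N, R)
# The true result, -3,999,999,995, does not fit in an int32; the wrapped
# value is 294,967,301, so the naive "B < 0" check wrongly passes.
assert B > 0
```

Comparing the operands directly (or using a checked arithmetic facility, as with C#'s `checked` directive) removes the subtraction that overflows.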
Path Traversal
Known by several names, including dot-dot-slash, directory traversal, directory climbing, and backtracking
attacks, the path traversal attack attempts to access files and directories that are stored outside the web root
folder. By using “../” notation in the path to a file, it is possible to traverse across the directory structure to
access a specific file in a specific location. This file system navigation methodology takes advantage of the way
that the system is designed. To mask the “../” characters in the input stream, the characters can be encoded, e.g.,
%2e%2e%2f.
Virtually every web application needs local resources: image files, scripts, configurations, and so on. To prevent
a directory traversal attack, the key is to not use user input when accessing a local resource. Although it may
require additional coding, matching the user input to a specific resource and then using a hard-coded path and
resource to prevent the attack is the strongest defense.
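A sketch of the canonicalize-then-verify defense; the web root path is hypothetical, and a production server would typically also map user input to hard-coded resources as described above:

```python
import os.path

WEB_ROOT = "/var/www/app/static"   # hypothetical web root

def resolve(requested):
    # Join, canonicalize (resolving "../" and symlinks), and only then
    # confirm the result is still inside the web root.
    root = os.path.realpath(WEB_ROOT)
    candidate = os.path.realpath(os.path.join(WEB_ROOT, requested))
    if os.path.commonpath([candidate, root]) != root:
        raise PermissionError("path escapes web root")
    return candidate
```

Because the check runs on the canonical form, encodings and `../` sequences cannot smuggle the path outside the root.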
Cross-Site Scripting (XSS)
Cross-site scripting (XSS) is one of the most common web attack methodologies. The cause of the vulnerability is
weak user input validation. The attack works because a user includes a script in their input and this script is not
mitigated, but instead is rendered as part of the web process. There are several different types of XSS attacks,
which are distinguished by the effect of the script.
A nonpersistent XSS attack is one where the injected script is not persisted or stored, but rather is immediately
executed and passed back via the web server. A persistent XSS attack is one where the script is permanently
stored on the web server or some back-end storage. This allows the script to be used against others who log in to
the system. A document object model (DOM-based) XSS attack is one where the script is executed in the browser
via the DOM process as opposed to the web server.
Cross-site scripting attacks can result in a wide range of consequences; in some cases, the list can be anything
that a clever scripter can devise. Common uses that have been seen in the wild include:

•   Theft of authentication information from a web application
•   Session hijacking
•   Deploying hostile content
•   Changing user settings, including those of future users
•   Impersonating a user
•   Phishing or stealing sensitive information
Controls to defend against XSS attacks include the use of anti-XSS libraries to strip scripts from the input
sequences. There are a variety of other mitigating factors, including limiting types of uploads and screening size of
uploads, whitelisting inputs, etc., but attempting to remove scripts from inputs can be a tricky task. Well-designed
anti-XSS input library functions have proven to be the best defense.
Cross-site scripting vulnerabilities are easily tested for and should be a part of the test plan for every application.
Testing a variety of encoded and unencoded inputs for scripting vulnerability is an essential test element.
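Output encoding, the core of most anti-XSS libraries, can be illustrated with Python's standard html module; a real application would use a vetted templating or anti-XSS library rather than hand-rolled concatenation:

```python
import html

def render_comment(user_input):
    # Output-encode user data so any embedded markup is displayed as
    # inert text instead of being rendered (and executed) by the browser.
    return "<p>" + html.escape(user_input, quote=True) + "</p>"

payload = "<script>alert('xss')</script>"
# render_comment(payload) contains no executable <script> element; the
# angle brackets and quotes are converted to HTML entities.
```

Encoding on output complements, rather than replaces, input validation: the stored data stays intact, but it can never re-enter the page as live markup.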

Cross-Site Request Forgery (CSRF)


Cross-site request forgery attacks exploit behaviors that are legitimate in their intended use but are invoked
under circumstances outside the authorized use. This is an example of the confused deputy problem, a class of
problems where one entity mistakenly performs an action on behalf of another. A CSRF attack relies upon several
conditions to be effective. It is performed against sites that have an authenticated user and exploits the site’s trust
in a previous authentication event. Then, by tricking a user’s browser to send an HTTP request to the target site,
the trust is exploited. Assume your bank allows you to log in and perform financial transactions, but does not
validate the authentication for each subsequent transaction. If a user is logged in and has not closed their
browser, then an action in another browser tab could send a hidden request to the bank resulting in a transaction
that appears to be authorized, but in fact was not done by the user.
There are many different mitigation techniques that can be employed, from limiting authentication times, to
cookie expiration, to managing some specific elements of a web page like header checking. The strongest method
is the use of random CSRF tokens in form submissions. A forged request fails because the attacker cannot know or
reproduce the token that the server issued for the session. Testing for CSRF takes a bit more planning than other injection-type attacks, but this, too, can be
accomplished as part of the design process.
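A minimal sketch of the random-token defense, assuming the token is stored server-side in the session and compared on each form submission:

```python
import hmac
import secrets

def issue_token():
    # A fresh, unpredictable token; the server stores it in the session
    # and embeds it in each form it serves.
    return secrets.token_urlsafe(32)

def token_valid(session_token, submitted):
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(session_token, submitted)

session_token = issue_token()
# A forged cross-site request cannot include session_token, so it fails.
```

Because the attacker's page cannot read the victim's session or form contents, any request it forges arrives without the correct token and is rejected.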
Cryptographic Failures
Failures in the application of cryptography can result in failed protection for data and programs. Several attacks
fall into this category: Hard-Coded Credentials, Missing Encryption of Sensitive Data, Use of a Broken or Risky
Cryptographic Algorithm, Download of Code Without Integrity Check, and Use of a One-Way Hash Without a
Salt. Using industry-accepted cryptographic libraries and not creating your own will assist in avoiding this type of
failure. Ensuring cryptography is used both properly and from approved libraries is a necessity to avoid common
cryptographic failures. Even with strong cryptography, hard-coded credentials that are reverse-engineered out of
software result in complete failure of the otherwise-secure algorithm and subsequent failure of protection.

Hard-Coded Credentials
Hard-coding passwords, keys, and other sensitive data into programs has several serious drawbacks. First, it
makes them difficult to change. Yes, a program update can change them, but this is a messy way of managing
secret data. But most importantly, they will not stay secret. With some simple techniques, hackers can reverse-
engineer code, and through a series of analysis steps, determine the location and value of the secret key. This has
happened to some large firms with serious consequences in a very public forum. This is easy to check for during
code walkthroughs and should never be allowed in code.
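One common alternative to hard-coding, sketched here with an invented environment variable name:

```python
import os

# For demonstration only: a real deployment would set this variable in the
# service environment or a secrets manager, never in source code.
os.environ.setdefault("APP_DB_PASSWORD", "example-only-value")

def get_db_password():
    # The secret lives outside the program text, so it can be rotated
    # without rebuilding the program and never appears in a binary for
    # an attacker to reverse-engineer.
    password = os.environ.get("APP_DB_PASSWORD")
    if not password:
        raise RuntimeError("APP_DB_PASSWORD is not set")
    return password
```

A code walkthrough can then enforce a simple rule: no string literal in the codebase is ever a credential.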
Missing Encryption of Sensitive Data
This may seem to be a simple issue—how can one miss encrypting sensitive information?—yet it happens all the
time. There are several causes, the first being ignorance on the part of the development team. Some items are
obviously sensitive, but some may not be so obvious. The data owner is responsible for documenting the
sensitivity of data and its protection requirements. When this step fails, it is hard to blame the development team.
Other cases of missing protection can also arise, typically as part of program operations. Are backups protected?
Are log files protected? Backups and log files are two common places that secrets can become exposed if not
protected. Error-reporting mechanisms can also handle sensitive data; again, if it is not encrypted, is it exposed to
risk of loss? The answer to all of these questions is yes, and many an enterprise has learned the hard way after the
loss occurs that a simple encryption step would have prevented a breach and subsequent notification actions.

Use of a Broken or Risky Cryptographic Algorithm


Cryptography is one of the more difficult technical challenges of modern times. Despite a lot of effort, there are
surprisingly few secure cryptographic algorithms. The rise of computing power has caused many of the older
algorithms to fail under massive number-crunching attacks, attacks that used to take significant resources but are
managed today on a desktop. Data Encryption Standard (DES), the gold standard for decades, is now considered
obsolete, as are many other common cryptographic functions.
Even worse is when a development team decides to create their own encryption methodology. This has been tried
by many teams and always ends up with the system being exploited as the algorithm is broken by hackers. This
forces a redesign/re-engineering effort after the software is deployed, which is an expensive solution to a problem
that should never have occurred in the first place. The solution is simple—always use approved cryptographic
libraries.
A common mode of cryptographic failure revolves around the random number function. The pseudo-random
function that is built into most libraries may appear random and have statistically random properties, but it is not
sufficiently random for cryptographic use. Cryptographically sufficient random number functions are available in
approved cryptographic libraries and should be used for all cryptographic random calculations.
Hash functions have been falling to a series of attacks. MD-5 and SHA-1 are no longer considered secure. Others
will continue to fall, which has led to the SHA-3 series being developed by the National Institute of Standards and
Technology (NIST). Until the new hash functions are deployed, SHA-256, SHA-384, and SHA-512 are still
available, with the number signifying the bit length of the digest. This brings up a design consideration. If the
current design is to use SHA-256, it would be wise when planning data structures to plan for longer hash values,
up to 512 bits, so that if the SHA function needs to be upgraded in the future, then the data structure will support
the upgrade.
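The digest lengths involved in that design decision can be confirmed directly:

```python
import hashlib

msg = b"design for the longest digest you may need"
d256 = hashlib.sha256(msg).hexdigest()
d384 = hashlib.sha384(msg).hexdigest()
d512 = hashlib.sha512(msg).hexdigest()
# SHA-256 yields 256 bits (64 hex characters), SHA-384 yields 384 bits
# (96 hex characters), and SHA-512 yields 512 bits (128 hex characters).
# Sizing a storage column for 512 bits now allows a later algorithm
# upgrade without a schema change.
```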
Download of Code Without Integrity Check
The Internet has become the medium of choice for distributing software, updates, data, and most digital content.
This raises a serious concern: how does one know the digital content is correct and from the correct source?
There are known instances of malware being attached to downloaded code and then being passed off as
legitimate. Hash values can be used to verify the integrity of a file that is being downloaded. For reasons of
integrity, whether to guard against malicious code or just accidental errors that will later affect production, all
downloaded code should have its integrity verified before installation and use.

This requires designing in a checking mechanism, as integrity codes will need to be made available and a
mechanism to verify them established. Simply attaching the hash values to the download is not sufficient, as this
mechanism can be replicated by hackers who can recompute hash values after modifying an update. The hash
values need to be made available in a manner that lets the user know they are from a valid source.

Some download methods, such as Adobe Update and Windows Update, perform the hash check automatically as
part of the update process. Using the vendor’s update methodology can help quite a bit, but verify before trusting.
Contact the vendor and verify the safeguards are in place before trusting automatic update mechanisms.
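A sketch of the verification step, assuming the correct SHA-256 value is published by the vendor over a trusted channel separate from the download itself:

```python
import hashlib

def verify_download(data, published_sha256):
    # Recompute the digest of the received bytes and compare it with the
    # value the vendor published out of band. Any modification, malicious
    # or accidental, changes the digest and fails the check.
    return hashlib.sha256(data).hexdigest() == published_sha256

update = b"pretend installer bytes"          # stand-in for a real download
good = hashlib.sha256(update).hexdigest()    # the vendor-published value
```

As the text notes, the published digest must come from a channel the attacker cannot also modify; otherwise, the attacker simply recomputes it after tampering.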
Use of a One-Way Hash Without a Salt
Hashing is a common function used to secure data, such as passwords, from exposure to unauthorized parties. As
hash values are computationally infeasible to reverse, the only practical attack is to try possible inputs and look for a matching hash
value. This worked well for defenders until the creation of rainbow tables. Rainbow tables exist for all possible
combinations of passwords up to 14 characters, making the hash value a simple lookup field to get the original
password from the table. The solution to this is simple using a technique called salting the hash. A salt value is
concatenated to the password, or other value being hashed, effectively increasing its length beyond that of a
rainbow table. Salting a hash also solves a second problem: because identical inputs produce identical hashes, two
accounts with the same password would otherwise be revealed by matching stored hash values. If the salt also
incorporates an element of the username, identical passwords no longer yield the same stored hash. By increasing
the length of the input with a salt value, you defeat rainbow table lookups; by making part of the salt specific to
the user ID, you hide the fact that two accounts share a password.
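A sketch of salting as described above, using PBKDF2 (a deliberately slow, salted construction) rather than a bare hash; the usernames and password are illustrative:

```python
import hashlib
import os

def hash_password(username, password, salt=None):
    # A random per-user salt lengthens the hashed input beyond rainbow-table
    # coverage; mixing in the username keeps identical passwords on
    # different accounts from producing identical stored hashes.
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), salt + username.encode(), 100_000)
    return salt, digest   # store both alongside the account record

salt_a, hash_a = hash_password("alice", "hunter2")
salt_b, hash_b = hash_password("bob", "hunter2")
# Same password, different users and salts: the stored digests differ.
```

Verification repeats the computation with the stored salt and compares digests; the salt need not be secret, only unique per account.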
INPUT VALIDATION FAILURES
Probably the most important defensive mechanism that can be employed is input validation. Considering all
inputs to be hostile until properly validated can mitigate many attacks based on common vulnerabilities. This is a
challenge, as the validation efforts need to occur after all parsers have completed manipulating input streams, a
common function in web-based applications using Unicode and other international character sets.
Input validation is especially well suited for the following vulnerabilities: Buffer Overflow, Reliance on Untrusted
Inputs in a Security Decision, Cross-Site Scripting (XSS), Cross-Site Request Forgery (CSRF), Path Traversal, and
Incorrect Calculation of Buffer Size.
Input validation may seem suitable for various injection attacks, but given the complexity of the input and
ramifications from legal but improper input streams, this method falls short for most injection attacks. What can
work is a form of recognition and whitelisting approach, where the input is validated and then parsed into a
standard structure that is then executed. This restricts the attack surface to not only legal, but also expected,
inputs.
Output validation is just as important in many cases as input validations. If querying a database for a username
and password match, the expected forms of the output of the match function should be either one match or none.
If using record count to indicate the level of match, a common practice, then a value other than 0 or 1 would be an
error. Defensive coding using output validation would not act on values >1, as these are clearly an error and
should be treated as a failure.
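The record-count rule above translates into a small fail-closed check:

```python
def authenticate(match_count):
    # A credential lookup should match exactly one record or none; any
    # other count indicates an internal error or tampering, and defensive
    # coding treats it as a failure rather than a success.
    if match_count == 1:
        return True
    if match_count == 0:
        return False
    raise RuntimeError("unexpected match count: %d" % match_count)
```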
Buffer Overflow
The most famous of all input validation failures is the incorrect calculation of buffer size, or the buffer overflow attack. This attack occurs when
the input data is larger than the memory space allocated, overwriting other crucial elements. If there’s one item that could be labeled as
the “Most Wanted” in coding security, it would be the buffer overflow. The Computer Emergency Response Team Coordination Center
(CERT/CC) at Carnegie Mellon University estimates that nearly half of all exploits of computer programs stem historically from some form of
buffer overflow. Finding a vaccine to buffer overflows would stamp out 50 percent of these security-related incidents by type and probably
90 percent by volume. The Morris finger worm in 1988 was an exploit of an overflow, as were recent big-name events such as Code Red and
Slammer. The generic classification of buffer overflows includes many variants, such as static buffer overruns, indexing errors, format string
bugs, Unicode and ANSI buffer size mismatches, and heap overruns.
The concept behind these vulnerabilities is relatively simple. The input buffer that is used to hold program input is overwritten with data
that is larger than the buffer can hold. The root cause of this vulnerability is a mixture of two things: poor programming practice and
programming language weaknesses. Programming languages such as C were designed for space and performance constraints. Many
functions in C, like gets(), are unsafe in that they will permit unsafe operations, such as unbounded string manipulation into fixed buffer
locations. The C language also permits direct memory access via pointers, a functionality that provides a lot of programming power, but
carries with it the burden of proper safeguards being provided by the programmer.
The first line of defense is to write solid code. Regardless of the language used or the source of outside input, prudent programming
practice is to treat all input from outside a function as hostile. Validate all inputs as if they were hostile and an attempt to force a buffer
overflow. Accept the notion that although during development everyone may be on the same team, conscientious, and compliant
with design rules, future maintainers may not be as careful. Designing prevention into functions is a foundational defense against this type
of vulnerability.
There is good news in the buffer overflow category—significant attention has been paid to this type of vulnerability, and although it is the
largest contributor to past vulnerabilities, its presence is significantly reduced in newly discovered vulnerabilities.
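Memory-safe languages size buffers automatically, so the sketch below only illustrates the validation discipline the text prescribes: check the input length against the allocated size and fail loudly. The limit is invented:

```python
MAX_NAME_LEN = 64   # hypothetical fixed buffer size in the receiving code

def read_name(raw):
    # In a language with fixed-size buffers, this length check is what
    # stands between normal input and an overflow. Rejecting, rather than
    # silently truncating, also surfaces hostile input for logging.
    if len(raw) > MAX_NAME_LEN:
        raise ValueError("input exceeds buffer size")
    return raw
```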
Canonical Form
In today’s computing environment, a wide range of character sets is used. Unicode and other character code sets
provide multilanguage support, and various encoding schemes, such as hex encoding, are
supported to allow diverse inputs. The net result of all these input methods is that there are numerous ways to
create the same input to a program. Canonicalization is the process by which application programs manipulate
strings to a base form, creating a foundational representation of the input. The definition of canonical form is the
simplest or standard form. Input can be encoded for a variety of reasons, sometimes for transport, sometimes to
deal with legacy or older system compatibility, sometimes because of other protocols involved.

Canonicalization errors arise from the fact that inputs to a web application may be processed by multiple
applications, such as the web server, application server, and database server, each with its own parsers to resolve
appropriate canonicalization issues. Whether this becomes a problem depends on the form of the input string at the time of
error checking. If the error-checking routine runs before the input is resolved to canonical form, then issues may be
missed. The string representing /../, used in directory traversal attacks, can be obscured by encoding and hence,
missed by a character string match before an application parser manipulates it to canonical form. The bottom line
is simple: Input streams may not be what they seem.
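The encoding example can be demonstrated with Python's URL-decoding routine: check only after the input stops changing, since attackers double-encode to slip past a single decode pass:

```python
from urllib.parse import unquote

def is_traversal(raw):
    # Decode repeatedly until the string reaches a fixed point (double
    # encoding such as %252e is common), then inspect the canonical form.
    decoded = raw
    while True:
        nxt = unquote(decoded)
        if nxt == decoded:
            break
        decoded = nxt
    return "../" in decoded or "..\\" in decoded

# "%2e%2e%2f" decodes to "../" and is caught only after canonicalization;
# a naive string match against the raw input would miss it.
```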
Missing Defense Functions
Common defense mechanisms such as authentication and authorization can only be effective when they are
invoked as part of a protection scheme. Ensuring that the appropriate defensive mechanisms are employed on any
activity that crosses a trust boundary will mitigate many common attacks. This is effective against vulnerabilities
such as Missing Authentication for Critical Functions, Missing Authorization, Unrestricted Upload of File with
Dangerous Type, Incorrect Authorization, Incorrect Permission Assignment for Critical Resource, Execution with
Unnecessary Privileges, Improper Restriction of Excessive Authentication Attempts, URL Redirection to Untrusted
Site (“Open Redirect”), and Uncontrolled Format String. Ensuring that the basics of security such as authentication
and authorization are uniformly applied across an application is essential to good practice. Having a ticket to a
football game may get you into the stadium, but to get to the good seats, one must show their ticket again. Multiple
checks aligned with the importance of the asset under protection are simply applying the fundamentals of security.
General Programming Failures
Programming is treated by many as an art, when it has progressed far from that form. Today’s modern
programming is a complex engineering-type evolution with rules and guidelines to prevent failures. The use of a
style guide that restricts certain functions for safety and security reasons is seen as handcuffs by many, but also as
prudent by professionals. For each dangerous function, there is a manner by which it can be tamed, typically by
substitution of a safe version. Buffer overflows due to functions that do not validate input size are a common
example of such dangerous functions. An example is strcpy() in the C/C++ language. This function does not
validate input length, leaving it up to the programmer to manage independently. The companion function
strncpy() bounds the copy to a specified length, and although it takes slightly longer, it still takes less time than a
separate validation step (note that strncpy() does not null-terminate the destination when the source is truncated,
so the terminator must be written explicitly). This is just
one example of the Use of Potentially Dangerous Functions, one of the Top 25. Another source of programming
errors is the inclusion of old code or code obtained from another source. Without running these source code
elements through the same software development lifecycle (SDLC) processes, one is stuck with any potential and
unchecked vulnerabilities in the code. The Inclusion of Functionality from Untrusted Control Sphere error is just
this, using code that has not been validated. Although we may choose to ignore our own legacy code inside the
enterprise, many a major defect has come from older code bases and direct adoption, even of internally
generated code. All source code should be tested using static test tools that can screen code for a wide variety of
issues. From examination for obsolete or disallowed libraries and functions, to common weakness patterns, to
off-by-one errors or failure to properly initialize, the list of vulnerabilities a static code scanner can find is long.
And ensuring that these common errors are cleared prior to each build is an essential mitigation step.
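The strcpy()/strncpy() substitution above can be illustrated with a short sketch. The safe_copy() helper below is a hypothetical wrapper, not a standard function: it bounds the copy to the destination size and always null-terminates, covering the truncation pitfall of strncpy() itself.

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>

// Hypothetical helper illustrating the safe substitution: strcpy() copies
// with no length check, while this wrapper bounds the copy to the
// destination size and always null-terminates, even on truncation.
void safe_copy(char *dst, std::size_t dst_size, const char *src) {
    if (dst_size == 0) return;             // nothing to write into
    std::strncpy(dst, src, dst_size - 1);  // bounded copy; may omit terminator
    dst[dst_size - 1] = '\0';              // guarantee termination explicitly
}
```

A call such as safe_copy(buf, sizeof buf, attacker_input) cannot overrun buf regardless of input length, which is the property strcpy() fails to provide.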
COMMON ENUMERATIONS
Understanding and sharing information between professionals requires a common language and taxonomy to
facilitate the information exchange in terms understood by all parties. Enumerations of known software
weaknesses and vulnerabilities have been compiled and published as part of MITRE Corporation’s “Making
Security Measurable” program, an effort sponsored by the U.S. government and including significant
international input. Two of the first enumerations are the Common Weakness Enumeration (CWE) and Common
Vulnerabilities and Exposures (CVE). These enumerations have enabled significant advancement in the
development of methods to reduce code vulnerabilities through facilitated information exchange using a common
standard language. Both the CVE and CWE are vendor- and language-neutral methods of describing errors and
act as a common vocabulary for communication about weaknesses and vulnerabilities.
This common vocabulary has also led to the development of automated tools to manage the tracking of these
issues. Automated tools can perform operations that would be mind-numbing to a human reader, such as
verifying syntax, searching for specific patterns, detecting off-by-one vulnerabilities, and so on.
Automated tools require a vocabulary to match events to, and this is where the common enumerations have
great benefit. The common enumerations can be used by different tool vendors to provide a consistent descriptive
output.
Common Weakness Enumerations (CWE)
The Common Weakness Enumeration (CWE) is a list of software weaknesses created by a community initiative.
Many entities contribute to the list, creating specific and succinct definitions for each of the elements in the list.
The result is a list of standard identifiers for known software weaknesses in implementation that have been
known to result in vulnerabilities. By using numerous entities from industry, individuals, and government, the
CWE list has grown to cover a wide range of issues, making it useful in describing the range of weaknesses that
exist in software today.
The CWE provides for a common language to describe and exchange information about the causes of software
vulnerabilities. It is a structured list of identifying information, including the time of introduction of a weakness,
the location of the weakness (configuration, code, or environment), the intent of the weakness, and other
information. Using a hierarchical classification scheme, the CWE allows for both a broad description of families
of weaknesses and the specifics of the child elements of the family.
Common Vulnerabilities and Exposures (CVE)
The CVE is a list of standard identifiers for known software vulnerabilities that have been found in software. This
list acts as a dictionary for publicly known vulnerabilities. The use of a common identifier for specific
vulnerabilities enables the sharing of data and information specific to the vulnerability. Prior to the CVE,
different groups would label vulnerabilities in different ways, leading to multiple names for the same
vulnerability.
The CVE includes a single standardized description for the entry. Developed by the MITRE Corporation under
contract with the U.S. Department of Homeland Security, the list is managed by the CVE Editorial Board. This
group ensures entries are legitimate and unique. The list itself is free to anyone who wishes to download and use
it. It has become the industry standard for describing vulnerabilities in software. The use of CVE identifiers
allows for data exchange between information security programs and services, enabling smooth interoperability
and communication.
Each CVE entry contains:
•   A number (e.g., CVE-1999-0067) that combines the year with a unique sequence ID. As of 2016, seven-digit
sequence IDs have been in use, making the latest format CVE-YYYY-XXXXXXX.
•   A brief description of the vulnerability or exposure
•   Pertinent references
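As a small illustration of the identifier format described above, the following sketch validates a CVE ID string. The function name and regular expression are this example's own, not part of any official tooling; the pattern accepts the historical four-digit sequence numbers as well as the longer modern ones.

```cpp
#include <cassert>
#include <regex>
#include <string>

// Checks a string against the CVE identifier format: the literal "CVE-",
// a four-digit year, and a sequence number of four to seven digits.
bool is_valid_cve_id(const std::string& id) {
    static const std::regex pattern(R"(CVE-\d{4}-\d{4,7})");
    return std::regex_match(id, pattern);  // full-string match required
}
```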
VIRTUALIZATION
Virtualization is a software technology that allows multiple operating systems to coexist on a single hardware
resource. Virtualization capitalizes on the advances in hardware capability, allowing for greater utilization of the
hardware resource. As virtualization has matured, other operational benefits have been realized, including
backups, moving servers, and managing large networks of boxes. From a CSSLP point of view, the virtualization
layer should be transparent to most applications and not be a factor in software development for programs that
run on top of the virtualization layer. That said, it is important to consider that applications may run on virtualized
rather than fixed physical hardware, and to design accordingly. Tying an application to a specific piece of hardware
for licensing purposes might be easy, but it removes the operational advantages of managing hardware through a
virtualization layer and makes the application incompatible with virtual environments.
EMBEDDED SYSTEMS
Embedded systems are combinations of software and hardware embedded in a larger system to result in a
specific form of control or computation. Embedded systems tend to be designed for a specific purpose rather
than running on a general-purpose PC. Embedded systems are typically those where software is specifically
purposed and not updated on a regular basis. Most attacks against embedded systems are aimed at information
disclosure. The other attack avenue is one of denial of service, where the attack leaves the device nonfunctioning.
SIDE CHANNEL
The term side channel attack comes from the cryptographic world, where it represents an attack against the
implementation of a cryptosystem, rather than the strength of the algorithm itself (e.g., cold booting). Attacks
that use some byproduct of a system are typically called side channel attacks. There are different types of side
channel attacks, including timing attacks, power attacks, data remanence attacks, and electromagnetic attacks.
Attacks against the human element, also called social engineering attacks, may fit the general description of a
side channel attack, but are usually considered separately and are covered in the next section.
Timing and power attacks examine elements such as power used or time to achieve some function to make
determinations about what is happening. Although these seem far-fetched, they have been used successfully to
reveal information about what is happening inside a program. Electromagnetic attacks were very famous in the
era of cathode ray tube (CRT) monitors, as devices were constructed that could read the magnetic patterns of a
CRT from a distance, reproducing what was on the screen. A modern equivalent is the acoustic attack, where the
computer’s own microphone is used to record keystrokes and then decode them based on the different sounds
each key makes.
The data remanence attack has been in the headlines lately, where researchers have cooled RAM in a machine to
very cold temperatures, allowing them time to get key values out of the RAM even after the power was turned off.
Some types of malware are known to scrape the memory of systems in search of key elements, such as keys and
other secret values. Modern efforts such as Address Space Layout Randomization (ASLR) are designed to defeat
this, but as in all tech “wars,” both sides keep improving the game. The current ASLR scheme used in Windows is
already beginning to show signs of age and will probably be enhanced in the next version.
SOCIAL ENGINEERING ATTACKS
Social engineering refers to attacks against the people side of a system. People can be tricked, deceived, or
manipulated into revealing information even when they do not realize it. Whether performed in person or via some
electronic form of communication, social engineering is the process of convincing an individual to perform some
action that they otherwise would not do. Clicking a link in an email may seem to most to be a clear “do not do”
activity, but if the email appeared to be from a trusted friend and the topic included some specific joke or other
enticing information, how many would not click on it? A current set of advanced attacks, labeled advanced
persistent threat (APT) attacks, often begins with a PDF file that contains legitimate information and an attached
piece of malware. Delivered to an inbox appearing to come from a coworker, these attack vectors can be very
convincing.
Phishing
Phishing (pronounced “fishing”) is a type of social engineering in which an attacker attempts to obtain sensitive
information from a user by masquerading as a trusted entity in an email or instant message sent to a large group
of often random users. The attacker attempts to obtain information such as usernames, passwords, credit card
numbers, and details about the user’s bank accounts. The message sent often encourages the user to go to a
website that appears to be for a reputable entity such as PayPal or eBay, both of which have frequently been used
in phishing attempts. The website the user actually visits is not owned by the reputable organization, however, and
asks the user to supply information that can be used in a later attack. Often the message sent to the user will state
that the user’s account has been compromised and will request, for security purposes, that the user enter their
account information to verify the details. Preying upon the sense of urgency to correct a risky situation, the email
attempts to convey the importance of doing something immediately.
In another very common example of phishing, the attacker sends a bulk email, supposedly from a bank, telling the
recipients that a security breach has occurred and instructing them to click a link to verify that their account has
not been tampered with. If the individual actually clicks the link, they are taken to a site that appears to be owned
by the bank but is actually controlled by the attacker. When they supply their account and password for
“verification” purposes, they are actually giving it to the attacker. The emails and websites generated by the
attackers often appear to be legitimate. A few clues, however, can tip off the user that the email might not be what
it claims to be. The email may contain grammatical and typographical errors, for example. Organizations that are
used in these phishing attempts (such as eBay and PayPal) are careful about their images and will not send a
security-related email to users containing obvious errors. In addition, almost unanimously, organizations tell their
users that they will never ask for sensitive information (such as a password or account number) via an email. The
URL of the website that the users are taken to may also provide a clue that the site is not what it appears to be.
Despite the increasing media coverage concerning phishing attempts, some Internet users still fall for them, which
results in attackers continuing to use this relatively cheap method to gain the information they are seeking.
A recent development has been the introduction of a modification to the original phishing attack. Spear phishing is
the term that has been created to refer to the special targeting of groups with something in common when launching
a phishing attack. By targeting specific groups, the ratio of successful attacks (that is, the number of responses
received) to the total number of emails or messages sent usually increases, because a targeted attack will seem more
plausible than a message sent to users randomly. Spear phishing commonly includes information that makes the
message appear more personal or more correct. The ultimate in spear phishing is the individual attack on a specific
person, and the attack vector may use information specifically crafted to make the message believable.
Another recent and related type of attack is pharming. In pharming, an attacker attempts to redirect a user to a bogus
website that appears similar to the website the user had intended to access. The attacker attempts to obtain sensitive
information (such as credit card numbers) while the user is at the bogus site. The redirection can occur as a result of
modifications to a system’s hosts file or through attacks on Domain Name Service (DNS) servers, which cause an
unsuspecting user to be sent to the attacker’s site instead of the one intended.
Defensive Coding Practices
DECLARATIVE VS. PROGRAMMATIC SECURITY


Security can be instantiated in two different ways in code: in the container itself or in the content of the container.
Declarative programming is when programming specifies the what, but not the how, with respect to the tasks to be
accomplished. An example is SQL, where the “what” is described and the SQL engine manages the “how.” Thus,
declarative security refers to defining security relations with respect to the container. Using a container-based
approach to instantiating security creates a solution that is more flexible, with security rules that are configured as
part of the deployment and not the code itself. Security is managed by the operational personnel, not the
development team.
Imperative programming, also called programmatic security, is the opposite case, where the security
implementation is embedded into the code itself. This can enable a much greater granularity in the approach to
security. This type of fine-grained security, under programmatic control, can be used to enforce complex business
rules that would not be possible under an all-or-nothing container-based approach. This is an advantage for
specific conditions, but it tends to make code less portable or reusable because of the specific business logic that is
built into the program.
The choice of declarative or imperative security functions, or even a mix of both, is a design-level decision. Once
the system is designed with a particular methodology, then the secure development lifecycle (SDL) can build
suitable protections based on the design. This is one of the elements that requires an early design decision, as
many other elements are dependent upon it.
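The contrast between the two styles can be sketched in a few lines. The rule table and the dollar threshold below are invented for illustration: in the declarative style the rule lives in deployment data that operational personnel can change, while the programmatic check embeds a fine-grained business rule directly in code.

```cpp
#include <cassert>
#include <map>
#include <string>

// Declarative sketch: the path-to-role rule is data, loaded at deployment
// time (modeled here as a map); the application code only consults it.
const std::map<std::string, std::string> access_rules = {
    {"/admin",   "admin"},
    {"/reports", "user"},
};

bool declarative_allowed(const std::string& path, const std::string& role) {
    auto it = access_rules.find(path);
    return it != access_rules.end() && it->second == role;
}

// Programmatic sketch: a business rule embedded in code. Transfers above a
// threshold require a manager, a condition a simple container ACL cannot express.
bool programmatic_allowed(double amount, const std::string& role) {
    return amount <= 10000.0 || role == "manager";
}
```

Note the trade-off the text describes: the map can be repopulated from a deployment file without touching code, while the threshold rule offers finer granularity at the cost of portability.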
Bootstrapping
Bootstrapping refers to the self-sustaining startup process that occurs when a computer starts or a program is
initiated. When a computer system is started, an orchestrated set of activities is begun that includes power on
self-test (POST) routines, boot loaders, and operating system initialization activities. Securing a startup sequence
is a challenge—malicious software is known to interrupt the bootstrapping process and insert its own hooks into
the operating system.
When coding an application that relies upon system elements, such as environment variables like path, care must
be taken to ensure that values are not being changed outside the control of the application. Using configuration
files to manage startup elements and keeping them under application control can help in securing the startup and
operational aspects of the application.
Cryptographic Agility
Cryptography is a complex issue, and one that changes over time as weaknesses in algorithms are discovered.
When an algorithm is known to have failed, as in the case of Data Encryption Standard (DES), MD5, RC2, and a
host of others, there needs to be a mechanism to efficiently replace it in software. History has shown that the
cryptographic algorithms we depend upon today will be deprecated in the future. Cryptography can be used to
protect confidentiality and integrity of data when at rest, in transit (communication), or even in some cases when
being acted upon. This is achieved through careful selection of proper algorithms and proper implementation.
Cryptographic agility is the ability to manage the specifics of cryptographic function that are embodied in code
without recompiling, typically through a configuration file. Most often, this is as simple as switching from an
insecure to a more secure algorithm. The challenge is in doing this without replacing the code itself.
Producing cryptographically agile code is not as simple as it seems. The objective is to create software that can be
reconfigured on the fly via configuration files. There are a couple of ways of doing this, and they involve using
library calls for cryptographic functions. The library calls are then abstracted in a manner by which assignments are
managed via a configuration file. This enables the ability to change algorithms via a configuration file change and a
program restart.
Cryptographic agility can also assist in the international problem of approved cryptography. In some cases, certain
cryptographic algorithms are not permitted to be exported to or used in a particular country. Rather than creating
different source-code versions for each country, agility can allow the code to be managed via configurations.
Cryptographic agility functionality is a design-level decision. Once the decision is made with respect to whether
cryptographic agility is included or not, then the SDL can build suitable protections based on the design. This is one
of the elements that requires an early design decision, as many other elements are dependent upon it.
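One common shape for such code is a registry of algorithms keyed by name, with the active name read from a configuration file. The sketch below uses trivial stand-in functions, not real cryptography, to keep the wiring visible; the names and registry structure are this example's assumptions.

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <map>
#include <string>

// Each algorithm is registered under a name; which name is active comes
// from configuration, so swapping algorithms requires no recompilation.
// Both "hashes" here are toy placeholders, not cryptographic functions.
using HashFn = std::function<std::uint64_t(const std::string&)>;

const std::map<std::string, HashFn>& hash_registry() {
    static const std::map<std::string, HashFn> registry = {
        {"legacy-sum", [](const std::string& s) {
            std::uint64_t h = 0;
            for (unsigned char c : s) h += c;  // deliberately weak stand-in
            return h;
        }},
        {"fnv1a", [](const std::string& s) {
            std::uint64_t h = 14695981039346656037ull;  // FNV-1a offset basis
            for (unsigned char c : s) { h ^= c; h *= 1099511628211ull; }
            return h;
        }},
    };
    return registry;
}

// 'algo' would be read from the configuration file at startup; changing the
// file and restarting the program switches algorithms.
std::uint64_t hash_with(const std::string& algo, const std::string& data) {
    return hash_registry().at(algo)(data);
}
```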
Handling Configuration Parameters
Configuration parameters can change the behavior of an application. Securing configuration parameters is an
important issue when configuration can change programmatic behaviors. Managing the security of configuration
parameters can be critical. To determine the criticality of configuration parameters, one needs to analyze what
application functionality is subject to alteration. The risk can range from virtually none, for parameters of no
significance, to extremely high, if critical functions such as cryptographic functions can be changed or disabled.
Securing critical data such as configuration files is not a subject to be taken lightly. As in all risk-based security
issues, the level of protection should be commensurate with the risk of exposure. When designing configuration
setups, it is important to recognize the level of protection needed. The simplest levels include having the file in a
directory protected by the access control list (ACL); the extreme end would include encrypting the sensitive data that
is stored in the configuration file.
Configuration data can also be passed to an application by a calling application. This can occur in a variety of ways—
for example, as part of a URL string or as a direct memory injection—based on information provided by the target
application. Testing should explore the use of URLs, cookies, temp files, and other settings to validate correct
handling of configuration data.
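A simple defensive measure consistent with the above is to validate security-relevant configuration values against an allow list before honoring them. The setting name and the values below are illustrative only.

```cpp
#include <cassert>
#include <set>
#include <string>

// A configuration value that selects a cryptographic mode is treated as
// untrusted input: anything not on the allow list is rejected, rather than
// silently weakening or disabling a protection.
bool accept_cipher_setting(const std::string& value) {
    static const std::set<std::string> allowed = {
        "aes-256-gcm", "chacha20-poly1305"
    };
    return allowed.count(value) > 0;
}
```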
MEMORY MANAGEMENT
Memory management is a crucial aspect of code security. Memory is used to hold the operational code, data,
variables, and working space. Memory management is a complex issue because of the dynamic nature of the usage
of memory across a single program, multiple programs, and the operating system. The allocation and management
of memory is the responsibility of both the operating systems and the application. In managed code applications,
the combination of managed code and the intermediate code execution engine takes care of memory management,
and type safety makes the tasking easier. Memory management is one of the principal strengths of managed code.
Another advantage of managed code is the automatic lifetime control over all resources. Because the code runs in a
sandbox environment, the runtime engine maintains control over all resources.
In unmanaged code situations, the responsibility for memory management is shared between the operating system
and the application, with the task being even more difficult because of the issues associated with variable type
mismatch. In unmanaged code, virtually all operations associated with resources and memory are the
responsibility of the developer, including garbage collection, thread pooling, memory overflows, and more. As in all
situations, complexity is the enemy of security.
Type-Safe Practice
Type safety is the extent to which a programming language prevents errors resulting from different data types in a
program. Type safety can be enforced either statically at compile time or dynamically at runtime to prevent errors.
Type safety is linked to memory safety. Type-safe code will not inadvertently access arbitrary locations of memory
outside the expected memory range. In type-safe code, every variable has a defined type, and that typing determines
the memory lengths involved.
One of the results of this definition is that type-safe programming resolves many memory-related issues
automatically.
Locality
Locality is the principle that, given a memory reference by a program, subsequent memory accesses are often
predictable and in close proximity to previous references. Buffer overflows are a significant issue associated
with memory management and malicious code, and various memory attacks take advantage of the locality
principle. There are also defenses against locality-based memory corruption attacks. Address Space Layout
Randomization (ASLR) is a specific memory management technique, implemented in Windows and other modern
operating systems, that defends against locality attacks.
ERROR HANDLING
No application is perfect, and given enough time, every application will experience failure. How an application
detects and handles failures is important. Some errors are user driven; others can be unexpected consequences or
programmatic errors. The challenge is in how the application responds when an error occurs. This is referred to as
error handling. The specific coding aspect of error handling is referred to as exception management.
When errors are detected and processed by an application, it is important for the correct processes to be initiated.
If logging of critical information is a proper course of action, one must take care not to expose sensitive
information such as personally identifiable information (PII) in the log entries. If information is being sent to the
screen or terminal, then again, one must take care as to what is displayed. Disclosing paths, locations, passwords,
userids, or any of a myriad of other information that would be useful to an adversary should be avoided.
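The separation described above can be made concrete: keep the detailed record for the protected log, and hand the user only a generic message plus a reference ID. The structure and field names here are a sketch under that assumption, not a prescribed API.

```cpp
#include <cassert>
#include <string>

// Full detail goes only to the protected log; the user-facing text carries
// no paths, userids, or other internals, just a reference for support.
struct ErrorReport {
    std::string internal_log;   // written to the protected application log
    std::string user_message;   // safe to display on screen or terminal
};

ErrorReport report_failure(const std::string& detail, long ref_id) {
    ErrorReport r;
    r.internal_log = "ref=" + std::to_string(ref_id) + " detail=" + detail;
    r.user_message = "An internal error occurred. Please contact support "
                     "and quote reference " + std::to_string(ref_id) + ".";
    return r;
}
```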
Exception Management
Exception management is the programmatic response to the occurrence of an exception during the operation of a
program. Properly coded for, exceptions are handled by special functions in code referred to as exception handlers.
Exception handlers can be designed to specifically address known exceptions and handle them according to pre-
established business rules.
There are some broad classes of exceptions that are routinely trapped and handled by software. Arithmetic
overflows are a prime example. Properly trapped and handled with business logic, this type of error can be
managed inside the software itself. Determining appropriate recovery values from arithmetic errors is something that
the application is well positioned to do, and something that the operating system is not.
Part of the development of an application should be an examination of the ways in which the application could fail,
and also the correct ways to address those failures. This is a means of defensive programming, for if the exceptions
are not trapped and handled by the application, they will be handled by the operating system. The operating system
(OS) does not have the embedded knowledge necessary to properly handle the exceptions.
Exceptions are typically not security issues—however, unhandled exceptions can become security issues. If the
application properly handles an exception, then ultimately through logging of the condition and later correction by
the development team, rare, random issues can be detected and fixed over the course of versions. Exceptions that
are unhandled by the application or left to the OS to handle are the ones where issues such as privilege escalation
typically occur.
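An arithmetic-overflow handler of the kind described can be sketched as follows. The clamp-to-limit recovery rule is an invented business rule for illustration; the point is that the exception is trapped and resolved in the application rather than escaping to the operating system.

```cpp
#include <cassert>
#include <limits>
#include <stdexcept>

// Detects signed overflow before it happens and raises an exception, so the
// condition is trapped in the application instead of escaping.
int checked_add(int a, int b) {
    if (b > 0 && a > std::numeric_limits<int>::max() - b)
        throw std::overflow_error("addition overflows int");
    if (b < 0 && a < std::numeric_limits<int>::min() - b)
        throw std::overflow_error("addition underflows int");
    return a + b;
}

// The exception handler applies a business rule chosen for this sketch:
// saturate the result at the nearest representable limit.
int add_with_policy(int a, int b) {
    try {
        return checked_add(a, b);
    } catch (const std::overflow_error&) {
        return b > 0 ? std::numeric_limits<int>::max()
                     : std::numeric_limits<int>::min();
    }
}
```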
INTERFACE CODING
Application programming interfaces (APIs) define how software components are connected to and interacted
with. Modern software development is done in a modular fashion, using APIs to connect the functionality of the
various modules. APIs are significant in that they represent entry points into software. The attack surface analysis
and threat model should identify the APIs that could be attacked and the mitigation plans to limit the risk. Third-
party APIs that are being included as part of the application should also be examined, and errors or issues be
mitigated as part of the SDL process. Older, weak, and deprecated APIs should be identified and not allowed into
the final application.
On all interface inputs into your application, it is important to have the appropriate level of authentication. It is
also important to audit the external interactions for any privileged operations performed via an interface.
PRIMARY MITIGATIONS
There are a set of primary mitigations that have been established over time as proven best practices. As a CSSLP,
you should have these standard tools in your toolbox. An understanding of each, along with where and how it can
be applied, is essential knowledge for all members of the development team. These will usually be employed
through the use of the threat report. The standard best practice–based primary mitigations are as follows:
•   Lock down your environment.
•   Establish and maintain control over all of your inputs.
•   Establish and maintain control over all of your outputs.
•   Assume that external components can be subverted and your code can be read by anyone.
•   Use libraries and frameworks that make it easier to avoid introducing weaknesses.
•   Use industry-accepted security features instead of inventing your own.
•   Integrate security into the entire software development lifecycle.
•   Use a broad mix of methods to comprehensively find and prevent weaknesses.
Defensive coding is not a black art; it is merely applying the materials detailed in the threat report. Attack surface
reduction, an understanding of common coding vulnerabilities, and standard mitigations are the foundational
elements of defensive coding. Additional items in the defensive coding toolkit include code analysis, code review,
versioning, cryptographic agility, memory management, exception handling, interface coding, and managed code.
Secure Software Coding Operations
CODE ANALYSIS (STATIC AND DYNAMIC)
Code analysis is a term used to describe the processes to inspect code for weaknesses and vulnerabilities. It can be
divided into two forms: static and dynamic. Static analysis involves examination of the code without execution.
Dynamic analysis involves the execution of the code as part of the testing. Both static and dynamic analyses are
typically done with tools, which are much better at the detailed analysis steps needed for any but the smallest code
samples.
Code analysis can be performed at virtually any level of development, from unit level to subsystem to system to
complete application. The higher the level, the greater the test space and more complex the analysis. When the
analysis is done by teams of humans reading the code, typically at the smaller unit level, it is referred to as code
reviews. Code analysis should be done at every level of development, because the sooner that weaknesses and
vulnerabilities are discovered, the easier they are to fix. Issues found in design are cheaper to fix than those found
in coding, which are cheaper than those found in final testing, and all of these are cheaper than fixing errors once
the software has been deployed.
Static
Static code analysis is when the code is examined without being executed. This analysis can be performed on both
source and object code bases. The term source code is typically used to designate the high-level language code,
although technically, source code is the original code base in any form, from high language to machine code. Static
analysis can be performed by humans or tools, with humans limited to the high-level language, while tools can be
used against virtually any form of code base.
Static code analysis is frequently performed using automated tools. These tools are given a variety of names, but
are commonly called source code analyzers. Sometimes, extra phrases, such as binary scanners or byte code
scanners, are used to differentiate the tools. Static tools use a variety of mechanisms to search for weaknesses and
vulnerabilities in the code under examination.
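Among the simplest patterns such scanners flag is the off-by-one loop bound. The sketch below shows the corrected loop, with the flawed bound noted in a comment; the function is illustrative, not drawn from any particular tool's documentation.

```cpp
#include <cassert>
#include <cstddef>

// A scanner flags a loop written with `i <= len`, which writes one element
// past the end of the buffer. The corrected bound `i < len` is shown.
void fill_buffer(char *buf, std::size_t len, char value) {
    for (std::size_t i = 0; i < len; ++i)   // was: i <= len (off by one)
        buf[i] = value;
}
```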
Dynamic
Dynamic analysis is performed while the software is executed, either on a target or emulated system. The system
is fed specific test inputs designed to produce specific forms of behaviors. Dynamic analysis can be particularly
important on systems such as embedded systems, where a high degree of operational autonomy is expected. As a
case in point, the failure to perform adequate testing of software on the Ariane rocket program led to the loss of
an Ariane 5 booster shortly after launch. Subsequent analysis showed that if proper testing had been performed, the
error conditions could have been detected and corrected without the loss of the flight vehicle.
Dynamic analysis requires specialized automation to perform specific testing. There are dynamic test suites
designed to monitor operations for programs that have high degrees of parallel functions. There are thread-
checking routines to ensure multicore processors and software are managing threads correctly. There are
programs designed to detect race conditions and memory addressing errors.
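The kind of defect a race detector looks for can be sketched in a few lines. The example below is a generic illustration (not drawn from any particular test suite): several threads share a counter whose read-modify-write sequence is only correct because a lock serializes it.

```python
import threading

# Shared counter incremented by several threads. Without the lock, the
# read-modify-write inside increment() can interleave and lose updates --
# the nondeterministic defect that dynamic race detectors are built to catch.
counter = 0
lock = threading.Lock()

def increment(times: int) -> None:
    global counter
    for _ in range(times):
        with lock:  # remove this lock and the final count becomes unpredictable
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 -- deterministic only because of the lock
```

Because a race may or may not manifest on any given run, such defects are effectively invisible to static inspection, which is why dynamic analysis under realistic load is required to surface them.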
CODE/PEER REVIEW
Code reviews are a team-based activity in which members of the development team inspect code. The premise
behind peer-based code review is simple: many eyes can discover what one pair does not see. The concept is not
without flaws, however, as humans have only a limited ability to parse multilayered, obfuscated code. But herein
lies the rub: the objective of most programming efforts is to produce clean, highly legible code that works not
only now, but also later, when a new developer must understand what is happening, how it works, and how to
modify it appropriately. The primary mission of code review is therefore shared between finding potential
weaknesses or vulnerabilities and assisting developers in the production of clean, understandable code.

The review process itself is simple. The author of the code explains to the team, step by step and line by line, how
the code works. The rest of the team looks for errors that each member has experienced in the past and observes
coding style, the level of comments, and so on. Having to present code to the team and explain how it works leads
developers to write cleaner, more defensible code. This has the added benefit of making the code more
maintainable in the long run. Explaining how the code works also helps others on the team understand it,
providing backup if a developer leaves the team and someone else is assigned to modify the code.
Code walkthroughs are ideal times for checking for and ensuring mitigation against certain types of errors. Lists
of common defects, such as the SANS Top 25 and the OWASP Top 10, can be checked. The list of previous errors
experienced by the firm can be checked, for if it happened once, it is best not to repeat those issues. Unauthorized
code elements, including Easter eggs and logic bombs, are much harder to include in code if the entire team sees
all the code. A partial list of errors and how they can be caught with walkthroughs is shown in Table 14-1.
BUILD ENVIRONMENT
Creating software in a modern development environment is a multistep process. Once the source code is created, it
must still be compiled, linked, tested, packaged (including signing), and distributed. There is typically a tool or set
of tools for each of these tasks. Building software involves properly applying these tools with the correct options set
to create the correct outputs. Options on elements such as compilers are important, for they can determine
what tests and error checks are performed during the compiling process.
Organizations employing a secure development lifecycle (SDL) process will have clearly defined processes and
procedures to ensure the correct tools are used and used with the correct settings. Using these built-in protections
can go a long way toward ensuring that the code being produced does not have issues that should have been caught
during development.

Determining the correct set of tools and settings is not a simple task. Language dependencies and legacy issues
make these choices difficult, and yet these are essential steps if one is to fully employ the capabilities of these tools.
Microsoft’s SDL guidelines have required settings for compilers, linkers, and code analysis tools. Enabling these
options will result in more work earlier in the process, but will reduce the potential for errors later in the
development process, where remediation is more time consuming and expensive.
In addition to the actual build tools, there is an opportunity to define safe libraries. Approved libraries for
cryptographic and other difficult tasks make function-call errors less likely. Create a library of safe
function calls for common problem areas such as buffer overflows, XSS, and injection attacks. Examples of these
libraries are the OWASP Enterprise Security API project and the Microsoft Anti-Cross Site Scripting Library for .NET.
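The idea behind a safe-function library can be illustrated with parameterized queries, the standard safe alternative to string-built SQL. The snippet below is a generic sketch using Python's built-in sqlite3 module, not code from the libraries named above:

```python
import sqlite3

# Build an in-memory database with one user record.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Hostile input that would break out of a string-concatenated query.
user_input = "alice' OR '1'='1"

# Safe: the ? placeholder passes the input as data, never as SQL syntax,
# so the injection attempt simply matches no rows.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the injection attempt is treated as a literal name

# The same query for a legitimate name returns the record.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", ("alice",)
).fetchall()
print(rows)  # [('admin',)]
```

Wrapping patterns like this into an approved library, and banning the unsafe string-concatenation alternative, is exactly the kind of protection the paragraph above describes.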
Integrated Development Environment (IDE)
Automated tools can be built into the integrated development environment, making it easy for the developer to
perform both static and dynamic checking automatically. Integrated development environments have come a long
way in improving workflow and developer productivity. The current version of Microsoft’s Visual
Studio integrates from requirements to data design to coding and testing, all on a single team-based platform that
offers integrated task management, workflow, code analysis, and bug tracking.
A wide array of IDEs exists for different platforms and languages, with varying capabilities. Using automation such
as a modern IDE is an essential part of an SDL, for it eliminates a whole range of simple errors and allows tracking
of significant metrics. Although using an advanced IDE means a learning curve for the development team, this
curve is short compared to the time saved by the team using the tool. Every issue prevented early, in daily builds
and through more efficient work, saves time that would otherwise be lost to rework and repair after the issue is
found, either later in testing or in the field.
ANTITAMPERING TECHNIQUES
An important factor in ensuring that software is genuine and has not been altered is a method of testing the
software integrity. With software being updated across the Web, how can one be sure that the code received is
genuine and has not been tampered with? The answer comes from the application of digital signatures to the
code, a process known as code signing.
Code signing involves applying a digital signature to code, providing a mechanism where the end user can verify
the code integrity. In addition to verifying the integrity of the code, digital signatures provide evidence as to the
source of the software. Code signing rests upon the established public key infrastructure. To use code signing, a
developer will need a key pair. For this key to be recognized by the end user, it needs to be signed by a recognized
certificate authority.
Automatic update services, such as Microsoft’s Windows Update service, use code signing technologies to ensure
that updates are only applied if they are proper in content and source. This technology is built into the update
application, requiring no specific interaction from the end user to ensure authenticity or integrity of the updates.

Steps to Code Signing


1.   The code author uses a one-way hash of the code to produce a digest.
2.   The digest is encrypted with the signer’s private key.
3.   The code and the signed digest are transmitted to end users.
4.   The end user produces a digest of the code using the same hash function as the code author.
5.   The end user decrypts the signed digest with the signer’s public key.
6.   If the two digests match, the code is authenticated and integrity is assured.
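The six steps can be traced end to end with a toy example. The sketch below uses textbook RSA with deliberately tiny primes so it runs with only the standard library; it illustrates the flow of the digest and signature and is in no way a real or secure signing implementation.

```python
import hashlib

# Toy RSA key pair (tiny primes -- illustration only, utterly insecure).
p, q = 61, 53
n = p * q                            # public modulus
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

code = b"print('hello, world')"      # the "software" being distributed

# Steps 1-2: the author hashes the code, then signs the digest with the
# private key (digest reduced mod n so the toy key can handle it).
digest = int.from_bytes(hashlib.sha256(code).digest(), "big") % n
signature = pow(digest, d, n)

# Step 3: the code and the signed digest travel to the end user together.

# Steps 4-6: the user re-hashes the received code, recovers the digest
# from the signature with the public key, and compares the two.
received = int.from_bytes(hashlib.sha256(code).digest(), "big") % n
print(pow(signature, e, n) == received)  # True -- integrity and origin check out
```

In practice, real code signing uses full-length keys, padding schemes, and certificates chained to a recognized certificate authority, but the flow of hash, sign, transmit, re-hash, and compare is the same.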

Code signing should be used for all software distribution, and is essential when the code is distributed via the
Web. End users should not update or install software without some means of verifying the proof of origin and
the integrity of the code being installed. Code signing will not guarantee that the code is defect free; it only
demonstrates that the code has not been altered since it was signed and identifies the source of the code.
CONFIGURATION MANAGEMENT: SOURCE CODE AND VERSIONING
Development of computer code is not a simple “write it and be done” task. Modern applications take significant
time to build all the pieces and assemble a complete functioning product. The individual pieces all go through a
series of separate builds or versions. Some programming shops do daily builds, slowly building a stable code base
from stable parts. Managing the versions and changes associated with all these individual pieces is referred to as
version control. Sometimes referred to as revision control, the objective is to uniquely mark and manage each
individually different release. This is typically done with numbers or combinations of numbers and letters, with
numbers to the left of the decimal point indicating major releases, and numbers on the right indicating the level of
change relative to the major release.
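The numbering convention described above can be made concrete. The helper below is a hypothetical sketch (not part of any version control product) that splits a release string into its major and minor components and orders releases accordingly:

```python
# Parse "major.minor" release strings per the convention above: the number
# left of the decimal point is the major release, and the number to the
# right is the level of change relative to that major release.
def parse_version(tag: str) -> tuple:
    major, minor = tag.split(".")
    return (int(major), int(minor))

releases = ["2.10", "2.9", "3.0", "1.12"]
ordered = sorted(releases, key=parse_version)
print(ordered)  # ['1.12', '2.9', '2.10', '3.0'] -- numeric order, not string order
```

Note that a plain string sort would wrongly place "2.10" before "2.9", which is why version control systems compare release identifiers component by component.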
As projects grow in size and complexity, a version control system, capable of tracking all the pieces and enabling
complete management, is needed. Suppose you need to go back two minor versions on a config file—which one is
it, how do you integrate it into the build stream, and how do you manage the variants? These are all questions
asked by the management team and that are handled by the version control system. The version control system can
also manage access to source files, locking sections of code so that only one developer at a time can check out and
modify them. This prevents two different developers from silently overwriting each other’s work. Alternatively,
the system can allow multiple simultaneous edits and then perform a version merge of the changes,
although this can create issues if collisions are not properly managed by the development team.
Configuration management and version control operations are highly detailed, with lots of recordkeeping.
Management of this level of detail is best done with an automated system that removes human error from the
operational loop. The level of detail across the breadth of a development team makes automation the only way in
which this can be done in an efficient and effective manner. A wide range of software options are available to a
development team to manage this information. Once a specific product is chosen, it can be integrated into the SDL process.
Common Software Vulnerabilities and Countermeasures
Quick Tips
•   The CWE/SANS Top 25 and OWASP Top 10 lists can be used as a checklist of reminders and as a source for a
custom “Top N” list that incorporates internal historical data.
•   Injection attacks are some of the most common and severe attacks that are currently being seen in software.
•   The SQL injection attack is performed by an attacker inputting a specific string to manipulate the SQL
statement to do something other than that intended by the programmer or designer.
•   Command injection attacks manipulate the input to cause additional command-level functionality.
•   Cross-site scripting and cross-site request forgery attacks are web application attacks that use improperly
validated input strings to result in unauthorized and undesired behaviors.
•   Failures in the application of cryptography can result in failed protection for data.
•   Credentials or other secret data should never be hard-coded in a program.
•   Not encrypting all of the sensitive data is a common failure mode.
•   All user input should be considered suspect and validated before use.
•   The Common Weakness Enumeration (CWE) is a list of software weaknesses created by a community
initiative.
•   The Common Vulnerabilities and Exposures (CVE) is a list of standard identifiers for known software
vulnerabilities that have been found in software.
•   Social engineering refers to attacks against the people side of a system.
•   Declarative security refers to defining security relations with respect to the container.
•   Programmatic security is where the security implementation is embedded into the code itself.
•   Cryptographic agility is the ability to manage the specifics of cryptographic function that are embodied in code
without recompiling, typically through a configuration file.
•   Securing configuration parameters is an important issue when configuration can change programmatic
behaviors.
•   Memory management is a crucial aspect of code security.
•   In managed code applications, the combination of managed code and the intermediate code execution engine
takes care of memory management, and type safety makes the task easier.
•   In unmanaged code situations, the responsibility for memory management is shared between the operating
system and the application, with the task being even more difficult because of the issues associated with variable
type mismatch.
•   Type-safe code will not inadvertently access arbitrary locations of memory outside the expected memory range.
•   Locality is a principle that, given a memory reference by a program, subsequent memory accesses are often
predictable and are in close proximity to previous references.
•   Exception management is the programmatic response to the occurrence of an exception during the operation of
a program.
•   APIs are significant in that they represent entry points into software.
•   A set of primary mitigations have been established over time as proven best practices.
•   Code should be inspected during development for weaknesses and vulnerabilities.
•   Static code analysis is performed without executing the code.
•   Dynamic code analysis involves examining the code while it is executing.
•   Code walkthroughs are team events designed to find errors using human-led inspection of source code.
•   Software development is a highly automated task, with many tools available to assist developers in efficient
production of secure code.
•   Integrated development environments provide a wide range of automated functionality designed to make the
development team more productive.
•   Compilers and tools can be configured to do specific testing of code during the production process, and they
need to be integrated into the SDL environment.
•   Code can be cryptographically signed to demonstrate both authenticity and integrity.
•   The management of the various elements of code, files, and settings requires a configuration
management/versioning control system to do this efficiently and effectively.