Microsoft SDL - Version 4.1a
The following documentation provides an in-depth description of the Microsoft SDL methodology and
requirements. Proprietary technologies and resources that are only available internally at Microsoft have
been omitted from these guidelines.
These guidelines can also be found online on the Microsoft Developer Network (MSDN) at
http://msdn.microsoft.com/en-us/library/84aed186-1d75-4366-8e61-8d258746bopq.aspx.
For more information about the Microsoft SDL, visit the SDL Portal at http://www.microsoft.com/sdl.
The following documentation on the Microsoft Security Development Lifecycle version 4.1 (v4.1) is for illustrative purposes only.
This documentation is not an exhaustive reference on the SDL process as practiced at Microsoft. Additional assurance work may be
performed by product teams (but not necessarily documented) at their discretion. As a result, this example should not be considered
as the exact process that Microsoft follows to secure all products.
This documentation should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the
accuracy of any information presented herein.
This document is for informational purposes only. Microsoft makes no warranties, express, implied, or statutory, or statements about
applicability or fitness of purpose for any organization about the information in this document.
The information contained in this document represents the current view of Microsoft Corporation on the issues discussed as of the
date of publication. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment
on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information presented after the date of publication.
Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of
this document may be reproduced, stored in, or introduced into a retrieval system, or transmitted in any form, by any means
(electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of
Microsoft Corporation.
Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter
in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document
does not give you any license to these patents, trademarks, copyrights, or other intellectual property.
Unless otherwise noted, the example companies, organizations, products, domain names, e-mail addresses, logos, people, places,
and events depicted herein are fictitious, and no association with any real company, organization, product, domain name, e-mail
address, logo, person, place, or event is intended or should be inferred.
Microsoft, ActiveX, Internet Explorer, Outlook, Silverlight, SQL Server, Visual C#, Visual C++, Visual Studio, the Visual Studio logo,
Win32, Windows, Windows Live OneCare, Windows Media, Windows Mobile, Windows Server, Windows Vista, and Xbox are
trademarks of the Microsoft group of companies.
The names of actual companies and products mentioned herein may be the trademarks of their respective owners.
Introduction
Project Inception
Cost Analysis
Risk Analysis
Creating Documentation and Tools for Users That Address Security and Privacy
Security Push
Response Planning
Introduction
SDL-Agile Requirements
Constraints
SDL-Agile Example
Risk Assessment
Internal Review
Pre-Production Assessment
Post-Production Assessment
Appendix E: Required and Recommended Compilers, Tools, and Options for All Platforms
Appendix L: Glossary
Appendix V: Lessons Learned and General Policies for Developing LOB Applications
Privacy also demands attention. Ignoring users' privacy concerns can invite blocked deployments,
litigation, negative media coverage, and mistrust. Developers who protect privacy earn users' loyalty
and distinguish themselves from their competitors.
Secure software development has three elements—best practices, process improvements, and metrics.
This document focuses primarily on the first two elements, and metrics are derived from measuring how
they are applied.
Microsoft has implemented a stringent software development process that focuses on these elements.
The goal is to minimize security-related vulnerabilities in the design, code, and documentation and to
detect and eliminate vulnerabilities as early as possible in the development life cycle. These
improvements reduce the number and severity of security vulnerabilities and improve the protection of
users’ privacy.
Secure software development is mandatory for software that is developed for the following uses:
• In a business environment
• To process personally identifiable information (PII) or other sensitive information
• To communicate regularly over the Internet or other networks
(For more specific definitions, see What Products and Services Are Required to Adopt the Security
Development Lifecycle Process? later in this introduction.)
This document describes both required and recommended changes to software development tools and
processes. These changes should be integrated into existing software development processes to facilitate
best practices and achieve measurably improved security and privacy.
Note: This document outlines the SDL process used by Microsoft product groups for application
development. It has been modified slightly to remove references to internal Microsoft resources and to
minimize Microsoft-specific jargon. We make no guarantees as to its applicability for all types of
application development or for all development environments. Implementers should use their judgment in
choosing the portions of the SDL that are appropriate given their existing resources and management support.
Secure by Design
• Secure architecture, design, and structure. Developers consider security issues part of the basic
architectural design of software development. They review detailed designs for possible security
issues and design and develop mitigations for all threats.
• Threat modeling and mitigation. Threat models are created, and threat mitigations are present in all
design and functional specifications.
• Elimination of vulnerabilities. No known security vulnerabilities that would present a significant risk
to anticipated use of the software remain in the code after review. This review includes the use of
analysis and testing tools to eliminate classes of vulnerabilities.
• Improvements in security. Less secure legacy protocols and code are deprecated, and, where
possible, users are provided with secure alternatives that are consistent with industry standards.
Secure by Default
• Least privilege. All components run with the fewest possible permissions.
• Defense in depth. Components do not rely on a single threat mitigation solution that leaves users
exposed if it fails.
• Conservative default settings. The development team is aware of the attack surface for the product
and minimizes it in the default configuration.
• Avoidance of risky default changes. Applications do not make any default changes to the operating
system or security settings that reduce security for the host computer. In some cases, such as for
security products (for example, Microsoft Internet Security and Acceleration [ISA] Server), it is
acceptable for a software program to strengthen (increase) security settings for the host computer.
The most common violations of this principle are games that either open up firewall ports without
informing the user or instruct users to do so without consideration of possible risks.
• Less commonly used services off by default. If fewer than 80 percent of a program’s users use a
feature, that feature should not be activated by default. Measuring 80 percent usage in a product is
often difficult because programs are designed for many different personas. It can be useful to
consider whether a feature addresses a core/primary use scenario for all personas. If it does, the
feature is sometimes referred to as a P1 feature.
Secure in Deployment
• Deployment guides. Prescriptive deployment guides outline how to deploy each feature of a
program securely, including providing users with information that enables them to assess the security
risk of activating non-default options (and thereby increasing the attack surface).
Communications
• Security response. Development teams respond promptly to reports of security vulnerabilities and
communicate information about security updates.
• Community engagement. Development teams proactively engage with users to answer questions
about security vulnerabilities, security updates, or changes in the security landscape.
An analogous concept to SD3+C for privacy is known as PD3+C. The guiding principles for PD3+C are
outlined in the following subsections.
Privacy by Design
• Provide notice and consent. Provide appropriate notice about data that is collected, stored, or
shared so that users can make informed decisions about their personal information.
• Enable user policy and control. Enable parents to manage privacy settings for their children, and
enable enterprises to manage privacy settings for their employees.
• Minimize data collection and sensitivity. Collect the minimum amount of data that is required for a
particular purpose, and use the least sensitive form of that data.
• Protect the storage and transfer of data. Encrypt PII in transfer, limit access to stored data, and
ensure that data usage complies with uses communicated to the user.
Privacy by Default
• Ship with conservative default settings. Obtain appropriate consent before collecting or
transferring any data. To prevent unauthorized access, protect stored personal data with access
control lists.
Privacy in Deployment
• Publish deployment guides. Disclose privacy mechanisms to enterprise users so that they can
establish internal privacy policies and maintain their users’ and employees' privacy.
Communications
• Publish audience-appropriate privacy disclosures. Post privacy statements on appropriate Web
sites.
• Promote transparency. Actively engage mainstream and trade media outlets with white papers and
other documentation to help reduce anxiety about high-risk features.
• Establish a privacy response team. Assign staff responsible for responding if a privacy incident or
escalation occurs.
Process improvements are incremental and do not require radical changes in the development process.
However, it is important to make improvements consistently across an organization.
The rest of this document describes each step of the process in detail.
What Products and Services Are Required to Adopt the SDL Process?
• Any software release that is commonly used or deployed within any organization, such as a business
organization or a government or nonprofit agency.
• Any software release that regularly stores, processes, or communicates PII or other sensitive
information. Examples include financial or medical information.
• Any software product or service that targets or is attractive to children 13 years old or younger.
• Any software release that regularly connects to the Internet or other networks. Such software might
be designed to connect in different ways, including:
• Always online. Services provided by a product that involve a presence on the Internet (for
example, Windows® Messenger).
• Designed to be online. Browser or mail applications that expose Internet functionality (for
example, Microsoft Office Outlook® or Microsoft Internet Explorer®).
• Exposed online. Components that are routinely accessible through other products that interact
with the Internet (for example, Microsoft ActiveX® controls or PC–based games with multiplayer
online support).
• Any software release that automatically downloads updates.
• Any software release that accepts or processes data from an unauthenticated source, including:
• Callable interfaces that “listen.”
• Functionality that parses any unprotected file types that should be limited to system
administrators.
• Any release that contains ActiveX controls.
• Any release that contains COM controls.
How Are New Recommendations and Requirements Added to the SDL Process?
The Security Development Lifecycle consists of the proven best practices and tools that were successfully
used to develop recent products. However, the area of security and privacy changes frequently, and the
Security Development Lifecycle must continue to evolve and to use new knowledge and tools to help
build even more trusted products. But because product development teams must also have some visibility
and predictability of security requirements in order to plan schedules, it is necessary to define how new
recommendations and requirements are introduced, as well as when new requirements are added to the
SDL.
New SDL recommendations may be added at any time, and they do not require immediate
implementation by product teams. New SDL requirements should be released and published at six-month
intervals. New requirements will be finalized and published three months before the beginning of the next
six-month interval for which they are required. For more information about how to hold teams accountable
for requirements, see How Are SDL Requirements Determined for a Specific Product Release?
A number of key knowledge concepts are important to successful software security. These concepts can
be broadly categorized as either basic or advanced security knowledge. Each technical member of a
project team (developer, tester, program manager) should be exposed to the knowledge concepts in the
following subsections.
Basic Concepts
• Secure design, including the following topics:
• Attack surface reduction
• Defense in depth
• Principle of least privilege
• Secure defaults
• Threat modeling, including the following topics:
• Overview of threat modeling
• Design to a threat model
• Coding to a threat model
• Testing to a threat model
• Secure coding, including the following topics:
• Buffer overruns
• Integer arithmetic errors
• Cross-site scripting
• SQL injection
• Weak cryptography
• Managed code issues (Microsoft .NET/Java)
• Security testing, including the following topics:
• Security testing versus functional testing
• Risk assessment
• Test methodologies
• Test automation
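To make one of the secure-coding topics above concrete, the following minimal sketch contrasts SQL built by string formatting (injectable) with a parameterized query. The `users` table and the Python `sqlite3` driver are illustrative choices, not part of the SDL itself:

```python
import sqlite3

def find_user_unsafe(conn, name):
    # VULNERABLE: attacker-controlled input is spliced into the SQL text.
    query = "SELECT id FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def find_user_safe(conn, name):
    # SAFE: the driver binds the value as data, never as SQL syntax.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

payload = "' OR '1'='1"                      # classic injection payload
print(len(find_user_unsafe(conn, payload)))  # prints 1: the row leaks
print(len(find_user_safe(conn, payload)))    # prints 0: payload is inert
```

The same pattern applies to any database driver: bind untrusted input as data, never as query text.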
Advanced Concepts
The preceding training concepts establish an adequate knowledge baseline for technical personnel. As
time and resources permit, it is recommended that you explore other advanced concepts. Examples
include (but are not limited to):
• Security design and architecture.
• User interface design.
• Security concerns in detail.
• Security response processes.
• Implementing custom threat mitigations.
Security Requirements
• All developers, testers, and program managers must complete at least one security training class
each year. Individuals who have not taken a class in the basics of security design, development, and
testing must do so.
• At least 80 percent of the project team staff who work on products or services must be in compliance
with the standards listed earlier before their product or service is released. Relevant managers must
also be in compliance with these standards. Project teams are strongly encouraged to plan security
training early in the development process so that training can be completed as early as possible and
have a maximum positive effect on the project’s security.
Security Recommendations
Microsoft recommends that staff who work in all disciplines read the following publications:
• Writing Secure Code, Second Edition (ISBN 9780735617223; ISBN-10 0-7356-1722-8).
• Uncover Security Design Flaws Using the STRIDE Approach (ISBN-10 0-7356-1991-3).
Privacy Recommendations
Microsoft recommends that staff who work in all disciplines read the following documents:
• Appendix A: Privacy at a Glance (Sample)
• Microsoft Privacy Guidelines for Developing Software Products and Services
Resources
• The Security Development Lifecycle (ISBN 9780735622142; ISBN-10 0-7356-2214-0), Chapter 5:
Stage 0—Education and Awareness.
• Privacy: What Developers and IT Professionals Should Know (ISBN-10: 0-321-22409-4; ISBN-13:
978-0-321-22409-5)
• The Protection of Information in Computer Systems
The Requirements phase of the SDL includes the project inception, when you consider
security and privacy at a foundational level, and a cost analysis, when you determine whether development
and support costs for improving security and privacy are consistent with business needs.
Project Inception
The need to consider security and privacy at a foundational level is a fundamental tenet of system
development. The best opportunity to build trusted software is during the initial planning stages of a new
release or a new version, because development teams can identify key objectives and integrate security and
privacy early, which minimizes disruption to plans and schedules.
Security Requirements
• Develop and answer a short questionnaire to verify whether your development team is subject to
Security Development Lifecycle (SDL) policies. The questionnaire has two possible outcomes:
1. If the project is subject to SDL policies, it must be assigned a security advisor who serves as the
point of contact for its Final Security Review (FSR). It is in the project team’s interest to register
promptly and establish the security requirements for which they will be held accountable. The
team will also be asked some technical questions as part of a security risk assessment to help
the security advisor identify potential security risks. (See Cost Analysis in this document.)
2. If the project is not subject to SDL policies, it is not necessary to assign a security advisor and the
release is classified as exempt from SDL security requirements.
• Identify the team or individual that is responsible for tracking and managing security for the product.
This team or individual does not have sole responsibility for ensuring that a software release is
secure, but the team or individual is responsible for coordinating and communicating the status of any
security issues. In smaller product groups, a single program manager might take on this role.
• Ensure that bug reporting tools can track security issues and that a database can be queried
dynamically for all security bugs at any time. The purpose of this query is to examine unfixed security
issues in the FSR. The project’s bug tracking system must accommodate the bug bar ranking value
recorded with each bug.
• Define and document the project’s security bug bar. This set of criteria establishes a minimum level of
quality. Defining it at the start of the project improves understanding of risks associated with security
issues and enables teams to identify and fix security issues during development. The project team
must negotiate a bug bar approved by the security advisor with project-specific clarifications and (as
appropriate) more stringent security requirements specified by the security advisor. The bug bar
must never be relaxed, even as the project's release date nears. Bug bar examples can
be found in Appendix M: SDL Privacy Bug Bar and Appendix N: SDL Security Bug Bar.
Security Recommendations
It is useful to create a security plan document during the Design phase to outline the processes and work
items your team will follow to integrate security into their development process. The security plan should
identify the timing and resource requirements that the Security Development Lifecycle prescribes for
individual activities. These requirements should include:
• Team training.
• Threat modeling.
• Security push.
• Final Security Review (FSR).
The security plan should reflect a development team’s overall perspective on security goals, challenges,
and plans. Security plans can change, but articulating one early helps ensure that no requirements are
overlooked and avoids last-minute surprises. A sample security plan is included in Appendix O.
Consider using a tool to track security issues by cause and effect. This information is very important to
have later in a project. Ensure that the bug reporting tool used includes fields with the STRIDE values in
the following lists (definitions for these values are available in Appendix B: Security Definitions for
Vulnerability Work Item Tracking).
The tool’s Security Bug Effect field should be set to one or more of the following STRIDE values:
• Not a Security Bug
• Spoofing
• Tampering
• Repudiation
• Information Disclosure
• Denial of Service
• Elevation of Privilege
• Attack Surface Reduction
It is also important to use the Security Bug Cause field to log the cause of a vulnerability (this field
should be mandatory if Security Bug Effect is anything other than Not a Security Bug).
The Security Bug Cause field should be set to one of the following values:
• Not a security bug
• Buffer overflow/underflow
• Arithmetic error (for example, integer overflow)
• SQL/Script injection
• Directory traversal
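As a hedged illustration of the tracking recommendation above, this sketch models a work item carrying the Security Bug Effect and Security Bug Cause fields and enforces the rule that a cause is mandatory whenever the effect is anything other than Not a Security Bug. The class and the simplified value sets are hypothetical, not a real bug-tracker schema:

```python
from dataclasses import dataclass

EFFECTS = {"Not a Security Bug", "Spoofing", "Tampering", "Repudiation",
           "Information Disclosure", "Denial of Service",
           "Elevation of Privilege", "Attack Surface Reduction"}
CAUSES = {"Not a security bug", "Buffer overflow/underflow",
          "Arithmetic error", "SQL/Script injection", "Directory traversal"}

@dataclass
class SecurityBug:
    title: str
    effect: str                          # Security Bug Effect (STRIDE)
    cause: str = "Not a security bug"    # Security Bug Cause

    def __post_init__(self):
        if self.effect not in EFFECTS or self.cause not in CAUSES:
            raise ValueError("unknown field value")
        # Cause is mandatory when the effect marks a real security bug.
        if (self.effect != "Not a Security Bug"
                and self.cause == "Not a security bug"):
            raise ValueError("a security bug must record a concrete cause")

bug = SecurityBug("Path check bypass", "Tampering", "Directory traversal")
print(bug.effect, "/", bug.cause)  # prints Tampering / Directory traversal
```

Making the cause field mandatory at entry time is what later allows the cause-and-effect queries this section recommends.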
Resources
• The Security Development Lifecycle (ISBN 9780735622142; ISBN-10 0-7356-2214-0), Chapter 6:
Stage 1—Project Inception.
Cost Analysis
Before you invest time in design and implementation, it is important to understand the costs and
requirements involved in handling data with privacy considerations. Privacy missteps can increase
development and support costs, so verify early that the cost of improving security and privacy is
consistent with business needs.
Security Requirements
A security risk assessment (SRA) is a mandatory exercise to identify functional aspects of the software
that might require deep security review. Given that program features and intended functionality might be
different from project to project, it is wise to start with a simple SRA and expand it as necessary to meet
the project scope.
Note: SRA guidelines are discussed in Chapter 8 of The Security Development Lifecycle, along with a
sample SRA on the DVD included with the book.
The Privacy Impact Rating (P1, P2, or P3) measures the sensitivity of the data your software will process
from a privacy point of view. More information about Privacy Impact Ratings can be found in Chapter 8 of
The Security Development Lifecycle. General definitions of privacy impact are defined as:
• P1 High Privacy Risk. The feature, product, or service stores or transfers PII or error reports,
monitors the user with an ongoing transfer of anonymous data, changes settings or file type
associations, or installs software.
• P2 Moderate Privacy Risk. The sole behavior that affects privacy in the feature, product, or service
is a one-time, user-initiated, anonymous data transfer (for example, the user clicks a link and goes
out to a Web site).
• P3 Low Privacy Risk. No behaviors exist within the feature, product, or service that affect privacy.
No anonymous or personal data is transferred, no PII is stored on the machine, no settings are
changed on the user's behalf, and no software is installed.
Product teams must complete only the work that is relevant to their Privacy Impact Rating. Complete the
initial assessment early in the product planning/requirements phase, before you write detailed
specifications or code.
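The P1/P2/P3 definitions above can be read as a simple decision rule. The following sketch encodes that reading; the boolean flags are hypothetical names, and a real assessment would work through the fuller questionnaire in Appendix C:

```python
def privacy_impact_rating(stores_or_transfers_pii,
                          ongoing_anonymous_monitoring,
                          changes_settings_or_installs_software,
                          one_time_user_initiated_anonymous_transfer):
    # P1: PII or error reports stored/transferred, ongoing anonymous
    # monitoring, or settings changed / software installed.
    if (stores_or_transfers_pii or ongoing_anonymous_monitoring
            or changes_settings_or_installs_software):
        return "P1"
    # P2: the sole privacy-affecting behavior is a one-time,
    # user-initiated, anonymous data transfer.
    if one_time_user_initiated_anonymous_transfer:
        return "P2"
    # P3: no privacy-affecting behavior at all.
    return "P3"

print(privacy_impact_rating(False, True, False, False))  # prints P1
```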
Privacy Recommendations
If your Privacy Impact Rating is P1 or P2, understand your obligations and try to reduce your risk. Early
awareness of all the required steps for deploying a project with high privacy risk might help you decide
whether the costs are worth the business value gained. Review the guidance in Understand Your
Obligations and Try to Lower Your Risk of Appendix C: SDL Privacy Questionnaire. If your Privacy Impact
Rating is P1, schedule a “sanity check” with your organization's privacy expert. This person should be
able to guide you through implementation of a high-risk project and might have other ideas to help you
reduce your risk.
Resources
• The Security Development Lifecycle (ISBN 9780735622142; ISBN-10 0-7356-2214-0), Chapter 7:
Stage 2—Define and Follow Design Best Practices
• The Security Development Lifecycle (ISBN 9780735622142; ISBN-10 0-7356-2214-0), Chapter 8:
Stage 3—Product Risk Assessment
• Microsoft Privacy Guidelines for Developing Software Products and Services
The Design phase is when you build the plan for how you will take your project through the rest of the
SDL process—from implementation, to verification, to release. During the Design phase you establish
best practices to follow for this phase by way of functional and design specifications, and you perform risk
analysis to identify threats and vulnerabilities in your software.
Threat modeling (described in Design Phase: Risk Analysis) is the other critical security activity that must
be completed during the design phase.
Security Requirements
• Complete a security design review with a security advisor for any project or portion of a project that
requires one. Some low-risk components might not require a detailed security design review.
• When developing with managed code, use strong-named assemblies and request minimal
permission. When using strong-named assemblies, do not use APTCA (AllowPartiallyTrustedCallers
attribute) unless the assembly was approved after a security review. Without specific security review
and approval, assemblies that use APTCA generate FxCop errors and fail to pass a Final Security
Review (FSR).
• For online services, all new releases must use the Relying Party Suite (RPS) v4.0 SDK. RPS
provides significant security advantages over the current Passport Manager (PPM) SDK, the most
important being the elimination of shared symmetric encryption keys, which mitigates security
issues involving key distribution, deployment, and administration. This also significantly reduces
the cost of key revision.
• (New for SDL 4.1) User Account Control (UAC) is a new feature in Windows Vista that is intended to
ease the transition to non-administrative users. Comply with UAC best practices to ensure that your
application runs correctly as a non-administrator. The exit criterion for this requirement is confirmation
from the project team that it has analyzed and minimized the need for elevated privileges and followed
best practices for operation in a UAC environment. Following this requirement enables teams to
design and develop applications with a standard user in mind, which reduces the attack surface
exposed by applications and thus increases the security of the user and system.
Security Recommendations
• Include in all functional and design specifications a section that describes impacts on security.
• Write a security architecture document that provides a description of a software project that focuses
on security. Such a document should complement and reference existing traditional development
collateral without replacing it. A security architecture document should contain, at a minimum:
• Attack surface measurement. After all design specifications are complete, define and document
what the program’s default and maximum attack surfaces are. The size of the attack surface
indicates the likelihood of a successful attack. Therefore, your goal should be to minimize the
attack surface. You can find additional background information in the papers Fending Off Future
Attacks by Reducing Attack Surface and Measuring Relative Attack Surfaces.
• Product structure or layering. Highly structured software with well-defined dependencies among
components is less likely to be vulnerable than software with less structure. Ideally, software
should be structured in a layered hierarchy so that higher components (layers) depend on lower
ones. Lower layers should never depend on higher ones. Developing this sort of layered design is
difficult and might not be feasible with legacy or pre-existing software. However, teams that
develop new software should consider layered and highly structured designs.
• Minimize default attack surface/enable least privilege.
• All feature specifications should consider whether each feature should be enabled by default. If a
feature is used infrequently, disable it by default; enable infrequently used features only after
careful consideration.
• If the program needs to create new user accounts, ensure that they have as few permissions as
possible for the required function and that they also have strong passwords.
• Be very aware of access control issues. Always run code with the fewest possible permissions.
When code fails, find out why it failed and fix the problem instead of increasing permissions. The
more permissions any code has, the greater its exposure to abuse.
• Default installation should be secure. Carefully review for vulnerabilities the functionality and
exposed features that are enabled by default; these constitute the attack surface.
• Consider a defense-in-depth approach. The most exposed entry points should have multiple
protection mechanisms to reduce the likelihood of exploitation of any security vulnerabilities that
might exist. If possible, review public sources of information for known vulnerabilities in competitive
products, analyze them, and adjust your product’s design accordingly.
• If the program is a new release of an existing product, examine past vulnerabilities in previous
versions of the product and analyze their root causes. This analysis might uncover additional
instances of the same classes of problems.
• Deprecate outdated functionality. If the product is a new release of an existing product, evaluate
support for older protocols, file formats, and standards, and strongly consider removing them in the
new release. Older code written when security awareness was less prevalent almost always contains
security vulnerabilities.
Privacy Recommendations
• If your project has a privacy impact rating of P2, identify a compliant design based on the concepts,
scenarios, and rules in the Microsoft Privacy Guidelines for Developing Software Products and
Services. Additional guidance can be found in Appendix C: SDL Privacy Questionnaire.
• Use FxCop to enforce design guidelines in managed code. Many rules are built in by default.
Resources
• The Security Development Lifecycle (ISBN 9780735622142; ISBN-10 0-7356-2214-0), Chapter 8:
Stage 3—Product Risk Assessment
• The Security Development Lifecycle (ISBN 9780735622142; ISBN-10 0-7356-2214-0), Chapter 20:
SDL Minimum Cryptographic Standards (pp. 251-258)
• Writing Secure Code, Second Edition (ISBN 9780735617223; ISBN-10 0-7356-1722-8), Appendix C:
A Designer's Security Checklist (p. 729)
• Fending Off Future Attacks by Reducing Attack Surface, an MSDN article on the process for
determining attack surface
• Measuring Relative Attack Surfaces, a more in-depth research paper
Threat modeling is a systematic process used to identify security threats and
vulnerabilities in software. You must complete threat modeling during project design. A team cannot build
secure software unless it understands the assets the project is trying to protect, the threats and
vulnerabilities introduced by the project, and details of how the project mitigates those threats.
Security Requirements
• Complete threat models for all functionality identified during the cost analysis phase. Threat models
typically must consider the following areas:
• All projects. All code exposed on the attack surface and all code written by or licensed from a
third party.
• New projects. All features and functionality.
• Updated versions of existing projects. New features or functionality added in the updated
version.
Privacy Requirements
If a project has a privacy impact rating of P1:
• Complete the Detailed Privacy Analysis in Appendix C: SDL Privacy Questionnaire. The questions will be
customized to the behaviors specified in the initial assessment.
• Hold a design review with your privacy subject-matter expert.
If your project has a privacy impact rating of P2:
• Complete the Detailed Privacy Analysis in Appendix C: SDL Privacy Questionnaire. The questions
will be customized to the behaviors specified in the initial assessment.
• Hold a design review with your privacy subject-matter expert only if one or more of these criteria
apply:
• The privacy subject-matter expert requests a design review.
• You want confirmation that the design is compliant.
• You wish to request an exception.
If your project has a privacy impact rating of P3, there are no privacy requirements during this phase.
Security Recommendations
• The person who manages the threat modeling process should complete threat modeling training
before working on threat models.
• After all specifications and threat models have been completed and approved, the process for making
changes to functional or design specifications—known as design change requests (DCRs)—should
include an assessment of whether the changes alter existing threats, vulnerabilities, or the
effectiveness of mitigations.
• Create an individual work item for each vulnerability listed in the threat model so that your quality
assurance team can verify that the mitigation is implemented and functions as designed.
Resources
• The Security Development Lifecycle (ISBN 9780735622142; ISBN-10 0-7356-2214-0), Chapter 8:
Stage 3—Product Risk Assessment
• The Security Development Lifecycle (ISBN 9780735622142; ISBN-10 0-7356-2214-0), Chapter 9:
Stage 4—Risk Analysis
• Threat Modeling: Uncover Security Design Flaws Using The STRIDE Approach
• SDL Threat Modeling Tool
The Implementation phase is when the end user of your software is foremost in your mind.
During this phase you create the documentation and tools the customer uses to make informed decisions
about how to deploy your software securely. The Implementation phase is also when you establish
development best practices to detect and remove security and privacy issues early in the development
cycle.
Creating Documentation and Tools for Users That Address Security and
Privacy
Every release of a software program should be secure by design, in its default configuration, and in
deployment. However, people use programs differently, and not everyone uses a program in its default
configuration. You need to provide users with enough security information so they can make informed
decisions about how to deploy a program securely. Because security and usability might conflict, you also
need to educate users about the threats that exist and the balance between risk and functionality when
deciding how to deploy and operate software programs.
It is difficult to discuss specific security documentation needs before development plans and functional
specifications stabilize. As soon as the architecture is reasonably stable, the user experience (UX) team
can develop a security documentation plan and schedule. Delivering documentation about how to use a
software program securely is just as important as delivering the program itself.
Security Recommendations
• Development management, program management, and UX teams should meet to identify and
discuss what information users will need to use the software program securely. Define realistic use
and deployment scenarios in functional and design specifications. Consider user needs for
documentation and tools.
• User experience teams should establish a plan to create user-facing security documentation. This
plan should include appropriate schedules and staffing needs. Communicating the security aspects of
a program to the user in a clear and concise fashion is as important as ensuring that the product code
or functionality is free of vulnerabilities.
• For new versions of existing programs, solicit or gather comments about what problems and
challenges users faced when securing prior versions.
• Make information about secure configurations available separately or as part of the default product
documentation and/or help files. Consider the following issues:
• The program will follow the best practice of reducing the default attack surface. However, what
should users know if they need to activate additional functionality? What risks will they be
exposed to?
Privacy Recommendations
• If the program contains privacy controls, create deployment guides for organizations to help them
protect their users’ privacy (for example, Group Policy controls).
• Create content to help users protect their privacy when using the program (for example, secure your
subnet).
Resources
• The Security Development Lifecycle (ISBN 9780735622142; ISBN-10 0-7356-2214-0), Chapter 10:
Stage 5—Creating Security Documents, Tools, and Best Practices for Customers
• Templates contained in the Windows Server 2003 Security Guide
Security Requirements
• Build tools. Use the currently required (or later) versions of compilers to compile code for the
Win32®, Win64, WinCE, and Macintosh target platforms, as listed in Appendix E: SDL Required and
Recommended Compilers, Tools, and Options for All Platforms.
• Compile C/C++ code with /GS or approved alternative on other platforms.
• Link C/C++ code with /SAFESEH or approved alternative on other platforms.
• Link C/C++ code with /NXCOMPAT (for more information, refer to Appendix F: SDL Requirement:
No Executable Pages) or approved alternative on other platforms.
• Use MIDL with /robust or approved alternative on other platforms.
Privacy Requirements
Establish and document development best practices for the development team. Communicate any design
changes that affect privacy to your team’s privacy lead so that they can document and review any
changes.
Security Recommendations
• Comply with minimal Standard Annotation Language (SAL) code annotation recommendations as
described in Appendix H: SDL Standard Annotation Language (SAL) Recommendations for Native
Win32 Code. Annotating code helps existing code analysis tools identify implementation issues better
and also helps improve the tools. SAL annotated code has additional code analysis requirements, as
described in SDL SAL Recommendations.
• All executable programs written using unmanaged code (.EXE) should call the HeapSetInformation
interface, as described in Appendix I: SDL Requirement Heap Manager Fail Fast Setting. Calling this
interface helps provide additional defense-in-depth protection against heap-based exploits (Win32
only).
• Review available information resources to adopt appropriate coding techniques and methodologies.
For a current and complete list of all development best practice information and resources, see
Writing Secure Code, Second Edition (ISBN 9780735617223; ISBN-10 0-7356-1722-8).
• Review recommended development tools and adopt appropriate tools, in addition to the tools
required by SDL. These tools can be found in The Security Development Lifecycle (ISBN
9780735622142; ISBN-10 0-7356-2214-0), Chapter 21: SDL-Required Tools and Compiler Options.
• Define, document, and communicate to your entire team all best practices and policies based on
analysis of all the resources and tools listed in this document.
• Document all tools that are used, including compiler versions, compile options (for example, /GS),
and additional tools used. Also, forecast any anticipated changes in tools. For more information
about minimum tool requirements and related policy, review How Are New Recommendations
and New Requirements Added to the Security Development Life Cycle Process?
• Create a coding checklist that describes the minimal requirements for any checked-in code. This
checklist can include some of the items from Writing Secure Code, Second Edition “Appendix D:
A Developer’s Security Checklist” (p. 731), clean compile warning level requirements (/W3 as
minimal and /W4 clean as ideal), or other desired minimum standards.
• Establish and document how the team enforces these practices. Is the team running scripts to
check for compliance when code is checked in? How often do you run analysis tools? The
development manager is ultimately responsible for establishing, documenting, and validating
compliance of development best practices.
• For online services and/or LOB applications that use JavaScript, avoid use of the eval() function.
• Additional development best practices for security can be divided into three general categories:
1. Review available information resources to adopt coding techniques and methodologies that
are appropriate for the product.
2. Review recommended development tools to adopt, and use those that are appropriate for the
product, in addition to the tools required by the SDL.
Identify any long-lived pointers in your code. Access to these pointers should be through
encoded pointers, using code like the following:
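A portable sketch of this pattern follows. It is illustrative only: a hypothetical XOR cookie stands in for the Win32 EncodePointer/DecodePointer implementation so the sample compiles on any platform, and g_pFoo, Foo, InitFoo, and CallFoo are assumed names.

```cpp
#include <cstdint>
#include <random>

// Hypothetical stand-ins for the Win32 EncodePointer/DecodePointer APIs:
// XOR the pointer with a per-process random cookie so that a leaked or
// overwritten raw pointer value is useless to an attacker. On Windows,
// call the real EncodePointer/DecodePointer instead of these helpers.
static const uintptr_t g_cookie =
    static_cast<uintptr_t>(std::random_device{}()) | 0x5A5Au; // never zero

static void* EncodePointer(void* p) {
    return reinterpret_cast<void*>(reinterpret_cast<uintptr_t>(p) ^ g_cookie);
}
static void* DecodePointer(void* p) {
    return reinterpret_cast<void*>(reinterpret_cast<uintptr_t>(p) ^ g_cookie);
}

typedef void (*FooFn)(void);

static void* g_pFoo;       // long-lived pointer, stored only in encoded form
static int g_fooCalls = 0;
static void Foo(void) { ++g_fooCalls; }

void InitFoo(void) {
    // Encode during initialization; the raw value is never kept in a global.
    g_pFoo = EncodePointer(reinterpret_cast<void*>(&Foo));
}

void CallFoo(void) {
    // Decode only at the point of use, immediately before the call.
    reinterpret_cast<FooFn>(DecodePointer(g_pFoo))();
}
```

Because the cookie is random per process, an attacker who overwrites g_pFoo with a known address changes the decoded value unpredictably, which typically crashes the process rather than redirecting control flow.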
The global pointer (g_pFoo) is encoded during the initialization phase, and its true value
remains encoded until the pointer is needed. Each time g_pFoo is to be accessed, the code must
call DecodePointer.
• (New for SDL 4.1) Fix code flagged by /W4 compiler warnings. Attackers are finding and
exploiting more obscure classes of vulnerabilities as traditional stack and heap buffer overruns
become harder to find. To this end, it is recommended that all /W4 warnings be fixed
prior to release.
• (New for SDL 4.1) Safe redirect, online only. Automatically redirecting the user (through
Response.Redirect, for example) to any arbitrary location specified in the request (such as a
query string parameter) could open the user to phishing attacks. Therefore, it is recommended
that you not allow HTTP redirects to arbitrary user-defined domains.
• (New for SDL 4.1) No global exception handlers. Exceptions are a powerful way to handle run-
time errors, but they can also be abused in a way that could mask errors or make it easier for
attackers to compromise systems.
• (New for SDL 4.1) Components must have no hard dependencies on the NTLM protocol. All
explicit uses of the NTLM package for network authentication must be replaced with the
Negotiate package. All client authentication calls must provide a properly formatted target name
(SPN). The purpose of the requirement is to enable systems to use Kerberos in place of NTLM
whenever possible.
Resources
• The Security Development Lifecycle (ISBN 9780735622142; ISBN-10 0-7356-2214-0), Chapter 11:
Stage 6—Secure Coding Policies, Chapter 19: SDL Banned Function Calls
• Compiler Security Checks in Depth
• SAL Annotations
During the Verification phase, you ensure that your code meets the security and privacy tenets you
established in the previous phases. This is done through security and privacy testing, and a security push
—which is a team-wide focus on threat model updates, code review, testing, and thorough documentation
review and edit. A public release privacy review is also completed during the Verification phase.
Security testing is important to the Security Development Lifecycle. As Michael Howard and David
LeBlanc note, in Writing Secure Code, Second Edition, “The designers and the specifications might
outline a secure design, the developers might be diligent and write secure code, but it’s the testing
process that determines whether the product is secure in the real world.”
Security Requirements
• File fuzzing is a technique that security researchers use to search for security issues. You can find
many different fuzzing tools on the Internet. There is also a simple fuzzer on the CD that
accompanies the book The Security Development Lifecycle (ISBN 9780735622142; ISBN-10 0-7356-
2214-0). Fuzzers can be very general or tuned for particular types of interfaces. File fuzzing requires
development teams to make a modest resource investment, but it has uncovered many issues. You
must conduct fuzz testing with “retail” (not debug) builds and must correct all issues as described in
the SDL Privacy Bug Bar (Sample) and SDL Security Bug Bar (Sample) appendices.
• If the program exposes remote procedure call (RPC) interfaces, you must use an RPC fuzzing tool to
test for problems. You can find RPC fuzzers on the Internet. This requirement applies only to
programs that expose RPC interfaces. All fuzz testing must be conducted using “retail” (not debug)
builds, and all issues must be corrected as described in the SDL Bug Bar.
• If the project uses ActiveX controls, use an ActiveX fuzzer to test for problems. ActiveX controls pose
a significant security risk and require fuzz testing. You can find ActiveX fuzzers on the Internet.
Conduct all fuzz testing using “retail” (not debug) builds, and correct all issues as described in the
SDL Privacy Bug Bar (Sample) and SDL Security Bug Bar (Sample) appendices.
• Satisfy Win32 testing requirements as described in Appendix J: SDL Requirement: Application
Verifier. The Application Verifier is easy to use and identifies MSRC patch-class issues in
unmanaged code. AppVerifier requires a modest resource investment and should be used
throughout the testing cycle. AppVerifier is not optimized for managed code.
• Define a security bug bar and use it to rate, file, and fix all security vulnerabilities.
• (New for SDL 4.1) COM object testing. Any product that ships a registered COM object must meet
the following minimum criteria:
1. COM objects must be compiled and tested with the SDL required switches enabled (for example, a
COM object must be tested with NX and ASLR flags applied to the control and on a machine with NX
and ASLR enabled).
2. All methods in a COM object's supported interfaces must execute without access violations when
called with valid data.
3. COM objects must follow all published rules on reference counting. See the MSDN documentation
on Addref (http://msdn.microsoft.com/en-us/library/ms691379(VS.85).aspx) and Release
(http://msdn.microsoft.com/en-us/library/ms682317(VS.85).aspx).
4. COM objects must be tested for reliable query, instantiation, and interrogation by any COM
container without returning an invalid pointer, leaking memory, or causing access violations.
5. COM objects must follow the published rules for QueryInterface (http://msdn.microsoft.com/en-
us/library/ms682521(VS.85).aspx).
• (New for SDL 4.1) Perform Application Verifier tests. Test all discrete applications within a shipping
product for heap corruption and Win32 resource issues that might lead to security and reliability
issues. You can detect these issues using AppVerifier, available at http://technet.microsoft.com/en-
us/library/bb457063.aspx. Exit Criteria: All tests in the application's functional test suite have been run
under AppVerifier, and all issues have been fixed.
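The heart of the file-fuzzing requirement above is input mutation. The following sketch shows only that core step, under stated assumptions (the function name and bit-flip strategy are illustrative; production fuzzers run many thousands of iterations against retail builds).

```cpp
#include <cstddef>
#include <cstdint>
#include <random>
#include <vector>

// Minimal mutation step behind file fuzzing: start from a valid, non-empty
// "seed" input, flip a small number of randomly chosen bits, and hand the
// result to the parser under test. A fixed RNG seed keeps failures replayable.
std::vector<uint8_t> MutateBuffer(std::vector<uint8_t> data,
                                  unsigned bitFlips, unsigned rngSeed) {
    std::mt19937 rng(rngSeed);
    std::uniform_int_distribution<std::size_t> pos(0, data.size() - 1);
    std::uniform_int_distribution<int> bit(0, 7);
    for (unsigned i = 0; i < bitFlips; ++i)
        data[pos(rng)] ^= static_cast<uint8_t>(1u << bit(rng)); // flip one bit
    return data;
}
```

Each mutated buffer is then fed to the file parser in a retail build; crashes, hangs, and access violations become bugs rated against the bug bar.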
Security Recommendations
• Create and complete security testing plans that address these issues:
• Security features and functionality work as specified. Ensure that all security features and
functionality that are designed to mitigate threats perform as expected.
• Security features and functionality cannot be circumvented. If a mitigation can be bypassed, an
attacker can try to exploit software weaknesses, rendering security features and functionality
useless.
• (New for SDL 4.1) Secure Code Review. Security code reviews are a critical component of the
Security Development Lifecycle.
Exit Criteria
• All Pri 1 source code should be thoroughly reviewed by inspection teams and code-scanning
tools.
• All Pri 2 code should be reviewed using code-scanning tools and some human analysis.
• Development owners for all source code and testing owners for all binaries have been identified,
documented, and archived.
• All source code is assessed and assigned a priority (Pri 1, Pri 2, or Pri 3). This information is
recorded in a document or spreadsheet and is archived.
• (New for SDL 4.1) Network fuzzing. Fuzzing of network interfaces is one of the primary tools of
security researchers and attackers, and network-facing applications are arguably the most easily
accessed target for a remote attacker. Network-facing interfaces should therefore be fuzz tested.
Resources
• The Security Development Lifecycle (ISBN 9780735622142; ISBN-10 0-7356-2214-0), Chapter 12:
Stage 7—Secure Testing Policies.
• How to Break Software: A Practical Guide to Testing (ISBN 978-0201796193; ISBN-10 0201796198).
• How to Break Software Security: Effective Techniques for Security Testing (ISBN 978-0321194336;
ISBN-10 0321194330).
Security Push
A security push is a team-wide focus on threat model updates, code review, testing, and thorough
documentation review and edit. A security push is not a remedy for a lack of security discipline. Rather,
it is an organized effort to uncover changes that might have occurred during development, improve
security in any legacy code, and identify and remediate any remaining vulnerabilities. However, it should
be noted that it is not possible to build security into software with only a security push.
A security push occurs after a product has entered the verification stage (reached code/feature complete).
It usually begins at about the time beta testing starts. Because the results of the security push might alter
the default configuration and behavior of a product, you should perform a final beta test review after the
security push is complete and after all issues and required changes are resolved.
It is important to note that the goal of a security push is to find vulnerabilities, not to fix them. The time to
fix vulnerabilities is after you complete the security push.
Push Preparation
A successful push requires planning:
• You should allocate time and resources for the push in your project’s schedule, before you begin
development. Rushing the security push will cause problems or delays during the Final Security
Review.
• Your team’s security coordinator should determine what resources are required, organize a security
push leadership team, and create the needed supporting materials and resources.
• The security representative should determine how to communicate security push information to the
rest of the team. It is helpful to establish a central intranet location for all information related to the
push, including news, schedules, plans, forms and documents, white papers, training schedules, and
links. The intranet site should link to internal resources that help the group execute the security push.
This site should serve as the primary source of information, answers, and news for employees during
the push.
• There must be well-defined criteria to determine when the push is complete.
Your team will need training before the push. At a minimum, this training should help team members
understand the intent and logistics of the push itself. Some members might also require updated security
training.
Push Duration
The amount of time, energy, and team-wide focus that a security push requires differs depending on the
status of the code base and the amount of attention the team has given to security earlier in development.
A security push requires less time if your team has:
• Rigorously kept all threat models up to date.
• Actively and completely subjected those threat models to penetration testing.
• Accurately tracked and documented attack surfaces and any changes made to them.
• Completed security code reviews for all high-severity code (see discussion later in this section for
details about how severity is assessed).
• Identified and documented development and testing contacts for all code released with the product.
• Rigorously brought all legacy code up to current security standards.
• Validated the security documentation plan.
The duration of a security push is determined by the amount of code that needs to be reviewed for
security. Try to conduct security code reviews throughout development, after the code is fairly stable. If
you try to condense too many code reviews into too brief a time period, the quality of code reviews
suffers. In general, a security push is measured in weeks, not days. You should aim to complete the push
in three weeks and extend the time as necessary.
Security Requirements
• Review and update threat models. Examine the threat models that were created during the Design
phase. If circumstances prevented creation of threat models during the Design phase, you must develop
them in the earliest phase of the security push.
• Review all bugs that affect security against the security bug bar. Ensure that all security bugs contain
the security bug bar rating.
Privacy Requirements
Review and update the SDL Privacy Questionnaire form (Appendix C to this document) for any material
privacy changes that were made during the implementation and verification stages. Material changes
include:
• Changing the style of consent.
• Substantively changing the language of a notice.
• Collecting different data types.
• Exhibiting new behavior.
Security Recommendations
• Conduct security code reviews for at-risk components. Use the following information to help
determine which components are most at risk, and use this determination to set priorities for security
code review. High-risk items (Sev 1) must be reviewed earliest and most in depth. For a minimal
checklist for security issues to be aware of during code reviews, see “Appendix D: A Developer’s
Security Checklist” in Writing Secure Code, Second Edition (p. 731).
Resources
• The Security Development Lifecycle (ISBN 9780735622142; ISBN-10 0-7356-2214-0), Chapter 13:
Stage 8—The Security Push
The Release phase is when you ready your software for public consumption and, perhaps more
importantly, you ready yourself and your team for what happens once your software is in the hands of the
user. One of the core concepts in the Release phase is response planning—mapping out a plan of action,
should any security or privacy vulnerabilities be discovered in your release—and this carries over to post-
release, as well, in terms of response execution. To this end, a Final Security Review and privacy review
is required prior to release.
Although privacy requirements must be addressed before any public release of code, security
requirements need not be addressed before public release. However, you must complete a Final Security
Review before final release.
Privacy Requirements
• Review and update the Privacy Companion form.
• For a P1 project, your privacy advisor reviews your final SDL Privacy Questionnaire (Appendix C
to this document), helps determine whether a privacy disclosure statement is required, and gives
final privacy approval for public release.
• For a P2 project, you need validation by a privacy advisor if any of the following is true:
• A design review is requested by a privacy advisor.
• You want confirmation that the design is compliant with privacy standards.
• You wish to request an exception.
• For a P3 project, there are no additional privacy requirements.
• Complete the privacy disclosure.
Privacy Recommendations
• Create talking points as suggested by the privacy advisor to use after release to respond to any
potential privacy issues.
• Review deployment guidance for enterprise programs to verify that privacy controls that affect
functionality are documented. Conduct a legal review of the deployment guide.
• Create “quick text” for your support team that addresses likely user questions, and generally foster
strong and frequent communication between your development and support teams.
Resources
• The Security Development Lifecycle (ISBN 9780735622142; ISBN-10 0-7356-2214-0), Chapter 14:
Stage 9—The Final Security Review
Response Planning
Any software can be released with unknown security issues or privacy issues, despite best efforts and
intentions. Even programs with no known vulnerabilities at the time of release can be subject to new
threats that emerge and might require action. Similarly, privacy advocates might raise privacy concerns
after release. You must prepare before release to respond to potential security and privacy incidents. With
proper planning, you should be able to address many of the incidents that could occur in the course of
normal business operations.
Your team must be prepared for a zero-day exploit of a vulnerability—one for which a security update
does not exist. Your team must also be prepared to respond to a software security emergency. If you
create an emergency response plan before release, you will save time, money, and frustration when an
emergency response is required for either security or privacy reasons.
Security Requirements
• The project team must provide contact information for people who respond to security incidents.
Typically, such responses are handled differently for products and services.
• Provide information about which existing sustained engineering (SE) team has agreed to be
responsible for security incident response for the project. If the product does not have an
identified SE team, the project team must create an emergency response plan (ERP) and deliver it to the
incident response team. This plan must include contact information for three to five engineering
resources, three to five marketing resources, and one or two management resources who are the
first points of contact when you need to mobilize your team for a response effort. Someone must
be available 24 hours a day, seven days a week, and contacts must understand their roles and
responsibilities and be able to execute on them when necessary.
• Identify someone who is responsible for security servicing. All code developed outside the project
team (third-party components) must be listed by filename, version, and source (where it came
from).
Privacy Requirements
• For P1 and P2 projects, identify the person who is responsible for responding to all privacy incidents
that may occur. Add this person’s e-mail address to the Incident Response section of the SDL Privacy
Questionnaire (Appendix C to this document). If this person changes positions or leaves the team,
identify a new contact and update all SDL Privacy Questionnaire forms for which that person was
listed as the privacy incident response lead.
• Identify additional development and quality assurance resources on the project team to work on
privacy incident response issues. The privacy incident response lead is responsible for defining these
resources in the Incident Response section of the SDL Privacy Questionnaire.
• After release, if a privacy incident occurs, you must be prepared to follow the SDL Privacy Escalation
Response Framework (Appendix K to this document), which might include risk assessment, detailed
diagnosis, short-term and long-term action planning, and implementation of action plans. Your
response might include creating a patch, replying to media inquiries, and reaching out to influential
external contacts.
Resources
• The Security Development Lifecycle (ISBN 9780735622142; ISBN-10 0-7356-2214-0), Chapter 15:
Stage 10—Security Response Planning
• Appendix K: SDL Privacy Escalation Response Framework (Sample)
A Final Security Review (FSR) can last anywhere from a few days to six weeks, depending on the number of
issues and the team’s ability to make necessary changes.
It is important to schedule the FSR carefully—that is, you need to allow enough time to address any
serious issues that might be found during the review. You also need to allow enough time for a thorough
analysis; insufficient time could cause you to make significant changes after the FSR is completed.
Security Requirements
• The project team must provide all required information before the scheduled FSR start date. Failure
to do so may delay completion of the FSR. If the schedule slips significantly before the FSR begins,
contact the assigned security advisor to reschedule.
• After the FSR is finished, the security advisor either signs off on the project as is or provides a list of
required changes.
• For online services and/or LOB applications, projects releasing services are required to have a
security score of B or above to successfully pass the FSR. Both Operations and Product groups are
responsible for compliance. A product’s security is managed at many levels. Vulnerabilities, whether
in code or at host level, put the entire product (and possibly the environment) at risk.
Security Recommendations
• Ensure the product team is constantly evaluating the severity of security vulnerabilities against the
standard that is used during the security push and FSR. Otherwise, a large number of security bugs
might be reactivated during the FSR.
Resources
• The Security Development Lifecycle (ISBN 9780735622142; ISBN-10 0-7356-2214-0), Chapter 16:
Stage 11—Product Release
Security Requirements
• To facilitate the debugging of security vulnerability reports and to help tools teams research cases in
which automated tools failed to identify security vulnerabilities, all product teams must submit
symbols for all publicly released products as part of the release process. This requirement is needed
only for RTM/RTW binaries and any post-release binaries that are publicly released to users (such as
service packs or updates, among others).
• Design and implement a sign-off process to ensure security and other policy compliance before you
ship. This process should include explicit acknowledgement that the product successfully passed the
FSR and was approved for release.
Privacy Requirements
• Design and implement a sign-off process to ensure privacy and other policy compliance before you
ship. This process should include explicit acknowledgement that the product successfully passed the
FSR and was approved for release.
Resources
N/A
Resources
• Appendix K: SDL Privacy Escalation Response Framework (Sample)
This section does not explain all the nuances of the SDL. To gain a deeper understanding of the SDL, you
can review the main section of this document.
The intended audience for this section is development teams who want to build more secure applications
using Agile methods. No extensive SDL or Agile knowledge is assumed.
Introduction
Many software development organizations, including many product and online services groups within
Microsoft, use Agile software development and management methods to build their applications.
Historically, security has not been given the attention it needs when developing software with Agile
methods. Since Agile methods focus on rapidly creating features that satisfy customers’ direct needs, and
security is a customer need, it’s important that it not be overlooked. In today’s highly interconnected
world, where there are strong regulatory and privacy requirements to protect private data, security must
be treated as a high priority.
There is a perception today that Agile methods do not create secure code, and, on further analysis, the
perception is reality. There is very little “secure Agile” expertise available in the market today. This needs
to change. But the only way the perception and reality can change is by actively taking steps to integrate
security requirements into Agile development methods.
Microsoft has embarked on a set of software development process improvements called the Security
Development Lifecycle (SDL). The SDL has been shown to reduce the number of vulnerabilities in
shipping software by more than 50 percent. However, from an Agile viewpoint, the SDL is heavyweight
because it was designed primarily to help secure very large products, such as Windows and Microsoft
Office, both of which have long development cycles.
If Agile practitioners are to adopt the SDL, two changes must be made. First, SDL additions to Agile
processes must be lean. This means that for each feature, the team does just enough SDL work for that
feature before working on the next one. Second, the development phases (design, implementation,
verification, and release) associated with the classic waterfall-style SDL do not apply to Agile and must be
reorganized into a more Agile-friendly format. To this end, the SDL team at Microsoft developed and put
into practice a streamlined approach that melds agile methods and security—the Security Development
Lifecycle for Agile Development (SDL-Agile).
SDL-Agile Requirements
A workhorse of Agile development is the sprint, which is a short period of time (usually 15 to 60 days)
within which a set of features or stories are designed, developed, tested, and then potentially delivered to
customers. The list of features to add to a product is called the product backlog, and prior to a sprint
commencing, a list of features is selected from the product backlog and added to the sprint backlog. The
SDL fits this metaphor perfectly—SDL requirements are represented as tasks and added to the product
and sprint backlogs. These tasks are then selected by team members to complete. You can think of the
bite-sized SDL tasks added to the backlog as non-functional stories.
Every-Sprint Requirements
In order to fit the weighty SDL requirements into the svelte Agile framework, SDL-Agile places each SDL
requirement and recommendation into one of three categories defined by frequency of completion. The
first category consists of the SDL requirements that are so essential to security that no software should
ever be released without these requirements being met. This category is called the every-sprint category.
Whether a team’s sprint is two weeks or two months long, every SDL requirement in the every-sprint
category must be completed in each and every sprint, or the sprint is deemed incomplete, and the
software cannot be released. This includes any release of the software to an external audience, whether
this is a boxed-product release to manufacturing (RTM), an online service release to the Web (RTW), or an alpha/beta preview release.
• Run analysis tools daily or per build (see Tooling and Automation later in this SDL-Agile section).
• Threat model all new features (see Threat Modeling: The Cornerstone of the SDL).
• Ensure that each project member has completed at least one security training course in the past
year (see Security Education).
• Use filtering and escaping libraries around all Web output.
• Use only strong crypto in new code (AES, RSA, and SHA-256 or better).
For a complete list of the every-sprint requirements as followed by Microsoft SDL-Agile teams, see
Appendix P.
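The last two sample requirements above can be illustrated with a short sketch using Python's standard library. The function names are invented for the example, and a real Web stack would rely on its framework's output-encoding facilities rather than hand-rolled helpers:

```python
import hashlib
import html

def render_comment(untrusted: str) -> str:
    """Escape untrusted text before embedding it in HTML output."""
    # html.escape converts <, >, & (and quotes) to entities, defusing script injection.
    return "<p>" + html.escape(untrusted) + "</p>"

def fingerprint(data: bytes) -> str:
    """Hash with SHA-256, an SDL-approved algorithm; avoid MD5/SHA-1 in new code."""
    return hashlib.sha256(data).hexdigest()

# An attempted script injection is rendered inert:
# render_comment("<script>alert(1)</script>")
#   -> "<p>&lt;script&gt;alert(1)&lt;/script&gt;</p>"
```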
Bucket Requirements
The second category of SDL requirement consists of tasks that must be performed on a regular basis
over the lifetime of the project but that are not so critical as to be mandated for each sprint. This category
is called the bucket category and is subdivided into three separate buckets of related tasks. Currently
there are three buckets in the bucket category—verification tasks (mostly fuzzers and other analysis
tools), design review tasks, and response planning tasks. Instead of completing all bucket requirements
each sprint, product teams must complete only one SDL requirement from each bucket of related tasks
during each sprint. The table below contains only a sampling of the tasks for each bucket. To see a
complete list of all tasks for all three buckets, consult Appendix Q: SDL-Agile Bucket Requirements.
Verification tasks: Attack surface analysis; Binary analysis (BinScope); File fuzz testing
Design review tasks: Review crypto design; Assembly naming and APTCA; User Account Control
Response planning tasks: Update security response contacts; Update network down plan; Define/update security bug bar
Table 1. Example of bucket categories. For a complete list of bucket items, see Appendix Q: SDL-Agile
Bucket Requirements.
In this example, a team would be required to complete one verification requirement, one design review
requirement, and one response planning requirement in every sprint (in addition to the every-sprint
requirements discussed earlier). For sprint one, the team might choose to complete File fuzz testing,
Review crypto design, and Update security bug bar from the table. For sprint two, they might choose
Binary analysis (BinScope), Assembly naming and APTCA, and Update network down plan.
It is left to the product teams to determine which tasks from each bucket they would like to address in
any given sprint. The SDL-Agile does not mandate any type of round-robin or other task prioritization for
these requirements. If your team determines that they are best served by completing file fuzzing
requirements every other sprint but that SOAP fuzzing only needs to be performed every 10 sprints, that’s
acceptable.
However, no requirement can be completely ignored. Every requirement in the SDL has been shown to
identify or prevent some form of security or privacy issue, or both. Therefore, no SDL bucket requirement
can go more than six months without being completed.
One-Time Requirements
There are some SDL requirements that need to be met when you first start a new project with SDL-Agile
or when you first start using SDL-Agile with an existing project. These are generally once-per-project
tasks that won’t need to be repeated after they’re complete. This is the final category of SDL-Agile
requirements, called the one-time requirements.
The one-time requirements should generally be easy and quick to complete, with the exception of
creating a baseline threat model, which is discussed later in this section. Even though these tasks are
short, there are enough of them that it would not be feasible for a team just starting with SDL-Agile to
complete all of them in one sprint, given that the team also needs to complete the every-sprint
requirements and one requirement from each of the buckets.
To address this issue, the SDL-Agile allows a grace period to complete each one-time requirement. The
period generally ranges from one month to one year after the start of the project, depending on the size
and complexity of the requirement. For example, choosing a security advisor is considered an easy,
straightforward task and has a one-month completion deadline, whereas updating your project to use the
latest version of the compiler is considered a potentially long, difficult task and has a one-year completion
deadline. The current list of one-time requirements and the corresponding grace periods can be found in
Appendix R of this document. Figure 2 provides an illustration of this process in action.
Constraints
The main difficulty that SDL-Agile attempts to address is that of fitting the entire SDL into a short release
cycle. It is entirely reasonable to mandate that every SDL requirement be completed over the course of a
two- or three-year-long release cycle. It is not reasonable to mandate the same for a two- or three-week-
long release cycle. The categorization of SDL requirements into every-sprint, one-time, and the three
bucket groups is the SDL-Agile solution for dealing with this conundrum. However, an effect of this
categorization is that teams can temporarily skip some SDL requirements for some releases. The
Microsoft SDL team believes this is a necessary situation required to provide the best mix of security,
feature development, and speed of release for teams with short release cycles.
Although SDL-Agile was designed for teams with short release cycles, teams with longer release cycles
are still eligible to use the SDL-Agile process. However, they may find that they are actually performing
more security work than if they had used the classic, waterfall-based SDL. Requirements that a team only
needs to complete once in classic SDL may need to be met five or six (or more) times in SDL-Agile over
the same period of time.
Security Education
Each member of a project team must complete at least one security training course every year. If more
than 20 percent of the project members are out of compliance with this non-negotiable requirement, the
requirement is failed (and consequently so is the sprint, and the product is not allowed to release).
Consult your sprint leader for a list of courses that satisfy SDL training requirements. You can also consult
the SDL Pro Network for training courses and recommendations.
Additionally, in the interests of staying lean, engineers and testers performing security-related tasks or
SDL-related tasks should acquire relevant security knowledge prior to performing the tasks on the sprint.
In this case, relevant is defined as security concepts that are pertinent to the features developed or tested
during the sprint. Examples include:
Web-based applications
• Cross-site scripting
• SQL injection
Native code (C and C++)
• Buffer overflows
• Integer overflows
All languages
• Input validation
• Language-specific issues (PHP, Java, C#)
Cryptographic code
Acquiring security knowledge could be as simple as reading appropriate chapters in a book1 or watching
an online training class. If someone on the team wants to adopt the role of “security champion” or security
expert for their team, they should attend broader and deeper security education as part of their normal
ongoing education. Having a security expert close by is advantageous to the team and, more importantly,
to the customer.
1 19 Deadly Sins of Software Security by Howard, LeBlanc, and Viega is a book that focuses on language and domain-specific
coding vulnerabilities.
Tooling and Automation
SDL-Agile requires the following tools to be run at least once per sprint and recommends that they be run
daily or as part of the build and check-in process:
.NET code:
• FxCop (code analysis, with the SDL-required security rules enabled)
Threat Modeling: The Cornerstone of the SDL
A threat model is a critical part of securing a product because a good threat model helps to:
• Determine potential security design issues.
• Drive attack-surface analysis and identify the most at-risk components.
• Drive the fuzz-testing process.
Once a threat model baseline is in place, any extra work updating the threat model will usually be small,
incremental changes.
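Threat enumeration for a model can be sketched with the STRIDE-per-element heuristic described in the SDL literature: each data-flow-diagram (DFD) element type is subject to a characteristic subset of the STRIDE threat categories. The code below is purely illustrative; the element names and data shapes are assumptions, not part of any SDL tooling:

```python
# Full STRIDE category names, keyed by their initial letter.
STRIDE = {
    "S": "Spoofing", "T": "Tampering", "R": "Repudiation",
    "I": "Information disclosure", "D": "Denial of service",
    "E": "Elevation of privilege",
}

# Which STRIDE categories typically apply to each DFD element type
# (the STRIDE-per-element heuristic).
APPLICABLE = {
    "external entity": "SR",
    "process": "STRIDE",
    "data store": "TRID",
    "data flow": "TID",
}

def enumerate_threats(elements):
    """Yield (element name, threat category) stubs for later analysis and mitigation."""
    for name, kind in elements:
        for letter in APPLICABLE[kind]:
            yield (name, STRIDE[letter])

threats = list(enumerate_threats([("login form", "data flow"),
                                  ("user DB", "data store")]))
```

Each emitted pair is a starting point for a threat worth analyzing, not a confirmed vulnerability; the team still decides which pairs are real threats and what mitigations apply.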
Fuzz Testing
Fuzz testing is a brutally effective security testing technique, especially if the team has never used fuzz
testing on the product. The threat model should determine what portions of the application to fuzz test. If
no threat model exists, the initial list should include high-risk items, such as those defined in Appendix S:
SDL-Agile High-Risk Code.
After this list is complete, the relative exposure of each entry point should be determined, and this drives
the order in which entry points are fuzzed. For example, remotely accessible or unauthenticated
endpoints are higher risk than local-only or authenticated endpoints.
The beauty of fuzz testing is that once a computer or group of computers is configured to fuzz the
application, it can be left running, and only crashes need to be analyzed. If there are no crashes from the
outset of fuzz testing, the fuzz test is probably inadequate, and a new task should be created to analyze
why the fuzz tests are ineffective and to make the necessary adjustments.
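A minimal mutation fuzzer along these lines can be sketched as follows. This is an assumption-laden illustration: it takes any parser callable, and it treats every unhandled exception as a "crash" worth saving, whereas a real harness would distinguish expected parse errors from genuine faults:

```python
import random

def mutate(seed: bytes, rounds: int = 8, rng: random.Random = None) -> bytes:
    """Produce a malformed variant of a valid seed by overwriting random bytes."""
    rng = rng or random.Random()
    data = bytearray(seed)
    for _ in range(rounds):
        data[rng.randrange(len(data))] = rng.randrange(256)
    return bytes(data)

def fuzz(parse, seed: bytes, iterations: int = 1000):
    """Feed mutated inputs to a parser and collect the inputs that crash it."""
    crashers = []
    rng = random.Random(1)  # fixed seed so runs are reproducible
    for _ in range(iterations):
        case = mutate(seed, rng=rng)
        try:
            parse(case)
        except Exception:
            crashers.append(case)  # save the input for later crash analysis
    return crashers
```

The saved crashing inputs are exactly the artifacts the text describes: the machine runs unattended, and only the crashers need human analysis.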
Using a Spike to Analyze and Measure Insecure Code in Bug-Dense and “At-
Risk” Code
A critical indicator of potential security bug density is the age of the code. Based on the experiences of
Microsoft developers and testers, the older the code, the higher the number of security vulnerabilities
found in the code. If your project has a large amount of legacy code or risky code (see Appendix S: SDL-
Agile High-Risk Code), you should locate as many vulnerabilities in this code as possible. This is
achieved through a spike. A spike is a time-boxed “side project” with a well-defined goal (in this case, to
find security vulnerabilities). You can think of this spike as a mini security push. The goal of the security
push at Microsoft is to bring risky code up to date in a short amount of time relative to the project duration.
Note that the spike doesn't aim to fix the vulnerabilities yet, but rather to analyze them and
determine how severe they are. If a lot of security vulnerabilities are found in code with network connections
or in code that handles sensitive data, these vulnerabilities should not only be fixed soon, but also another
spike should be set up to comb the code more thoroughly for more security vulnerabilities.
• All code. Search for input validation failures leading to buffer overruns and integer overruns.
Also, search for insecure passwords and key handling, along with weak cryptographic algorithms.
• Web code. Search for vulnerabilities caused by improper validation of user input, such as
cross-site scripting (XSS).
• Database code. Search for SQL injection vulnerabilities.
• Safe for scripting ActiveX controls. Review for C/C++ errors, information leakage, and
dangerous operations.
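A spike of this kind often begins with a crude pattern sweep to build the triage list before any manual review. The sketch below is illustrative only: the patterns are far from exhaustive, and a match says nothing about exploitability; it simply earns that line a closer look during the spike:

```python
import re

# Patterns flagging weak algorithms and risky C APIs for manual triage.
WEAK_PATTERNS = {
    "weak hash": re.compile(r"\b(md5|sha1)\b", re.IGNORECASE),
    "weak cipher": re.compile(r"\b(des|rc4)\b", re.IGNORECASE),
    "unsafe C API": re.compile(r"\b(strcpy|sprintf|gets)\b(?=\s*\()"),
}

def scan(source: str):
    """Return (line number, category, matched text) findings for triage."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for category, pattern in WEAK_PATTERNS.items():
            for match in pattern.finditer(line):
                findings.append((lineno, category, match.group(0)))
    return findings
```

Real static analysis tools (such as the code analysis tools the SDL requires) do far more than pattern matching, but even a sweep like this can prioritize which files the spike examines first.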
Exceptions
The SDL requirement exception workflow is somewhat different in SDL-Agile than in the classic SDL.
Exceptions in SDL-Classic are granted for the life of the release, but this won’t work for Agile projects. A
“release” of an Agile project may only last for a few days until the next sprint is complete, and it would be
a waste of time for project managers to keep renewing exceptions every week.
To address this issue, project teams following SDL-Agile can choose to either apply for an exception for
the duration of the sprint (which works well for longer sprints) or for a specific amount of time, not to
exceed six months (which works well for shorter sprints). When reviewing the requirement exception, the
security advisor can choose to increase or decrease the severity of the exception by one level (and thus
increase or decrease the seniority of the manager required to approve the exception) based on the
requested exception duration.
For example, say a team requests an exception for a requirement normally classified as Moderate, which
requires manager approval. If they request the exception only for a very short period of time, say two
weeks, the security advisor may drop the severity to Low, which requires only approval from the team’s
security champion. On the other hand, if the team requests the full six months, the security advisor may
increase the severity to Important and require signoff from senior management due to the increased risk.
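The duration-based adjustment just described could be modeled as a simple rule. In this sketch, the level names match the examples above, but the week thresholds and the approver mapping are assumptions for illustration, not actual SDL policy:

```python
# Severity levels in ascending order, and who must approve each.
LEVELS = ["Low", "Moderate", "Important"]
APPROVER = {
    "Low": "security champion",
    "Moderate": "manager",
    "Important": "senior manager",
}

def adjust_severity(base: str, duration_weeks: int) -> str:
    """Drop severity one level for very short exceptions; raise it for long ones."""
    i = LEVELS.index(base)
    if duration_weeks <= 2:            # very short exception: lower risk
        i = max(i - 1, 0)
    elif duration_weeks >= 26:         # full six months: higher risk
        i = min(i + 1, len(LEVELS) - 1)
    return LEVELS[i]
```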
In addition to applying for exceptions for specific requirements, teams can also request an exception for
an entire bucket. Normally teams must complete at least one requirement from each of the bucket
categories during each sprint, but if a team cannot complete even one requirement from a bucket, the
team requests an exception to cover that entire bucket. The team can request an exception for the
duration of the sprint or for a specific time period, not to exceed six months, just like for single exceptions.
However, due to the broad nature of the exception—basically stating that the team is going to skip an
entire category of requirements—bucket exceptions are classified as Important and require the approval
of at least a senior manager.
Before each release, the team and its security advisor verify that:
• All every-sprint requirements have been completed, or exceptions for those requirements have
been granted.
• At least one requirement from each bucket requirement category has been completed (or an
exception has been granted for that bucket).
• No bucket requirement has gone more than six months without being completed (or an exception
has been granted).
• No one-time requirements have exceeded their grace period deadline (or exceptions have been
granted).
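As a sketch, the release conditions above can be encoded as a single gate check. The data shapes and field names below are hypothetical; a real team would pull this state from its compliance tracking system:

```python
from datetime import date, timedelta

SIX_MONTHS = timedelta(days=183)  # approximation of the six-month limit

def may_release(every_sprint_done, bucket_done_per_category,
                bucket_last_completed, one_time_overdue, today):
    """Return True only if all four SDL-Agile release conditions hold.

    every_sprint_done: {requirement: bool} (True also covers granted exceptions)
    bucket_done_per_category: {bucket: bool}, one completed task per bucket
    bucket_last_completed: {bucket task: date last completed}
    one_time_overdue: one-time requirements past their grace period
    """
    if not all(every_sprint_done.values()):        # every-sprint requirements
        return False
    if not all(bucket_done_per_category.values()): # one task per bucket
        return False
    if any(today - done > SIX_MONTHS               # no bucket task stale > 6 months
           for done in bucket_last_completed.values()):
        return False
    if one_time_overdue:                           # no grace period exceeded
        return False
    return True
```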
Now that the basic methodology and foundation are in place, it's time for an example scenario.
SDL-Agile Example
A database-driven Web product is currently in development by a team with a four-week sprint duration. It
is primarily written using C# and ASP.NET. There is a Windows service that processes some data from
the Web application. The service was originally written three years ago and is about 11,000 lines of C++
code—it’s pretty complex.
Input to the Web application is mostly unauthenticated, but it does offer a remotely accessible admin-only
interface. The application also uses a small ActiveX control written in C++.
The product backlog includes 45 user stories—21 of these are high-priority stories, 10 are medium-priority
stories, and 14 are low-priority stories. During the sprint planning phase, 10 user stories are selected for
the current sprint, 3 stories are high priority, 3 are medium priority, and the final 4 are low priority.
At this point, the team adds technology stories for each of the every-sprint SDL requirements. Even
though the product uses both managed and native modules, the team is only working on the managed-
code modules during this sprint, so only the every-sprint tasks that apply to managed online services are
added to the sprint.
Since this is the first sprint in which the team is using the SDL-Agile process, additional high-priority
stories are added to complete some of the one-time requirements (for example, registering the project in
the security compliance tracking system, creating a privacy form, identifying a privacy incident response
person, and identifying a security program manager). One more high-priority story is added to update the
build process to integrate the SDL-required, every-build requirements (use of the SDL-required compiler
and linker flags and integration of the FxCop security rules).
Finally, the team also adds in high-priority stories for the bucket tasks that the team wants to complete
during the current sprint. For this sprint, the team chooses to add tasks to run an attack surface analyzer,
review the crypto design of the system, and create a content publishing and user interface security plan.
The sprint begins, and two people on the team take on the task of building the threat model for the
features to be developed during this sprint. The big problem is that no one knows how to build a threat
model, so the two people read the threat modeling chapter in the SDL book2 and read Adam Shostack’s
series of threat modeling blog posts. This gives them enough information to perform the threat modeling
task.
After the threat model is built (and the corresponding story completed), the team uncovers a critical
vulnerability—the database contains sensitive data (users’ names, computer information, browser
information, and IP addresses), and the data is not protected from disclosure threats. Because the data is
sensitive, and it appears that unauthenticated attackers could access the data through a potential SQL
injection vulnerability, two more high-priority stories are added to the sprint backlog—one to add defenses
against the SQL injection attack and one to protect the sensitive data in the database.
2 Howard, Michael, and Steve Lipner. The Security Development Lifecycle (Chapter 9). Microsoft Press, June 28, 2006.
One developer checks to ensure that the build environment is set up to use the SDL-required compiler
and linker switches and that the security-focused code analysis tools are also set to run as part of the
build process.
After looking at the new set of user stories so far, the team decides to remove two medium-priority stories
and two low-priority stories to keep within the sprint time box—these stories are put back in the product
backlog. Finally, after talking to the customer, one high-priority story is downgraded to a medium-priority
story but is to be completed in this sprint.
One developer elects to address the possible SQL injection vulnerabilities identified by the threat model.
He spends three days finding all database access code within the Web and C++ code and modifying it to
use stored procedures and parameterized queries. He also modifies the access rights of the interactive
database user so that it does not have access to any database tables that are not necessary for the
application. He also removes the interactive user’s permissions for deleting database objects and creating
new database objects, since these are also not necessary for the application to function. These practices
are good defense-in-depth measures that help prevent the system from being exploited in the event that a
vulnerability accidentally slips into the production code.
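The parameterized-query fix described above can be illustrated with Python's sqlite3 module standing in for the team's actual database; the schema and function names are invented for the example:

```python
import sqlite3

def find_user(conn, name):
    # The "?" placeholder keeps attacker-controlled input out of the SQL text,
    # so the input can never change the statement's structure.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (name,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

# The classic injection payload is now just an unmatched literal string:
rows = find_user(conn, "alice' OR '1'='1")
```

With string concatenation the payload would have altered the WHERE clause; with the placeholder it simply fails to match any user, which is the defense-in-depth behavior the developer in the example is after.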
Nineteen days into the sprint, all of the SDL-required, high-priority stories are completed, as are many of
the selected user stories. The team finishes out the sprint, completing the rest of the selected user
stories. The sprint is a success, and the team is poised to release their new code to the public.
Note: The goal of this section is to supplement the main SDL document and allow you to tailor a process
specific to your LOB applications while meeting SDL requirements. If you don’t see specific guidance for a
particular task in the SDL-LOB, the guidance in the main SDL section is assumed to be in effect. To refer
back to a specific phase within the main SDL, click the icon next to each phase heading throughout the
SDL-LOB section.
To ensure minimal impact, the SDL-LOB overlays high-level security tasks against the standard SDL
phases, as listed in the chevrons in Figure 2.
The following table highlights LOB-specific tasks for each phase of the SDL. These tasks are in addition
to those outlined in the main SDL portion of this document. Each task in the table is discussed by phase
in the remainder of the LOB section. Note that the Response phase is not included in the table because
there are no additional tasks required for that phase beyond what is discussed in the main SDL.
Resources
• Visit the Information Security page for information on the Microsoft Information Security
group, which is responsible for security risk management for Microsoft LOB applications.
• Appendix V: Lessons Learned and General Policies for Developing LOB Applications
In this section and in the remainder of the SDL-LOB, only supplements to the original SDL are
highlighted. To create a complete security plan for LOB applications, you should consult each section of
the main SDL and the supplemental information contained in each phase of the SDL-LOB.
In addition to the basic concepts outlined in the main SDL, LOB training should include the following
additional topics:
Basic Concepts
• Secure design, including the following topics:
• Authentication.
• Authorization.
• Asset handling.
• Auditing and logging.
• Secure communication. The HTTP data for Web applications travels across networks in plain text
and is subject to network eavesdropping attacks. This also applies to client-to-server and server-
to-server communication.
• Secure coding, including the following topics:
• Integer overflow/underflow.
• Input validation and handling.
• Regulatory, which can include the following topics:
• Compliance with SOX, HIPAA, GLBA, PCI.
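Allow-list input validation, one of the training topics above, might look like the following in Python. The field rules are illustrative only; real applications would define them per field from the application's own requirements:

```python
import re

# Allow-list validation: accept only values that fully match an explicit
# pattern and reject everything else, including unknown fields.
RULES = {
    "username": re.compile(r"[A-Za-z0-9_]{3,32}"),
    "us_zip": re.compile(r"\d{5}(-\d{4})?"),
}

def is_valid(field: str, value: str) -> bool:
    """Unknown fields and non-matching values are both rejected."""
    rule = RULES.get(field)
    return bool(rule and rule.fullmatch(value))
```

The allow-list approach (define what is legal, reject the rest) is generally preferred over deny-lists, which must enumerate every dangerous input and tend to miss some.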
Resources
• Security Training: See Securing Applications on MSDN
• Testing for SQL Injection vulnerabilities
The Risk Assessment section that follows is exclusive to designing a security plan for LOB applications. It
includes information on completing an application portfolio, assessing application risk, and determining
service levels.
Risk Assessment
The Risk Assessment phase captures general security and privacy "qualities" to determine the
appropriate amount of oversight. During this phase, an application is assessed to understand the potential
risk it creates. If an application is high risk, it receives more oversight in the SDL-LOB process. If an
application is low risk, it receives less oversight in the SDL-LOB process. The application team and
application security team work in partnership to complete this phase.
When an application team proposes a new application or updates to an existing application, a risk
assessment is completed. Application teams understand they must complete this step as a prerequisite
for installing the application in a supported production environment. This risk assessment produces
repeatable guidance on the type of oversight the project will receive in the SDL-LOB process.
There will be a close collaboration between the security and privacy subject-matter experts (SMEs) and
the risk management/governance team for your organization. Risk management helps identify business
objectives and therefore guidance for evaluating the risk posed by individual line-of-business applications.
Risk management also affects and influences guidelines for rating the risk posed by individual
vulnerabilities and classes of vulnerabilities filed during internal review and verification phases. For more
information, see the Microsoft Security Risk Management Guide.
Security Requirements
• Application portfolio
• Application teams enter application details in an application portfolio system that is used to track the
life cycle of LOB applications within the enterprise.
• The portfolio system can track information, such as contacts, dependencies, version history,
deployment considerations, milestones, testing information and history, locations of relevant
documents, and tasks and security controls used during the application's life cycle. Ideally, the
portfolio would feature support, such as automated notification if the application is not in compliance
with required (and, as appropriate—optional) controls. If you have a dedicated security team, the
portfolio would also track the security SME assigned to perform an assessment, assessment history,
artifacts, and, if appropriate, actual bugs.
• Development teams must create a new entry for the application whenever a new version of the
application is being released so that it can follow this process cycle again.
• Application risk assessment
• Application risk level is determined based on a questionnaire filled out by the application team. This
determines the SDL-LOB tasks the application owner must complete and is used to determine if the
application is in scope for a security and privacy assessment. Please see Appendix U: SDL-LOB Risk
Assessment Questionnaire for more details.
For example, based on application risk (High/Medium/Low), applications are serviced accordingly during
various phases of the development. Note that all applications require oversight; however, application
teams are still responsible for compliance with your implementation of the SDL-LOB (including
appropriate requirements and recommendations from main SDL described earlier in this document).
The mapping of risk levels to service levels is depicted in the following table:
• Code review (white box). Conducted to determine how many code vulnerabilities exist. Severity
of findings is based on deviation from policy, standards, and best practices. The review should
balance a line-by-line code inspection against prioritizing sensitive parts of the application, such
as authentication, authorization, handling of sensitive data, and avoiding common security
vulnerabilities, such as poor input validation, SQL injection, and failure to properly encode Web
output.
In addition, the code review should verify compliance with security/privacy standards and policies.
Violations of standards and policies are viewed as must-fix, Sev 1 vulnerabilities.
• Penetration test (black box). Uses a mix of tools, such as wire sniffers and scanners, combined
with actually running the application to verify expected and unexpected behavior.
• Deployment review. Executed against production environments, primarily to ensure that access
control and architectural issues conform to requirements. This service level serves double duty:
it is also a good starting point for applications that do not immediately warrant the two higher
levels, often internal, medium-risk applications.
• Privacy review. Ensure the application complies with corporate, domestic, and international
privacy requirements to prevent malicious monitoring of behavior, obtaining sensitive information,
or identity theft.
Security Recommendations
• Visual Studio .NET Team System (or equivalent) can be used for bug tracking and management
purposes.
• Dedicated security and privacy subject matter experts assist the application team during application
development. These SMEs serve as resources for conducting all of the SDL-LOB tasks but, in
particular, help perform specific tasks, such as a code reviews and penetration tests, among others.
Resources
• Privacy home page.
• Microsoft Operation Framework Deliver Phase provides guidance for getting operational concerns
reflected during the Requirements phase of project development as well as getting release readiness
in place as a validation step prior to production.
• Governance, Risk, and Compliance Service Management.
The Design phase is crucial to ensure that the application is “secure by design” and compliant
with security and privacy policies and standards. As with the standard SDL, threat modeling is crucial to
accomplishing this, although the SDL-LOB distinguishes itself by taking a more asset-centric approach to
creating the threat model. Threat modeling evaluates the threats and vulnerabilities that exist in the
project’s environment or that result from interaction with other systems. You cannot consider the Design
phase complete unless you have a threat model or models that include such considerations. Threat
models are critical components of the Design phase and reference a project’s functional and design
specifications to describe vulnerabilities and mitigations.
Threat modeling and design reviews are systematic processes used to identify threats and vulnerabilities
in the application. You must complete threat modeling during project design. A team
cannot build a secure application unless it understands the assets the project is trying to protect, the
threats and vulnerabilities introduced by the project, and details of how the project will mitigate those
threats.
Threat Modeling
Threat modeling is one of the most effective ways to build security into the application development
process. It makes the application less vulnerable to potential threats by identifying them before the
application is built. This proactive process is the most important phase of the SDL-LOB because it
reduces the reliance on reactive processes that depend either on penetration testing or user discovery of
security vulnerabilities.
Security Requirements
Threat models should be completed for all applications, regardless of risk level.
• Ensure that all threat models meet minimal threat model quality requirements. That is, all threat
models must contain digital assets or data, business objectives, components, and role information. They
must also include application use cases, data flows, call flows, generated threats, and mitigations. The
generated threat model reports are consumed by the development team as actionable items. A threat model
that is not actionable (in terms of selecting countermeasures and prioritizing by risk) is an incomplete
threat model.
• All threat models and referenced mitigations should be reviewed and approved by the security SME.
Ask architects, developers, testers, program managers, and others who understand the software to
contribute to the threat models and to review them.
Design Reviews
Have a security SME conduct design reviews of high-risk applications to ensure that the design
conforms to security/privacy standards and policies. The advantages of this include:
• An architecture and design review helps you validate the security-related design features of your
application before you start the development phase. This allows you to identify and fix potential
vulnerabilities before they can be exploited and before the fix requires a substantial reengineering
effort. Essentially this results in a reduced attack surface exposed by applications, thus increasing the
security of the user and the system.
• Important design areas to be reviewed during this task are:
• Deployment and infrastructure considerations
• Input validation
• Authentication
• Authorization
• Configuration management
• Sensitive data
• Session management
• Cryptography
• Parameter manipulation
• Exception management
• Auditing and logging
• User provisioning/de-provisioning
• Tier-by-tier analysis; walk through the logical tiers of your application, and evaluate security choices
within your presentation, business, and data access layers
• Application life cycle, including end-of-life requirements
• Compliance with security/privacy standards and policies, in addition to regulatory requirements
Some of these review areas overlap with the threat model. The SME will therefore defer, as necessary,
to the artifacts created in the threat modeling process.
Security Recommendations
In addition to the specific security recommendations in the SDL for threat modeling, perform the following:
• Security issues identified during the design review task should be logged in the project's bug tracking
system.
Resources
• Threat modeling process: http://msdn.microsoft.com/en-us/security/aa570413.aspx. This is the home
page for the asset-centric Threat Analysis and Modeling tool and related content.
For the SDL-LOB, additional tasks beyond the standard SDL Implementation phase include
an internal review, which incorporates security checklists and standards, a self-directed code
review, and code analysis.
Internal Review
The internal review is conducted by the application team.
Security Requirements
• Microsoft Anti-Cross-Site Scripting Library V3.0. Incorporate the Anti-XSS library to protect ASP.NET
Web-based applications from XSS attacks. This library offers a more rigorous “white-list” approach
than the native encoding methods found in .NET, and it adds support for globalization that is not
present in the .NET library. Version 3.0 includes a runtime engine that automatically encodes
output for ASP.NET 2.0 controls and HTML wrappers. Some .NET controls/wrappers automatically
encode for you, and the runtime engine is smart enough not to “double-encode” in this case.
• CAT.NET. Run CAT.NET on managed code (C#, Visual Basic .NET, J#) applications. CAT.NET is a
snap-in to the Visual Studio IDE that helps you identify exploitable code paths for security
vulnerabilities such as Cross-Site Scripting, SQL Injection, Process Command Injection, File
Canonicalization, Exception Information, LDAP Injection, XPath Injection, and Redirection to a
User-Controlled Site.
• FxCop. FxCop is an application that analyzes managed code assemblies (code that targets the .NET
Framework common language runtime) and reports information about the assemblies, such as
possible design, localization, performance, and security improvements.
• Microsoft Source Code Analyzer for SQL Injection. Run this static code analysis tool that helps
identify SQL injection vulnerabilities in Active Server Pages (ASP) code.
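To illustrate the “white-list” encoding approach that the Anti-XSS library takes, the following is a minimal, hypothetical C++ sketch; it is not the Anti-XSS API and is far less complete. The idea is that every character outside a small known-safe set is encoded, so unexpected input fails safe, in contrast to “black-list” encoders that only escape a fixed set of known-bad characters.

```cpp
#include <string>
#include <cstdio>

// Minimal illustration of white-list output encoding (NOT the Anti-XSS API):
// any character outside the known-safe set is HTML-encoded as a numeric
// entity, so unexpected characters fail safe.
std::string HtmlEncodeWhitelist(const std::string& input) {
    std::string out;
    for (unsigned char c : input) {
        bool safe = (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') ||
                    (c >= '0' && c <= '9') || c == ' ' || c == '.' || c == ',';
        if (safe) {
            out += static_cast<char>(c);
        } else {
            char buf[16];
            std::snprintf(buf, sizeof(buf), "&#%u;", static_cast<unsigned>(c));
            out += buf;
        }
    }
    return out;
}
```

For example, HtmlEncodeWhitelist("<b>") yields &#60;b&#62;: the markup characters are encoded while the safe letter passes through unchanged.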
Security Recommendations
Security vulnerabilities identified during self review and through code analysis tools should be logged
under the project's bug tracking system.
Resources
• Index of Security Checklists. While focused on .NET and Web application development, much of the
guidance here is technology agnostic.
• Perform a Security Code Review for Managed Code (Baseline Activity).
• Anti-XSS 3.0 Library.
• CAT.NET, a code analysis tool for .NET.
• Microsoft Source Code Analyzer for SQL Injection.
After successful completion of the internal review portion of the Implementation phase, expert
application security SMEs are engaged. This phase verifies that an application being deployed into
production environments has been developed in a way that adheres to internal security policies and
follows industry best practices and internal guidance. Another objective is to identify any residual risks
not mitigated by the application teams.
The assessments conducted during the Verification phase are typically conducted by a security or privacy
SME.
Pre-Production Assessment
An ideal comprehensive assessment includes a mix of both white and black box testing. There is a
tendency to prefer black box testing because “it's what the hackers do.” However, it is also more time
consuming and can have mixed results. In addition, it is difficult for individuals who are only “part-time”
penetration testers to develop the skills and expertise needed to efficiently perform a black box test.
Identifying multiple instances or classes of vulnerability bugs is more easily accomplished in a code
review (white box). A code review, though, can make finding business logic issues very difficult. Reading
the source code for a complex AJAX-based ASP.NET form and actually exercising it can yield vastly
different results in terms of issues found.
Further, this phase should be conducted with a mix of manual process and automated tools. Manual
reviews may need to be time constrained and focus on high-risk features. Automated tools can reduce
overhead, but should not be relied upon exclusively.
Security Requirements
• The service level assigned to the application at the Risk Assessment phase governs the type of
assessment an application receives in this phase. An application that has been assigned a medium or
higher rating automatically requires a white-box code review, while applications assigned with a low
rating will not.
• Code review (white box)
• The security team is provided access to an application’s source code and documentation to
aid in its assessment activities.
• Complete the review using both manual code inspection and security tools, such as static
analysis or penetration testing tools.
• Review threat model. Code reviews are prioritized based on risk ratings identified through
threat modeling activities. Components of an application with the highest severity ratings get
the highest priority with respect to assigning code review resources, whereas components
with low severity ratings are assigned lesser priority.
• Validate tool results. The security expert also validates results from code analysis tools (if
applicable), such as CAT.NET, to verify that vulnerabilities have been addressed by the
development team. In situations where this is not the case, the issue is filed in the bug
tracking system.
Security Recommendations
• Assessment results yielding Severity 0 or Severity 1 bugs automatically result in the application being
blocked from deploying into production environments until the issues have been addressed or an
exception has been granted by the business owner accepting the risk.
• The security bug bar for LOB applications has additional considerations beyond those described earlier
in this document. Your business needs to establish guidelines for evaluating the risk posed by
individual vulnerabilities. This includes a risk rating framework that applies across all applications.
The risk rating framework is independent of the risk assigned to the entire application. The sample
table below presents a bug bar that accounts for the unique environment of an LOB application,
including the risk posed by individual bugs.
Severity Description

Severity 0
• Impact across the enterprise and not just the local LOB application/resources
• Exploitable vulnerability in deployed production application

Severity 1
• Exploitable security issue
• Policy or standards violation
• Affects local application or resources only
• Risk rating = High risk

Severity 2
• Difficult to exploit
• Non-exploitable due to other mitigation
• Risk rating = Medium risk

Severity 3
• Bad practice
• Non-exploitable
• Should not lead to exploit but helpful to attacker exploiting another vulnerability
• Risk rating = Low risk
• There is a trade-off in proving that a vulnerability is actually exploitable against time constraints in
finding bugs. It may not be worthwhile to actually craft explicit exploit/malicious payload. In this case,
you can adjust the severity as appropriate, erring on the side of caution.
Compliance
Identified risks are logged in the bug-tracking system and assigned a severity rating. The output of this
phase results in the following:
• Bug reports
• Exception requests for the risk posed by issues that cannot or will not be fixed prior to production
In response to exception requests, security teams gather all pertinent data points, such as technical
details, a description of the business impact, interim mitigations, and other exception information, and
provide the development team’s upper management with these details in the form of an exception form.
Upper management can then approve the exception request and accept the identified risks for a limited
period of time, or reject the exception request and require the business group to mitigate the identified
risks. It is important that a specific business owner explicitly assume the risk posed by unmitigated
Severity 0 and Severity 1 bugs.
The security team tracks all approved exceptions and follows up with the application team after the
exception period has expired.
Note: Sev 0 and Sev 1 bugs may exist due to a technological or infrastructure limitation that cannot be
mitigated in the current release. An exception should be created to track the issue until the limitation no
longer exists.
Resources
• Code review information is available at http://msdn.microsoft.com/en-us/library/ms998364.aspx.
• Security tools:
• Web debugging proxy tools, such as Fiddler, allow you to inspect all HTTP(S) traffic, set
breakpoints and “tamper” with incoming or outgoing data, build custom requests, and replay
recorded requests.
• HTTP passive analysis tools capable of identifying issues related to user-controlled payloads
(potential XSS), insecure cookies, and HTTP headers.
• Microsoft Network Monitor or similar tools that allow you to capture and perform a protocol
analysis of network traffic.
• Browser plug-ins or standalone tools that allow lightweight tampering before data is placed on the
wire are also very useful for Web security testing.
• Automated penetration testing tools that crawl publicly exposed interfaces (for example, user
interfaces and Web services) probing for known/common classes of vulnerabilities and
known/published exploits.
• Automated static code analysis tools that parse the syntax of your source code to identify
suspected and known vulnerabilities, such as cross-site scripting and code injection.
• Deployment Review Index is available at http://msdn.microsoft.com/en-us/library/ms998401.aspx.
• Privacy Guidelines for Developing Software Products and Services.
After deployment, several activities need to occur, including regular verification of patch management,
compliance, network and host scanning, and responding to any incremental releases for hotfixes and
service packs. For the SDL-LOB, these tasks are associated with the post-production assessment.
Post-Production Assessment
The post-production assessment is conducted by the operations team, and the service level is not
dictated by the risk level: all applications, hosts, and network devices are in scope for assessment on a
regular basis. For most organizations, these tasks take place continuously and have existing
management processes in place. In that case, the application “plugs in” to those existing processes that
monitor changes to the organizational infrastructure rather than undergoing a discrete post-production
assessment.
Security Requirements
• The actual list of servers deployed in production will likely vary dramatically from what was initially
recorded in the application portfolio at the beginning of the SDL-LOB process. Post-production,
operations may own both the servers and routine scanning of those servers for vulnerabilities, patch
management, and similar activities. It is a best practice to segregate the duties between the server
owners and the compliance organization. The compliance organization owns scanning in a timely
manner, and the application team follows the processes established by the compliance team for
moving into production.
• Host-level security. Providing security for the host computer involves the following items that are
audited on a regular basis on production servers:
• Patch management. The security SME verifies that servers have the latest applicable security
updates, including updates from every software manufacturer that has software running on the
server.
• Appropriate configuration. The servers are reviewed for compliance with established baselines.
For example, all unused services that are not required for the application are disabled and
blocked instead of running with default settings.
• Antivirus. Servers have antivirus software running and actively scanning all system file areas, in
addition to all shared directories. All systems must have their antivirus application or signature
files examined at logon to ensure that the latest antivirus application or current virus signature
files are present.
• Compliance. Verify compliance with internal business policies and external legal requirements, in
addition to standards such as PCI.
• Review access control/permissions. The access control list (ACL) permission settings on all file
shares and other system, database, and COM+ objects are reviewed to help prevent unauthorized
access. Regular review, for example, of administrator privileges on a given server should be
performed.
• Server auditing and logging. Ensure that auditing, with appropriate logging procedures, is enabled for
all system objects that contain business-sensitive information. Logging procedures include collecting
log files and restricting access to log data to appropriate users only (members of security, internal
audit, or systems management teams) with the appropriate ACLs. Even more critical is regularly
reviewing the collected logs.
Security Recommendations
• Vulnerabilities identified in production should be remediated per operational processes defined by the
compliance team.
• Frequently, application teams make a variety of post-production changes to the application, ranging
from hotfixes and service packs to entirely new features. Depending on the scope, the application team
either needs to start over by updating the application portfolio (which kicks off a new iteration of the
SDL-LOB life cycle) or needs to perform a subset of the SDL-LOB tasks. At a minimum, this subset
should include a review/update of the threat model and selected tasks from the internal review
conducted during the Implementation phase.
Resources
• Microsoft Baseline Security Analyzer.
• Microsoft Operations Framework Deliver Phase provides guidance for getting operational concerns
reflected during the Requirements phase of project development as well as getting release readiness
in place as a validation step prior to production.
• Governance, Risk, and Compliance Service Management.
Ensure that the vulnerability/work item tracking system used includes fields with the following values (at a
minimum):
Introduction
The following questions are designed to help you complete the privacy aspects of the Security
Development Lifecycle (SDL). You will complete some sections, such as the initial assessment and a
detailed analysis, on your own. You should complete other sections, such as the privacy review, together
with your privacy advisor.
Initial Assessment
The initial assessment is a quick way to determine your Privacy Impact Rating and to estimate the work
required to be compliant. The rating (P1, P2, or P3) represents the degree of risk your software presents
from a privacy perspective. You need to complete only the steps that apply to your rating. For more detail,
see the main Microsoft Security Development Lifecycle document.
___ Stores personally identifiable information (PII) on the user's computer or transfers it from the user’s
computer (P1)
___ Installs new software or changes file type associations, home page, or search page (P1)
• Describe any software you install or changes you make to file types, home page, or search page:
__________________________________________________________________________
__________________________________________________________________________
Who on your team is the primary contact for Privacy Incident Response?
_____________________________________________
In addition to the port- and program-specific requirements for Windows XP SP2 listed earlier, the following
requirements must be met for Windows Vista and Windows Server 2008.
Requirements
Inbound firewall rules
1. Applications must create rules during setup for traffic that is expected in more than 80% of
installations of the application. Explicit user consent is required to enable the rules.
2. Rules must be scoped to all of these parameters: Program, Port, and Profile(s).
3. For features that implement services, rules must also be scoped to that service.
4. Services must implement Windows Service Hardening firewall rules.
Application Quality
Programs, applications, services, or other components that wish to receive unsolicited traffic must:
• Produce an independent threat model for the service that identifies each entry point explicitly,
including services that are “multiplexed” behind a common port.
Least Privilege
Firewall rules must adhere to the principle of least privilege by:
• Scoping the rule to “local subnet” or tighter when practical.
• Scoping the rule to only the network profile(s) where the feature is likely to be used. For example, if it
is an enterprise feature, you should scope the rule to the domain and private profiles. Unless you
expect your feature to be used in a public place like a WiFi hotspot, you should not scope the rule to
the public profile.
• Not setting the “Edge” traversal flag unless your feature requires NAT traversal using transition
tunnel technologies.
• Limiting the privileges of the service that use the port to Network Service or more restrictive when
practical. When not practical, the threat model should explicitly call out the reasons why.
If services must run with privileges greater than Network Service, it is recommended that the services be
split into “privileged” and “non-privileged” components such that only the code that requires higher
privileges receives them, with the remaining code reached through some IPC mechanism. The end result
is that the non-privileged service is the one that receives the traffic.
Informed Consent UI
This policy addresses user interface issues, but nothing in the policy should be interpreted to specify a
particular user interface. For example, when it says “The user is informed and acknowledges the open
port,” it does not imply that there must be a dialog that tells the user port 123 has been opened. The
requirement is that the user is informed of the change in some explicit fashion via the UI (not an entry in a
log file), and that the details are available for users who want to know.
Terminology
exception
In Windows XP SP2, this is a setting that when enabled allows unsolicited traffic through the firewall.
There are two types of exceptions:
• Port exceptions, which allow unsolicited traffic on the specified port.
• Program exceptions, which allow a particular program to receive unsolicited traffic, regardless of port.
The setting may be enabled, which means the traffic is allowed to bypass the firewall, or disabled, which
has no effect on the firewall.
In Windows Vista SP1 and Windows Server 2008, this term has been replaced by inbound firewall rule.
firewall rule
Firewall rules are created to allow or block a computer sending traffic or receiving traffic over a network.
Rules can be created for either inbound traffic or outbound traffic. The rule can be configured to specify
traffic that matches specific programs, services, ports, and protocols.
domain profile
This profile is applied when the computer is connected to a network in which the computer's domain
account resides. Typically, this is the least restrictive profile.
private profile
This profile is applied when the computer is connected to a network like a home or small office network
(without an Active Directory infrastructure).
public profile
This profile is applied when the computer is connected to a network in a public place, like a coffee shop or
airport.
SAL Compliance
Visual Studio 2005: warnings 26020–26023
/analyze
Visual Studio 2005: warnings 6029, 6053, 6057, 6059, 6063, 6067, 6201–6202, 6248
Given a stack buffer overflow, an attacker can overflow the nearest exception record on the stack (there is
always at least one) and point it to an address in the page marked EXECUTABLE.
Because of the number of stack locations controlled by the overflow at the point the exception handler
takes over, many possible op-code sequences would reliably deliver execution back to the attack-
supplied buffer. One such sequence is {pop, pop, ret} (possibly interleaved with other instructions).
It is also possible to leverage a sequence of op-codes that would produce an arbitrary memory-overwrite
in a two-stage attack.
Because of the number of possibilities, it is very hard to prove that bytes on a page marked
EXECUTABLE cannot be abused sufficiently to take control.
Scope
The following subsections specify the scope of the No Executable Pages requirement proposal.
Operating Systems
This requirement applies to Win32 and Win64 operating systems but not to Windows CE or Macintosh.
Products/Services
This requirement applies to code that runs on users’ computers (products) and to code that runs only on
Microsoft-owned computers and is accessed by users (for example, Microsoft-owned and provisioned
online services).
Technologies
This requirement applies to both unmanaged (native) code, such as C and C++, and managed code, such
as C#.
Exceptions
Sometimes it simply might not be possible to mark all pages as non-executable. Examples include digital
rights management technologies and just-in-time (JIT) technologies that dynamically create code. For
such cases, the following techniques can help make the pages safe:
Special Cases
Because marking a binary as “DEP compatible” (with /NXCOMPAT) changes how the operating system
interacts with the executable, it is important to test all executables (.EXE) marked as /NXCOMPAT with a
version of Windows that supports this functionality:
• Client software: Windows XP SP2, Windows Vista
• Server software: Windows Server 2003 Service Pack 1 (SP1) or Windows Server 2008
(Review the detailed description of the Data Execution Prevention [DEP] feature for specific details about
how to use DEP on Windows Server 2003 SP1.)
All executables (.EXE) marked as /NXCOMPAT are able to take advantage of Data Execution Prevention.
Dynamic-link libraries (.DLL files) and other code called by executables (such as COM objects) do not
gain direct security benefits from /NXCOMPAT, but they need to coordinate DEP support with any
executable files that might call them. An EXE linked with /NXCOMPAT that loads other code not built
with /NXCOMPAT may fail unexpectedly unless the EXE and all of the other code that it calls (such as
DLLs or COM objects) have been thoroughly tested with DEP enabled (linked with the /NXCOMPAT
option).
Requirements
Support Considerations
Rationale
The Portable Executable (PE) format allows binaries to define sections—named areas of code or data—
that have distinct properties, such as size, virtual address, and flags, which define the behavior of the
operating system as it maps the sections into memory when the binary image is loaded. An example of a
section is .text, which is typically present in all executable images. This section stores the executing
code and is marked as Code, Execute, Read, which means code can execute from it, but data cannot be
written to it. It is possible to define custom sections with desired names and properties by using
compiler/linker directives.
One such section flag is Shared. When it is used and the binary is loaded into multiple processes, the
shared section maps to the same physical memory address range. This functionality makes it possible for
multiple processes to write to and read from addresses that belong to the shared section.
Unfortunately, it is not possible to secure a shared section. Any malicious application that runs in the
same session can load the binary with a shared section and eavesdrop or inject data into shared memory
(depending on whether the section is read-only or read-write).
To avoid security vulnerabilities, use the CreateFileMapping function with proper security attributes to
create shared memory objects.
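As a sketch of that recommendation (Win32 only; the function names are standard Win32 APIs, but the specific SDDL string chosen here is an illustrative assumption, and error handling is minimal):

```cpp
#include <windows.h>
#include <sddl.h>

// Sketch: create a named shared-memory object with an explicit DACL instead
// of relying on a writable shared PE section. The SDDL string is only an
// example (full access for Administrators and SYSTEM); choose a DACL
// appropriate to your application.
HANDLE CreateSecuredSharedMemory(const wchar_t* name, DWORD sizeBytes) {
    SECURITY_ATTRIBUTES sa = { sizeof(sa), nullptr, FALSE };
    if (!ConvertStringSecurityDescriptorToSecurityDescriptorW(
            L"D:(A;;GA;;;BA)(A;;GA;;;SY)", SDDL_REVISION_1,
            &sa.lpSecurityDescriptor, nullptr)) {
        return nullptr;
    }
    HANDLE hMap = CreateFileMappingW(INVALID_HANDLE_VALUE, &sa,
                                     PAGE_READWRITE, 0, sizeBytes, name);
    LocalFree(sa.lpSecurityDescriptor);
    return hMap;
}
```

Unlike a shared PE section, the DACL on the mapping object lets the system deny access to other applications running in the same session.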
Or:
Resources
• Creating Named Shared Memory
All products developed using SDL should use a subset of SAL to help find deeper issues, such as buffer
overrun issues. Microsoft teams, such as Office and Windows, are using SAL beyond the requirements of
SDL.
SAL Details
SAL is primarily used as a method to help tools, such as the Visual C++ /analyze compiler option, to find
vulnerabilities by knowing more about a function interface. For the purposes of this appendix, SAL can
document three properties of a function:
• Whether a pointer can be NULL
• How much space can be written to a buffer
• How much can be read from a buffer (potentially including NULL termination)
At a high level, things become more complicated because there are two implementations of SAL:
• __declspec syntax (Visual Studio 2005 and Visual Studio 2008)
• Attribute syntax (Visual Studio 2008)
Each of these implementations maps onto lower-level primitives that are too verbose for typical use.
Therefore, developers should use macros that define commonly used combinations in a more concise
form. C and C++ developers should add the following to their precompiled headers:
• #include "sal.h"
SAL Recommendations
• Start with new code only. Microsoft strongly recommends that you also plan to annotate old code.
• All function prototypes that accept buffers in internal header files you create should be SAL
annotated.
• If you create public headers, you should annotate all function prototypes that read or write to buffers.
SAL in Practice
All examples use the __declspec form.
A classic example is that of a function that takes a buffer and a buffer size as arguments. You know that
the two arguments are closely connected, but the compiler and the source code analysis tools do not
know that. SAL helps bridge that gap.
cchBuf is the character count of buf. Adding SAL helps link the two arguments together:
void FillString(
__out_ecount(cchBuf) TCHAR* buf,
int cchBuf,
TCHAR ch) {
for (int i = 0; i < cchBuf; i++)
buf[i] = ch;
}
If you compile this code for Unicode, a potential buffer overrun exists when you call this code:
TCHAR buf[MAX_PATH];
FillString(buf, sizeof(buf), '\0');
sizeof is a byte count, not a character count. The programmer should have used a character count, such
as that given by the _countof macro.
In the __out_ecount macro, __out means the buffer is an “out” buffer that the function writes to. The
buffer size, in elements, is given by _ecount(cchBuf). Note that this function cannot handle a NULL
buf; if it could, the macro __out_ecount_opt(cchBuf) could be used instead, where _opt means
optional.
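The corrected call to FillString passes an element count rather than sizeof's byte count. The following self-contained sketch shows this; COUNTOF and the non-MSVC stub definitions are illustrative assumptions added so the fragment compiles anywhere (under MSVC you would use _countof, sal.h, and the real TCHAR instead):

```cpp
// Portable element-count macro; MSVC provides the equivalent _countof.
#define COUNTOF(a) (sizeof(a) / sizeof((a)[0]))

// Stub the SAL annotation and TCHAR when building outside MSVC, so the
// sketch stays self-contained (illustration only).
#ifndef _MSC_VER
#define __out_ecount(x)
typedef wchar_t TCHAR;
#endif

void FillString(__out_ecount(cchBuf) TCHAR* buf, int cchBuf, TCHAR ch) {
    for (int i = 0; i < cchBuf; i++)
        buf[i] = ch;
}

// The corrected call passes a character count, not sizeof's byte count.
int DemoFill() {
    TCHAR buf[16];
    FillString(buf, (int)COUNTOF(buf), L'x');
    int filled = 0;
    while (filled < 16 && buf[filled] == L'x')
        filled++;
    return filled;  // 16: the whole buffer, with no overrun
}
```

With sizeof(buf) instead of COUNTOF(buf), a Unicode build would pass 32 for a 16-element buffer and overrun it; /analyze reports exactly this mismatch (warning 6057).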
The following example shows a function that reads from one buffer and writes to another.
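Such a function might look like the following hypothetical sketch, which pairs __in_ecount on the source buffer with __out_ecount on the destination (the non-MSVC stub definitions are assumptions added so the fragment stands alone; under MSVC the macros come from sal.h):

```cpp
// Stub the SAL annotations and TCHAR when building outside MSVC
// (illustration only; MSVC supplies these via sal.h and tchar.h).
#ifndef _MSC_VER
#define __in_ecount(x)
#define __out_ecount(x)
typedef wchar_t TCHAR;
#endif

// __in_ecount ties the read buffer to cchFrom elements; __out_ecount ties
// the written buffer to cchTo elements, so /analyze can check both sides.
void CopyPadded(__in_ecount(cchFrom) const TCHAR* from, int cchFrom,
                __out_ecount(cchTo) TCHAR* to, int cchTo) {
    int i = 0;
    for (; i < cchTo && i < cchFrom; i++)
        to[i] = from[i];
    for (; i < cchTo; i++)
        to[i] = L' ';  // pad the remainder of the output buffer
}

// Demo: copy two characters into a four-element buffer; two pads remain.
int DemoPadCount() {
    const TCHAR src[2] = { L'a', L'b' };
    TCHAR dst[4];
    CopyPadded(src, 2, dst, 4);
    int pads = 0;
    for (int i = 0; i < 4; i++)
        if (dst[i] == L' ')
            pads++;
    return pads;
}
```

Because the annotations link each pointer to its element count, a caller that passes mismatched counts can be flagged at compile time rather than discovered as a runtime overrun.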
Tools Usage
To take advantage of SAL, make sure you compile your code with a version of Microsoft Visual C++®
2005 or Visual C++ 2008 that supports the /analyze compile-time flag.
6057 Buffer overrun due to number of characters/number of bytes mismatch in call to <function>
6059 Incorrect length parameter in call to <function>: pass the number of remaining characters, not the
buffer size of <variable>
6200 Index <name> is out of valid index range <min> to <max> for non-stack buffer <variable>
6201 Buffer overrun for <variable>, which is possibly stack allocated: index <name> is out of valid
index range <min> to <max>
6202 Buffer overrun for <variable>, which is possibly stack allocated, in call to <function>: length
<size> exceeds buffer size <max>
6203 Buffer overrun for buffer <variable> in call to <function>: length <size> exceeds buffer size
6204 Possible buffer overrun in call to <function>: use of unchecked parameter <variable>
6209 Using “sizeof<variable1>” as parameter <number> in call to <function> where <variable2> may
be an array of wide characters; did you intend to use character count rather than byte count?
6383 Buffer overrun due to conversion of an element count into a byte count
Benefits of SAL
Because SAL provides more function interface information to the compiler toolset, SAL finds more issues
earlier and with less noise.
Resources
More detailed SAL information can be found in Chapter 1 of Writing Secure Code for Windows Vista by
Howard and LeBlanc and at http://blogs.msdn.com/michael_howard/archive/2006/05/19/602077.aspx.
Summary
• Microsoft recommends that you start by annotating new code only. As time permits, existing code
should be annotated also.
• You should use SAL for all functions that write to buffers.
• You should consider using SAL for all functions that read from buffers.
• The SDL requirement does not mandate either SAL macro syntax. Use attribute or __declspec as you
see fit.
• Annotate the function prototypes in headers that you create.
• If you consume public headers, you must use only annotated headers.
A good example is Data Execution Prevention (DEP). This feature was never enabled in Microsoft
Windows 2000 because it had no hardware support but became available in Windows Server 2003 as an
unsupported boot.ini option. DEP was then supported for the first time in Windows XP SP2 but set only
for the system and not for non-system applications.
Another example, and the focus of this requirement, is the ability to detect and respond to heap
corruption. In the past, there was no protection in the heap from heap-based buffer overruns. Microsoft
then added metadata checking, primarily in the form of forward and backward link checking after a block
is freed, to determine whether a heap overrun had occurred. However, for application compatibility
reasons, the mitigation was limited to preventing the arbitrary write controlled by a potential exploit, and
the application was allowed to continue running after the point at which the corruption was detected.
Windows Vista includes a more robust mechanism—the application terminates when heap corruption is
detected. This mechanism also helps developers find and fix heap-based overruns early in the
development lifecycle.
This Heap Manager Fail Fast Setting requirement might cause reliability issues in applications that have
poor heap memory management. However, the failing code is found immediately and can be fixed, which
makes software both more secure and more reliable. Windows Vista has encountered only one such
example in a third-party ActiveX control.
This capability is enabled for some core operating system components but not for non-system
applications running on Windows Vista. This appendix outlines how to enable the option for non-system
applications.
Scope
The following subsections specify the scope of the Heap Manager Fail Fast Setting requirement proposal.
Operating System
This requirement applies only to Win32.
Products/Services
This proposal applies to code that runs on users’ computers (products) and to code that runs only on
Microsoft-owned computers and is accessed by users (services).
Technologies
This proposal applies to unmanaged (native) code, such as C and C++, but not to managed code, such as
C#.
External Applicability
This proposal applies to external third-party ISV code.
Requirement Definition
Before you use any heap–based memory, you must add the following code to your application startup:
(void)HeapSetInformation(NULL,
HeapEnableTerminationOnCorruption,
NULL,
0);
Microsoft also recommends that the code use the Low-Fragmentation Heap, which has been shown to be
more resistant to attack than the “normal” heap in Windows. To use the Low-Fragmentation Heap, use
the following code:
DWORD Frag = 2;
(void)HeapSetInformation(NULL,
HeapCompatibilityInformation,
&Frag,
sizeof(Frag));
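Putting the two calls together, a minimal startup sketch might look like the following. This is an illustration, not the exact code a product must use: the EnableHeapHardening name, the error handling, and the use of GetProcessHeap() for the Low-Fragmentation Heap call are assumptions. Note that the buffer size passed is sizeof(Frag), the size of the DWORD value itself, not sizeof(&Frag). The Windows-specific calls are compiled only when _WIN32 is defined so the sketch builds elsewhere.

```c
#include <stdio.h>
#ifdef _WIN32
#include <windows.h>
#endif

/* Sketch: opt in to heap hardening at process startup.
 * Returns 1 on success (or on non-Windows platforms, where the
 * calls do not apply), 0 if either call fails. */
int EnableHeapHardening(void)
{
#ifdef _WIN32
    /* Terminate the process as soon as the heap manager detects
     * corruption, instead of continuing to run exploitable code.
     * The heap handle must be NULL for this information class. */
    if (!HeapSetInformation(NULL, HeapEnableTerminationOnCorruption,
                            NULL, 0))
        return 0;

    /* Request the Low-Fragmentation Heap on the default process
     * heap; the value 2 selects the LFH. Note sizeof(Frag) -- the
     * size of the value, not of a pointer to it. */
    ULONG Frag = 2;
    if (!HeapSetInformation(GetProcessHeap(), HeapCompatibilityInformation,
                            &Frag, sizeof(Frag)))
        return 0;
#endif
    return 1;
}
```

Call this before any heap-based memory is used. Be aware that the Low-Fragmentation Heap request can fail benignly in some environments (for example, when the process runs under a debugger), so a failure of that second call may warrant logging rather than aborting.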
Compliance Measurement
A product/service team can verify compliance in either of two ways:
• Review the application startup code and verify that it calls HeapSetInformation with
HeapEnableTerminationOnCorruption.
Or
• Run the application under a kernel debugger, issue the !heap -s command, and verify that the
following text appears: Termination on corruption : ENABLED.
Application Verifier is a tool that detects errors in a process (user-mode software) while the process is
running. Typical findings include heap corruptions (including heap buffer overruns) and incorrect
synchronizations and operations. Whenever Application Verifier finds an issue, it goes into debugger
mode. Therefore, either the application being verified should run under a user-mode debugger or the
system should run under a kernel debugger.
The scenarios in this appendix showcase the recommended command-line options for quality gates that
you should run during all tests (BVTs, stress, unit, and regression) that exercise the code change.
The pass rate for the tests may decrease significantly because random fault injections are introduced into
the normal operation.
1. Enable verifier and fault injection for the application(s) you wish to test by using the following
command-line syntax:
appverif /verify <MyApp.exe> /faults
Note: If you are testing a DLL, you can apply fault injection on a certain DLL instead of on the entire
process. The command-line syntax would be:
appverif /verify TARGET [/faults [PROBABILITY [TIMEOUT [DLL …]]]]
For example, appverif /verify <mytest.exe> /faults 5 1000 d3d9.dll
2. Run all your tests exercising the application.
3. Analyze any debugger break that you encounter. Debugger breaks signify bugs found by the verifier,
and you need to understand and fix them.
4. When you are finished, delete all settings made with:
appverif /n <MyApp.exe>
Note that running with and without fault injection exercises different code paths in an application.
Therefore, you must run both scenarios to obtain the full benefit of Application Verifier.
Purpose
The purpose of the Privacy Escalation Response Framework (PERF) is to define a systematic process
that you can use to resolve privacy escalations efficiently. The process must also manage the associated
internal and external communications and identify the root cause or causes of each escalation so that
policies or processes can be improved to help prevent recurrences.
Closing
After all appropriate resolutions are in place, the privacy escalation team should evaluate the
effectiveness of privacy escalation response actions. An effective remediation is one that resolves the
concerns of the reporting party, resolves associated user concerns, and helps to ensure that similar
events do not recur.
buffer overflow
A condition that occurs because of a failure to check or to limit input data buffer sizes before data is
manipulated or processed.
bug bar
A set of criteria that establishes a minimum level of quality.
deprecation
Designating a component for future removal from a software program.
fuzz testing
A means of testing that causes a software program to consume deliberately malformed data to see how
the program reacts.
giblets
Code that was created by external development groups in either source or object form.
harden
Take steps to ensure that no weaknesses or vulnerabilities in a software program are exposed.
implicit consent
An implied form of consent in certain limited home and organizational networking scenarios.
informed consent
An explicitly stated form of consent that is usually provided after some form of conditions
acknowledgment.
port exception
An exception to a firewall policy that specifies a certain logical port in the firewall should be opened or
closed.
program exception
An exception to a firewall policy that exempts a specific program or programs from some aspect of the
policy.
security push
A team-wide focus on threat model updates, code review, testing, and documentation scrub. Typically, a
security push occurs after a product is code/feature complete.
zero-day exploit
An exploit of a vulnerability for which a security update does not exist.
○ Example: Transfer of sensitive personally identifiable information (PII) from the user's
system without prominent notice and explicit opt-in consent in the UI prior to transfer.
○ Example: Ongoing collection and transfer of non-essential PII without the ability
within the UI for the user to stop subsequent collection and transfer.
○ Example: Access to PII stored at the organization is not restricted to those who
have a valid business need, or there is no policy to revoke access after it is no longer
required.
○ Example: Transfer of non-sensitive PII from the user's computer without prominent
notice and explicit opt-in consent in the UI prior to transfer.
• Data minimization
○ Example: PII is collected and stored locally as hidden metadata without any
means for a user to remove the metadata. PII is accessible by others or may be
transmitted if files or folders are shared.
• Data minimization
Sev 3
○ Example: Non-sensitive PII or anonymous data transmitted to an independent
third party is not necessary to achieve the disclosed business purpose.
○ Example: Use of a persistent cookie where a session cookie would satisfy the
purpose, or persisting a cookie for a period that is longer than necessary to satisfy
the purpose.
Sev 4
○ Example: PII is collected and stored locally as hidden metadata without
discoverable notice. PII is not accessible by others and is not transmitted if files or
folders are shared.
Sev 1
○ Example: Automated data transfer of sensitive PII from the user's system without
prominent notice and explicit opt-in consent in the UI from the enterprise
administrator prior to transfer.
○ Example: Deployment or development guide for enterprise administrators provides legal advice.
Definition of Terms
anonymous data
Non-personal data that has no connection to an individual. By itself, it has no intrinsic link to an individual
user. For example, hair color or height (in the absence of other correlating information) does not identify a
user.
child or children
Under 14 years of age in Korea and under 13 years of age in the United States.
discoverable notice
A discoverable notice is one the user has to find (for example, by locating and reading a privacy
statement of a Web site or by selecting a privacy statement link from a Help menu).
discrete transfer
Data transfer is discrete when it is an isolated data capture event that is not ongoing.
essential metadata
Metadata that is necessary to the application for supporting the file (for example, file extension).
hidden metadata
Hidden metadata is information that is stored with a file but is not visible to the user in all views. Hidden
data may include personal information or information that the user would likely not want to distribute
publicly. If such information is included, the user must be made aware that this information exists and
must be given appropriate control over sharing it.
implicit consent
Implicit consent does not require an explicit action indicating consent from the user; the consent is implicit
in the operation the user initiates.
non-essential metadata
Metadata that is not necessary to the application for supporting the file (for example, key words).
persistent storage
Persistent storage of data means that the data continues to be available after the user exits the
application.
prominent notice
A prominent notice is one that is designed to catch the user’s attention. Prominent notices should contain
a high-level, substantive summary of the privacy-impacting aspects of the feature, such as what data is
being collected and how that data will be used. The summary should be fully visible to a user without
additional action on the part of the user, such as having to scroll down the page. Prominent notices
should also include clear instructions for where the user can get additional information (such as in a
privacy statement).
sensitive PII
Sensitive personally identifiable information includes any data that could (i) be used to discriminate
(ethnic heritage, religious preference, physical or mental health, for example), (ii) facilitate identity theft
(like mother’s maiden name), or (iii) permit access to a user’s account (like passwords or PINs). Note that
if the data described in this paragraph is not commingled with PII during storage or transfer, and it is not
correlated with PII, then the data can be treated as Anonymous Data. If there is any doubt, however, the
data should be treated as Sensitive PII. While not technically Sensitive PII, user data that makes users
nervous (such as real-time location) should be handled in accordance with the rules for Sensitive PII.
Sev 1. Release may create legal or regulatory liability for the organization.
Sev 2. Release may create high risk of negative reaction by privacy advocates or damage the
organization’s image.
temporary storage
Temporary storage of data means that the data is only available while the application is running.
Client
Extensive user action is defined as follows:
• “User interaction” can occur only in a client-driven scenario.
• Normal, simple user actions, such as previewing mail or viewing local folders or file shares, are not
extensive user interaction.
• “Extensive” includes users manually navigating to a particular Web site (for example, typing in a URL)
or clicking through a yes/no decision.
• “Not extensive” includes users clicking through e-mail links.
Sev 1
• Elevation of privilege (remote): The ability to either execute arbitrary code or obtain more
privilege than intended.
anonymous
Any attack that does not require authentication to complete.
client
Either software that runs locally on a single computer or software that accesses shared resources
provided by a server over a network.
default/common
Any features that are active out of the box or that reach more than 10 percent of users.
scenario
Any features that require special customization or use cases to enable, reaching less than 10 percent of
users.
server
Computer that is configured to run software that awaits and fulfills requests from client processes that run
on other computers.
Sev 1. A security vulnerability that would be rated as having the highest potential for damage.
Sev 2. A security vulnerability that would be rated as having significant potential for damage, but less
than Sev 1.
Sev 3. A security vulnerability that would be rated as having moderate potential for damage, but less
than Sev 2.
Sev 4. A security vulnerability that would be rated as having low potential for damage.
temporary DoS
A temporary DoS is a situation where the following criteria are met:
• The target cannot perform normal operations due to an attack.
• The response to an attack is roughly the same magnitude as the size of the attack.
• The target returns to the normal level of functionality shortly after the attack is finished. The exact
definition of “shortly” should be evaluated for each product.
For example, a server is unresponsive while an attacker is constantly sending a stream of packets across
a network, and the server returns to normal a few seconds after the packet stream stops.
permanent DoS
A permanent DoS is one that requires an administrator to start, restart, or reinstall all or parts of the
system. Any vulnerability that automatically restarts the system is also a permanent DoS.
This document outlines the security activities for <SAMPLE> as they relate to each step of the Security
Development Lifecycle (SDL). It describes the objective and provides a basic outline of each activity. It
also identifies owners and expected deliverables from the activity. Most of the deliverables are included
as exit criteria for different milestones for the project.
For successful execution of this security plan, the security team must perform security sign-offs, reviews,
and checkpointing, among other activities, to assess the security ship-readiness of the product at various
project milestones. It is recommended that a “virtual” team be created, made up of individuals from
program management, development, test, and UX.
The remainder of this document describes the minimum activities needed to build a secure product, the
milestones during which they should be performed, and the owners and the deliverables for each activity.
Project Inception
• Determine whether the SDL applies to your component.
• Identify the team or individual that is responsible for tracking and managing security for your project.
• Ensure that bug reporting tools can track issues and that a database can be queried dynamically for
all security issues at any time.
• Define and document a project’s security bug bar.
Cost Analysis
• Complete a security risk assessment.
Security Push
• Determine if there is a need for a security push.
• Define the security push steps.
• Determine the timeline for the security push.
• Determine whether there will be intensive education of project staff prior to the push.
• Determine whether there are any other intensive tasks you want to focus on.
• Define how the security vulnerabilities will be tracked.
Response Planning
• Identify who is responsible for security servicing.
• Provide contact information for people who respond to security incidents.
• Ensure processes are in place to handle all types of security issues—for example, code reused or
inherited from other teams and code licensed from third parties.
• Create a documented sustaining model that addresses the need to release immediate patches in
response to security vulnerabilities and does not depend entirely on infrequent service packs.
• ActiveX controls
A: Yes, but it is not the intent of SDL-Agile to allow teams to ignore or avoid certain SDL requirements
indefinitely. This is a side effect of a process that is designed to respect the needs of the team to spend a
significant amount of time innovating and implementing new features while still maintaining an appropriate
security baseline. No requirement can go more than six months without being completed (or having an
exception granted).
Q: Why not mandate a round-robin or other type of requirement rotation to ensure that all requirements
eventually get addressed?
A: Some teams feel strongly that certain requirements are a better use of their limited time budget. If, for
example, a team feels that the process of running and analyzing attack surface analyzer results is not as
valuable as running and analyzing file fuzzer results, it can perform file fuzzing more often and attack
surface analysis less often.
A: If teams want to do this, great! But it is not part of the SDL-Agile requirements. In general, one of the
guiding principles of SDL-Agile is to keep teams from spending so much time on security that it
significantly affects their feature velocity. A mandated security spike would definitely affect a team’s
feature release schedule.
Introduction
The following questions are designed to help determine the risk rating of line-of-business (LOB)
applications. The application team completes this questionnaire to assist in the determination of the risk
rating. You can arrange these questions in categories, such as Architecture or Data Classification.
• What type of user access does your application offer (internal, external [Internet-facing], both, or
neither)?
_____________________________________________
• What is the basic authentication and authorization for the external-facing (Internet) portion of your
application?
_____________________________________________
_____________________________________________
_____________________________________________
Data Classification
_____________________________________________
_____________________________________________
_____________________________________________
Functionality
• What function does your application fulfill? How critical is its role?
_____________________________________________
_____________________________________________
• Does your application have multiple user roles (for example, user and admin)?
_____________________________________________
• Is code executed on the client machine (for example, ActiveX control, assembly)?
_____________________________________________
_____________________________________________
Process Control
_____________________________________________
• Will the privacy statement or legal notice that was used in the existing application version change for
this release? Is there a new privacy statement or legal notice available?
_____________________________________________
_____________________________________________
_____________________________________________
Each company needs to define risk for its own business and industry.
• Does this application handle personal information (employees, customers, business partners)?
Lessons Learned
Procedural lessons learned as part of this effort include the following:
• Address security during application development. Waiting until the production phase to address
security may expose vulnerabilities in the application.
• Create clearly written and easy-to-access documentation of security and privacy standards.
• Stabilize the process. Introducing constant changes to the standards or the process creates
considerable churn and confusion.
• Use a process to prioritize which applications are examined or the order in which they are examined
to help ensure that the most sensitive application/data is examined first.
• Develop a thoroughly considered process for tracking policy exceptions.
• Education is crucial to the success of a security and privacy program. Train developers, testers, and
support personnel on an ongoing basis to provide up-to-date information.
• Security and privacy are ongoing concerns. Implement an experienced security team and a well-
developed process to help ensure that applications incorporate ongoing changes.
• For third-party applications, a written statement from the vendor helps provide assurance that the
software does not contain any hidden mechanisms that could be used to compromise or circumvent
the software's security and privacy controls.
• Use an application portfolio management tool to track applications and to manage compliance with
the overall governance process.
• The scope of security and privacy work may require a cross-charge model that helps balance the
availability of security/privacy subject matter experts (SMEs) against application release cycles.
Technical lessons learned as part of this effort include the following:
• Create security checklists that include step-by-step instructions for securing applications, hosts, and
networks.
• Create privacy checklists that include systematic instructions for appropriate handling of applications
that collect, use, or contain personal data (including notification requirements).
• Use a sound reporting solution to help drive compliance with the process.
• Ensure regular scanning of networks and hosts to identify vulnerabilities, confirm patch management,
and ensure regulatory compliance.
• Within a security tracking system, maintain an up-to-date inventory of the following items:
• Security and privacy SMEs who conduct/monitor service delivery for the service levels.
• Account management that acts as a liaison with application teams, manages the application portfolio,
and ensures that the process for SDL-LOB compliance runs smoothly.
• Remediation and risk management, which both prioritize applications for assessment and manage
the remediation of high-risk vulnerabilities found during the assessment.
• Operations team that conducts network and host scanning post-assessment across the enterprise
and production servers.
• Training and awareness for application teams to ensure that they comply with standards, policies,
and best practices.
• Help desk to answer common questions and, as needed, escalate to the security and privacy SMEs.
• Authoring checklists, standards, and even corporate policy to meet security and privacy requirements.
• Resources, tools, and other content that assist application teams building low-risk or medium-risk
LOB applications with security and privacy compliance.
In addition, there needs to be a security liaison within each development organization to ensure there is
consistent messaging to the application teams and a single point of contact within the actual LOB
organization.