Improving Web Application Security: Threats and Countermeasures
by Microsoft Corporation ISBN:0735618429
Microsoft Press © 2003 (863 pages)
Table of Contents
Improving Web Application Security: Threats and Countermeasures
Forewords
Introduction
Solutions at a Glance
Fast Track — How To Implement the Guidance
Part I - Introduction to Threats and Countermeasures
Chapter 1 - Web Application Security Fundamentals
Chapter 2 - Threats and Countermeasures
Chapter 3 - Threat Modeling
Part II - Designing Secure Web Applications
Chapter 4 - Design Guidelines for Secure Web Applications
Chapter 5 - Architecture and Design Review for Security
Part III - Building Secure Web Applications
Chapter 6 - .NET Security Overview
Chapter 7 - Building Secure Assemblies
Chapter 8 - Code Access Security in Practice
Chapter 9 - Using Code Access Security with ASP.NET
Chapter 10 - Building Secure ASP.NET Pages and Controls
Chapter 11 - Building Secure Serviced Components
Chapter 12 - Building Secure Web Services
Chapter 13 - Building Secure Remoted Components
Chapter 14 - Building Secure Data Access
Part IV - Securing Your Network, Host, and Application
Chapter 15 - Securing Your Network
Chapter 16 - Securing Your Web Server
Chapter 17 - Securing Your Application Server
Chapter 18 - Securing Your Database Server
Chapter 19 - Securing Your ASP.NET Application and Web Services
Chapter 20 - Hosting Multiple Web Applications
Part V - Assessing Your Security
Chapter 21 - Code Review
Chapter 22 - Deployment Review
Related Security Resources
Index of Checklists
Checklist - Architecture and Design Review
Checklist - Securing ASP.NET
Checklist - Securing Web Services
Checklist - Securing Enterprise Services
Checklist - Securing Remoting
Checklist - Securing Data Access
Checklist - Securing Your Network
Checklist - Securing Your Web Server
Checklist - Securing Your Database Server
Checklist - Security Review for Managed Code
How To - Index
How To - Implement Patch Management
How To - Harden the TCP/IP Stack
How To - Secure Your Developer Workstation
How To - Use IPSec for Filtering Ports and Authentication
How To - Use the Microsoft Baseline Security Analyzer
How To - Use IISLockdown.exe
How To - Use URLScan
How To - Create a Custom Encryption Permission
How To - Use Code Access Security Policy to Constrain an Assembly
Index
List of Figures
List of Tables
Back Cover
Information in this document, including URL and other Internet Web site references, is
subject to change without notice. Unless otherwise noted, the example companies,
organizations, products, domain names, e-mail addresses, logos, people, places and
events depicted herein are fictitious, and no association with any real company,
organization, product, domain name, e-mail address, logo, person, place or event is
intended or should be inferred. Complying with all applicable copyright laws is the
responsibility of the user. Without limiting the rights under copyright, no part of this
document may be reproduced, stored in or introduced into a retrieval system, or
transmitted in any form or by any means (electronic, mechanical, photocopying, recording,
or otherwise), or for any purpose, without the express written permission of Microsoft
Corporation.
Microsoft, MS-DOS, Windows, Windows NT, Active Directory, BizTalk, IntelliSense, MSDN,
Visual Basic, Visual C#, Visual C++, and Visual Studio are either registered trademarks or
trademarks of Microsoft Corporation in the United States and/or other countries.
Version 1.0
6/30/2003
ISBN: 0-7356-1842-9
The names of actual companies and products mentioned herein may be the trademarks of
their respective owners.
Forewords
Foreword by Mark Curphey
When the public talks about the Internet, in most cases they are actually talking about the
Web. The reality of the Web today never ceases to amaze me, and the tremendous
potential for what we can do on the Web is awe-inspiring. But, at the same time, one of the
greatest fears for many who want to embrace the Web — the one thing that is often
responsible for holding back the rate of change — is the security of Web technology. With
the constant barrage of high profile news stories about hackers exposing credit card
databases here and finding cunning ways into secret systems there, it's hardly surprising
that in a recent survey almost all users who chose not to use Internet banking cited security
as the reason. Putting your business online is no longer optional today, but is an essential
part of every business strategy. For this reason alone, it is crucial that users have the
confidence to embrace the new era.
As with any new technology, there is a delay from the time it is introduced to the market to
the time it is really understood by the industry. The breakneck speed at which Web
technologies were adopted has widened that window. The security industry as a whole has
not kept pace with these changes and has not developed the necessary skills and thought
processes to tackle the problem. To fully understand Web security, you must be a
developer, a security person, and a process manager. While many security professionals
can examine and evaluate the security of a Windows configuration, far fewer have access
to the workings of an Internet bank or an online book store, or can fully understand the level
of security that an online business requires.
Until a few years ago, the platform choices for building secure Web applications were
somewhat limited. Secure Web application development was the exclusive playground of
the highly experienced and highly skilled developer (and they were more than happy to let
you know that). The .NET Framework and ASP.NET in particular are an exciting and
extremely important evolution in the Web technology world and are of particular interest to
the security community. With their flexible and extensible security model and a wealth of
security features, almost anything is possible in less time and with less effort than on many
other platforms. The .NET Framework and ASP.NET are an excellent choice for building
highly secure, feature-rich Web sites.
With that array of feature choices comes a corresponding array of decisions, and each and
every decision in the process of designing, developing, deploying, and maintaining a site
can have significant security impact and implications.
This is the most comprehensive and well-written guide to building secure Web applications
that I have seen, and is a must read for anyone building a secure Web site or considering
using ASP.NET to provide security for their online business presence.
Mark Curphey
Mark Curphey has a Masters degree in Information Security and runs the Open Web
Application Security Project. He moderates the sister security mailing list to Bugtraq called
webappsec that specializes in Web application security. He is a former Director of
Information Security for Charles Schwab, consulting manager for Internet Security Systems,
and veteran of more banks and consulting clients than he cares to remember. He now
works for a company called Watchfire. He is also a former Java and UNIX bigot turned C#
and ASP.NET fan.
Foreword by Joel Scambray
I have been privileged to contribute to Improving Web Application Security: Threats and
Countermeasures, and its companion volume, Building Secure ASP.NET Web Applications.
As someone who encounters many such threats and relies on many of these
countermeasures every day at Microsoft's largest Internet-facing online properties, I can
say that this guide is a necessary component of any Web-facing business strategy. I'm
quite excited to see this knowledge shared widely with Microsoft's customers, and I look
forward to applying it in my daily work.
There is an increasing amount of information being published about Internet security, and
keeping up with it is a challenge. One of the first questions I ask when a new work like this
gets published is: "Does the quality of the information justify my time to read it?" In the case
of Improving Web Application Security: Threats and Countermeasures, I can answer an
unqualified yes. J.D. Meier and team have assembled a comprehensive reference on
Microsoft Web application security, and put it in a modular framework that makes it readily
accessible to Web application architects, developers, testers, technical managers,
operations engineers, and yes, even security professionals. The bulk of information
contained in this work can be intimidating, but it is well-organized around key milestones in
the product lifecycle — design, development, testing, deployment, and maintenance. It also
adheres to a security principles-based approach, so that each section is consistent with
common security themes.
Perhaps my favorite aspect of this guide is the thorough testing that went into each page.
During several discussions with the guide's development team, I always came away
impressed with their willingness to actually deploy the technologies discussed herein to
ensure that the theory portrayed aligned with practical reality. They also freely sought out
expertise internal and external to Microsoft to keep the contents useful and practical.
Some other key features that I found very useful include the concise, well-organized, and
comprehensive threat modeling chapter, the abundant tips and guidelines on .NET
Framework security (especially code access security), and the hands-on checklists for each
topic discussed.
Improving Web Application Security: Threats and Countermeasures will get any
organization out ahead of the Internet security curve by showing them how to bake security
into applications, rather than bolting it on as an afterthought. I highly recommend this guide
to those organizations who have developed or deployed Internet-facing applications and to
those organizations who are considering such an endeavor.
Joel Scambray
Foreword by Erik Olson
Web applications are the portals to many corporate secrets. Whether they sit on the edge
of the lawless Internet frontier or safeguard the corporate payroll, these applications are a
popular target for all sorts of mischief. Web application developers cannot afford to be
uncertain about the risks to their applications or the remedies that mitigate these risks. The
potential for damage and the variety of threats is staggering, both from within and without.
However, while many threats exist, the remedies can be crystallized into a tractable set of
practices and procedures that can mitigate known threats and help to guard against the
next unknown threat.
The .NET Framework and the Common Language Runtime were designed and built with
these threats in mind. They provide a powerful platform for writing secure applications and
a rich set of tools for validating and securing application assets. Note, however, that even
powerful tools must be guided by careful hands.
This guide presents a clear and structured approach to dealing with Web application
security. In it, you will find the building blocks that enable you to build and deploy secure
Web applications using ASP.NET and the .NET Framework.
The guide begins with a vocabulary for understanding the jargon-rich language of security
spoken by programmers and security professionals. It includes a catalog of threats faced
by Web applications and a model for identifying threats relevant to a given scenario. A
formal model is described for identifying, classifying, and understanding threats so that
sound designs and solid business decisions can be made.
The text provides a set of guidelines and recommended design and programming practices.
These guidelines are the collective wisdom that comes from a deep analysis of both
mistakes that have been made and mistakes that have been successfully avoided.
The tools of the craft provided by ASP.NET and the .NET Framework are introduced, with
detailed guidance on how to use them. Proven patterns and practices for writing secure
code, using data, and building Web applications and services are all documented.
Sometimes the desired solution is not the easiest path. To make it faster and easier to end
up in the right place, the authors have carefully condensed relevant sample code from real-
world applications into building blocks.
Finally, techniques for assessing application security are provided. The guide contains a set
of detailed checklists that can be used as guidelines for new applications or tools to
evaluate existing projects.
Whether you're just starting on your apprenticeship in Web application security or have
already mastered many of the techniques, you'll find this guide to be an indispensable aid
that will help you build more secure Web applications.
Erik Olson
Foreword by Michael Howard
Based on my experience, I can safely say that many people focus on securing the "core"
code and features, and give the security of features that depend on the core short shrift.
You simply cannot do this in a hostile environment such as the Web. Building secure
systems requires skill, education, and discipline at every stage of development: from design
to coding to testing to documentation to deployment, and finally, to management. Each and
every step must be as secure as possible. This is why I am excited about Improving Web
Application Security: Threats and Countermeasures. It's the first book to offer a "soup to
nuts" view of building a secure Web-based system using the Microsoft .NET Framework
and ASP.NET. The fact that the authors chose to focus on the Web-based product
development end-to-end lifecycle — and not just on securing small islands of technology —
is a testament to much of the work we are undertaking at Microsoft as part of the
Trustworthy Computing initiative. Delivering security and privacy to customers requires the
engagement of every person involved in the software process, rather than focusing on
single events or a single development discipline.
This book has something of value for everyone involved in software development,
deployment, and management, because everyone involved in these efforts has an impact on
product security. I would urge you, at a minimum, to read the sections that affect your
discipline. You will learn critical skills, and most importantly, you will secure every link in the
chain. After all, it takes only one loose thread and the entire garment unravels!
Michael Howard
The information in this guide is based on proven practices for improving your Web
application's security. The guidance is task-based and presented in parts that correspond to
product life cycles, tasks, and roles.
Part II, "Designing Secure Web Applications," gives you the guidance you
require to design secure Web applications. Even if you have deployed your
application, we recommend that you examine and evaluate the concepts, principles,
and techniques outlined in this part.
Part III, "Building Secure Web Applications," allows you to apply the secure
design practices introduced in Part II to create secure implementations. You will
learn defensive coding techniques that make your code and application resilient to
attack.
Part IV, "Securing Your Network, Host, and Application," describes how you will
apply security configuration settings to secure these three interrelated levels.
Instead of applying security randomly, you will learn the rationale behind the security
recommendations.
Part V, "Assessing Your Security," provides the tools you require to evaluate the
success of your security efforts. Starting with the application, you'll take an inside-
out approach to evaluating your code and design. You'll follow this with an outside-in
view of the security risks that challenge your network, host, and application.
Why We Wrote This Guide
Traditionally, security has been considered a network issue, where the firewall is the
primary defense (the fortress model) or something that system administrators handle by
locking down the host computers. Application architects and developers have traditionally
treated security as an afterthought or as a feature to be considered as time permits —
usually after performance considerations are addressed.
The problem with the firewall, or fortress model, is that attacks can pass through network
defenses directly to the application. A typical firewall helps to restrict traffic to HTTP, but
the HTTP traffic can contain commands that exploit application vulnerabilities. Relying
entirely on locking down your hosts is another unsuccessful approach. While several threats
can be effectively countered at the host level, application attacks represent a serious and
increasing security issue.
Another area where security problems occur is deployment. A familiar scenario is when an
application fails when it is deployed in a locked-down production environment, which forces
the administrator to loosen security settings. This often leads to new security vulnerabilities.
In addition, a lack of security policy or application requirements that are inconsistent with
policy can compromise security. One of the goals of this guide is to help bridge this gap
between development and operations.
Random security is not enough. To make your application hack-resilient, you need a holistic
and systematic approach to securing your network, host, and application. The responsibility
spans phases and roles across the product life cycle. Security is not a destination; it is a
journey. This guide will help you on your way.
What Is a Hack-Resilient Application?
This guide helps you build hack-resilient applications. A hack-resilient application is one that
reduces the likelihood of a successful attack and mitigates the extent of damage if an
attack occurs. A hack-resilient application resides on a secure host (server) in a secure
network and is developed using secure design and development guidelines.
In 2002, eWeek sponsored its fourth Open Hack challenge, which proved that hack-resilient
applications can be built using .NET technologies on servers running the Microsoft®
Windows® 2000 operating system. The Open Hack team built an ASP.NET Web application
using Microsoft Windows 2000 Advanced Server, Internet Information Services (IIS) 5.0,
Microsoft SQL Server™ 2000, and the .NET Framework. It successfully withstood more
than 82,500 attempted attacks and emerged from the competition unscathed.
This guide shares the methodology and experience used to secure Web applications,
including the Open Hack application. In addition, the guide includes proven practices that
are used to secure networks and Web servers around the world. These methodologies and
best practices are condensed and offered here as practical guidance.
Scope of This Guide
Web application security must be addressed across the tiers and at multiple layers. A
weakness in any tier or layer makes your application vulnerable to attack.
Figure 1 shows the scope of the guide and the three-layered approach that it uses:
securing the network, securing the host, and securing the application. It also shows the
process called threat modeling, which provides a structure and rationale for the security
process and allows you to evaluate security threats and identify appropriate
countermeasures. If you do not know your threats, how can you secure your system?
The guide addresses security across the three physical tiers shown in Figure 1. It covers
the Web server, remote application server, and database server. At each tier, security is
addressed at the network layer, host layer, and application layer. Figure 1 also shows the
configuration categories that the guide uses to organize the various security configuration
settings that apply to the host and network, and the application vulnerability categories used
to structure application security considerations.
Technologies in Scope
While much of the information in this guide is technology agnostic, the guide focuses on
Web applications built with the .NET Framework and deployed on the Windows 2000
Server family of operating systems. The guide also pays special attention to .NET
Framework code access security, particularly in relation to the use of code access security
with ASP.NET. Where appropriate, new features provided by Windows Server 2003 are
highlighted. Table 1 shows the products and technologies that this guidance is based on.
Designers will learn how to avoid costly security mistakes and how to make appropriate
design choices early in the product development life cycle. Developers will learn how to
implement defensive coding techniques and build secure code. System administrators will
learn how to methodically secure servers and networks, and security analysts will learn how
to perform security assessments.
How to Use This Guide
Each chapter in the guide is modular. The guidance is task-based, and is presented in parts
which correspond to the various stages of the product development life cycle and to the
people and roles involved during the life cycle including architects, developers, system
administrators, and security analysts.
If you are responsible for or are involved in the design of a new or existing Web application,
you should read Part II, "Designing Secure Web Applications." Part II helps you identify
potential vulnerabilities in your application design.
If you are a developer, you should read Part III, "Building Secure Web Applications." The
information in this part helps you to develop secure code and components, including Web
pages and controls, Web services, remoting components, and data access code. As a
developer, you should also read Part IV, "Securing Your Network, Host, and Application" to
gain a better understanding of the type of secure environment that your code is likely to be
deployed in. If you understand more about your target environment, the risk of issues and
security vulnerabilities appearing at deployment time is reduced significantly.
If you are a system administrator, you should read Part IV, "Securing Your Network, Host,
and Application." The information in this part helps you create a secure network and server
infrastructure — one that is tuned to support .NET Web applications and Web services.
Anyone who is responsible for reviewing product security should read Part V, "Assessing
Your Security." This part helps you identify vulnerabilities caused by insecure coding
techniques or deployment configurations.
Solutions at a Glance
The "Solutions at a Glance" section provides a problem index for the guide, highlighting key
areas of concern and where to go for more detail.
Fast Track
The "Fast Track" section in the front of the guide helps you implement the recommendations
and guidance quickly and easily.
Parts
This guide is divided into five parts:
Checklists
This section contains printable, task-based checklists, which are quick reference sheets to
help you turn information into action. This section includes the following checklists:
Figure 4 shows the multiple layers covered by the guide, including the network, host, and
application. The host layer covers the operating system, platform services and components,
and run-time services and components. Platform services and components include SQL
Server and Enterprise Services. Run-time services and components include ASP.NET and
.NET code access security among others.
Focus on Threats
Your application's security measures can become useless, or even counterproductive, if
those measures are applied without knowing the threats that they are designed to mitigate.
Threats can be external, such as an attacker on the Internet, or internal, such as a
disgruntled employee or administrator. This guide helps you identify threats in two ways:
It enumerates the top threats that affect Web applications at the network, host, and
application levels.
It helps you to identify which threats are relevant to your application through a
process called threat modeling.
Figure 5 shows the scope of Volume I. The guide addresses authentication, authorization,
and secure communication across the tiers of a distributed Web application. The
technologies covered are the same as those in the current guide and include Windows 2000
Server, IIS, ASP.NET Web applications and Web services, Enterprise Services, .NET
Remoting, SQL Server, and ADO.NET.
For additional related work, see the "Resources" chapter provided at the end of the guide.
Feedback and Support
We have made every effort to ensure the accuracy of this guide and its companion content.
Technical Support
Technical support for the Microsoft products and technologies referenced in this guide is
provided by Microsoft Product Support Services (PSS). For product support information,
please visit the Microsoft Product Support Web site at http://support.microsoft.com.
Table 2: Newsgroups

Newsgroup                  Address
.NET Framework Security    microsoft.public.dotnet.security
ASP.NET Security           microsoft.public.dotnet.framework.aspnet.security
Enterprise Services        microsoft.public.dotnet.framework_component_services
Web Services               microsoft.public.dotnet.framework.aspnet.webservices
Remoting                   microsoft.public.dotnet.framework.remoting
ADO.NET                    microsoft.public.dotnet.framework.adonet
SQL Server Security        microsoft.public.sqlserver.security
MBSA                       microsoft.public.security.baseline_analyzer
Virus                      microsoft.public.security.virus
IIS Security               microsoft.public.inetserver.iis.security
The Team Who Brought You This Guide
This guide was produced by the following .NET development specialists:
Alex Mackman, Content Master Ltd, Founding member and Principal Technologist
Microsoft Product Group: Michael Howard (Threat Modeling, Code Review, and
Deployment Review); Matt Lyons (demystifying code access security); Caesar
Samsi; Erik Olson (extensive validation and recommendations on ASP.NET); Andres
De Vivanco (securing SQL Server); Riyaz Pishori (Enterprise Services); Alan Shi;
Carlos Garcia Jurado Suarez; Raja Krishnaswamy, CLR Development Lead;
Christopher Brown; Dennis Angeline; Ivan Medvedev (code access security);
Jeffrey Cooperstein (Threat Modeling); Frank Swiderski; Manish Prabhu (.NET
Remoting); Michael Edwards, MSDE; Pranish Kumar, (VC++ PM); Richard
Waymire (SQL Security); Sebastian Lange; Greg Singleton; Thomas Deml (IIS
Lead PM); Wade Hilmo (IIS); Steven Pratschner; Willis Johnson (SQL Server); and
Girish Chander (SQL Server).
Microsoft Consulting Services and Product Support Services (PSS): Ilia Fortunov
(Senior Architect) for providing continuous and diligent feedback; Aaron Margosis
(extensive review, script injection, and SQL Injection); Jacquelyn Schmidt; Kenny
Jones; Wade Mascia (Web Services and Enterprise services); Aaron Barth; Jackie
Richards; Aaron Turner; Andy Erlandson (Director of PSS Security); Jayaprakasam
Siddian Thirunavukkarasu (SQL Server security); Jeremy Bostron; Jerry Bryant;
Mike Leuzinger; Robert Hensing (reviewing the Securing series); Gene Ferioli;
David Lawler; Jon Wall (threat modeling); Martin Born; Michael Thomassy; Michael
Royster; Phil McMillan; and Steven Ramirez.
Thanks to Joel Scambray; Rich Benack; Alisson Sol; Tavi Siochi (IT Audit); Don
Willits (raising the quality bar); Jay Nanduri (Microsoft.com) for reviewing and
sharing real world experience; Devendra Tiwari and Peter Dampier, for extensive
review and sharing best IT practices; Denny Dayton; Carlos Lyons; Eric Rachner;
Justin Clarke; Shawn Welch (IT Audit); Rick DeJarnette; Kent Sharkey (Hosting
scenarios); Andy Oakley; Vijay Rajagopalan (Dev Lead MS Operations); Gordon
Ritchie, Content Master Ltd; Chase Carpenter (Threat Modeling); Matt Powell (for
Web Services security); Joel Yoker; Juhan Lee [MSN Operations]; Lori Woehler;
Mike Sherrill; Mike Kass; Nilesh Bhide; Rebecca Hulse; Rob Oikawa (Architect);
Scott Greene; Shawn Nandi; Steve Riley; Mark Mortimore; Matt Priestley; and
David Ross.
Thanks to our editors: Sharon Smith; Kathleen Hartman (S&T OnSite); Tina Burden
(Entirenet); Cindy Riskin (S&T OnSite); and Pat Collins (Entirenet) for helping to
ensure a quality experience for the reader.
Finally, thanks to Naveen Yajaman; Philip Teale; Scott Densmore; Ron Jacobs;
Jason Hogg; Per Vonge Nielsen; Andrew Mason; Edward Jezierski; Michael Kropp;
Sandy Khaund; Shaun Hayes; Mohammad Al-Sabt; Edward Lafferty; Ken Perilman;
and Sanjeev Garg (Satyam Computer Services).
Tell Us About Your Success
If this guide helps you, we would like to know. Tell us by writing a short summary of the
problems you faced and how this guide helped you out. Submit your summary to:
[email protected].
Summary
In this introduction, you were shown the structure of the guide and the basic approach used
by the guide to secure Web applications. You were also shown how to apply the guidance
to your role or to specific phases of your product development life cycle.
Solutions at a Glance
This document roadmap summarizes the solutions presented in Improving Web Application
Security: Threats and Countermeasures. It provides links to the appropriate material in the
guide so that you can easily locate the information you need and find solutions to specific
problems.
Architecture and Design Solutions
For architects, the guide provides the following solutions to help you design secure Web
applications:
Use threat modeling to systematically identify threats rather than applying security
in a haphazard manner. Next, rate the threats based on the risk of an attack or
occurrence of a security compromise and the potential damage that could result.
This allows you to tackle threats in the appropriate order.
For more information about creating a threat model and evaluating threat risks, see
Chapter 3, "Threat Modeling."
Use tried and tested design principles. Focus on the critical areas where the correct
approach is essential and where mistakes are often made. This guide refers to
these as application vulnerability categories. They include input validation,
authentication, authorization, configuration management, sensitive data protection,
session management, cryptography, parameter manipulation, exception
management, and auditing and logging considerations. Pay serious attention to
deployment issues including topologies, network infrastructure, security policies,
and procedures.
For more information, see Chapter 4, "Design Guidelines for Secure Web
Applications."
For more information, see Chapter 5, "Architecture and Design Review for
Security."
Development Solutions
For developers, this guide provides the following solutions:
The .NET Framework provides user and code security models that allow you to
restrict what users can do and what code can do. To program role-based security
and code access security, use types from the System.Security namespace. The
.NET Framework also provides the System.Security.Cryptography namespace,
which exposes symmetric and asymmetric encryption and decryption, hashing,
random number generation, support for digital signatures, and more.
Use strong names to digitally sign your assemblies and to make them tamperproof.
At the same time you need to be aware of strong name issues when you use strong
name assemblies with ASP.NET. Reduce your assembly attack profile by adhering
to solid object oriented design principles, and then use code access security to
further restrict which code can call your code. Use structured exception handling to
prevent sensitive information from propagating beyond your current trust boundary
and to develop more robust code. Avoid canonicalization issues, particularly with
input file names and URLs.
For information about how to improve the security of your managed code, see
Chapter 7, "Building Secure Assemblies." For more information about how to use
code access security effectively to further improve security, see Chapter 8, "Code
Access Security in Practice." For information about performing managed code
reviews, see Chapter 21, "Code Review."
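As a hedged illustration of the System.Security.Cryptography support mentioned above, the following sketch (the class name and sample value are illustrative, not code from the guide) generates a random salt and hashes a salted secret:

```csharp
// Minimal sketch: salted hashing with System.Security.Cryptography.
using System;
using System.Security.Cryptography;
using System.Text;

class HashSketch
{
    static void Main()
    {
        // Generate a random 16-byte salt with a cryptographic RNG.
        byte[] salt = new byte[16];
        using (var rng = RandomNumberGenerator.Create())
        {
            rng.GetBytes(salt);
        }

        // Prepend the salt to the secret bytes before hashing.
        byte[] secret = Encoding.UTF8.GetBytes("S3cret!");
        byte[] salted = new byte[salt.Length + secret.Length];
        Buffer.BlockCopy(salt, 0, salted, 0, salt.Length);
        Buffer.BlockCopy(secret, 0, salted, salt.Length, secret.Length);

        // Hash the salted value with SHA-256.
        using (var sha = SHA256.Create())
        {
            byte[] hash = sha.ComputeHash(salted);
            Console.WriteLine(Convert.ToBase64String(hash));
        }
    }
}
```

The salt ensures that identical secrets produce different hashes, which defeats simple precomputed-dictionary attacks.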
Do not reveal internal system or application details, such as stack traces, SQL
statement fragments, and so on. Ensure that this type of information is not allowed
to propagate to the end user or beyond your current trust boundary.
Fail securely in the event of an exception, and make sure your application denies
access and is not left in an insecure state. Do not log sensitive or private data such
as passwords, which could be compromised. When you log or report exceptions, if
user input is included in exception messages, validate it or sanitize it. For example,
if you return an HTML error message, you should encode the output to avoid script
injection.
For more information, see the "Exception Management" sections in Chapter 7,
"Building Secure Assemblies," and in Chapter 10, "Building Secure ASP.NET Pages
and Controls."
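The guidance above can be sketched as follows; the page class, method, and logging helper are hypothetical:

```csharp
using System;
using System.Web;

public class OrderPage
{
    // Catch at the trust boundary, keep detail server-side,
    // and return only a sanitized, encoded message to the client.
    public string PlaceOrder(string quantityInput)
    {
        try
        {
            int quantity = Int32.Parse(quantityInput);
            // ... process the order ...
            return "Order accepted.";
        }
        catch (FormatException)
        {
            // Encode the echoed input so it cannot inject script.
            return "Invalid quantity: " + HttpUtility.HtmlEncode(quantityInput);
        }
        catch (Exception ex)
        {
            LogForAdministrators(ex);  // full exception detail stays server-side
            return "Your order could not be processed.";  // generic client message
        }
    }

    private void LogForAdministrators(Exception ex)
    {
        // Hypothetical helper: write to the event log or a protected log file.
    }
}
```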
Use analysis tools such as FxCop to analyze binary assemblies and to ensure that
they conform to the .NET Framework design guidelines. Fix any security
vulnerabilities identified by your analysis tools. Use a text search facility to scan
your source code base for hard-coded secrets such as passwords. Then, review
specific elements of your application including Web pages and controls, data
access code, Web services, serviced components, and so on. Pay particular
attention to SQL injection and cross-site scripting vulnerabilities.
Also review the use of sensitive code access security techniques such as link
demands and asserts. For more information, see Chapter 21, "Code Review."
You can apply a methodology when securing your workstation. Secure your
accounts, protocols, ports, services, shares, files and directories, and registry.
Most importantly, keep your workstation current with the latest patches and
updates. If you run Internet Information Services (IIS) on Microsoft Windows® XP
or Windows 2000, then run IISLockdown. IISLockdown applies secure IIS
configuration settings and installs the URLScan Internet Server Application
Programming Interface (ISAPI) filter, which detects and rejects potentially malicious HTTP
requests. You may need to modify the default URLScan configuration, for example,
so you can debug Web applications during development and testing.
For more information, see "How To: Secure Your Developer Workstation," in the
"How To" section of this guide.
With the .NET Framework version 1.1, you can set ASP.NET trust levels either in
Machine.config or Web.config. These trust levels use code access security to
restrict the resources that ASP.NET applications can access, such as the file
system, registry, network, databases, and so on. In addition, they provide
application isolation.
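For example, an application can be moved into a partial-trust sandbox with a minimal Web.config fragment such as the following:

```xml
<!-- Web.config (hypothetical application), .NET Framework 1.1 -->
<configuration>
  <system.web>
    <!-- Run the application in the Medium partial-trust sandbox -->
    <trust level="Medium" originUrl="" />
  </system.web>
</configuration>
```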
For more information about using code access security from ASP.NET, developing
partial trust Web applications, and sandboxing privileged code, see Chapter 9,
"Using Code Access Security with ASP.NET."
For more information about code access security fundamentals, see Chapter 8,
"Code Access Security in Practice."
For more information about the code access security issues that you need to
consider when developing managed code, see the "Code Access Security
Considerations" sections in Chapter 11, "Building Secure Serviced Components,"
Chapter 12, "Building Secure Web Services," Chapter 13, "Building Secure Remoted
Components," and Chapter 14, "Building Secure Data Access."
You can restrict what code can do regardless of the account used to run the code.
You can use code access security to constrain the resources and operations that
your code is allowed to access, either by configuring policy or how you write your
code. If your code does not need to access a resource or perform a sensitive
operation such as calling unmanaged code, you can use declarative security
attributes to ensure that your code cannot be granted this permission by an
administrator.
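As a sketch, the following assembly-level attributes declaratively refuse permissions that a hypothetical assembly never needs, so those permissions cannot be granted to it:

```csharp
using System.Security;
using System.Security.Permissions;

// Refuse permissions this assembly never needs: even if policy would
// grant them, the runtime removes them from the assembly's grant set.
[assembly: SecurityPermission(SecurityAction.RequestRefuse,
                              UnmanagedCode = true)]   // never call native code
[assembly: RegistryPermission(SecurityAction.RequestRefuse,
                              Unrestricted = true)]    // never touch the registry
```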
You can use code access security to constrain an assembly's ability to access
areas of the file system and perform file I/O. For example, you can constrain a
Web application so that it can only perform file I/O beneath its virtual directory
hierarchy. You can also constrain file I/O to specific directories. You can do this
programmatically or by configuring code access security policy.
For more information, see "File I/O" in Chapter 8, "Code Access Security in
Practice" and "Medium Trust" in Chapter 9, "Using Code Access Security with
ASP.NET." For more information about configuring code access security policy, see
"How To: Use Code Access Security Policy to Constrain an Assembly" in the "How
To" section of this guide.
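A minimal programmatic sketch, assuming a hypothetical data directory, might look like this:

```csharp
using System.IO;
using System.Security.Permissions;

public class FileReader
{
    // Constrain file I/O to one directory subtree before touching the
    // file system; "C:\MyApp\Data" is a hypothetical path.
    public string ReadDataFile(string fileName)
    {
        FileIOPermission perm = new FileIOPermission(
            FileIOPermissionAccess.Read, @"C:\MyApp\Data");
        perm.PermitOnly();  // access outside this path now throws SecurityException

        // Strip any directory component from the supplied name.
        string path = Path.Combine(@"C:\MyApp\Data", Path.GetFileName(fileName));
        using (StreamReader reader = new StreamReader(path))
        {
            return reader.ReadToEnd();
        }
    }
}
```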
Use parameterized stored procedures for data access. The use of parameters
ensures that input values are checked for type and length. Parameters are also
treated as safe literal values and not executable code within the database. If you
cannot use stored procedures, use SQL statements with parameters. Do not build
SQL statements by concatenating input values with SQL commands. Also, ensure
that your application uses a least privileged database login to constrain its
capabilities in the database.
For more information about SQL injection and for further countermeasures, see
"SQL Injection" in Chapter 14, "Building Secure Data Access."
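A minimal sketch of this approach, with a hypothetical stored procedure name and connection string, follows:

```csharp
using System.Data;
using System.Data.SqlClient;

public class OrderData
{
    // Call a stored procedure through a typed, length-checked parameter.
    public DataSet GetOrders(string customerId)
    {
        using (SqlConnection conn = new SqlConnection(
                   "server=(local);database=Orders;Integrated Security=SSPI"))
        {
            SqlCommand cmd = new SqlCommand("usp_GetOrders", conn);
            cmd.CommandType = CommandType.StoredProcedure;

            // The parameter is treated as a literal value, never as SQL text.
            cmd.Parameters.Add("@CustomerId", SqlDbType.NVarChar, 10).Value
                = customerId;

            SqlDataAdapter adapter = new SqlDataAdapter(cmd);
            DataSet results = new DataSet();
            adapter.Fill(results);
            return results;
        }
    }
}
```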
How to prevent cross-site scripting
Validate input for type, length, format, and range, and encode output. Encode
output if it includes input, including Web input. For example, encode form fields,
query string parameters, cookies and so on, and encode input read from a
database (especially a shared database) where you cannot assume the data is
safe. For free format input fields that you need to return to the client as HTML,
encode the output and then selectively remove the encoding on permitted elements
such as the <b> or <i> tags for formatting.
For more information, see "Cross-Site Scripting" in Chapter 10, "Building Secure
ASP.NET Pages and Controls."
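For example, output can be encoded with HttpUtility.HtmlEncode before it is written back to the page; the page and controls below are hypothetical:

```csharp
using System.Web;

public class GreetingPage : System.Web.UI.Page
{
    protected System.Web.UI.WebControls.TextBox NameInput;  // hypothetical controls
    protected System.Web.UI.WebControls.Label Greeting;

    protected void ShowGreeting()
    {
        // Encode before echoing user input back into the page, so that
        // <script> is rendered as harmless &lt;script&gt; text.
        Greeting.Text = "Hello, " + HttpUtility.HtmlEncode(NameInput.Text);
    }
}
```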
Look for alternate approaches to avoid storing secrets in the first place. If you must
store them, do not store them in clear text in source code or in configuration files.
Encrypt secrets with the Data Protection Application Programming Interface
(DPAPI) to avoid key management issues.
For more information, see "Sensitive Data" in Chapter 10, "Building Secure
ASP.NET Pages and Controls," "Cryptography" in Chapter 7, "Building Secure
Assemblies," and "Aspnet_setreg.exe and Process, Session, and Identity" in
Chapter 19, " Securing Your ASP.NET Application and Web Services."
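The sketch below uses the ProtectedData wrapper available from .NET Framework 2.0 onward; on version 1.x, the same Win32 DPAPI functions (CryptProtectData and CryptUnprotectData) are reached through P/Invoke instead.

```csharp
using System;
using System.Text;
using System.Security.Cryptography;

class SecretStore
{
    static void Main()
    {
        byte[] plaintext = Encoding.UTF8.GetBytes("connection string secret");

        // DPAPI derives the key from the machine account:
        // there is no key for the application to manage or store.
        byte[] ciphertext = ProtectedData.Protect(
            plaintext, null, DataProtectionScope.LocalMachine);

        byte[] recovered = ProtectedData.Unprotect(
            ciphertext, null, DataProtectionScope.LocalMachine);
        Console.WriteLine(Encoding.UTF8.GetString(recovered));
    }
}
```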
Pay particular attention to the parameters passed to and from unmanaged APIs,
and guard against potential buffer overflows. Validate the lengths of input and
output string parameters, check array bounds, and be particularly careful with file
path lengths. Use custom permission demands to protect access to unmanaged
resources before asserting the unmanaged code permission. Use caution if you use
SuppressUnmanagedCodeSecurityAttribute to improve performance.
For more information, see the "Unmanaged Code" sections in Chapter 7, "Building
Secure Assemblies," and Chapter 8, "Code Access Security in Practice."
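As a sketch, the following wrapper calls the Win32 GetTempPath function and validates the returned length before trusting the buffer contents:

```csharp
using System;
using System.Runtime.InteropServices;
using System.Text;

public class NativeWrapper
{
    private const int MaxPath = 260;

    // Real Win32 API: DWORD GetTempPath(DWORD nBufferLength, LPTSTR lpBuffer)
    [DllImport("kernel32.dll", CharSet = CharSet.Auto)]
    private static extern uint GetTempPath(uint bufferLength, StringBuilder buffer);

    public static string GetTempDirectory()
    {
        StringBuilder buffer = new StringBuilder(MaxPath);
        uint length = GetTempPath((uint)buffer.Capacity, buffer);

        // Check the result before trusting the buffer: zero means failure,
        // a value larger than the capacity means the buffer was too small.
        if (length == 0 || length > buffer.Capacity)
            throw new InvalidOperationException("Unexpected path length.");
        return buffer.ToString();
    }
}
```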
Constrain, reject, and sanitize your input because it is much easier to validate data
for known valid types, patterns, and ranges than it is to validate data by looking for
known bad characters. Validate data for type, length, format, and range. For string
input, use regular expressions. To perform type checks, use the .NET Framework
type system. On occasion, you may need to sanitize input. An example is encoding
data to make it safe.
For input validation design strategies, see "Input Validation" in Chapter 4, "Design
Guidelines for Secure Web Applications." For implementation details, see the "Input
Validation" sections in Chapter 10, "Building Secure ASP.NET Pages and Controls,"
Chapter 12, "Building Secure Web Services," Chapter 13, "Building Secure
Remoted Components," and Chapter 14, "Building Secure Data Access."
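As a sketch, a hypothetical name field can be constrained with a regular expression, and a numeric field checked for type and range through the .NET type system:

```csharp
using System;
using System.Text.RegularExpressions;

public class InputValidator
{
    // Constrain a name field to a known good pattern:
    // letters, hyphens, apostrophes, 1 to 40 characters.
    public static bool IsValidName(string input)
    {
        return input != null &&
               Regex.IsMatch(input, @"^[a-zA-Z'\-]{1,40}$");
    }

    // Type check via Int32.Parse (throws FormatException on bad input),
    // then an explicit range check.
    public static int ParseAge(string input)
    {
        int age = Int32.Parse(input);
        if (age < 0 || age > 150)
            throw new ArgumentOutOfRangeException("input");
        return age;
    }
}
```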
For more information, see the "Authentication" sections in Chapter 19, "Securing
Your ASP.NET Application and Web Services," and Chapter 10, "Building Secure
ASP.NET Pages and Controls."
Administration Solutions
For administrators, this guide provides the following solutions:
Use the Microsoft Baseline Security Analyzer (MBSA) to detect the patches and
updates that may be missing from your current installation. Run this on a regular
basis, and keep your servers current with the latest patches and updates. Back up
servers prior to applying patches, and test patches on test servers prior to installing
them on a production server. Also, use the security notification services provided by
Microsoft, and subscribe to receive security bulletins via e-mail.
For more information, see "How To: Implement Patch Management" in the "How To"
section of this guide.
Do not store passwords or sensitive data in plaintext. For example, use the
Aspnet_setreg.exe utility to encrypt the values for <processModel>, <identity>,
and <sessionState>. Do not reveal exception details to the client. For example, do
not use mode="Off" for <customErrors> in ASP.NET because it causes detailed
error pages that contain system-level information to be returned to the client.
Restrict who has access to configuration files and settings. Lock configuration
settings if necessary, using the <location> tag and the allowOverride attribute.
For more information, see Chapter 16, "Securing Your Web Server."
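These settings can be sketched in configuration as follows; the site path and error page name are hypothetical:

```xml
<!-- Machine.config fragment (hypothetical values) -->
<configuration>
  <system.web>
    <!-- Generic error page; never mode="Off" on a production server -->
    <customErrors mode="On" defaultRedirect="ErrorPage.htm" />
  </system.web>

  <!-- Lock a setting so individual Web.config files cannot override it -->
  <location path="Default Web Site" allowOverride="false">
    <system.web>
      <trust level="Medium" originUrl="" />
    </system.web>
  </location>
</configuration>
```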
How to secure a database server
For more information, see Chapter 18, "Securing Your Database Server."
Evaluate accounts, protocols, ports, services, shares, files and directories, and the
registry. Use Internet Protocol Security (IPSec) or SSL to secure the
communication channel between the Web server and the application server, and
between the application server and the database server. Review the security of
your Enterprise Services applications, Web services, and remoting applications.
Restrict the range of ports with which clients can connect to the application server,
and consider using IPSec restrictions to limit the range of clients.
For more information, see Chapter 17, "Securing Your Application Server."
Use separate identities to allow you to configure access control lists (ACLs) on
secure resources to control which applications have access to them. On the
Microsoft Windows Server 2003 operating system, use separate process identities
with IIS 6 application pools. On Windows 2000 Server, use multiple anonymous
Internet user accounts and enable impersonation. With the .NET Framework version
1.1 on both platforms, you can use partial trust levels and use code access security
to provide further application isolation. For example, you can use these methods to
prevent applications from accessing each other's virtual directories and critical
system resources.
For more information, see Chapter 20, "Hosting Multiple Web Applications."
In cross-platform scenarios and where you do not control both endpoints, use the
Web Services Enhancements 1.0 for Microsoft .NET (WSE) to implement message
level security solutions that conform to the emerging WS-Security standard. Pass
authentication tokens in Simple Object Access Protocol (SOAP) headers. Use XML
encryption to ensure that sensitive data remains private. Use digital signatures for
message integrity. Within the enterprise where you control both endpoints, you can
use the authentication, authorization, and secure communication features provided
by the operating system and IIS.
For more information, see Chapter 17, "Securing Your Application Server," and Chapter
19, "Securing Your ASP.NET Application and Web Services." For information about
developing secure Web services, see Chapter 12, "Building Secure Web Services."
Configure server applications to run using least privileged accounts. Enable COM+
role-based security, and enforce component-level access checks. At the minimum,
use call-level authentication to prevent anonymous access. To secure the traffic
passed to remote serviced components, use IPSec encrypted channels or use
remote procedure call (RPC) encryption. Restrict the range of ports that Distributed
COM (DCOM) dynamically allocates or use static endpoint mapping to limit the port
range to specific ports. Regularly monitor for Quick Fix Engineer (QFE) updates to
the COM+ runtime.
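A sketch of the role-based security settings as serviced component attributes follows; the class and role name are hypothetical:

```csharp
using System.EnterpriseServices;

// Enable COM+ access checks at the application and component level,
// with at minimum call-level authentication.
[assembly: ApplicationAccessControl(
    Authentication = AuthenticationOption.Call,
    AccessChecksLevel = AccessChecksLevelOption.ApplicationComponent)]

[ComponentAccessControl]
[SecurityRole("Manager")]  // hypothetical role name
public class OrderProcessor : ServicedComponent
{
    public void ApproveOrder(int orderId)
    {
        // Fine-grained role check inside the method, if required.
        if (!ContextUtil.IsCallerInRole("Manager"))
            throw new System.UnauthorizedAccessException();
        // ... process the order ...
    }
}
```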
For more information, see Chapter 17, "Securing Your Application Server."
You need to protect session state while in transit across the network and while in
the state store. If you use a remote state store, secure the communication channel
to the state store using SSL or IPSec. Also encrypt the connection string in
Machine.config. If you use a SQL Server state store, use Windows authentication
when you connect to the state store, and limit the application login in the database.
If you use the ASP.NET state service, use a least privileged account to run the
service, and consider changing the default port that the service listens to. If you do
not need the state service, disable it.
For more information, see "Session State" in Chapter 19, "Securing Your ASP.NET
Application and Web Services."
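A Web.config sketch for a SQL Server state store using Windows authentication (the server name is hypothetical) might look like this:

```xml
<!-- Web.config fragment (hypothetical server name) -->
<configuration>
  <system.web>
    <!-- Windows authentication: no credentials in the connection string -->
    <sessionState mode="SQLServer"
                  sqlConnectionString="data source=StateServer;Integrated Security=SSPI"
                  cookieless="false"
                  timeout="20" />
  </system.web>
</configuration>
```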
How to manage application configuration securely
Make sure the TCP/IP stack configuration on your server is hardened to protect
against attacks such as SYN floods. Configure ASP.NET to limit the size of
accepted POST requests and to place limits on request execution times.
For more information about hardening TCP/IP, see "How To: Harden the TCP/IP
Stack" in the "How To" section of this guide. For more information about ASP.NET
settings used to help prevent denial of service, see Chapter 19, "Securing Your
ASP.NET Application and Web Services."
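For example, both limits are set on the <httpRuntime> element; the values shown here are the ASP.NET defaults (request size in KB, execution timeout in seconds):

```xml
<!-- Machine.config fragment: cap POST size and request execution time -->
<configuration>
  <system.web>
    <httpRuntime maxRequestLength="4096"
                 executionTimeout="90" />
  </system.web>
</configuration>
```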
You can configure code access security policy to ensure that individual assemblies
or entire Web applications are limited in their ability to access the file system. For
example, by configuring a Web application to run at the Medium trust level, you
prevent the application from being able to access files outside of its virtual directory
hierarchy.
Also, by granting a restricted file I/O permission to a particular assembly you can
control precisely which files it is able to access and how it should be able to access
them.
For more information, see Chapter 9, "Using Code Access Security with ASP.NET"
and "How To: Use Code Access Security Policy to Constrain an Assembly" in the
"How To" section of this guide.
For more information, see the "Remote Administration" sections in Chapter 16,
"Securing Your Web Server" and Chapter 18, "Securing Your Database Server."
Fast Track — How To Implement the Guidance
Goal and Scope
This guide helps you to design, build, and configure hack-resilient Web applications. These
applications reduce the likelihood of successful attacks and mitigate the extent of damage
should an attack occur. Figure 1 shows the scope of the guide and its three-layered
approach: securing the network, securing the host, and securing the application.
The guide addresses security across the three physical tiers shown in Figure 1. It covers
the Web server, remote application server, and database server. At each tier, security is
addressed at the network layer, host layer, and application layer. Figure 1 also shows the
configuration categories that the guide uses to organize the various security configuration
settings that apply to the host and network, and the application vulnerability categories,
which are used to structure application security considerations.
The Holistic Approach
Web application security must be addressed across application tiers and at multiple layers.
An attacker can exploit weaknesses at any layer. For this reason, the guide takes a holistic
approach to application security and applies it at all three layers. This holistic approach to
security is shown in Figure 2.
Figure 2 shows the multiple layers covered by the guide, including the network, host, and
application. The host layer covers the operating system, platform services and components,
and run-time services and components. Platform services and components include
Microsoft® SQL Server™ 2000 and Enterprise Services. Runtime services and components
include ASP.NET and .NET code access security among others.
Securing Your Network
The three core elements of a secure network are the router, firewall, and switch. The guide
covers all three elements. Table 1 provides a brief description of each element.
The guide organizes the precautions you must take and the settings you must configure into
categories. By using these configuration categories, you can systematically walk through
the securing process from top to bottom or pick a particular category and complete specific
steps.
Figure 3 shows the configuration categories used throughout Part IV of this guide, "Securing
Your Network, Host, and Application."
It lists the top threats that affect Web applications at the network, host, and
application layers.
It presents a threat modeling process to help you identify which threats are relevant
to your application.
An outline of the threat modeling process covered in the guide is shown in Figure 4.
Use simple diagrams and tables to document the architecture of your application,
including subsystems, trust boundaries, and data flow.
Document each threat using a common threat template that defines a core set of
attributes that you should capture for each threat.
Rate the threats to prioritize and address the most significant threats first. These
threats are the ones that present the biggest risk. The rating process weighs the
probability of the threat against the damage that could result should an attack
occur. It might turn out that certain threats do not warrant any action when you
compare the risk posed by the threat with the resulting mitigation costs.
Applying the Guidance to Your Product Life Cycle
Different parts of the guide apply to the different phases of the product development life
cycle. The sequence of chapters in the guide mirrors the typical phases of the life cycle.
The chapter-to-role relationship is shown in Figure 5.
Note: Threat modeling and security assessment (specifically the code review and
deployment review chapters) apply when you build new Web applications or when
you review existing applications.
Implementing the Guidance
The guidance throughout the guide is task-based and modular, and each chapter relates to
the various stages of the product development life cycle and the various roles involved.
These roles include architects, developers, system administrators, and security
professionals. You can pick specific chapters to perform a particular task or use a series of
chapters for a phase of the product development life cycle.
The checklist shown in Table 3 highlights the areas covered by this guide that are required
to secure your network, host, and application.
Table 3: Security Checklist

□ Educate your teams about the threats that affect the network, host, and application layers. Identify common vulnerabilities and attacks, and learn countermeasures. For more information, see Chapter 2, "Threats and Countermeasures."
□ Create threat models for your Web applications. For more information, see Chapter 3, "Threat Modeling."
□ Review and implement your company's security policies. If you do not have security policies in place, create them. For more information about creating security policies, see "Security Policy Issues" at the SANS Info Sec Reading Room at http://www.sans.org/rr/catindex.php?cat_id=50.
□ Review your network security. For more information, see Chapter 15, "Securing Your Network."
□ Patch and update your servers. Review your server security settings and compare them with the snapshot of a secure server. For more information, see "Snapshot of a Secure Web Server" in Chapter 16, "Securing Your Web Server."
□ Educate your architects and developers about Web application security design guidelines and principles. For more information, see Chapter 4, "Design Guidelines for Secure Web Applications."
□ Educate your architects and developers about writing secure managed code. For more information, see Chapter 7, "Building Secure Assemblies," and Chapter 8, "Code Access Security in Practice."
□ Secure your developer workstations. For more information, see "How To: Secure Your Developer Workstation" in the "How To" section of this guide.
□ Review the designs of new Web applications and of existing applications. For more information, see Chapter 5, "Architecture and Design Review for Security."
□ Educate developers about how to perform code reviews. Perform code reviews for applications in development. For more information, see Chapter 21, "Code Review."
□ Perform deployment reviews of your applications to identify potential security vulnerabilities. For more information, see Chapter 22, "Deployment Review."
Who Does What?
Designing and building secure applications is a collaborative effort involving multiple roles.
This guide is structured to address each role and the relevant security factors to be
considered by each role. The categorization and the issues addressed are outlined below.
RACI Chart
RACI stands for:
Responsible (the role responsible for performing the task)
Accountable (the role with overall accountability for the task)
Consulted (roles whose subject matter expertise is consulted for the task)
Keep Informed (people with a vested interest who should be kept informed)
You can use a RACI chart at the beginning of your project to identify the key security
related tasks together with the roles that should execute each task.
Table 4 illustrates a simple RACI chart for this guide. (The heading row lists the roles; the
first column lists tasks, and the remaining columns delineate levels of accountability for each
task according to role.)
Table 4: RACI Chart

Tasks: Architect, System Administrator, Developer, Tester, Security Professional
Security Policies: R I A
Threat Modeling: A I I R
Security Design Principles: A I I C
Security Architecture: A C R
Architecture and Design Review: R A
Code Development: A R
Technology-Specific Threats: A R
Code Review: R I A
Security Testing: C I A C
Network Security: C R A
Host Security: C A I R
Application Security: C I A R
Deployment Review: C R I I A
Summary
This fast track has highlighted the basic approach taken by the guide to help you design and
develop hack-resilient Web applications, and to evaluate the security of existing
applications. It has also shown you how to apply the guidance depending on your specific
role in the project life cycle.
Part I: Introduction to Threats and Countermeasures
Chapter List
Chapter 1: Web Application Security Fundamentals
These are only some of the problems. Other significant problems are frequently overlooked.
Internal threats posed by rogue administrators, disgruntled employees, and casual
users who mistakenly stumble across sensitive data present significant risk. The
biggest problem of all may be ignorance.
The solution to Web application security is more than technology. It is an ongoing process
involving people and practices.
We Are Secure — We Have a Firewall
This is a common misconception; it depends on the threat. For example, a firewall may not
detect malicious input sent to your Web application. Also, consider the scenario where a
rogue administrator has direct access to your application.
Do firewalls have their place? Of course they do. Firewalls are great at blocking ports.
Some firewall applications examine communications and can provide very advanced
protection. Firewalls are an integral part of your security, but they are not a complete
solution by themselves.
The same holds true for Secure Sockets Layer (SSL). SSL is great at encrypting traffic
over the network. However, it does not validate your application's input or protect you from
a poorly configured server.
What Do We Mean By Security?
Security is fundamentally about protecting assets. Assets may be tangible items, such as a
Web page or your customer database — or they may be less tangible, such as your
company's reputation.
Security is a path, not a destination. As you analyze your infrastructure and applications,
you identify potential threats and understand that each threat presents a degree of risk.
Security is about risk management and implementing effective countermeasures.
Authentication
Authentication addresses the question: who are you? It is the process of uniquely
identifying the clients of your applications and services. These might be end users,
other services, processes, or computers. In security parlance, authenticated clients
are referred to as principals.
Authorization
Authorization addresses the question: what can you do? It is the process that
governs the resources and operations that the authenticated client is permitted to
access. Resources include files, databases, tables, rows, and so on, together with
system-level resources such as registry keys and configuration data. Operations
include performing transactions such as purchasing a product, transferring money
from one account to another, or increasing a customer's credit rating.
Auditing
Effective auditing and logging is the key to non-repudiation. Non-repudiation
guarantees that a user cannot deny performing an operation or initiating a
transaction.
Confidentiality
Confidentiality, also referred to as privacy, is the process of making sure that data
remains private and confidential, and that it cannot be viewed by unauthorized users
or eavesdroppers who monitor the flow of traffic across a network. Encryption is
frequently used to enforce confidentiality. Access control lists (ACLs) are another
means of enforcing confidentiality.
Integrity
Integrity is the guarantee that data is protected from accidental or deliberate
(malicious) modification. Like privacy, integrity is a key concern, particularly for
data passed across networks. Integrity for data in transit is typically provided by
using hashing techniques and message authentication codes.
Availability
From a security perspective, availability means that systems remain available for
legitimate users. The goal for many attackers with denial of service attacks is to
crash an application or to make sure that it is sufficiently overwhelmed so that other
users cannot access the application.
Threats, Vulnerabilities, and Attacks Defined
A threat is any potential occurrence, malicious or otherwise, that could harm an asset. In
other words, a threat is any bad thing that can happen to your assets.
A vulnerability is a weakness that makes a threat possible. This may be because of poor
design, configuration mistakes, or inappropriate and insecure coding techniques. Weak
input validation is an example of an application layer vulnerability, which can result in
input attacks such as SQL injection.
An attack is an action that exploits a vulnerability or enacts a threat. To
summarize, a threat is a potential event that can adversely affect an asset, whereas a
successful attack exploits vulnerabilities in your system.
How Do You Build a Secure Web Application?
It is not possible to design and build a secure Web application until you know your threats.
An increasingly important discipline and one that is recommended to form part of your
application's design phase is threat modeling. The purpose of threat modeling is to analyze
your application's architecture and design and identify potentially vulnerable areas that may
allow a user, perhaps mistakenly, or an attacker with malicious intent, to compromise your
system's security.
After you know your threats, design with security in mind by applying timeworn and proven
security principles. As developers, you must follow secure coding techniques to develop
secure, robust, and hack-resilient solutions. The design and development of application
layer software must be supported by a secure network, host, and application configuration
on the servers where the application software is to be deployed.
Secure Your Network, Host, and Application
"A vulnerability in a network will allow a malicious user to exploit a host or an application.
A vulnerability in a host will allow a malicious user to exploit a network or an application. A
vulnerability in an application will allow a malicious user to exploit a network or a host."
To build secure Web applications, a holistic approach to application security is required and
security must be applied at all three layers. This approach is shown in Figure 1.1.
With the framework that these categories provide, you can systematically evaluate or
secure your server's configuration instead of applying security settings on an ad-hoc basis.
The rationale for these particular categories is shown in Table 1.2.
These categories are used as a framework throughout this guide. Because the categories
represent the areas where security mistakes are most frequently made, they are used to
illustrate guidance for application developers and architects. The categories are also used
as a framework when evaluating the security of a Web application. With these categories,
you can focus consistently on the key design and implementation choices that most affect
your application's security. Application vulnerability categories are described in Table 1.3.
For more information on the Open Hack Web application, see the MSDN article,
"Open Hack: Building and Configuring More Secure Web Sites," at
http://msdn.microsoft.com/library/default.asp?url=/library/en-
us/dnnetsec/html/openhack.asp.
How to identify and counter threats at the network, host, and application levels
Overview
When you incorporate security features into your application's design, implementation, and
deployment, it helps to have a good understanding of how attackers think. By thinking like
attackers and being aware of their likely tactics, you can be more effective when applying
countermeasures. This chapter describes the classic attacker methodology and profiles the
anatomy of a typical attack.
This chapter analyzes Web application security from the perspectives of threats,
countermeasures, vulnerabilities, and attacks. The following set of core terms are defined
to avoid confusion and to ensure they are used in the correct context.
Asset. A resource of value such as the data in a database or on the file system, or
a system resource
This chapter also identifies a set of common network, host, and application level threats,
and the recommended countermeasures to address each one. The chapter does not
contain an exhaustive list of threats, but it does highlight many top threats. With this
information and knowledge of how an attacker works, you will be able to identify additional
threats. You need to know the threats that are most likely to impact your system to be able
to build effective threat models. These threat models are the subject of Chapter 3, "Threat
Modeling."
How to Use This Chapter
The following are recommendations on how to use this chapter:
Become familiar with specific threats that affect the network, host, and
application. The threats are unique for the various parts of your system, although
the attacker's goals may be the same.
Use the threats to identify risk. Then create a plan to counter those threats.
When you design, build, and secure new systems, keep the threats in this
chapter in mind. The threats exist regardless of the platform or technologies that
you use.
Anatomy of an Attack
By understanding the basic approach used by attackers to target your Web application, you
will be better equipped to take defensive measures because you will know what you are up
against. The basic steps in attacker methodology are summarized below and illustrated in
Figure 2.1:
Survey and assess
Exploit and penetrate
Escalate privileges
Maintain access
Deny service
For example, an attacker can detect a cross-site scripting (XSS) vulnerability by testing to
see if any controls in a Web page echo back output.
For an attacker, the easiest way into an application is through the same entrance that
legitimate users use — for example, through the application's logon page or a page that
does not require authentication.
Escalate Privileges
After attackers manage to compromise an application or network, perhaps by injecting
code into an application or creating an authenticated session with the Microsoft® Windows®
2000 operating system, they immediately attempt to escalate privileges. Specifically, they
look for administration privileges provided by accounts that are members of the
Administrators group. They also seek out the high level of privileges offered by the local
system account.
Using least privileged service accounts throughout your application is a primary defense
against privilege escalation attacks. Also, many network level privilege escalation attacks
require an interactive logon session.
Maintain Access
Having gained access to a system, an attacker takes steps to make future access easier
and to cover his or her tracks. Common approaches for making future access easier
include planting back-door programs or using an existing account that lacks strong
protection. Covering tracks typically involves clearing logs and hiding tools. As such, audit
logs are a primary target for the attacker.
Log files should be secured, and they should be analyzed on a regular basis. Log file
analysis can often uncover the early signs of an attempted break-in before damage is done.
Deny Service
Attackers who cannot gain access often mount a denial of service attack to prevent others
from using the application. For other attackers, the denial of service option is their goal from
the outset. An example is the SYN flood attack, where the attacker uses a program to send
a flood of TCP SYN requests to fill the pending connection queue on the server. This
prevents other users from establishing network connections.
Understanding Threat Categories
While there are many variations of specific attacks and attack techniques, it is useful to
think about threats in terms of what the attacker is trying to achieve. This changes your
focus from the identification of every specific attack — which is really just a means to an
end — to focusing on the end results of possible attacks.
STRIDE
Threats faced by the application can be categorized based on the goals and purposes of
the attacks. A working knowledge of these categories of threats can help you organize a
security strategy so that you have planned responses to threats. STRIDE is the acronym
used at Microsoft to categorize different threat types. STRIDE stands for Spoofing,
Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of
privilege.
Network Threats and Countermeasures
Top network level threats include:
Information gathering
Sniffing
Spoofing
Session hijacking
Denial of service
Information Gathering
Network devices can be discovered and profiled in much the same way as other types of
systems. Attackers usually start with port scanning. After they identify open ports, they use
banner grabbing and enumeration to detect device types and to determine operating
system and application versions. Armed with this information, an attacker can attack known
vulnerabilities that may not be updated with security patches.
Configure operating systems that host network software (for example, software
firewalls) to prevent footprinting by disabling unused protocols and unnecessary
ports.
Sniffing
Sniffing or eavesdropping is the act of monitoring traffic on the network for data such as
plaintext passwords or configuration information. With a simple packet sniffer, an attacker
can easily read all plaintext traffic. Also, attackers can often crack packets protected by
weak or lightweight encryption and decipher a payload that you considered to be safe.
The sniffing of packets requires a packet sniffer in the path of the server/client
communication.
Spoofing
Spoofing is a means to hide one's true identity on the network. To create a spoofed identity,
an attacker uses a fake source address that does not represent the actual address of the
packet. Spoofing may be used to hide the original source of an attack or to work around
network access control lists (ACLs) that are in place to limit host access based on source
address rules.
Although carefully crafted spoofed packets may never be traced to the original sender, a
combination of filtering rules can prevent spoofed packets from originating inside your
network and can block obviously spoofed packets at the perimeter.
Filter incoming packets that appear to come from an internal IP address at your
perimeter.
Filter outgoing packets that appear to originate from an invalid local IP address.
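These two filtering rules can be sketched in code. The following Python fragment is illustrative only; the internal address range and function names are assumptions, and real ingress/egress filtering belongs in routers and firewalls rather than application code:

```python
import ipaddress

# Hypothetical internal network range; adjust for your own topology.
INTERNAL_NET = ipaddress.ip_network("10.0.0.0/8")

def drop_spoofed_ingress(source_ip: str) -> bool:
    """Drop an incoming packet whose source address claims to be internal
    (it arrived at the perimeter, so it is likely spoofed)."""
    return ipaddress.ip_address(source_ip) in INTERNAL_NET

def drop_spoofed_egress(source_ip: str) -> bool:
    """Drop an outgoing packet whose source address is not a valid local
    address (it may be part of an attack launched from your network)."""
    return ipaddress.ip_address(source_ip) not in INTERNAL_NET
```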
Session Hijacking
Session hijacking, also known as a man in the middle attack, deceives a server or a client
into accepting the upstream host as the legitimate host. In reality, the upstream host is an
attacker's host that manipulates the network so that the attacker's host appears to be the
desired destination.
Denial of Service
Denial of service denies legitimate users access to a server or services. The SYN flood
attack is a common example of a network level denial of service attack. It is easy to launch
and difficult to track. The aim of the attack is to send more requests to a server than it can
handle. The attack exploits a potential vulnerability in the TCP/IP connection establishment
mechanism and floods the server's pending connection queue.
Harden the TCP/IP stack by applying the appropriate registry settings to increase
the size of the TCP connection queue, decrease the connection establishment
period, and employ dynamic backlog mechanisms to ensure that the connection
queue is never exhausted.
Use a network Intrusion Detection System (IDS) because these can automatically
detect and respond to SYN attacks.
Host Threats and Countermeasures
Host threats are directed at the system software upon which your applications are built.
This includes Windows 2000, Internet Information Services (IIS), the .NET Framework, and
SQL Server 2000, depending upon the specific server role. Top host level threats include:
Viruses, Trojan horses, and worms
Footprinting
Profiling
Password cracking
Denial of service
Arbitrary code execution
Unauthorized access
Viruses, Trojan Horses, and Worms
Although these three threats are actually attacks, together they pose a significant threat to
Web applications, the hosts these applications live on, and the network used to deliver
these applications. The success of these attacks on any system is made possible by many
vulnerabilities such as weak defaults, software bugs, user error, and inherent vulnerabilities
in Internet protocols.
Countermeasures that you can use against viruses, Trojan horses, and worms include:
Stay current with the latest operating system service packs and software patches.
Footprinting
Examples of footprinting are port scans, ping sweeps, and NetBIOS enumeration that can
be used by attackers to glean valuable system-level information to help prepare for more
significant attacks. The type of information potentially revealed by footprinting includes
account details, operating system and other software versions, server names, and
database schema details.
Use an IDS that can be configured to pick up footprinting patterns and reject
suspicious traffic.
Password Cracking
If the attacker cannot establish an anonymous connection with the server, he or she will try
to establish an authenticated connection. For this, the attacker must know a valid username
and password combination. If you use default account names, you are giving the attacker a
head start. Then the attacker only has to crack the account's password. The use of blank
or weak passwords makes the attacker's job even easier.
Apply lockout policies to end-user accounts to limit the number of retry attempts
that can be used to guess the password.
Do not use default account names, and rename standard accounts such as the
administrator's account and the anonymous Internet user account used by many
Web applications.
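A lockout policy of this kind can be sketched as follows. This Python fragment is a minimal in-memory illustration with an assumed threshold; production systems should rely on the platform's account lockout policy rather than application code:

```python
# Minimal lockout policy sketch. MAX_ATTEMPTS is an assumed threshold.
MAX_ATTEMPTS = 5
failed_attempts = {}

def record_failure(username: str) -> bool:
    """Record a failed logon; return True once the account is locked out."""
    failed_attempts[username] = failed_attempts.get(username, 0) + 1
    return failed_attempts[username] >= MAX_ATTEMPTS

def record_success(username: str) -> None:
    """A successful logon resets the failure counter."""
    failed_attempts.pop(username, None)
```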
Denial of Service
Denial of service can be attained by many methods aimed at several targets within your
infrastructure. At the host, an attacker can disrupt service by brute force against your
application, or an attacker may know of a vulnerability that exists in the service your
application is hosted in or in the operating system that runs your server.
Configure your applications, services, and operating system with denial of service in
mind.
Make sure your account lockout policies cannot be exploited to lock out well known
service accounts.
Make sure your application is capable of handling high volumes of traffic and that
thresholds are in place to handle abnormally high loads.
Stay current with patches and updates to ensure that newly discovered buffer
overflows are speedily patched.
Unauthorized Access
Inadequate access controls could allow an unauthorized user to access restricted
information or perform restricted operations. Common vulnerabilities include weak IIS Web
access controls, including Web permissions and weak NTFS permissions.
When network and host level entry points are fully secured, the public interfaces exposed
by your application become the only source of attack. The input to your application is both
a means to test your system and a way to execute code on an attacker's behalf. Does
your application blindly trust input? If it does, your application may be susceptible to the
following:
Buffer overflows
Cross-site scripting
SQL injection
Canonicalization
The following section examines these vulnerabilities in detail, including what makes these
vulnerabilities possible.
Buffer Overflows
Buffer overflow vulnerabilities can lead to denial of service attacks or code injection. A
denial of service attack causes a process crash; code injection alters the program
execution address to run an attacker's injected code. The following code fragment
illustrates a common example of a buffer overflow vulnerability.
void SomeFunction( char *pszInput )
{
char szBuffer[10];
// Input is copied straight into the buffer with no bounds checking
strcpy(szBuffer, pszInput);
. . .
}
Managed .NET code is not susceptible to this problem because array bounds are
automatically checked whenever an array is accessed. This makes the threat of buffer
overflow attacks on managed code much less of an issue. It is still a concern, however,
especially where managed code calls unmanaged APIs or COM objects.
When possible, limit your application's use of unmanaged code, and thoroughly
inspect the unmanaged APIs to ensure that input is properly validated.
Inspect the managed code that calls the unmanaged API to ensure that only
appropriate values can be passed as parameters to the unmanaged API.
Use the /GS flag to compile code developed with the Microsoft Visual C++
development system. The /GS flag causes the compiler to inject security checks
into the compiled code. This is not a fail-proof solution or a replacement for your
specific validation code; it does, however, protect your code from commonly known
buffer overflow attacks. For more information, see the .NET Framework Product
documentation http://msdn.microsoft.com/library/default.asp?url=/library/en-
us/vccore/html/vclrfGSBufferSecurity.asp and Microsoft Knowledge Base article
325483 "WebCast: Compiler Security Checks: The –GS compiler switch."
The attacker's code usually ends up running under the process security context. This
emphasizes the importance of using least privileged process accounts. If the current thread
is impersonating, the attacker's code ends up running under the security context defined by
the thread impersonation token. The first thing an attacker usually does is call the
RevertToSelf API to revert to the process level security context that the attacker hopes
has higher privileges.
Make sure you validate input for type and length, especially before you call unmanaged
code because unmanaged code is particularly susceptible to buffer overflows.
Cross-Site Scripting
An XSS attack can cause arbitrary code to run in a user's browser while the browser is
connected to a trusted Web site. The attack targets your application's users and not the
application itself, but it uses your application as the vehicle for the attack.
Because the script code is downloaded by the browser from a trusted site, the browser has
no way of knowing that the code is not legitimate. Internet Explorer security zones provide
no defense. Because the attacker's code has access to the cookies associated with the
trusted site, which are stored on the user's local computer, a user's authentication cookies
are typically the target of attack.
If the Web application takes the query string, fails to properly validate it, and then returns it
to the browser, the script code executes in the browser. A proof-of-concept attack might
simply display a harmless pop-up message. With the appropriate script, however, the attacker can easily extract the
user's authentication cookie, post it to his site, and subsequently make a request to the
target Web site as the authenticated user.
Perform thorough input validation. Your applications must ensure that input from
query strings, form fields, and cookies are valid for the application. Consider all
user input as possibly malicious, and filter or sanitize for the context of the
downstream code. Validate all input for known valid values and then reject all other
input. Use regular expressions to validate input data received via HTML form fields,
cookies, and query strings.
Use HTMLEncode and URLEncode functions to encode any output that includes
user input. This converts executable script into harmless HTML.
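As a rough illustration of both countermeasures, the following Python sketch uses the standard library's equivalent of HTMLEncode (html.escape) together with a whitelist regular expression; the username policy shown is an assumption:

```python
import html
import re

# Whitelist pattern for a username field (assumed policy:
# letters, digits, and underscore only, at most 32 characters).
USERNAME_RE = re.compile(r"[A-Za-z0-9_]{1,32}")

def is_valid_username(value: str) -> bool:
    """Accept only known valid values; reject everything else."""
    return USERNAME_RE.fullmatch(value) is not None

def render_greeting(user_input: str) -> str:
    """Encode before echoing so script becomes harmless HTML."""
    return "Hello, " + html.escape(user_input)
```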
SQL Injection
A SQL injection attack exploits vulnerabilities in input validation to run arbitrary commands in
the database. It can occur when your application uses input to construct dynamic SQL
statements to access the database. It can also occur if your code uses stored procedures
that are passed strings that contain unfiltered user input. The issue is magnified if the
application uses an over-privileged account to connect to the database. In this instance it is
possible to use the database server to run operating system commands and potentially
compromise other servers, in addition to being able to retrieve, manipulate, and destroy
data.
Attackers can inject SQL by terminating the intended SQL statement with the single quote
character followed by a semicolon character to begin a new command, and then executing
the command of their choice. Consider the following character string entered into the txtuid
field.
'; DROP TABLE Customers --
This results in the following statement being submitted to the database for execution.
SELECT * FROM Users WHERE UserName=''; DROP TABLE Customers --'
This deletes the Customers table, assuming that the application's login has sufficient
permissions in the database (another reason to use a least privileged login in the
database). The double dash (--) denotes a SQL comment and is used to comment out any
other characters added by the programmer, such as the trailing quote.
Note: The semicolon is not actually required. SQL Server will execute two commands
separated by spaces.
Other more subtle tricks can be performed. Supplying this input to the txtuid field:
' OR 1=1 --
makes the WHERE clause always evaluate to true. Because 1=1 is always true, the
attacker retrieves every row of data from the Users table.
Perform thorough input validation. Your application should validate its input prior to
sending a request to the database.
Use parameterized stored procedures for database access to ensure that input
strings are not treated as executable statements. If you cannot use stored
procedures, use SQL parameters when you build SQL commands.
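The effect of parameterization can be sketched with Python's built-in sqlite3 module; the table and column names are assumptions, and the .NET equivalent would use SqlParameter objects:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (UserName TEXT, Role TEXT)")
conn.execute("INSERT INTO Users VALUES ('alice', 'admin')")

def find_user(user_input: str):
    """The ? placeholder binds the input as data, never as SQL text,
    so injection strings simply fail to match any row."""
    cur = conn.execute("SELECT * FROM Users WHERE UserName = ?",
                       (user_input,))
    return cur.fetchall()
```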
Canonicalization
Canonicalization refers to the fact that different forms of input can resolve to the same
standard name (the canonical name). Code is particularly susceptible to canonicalization
issues if it makes security decisions based on the name of a resource that is passed to the
program as input. Files, paths, and URLs are resource types that are vulnerable to
canonicalization because in each case there are many different ways to represent the
same name. For example, a single file could be represented as:
c:\temp\somefile.dat
somefile.dat
c:\temp\subdir\..\somefile.dat
c:\ temp\ somefile.dat
..\somefile.dat
Ideally, your code does not accept input file names. If it does, the name should be
converted to its canonical form prior to making security decisions, such as whether access
should be granted or denied to the specified file.
Avoid input file names where possible and instead use absolute file paths that
cannot be changed by the end user.
Make sure that file names are well formed (if you must accept file names as input)
and validate them within the context of your application. For example, check that
they are within your application's directory hierarchy.
Ensure that the character encoding is set correctly to limit how input can be
represented. Check that your application's Web.config has set the
requestEncoding and responseEncoding attributes on the <globalization>
element.
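A canonicalization check along these lines might look like the following Python sketch, where the application directory is an assumed value:

```python
import os

# Assumed application directory; in practice this comes from configuration.
APP_ROOT = os.path.realpath("/var/www/app/files")

def is_safe_path(requested: str) -> bool:
    """Convert the requested name to its canonical form first, then
    check that it stays inside the application's directory hierarchy."""
    full = os.path.realpath(os.path.join(APP_ROOT, requested))
    return full == APP_ROOT or full.startswith(APP_ROOT + os.sep)
```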
Authentication
Depending on your requirements, there are several available authentication mechanisms to
choose from. If they are not correctly chosen and implemented, the authentication
mechanism can expose vulnerabilities that attackers can exploit to gain access to your
system. The top threats that exploit authentication vulnerabilities include:
Network eavesdropping
Dictionary attacks
Credential theft
Network Eavesdropping
If authentication credentials are passed in plaintext from client to server, an attacker armed
with rudimentary network monitoring software on a host on the same network can capture
traffic and obtain user names and passwords.
Use authentication mechanisms that do not transmit the password over the network
such as Kerberos protocol or Windows authentication.
Make sure passwords are encrypted (if you must transmit passwords over the
network) or use an encrypted communication channel, for example with SSL.
Dictionary Attacks
This attack is used to obtain passwords. Most password systems do not store plaintext
passwords or encrypted passwords. They avoid encrypted passwords because a
compromised key leads to the compromise of all passwords in the data store. Lost keys
mean that all passwords are invalidated.
Most user store implementations hold password hashes (or digests). Users are
authenticated by re-computing the hash based on the user-supplied password value and
comparing it against the hash value stored in the database. If an attacker manages to
obtain the list of hashed passwords, a brute force attack can be used to crack the
password hashes.
With the dictionary attack, an attacker uses a program to iterate through all of the words in
a dictionary (or multiple dictionaries in different languages) and computes the hash for each
word. The resultant hash is compared with the value in the data store. Weak passwords
such as "Yankees" (a favorite team) or "Mustang" (a favorite car) will be cracked quickly.
Stronger passwords such as "?You'LlNevaFiNdMeyePasSWerd!", are less likely to be
cracked.
Note: Once the attacker has obtained the list of password hashes, the dictionary attack
can be performed offline and does not require interaction with the application.
Use strong passwords that are complex, are not regular words, and contain a
mixture of upper case, lower case, numeric, and special characters.
Store non-reversible password hashes in the user store. Also combine a salt value
(a cryptographically strong random number) with the password hash.
For more information about storing password hashes with added salt, see Chapter 14,
"Building Secure Data Access."
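A salted, non-reversible password verifier can be sketched as follows using Python's hashlib; the iteration count is an assumed work factor for the sketch:

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # assumed work factor

def hash_password(password, salt=None):
    """Return (salt, digest) using PBKDF2-HMAC-SHA256 with a random salt,
    so identical passwords produce different stored hashes."""
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                                 ITERATIONS)
    return salt, digest

def verify_password(password, salt, expected):
    """Re-compute the hash from the supplied password and compare."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                                    ITERATIONS)
    return hmac.compare_digest(candidate, expected)
```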
Set the cookie timeout to a value that forces authentication after a relatively short
time interval. Although this doesn't prevent replay attacks, it reduces the time
interval in which the attacker can replay a request without being forced to re-
authenticate because the session has timed out.
Credential Theft
If your application implements its own user store containing user account names and
passwords, compare its security to the credential stores provided by the platform, for
example, a Microsoft Active Directory® directory service or Security Accounts Manager
(SAM) user store. Browser history and cache also store user login information for future
use. If the terminal is accessed by someone other than the user who logged on, and the
same page is hit, the saved login will be available.
Store password verifiers in the form of one way hashes with added salt.
Enforce account lockout for end-user accounts after a set number of retry
attempts.
To counter the possibility of the browser cache allowing login access, create
functionality that either allows the user to choose not to save credentials, or
enforce this behavior as a default policy.
Authorization
Based on user identity and role membership, authorization to a particular resource or
service is either allowed or denied. Top threats that exploit authorization vulnerabilities
include:
Elevation of privilege
Data tampering
Luring attacks
Elevation of Privilege
When you design an authorization model, you must consider the threat of an attacker trying
to elevate privileges to a powerful account such as a member of the local administrators
group or the local system account. By doing this, the attacker is able to take complete
control over the application and local machine. For example, with classic ASP programming,
calling the RevertToSelf API from a component might cause the executing thread to run as
the local system account with the most power and privileges on the local machine.
The main countermeasure that you can use to prevent elevation of privilege is to use least
privileged process, service, and user accounts.
Perform role checks before allowing access to the operations that could potentially
reveal sensitive data.
Use standard encryption to store sensitive data in configuration files and databases.
Data Tampering
Use strong access controls to protect data in persistent stores to ensure that only
authorized users can access and modify the data.
Use role-based security to differentiate between users who can view data and
users who can modify data.
Luring Attacks
A luring attack occurs when an entity with few privileges is able to have an entity with more
privileges perform an action on its behalf.
To counter the threat, you must restrict access to trusted code with the appropriate
authorization. Using .NET Framework code access security helps in this respect by
authorizing calling code whenever a secure resource is accessed or a privileged operation
is performed.
Configuration Management
Many applications support configuration management interfaces and functionality to allow
operators and administrators to change configuration parameters, update Web site content,
and to perform routine maintenance. Top configuration management threats include:
Unauthorized access to administration interfaces
Unauthorized access to configuration stores
Retrieval of plaintext configuration secrets
Keep custom configuration stores outside of the Web space. This removes the
potential to download Web server configurations to exploit their vulnerabilities.
Sensitive Data
Top threats to sensitive data include:
Access to sensitive data in storage
Network eavesdropping
Data tampering
You must secure sensitive data in storage to prevent a user — malicious or otherwise —
from gaining access to and reading the data.
Use restricted ACLs on the persistent data stores that contain sensitive data.
Use identity and role-based authorization to ensure that only the user or users with
the appropriate level of authority are allowed access to sensitive data. Use role-
based security to differentiate between users who can view data and users who
can modify data.
Network Eavesdropping
The HTTP data for a Web application travels across networks in plaintext and is subject to
network eavesdropping attacks, where an attacker uses network monitoring software to
capture and potentially modify sensitive data.
Data Tampering
Data tampering refers to the unauthorized modification of data, often as it is passed over
the network.
One countermeasure to prevent data tampering is to protect sensitive data passed across
the network with tamper-resistant protocols such as hashed message authentication codes
(HMACs). An HMAC provides message integrity as follows:
1. The sender uses a shared secret key to create a hash based on the message
payload.
2. The sender transmits the hash along with the message payload.
3. The receiver uses the shared key to recalculate the hash based on the received
message payload. The receiver then compares the new hash value with the
transmitted hash value. If they are the same, the message cannot have been
tampered with.
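The steps above can be sketched with a keyed hash. This Python fragment uses the standard hmac module; the shared key value is an assumption and would be distributed out of band:

```python
import hashlib
import hmac

SHARED_KEY = b"assumed-shared-secret"  # known only to sender and receiver

def sign(payload: bytes) -> bytes:
    """Step 1: the sender computes a keyed hash of the message payload."""
    return hmac.new(SHARED_KEY, payload, hashlib.sha1).digest()

def verify(payload: bytes, received_mac: bytes) -> bool:
    """Step 3: the receiver recomputes the hash and compares it with the
    transmitted value; a mismatch means the message was tampered with."""
    return hmac.compare_digest(sign(payload), received_mac)
```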
Session Management
Session management for Web applications is an application layer responsibility. Session
security is critical to the overall security of the application. Top session management
threats include:
Session hijacking
Session replay
Session Hijacking
A session hijacking attack occurs when an attacker uses network monitoring software to
capture the authentication token (often a cookie) used to represent a user's session with an
application. With the captured cookie, the attacker can spoof the user's session and gain
access to the application. The attacker has the same level of privileges as the legitimate
user.
Use SSL to create a secure communication channel and only pass the
authentication cookie over an HTTPS connection.
Make sure you limit the expiration period on the session cookie if you do not use
SSL. Although this does not prevent session hijacking, it reduces the time window
available to the attacker.
Session Replay
Session replay occurs when a user's session token is intercepted and submitted by an
attacker to bypass the authentication mechanism. For example, if the session token is in
plaintext in a cookie or URL, an attacker can sniff it. The attacker then posts a request
using the hijacked session token.
Create a "do not remember me" option to allow no session data to be stored on the
client.
Man in the Middle Attacks
A man in the middle attack occurs when the attacker intercepts messages sent between
you and your intended recipient. The attacker then changes your message and sends it to
the original recipient. The recipient receives the message, sees that it came from you, and
acts on it. When the recipient sends a message back to you, the attacker intercepts it,
alters it, and returns it to you. You and your recipient never know that you have been
attacked.
Use cryptography. If you encrypt the data before transmitting it, the attacker can
still intercept it but cannot read it or alter it. If the attacker cannot read it, he or she
cannot know which parts to alter. If the attacker blindly modifies your encrypted
message, then the original recipient is unable to successfully decrypt it and, as a
result, knows that it has been tampered with.
Cryptography
Top threats surrounding your application's use of cryptography include:
Poor key generation or key management
Weak or custom encryption
Checksum spoofing
Countermeasures to address the threat of poor key generation and key management
include:
Use built-in encryption routines that include secure key management. Data
Protection application programming interface (DPAPI) is an example of an
encryption service provided on Windows 2000 and later operating systems where
the operating system manages the key.
Use strong random key generation functions and store the key in a restricted
location — for example, in a registry key secured with a restricted ACL — if you
use an encryption mechanism that requires you to generate or manage the key.
Checksum Spoofing
Do not rely on hashes alone to provide data integrity for messages sent over networks.
Hashes such as the Secure Hash Algorithm (SHA1) and the Message Digest algorithm
(MD5) can be intercepted and changed. Consider the following base 64 encoded, UTF-8
message with an appended hash.
Plaintext: Place 10 orders.
Hash: T0mUNdEQh13IO9oTcaP4FYDX6pU=
If an attacker intercepts the message by monitoring the network, the attacker could update
the message and recompute the hash (guessing the algorithm that you used). For example,
the message could be changed to:
Plaintext: Place 100 orders.
Hash: oEDuJpv/ZtIU7BXDDNv17EAHeAU=
When the recipient processes the message and runs the plaintext ("Place 100 orders.")
through the hashing algorithm, the hash it calculates matches the hash the attacker
computed, so the tampering goes undetected.
To counter this attack, use a MAC or HMAC. The Message Authentication Code Triple Data
Encryption Standard (MACTripleDES) algorithm computes a MAC, and HMACSHA1
computes an HMAC. Both use a key to produce a checksum. With these algorithms, an
attacker needs to know the key to generate a checksum that would compute correctly at
the receiver.
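The contrast can be sketched as follows: an attacker can recompute a keyless hash for a tampered message, but cannot produce a valid HMAC without the key (Python sketch; the key values are assumptions):

```python
import hashlib
import hmac

tampered = b"Place 100 orders."
KEY = b"assumed-shared-secret"  # unknown to the attacker

# Keyless hash: the attacker simply recomputes it for the tampered
# message, and the receiver's own computation agrees with it.
attacker_hash = hashlib.sha1(tampered).digest()
receiver_hash = hashlib.sha1(tampered).digest()

# HMAC: without the shared key, the attacker's forgery does not match
# what the receiver computes with the real key.
genuine_mac = hmac.new(KEY, tampered, hashlib.sha1).digest()
forged_mac = hmac.new(b"attacker-guess", tampered, hashlib.sha1).digest()
```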
Parameter Manipulation
Parameter manipulation attacks are a class of attack that relies on the modification of the
parameter data sent between the client and Web application. This includes query strings,
form fields, cookies, and HTTP headers. Top parameter manipulation threats include:
Query string manipulation
Form field manipulation
Cookie manipulation
HTTP header manipulation
Avoid using query string parameters that contain sensitive data or data that can
influence the security logic on the server. Instead, use a session identifier to identify
the client and store sensitive items in the session store on the server.
To counter the threat of form field manipulation, instead of using hidden form fields, use
session identifiers to reference state maintained in the state store on the server.
Cookie Manipulation
Cookies are susceptible to modification by the client. This is true of both persistent and
memory-resident cookies. A number of tools are available to help an attacker modify the
contents of a memory-resident cookie. Cookie manipulation refers to the modification of a
cookie, usually to gain unauthorized access to a Web site.
While SSL protects cookies over the network, it does not prevent them from being modified
on the client computer. To counter the threat of cookie manipulation, encrypt or use an
HMAC with the cookie.
HTTP Header Manipulation
Do not base your security decisions on HTTP headers. For example, do not trust the HTTP
Referer header to determine where a client came from, because it is easily falsified.
Exception Management
Exceptions that are allowed to propagate to the client can reveal internal implementation
details that make no sense to the end user but are useful to attackers. Applications that do
not use exception handling or implement it poorly are also subject to denial of service
attacks. Top exception handling threats include:
Attacker reveals implementation details
Denial of service
Countermeasures to help prevent internal implementation details from being revealed to the
client include:
Use exception handling throughout your application's code base.
Handle and log exceptions that are allowed to propagate to the application
boundary.
Return generic, harmless error messages to the client.
Denial of Service
Attackers will probe a Web application, usually by passing deliberately malformed input.
They often have two goals in mind. The first is to cause exceptions that reveal useful
information and the second is to crash the Web application process. This can occur if
exceptions are not properly caught and handled.
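A boundary handler that addresses both goals, logging the detail server-side while returning only a generic message, might be sketched as follows (Python; the message formats and field name are assumptions):

```python
import logging

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("app")

def handle_order(raw_quantity: str) -> str:
    """Catch malformed input at the application boundary: log the full
    detail for operators, return a generic message to the client so no
    implementation details leak and the process does not crash."""
    try:
        quantity = int(raw_quantity)
        return "Ordered %d items." % quantity
    except ValueError:
        log.exception("Rejected malformed quantity")
        return "An error occurred. Please try again."
```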
Auditing and Logging
Audit and log activity on the Web server and database server, and on the
application server as well, if you use one.
Log key events such as transactions and login and logout events.
Do not use shared accounts since the original source cannot be determined.
Use platform-level auditing to audit login and logout events, access to the file
system, and failed object access attempts.
Back up log files and regularly analyze them for signs of suspicious activity.
Summary
This chapter has shown you the top threats that have the potential to compromise your
network, host infrastructure, and applications. Knowledge of these threats, together with
the appropriate countermeasures, provides essential information for the threat modeling
process. It enables you to identify the threats that are specific to your particular scenario
and to prioritize them based on the degree of risk they pose to your system. This structured
process for identifying and prioritizing threats is referred to as threat modeling. For more
information, see Chapter 3, "Threat Modeling."
Additional Resources
For further related reading, see the following resources:
For more information about network threats and countermeasures, see Chapter 15,
"Securing Your Network."
For more information about host threats and countermeasures, see Chapter 16,
"Securing Your Web Server," Chapter 17, "Securing Your Application Server,"
Chapter 18, "Securing Your Database Server," and Chapter 19, "Securing Your
ASP.NET Application."
For more information about addressing the application level threats presented in
this chapter, see the Building chapters in Part III, "Building Secure Web
Applications" of this guide.
Michael Howard and David LeBlanc, Writing Secure Code, 2nd Edition. Microsoft
Press, Redmond, WA, 2002.
For more information about tracking and fixing buffer overruns, see the MSDN
article, "Fix Those Buffer Overruns," at
http://msdn.microsoft.com/library/default.asp?url=/library/en-
us/dncode/html/secure05202002.asp
Chapter 3: Threat Modeling
In This Chapter
Steps to decompose an application architecture to discover vulnerabilities
How to identify and document threats that are relevant to your application
Overview
Threat modeling allows you to systematically identify and rate the threats that are most
likely to affect your system. By identifying and rating threats based on a solid understanding
of the architecture and implementation of your application, you can address threats with
appropriate countermeasures in a logical order, starting with the threats that present the
greatest risk.
Threat modeling uses a structured approach that is far more cost-efficient and effective than
applying security features in a haphazard manner without knowing precisely what threats
each feature is supposed to address. With a random, "shotgun" approach to security, how
do you know when your application is "secure enough," and how do you know the areas
where your application is still vulnerable? In short, until you know your threats, you cannot
secure your system.
Before You Begin
Before you start the threat modeling process, it is important that you understand the
following basic terminology:
Asset. A resource of value, such as the data in a database or on the file system, or a
system resource.
Threat. A potential occurrence, malicious or otherwise, that could harm an asset.
Vulnerability. A weakness that makes a threat possible.
Attack (or exploit). An action taken to harm an asset.
Countermeasure. A safeguard that addresses a threat and mitigates risk.
Consider a simple house analogy: an item of jewelry in a house is an asset and a burglar is
an attacker. A door is a feature of the house and an open door represents a vulnerability.
The burglar can exploit the open door to gain access to the house and steal the jewelry. In
other words, the attacker exploits a vulnerability to gain access to an asset. The
appropriate countermeasure in this case is to close and lock the door.
How to Use This Chapter
This chapter outlines a generic process that helps you identify and document threats to your
application. The following are recommendations on how to use this chapter:
Establish a process for threat modeling. Use this chapter as a starting point for
introducing a threat modeling process in your organization if you do not already
have one. If you already have a process, then you can use this as a reference for
comparison.
Use the other chapters in this guide to familiarize yourself with the most
common threats. Read Chapter 2, "Threats and Countermeasures," for an
overview of common threats that occur at the network, host, and application levels.
For more specific threats to your Web server, application server, and
database server, see "Threats and Countermeasures" in Chapter 16,
"Securing Your Web Server," Chapter 17, "Securing Your Application
Server," and Chapter 18, "Securing Your Database Server."
Evolve your threat model. Build a threat model early and then evolve it as you go.
It is a work in progress. Security threats evolve, and so does your application.
Having a document that identifies both what the known threats are and how they
have been addressed (or not) puts you in control of the security of your application.
Threat Modeling Principles
Threat modeling should not be a one time only process. It should be an iterative process
that starts during the early phases of the design of your application and continues
throughout the application life cycle. There are two reasons for this. First, it is impossible to
identify all of the possible threats in a single pass. Second, because applications are rarely
static and need to be enhanced and adapted to suit changing business requirements, the
threat modeling process should be repeated as your application evolves.
The Process
Figure 3.1 shows the six-stage threat modeling process.
Note: The following process outline can be used for applications that are currently in
development and for existing applications.
1. Identify assets. Identify the valuable assets that your systems must protect.
2. Create an architecture overview. Use simple diagrams and tables to document the
architecture of your application, including subsystems, trust boundaries, and data flow.
3. Decompose the application. Decompose the architecture of your application, including
the underlying network and host infrastructure design, to create a security profile for the
application. The aim of the security profile is to uncover vulnerabilities in the design,
implementation, or deployment configuration of your application.
4. Identify the threats. Keeping the goals of an attacker in mind, and with knowledge of the
architecture and potential vulnerabilities of your application, identify the threats that could
affect the application.
5. Document the threats. Document each threat using a common threat template that
defines a core set of attributes to capture for each threat.
6. Rate the threats. Rate the threats to prioritize and address the most significant threats
first. These threats present the biggest risk. The rating process weighs the probability of
the threat against the damage that could result should an attack occur. It might turn out
that certain threats do not warrant any action when you compare the risk posed by the
threat with the resulting mitigation costs.
The Output
The output from the threat modeling process is a document for the various members of your
project team. It allows them to clearly understand the threats that need to be addressed
and how to address them. Threat models consist of a definition of the architecture of your
application and a list of threats for your application scenario, as Figure 3.2 shows.
Here are some sample use cases for a self-service, employee human resources
application:
In the above cases you can look at the implications of the business rules being misused.
For example, consider a user trying to modify personal details of another user. He or she
should not be authorized to access those details according to the defined application
requirements.
Start by drawing a rough diagram that conveys the composition and structure of the
application and its subsystems together with its deployment characteristics. Then, evolve
the diagram by adding details about the trust boundaries, authentication, and authorization
mechanisms as and when you discover them (usually during Step 3 when you decompose
the application).
Start by analyzing trust boundaries from a code perspective. The assembly, which
represents one form of trust boundary, is a useful place to start. Which assemblies trust
which other assemblies? Does a particular assembly trust the code that calls it, or does it
use code access security to authorize the calling code?
Also consider server trust relationships. Does a particular server trust an upstream server
to authenticate and authorize the end users, or does the server provide its own gatekeeping
services? Also, does a server trust an upstream server to pass it data that is well formed
and correct?
For example, in Figure 3.3, the Web application accesses the database server by using a
fixed, trusted identity, which in this case is the ASPNET Web application process account.
In this scenario, the database server trusts the application to authenticate and authorize
callers and forward only valid request data on behalf of authorized users.
Note: In a .NET Framework application, the assembly defines the smallest unit of trust.
Whenever data is passed across an assembly boundary (which by definition includes an
application domain, process, or machine boundary) the recipient entry point should
validate its input data.
Data flow across trust boundaries is particularly important because code that is passed
data from outside its own trust boundary should assume that the data is malicious and
perform thorough validation of the data.
Note: Data flow diagrams (DFDs) and sequence diagrams can help with the formal
decomposition of a system. A DFD is a graphical representation of data flows, data
stores, and relationships between data sources and destinations. A sequence diagram
shows how a group of objects collaborate in terms of chronological events.
Logical application entry points include user interfaces provided by Web pages; service
interfaces provided by Web services, serviced components, and .NET Remoting
components; and message queues that provide asynchronous entry points. Physical or
platform entry points include ports and sockets.
Privileged code must be granted the appropriate code access security permissions by code
access security policy. Privileged code must ensure that the resources and operations that
it encapsulates are not exposed to untrusted and potentially malicious code. .NET
Framework code access security verifies the permissions granted to calling code by
performing stack walks. However, it is sometimes necessary to override this behavior and
short-circuit the full stack walk, for example, when you want to restrict privileged code with
a sandbox or otherwise isolate privileged code. Doing so opens your code up to luring
attacks, where malicious code calls your code through trusted intermediary code.
Whenever you override the default security behavior provided by code access security, do it
diligently and with the appropriate safeguards. For more information about reviewing code
for security flaws, see Chapter 21, "Code Review." For more information about code
access security, see Chapter 8, "Code Access Security in Practice" and Chapter 9, "Using
Code Access Security with ASP.NET."
The following table shows what kinds of questions to ask while analyzing each aspect of the
design and implementation of your application. For more information about reviewing
application architecture and design, see Chapter 5, "Architecture and Design Review."
Use STRIDE to identify threats. Consider the broad categories of threats, such
as spoofing, tampering, and denial of service, and use the STRIDE model from
Chapter 2, "Threats and Countermeasures" to ask questions in relation to each
aspect of the architecture and design of your application. This is a goal-based
approach where you consider the goals of an attacker. For example, could an
attacker spoof an identity to access your server or Web application? Could
someone tamper with data over the network or in a store? Could someone deny
service?
Use categorized threat lists. With this approach, you start with a laundry list of
common threats grouped by network, host, and application categories. Next, apply
the threat list to your own application architecture and any vulnerabilities you have
identified earlier in the process. You will be able to rule some threats out
immediately because they do not apply to your scenario.
Use the following resources to help you with the threat identification process:
For a list of threats organized by network, host, and application layers, as well as
explanations of the threats and associated countermeasures, see Chapter 2,
"Threats and Countermeasures."
Using security mechanisms that rely on the IP address of the sender. It is relatively
easy to send IP packets with false source IP addresses (IP spoofing).
Passing session identifiers or cookies over unencrypted network channels. This can
lead to session hijacking.
You must also ensure that your network is not vulnerable to threats arising from insecure
device and server configuration. For example, are unnecessary ports and protocols closed
and disabled? Are routing tables and DNS servers secured? Are the TCP network stacks
hardened on your servers? For more information about preventing this type of vulnerability,
see Chapter 15, "Securing Your Network."
Using nonessential ports, protocols, and services, which increase the attack profile
and enable attackers to gather information about and exploit your environment.
Now use the broad STRIDE threat categories and predefined threat lists to scrutinize each
aspect of the security profile of your application. Focus on application threats, technology-
specific threats, and code threats. Key vulnerabilities to consider include:
Using poor input validation that leads to cross-site scripting (XSS), SQL injection,
and buffer overflow attacks.
Using weak password and account policies, which can lead to unauthorized access.
Using insecure data access coding techniques, which can increase the threat posed
by SQL injection.
Using weak or custom encryption and failing to adequately secure encryption keys.
Relying on the integrity of parameters that are passed from the Web browser, for
example, form fields, query strings, cookie data, and HTTP headers.
Using insecure exception handling, which can lead to denial of service attacks and
the disclosure of system-level details that are useful to an attacker.
Doing inadequate auditing and logging, which can lead to repudiation threats.
Important: Previously prepared, categorized lists of known threats reveal only the
common, known threats. Additional approaches, such as the use of attack trees and
attack patterns, can help you identify other potential threats.
An attack tree is a way of collecting and documenting the potential attacks on your system
in a structured and hierarchical manner. The tree structure gives you a descriptive
breakdown of various attacks that the attacker uses to compromise the system. By
creating attack trees, you create a reusable representation of security issues that helps
focus efforts. Your test team can create test plans to validate security design. Developers
can make tradeoffs during implementation and architects or developer leads can evaluate
the security cost of alternative approaches.
Start building an attack tree by creating root nodes that represent the goals of the attacker.
Then add the leaf nodes, which are the attack methodologies that represent unique attacks.
Figure 3.5 shows a simple example.
Figure 3.5: Representation of an attack tree
You can label leaf nodes with AND and OR labels. For example, in Figure 3.5, both 1.1 and
1.2 must occur for the threat to result in an attack.
Attack trees like the one shown above have a tendency to become complex quickly. They
are also time-consuming to create. An alternative approach favored by some teams is to
structure your attack tree using an outline such as the one shown below.
1. Goal One
1.1 Sub-goal one
1.2 Sub-goal two
2. Goal Two
2.1 Sub-goal one
2.2 Sub-goal two
For a complete example, see "Sample Attack Trees" in the "Cheat Sheets" section of this
guide.
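The AND/OR outline above maps naturally onto a small tree data structure. The following sketch (in Python rather than the C# used elsewhere in this guide; the class, node labels, and helper are this example's own invention, not part of the guide) shows one way to record goals and sub-goals and check whether a goal is reachable given the leaf attacks an attacker can carry out:

```python
# Minimal attack-tree sketch with AND/OR gates, mirroring the outline notation.

class AttackNode:
    def __init__(self, label, gate="OR", children=None):
        self.label = label
        self.gate = gate          # "AND": all sub-attacks required; "OR": any one suffices
        self.children = children or []

    def feasible(self, achieved):
        """True if this goal is reachable given the set of leaf attacks
        the attacker can carry out."""
        if not self.children:
            return self.label in achieved
        results = [c.feasible(achieved) for c in self.children]
        return all(results) if self.gate == "AND" else any(results)

# Hypothetical tree: goal 1 requires both sub-goals 1.1 AND 1.2.
tree = AttackNode("1. Obtain authentication credentials", gate="AND", children=[
    AttackNode("1.1 Sniff the network connection"),
    AttackNode("1.2 Credentials cross the network unencrypted"),
])

print(tree.feasible({"1.1 Sniff the network connection"}))  # False: AND needs both
```

Because each node carries its own gate, deeper trees fall out of the same structure without extra code.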
Attack Patterns
Attack patterns are generic representations of commonly occurring attacks that can occur
in a variety of different contexts. The pattern defines the goal of the attack as well as the
conditions that must exist for the attack to occur, the steps that are required to perform the
attack, and the results of the attack. Attack patterns focus on attack techniques, whereas
STRIDE-based approaches focus on the goals of the attacker.
An example of an attack pattern is the code-injection attack pattern that is used to describe
code injection attacks in a generic way. Table 3.3 describes the code-injection attack
pattern.
For more information about attack patterns, see the "Additional References" section at the
end of this chapter.
Step 5. Document the Threats
To document the threats of your application, use a template that shows several threat
attributes similar to the one below. The threat description and threat target are essential
attributes. Leave the risk rating blank at this stage. This is used in the final stage of the
threat modeling process when you prioritize the identified threat list. Other attributes you
may want to include are the attack techniques, which can also highlight the vulnerabilities
exploited, and the countermeasures that are required to address the threat.
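As an illustration only, a threat documented with the attributes described above might be captured as a simple record like the following (Python is used here for brevity; the field names are this example's own, not the guide's exact template):

```python
# Hypothetical threat-template record with the attributes listed above.
threat = {
    "id": 1,
    "description": "Attacker obtains authentication credentials by monitoring the network",
    "target": "Web application user authentication process",
    "risk": None,   # left blank until the rating stage of the process
    "attack_techniques": ["Use of network monitoring software"],
    "countermeasures": ["Use SSL to provide an encrypted channel"],
}
print(threat["description"])
```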
You can use a 1–10 scale for probability where 1 represents a threat that is very unlikely to
occur and 10 represents a near certainty. Similarly, you can use a 1–10 scale for damage
potential where 1 indicates minimal damage and 10 represents a catastrophe. Using this
approach, the risk posed by a threat with a low likelihood of occurring but with high damage
potential is equal to the risk posed by a threat with limited damage potential but that is
extremely likely to occur.
This approach results in a scale of 1–100, and you can divide the scale into three bands to
generate a High, Medium, or Low risk rating.
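The probability-times-damage calculation and the three-band division described above can be sketched as follows (Python is used for illustration; the band boundaries chosen here are one possible split of the 1-100 scale, not a prescription):

```python
# Risk = Probability x Damage, each on a 1-10 scale, giving 1-100,
# banded into High/Medium/Low. Boundaries (>=67 High, >=34 Medium) are illustrative.

def risk_rating(probability, damage):
    assert 1 <= probability <= 10 and 1 <= damage <= 10
    score = probability * damage
    if score >= 67:
        band = "High"
    elif score >= 34:
        band = "Medium"
    else:
        band = "Low"
    return score, band

# A low-likelihood, high-damage threat scores the same as its mirror image:
print(risk_rating(2, 10))  # (20, 'Low')
print(risk_rating(10, 2))  # (20, 'Low')
```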
DREAD
The problem with a simplistic rating system is that team members usually will not agree on
ratings. To help solve this, add new dimensions that help determine what the impact of a
security threat really means. At Microsoft, the DREAD model is used to help calculate risk.
By using the DREAD model, you arrive at the risk rating for a given threat by asking the
following questions:
Damage potential: How great is the damage if the vulnerability is exploited?
Reproducibility: How easy is it to reproduce the attack?
Exploitability: How easy is it to launch an attack?
Affected users: As a rough percentage, how many users are affected?
Discoverability: How easy is it to find the vulnerability?
You can use the above items to rate each threat. You can also extend the questions to
meet your needs. For example, you could add a question about potential reputation
damage:
Reputation: How high are the stakes? Is there a risk to reputation, which could lead to the
loss of customer trust?
Ratings do not have to use a large scale; in fact, a large scale makes it difficult to rate
threats consistently alongside one another. You can use a simple scheme such as High
(3), Medium (2), and Low (1).
When you clearly define what each value represents for your rating system, it helps avoid
confusion. Table 3.6 shows a typical example of a rating table that can be used by team
members when prioritizing threats.
Affected users (A)
High (3): All users, default configuration, key customers.
Medium (2): Some users, non-default configuration.
Low (1): Very small percentage of users, obscure feature; affects anonymous users.
Discoverability (D)
High (3): Published information explains the attack. The vulnerability is found in the most
commonly used feature and is very noticeable.
Medium (2): The vulnerability is in a seldom-used part of the product, and only a few
users should come across it. It would take some thinking to see malicious use.
Low (1): The bug is obscure, and it is unlikely that users will work out damage potential.
After you ask the above questions, count the values (1–3) for a given threat. The result can
fall in the range of 5–15. Then you can treat threats with overall ratings of 12–15 as High
risk, 8–11 as Medium risk, and 5–7 as Low risk.
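The DREAD arithmetic described above is easy to capture in a few lines. This sketch (in Python; the function name is this example's own) sums the five 1-3 ratings and applies the 12-15 / 8-11 / 5-7 bands:

```python
# DREAD: rate each dimension High (3), Medium (2), or Low (1),
# sum them (range 5-15), and band the total.

def dread_rating(damage, reproducibility, exploitability, affected_users, discoverability):
    values = (damage, reproducibility, exploitability, affected_users, discoverability)
    assert all(v in (1, 2, 3) for v in values)
    total = sum(values)
    if total >= 12:
        band = "High"
    elif total >= 8:
        band = "Medium"
    else:
        band = "Low"
    return total, band

# Example: a threat rated D=3, R=3, E=2, A=3, D=2:
print(dread_rating(3, 3, 2, 3, 2))  # (13, 'High')
```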
The finished threat model is useful to the whole team:
Designers can use it to make secure design choices about technologies and
functionality.
Testers can write test cases to test whether the application is vulnerable to the threats
identified by the analysis.
Organize the threats in the report by network, host, and application categories. This makes
the report easier to consume for different team members in different roles. Within each
category, present the threats in prioritized order starting with the ones given a high risk
rating followed by the threats that present less risk.
Summary
While you can mitigate the risk of an attack, you do not mitigate or eliminate the actual
threat. Threats still exist regardless of the security actions you take and the
countermeasures you apply. The reality in the security world is that you acknowledge the
presence of threats and you manage your risks. Threat modeling can help you manage and
communicate security risks across your team.
Treat threat modeling as an iterative process. Your threat model should be a dynamic item
that changes over time to cater to new types of threats and attacks as they are discovered.
It should also be capable of adapting to follow the natural evolution of your application as it
is enhanced and modified to accommodate changing business requirements.
Additional Resources
For additional related reading, see the following resources:
For information on attack patterns, see "Attack Modeling for Information Security
and Survivability," by Andrew P. Moore, Robert J. Ellison, and Richard C. Linger at
http://www.cert.org/archive/pdf/01tn001.pdf
For more information about creating DFDs, see Writing Secure Code, Second Edition,
by Michael Howard and David C. LeBlanc.
Part II: Designing Secure Web Applications
Chapter List
Chapter 4: Design Guidelines for Secure Web Applications
This chapter presents a set of secure architecture and design guidelines. They have been
organized by common application vulnerability category. These are key areas for Web
application security and they are the areas where mistakes are most often made.
How to Use This Chapter
This chapter focuses on the guidelines and principles you should follow when designing an
application. The following are recommendations on how to use this chapter:
Know the threats to your application so that you can make sure these are
addressed by your design. Read Chapter 2, "Threats and Countermeasures," to
gain understanding of the threat types to consider. Chapter 2 lists the threats that
may harm your application; keep these threats in mind during the design phase.
Some of the top issues that must be addressed with secure design practices are shown in
Figure 4.1.
The design guidelines in this chapter are organized by application vulnerability category.
Experience shows that poor design in these areas, in particular, leads to security
vulnerabilities. Table 4.1 lists the vulnerability categories, and for each one highlights the
potential problems that can occur due to bad design.
Table 4.1: Web Application Vulnerabilities and Potential Problems Due to Bad Design
Input Validation: Attacks performed by embedding malicious strings in query strings,
form fields, cookies, and HTTP headers. These include command execution, cross-site
scripting (XSS), SQL injection, and buffer overflow attacks.
Authentication: Identity spoofing, password cracking, elevation of privileges, and
unauthorized access.
Authorization: Access to confidential or restricted data, tampering, and execution of
unauthorized operations.
Configuration Management: Unauthorized access to administration interfaces, ability to
update configuration data, and unauthorized access to user accounts and account
profiles.
Sensitive Data: Confidential information disclosure and data tampering.
Session Management: Capture of session identifiers, resulting in session hijacking and
identity spoofing.
Cryptography: Access to confidential data or account credentials, or both.
Parameter Manipulation: Path traversal attacks, command execution, and bypass of
access control mechanisms, among others, leading to information disclosure, elevation
of privileges, and denial of service.
Exception Management: Denial of service and disclosure of sensitive system-level
details.
Auditing and Logging: Failure to spot the signs of intrusion, inability to prove a user's
actions, and difficulties in problem diagnosis.
Deployment Considerations
During the application design phase, you should review your corporate security policies and
procedures together with the infrastructure your application is to be deployed on.
Frequently, the target environment is rigid, and your application design must reflect the
restrictions. Sometimes design tradeoffs are required, for example, because of protocol or
port restrictions, or specific deployment topologies. Identify constraints early in the design
phase to avoid surprises later and involve members of the network and infrastructure teams
to help with this process.
Figure 4.2 shows the various deployment aspects that require design time consideration.
Identify how firewalls and firewall policies are likely to affect your application's design and
deployment. There may be firewalls to separate the Internet-facing applications from the
internal network. There may be additional firewalls in front of the database. These can
affect your possible communication ports and, therefore, authentication options from the
Web server to remote application and database servers. For example, Windows
authentication requires additional ports.
At the design stage, consider what protocols, ports, and services are allowed to access
internal resources from the Web servers in the perimeter network. Also identify the
protocols and ports that the application design requires and analyze the potential threats
that occur from opening new ports or using new protocols.
Communicate and record any assumptions made about network and application layer
security and which component will handle what. This prevents security controls from being
missed when both development and network teams assume that the other team is
addressing the issue. Pay attention to the security defenses that your application relies
upon the network to provide. Consider the implications of a change in network configuration.
How much security have you lost if you implement a specific network change?
Deployment Topologies
Your application's deployment topology and whether you have a remote application tier is a
key consideration that must be incorporated in your design. If you have a remote application
tier, you need to consider how to secure the network between servers to address the
network eavesdropping threat and to provide privacy and integrity for sensitive data.
Also consider identity flow and identify the accounts that will be used for network
authentication when your application connects to remote servers. A common approach is to
use a least privileged process account and create a duplicate (mirrored) account on the
remote server with the same password. Alternatively, you might use a domain process
account, which provides easier administration but is more problematic to secure because of
the difficulty of limiting the account's use throughout the network. An intervening firewall or
separate domains without trust relationships often makes the local account approach the
only viable option.
For more information about these and other scenario-specific issues, see the "Intranet
Security," "Extranet Security," and "Internet Security" sections in the "Microsoft patterns &
practices Volume I, Building Secure ASP.NET Applications: Authentication, Authorization,
and Secure Communication" at http://msdn.microsoft.com/library/en-
us/dnnetsec/html/secnetlpMSDN.asp.
Input Validation
Input validation is a challenging issue and the primary burden of a solution falls on
application developers. However, proper input validation is one of your strongest measures
of defense against today's application attacks. Proper input validation is an effective
countermeasure that can help prevent XSS, SQL injection, buffer overflows, and other input
attacks.
Input validation is challenging because there is not a single answer for what constitutes valid
input across applications or even within applications. Likewise, there is no single definition
of malicious input. Adding to this difficulty is that what your application does with this input
influences the risk of exploit. For example, do you store data for use by other applications
or does your application consume input from data sources created by other applications?
In many cases, individual fields require specific validation, for example, with specifically
developed regular expressions. However, you can frequently factor out common routines to
validate regularly used fields such as e-mail addresses, titles, names, postal addresses
including ZIP or postal codes, and so on. This approach is shown in Figure 4.3.
To sidestep canonicalization issues, you should generally avoid designing applications
that accept file names as input from the user. Consider alternative designs instead. For
example, let the application determine the file name for the user.
If you do need to accept input file names, make sure they are strictly formed before making
security decisions such as granting or denying access to the specified file.
For more information about how to handle file names and to perform file I/O in a secure
manner, see the "File I/O" sections in Chapter 7, "Building Secure Assemblies," and
Chapter 8, "Code Access Security in Practice."
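One way to check that a user-supplied file name is strictly formed is to canonicalize it and then verify that it still falls under the intended directory before granting access. The guide's own examples use .NET APIs; the sketch below shows the same idea in Python, with BASE_DIR and safe_path as hypothetical names:

```python
# Canonicalize a user-supplied file name and reject path traversal.
import os

BASE_DIR = "/var/app/userfiles"   # hypothetical content root

def safe_path(user_supplied_name):
    root = os.path.realpath(BASE_DIR)
    # realpath resolves "..", ".", symlinks, and redundant separators.
    candidate = os.path.realpath(os.path.join(BASE_DIR, user_supplied_name))
    # Reject anything that escaped the content root.
    if os.path.commonpath([candidate, root]) != root:
        raise ValueError("invalid file name")
    return candidate

print(safe_path("report.txt"))
try:
    safe_path("../../etc/passwd")
except ValueError:
    print("rejected path traversal")
```

The key design point is that the check runs on the canonical form, not on the raw input, so encoded or dotted variants of the same path cannot slip past it.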
The preferred approach to validating input is to constrain what you allow from the beginning.
It is much easier to validate data for known valid types, patterns, and ranges than it is to
validate data by looking for known bad characters. When you design your application, you
know what your application expects. The range of valid data is generally a more finite set
than potentially malicious input. However, for defense in depth you may also want to reject
known bad input and then sanitize the input. The recommended strategy is shown in Figure
4.4.
To create an effective input validation strategy, be aware of the following approaches and
their tradeoffs:
Constrain input.
Reject known bad input.
Sanitize input.
Constrain Input
Constraining input is about allowing good data. This is the preferred approach. The idea
here is to define a filter of acceptable input by using type, length, format, and range. Define
what is acceptable input for your application fields and enforce it. Reject everything else as
bad data.
Constraining input may involve setting character sets on the server so that you can establish
the canonical form of the input in a localized way.
String fields should also be length checked and in many cases checked for appropriate
format. For example, ZIP codes, personal identification numbers, and so on have well
defined formats that can be validated using regular expressions. Thorough checking is not
only good programming practice; it makes it more difficult for an attacker to exploit your
code. The attacker may get through your type check, but the length check may make
executing his favorite attack more difficult.
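A minimal sketch of this constrain-first approach, using an allow-list regular expression for a hypothetical U.S. ZIP code field (Python is used here for brevity; the pattern and function name are illustrative, not from the guide):

```python
# Allow-list validation: accept only the known-good format, reject everything else.
import re

# Hypothetical rule: exactly 5 digits, optionally followed by "-" and 4 digits.
ZIP_PATTERN = re.compile(r"^\d{5}(-\d{4})?$")

def is_valid_zip(value):
    return bool(ZIP_PATTERN.match(value))

print(is_valid_zip("98052"))        # True
print(is_valid_zip("98052-6399"))   # True
print(is_valid_zip("98052'; DROP TABLE users;--"))  # False: fails the allow-list
```

Notice that the injection attempt is rejected without the code ever enumerating "bad" characters; it simply is not in the set of valid ZIP codes.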
Reject Known Bad Input
While useful for applications that are already deployed and when you cannot afford to make
significant changes, the "deny" approach is not as robust as the "allow" approach, because
bad data, such as the patterns used to identify common attacks, does not remain constant:
valid data stays the same, while the range of bad data may change over time.
Sanitize Input
Sanitizing is about making potentially malicious data safe. It can be helpful when the range
of input that is allowed cannot guarantee that the input is safe. This includes anything from
stripping a null from the end of a user-supplied string to escaping out values so they are
treated as literals.
Another common example of sanitizing input in Web applications is using URL encoding or
HTML encoding to wrap data and treat it as literal text rather than executable script.
HtmlEncode methods escape out HTML characters, and UrlEncode methods encode a
URL so that it is a valid URI request.
In Practice
The following are examples applied to common input fields, using the preceding
approaches:
Last Name field. This is a good example where constraining input is appropriate. In
this case, you might allow string data in the range ASCII A–Z and a–z, and also
hyphens and curly apostrophes (curly apostrophes have no significance to SQL) to
handle names such as O'Dell. You would also limit the length to your longest
expected value.
Quantity field. This is another case where constraining input works well. In this
example, you might use a simple type and range restriction. For example, the input
data may need to be a positive integer between 0 and 1000.
Some applications might allow users to mark up their text using a finite set of script
characters, such as bold "<b>", italic "<i>", or even include a link to their favorite
URL. In the case of a URL, your validation should encode the value so that it is
treated as a URL.
For more information about validating free text fields, see "Input Validation" in
Chapter 10, "Building Secure ASP.NET Pages and Controls."
An existing Web application that does not validate user input. In an ideal
scenario, the application checks for acceptable input for each field or entry point.
However, if you have an existing Web application that does not validate user input,
you need a stopgap approach to mitigate risk until you can improve your
application's input validation strategy. While neither of the following approaches
ensures safe handling of input, because that is dependent on where the input
comes from and how it is used in your application, they are in practice today as
quick fixes for short-term security improvement:
For more information and examples of input coding, using regular expressions, and
ASP.NET validation controls, see "Input Validation" in Chapter 10, "Building Secure ASP.NET
Pages and Controls."
Authentication
Authentication is the process of determining caller identity. There are three aspects to
consider:
Validate who the caller is. Users typically authenticate themselves with user names
and passwords.
Identify the user on subsequent requests. This requires some form of authentication
token.
Many Web applications use a password mechanism to authenticate users, where the user
supplies a user name and password in an HTML form. The issues and questions to consider
here include:
Are user names and passwords sent in plaintext over an insecure channel? If
so, an attacker can eavesdrop with network monitoring software to capture the
credentials. The countermeasure here is to secure the communication channel by
using Secure Sockets Layer (SSL).
How are the credentials stored? If you are storing user names and passwords in
plaintext, either in files or in a database, you are inviting trouble. What if your
application directory is improperly configured and an attacker browses to the file
and downloads its contents or adds a new privileged logon account? What if a
disgruntled administrator takes your database of user names and passwords?
How are the credentials verified? There is no need to store user passwords if the
sole purpose is to verify that the user knows the password value. Instead, you can
store a verifier in the form of a hash value and re-compute the hash using the user-
supplied value during the logon process. To mitigate the threat of dictionary attacks
against the credential store, use strong passwords and combine a randomly
generated salt value with the password hash.
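The verifier approach can be sketched as follows (an illustrative Python example; a production system would normally use a deliberately slow key-derivation function such as PBKDF2 rather than a single hash round):

```python
import hashlib
import hmac
import os

def make_verifier(password: str):
    """Create a (salt, digest) pair to store instead of the password."""
    salt = os.urandom(16)  # random per-user salt defeats precomputed dictionaries
    digest = hashlib.sha256(salt + password.encode("utf-8")).digest()
    return salt, digest

def verify(password: str, salt: bytes, stored_digest: bytes) -> bool:
    """Re-compute the hash from the user-supplied value at logon."""
    candidate = hashlib.sha256(salt + password.encode("utf-8")).digest()
    return hmac.compare_digest(candidate, stored_digest)  # constant-time compare
```

Only the salt and digest are persisted; the plaintext password never needs to be stored.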
How is the authenticated user identified after the initial logon? Some form of
authentication ticket, for example an authentication cookie, is required. How is the
cookie secured? If it is sent across an insecure channel, an attacker can capture
the cookie and use it to access the application. A stolen authentication cookie is a
stolen logon.
By partitioning your site into public and restricted access areas, you can apply separate
authentication and authorization rules across the site and limit the use of SSL. To avoid the
unnecessary performance overhead associated with SSL, design your site to limit the use
of SSL to the areas that require authenticated access.
Be careful that account lockout policies cannot be abused in denial of service attacks. For
example, well known default service accounts such as IUSR_MACHINENAME should be
replaced by custom account names to prevent an attacker who obtains the Internet
Information Services (IIS) Web server name from locking out this critical account.
For examples of regular expressions to aid password validation, see "Input Validation" in
Chapter 10, "Building Secure ASP.NET Pages and Controls."
System level resources include files, folders, registry keys, Active Directory objects,
database objects, event logs, and so on. Use Windows Access Control Lists (ACLs) to
restrict which users can access what resources and the types of operations that they can
perform. Pay particular attention to anonymous Internet user accounts; lock these down
with ACLs on resources that explicitly deny access to anonymous users.
For more information about locking down anonymous Internet user accounts with Windows
ACLs, see Chapter 16, "Securing Your Web Server."
The least granular but most scalable approach uses the application's process identity for
resource access. This approach supports database connection pooling but it means that the
permissions granted to the application's identity in the database are common, irrespective
of the identity of the original caller. The primary authorization is performed in the
application's logical middle tier using roles, which group together users who share the same
privileges in the application. Access to classes and methods is restricted based on the role
membership of the caller. To support the retrieval of per user data, a common approach is
to include an identity column in the database tables and use query parameters to restrict
the retrieved data. For example, you may pass the original caller's identity to the database
at the application (not operating system) level through stored procedure parameters, and
write queries similar to the following:
SELECT field1, field2, field3 FROM Table1 WHERE {some search criteria} AND UserName = @originalCallerUserName
This model is referred to as the trusted subsystem or sometimes as the trusted server
model. It is shown in Figure 4.6.
Figure 4.6: Trusted subsystem model that supports database connection
pooling
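The per-user filtering described above works with any parameterized data access API. The following is an illustrative Python/sqlite3 sketch; the table and column names are hypothetical:

```python
import sqlite3

def rows_for_caller(conn: sqlite3.Connection, caller: str):
    # The caller's identity travels as a query parameter, not as string
    # concatenation, so it is always treated as a literal, never as SQL.
    cur = conn.execute(
        "SELECT field1, field2 FROM Table1 WHERE UserName = ?",
        (caller,),
    )
    return cur.fetchall()
```

Because the identity is supplied as a parameter, a malicious value such as `x' OR '1'='1` simply fails to match any row instead of altering the query.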
The third option is to use a limited set of identities for resource access based on the role
membership of the caller. This is really a hybrid of the two models described earlier. Callers
are mapped to roles in the application's logical middle tier, and access to classes and
methods is restricted based on role membership. Downstream resource access is
performed using a restricted set of identities determined by the current caller's role
membership. The advantage of this approach is that permissions can be assigned to
separate logins in the database, and connection pooling is still effective with multiple pools
of connections. The downside is that creating multiple thread access tokens used to
establish different security contexts for downstream resource access using Windows
authentication is a privileged operation that requires privileged process accounts. This is
counter to the principle of least privilege. The hybrid model using multiple trusted service
identities for downstream resource access is shown in Figure 4.7.
The following practices improve the security of your Web application's configuration
management:
If possible, limit or avoid the use of remote administration and require administrators to log
on locally. If you need to support remote administration, use encrypted channels, for
example, with SSL or VPN technology, because of the sensitive nature of the data passed
over administrative interfaces. Also consider limiting remote administration to computers on
the internal network by using IPSec policies, to further reduce risk.
An important aspect of your application's configuration is the process accounts used to run
the Web server process and the service accounts used to access downstream resources
and systems. Make sure these accounts are set up as least privileged. If an attacker
manages to take control of a process, the process identity should have very restricted
access to the file system and other system resources to limit the damage that can be done.
Sensitive Data
Applications that deal with private user information such as credit card numbers, addresses,
medical records, and so on should take special steps to make sure that the data remains
private and unaltered. In addition, secrets used by the application's implementation, such as
passwords and database connection strings, must be secured. The security of sensitive
data is an issue while the data is stored in persistent storage and while it is passed across
the network.
Secrets
Secrets include passwords, database connection strings, and credit card numbers. The
following practices improve the security of your Web application's handling of secrets:
DPAPI is best suited for encrypting information that can be manually recreated when the
master keys are lost, for example, because a damaged server requires an operating
system re-install. Data that cannot be recovered because you do not know the plaintext
value, for example, customer credit card details, requires an alternate approach that uses
traditional symmetric key-based cryptography, such as Triple DES.
For more information about using DPAPI from Web applications, see Chapter 10, "Building
Secure ASP.NET Pages and Controls."
The following practices improve the security of your Web application's sensitive per-user data:
Retrieve the secret when the application loads and then cache the encrypted secret in
memory, decrypting it when the application uses it. Clear the plaintext copy when it is no
longer needed. This approach avoids accessing the data store on a per request basis.
Avoid the overhead of decrypting the secret multiple times and store a plaintext copy of the
secret in memory. This is the least secure approach but offers the optimum performance.
Benchmark the other approaches before assuming that the additional performance gain is
worth the added security risk.
The following practices improve the security of your Web application's session
management:
You should secure the network link from the Web application to the state store using IPSec or
SSL to mitigate the risk of eavesdropping. Also consider how the Web application is to be
authenticated by the state store. Use Windows authentication where possible to avoid
passing plaintext authentication credentials across the network and to benefit from secure
Windows account policies.
Cryptography
Cryptography in its fundamental form provides the following:
For large data encryption, use the TripleDES symmetric encryption algorithm. For slower
and stronger encryption of large data, use Rijndael. To encrypt data that is to be stored for
short periods of time, you can consider using a faster but weaker algorithm such as DES.
For digital signatures, use Rivest, Shamir, and Adleman (RSA) or the Digital Signature
Algorithm (DSA). For hashing, use the Secure Hash Algorithm (SHA) 1.0. For keyed hashes,
use the Hash-based Message Authentication Code (HMAC) SHA 1.0.
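For example, a keyed hash can be computed and verified as follows (an illustrative Python sketch; note that SHA-1 reflects this guide's 2003-era recommendation and has since been deprecated in favor of the SHA-2 family):

```python
import hashlib
import hmac

def keyed_hash(key: bytes, message: bytes) -> bytes:
    """HMAC-SHA1 keyed hash: only holders of the key can produce or
    verify the tag, so tampering in transit is detectable."""
    return hmac.new(key, message, hashlib.sha1).digest()

def is_authentic(key: bytes, message: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(keyed_hash(key, message), tag)
```

Unlike a plain hash, the tag cannot be recomputed by an attacker who modifies the message, because the attacker does not know the key.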
An encryption key is a secret number used as input to the encryption and decryption
processes. For encrypted data to remain secure, the key must be protected. If an attacker
compromises the decryption key, your encrypted data is no longer secure.
A good approach is to design a centralized exception management and logging solution and
consider providing hooks into your exception management system to support
instrumentation and centralized monitoring to help system administrators.
The following practices help secure your Web application's exception management:
Catch exceptions.
Send detailed error messages to the error log. Send minimal information to the consumer of
your service or application, such as a generic error message and a custom error log ID that
can subsequently be mapped to a detailed message in the event logs. Make sure that you do
not log passwords or other sensitive data.
Catch Exceptions
Use structured exception handling and catch exception conditions. Doing so avoids leaving
your application in an inconsistent state that may lead to information disclosure. It also
helps protect your application from denial of service attacks. Decide how to propagate
exceptions internally in your application and give special consideration to what occurs at the
application boundary.
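An application-boundary handler that combines these practices might look like the following sketch (Python for illustration; the function names are hypothetical):

```python
import logging
import uuid

log = logging.getLogger("app")

def handle_request(work):
    """Boundary wrapper: full detail goes to the log, a generic message out."""
    try:
        return work()
    except Exception:
        error_id = uuid.uuid4().hex  # correlation ID for administrators
        log.exception("request failed [id=%s]", error_id)  # detail stays server-side
        # The caller sees only a generic message plus the ID; stack traces,
        # paths, and other internals that aid an attacker are not disclosed.
        return "An error occurred. Reference: " + error_id
```

An administrator can later map the reference ID back to the detailed log entry, while the client learns nothing about the application's internals.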
Consider how your application will flow caller identity across multiple application tiers. You
have two basic choices. You can flow the caller's identity at the operating system level
using the Kerberos protocol delegation. This allows you to use operating system level
auditing. The drawback with this approach is that it affects scalability because it means
there can be no effective database connection pooling at the middle tier. Alternatively, you
can flow the caller's identity at the application level and use trusted identities to access
back-end resources. With this approach, you have to trust the middle tier and there is a
potential repudiation risk. You should generate audit trails in the middle tier that can be
correlated with back-end audit trails. For this, you must make sure that the server clocks
are synchronized, although Microsoft Windows 2000 and Active Directory do this for you.
For more information, see "Checklist: Architecture and Design Review" in the "Checklists" section of this guide.
Chapter 5: Architecture and Design Review for Security
In This Chapter
Analyzing and reviewing application architecture and design
If you have already created your application, you should still review this chapter and then
revisit the concepts, principles, and techniques that you used during your application design.
How to Use This Chapter
This chapter gives you the questions to ask when performing a thorough review of your
architecture design. The following are recommendations on how to use this chapter:
Integrate a security review into your architecture design process. Start early
on, and as your design changes, review those changes with the steps given in this
chapter.
Evolve your security review. This chapter provides questions that you can ask to
improve the security of your design. To complete the review process, you might
also need to add specific questions that are unique to your application.
Know the threats you are reviewing against. Chapter 2, "Threats and
Countermeasures," lists the threats that affect the various components and layers
that make up your application. Knowing these threats is essential to improving the
results of your review process.
Architecture and Design Review Process
The architecture and design review process analyzes the architecture and design from a
security perspective. If you have just completed the design, the design documentation can
help you with this process. Regardless of how comprehensive your design documentation
is, you must be able to decompose your application and identify key items,
including trust boundaries, data flow, entry points, and privileged code. You must also know
the physical deployment configuration of your application. Pay attention to the design
approaches you have adopted for those areas that most commonly exhibit vulnerabilities.
This guide refers to these as application vulnerability categories.
Consider the following aspects when you review the architecture and design of your
application:
Application architecture and design. You review the approach to critical areas in
your application, including authentication, authorization, input validation, exception
management, and other areas. You can use the application vulnerability categories
as a roadmap and to ensure that you do not miss any key areas during the review.
Tier-by-tier analysis. You walk through the logical tiers of your application and
examine the security of ASP.NET Web pages and controls, Web services, serviced
components, Microsoft .NET Remoting, data access code, and others.
The remainder of this chapter presents the key considerations and questions to ask during
the review process for each of these distinct areas.
Deployment and Infrastructure Considerations
Examine the security settings that the underlying network and host infrastructure offer to the
application, and examine any restrictions that the target environment might impose. Also
consider your deployment topology and the impact of middle-tier application servers,
perimeter zones, and internal firewalls on your design.
Review the following questions to identify potential deployment and infrastructure issues:
While your application is responsible for handling and transforming data securely prior to
transit, the network is responsible for the integrity and privacy of the data as it is transmitted.
Use an appropriate encryption algorithm when the data must remain private. Additionally,
make sure that your network devices are secured because they maintain network integrity.
If you use domain accounts and Windows authentication, does the firewall open the
necessary ports? If not, or if the Web server and downstream server are in
separate domains, you can use mirrored local accounts. For example, you can
duplicate the least privileged local ASPNET account that is used to run the Web
application on the database server.
For more information about using the DTC through a firewall, see Microsoft
Knowledge Base article 250367, "INFO: Configuring Microsoft Distributed
Transaction Coordinator (DTC) to Work Through a Firewall."
If so, have you restricted the DCOM port range and does any internal firewall open
these ports?
For more information, see the following Microsoft Knowledge Base articles:
Article 248809, "PRB: DCOM Does Not Work over NAT-Based Firewall"
If so, how do middle-tier Web services authenticate the Web application? Does the
Web application configure credentials on the Web service proxy so that the Web
service can authenticate the Web server? If not, how does the Web service identify
the caller?
Does your design make any assumptions that the host infrastructure security restrictions
will invalidate? For example, the security restrictions may require design tradeoffs based on
the availability of required services, protocols, or account privileges. Review the following
questions:
Services and protocols that are available in the development and test environments
might not be available in the production environment. Communicate with the team
responsible for the infrastructure security to understand the restrictions and
requirements.
Your design should use least privileged process, service, and user accounts. Do
you perform operations that require sensitive privileges that might not be permitted?
If your application is going to be deployed in a Web farm, you can make no assumptions
about which server in the farm will process client requests. Successive requests from the
same client may be served by separate servers. As a result, you need to consider the
following issues:
In a Web farm, you cannot manage session state on the Web server. Instead, your
design must incorporate a remote state store on a server that is accessed by all
the Web servers in the farm. For more information, see "Session Management"
later in this chapter.
If you plan to use encryption to encrypt data in a shared data source, such as a
database, the encryption and decryption keys must be the same across all
machines in the farm. Check that your design does not require encryption
mechanisms that require machine affinity.
If so, you are reliant upon the <machineKey> settings. In a Web farm, you must
use a common key across all servers.
If you use SSL to encrypt the traffic between browser and Web server, where do
you terminate the SSL connection? Your options include the Web server, a Web
server with an accelerator card, or a load balancer with an accelerator card.
Terminating the SSL session at a load balancer with an accelerator card generally
offers the best performance, particularly for sites with large numbers of
connections.
If you terminate SSL at the load balancer, network traffic is not encrypted from the
load balancer to the Web server. This means that an attacker can potentially sniff
network traffic after the data is decrypted, while it is in transit between the load
balancer and Web server. You can address this threat either by ensuring that the
Web server environment is physically secured or by using transport-level encryption
provided by IPSec policies to protect internal data center links.
If your Web application must run at a reduced trust level, this limits the types of resources
and privileged operations your code can perform. In partial trust scenarios, your design
should sandbox your privileged code. You should also use separate assemblies to isolate
your privileged code. This is done so that the privileged code can be configured separately
from the rest of the application and granted the necessary additional code access
permissions.
For more information, see Chapter 9, "Using Code Access Security with ASP.NET."
Note Trust levels are often an issue if you are planning to deploy your application onto a shared server, or if your application is going to be run by a hosting company. In these cases, check the security policy and find out what trust levels it mandates for Web applications.
Input Validation
Examine how your application validates input because many Web application attacks use
deliberately malformed input. SQL injection, cross-site scripting (XSS), buffer overflow,
code injection, and numerous other denial of service and elevation of privilege attacks can
exploit poor input validation. Table 5.1 highlights the most common input validation
vulnerabilities.
Review the following questions to help you identify potential input validation security issues:
Make sure the design identifies entry points of the application so that you can track
what happens to individual input fields. Consider Web page input, input to
components and Web services, and input from databases.
Input validation is not always necessary if the input is passed from a trusted source
inside your trust boundary, but it should be considered mandatory if the input is
passed from sources that are not trusted.
Do not consider the end user as a trusted source of data. Make sure you validate
regular and hidden form fields, query strings, and cookies.
The only case where it might be safe not to do so is where data is received from
inside the current trust boundary. However, with a defense in depth strategy,
multiple validation layers are recommended.
You should also validate this form of input, especially if other applications write to
the database. Make no assumptions about how thorough the input validation of the
other application is.
For common types of input fields, examine whether or not you are using common
validation and filtering libraries to ensure that validation rules are performed
consistently.
Do not. Client-side validation can be used to reduce the number of round trips to the
server, but do not rely on it for security because it is easy to bypass. Validate all
input at the server.
Check whether your application uses names based on input to make security
decisions. For example, does it accept user names, file names, or URLs? These
are notorious for canonicalization bugs because of the many ways that the names
can be represented. If your application does accept names as input, check that
they are validated and converted to their canonical representation before
processing.
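For example, a file name received as input might be canonicalized before any security decision is made on it (an illustrative Python sketch; the base directory is hypothetical):

```python
import os

def resolve_safe(user_supplied: str, base: str = "/var/www/files") -> str:
    """Canonicalize a user-supplied file name before using it."""
    base = os.path.realpath(base)
    candidate = os.path.realpath(os.path.join(base, user_supplied))
    # realpath() collapses "..", ".", duplicate separators, and symlinks,
    # so the containment check is made on the canonical representation.
    if candidate != base and not candidate.startswith(base + os.sep):
        raise ValueError("name resolves outside the permitted directory")
    return candidate
```

Checking the raw string instead (for example, rejecting names that contain "..") is unreliable, because the same target can be represented in many equivalent forms.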
Pay close attention to any input field that you use to form a SQL database query.
Check that these fields are suitably validated for type, format, length, and range.
Also check how the queries are generated. If you use parameterized stored
procedures, input parameters are treated as literals and are not treated as
executable code. This is effective risk mitigation.
If you include input fields in the HTML output stream, you might be vulnerable to
XSS. Check that input is validated and that output is encoded. Pay close attention
to how input fields that accept a range of HTML characters are processed.
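Output encoding can be sketched as follows (illustrative Python; an ASP.NET page would typically call Server.HtmlEncode for the same purpose):

```python
import html

def render_comment(user_text: str) -> str:
    """Encode user input before writing it into the HTML output stream."""
    # Characters with meaning in HTML become entity references, so the
    # browser renders them as text instead of executing them as markup.
    return "<p>" + html.escape(user_text) + "</p>"
```

Even if malicious script characters slip past input validation, encoding at output time prevents the browser from interpreting them as code.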
Authentication
Examine how your application authenticates its callers, where it uses authentication, and
how it ensures that credentials remain secure while in storage and when passed over the
network. Vulnerabilities in authentication can make your application susceptible to spoofing
attacks, dictionary attacks, session hijacking, and other attacks. Table 5.2 highlights the
most common authentication vulnerabilities.
Review the following questions to identify potential vulnerabilities in the way your application
performs authentication:
If your application provides public areas that do not require authentication and restricted
areas that do require authentication, examine how your site design distinguishes between
the two. You should use separate subfolders for restricted pages and resources and then
secure those folders in Internet Information Services (IIS) by configuring them to require
SSL. This approach allows you to provide security for sensitive data and authentication
cookies using SSL in only those areas of your site that need it. You avoid the added
performance hit associated with SSL across the whole site.
Your design should identify the range of service accounts that is required to connect to
different resources, including databases, directory services, and other types of remote
network resources. Make sure that the design does not require a single, highly privileged
account with sufficient privileges to connect to the range of different resource types.
Have you identified which resources and operations require which privileges? Check
that the design identifies precisely which privileges each account requires to
perform its specific function and use least privileged accounts in all cases.
If so, make sure that the credentials are encrypted and held in a restricted location,
such as a registry key with a restricted access control list (ACL).
Review the following aspects of authenticating a caller. The aspects you use depend on the
type of authentication your design uses.
If you use Forms or Basic authentication, or if you use Web services and pass
credentials in SOAP headers, make sure that you use SSL to protect the
credentials in transit.
If so, check where and how the user credentials will be stored. A common mistake
is to store plaintext or encrypted passwords in the user store. Instead, you should
store a password hash for verification.
If you validate credentials against a SQL Server user store, pay close attention to
the input user names and passwords. Check for the malicious injection of SQL
characters.
If so, in addition to using SSL to protect the credentials, you should use SSL to
protect the authentication cookie. Also check that your design uses a limited
session lifetime to counter the threat of cookie replay attacks and check that the
cookie is encrypted.
For more information about Forms authentication, see Chapter 10, "Building Secure
ASP.NET Pages and Controls" and Chapter 19, "Securing Your ASP.NET Application
and Web Services."
If your network infrastructure does not provide IPSec encrypted channels, make
sure a server certificate is installed on the database to provide automatic SQL
credential encryption. Also examine how you plan to secure database connection
strings because these strings contain SQL account user names and passwords.
If you use the process account of the application and connect to SQL Server using
Windows authentication, make sure that your design assumes a least privileged
account. The local ASPNET account is provided for this purpose, although with local
accounts, you need to create a duplicate account on the database server.
If you plan to use a domain account, make sure that it is a least privileged account
and check that all intervening firewalls support Windows authentication by opening
the relevant ports.
Do you use service accounts?
Also examine which process will be used to create the impersonated security
context using the service account. This should not be done by the ASP.NET
application process on Microsoft Windows 2000 because it forces you to increase
the privileges of the process account and grant the "Act as part of the operating
system" privilege. This should be avoided because it significantly increases the risk
factor.
For applications that use Forms or Passport authentication, you can configure a
separate anonymous user account for each application. Next, you can enable
impersonation and then use the anonymous identity to access the database. This
approach accommodates separate authorization and identity tracking for separate
applications on the same Web server.
If your design requires impersonation of the original caller, you need to consider
whether or not the approach provides sufficient scalability because connection
pooling is ineffective. An alternative approach is to flow the identity of the original
caller at the application level through trusted query parameters.
If database connection strings are hard coded or stored in clear text in configuration
files or the COM+ catalog, it makes them vulnerable. Instead, you should encrypt
them and restrict access to the encrypted data.
For more information about the different options for connecting to SQL Server and about
storing database connection strings securely, see Chapter 14, "Building Secure Data
Access."
For example, do your ASP.NET Web pages use regular expressions to verify
password complexity rules?
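A complexity rule of this kind can be expressed as a single regular expression (an illustrative sketch; the specific policy shown here is hypothetical):

```python
import re

# Hypothetical policy: 8 to 64 characters with at least one lowercase
# letter, one uppercase letter, and one digit.
PASSWORD_RULE = re.compile(r"^(?=.*[a-z])(?=.*[A-Z])(?=.*\d).{8,64}$")

def is_complex_enough(password: str) -> bool:
    # Each lookahead asserts one required character class without
    # consuming input; the final quantifier enforces the length bounds.
    return PASSWORD_RULE.match(password) is not None
```

The same pattern can be used in an ASP.NET RegularExpressionValidator so that the server-enforced rule and the client-side convenience check stay consistent.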
Make sure you do not display messages such as "Incorrect password" because this
tells malicious users that the user name is correct. This allows them to focus their
efforts on cracking passwords.
This is recommended because otherwise there is a high probability that a user will
not change his or her password, which makes it more vulnerable.
If an account is compromised, can you easily disable the account to prevent the
attacker from continuing to use the account?
Review the following questions to help validate the authorization strategy of your application
design:
Options include IIS Web permissions, NTFS permissions, ASP.NET file authorization
(which applies only with Windows authentication), URL authorization, and principal
permission demands. If certain types are not used, make sure you know the
reasons why not.
If so, how are the role lists maintained and how secure are the administration
interfaces that are required to do this?
Does your design provide the right degree of granularity so that the privileges that
are associated with distinct user roles are adequately separated? Avoid situations
where roles are granted elevated privileges just to satisfy the requirements of
certain users. Consider adding new roles instead.
This is recommended because the login of the application can only be granted
permissions to access the specified stored procedures. The login can be restricted
from performing direct create/read/update/delete (CRUD) operations against the
database.
This benefits security, and it also improves performance and future maintainability.
For more information about database authorization approaches, see Chapter 14, "Building
Secure Data Access."
Code access security provides a resource constraint model that can prevent code
(and Web applications) from accessing specific types of system-level resources.
When you use code access security, it inevitably influences your design. Identify
whether or not you want to include code access security in your design plans, and
then design accordingly by isolating and sandboxing privileged code and placing
resource access code in its own separate assemblies.
Your design should identify all of the identities that the application uses, including
the process identity, and any impersonated identities, including anonymous Internet
user accounts and service identities. The design should also indicate to which
resources these identities require access.
For more information about designing for code access security, see Chapter 9, "Using Code
Access Security with ASP.NET."
Configuration Management
If your application provides an administration interface that allows it to be configured,
examine how the administration interfaces are secured. Also examine how sensitive
configuration data is secured. Table 5.4 shows the most common configuration
management vulnerabilities.
Use the following questions to help validate the approach of your application design to
configuration management:
Configuration data that is held in files in the Web space is considered less secure
than data that is held outside the Web space. Host configuration mistakes or
undiscovered bugs could potentially allow an attacker to retrieve and download
configuration files over HTTP.
Make sure that key items of configuration data, such as database connection
strings, encryption keys, and service account credentials, are encrypted inside the
store.
Use the following questions to help validate the handling of sensitive data by your
application:
Also, if you use Windows authentication, you avoid storing connection strings with
embedded credentials.
If you use encryption, how do you secure the encryption keys? Consider using
platform-provided DPAPI encryption that takes care of the key management for
you.
Examine how your application stores its encrypted data. For maximum security,
access to the encrypted data should be restricted with Windows ACLs. Check that
the application does not store secrets in clear text or in source code.
If you use the Local Security Authority (LSA), the code that retrieves the secret has
to run with administrator privileges, which increases risk. An alternative approach
that does not require extended privileges is to use DPAPI.
Examine how your application accesses the secrets and how long they are retained
in memory in clear text form. Secrets should generally be retrieved on demand,
used for the smallest amount of time possible, and then discarded.
If so, make sure the cookie is encrypted and is not persisted on the client
computer.
What encryption algorithm do you use? You should encrypt the data using a
strong encryption algorithm with a large key size, such as Triple DES.
How do you secure the encryption keys? The data is only as secure as the
encryption key, so examine how you secure the key. Ideally, encrypt the key with
DPAPI and secure it in a restricted location, for example, a registry key.
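To make the algorithm and key questions above concrete, the following sketch encrypts a buffer with Triple DES using the .NET Framework cryptography classes. The key and IV are supplied by the caller; as the questions note, the key itself should be protected (for example, with DPAPI) and stored in a restricted location, never hard-coded.

```csharp
using System.Security.Cryptography;

public static class TripleDesHelper
{
    // Sketch only: Triple DES encryption of a byte buffer. The key
    // (24 bytes) and IV (8 bytes) must come from a secured store.
    public static byte[] Encrypt(byte[] plainText, byte[] key, byte[] iv)
    {
        TripleDESCryptoServiceProvider tdes =
            new TripleDESCryptoServiceProvider();
        ICryptoTransform encryptor = tdes.CreateEncryptor(key, iv);
        byte[] cipherText =
            encryptor.TransformFinalBlock(plainText, 0, plainText.Length);
        tdes.Clear(); // scrub key material held by the provider
        return cipherText;
    }
}
```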
Use the following questions to help validate your application's approach to session
management:
If you track session state with session identifiers — for example, tokens contained
in cookies — examine whether or not the identifier or cookie is only passed over an
encrypted channel, such as SSL.
Make sure that your application does not pass session identifiers in query strings.
These strings can be easily modified at the client, which would allow a user to
access the application as another user, access the private data of other users, and
potentially elevate privileges.
Examine how long your application considers a session identifier valid. The application
should limit this time to mitigate the threat of session hijacking and replay attacks.
For more information about securing ASP.NET session state, see "Session State" in
Chapter 19, "Securing Your ASP.NET Application and Web Services."
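As one concrete setting, ASP.NET keeps session identifiers in cookies rather than in URLs when cookieless remains false in the <sessionState> element. The fragment below is a sketch; mode and the other attributes depend on your chosen state store.

```xml
<!-- Web.config fragment: cookie-based session identifiers. -->
<sessionState mode="InProc" cookieless="false" />
```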
Cryptography
If your application uses cryptography to provide security, examine what it is used for and
the way it is used. Table 5.7 shows the most common vulnerabilities relating to
cryptography.
Review the following questions to help validate your application's use of
cryptography:
Examine what algorithms your application uses and for what purpose. Larger key
sizes result in improved security, but performance suffers. Stronger encryption is
most important for persisted data that is retained in data stores for prolonged
periods of time.
For more information about choosing an appropriate algorithm and key size, see the
Cryptography section in Chapter 4, "Design Guidelines for Secure Web Applications."
The encrypted data is only as secure as the key. To decipher encrypted data, an attacker
must be able to retrieve the key and the cipher text. Therefore, examine your design to
ensure that the encryption keys and the encrypted data are secured. Consider the following
review questions:
If you use DPAPI, the platform manages the key for you. Otherwise, the application
is responsible for key management. Examine how your application secures its
encryption keys. A good approach is to use DPAPI to encrypt the encryption keys
that are required by other forms of encryption. Then securely store the encrypted
key, for example, by placing it in the registry beneath a key configured with a
restricted ACL.
Do not overuse keys. The longer the same key is used, the more likely it is to be
discovered. Does your design consider how and how often you are going to recycle
keys and how they are going to be distributed and installed on your servers?
Parameter Manipulation
Examine how your application uses parameters. These parameters include form fields,
query strings, cookies, HTTP headers, and view state that are passed between client and
server. If you pass sensitive data, such as session identifiers, using parameters such as
query strings, a malicious client can easily bypass your server side checks with simple
parameter manipulation. Table 5.8 shows the most common parameter manipulation
vulnerabilities.
Examine the following questions to help ensure that your design is not susceptible to
parameter manipulation attacks:
If your application uses a cookie that contains sensitive data, such as a user name
or a role list, make sure it is encrypted.
Do you pass sensitive data in query strings or form fields? This is not recommended
because there is no easy way to prevent the manipulation of data in query strings or
form fields. Instead, consider using encrypted session identifiers and store the sensitive
data in the session state store on the server.
If your Web pages or controls use view state to maintain state across HTTP
requests, check that the view state is encrypted and checked for integrity with
message authentication codes (MACs). You can configure this at the machine level
or on a page-by-page basis.
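As a sketch, view state protection can be configured through the <pages> element in Machine.config or Web.config, or per page with the @ Page directive (for example, <%@ Page EnableViewStateMac="true" %>). Note that enableViewStateMac defaults to true and should not be disabled.

```xml
<!-- Machine.config or Web.config: enforce view state MACs for all pages. -->
<pages enableViewStateMac="true" />
```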
Review the following questions to help ensure that your design is not susceptible to
exception management security vulnerabilities:
Make sure that the application does not let internal exception conditions propagate
beyond the application boundary. Exceptions should be caught and logged on the
server and, if necessary, generic error messages should be returned to the client.
Your design should define the custom error messages that will be used by your
application when critical errors occur. Make sure they do not contain any sensitive
items of data that could be exploited by a malicious user.
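In ASP.NET, one common way to keep internal error details away from clients is the <customErrors> element in Web.config; the redirect page name below is a placeholder.

```xml
<!-- Web.config: return a generic page to remote clients when an
     unhandled error occurs, while full details remain on the server. -->
<customErrors mode="RemoteOnly" defaultRedirect="GenericError.htm" />
```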
Review the following questions to help verify the approach to auditing and logging by your
application:
Check that you audit other key events, including data retrieval, network
communications, and administrative functions (such as enabling and disabling of
logging).
If you do not flow the original caller identity at the operating system level, for
example, because of the limited scalability that this approach offers, identify how
the application flows the original caller identity. This is required for cross-tier
auditing (and potentially for authorization).
Also, if multiple users are mapped to a single application role, check that the
application logs the identity of the original caller.
By considering your design in relation to the target deployment environment and the security
policies defined by that environment, you can help ensure a smooth and secure application
deployment.
If your application has already been created, the architecture and design review is still an
important part of the security assessment process that helps you fix vulnerabilities and
improve future designs.
Additional Resources
For more information, see the following resources:
For a printable checklist, see "Checklist: Architecture and Design Review for
Security," in the "Checklists" section of this guide.
Part III: Building Secure Web Applications
Chapter List
Chapter 6: .NET Security Overview
This chapter emphasizes how .NET Framework security applies to ASP.NET Web
applications and Web services.
How to Use This Chapter
This chapter describes the security benefits inherent in using the .NET Framework and
explains the complementary features of .NET Framework user (or role-based) security and
.NET Framework code-based (or code access) security. We recommend that you use this
chapter as follows:
Create applications that use the security concepts in this chapter. This
chapter tells you when you should use user-based security and when you should
use code-based security. After reading this chapter, you will be able to identify how
any new applications you create can be more secure by using role-based or
code-based security.
Managed Code Benefits
Developing .NET Framework applications provides you with some immediate security
benefits, although there are still many issues for you to think about. These issues are
discussed in the Building chapters in Part III of this guide.
.NET Framework assemblies are built with managed code. Compilers for languages such
as the Microsoft Visual C#® development tool and Microsoft Visual Basic® .NET
development system, output Microsoft intermediate language (MSIL) instructions, which are
contained in standard Microsoft Windows portable executable (PE) .dll or .exe files. When
the assembly is loaded and a method is called, the method's MSIL code is compiled by a
just-in-time (JIT) compiler into native machine instructions, which are subsequently
executed. Methods that are never called are not JIT-compiled.
The use of an intermediate language coupled with the run-time environment provided by the
common language runtime offers assembly developers immediate security advantages.
File format and metadata validation. The common language runtime verifies that
the PE file format is valid and that addresses do not point outside of the PE file.
This helps provide assembly isolation. The common language runtime also validates
the integrity of the metadata that is contained in the assembly.
Code verification. The MSIL code is verified for type safety at JIT compile time.
This is a major plus from a security perspective because the verification process
can prevent bad pointer manipulation, validate type conversions, check array
bounds, and so on. This virtually eliminates buffer overflow vulnerabilities in
managed code, although you still need to carefully inspect any code that calls
unmanaged application programming interfaces (APIs) for the possibility of buffer
overflow.
Code access security. The virtual execution environment provided by the common
language runtime allows additional security checks to be performed at runtime.
Specifically, code access security can make various run-time security decisions
based on the identity of the calling code.
User vs. Code Security
User security and code security are two complementary forms of security that are available
to .NET Framework applications. User security answers the questions, "Who is the user
and what can the user do?" while code security answers the questions "Where is the code
from, who wrote the code, and what can the code do?" Code security involves authorizing
the application's (not the user's) access to system-level resources, including the file system,
registry, network, directory services, and databases. In this case, it does not matter who
the end user is, or which user account runs the code, but it does matter what the code is
and is not allowed to do.
The .NET Framework user security implementation is called role-based security. The code
security implementation is called code access security.
Role-Based Security
.NET Framework role-based security allows a Web application to make security decisions
based on the identity or role membership of the user that interacts with the application. If
your application uses Windows authentication, then a role is a Windows group. If your
application uses other forms of authentication, then a role is application-defined and user
and role details are usually maintained in SQL Server or user stores based on Active
Directory.
The identity of the authenticated user and its associated role membership is made available
to Web applications through Principal objects, which are attached to the current Web
request.
Figure 6.1 shows a logical view of how user security is typically used in a Web application
to restrict user access to Web pages, business logic, operations, and data access.
Code access security is an important additional defense mechanism that you can use to
provide constraints on a piece of code. An administrator can configure code access security
policy to restrict the resource types that code can access and the other privileged
operations it can perform. From a Web application standpoint, this means that in the event
of a compromised process where an attacker takes control of a Web application process
or injects code to run inside the process, the additional constraints that code access
security provides can limit the damage that can be done.
Figure 6.2 shows a logical view of how code access security is used in a Web application
to constrain the application's access to system resources, resources owned by other
applications, and privileged operations, such as calling unmanaged code.
The authentication (identification) of code is based on evidence about the code, for
example, its strong name, publisher, or installation directory. Authorization is based on the
code access permissions granted to code by security policy. For more information about
.NET Framework code access security, see Chapter 8, "Code Access Security in Practice."
.NET Framework Role-Based Security
.NET Framework role-based security is a key technology that is used to authorize a user's
actions in an application. Roles are often used to enforce business rules. For example, a
financial application might allow only managers to perform monetary transfers that exceed a
particular threshold.
Role-based security involves the following elements, which are described in this section:
Principal and Identity objects
PrincipalPermission objects
Declarative and imperative security checks
URL authorization
There are many types of Principal objects and the precise type depends on the
authentication mechanism used by the application. However, all Principal objects implement
the System.Security.Principal.IPrincipal interface and they all maintain a list of roles of
which the user is a member.
Principal objects also contain Identity objects, which include the user's name, together with
flags that indicate the authentication type and whether or not the user has been
authenticated. This allows you to distinguish between authenticated and anonymous users.
There are different types of Identity objects, depending on the authentication type, although
all implement the System.Security.Principal.IIdentity interface.
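For example, code in an ASP.NET page or handler can inspect the identity attached to the current request through the IIdentity interface; the sketch below shows the general pattern.

```csharp
using System.Web;

public class IdentityInspector
{
    // Sketch: distinguishing authenticated from anonymous callers via
    // the IIdentity attached to the current request's principal.
    public static void Inspect()
    {
        System.Security.Principal.IIdentity identity =
            HttpContext.Current.User.Identity;
        if (identity.IsAuthenticated)
        {
            string name = identity.Name;                   // e.g. "DOMAIN\Bob"
            string authType = identity.AuthenticationType; // e.g. "NTLM", "Forms"
        }
    }
}
```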
The following table shows the range of possible authentication types and the different types
of Principal and Identity objects that ASP.NET Web applications use.
PrincipalPermission Objects
The PrincipalPermission object represents the identity and role that the current principal
must have to execute code. PrincipalPermission objects can be used declaratively or
imperatively in code.
Declarative Security
You can control precisely which users should be allowed to access a class or a method by
adding a PrincipalPermissionAttribute to the class or method definition. A class-level
attribute automatically applies to all class members unless it is overridden by a member-
level attribute. The PrincipalPermissionAttribute type is defined within the
System.Security.Permissions namespace.
The following example shows how to restrict access to a particular class to members of a
Managers group. Note that this example assumes Windows authentication, where the
role name takes the format MachineName\RoleName or DomainName\RoleName. For
other authentication types, the format of the role name is application specific and
depends on the role-name strings held in the user store.
[PrincipalPermissionAttribute(SecurityAction.Demand, Role=@"DomainName\Managers")]
public sealed class OnlyManagersCanCallMe
{
}
Note: The trailing "Attribute" can be omitted from attribute type names. This makes
the attribute type name appear to be the same as the associated permission type
name, which in this case is PrincipalPermission. They are distinct (but logically
related) types.
The next example shows how to restrict access to a particular method on a class. In this
example, access is restricted to members of the local administrators group, which is
identified by the special "BUILTIN\Administrators" identifier.
[PrincipalPermissionAttribute(SecurityAction.Demand,
Role=@"BUILTIN\Administrators")]
public void SomeMethod()
{
}
Other built-in Windows group names can be used by prefixing the group name with
"BUILTIN\" (for example, "BUILTIN\Users" and "BUILTIN\Power Users").
Imperative Security
If method-level security is not granular enough for your security requirements, you can
perform imperative security checks in code by using
System.Security.Permissions.PrincipalPermission objects.
For example, the following statement creates a PrincipalPermission object and calls its
Demand method in a single line of code:
(new PrincipalPermission(null, @"DomainName\WindowsGroup")).Demand();
The code creates a PrincipalPermission object with a blank user name and a specified
role name, and then calls the Demand method. This causes the common language runtime
to interrogate the current Principal object that is attached to the current thread and check
whether the associated identity is a member of the specified role. Because Windows
authentication is used in this example, the role check uses a Windows group. If the current
identity is not a member of the specified role, a SecurityException is thrown.
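A caller can also handle the SecurityException that a failed demand raises. The sketch below assumes Windows authentication; the method, domain, and group names are placeholders.

```csharp
using System.Security;
using System.Security.Permissions;

public class FundsService
{
    // Sketch: an imperative role check with explicit failure handling.
    public void TransferFunds()
    {
        try
        {
            PrincipalPermission permCheck =
                new PrincipalPermission(null, @"DomainName\Managers");
            permCheck.Demand(); // throws if the caller is not in the role
            // ... privileged work goes here ...
        }
        catch (SecurityException)
        {
            // The current principal is not a member of the required role.
            // Log the attempt and return a generic error to the caller.
        }
    }
}
```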
Security attributes ensure that the permission demand is executed before any other
code in the method has a chance to run. This eliminates potential bugs where
security checks are performed too late.
Declarative checks at the class level apply to all class members. Imperative checks
apply at the call site.
The main advantages of imperative security and the main reasons that you sometimes must
use it are:
It allows you to dynamically shape the demand by using values only available at
runtime.
URL Authorization
Administrators can configure role-based security by using the <authorization> element in
Machine.config or Web.config. This element configures the ASP.NET
UrlAuthorizationModule, which uses the principal object attached to the current Web
request in order to make authorization decisions.
The authorization element contains child <allow> and <deny> elements, which are used to
determine which users or groups are allowed or denied access to specific directories or
pages. Unless the <authorization> element is contained within a <location> element, the
<authorization> element in Web.config controls access to the directory in which the
Web.config file resides. This is normally the Web application's virtual root directory.
The following example from Web.config uses Windows authentication and allows Bob and
Mary access but denies everyone else:
<authorization>
<allow users="DomainName\Bob, DomainName\Mary" />
<deny users="*" />
</authorization>
The following syntax and semantics apply to the configuration of the <authorization>
element:
Users and roles for URL authorization are determined by your authentication
settings.
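For example, with Windows authentication, roles correspond to Windows groups, so access can be granted by group and denied to anonymous users. The domain and group names below are placeholders.

```xml
<authorization>
  <!-- "?" denies anonymous users; "*" would deny all users. -->
  <allow roles="DomainName\Managers" />
  <deny users="?" />
</authorization>
```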
You can also point the path attribute at a specific folder to apply access control to all the
files in that particular folder. For more information about the <location> element, see
Chapter 19, "Securing Your ASP.NET Application and Web Services."
.NET Framework Security Namespaces
To program .NET Framework security, you use the types in the .NET Framework security
namespaces. This section introduces these namespaces and the types that you are likely to
use when you develop secure Web applications. For a full list of types, see the .NET
Framework documentation. The security namespaces are listed below and are shown in
Figure 6.3.
System.Security
System.Web.Security
System.Security.Cryptography
System.Security.Principal
System.Security.Policy
System.Security.Permissions
System.Security
This namespace contains the CodeAccessPermission base class from which all other
code access permission types derive. You are unlikely to use the base class directly. You
are more likely to use specific permission types that represent the rights of code to access
specific resource types or perform other privileged operations. For example,
FileIOPermission represents the rights to perform file I/O, EventLogPermission
represents the rights for code to access the event log, and so on. For a full list of code
access permission types, see Table 6.2 later in this chapter.
The System.Security namespace also contains classes that encapsulate permission sets.
These include the PermissionSet and NamedPermissionSet classes. The types you are
most likely to use when building secure Web applications are:
SecurityException. The exception type used to represent security errors.
System.Web.Security
This namespace contains the classes used to manage Web application authentication and
authorization. This includes Windows, Forms, and Passport authentication and URL and File
authorization, which are controlled by the UrlAuthorizationModule and
FileAuthorizationModule classes, respectively. The types you are most likely to use when
you build secure Web applications are:
System.Security.Cryptography
This namespace contains types that are used to perform encryption and decryption,
hashing, and random number generation. This is a large namespace that contains many
types. Many encryption algorithms are implemented in managed code, while others are
exposed by types in this namespace that wrap the underlying cryptographic functionality
provided by the Microsoft Win32®-based CryptoAPI.
System.Security.Principal
This namespace contains types that are used to support role-based security. They are used
to restrict which users can access classes and class members. The namespace includes
the IPrincipal and IIdentity interfaces. The types you are most likely to use when building
secure Web applications are:
GenericPrincipal and GenericIdentity. Allow you to define your own roles and
user identities. These are typically used with custom authentication mechanisms.
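As a minimal sketch of a custom authentication mechanism, the following code builds a GenericPrincipal after the application has validated the user's credentials by its own means. The user name and role names are placeholders.

```csharp
using System.Security.Principal;
using System.Threading;

public class CustomAuth
{
    // Sketch: called after the application has validated credentials
    // against its own user store.
    public static void AttachPrincipal()
    {
        GenericIdentity identity = new GenericIdentity("Bob");
        string[] roles = new string[] { "Manager", "Employee" };
        GenericPrincipal principal = new GenericPrincipal(identity, roles);
        Thread.CurrentPrincipal = principal;
        // Subsequent role checks use the supplied role list:
        bool isManager = principal.IsInRole("Manager");
    }
}
```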
System.Security.Policy
This namespace contains types that are used to implement the code access security policy
system. It includes types to represent code groups, membership conditions, policy levels,
and evidence.
System.Security.Permissions
This namespace contains the majority of permission types that are used to encapsulate the
rights of code to access resources and perform privileged operations. The following table
shows the permission types that are defined in this namespace (in alphabetical order).
The SecurityPermission class warrants special attention because it represents the rights
of code to perform privileged operations, including asserting code access permissions,
calling unmanaged code, using reflection, and controlling policy and evidence, among
others. The precise right represented by the SecurityPermission class is determined by
its Flags property, which must be set to one of the values defined by the
SecurityPermissionFlag enumerated type (for example,
SecurityPermissionFlag.UnmanagedCode).
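For example, a method that wraps a call into unmanaged code can demand the unmanaged code right declaratively. This is a sketch; the class and method names are illustrative.

```csharp
using System.Security.Permissions;

public class NativeWrapper
{
    // Sketch: the demand succeeds only if callers have been granted
    // SecurityPermission with the UnmanagedCode flag.
    [SecurityPermission(SecurityAction.Demand,
                        Flags=SecurityPermissionFlag.UnmanagedCode)]
    public void CallNativeApi()
    {
        // P/Invoke call would go here.
    }
}
```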
Summary
This chapter has introduced you to the .NET Framework security landscape by contrasting
user security and code security and by examining the security namespaces. The .NET
Framework refers to these two types of security as role-based security and code access
security, respectively. Both forms of security are layered on top of Windows security.
For more information about code access security, see Chapter 8, "Code Access
Security in Practice," and Chapter 9, "Using Code Access Security with ASP.NET."
For information about code access security and role-based security, see the MSDN
article, ".NET Framework Security," at
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpguide/html/cpconnetframeworksecurity.asp.
Chapter 7: Building Secure Assemblies
In This Chapter
Improving the security of your assemblies with simple, proven coding techniques.
Reducing the attack surface through well-designed interfaces and solid object
oriented programming techniques.
Writing secure resource access code including file I/O, registry, event log,
database, and network access.
Overview
Assemblies are the building blocks of .NET Framework applications and are the unit of
deployment, version control, and reuse. They are also the unit of trust for code access
security (all the code in an assembly is equally trusted). This chapter shows you how to
improve the security design and implementation of your assemblies. This includes evaluating
deployment considerations, following solid object-oriented programming practices,
tamperproofing your code, ensuring that internal system level information is not revealed to
the caller, and restricting who can call your code.
Managed code, the .NET Framework, and the common language runtime eliminate several
important security-related vulnerabilities often found in unmanaged code. Type-safe
verification of code is a good example of how the .NET Framework helps. This makes it
virtually impossible for buffer overflows to occur in managed code, which all but eliminates
the threat of stack-based code injection. However, if you call unmanaged code, buffer
overflows can still occur. In addition, you must also consider many other issues when you
write managed code.
How to Use This Chapter
The following are recommendations on how to use this chapter:
Use the corresponding checklist. For a checklist that summarizes the
best practices and recommendations for both chapters, see "Checklist: Security
Review for Managed Code" in the Checklists section of this guide.
Threats and Countermeasures
Understanding threats and the common types of attack helps you to identify appropriate
countermeasures and allows you to build more secure and robust assemblies. The main
threats are:
Code injection
Information disclosure
Tampering
Vulnerabilities
Vulnerabilities that can lead to unauthorized access and privilege elevation include:
Non-sealed and unrestricted base classes, which allow any code to derive from
them
Attacks
A luring attack where malicious code accesses your assembly through a trusted
intermediary assembly to bypass authorization mechanisms
Countermeasures
Countermeasures that you can use to prevent unauthorized access and privilege elevation
include:
Use role-based authorization to provide access controls on all public classes and
class members.
Restrict type and member visibility to limit which code is publicly accessible.
Sandbox privileged code and ensure that calling code is authorized with the
appropriate permission demands.
Code Injection
With code injection, an attacker executes arbitrary code using your assembly's process
level security context. The risk is increased if your assembly calls unmanaged code and if
your assembly runs under a privileged account.
Vulnerabilities
Poor input validation, particularly where your assembly calls into unmanaged code
Attacks
Buffer overflows
Invoking a delegate from an untrusted source
Countermeasures
Use strongly typed delegates and deny permissions before calling the delegate.
Information Disclosure
Assemblies can suffer from information disclosure if they leak sensitive data such as
exception details and clear text secrets to legitimate and malicious users alike. It is also
easier to reverse engineer an assembly's Microsoft Intermediate Language (MSIL) into
source code than it is with binary machine code. This presents a threat to intellectual
property.
Vulnerabilities
Attacks
Countermeasures
Tampering
The risk with tampering is that your assembly is modified by altering the MSIL instructions in
the binary DLL or EXE assembly file.
Vulnerabilities
The primary vulnerability that makes your assembly vulnerable to tampering is the lack of a
strong name signature.
Attacks
Countermeasures
To counter the tampering threat, use a strong name to sign the assembly with a private key.
When a signed assembly is loaded, the common language runtime detects if the assembly
has been modified in any way and will not load the assembly if it has been altered.
Privileged Code
When you design and build secure assemblies, you must be able to identify privileged code. This has
important implications for code access security. Privileged code is managed code that
accesses secured resources or performs other security sensitive operations such as calling
unmanaged code, using serialization, or using reflection. It is referred to as privileged code
because it must be granted permission by code access security policy to be able to
function. Non-privileged code only requires the permission to execute.
Privileged Resources
The types of resources for which your code requires code access security permissions
include the file system, databases, registry, event log, Web services, sockets, DNS
databases, directory services, and environment variables.
Privileged Operations
Other privileged operations for which your code requires code access security permissions
include calling unmanaged code, using serialization, using reflection, creating and controlling
application domains, creating Principal objects, and manipulating security policy.
For more information about the specific types of code access security permissions required
for accessing resources and performing privileged operations, see "Privileged Code" in
Chapter 8, "Code Access Security in Practice."
Assembly Design Considerations
One of the most significant issues to consider at design time is the trust level of your
assembly's target environment, which affects the code access security permissions granted
to your code and to the code that calls your code. This is determined by code access
security policy defined by the administrator, and it affects the types of resources your code
is allowed to access and other privileged operations it can perform.
The target environment that your assembly is installed in is important because code access
security policy may constrain what your assembly is allowed to do. If, for example, your
assembly depends on the use of OLE DB, it will fail in anything less than a full trust
environment.
The risk of a security compromise increases significantly if your assembly supports partial
trust callers (that is, code that you do not fully trust.) Code access security has additional
safeguards to help mitigate the risk. For additional guidelines that apply to assemblies that
support partial trust callers, see Chapter 8, "Code Access Security in Practice." Without
additional programming, your code supports partial trust callers in two situations: when
your assembly does not have a strong name, or when it is strong named and includes
AllowPartiallyTrustedCallersAttribute (APTCA). Keep in mind the following points
about partial trust assemblies:
A partial trust assembly can only gain access to a restricted set of resources and
perform a restricted set of operations, depending upon which code access security
permissions it is granted by code access security policy.
A partial trust assembly cannot call a strong named assembly unless the strong
named assembly includes AllowPartiallyTrustedCallersAttribute.
Other partial trust assemblies may not be able to call your assembly because they
do not have the necessary permissions. The permissions that a calling assembly
must have to be able to call your assembly are determined by:
To avoid granting powerful permissions to a whole application just to satisfy the needs of a
few methods that perform privileged operations, sandbox privileged code and put it in a
separate assembly. This allows an administrator to configure code access security policy to
grant the extended permissions to the code in the specific assembly and not to the whole
application.
For example, if your application needs to call unmanaged code, sandbox the unmanaged
calls in a wrapper assembly, so that an administrator can grant the unmanaged code
permission (SecurityPermission with the UnmanagedCode flag) to the wrapper assembly
and not to the whole application.
For more information about sandboxing unmanaged API calls, see "Unmanaged Code" in
Chapter 8, "Code Access Security in Practice."
Think carefully about which types and members form part of your assembly's public
interface. Limit the assembly's attack surface by minimizing the number of entry points and
using a well designed, minimal public interface.
Class Design Considerations
In addition to using a well defined and minimal public interface, you can further reduce your
assembly's attack surface by designing secure classes. Secure classes conform to solid
object-oriented design principles, prevent inheritance where it is not required, and limit
which users and which code can call them. The following recommendations help you
design secure classes:
If a class is not designed as a base class, prevent inheritance using the sealed keyword as
shown in the following code sample.
public sealed class NobodyDerivesFromMe
{}
For base classes, you can restrict which other code is allowed to derive from your class by
using code access security inheritance demands. For more information, see "Authorizing
Code" in Chapter 8, "Code Access Security in Practice."
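As a sketch of such an inheritance demand, the attribute below requires that any deriving class come from an assembly signed with a particular key. The PublicKey value shown is a truncated placeholder, not a real key.

```csharp
using System.Security.Permissions;

// Sketch: only assemblies signed with the matching key may derive.
// The PublicKey string is a placeholder for illustration only.
[StrongNameIdentityPermission(SecurityAction.InheritanceDemand,
    PublicKey="00240000048000009400000006020000002400005253413100040000")]
public class OnlyTrustedCodeDerivesFromMe
{
}
```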
Whether or not you should strong name an assembly depends on the way in which you
intend it to be used. The main reasons for wanting to add a strong name to an assembly
include:
You want to ensure that partially trusted code is not able to call your assembly.
The common language runtime prevents partially trusted code from calling a strong
named assembly, by adding link demands for the FullTrust permission set. You can
override this behavior by using AllowPartiallyTrustedCallersAttribute (APTCA),
although you should do so with caution.
For more information about APTCA, see APTCA in Chapter 8, "Code Access
Security in Practice."
You want to share the assembly among multiple applications. In this case, the
assembly should be installed in the global assembly cache, which requires a strong
name. The global assembly cache supports side-by-side versioning, which allows
different applications to bind to different versions of the same assembly.
The public key portion of the strong name gives cryptographically strong evidence
for code access security. You can use the strong name to uniquely identify the
assembly when you configure code access security policy to grant the assembly
specific code access permissions. Other forms of cryptographically strong evidence
include the Authenticode signature (if you have used X.509 certificates to sign the
assembly) and an assembly's hash.
For more information about evidence types and code access security, see Chapter
8, "Code Access Security in Practice."
Strong named assemblies are signed with a digital signature. This protects the
assembly from modification. Any tampering causes the verification process that
occurs at assembly load time to fail. An exception is generated and the assembly is
not loaded.
Strong named assemblies cannot be called by partially trusted code, unless you
specifically add AllowPartiallyTrustedCallersAttribute (APTCA).
Note If you do use APTCA, make sure you read Chapter 8, "Code Access
Security in Practice," for additional guidelines to further improve the
security of your assemblies.
Strong names provide cryptographically strong evidence for code access security
policy evaluation. This allows administrators to grant permissions to specific
assemblies. It also allows developers to use a StrongNameIdentityPermission to
restrict which code can call a public member or derive from a non-sealed class.
Delay Signing
It is good security practice to delay sign your assemblies during application development.
This results in the public key being placed in the assembly, which means that it is available
as evidence to code access security policy, but the assembly is not signed, and as a result
is not yet tamper proof. From a security perspective, delay signing has two main
advantages:
The private key used to sign the assembly and create its digital signature is held
securely in a central location. The key is only accessible by a few trusted
personnel. As a result, the chance of the private key being compromised is
significantly reduced.
A single public key, which can be used to represent the development organization or
publisher of the software, is used by all members of the development team, instead
of each developer using his or her own public/private key pair, typically generated
by the sn -k command.
This procedure is performed by the signing authority to create a public key file that
developers can use to delay sign their assemblies.
1. Create a key pair for your organization.
sn.exe -k keypair.snk
2. Extract the public key from the key pair into a separate file. Developers use this
file to delay sign their assemblies.
sn.exe -p keypair.snk publickey.snk
3. Secure Keypair.snk, which contains both the private and public keys. For example,
put it on a floppy or CD and physically secure it.
4. The delay signing process and the absence of an assembly signature mean that
the assembly will fail verification at load time. To work around this, use the
following commands on development and test computers.
To disable verification for all assemblies with a particular public key, use
the following command.
sn -Vr *,publickeytoken
To extract the public key and key token (a truncated hash of the public
key), use the following command.
sn -Tp assembly.dll
5. To fully complete the signing process and create a digital signature to make the
assembly tamper proof, execute the following command. This requires the private
key, and as a result the operation is normally performed as part of the formal
build/release process.
sn -R assembly.dll keypair.snk
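While development is in progress, each team member embeds the public key and marks the assembly for delay signing with assembly-level attributes. A minimal sketch (the public key file name is an assumption):

```csharp
using System.Reflection;

// Delay signing: only the public key is embedded in the assembly; the
// signature is completed later with "sn -R" using the secured private key.
[assembly: AssemblyDelaySign(true)]
[assembly: AssemblyKeyFile(@"publickey.snk")]
```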
Note You can strong name any other assembly that is called by your Web page code,
for example an assembly that contains resource access, data access, or business
logic code, although the assembly must be placed in the global assembly cache.
The code of a domain-neutral assembly is shared by all application domains in the ASP.NET
process. This creates problems if a single strong named assembly is used by multiple Web
applications and each application grants it varying permissions or if the permission grant
varies between application domain restarts. In this situation, you may see the following
error message: "Assembly <assembly>.dll security permission grant set is incompatible
between appdomains."
To avoid this error, you must place strong named assemblies in the global assembly cache
and not in the application's private \bin directory.
Authenticode signatures and strong names were developed to solve separate problems and
you should not confuse them. Specifically:
Authenticode signatures should be used for mobile code, such as controls and
executables downloaded via Internet Explorer, to provide publisher trust and
integrity.
You can configure code access security (CAS) policy using both strong names and
Authenticode signatures in order to grant permissions to specific assemblies. However, the
Publisher evidence object, obtained from an Authenticode signature is only created by the
Internet Explorer host and not by the ASP.NET host. Therefore, on the server side, you
cannot use an Authenticode signature to identify a specific assembly (through a code
group). Use strong names instead.
For more information about CAS, CAS policy and code groups, see Chapter 8, "Code
Access Security in Practice."
Table 7.1 compares the features of strong names and Authenticode signatures.
Use structured exception handling instead of returning error codes from methods, because it
is easy to forget to check a return code and, as a result, fail into an insecure mode.
In the context of an ASP.NET Web application or Web service, this can be done with the
appropriate configuration of the <customErrors> element. For more information, see
Chapter 10, "Building Secure ASP.NET Web Pages and Controls."
In the example that produces the following output, Visual Basic .NET code calls a C#
class library because Visual Basic .NET supports exception filters, unlike C#. If you
create the two projects and run the code, this output is produced:
1> About to encounter an exception condition
2> Filter
3> Finally
4> Main: Catch ex as Exception
From this output, you can see that the exception filter executes before the code in the
finally block. If your code sets state that affects a security decision in the finally block,
malicious code that calls your code could add an exception filter to exploit this vulnerability.
For information about how to create an exception management framework and about best
practice exception management for .NET applications, see "Exception Management in
.NET" in the MSDN Library at http://msdn.microsoft.com/library/en-
us/dnbda/html/exceptdotnet.asp.
File I/O
Canonicalization issues are a major concern for code that accesses the file system. If you
have the choice, do not base security decisions on input file names because of the many
ways that a single file name can be represented. If your code needs to access a file using a
user-supplied file name, take steps to ensure your assembly cannot be used by a malicious
user to gain access to or overwrite sensitive data.
The following recommendations help you improve the security of your file I/O:
Check that file names are in a valid location, as defined by your application's
context. For example, are they within the directory hierarchy of your application?
To validate the path and file name, use the System.IO.Path.GetFullPath method. This
method also canonicalizes the supplied file name.
It checks that the file name does not contain any invalid characters, as defined by
Path.InvalidPathChars.
It checks that the file name represents a file and not another device type, such as
a physical drive, a named pipe, a mail slot, or a DOS device such as LPT1, COM1,
AUX, and other devices.
It checks that the combined path and file name is not too long.
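A minimal sketch of such a check (the helper name and application directory parameter are assumptions):

```csharp
using System;
using System.IO;

public sealed class FileNameValidator
{
    // Canonicalizes the user-supplied file name and verifies that it remains
    // inside the application's directory hierarchy.
    public static string GetSafePath(string appDirectory, string userFileName)
    {
        string appRoot = Path.GetFullPath(appDirectory);
        // GetFullPath throws if the name contains invalid characters
        // (Path.InvalidPathChars) or the combined path is too long.
        string fullPath = Path.GetFullPath(
            Path.Combine(appDirectory, userFileName));
        // For brevity, trailing-separator subtleties are ignored here.
        if (!fullPath.StartsWith(appRoot))
            throw new ArgumentException(
                "File is outside the application directory.");
        return fullPath;
    }
}
```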
Event Log
Direct access to the event logs using system administration tools such as the Event Viewer
is restricted by Windows security. Your main concern should be to ensure that the event
logging code you write cannot be used by a malicious user for unauthorized access to the
event log.
To prevent the disclosure of sensitive data, do not log it in the first place. For example, do
not log account credentials. Also, your code cannot be exploited to read existing records or
to delete event logs if all it does is write new records using EventLog.WriteEntry. The
main threat to address in this instance is a malicious caller calling your code a million or
so times in an attempt to force a log file cycle that overwrites previous log entries to
cover tracks. The best way to approach this problem is to use an out-of-band mechanism,
for example, by using Windows instrumentation to alert operators as soon as the event log
approaches its threshold.
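Write-only event logging code looks like the following sketch; the event source name is an assumption:

```csharp
using System.Diagnostics;

public class AuditLogger
{
    // Writes a new record only; this code exposes no way to read or
    // delete existing entries.
    public static void LogEvent(string message)
    {
        const string source = "MyWebApp"; // hypothetical event source
        if (!EventLog.SourceExists(source))
            EventLog.CreateEventSource(source, "Application");
        EventLog.WriteEntry(source, message, EventLogEntryType.Information);
    }
}
```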
Finally, you can use code access security and the EventLogPermission to put specific
constraints on what your code can do when it accesses the event log. For example, if you
write code that only needs to read records from the event log, it should be constrained with
an EventLogPermission that only supports browse access. For more information about how
to constrain event logging code, see "Event Log" in Chapter 8, "Code Access Security in
Practice."
Registry
The registry can provide a secure location for storing sensitive application configuration
data, such as encrypted database connection strings. You can store configuration data
under the single, local machine key (HKEY_LOCAL_MACHINE) or under the current user
key (HKEY_CURRENT_USER). Either way, make sure you encrypt the data using DPAPI
and store the encrypted data, not the clear text.
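For example, to store a DPAPI-encrypted connection string as binary registry data (the key path and value name are assumptions):

```csharp
using Microsoft.Win32;

public class ConfigStore
{
    // Stores only the DPAPI-encrypted bytes, never the clear text.
    public static void SaveConnectionString(byte[] encryptedBytes)
    {
        RegistryKey key = Registry.LocalMachine.CreateSubKey(
            @"Software\MyApp"); // hypothetical key; apply a restrictive ACL
        key.SetValue("ConnectionString", encryptedBytes);
        key.Close();
    }
}
```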
HKEY_LOCAL_MACHINE
If you store configuration data under HKEY_LOCAL_MACHINE, remember that any
process on the local computer can potentially access the data. To restrict access, apply a
restrictive access control list (ACL) to the specific registry key to limit access to
administrators and your specific process or thread token. Using HKEY_LOCAL_MACHINE
does, however, make it easier to store configuration data at installation time and to
maintain it later on.
HKEY_CURRENT_USER
If your security requirements dictate an even less accessible storage solution, use a key
under HKEY_CURRENT_USER. This approach means that you do not have to explicitly
configure ACLs because access to the current user key is automatically restricted based on
process identity.
Version 1.1 of the .NET Framework loads the user profile for the ASPNET account on
Windows 2000. On Windows Server 2003, the profile for this account is only loaded if the
ASP.NET process model is used. It is not loaded explicitly by Internet Information Services
(IIS) 6 if the IIS 6 process model is used on Windows Server 2003.
Note Version 1.0 of the .NET Framework does not load the ASPNET user profile,
which makes HKEY_CURRENT_USER a less practical option.
For more information about how to use the code access security RegistryPermission to
constrain registry access code, for example to limit it to specific keys, see "Registry" in
Chapter 8, "Code Access Security in Practice."
Data Access
Two of the most important factors to consider when your code accesses a database are
how to manage database connection strings securely and how to construct SQL statements
and validate input to prevent SQL injection attacks. Also, when you write data access code,
consider the permission requirements of your chosen ADO.NET data provider. For detailed
information about these and other data access issues, see Chapter 14, "Building Secure
Data Access."
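For example, parameterized commands keep user input out of the SQL statement text; the table and column names below are assumptions:

```csharp
using System.Data;
using System.Data.SqlClient;

public class AccountData
{
    // User input is bound as a typed parameter, never concatenated into
    // the SQL text, which defeats SQL injection.
    public static SqlCommand CreateQuery(SqlConnection conn, int accountId)
    {
        SqlCommand cmd = new SqlCommand(
            "SELECT Balance FROM Accounts WHERE AccountId = @id", conn);
        cmd.Parameters.Add("@id", SqlDbType.Int).Value = accountId;
        return cmd;
    }
}
```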
For information about how to use SqlClientPermission to constrain data access to SQL
Server using the ADO.NET SQL Server data provider, see "Data Access" in Chapter 8,
"Code Access Security in Practice."
Unmanaged Code
If you have existing COM components or Win32 DLLs that you want to reuse, use the
Platform Invocation Services (P/Invoke) or COM Interop layers. When you call unmanaged
code, it is vital that your managed code validates each input parameter passed to the
unmanaged API to guard against potential buffer overflows. Also, be careful when handling
output parameters passed back from the unmanaged API.
You should isolate calls to unmanaged code in a separate wrapper assembly. This allows
you to sandbox the highly privileged code and to isolate the code access security
permission requirements to a specific assembly. For more details about sandboxing and
about additional code access security related guidelines that you should apply when calling
unmanaged code, see "Unmanaged Code" in Chapter 8, "Code Access Security in
Practice." The following recommendations help improve the security of your unmanaged API
calls, without using explicit code access security coding techniques:
If you cannot examine the unmanaged code because you do not own it, make sure that you
rigorously test the API by passing in deliberately long input strings.
If your code uses a StringBuilder to receive a string passed from an unmanaged API,
make sure that it can hold the longest string that the unmanaged API can hand back.
Note Directory names and registry keys can only be a maximum of 248 characters
long.
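For example, when receiving a path from the Win32 GetWindowsDirectory API, size the StringBuilder for the longest possible result and pass its capacity to the API:

```csharp
using System.Runtime.InteropServices;
using System.Text;

public class NativeMethods
{
    [DllImport("kernel32.dll", CharSet=CharSet.Auto)]
    private static extern uint GetWindowsDirectory(StringBuilder buffer,
                                                   uint size);

    public static string GetWindowsDir()
    {
        // MAX_PATH (260) covers the longest string this API can hand back.
        StringBuilder path = new StringBuilder(260);
        GetWindowsDirectory(path, (uint)path.Capacity);
        return path.ToString();
    }
}
```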
If your assembly supports partial trust callers, consider the additional threat of being
passed a delegate by malicious code. For risk mitigation techniques to address this threat,
see the "Delegates" section in Chapter 8, "Code Access Security in Practice."
Serialization
You may need to add serialization support to a class if you need to be able to marshal it by
value across a .NET remoting boundary (that is, across application domains, processes, or
computers) or if you want to be able to persist the object state to create a flat data stream,
perhaps for storage on the file system.
By default, classes cannot be serialized. A class can be serialized if it is marked with the
SerializableAttribute or if it implements the ISerializable interface. If you use serialization:
The following example shows how to use the [NonSerialized] attribute to ensure a specific
field that contains sensitive data cannot be serialized.
[Serializable]
public class Employee {
// OK for name to be serialized
private string name;
// Prevent salary being serialized
[NonSerialized] private double annualSalary;
. . .
}
Alternatively, implement the ISerializable interface and explicitly control the serialization
process. If you must serialize the sensitive item or items of data, consider encrypting the
data first. The code that de-serializes your object must have access to the decryption key.
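A sketch of the ISerializable approach, which places only the non-sensitive field in the serialization stream (the class shape mirrors the earlier example):

```csharp
using System;
using System.Runtime.Serialization;

[Serializable]
public class Employee : ISerializable
{
    private string name;
    private double annualSalary; // sensitive; never added to the stream

    public Employee(string name, double annualSalary)
    {
        this.name = name;
        this.annualSalary = annualSalary;
    }

    // Deserialization constructor: restores only what was serialized.
    protected Employee(SerializationInfo info, StreamingContext context)
    {
        name = info.GetString("name");
    }

    public void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        info.AddValue("name", name);
        // Omit annualSalary, or add an encrypted form only.
    }
}
```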
For more information about input validation techniques, see "Input Validation" in Chapter 10,
"Building Secure ASP.NET Pages and Controls."
If there are other paths to OpenAndWorkWithResource, and a separate thread calls the
method on the same object, it is possible for the second thread to omit the security
demand, because it sees _callerOK=true, set by another thread.
If you use static class constructors, make sure they are not vulnerable to race conditions.
If, for example, they manipulate static state, add thread synchronization to avoid potential
vulnerabilities.
In this example, it is possible for two threads to execute the code before the first thread
has set the _theObject reference to null. Depending on the functionality provided by the
ReleaseResources method, security vulnerabilities may occur.
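The race described above can be sketched as follows, together with a lock-based fix; the type and member names are assumptions:

```csharp
public class ResourceHolder
{
    private static object _theObject = new object();
    private static readonly object _sync = new object();

    // Without the lock, two threads can both pass the null check before
    // either sets _theObject to null, so ReleaseResources can run twice.
    public static void Cleanup()
    {
        lock (_sync)
        {
            if (_theObject != null)
            {
                ReleaseResources(_theObject);
                _theObject = null;
            }
        }
    }

    private static void ReleaseResources(object o) { /* ... */ }
}
```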
Reflection
With reflection, you can dynamically load assemblies, discover information about types, and
execute code. You can also obtain a reference to an object and get or set its private
members. This has a number of security implications:
If your code uses reflection to reflect on other types, make sure that only trusted
code can call you. Use code access security permission demands to authorize
calling code. For more information, see Chapter 8, "Code Access Security in
Practice."
If your code generation relies on input from the caller, be especially vigilant for
security vulnerabilities. Validate any input string used as a string literal in your
generated code and escape quotation mark characters to make sure the caller
cannot break out of the literal and inject code. In general, if there is a way that the
caller can influence the code generation such that it fails to compile, there is
probably a security vulnerability.
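For example, before splicing caller input into a generated string literal, escape backslashes first and then quotation marks, so the input cannot terminate the literal:

```csharp
public class CodeGenHelper
{
    // Escapes caller input for safe use inside a generated C# string literal.
    public static string EscapeForLiteral(string input)
    {
        string escaped = input.Replace("\\", "\\\\").Replace("\"", "\\\"");
        return "\"" + escaped + "\"";
    }
}
```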
For more information, see "Secure Coding Guidelines for the .NET Framework" in the
MSDN Library.
Obfuscation
If you are concerned with protecting intellectual property, you can make it extremely difficult
for a decompiler to be used on the MSIL code of your assemblies, by using an obfuscation
tool. An obfuscation tool confuses human interpretation of the MSIL instructions and helps
prevent successful decompilation.
Obfuscation is not foolproof and you should not build security solutions that rely on it.
However, obfuscation does address threats that occur because of the ability to reverse
engineer code. Obfuscation tools generally provide the following benefits:
They obscure code paths. This makes it harder for an attacker to crack security
logic.
They mangle the names of internal member variables. This makes it harder to
understand the code.
They encrypt strings. Attackers often attempt to search for specific strings to locate
key sensitive logic. String encryption makes this much harder to do.
A number of third-party obfuscation tools exist for the .NET Framework. One tool, the
Community Edition of the Dotfuscator tool by PreEmptive Solutions, is included with the
Microsoft Visual Studio® .NET 2003 development system. It is also available from
http://www.preemptive.com/dotfuscator. For more information, see the list of obfuscator
tools listed at http://www.gotdotnet.com/team/csharp/tools/default.aspx.
Cryptography
Cryptography is one of the most important tools that you can use to protect data.
Encryption can be used to provide data privacy. Hash algorithms, which produce a fixed
and condensed representation of data, can be used to make data tamperproof, and digital
signatures can be used for authentication purposes.
You should use encryption when you want data to be secure in transit or in storage. Some
encryption algorithms perform better than others while some provide stronger encryption.
Typically, larger encryption key sizes increase security.
Two of the most common mistakes made when using cryptography are developing your
own encryption algorithms and failing to secure your encryption keys. Encryption keys must
be handled with care. An attacker armed with your encryption key can gain access to your
encrypted data.
Key generation
Key storage
Key exchange
Key maintenance
Do not create your own cryptographic implementations. It is extremely unlikely that these
implementations will be as secure as the industry standard algorithms provided by the
platform; that is, the operating system and the .NET Framework. Managed code should use
the algorithms provided by the System.Security.Cryptography namespace for encryption,
decryption, hashing, random number generating, and digital signatures.
Many of the types in this namespace wrap the operating system CryptoAPI, while others
implement algorithms in managed code.
Key Generation
The following recommendations apply when you create encryption keys:
Note that this approach is not for password authentication. To authenticate a user's
password, store a password verifier in the form of a hash value with a salt value. Use
PasswordDeriveBytes to generate keys for password-based encryption.
After the key is used to encrypt the data, clear it from memory but persist the salt and
initialization vector. These values should be protected and are needed to re-generate the
key for decryption.
For more information about storing password hashes with salt, see Chapter 14, "Building
Secure Data Access."
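A sketch of password-based key derivation (the key length is a placeholder and depends on the chosen algorithm):

```csharp
using System.Security.Cryptography;

public class KeyHelper
{
    // Derives an encryption key from a password and a random salt.
    // Persist the salt (and IV); clear the key from memory after use.
    public static byte[] DeriveKey(string password, out byte[] salt)
    {
        salt = new byte[16];
        new RNGCryptoServiceProvider().GetBytes(salt); // cryptographic RNG
        PasswordDeriveBytes pdb = new PasswordDeriveBytes(password, salt);
        return pdb.GetBytes(16); // 128-bit key; size depends on the algorithm
    }
}
```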
Key Storage
Where possible, you should use a platform-provided encryption solution that enables you to
avoid key management in your application. However, at times you need to use encryption
solutions that require you to store keys. Using a secure location to store the key is critical.
Use the following techniques to help prevent key storage vulnerabilities:
You can perform encryption with DPAPI using either the user key or the machine key. By
default, DPAPI uses a user key. This means that only a thread that runs under the security
context of the user account that encrypted the data can decrypt the data. You can instruct
DPAPI to use the machine key by passing the CRYPTPROTECT_LOCAL_MACHINE flag
to the CryptProtectData API. In this event, any user on the current computer can decrypt
the data.
The user key option can be used only if the account used to perform the encryption has a
loaded user profile. If you run code in an environment where the user profile is not loaded,
you cannot easily use the user store and should opt for the machine store instead.
Version 1.1 of the .NET Framework loads the user profile for the ASPNET account used to
run Web applications on Windows 2000. Version 1.0 of the .NET Framework does not load
the profile for this account, which makes using DPAPI with the user key more difficult.
If you use the machine key option, you should use an ACL to secure the encrypted data, for
example in a registry key, and use this approach to limit which users have access to the
encrypted data. For added security, you should also pass an optional entropy value to the
DPAPI functions.
Note An entropy value is an additional random value that can be passed to the DPAPI
CryptProtectData and CryptUnprotectData functions. The same value that is
used to encrypt the data must be used to decrypt the data. The machine key
option means that any user on the computer can decrypt the data. With added
entropy, the user must also know the entropy value.
The drawback with using entropy is that you must manage the entropy value as you would
manage a key. To avoid entropy management issues, use the machine store without entropy
and validate users and code (using code access security) thoroughly before calling the
DPAPI code.
For more information about using DPAPI from ASP.NET Web applications, see "How To:
Create a DPAPI Library," in the How To section of "Building Secure ASP.NET Applications,"
at http://msdn.microsoft.com/library/en-us/dnnetsec/html/SecNetHT07.asp.
When backing up a key, do not store it in plain text; encrypt it using DPAPI or a strong
password, and place it on removable media.
Key Exchange
Some applications require the secure exchange of encryption keys over an insecure
network. You may need to verbally communicate the key or send it through secure email. A
more secure method to exchange a symmetric key is to use public key encryption. With this
approach, you encrypt the symmetric key to be exchanged by using the other party's public
key from a certificate that can be validated. A certificate is considered valid when:
It is being used within the validity period specified in the certificate.
It is of the correct type. For example, an e-mail certificate is not being used as a
Web server certificate.
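A sketch of the public key approach; how the recipient's public key is obtained and validated is outside the snippet:

```csharp
using System.Security.Cryptography;

public class KeyExchange
{
    // Encrypts a symmetric key with the recipient's RSA public key so that
    // only the holder of the matching private key can recover it.
    public static byte[] WrapKey(string recipientPublicKeyXml,
                                 byte[] symmetricKey)
    {
        RSACryptoServiceProvider rsa = new RSACryptoServiceProvider();
        // Public key taken from a validated certificate (assumption).
        rsa.FromXmlString(recipientPublicKeyXml);
        return rsa.Encrypt(symmetricKey, false); // PKCS #1 v1.5 padding
    }
}
```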
Key Maintenance
Security is dependent upon keeping the key secure over a prolonged period of time. Apply
the following recommendations for key maintenance:
Key Compromise
Keys can be compromised in a number of ways. For example, you may lose the key or
discover that an attacker has stolen or discovered the key.
If your private key used for asymmetric encryption and key exchange is compromised, do
not continue to use it, and notify the users of the public key that the key has been
compromised. If you used the key to sign documents, they need to be re-signed.
If the private key of your certificate is compromised, contact the issuing certification
authority to have your certificate placed on a certificate revocation list. Also, change the
way your keys are stored to avoid a future compromise.
To further improve the security of your assemblies, you can use explicit code access
security coding techniques, which are particularly important if your assemblies support
partial trust callers. For more information about using code access security, see Chapter 8,
"Code Access Security in Practice."
Additional Resources
For additional related reading, refer to the following resources:
For more information about using DPAPI from ASP.NET Web applications, see
"How To: Create a DPAPI Library" in the "How To" section of "Microsoft patterns &
practices Volume I, Building Secure ASP.NET Applications: Authentication,
Authorization, and Secure Communication" at
http://msdn.microsoft.com/library/en-us/dnnetsec/html/SecNetHT07.asp.
For more information about secure coding guidelines for the .NET Framework, see
MSDN article, "Secure Coding Guidelines for the .NET Framework," at
http://msdn.microsoft.com/library/default.asp?url=/library/en-
us/dnnetsec/html/seccodeguide.asp.
Michael Howard discusses techniques for writing secure code and shows you how
to apply them in your own applications in his MSDN column, "Code Secure," at
http://msdn.microsoft.com/columns/secure.asp.
Chapter 8: Code Access Security in Practice
In This Chapter
Code access security explained
Using APTCA
Requesting permissions
Code access security delivers three main benefits. By using code access security, you can:
Restrict what your code can do
For example, if you develop an assembly that performs file I/O, you can use code
access security to restrict your code's access to specific files or directories. This
reduces the opportunities for an attacker to coerce your code into accessing
arbitrary files.
Restrict which code can call your code
For example, you may want your assembly to be called only by other code
developed by your organization. One way to do this is to use the public key
component of an assembly's strong name to apply this kind of restriction. This helps
prevent malicious code from calling your code.
Identify code
To successfully administer code access security policy and restrict what code can
do, the code must be identifiable. Code access security uses evidence, such as an
assembly's strong name, its URL, or its computed hash, to identify code
(assemblies).
How to Use This Chapter
This chapter takes up where Chapter 7, "Building Secure Assemblies," left off. It shows
how you can use code access security to further improve the security of your managed
code. To get the most out of this chapter:
Read Chapter 9, "Using Code Access Security with ASP.NET." After you read
this chapter, read Chapter 9 if you are interested specifically in ASP.NET code
access security policy and ASP.NET trust levels.
Code Access Security Explained
To use code access security effectively, you need to know the basics such as the
terminology and how policy is evaluated. For further background information about code
access security, see the "Additional Resources" section at the end of this chapter. If you
are already familiar with code access security, you may want to skip this section and go to
the "APTCA" (AllowPartiallyTrustedCallersAttribute) section later in this chapter.
Code
Evidence
Permissions
Policy
Code groups
Code
All managed code is subject to code access security. When an assembly is loaded, it is
granted a set of code access permissions that determines what resource types it can
access and what types of privileged operations it can perform. The Microsoft .NET
Framework security system uses evidence to authenticate (identify) code to grant
permissions.
Note An assembly is the unit of configuration and trust for code access security. All
code in the same assembly receives the same permission grant and is therefore
equally trusted.
Evidence
Evidence is used by the .NET Framework security system to identify assemblies. Code
access security policy uses evidence to help grant the right permissions to the right
assembly. Location-related evidence includes:
URL. The URL that the assembly was obtained from. This is the codebase URL in
its raw form, for example, http://webserver/vdir/bin/assembly.dll or
file://C:/directory1/directory2/assembly.dll.
Site. The site the assembly was obtained from, for example, http://webserver. The
site is derived from the codebase URL.
Application directory. The base directory for the running application.
Zone. The zone the assembly was obtained from, for example, LocalIntranet or
Internet. The zone is also derived from the codebase URL.
Strong name. This applies to assemblies with a strong name. Strong names are
one way to digitally sign an assembly using a private key.
Publisher. The Authenticode signature; based on the X.509 certificate used to sign
code, representing the development organization.
Hash. The assembly hash is based on the overall content of the assembly and
allows you to detect a particular compilation of a piece of code, independent of
version number. This is useful for detecting when third party assemblies change
(without an updated version number) and you have not tested and authorized their
use for your build.
Permissions
Permissions represent the rights for code to access a secured resource or perform a
privileged operation. The .NET Framework provides code access permissions and code
identity permissions. Code access permissions encapsulate the ability to access a
particular resource or perform a privileged operation. Code identity permissions are used to
restrict access to code, based on an aspect of the calling code's identity such as its strong
name.
Your code is granted permissions by code access security policy that is configured by the
administrator. An assembly can also affect the set of permissions that it is ultimately
granted by using permission requests. Together, code access security policy and
permission requests determine what your code can do. For example, code must be granted
the FileIOPermission to access the file system, and code must be granted the
RegistryPermission to access the registry. For more information about permission
requests, see the "Requesting Permissions" section later in this chapter.
Note Permission sets are used to group permissions together to ease administration.
Demands
If you use a class from the .NET Framework class library to access a resource or perform
another privileged operation, the class issues a permission demand to ensure that your
code, and any code that calls your code, is authorized to access the resource. A permission
demand causes the runtime to walk back up through the call stack (stack frame by stack
frame), examining the permissions of each caller in the stack. If any caller is found not to
have the required permission, a SecurityException is thrown.
Link Demands
A link demand does not perform a full stack walk and only checks the immediate caller, one
stack frame further back in the call stack. As a result, there are additional security risks
associated with using link demands. You need to be particularly sensitive to luring attacks.
Note With a luring attack, malicious code accesses the resources and operations that
are exposed by your assembly by calling your code through a trusted
intermediary assembly.
For more information about how to use link demands correctly, see the "Link Demands"
section later in this chapter.
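The two forms look like this in code; the file path is illustrative:

```csharp
using System.Security.Permissions;

public class SecuredOperations
{
    // Full demand: the runtime walks the entire call stack; every caller
    // must have the demanded FileIOPermission.
    public static void ReadData()
    {
        new FileIOPermission(FileIOPermissionAccess.Read, @"C:\Data").Demand();
        // ... access the file
    }

    // Link demand: only the immediate caller is checked, at JIT compile time.
    [FileIOPermission(SecurityAction.LinkDemand, Read=@"C:\Data")]
    public static void ReadDataFast()
    {
        // ... access the file
    }
}
```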
Code access permission classes support the Assert, Deny, and PermitOnly methods. You
can use these methods to alter the behavior of a permission demand stack walk. They are
referred to as stack walk modifiers.
A call to the Assert method causes the stack walk for a matching permission to stop at the
site of the Assert call. This is most often used to sandbox privileged code. For more
information, see the "Assert and RevertAssert" section later in this chapter.
A call to the Deny method fails any stack walk that reaches it with a matching permission. If
you call some non-trusted code, you can use the Deny method to constrain the capabilities
of the code that you call.
A call to the PermitOnly method fails any stack walk for a permission other than the one
specified. Like the Deny method, it tends to be used infrequently, but it can be used to
constrain the actions of non-trusted code that you may call.
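As a sketch of the Deny modifier described above (the directory path and callee are illustrative, not from the original text):

```csharp
using System.Security;
using System.Security.Permissions;

class DenyExample
{
    static void CallUntrustedCode()
    {
        // Deny all file access beneath C:\SomeDir before calling code
        // that is not fully trusted.
        FileIOPermission denyFileIO = new FileIOPermission(
            FileIOPermissionAccess.AllAccess, @"C:\SomeDir");
        denyFileIO.Deny();
        try
        {
            // Any demand for this permission that walks through this
            // frame now fails, regardless of the callee's own grant.
            // SomeUntrustedLibrary.DoWork();  (hypothetical callee)
        }
        finally
        {
            // Remove the Deny when it is no longer needed.
            CodeAccessPermission.RevertDeny();
        }
    }
}
```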
Policy
Code access security policy is configured by administrators and it determines the
permissions granted to assemblies. Policy can be established at four levels:
Enterprise
Machine
User
Application Domain
Policy settings are maintained in XML configuration files. The first three levels of policy
(Enterprise, Machine, and User) can be configured by using the .NET Framework
Configuration tool, which is located in the Administrative Tools program group, or by using
the Caspol.exe command line utility. ASP.NET application domain level policy must currently be
edited with a text or XML-based editor.
For more information about policy files and locations, see Chapter 19, "Securing Your
ASP.NET Application and Web Services."
Code Groups
Each policy file contains a hierarchical collection of code groups. Code groups are used to
assign permissions to assemblies. A code group consists of two elements:
A membership condition. This is based on evidence, for example, an assembly's
zone of origin or its strong name.
A permission set. The permissions contained in the permission set are granted to
assemblies whose evidence matches the membership condition.
4. The output from security policy evaluation is one or more named permission sets
that define the permission grant for the assembly.
5. Code within the assembly demands an appropriate permission before it accesses a
secured resource or performs a privileged operation.
Note All of the .NET Framework base classes that access resources or perform
privileged operations contain the appropriate permission demands. For example,
the FileStream class demands the FileIOPermission, the Registry class
demands the RegistryPermission, and so on.
6. If the assembly (and its callers) have been granted the demanded permission, the
operation is allowed to proceed. Otherwise, a security exception is generated.
How Is Policy Evaluated?
When evidence is run through the policy engine, the output is a permission set that defines
the set of permissions granted to an assembly. The policy grant is calculated at each level
in the policy hierarchy: Enterprise, Machine, User, and Application Domain. The policy grant
resulting from each level is then combined using an intersection operation to yield the final
policy grant. An intersection is used to ensure that policy lower down in the hierarchy cannot
add permissions that were not granted by a higher level. This prevents an individual user or
application domain from granting additional permissions that are not granted by the
Enterprise administrator.
Figure 8.2 shows how the intersection operation means that the resulting permission grant
is determined by all levels of policy in the policy hierarchy.
In Figure 8.2, you can see that the intersection operation ensures that only those
permissions granted by each level form part of the final permission grant.
If you request optional permissions, the combined optional and minimal permissions are
intersected with the policy grant, to further reduce it. Then, any specifically refused
permissions are taken away from the policy grant. This is summarized by the following
formula where PG is the policy grant from administrator defined security policy and Pmin ,
Popt , and Prefused are permission requests added to the assembly by the developer.
Resulting Permission Grant = (PG ∩ (Pmin ∪ Popt)) − Prefused
For more information about how to use permission requests, their implications, and when to
use them, see the "Requesting Permissions" section later in this chapter.
Note The All Code code group is a special code group that matches all assemblies. It
forms the root of security policy and in itself grants no permissions, because it is
associated with the permission set named Nothing.
Consider the granted permissions based on the security policy shown in Figure 8.3.
Assemblies authored by Company1 and originating from the intranet zone are
granted the permissions defined by the built-in LocalIntranet_Zone permission set
and the custom Comp1PSet permission set.
Exclusive
This indicates that no other sibling code groups should be combined with this code
group. You mark a code group as exclusive by selecting This policy level will
only have the permissions from the permission set associated with this code
group in the .NET Framework Configuration Tool.
Level Final
This indicates that any lower level policies should be ignored. You mark a code
group as Level Final by selecting Policy levels below this level will not be
evaluated in the .NET Framework Configuration Tool. For example, if a matching
code group in the machine policy is marked Level Final, policy settings from the
user policy file are ignored.
Note The application domain level policy (for example, ASP.NET policy for
server-side Web applications) is always evaluated regardless of the Level
Final setting.
APTCA
An assembly that has a strong name cannot be called by a partial trust assembly (an
assembly that is not granted full trust), unless the strong named assembly contains
AllowPartiallyTrustedCallersAttribute (APTCA) as follows:
[assembly: AllowPartiallyTrustedCallersAttribute()]
This is a risk mitigation strategy designed to ensure your code cannot inadvertently be
exposed to partial trust (potentially malicious) code. The common language runtime silently
adds a link demand for the FullTrust permission set to all publicly accessible members on
types in a strong named assembly. If you include APTCA, you suppress this link demand.
If you use APTCA, your code is immediately more vulnerable to attack and, as a result, it is
particularly important to review your code for security vulnerabilities. Use APTCA only
where it is strictly necessary.
In the context of server-side Web applications, use APTCA whenever your assembly needs
to support partial trust callers. This situation can occur in the following circumstances:
Your assembly is to be called by another assembly that has been granted limited
permissions by the code access security administrator.
Your assembly is to be called by another assembly that uses a stack walk modifier
(such as Deny or PermitOnly) to constrain downstream code.
Figure 8.4: The result of partial trust code calling a strong named
assembly
If partial trust code calls a strong named assembly that is not marked with APTCA, the
implicit link demand for full trust fails and a SecurityException is generated, as shown in
Figure 8.4.
To overcome this exception, either the calling code must be granted FullTrust or the
assembly being called must be annotated with APTCA. Note that individual types within an
assembly marked with APTCA might still require full trust callers, because they include an
explicit link demand or regular demand for full trust, as shown in the following examples.
[PermissionSet(SecurityAction.LinkDemand, Name="FullTrust")]
[PermissionSet(SecurityAction.Demand, Unrestricted=true)]
Privileged Code
When you design and build secure assemblies, you must be able to identify privileged code.
This has important implications for code access security. Privileged code is managed code
that accesses secured resources or performs other security-sensitive operations, such as
calling unmanaged code, using serialization, or using reflection. Privileged code is privileged
because code access security must grant it specific permissions before it can function.
Privileged Resources
Privileged resources for which your code requires specific code access security
permissions are shown in Table 8.1.
Privileged Operations
Privileged operations are shown in Table 8.2, together with the associated permissions that
calling code requires.
The best way to communicate the permission requirements of your code is to use assembly
level declarative security attributes to specify minimum permission requirements. These are
normally placed in Assemblyinfo.cs or Assemblyinfo.vb. This allows the administrator or the
consumer of your assembly to check which permissions it requires by using the
Permview.exe tool.
RequestMinimum
If you know up front that your code will run in a full trust environment and will be granted the
full set of unrestricted permissions, using RequestMinimum is less important. However, it
is good practice to specify your assembly's permission requirements.
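A minimal RequestMinimum request, placed in AssemblyInfo.cs, might look like the following sketch (the permissions and path chosen are illustrative):

```csharp
using System.Security.Permissions;

// The assembly fails to load (rather than failing later at run time with
// a SecurityException) if code access security policy does not grant it
// permission to execute and to read the illustrative directory shown.
[assembly: SecurityPermission(SecurityAction.RequestMinimum,
                              Flags = SecurityPermissionFlag.Execution)]
[assembly: FileIOPermission(SecurityAction.RequestMinimum,
                            Read = @"C:\SomeDir")]
```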
RequestOptional
If you use SecurityAction.RequestOptional, no other permissions except those
specified with SecurityAction.RequestMinimum and SecurityAction.RequestOptional
will be granted to your assembly, even if your assembly would otherwise have been granted
additional permissions by code access security policy.
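As a sketch, the following assembly level attribute pair requests one optional permission; under the rule described above, the assembly's grant can never exceed the permissions requested, whatever policy would otherwise allow (the permissions shown are illustrative):

```csharp
using System.Security.Permissions;

// Minimum: the assembly must at least be allowed to execute.
[assembly: SecurityPermission(SecurityAction.RequestMinimum,
                              Flags = SecurityPermissionFlag.Execution)]
// Optional: file read access is used if policy grants it, but the
// assembly still loads without it. No other permission will be granted,
// even if policy would normally be more generous.
[assembly: FileIOPermission(SecurityAction.RequestOptional,
                            Read = @"C:\SomeDir")]
```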
RequestRefuse
SecurityAction.RequestRefuse allows you to make sure that your assembly cannot be
granted permissions by code access security policy that it does not require. For example, if
your assembly does not call unmanaged code, you could use the following attribute to
ensure code access security policy does not grant your assembly the unmanaged code
permission.
[assembly: SecurityPermissionAttribute(SecurityAction.RequestRefuse,
UnmanagedCode=true)]
Do not use RequestOptional or RequestRefuse if you need to directly call a strong named assembly without
AllowPartiallyTrustedCallersAttribute (APTCA) because this prevents you from
being able to call it.
Many strong named .NET Framework assemblies contain types that do not support
partial trust callers and do not include APTCA. For more information, and a list of
assemblies that support partial trust callers, see "Developing Partial Trust Web
Applications," in Chapter 9, "Using Code Access Security with ASP.NET."
If you must call strong named assemblies without APTCA, let the administrators
who install your code know that your code must be granted full trust by code
access security policy to work properly.
If you do not need to access any APTCA assemblies, then add permission requests
to refuse those permissions that you know your assembly does not need. Test your
code early to make sure you really do not require those permissions.
If downstream code needs the permission you have refused, a method between
you and the downstream code needs to assert the permission. Otherwise, a
SecurityException will be generated when the stack walk reaches your code.
Authorizing Code
Code access security allows you to authorize the code that calls your assembly. This
reduces the risk of malicious code successfully calling your code. For example, you can use
identity permissions to restrict calling code based on identity evidence, such as the public
key component of its strong name. You can also use explicit code access permission
demands to ensure that the code that calls your assembly has the necessary permissions
to access the resource or perform the privileged operation that your assembly exposes.
Usually, you do not explicitly demand code access permissions. The .NET Framework
classes do this for you, and a duplicate demand is unnecessary. However, there are
occasions when you need to issue explicit demands, for example, if your code exposes a
custom resource by using unmanaged code or if your code accesses cached data. You can
authorize code in the following ways:
Restrict which code can call your code.
Restrict inheritance.
Consider protecting cached data.
Protect custom resources with custom permissions.
Restrict Which Code Can Call Your Code
You can restrict the set of assemblies that are able to call your code by using an identity-
based link demand, as shown in the following example. The public key value is the hex
representation of the public key component of the calling assembly's strong name.
[StrongNameIdentityPermission(SecurityAction.LinkDemand,
                              PublicKey="00240000048...97e85d098615")]
public void SomeProtectedMethod()
{
}
The above code shows a link demand. This results in the authorization of the immediate
caller. Therefore, your code is potentially open to luring attacks, where a malicious
assembly could potentially access the protected resources or operations provided by your
assembly through a trusted intermediary assembly with the specified strong name.
Depending on the nature of the functionality provided by your class, you may need to
demand another permission to authorize the calling code in addition to using the identity-
based link demand. Alternatively, you can consider using a full demand in conjunction with
the StrongNameIdentityPermission, although this assumes that all code in the call stack
is strong name signed using the same private key.
Run the following command to obtain a hex representation of a public key from an
assembly:
secutil -hex -strongname yourassembly.dll
Restrict Inheritance
If your class is designed as base class, you can restrict which other code is allowed to
derive from your class by using an inheritance demand coupled with a
StrongNameIdentityPermission as shown in the following example. This prevents
inheritance of your class from any assembly that is not signed with the private key
corresponding to the public key in the demand.
// The following inheritance demand ensures that only code within the
// assembly with the specified public key (part of the assembly's strong
// name) can subclass SomeRestrictedClass
[StrongNameIdentityPermission(SecurityAction.InheritanceDemand,
                              PublicKey="00240000048...97e85d098615")]
public class SomeRestrictedClass
{
}
Consider Protecting Cached Data
If you access a resource by using one of the .NET Framework classes, a permission
demand appropriate for the resource type in question is issued by the class. If you
subsequently cache data for performance reasons, you should consider issuing an explicit
code access permission demand prior to accessing the cached data. This ensures the
calling code is authorized to access the specific type of resource. For example, if you read
data from a file and then cache it, and you want to ensure that calling code is authorized,
issue a FileIOPermission demand as shown in the following example.
// The following demand assumes the cached data was originally retrieved from
// C:\SomeDir\SomeFile.dat
new FileIOPermission(FileIOPermissionAccess.Read,
@"C:\SomeDir\SomeFile.dat").Demand();
// Now access the cache and return the data to the caller
If you expose a resource or operation by using unmanaged code, you should sandbox your
wrapper code and consider demanding a custom permission to authorize the calling code.
Full trust callers are granted the custom permission automatically as long as the permission
type implements the IUnrestrictedPermission interface. Partial trust callers will not have
the permission unless it has been specifically granted by code access security policy. This
ensures that non-trusted code cannot call your assembly to access the custom resources
that it exposes. Sandboxing also means that you are not forced to grant the powerful
unmanaged code permission to any code that needs to call your code.
For more information about calling unmanaged code, see the "Unmanaged Code" section
later in this chapter. For an example implementation of a custom permission, see "How To:
Create a Custom Encryption Permission" in the "How To" section of this guide.
Link Demands
A link demand differs from a regular permission demand in that the run-time demands
permissions only from the immediate caller and does not perform a full stack walk. Link
demands are performed at JIT compilation time and can only be specified declaratively.
Carefully consider before you use a link demand, because it is easy to introduce security
vulnerabilities with link demands. If you do use them, consider the following issues:
Luring attacks
Calling methods through interfaces
Structures
Luring Attacks
If you protect code with a link demand, it is vulnerable to luring attacks, where malicious
code gains access to the resource or operation exposed by your code through a trusted
intermediary as shown in Figure 8.5.
In Figure 8.5, methods in assembly X, which access a secure resource, are protected with a
link demand for a specific public key (using a StrongNameIdentityPermission).
Assemblies A, B, and C are signed with the private key that corresponds to the public key
that assembly X trusts, and so these assemblies can call assembly X. Assemblies A, B,
and C are subject to a luring attack if they do not check their callers for specific evidence
before making calls to assembly X. For example, assembly D that is not signed with the
same private key cannot call assembly X directly. It could, however, access assembly X
through the trusted assembly A, if A does not check its callers, either with another link
demand or through a full demand.
Only use link demands in an assembly when you trust the assembly's callers not to expose
its functionality further (for example, when the caller is an application, not a library) or when
you know it is safe just to verify the immediate caller's identity with an identity permission
demand.
If you call a link demand protected method, only your code will be checked by the link
demand. In this situation, you should make sure your code takes adequate measures to
authorize its callers, for example, by demanding a permission.
Calling Methods Through Interfaces
If a link demand is placed on a class's implementation of a method but not on the interface
that declares it, the demand can be bypassed by calling the method through the interface.
With the following code, the caller is subject to the link demand:
MyImplementation t = new MyImplementation();
t.Method1();
With the following code, the caller is not subject to the link demand:
IMyInterface i = new MyImplementation();
i.Method1();
Structures
Link demands do not protect the implicit default constructors of structures, because the
compiler does not generate a default constructor for a structure. For example:
[SecurityPermission(SecurityAction.LinkDemand,
Flags=SecurityPermissionFlag.ControlPrincipal)]
public struct SomeStruct
{
// This explicit constructor is protected by the link demand
public SomeStruct(int i)
{
field = i;
}
private int field;
}
The following two lines of code both result in a new structure with the field initialized to zero.
However, only the first line that uses the explicit constructor is subject to a link demand.
SomeStruct s = new SomeStruct(0);
SomeStruct s = new SomeStruct();
The second line is not subject to a link demand because a default constructor is not
generated. If this were a class instead of a structure, the compiler would generate a default
constructor annotated with the specified link demand.
Assert and RevertAssert
Asserts are most often used to sandbox privileged code. If you develop code that calls
Assert, you need to ensure that there are alternate security measures in place to authorize
the calling code. The following recommendations help you to minimize the risks.
Often, if your assembly is exposing functionality that is not provided by the .NET
Framework class library, such as calling the Data Protection API (DPAPI), you need to
develop a custom permission and demand the custom permission to authorize callers. For
example, you might develop a custom Encryption permission to authorize callers to a
managed DPAPI wrapper assembly. Demanding this permission and then asserting the
unmanaged code permission is an effective way to authorize calling code.
For more information about this approach and about developing custom permissions, see
"How To: Create a Custom Encryption Permission" in the "How To" section of this guide.
A common practice is to place the call to RevertAssert in a finally block to ensure that it
always gets called even in the event of an exception.
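The pattern might be sketched as follows, assuming a sandboxed wrapper that asserts the unmanaged code permission around a P/Invoke call:

```csharp
using System.Security;
using System.Security.Permissions;

public class NativeWrapper
{
    public static void CallNativeHelper()
    {
        // Stop the stack walk for the unmanaged code permission here,
        // so that authorized partial trust callers can use this wrapper.
        new SecurityPermission(
            SecurityPermissionFlag.UnmanagedCode).Assert();
        try
        {
            // ... call the P/Invoke method here ...
        }
        finally
        {
            // Always remove the assert, even if the native call throws.
            CodeAccessPermission.RevertAssert();
        }
    }
}
```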
Constraining Code
Constraining code and building least privileged code is analogous to using the principle of
least privilege when you configure user or service accounts. By restricting the code access
security permissions available to your code, you minimize scope for the malicious use of
your code.
There are two ways to constrain code to restrict which resources it can access and restrict
which other privileged operations it can perform:
An administrator can configure code access security policy to grant the assembly a
restricted set of permissions.
A developer can restrict permissions in code, by using declarative permission
requests (RequestOptional or RequestRefuse) or stack walk modifiers (Deny or
PermitOnly).
The following sections show you how to use code access security to constrain various
types of resource access including file I/O, event log, registry, data access, directory
services, environment variables, Web services, and sockets.
File I/O
To be able to perform file I/O, your assembly must be granted the FileIOPermission by
code access security policy. If your code is granted the unrestricted FileIOPermission, it
can access files anywhere on the file system, subject to Windows security. A restricted
FileIOPermission can be used to constrain an assembly's ability to perform file I/O, for
example, by specifying allowed access rights (read, read/write, and so on.)
Configuring your application for Medium trust is one way to constrain file I/O, although this
also constrains your application's ability to access other resource types. There are two
other ways you can restrict your code's file I/O capabilities:
You can use a stack walk modifier, such as Deny or PermitOnly, in code to
constrain file I/O to specific files or directories.
An administrator can configure code access security policy to grant your assembly
a restricted FileIOPermission.
To avoid hard coding your application's directory hierarchy, you can use imperative security
syntax and call HttpContext.Current.Request.MapPath(".") to retrieve your Web
application's directory at runtime. You must reference the System.Web assembly and add
the corresponding using statement as shown in the following example.
using System.Web;
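A sketch of the imperative technique described above (the surrounding method is illustrative):

```csharp
using System.Security;
using System.Security.Permissions;
using System.Web;

public class FileIOConstraint
{
    public static void RestrictFileIOToApplicationDirectory()
    {
        // Compute the Web application's physical directory at run time
        // instead of hard coding it.
        string appDir = HttpContext.Current.Request.MapPath(".");

        // Allow file I/O only beneath the application directory; any
        // demand for access elsewhere now fails at this stack frame.
        FileIOPermission f = new FileIOPermission(
            FileIOPermissionAccess.AllAccess, appDir);
        f.PermitOnly();

        // ... perform constrained file I/O here ...

        CodeAccessPermission.RevertPermitOnly();
    }
}
```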
Note For a Windows application, you can replace the call to MapPath with a call to
Directory.GetCurrentDirectory to obtain the application's current directory.
For example, the administrator can configure Enterprise or Machine level code access
security policy to grant a restricted FileIOPermission to your assembly. This is most easily
done if your assembly has a strong name, because the administrator can use this
cryptographically strong evidence when configuring policy. For assemblies that are not
strong named, an alternative form of evidence needs to be used. For more information
about how to configure code access security to restrict the file I/O capability of an
assembly, see "How To: Configure Code Access Security Policy to Constrain an Assembly,
" in the "How To" section of this guide.
Requesting FileIOPermission
To help the administrator, if you know your assembly's precise file I/O requirements at build
time (for example, you know directory names), declare your assembly's FileIOPermission
requirements by using a declarative permission request as shown in the following example.
[assembly: FileIOPermission(SecurityAction.RequestMinimum, Read=@"C:\SomeDir")]
The administrator can see this attribute by using Permview.exe. The additional advantage
of using SecurityAction.RequestMinimum is that the assembly fails to load if it is not
granted sufficient permissions. This is preferable to a runtime security exception.
Event Log
To be able to access the event log, your assembly must be granted the
EventLogPermission by code access security policy. If it is not, for example, because it is
running within the context of a medium trust Web application, you need to sandbox your
event logging code. For more information about sandboxing access to the event log, see
Chapter 9, "Using Code Access Security with ASP.NET."
The following attribute ensures that the WriteToLog method and any methods it calls can
only access the local computer's event log and cannot delete event logs or event sources.
These operations are not permitted by EventLogPermissionAccess.Instrument.
[EventLogPermission(SecurityAction.PermitOnly,
                    MachineName=".",
                    PermissionAccess=EventLogPermissionAccess.Instrument)]
public static void WriteToLog( string message )
Requesting EventLogPermission
To document the permission requirements of your code, and to ensure that your assembly
cannot load if it is granted insufficient event log access by code access security policy, add
an assembly level EventLogPermissionAttribute with SecurityAction.RequestMinimum
as shown in the following example.
// This attribute indicates that your code requires the ability to access
// event logs on the local machine only (".") and needs instrumentation access,
// which means it can read or write to existing logs and create new event
// sources and event logs
[assembly: EventLogPermissionAttribute(SecurityAction.RequestMinimum,
                                       MachineName=".",
                                       PermissionAccess=
                                         EventLogPermissionAccess.Instrument)]
Registry
Code that accesses the registry by using the Microsoft.Win32.Registry class must be
granted the RegistryPermission by code access security policy. This permission type can
be used to constrain registry access to specific keys and sub keys, and can also control
code's ability to read, write, or create registry keys and named values.
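For example, an imperative PermitOnly might constrain downstream code to reading a single key. A sketch (the key path and method are illustrative):

```csharp
using System.Security.Permissions;

public class RegistryConstraint
{
    public static void ReadConfiguredValue()
    {
        // Constrain this method and everything it calls to read-only
        // access to the illustrative key shown; demands for any other
        // registry access fail at this stack frame.
        RegistryPermission regPerm = new RegistryPermission(
            RegistryPermissionAccess.Read,
            @"HKEY_LOCAL_MACHINE\SOFTWARE\YourApp");
        regPerm.PermitOnly();

        // ... read the registry value here ...
    }
}
```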
Requesting RegistryPermission
To document the permission requirements of your code, and to ensure your assembly
cannot load if it is granted insufficient registry access from code access security policy, add
an assembly level RegistryPermissionAttribute with SecurityAction.RequestMinimum
as shown in the following example.
[assembly: RegistryPermissionAttribute(SecurityAction.RequestMinimum,
                                       Read=@"HKEY_LOCAL_MACHINE\SOFTWARE")]
Data Access
The ADO.NET SQL Server data provider supports partial trust callers. The other data
providers including the OLE DB, Oracle, and ODBC providers currently require full trust
callers.
If you connect to SQL Server using the SQL Server data provider, your data access code
requires the SqlClientPermission. You can use SqlClientPermission to restrict the
allowable range of name/value pairs that can be used on a connection string passed to the
SqlConnection object. In the following code, the CheckProductStockLevel method has
been enhanced with an additional security check to ensure that blank passwords cannot be
used in the connection string. If the code retrieves a connection string with a blank
password, a SecurityException is thrown.
[SqlClientPermissionAttribute(SecurityAction.PermitOnly,
AllowBlankPassword=false)]
public static int CheckProductStockLevel(string productCode)
{
// Retrieve the connection string from the registry
string connectionString = GetConnectionString();
. . .
}
For more information about how to sandbox data access code to allow the OLE DB and
other data providers to be used from partial trust Web applications, see Chapter 9, "Using
Code Access Security with ASP.NET."
Directory Services
Currently, code that uses classes from the System.DirectoryServices namespace to
access directory services such as Active Directory must be granted full trust. However, you
can use the DirectoryServicesPermission to constrain the type of access and the
particular directory services to which code can connect.
Requesting DirectoryServicesPermission
To document the permission requirements of your code, and to ensure your assembly
cannot load if it is granted insufficient directory services access from code access security
policy, add an assembly level DirectoryServicesPermissionAttribute with
SecurityAction.RequestMinimum as shown in the following example.
[assembly: DirectoryServicesPermissionAttribute(SecurityAction.RequestMinimum,
              Path="LDAP://rootDSE",
              PermissionAccess=DirectoryServicesPermissionAccess.Browse)]
Environment Variables
Code that needs to read or write environment variables using the System.Environment
class must be granted the EnvironmentPermission by code access security policy. This
permission type can be used to constrain access to specific named environment variables.
Requesting EnvironmentPermission
To document the permission requirements of your code, and to ensure your assembly
cannot load if it is granted insufficient environment variable access from code access
security policy, add an assembly level EnvironmentPermissionAttribute with
SecurityAction.RequestMinimum as shown in the following code.
[assembly: EnvironmentPermissionAttribute(SecurityAction.RequestMinimum,
                                          Read="username"),
           EnvironmentPermissionAttribute(SecurityAction.RequestMinimum,
                                          Read="userdomain"),
           EnvironmentPermissionAttribute(SecurityAction.RequestMinimum,
                                          Read="temp")]
Web Services
Code that calls Web services must be granted the WebPermission by code access
security policy. The WebPermission actually constrains access to any HTTP Internet-
based resources.
The following example shows how to use the Connect attribute to restrict connections to a
specific Web service.
[WebPermissionAttribute(SecurityAction.PermitOnly,
Connect=@"http://somehost/order.asmx")]
Sockets and DNS
Code that uses sockets directly by using the System.Net.Sockets.Socket class must be
granted the SocketPermission by code access security policy. In addition, if your code
uses DNS to map host names to IP addresses, it requires the DnsPermission.
You can use SocketPermission to constrain access to specific ports on specific hosts. You
can also restrict whether the socket can be used to accept connections or initiate outbound
connections, and you can restrict the transport protocol, for example, Transmission Control
Protocol (TCP) or User Datagram Protocol (UDP).
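As a sketch, the following PermitOnly restricts downstream code to outbound TCP connections to port 80 on a single (illustrative) host:

```csharp
using System.Net;
using System.Security.Permissions;

public class SocketConstraint
{
    public static void ConstrainSocketUse()
    {
        // Only outbound TCP connections to somehost:80 are permitted
        // from this point on; Accept, other hosts, and other ports are
        // refused when a SocketPermission demand walks this frame.
        SocketPermission sp = new SocketPermission(
            NetworkAccess.Connect, TransportType.Tcp, "somehost", 80);
        sp.PermitOnly();

        // ... create and use the socket here ...
    }
}
```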
The following example shows socket code that connects to a remote host on port 80, sends
a message, and returns the response.
private static string SendAndReceive(string hostname, string message,
                                     int readSize)
{
    IPAddress serverAddress = Dns.Resolve(hostname).AddressList[0];
    IPEndPoint serverEndPoint = new IPEndPoint(serverAddress, 80);
    Socket socket = new Socket(AddressFamily.InterNetwork,
                               SocketType.Stream, ProtocolType.Tcp);
    byte[] bytesReceived = new byte[readSize];
    byte[] sendBytes = Encoding.ASCII.GetBytes(message);
    socket.Connect(serverEndPoint);
    socket.Send(sendBytes);
    int bytesReceivedSize = socket.Receive(bytesReceived, readSize, 0);
    socket.Close();
    if(-1 != bytesReceivedSize)
    {
        return Encoding.ASCII.GetString(bytesReceived, 0, bytesReceivedSize);
    }
    return "";
}
The following guidelines for calling unmanaged code build upon those introduced in Chapter
7, "Building Secure Assemblies."
Safe. This identifies code that poses no possible security threat. It is harmless for
any code, malicious or otherwise, to call. An example is code that returns the
current processor tick count. Safe classes can be annotated with the
SuppressUnmanagedCodeSecurity attribute, which turns off the code access security
permission demand for unmanaged code.
[SuppressUnmanagedCodeSecurity]
class SafeNativeMethods {
[DllImport("user32")]
internal static extern void MessageBox(string text);
}
Native. This is potentially dangerous unmanaged code, but code that is protected
with a full stack walking demand for the unmanaged code permission. These are
implicitly made by the interop layer unless they have been suppressed with the
SuppressUnmanagedCodeSecurity attribute.
class NativeMethods {
[DllImport("user32")]
internal static extern void FormatDrive(string driveLetter);
}
Unsafe. This is potentially dangerous unmanaged code that has the security
demand for the unmanaged code permission declaratively suppressed. These
methods are potentially dangerous. Any caller of these methods must do a full
security review to ensure that the usage is safe and protected because no stack
walk is performed.
[SuppressUnmanagedCodeSecurity]
class UnsafeNativeMethods {
[DllImport("user32")]
internal static extern void CreateFile(string fileName);
}
This allows custom code access security policy to be easily applied to the
assembly. For more information, see the "Strong Names" section in Chapter 7,
"Building Secure Assemblies."
3. Request the unmanaged code permission (as described in the preceding section.)
You typically need to use a custom permission that represents the unmanaged
resource being exposed by your assembly. For example:
(new EncryptionPermission(EncryptionPermissionFlag.Encrypt,
                          StorePermissionFlag.Machine)).Demand();
In this case, you can use the SuppressUnmanagedCodeSecurity attribute on the P/Invoke
method declaration. This causes the full demand for the unmanaged permission to be
replaced with a link demand which only occurs once at JIT compilation time.
In common with the use of link demands, your code is now vulnerable to luring attacks. To
mitigate the risk, you should only suppress the unmanaged code permission demand if your
assembly takes adequate precautions to ensure it cannot be coerced by malicious code to
perform unwanted operations. An example of a suitable countermeasure is if your assembly
demands a custom permission that more closely reflects the operation being performed by
the unmanaged code.
For COM interop calls, the attribute must be used at the interface level, as shown in the
following example.
[SuppressUnmanagedCodeSecurity]
public interface IComInterface
{
}
Delegates
There is no way of knowing in advance what a delegate method is going to do when you
invoke it. If your assembly supports partial trust callers, you need to take extra precautions
when you invoke a delegate. You can use code access security to further improve security.
For more guidelines about using delegates securely, see the "Delegates" section in Chapter
7, "Building Secure Assemblies."
Serialization
Code that supports serialization must be granted a SecurityPermission with its Flags
property set to SerializationFormatter. If you develop classes that support serialization
and your code supports partial trust callers, you should consider using additional permission
demands to place restrictions on which code can serialize your object's state.
Restricting Serialization
If you create a class that implements the ISerializable interface, which allows your object
to be serialized, you can add a permission demand to your ISerializable.GetObjectData
implementation to authorize the code that is attempting to serialize your object. This is
particularly important if your code supports partial trust callers.
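A sketch of such a demand inside GetObjectData (the class, field, and choice of demanded permission are illustrative):

```csharp
using System;
using System.Runtime.Serialization;
using System.Security.Permissions;

[Serializable]
public class SensitiveItem : ISerializable
{
    private string data = "secret";

    public void GetObjectData(SerializationInfo info,
                              StreamingContext context)
    {
        // Authorize the code attempting to serialize this object's
        // state before handing it any data. The permission demanded
        // here is illustrative; an identity-based demand could be
        // used instead to restrict serialization to known callers.
        new SecurityPermission(
            SecurityPermissionFlag.SerializationFormatter).Demand();
        info.AddValue("data", data);
    }
}
```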
For more guidelines about using serialization securely, see the "Serialization" section in
Chapter 7, "Building Secure Assemblies."
Summary
Code access security allows you to restrict what your code can do, restrict which code can
call your code, and identify code. In full trust environments where your code and the code
that calls you have the unrestricted set of all permissions, code access security is of less
significance.
If your code supports partial trust callers, the security risks are that much greater. In partial
trust scenarios, code access security enables you to mitigate some of the additional risks
and allows you to constrain privileged code.
Additional Resources
For more information, see the following resources:
"Security in .NET: The Security Infrastructure of the CLR Provides Evidence, Policy,
Permissions, and Enforcement Services" by Don Box, MSDN Magazine, September
2002, at http://msdn.microsoft.com/msdnmag.
"Security in .NET: Enforce Code Access Rights with the Common Language
Runtime" by Keith Brown, MSDN Magazine, February 2001, at
http://msdn.microsoft.com/msdnmag.
With Microsoft .NET Framework version 1.1, administrators can configure policy for
ASP.NET Web applications and Web services, which might consist of multiple assemblies.
They can also grant code access security permissions to allow the application to access
specific resource types and to perform specific privileged operations.
Note: Web applications and Web services built using .NET Framework version 1.0
always run with unrestricted code access permissions. This is not configurable.
Using code access security with Web applications helps you provide application isolation in
hosted environments where multiple Web applications run on the same Web server. Internet
service providers (ISPs) that run multiple applications from different companies can use
code access security to:
Isolate applications from one another. For example, code access security can be
used to ensure that one Web application cannot write to another Web application's
directories.
Restrict each application's access to shared system resources. For example, code
access security can restrict access to the file system, registry, event logs, and
network resources, as well as other system resources.
Code access security is one mechanism that can be used to help provide application
isolation. Microsoft Windows Server™ 2003 and Internet Information Services (IIS) 6.0 also
provide process isolation for Web applications. Process isolation combined with code
access security provides the recommended model for application isolation. For more
information, see Chapter 20, "Hosting Multiple Web Applications."
How to Use This Chapter
This chapter does not cover the fundamentals of code access security. A certain amount of
prerequisite knowledge is assumed, although key concepts are reiterated where
appropriate. For more information about how code access security works, see Chapter 8,
"Code Access Security in Practice."
The current chapter focuses on ASP.NET code access security policy configuration and
shows you how to overcome some of the main hurdles that you might encounter when you
develop partial-trust Web applications.
Resource Access
All resource access from ASP.NET applications and managed code in general is subject to
the following two security layers:
Code access security. This security layer verifies that all of the code in the current
call stack, leading up to and including the resource access code, is authorized to
access the resource. An administrator uses code access security policy to grant
permissions to assemblies. The permissions determine precisely which resource
types the assembly can access. Numerous permission types correspond to the
different resource types that can be accessed. These types include the file system,
registry, event log, directory services, SQL Server, OLE DB data sources, and
network resources.
For a full list of code access permissions, see Chapter 8, "Code Access Security in
Practice."
Operating System/Platform Security. This security layer verifies that the security
context of the requesting thread can access the resource. If the thread is
impersonating, then the thread impersonation token is used. If not, then the process
token is used and is compared against the access control list (ACL) that is attached
to the resource to determine whether or not the requested operation can be
performed and the resource can be accessed.
Both checks must succeed for the resource to be successfully accessed. All of the
resource types that are exposed by the .NET Framework classes are protected with code
access permissions. Figure 9.1 shows a range of common resource types that are
accessed by Web applications, as well as the associated code access permission that is
required for the access attempt to succeed.
Figure 9.1: Common resource types accessed from ASP.NET Web applications and
associated permission types
Full Trust and Partial Trust
By default, Web applications run with full trust. Full-trust applications are granted
unrestricted code access permissions by code access security policy. These permissions
include built-in system and custom permissions. This means that code access security will
not prevent your application from accessing any of the secured resource types that Figure
9.1 shows. The success or failure of the resource access attempt is determined purely by
operating system-level security. Web applications that run with full trust include all ASP.NET
applications built using .NET Framework version 1.0. By default, .NET Framework version
1.1 applications run with full trust, but the trust level can be configured using the <trust>
element, which is described later in this chapter.
If an application is configured with a trust level other than "Full," it is referred to as a partial-
trust application. Partial-trust applications have restricted permissions, which limit their
ability to access secured resources.
Important: Web applications built on .NET Framework version 1.0 always run with full
trust because the types in System.Web demand full-trust callers.
Configuring Code Access Security in ASP.NET
By default, Web applications run with full trust and have unrestricted permissions. To modify
code access security trust levels in ASP.NET, you have to set a switch in Machine.config or
Web.config and configure the application as a partial-trust application.
With the trust level set to "Full," code access security is effectively disabled because
permission demands do not stand in the way of resource access attempts. This is the only
option for ASP.NET Web applications built on .NET Framework version 1.0. As you go
through the list from "Full" to "Minimal," each level takes away more permissions, which
further restricts your application's ability to access secured resources and perform
privileged operations. Each level gives greater degrees of application isolation. Table 9.1
shows the predefined trust levels and indicates the major restrictions in comparison to the
previous level.
If a Web server administrator wants to use code access security to ensure application
isolation and restrict access to system level resources, the administrator must be able to
define security policy at the machine level and prevent individual applications from overriding
it.
Application service providers or anyone responsible for running multiple Web applications on
the same server should lock the trust level for all Web applications. To do this, enclose the
<trust> element in Machine.config within a <location> tag, and set the allowOverride
attribute to false, as shown in the following example.
<location allowOverride="false">
<system.web>
<!-- level="[Full|High|Medium|Low|Minimal]" -->
<trust level="Medium" originUrl=""/>
</system.web>
</location>
You can also use a path attribute on the <location> element to apply a configuration to a
specific site or Web application that cannot be overridden. For more information about the
<location> element, see Chapter 19, "Securing Your ASP.NET Application and Web
Services."
ASP.NET Policy Files
Each trust level is mapped to an individual XML policy file and the policy file lists the set of
permissions granted by each trust level. Policy files are located in the following directory:
%windir%\Microsoft.NET\Framework\{version}\CONFIG
Trust levels are mapped to policy files by the <trustLevel> elements in Machine.config,
which are located just above the <trust> element, as shown in the following example.
<location allowOverride="true">
<system.web>
<securityPolicy>
<trustLevel name="Full" policyFile="internal"/>
<trustLevel name="High" policyFile="web_hightrust.config"/>
<trustLevel name="Medium" policyFile="web_mediumtrust.config"/>
<trustLevel name="Low" policyFile="web_lowtrust.config"/>
<trustLevel name="Minimal" policyFile="web_minimaltrust.config"/>
</securityPolicy>
<!-- level="[Full|High|Medium|Low|Minimal]" -->
<trust level="Full" originUrl=""/>
</system.web>
</location>
Note: No policy file exists for the full-trust level. This is a special case that simply
indicates the unrestricted set of all permissions.
ASP.NET policy is fully configurable. In addition to the default policy levels, administrators
can create custom permission files and configure them using the <trust> element, which is
described later in this chapter. The policy file associated with the custom level must also be
defined by a <trustLevel> element in Machine.config.
ASP.NET Policy
Code access security policy is hierarchical and is administered at multiple levels. Policy can
be created for the enterprise, machine, user, and application domain levels. ASP.NET code
access security policy is an example of application domain-level policy.
Settings in a separate XML configuration file define the policy for each level. Enterprise,
machine, and user policy can be configured using the Microsoft .NET Framework
configuration tool, but ASP.NET policy files must be edited manually using an XML or text
editor.
The individual ASP.NET trust-level policy files say which permissions might be granted to
applications configured at a particular trust level. The actual permissions that are granted to
an ASP.NET application are determined by intersecting the permission grants from all policy
levels, including enterprise, machine, user, and ASP.NET (application domain) level policy.
Because policy is evaluated from enterprise level down to ASP.NET application level,
permissions can only be taken away. You cannot add a permission at the ASP.NET level
without a higher level first granting the permission. This approach ensures that the
enterprise administrator always has the final say and that malicious code that runs in an
application domain cannot request and be granted more permissions than an administrator
configures.
For more information about policy evaluation, see Chapter 8, "Code Access Security in
Practice."
Note: You will also see the "FullTrust" and "Nothing" permission sets. These sets
contain no permission elements because "FullTrust" implies all permissions and
"Nothing" contains no permissions.
The following fragment shows the major elements of an ASP.NET policy file:
<configuration>
<mscorlib>
<security>
<policy>
<PolicyLevel version="1">
<SecurityClasses>
... list of security classes, permission types
and code group types ...
</SecurityClasses>
<NamedPermissionSets>
<PermissionSet Name="FullTrust" ... />
<PermissionSet Name="Nothing" .../>
<PermissionSet Name="ASP.NET" ...
... This is the interesting part ...
... List of individual permissions...
<IPermission
class="AspNetHostingPermission"
version="1"
Level="High" />
<IPermission
class="DnsPermission"
version="1"
Unrestricted="true" />
...Continued list of permissions...
</PermissionSet>
</PolicyLevel>
</policy>
</security>
</mscorlib>
</configuration>
Notice that each permission is defined by an <IPermission> element, which defines the
permission type name, version, and whether or not it is in the unrestricted state.
In its unrestricted state, the FileIOPermission allows any type of access to any area on
the file system (of course, operating system security still applies). The following permission
demand requires that the calling code be granted the unrestricted FileIOPermission:
(new FileIOPermission(PermissionState.Unrestricted)).Demand();
Substitution Parameters
If you edit one of the ASP.NET policy files, you will notice that some of the permission
elements contain substitution parameters ($AppDirUrl$, $CodeGen$, and $Gac$). These
parameters allow you to configure permissions to assemblies that are part of your Web
application, but are loaded from different locations. Each substitution parameter is replaced
with an actual value at security policy evaluation time, which occurs when your Web
application assembly is loaded for the first time. Your Web application might consist of the
following three assembly types:
Private assemblies that are compiled at build time and deployed in the application's
bin directory
Dynamically compiled assemblies that are generated at run time, for example from
.aspx pages
Shared assemblies that are loaded from the computer's global assembly cache
Each of these assembly types has an associated substitution parameter, which Table 9.2
summarizes.
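For example, the medium-trust policy file uses $AppDirUrl$ in a code-group membership condition so that assemblies loaded from the application's directory receive the ASP.NET permission set. The following fragment illustrates the pattern:

```xml
<CodeGroup
    class="UnionCodeGroup"
    version="1"
    PermissionSetName="ASP.NET">
  <IMembershipCondition
      class="UrlMembershipCondition"
      version="1"
      Url="$AppDirUrl$/*"/>
</CodeGroup>
```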
By configuring a Web application or Web service for partial trust, you can restrict the
application's ability to access crucial system resources or resources that belong to other
Web applications. By granting only the permissions that the application requires and no
more, you can build least privileged Web applications and limit damage potential should the
Web application be compromised by a code injection attack.
Your application is unable to call strong named assemblies that are not annotated
with AllowPartiallyTrustedCallersAttribute (APTCA). Without APTCA, strong
named assemblies issue a demand for full trust, which will fail when the demand
reaches your partial-trust Web application. Many system assemblies only support
full-trust callers. The following list shows which .NET Framework assemblies
support partial-trust callers and can be called directly by partial-trust Web
applications without necessitating sandboxed wrapper assemblies.
The following system assemblies have APTCA applied, which means that they can
be called by partial-trust Web applications or any partially trusted code:
System.Windows.Forms.dll
System.Drawing.dll
System.dll
Mscorlib.dll
IEExecRemote.dll
Accessibility.dll
Microsoft.VisualBasic.dll
System.XML.dll
System.Web.dll
System.Web.Services.dll
System.Data.dll
If your partial-trust application fails because it calls a strong named assembly that
is not marked with APTCA, a generic SecurityException is generated. In this
circumstance, the exception contains no additional information to indicate that the
call failed because of a failed demand for full trust.
Permission demands might start to fail. The configured trust level might not grant
the necessary permission for your application to access a specific resource type.
The following are some common scenarios where this could prove problematic:
Your application uses the event log or registry. Partial trust Web
applications do not have the necessary permissions to access these
system resources. If your code does so, a SecurityException will be
generated.
Applications configured for high, medium, low, or minimal trust will be unable to call
unmanaged code or serviced components, write to the event log, access Message
Queuing queues, or access OLE DB data sources.
Applications configured for high trust have unrestricted access to the file system.
Applications configured for medium trust have restricted file system access. They
can only access files in their own application directory hierarchy.
Applications configured for low or minimal trust cannot access SQL Server
databases.
Table 9.3 identifies the permissions that each ASP.NET trust level grants. The full level is
omitted from the table because it grants all of the permissions in their unrestricted state.
IsolatedStorageFilePermission
    High: Unrestricted
    Medium: AssemblyIsolationByUser, unrestricted UserQuota
    Low: AssemblyIsolationByUser, 1 MB UserQuota (can vary with site)
OleDbClientPermission
    Not granted at any partial-trust level
PrintingPermission
    High, Medium: DefaultPrinting
ReflectionPermission
    High: ReflectionEmit
RegistryPermission
    High: Unrestricted
SecurityPermission
    High, Medium: Assertion, Execution, ControlThread, ControlPrincipal,
    RemotingConfiguration
SocketPermission
    High, Medium: Unrestricted
SqlClientPermission
    High, Medium: Unrestricted
WebPermission
    High: Unrestricted
    Medium: Connect to $OriginHost$
Approaches for Partial Trust Web Applications
If you develop a partial-trust application or enable an existing application to run at a partial-
trust level, and you run into problems because your application is trying to access resources
for which the relevant permissions have not been granted, you can use two basic
approaches:
Customize policy. Modify policy to grant the required permissions to your application.
This might not be possible, for example in hosting environments where policy
restrictions are rigid.
Sandbox privileged code. Place resource access code in a wrapper assembly, grant the
wrapper assembly full trust (not the Web application), and sandbox the permission
requirements of privileged code.
The right approach depends on what the problem is. If the problem is related to the fact
that you are trying to call a system assembly that does not contain
AllowPartiallyTrustedCallersAttribute, the problem becomes how to give a piece of code
full trust. In this scenario, you should use the sandboxing approach and grant the sandboxed
wrapper assembly full trust.
Note: Customizing policy is the easier of the two approaches because it does not
require any development effort.
Customize Policy
If your Web application contains code that requires more permissions than are granted by a
particular ASP.NET trust level, the easiest option is customizing a policy file to grant the
additional code access security permission to your Web application. You can either modify
an existing policy file and grant additional permissions or create a new one based on an
existing policy file.
If you modify one of the built-in policy files, for example, the medium-trust
Note Web_mediumtrust.config policy file, this affects all applications that are
configured to run with medium trust.
1. Copy one of the existing policy files, for example Web_mediumtrust.config, to
create a custom policy file in the same directory.
2. Add the required permission to the ASP.NET permission set in the new policy file
or, alternatively, modify an existing permission to grant a less restrictive permission.
3. Define a new trust level for the custom policy file by adding a <trustLevel>
element to the <securityPolicy> section of Machine.config.
4. Configure your application to run with the new trust level by configuring the
<trust> element in the application's Web.config file, as follows:
<system.web>
<trust level="Custom" originUrl=""/>
</system.web>
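As noted earlier, the custom level must also be mapped to its policy file by a <trustLevel> element in Machine.config. The following sketch assumes the custom policy file is named web_customtrust.config:

```xml
<securityPolicy>
  <trustLevel name="Full" policyFile="internal"/>
  <trustLevel name="High" policyFile="web_hightrust.config"/>
  <trustLevel name="Medium" policyFile="web_mediumtrust.config"/>
  <trustLevel name="Low" policyFile="web_lowtrust.config"/>
  <trustLevel name="Minimal" policyFile="web_minimaltrust.config"/>
  <trustLevel name="Custom" policyFile="web_customtrust.config"/>
</securityPolicy>
```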
Sandbox Privileged Code
Another approach that does not require an update to ASP.NET code access security policy
is wrapping your resource access code in its own wrapper assembly and configuring
machine-level code access security policy to grant the specific assembly the appropriate
permission. Then you can sandbox the higher-privileged code using the
CodeAccessPermission.Assert method so you do not have to change the overall
permission grant of the Web application. The Assert method prevents the security demand
issued by the resource access code from propagating back up the call stack beyond the
boundaries of the wrapper assembly.
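The following minimal sketch (the FileWrapper class, method name, and use of FileIOPermission are illustrative) shows the assert-and-revert pattern; where possible, assert only the specific permission the operation needs rather than full trust:

```csharp
using System.IO;
using System.Security;
using System.Security.Permissions;

public class FileWrapper
{
    public static string ReadAppFile(string path)
    {
        // Assert stops the FileIOPermission demand from propagating
        // up the call stack to the partial-trust Web application.
        new FileIOPermission(FileIOPermissionAccess.Read, path).Assert();
        try
        {
            using (StreamReader reader = new StreamReader(path))
            {
                return reader.ReadToEnd();
            }
        }
        finally
        {
            // Always reverse the effect of the assert.
            CodeAccessPermission.RevertAssert();
        }
    }
}
```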
A Sandboxing Pattern
You can apply the following pattern to any privileged code that needs to access a restricted
resource or perform another privileged operation for which the parent Web application does
not have sufficient permissions:
1. Encapsulate the resource access code in a wrapper assembly.
Make sure the assembly is strong named so that it can be installed in the GAC.
This means that the caller must have the assertion security permission
(SecurityPermission with SecurityPermissionFlag.Assertion). Applications
configured for Medium or higher trust levels have this permission.
The .NET Framework might not provide a suitable permission to demand. In this
case, you can create and demand a custom permission. For more information
about how to create a custom permission, see "How To: Create a Custom
Encryption Permission" in the "How To" section of this guide.
Note: Default enterprise and local machine policy also grant full trust to any
code located in the My Computer zone, which includes code installed in
the GAC. This is important because granted permissions are
intersected across policy levels.
5. Configure the Web application trust level (for example, set it to "Medium").
Figure 9.2: Sandboxing privileged code in its own assembly, which asserts the relevant
permission
It is good practice to use separate assemblies to encapsulate resource access and avoid
placing resource access code in .aspx files or code behind files. For example, create a
separate data access assembly to encapsulate database access. This makes it easier to
migrate applications to partial-trust environments.
Deciding Which Approach to Take
The right approach depends upon the problem you are trying to solve and whether or not
you have the option of modifying security policy on the Web server.
Customizing Policy
This approach is the easier of the two and does not require any developer effort. However,
you might not be permitted to modify policy on the Web server and, in certain scenarios,
your code that calls the .NET Framework class library might require full trust. In these
situations, you must use sandboxing. For example, the following resources demand full
trust, and you must sandbox your resource access code when it accesses them:
ODBC data sources (through the ODBC .NET data provider)
Note: This list is not exhaustive, but it includes commonly used resource types
that currently require full trust.
Sandboxing
If you sandbox your privileged application code in a separate assembly, you can grant
additional permissions to the assembly. Alternatively, you can grant it full trust without
requiring your entire application to run with extended permissions.
For example, consider code that uses the ADO.NET OLE DB data provider and interacts
with the System.Data.OleDb.OleDbCommand class. This code requires full trust. Although
the System.Data.dll assembly is marked with AllowPartiallyTrustedCallersAttribute, the
System.Data.OleDb.OleDbCommand class, among others, cannot be called by partial-
trust callers because it is protected with a link demand for full trust. To see this, run the
following command using the permview utility from the
%windir%\Microsoft.NET\Framework\{version} directory:
permview /DECL /OUTPUT System.Data.Perms.txt System.Data.dll
Note: Just because an assembly is marked with APTCA, it does not mean that all of the
contained classes support partial-trust callers. Some classes may include explicit
demands for full trust.
Medium Trust
If you host Web applications, you may choose to implement a medium trust security policy
to restrict privileged operations. This section focuses on running medium trust applications,
and shows you how to overcome the problems you are likely to encounter.
Reduced Attack Surface
Since medium trust does not grant the application unrestricted access to all permissions,
your attack surface is reduced by granting the application a subset of the full permission
set. Many of the permissions granted by medium trust policy are also in a restricted state.
If an attacker is somehow able to take control of your application, the attacker is limited in
what he or she can do.
Application Isolation
Application isolation with code access security restricts access to system resources and
resources owned by other applications. For example, even though the process identity
might be allowed to read and write files outside of the Web application directory, the
FileIOPermission in medium trust applications is restricted. It only permits the application
to read or write to its own application directory hierarchy.
Medium Trust Restrictions
If your application runs at medium trust, it faces a number of restrictions, the most
significant of which are:
It has restricted file system access and can only access files in the application's
virtual directory hierarchy.
It cannot directly access OLE DB data sources (although medium trust applications
are granted the SqlClientPermission, which allows them to access SQL Server).
This section shows you how to access the following resource types from a medium-trust
Web application or Web service:
OLE DB
Event log
Web services
Registry
OLE DB
Medium-trust Web applications are not granted the OleDbPermission. Furthermore, the
OLE DB .NET data provider currently demands full-trust callers. If you have an application
that needs to access OLE DB data sources while running at medium trust, use the
sandboxing approach. Place your data access code in a separate assembly, strong name
it, and install it in the GAC, which gives it full trust.
Note: Modifying policy does not work unless you set the trust level to "Full" because the
OLE DB managed provider demands full trust.
Sandboxing
In this approach, you create a wrapper assembly to encapsulate OLE DB data source
access. This assembly is granted full-trust permissions, which are required to use the
ADO.NET OLE DB managed provider.
2. Request full trust. Although not strictly necessary, requesting full trust is a good
practice because it allows an administrator to view the assembly's permission
requirements by using tools like Permview.exe. To request full trust, request the
unrestricted permission set as follows:
[assembly: PermissionSet(SecurityAction.RequestMinimum, Unrestricted=true)]
3. Wrap database calls with an Assert statement to assert full trust, and add a
matching RevertAssert call to reverse the effect of the assert. Although not
strictly necessary, it is a good practice to place the call to RevertAssert in a
finally block.
Because the OLE DB provider demands full trust, the wrapper must assert
full trust. Asserting an OleDbPermission is not sufficient. Step 7 explains how to
improve the security of using CodeAccessPermission.Assert.
public OleDbDataReader GetProductList()
{
try
{
// Assert full trust (the unrestricted permission set)
new PermissionSet(PermissionState.Unrestricted).Assert();
OleDbConnection conn = new OleDbConnection(
"Provider=SQLOLEDB; Data Source=(local);" +
"Integrated Security=SSPI; Initial Catalog=Northwind");
OleDbCommand cmd = new OleDbCommand("spRetrieveProducts", conn);
cmd.CommandType = CommandType.StoredProcedure;
conn.Open();
OleDbDataReader reader =
cmd.ExecuteReader(CommandBehavior.CloseConnection);
return reader;
}
catch(OleDbException dbex)
{
// Log and handle exception
}
catch(Exception ex)
{
// Log and handle exception
}
finally
{
CodeAccessPermission.RevertAssert();
}
return null;
}
4. Build the assembly and install it in the GAC with the following command:
gacutil -i oledbwrapper.dll
To ensure that the assembly is added to the GAC after each subsequent rebuild,
add the following post-build event command line (available from the project's
properties in Visual Studio .NET) to your wrapper assembly project:
"C:\Program Files\Microsoft Visual Studio .NET 2003\SDK\v1.1\
5. Configure your Web application for medium trust. Add the following code to
Web.config or place it in Machine.config inside a <location> element that points
to your application:
<trust level="Medium" originUrl=""/>
6. Reference the data access assembly from your ASP.NET Web application.
Because the wrapper assembly is installed in the GAC and not in the \bin directory
of the Web application, you must add the assembly to the list of assemblies used
by the application if you are not using code behind files. You can obtain the
PublicKeyToken of your assembly by using the following command:
sn -Tp oledbwrapper.dll
7. Protect the code that calls the Assert method. The Assert call means that any
code that calls the data access wrapper can
interact with the OLE DB data source. To prevent malicious code from calling the
data access component and potentially using it to attack the database, you can
issue a full demand for a custom permission prior to calling Assert and update the
medium-trust policy file to grant your Web application the custom permission. This
solution entails a reasonable amount of developer effort.
For more information about developing a custom permission, see "How To: Create
a Custom Encryption Permission" in the "How To" section of this guide.
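The demand-before-assert technique can be sketched as follows; OleDbWrapperPermission is a hypothetical custom permission created for the wrapper:

```csharp
public void AccessOleDbSource()
{
    // Authorize callers first: only code granted the (hypothetical)
    // custom permission can reach the Assert below.
    new OleDbWrapperPermission(PermissionState.Unrestricted).Demand();

    // Then assert full trust so the OLE DB provider's demand succeeds.
    new PermissionSet(PermissionState.Unrestricted).Assert();
    try
    {
        // ... OLE DB data access code as shown earlier ...
    }
    finally
    {
        CodeAccessPermission.RevertAssert();
    }
}
```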
Event Log
The EventLogPermission class is designed to encapsulate the rights of code to access
the event log. Currently, however, code must be granted full trust to be able to access the
event log. This means that a medium trust Web application cannot directly access the event
log. To do so, you must sandbox your event logging code.
At minimum, the ASP.NET process identity or any impersonated identity must have the
following permissions on the event log registry key
(HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Eventlog):
Create subkey
Enumerate subkeys
Notify
Read
These settings must be applied to this key and its subkeys. Alternatively, you can
create event sources at installation time, when administrative privileges are available.
For more information about this approach, see "Auditing and Logging" in Chapter 10,
"Building Secure ASP.NET Pages and Controls."
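One way to create the event source at installation time is with an installer class that is run with administrative privileges, for example by a setup program or Installutil.exe. The source name below is illustrative:

```csharp
using System.ComponentModel;
using System.Configuration.Install;
using System.Diagnostics;

[RunInstaller(true)]
public class AppEventLogInstaller : Installer
{
    public AppEventLogInstaller()
    {
        // Creates the event source registry entries during setup,
        // so the run-time identity needs no registry write access.
        EventLogInstaller installer = new EventLogInstaller();
        installer.Source = "MyWebAppSource";  // illustrative name
        installer.Log = "Application";
        Installers.Add(installer);
    }
}
```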
Sandboxing
To sandbox your event logging code, you create a wrapper assembly to encapsulate event
log access. You then install the wrapper assembly in the global assembly cache so that it
is granted full trust by code access security policy.
However, if your assembly needs to request full trust, request the unrestricted
permission set as follows:
[assembly: PermissionSet(SecurityAction.RequestMinimum, Unrestricted=true)]
3. Wrap event log calls with an Assert statement that asserts full trust and a
matching RevertAssert that reverses the effect of the assert. Although not strictly
necessary, it is a good practice to place the call to RevertAssert in a finally
block. The following code writes an Information entry to the Application log with
the text "Writing to the event log":
try
{
    string source = "Event Source";
    string log = "Application";
    string eventText = "Writing to the event log";
    EventLogEntryType eventType = EventLogEntryType.Information;
    // Assert permission to access the event log.
    EventLogPermission eventPerm;
    eventPerm = new EventLogPermission(EventLogPermissionAccess.Instrument,
                                       "<machinename>");
    eventPerm.Assert();
    // Check to see if the source exists.
    if(!EventLog.SourceExists(source))
    {
        // The keys do not exist, so register the application as a source.
        EventLog.CreateEventSource(source, log);
    }
    // Write the entry to the event log.
    EventLog.WriteEntry(source, eventText, eventType);
}
finally
{
    CodeAccessPermission.RevertAssert();
}
4. Build the assembly and install it in the GAC with the following command:
gacutil -i eventlogwrapper.dll
To ensure that the assembly is added to the GAC after each subsequent rebuild,
add the following post-build event command line (available from the project's
properties in Visual Studio .NET) to your wrapper assembly project:
"C:\Program Files\Microsoft Visual Studio .NET 2003\SDK\v1.1\
5. Configure your Web application for medium trust. Add the following to Web.config
or place it in Machine.config inside a <location> element that points to your
application:
<trust level="Medium" originUrl=""/>
6. Reference the event log assembly from your ASP.NET Web application.
Because the wrapper assembly is installed in the GAC and not in the \bin directory
of the Web application, you must add the assembly to the list of assemblies used
by the application if you are not using code behind files. You can obtain the
PublicKeyToken of your assembly by using the following command:
sn -Tp eventlogwrapper.dll
7. Protect the code that calls the Assert method. The Assert call means that any
code that calls the event log wrapper is able to interact with the event log. To
prevent malicious code from calling the event log wrapper and potentially using it
to fill the event log, you can issue a full demand for a custom permission prior to
calling Assert and update the medium trust policy file to grant your Web
application the custom permission. This solution entails a reasonable amount of
developer effort.
For more information about how to develop a custom permission, see "How To:
Create a Custom Encryption Permission" in the "How To" section of this guide.
Web Services
By default, medium-trust policy grants ASP.NET Web applications a restricted
WebPermission. To be able to call Web services from your Web application, you must
configure the originUrl attribute on your application's <trust> element.
Task To call a single Web service from a medium trust Web application
1. Configure the application to run at medium trust.
2. Set the originUrl to point to the Web service you want to be able to call, as
follows:
<trust level="Medium" originUrl="http://servername/.*"/>
The originUrl value is passed to the constructor of the
System.Text.RegularExpressions.Regex class so that it can match the URLs that
your application is allowed to access. This Regex class is used in conjunction
with a WebPermission class. The ".*" matches any URL beginning with
"http://servername/".
The originUrl attribute is used when ASP.NET policy is evaluated. It gives a value for the
$OriginHost$ substitution parameter. Here is the WebPermission definition from
Web_mediumtrust.config:
<IPermission
class="WebPermission"
version="1">
<ConnectAccess>
<URI uri="$OriginHost$"/>
</ConnectAccess>
</IPermission>
If you do not specify the Web servers accessed by your application, any Web service
request will fail with a SecurityException. To call a Web service on the local Web server,
use the following configuration:
<trust level="Medium" originUrl="http://localhost/.*" />
If your application needs to access multiple Web services on different servers, you need to
customize ASP.NET policy because you can only specify one originUrl on the <trust>
element in Web.config or Machine.config.
2. Locate WebPermission and add a <URI> element for each server you will be
accessing, as follows:
<IPermission class="WebPermission" version="1">
<ConnectAccess>
<URI uri="$OriginHost$"/>
<URI uri="http://server1/.*"/>
<URI uri="http://server2/.*"/>
<URI uri="http://server3/.*"/>
</ConnectAccess>
</IPermission>
If you call the Web service by using its NetBIOS name, DNS name, or IP
address, you must have a separate <URI> element for each URI, as shown in the
following example.
<IPermission class="WebPermission" version="1">
<ConnectAccess>
<URI uri="$OriginHost$"/>
<URI uri="http://servername.yourDomain.com/.*"/>
<URI uri="http://servername/.*"/>
<URI uri="http://127.0.0.1/.*"/>
</ConnectAccess>
</IPermission>
4. Update your application's Web.config file to point to the newly created policy file.
This requires that you create a new trust level and map it to the new policy file.
Next, configure the <trust> element of your application to use the new level.
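As a sketch of step 4, assuming a custom policy file named web_mediumtrust_WebServices.config (the level name and file name are illustrative), the two configuration changes look like this:

```xml
<!-- Machine.config: register a new trust level that maps to the
     customized policy file (names are illustrative) -->
<securityPolicy>
  <trustLevel name="MediumPlusWebServices"
              policyFile="web_mediumtrust_WebServices.config" />
</securityPolicy>

<!-- Application's Web.config: use the new trust level -->
<trust level="MediumPlusWebServices" originUrl="" />
```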
In this case, the ASP.NET application requires the EnvironmentPermission with read
access to the USERNAME environment variable. Default medium-trust policy grants this
permission to Web applications.
In an ASP.NET server-side scenario, the credentials are obtained from the ASP.NET
application's thread or process-level token. If DefaultCredentials are used from a desktop
application, the current interactive user's token is used. The demand for
EnvironmentPermission is a risk mitigation strategy designed to ensure that code cannot
use the local user's credentials at will and expose them to the network.
Registry
By default, medium-trust Web applications are not granted the RegistryPermission. To
configure your application to access the registry, you must either modify ASP.NET policy to
grant this permission to your application or develop a sandboxed wrapper assembly that
has the necessary permission.
The sandboxing approach is the same as described earlier for OLE DB data sources and
the event log.
Customizing Policy
The easiest way to customize policy is to create a custom policy file based on the medium-
trust policy file and configure your application to use the custom policy. The custom policy
grants RegistryPermission to the application.
By making a copy and creating a custom policy file, you avoid making changes
directly to the Web_mediumtrust.config file. Making changes directly to the default
medium trust file affects every application on the machine that is configured for
medium trust.
2. Locate the <SecurityClasses> element and add the following to register the
RegistryPermission class:
<SecurityClass Name="RegistryPermission"
   Description="System.Security.Permissions.RegistryPermission,
                mscorlib, Version=1.0.5000.0, Culture=neutral,
                PublicKeyToken=b77a5c561934e089"/>
5. Update Machine.config to create a new trust level that is mapped to the new
policy file.
<system.web>
  <securityPolicy>
    <trustLevel name="MediumPlusRegistry"
                policyFile="web_mediumtrust_Registry.config" />
  </securityPolicy>
</system.web>
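To complete the mapping, point your application at the new trust level in its Web.config, for example:

```xml
<!-- Application's Web.config: use the custom trust level
     registered in Machine.config -->
<trust level="MediumPlusRegistry" originUrl="" />
```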
The recommended isolation model uses IIS 6.0 application pools on Windows Server 2003
and provides process level isolation in addition to code access security. On Windows 2000,
isolation can only be achieved using code access security and separate thread identities.
Migrating an application to run with partial trust usually requires a certain amount of
reengineering. You might need to reengineer if the application accesses resources that are
not permitted by the partial trust level or if it calls strong named assemblies that do not
contain APTCA. In these cases, you can sandbox privileged resource access in separate
wrapper assemblies. In some scenarios, you might be able to create and use custom policy
files, although this depends on your Web server's security policy.
It is a good design practice to place resource access code in separate assemblies and
avoid placing this code in .aspx files and code behind files. The use of separate assemblies
allows code access security policy to be applied to the assembly independently from the
Web application and it allows you to develop sandboxed trusted code to perform resource
access.
Additional Resources
For more information, see the following resources:
"Security in .NET: The Security Infrastructure of the CLR Provides Evidence, Policy,
Permissions, and Enforcement Services" in MSDN Magazine at
http://msdn.microsoft.com/msdnmag/issues/02/09/SecurityinNET/default.aspx.
"Security in .NET: Enforce Code Access Rights with the Common Language
Runtime" in MSDN Magazine at
http://msdn.microsoft.com/msdnmag/issues/01/02/CAS/default.aspx.
LaMacchia, Lange, Lyons, Martin, and Price. .NET Framework Security. Addison
Wesley Professional, 2002.
"How To: Create a Custom Encryption Permission" in the "How To" section of this
guide.
Chapter 10: Building Secure ASP.NET Pages and Controls
In This Chapter
Preventing cross-site scripting (XSS) attacks
Input data validation should be a top consideration when you build Web pages because the
majority of top application-level attacks rely on vulnerabilities in this area. One of the most
prevalent attacks today is cross-site scripting (XSS), which is more of an attack on your
application's users than on the application itself, but it exploits server-side application
vulnerabilities all the same. The results can be devastating and can lead to information
disclosure, identity spoofing, and elevation of privilege.
How to Use This Chapter
To build secure Web pages and controls, you need to follow the correct programming
practices that this chapter discusses. In addition to secure programming practices, use the
corresponding chapters in this guide to help you build secure ASP.NET pages and controls.
Implement the steps in Chapter 19, "Securing Your ASP.NET Application and
Web Services." The chapter helps you configure ASP.NET appropriately with
secure settings in Machine.config and Web.config.
Understand the threats and attacks that are specific to ASP.NET pages and
controls. Apply countermeasures according to guidelines in this chapter.
Code injection
Session hijacking
Identity spoofing
Parameter manipulation
Network eavesdropping
Information disclosure
Code Injection
Code injection occurs when an attacker causes arbitrary code to run using your
application's security context. The risk increases if your application runs using a privileged
account.
Attacks
Buffer overflows. The type safe verification of managed code reduces the risk
significantly, but your application is still vulnerable, especially where it calls
unmanaged code. Buffer overflows can allow an attacker to execute arbitrary code
inside your Web application process, using its security context.
SQL injection. This attack targets vulnerable data access code. The attacker
sends SQL input that alters the intended query or executes completely new queries
in the database. Forms authentication logon pages are common targets because
the username and password are used to query the user store.
Vulnerabilities
Countermeasures
Validate input so that an attacker cannot inject script code or cause buffer
overflows.
Encode all output that includes input. This prevents potentially malicious script tags
from being interpreted as code by the client's browser.
Use stored procedures that accept parameters to prevent malicious SQL input from
being treated as executable statements by the database.
Use least privileged process and impersonation accounts. This mitigates risk and
reduces the damage that can be done if an attacker manages to execute code
using the application's security context.
Session Hijacking
Session hijacking occurs when the attacker captures an authentication token and takes
control of another user's session. Authentication tokens are often stored in cookies or in
URLs. If the attacker captures the authentication token, he can transmit it to the application
along with a request. The application associates the request with the legitimate user's
session, which allows the attacker to gain access to the restricted areas of the application
that require authenticated access. The attacker then assumes the identity and privileges of
the legitimate user.
Vulnerabilities
Common vulnerabilities that make your Web pages and controls susceptible to session
hijacking include:
Attacks
Cookie replay. The attacker captures the authentication cookie either by using
network monitoring software or by some other means, for example, by exploiting
an XSS vulnerability.
Query string manipulation. A malicious user changes the session identifier that is
clearly visible in the URL query string.
Countermeasures
Do not pass session identifiers that represent authenticated users in query strings.
Identity Spoofing
Identity spoofing occurs when a malicious user assumes the identity of a legitimate user so
that he can access the application.
Vulnerabilities
Common vulnerabilities that make your Web pages and controls susceptible to an identity
spoofing attack include:
Attacks
Cookie replay. The attacker steals the authentication cookie either by using
network monitoring software or by using an XSS attack. The attacker then sends
the cookie to the application to gain spoofed access.
Brute force password attacks. The attacker repeatedly tries username and
password combinations.
Countermeasures
Enforce strong passwords. Regular expressions can be used to ensure that user-
supplied passwords meet suitable complexity requirements.
For more information about storing password hashes and other secrets in the database,
see Chapter 14, "Building Secure Data Access."
Parameter Manipulation
Parameters are the items of data that are passed from the client to the server over the
network. They include form fields, query strings, view state, cookies, and HTTP headers. If
sensitive data or data that is used to make security decisions on the server are passed
using unprotected parameters, your application is potentially vulnerable to information
disclosure or unauthorized access.
Vulnerabilities
Using hidden form fields or query strings that contain sensitive data
Attacks
Cookie replay attacks. The attacker captures and alters a cookie and then replays
it to the application. This can easily lead to identity spoofing and elevation
of privilege if the cookie contains data that is used for authentication or authorization
on the server.
Manipulation of hidden form fields. These fields contain data used for security
decisions on the server.
Countermeasures
Do not rely on client-side state management options. Avoid using any of the client-
side state management options such as view state, cookies, query strings or hidden
form fields to store sensitive data.
Store sensitive data on the server. Use a session token to associate the user's
session with sensitive data items that are maintained on the server.
Use a message authentication code (MAC) to protect the session token. Pair this
with authentication, authorization, and business logic on the server to ensure that
the token is not being replayed.
Network Eavesdropping
Network eavesdropping involves using network monitoring software to trace packets of
data sent between browser and Web server. This can lead to the disclosure of application-
specific confidential data, the retrieval of logon credentials, or the capture of authentication
cookies.
Vulnerabilities
Attacks
Network eavesdropping attacks are performed by using packet sniffing tools that are
placed on the network to capture traffic.
Countermeasures
Information Disclosure
Information disclosure occurs when an attacker probes your Web pages looking for ways
to cause exception conditions. This can be a fruitful exercise for the attacker because
exception details, which often are returned as HTML and displayed in the browser, can
divulge extremely useful information, such as stack traces that contain database connection
strings, database names, database schema information, SQL statements, and operating
system and platform versions.
Vulnerabilities
Attacks
There are many attacks that can result in information disclosure. These include:
Buffer overflows.
Countermeasures
Use default redirect pages that contain generic and harmless error messages.
Design Considerations
Before you develop Web pages and controls, there are a number of important issues that
you should consider at design time. The following are the key considerations:
Fail securely.
Note that in Figure 10.2, the restricted subfolder is configured in Internet Information
Services (IIS) to require SSL access. The first <authorization> element in Web.config
allows all users to access the public area, while the second element prevents
unauthenticated users from accessing the contents of the secured subfolder and forces a
login.
For more information about restricting authentication cookies so that they are passed only
over HTTPS connections and about how to navigate between restricted and nonrestricted
pages, see "Use Absolute URLs for Navigation" in the "Authentication" section of this
chapter.
You can use IIS to configure each application to use a separate anonymous Internet
user account and then enable impersonation. Each application then has a distinct
identity for resource access. For more information about this approach, see
Chapter 20, "Hosting Multiple Web Applications."
If you need to access a specific remote resource (for example, a file share) and
have been given a particular Windows account to use, you can configure this
account as the anonymous Web user account for your application. Then you can
use programmatic impersonation prior to accessing the specific remote resource.
For more information, see "Impersonation" later in this chapter.
Protect Credentials and Authentication Tickets
Your design should factor in how to protect credentials and authentication tickets.
Credentials need to be secured if they are passed across the network and while they are in
persistent stores such as configuration files. Authentication tickets must be secured over
the network because they are vulnerable to hijacking. Encryption provides a solution. SSL or
IPSec can be used to protect credentials and tickets over the network and DPAPI provides
a good solution for encrypting credentials in configuration files.
Fail Securely
If your application fails with an unrecoverable exception condition, make sure that it fails
securely and does not leave the system wide open. Make sure the exception details that
are valuable to a malicious user are not allowed to propagate to the client and that generic
error pages are returned instead. Plan to handle errors using structured exception handling,
rather than relying on method error codes.
Consider the authorization granularity that you use in the authenticated parts of your site. If
you have configured a directory to require authentication, should all users have equal
access to the pages in that directory? If necessary, you can apply different authorization
rules for separate pages based on the identity, or more commonly, the role membership of
the caller, by using multiple <authorization> elements within separate <location>
elements.
For example, two pages in the same directory can have different <allow> and <deny>
elements in Web.config.
When Web controls and user controls are put in their own assemblies, you can configure
security for each assembly independently by using code access security policy. This
provides additional flexibility for the administrator and it means that you are not forced to
grant extended permissions to all controls just to satisfy the requirements of a single
control.
Use separate assemblies and call them from your page classes rather than embedding
resource access code in your page class event handlers. This provides greater flexibility for
code access security policy and is particularly important for building partial-trust Web
applications. For more information, see Chapter 9, "Using Code Access Security with
ASP.NET."
Input Validation
If you make unfounded assumptions about the type, length, format, or range of input, your
application is unlikely to be robust. Input validation can become a security issue if an
attacker discovers that you have made unfounded assumptions. The attacker can then
supply carefully crafted input that compromises your application. The misplaced trust of
user input is one of the most common and devastating vulnerabilities in Web applications.
Regular Expressions
You can use regular expressions to restrict the range of valid characters, to strip unwanted
characters, and to perform length and format checks. You can constrain input format by
defining patterns that the input must match. ASP.NET provides the
RegularExpressionValidator control and the Regex class is available from the
System.Text.RegularExpressions namespace.
If you use the validator controls, validation succeeds if the control is empty. For mandatory
fields, use a RequiredFieldValidator. Also, the regular expression validation
implementation is slightly different on the client and server. On the client, the regular
expression syntax of Microsoft JScript® development software is used. On the server,
System.Text.RegularExpressions.Regex syntax is used. Since JScript regular expression
syntax is a subset of System.Text.RegularExpressions.Regex syntax, it is recommended
that JScript regular expression syntax be used to yield the same results on both the client
and the server.
For more information about the full range of ASP.NET validator controls, refer to the .NET
Framework documentation.
RegularExpressionValidator Control
To validate Web form field input, you can use the RegularExpressionValidator control.
Drag the control onto a Web form and set its ValidationExpression, ControlToValidate,
and ErrorMessage properties.
You can set the validation expression using the properties window in Microsoft Visual Studio
.NET or you can set the property dynamically in the Page_Load event handler. The latter
approach allows you to group together all of the regular expressions for all controls on the
page.
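As a sketch of the dynamic approach, assuming a RegularExpressionValidator named nameRegex (as in the name validation example later in this section), the expression can be assigned in the Page_Load event handler:

```csharp
private void Page_Load(object sender, System.EventArgs e)
{
    // Group the page's regular expressions in one place by assigning
    // them at run time rather than declaratively in the .aspx markup.
    nameRegex.ValidationExpression = @"[a-zA-Z'.`\s]{1,40}";
}
```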
Regex Class
If you use regular HTML controls with no runat="server" property (which rules out using
the RegularExpressionValidator control), or you need to validate input from other sources
such as query strings or cookies, you can use the Regex class either in your page class or
in a validation helper method, possibly in a separate assembly. Some examples are shown
later in this section.
String Fields
To validate string fields, such as names, addresses, tax identification numbers, and so on,
use regular expressions to do the following:
Apply formatting rules. For example, pattern-based fields, such as tax identification
numbers, ZIP codes, or postal codes, require specific patterns of input characters.
Check lengths.
Names
The following example shows a RegularExpressionValidator control that has been used to
validate a name field.
<form id="WebForm" method="post" runat="server">
<asp:TextBox id="txtName" runat="server"></asp:TextBox>
<asp:RegularExpressionValidator id="nameRegex" runat="server"
     ControlToValidate="txtName"
     ValidationExpression="[a-zA-Z'.`\s]{1,40}"
     ErrorMessage="Invalid name">
</asp:RegularExpressionValidator>
</form>
The preceding validation expression constrains the input name field to alphabetic characters
(lowercase and uppercase), the single apostrophe for names such as O'Dell, and the dot
character. In addition, the field length is constrained to 40 characters.
If you are not using server controls (which rules out the validator controls),
or you need to validate input from sources other than form fields, you can use
the System.Text.RegularExpressions.Regex class in your method code. The
following example uses the static Regex.IsMatch method directly in the page
class, rather than a validator control, to validate a U.S. Social Security
number:
if (!Regex.IsMatch(txtSSN.Text, @"^\d{3}-\d{2}-\d{4}$"))
{
// Invalid Social Security Number
}
Date Fields
Input fields that have an equivalent .NET Framework type can be type checked by the.NET
Framework type system. For example, to validate a date, you can convert the input value to
a variable of type System.DateTime and handle any resulting format exceptions if the input
data is not compatible, as follows.
try
{
DateTime dt = DateTime.Parse(txtDate.Text).Date;
}
// If the type conversion fails, a FormatException is thrown
catch( FormatException ex )
{
// Return invalid date message to caller
}
In addition to format and type checks, you might need to perform a range check on a date
field. This is easily performed using the DateTime variable, as follows.
// Exception handling is omitted for brevity
DateTime dt = DateTime.Parse(txtDate.Text).Date;
// The date must be today or earlier
if ( dt > DateTime.Now.Date )
throw new ArgumentException("Date must be in the past");
Numeric Fields
If you need to validate numeric data, for example, an age, perform type checks using the
int type. To convert string input to integer form you can use Int32.Parse or
Convert.ToInt32, and then handle any FormatException that occurs with an invalid data
type, as follows:
try
{
int i = Int32.Parse(txtAge.Text);
. . .
}
catch( FormatException)
{
. . .
}
Range Checks
Sometimes you need to validate that input data falls within a predetermined range. The
following code uses an ASP.NET RangeValidator control to constrain input to whole
numbers between 0 and 255. This example also uses the RequiredFieldValidator. Without
the RequiredFieldValidator, the other validator controls accept blank input.
<form id="WebForm3" method="post" runat="server">
<asp:TextBox id="txtNumber" runat="server"></asp:TextBox>
<asp:RequiredFieldValidator
id="rangeRegex"
runat="server"
ErrorMessage="Please enter a number between 0 and 255"
ControlToValidate="txtNumber"
style="LEFT: 10px; POSITION: absolute; TOP: 47px" >
</asp:RequiredFieldValidator>
<asp:RangeValidator
id="RangeValidator1"
runat="server"
ErrorMessage="Please enter a number between 0 and 255"
ControlToValidate="txtNumber"
Type="Integer"
MinimumValue="0"
MaximumValue="255"
style="LEFT: 10px; POSITION: absolute; TOP: 47px" >
</asp:RangeValidator>
<asp:Button id="Button1" runat="server" Text="Button"></asp:Button>
</form>
The following example shows how to perform the same range check in code, by converting the input value and testing it:
try
{
// The conversion will raise an exception if not valid.
int i = Convert.ToInt32(sInput);
if (0 <= i && i <= 255)
{
// data is valid, use the number
}
}
catch( FormatException )
{
. . .
}
Sanitizing Input
Sanitizing is about making potentially malicious data safe. It can be helpful when the range
of allowable input cannot guarantee that the input is safe. This might include stripping a null
from the end of a user-supplied string or escaping values so they are treated as literals. If
you need to sanitize input and convert or strip specific input characters, use
Regex.Replace.
Note Use this approach for defense in depth. Always start by constraining
input to the set of known "good" values.
The following code strips out a range of potentially unsafe characters, including < > \ " ' % ;
( ) &.
private string SanitizeInput(string input)
{
Regex badCharReplace = new Regex(@"[<>""'%;()&]");
string goodChars = badCharReplace.Replace(input, "");
return goodChars;
}
For more information about sanitizing free format input fields, such as comment fields, see
"Sanitizing Free Format Input" under "Cross-Site Scripting," later in this chapter.
Sanitize or reject input. For defense in depth, you can choose to use a helper
method to strip null characters or other known bad characters.
Use parameterized stored procedures for data access to ensure that type and
length checks are performed on the data used in SQL queries.
For more information about using parameters for data access and about writing secure
data access code, see Chapter 14, "Building Secure Data Access."
If you do need to accept input file names, there are two main challenges. First, is the
resulting file path and name a valid file system name? Second, is the path valid in the
context of your application? For example, is it beneath the application's virtual directory
root?
To canonicalize the file name, use System.IO.Path.GetFullPath. To check that the file path
is valid in the context of your application, you can use .NET code access security to grant
the precise FileIOPermission to your code so that it is able to access only files from specific
directories. For more information, see the "File I/O" sections in Chapter 7, "Building Secure
Assemblies" and Chapter 8, "Code Access Security in Practice."
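A minimal sketch of both checks follows, assuming the user-supplied name arrives in a variable named userFileName (illustrative) and that the application's physical root is obtained from Request.PhysicalApplicationPath:

```csharp
using System;
using System.IO;

// Canonicalize the supplied name, then verify that the result remains
// beneath the application's physical root directory.
string appRoot = Request.PhysicalApplicationPath;
string fullPath = Path.GetFullPath(Path.Combine(appRoot, userFileName));

if (!fullPath.ToLower().StartsWith(appRoot.ToLower()))
{
    // The canonicalized path escapes the application root; reject it.
    throw new ArgumentException("Invalid file name");
}
```

Path.GetFullPath resolves sequences such as ".." so that the subsequent prefix check cannot be defeated by directory traversal.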
Using MapPath
If you use MapPath to map a supplied virtual path to a physical path on the server, use the
overload of Request.MapPath that accepts a bool parameter so that you can prevent
cross application mapping, as follows:
try
{
  string mappedPath = Request.MapPath(inputPath.Text,
                                      Request.ApplicationPath, false);
}
catch (HttpException)
{
// Cross-application mapping attempted
}
The final false parameter prevents cross-application mapping. This means that a user
cannot successfully supply a path that contains ".." to traverse outside of your application's
virtual directory hierarchy. Any attempt to do so results in an exception of type
HttpException.
Note Server controls can use the Control.MapPathSecure method to read files.
This method requires that the calling code is granted full trust by code
access security policy; otherwise an HttpException is thrown. For more
information, see Control.MapPathSecure in the .NET Framework SDK
documentation.
The following table shows commonly used validation regular expressions:

E-mail
    Expression:  ^\w+([-+.]\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)*$
    Example:     [email protected]
    Description: Validates an e-mail address.

URL
    Expression:  ^(http|https|ftp)\://[a-zA-Z0-9\-\.]+\.[a-zA-Z]{2,3}(:[a-zA-Z0-9]*)?/?([a-zA-Z0-9\-\._\?\,\'/\\\+&%\$#\=~])*$
    Description: Validates a URL.

Zip Code
    Expression:  ^(\d{5}-\d{4}|\d{5}|\d{9})$|^([a-zA-Z]\d[a-zA-Z] \d[a-zA-Z]\d)$
    Description: Validates a U.S. ZIP code allowing 5 or 9 digits.

Password
    Expression:  ^(?=.*\d)(?=.*[a-z])(?=.*[A-Z]).{8,10}$
    Description: Validates a strong password. Must be between 8 and 10
                 characters and contain a combination of uppercase,
                 lowercase, and numeric characters, with no special
                 characters.

Non-negative integers
    Expression:  \d+
    Description: Validates a non-negative integer.

Currency (non-negative)
    Expression:  \d+(\.\d\d)?
    Description: Validates a positive currency amount. Requires two digits
                 after the decimal point.

Currency (positive or negative)
    Expression:  (-)?\d+(\.\d\d)?
    Description: Validates a positive or negative currency amount. Requires
                 two digits after the decimal point.
Cross-Site Scripting
XSS attacks exploit vulnerabilities in Web page validation by injecting client-side script code.
This code is subsequently sent back to an unsuspecting user and executed by the browser.
Because the browser downloads the script code from a trusted site, the browser has no
way of identifying that the code is not legitimate, and Internet Explorer security zones
provide no defense. XSS attacks also work over HTTP or HTTPS (SSL) connections. One
of the most serious exploits occurs when an attacker writes script to retrieve the
authentication cookie that provides access to the trusted site and posts it to a Web address
known to the attacker. This allows the attacker to spoof the legitimate user's identity and
gain illicit access to the Web site.
Validate input
Encode output
Validate Input
Validate any input that is received from outside your application's trust boundary for type,
length, format, and range using the various techniques described previously in this chapter.
Encode Output
If you write text output to a Web page and you do not know with absolute certainty that the
text does not contain HTML special characters (such as <, >, and &), then make sure to
pre-process it using the HttpUtility.HtmlEncode method. Do this even if the text came
from user input, a database, or a local file. Similarly, use HttpUtility.UrlEncode to encode
URL strings.
The HtmlEncode method replaces characters that have special meaning in HTML
with the HTML entities that represent those characters. For example, < is
replaced with &lt; and " is replaced with &quot;. Encoded data does not cause
the browser to execute code. Instead, the data is rendered as harmless HTML.
Response.Write(HttpUtility.HtmlEncode(Request.Form["name"]));
Data-Bound Controls
Data-bound Web controls do not encode output. The only control that encodes output is the
TextBox control when its TextMode property is set to MultiLine. If you bind any other
control to data that has malicious XSS code, the code will be executed on the client. As a
result, if you retrieve data from a database and you cannot be certain that the data is valid
(perhaps because it is a database that is shared with other applications), encode the data
before you pass it back to the client.
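For example, a data binding expression can encode a field before it is rendered. The Comment field name here is illustrative:

```aspx
<%# HttpUtility.HtmlEncode(
      (string)DataBinder.Eval(Container.DataItem, "Comment")) %>
```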
ASP.NET allows you to specify the character set at the page level or at the application level
by using the <globalization> element in Web.config. Both approaches are shown below
using the ISO-8859-1 character encoding, which is the default in early versions of HTML
and HTTP.
To set the character encoding at the page level, use the <meta> element or the
ResponseEncoding page-level attribute as follows:
<meta http-equiv="Content-Type"
      content="text/html; charset=ISO-8859-1" />
OR
<% @ Page ResponseEncoding="ISO-8859-1" %>
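To set the encoding at the application level instead, use the <globalization> element in Web.config, for example:

```xml
<system.web>
  <globalization
     requestEncoding="ISO-8859-1"
     responseEncoding="ISO-8859-1" />
</system.web>
```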
Note IIS 6.0 on Windows Server 2003 has functionality equivalent to URLScan built in.
Note Web browsers that do not support the HttpOnly cookie attribute either
ignore the cookie or ignore the attribute, which means that it is still
subject to XSS attacks.
The System.Net.Cookie class does not currently support an HttpOnly property. To add an
HttpOnly attribute to the cookie, you need to use an ISAPI filter, or if you want a managed
code solution, add the following code to your application's Application_EndRequest event
handler in Global.asax:
protected void Application_EndRequest(Object sender, EventArgs e)
{
string authCookie = FormsAuthentication.FormsCookieName;
foreach (string sCookie in Response.Cookies)
{
    // Just set the HttpOnly attribute on the Forms authentication cookie.
    // Skip this check to set the attribute on all cookies in the collection.
if (sCookie.Equals(authCookie))
{
// Force HttpOnly to be added to the cookie header
Response.Cookies[sCookie].Path += ";HttpOnly";
}
}
}
Forms Authentication
The threat of session hijacking and cookie replay attacks is particularly significant for
applications that use Forms authentication. You must take particular care when querying the
database using the user-supplied credentials to ensure that you are not vulnerable to SQL
injection. Additionally, to prevent identity spoofing, you should make sure that the user store
is secure and that strong passwords are enforced.
The following recommendations help you build a secure Forms authentication solution:
To ensure that SSL is used to protect both the logon credentials that are posted from the login form and the authentication cookie that is passed on subsequent requests to restricted pages, configure the secure folders in IIS to require SSL. This sets the AccessSSL=true attribute for the folder in the IIS metabase. Requests for pages in the secured folders succeed only if HTTPS is used on the request URL.
For SSL, you must have a server certificate installed on the Web server. For more
information, see "How To: Setup SSL on a Web Server" in the "How To" section of
"Microsoft patterns & practices Volume I, Building Secure ASP.NET Applications:
Authentication, Authorization, and Secure Communication" at
http://msdn.microsoft.com/library/default.asp?url=/library/en-
us/dnnetsec/html/secnetlpMSDN.asp.
If you are using .NET Framework version 1.1, set the secure property by using
requireSSL="true" on the <forms> element as follows:
<forms loginUrl="Secure\Login.aspx"
requireSSL="true" . . . />
If you are using .NET Framework version 1.0, set the secure property manually in the Application_EndRequest event handler in Global.asax using code such as the following:
protected void Application_EndRequest(Object sender, EventArgs e)
{
    string authCookie = FormsAuthentication.FormsCookieName;
    foreach (string sCookie in Response.Cookies)
    {
        // Set the Secure attribute on the Forms authentication cookie only.
        if (sCookie.Equals(authCookie))
            Response.Cookies[sCookie].Secure = true;
    }
}
To provide both privacy and integrity for the cookie, set the protection attribute on the <forms> element as follows:
<forms protection="All" . . . />
For more information, see Microsoft Knowledge Base articles 313116, "PRB: Forms
Authentication Requests Are Not Directed to loginUrl Page," and 310415, "PRB: Mobile
Forms Authentication and Different Web Applications."
Once a user logs on and browses pages in a directory that is secured with SSL, relative links such as "../publicpage.aspx" or redirects to HTTP pages result in the pages being served over the https protocol, which incurs an unnecessary performance overhead. To avoid this, use absolute links such as "http://servername/appname/publicpage.aspx" when redirecting from an HTTPS page to an HTTP page.
Similarly, when you redirect to a secure page (for example, the login page) from a public
area of your site, you must use an absolute HTTPS path, such as
"https://servername/appname/secure/login.aspx", rather than a relative path, such as
restricted/login.aspx. For example, if your Web page provides a logon button, use the
following code to redirect to the secure login page.
private void btnLogon_Click( object sender, System.EventArgs e )
{
// Form an absolute path using the server name and v-dir name
string serverName =
    HttpUtility.UrlEncode(Request.ServerVariables["SERVER_NAME"]);
string vdirName = Request.ApplicationPath;
Response.Redirect("https://" + serverName + vdirName +
"/Restricted/Login.aspx");
}
Thoroughly validate the supplied credentials. Use regular expressions to make sure
they do not include SQL characters.
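For example, a whitelist regular expression can constrain user names before they reach the database. The pattern and length limit below are illustrative assumptions, not a recommendation from this guide:

```csharp
using System;
using System.Text.RegularExpressions;

class CredentialValidator
{
    // Accept only letters, digits, and a few safe punctuation characters,
    // up to 64 characters. Anything else (quotes, spaces, dashes used in
    // SQL comments) is rejected rather than escaped.
    public static bool IsValidUserName(string name)
    {
        return name != null &&
               Regex.IsMatch(name, @"^[a-zA-Z0-9.@_]{1,64}$");
    }

    static void Main()
    {
        Console.WriteLine(IsValidUserName("alice@contoso"));   // True
        Console.WriteLine(IsValidUserName("x' OR '1'='1 --")); // False
    }
}
```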
For more information about preventing SQL injection, see Chapter 14, "Building Secure
Data Access."
Authorization
You can use authorization to control access to directories, individual Web pages, page
classes, and methods. If required, you can also include authorization logic in your method
code. When you build authorization into your Web pages and controls, consider the
following recommendations:
For more information, see "Authorization" in Chapter 19, "Securing Your ASP.NET
Application and Web Services."
You may also have a method that can be called by members of several different roles but that needs to branch to different logic depending on the caller's exact role. This kind of run-time decision is not possible with declarative security.
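To illustrate the difference (the role names and amount threshold here are illustrative assumptions): a declarative demand fixes the allowed roles at design time, while an imperative check can branch at run time.

```csharp
using System.Security;
using System.Security.Permissions;
using System.Threading;

public class OrderManager
{
    // Declarative: only members of the Manager role can call this method.
    // The allowed roles cannot change at run time.
    [PrincipalPermission(SecurityAction.Demand, Role = "Manager")]
    public void ApproveOrder(int orderId)
    {
        // ... approval logic
    }

    // Imperative: callers from several roles are allowed in, but large
    // orders additionally require the Senior Manager role, decided at
    // run time.
    public void ApproveLargeOrder(int orderId, decimal amount)
    {
        if (amount > 10000 &&
            !Thread.CurrentPrincipal.IsInRole("Senior Manager"))
        {
            throw new SecurityException("Senior Manager role required.");
        }
        // ... approval logic
    }
}
```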
Impersonation
By default, ASP.NET applications do not impersonate the original caller, for design, implementation, and scalability reasons. For example, impersonation prevents effective middle-tier connection pooling, which can have a severe impact on application scalability.
In certain scenarios, you might require impersonation (for example, if you require an
alternate identity (non-process identity) for resource access). In hosting environments,
multiple anonymous identities are often used as a form of application isolation. For example,
if your application uses Forms or Passport authentication, you can impersonate the
anonymous Internet user account associated by IIS with your application's virtual directory.
You can impersonate the original caller, which might be the anonymous Internet user
account or a fixed identity. To impersonate the original caller (the IIS authenticated identity),
use the following configuration:
<identity impersonate="true" />
To impersonate a fixed identity, use additional userName and password attributes on the
<identity> element, but make sure you use Aspnet_setreg.exe to store encrypted
credentials in the registry. For more information about encrypting credentials in configuration
files and about Aspnet_setreg.exe, see Chapter 19, "Securing Your ASP.NET Application
and Web Services."
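After running Aspnet_setreg.exe, the <identity> element references the registry location rather than cleartext credentials. The key path below is illustrative:

```xml
<identity impersonate="true"
  userName="registry:HKLM\SOFTWARE\YourApp\identity\ASPNET_SETREG,userName"
  password="registry:HKLM\SOFTWARE\YourApp\identity\ASPNET_SETREG,password" />
```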
To impersonate the anonymous Internet user account, use IIS to configure it as the trusted alternate identity. Then use the following code to create an impersonation token from the anonymous account only while you execute your remote resource access code:
HttpContext context = HttpContext.Current;
// Get the service provider from the context
IServiceProvider iServiceProvider = context as IServiceProvider;
// Get the Type that represents HttpWorkerRequest
Type httpWorkerRequestType = typeof(HttpWorkerRequest);
// Get the HttpWorkerRequest service from the service provider
// NOTE: When trying to get a HttpWorkerRequest type from the HttpContext,
// unmanaged code permission is demanded.
HttpWorkerRequest httpWorkerRequest =
    iServiceProvider.GetService(httpWorkerRequestType) as HttpWorkerRequest;
// Get the token passed by IIS
IntPtr ptrUserToken = httpWorkerRequest.GetUserToken();
// Create a WindowsIdentity from the token
WindowsIdentity winIdentity = new WindowsIdentity(ptrUserToken);
// Impersonate the user
Response.Write("Before impersonation: " +
WindowsIdentity.GetCurrent().Name + "<br>");
WindowsImpersonationContext impContext = winIdentity.Impersonate();
Response.Write("Impersonating: " + WindowsIdentity.GetCurrent().Name + "<br>");
// Place resource access code here
// Stop impersonating
impContext.Undo();
Response.Write( "After Impersonating: " +
WindowsIdentity.GetCurrent().Name + "<br>");
For more information about encrypting credentials in configuration files and about
Aspnet_setreg.exe, see Chapter 19, "Securing Your ASP.NET Application and Web
Services."
The following two types of tokens are associated with session management:
The authentication token, which is issued when the user is authenticated and which grants access to restricted areas of the application.
The session token, which identifies the user's session state across multiple HTTP requests.
For more information about how to secure the authentication token for Forms
authentication, see "Forms Authentication" earlier in this chapter.
Do Not Rely on Client-Side State Management Options
Avoid using any of the client-side state management options, such as view state, cookies,
query strings, or hidden form fields, to store sensitive data. The information can be
tampered with or seen in clear text. Use server-side state management options, for
example, a database, to store sensitive data.
Secure session management requires that you do not mix the two types of tokens. First,
secure the authentication token to make sure an attacker cannot capture it and use it to
gain access to the restricted areas of your application. Second, build your application in
such a way that the session token alone cannot be used to gain access to sensitive pages
or data. The session token should be used only for personalization purposes or to maintain
the user state across multiple HTTP requests. Without authentication, do not maintain
sensitive items of the user state.
If your site has secure areas and public access areas, you must protect the secure
authenticated areas with SSL. When a user moves back and forth between secure and
public areas, the ASP.NET-generated session cookie (or URL if you have enabled cookie-
less session state) moves with them in plaintext, but the authentication cookie is never
passed over unencrypted HTTP connections as long as the Secure cookie property is set.
Note: You can set the Secure property for a Forms authentication cookie by setting requireSSL="true" on the <forms> element.
An attacker might obtain a session cookie passed over an unencrypted HTTP session, but if you have designed your site correctly and placed restricted pages and resources in a separate, secure directory, the attacker can use it to access only the non-secure, public pages. In this event, there is no security threat because these pages do not perform sensitive operations. If the attacker tries to replay the session token to a secured page, the attacker is redirected to the application's login page because there is no authentication token.
For more information about using the Secure cookie property and how to build secure
Forms authentication solutions, see "Forms Authentication" earlier in this chapter.
If the session data on the server contains sensitive items, the data and the store need to be secured. ASP.NET supports several session state modes. For information about how to secure ASP.NET session state, see "Session State" in Chapter 19, "Securing Your ASP.NET Application and Web Services."
Parameter Manipulation
Parameters, such as those found in form fields, query strings, view state, and cookies, can
be manipulated by attackers who usually intend to gain access to restricted pages or trick
the application into performing an unauthorized operation.
For example, if an attacker knows that you are using a weak authentication token scheme
such as a guessable number within a cookie, the attacker can construct a cookie with
another number and make a request as a different (possibly privileged) user.
Note: The @Page directive also supports the preceding attributes, which allows you to customize settings on a per-page basis.
While you can override whether or not view state is enabled on a per-control, page, or
application basis, make sure enableViewStateMac is set to true whenever you use view
state.
Server.Transfer
If your application uses Server.Transfer as shown below and sets the optional second
Boolean parameter to true so that the QueryString and Form collections are preserved,
then the command will fail if enableViewStateMac is set to true.
Server.Transfer("page2.aspx", true);
If you omit the second parameter or set it to false, an error does not occur. To preserve the QueryString and Form collections without setting enableViewStateMac to false, follow the workaround discussed in Microsoft Knowledge Base article 316920, "PRB: 'View State Is Invalid' Error Message When You Use Server.Transfer."
For information about configuring the <machineKey> element for view state encryption and
integrity checks, see Chapter 19, "Securing Your ASP.NET Application and Web Services."
Note: This attack is usually not an issue for anonymously browsed pages (where no user name is available) because this type of page should perform no sensitive transactions.
For more information about using regular expressions and how to validate input data, see
"Input Validation" earlier in this chapter.
Exception Management
Correct exception handling in your Web pages prevents sensitive exception details from
being revealed to the user. The following recommendations apply to ASP.NET Web pages
and controls.
For more information about exception management, see Chapter 7, "Building Secure
Assemblies."
In the event of an unhandled exception, that is, one that propagates to the application
boundary, return a generic error page to the user. To do this, configure the
<customErrors> element as follows:
<customErrors mode="On" defaultRedirect="YourErrorPage.htm" />
The error page should include a suitably generic error message, possibly with additional
support details. The name of the page that generated the error is passed to the error page
through the aspxerrorpath query parameter.
You can also use multiple error pages for different types of errors. For example:
<customErrors mode="On" defaultRedirect="YourErrorPage.htm">
<error statusCode="404" redirect="YourNotFoundPage.htm"/>
<error statusCode="500" redirect="YourInternalErrorPage.htm"/>
</customErrors>
For individual pages you can supply an error page using the following page-level attribute:
<% @ Page ErrorPage="YourErrorPage" %>
If exceptions are allowed to propagate from the page handler or there is no page handler,
an application error event is raised. To trap application-level events, implement
Application_Error in Global.asax, as follows:
protected void Application_Error(Object sender, EventArgs e)
{
// Write to the event log.
}
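A sketch of what the handler body might do, assuming an event source named "MyWebApp" has already been created as described in the "Auditing and Logging" section that follows:

```csharp
using System;
using System.Diagnostics;

protected void Application_Error(Object sender, EventArgs e)
{
    // Capture full exception details privately; the user sees only
    // the generic page configured in <customErrors>.
    Exception ex = Server.GetLastError();
    if (ex != null)
    {
        EventLog.WriteEntry("MyWebApp",              // assumed event source
                            ex.ToString(),
                            EventLogEntryType.Error);
    }
}
```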
Auditing and Logging
The default ASP.NET process identity for Web applications can write new records to the
event log, but it does not have sufficient permissions to create new event sources. To
address this issue, you have two choices. You can create an installer class, which is
invoked at installation time when administrator privileges are available, or you can configure
the permissions on the EventLog registry key to allow the ASP.NET process identity (or
impersonated identity) to create event sources at run time. The former approach is recommended.
2. Select Installer Class from the list of templates and provide a suitable class file
name.
This creates a new installer class annotated with the RunInstaller(true) attribute.
[RunInstaller(true)]
public class EventSourceInstaller : System.Configuration.Install.Installer
{
    . . .
}
3. Display the new installer class in Design view, display the Toolbox, and then click
Components in the Toolbox. Drag an EventLogInstaller component onto the
Designer work surface.
4. Set the following EventLogInstaller properties:
Log. Set this property to "Application", which is the name of the event log you should use. You can use the default Application log or create an application-specific log.
Source. Set this property to the event source name. This is usually your application name.
5. Build your project and then create an instance of the installer class at installation
time.
Installer class instances are automatically created and invoked if you use a .NET
Setup and Deployment project to create a Windows installer file (.msi). If you use
xcopy or equivalent deployment, use the InstallUtil.exe utility to create an
instance of the installer class and to execute it.
6. To confirm the successful generation of the event source, use a registry editor and
navigate to:
HKLM\System\CurrentControlSet\Services\EventLog\Application\{
Confirm that the key exists and that it contains an EventMessageFile string value
that points to the default .NET Framework event message file:
\Windows\Microsoft.NET\Framework\{version}\EventLogMessages.dll
If you have an existing application and do not want to create an installer class, you must
grant the ASP.NET process identity the correct access rights on the event log registry key.
For registry key details and the precise access rights that are required, see "Event Log" in
Chapter 19, "Securing Your ASP.NET Application and Web Services."
EventLogPermission
Code that writes to the event log must be granted the EventLogPermission by code
access security policy. This becomes an issue if your Web application is configured to run
at a partial-trust level. For information about how to write to the event log from a partial
trust Web application, see Chapter 9, "Using Code Access Security with ASP.NET."
Summary
This chapter started by showing you the main threats that you need to address when you
build Web pages and controls. Many application-level attacks rely on vulnerabilities in input
validation. Take special care in this area to make sure that your validation strategy is sound
and that all data that is processed from a non-trusted source is properly validated. Another
common vulnerability is the failure to protect authentication cookies. The "Forms
Authentication" section of this chapter showed you effective countermeasures to apply to
prevent unauthorized access, session hijacking, and cookie replay attacks.
Additional Resources
For more information, see the following resources:
For information on securing your developer workstation, see "How To: Secure Your
Developer Workstation" in the "How To" section of this guide.
For walkthroughs of using Forms Authentication, see "How To: Use Forms
Authentication with SQL Server 2000" and "How To: Use Forms Authentication with
Active Directory", in the "How To" section of "Microsoft patterns & practices Volume
I, Building Secure ASP.NET Applications: Authentication, Authorization, and
Secure Communication" at http://msdn.microsoft.com/library/en-
us/dnnetsec/html/SecNetHT00.asp.
For more information about using regular expressions, see Microsoft Knowledge
Base article 308252, "How To: Match a Pattern by Using Regular Expressions and
Visual C# .NET."
For more information about user input validation in ASP.NET, see MSDN article
"User Input Validation in ASP.NET" at
http://msdn.microsoft.com/library/default.asp?url=/library/en-
us/dnaspp/html/pdc_userinput.asp.
For more information about the Secure cookie property, see RFC2109 on the W3C
Web site at http://www.w3.org/Protocols/rfc2109/rfc2109.
For more information on security considerations from the Open Hack competition,
see MSDN article "Building and Configuring More Secure Web Sites" at
http://msdn.microsoft.com/library/default.asp?url=/library/en-
us/dnnetsec/html/openhack.asp.
Chapter 11: Building Secure Serviced Components
In This Chapter
Preventing anonymous access to serviced components
Serviced components are typically used to encapsulate an application's business and data
access logic and are used when infrastructure services such as distributed transactions,
object pooling, queued components, and others are required in an application's middle tier.
Enterprise Services applications often reside on middle-tier application servers as shown in
Figure 11.1.
Network eavesdropping
Unauthorized access
Unconstrained delegation
Repudiation
Figure 11.2 highlights these top threats together with common serviced component
vulnerabilities.
Network Eavesdropping
Enterprise Services applications often run on middle-tier application servers, remote from
the Web server. As a result, sensitive application data must be protected from network
eavesdroppers. You can use an Internet Protocol Security (IPSec) encrypted channel
between Web and application server. This solution is commonly used in Internet data
centers. Serviced components also support remote procedure call (RPC) packet level
authentication, which provides packet-based encryption. This is most typically used to
secure communication to and from desktop-based clients.
Unauthorized Access
By enabling COM+ role-based authorization (it is disabled by default on Microsoft Windows
2000), you can prevent anonymous access and provide role-based authorization to control
access to the restricted operations exposed by your serviced components.
Unconstrained Delegation
If you enable delegation on Windows 2000 to allow a remote server to access network
resources using the client's impersonated token, the delegation is unconstrained. This
means that there is no limit to the number of network hops that can be made. Microsoft
Windows Server 2003 introduces constrained delegation.
Many applications store sensitive data such as database connection strings in the COM+
catalog using object constructor strings. These strings are retrieved and passed to an
object by COM+ when the object is created. Sensitive configuration data should be
encrypted prior to storage in the catalog.
Repudiation
The repudiation threat arises when a user denies performing an operation or transaction,
and you have insufficient evidence to counter the claim. Auditing should be performed
across all application tiers. Serviced components should log user activity in the middle tier.
Serviced components usually have access to the original caller's identity because front-end
Web applications usually enable impersonation in Enterprise Services scenarios.
Design Considerations
Before you start writing code, there are a number of important issues to consider at design
time. The key considerations are:
Role-based authorization
Audit requirements
Transactions
Role-Based Authorization
For effective role-based authorization using COM+ roles, ensure that the original caller's
security context is used for the call to the serviced component. This allows you to perform
granular role-based authorization based on the caller's group membership. If an ASP.NET
Web application calls your serviced components, this means that the Web application needs
to impersonate its callers before calling your component.
Audit Requirements
To address the repudiation threat, sensitive transactions performed by Enterprise Services
components should be logged. At design time, consider the type of operations that should
be audited and the details that should be logged. At a minimum, this should include the
identity that initiated the transaction and the identity used to perform the transaction, which
may or may not be the same.
Transactions
If you plan to use distributed transactions, consider where the transaction is initiated and
consider the implications of running transactions between components and resource
managers separated by firewalls. In this scenario, the firewall must be configured to
support the Microsoft Distributed Transaction Coordinator (DTC) traffic.
The main issue for you to consider when building serviced components is to ensure that all
calls are authenticated to prevent anonymous users from accessing your component's
functionality.
Note: Using this attribute is equivalent to selecting Enforce access checks for this application on the Security tab of the application's Properties dialog box in Component Services.
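The attribute the note refers to is applied at the assembly level; a minimal sketch:

```csharp
using System.EnterpriseServices;

// Enforce access checks for the application, at both the process and
// component level.
[assembly: ApplicationAccessControl(
    true,
    AccessChecksLevel = AccessChecksLevelOption.ApplicationComponent)]
```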
At runtime, retrieve the object construction string and use DPAPI to decrypt the data. For
more information about using DPAPI from managed code, see "How to create a DPAPI
library" in MSDN article, "Building Secure ASP.NET Applications," at
http://msdn.microsoft.com/library/en-us/dnnetsec/html/secnetlpMSDN.asp.
[ConstructionEnabled(Default="")]
public class YourServicedComponent : ServicedComponent, ISomeInterface
{
// The object constructor is called first.
public YourServicedComponent() {}
// Then the object construction string is passed to the Construct method.
protected override void Construct(string constructString)
{
// Use DPAPI to decrypt the configuration data.
}
}
Avoid Unconstrained Delegation
Serviced component clients are authenticated with either NTLM or Kerberos authentication,
depending on the environment. Kerberos in Windows 2000 supports delegation that is
unconstrained; this means that the number of network hops that can be made with the
client's credentials has no limit.
If ASP.NET is the client, you can set the comImpersonationLevel attribute on the <processModel> element in Machine.config to configure the impersonation level:
comImpersonationLevel="[Default|Anonymous|Identify|Impersonate|Delegate]"
The impersonation level defined for an Enterprise Services server application determines
the impersonation capabilities of any remote server that the serviced components
communicate with. In this case, the serviced components are the clients.
You can specify the impersonation level for a serviced component, which applies when the
service component is a client, using the following attribute:
[assembly: ApplicationAccessControl(
    ImpersonationLevel=ImpersonationLevelOption.Identify)]
Note: Using this attribute is equivalent to setting the Impersonation Level value on the Security page of the application's Properties dialog within Component Services.
The following describes the effect of each of these impersonation levels:
Anonymous. The server cannot obtain the client's identity.
Identify. The server can obtain the client's identity to perform access checks, but cannot impersonate the client.
Impersonate. The server can impersonate the client to access resources local to the server.
Delegate. The server can impersonate the client to access resources both locally and on remote computers.
For more information, see the "Impersonation" section in Chapter 17, "Securing Your
Application Server" and "How to Enable Kerberos Delegation in Windows 2000" in the
References section of MSDN article, "Building Secure ASP.NET Applications," at
http://msdn.microsoft.com/library/en-us/dnnetsec/html/secnetlpMSDN.asp.
Sensitive Data
If your application transmits sensitive data to and from a serviced component across a network, encrypt the data to address the network eavesdropping threat and to ensure that it remains private and unaltered. You can use transport-level protection with IPSec, or you can use application-level protection by configuring your Enterprise Services application to use the RPC packet privacy authentication level. This encrypts each packet of data sent to and from the serviced component to provide privacy and integrity.
You can configure packet privacy authentication using the Component Services tool or by
adding the following attribute to your serviced component assembly:
[assembly: ApplicationAccessControl(
Authentication = AuthenticationOption.Privacy)]
For more information about using IPSec to encrypt all of the data transmitted between two
computers, see "How To: Use IPSec to Provide Secure Communication Between Two
Servers" in the "How To" section of "Microsoft patterns & practices Volume I, Building
Secure ASP.NET Applications: Authentication, Authorization, and Secure Communication"
at http://msdn.microsoft.com/library/default.asp?url=/library/en-
us/dnnetsec/html/SecNetHT00.asp.
Auditing and Logging
Auditing and logging should be performed across the tiers of your application to avoid
potential repudiation threats where users deny performing certain transactions or key
operations.
To successfully write to the event log, an event source must exist that associates the Enterprise Services application with a specific event log. If you create the event source at run time rather than at installation time, the serviced component process account must have the relevant permissions in the registry.
Task To enable the serviced component process identity to create event sources
Use Regedt32.exe to grant the serviced component process account the following access rights on this registry key:
HKLM\SYSTEM\CurrentControlSet\Services\Eventlog
Create Subkey
Enumerate Subkeys
Notify
Read
An alternate strategy is to use an Installer class and create the event source for the
application at installation time, when administrator privileges are available. For more
information about this approach, see "Auditing and Logging" in Chapter 10 "Building Secure
ASP.NET Web Pages and Controls."
Building a Secure Serviced Component
Having covered the threats and countermeasures applicable to serviced components and
Enterprise Services applications, the following code fragments illustrate the key
characteristics of a secure serviced component for a simple Customer class
implementation. Method implementation details have been omitted for the sake of clarity.
Assembly Implementation
The following code fragment from assemblyinfo.cs shows the assembly level metadata
used to configure the COM+ catalog when the serviced component assembly is registered
with Enterprise Services using regsvcs.exe.
// (1) Assembly has a strong name.
[assembly: AssemblyKeyFile(@"..\..\Customer.snk")]
The code shown above exhibits the following security characteristics (identified by the
numbers in the comment lines).
1. The assembly is strong named. This is a mandatory requirement for serviced
components. The added benefit from a security perspective is that the assembly
is digitally signed. This means that any modification by an attacker will be
detected and the assembly will fail to load.
5. The impersonation level for outgoing calls from this serviced component to other
components on remote servers is set to Identify. This means that the downstream
component can identify the caller but cannot perform impersonation.
The code shown above exhibits the following security characteristics (identified by the
numbers in the comment lines):
1. An interface is defined and implemented explicitly to support interface and method
level authorization with COM+ roles.
2. Component level access checks are enabled for the class by using the
[ComponentAccessControl] attribute at the class level.
5. The code checks whether or not security is enabled prior to the explicit role
check. This is a risk mitigation strategy to ensure that transactions cannot be
performed if the application security configuration is inadvertently or deliberately
disabled by an administrator.
6. Callers must be members of either the Manager or Senior Manager role because
of the declarative security used on the method. For fine-grained authorization, the
role membership of the caller is explicitly checked in code.
8. The audit implementation obtains the identity of the original caller by using the
SecurityCallContext object.
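Pulling these characteristics together, a method body might look like the following sketch. The role names, the amount threshold, and the LogTransfer helper are illustrative assumptions; the numbered comments correspond to the list items above.

```csharp
using System.EnterpriseServices;
using System.Security;

[SecurityRole("Manager"), SecurityRole("Senior Manager")]
public void TransferMoney(decimal amount, string fromAcct, string toAcct)
{
    // (5) Refuse to run if access checks have been disabled.
    if (!ContextUtil.IsSecurityEnabled)
        throw new SecurityException("Application security is not enabled.");

    // (6) Fine-grained check: large transfers need the Senior Manager role.
    if (amount > 1000 && !ContextUtil.IsCallerInRole("Senior Manager"))
        throw new SecurityException("Senior Manager role is required.");

    // (8) Audit using the original caller's identity.
    string caller = SecurityCallContext.CurrentCall.OriginalCaller.AccountName;
    LogTransfer(caller, amount, fromAcct, toAcct);   // assumed audit helper

    // ... perform the transfer
}
```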
Code Access Security Considerations
Applications that use serviced components are usually fully trusted and, as a result, code access security is of limited use for authorizing calling code. However, consider the following points:
Unmanaged code permission is required to activate and perform cross context calls
on serviced components.
If the client of a serviced component is an ASP.NET Web application, then its trust
level must be set to "Full" as shown below.
<trust level="Full" />
If your Web application is configured with a trust level other than "Full," it does not
have the unmanaged code permission. In this instance, you must create a
sandboxed wrapper assembly to encapsulate the communication with the serviced
component. You must also configure code access security policy to grant the
wrapper assembly the unmanaged code permission. For more information about
the sandboxing technique used to encapsulate high privileged code, see Chapter 9,
"Using Code Access Security with ASP.NET."
For more information about applying secure configuration at deployment time, see Chapter
17, "Securing Your Application Server."
Firewall Restrictions
If the client and Enterprise Services application are separated by an internal firewall, the
relevant ports that support DCOM and possibly the DTC (if your application uses distributed
transactions) must be open.
DCOM uses RPC dynamic port allocation that by default randomly selects port numbers
above 1024. In addition, port 135 is used by the RPC endpoint mapper. You can restrict the
ports required to support DCOM on the internal firewall in two ways:
You can define a restricted range of ports for RPC dynamic port allocation and open only that range (together with port 135) on the firewall.
Windows 2000 SP3 (or Quick Fix Engineering [QFE] 18.1 and greater) or Windows
Server 2003 allow you to configure Enterprise Services applications to use a static
endpoint. Static endpoint mapping means that you only need to open two ports in
the firewall. Specifically, you must open port 135 for RPC and a nominated port for
your Enterprise Services application.
For more information about defining port ranges and static endpoint mapping, see "Firewall
Considerations" in Chapter 17, "Securing Your Application Server."
Figure 11.4: Using a Web services façade layer to communicate with Enterprise
Services using HTTP
This approach does not allow you to flow transaction context from client to server, although
in many cases where your deployment architecture includes a middle-tier application server,
it is appropriate to initiate transactions in the remote serviced component on the application
server.
For information about physical deployment requirements for service agents and service
interfaces such as the Web services façade layer, see "Physical Deployment and
Operational Requirements" in the Reference section of the MSDN article, "Application
Architecture for .NET: Designing Applications and Services."
DTC Requirements
If your application uses COM+ distributed transactions that span remote servers
separated by an internal firewall, the firewall must open the necessary ports to
support DTC traffic.
If your deployment architecture includes a remote application tier, transactions are usually
initiated within the Enterprise Services application and propagated to the database server.
In the absence of an application server, the Enterprise Services application on the Web
server initiates the transaction and propagates it to the SQL Server resource manager.
For information about configuring firewalls to support DTC traffic, see Chapter 18,
"Securing Your Database Server."
Summary
Enterprise Services (COM+) security relies on Windows security to authenticate and
authorize callers. Authorization is configured and controlled with COM+ roles that contain
Windows group or user accounts. The majority of threats that relate to Enterprise Services
applications and serviced components can be addressed with solid coding techniques and
appropriate catalog configuration.
The developer should use declarative attributes to set the serviced component security
configuration. These attributes determine how the application is configured when it is initially
registered with Enterprise Services (typically using Regsvcs.exe).
Not every security configuration setting can be set with attributes. An administrator must
specify the run-as identity for a server application. The administrator must also populate
roles with Windows group or user accounts at deployment time.
When you are developing serviced components or are evaluating the security of your
Enterprise Services solution, use "Checklist: Securing Enterprise Services" in the
"Checklists" section of this guide.
Additional Resources
For more information, see the following resources:
Note: The specifications and standards supported by WSE are evolving, and therefore
the current version of WSE is not guaranteed to be compatible with future versions of
the product. At the time of this writing, interoperability testing is under way with
non-Microsoft toolkits provided by vendors including IBM and VeriSign.
How to Use This Chapter
This chapter discusses various practices and techniques to design and build secure Web
services.
Read Chapter 19, "Securing Your ASP.NET Application and Web Services." It
is written for administrators who need to configure an ASP.NET Web application
or Web service, bringing a semi-secure application to a secure state.
Use the "Checklist: Securing Web Services" in the "Checklists" section of this
guide. The checklist is a summary of the security measures required to build and
configure secure Web services.
Use this chapter to understand message level threats and how to counter
those threats.
Unauthorized access
Parameter manipulation
Network eavesdropping
Message replay
Figure 12.1 shows the top threats and attacks directed at Web services.
Unauthorized Access
Web services that provide sensitive or restricted information should authenticate and
authorize their callers. Weak authentication and authorization can be exploited to gain
unauthorized access to sensitive information and operations.
Vulnerabilities
Vulnerabilities that can lead to unauthorized access through a Web service include:
No authentication used
Countermeasures
Use role-based authorization to restrict access to Web services. This can be done
by using URL authorization to control access to the Web service file (.asmx) or at
the Web method level by using principal-permission demands.
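As an illustrative sketch of the second option, a principal-permission demand can be declared on an individual Web method. This assumes that authentication has already populated the current principal; the service class, method, and "Manager" role names below are illustrative, not from this guide.

```csharp
using System.Security.Permissions;
using System.Web.Services;

public class EmployeeService : WebService
{
    // The declarative demand ensures that only authenticated callers
    // in the Manager role can invoke this Web method.
    // "Manager" is an illustrative role name.
    [WebMethod]
    [PrincipalPermission(SecurityAction.Demand, Role = "Manager")]
    public string GetStatus()
    {
        // Executes only if the permission demand succeeds
        return "OK";
    }
}
```

If the caller is not in the demanded role, the runtime throws a SecurityException before the method body runs.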
Parameter Manipulation
Parameter manipulation refers to the unauthorized modification of data sent between the
Web service consumer and the Web service. For example, an attacker can intercept a Web
service message, perhaps as it passes through an intermediate node en route to its
destination, and then modify it before sending it on to its intended endpoint.
Vulnerabilities
Countermeasures
Digitally sign the message. The digital signature is used at the recipient end to
verify that the message has not been tampered with while it was in transit.
Network Eavesdropping
With network eavesdropping, an attacker is able to view Web service messages as they
flow across the network. For example, an attacker can use network monitoring software to
retrieve sensitive data contained in a SOAP message. This might include sensitive
application level data or credential information.
Vulnerabilities
Countermeasures
You can use the following countermeasures to protect sensitive SOAP messages as they
flow across the network:
Use transport level encryption such as SSL or IPSec. This is applicable only if you
control both endpoints.
Encrypt the message payload to provide privacy. This approach works in scenarios
where your message travels through intermediary nodes en route to the final
destination.
Note: WSDL describes the characteristics of a Web service, for example, its method
signatures and supported protocols.
Second, with inadequate exception handling the Web service may disclose sensitive internal
implementation details useful to an attacker.
Vulnerabilities
Unrestricted WSDL files available for download from the Web server
A restricted Web service supports the dynamic generation of WSDL and allows
unauthorized consumers to obtain Web service characteristics
Countermeasures
You can use the following countermeasures to prevent the unwanted disclosure of
configuration data:
Authorize access to WSDL files using NTFS permissions.
Message Replay
Web service messages can potentially travel through multiple intermediate servers. With a
message replay attack, an attacker captures and copies a message and replays it to the
Web service impersonating the client. The message may or may not be modified.
Vulnerabilities
Attacks
Basic replay attack. The attacker captures and copies a message, and then
replays the same message and impersonates the client. This replay attack does not
require the malicious user to know the contents of the message.
Man in the middle attack. The attacker captures the message and then changes
some of its contents, for example, a shipping address, and then replays it to the
Web service.
Countermeasures
You can use the following countermeasures to address the threat of message replay:
When the server responds to the client it sends a unique ID and signs the message,
including the ID. When the client makes another request, the client includes the ID
with the message. The server ensures that the ID sent to the client in the previous
message is included in the new request from the client. If it is different, the server
rejects the request and assumes it is subject to a replay attack.
The attacker cannot spoof the message ID, because the message is signed. Note
that this only protects the server from client-initiated replay attacks using the
message request, and offers the client no protection against replayed responses.
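The server-side bookkeeping described above might be sketched as follows. The MessageIdTracker class and its method names are hypothetical, and a production version would also need to expire old entries and persist them across process restarts.

```csharp
using System;
using System.Collections;

// Hypothetical helper that tracks the unique ID last issued to each
// client, so a replayed request carrying a stale ID can be rejected.
public class MessageIdTracker
{
    private Hashtable issuedIds = Hashtable.Synchronized(new Hashtable());

    // Called when the server responds: record the ID sent to this client.
    public string IssueId(string clientName)
    {
        string id = Guid.NewGuid().ToString();
        issuedIds[clientName] = id;
        return id;
    }

    // Called on the next request: the presented ID must match the one
    // last issued to this client.
    public bool Validate(string clientName, string presentedId)
    {
        string expected = issuedIds[clientName] as string;
        return expected != null && expected == presentedId;
    }
}
```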
Design Considerations
Before you start to develop Web services, there are a number of issues to consider at
design time. The key security considerations are:
Authentication requirements
Authentication Requirements
The alternative is to use transport level encryption through SSL or IPSec channels. These
solutions are only appropriate where you are in control of both endpoints.
Note: On Windows Server 2003, the Network Service account is used by default to run
Web services.
For more information about using the ASP.NET process account for remote database
access, see the "Data Access" section in Chapter 19, "Securing Your ASP.NET Application
and Web Services."
If you use impersonation, the issues and considerations that apply to Web applications also
apply to Web services. For more information, see the "Impersonation" sections in Chapter
10, "Building Secure ASP.NET Pages and Controls" and Chapter 19, "Securing Your
ASP.NET Application and Web Services."
Also, if you call a Web service from an ASP.NET Web application, the Web application's
trust level determines the range of Web services it can call. For example, a Web application
configured for Medium trust, by default, can only call Web services on the local computer.
For more information about calling Web services from Medium and other partial trust Web
applications, see Chapter 9, "Using Code Access Security with ASP.NET."
Input Validation
Like any application that accepts input data, Web services must validate the data that is
passed to them to enforce business rules and to prevent potential security issues. Web
methods marked with the WebMethod attribute are the Web service entry points. Web
methods can accept strongly typed input parameters or loosely typed parameters that are
often passed as string data. This is usually determined by the range and type of consumers
for which the Web service is designed.
In the preceding example, the .NET Framework type system performs type checks
automatically. To validate the range of characters that are supplied through the name field,
you can use a regular expression. For example, the following code shows how to use the
System.Text.RegularExpressions.Regex class to constrain the possible range of input
characters and also to validate the parameter length.
if (!Regex.IsMatch(name, @"^[a-zA-Z'.\s]{1,40}$"))
{
// Invalid name
}
For more information about regular expressions, see the "Input Validation" section in
Chapter 10, "Building Secure ASP.NET Pages and Controls." The following example shows
a Web method that accepts a custom Employee data type.
using Employees; // Custom namespace
[WebMethod]
The consumer needs to know the XSD schema to be able to call your Web service. If the
consumer is a .NET Framework client application, the consumer can simply pass an
Employee object as follows:
using Employees;
Employee emp = new Employee();
// Populate Employee fields
// Send Employee to the Web service
wsProxy.CreateEmployee(emp);
Consumer applications that are not based on the .NET Framework must construct the XML
input manually, based on the schema definition provided by the organization responsible for
the Web service.
The benefit of this strong typing approach is that the .NET Framework parses the input data
for you and validates it based on the type definition. However, inside the Web method you
might still need to constrain the input data. For example, while the type system confirms a
valid Employee object, you might still need to perform further validation on the Employee
fields. You might need to validate that an employee's date of birth is greater than 18 years
ago. You might need to use regular expressions to constrain the range of characters that
can be used in name fields, and so on.
For more information about constraining input, see the "Input Validation" section in Chapter
10, "Building Secure ASP.NET Pages and Controls."
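Such field-level checks might be sketched as follows; this is an illustrative helper, and the Name and DateOfBirth values are assumed to come from fields of the custom Employee type.

```csharp
using System;
using System.Text.RegularExpressions;

public class EmployeeValidator
{
    // Illustrative field-level checks applied inside a Web method after
    // the type system has confirmed a valid Employee object.
    public static bool IsValidEmployee(string name, DateTime dateOfBirth)
    {
        // Constrain the characters and length of the name field
        if (!Regex.IsMatch(name, @"^[a-zA-Z'.\s]{1,40}$"))
            return false;

        // Require the employee to be at least 18 years old
        if (dateOfBirth > DateTime.Today.AddYears(-18))
            return false;

        return true;
    }
}
```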
XML Data
In a classic business-to-business scenario, it is common for consumers to pass XML data
that represents business documents such as purchase orders or sales invoices. The validity
of the input data must be programmatically validated by the Web method before it is
processed or passed to downstream components.
The client and the server have to establish and agree on a schema that describes the XML.
The following code fragment shows how a Web method can use the
System.Xml.XmlValidatingReader class to validate the input data, which, in this example,
describes a simple book order. Notice that the XML data is passed through a simple string
parameter.
using System.Xml;
using System.Xml.Schema;
[WebMethod]
public void OrderBooks(string xmlBookData)
{
try
{
// Create and load a validating reader
XmlValidatingReader reader = new XmlValidatingReader(xmlBookData,
                                                     XmlNodeType.Document,
                                                     null);
// Attach the XSD schema to the reader
reader.Schemas.Add("urn:bookstore-schema",
@"http://localhost/WSBooks/bookschema.xsd");
// Set the validation type for XSD schema.
// XDR schemas and DTDs are also supported
reader.ValidationType = ValidationType.Schema;
// Create and register an event handler to handle validation errors
reader.ValidationEventHandler += new ValidationEventHandler(
                                       ValidationErrors);
// Process the input data
while (reader.Read())
{
. . .
}
// Validation completed successfully
}
catch
{
. . .
}
}
The following fragment shows how the consumer calls the preceding Web method:
string xmlBookData = "<book xmlns='urn:bookstore-schema' " +
            "xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'>" +
            "<title>Building Secure ASP.NET Applications</title>" +
            "<isbn>0735618909</isbn>" +
            "<orderQuantity>1</orderQuantity>" +
            "</book>";
BookStore.BookService bookService = new BookStore.BookService();
bookService.OrderBooks(xmlBookData);
The preceding example uses the following simple XSD schema to validate the input data.
<?xml version="1.0" encoding="utf-8" ?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns="urn:bookstore-schema"
elementFormDefault="qualified"
targetNamespace="urn:bookstore-schema">
<xsd:element name="book" type="bookData"/>
<xsd:complexType name="bookData">
<xsd:sequence>
<xsd:element name="title" type="xsd:string" />
<xsd:element name="isbn" type="xsd:integer" />
<xsd:element name="orderQuantity" type="xsd:integer"/>
</xsd:sequence>
</xsd:complexType>
</xsd:schema>
The following table shows additional complex element definitions that can be used in an
XSD schema to further constrain individual XML elements.
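As one illustrative example of this technique (not taken from the table), an xsd:restriction can constrain the length and character range of a string element such as the book title:

```xml
<xsd:simpleType name="titleType">
  <xsd:restriction base="xsd:string">
    <xsd:maxLength value="100" />
    <xsd:pattern value="[a-zA-Z0-9'.,:\s]{1,100}" />
  </xsd:restriction>
</xsd:simpleType>
```

Input that exceeds the length limit or contains characters outside the pattern then fails schema validation before your Web method processes it.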
307379, "How To: Validate an XML Document by Using DTD, XDR, or XSD in Visual
C# .NET."
318504, "How To: Validate XML Fragments Against an XML Schema in Visual
C#.NET."
SQL Injection
SQL injection allows an attacker to execute arbitrary commands in the database using the
Web service's database login. SQL injection is a potential issue for Web services if the
services use input data to construct SQL queries. If your Web methods access the
database, they should do so using SQL parameters and ideally, parameterized stored
procedures. SQL parameters validate the input for type and length, and they ensure that
the input is treated as literal text and not executable code. For more information about this
and other SQL injection countermeasures, see the "Input Validation" section in Chapter 14,
"Building Secure Data Access."
Cross-Site Scripting
With cross-site scripting (XSS), an attacker exploits your application to execute malicious
script at the client. If you call a Web service from a Web application and send the output
from the Web service back to the client in an HTML data stream, XSS is a potential issue.
In this scenario, you should encode the output received from the Web service in the Web
application before returning it to the client. This is particularly important if you do not own
the Web service and it falls outside the Web application's trust boundary. For more
information about XSS countermeasures, see the "Input Validation" section in Chapter 10,
"Building Secure ASP.NET Pages and Controls."
Authentication
If your Web service outputs sensitive, restricted data or if it provides restricted services, it
needs to authenticate callers. A number of authentication schemes are available and these
can be broadly divided into three categories:
If you are in control of both endpoints and both endpoints are in the same or trusting
domains, you can use Windows authentication to authenticate callers.
Basic Authentication
You can use IIS to configure your Web service's virtual directory for Basic authentication.
With this approach, the consumer must configure the proxy and provide credentials in the
form of a user name and password. The proxy then transmits them with each Web service
request through that proxy. The credentials are transmitted in plaintext and therefore you
should only use Basic authentication with SSL.
The following code fragment shows how a Web application can extract Basic authentication
credentials supplied by an end user and then use those to invoke a downstream Web
service configured for Basic authentication in IIS.
// Retrieve client's credentials (available with Basic authentication)
string pwd = Request.ServerVariables["AUTH_PASSWORD"];
string uid = Request.ServerVariables["AUTH_USER"];
// Set the credentials
CredentialCache cache = new CredentialCache();
cache.Add( new Uri(proxy.Url), // Web service URL
"Basic",
new NetworkCredential(uid, pwd, domain) );
proxy.Credentials = cache;
To call a Web service configured for Integrated Windows authentication, the consumer must
explicitly configure the Credentials property on the proxy.
To flow the security context of the client's Windows security context (either from an
impersonating thread token or process token) to a Web service you can set the
Credentials property of the Web service proxy to CredentialCache.DefaultCredentials as
follows.
proxy.Credentials = System.Net.CredentialCache.DefaultCredentials;
If you need to specify explicit credentials, do not hard code them or store them in plaintext.
Encrypt account credentials by using DPAPI and store the encrypted data either in an
<appSettings> element in Web.config or beneath a restricted registry key.
For more information about platform level authentication, see the "Web Services Security"
section in "Microsoft patterns & practices Volume I, Building Secure ASP.NET Applications:
Authentication, Authorization, and Secure Communication" at
http://msdn.microsoft.com/library/en-us/dnnetsec/html/secnetlpMSDN.asp?frame=true.
You can use WSE to implement a message level authentication solution that conforms to
the emerging WS-Security standard. This approach allows you to pass authentication
tokens in a standard way by using SOAP headers.
Note: When two parties agree to use WS-Security, the precise format of the
authentication token must also be agreed upon.
The following types of authentication token can be used and are supported by WSE:
Kerberos ticket
X.509 certificate
Custom token
User Name and Password
You can send user names and password credentials in the SOAP header. However,
because these are sent in plaintext, this approach should only be used in conjunction with
SSL due to the network eavesdropping threat. The credentials are sent as part of the
<Security> element, in the SOAP header as follows.
<wsse:Security
xmlns:wsse="http://schemas.xmlsoap.org/ws/2002/12/secext">
<wsse:UsernameToken>
<wsse:Username>Bob</wsse:Username>
<wsse:Password>YourStr0ngPassWord</wsse:Password>
</wsse:UsernameToken>
</wsse:Security>
Instead of sending a plaintext password, you can send a password digest. The digest is a
Base64-encoded SHA1 hash value of the UTF8-encoded password. However, unless this
approach is used over a secure channel, the data can still be intercepted by attackers
armed with network monitoring software and reused to gain authenticated access to your
Web service. To help address this replay attack threat, a nonce and a creation timestamp
can be combined with the digest.
With this approach the digest is a SHA1 hash of a nonce value, a creation timestamp, and
the password as follows.
digest = SHA1(nonce + creation timestamp + password)
With this approach, the Web service must maintain a table of nonce values and reject any
message that contains a duplicate nonce value. While the approach helps protect the
password and offers a basis for preventing replay attacks, it suffers from clock
synchronization issues between the consumer and provider when calculating an expiration
time, and it does not prevent an attacker capturing a message, modifying the nonce value,
and then replaying the message to the Web service. To address this threat, the message
must be digitally signed. With the WSE, you can sign a message using a custom token or
an X.509 certificate. This provides tamperproofing and authentication, based on a
public/private key pair.
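The digest calculation described above can be sketched as follows. WSE performs this for you when you send hashed passwords; the helper class below is illustrative only.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public class DigestExample
{
    // digest = Base64( SHA1( nonce + creation timestamp + password ) )
    public static string ComputeDigest(byte[] nonce, string created, string password)
    {
        byte[] createdBytes = Encoding.UTF8.GetBytes(created);
        byte[] passwordBytes = Encoding.UTF8.GetBytes(password);

        // Concatenate nonce + created + password into one buffer
        byte[] input = new byte[nonce.Length + createdBytes.Length + passwordBytes.Length];
        Buffer.BlockCopy(nonce, 0, input, 0, nonce.Length);
        Buffer.BlockCopy(createdBytes, 0, input, nonce.Length, createdBytes.Length);
        Buffer.BlockCopy(passwordBytes, 0, input,
                         nonce.Length + createdBytes.Length, passwordBytes.Length);

        // SHA1 produces a 20-byte hash; Base64 encoding yields 28 characters
        byte[] hash = new SHA1Managed().ComputeHash(input);
        return Convert.ToBase64String(hash);
    }
}
```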
Kerberos Tickets
You can send a security token that contains a Kerberos ticket as follows.
<wsse:Security
xmlns:wsse="http://schemas.xmlsoap.org/ws/2002/12/secext">
<wsse:BinarySecurityToken
ValueType="wsse:Kerberosv5ST"
EncodingType="wsse:Base64Binary">
U87GGH91TT ...
</wsse:BinarySecurityToken>
</wsse:Security>
X.509 Certificates
You can also provide authentication by sending an X.509 certificate as an authentication
token.
<wsse:Security
xmlns:wsse="http://schemas.xmlsoap.org/ws/2002/12/secext">
<wsse:BinarySecurityToken
ValueType="wsse:X509v3"
EncodingType="wsse:Base64Binary">
Hg6GHjis1 ...
</wsse:BinarySecurityToken>
</wsse:Security>
For more information about the above approaches, see the samples that ship with WSE.
Regardless of the authentication type, you can use the ASP.NET UrlAuthorizationModule
to control access to Web service (.asmx) files. You configure this by adding <allow> and
<deny> elements to the <authorization> element in Machine.config or Web.config.
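For example, the following illustrative fragment (the .asmx file name and role name are assumed for the example) allows only members of a Manager role to access a specific Web service file:

```xml
<location path="EmployeeService.asmx">
  <system.web>
    <authorization>
      <allow roles="Manager" />
      <deny users="*" />
    </authorization>
  </system.web>
</location>
```

With this configuration, requests from callers outside the Manager role are rejected by the UrlAuthorizationModule before any Web service code runs.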
For more information about both forms of authorization, see the "Authorization" section in
Chapter 19, "Securing Your ASP.NET Application and Web Services."
For more information about principal permission demands, see the "Authorization" section in
Chapter 10, "Building Secure ASP.NET Pages and Controls."
Programmatic Authorization
You can use imperative permission checks or explicit role checks by calling
IPrincipal.IsInRole inside your Web methods for fine-grained authorization logic as follows.
// This assumes non-Windows authentication. With Windows authentication,
// cast the User object to a WindowsPrincipal and use Windows groups as
// role names
GenericPrincipal user = User as GenericPrincipal;
if (null != user)
{
if ( user.IsInRole(@"Manager") )
{
// User is authorized to perform manager functionality
}
}
Sensitive Data
The threats of network eavesdropping or information disclosure at intermediate application
nodes must be addressed if your Web service request or response messages convey
sensitive application data, for example, credit card numbers, employee details, and so on.
In a closed environment where you are in control of both endpoints, you can use SSL or
IPSec to provide transport layer encryption. In other environments and where messages
are routed through intermediate application nodes, a message level solution is required.
The WS-Security standard defines a confidentiality service based on the World Wide Web
Consortium (W3C) XML Encryption standard that allows you to encrypt some or all of a
SOAP message before it is transmitted.
XML Encryption
You can encrypt all or part of a SOAP message in three different ways:
The Web service must be able to access the associated private key. By default, WSE
searches for X.509 certificates in the local machine store. You can use the <x509>
configuration element in Web.config to set the store location to the current user store as
follows.
<configuration>
<microsoft.web.services>
<security>
<x509 storeLocation="CurrentUser" />
</security>
</microsoft.web.services>
</configuration>
If you use the user store, the user profile of the Web service's process account must be
loaded. If you run your Web service using the default ASPNET least privileged local
account, version 1.1 of the .NET Framework loads the user profile for this account, which
makes the user key store accessible.
For Web services built using version 1.0 of the .NET Framework, the ASPNET user profile
is not loaded. In this scenario, you have two options.
Run your Web service using a custom least privileged account with which you have
previously interactively logged on to the Web server to create a user profile.
Store the key in the local machine store and grant access to your Web service
process account. On Windows 2000, this is the ASPNET account by default. On
Windows Server 2003, it is the Network Service account by default.
To grant access, use Windows Explorer to configure an ACL on the following folder
that grants full control to the Web service process account.
\Documents and Settings\All Users\Application Data\Microsoft\Crypto
For more information, see the "Managing X.509 Certificates," "Encrypting a SOAP
Message Using an X.509 Certificate," and "Decrypting a SOAP Message Using an X.509
Certificate" sections in the WSE documentation.
For more information, see the "Encrypting a SOAP Message Using a Shared Key" and
"Decrypting a SOAP Message Using a Shared Key" sections in the WSE documentation.
For more information, see the "Encrypting a SOAP Message Using a Custom Binary
Security Token" and "Decrypting a SOAP Message Using a Custom Binary Security Token"
sections in the WSE documentation.
For more information, see the "Specifying the Parts of a SOAP Message that are Signed or
Encrypted" section in the WSE documentation.
Parameter Manipulation
Parameter manipulation in relation to Web services refers to the threat of an attacker
altering the message payload in some way while the message request or response is in
transit between the consumer and service.
To address this threat, you can digitally sign a SOAP message to allow the message
recipient to cryptographically verify that the message has not been altered since it was
signed. For more information, see the "Digitally Signing a SOAP Message" section in the
WSE documentation.
Exception Management
Exception details returned to the consumer should only contain minimal levels of information
and not expose any internal implementation details. For example, consider the following
system exception that has been allowed to propagate to the consumer.
System.Exception: User not in managers role
at EmployeeService.employee.GiveBonus(Int32 empID,
Int32 percentage) in c:\inetpub\wwwroot\employeesystem\employee.asmx
The exception details shown above reveal directory structure and other details to the
service consumer. This information can be used by a malicious user to footprint the virtual
directory path and can assist with further attacks.
SoapException objects
These can be generated by the CLR or by your Web method implementation code.
SoapHeaderException objects
These are generated automatically when the consumer sends a SOAP request that
the service fails to process correctly.
Exception objects
A Web service can throw a custom exception type that derives from
System.Exception. The precise exception type is specific to the error condition.
For example, it might be one of the standard .NET Framework exception types
such as DivideByZeroException, or ArgumentOutOfRangeException and so on.
Regardless of the exception type, the exception details are propagated to the client using
the standard SOAP <Fault> element. Clients and Web services built with ASP.NET do not
parse the <Fault> element directly but instead deal consistently with SoapException
objects. This allows the client to set up try blocks that catch SoapException objects.
Using SoapExceptions
The following code shows a simple WebMethod, where the validation of application logic
fails and, as a result, an exception is generated. The error information sent to the client is
minimal. In this sample, the client is provided with a help desk reference that can be used to
call support. At the Web server, a detailed error description for the help desk reference is
logged to aid problem diagnosis.
using System.Xml;
using System.Security.Principal;
[WebMethod]
public void GiveBonus(int empID, int percentage)
{
// Only managers can give bonuses
// This example uses Windows authentication
WindowsPrincipal wp = (HttpContext.Current.User as WindowsPrincipal);
if( wp.IsInRole(@"Domain\Managers"))
{
// User is authorized to give bonus
. . .
}
else
{
    // Log error details on the server. For example:
    // "DOMAIN\Bob tried to give bonus to Employee Id 345667;
    // Access denied because DOMAIN\Bob is not a manager."
    // Note: User name is available from wp.Identity.Name
    . . .
    // Return only minimal error information to the client, such as
    // a help desk reference (the code shown is illustrative)
    throw new SoapException("Access denied. Help desk code: 1456",
                            SoapException.ClientFaultCode);
  }
}
ASP.NET Web applications commonly handle application level exceptions that are allowed
to propagate beyond a method boundary in the Application_Error event handler in
Global.asax. This feature is not available to Web services, because the Web service's
HttpHandler captures the exception before it reaches other handlers.
If you need application level exception handling, create a custom SOAP extension to handle
it. For more information, see the MSDN article, "Altering the SOAP Message Using SOAP
Extensions" in the "Building Applications" section of the .NET Framework SDK at
http://www.microsoft.com/downloads/details.aspx?FamilyID=9b3a2ca6-3647-4070-9f41-
a333c6b9181d&DisplayLang=en.
Auditing and Logging
With a Web service, you can audit and log activity details and transactions either by using
platform-level features or by using custom code in your Web method implementations.
You can develop code that uses the System.Diagnostics.EventLog class to log actions to
the Windows event log. The permission requirements and techniques for using this class
from a Web service are the same as for a Web application. For more information, see the
"Auditing and Logging" section in Chapter 10, "Building Secure ASP.NET Pages and
Controls."
Proxy Considerations
If you use WSDL to automatically generate a proxy class to communicate with a Web
service, you should verify the generated code and service endpoints to ensure that you
communicate with the desired Web service and not a spoofed service. If the WSDL files on
a remote server are inadequately secured, it is possible for a malicious user to tamper with
the files and change endpoint addresses, which can impact the proxy code that you
generate.
Specifically, examine the <soap:address> element in the .wsdl file and verify that it points
to the expected location. If you use Visual Studio .NET to add a Web reference by using the
Add Web Reference dialog box, scroll down and check the service endpoints.
Finally, whether you use Visual Studio .NET to add a Web reference or manually generate
the proxy code using Wsdl.exe, closely inspect the proxy code and look for any suspicious
code.
You can set the URL Behavior property of the Web service proxy to Dynamic,
Note
which allows you to specify endpoint addresses in Web.config.
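With URL Behavior set to Dynamic, the generated proxy reads its endpoint from the <appSettings> section of configuration; the resulting entry resembles the following. Visual Studio .NET generates the key name from your project and Web reference names, so the key and URL shown here are illustrative.

```xml
<configuration>
  <appSettings>
    <add key="MyApp.WebReference.BookService"
         value="http://localhost/WSBooks/BookService.asmx" />
  </appSettings>
</configuration>
```

An administrator can then retarget the proxy at deployment time by editing Web.config, without regenerating the proxy code.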
Code Access Security Considerations
Code access security can limit the resources that can be accessed and the operations that
can be performed by your Web service code. An ASP.NET Web service is subject to
ASP.NET code access security policy, configured by the Web service's <trust> element.
.NET Framework consumer code that calls a Web service must be granted the
WebPermission by code access security policy. The precise state of the WebPermission
determines the range of Web services that can be called. For example, it can constrain
your code so that it can only call local Web services or services on a specified server.
If the consumer code has full trust, it is granted the unrestricted WebPermission, which
allows it to call any Web service. Partial trust consumer code is subject to the following
limitations:
If you call a Web service from a Medium trust Web application, by default you can
only access local Web services.
Consumer code that uses the WSE classes must be granted full trust. For example,
if your Web service proxy classes derive from
Microsoft.Web.Services.WebServicesClientProtocol, which is provided by the
WSE, full trust is required. To use WSE from a partial trust Web application, you
must sandbox calls to the Web service.
For more information about calling Web services from partial trust Web applications, see
Chapter 9, "Using Code Access Security with ASP.NET." For more information about
WebPermission, see the "Web Services" section in Chapter 8, "Code Access Security in
Practice."
Deployment Considerations
The range of security options available to you depends greatly on the specific deployment
scenarios your Web services attempt to cover. If you build applications that consume Web
services in an intranet, then you have the widest range of security options and techniques at
your disposal. If, however, your Web service is publicly accessible over the Internet, your
options are far more limited. This section describes the implications of different deployment
scenarios on the applicability of the approaches to securing Web services discussed
previously in this chapter.
Intranet Deployment
Because you control the consumer application, the service, and the platform, intranets
usually provide the widest range of available options for securing Web services.
With an intranet scenario, you can usually choose from the full range of authentication and
secure communication options. For example, you might decide to use Windows
authentication if the consumer and service are in the same or trusting domains. You can
specify that client application developers set the credentials property on the client proxy to
flow the user's Windows credentials to the Web service.
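A sketch of setting the credentials property on a generated proxy so that the
caller's Windows credentials flow to the Web service (the proxy class name is an
assumption):

```csharp
using System.Net;

// MyServiceProxy stands in for the Wsdl.exe- or Visual Studio-generated
// proxy class derived from SoapHttpClientProtocol
MyServiceProxy proxy = new MyServiceProxy();

// Flow the current user's Windows credentials to the Web service
proxy.Credentials = CredentialCache.DefaultCredentials;
```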
Intranet communication is often over a private network, with some degree of security. If this
is insufficient, you might decide to encrypt traffic by using SSL. You can also use message
level security and install WSE on both the client and server to handle security at both ends
transparently to the application. WSE supports authentication, digital signatures, and
encryption.
Extranet Deployment
In an extranet scenario, you may need to expose your Web service over the Internet to a
limited number of partners. The user community is still known, predictable, and possibly
uses managed client applications, although they come from separate, independent
environments. In this situation, you need an authentication mechanism that is suitable for
both parties and does not rely on trusted domains.
You can use Basic authentication if you make account information available to both parties.
If you use Basic authentication, make sure that you secure the credentials by using SSL.
Note: SSL only protects credentials over the network. It does not protect them in
situations where a malicious user successfully installs a proxy tool (such as
sslproxy) local to the client machine to intercept the call before forwarding it to
the Web service over SSL.
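A sketch of supplying explicit credentials for Basic authentication over SSL; the
proxy class name and endpoint URL are assumptions:

```csharp
using System.Net;

// MyServiceProxy stands in for the generated proxy class
MyServiceProxy proxy = new MyServiceProxy();
proxy.Url = "https://partner.example.com/Service.asmx"; // SSL endpoint
proxy.Credentials = new NetworkCredential("username", "password");
proxy.PreAuthenticate = true; // Send credentials with the first request
```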
As an alternate option for use with an extranet, you can use IIS client certificate
authentication instead of passing explicit credentials. In this case, the calling application
must present a valid certificate with the call. The Web service uses the certificate to
authenticate the caller and authorize the operation. For more information, see the "Extranet
Security" section in MSDN article, "Building Secure ASP.NET Applications" at
http://msdn.microsoft.com/library/en-us/dnnetsec/html/SecNetch06.asp.
Internet Deployment
If you expose your Web service to a large number of Internet consumers and require
authentication, the options available to you are substantially constrained. Any form of
platform level authentication is unlikely to be suitable, because the consumers will not
have proper domain accounts to which they can map their credentials. The use of IIS client
certificate authentication at the transport (SSL) level is also problematic when a large
number of client certificates must be made known to the target IIS Web server (or the ISA
Server in front of it). This leaves message-level and application-level authentication and
authorization as the most likely choice. Credentials passed by the consumer of the service
(in the form of a user name, password, certificate, Kerberos ticket, or custom token) can be
validated transparently by the Web services infrastructure (WSE) or programmatically
inside the target service. Client certificates are difficult to manage at scale; key
management (issuing and revoking) becomes an issue. Also, certificate-based
authentication is resource intensive and is therefore subject to scalability issues with a
large number of clients.
SSL usually provides encryption of the network traffic (server-side certificate only), but it
can also be supplemented by message-level encryption.
Using client certificates, while advantageous from a security point of view, often becomes
problematic for large numbers of users. You must carefully manage the certificates and
consider how they should be delivered to clients, renewed, revoked, and so on. Another
potential issue in Internet scenarios is the overall scalability of the solution, due to the
processing overhead of the encryption/decryption and certificate validation for a large-scale
Web service with a significant workload.
Summary
WS-Security is the emerging standard for Web services security. The specification defines
options for authentication by passing security tokens in a standard way using SOAP
headers. Tokens can include user name and password credentials, Kerberos tickets, X.509
certificates, or custom tokens. WS-Security also addresses message privacy and integrity
issues. You can encrypt whole or partial messages to provide privacy, and digitally sign
them to provide integrity.
In intranet scenarios, where you are in control of both endpoints, platform level security
options such as Windows authentication, can be used. For more complex scenarios where
you do not control both endpoints and where messages are routed through intermediate
application nodes, message level solutions are required. The following section, "Additional
References," lists the Web sites you can use to track the emerging WS-Security standard
and the associated WSE tool kit that allows you to build solutions that conform to this and
other emerging Web service standards.
Additional Resources
For more information, see the following resources:
For a printable checklist, see "Checklist: Securing Web Services" in the "Checklists"
section of this guide.
You can download the WSE at the Microsoft Web Services Developer Center home
page at http://msdn.microsoft.com/webservices.
For articles specific to Web Services security, see the MSDN articles at
http://msdn.microsoft.com/webservices/building/security/default.aspx.
For articles specific to Web Services Enhancements, see the MSDN articles at
http://msdn.microsoft.com/webservices/building/wse/default.aspx.
For information on using SSL with Web Services, see "How to Call a Web Service
Using SSL" in the "How To" section of "Microsoft patterns & practices Volume I,
Building Secure ASP.NET Applications: Authentication, Authorization, and Secure
Communication" at http://msdn.microsoft.com/library/en-
us/dnnetsec/html/SecNetHT14.asp.
For information on using client certificates with Web Services, see MSDN article,
"How To: Call a Web Service Using Client Certificates from ASP.NET" in the "How
To" section of "Microsoft patterns & practices Volume I, Building Secure ASP.NET
Applications: Authentication, Authorization, and Secure Communication" at
http://msdn.microsoft.com/library/en-us/dnnetsec/html/SecNetHT13.asp.
For information on XML Encryption, see the W3C XML Encryption Working Group
at http://www.w3.org/Encryption/2001/.
Chapter 13: Building Secure Remoted Components
In This Chapter
Authenticating and authorizing callers
If performance is an issue, you might decide to use a custom host with the TcpChannel.
You should only do so in trusted subsystem scenarios, where the range of possible callers
is carefully controlled through out-of-band techniques such as the use of IPSec policies,
which only allow communication from specified Web servers. With the TcpChannel, you
must build your own authentication and authorization mechanisms. This is contrary to the
principle of using tried and tested platform level security services, and requires significant
development effort.
This chapter gives recommendations and guidance to help you build secure remote
components. This includes components that use ASP.NET and the HttpChannel, and those
that use custom executables and the TcpChannel. The typical deployment pattern
assumed by this chapter is shown in Figure 13.1, where remote objects are located on a
middle-tier application server and process requests from ASP.NET Web application clients,
and also Windows applications deployed inside the enterprise.
In this common scenario, the remote component services requests from front-end Web
applications. In this case, ASP.NET on the Web server handles the authentication and
authorization of callers. In addition, middle-tier remote components are often accessed by
Enterprise Windows applications.
How to Use This Chapter
This chapter discusses various techniques to design and build secure components that you
communicate with using the .NET Framework remoting technology.
Unauthorized access
Network eavesdropping
Parameter manipulation
Serialization
Unauthorized Access
Remote components that provide sensitive or restricted information should authenticate and
authorize their callers to prevent unauthorized access. Weak authentication and
authorization can be exploited to gain unauthorized access to sensitive information and
operations.
Vulnerabilities
Vulnerabilities that make your remoting solution susceptible to unauthorized access include:
No IPSec policies to restrict which computers can communicate with the middle-tier
application server that hosts the remote components
No role-based authorization
Countermeasures
Countermeasures that may be implemented to prevent unauthorized access include:
Ensure that the front-end Web application authenticates and authorizes clients, and
that communication to middle-tier application servers is restricted by using IPSec
policies. These measures ensure that only the Web server can access the middle-
tier application server directly.
Do not trust IPrincipal objects passed from the client unless the client is trusted.
This is generally only the case if IPSec is used to limit the range of client
computers.
Network Eavesdropping
With network eavesdropping, an attacker is able to view request and response messages
as they flow across the network to and from the remote component. For example, an
attacker can use network monitoring software to retrieve sensitive data. This might include
sensitive application level data or credential information.
Vulnerabilities
Vulnerabilities that can lead to security compromises from network eavesdropping include:
Countermeasures
Countermeasures that may be implemented to prevent successful network eavesdropping
attacks include:
Use transport level encryption such as SSL or IPSec. The use of SSL requires you
to use an ASP.NET host and the HttpChannel. IPSec can be used with custom
hosts and the TcpChannel.
Encrypt the request at the application level to provide privacy. For example, you
could create a custom encryption sink to encrypt part of, or the entire, message
payload.
Parameter Manipulation
Parameter manipulation refers to the unauthorized modification of data sent between the
client and remote component. For example, an attacker can manipulate the request
message destined for the remote component by intercepting the message while it is in
transit.
Vulnerabilities
Vulnerabilities that can lead to parameter manipulation include:
Countermeasures
Countermeasures that may be implemented to prevent successful parameter manipulation
include:
Digitally sign the message. The digital signature is used at the recipient end to
verify that the message has not been tampered with in transit.
Serialization
Serialization is the process of converting an object's internal state to a flat stream of bytes.
The remoting infrastructure uses the serialization services of the .NET Framework to pass
objects between client and server. It is possible for malicious code to inject a serialized
data stream to your server in order to coerce it into performing unintended actions. For
example, malicious client-side code can initialize an object that, when de-serialized on the
server, causes the server to consume server resources or execute malicious code.
Vulnerabilities
The main vulnerability that can lead to successful serialization attacks stems from the fact
that the server trusts the serialized data stream and fails to validate the data retrieved from
the stream.
Countermeasures
The countermeasure that prevents successful serialization attacks is to validate each item
of data as it is deserialized on the server. Validate each field for type, length, format, and
range.
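A validation sketch for a deserialized parameter; the type and field names are
assumptions introduced for illustration:

```csharp
using System;
using System.Text.RegularExpressions;

public class OrderValidator
{
    // Validate a hypothetical deserialized Order parameter for
    // type, length, format, and range before using it
    public void ValidateOrder(Order order)
    {
        if (order == null)
            throw new ArgumentNullException("order");

        // Range check on a numeric field
        if (order.Quantity < 1 || order.Quantity > 100)
            throw new ArgumentOutOfRangeException("order",
                "Quantity is out of the accepted range.");

        // Length and format check on a string field
        if (order.CustomerId == null ||
            !Regex.IsMatch(order.CustomerId, @"^[A-Z]{5}$"))
            throw new ArgumentException("Invalid CustomerId format.");
    }
}
```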
Design Considerations
Before you begin to develop remote components, there are a number of issues to consider
at design time. The key security considerations are:
If you use the TcpChannel with a custom host process for performance reasons,
remember that no built-in authentication services exist.
For this reason, you should only use the TcpChannel in trusted server scenarios, where the
upstream Web application or Web service authenticates and authorizes the original callers
before it calls your middle-tier remoted components. To secure this scenario, use IPSec for
machine-level authentication and secure communication. The IPSec policy should only
permit traffic from the nominated Web server(s) to the middle-tier remote component host.
This trusted server scenario is shown in Figure 13.3.
Figure 13.3: Remoting in a trusted server scenario
For more information about IPSec, see "How To: Use IPSec" in the "How To" section of this
guide.
TcpChannel Considerations
If you use a custom executable host and the TcpChannel, and you cannot rely on an
upstream Web application to perform client authentication and authorization, you have to
develop your own authentication and authorization solutions.
As part of a custom solution you might decide to pass principal objects as method
parameters or in the call context. You should only do so in a trusted environment to prevent
malicious client-side code from creating an IPrincipal object with elevated roles and then
sending it to your server. Your server implementation must be able to trust IPrincipal
objects before using them for role-based authorization.
An alternative approach is to use the underlying services of the Security Support Provider
Interface (SSPI). For more information about this approach, see MSDN article, ".NET
Remoting Security Solution, Part 1: Microsoft.Samples.Security.SSPI Assembly," at
http://msdn.microsoft.com/library/en-us/dndotnet/html/remsspi.asp.
To provide secure communication when you use the TcpChannel, use IPSec or a custom
encryption channel sink to encrypt the request data.
Input Validation
In trusted server scenarios in which remoting solutions should be used, front-end Web
applications generally perform input validation. The data is fully validated before it is passed
to the remoted components. If you can guarantee that the data passed to a remoted
component can only come from within the current trust boundary, you can let the upstream
code perform the input validation.
If, however, your remoting solution can be accessed by arbitrary client applications running
in the enterprise, your remote components should validate input and be wary of serialization
attacks and MarshalByRefObject attacks.
Serialization Attacks
You can pass object parameters to remote components either by using the call context or
by passing them through regular input parameters to the methods that are exposed by the
remote component. It is possible for a malicious client to serialize an object and then pass it
to a remote component with the explicit intention of tripping up the remote component or
causing it to perform an unintended operation. Unless you can trust the client, you should
carefully validate each field item in the deserialized object, because the object parameter is
created on the server.
MarshalByRefObject Attacks
Objects that derive from System.MarshalByRefObject require a URL in order to make call
backs to the client. It is possible for the callback URL to be spoofed so that the server
connects to a different client computer, for example, a computer behind a firewall.
You can mitigate the risk of serialization and MarshalByRefObject attacks with version 1.1
of the .NET Framework by setting the typeFilterLevel attribute on the <formatter>
element to Low. This instructs the .NET Framework remoting infrastructure to only serialize
those objects it needs in order to perform the method invocation, and to reject any custom
objects that support serialization that you create and put in the call context or pass as
parameters. You can configure this setting in Web.config or programmatically as shown
below.
<formatter ref="binary" typeFilterLevel="Low" />
or
BinaryServerFormatterSinkProvider provider = new BinaryServerFormatterSinkProvider();
provider.TypeFilterLevel = TypeFilterLevel.Low;
Authentication
If your remote component exposes sensitive data or operations, it must authenticate its
callers to support authorization. The .NET Framework remoting infrastructure does not
define an authentication model. The host should handle authentication. For example, you
can use ASP.NET to benefit from ASP.NET and IIS authentication features.
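To host a remote object in ASP.NET, you publish it through the application's
Web.config file. A minimal sketch follows; the type, assembly, and object URI names
are assumptions:

```xml
<configuration>
  <system.runtime.remoting>
    <application>
      <service>
        <!-- Placeholder names; the .rem extension is mapped to ASP.NET -->
        <wellknown mode="SingleCall" objectUri="RemoteMath.rem"
                   type="MyNamespace.RemoteMath, MyAssembly" />
      </service>
      <channels>
        <channel ref="http" />
      </channels>
    </application>
  </system.runtime.remoting>
</configuration>
```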
If you use a custom Windows service host, develop a custom authentication solution.
ASP.NET Hosting
The following guidelines apply if you use the ASP.NET host with the HttpChannel:
If you have disabled IIS anonymous authentication, you can use any of the supported
IIS authentication mechanisms to authenticate callers over the HttpChannel, for example,
Basic, Digest, or Integrated Windows. To avoid passing credentials over the network
and to take advantage of Windows 2000 security account and password policies, use
Integrated Windows authentication.
You cannot use Passport or Forms authentication because these require redirection to a
login page.
Note: When you use Windows authentication, it is recommended that you enable File
authorization. For more information, see "Authorization" later in this chapter.
You can configure the use of default credentials to use the client's current thread or process
token, or you can set explicit credentials.
To use the client's process token (or thread token if the client thread is currently
impersonating), set the useDefaultCredentials property of the client proxy to true. This
results in the use of CredentialsCache.DefaultCredentials when the client receives an
authentication challenge from the server. You can configure the proxy either by using the
configuration file or programmatically in code. To configure the proxy externally, use the
following element in the client configuration file:
<channel ref="http client" useDefaultCredentials="true" />
If you use default credentials in an ASP.NET client application that is configured for
impersonation, the thread level impersonation token is used. This requires Kerberos
delegation.
To use a specific set of credentials for authentication when you call a remote object, disable
the use of default credentials within the configuration file by using the following setting.
<channel ref="http" useDefaultCredentials="false" />
Note Programmatic settings always override the settings in the configuration file.
Then, use the following code to configure the proxy to use specific credentials:
IDictionary channelProperties =
            ChannelServices.GetChannelSinkProperties(proxy);
NetworkCredential credentials;
credentials = new NetworkCredential("username", "password", "domain");
ObjRef objectReference = RemotingServices.Marshal(proxy);
Uri objectUri = new Uri(objectReference.URI);
CredentialCache credCache = new CredentialCache();
// Substitute "authenticationType" with "Negotiate", "Basic", "Digest",
// "Kerberos" or "NTLM"
credCache.Add(objectUri, "authenticationType", credentials);
channelProperties["credentials"] = credCache;
channelProperties["preauthenticate"] = true;
This feature only works with the HttpChannel on version 1.1 of the .NET Framework.
If you set unsafeAuthenticatedConnectionSharing to true, unauthenticated clients can
possibly authenticate to the server by using the credentials of a previously authenticated
client. This setting is ignored if the useAuthenticatedConnectionSharing property is set
to true. Setting it to false has some performance implications because it closes each
connection with the server, which means that clients must authenticate with each call. If
you use this setting, you should also specify a ConnectionGroupName for each user that
uses the connection.
<channel ref="http client" unsafeAuthenticatedConnectionSharing="false" />
This feature only works with the HttpChannel on version 1.1 of the .NET Framework.
If you use a Windows service host and the TcpChannel, either use this approach only in a
trusted server scenario, or provide a custom authentication scheme. The following
guidelines apply if you use a custom host with the TcpChannel:
However, even in these scenarios, you should use an encrypted communication channel to
prevent replay attacks.
You can obtain the objectUri from the Web.config file used to configure the
remote object on the server. Look for the <wellknown> element, as shown in the
following example:
<wellknown mode="SingleCall" objectUri="RemoteMath.rem" type="...,
           Version=1.0.000.000, Culture=neutral, PublicKeyToken=4b5..." />
2. Add the following line to the top of the file, and then save the file.
<%@ webservice class="YourNamespace.YourClass" ... %>
The FileAuthorizationModule approach described above allows you to control who can
and cannot access the remote object. For finer grained authorization that can be applied at
the method level, you can perform authorization checks using the IPrincipal object attached
to the current request.
If your remote object is hosted by ASP.NET and you use Windows authentication, an
IPrincipal object based on the authenticated caller's Windows identity is automatically
created and attached to Thread.CurrentPrincipal.
If you use a custom host, create an IPrincipal object to represent the authenticated user.
The mechanics depend on your authentication approach. For example if you use a named
pipe transport, you can impersonate the caller to obtain their identity and construct an
IPrincipal object.
With the IPrincipal object in place you can perform authorization using principal permission
demands both declaratively and imperatively and you can call IPrincipal.IsInRole.
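As a sketch of the declarative and imperative forms (the class, method, and
"Manager" role names are assumptions):

```csharp
using System.Security;
using System.Security.Permissions;
using System.Threading;

public class OrderProcessor
{
    // Declarative: only members of the hypothetical "Manager" role may call
    [PrincipalPermission(SecurityAction.Demand, Role = "Manager")]
    public void ApproveOrder(int orderId) { /* ... */ }

    public void CancelOrder(int orderId)
    {
        // Imperative demand: throws SecurityException if the check fails
        new PrincipalPermission(null, "Manager").Demand();

        // Or test the role directly on the attached principal
        if (!Thread.CurrentPrincipal.IsInRole("Manager"))
            throw new SecurityException("Caller is not in the Manager role.");
        /* ... */
    }
}
```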
Using IPSec
Using SSL
Using IPSec
You can use IPSec policies to secure the communication channels to your remote objects,
for example, the channel from a Web server. You can use IPSec to encrypt all of the TCP
packets sent over a particular connection, which includes packets sent to and from your
remote objects. This solution is generally used by secure Internet and intranet data center
infrastructures and is beneficial because no additional coding effort is necessary.
The additional benefit of using IPSec is that it provides a secure communication solution
irrespective of the remote object host and channel type. For example, the solution works
when you use the TcpChannel and a custom host.
Using SSL
If you use the ASP.NET host, you can use IIS to configure the virtual directory of your
application to require SSL. Clients must subsequently use an HTTPS connection to
communicate with your remote objects.
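On the client, point the proxy at the HTTPS endpoint. A configuration sketch
follows; the type, assembly, and URL are assumptions:

```xml
<system.runtime.remoting>
  <application>
    <client>
      <!-- Placeholder type and URL; note the https scheme -->
      <wellknown type="MyNamespace.RemoteMath, MyAssembly"
                 url="https://server/MyApp/RemoteMath.rem" />
    </client>
  </application>
</system.runtime.remoting>
```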
An encryption sink is a custom channel sink that you can use when you use a custom host
with the TcpChannel. On the client side, the sink encrypts request data before it is sent to
the server and decrypts any encrypted response data received from the server. On the
server side, the sink decrypts the request data and then encrypts response data.
The following steps outline the basic approach to implement a custom encryption sink:
1. Create a public/private key pair for the solution.
const int AT_KEYEXCHANGE = 1;
const int PROV_RSA_FULL = 1;
CspParameters cspParams = new CspParameters();
cspParams.KeyContainerName = "<container name>";
cspParams.KeyNumber = AT_KEYEXCHANGE;
cspParams.ProviderName = "Microsoft Base Cryptographic Provider v1.0";
cspParams.ProviderType = PROV_RSA_FULL;
RSACryptoServiceProvider rsaServerSide = new
                         RSACryptoServiceProvider(cspParams);
rsaServerSide.PersistKeyInCsp = true;
Console.WriteLine(rsaServerSide.ToXmlString(true)); // Writes out the key pair
3. Initialize the client channel sink and create a random key for encryption.
byte[] randomKey = new byte[size];
RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider()
rng.GetBytes(randomKey);
4. Encrypt the random key with the public key of your server. Use
IClientChannelSink.ProcessMessage to send the encrypted key to the server.
RSACryptoServiceProvider rsa = new RSACryptoServiceProvider(cspParams);
rsa.FromXmlString("<server's public key>");
AsymmetricKeyExchangeFormatter formatter = new
                               RSAPKCS1KeyExchangeFormatter(rsa);
byte[] encryptedSessionKey = formatter.CreateKeyExchange(_sessionKey);
5. Initialize the server channel sink and create an RSA object using the specific key
container name.
const int AT_KEYEXCHANGE = 1;
CspParameters cspParams = new CspParameters();
cspParams.KeyContainerName = "<container name>";
cspParams.KeyNumber = AT_KEYEXCHANGE;
cspParams.ProviderName = "Microsoft Base Cryptographic Provider v1.0";
cspParams.ProviderType = PROV_RSA_FULL;
RSACryptoServiceProvider rsaServerSide = new RSACryptoServiceProvider(cspParams);
6. Retrieve the encrypted key from the client. This key is normally sent in the request
headers.
7. Decrypt the session encryption key using the private key of the server.
AsymmetricKeyExchangeDeformatter asymDeformatter = new
                                 RSAPKCS1KeyExchangeDeformatter(rsaServerSide);
byte[] decryptedSessionKey = asymDeformatter.DecryptKeyExchange(
                                             encryptedSessionKey);
8. Use a mechanism for mapping clients to encryption keys, for example, by using a
hash table.
At this point, the client and server both share an encryption key, and can encrypt and
decrypt method calls. Periodically during the object lifetime, new keys can and should be
created.
Denial of Service
Denial of service attacks can occur when a malicious client creates multiple objects and
continues to renew the lifetime lease to consume server resources. Server-side remote
objects contain a default lease. In this state, a client can continue to renew the lease
forever. However, you can implement the ILease interface on the server and explicitly
control sponsors and renewals. To do this, override InitializeLifetimeService on your
MarshalByRefObject object. The remoting infrastructure calls this method when the object
is created. The lease can also be configured by using the <lifetime> element.
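A sketch of overriding InitializeLifetimeService to bound lease renewals; the
specific lease times are illustrative only:

```csharp
using System;
using System.Runtime.Remoting.Lifetime;

public class MyRemoteObject : MarshalByRefObject
{
    public override object InitializeLifetimeService()
    {
        ILease lease = (ILease)base.InitializeLifetimeService();
        if (lease.CurrentState == LeaseState.Initial)
        {
            // Illustrative values: cap the initial lease and each renewal
            lease.InitialLeaseTime   = TimeSpan.FromMinutes(5);
            lease.RenewOnCallTime    = TimeSpan.FromMinutes(1);
            lease.SponsorshipTimeout = TimeSpan.FromSeconds(30);
        }
        return lease;
    }
}
```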
Exception Management
Make sure you do not return full exception details to the caller. If you use an ASP.NET host,
make sure ASP.NET is configured so that generic error messages are returned to the client,
as shown below.
<configuration>
<system.runtime.remoting>
<!-- Valid values for mode attribute are
     on - callers receive default error messages
     remoteOnly - clients on the same computer as the remote component
                  receive detailed exception information. Remote callers
                  receive the default error message
     off - callers receive detailed exception information -->
<customErrors mode="on"/>
</system.runtime.remoting>
</configuration>
You could implement a custom channel sink to perform client-side and/or server-side
auditing. You can get details from the SyncProcessMessage, AsyncProcessMessage, or
ProcessMessage methods.
Code Access Security (CAS) Considerations
Remoting clients require full trust on version 1.0 and 1.1 of the .NET Framework. The
System.Runtime.Remoting.dll assembly is not marked with
AllowPartiallyTrustedCallersAttribute.
To use remoting to call a remote component from partial trust code such as a partial trust
Web application, you must create a full trust wrapper assembly and sandbox the remote
object method calls. For more information about sandboxing code and using wrapper
assemblies, see Chapter 9, "Using Code Access Security with ASP.NET."
Summary
The .NET Framework remoting infrastructure is designed for use in trusted server scenarios
where you can limit callers to trusted clients, for example by using IPSec security policies. If
you use an ASP.NET host and the HttpChannel, you benefit from being able to use the
underlying security features provided by ASP.NET and IIS. If you use a custom host and the
TcpChannel, perhaps for performance reasons, you must implement your own
authentication and authorization solutions. IPSec can help in these scenarios by providing
machine level authentication and secure communication.
Additional Resources
For more information, see the following resources:
For more information about how to create a custom authentication solution that
uses SSPI, see MSDN article, ".NET Remoting Security Solution, Part 1:
Microsoft.Samples.Security.SSPI Assembly," at
http://msdn.microsoft.com/library/en-us/dndotnet/html/remsspi.asp.
Chapter 14: Building Secure Data Access
This chapter shows you how to build secure data access code and avoid common
vulnerabilities and pitfalls. The chapter presents a series of countermeasures and defensive
techniques that you can use in your data access code to mitigate the top threats related to
data access.
How to Use This Chapter
To get the most out of this chapter, read the following chapters before or in conjunction with
this chapter:
Read Chapter 2, "Threats and Countermeasures." This will give you a broader
and deeper understanding of potential threats and countermeasures faced by Web
applications.
Use the Assessing Chapters. To review the security of your data access at
different stages of the product cycle, refer to the Web services sections in the
following chapters: Chapter 5, "Architecture and Design Review for Security,"
Chapter 21, "Code Review," and Chapter 22, "Deployment Review."
Use the Checklist. "Checklist: Securing Data Access" in the Checklists section of
this guide includes a checklist for easy reference. Use this task-based checklist as
a summary of the recommendations in this chapter.
Threats and Countermeasures
To build secure data access code, know what the threats are, how common vulnerabilities
arise in data access code, and how to use appropriate countermeasures to mitigate risk.
SQL injection
Unauthorized access
Network eavesdropping
SQL Injection
SQL injection attacks exploit vulnerable data access code and allow an attacker to execute
arbitrary commands in the database. The threat is greater if the application uses an
unconstrained account in the database because this gives the attacker greater freedom to
execute queries and commands.
Vulnerabilities
Common vulnerabilities that make your data access code susceptible to SQL injection
attacks include:
Weak input validation
Countermeasures
Use type safe SQL parameters for data access. These parameters can be used
with stored procedures or dynamically constructed SQL command strings.
Parameters perform type and length checks and also ensure that injected code is
treated as literal data, not executable statements in the database.
Use an account that has restricted permissions in the database. Ideally, you should
only grant execute permissions to selected stored procedures in the database and
provide no direct table access.
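A sketch of type-safe parameters with a stored procedure; the procedure name,
parameter name, and connection string are assumptions:

```csharp
using System.Data;
using System.Data.SqlClient;

public class OrderData
{
    // Hypothetical lookup through a stored procedure with a typed parameter
    public DataSet GetOrders(string connectionString, string customerId)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            SqlCommand cmd = new SqlCommand("GetOrdersByCustomer", conn);
            cmd.CommandType = CommandType.StoredProcedure;

            // Type and length are enforced by the parameter; the value is
            // treated as literal data, not as executable SQL
            cmd.Parameters.Add("@CustomerId", SqlDbType.VarChar, 5)
                          .Value = customerId;

            SqlDataAdapter adapter = new SqlDataAdapter(cmd);
            DataSet results = new DataSet();
            adapter.Fill(results);
            return results;
        }
    }
}
```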
Vulnerabilities
The following vulnerabilities increase the security risk associated with compromised
configuration data:
Countermeasures
Many applications store sensitive data, such as customer credit card numbers. It is
essential to protect the privacy and integrity of this type of data.
Vulnerabilities
Coding practices that can lead to the disclosure of sensitive application data include:
Weak authorization
Weak encryption
Countermeasures
Authorize each caller prior to performing data access so that users are only able to
see their own data.
Vulnerabilities
Countermeasures
Catch, log, and handle data access exceptions in your data access code.
Return generic error messages to the caller. This requires appropriate configuration
of the <customErrors> element in the Web.config or Machine.config configuration
file.
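A Web.config sketch for returning generic errors; the redirect page name is an
assumption:

```xml
<system.web>
  <!-- Return a generic page to callers; placeholder page name -->
  <customErrors mode="On" defaultRedirect="GenericError.htm" />
</system.web>
```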
Unauthorized Access
With inadequate authorization, users may be able to see another user's data and may be
able to access other restricted data.
Vulnerabilities
Countermeasures
Use code access security permission demands to authorize the calling code.
Use limited permissions to restrict the application's login to the database and to
prevent direct table access.
Network Eavesdropping
The deployment architecture of most applications includes a physical separation of the data
access code from the database server. As a result, sensitive data such as application-
specific data or database login credentials must be protected from network eavesdroppers.
Vulnerabilities
Clear text credentials passed over the network during SQL authentication
Unencrypted sensitive application data sent to and from the database server
Countermeasures
Use an SSL connection between the Web server and database server to protect
sensitive application data. This requires a database server certificate.
Using least privileged accounts reduces risk and limits the potential damage if your account
is compromised or malicious code is injected. In the case of SQL injection, the command
executes under the security context defined by the application login and is subject to the
associated permissions that the login has in the database. If you connect using an
overprivileged account — for example, as a member of the SQL Server sysadmin role —
the attacker can perform any operation in any database on the server. This includes
inserting, updating, and deleting data; dropping tables; and executing operating system
commands.
Important: Do not connect to SQL Server using the sa account or any account that is a member of the SQL Server sysadmin or db_owner roles.
Use Stored Procedures
You can restrict the application database login so that it only has permission to
execute specified stored procedures. Granting direct table access is unnecessary.
This helps mitigate the risk posed by SQL injection attacks.
Length and type checks are performed on all input data passed to the stored
procedure. Also, parameters cannot be treated as executable code. Again, this
mitigates the SQL injection risk.
If you cannot use parameterized stored procedures for some reason and you need to
construct SQL statements dynamically, do so using typed parameters and parameter
placeholders to ensure that input data is length and type checked.
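If you must build the statement dynamically, a sketch of this approach follows; the table, column, and method names are illustrative assumptions, not taken from this guide:

```csharp
using System.Data;
using System.Data.SqlClient;

// Sketch only: a parameterized dynamic SQL statement.
public static DataSet GetAuthor(string authorId, SqlConnection conn)
{
    SqlDataAdapter myCommand = new SqlDataAdapter(
        "SELECT au_lname, au_fname FROM authors WHERE au_id = @au_id",
        conn);
    // The parameter is type and length checked; input that does not
    // fit varchar(11) raises an exception instead of reaching SQL.
    SqlParameter parm = myCommand.SelectCommand.Parameters.Add(
        "@au_id", SqlDbType.VarChar, 11);
    parm.Value = authorId;   // treated as literal data, not as code

    DataSet authors = new DataSet();
    myCommand.Fill(authors);
    return authors;
}
```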
Identify stored data that requires guaranteed privacy and integrity. If you store passwords
in database solely for the purposes of verification, consider using a one-way hash. If the
table of passwords is compromised, the hashes cannot be used to obtain the clear text
password.
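A salted one-way hash for password verification might be sketched as follows; the helper names are assumptions, and SHA-1 is shown because it was typical for the period (prefer a stronger algorithm where available):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Sketch: generate a random salt for each stored password.
public static byte[] CreateSalt(int size)
{
    byte[] salt = new byte[size];
    new RNGCryptoServiceProvider().GetBytes(salt);
    return salt;
}

// Sketch: hash salt + password; store the hash and salt, never the
// clear text password. Verification recomputes and compares.
public static string HashPassword(string password, byte[] salt)
{
    byte[] pwdBytes = Encoding.UTF8.GetBytes(password);
    byte[] combined = new byte[salt.Length + pwdBytes.Length];
    Buffer.BlockCopy(salt, 0, combined, 0, salt.Length);
    Buffer.BlockCopy(pwdBytes, 0, combined,
                     salt.Length, pwdBytes.Length);
    byte[] hash = new SHA1Managed().ComputeHash(combined);
    return Convert.ToBase64String(hash);
}
```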
If you store sensitive user-supplied data such as credit card numbers, use a strong
symmetric encryption algorithm such as Triple DES (3DES) to encrypt the data. Encrypt the
3DES encryption key using the Win32 Data Protection API (DPAPI), and store the
encrypted key in a registry key with a restricted ACL that only administrators and your
application process account can use.
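The 3DES step might be sketched as follows; key management (DPAPI protection and registry storage of the key) is assumed to happen elsewhere, as the text describes:

```csharp
using System.IO;
using System.Security.Cryptography;

// Sketch: 3DES-encrypt a sensitive value such as a card number.
public static byte[] EncryptData(byte[] plaintext, byte[] key, byte[] iv)
{
    TripleDESCryptoServiceProvider des =
        new TripleDESCryptoServiceProvider();
    MemoryStream ms = new MemoryStream();
    CryptoStream cs = new CryptoStream(ms,
        des.CreateEncryptor(key, iv), CryptoStreamMode.Write);
    cs.Write(plaintext, 0, plaintext.Length);
    cs.FlushFinalBlock();   // apply padding and flush the final block
    return ms.ToArray();
}

// Sketch: the matching decryption for authorized reads.
public static byte[] DecryptData(byte[] ciphertext, byte[] key, byte[] iv)
{
    TripleDESCryptoServiceProvider des =
        new TripleDESCryptoServiceProvider();
    MemoryStream ms = new MemoryStream();
    CryptoStream cs = new CryptoStream(ms,
        des.CreateDecryptor(key, iv), CryptoStreamMode.Write);
    cs.Write(ciphertext, 0, ciphertext.Length);
    cs.FlushFinalBlock();
    return ms.ToArray();
}
```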
The main issues that make DPAPI less suited for storing sensitive data in the database are
summarized below:
If you use the machine key approach, any user on that computer can decrypt the
data (unless you use additional encryption mechanisms).
If you use DPAPI with a user key and use local user accounts, each local account
on each Web server has a different security identifier (SID) and a different key is
generated, which prevents one server from being able to access data encrypted by
another server.
If you use DPAPI with a user key and you use a roaming user profile across the
machines in the Web farm, all data will share the same encryption/decryption key.
However, if the domain controller responsible for the roaming user profile account is
damaged or destroyed, a user account with the same SID cannot be recreated,
and you cannot recover the encrypted data from the database.
Also, with a roaming user profile, if someone manages to retrieve the data, it can
be decrypted on any machine in the network, provided that the attacker can run
code under the specific user account. This increases the area for potential attack,
and is not recommended.
Use sandboxing to isolate your data access code, which is important if your code
needs to support partial-trust callers — for example, partial-trust Web applications.
Use data access methods and classes that authorize calling code using code
identity permission demands.
For more information about authorization for data access code, see the "Authorization"
section, later in this chapter.
Input Validation
Aside from the business need to ensure that your databases maintain valid and consistent
data, you must validate data prior to submitting it to the database to prevent SQL injection.
If your data access code receives its input from other components inside the current trust
boundary and you know the data has already been validated (for example, by an ASP.NET
Web page or business component) then your data access code can omit extensive data
validation. However, make sure you use SQL parameters in your data access code. These
parameters validate input parameters for type and length. The next section discusses the
use of SQL parameters.
SQL Injection
SQL injection attacks can occur when your application uses input to construct dynamic SQL
statements to access the database. SQL injection attacks can also occur if your code uses
stored procedures that are passed strings which contain unfiltered user input. SQL injection
can result in attackers being able to execute commands in the database using the
application login. The issue is magnified if the application uses an overprivileged account to
connect to the database.
Note: Conventional security measures, such as the use of SSL and IPSec, do not protect you against SQL injection attacks.
Constrain input.
Constrain Input
Validate input for type, length, format, and range. If you do not expect numeric values, then
do not accept them. Consider where the input comes from. If it is from a trusted source that
you know has performed thorough input validation, you may choose to omit data validation
in your data access code. If the data is from an untrusted source or for defense in depth,
your data access methods and components should validate input.
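For example, a minimal format check for a string identifier might look like the following sketch; the method name and the 999-99-9999 pattern are illustrative assumptions, not taken from this guide:

```csharp
using System.Text.RegularExpressions;

// Sketch: validate type, length, and format before data access.
public static bool IsValidAuthorId(string id)
{
    // Accept only three digits, two digits, and four digits,
    // separated by hyphens (11 characters in total).
    return Regex.IsMatch(id, @"^\d{3}-\d{2}-\d{4}$");
}
```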
Important: SSL does not protect you from SQL injection. Any application that accesses a database without proper input validation and appropriate data access techniques is susceptible to SQL injection attacks.
Use stored procedures where you can, and call them with the Parameters collection.
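A parameterized stored procedure call of this kind might look like the following sketch; the AuthorLogin procedure name, the conn connection, and the Login text box are illustrative assumptions:

```csharp
using System.Data;
using System.Data.SqlClient;

// Sketch: call a stored procedure through the Parameters collection.
SqlDataAdapter myCommand = new SqlDataAdapter("AuthorLogin", conn);
myCommand.SelectCommand.CommandType = CommandType.StoredProcedure;
// The parameter is type checked (varchar) and length checked (11).
SqlParameter parm = myCommand.SelectCommand.Parameters.Add(
    "@au_id", SqlDbType.VarChar, 11);
parm.Value = Login.Text;   // treated as a literal value, not as code
```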
In this case, the @au_id parameter is treated as a literal value and not as executable code.
Also, the parameter is type and length checked. In the sample above, the input value cannot
be longer than 11 characters. If the data does not conform to the type or length defined by
the parameter, an exception is generated.
Note that using stored procedures does not necessarily prevent SQL injection. The
important thing to do is use parameters with stored procedures. If you do not use
parameters, your stored procedures can be susceptible to SQL injection if they use
unfiltered input. For example, the following code fragment is vulnerable:
SqlDataAdapter myCommand = new SqlDataAdapter("LoginStoredProcedure '" +
                                              Login.Text + "'", conn);
Important If you use stored procedures, make sure you use parameters.
The problem with filtering routines such as this, and the reason why you should not rely on them completely, is that an attacker could use ASCII hexadecimal characters to bypass your checks. You should, however, filter input as part of your defense-in-depth strategy.
When you use Windows authentication, you use a trusted connection. The following code
fragments show typical connection strings that use Windows authentication.
The example below uses the ADO.NET data provider for SQL Server:
SqlConnection pubsConn = new SqlConnection(
"server=dbserver; database=pubs; Integrated Security=SSPI;");
The example below uses the ADO.NET data provider for OLE DB data sources:
OleDbConnection pubsConn = new OleDbConnection(
   "Provider=SQLOLEDB; Data Source=dbserver; Integrated Security=SSPI;" +
   "Initial Catalog=northwind");
To enable SQL Server to automatically encrypt credentials sent over the network, install a
server certificate on the database server. Alternatively, use an IPSec encrypted channel
between the Web and database servers to secure all traffic sent to and from the database
server. To secure the connection string, use DPAPI. For more information, see "Secure Your
Connection String" in the "Configuration Management" section, later in this chapter.
For more information about how to create a least privileged database account and the
options for connecting an ASP.NET Web application to a remote database using Windows
authentication, see "Data Access" in Chapter 19, "Securing Your ASP.NET Application and
Web Services."
Authorization
The authorization process establishes whether a user can retrieve and manipulate specific data.
There are two approaches: your data access code can use authorization to determine
whether or not to perform the requested operation, and the database can perform
authorization to restrict the capabilities of the SQL login used by your application.
With inadequate authorization, a user may be able to see the data of another user and an
unauthorized user may be able to access restricted data. To address these threats:
Figure 14.3 summarizes the authorization points and techniques that should be used.
Notice how the data access code can use permission demands to authorize the calling user
or the calling code. Code identity demands are a feature of .NET code access security.
To authorize the application in the database, use a least privileged SQL Server login that only has permission to execute selected stored procedures. Unless there are specific reasons, the application should not be authorized to perform create, retrieve, update, destroy/delete (CRUD) operations directly on any table.
Note: Stored procedures run under the security context of the database system. Although you can constrain the logical operations of an application by assigning it permissions to particular stored procedures, you cannot constrain the consequences of the operations performed by the stored procedure. Stored procedures are trusted code. The interfaces to the stored procedures must be secured using database permissions.
Restrict Unauthorized Callers
Your code should authorize users based on a role or identity before it connects to the
database. Role checks are usually used in the business logic of your application, but if you
do not have a clear distinction between business and data access logic, use principal
permission demands on the methods that access the database.
The following attribute ensures that only users who are members of the Manager role can
call the DisplayCustomerInfo method:
[PrincipalPermissionAttribute(SecurityAction.Demand, Role="Manager")]
public void DisplayCustomerInfo(int CustId)
{
}
If you need additional authorization granularity and need to perform role-based logic inside
the data access method, use imperative principal permission demands or explicit role
checks as shown in the following code fragment:
using System.Security;
using System.Security.Permissions;
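An imperative principal permission demand of the kind just described might look like the following sketch (it relies on the using statements above; the method body is an illustrative assumption):

```csharp
// Sketch: imperative equivalent of the declarative attribute shown
// earlier. Demand() throws a SecurityException if the caller is not
// an authenticated member of the Manager role.
public void DisplayCustomerInfo(int CustId)
{
    PrincipalPermission permCheck = new PrincipalPermission(
        null, "Manager");
    permCheck.Demand();

    // Perform data access for the authorized caller
}
```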
The following code fragment uses an explicit, programmatic role check to ensure that the
caller is a member of the Manager role:
public void DisplayCustomerInfo(int CustId)
{
if(!Thread.CurrentPrincipal.IsInRole("Manager"))
{
. . .
}
}
For example, if you only want code written by your company or a specific development
organization to be able to use your data access components, use a
StrongNameIdentityPermission and demand that calling assemblies have a strong name
with a specified public key, as shown in the following code fragment:
using System.Security.Permissions;
. . .
[StrongNameIdentityPermission(SecurityAction.LinkDemand,
PublicKey="002...4c6")]
public void GetCustomerInfo(int CustId)
{
}
To extract a text representation of the public key for a given assembly, use the following
command:
sn -Tp assembly.dll
Because Web application assemblies are dynamically compiled, you cannot use strong
names for these assemblies. This makes it difficult to restrict the use of a data access
assembly to a specific Web application. The best approach is to develop a custom
permission and demand that permission from the data access component. Full trust Web
applications (or any fully trusted code) can call your component. Partial trust code,
however, can call your data access component only if it has been granted the custom
permission.
For an example implementation of a custom permission, see "How To: Create a Custom
Encryption Permission" in the "How To" section of this guide.
For details about how to configure this approach, see "Configuring Data Access for Your
ASP.NET Application" in Chapter 19, "Securing Your ASP.NET Application and Web
Services."
Configuration Management
Database connection strings are the main configuration management concern for data
access code. Carefully consider where these strings are stored and how they are secured,
particularly if they include credentials. To improve your encryption management security:
When you use Windows authentication, the credentials are managed for you and the
credentials are not transmitted over the network. You also avoid embedding user names
and passwords in connection strings.
For details on how to build a managed wrapper class, see "How To: Create a DPAPI
Library" in the "How To" section of "Microsoft patterns & practices Volume I, Building
Secure ASP.NET Applications: Authentication, Authorization, and Secure Communication"
at http://msdn.microsoft.com/library/default.asp?url=/library/en-
us/dnnetsec/html/secnetlpMSDN.asp.
Note: The process account is determined by the process in which your data access assembly runs. This is usually the ASP.NET process or an Enterprise Services server process if your solution uses an Enterprise Services middle tier.
Note: If you use the Visual Studio .NET database connection wizards, the connection strings are stored either as a clear text property value in the Web application code-behind file or in the Web.config file. Both of these approaches should be avoided.
Although it is potentially less secure than using a restricted registry key, you may want to
store the encrypted string in the Web.config for easier deployment. In this case, use a
custom <appSettings> name-value pair as shown below:
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<appSettings>
<add key="connectionString" value="AQA..bIE=" />
</appSettings>
<system.web>
...
</system.web>
</configuration>
To access the cipher text from the <appSettings> element, use the
ConfigurationSettings class as shown below:
using System.Configuration;
private static string GetConnectionString()
{
return ConfigurationSettings.AppSettings["connectionString"];
}
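To recover the plaintext connection string, the cipher text must then be decrypted. The following sketch assumes the DataProtector class from the managed DPAPI library referenced earlier in this chapter; its exact API shape is an assumption:

```csharp
using System;
using System.Configuration;
using System.Text;

// Sketch: decrypt the stored cipher text with the managed DPAPI
// wrapper (DataProtector is assumed from the DPAPI "How To" library).
private static string GetDecryptedConnectionString()
{
    string cipherText =
        ConfigurationSettings.AppSettings["connectionString"];
    DataProtector dp = new DataProtector(
        DataProtector.Store.USE_MACHINE_STORE);
    byte[] decrypted = dp.Decrypt(
        Convert.FromBase64String(cipherText), null);
    return Encoding.ASCII.GetString(decrypted);
}
```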
If your application uses external universal data link (UDL) files with the ADO.NET managed
data provider for OLE DB, use NTFS permissions to restrict access. Use the following
restricted ACL:
Administrators: Full Control
Process Account: Read
Note: UDL files are not encrypted. A more secure approach is to encrypt the connection string using DPAPI and store it in a restricted registry key.
Sensitive Data
Many Web applications store sensitive data of one form or another in the database. If an
attacker manages to execute a query against your database, it is imperative that any
sensitive data items — such as credit card numbers — are suitably encrypted.
Avoid storing sensitive data if possible. If you must store sensitive data, use the following process:
1. Encrypt the data with a strong symmetric algorithm, such as 3DES.
2. Back up the encryption key, and store the backup in a physically secure location.
3. Encrypt the key with DPAPI and store it in a registry key. Use the following ACL to
secure the registry key:
Administrators: Full Control
Process Account (for example ASPNET): Read
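Retrieving the DPAPI-protected key at run time might then look like the following sketch; the registry path and value name are assumptions:

```csharp
using Microsoft.Win32;

// Sketch: read the encrypted 3DES key from a registry key whose ACL
// grants access only to administrators and the process account.
private static byte[] GetEncryptedKey()
{
    RegistryKey key = Registry.LocalMachine.OpenSubKey(
        @"Software\MyApp\Secrets");          // assumed path
    return (byte[])key.GetValue("encryptedKey");  // assumed value name
}
```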
With this process, if the account used to perform the DPAPI encryption of the encryption key is damaged, the backup of the 3DES key can be retrieved from the backup location and re-encrypted using DPAPI under a new account. The new encrypted key can be stored in the registry, and the data in the database can still be decrypted.
For more information about creating a managed DPAPI library, see "How To: Create a
DPAPI Library" in the "How To" section of "Microsoft patterns & practices Volume I, Building
Secure ASP.NET Applications: Authentication, Authorization, and Secure Communication"
at http://msdn.microsoft.com/library/default.asp?url=/library/en-
us/dnnetsec/html/secnetlpMSDN.asp.
For more information about using SSL and IPSec, see "How To: Use IPSec to Provide
Secure Communication Between Two Servers" and "How To: Use SSL to Secure
Communication to SQL Server 2000" in the "How To" section of "Microsoft patterns &
practices Volume I, Building Secure ASP.NET Applications: Authentication, Authorization,
and Secure Communication" at http://msdn.microsoft.com/library/default.asp?
url=/library/en-us/dnnetsec/html/secnetlpMSDN.asp.
More Information
For more information about implementing a user store that stores password hashes with
salt, see "How To: Use Forms Authentication with SQL Server 2000" in the "How To" section
of "Microsoft patterns & practices Volume I, Building Secure ASP.NET Applications:
Authentication, Authorization, and Secure Communication" at
http://msdn.microsoft.com/library/default.asp?url=/library/en-
us/dnnetsec/html/secnetlpMSDN.asp.
Exception Management
Exception conditions can be caused by configuration errors, bugs in your code, or malicious
input. Without proper exception management, these conditions can reveal sensitive
information about the location and nature of your data source in addition to valuable
connection details. The following recommendations apply to data access code:
Place data access code within a try / catch block and handle exceptions. When you write ADO.NET data access code, the type of exception generated by ADO.NET depends on the data provider. For example, the SQL Server .NET Framework data provider generates exceptions of type SqlException, while the OLE DB provider generates exceptions of type OleDbException.
Trapping Exceptions
The following code uses the SQL Server .NET Framework data provider and shows how
you should catch exceptions of type SqlException.
try
{
  // Data access code
}
catch (SqlException sqlex) // more specific
{
  // Log and handle SQL Server specific errors
}
catch (Exception ex) // less specific
{
  // Log and handle any remaining errors
}
Logging Exceptions
You should also log details from the SqlException class. This class exposes properties
that contain details of the exception condition. These include a Message property that
describes the error, a Number property that uniquely identifies the type of error, and a
State property that contains additional information. The State property is usually used to
indicate a particular occurrence of a specific error condition. For example, if a stored
procedure generates the same error from more than one line, the State property indicates
the specific occurrence. Finally, an Errors collection contains SqlError objects that provide detailed SQL Server error information.
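A logging helper of this kind might be sketched as follows; the event source name is an assumed example and must be registered before use:

```csharp
using System.Data.SqlClient;
using System.Diagnostics;
using System.Text;

// Sketch: log SqlException details for diagnosis instead of
// returning them to the caller.
private static void LogException(SqlException sqlex)
{
    StringBuilder msg = new StringBuilder();
    msg.AppendFormat("Message: {0}\r\n", sqlex.Message);
    msg.AppendFormat("Number: {0}\r\n", sqlex.Number);
    msg.AppendFormat("State: {0}\r\n", sqlex.State);
    foreach (SqlError err in sqlex.Errors)
    {
        // Each SqlError identifies the error, the procedure,
        // and the line at which it occurred.
        msg.AppendFormat("Error: {0}, Procedure: {1}, Line: {2}\r\n",
                         err.Number, err.Procedure, err.LineNumber);
    }
    EventLog.WriteEntry("MyDataAccess", msg.ToString(),
                        EventLogEntryType.Error);
}
```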
The following code fragment shows how to handle a SQL Server error condition by using
the SQL Server .NET Framework data provider:
using System.Data;
using System.Data.SqlClient;
using System.Diagnostics;
cmd.Parameters.Add("@ProductID", ProductID );
SqlParameter paramPN =
cmd.Parameters.Add("@ProductName", SqlDbType.VarChar, 40 );
paramPN.Direction = ParameterDirection.Output;
cmd.ExecuteNonQuery();
// The finally code is executed before the method returns
return paramPN.Value.ToString();
}
catch (SqlException sqlex)
{
// Handle data access exception condition
// Log specific exception details
LogException(sqlex);
// Wrap the current exception in a more relevant
// outer exception and re-throw the new exception
throw new Exception(
  "Failed to retrieve product details for product ID: " +
  ProductID.ToString(), sqlex );
}
finally
{
conn.Close(); // Ensures connection is closed
}
}
Set mode="On" for production servers. Only use mode="Off" when you are developing
and testing software prior to release. Failure to do so results in rich error information, such
as that shown in Figure 14.4, being returned to the end user. This information can include
the database server name, database name, and connection credentials.
Figure 14.4 also shows a number of vulnerabilities in the data access code near the line
that caused the exception. Specifically:
The SQL command construction is susceptible to SQL injection attack; the input is
not validated, and the code does not use parameterized stored procedures.
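The <customErrors> setting described above takes the following form in Web.config; the redirect page name is a placeholder:

```xml
<customErrors mode="On" defaultRedirect="YourErrorPage.htm" />
```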
Building a Secure Data Access Component
The following code shows a sample implementation of a CheckProductStockLevel
method used to query a products database for stock quantity. The code illustrates a
number of the important security features for data access code introduced earlier in this
chapter.
using System;
using System.Data;
using System.Data.SqlClient;
using System.Text.RegularExpressions;
using System.Collections.Specialized;
using Microsoft.Win32;
using DataProtection;
The code shown above exhibits the following security characteristics (identified by the
numbers in the comment lines).
1. The data access code is placed inside a try/catch block. This is essential to
prevent the return of system level information to the caller in the event of an
exception. The calling ASP.NET Web application or Web service might handle the
exception and return a suitably generic error message to the client, but the data
access code does not rely on this.
4. Parameterized stored procedures are used for data access. This is another
countermeasure to prevent SQL injection.
5. Detailed error information is not returned to the client. Exception details are
logged to assist with problem diagnosis.
Other options are discussed in the "Database Connection Strings" section of this
chapter.
The code shows how to retrieve the connection string from the registry
and then decrypt it using the managed DPAPI helper library. This library
is provided in "How To: Create a DPAPI Library" in the "How To" section
Note of "Microsoft patterns & practices Volume I, Building Secure ASP.NET
Applications: Authentication, Authorization, and Secure
Communication" at http://msdn.microsoft.com/library/en-
us/dnnetsec/html/SecNetHT07.asp.
Code Access Security Considerations
All data access is subject to code access security permission demands. Your chosen
ADO.NET managed data provider determines the precise requirements. The following table
shows the permissions that must be granted to your data access assemblies for each
ADO.NET data provider.
If you use the ADO.NET SQL Server data provider, your code must be granted the
SqlClientPermission by code access security policy. Full and Medium trust Web
applications have this permission.
Whether or not code is granted the SqlClientPermission determines whether or not the
code can connect to SQL Servers. You can also use the permission to place restrictions on
the use of database connection strings. For example, you can force an application to use
integrated security or you can ensure that if SQL Server security is used then blank
passwords are not accepted. Violations of the rules you specify through the
SqlClientPermission result in runtime security exceptions.
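For example, a data access method can demand the permission declaratively. This sketch assumes the standard SqlClientPermissionAttribute; the method name is illustrative:

```csharp
using System.Data.SqlClient;
using System.Security.Permissions;

// Sketch: demand SqlClientPermission and reject blank SQL passwords.
[SqlClientPermission(SecurityAction.Demand, AllowBlankPassword=false)]
public void QueryDatabase()
{
    // Data access code runs only if the demand succeeds;
    // otherwise a SecurityException is thrown at the call.
}
```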
For more information about how to use SqlClientPermission to constrain data access, see
"Data Access" in Chapter 8, "Code Access Security in Practice."
Deployment Considerations
A securely designed and developed data access component can still be vulnerable to attack
if it is not deployed in a secure manner. A common deployment practice is for the data
access code and database to reside on separate servers. The servers are often separated
by an internal firewall, which introduces additional deployment considerations. Developers and administrators should be aware of the following issues:
Firewall restrictions
Logon auditing
Firewall Restrictions
If you connect to SQL Server through a firewall, configure the firewall, client, and server.
You configure the client by using the SQL Server Client Network Utility and you configure
the database server by using the Server Network Utility. By default, SQL Server listens on
TCP port 1433, although you can change this. You must open the chosen port at the
firewall.
Depending on the SQL Server authentication mode you choose and your application's use
of distributed transactions, you may need to open several additional ports at the firewall:
For networks that do not use Active Directory, TCP port 139 is usually required for
Windows authentication. For more information about port requirements, see
TechNet articles, "TCP and UDP Port Assignments," at
http://www.microsoft.com/technet/prodtechnol/windows2000serv/reskit/tcpip/part4/tc
and "Security Considerations for Administrative Authority," at
http://www.microsoft.com/technet/security/bestprac/bpent/sec2/seconaa.asp
For full configuration details, see the "Ports" section in Chapter 18, "Securing Your
Database Server."
Many applications store connection strings in code primarily for performance reasons.
However, the performance benefit is negligible, and use of file system caching helps to
ensure that storing connection strings in external files gives comparable performance. Using
external files to store connection strings is superior for system administration.
For increased security, the recommended approach is to use DPAPI to encrypt the
connection string. This is particularly important if your connection string contains user names
and passwords. Then, decide where to store the encrypted string. The registry is a secure
location particularly if you use HKEY_CURRENT_USER, because access is limited to
processes that run under the associated user account. An alternative for easier deployment
is to store the encrypted string in the Web.config file. Both approaches were discussed in
the "Configuration Management" section earlier in this chapter.
As a developer you must communicate to the database administrator the precise stored
procedures and (possibly) tables that the application's login needs to access. Ideally, you
should only allow the application's login to have execute permissions on a restricted set of
stored procedures that are deployed along with the application.
Use strong passwords for the SQL or Windows account or accounts used by the
application to connect to the database.
See the "Authorization" section earlier in this chapter for the recommended authorization
strategy for the application account in the database.
Logon Auditing
You should configure SQL Server to log failed login attempts and possibly successful login
attempts. Auditing failed login attempts is helpful to detect an attacker who is attempting to
discover account passwords.
For more information about how to configure SQL Server auditing, see Chapter 18,
"Securing Your Database Server."
The use of IPSec or SSL to the database is recommended to protect sensitive application
level data passed to and from the database. For more information, see Chapter 18,
"Securing Your Database Server."
Summary
This chapter showed the top threats to data access code and highlighted the common
vulnerabilities. SQL injection is one of the main threats to be aware of. Unless you use the
correct countermeasures discussed in this chapter, an attacker could exploit your data
access code to run arbitrary commands in the database. Conventional security measures
such as firewalls and SSL provide no defense to SQL injection attacks. You should
thoroughly validate your input and use parameterized stored procedures as a minimum
defense.
Additional Resources
For more information, see the following resources:
For a printable checklist, see "Checklist: Securing Data Access" in the "Checklists"
section of this guide.
For information on securing your developer workstation, see "How To: Secure Your
Developer Workstation" in the "How To" section of this guide.
For information on using SSL with SQL Server, see "How To: Use SSL to Secure
Communication with SQL Server 2000," in the "How To" section of "Microsoft
patterns & practices Volume I, Building Secure ASP.NET Applications:
Authentication, Authorization, and Secure Communication" at
http://msdn.microsoft.com/library/en-us/dnnetsec/html/SecNetHT19.asp.
For information on using IPSec, see "How To: Use IPSec to Provide Secure
Communication Between Two Servers" in the "How To" section of "Microsoft
patterns & practices Volume I, Building Secure ASP.NET Applications:
Authentication, Authorization, and Secure Communication" at
http://msdn.microsoft.com/library/en-us/dnnetsec/html/SecNetHT18.asp.
For information on using DPAPI, see "How To: Create a DPAPI Library" in the "How
To" section of "Microsoft patterns & practices Volume I, Building Secure ASP.NET
Applications: Authentication, Authorization, and Secure Communication" at
http://msdn.microsoft.com/library/en-us/dnnetsec/html/SecNetHT07.asp.
Part IV: Securing Your Network, Host, and Application
Chapter List
Chapter 15: Securing Your Network
The basic components of a network, which act as the front-line gatekeepers, are the router,
the firewall, and the switch. Figure 15.1 shows these core components.
Read Chapter 2, "Threats and Countermeasures." This will give you a better
understanding of potential threats to Web applications.
Use the snapshot. Table 15.3, which is at the end of this chapter, provides a
snapshot of a secure network. Use this table as a reference when configuring your
network.
Use the Checklist. Use "Checklist: Securing Your Network" in the "Checklist"
section of this guide, to quickly evaluate and scope the required steps. The
checklist will also help you complete the individual steps.
Use vendor details to implement the guidance. The guidance in this chapter is
not specific to specific network hardware or software vendors. Consult your
vendor's documentation for specific instructions on how to implement the
countermeasures given in this chapter.
Threats and Countermeasures
An attacker looks for poorly configured network devices to exploit. Common vulnerabilities
include weak default installation settings, wide-open access controls, and unpatched
devices. The following are high-level network threats:
Information gathering
Sniffing
Spoofing
Session hijacking
Denial of service
With knowledge of the threats that can affect the network, you can apply effective
countermeasures.
Information Gathering
Information gathering can reveal detailed information about network topology, system
configuration, and network devices. An attacker uses this information to mount pointed
attacks at the discovered vulnerabilities.
Vulnerabilities
Attacks
Use generic service banners that do not give away configuration information such
as software versions or names.
Sniffing
Sniffing, also called eavesdropping, is the act of monitoring network traffic for data, such
as clear-text passwords or configuration information. With a simple packet sniffer, all
plaintext traffic can be read easily. Also, lightweight hashing algorithms can be cracked and
the payload that was thought to be safe can be deciphered.
Vulnerabilities
Common vulnerabilities that make your network susceptible to data sniffing include:
Attacks
The attacker places packet sniffing tools on the network to capture all traffic.
Countermeasures
Strong physical security that prevents rogue devices from being placed on the
network
Spoofing
Spoofing, also called identity obfuscation, is a means to hide one's true identity on the
network. A fake source address is used that does not represent the actual packet
originator's address. Spoofing can be used to hide the original source of an attack or to
work around network access control lists (ACLs) that are in place to limit host access
based on source address rules.
Vulnerabilities
Lack of ingress and egress filtering. Ingress filtering is the filtering of any IP
packets with untrusted source addresses before they have a chance to enter and
affect your system or network. Egress filtering is the process of filtering outbound
traffic from your network.
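The ingress and egress rules just described can be sketched as a source-address check at the perimeter. The internal prefix below is an invented example, and a real device applies these rules per interface rather than per direction string:

```python
from ipaddress import ip_address, ip_network

INTERNAL_NET = ip_network("192.168.10.0/24")  # assumed internal prefix

def permit(packet_source: str, direction: str) -> bool:
    """Apply ingress/egress source-address filtering at the perimeter."""
    source = ip_address(packet_source)
    if direction == "inbound":
        # Ingress filter: a packet arriving from outside must not claim
        # an internal source address.
        return source not in INTERNAL_NET
    # Egress filter: traffic leaving the network must carry an internal
    # source, preventing local hosts from spoofing other networks.
    return source in INTERNAL_NET
```

With both rules in place, a spoofed packet is dropped at the boundary in either direction, which is exactly the countermeasure the ingress/egress filtering vulnerability calls for.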
Attacks
An attacker can use several tools to modify outgoing packets so that they appear to
originate from an alternate network or host.
Countermeasures
Session Hijacking
With session hijacking, also known as a man-in-the-middle attack, the attacker uses an
application that masquerades as either the client or the server. This results in either the
server or the client being tricked into thinking that the upstream host is the legitimate host.
However, the upstream host is actually an attacker's host that is manipulating the network
so that it appears to be the desired destination. Session hijacking can be used to obtain
logon information that can then be used to gain access to a system or to confidential
information.
Vulnerabilities
Common vulnerabilities that make your network susceptible to session hijacking include:
Unencrypted communication
Attacks
An attacker can use several tools to combine spoofing, routing changes, and packet
manipulation.
Countermeasures
Countermeasures include the following:
Session encryption
Denial of Service
A denial of service attack is the act of denying legitimate users access to a server or
services. Network-layer denial of service attacks usually try to deny service by flooding the
network with traffic, which consumes the available bandwidth and resources.
Vulnerabilities
Unencrypted communication
Attacks
Countermeasures
Countermeasures include:
In keeping with this guide's philosophy, this chapter uses the approach of analyzing potential
threats; without these analyses, it's impossible to properly apply security.
The network infrastructure can be broken into the following three layers: access,
distribution, and core. These layers contain all of the hardware necessary to control access
to and from internal and external resources. The chapter focuses on the software that
drives the network hardware that is responsible for delivering ASP.NET applications. The
recommendations apply to an Internet or intranet-facing Web zone and therefore might not
apply to your internal or corporate network.
Router
Firewall
Switch
Router
The router is the outermost security gate. It is responsible for forwarding IP packets to the
networks to which it is connected. These packets can be inbound requests from Internet
clients to your Web server, request responses, or outgoing requests from internal clients.
The router should be used to block unauthorized or undesired traffic between networks. The
router itself must also be secured against reconfiguration by using secure administration
interfaces and ensuring that it has the latest software patches and updates applied.
Firewall
The role of the firewall is to block all unnecessary ports and to allow traffic only from known
ports. The firewall must be capable of monitoring incoming requests to prevent known
attacks from reaching the Web server. Coupled with intrusion detection, the firewall is a
useful tool for preventing attacks and detecting intrusion attempts, or in worst-case
scenarios, the source of an attack.
Like the router, the firewall runs on an operating system that must be patched regularly. Its
administration interfaces must be secured and unused services must be disabled or
removed.
Switch
The switch has a minimal role in a secure network environment. Switches are designed to
improve network performance and to ease administration. For this reason, you can easily
configure a switch by sending specially formatted packets to it. For more information, see
"Switch Considerations" later in this chapter.
Router Considerations
The router is the very first line of defense. It provides packet routing, and it can also be
configured to block or filter the forwarding of packet types that are known to be vulnerable
or used maliciously, such as ICMP or Simple Network Management Protocol (SNMP).
If you don't have control of the router, there is little you can do to protect your network
beyond asking your ISP what defense mechanisms they have in place on their routers.
Protocols
Administrative access
Services
Intrusion detection
Protocols
Denial of service attacks often take advantage of protocol-level vulnerabilities, for example,
by flooding the network. To counter this type of attack, you should:
This type of filtering also enables the originator to be easily traced to its true source since
the attacker would have to use a valid — and legitimately reachable — source address. For
more information, see "Network Ingress Filtering: Defeating Denial of Service Attacks
Which Employ IP Source Address Spoofing" at http://www.rfc-editor.org/rfc/rfc2267.txt.
Blocking ICMP traffic at the outer perimeter router protects you from attacks such as
cascading ping floods. Other ICMP vulnerabilities exist that justify blocking this protocol.
While ICMP can be used for troubleshooting, it can also be used for network discovery and
mapping. Therefore, control the use of ICMP. If you must enable it, use it in echo-reply
mode only.
For more information on broadcast suppression using Cisco routers, see "Configuring
Broadcast Suppression" on the Cisco Web site at
http://www.cisco.com/en/US/products/hw/switches/ps708/products_
configuration_guide_chapter09186a00800eb778.html.
Administrative Access
From where will the router be accessed for administration purposes? Decide over which
interfaces and ports an administration connection is allowed and from which network or host
the administration is to be performed. Restrict access to those specific locations. Do not
leave an Internet-facing administration interface available without encryption and
countermeasures to prevent hijacking. In addition:
Services
On a deployed router, every open port is associated with a listening service. To reduce the
attack surface area, default services that are not required should be shut down. Examples
include bootps and Finger, which are rarely required. You should also scan your router to
detect which ports are open.
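The port audit recommended above can be performed with a basic TCP connect probe. The sketch below is a minimal illustration of the idea, not a replacement for a dedicated scanning tool:

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Attempt a TCP connection; an accepted connection means a listener."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as probe:
        probe.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising an exception.
        return probe.connect_ex((host, port)) == 0

def audit(host: str, ports: list[int]) -> list[int]:
    """Return the subset of ports that accept connections."""
    return [port for port in ports if port_is_open(host, port)]
```

Any port the audit reports open should map to a service you deliberately enabled; an unexplained listener warrants investigation.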
Intrusion Detection
With restrictions in place at the router to prevent TCP/IP attacks, the router should be able
to identify when an attack is taking place and notify a system administrator of the attack.
Attackers learn what your security priorities are and attempt to work around them. Intrusion
Detection Systems (IDSs) can show where the perpetrator is attempting attacks.
Firewall Considerations
A firewall should exist anywhere you interact with an untrusted network, especially the
Internet. It is also recommended that you separate your Web servers from downstream
application and database servers with an internal firewall.
After the router, with its broad filters and gatekeepers, the firewall is the next point of
attack. In many (if not most) cases, you do not have administrative access to the upstream
router. Many of the filters and ACLs that apply to the router can also be implemented at the
firewall. The configuration categories for the firewall include:
Filters
Perimeter networks
Intrusion detection
Filters
Filtering published ports on a firewall can be an effective and efficient method of blocking
malicious packets and payloads. Filters range from simple packet filters that restrict traffic
at the network layer based on source and destination IP addresses and port numbers, to
complex application filters that inspect application-specific payloads. A defense in depth
approach that uses layered filters is a very effective way to block attacks. Common types of
firewall filters include:
Packet filters
These can filter packets based on protocol, source or destination port number and
source or destination address, or computer name. IP packet filters are static, and
communication through a specific port is either allowed or blocked. Blocked packets
are usually logged, and a secure packet filter denies by default.
At the network layer, the payload is unknown and might be dangerous. More
intelligent types of filtering must be configured to inspect the payload and make
decisions based on access control rules.
Circuit-level filters
These inspect sessions rather than payload data. An inbound or outbound client
makes a request directly against the firewall/gateway, and in turn the gateway
initiates a connection to the server and acts as a broker between the two
connections. With knowledge of application connection rules, circuit level filters
ensure valid interactions. They do not inspect the actual payload, but they do count
frames to ensure packet integrity and prevent session hijacking and replaying.
Application filters
Smart application filters can analyze a data stream for an application and provide
application-specific processing, including inspecting, screening or blocking,
redirecting, and even modifying the data as it passes through the firewall.
Application filters protect against attacks such as the following:
HTTP-based attacks (for example, Code Red and Nimda, which use
application-specific knowledge)
For example, an application filter can block an HTTP DELETE, but allow an HTTP
GET. The capabilities of content screening, including virus detection, lexical
analysis, and site categorization, make application filters very effective in Web
scenarios both as security measures and in enforcement of business rules.
Stateful inspection
Application filters are limited to knowledge of the payload of a packet and therefore
make filtering decisions based only on the payload. Stateful inspection uses both
the payload and its context to determine filtering rules. Using both the payload and its
context allows stateful inspection rules to ensure session and communication integrity.
However, the need to inspect packets, their payloads, and their sequence limits the
scalability of stateful inspection.
When you use filters at multiple levels of the network stack, it helps make your environment
more secure. For example, a packet filter can be used to block IP traffic destined for any
port other than port 80, and an application filter might further restrict traffic based on the
nature of the HTTP verb. For example, it might block HTTP DELETE verbs.
Logging and Auditing
Logging all incoming and outgoing requests — regardless of firewall rules — allows you to
detect intrusion attempts or, even worse, successful attacks that were previously
undetected. Historically, network administrators sometimes had to analyze audit logs to
determine how an attack succeeded. In those cases, administrators were able to apply
solutions to the vulnerabilities, learn how they were compromised, and discover other
vulnerabilities that existed.
Maintain healthy log cycling that allows quick data analysis. The more data you
have, the larger the log file size.
Make sure the firewall clock is synchronized with the other network hardware.
Perimeter Networks
A firewall should exist anywhere your servers interact with an untrusted network. If your
Web servers connect to a back-end network, such as a bank of database servers or
corporate network, a screen should exist to isolate the two networks. While the Web zone
has the greatest degree of exposure, a compromise in the Web zone should not result in
the compromise of downstream networks.
By default, the perimeter network should block all outbound connections except those that
are expected.
Network complexity
Switch Considerations
The following configuration categories are used to ensure secure switch configuration:
Insecure defaults
Services
Encryption
VLANs
Virtual LANs allow you to separate network segments and apply access control based on
security rules. Note, however, that although a VLAN enhances network performance, it does
not necessarily provide security. Limit the use of VLANs to the perimeter network (behind the firewall) since
many insecure interfaces exist for ease of administration. For more information about
VLANs, see the article "Configuring VLANS" on the Cisco Web site.
Insecure Defaults
To make sure that insecure defaults are secured, change all factory default passwords and
SNMP community strings to prevent network enumeration or total control of the switch. Also
investigate and identify potentially undocumented accounts and change the default names
and passwords. These types of accounts are often found on well-known switch types and
are well publicized and known by attackers.
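An audit for factory defaults can start from the published default credentials for your device types. The entries below are illustrative placeholders, not real device defaults:

```python
# Hypothetical published factory defaults for well-known device types.
KNOWN_DEFAULTS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("manager", "friend"),
}

def uses_factory_default(username: str, password: str) -> bool:
    """Flag a credential pair that matches a published factory default."""
    return (username.lower(), password) in KNOWN_DEFAULTS

def audit_accounts(accounts: dict[str, str]) -> list[str]:
    """Return the names of accounts still using factory-default passwords."""
    return [user for user, pwd in accounts.items()
            if uses_factory_default(user, pwd)]
```

The same approach applies to SNMP community strings: treat well-known values such as "public" and "private" as defaults that must be changed.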
Services
Make sure that all unused services are disabled. Also make sure that Trivial File Transfer
Protocol (TFTP) is disabled, Internet-facing administration points are removed, and ACLs
are configured to limit administrative access.
Encryption
Although it is not traditionally implemented at the switch, data encryption over the wire
ensures that sniffed packets are useless in cases where a monitor is placed on the same
switched segment or where the switch is compromised, allowing sniffing across segments.
Additional Considerations
The following considerations can further improve network security:
Ensure that clocks are synchronized on all network devices. Set the network time
and have all sources synchronized to a known, reliable time source.
Define an IP network that can be easily secured using ACLs at subnets or network
boundaries whenever possible.
Snapshot of a Secure Network
Table 15.3 provides a snapshot of the characteristics of a secure network. The security
settings are abstracted from industry security experts and real-world applications in secure
deployments. You can use the snapshot as a reference point when evaluating your own
solution.
Summary
This chapter has highlighted the top threats to your network infrastructure and has
presented security recommendations and secure configurations that enable you to address
these threats.
Additional Resources
For more information, see the following articles:
"Configuring VLANs" at
http://www.cisco.com/en/US/products/hw/switches/ps663/products
_configuration_guide_chapter09186a00800e47e1.html#1020847.
Chapter 16: Securing Your Web Server
In This Chapter
A proven methodology to secure Web servers
What makes a Web server secure? Part of the challenge of securing your Web server is
recognizing your goal. As soon as you know what a secure Web server is, you can learn
how to apply the configuration settings to create one. This chapter provides a systematic,
repeatable approach that you can use to successfully configure a secure Web server.
The chapter begins by reviewing the most common threats that affect Web servers. It then
uses this perspective to create a methodology. The chapter then puts the methodology into
practice, and takes a step-by-step approach that shows you how to improve your Web
server's security. While the basic methodology is reusable across technologies, the chapter
focuses on securing a Web server running the Microsoft Windows 2000 operating system
and hosting the Microsoft .NET Framework.
How to Use This Chapter
This chapter provides a methodology and the steps required to secure your Web server.
You can adapt the methodology for your own situation. The steps are modular and
demonstrate how you can put the methodology in practice. You can use these procedures
on existing Web servers or on new ones.
Read Chapter 2, "Threats and Countermeasures." This will give you a broader
understanding of potential threats to Web applications.
Use the Snapshot. The section "Snapshot of a Secure Web Server" lists and
explains the attributes of a secure Web server. It reflects input from a variety of
sources including customers, industry experts, and internal Microsoft development
and support teams. Use the snapshot table as a reference when configuring your
server.
Use the Checklist. "Checklist: Securing Your Web Server" in the "Checklist"
section of this guide provides a printable job aid for quick reference. Use the task-
based checklist to quickly evaluate the scope of the required steps and to help you
work through the individual steps.
Use the "How To" Section. The "How To" section in this guide includes the
following instructional articles:
Profiling
Denial of service
Unauthorized access
Elevation of privileges
Figure 16.1 summarizes the more prevalent attacks and common vulnerabilities.
Profiling
Profiling, or host enumeration, is an exploratory process used to gather information about
your Web site. An attacker uses this information to attack known weak points.
Vulnerabilities
Unnecessary protocols
Open ports
Attacks
Port scans
Ping sweeps
Countermeasures
Countermeasures include blocking all unnecessary ports, blocking Internet Control Message
Protocol (ICMP) traffic, and disabling unnecessary protocols such as NetBIOS and SMB.
Denial of Service
Denial of service attacks occur when your server is overwhelmed by service requests. The
threat is that your Web server will be too overwhelmed to respond to legitimate client
requests.
Vulnerabilities
Unpatched servers
Attacks
Buffer overflows
Countermeasures
Countermeasures include hardening the TCP/IP stack and consistently applying the latest
software patches and updates to system software.
Unauthorized Access
Unauthorized access occurs when a user without correct permissions gains access to
restricted information or performs a restricted operation.
Vulnerabilities
Countermeasures
Countermeasures include using secure Web permissions, NTFS permissions, and .NET
Framework access control mechanisms including URL authorization.
Arbitrary Code Execution
Code execution attacks occur when an attacker runs malicious code on your server either
to compromise server resources or to mount additional attacks against downstream
systems.
Vulnerabilities
Unpatched servers
Attacks
Path traversal
Countermeasures
Countermeasures include configuring IIS to reject URLs with "../" to prevent path traversal,
locking down system commands and utilities with restrictive access control lists (ACLs), and
installing new patches and updates.
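The path traversal countermeasure can be illustrated by canonicalizing the requested path before use and rejecting anything that escapes the Web root. The Web root path and the canonicalization approach below are illustrative; IIS and URLScan implement their own checks internally:

```python
import posixpath

WEB_ROOT = "/inetpub/wwwroot"  # assumed content directory

def is_safe_request(url_path: str) -> bool:
    """Reject URLs whose canonical form escapes the Web root."""
    # Canonicalize so that "/scripts/../../winnt" collapses before the
    # containment check, defeating simple ".." trickery.
    canonical = posixpath.normpath(
        posixpath.join(WEB_ROOT, url_path.lstrip("/")))
    return canonical == WEB_ROOT or canonical.startswith(WEB_ROOT + "/")
```

Checking the canonical form rather than scanning the raw string for ".." also catches requests that hide the traversal behind extra path segments.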
Elevation of Privileges
Elevation of privilege attacks occur when an attacker runs code by using a privileged
process account.
Vulnerabilities
Common vulnerabilities that make your Web server susceptible to elevation of privilege
attacks include:
Countermeasures
Countermeasures include running processes using least privileged accounts and using least
privileged service and user accounts.
Viruses, Worms, and Trojan Horses
Viruses. Programs that are designed to perform malicious acts and cause
disruption to an operating system or applications.
Worms. Programs that are self-replicating and self-sustaining.
Trojan horses. Programs that appear to be useful but that actually do damage.
In many cases, malicious code is unnoticed until it consumes system resources and slows
down or halts the execution of other programs. For example, the Code Red worm was one
of the most notorious to afflict IIS, and it relied upon a buffer overflow vulnerability in an
ISAPI filter.
Vulnerabilities
Common vulnerabilities that make you susceptible to viruses, worms, and Trojan horses
include:
Unpatched servers
Countermeasures
Countermeasures include the prompt application of the latest software patches, disabling
unused functionality such as unused ISAPI filters and extensions, and running processes
with least privileged accounts to reduce the scope of damage in the event of a compromise.
Methodology for Securing Your Web Server
To secure a Web server, you must apply many configuration settings to reduce the server's
vulnerability to attack. So, how do you know where to start, and when do you know that you
are done? The best approach is to organize the precautions you must take and the settings
you must configure into categories. Using categories allows you to systematically walk
through the securing process from top to bottom or pick a particular category and complete
specific steps.
Configuration Categories
The security methodology in this chapter has been organized into the categories shown in
Figure 16.2.
Patches and Updates
Many security threats are caused by vulnerabilities that are widely published and
well known. In many cases, when a new vulnerability is discovered, the code to
exploit it is posted on Internet bulletin boards within hours of the first successful
attack. If you do not patch and update your server, you provide opportunities for
attackers and malicious code. Patching and updating your server software is a
critical first step towards securing your Web server.
Services
Services are prime vulnerability points for attackers who can exploit the privileges
and capabilities of a service to access the local Web server or other downstream
servers. If a service is not necessary for your Web server's operation, do not run it
on your server. If the service is necessary, secure it and maintain it. Consider
monitoring any service to ensure availability. If your service software is not secure,
but you need the service, try to find a secure alternative.
Protocols
Avoid using protocols that are inherently insecure. If you cannot avoid using these
protocols, take the appropriate measures to provide secure authentication and
communication, for example, by using IPSec policies. Examples of insecure, clear
text protocols are Telnet, Post Office Protocol (POP3), Simple Mail Transfer
Protocol (SMTP), and File Transfer Protocol (FTP).
Accounts
Accounts grant authenticated access to your computer, and these accounts must
be audited. What is the purpose of the user account? How much access does it
have? Is it a common account that can be targeted for attack? Is it a service
account that can be compromised and must therefore be contained? Configure
accounts with least privilege to help prevent elevation of privilege. Remove any
accounts that you do not need. Slow down brute force and dictionary attacks with
strong password policies, and then audit and alert for logon failures.
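A strong password policy of the kind recommended above can be captured as a simple rule set. The length and character-class thresholds here are illustrative assumptions, not prescriptions from this guide:

```python
import string

def meets_policy(password: str, min_length: int = 8) -> bool:
    """Require minimum length plus a mix of character classes to slow
    brute force and dictionary attacks."""
    classes = [
        any(c in string.ascii_lowercase for c in password),
        any(c in string.ascii_uppercase for c in password),
        any(c in string.digits for c in password),
        any(c in string.punctuation for c in password),
    ]
    # At least three of the four character classes must be present.
    return len(password) >= min_length and sum(classes) >= 3
```

In Windows, the equivalent rules are enforced through the account policy settings (minimum length, complexity requirements, and lockout thresholds) rather than application code.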
Files and Directories
Secure all files and directories with restricted NTFS permissions that only allow
access to necessary Windows services and user accounts. Use Windows auditing
to allow you to detect when suspicious or unauthorized activity occurs.
Shares
Remove all unnecessary file shares including the default administration shares if
they are not required. Secure any remaining shares with restricted NTFS
permissions. Although shares may not be directly exposed to the Internet, a
defense strategy — with limited and secured shares — reduces risk if a server is
compromised.
Ports
Services that run on the server listen to specific ports so that they can respond to
incoming requests. Audit the ports on your server regularly to ensure that an
insecure or unnecessary service is not active on your Web server. If you detect an
active port that was not opened by an administrator, this is a sure sign of
unauthorized access and a security compromise.
Registry
Many security-related settings are stored in the registry and as a result, you must
secure the registry. You can do this by applying restricted Windows ACLs and by
blocking remote registry administration.
Auditing and Logging
Auditing is one of your most important tools for identifying intruders, attacks in
progress, and evidence of attacks that have occurred. Use a combination of
Windows and IIS auditing features to configure auditing on your Web server. Event
and system logs also help you to troubleshoot security problems.
Sites and Virtual Directories
Sites and virtual directories are directly exposed to the Internet. Even though
secure firewall configuration and defensive ISAPI filters such as URLScan (which
ships with the IISLockdown tool) can block requests for restricted configuration files
or program executables, a defense in depth strategy is recommended. Relocate
sites and virtual directories to non-system partitions and use IIS Web permissions
to further restrict access.
Script Mappings
Remove all unnecessary IIS script mappings for optional file extensions to prevent
an attacker from exploiting any bugs in the ISAPI extensions that handle these
types of files. Unused extension mappings are often overlooked and represent a
major security vulnerability.
ISAPI Filters
Remove unused ISAPI filters so that an attacker cannot exploit vulnerabilities in
filters that your applications do not need.
IIS Metabase
The IIS metabase maintains IIS configuration settings. You must be sure that the
security related settings are appropriately configured, and that access to the
metabase file is restricted with hardened NTFS permissions.
Machine.config
Restrict code access security policy settings to ensure that code downloaded from
the Internet or intranet has no permissions and as a result is not allowed to
execute.
IIS and .NET Framework Installation Considerations
Before you can secure your Web server, you need to know which components are present
on a Windows 2000 server after IIS and the .NET Framework are installed. This section
explains which components are installed.
When you install the .NET Framework on a server that hosts IIS, the .NET Framework
registers ASP.NET. As part of this process, a local, least privileged account named
ASPNET is created. This account runs the ASP.NET worker process (aspnet_wp.exe) and the
session state service (aspnet_state.exe), which can be used to manage user session state.
Note: On server computers running Windows 2000 and IIS 5.0, all ASP.NET Web
applications run in a single instance of the ASP.NET worker process, and
application domains provide isolation. On Windows Server 2003, IIS 6.0 provides
process-level isolation through the use of application pools.
Table 16.2 shows the services, accounts, and folders that are created by a default
installation of version 1.1 of the .NET Framework.
2. Apply the latest service packs and patches to the operating system. (If you are
configuring more than one server, see "Including Service Packs with a Base
Installation," later in this section.)
If you do not need the following services, do not install them when you install IIS:
NNTP Service
SMTP Service
2. Extract Update.exe from the service pack by launching the service pack setup
with the -x option, as follows:
w2ksp3.exe -x
3. Integrate the service pack with your Windows installation source, by running
update.exe with the -s option, passing the folder path of your Windows installation
as follows:
update.exe -s c:\YourWindowsInstallationSource
For more information, see the MSDN article, "Customizing Unattended Win2K Installations"
at http://msdn.microsoft.com/library/default.asp?url=/library/en-
us/dnw2kmag01/html/custominstall.asp.
Steps for Securing Your Web Server
The next sections guide you through the process of securing your Web server. These
sections use the configuration categories introduced in the "Methodology for Securing Your
Web Server" section of this chapter. Each high-level step contains one or more actions to
secure a particular area or feature.
Use the Microsoft Baseline Security Analyzer (MBSA) to detect the patches and updates
that may be missing from your current installation. MBSA compares your installation to a list
of currently available updates maintained in an XML file. MBSA can download the XML file
when it scans your server or you can manually download the file to the server or make it
available on a network server.
If you do not have Internet access when you run MBSA, MBSA cannot retrieve the
XML file that contains the latest security settings from Microsoft. You can use
another computer to download the XML file, however. Then you can copy it into
the MBSA program directory. The XML file is available from
http://download.microsoft.com/download/xml/security/1.0/nt5/en-us/mssecure.cab.
2. Run MBSA by double-clicking the desktop icon or selecting it from the Programs
menu.
4. Clear all check boxes apart from Check for security updates. This option
detects which patches and updates are missing.
5. Click Start scan. Your server is now analyzed. When the scan is complete, MBSA
displays a security report, which it also writes to the
%userprofile%\SecurityScans directory.
6. Download and install the missing updates.
Click the Result details link next to each failed check to view the list of security
updates that are missing. The resulting dialog box displays the Microsoft security
bulletin reference number. Click the reference to find out more about the bulletin
and to download the update.
For more information on using MBSA, see "How To: Use Microsoft Baseline Security
Analyzer" in the "How To" section of this guide.
2. Compare the installed version of the .NET Framework to the current service pack.
To do this, use the .NET Framework versions listed in Microsoft Knowledge Base
article 318836, "INFO: How to Obtain the Latest .NET Framework Service Pack."
Step 2. IISLockdown
The IISLockdown tool helps you to automate certain security steps. IISLockdown greatly
reduces the vulnerability of a Windows 2000 Web server. It allows you to pick a specific
type of server role, and then use custom templates to improve security for that particular
server. The templates either disable or secure various features. In addition, IISLockdown
installs the URLScan ISAPI filter. URLScan allows Web site administrators to restrict the
kind of HTTP requests that the server can process, based on a set of rules that the
administrator controls. By blocking specific HTTP requests, the URLScan filter prevents
potentially harmful requests from reaching the server and causing damage.
Save IISlockd.exe in a local folder. IISlockd.exe is the IISLockdown wizard and not an
installation program. You can reverse any changes made by IISLockdown by running
IISlockd.exe a second time.
If you are locking down a Windows 2000-based computer that hosts ASP.NET pages,
select the Dynamic Web server template when the IISLockdown tool prompts you. When
you select Dynamic Web server, IISLockdown does the following:
It disables script mappings by mapping the following file extensions to the 404.dll:
Index Server
It removes the following virtual directories: IIS Samples, MSADC, IISHelp, Scripts,
and IISAdmin.
Note: If you host ASP.NET pages, do not use the Static Web server template. This
template removes basic functionality that ASP.NET pages need, such as support for
the POST command.
Log Files
IISLockdown creates two reports that list the changes it has applied:
The 404.dll
IISLockdown installs the 404.dll, to which you can map file extensions that must not be run
by the client. For more information, see "Step 12. Script Mappings."
URLScan
If you install the URLScan ISAPI filter as part of IISLockdown, URLScan settings
are integrated with the server role you select when running IISLockdown. For
example, if you select a static Web server, URLScan blocks the POST command.
More Information
See the following articles for more information about the IISLockdown tool:
For more information on running IISLockdown, see "How To: Use IISLockdown.exe"
in the "How To" section of this guide.
URLScan is installed when you run IISLockdown, although you can download it and install it
separately.
iislockd.exe /q /c
URLScan blocks requests that contain unsafe characters (for example, characters that have
been used to exploit vulnerabilities, such as ".." used for directory traversal). URLScan logs
requests that contain these characters in the %windir%\system32\inetsrv\urlscan directory.
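A simplified version of this kind of character screening can be sketched as follows. The deny list is a small illustrative subset, not URLScan's actual rule set, and a real filter would log rejected requests as well:

```python
# Sequences with a history of abuse in request URLs (illustrative subset
# modeled loosely on URLScan's deny-sequence idea).
DENY_SEQUENCES = ("..", "./", "\\", "%")

def screen_url(url: str) -> bool:
    """Return True if the URL is clean, False if the request should be
    rejected before it reaches the Web server."""
    return not any(sequence in url for sequence in DENY_SEQUENCES)
```

Note that a blocklist like this complements, but does not replace, canonicalization checks in the server itself, since attackers routinely encode traversal sequences to slip past literal matching.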
For more information about how to remove ISAPI filters, see "Step 13. ISAPI Filters," later
in this chapter.
More Information
See the following articles for more information about the URLScan tool:
For information on running URLScan, see "How To: Use URLScan" in the "How To"
section of this guide.
For information about URLScan configuration and the URLScan.ini file settings, see
Microsoft Knowledge Base article 326444, "How To: Configure the URLScan Tool."
Step 3. Services
Services that do not authenticate clients, services that use insecure protocols, or services
that run with too much privilege are risks. If you do not need them, do not run them. By
disabling unnecessary services you quickly and easily reduce the attack surface. You also
reduce your overhead in terms of maintenance (patches, service accounts, and so on).
If you run a service, make sure that it is secure and maintained. To do so, run the service
using a least privilege account, and keep the service current by applying patches.
Note: Before you disable a service, make sure that you first test the impact in a test or staging environment.
In most cases, the following default Windows services are not needed on a Web server:
Alerter, Browser, Messenger, Netlogon (required only for domain controllers), Simple
TCP/IP Services, and Spooler.
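The services listed above can be disabled from the command line. The following commands are a sketch: the service names assume a default Windows 2000 installation, and sc.exe is assumed to be available (it ships with the Windows 2000 Resource Kit and with later Windows versions):

```
rem Stop each unneeded service and prevent it from starting automatically
net stop Alerter
sc config Alerter start= disabled
net stop Messenger
sc config Messenger start= disabled
net stop Spooler
sc config Spooler start= disabled
```

Note the space after "start=" in the sc syntax. Repeat the pattern for any other service you have confirmed is unnecessary in your test environment.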
The Telnet service is installed with Windows, but it is not enabled by default. IIS
administrators often enable Telnet. However, it is an insecure protocol susceptible to
exploitation. Terminal Services provides a more secure remote administration option. For
more information about remote administration, see "Remote Administration," later in this
chapter.
To eliminate the possibility of FTP exploitation, disable the FTP service if you do not use it.
If FTP is enabled and is available for outbound connections, an attacker can use FTP to
upload files and tools to a Web server from the attacker's remote system. Once the tools
and files are on your Web server, the attacker can attack the Web server or other
connected systems.
If you use the FTP protocol, neither the user name and password you use to access the FTP site nor the data you transfer is encrypted. IIS does not support SSL for FTP. If secure communications are important and you use FTP as your transfer protocol (rather than World Wide Web Distributed Authoring and Versioning (WebDAV) over SSL), consider using FTP over an encrypted channel, such as a Virtual Private Network (VPN) secured with Point-to-Point Tunneling Protocol (PPTP) or Internet Protocol Security (IPSec).
WebDAV is preferable to FTP from a security perspective, but you need to secure
WebDAV. For more information, see Microsoft Knowledge Base article 323470, "How To:
Create a Secure WebDAV Publishing Directory."
If you do not need WebDAV, see Microsoft Knowledge Base article 241520, "How To:
Disable WebDAV for IIS 5.0."
For information about how to harden the TCP/IP stack, see "How To: Harden the TCP/IP Stack" in the "How To" section of this guide.
Disabling NetBIOS
NetBIOS uses the following ports:
TCP and User Datagram Protocol (UDP) port 137 (NetBIOS name service)
UDP port 138 (NetBIOS datagram service)
TCP port 139 (NetBIOS session service)
Note: This procedure disables the Nbt.sys driver and requires that you restart the system.
1. Right-click My Computer on the desktop, and click Manage.
3. Right-click Device Manager, point to View, and click Show hidden devices.
This disables the NetBIOS direct host listener on TCP 445 and UDP 445.
Disabling SMB
SMB uses the following ports:
TCP port 139
TCP port 445
To disable SMB, use the TCP/IP properties dialog box in your Local Area Connection properties to unbind SMB from the Internet-facing port.
4. Clear the File and Printer Sharing for Microsoft Networks box.
Note: The WINS tab of the Advanced TCP/IP Settings dialog box contains a Disable NetBIOS over TCP/IP radio button. Selecting this option disables the NetBIOS session service that uses TCP port 139. It does not disable SMB completely. To do so, use the procedure above.
Step 5. Accounts
You should remove accounts that are not used because an attacker might discover and use
them. Require strong passwords. Weak passwords increase the likelihood of a successful
brute force or dictionary attack. Use least privilege. An attacker can use accounts with too
much privilege to gain access to unauthorized resources.
Note: The Administrator account and the Guest account cannot be deleted.
The default local Administrator account is a target for malicious use because of its elevated
privileges on the computer. To improve security, rename the default Administrator account
and assign it a strong password.
If you intend to perform local administration, configure the account to deny network logon
rights and require the administrator to log on interactively. By doing so, you prevent users
(well intentioned or otherwise) from using the Administrator account to log on to the server
from a remote location. If a policy of local administration is too inflexible, implement a
secure remote administration solution. For more information, see "Remote Administration"
later in this chapter.
Disable the default anonymous Internet user account, IUSR_MACHINE. This account is created during IIS installation. MACHINE is the NetBIOS name of your server at IIS installation time.
If your applications support anonymous access (for example, because they use a custom
authentication mechanism such as Forms authentication), create a custom least privileged
anonymous account. If you run IISLockdown, add your custom user to the Web Anonymous
Users group that is created. IISLockdown denies access to system utilities and the ability to
write to Web content directories for the Web Anonymous Users group.
If your Web server hosts multiple Web applications, you may want to use multiple
anonymous accounts, one per application, so that you can secure and audit the operations
of each application independently.
For more information about hosting multiple Web applications, see Chapter 20, "Hosting Multiple Web Applications."
To counter password guessing and brute force dictionary attacks on your application, apply
strong password policies. To enforce a strong password policy:
Set password length and complexity. Require strong passwords to reduce the
threat of password guessing attacks or dictionary attacks. Strong passwords are
eight or more characters and must include both alphabetical and numeric
characters.
Set password expiration. Passwords that expire regularly reduce the likelihood
that an old password can be used for unauthorized access. Frequency of expiration
is usually guided by a company's security policy.
Table 16.3 shows the default and recommended password policy settings.
In addition, record failed logon attempts so that you can detect and trace malicious
behavior. For more information, see "Step 10. Auditing and Logging."
Remove the Access this computer from the network privilege from the Everyone group
to restrict who can log on to the server remotely.
A null session is an anonymous (unauthenticated) connection to the server. Once an attacker establishes a null session, he or she can perform a variety of attacks, including enumeration techniques used to collect system-related information from the target computer — information that can greatly assist subsequent attacks. The type of information that can be returned over a null session includes domain and trust details, shares, user information (including groups and user rights), registry keys, and more.
Restrict Null sessions by setting RestrictAnonymous to 1 in the registry at the following
subkey:
HKLM\System\CurrentControlSet\Control\LSA\RestrictAnonymous=1
For more information, see Microsoft Knowledge Base article 246261, "How To: Use the
RestrictAnonymous Registry Value in Windows 2000."
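A registry (.reg) script that applies this setting might look like the following sketch; back up the registry before merging a .reg file:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa]
"RestrictAnonymous"=dword:00000001
```

Windows 2000 also accepts a value of 2, which is more restrictive but can break down-level clients and some services, so test before deploying it.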
Additional Considerations
The following is a list of additional steps you can consider to further improve security on
your Web server:
Do not mark domain accounts in Active Directory as trusted for delegation unless
you first obtain special approval to do so.
Do not create shared accounts for use by multiple individuals. Authorized individuals should have their own accounts so that their activities can be audited separately and group membership and privileges can be assigned appropriately.
Try to limit administration accounts to two. This helps provide accountability. Also,
passwords must not be shared, again to provide accountability.
If you perform local administration only, you can require your Administrator account
to log on interactively by removing the Access this computer from the network
privilege.
Step 6. Files and Directories
Install Windows 2000 on partitions formatted with the NTFS file system so that you benefit
from NTFS permissions to restrict access. Use strong access controls to protect sensitive
files and directories. In most situations, an approach that allows access to specific
accounts is more effective than one that denies access to specific accounts. Set access at
the directory level whenever possible. As files are added to the folder they inherit
permissions from the folder, so you need to take no further action.
First grant the Administrator account Full Control of the root (\), and then remove access rights for the Everyone group from the following directories:
Root (\)
Web site root directory and all content directories (the default is \inetpub\*)
Make sure that the anonymous Internet account cannot write to content directories, for example, to deface Web sites.
Restrict access to System tools.
If your Web server hosts multiple applications, use a separate anonymous account
for each application. Add the accounts to an anonymous Web users group, for
example, the Web Anonymous Users group created by IISLockdown, and then
configure NTFS permissions using this group.
For more information about using multiple anonymous accounts and hosting multiple applications, see Chapter 20, "Hosting Multiple Web Applications."
SDKs and resource kits should not be installed on a production Web server. Remove them
if they are present.
Ensure that only the .NET Framework Redistributable package is installed on the
server and no SDK utilities are installed. Do not install Visual Studio .NET on
production servers.
Ensure that access to powerful system tools and utilities, such as those contained
in the \Program Files directory, is restricted. IISLockdown does this for you.
Debugging tools should not be available on the Web server. If production debugging
is necessary, then you should create a CD that contains the necessary debugging
tools.
Additional Considerations
Also consider removing unnecessary Data Source Names (DSNs). These contain clear text
connection details used by applications to connect to OLE DB data sources. Only those
DSNs required by Web applications should be installed on the Web server.
Step 7. Shares
Remove any unused shares and harden the NTFS permissions on any essential shares. By
default all users have full control on newly created file shares. Harden these default
permissions to ensure that only authorized users can access files exposed by the share. In
addition to explicit share permissions, use NTFS ACLs for files and folders exposed by the
share.
Additional Considerations
If you do not allow remote administration of your server, remove unused administrative
shares, for example C$ and Admin$.
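Deleting C$ and Admin$ with the net share command does not persist across restarts, because the Server service re-creates them. The following .reg sketch prevents that by setting AutoShareServer to 0 (this value applies to server versions of Windows; verify it for your platform, and back up the registry first):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters]
"AutoShareServer"=dword:00000000
```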
Step 8. Ports
Limit inbound traffic to port 80 for HTTP and port 443 for HTTPS (SSL).
For outbound (Internet-facing) NICs, use IPSec or TCP filtering. For more information, see
"How To: Use IPSec" in the "How To" section of this guide.
The type of encryption used also affects the types of threats that it addresses. For example, SSL is application-level encryption, whereas IPSec is transport-layer encryption. As a result, SSL counters the threat of data tampering or information disclosure by another process on the same machine (particularly one running under a different account), in addition to the network eavesdropping threat.
Step 9. Registry
The registry is the repository for many vital server configuration settings. As such, you must
ensure that only authorized administrators have access to it. If an attacker is able to edit
the registry, he or she can reconfigure and compromise the security of your server.
The Winreg key determines whether registry keys are available for remote access. By
default, this key is configured to prevent users from remotely viewing most keys in the
registry, and only highly privileged users can modify it. On Windows 2000, remote registry access is restricted by default to members of the Administrators and Backup Operators groups. Administrators have full control and Backup Operators have read-only access.
The associated permissions at the following registry location determine who can remotely
access the registry.
HKLM\SYSTEM\CurrentControlSet\Control\SecurePipeServers\winreg
To view the permissions for this registry key, run Regedt32.exe, navigate to the key, and
choose Permissions from the Security menu.
Although the passwords are not actually stored in the SAM and password hashes are not
reversible, if an attacker obtains a copy of the SAM database, the attacker can use brute
force password techniques to obtain valid user names and passwords.
Restrict LMHash storage in the SAM by creating the key (not value) NoLMHash in the
registry as follows:
HKLM\System\CurrentControlSet\Control\LSA\NoLMHash
For more information, see Microsoft Knowledge Base article 299656, "New Registry Key to
Remove LM Hashes from Active Directory and Security Account Manager."
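Because NoLMHash is a subkey rather than a value, a .reg script creates it with an empty key entry. A minimal sketch (Windows 2000 SP2 or later is assumed; back up the registry first):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\NoLMHash]
```

Note that existing LM hashes are not removed until each account's password is next changed.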
Step 10. Auditing and Logging
Auditing does not prevent system attacks, although it is an important aid in identifying
intruders and attacks in progress, and can assist you in diagnosing attack footprints. Enable
a minimum level of auditing on your Web server and use NTFS permissions to protect the
log files so that an attacker cannot cover his tracks by deleting or updating the log files in
any way. Use IIS W3C Extended Log File Format Auditing.
Logon failures are recorded as events in the Windows security event log. The following
event IDs are suspicious:
531. This means an attempt was made to log on using a disabled account.
529. This means an attempt was made to log on using an unknown user account or
using a valid user account but with an invalid password. An unexpected increase in
the number of these audit events might indicate an attempt to guess passwords.
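As a hedged illustration, the following Python sketch counts these suspicious event IDs in log records that have already been exported; the (timestamp, event_id, account) tuple layout is an assumed export format, not a Windows API:

```python
# Count suspicious logon-failure events (IDs 529 and 531) in exported
# security log records. The (timestamp, event_id, account) tuple layout
# is an assumed export format used for illustration.
from collections import Counter

SUSPICIOUS_EVENT_IDS = {"529", "531"}

def count_suspicious_events(records):
    """Return a Counter of suspicious event IDs found in the records."""
    return Counter(event_id for _, event_id, _ in records
                   if event_id in SUSPICIOUS_EVENT_IDS)

sample = [
    ("2003-01-01 10:00", "529", "guest"),    # bad password or unknown user
    ("2003-01-01 10:01", "529", "guest"),
    ("2003-01-01 10:02", "531", "olduser"),  # disabled account
    ("2003-01-01 10:03", "528", "admin"),    # successful logon, ignored
]
print(count_suspicious_events(sample))  # Counter({'529': 2, '531': 1})
```

An unexpected spike in the 529 count is the pattern described above as a possible password-guessing attempt.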
6. Click OK and then select all of the Failed check boxes to audit all failed events.
By default, this applies to the current folder and all subfolders and files.
Failed audit events are logged to the Windows security event log.
By moving and renaming the IIS log files, you make it much more difficult for an attacker to
cover his tracks. The attacker must locate the log files before he or she can alter them. To
make an attacker's task more difficult still, use NTFS permissions to secure the log files.
Move and rename the IIS log file directory to a different volume than your Web site. Do not
use the system volume. Then, apply the following NTFS permissions to the log files folder
and subfolders.
Additional Considerations
Additionally, you can configure IIS W3C Extended Log File Format Auditing. Select W3C
Extended Log File Format on the Web Site tab of the Web site's properties dialog box.
You can then choose Extended Properties such as URI Stem and URI Query.
Step 11. Sites and Virtual Directories
Relocate Web roots and virtual directories to a non-system partition to protect against directory traversal attacks, in which an attacker navigates the directory hierarchy to execute operating system programs and utilities. Because it is not possible to traverse across drives, this approach ensures that any future canonicalization worm that allows an attacker to access system files will fail. For example, if the attacker formulates a URL that contains the following path, the request fails:
/scripts/..%5c../winnt/system32/cmd.exe
4. Click Configuration.
Remove the following virtual directories from production servers: IISSamples, IISAdmin,
IISHelp, and Scripts.
Removing RDS
If your applications do not use RDS, remove it.
HKLM\System\CurrentControlSet\Services\W3SVC\Parameters\ADCLaunch
Securing RDS
If your applications require RDS, secure it.
HKLM\System\CurrentControlSet\Services\W3SVC\Parameters\ADCLaunch\Vb
HKLM\Software\Microsoft\DataFactory\HandlerInfo\
5. Create a new DWORD value, and set it to 1 (1 indicates safe mode, while 0 indicates unsafe mode).
Note: You can use the registry script file Handsafe.reg to change the registry key. The script file is located in the msadc directory: \Program Files\Common Files\System\msadc
Microsoft Knowledge Base article 184375, "PRB: Security Implications of RDS 1.5,
IIS 3.0 or 4.0, and ODBC."
Script source access. Configure Script source access permissions only on folders
that allow content authoring.
Write. Configure Write permissions only on folders that allow content authoring.
Grant write access only to content authors.
If you do not use FrontPage Server Extensions (FPSE), disable them. If you use FPSE, take the following steps to improve security:
Upgrade server extensions. See the security issues covered in the MSDN article, "Microsoft FrontPage Server Extensions 2002 for Windows" at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnservext/html/fpse02win.asp.
Restrict access using FrontPage security. FPSE installs groups that are granted
permissions to those Web sites for which the server extensions are configured.
These groups are used to restrict the access available based on the role of the
user. For more information, see the Assistance Center at
http://office.microsoft.com/assistance/2002/articles/fp_colmanagesecurity.aspx.
Step 12. Script Mappings
Script mappings associate a particular file extension, such as .asp, to the ISAPI extension
that handles it, such as Asp.dll. IIS is configured to support a range of extensions including
.asp, .shtm, .idc, and so on. ASP.NET HTTP handlers are a rough equivalent of ISAPI
extensions. In IIS, file extensions, such as .aspx, are first mapped in IIS to Aspnet_isapi.dll,
which forwards the request to the ASP.NET worker process. The actual HTTP handler that
processes the file extension is then determined by the <httpHandlers> mapping in
Machine.config or Web.config.
A file can be downloaded or its contents disclosed when its extension is not mapped correctly. Files that should not be directly accessible by clients should either be mapped to the appropriate handler, based on their extension, or removed.
If you do not use a given extension, map it to the 404.dll, which is provided by IISLockdown. For example, if you do not want to serve ASP pages to clients, map .asp to the 404.dll.
The mappings altered by IISLockdown depend on the server template that you choose:
Static Web Server. If you run IISLockdown and choose the Static Web server
option, then all of the above extensions are mapped to the 404.dll.
Dynamic Web Server. If you choose the Dynamic Web server option, which is the
preferred option when serving ASP.NET pages, then .htr, .idc, .shtm, .shtml, .stm,
and .printer are mapped to the 404.dll, while .asp, .cer, .cdx, and .asa are not. In
this case, you should manually map .cer, .cdx, and .asa to the 404.dll. If you are not
serving .asp, then you can map that as well.
2. Right-click your server name in the left window, and then click Properties.
6. Select one of the extensions from the list, and then click Edit.
Note: This step assumes that you have previously run IISLockd.exe, because the 404.dll is installed by the IISLockdown tool.
8. Click Open, and then click OK.
The .NET Framework protects file extensions that should not be directly called by clients by
associating them with System.Web.HttpForbiddenHandler in Machine.config. The
following file extensions are mapped to System.Web.HttpForbiddenHandler by default:
.asax, .ascx, .config, .cs, .csproj, .vb, .vbproj, .webinfo, .asp, .licx, .resx, and .resources.
Additional Considerations
Because IIS processes a Web request first, you can map .NET Framework file extensions that you do not want clients to call to the 404.dll in IIS. This accomplishes two things:
The 404.dll handles and rejects requests before they are passed to ASP.NET and
before they are processed by the ASP.NET worker process. This eliminates
unnecessary processing by the ASP.NET worker process. Moreover, blocking
requests early is a good security practice.
The 404.dll returns the message "HTTP 404 - File not found" and
System.Web.HttpForbiddenHandler returns the message "This type of page is
not served." Arguably, the "File not found" message reveals less information and
thus could be considered more secure.
Step 13. ISAPI Filters
In the past, vulnerabilities in ISAPI filters caused significant IIS exploitation. There are no
unneeded ISAPI filters after a clean IIS installation, although the .NET Framework installs
the ASP.NET ISAPI filter (Aspnet_filter.dll), which is loaded into the IIS process address
space (Inetinfo.exe) and is used to support cookie-less session state management.
If your applications do not need to support cookie-less session state and they do not set
the cookieless attribute to true on the <sessionState> element, this filter can be
removed.
2. Right-click the machine (not Web site, because filters are machine wide), and then
click Properties.
3. Click Edit.
Step 14. IIS Metabase
Set the following NTFS permissions on the IIS metabase file (Metabase.bin) in the \WINNT\system32\inetsrv directory.
When you retrieve a static page, for example, an .htm or a .gif file, a content location header is added to the response. By default, this content location header references the IP address of the server rather than the fully qualified domain name (FQDN). This means that your internal IP address is unwittingly exposed. For example, the following HTTP response includes the IP address in the Content-Location header:
HTTP/1.1 200 OK
Server: Microsoft-IIS/5.0
Content-Location: http://10.1.1.1/Default.htm
Date: Thu, 18 Feb 1999
Content-Type: text/html
Accept-Ranges: bytes
Last-Modified: Wed, 06 Jan 1999 18:56:06 GMT
ETag: "067d136a639be1:15b6"
Content-Length: 4325
You can hide the content location returned in HTTP response headers by modifying a value
in the IIS metabase to change the default behavior from exposing IP addresses, to sending
the FQDN instead.
For more information about hiding the content location in HTTP responses, see Microsoft
Knowledge Base article 218180, "Internet Information Server Returns IP Address in HTTP
Header (Content-Location)."
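The fix described in that article sets the metabase UseHostName property so that IIS returns the FQDN instead of the IP address. A command sketch using the Adsutil.vbs administration script (the default \inetpub\adminscripts path is assumed), followed by an IIS restart so the change takes effect:

```
cd \inetpub\adminscripts
cscript adsutil.vbs set w3svc/UseHostName True
net stop iisadmin /y
net start w3svc
```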
Step 15. Server Certificates
If your Web application supports HTTPS (SSL) over port 443, you must install a server
certificate. This is required as part of the session negotiation process that occurs when a
client establishes a secure HTTPS session.
A valid certificate provides secure authentication so that a client can trust the server it is
communicating with, and secure communication so that sensitive data remains confidential
and tamperproof over the network.
Check that the valid from and valid to dates are in range.
Check that the certificate is being used correctly. If it was issued as a server
certificate it should not be used for e-mail.
Check that the public keys in the certificate chain are all valid up to a trusted root.
Check that it has not been revoked. It must not be on a Certificate Revocation List
(CRL) from the server that issued the certificate.
Step 16. Machine.Config
This section covers hardening information about machine-level settings that apply to all applications. For application-specific hardening settings, see Chapter 19, "Securing Your ASP.NET Application and Web Services."
The Machine.config file maintains numerous machine wide settings for the .NET
Framework, many of which affect security. Machine.config is located in the following
directory:
%windir%\Microsoft.NET\Framework\{version}\CONFIG
Note: You can use any text or XML editor (Notepad, for example) to edit XML configuration files. XML tags are case sensitive, so be sure to use the correct case.
HTTP handlers are located in Machine.config beneath the <httpHandlers> element. HTTP
handlers are responsible for processing Web requests for specific file extensions. Remoting
should not be enabled on front-end Web servers; enable remoting only on middle-tier
application servers that are isolated from the Internet.
.asax, .ascx, .config, .cs, .csproj, .vb, .vbproj, .webinfo, .asp, .licx, .resx, and
.resources are protected resources and are mapped to
System.Web.HttpForbiddenHandler.
For .NET Framework resources, if you do not use a particular file extension, map the extension to System.Web.HttpForbiddenHandler in Machine.config, as shown in the following example:
<add verb="*" path="*.vbproj" type="System.Web.HttpForbiddenHandler"/>
To disable .NET Remoting, disable requests for the .rem and .soap extensions by using the following elements beneath <httpHandlers>:
<add verb="*" path="*.rem" type="System.Web.HttpForbiddenHandler"/>
<add verb="*" path="*.soap" type="System.Web.HttpForbiddenHandler"/>
This does not prevent a Web application on the Web server from connecting to a
Note downstream object by using the Remoting infrastructure. However, it prevents
clients from connecting to objects on the Web server.
Make sure that tracing is disabled on production servers by setting enabled="false" on the <trace> element. If you need to trace problems with live applications, simulate the problem in a test environment, or, if necessary, enable tracing and set localOnly="true" to prevent trace details from being returned to remote clients.
Verify That Debug Compiles Are Disabled
You can control whether or not the compiler produces debug builds that include debug
symbols by using the <compilation> element. To turn off debug compiles, set
debug="false" as shown below:
<compilation debug="false" explicit="true" defaultLanguage="vb" />
You can use the <customErrors> element to configure custom, generic error messages
that should be returned to the client in the event of an application exception condition.
Make sure that the mode attribute is set to "RemoteOnly" as shown in the following
example:
<customErrors mode="RemoteOnly" />
After installing an ASP.NET application, you can configure the setting to point to your
custom error page as shown in the following example:
<customErrors mode="On" defaultRedirect="YourErrorPage.htm" />
If you do not use session state, verify that session state is disabled in Machine.config as
shown in the following example:
<sessionState mode="Off" . . . />
Also, ensure that the ASP.NET State Service is disabled. The default session state mode is
"InProc" and the ASP.NET State Service is set to manual. For more information about
securing session state if you install an ASP.NET application that requires it, see "Session
State," in Chapter 19, "Securing Your ASP.NET Application and Web Services."
Step 17. Code Access Security
Machine-level code access security policy is determined by settings in the Security.config file, which is located in the following directory:
%windir%\Microsoft.NET\Framework\{version}\CONFIG
Run the following command to be sure that code access security is enabled on your server:
caspol -s On
For more information about configuring code access security for ASP.NET Web
applications, see Chapter 9, "Using Code Access Security with ASP.NET."
2. Expand Runtime Security Policy, expand Machine, and then expand Code
Groups.
7. Click OK.
Repeat the steps shown in the preceding section, "Remove All Permissions for the Local
Intranet Zone," except set the Internet_Zone to the Nothing permission set.
Snapshot of a Secure Web Server
A snapshot view that shows the attributes of a secure Web server allows you to quickly and
easily compare settings with your own Web server. The settings shown in Table 16.4 are
based on Web servers that host Web sites that have proven to be very resilient to attack
and demonstrate sound security practices. By following the preceding steps, you can generate an identically configured server with regard to security.
Set up a schedule to analyze your server software and subscribe to security alerts. Use
MBSA to regularly scan your server for missing patches. The following links provide the
latest updates:
Windows 2000 service packs. The latest service packs are listed at
http://www.microsoft.com/windows2000/downloads/servicepacks/default.asp.
.NET Framework Service Pack. For information about how to obtain the latest
.NET Framework updates, see the MSDN article, "How to Get the Microsoft .NET
Framework" at http://msdn.microsoft.com/netframework/downloads/howtoget.asp.
Critical Updates. These updates help to resolve known issues and help protect
your computer from known security vulnerabilities. For the latest critical updates,
see "Critical Updates" at
http://www.microsoft.com/windows2000/downloads/critical/default.asp
Advanced Security Updates. For additional security updates, see "Advanced
Security Updates" at
http://www.microsoft.com/windows2000/downloads/security/default.asp.
These also help protect your computer from known security vulnerabilities.
Use MBSA to regularly check for security vulnerabilities and to identify missing patches and
updates. Schedule MBSA to run daily and analyze the results to take action as needed. For
more information about automating MBSA, see "How To: Use MBSA" in the "How To"
section of this guide.
Use the Microsoft services listed in Table 16.5 to obtain security bulletins with notifications
of possible system vulnerabilities.
Additionally, subscribe to the industry security alert services shown in Table 16.6. This
allows you to assess the threat of a vulnerability where a patch is not yet available.
NTBugtraq (http://www.ntbugtraq.com/default.asp?pid=31&sid=1-020). This is an open discussion of Windows security vulnerabilities and exploits. Vulnerabilities that currently have no patch are discussed.
Remote Administration
Administrators often need to be able to administer multiple servers. Make sure the
requirements of your remote administration solution do not compromise security. If you
need remote administration capabilities, then the following recommendations help improve
security:
Restrict the tools. The main options include Internet Services Manager and
Terminal Services. Another option is Web administration (using the IISAdmin virtual
directory), but this is not recommended and this option is removed by
IISLockdown.exe. Both Internet Services Manager and Terminal Services use
Windows security. The main considerations here are restricting the Windows
accounts and the ports you use.
Restrict the computers that are allowed to administer the server. IPSec can
be used to restrict which computers can connect to your Web server.
2. Configure the Terminal Services session to disconnect when the idle connection time limit is reached, and set it to end a disconnected session after a period of ten minutes. A session is considered disconnected if the user closes the Terminal Services client application without logging off.
Use a secure VPN connection between the client and the server or an IPSec tunnel for
enhanced security. This approach provides mutual authentication and the RDP payload is
encrypted.
You can create and deploy security policies using security templates. For more
information, see the following Microsoft Knowledge Base articles:
For detailed guidance about customizing and automating security templates, see
the Microsoft patterns & practices, Microsoft Solution for Securing Windows 2000
Server, at http://www.microsoft.com/technet/treeview/default.asp?
url=/technet/security/prodtech/windows/secwin2k/default.asp.
The Microsoft Solution for Securing Windows 2000 Server addresses the most
common server roles, including domain controllers, DNS servers, DHCP servers, IIS
Web servers, and File and Print servers. The approach used in this guide allows
you to take a default Windows 2000 installation and then create a secure server,
the precise configuration of which varies depending upon its role. Administrators
can then consciously weaken security to satisfy the needs of their particular
environment. The guide provides a foundation of baseline security recommendations
that covers services, accounts, group policies, and so on, that you can use as a
starting point for the common types of server roles.
Summary
A secure Web server provides a protected foundation for hosting your Web applications.
This chapter has shown you the main threats that have the potential to impact your
ASP.NET Web server and has provided the security steps required for risk mitigation. By
performing the hardening steps presented in this chapter, you can create a secure platform
and host infrastructure to support ASP.NET Web applications and Web services.
The methodology used in this chapter allows you to build a secure Web server from scratch
and also allows you to harden the security configuration of an existing Web server. The next
step is to ensure that any deployed applications are correctly configured.
Additional Resources
For additional related reading, see the following resources:
For information about securing your developer workstation, see "How To: Secure
Your Developer Workstation" in the "How To" section of this guide.
For more information about how to secure ASP.NET Web applications and Web
services, see Chapter 19, "Securing Your ASP.NET Application and Web Services."
For information on how the Open Hack application was configured, see the MSDN
article, "Building and Configuring More Secure Web Sites," at
http://msdn.microsoft.com/library/default.asp?url=/library/en-
us/dnnetsec/html/openhack.asp.
For a printable checklist, see "Checklist: Securing Your Web Server" in the
"Checklists" section of this guide.
Chapter 17: Securing Your Application Server
In This Chapter
Identifying threats and countermeasures for middle-tier application servers
Figure 17.1 shows the focus of this chapter, which includes configuring internal firewalls that
are featured in many multitiered deployment models.
Before delving into technology-specific configuration, the chapter identifies the main threats
to an application server. These threats are somewhat different from those that apply to an
Internet-facing Web server because middle-tier application servers are (or should be)
isolated from direct Internet access.
To secure the application server, you must apply an incremental security configuration after
the underlying operating system and Internet Information Services (IIS) Web server (if
installed) have been locked down.
How to Use This Chapter
This chapter focuses on the application server and the associated communication channels
that connect the Web server to the application server and the application server to the
database server.
Read Chapter 2, "Threats and Countermeasures." This will give you a better
understanding of potential threats to Web applications.
Use the companion securing chapters. The current chapter is part of a securing
solution that includes chapters that cover host (operating system) and network layer
security. Use the following chapters in tandem with this one:
Chapter 15, "Securing Your Network"
Chapter 16, "Securing Your Web Server"
Chapter 18, "Securing Your Database Server"
Threats and Countermeasures
The main threats to an application server are:
Network eavesdropping
Unauthorized access
Viruses, worms, and Trojan horses
Network Eavesdropping
Attackers with network monitoring software can intercept data flowing from the Web server
to the application server and from the application server to downstream systems and
database servers. The attacker can view and potentially modify this data.
Vulnerabilities
Vulnerabilities that can make your application server vulnerable to network eavesdropping
include:
Use of Microsoft SQL Server authentication to the database, resulting in clear text
credentials
Countermeasures
Use secure authentication, such as Windows authentication, that does not send
passwords over the network.
Use remote procedure call (RPC) encryption with Enterprise Services applications.
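The difference between the two database authentication options is visible in the connection string; the server and database names below are examples only.

```
Windows authentication - no credentials cross the network:
  "Server=AppSQL01;Database=Northwind;Integrated Security=SSPI;"

SQL Server authentication - credentials are sent in clear text unless the channel is encrypted:
  "Server=AppSQL01;Database=Northwind;User ID=appUser;Password=..."
```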
Unauthorized Access
If you fail to block the ports used by applications that run on the application server at the
perimeter firewall, an external attacker can communicate directly with the application
server. If you allow computers other than the front-end Web servers to connect to the
application server, the attack profile for the application server increases.
Vulnerabilities
Unnecessary protocols
Banner grabbing that gives away available services and possibly software versions
Countermeasures
Firewall policies that block all traffic except expected communication ports
Static DCOM endpoint mapping that allows access only to authorized hosts
Viruses, Worms, and Trojan Horses
Vulnerabilities
Unpatched servers
Countermeasures
Countermeasures that help mitigate the risk posed by viruses, Trojan horses, and worms
include:
To mitigate this risk, developers must apply the secure design and development approaches
described in Parts II and III of this guide.
The configuration solutions in this chapter are specific to the application server and they
should not be applied in isolation. Apply them alongside the solutions presented in Chapter
15, "Securing Your Network," Chapter 16, "Securing Your Web Server," and Chapter 18,
"Securing Your Database Server."
Communication Channel Considerations
Sensitive application data and authentication credentials that are sent to and from the
application server should be encrypted to provide privacy and integrity. This mitigates the
risk associated with eavesdropping and tampering.
Encrypting network traffic addresses the network eavesdropping and tampering threats. If
you consider this threat to be negligible in your environment — for example, because your
application is located in a closed and physically secured network — then you do not need to
encrypt the traffic. If network eavesdropping is a concern, then you can use SSL, which
provides a secure communication channel at the application layer, or IPSec, which provides
a transport-level solution. IPSec encrypts all IP traffic that flows between two servers, while
SSL allows each application to choose whether or not to provide an encrypted
communication channel.
Enterprise Services
Enterprise Services (or COM+) applications communicate over the network using DCOM
over RPC. RPC uses port 135, which provides endpoint mapping services to allow clients to
negotiate parameters, including the communication port, which by default is dynamically
assigned.
RPC Encryption
You can configure an Enterprise Services application for RPC Packet Privacy
authentication. In addition to authentication, this provides encryption for every data
packet sent to and from the Enterprise Services application.
IPSec
You can use an IPSec policy between the Web server and the application server to
encrypt the communication channel.
.NET Remoting
Two possible implementation models exist for applications that use .NET Remoting:
hosting in ASP.NET with IIS, or hosting in a Windows service with the TCP channel.
Depending on the performance and security requirements of the application, you can use
one of two methods to secure the Remoting channel.
If you host in ASP.NET, you can take advantage of the built-in HTTPS functionality
provided by IIS. HTTPS provides authentication and secure data communication.
With the TCP channel, you can use an IPSec policy to provide transport-layer
encryption for all IP data. Note that if you use the TCP channel, you must provide
your own authentication mechanism. For more information, see Chapter 13,
"Building Secure Remoted Components."
Web Services
Web services are hosted by ASP.NET and IIS, and the services use the HTTP protocol for
communication over the network.
SSL or IPSec can be used to secure the communication channel. Alternatively, encryption
can be handled at the application layer by encrypting the message payload or the sensitive
parts of the payload. To do this using open standards, use the Web Services Enhancements
(WSE) download available for Web services. For more information, see Chapter 12,
"Building Secure Web Services."
SQL Server
The application server communicates with SQL Server using TCP port 1433 by default.
Unless otherwise configured, UDP port 1434 is also used for negotiation.
To secure the channel from the application server to SQL Server, use IPSec or SSL. SSL
requires a server certificate to be installed on the database server.
For more information on using SSL with SQL Server, see Microsoft Knowledge Base article
276553, "How To: Enable SSL Encryption for SQL Server 2000 with Certificate Server."
Firewall Considerations
Your security infrastructure can include internal firewalls on either side of the application
server. This section discusses the ports that you open on these firewalls to support the
functionality of your application.
Enterprise Services
If you use middle-tier Enterprise Services, configure an internal firewall that separates the
Web server and application server to allow DCOM and RPC traffic. Additionally, if you use
Enterprise Services, your applications often use distributed transactions and the services of
the Distributed Transaction Coordinator (DTC). In this event, open DTC ports on any
firewall that separates the application server from remote resource managers, such as the
database server. Figure 17.3 shows a typical Enterprise Services port configuration.
Note Figure 17.3 does not show the additional ports that are required for authentication
mechanisms between a client and an Enterprise Services application, and possibly
between the Enterprise Services application and the database server. Commonly,
for networks that do not use Active Directory, TCP port 139 is required for
Windows authentication. For more information on port requirements, see the
TechNet articles "TCP and UDP Port Assignments," at
http://www.microsoft.com/technet/prodtechnol/
windows2000serv/reskit/tcpip/part4/tcpappc.asp, and "Security Considerations
for Administrative Authority," at
http://www.microsoft.com/technet/security/bestprac/bpent/sec2/seconaa.asp.
By default, DCOM uses RPC dynamic port allocation, which randomly selects port numbers
above 1024. In addition, port 135 is used by the RPC endpoint mapping service.
You can restrict the ports required to support DCOM on the internal firewall in two ways:
Define port ranges. This allows you to control the ports dynamically allocated by RPC. For more
information about dynamic port restrictions, see Microsoft Knowledge Base article
300083, "How To: Restrict TCP/IP Ports on Windows 2000 and Windows XP."
Use static endpoint mapping. Microsoft Windows 2000 SP3 (or QFE 18.1 and later)
and Windows Server 2003 allow you to configure Enterprise Services applications to use a static endpoint.
Static endpoint mapping means that you only need to open two ports in the firewall:
port 135 for RPC and a nominated port for your Enterprise Services application.
For more information about static endpoint mapping, see Microsoft Knowledge
Base article 312960, "Cannot Set Fixed Endpoint for a COM+ Application."
Web Services
If you cannot open ports on the internal firewall, then you can introduce a Web-services
façade layer in front of the serviced components on the application server. This means that
you only need to open port 80 for HTTP traffic (specifically, SOAP messages) to flow in
both directions.
This approach does not allow you to flow transaction context from client to server, although
in many cases where your deployment architecture includes a middle-tier application server,
it is appropriate to initiate transactions in the remote serviced component on the application
server.
For information about physical deployment requirements for service agents and service
interfaces, such as the Web-services façade layer, see "Physical Deployment and
Operational Requirements" in the Reference section of MSDN article, "Application
Architecture for .NET: Designing Applications and Services," at
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnbda/html/distapp.asp.
DTC Requirements
If your application uses COM+ distributed transactions and these are used across remote
servers separated by an internal firewall, then the firewall must open the necessary ports to
support DTC traffic. The DTC uses RPC dynamic port allocation. In addition to port 135 for
RPC, DTC communication requires at least one additional port.
If your deployment architecture includes a remote application tier, transactions are normally
initiated there within the Enterprise Services application and are propagated to the
database server. In the absence of an application server, the Enterprise Services
application on the Web server initiates the transaction and propagates it to the SQL Server
resource manager.
For more information, see Microsoft Knowledge Base article 306843, "How To:
Troubleshoot MS DTC Firewall Issues."
.NET Remoting
If you use the HTTP channel and host your remote components in ASP.NET, only open port
80 on the internal firewall to allow HTTP traffic. If your application also uses SSL, open port
443.
If you use the TCP channel and host in a Windows service, open the specific TCP port or
ports that your Remoting application has been configured to use. The application might
need an additional port to support callbacks.
Figure 17.4 shows a typical .NET Remoting firewall port configuration. Note that the port
numbers shown for the TCP channel scenario (5555 and 5557) are illustrations. The actual
port numbers are specified in web.config configuration files on the client and server
machines. For more information, see Chapter 13, "Building Secure Remoted Components."
Figure 17.4: Typical Remoting firewall port configuration for HTTP and TCP channel
scenarios
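For example, a Remoting host configuration file might nominate the TCP port as follows. The port number 5555 is purely illustrative, matching Figure 17.4.

```xml
<configuration>
  <system.runtime.remoting>
    <application>
      <channels>
        <!-- Fixed TCP port; open this port on the internal firewall -->
        <channel ref="tcp" port="5555" />
      </channels>
    </application>
  </system.runtime.remoting>
</configuration>
```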
Web Services
Web services communicate using SOAP over HTTP; therefore, only open port 80 on the
internal firewall.
SQL Server
If a firewall separates the application server from the database server, then connecting to
SQL Server through a firewall requires that you configure the client using the SQL Server
Client Network Utility and configure the database server using the Server Network Utility. By
default, SQL Server listens on TCP port 1433, although this can be changed. The chosen
port must be open at the firewall.
Depending on the chosen SQL Server authentication mode and use of distributed
transactions by your application, you might also need to open several additional ports at the
firewall:
If your application uses Windows authentication to connect to SQL Server, open the
necessary ports that support the Kerberos protocol or NTLM authentication.
For more information on SQL Server port requirements, see Chapter 18, "Securing Your
Database Server."
.NET Remoting Security Considerations
The .NET Remoting infrastructure enables applications to communicate with one another on
the same machine or across machines in a network. The Remoting infrastructure can use
the HTTP or TCP transports for communication and can send messages in many formats,
the most common of which are SOAP or binary format.
Figure 17.5: Remoting with the TCP channel and a Windows service host
In this scenario, a Windows service hosts the Remoting objects and communication occurs
through a TCP channel. This approach offers good performance, but does not necessarily
address security. For added security, use IPSec between the Web server and the
application server and only allow the Web server to establish connections with the
application server.
If you instead host the Remoting objects in ASP.NET, you can use Integrated Windows
authentication to authenticate the ASP.NET Web application process identity. You can
also use SSL for secure communication and the gatekeepers provided by IIS and
ASP.NET for authorization.
Enterprise Services (COM+) Security Considerations
COM+ provides the underlying infrastructure for Enterprise Services; therefore, secure
COM+ if you use it on the middle-tier application server. Two main steps are involved in
securing an application server that uses Enterprise Services:
You must secure the underlying operating system and Enterprise Services
infrastructure. This includes base security measures, such as applying patches and
updates, and disabling unused services, blocking unused ports, and so on.
You must secure the Enterprise Services application that is deployed on the server,
taking into account application-specific security needs.
The developer can specify many of the application and component-level security
configuration settings using metadata embedded in the deployed assemblies. These govern
the initial catalog security settings that are applied to the application when it is registered
with Enterprise Services. Then, the administrator can view and amend these if necessary
by using the Component Services tool.
Services
Ports
COM+ catalog
Updates to the COM+ runtime are sometimes released as QFE releases. Use the following
resources to help manage patches and updates:
Use the Microsoft Baseline Security Analyzer (MBSA) to detect missing security
updates on application servers. For more information about how to use the MBSA
on a single computer and to keep a group of servers up-to-date, see "How to: Use
MBSA" in the "How To" section of this guide.
For information about environments that require many servers to be updated from a
centralized administration point, see "How To: Patch Management" in the "How To"
section of this guide.
At the time of this writing (May 2003), MBSA cannot detect missing .NET Framework
updates. Therefore, you must update the .NET Framework manually.
2. Compare the installed version of the .NET Framework to the current service pack.
To do this, use the .NET Framework versions listed in Microsoft Knowledge Base
article 318836, "INFO: How to Obtain the Latest .NET Framework Service Pack."
The latest Windows service packs include the current fixes to COM+. However,
updates to the COM+ runtime are sometimes released in the form of QFE
releases. An automatic notification service for COM+ updates does not currently
exist, so monitor the Microsoft Knowledge Base at http://support.microsoft.com.
Use "kbQFE" as a search keyword to refine your search results.
Services
To reduce the attack surface profile, disable any services that are not required. Services
that may be required include the Microsoft DTC and the COM+ Event System service,
which supports the COM+ loosely coupled events (LCE) feature.
To secure the services on your application server, disable the MS DTC if it is not required.
The DTC service is tightly integrated with COM+. It coordinates transactions that are
distributed across two or more databases, message queues, file systems, or other
resource managers. If your applications do not use the COM+ automated transaction
services, then the DTC should be disabled by using the Services MMC snap-in.
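For example, the DTC service can also be stopped and disabled from a command prompt. This is a sketch; verify first that no deployed application uses distributed transactions.

```
net stop msdtc
sc config msdtc start= disabled
```

Note that the sc config syntax requires the space after start=.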
Ports
Serviced components communicate using DCOM, which in turn communicates using the
RPC transport.
By default, DCOM dynamically allocates ports, which is undesirable from a security and
firewall configuration perspective. DCOM ports should be restricted to reduce the attack
surface profile and to ensure that you do not need to open unnecessary ports on the
internal firewall. Two options exist for restricting the ports used by DCOM:
Port Ranges
For incoming communication, you can configure RPC dynamic port allocation to select ports
within a restricted range above 1024. Then configure your firewall to confine incoming
external communication to only those ports and port 135, which is the RPC endpoint
mapper port.
3. Click the Default Protocols tab, and then select Connection-oriented TCP/IP in
the DCOM Protocols list box.
4. Click Properties.
5. In the Properties for COM Internet Services dialog box, click Add.
6. In the Port range text box, add a port range, for example 5000–5020, and then
click OK.
7. Leave the Port range assignment and the Default dynamic port allocation
options set to Internet range.
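The same restriction can be scripted with Reg.exe, as described in Microsoft Knowledge Base articles 154596 and 300083. The 5000-5020 range mirrors the example above, and the server must be restarted for the change to take effect.

```
reg add HKLM\SOFTWARE\Microsoft\Rpc\Internet /v Ports /t REG_MULTI_SZ /d "5000-5020"
reg add HKLM\SOFTWARE\Microsoft\Rpc\Internet /v PortsInternetAvailable /t REG_SZ /d Y
reg add HKLM\SOFTWARE\Microsoft\Rpc\Internet /v UseInternetPorts /t REG_SZ /d Y
```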
Windows 2000 (SP3 or QFE 18.1) or Windows Server 2003 allows you to configure
Enterprise Services applications to use a static endpoint. If a firewall separates the client
from the server, you only need to open two ports in the firewall. Specifically, you must open
port 135 for RPC and a port for your Enterprise Services application.
2. Display the Properties dialog box of the application, and retrieve the
application ID from the General page.
4. From the Edit menu, click Add Value, and then add the following registry value,
where {your AppID} is the application ID of the COM+ application that you
obtained in step 2:
Key name: {Your AppID}
Value name: Endpoints
Data type: REG_MULTI_SZ
Value data: ncacn_ip_tcp,0,<port number>
The port number that you specify in the Value data text box must be greater than
1024 and must not conflict with well-known ports that other applications on the
computer use. You cannot modify the ncacn_ip_tcp,0 portion of this key.
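The same value can be added from the command line with Reg.exe; the AppID and port number below are placeholders.

```
reg add "HKLM\SOFTWARE\Classes\AppID\{your AppID}" /v Endpoints /t REG_MULTI_SZ /d "ncacn_ip_tcp,0,4000"
```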
COM+ Catalog
Enterprise Services application configuration settings are maintained in the COM+ catalog.
The majority of configuration items are contained in the registration database (RegDB),
which consists of files located in the following directory:
%windir%\registration
By default, the Everyone group has permission to read the database. Modify the access
control list (ACL) for this directory to restrict read/write access to administrators and the
local system account. Also grant read access to the accounts used to run Enterprise
Services applications. Here is the required ACL:
Administrators: Read, Write
System: Read, Write
Enterprise Services Run-As Account(s): Read
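You can apply an ACL of this kind with the Cacls.exe tool. ES_RunAs is a placeholder for your application's run-as account, and cacls expresses read/write access with its Change (C) permission.

```
cacls %windir%\registration /G Administrators:C SYSTEM:C ES_RunAs:R
```

Without the /E (edit) switch, /G replaces the existing ACL with the specified grants, so list every account that requires access.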
Individual application configuration settings are maintained in the COM+ catalog and can be
configured using the Component Services tool or by using script. Many of the settings
discussed below can also be specified by application developers by using the correct
assembly level metadata in the serviced component assembly. When you register the
serviced component, for example by using Regsvcs.exe, the COM+ catalog is automatically
configured using this metadata, although the application run-as identity must be configured
administratively.
To secure an Enterprise Services application, you must configure the following items:
Authentication level
Impersonation
Application assemblies
If the serviced components within the Enterprise Services application are not impersonating
the caller's security context, then the process-level identity specified through the run-as
account is used for downstream local and remote resource access. To support network
authentication to a remote database server, you can create a "mirrored" local account,
which is a local account on the remote server that has a matching username and password.
Note When you set the run-as identity with Enterprise Services, the required "Log on as
a batch job" privilege is automatically granted to the account.
Authentication Level
Enterprise Services applications authenticate callers using RPC, which in turn uses the
underlying authentication services of the operating system provided through the Security
Service Provider Interface (SSPI) layer. This means that applications authenticate callers
using Windows authentication; either Kerberos or NTLM.
RPC defines authentication levels that determine when authentication occurs and whether
the authenticated communication should be checked for integrity or encrypted. At minimum,
you should use call-level authentication to ensure that every method call to a serviced
component method is authenticated.
3. Select Call from the Authentication level for calls drop-down list.
Role-based security is disabled by default on Windows 2000. The reverse is true for
Windows Server 2003.
Without component-level access checks, any account that is used to connect to any
application component is granted access if it is a member of any role within the application.
Component-level access checks allow individual components to apply their own
authorization. This is the recommended level of granularity.
To allow individual components inside the Enterprise Services application to perform access
checks and authorize callers, you must enable component-level access checks at the
component level.
2. Select the Components folder, right-click it, and then click Properties.
Impersonation
DCOM clients set the impersonation level to determine the impersonation capabilities of the
server with which they are communicating. When an Enterprise Services application on a
middle-tier application server is configured, the configured impersonation level affects any
remote calls made to downstream components, including the database server. The
impersonation level is set on the Security page of the Properties dialog box of the
application in Component Services, as Figure 17.9 shows.
The appropriate level depends on the desired application-level functionality, although you
should use the following guidelines to determine an appropriate level:
Use Delegate if you want to allow the downstream component to impersonate the
identity of your application so that it can access local or remote resources. This
requires accounts configured for delegation in Active Directory.
All downstream resource access that is performed by serviced components on your middle-
tier application server normally uses the server application's identity. If, however, the
serviced components perform programmatic impersonation, and the client application
(usually an ASP.NET Web application or Web service on the Web server) has been
configured to support Kerberos delegation, then the client's identity is used.
For more information, see "How To: Enable Kerberos Delegation in Windows 2000" in the
"How To" section of "Microsoft patterns & practices Volume I, Building Secure ASP.NET
Applications: Authentication, Authorization, and Secure Communication" at
http://msdn.microsoft.com/library/default.asp?url=/library/en-
us/dnnetsec/html/secnetlpMSDN.asp.
CRM Log Files
Compensating Resource Manager (CRM) log file names are derived from the Enterprise
Services application ID and have the file name extension .crmlog. CRM log files are secured when they are created by
Enterprise Services and the file is configured with an ACL that grants Full Control to the run-
as account of the application. No other account has access.
If you change the identity of the application after the log file is created, you must manually
change the ACL on the file. Make sure that the new run-as identity of the application has
Full Control permissions.
Application Assemblies
To protect the deployed application assemblies that contain the serviced components of the
application, you should harden the ACL associated with the assembly .dll files to ensure
they cannot be replaced or deleted by unauthorized users.
The location of the assembly DLLs of an application is specified at deployment time and
may therefore vary from installation to installation. The Properties dialog box in the
Component Services tool does not show the assembly DLL location. Instead, it points to
%windir%\System32\mscoree.dll, which provides the interception services for the
component.
2. Expand the Components folder, select a component, right-click it, and then click
Properties.
3. In the Properties dialog box, retrieve the Class ID (CLSID) of the component.
Summary
This chapter has shown you the additional security measures required on a middle-tier
application server. These measures differ depending on the technologies used on the
application server.
Internal firewalls on either side of the application server present other issues. The ports that
must be open depend on application implementation choices, such as transport protocols
and the use of distributed transactions.
For a checklist that summarizes the steps in this chapter, see "Checklist: Securing Your
Application Server" in the "Checklists" section of this guide.
Additional Resources
For more information about the issues addressed in this chapter, see the following articles
in the Microsoft Knowledge Base at http://support.microsoft.com:
Article 248809, "PRB: DCOM Does Not Work over NAT-Based Firewall"
Article 154596, "How To: Configure RPC Dynamic Port Allocation to Work with a
Firewall"
Chapter 18: Securing Your Database Server
In This Chapter
A proven methodology for securing database servers
Internal threats should not be overlooked. Have you considered the rogue administrator with
network access? What about the database user tricked into running malicious code? For
that matter, could any malicious code on the network compromise your database?
This chapter begins by reviewing the most common threats that affect database servers. It
then uses this perspective to create a methodology. This chapter then puts the methodology
into practice and takes a step-by-step approach that shows you how to improve your
database server's security.
How to Use This Chapter
This chapter provides a methodology and steps for securing a database server. The
methodology can be adapted for your own scenario. The steps put the methodology into
practice.
Use the snapshot. The section, "Snapshot of a Secure Database Server," later in
this chapter lists the attributes of a secure database server. It reflects distilled input
from a variety of sources including customers, industry experts, and internal
Microsoft development and support teams. Use the snapshot table as a reference
when configuring your database server.
Use the checklist. The "Checklist: Securing Your Database Server" in the
"Checklist" section of this guide provides a quick reference. Use the checklist to
quickly evaluate the scope of the required steps and to help you work through the
individual steps.
Use the "How To" section. The "How To" section in this guide includes the
following instructional articles that help you implement the guidance in this chapter:
Threats and Countermeasures
The main threats to a database server are:
SQL injection
Network eavesdropping
Password cracking
Figure 18.1 shows the major threats and vulnerabilities that can result in a compromised
database server and the potential destruction or theft of sensitive data.
SQL Injection
With a SQL injection attack, the attacker exploits vulnerabilities in your application's input
validation and data access code to run arbitrary commands in the database using the
security context of the Web application.
Vulnerabilities
Weak permissions that fail to restrict the application's login to the database
Countermeasures
Your application should constrain and sanitize input data before using it in SQL
queries.
Use type safe SQL parameters for data access. These can be used with stored
procedures or dynamically constructed SQL command strings. Using SQL
parameters ensures that input data is subject to type and length checks and also
that injected code is treated as literal data, not as executable statements in the
database.
Use a SQL Server login that has restricted permissions in the database. Ideally, you
should grant execute permissions only to selected stored procedures in the
database and provide no direct table access.
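The last two countermeasures can be sketched in data access code as follows. The stored procedure name, parameter, and connection string are illustrative only, and userName is assumed to come from already-constrained input.

```csharp
using System.Data;
using System.Data.SqlClient;

// The login used here should have execute permission on the stored
// procedure only, and no direct table access.
using (SqlConnection conn = new SqlConnection(
           "Server=AppSQL01;Database=Northwind;Integrated Security=SSPI;"))
{
    SqlCommand cmd = new SqlCommand("RetrieveUserProfile", conn);
    cmd.CommandType = CommandType.StoredProcedure;

    // Type and length checks apply, and injected text is treated as
    // literal data rather than as executable SQL.
    cmd.Parameters.Add("@userName", SqlDbType.VarChar, 32).Value = userName;

    conn.Open();
    SqlDataReader reader = cmd.ExecuteReader();
    // ...
}
```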
For more information about application-level countermeasures to SQL injection attacks, see
Chapter 14, "Building Secure Data Access."
Network Eavesdropping
The deployment architecture of most applications includes a physical separation of the data
access code from the database server. As a result, sensitive data, such as application-
specific data or database login credentials, must be protected from network
eavesdroppers.
Vulnerabilities
Countermeasures
Install a server certificate on the database server. This results in the automatic
encryption of SQL credentials over the network.
Use an SSL connection between the Web server and database server to protect
sensitive application data. This requires a database server certificate.
Unauthorized Server Access
Vulnerabilities
Vulnerabilities that make your database server susceptible to unauthorized server access
include:
Attacks
Direct connection attacks exist for both authenticated users and those without a user name
and password; for example:
Countermeasures
Make sure that SQL Server ports are not visible from outside of the perimeter
network.
Within the perimeter, restrict direct access by unauthorized hosts, for example, by
using IPSec or TCP/IP filters.
Password Cracking
A common first line of attack is to try to crack the passwords of well known account names,
such as sa (the SQL Server administrator account).
Vulnerabilities
Attacks
Dictionary attacks
Countermeasures
Create passwords for SQL Server login accounts that meet complexity
requirements.
Configuration Categories
The securing methodology has been organized into the categories shown in Figure 18.2.
The configuration categories shown in Figure 18.2 are based on best practices obtained
from field experience, customer validation, and the study of secure deployments. The
rationale behind the categories is as follows:
Services
Services are prime vulnerability points for attackers who can exploit the privileges
and capabilities of the service to access the server and potentially other computers.
Some services are designed to run with privileged accounts. If these services are
compromised, the attacker can perform privileged operations. By default, database
servers generally do not need all services enabled. By disabling unnecessary and
unused services, you quickly and easily reduce the attack surface area.
Protocols
Limit the range of protocols that client computers can use to connect to the
database server and make sure you can secure those protocols.
Accounts
Restrict the number of Windows accounts accessible from the database server to
the necessary set of service and user accounts. Use least privileged accounts with
strong passwords in all cases. A least privileged account used to run SQL Server
limits the capabilities of an attacker who compromises SQL Server and manages to
execute operating system commands.
Use NTFS file system permissions to protect program, database, and log files from
unauthorized access. When you use access control lists (ACLs) in conjunction with
Windows auditing, you can detect when suspicious or unauthorized activity occurs.
Shares
Remove all unnecessary file shares, including the default administration shares if
they are not required. Secure any remaining shares with restricted NTFS
permissions. Although shares may not be directly exposed to the Internet, a
defense in depth strategy with limited and secured shares reduces risk if a server is
compromised.
Ports
Unused ports are closed at the firewall, but servers behind the firewall should also
block or restrict ports based on their usage. For a dedicated SQL Server, block all
ports except the necessary SQL Server port and the ports required for authentication.
Registry
SQL Server 2000 manages access control using logins, databases, users, and
roles. Users (and applications) are granted access to SQL Server by way of a SQL
server login. The login is associated with a database user and the database user is
placed in one or more roles. The permissions granted to the role determine the
tables the login can access and the types of operations the login can perform. This
approach is used to create least privileged database accounts that have the
minimum set of permissions necessary to allow them to perform their legitimate
functionality.
The ability to access SQL Server database objects, such as built-in stored
procedures, extended stored procedures and cmdExec jobs, should be reviewed.
Also, any sample databases should be deleted.
SQL Server Installation Considerations
Before taking steps to secure your database server, know the additional components that
are present on a Windows 2000 Server after SQL Server is installed.
When you install SQL Server, a number of Windows services are installed in addition to
program and data files. By default, program and data files are located in the \Program
Files\Microsoft SQL Server\ directory. Table 18.1 shows the services and folders that are
created.
Create a least privileged local account with which to run the SQL Server service.
Use this account when you are prompted for service settings during setup. Do not
use the local system account or an administrator account.
Make sure you install SQL Server on a partition formatted with NTFS.
Install SQL Server program and database files on a non-system volume, separate
from the operating system.
Also, select Windows authentication mode unless SQL Server authentication is specifically
required. Windows authentication offers the following advantages:
Existing domain and local security policies can be used to enforce strong
passwords and account management best practices.
If you select Mixed Mode, create a strong password for the sa account. The sa account is
a prime target for password guessing and dictionary attacks.
Steps for Securing Your Database Server
This section guides you through the process of securing your database server using the
configuration categories introduced earlier. The steps cover Windows 2000 and SQL Server
2000. Each step may contain one or more actions to secure a particular area or feature.
Important: Make sure to test patches and updates on test systems that mirror your
production servers as closely as possible before applying them to production servers.
If you do not have Internet access when you run MBSA, it will not be able to
retrieve the XML file containing the latest security settings from Microsoft. In this
event, download the XML file manually and put it in the MBSA program directory.
The XML file is available from
http://download.microsoft.com/download/xml/security/1.0/nt5/en-us/mssecure.cab.
2. Run MBSA by double-clicking the desktop icon or selecting it from the Programs
menu.
4. Clear all check boxes apart from Check for security updates. This option
detects which patches and updates are missing.
5. Click Start scan. Your server is now analyzed. When the scan is complete, MBSA
displays a security report, which it also writes to the
%userprofile%\SecurityScans directory.
For more information about using MBSA, see "How To: Use the Microsoft Baseline Security
Analyzer" in the "How To" section of this guide.
For more information about applying service packs, hot fixes, and security patches, see
http://www.microsoft.com/technet/treeview/default.asp?
url=/technet/security/bestprac/bpsp.asp.
Patching MSDE
The Microsoft SQL Server 2000 Desktop Engine (MSDE) must be patched differently than the
full version of SQL Server. For details about patching MSDE, see "How To: Secure Your
Developer Workstation" in the "How To" section of this guide.
Step 2. Services
To reduce the attack surface area and to make sure you are not affected by undiscovered
service vulnerabilities, disable any service that is not required. Run those services that
remain using least privileged accounts.
Note: To disable a service, set its startup type to Disabled using the Services
MMC snap-in in the Computer Management tool.
Microsoft Search. This provides full text search capabilities. This service must
always run under the local system account.
Only the MSSQLSERVER database engine is required. The remaining services provide
additional functionality and are required only in specific scenarios. Disable these services if
they are not required.
Note: SQL Server should not be configured to run as the local system account or any
account that is a member of the local Administrators group. For details about
configuring the service account used to run MSSQLSERVER, see "Step 4:
Accounts."
By enforcing the use of TCP/IP you can control who connects to the server on specific
ports using IPSec policies or TCP/IP filtering. To support IPSec or TCP/IP filtering, your
SQL Server should support client connections over TCP/IP only.
2. Make sure that TCP/IP is the only SQL Server protocol that is enabled as shown
in Figure 18.3. Disable all other protocols.
Figure 18.3: Disabling all protocols except TCP/IP in the SQL Server Network
Utility
For information about how to harden the TCP/IP stack, see "How To: Harden the TCP/IP
Stack" in the "How To" section of this guide.
Additional Considerations
To further improve your database server security, disable NetBIOS and SMB. Both
protocols can be used to glean host configuration information, so you should remove them
when possible. For more information about removing NetBIOS and SMB, see "Protocols" in
Chapter 16, "Securing Your Web Server."
Also consider using IPSec to restrict the ports on which your database server accepts
incoming connections. For more information about how to do this, see "How To: Use IPSec
for Filtering Ports and Authentication" in the "How To" section of this guide.
Step 4. Accounts
Follow the principle of least privilege for the accounts used to run and connect to SQL
Server to restrict the capabilities of an attacker who manages to execute SQL commands
on the database server. Also apply strong password policies to counter the threat of
dictionary attacks.
In the New User dialog box, clear the User must change password at next
logon check box, and then select the User cannot change password and
Password never expires check boxes.
4. Remove the new account from the Users group because this group is granted
liberal access across the computer.
You can now configure SQL Server to run using this new account. For more information,
see "Step 10: SQL Server Security."
Note: During SQL Server 2000 SP3 installation, Sqldbreg2.exe creates the SQL
Debugger account. Visual Studio .NET uses this account when debugging stored
procedures from managed .NET code. Because this account is only used to
support debugging, you can delete it from production database servers.
Set password expiration. Regularly expiring passwords reduces the chance that
an old password will be used for unauthorized access. The expiration period is
typically guided by a company's security policy.
Table 18.3 shows the default and recommended password policy settings.
Additionally, log failed login attempts to detect and trace malicious behavior. For more
information, see "Step 9: Auditing and Logging."
For more information about password policies, see "Password Best Practices" on the
Microsoft TechNet Web site at http://www.microsoft.com/technet/treeview/default.asp?
url=/technet/prodtechnol/windowsserver2003/proddocs/entserver/
windows_password_protect.asp.
For more information, see Microsoft Knowledge Base article 246261, "How To: Use the
RestrictAnonymous Registry Value in Windows 2000."
Additional Considerations
Consider the following steps to improve security for your database server:
Do not use shared accounts. Do not create shared accounts for use by multiple
individuals. Give authorized individuals their own accounts so that the activities of
individuals can be audited separately and group membership and privileges can be
assigned appropriately.
Restrict the local Administrators group membership. Ideally, have no more than
two administration accounts. This helps provide accountability. Also, do not share
passwords, again to provide accountability.
Limit the administrator account to interactive logins. If you perform only local
administration, you can restrict your administrator account to interactive logons by
removing the "Access this computer from the network" user right to deny network
logon rights. This prevents users (well intentioned or otherwise) from remotely
logging on to the server using the administrator account. If a policy of local
administration is too inflexible, implement secure remote administration.
Task To enable NTLMv2 authentication from the Local Security Policy Tool
1. Expand Local Policies, select Security Options, and then double-click LAN
Manager Authentication Level.
Verify that the Everyone group does not have permissions for SQL Server files.
If you use Enterprise Manager to set the SQL Server service account, it gives the account
Full Control permissions on the SQL Server installation directory and all subfolders
(\Program Files\Microsoft SQL Server\MSSQL\*).
By removing write permissions on this folder and all subfolders, and then selectively
granting full control to the data, error log, backup, and job file directories, you ensure
that the new account cannot overwrite SQL Server binaries.
Verify Everyone Group Does Not Have Permissions for SQL Server Files
The Everyone group should not have access to the SQL Server file location (by default,
\Program Files\Microsoft SQL Server\MSSQL). To achieve this, verify that the Everyone
group is not granted access via an ACL, and grant explicit full control only to the SQL
Service account, the Administrators group, and the local system account.
For information about obtaining and using this utility, see Microsoft Knowledge Base article
263968, "FIX: Service Pack Installation May Save Standard Security Password in File."
Ensure that access to powerful system tools and utilities, such as those contained
in the \Program Files directory, is restricted.
Additional Considerations
To further improve your database server security:
Remove unused applications that may be installed on the server. If you have
applications on the server that you do not use, then remove them.
Encrypt your data files using Encrypting File System (EFS). You can use EFS
to protect your data files. If your data files are stolen, encrypted files are more
difficult to read. The use of EFS for SQL Server data files is supported.
When using EFS, you should be aware of the following:
Encrypt the database files (.MDF) and not the log files (.LDF). If you
encrypt the log files, then SQL Server cannot open your database.
Encrypt at the file level, not the directory level. While it is often a best
practice to encrypt at the directory level when using EFS so that when new
files are added they are automatically encrypted, you should encrypt your
SQL Server data files at the file level only. This avoids encrypting your log
files.
To implement EFS, right-click the directory, click Advanced, and then click Encrypt
contents to be secure. For more information about EFS, see the following resources:
Microsoft Knowledge Base article 230520, "How To: Encrypt Data Using EFS in
Windows 2000."
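As an alternative to the Explorer dialog, the cipher.exe utility can encrypt an individual data file from the command line. The path below is a placeholder, and the /a switch makes the operation apply to a file rather than a directory:

```
cipher /e /a "C:\Program Files\Microsoft SQL Server\MSSQL\Data\YourDatabase.mdf"
```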
Additional Considerations
If you are not allowing remote administration of the computer, remove unused administrative
shares, for example, C$ and Admin$.
For more information, see "How To: Use IPSec" in the "How To" section of this guide.
If you reconfigure the port number on the server, you must also reconfigure any clients to
make sure they connect to the correct port number. You might be able to use the Client
Network Utility, but this utility should not be installed on a Web server. Instead, applications
can specify the port number in their connection strings by appending the port number to
either the Server or Data Source attributes as shown in the following code.
"Server=YourServer|YourServerIPAddress,PortNumber"
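For example, a hypothetical server listening on the non-default port 2433 could be addressed as follows (the server address and database name are placeholders):

```
"Server=192.168.12.10,2433;Initial Catalog=YourDatabase;Integrated Security=SSPI"
```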
Additional Considerations
Consider using the Hide Server option in the Server Network Utility, as shown in Figure
18.4. If you select this option in the TCP/IP properties dialog box of the Server Network
Utility, SQL Server is reconfigured to listen on port 2433. It also disables responses to
broadcast requests from clients that try to enumerate SQL Server instances.
Figure 18.4: Setting the Hide Server option from the Server Network
Utility
This measure cannot be relied upon to completely hide the SQL Server port, because
there are a variety of ways to enumerate ports and discover its location.
Note: This option can be used only if you have a single instance of SQL Server. For
more information, see Microsoft Knowledge Base article 308091, "BUG: Hide
Server Option Cannot Be Used on Multiple Instances of SQL Server 2000."
Step 8. Registry
When you install SQL Server, it creates a number of registry entries and subentries that
maintain vital system configuration settings. It is important to secure these settings to
prevent an attacker from changing them to compromise the security of your SQL Server
installation.
When you install SQL Server, it creates the following registry entries and subentries:
Note: The Microsoft Baseline Security Analyzer will verify the registry permissions. Use
the tool as an alternative to manually verifying the permissions with
Regedt32.exe.
Although the passwords are not actually stored in the SAM and password hashes are not
reversible, if an attacker obtains a copy of the SAM database, he or she can use brute
force password cracking techniques to obtain valid credentials.
Restrict LMHash storage in the SAM by creating the key (not value) NoLMHash in the
registry as shown below.
HKLM\System\CurrentControlSet\Control\LSA\NoLMHash
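As a sketch, the key can also be created by importing a .reg file such as the following. Note that on Windows 2000, NoLMHash is a subkey, not a value:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Lsa\NoLMHash]
```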
For more information, see Microsoft Knowledge Base article 299656, "New Registry Key to
Remove LM Hashes from Active Directory and Security Account Manager."
Step 9. Auditing and Logging
Auditing does not prevent system attacks, but it is a vital aid in identifying intruders
and attacks in progress, and in diagnosing attack footprints. It is important to enable all auditing
mechanisms at your disposal, including Windows operating system level auditing and SQL
Server login auditing. SQL Server also supports C2 level extended auditing. This may be
required in specific application scenarios, where auditing requirements are stringent.
Windows logon failures are recorded as events in the Windows security event log. The
following event IDs are suspicious:
531. This means an attempt was made to log on using a disabled account.
529. This means an attempt was made to log on using an unknown user account or
using a valid user account but with an invalid password. An unexpected increase in
the number of these audit events might indicate an attempt to guess passwords.
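As a sketch of how such a spike could be spotted offline, the following code counts event 529 per account in a hypothetical export of the security event log. The record format and the threshold are assumptions for illustration:

```python
from collections import Counter

def failed_logon_spikes(records, threshold=5):
    """Count event 529 (unknown account or bad password) per account and
    return the accounts whose failure count meets the threshold."""
    failures = Counter(
        account for event_id, account in records if event_id == 529
    )
    return {acct: n for acct, n in failures.items() if n >= threshold}

# Six failed logons against "sa" stand out against the background noise.
log = [(529, "sa")] * 6 + [(531, "guest"), (529, "admin")]
print(failed_logon_spikes(log))
```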
2. Right-click the root of the file system, and then click Properties.
5. Click Add, and then enter Everyone into the object name to select field.
6. Click OK, and then select the Full Control check box in the Failed column to audit
all failed events.
By default, this applies to the current folder and all subfolders and files.
Failed audit events are logged to the Windows security event log.
Additional Considerations
The following are additional measures to consider when auditing and logging:
Consider shutting down the system if unable to log security audits. This policy
option is set in the Security Options of the Local Security Settings management
console. Consider this setting for highly secure servers.
Consider C2 level auditing. SQL Server offers an auditing capability that complies
with the U.S. Government C2 certification. C2 level auditing provides substantially
more audit information at the expense of increased disk storage requirements.
For more information about the configuration of a C2-compliant system, see the
TechNet article "SQL Server 2000 C2 Administrator's and User's Security Guide" at
http://www.microsoft.com/technet/prodtechnol/sql/maintain/security/sqlc2.asp?
frame=true#d.
Step 10. SQL Server Security
The settings discussed in this section are configured using the Security tab of the SQL
Server Properties dialog box in Enterprise Manager. The settings apply to all the
databases in a single instance of SQL Server. The SQL Server Properties dialog box is
shown in Figure 18.5.
By default, SQL Server login auditing is not enabled. Minimally, you should audit failed
logins.
Note: Log entries are written to SQL log files. By default, these are located in
C:\Program Files\Microsoft SQL Server\MSSQL\LOG. You can use any text
reader, such as Notepad, to view them.
5. Restart SQL Server for the changes to audit policy to take effect.
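If you prefer to script this setting, the following sketch writes the AuditLevel registry value that the Security tab manipulates (2 corresponds to auditing failed logins only). Treat it as an assumption-laden alternative to the Enterprise Manager steps, and restart SQL Server afterward:

```sql
-- Set login auditing to 'Failure' (AuditLevel 2 = failed logins only).
-- This writes the same registry value that the Security tab sets.
EXEC master..xp_instance_regwrite
    N'HKEY_LOCAL_MACHINE',
    N'SOFTWARE\Microsoft\MSSQLServer\MSSQLServer',
    N'AuditLevel', REG_DWORD, 2
```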
For more information about SQL Server audit logs, see the section "Understanding the
Audit Log" in the TechNet article "SQL Server 2000 Auditing" at
http://www.microsoft.com/technet/treeview/default.asp?
url=/technet/security/prodtech/dbsql/sql2kaud.asp?frame=true.
This procedure uses Enterprise Manager instead of the Services MMC snap-in because
Enterprise Manager automatically grants the user rights that a SQL Server service account
requires.
1. Start SQL Server Enterprise Manager, expand the SQL Server Group, and
then expand your SQL Server.
4. Click This account in the Startup service account group. Enter the user name
and password of your least privileged account.
For more information about creating a least privileged account to run SQL Server, see
"Step 4: Accounts."
Step 11. SQL Server Logins, Users, and Roles
To access objects in a database, you must pass two layers of security checks.
First, you need to present a valid set of login credentials to SQL Server. If you use
Windows authentication, you need to connect using a Windows account that has been
granted a SQL Server login. If you use SQL Server authentication, you need to supply a
valid user name and password combination.
The login grants you access to SQL Server. To access a database, the login must be
associated with a database user inside the database you want to connect to. If the login is
associated with a database user, the capabilities of the login inside the database are
determined by the permissions associated with that user. If a login is not associated with a
specific database user, the capabilities of the login are determined by the permissions
granted to the public role in the database. All valid logins are associated with the public
role, which is present in every database and cannot be deleted. By default, the public role
within any database that you create is not granted any permissions.
The default system administrator (sa) account has been a subject of countless attacks. It is
the default member of the SQL Server administration fixed server role sysadmin. Make
sure you use a strong password with this account.
Important: The sa account is still active even when you change from SQL authentication
to Windows authentication.
Apply strong passwords to all accounts, particularly privileged accounts such as members
of the sysadmin and db_owner roles. If you are using replication, also apply a strong
password to the distributor_admin account that is used to establish connections to remote
distributor servers.
It is a good idea to disable the Windows guest account. Additionally, remove the guest
account from all user-defined databases. Note that you cannot remove guest from the
master, tempdb, and replication and distribution databases.
2. Expand Microsoft SQL Server, expand SQL Server Group, and then expand
your SQL Server.
3. Expand the Security folder, select and right-click Logins, and then click New
Login.
4. In the Name field, enter a custom Windows group that contains only database
administrators.
5. Click the Server Roles tab, and then select System Administrators.
2. Expand Microsoft SQL Server, expand SQL Server Group, and then expand
your SQL Server.
For more information about reconfiguring the SQL service accounts after the installation,
see the MSDN article, "Changing Passwords and User Accounts," at
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/instsql/in_afterinstall_4p0z.asp.
All databases contain a public database role, and every other user, group, and role is a
member of it. You cannot remove members of the public role. Instead, do not grant the
public role permissions that give access to your application's database tables, stored
procedures, and other objects. Otherwise, you cannot enforce the authorization that you
want through user-defined database roles, because the permissions granted to the public
role apply to all users in the database by default.
Additional Considerations
Also consider the following recommendations when configuring SQL Server logins, users,
and roles:
Do not change the default permissions that are applied to SQL Server
objects. In versions of SQL Server earlier than Service Pack 3, the public role does
have access to various default SQL Server database objects. With Service Pack 3,
the security design has been reviewed and security has been improved by removing
the public role where it is unnecessary and by applying more granular role checks.
Step 12. SQL Server Database Objects
SQL Server provides two sample databases for development and education together with a
series of built-in stored procedures and extended stored procedures. The sample
databases should not be installed on production servers and powerful stored procedures
and extended stored procedures should be secured.
The recommended approach is to create a SQL Server login for your application, map the
login to a database user, add the user to a user-defined database role, and then grant
permissions to the role.
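The approach can be sketched with SQL Server 2000 system stored procedures. The login, database, role, and object names below are hypothetical:

```sql
-- Hypothetical names throughout; illustrates login -> user -> role -> permission.
EXEC sp_grantlogin 'MYDOMAIN\WebAppAccount'                   -- grant a Windows login
GO
USE MyAppDb
EXEC sp_grantdbaccess 'MYDOMAIN\WebAppAccount', 'WebAppUser'  -- map login to a database user
EXEC sp_addrole 'WebAppRole'                                  -- user-defined database role
EXEC sp_addrolemember 'WebAppRole', 'WebAppUser'
GRANT EXECUTE ON dbo.GetOrderSummary TO WebAppRole            -- grant only what the app needs
```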
2. Expand the Management node, right-click SQL Server Agent, and then click
Properties.
4. At the bottom of the dialog, select the Only users with SysAdmin privileges
can execute CmdExec and ActiveScripting job steps check box.
5. Click OK.
Note: This change may require you to supply a user name and password. If the SQL
Server service account is a least privileged user (as advocated earlier in this
chapter), you will be prompted for the user name and password of an
administrator account that has privileges to modify the service.
Snapshot of a Secure Database Server
When you have a snapshot view that shows the attributes of a secured SQL Server
database server, you can quickly and easily compare settings with your own server. The
settings shown in Table 18.5 are based on an analysis of SQL Server database servers that
have proven to be very resilient to attack and demonstrate sound security practices.
Physically protect the database server. Locate the server in a secure computer
room.
Restrict local logons. Do not allow anyone to locally log on to the server, apart
from the administrator.
Staying Secure
You need to regularly monitor the security state of your database server and update it
regularly to help prevent newly discovered vulnerabilities from being exploited. To help keep
your database server secure:
"Backup and Restore Strategies with SQL Server 2000," by Rudy Lee Martinez,
http://www.dell.com/us/en/biz/topics/power_ps4q00-martin.htm
Windows 2000 service packs. The latest service packs are listed at
http://www.microsoft.com/windows2000/downloads/servicepacks/default.asp.
Critical updates. These updates help to resolve known issues and help protect
your computer from known security vulnerabilities. For the latest critical updates,
see http://www.microsoft.com/windows2000/downloads/critical/default.asp.
Additionally, subscribe to the industry security alert services shown in Table 18.7. This
allows you to assess the threat of a vulnerability where a patch is not yet available.
Table 18.7: Industry Security Notification Services

CERT Advisory Mailing List
http://www.cert.org/contact_cert/certmaillist.html
Informative advisories are sent when vulnerabilities are reported.

Windows and .NET Magazine Security UPDATE
http://email.winnetmag.com/winnetmag/winnetmag_prefctr.asp
Announces the latest security breaches and identifies fixes.

NTBugtraq
http://www.ntbugtraq.com/default.asp?pid=31&sid=1#020
Restrict the tools. The main options include SQL Enterprise Manager and Terminal
Services. Both SQL Enterprise Manager and Terminal Services use Windows
security. As such, the main considerations here are restricting the Windows
accounts and the ports you use.
Restrict the computers that are allowed to administer the server. IPSec can
be used to restrict which computers can connect to your SQL Server.
3. Remove the TsInternetUser user account from the system, which is created
during Terminal Services installation. This account is used to support anonymous
Internet access to Terminal Services, which should not be enabled on the server.
Configure Terminal Services
Task Use the Terminal Services configuration MMC snap-in available from the
Administrative Tools program group to configure the following
1. There are three levels of encryption available for connections to Terminal Services:
Low, Medium, and High. Set the encryption to the 128-bit key level. Note that the
Windows high encryption pack must be installed on both the server and the
client.
2. Configure the Terminal Services session to disconnect after an idle connection time
limit of 10 minutes, and set it to end disconnected sessions. A session is considered
disconnected if the user closes the Terminal Services client application without
logging off.
Use a secure VPN connection between the client and the server or an IPSec tunnel for
enhanced security. This approach provides mutual authentication and the RDS payload is
encrypted.
For a quick reference checklist, see "Checklist: Securing Your Database Server" in the
"Checklists" section of this guide.
Additional Resources
For more information about SQL Server security, see the following resources:
For information about changing the SQL Server service account, see Microsoft
Knowledge Base article 283811, "How To: Change the SQL Server Service Account
Without Using SQL Enterprise Manager in SQL Server 2000."
For information about SQL Server auditing, see the TechNet article, "SQL Server
2000 Auditing," by John Howie, at
http://www.microsoft.com/technet/treeview/default.asp?
url=/technet/security/prodtech/dbsql/sql2kaud.asp?frame=true.
Chapter 19: Securing Your ASP.NET Application and Web
Services
In This Chapter
Locking down an ASP.NET application
This chapter describes what is new with ASP.NET from a system administrator's standpoint
and how to configure machine-wide and application-specific security settings.
How to Use This Chapter
This chapter focuses on the key security considerations for ASP.NET applications. To get
the most out of this chapter:
Read Chapter 16, "Securing Your Web Server." This shows you how to secure
the Windows 2000 operating system and the Microsoft .NET Framework. A secure
underlying platform is a prerequisite for securing an ASP.NET Web application or
Web service.
Use the snapshot. Table 19.4, which is at the end of this chapter, gives a snapshot
of a secure ASP.NET application with secure configuration settings in
Machine.config and Web.config. Use this table when configuring your server and
application settings.
Use the checklist. The "Checklist: Securing Your ASP.NET Application" in the
"Checklist" section of this guide provides a printable job aid for quick reference. Use
the task-based checklist to quickly evaluate the scope of the required steps and to
help you work through individual steps.
For related guidance, read Chapter 20, "Hosting Multiple Web Applications," which
shows you how to isolate multiple Web applications running on the same server from critical
system resources and from one another. For more information about configuring code
access security (CAS) policy for partial-trust Web applications and Web services, see
Chapter 9, "Using Code Access Security with ASP.NET."
Methodology
To secure your ASP.NET application, start with a hardened operating system and .NET
Framework installation base, and then apply secure application configuration settings to
reduce the application's attack profile. The methodology that is applied in this chapter to
secure ASP.NET Web applications and Web services is consistent with the methodology
used to secure the underlying Web server host, and it shares common configuration
categories. These include:
Services. The .NET Framework installs the ASP.NET state service to manage out-
of-process ASP.NET session state. Secure the ASP.NET state service if you install
it. Disable the ASP.NET state service if you do not require it.
Protocols. Restrict Web service protocols to reduce the attack surface area.
Accounts. The default ASPNET account is created for running Web applications,
Web services, and the ASP.NET state service. If you create custom accounts to run
processes or services, they must be configured as least privileged accounts with
the minimum set of required NTFS permissions and Windows privileges.
Files and Directories. Application Bin directories that are used to hold private
assemblies should be secured to mitigate the risk of an attacker downloading
business logic.
In Microsoft Windows 2000, Internet Information Services (IIS) 5.0 runs all Web
applications and Web services in the ASP.NET worker process (Aspnet_wp.exe). The unit
of isolation is the application domain and each virtual directory has its own application
domain. Process-level configuration settings are maintained by the <processModel>
element in Machine.config.
In Microsoft Windows Server 2003, IIS 6.0 application pools allow you to isolate
applications using separate processes. For more information, see Chapter 20, "Hosting
Multiple Web Applications."
ASP.NET Account
The ASPNET account is a least privileged, local account created when you install the .NET
Framework. By default, it runs the ASP.NET worker process and the ASP.NET state
service.
If you decide to run Web applications using a custom account, make sure you configure the
account with minimum privileges. This reduces the risks associated with an attacker who
manages to execute code using the application's security context. You must also specify the
account's credentials on the <processModel> element. Make sure you do not store
credentials in plaintext. Instead, use the Aspnet_setreg.exe tool to store encrypted
credentials in the registry. The custom account must also be granted the appropriate NTFS
permissions.
The following example shows the <processModel> element with a custom account both
before and after running Aspnet_setreg.exe to secure the credentials:
<!--Before-->
<processModel userName="CustomAccount" password="Str0ngPassword" />
<!--After-->
<processModel
    userName="registry:HKLM\SOFTWARE\YourApp\process\ASPNET_SETREG,userName"
    password="registry:HKLM\SOFTWARE\YourApp\process\ASPNET_SETREG,password" />
You can choose the registry location that stores the encrypted data, although it must be
beneath HKEY_LOCAL_MACHINE. In addition to encrypting the data using the Data
Protection API (DPAPI) and storing it in the registry, the tool applies a secure ACL to
restrict access to the registry key. The ACL on the registry key grants Full Control to
System, Administrators, and Creator Owner. If you use the tool to encrypt the credentials
for the <identity> element or the connection string for the <sessionState> element, you
must also grant read access to the ASP.NET process account.
To obtain the Aspnet_setreg.exe tool and for more information, see Microsoft Knowledge
Base article 329290, "How To: Use the ASP.NET Utility to Encrypt Credentials and Session
State Connection Strings."
If you do enable impersonation, you can either impersonate the original caller — that is, the
IIS authenticated identity — or a fixed identity specified on the <identity> element. For
more information, see "Impersonation" later in this chapter.
Generally, ASP.NET applications do not use impersonation because it can negatively affect
design, implementation, and scalability. For example, using impersonation prevents effective
middle-tier connection pooling, which limits application scalability. Impersonation might make
sense in specific scenarios, for example, when the application uses the anonymous user
account's security context for resource access. This is a common technique often used
when multiple applications are hosted on the same server. For more information, see
Chapter 20, "Hosting Multiple Web Applications."
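The two impersonation modes described above are both configured on the <identity> element. The following sketch shows each form; the registry path is illustrative only — generate your own entries with Aspnet_setreg.exe as described earlier:

```xml
<!-- Impersonate the original caller (the IIS authenticated identity) -->
<identity impersonate="true" />

<!-- Impersonate a fixed identity; credentials encrypted with Aspnet_setreg.exe -->
<identity impersonate="true"
    userName="registry:HKLM\SOFTWARE\YourApp\identity\ASPNET_SETREG,userName"
    password="registry:HKLM\SOFTWARE\YourApp\identity\ASPNET_SETREG,password" />
```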
Finally, the URLScan ISAPI filter can be used to block requests for restricted file types and
program executables. URLScan ships with the IISLockdown tool, although it can be
obtained separately. For more information, see Microsoft Knowledge Base article 307608,
"INFO: Availability of URLScan Version 2.5 Security Tool," and "How To: Use URLScan" in
the "How To" section of this guide.
For more information about IISLockdown and URLScan, see Chapter 16, "Securing Your
Web Server."
AppSettings
Sensitive data, such as connection strings and credentials, should not be stored in plaintext
format in configuration files. Instead, the developer should use DPAPI to encrypt secrets
prior to storage.
For more information about AppSettings, see the "AppSettings in ASP.NET" show on
MSDN® TV at http://msdn.microsoft.com/msdntv.
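For example, rather than a plaintext connection string, an <appSettings> entry might hold only a DPAPI-encrypted, base64-encoded value that the application decrypts at run time. The key name and value below are hypothetical:

```xml
<appSettings>
  <!-- Hypothetical key; the value is DPAPI ciphertext (base64), not a usable secret -->
  <add key="encryptedConnectionString" value="AQAAANCMnd8BFdERjHoAwE..." />
</appSettings>
```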
Machine.Config and Web.Config Explained
The configuration management provided by the .NET Framework encompasses a broad
range of settings that allow an administrator to manage the Web application and its
environment. These settings are stored in XML configuration files, some of which control
machine-wide settings, while others control application-specific configuration.
XML configuration files can be edited with any text editor, such as Notepad, or with XML
editors. XML tags are case sensitive, so ensure that the correct case is used.
Figure 19.1 shows the configuration files used to configure ASP.NET Web applications that
are available to administrators.
The Machine.config and Web.config files share many of the same configuration sections
and XML elements. Machine.config is used to apply machine-wide policy to all .NET
Framework applications running on the local computer. Developers can also use application-
specific Web.config files to customize settings for individual applications.
Changes that you make to configuration files are applied dynamically and do not normally
require that you restart the server or any service, except if changes are made to the
<processModel> element in Machine.config, which is discussed later in this chapter.
For more information about ASP.NET Web application CAS configuration files, see Chapter
9, "Using Code Access Security with ASP.NET."
In Figure 19.2, the AppRoot Web application has a Web.config file in its virtual root
directory. SubDir1 (not a virtual directory) also contains its own Web.config file, which gets
applied when an HTTP request is directed at http://AppRoot/SubDir1. If a request is
directed at SubDir2 (a virtual directory) through AppRoot, for example,
http://Server/AppRoot/SubDir2, settings from Machine.config and the Web.config in the
AppRoot directory are applied. If, however, a request is directed at SubDir2 bypassing
AppRoot, for example, http://Server/SubDir2, then only the settings from Machine.config
are applied.
In all cases, base settings are obtained from Machine.config. Next, overrides and additions
are applied from any relevant Web.config files.
If the same configuration element is used in Machine.config and in one or more Web.config
files, the setting from the file lowest in the hierarchy overrides the higher-level settings. New
configuration settings that are not applied at the machine level can also be applied to
Web.config files and certain elements can clear the parent-level settings using the <clear>
element.
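For example, a Web.config file might use <clear> to discard inherited <authorization> rules before defining its own (the account names shown are hypothetical):

```xml
<authorization>
  <!-- Remove rules inherited from Machine.config and parent Web.config files -->
  <clear />
  <allow roles="DomainName\AppUsers" />
  <deny users="*" />
</authorization>
```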
The following table shows where the combined configuration settings are obtained from for
a combination of Web requests that apply to Figure 19.2.
<location>
Note: You must include the Web site name when using the location tag from Machine.config.
With Web.config, the path is relative from the application's virtual directory. For example:
<location path="SubDirName/PageName.aspx" >
<system.web>
. . .
</system.web>
</location>
For example, to apply machine-wide policy that cannot be overridden at the application
level, use the following <location> element:
<location path="" allowOverride="false">
<system.web>
… machine-wide defaults
</system.web>
</location>
By leaving the path attribute empty, you indicate that the settings apply to the machine,
while allowOverride="false" ensures that Web.config settings do not override the specified
values. Any attempt to override these settings in Web.config generates an exception, even if the
Web.config elements match those in Machine.config.
Machine.Config and Web.Config Guidelines
Settings in Machine.config apply machine-level defaults for your server. Where you want to
enforce a particular configuration for all applications on your server, use
allowOverride="false" on the <location> element as described above. This is particularly
appropriate for hosting scenarios, where you need to enforce aspects of security policy for
all applications on the server.
For those settings that can be configured on an individual application basis, it is normal for
the application to provide a Web.config file. While it is possible to configure individual
applications from Machine.config using multiple <location> elements, separate Web.config
files provide deployment advantages and lead to smaller Machine.config files.
The main item to consider is which settings should be enforced by machine policy. This
depends on your specific scenario. Some common scenarios follow:
Machine.config
By default, Machine.config is configured with the following ACL:
Administrators: Full Control
System: Full Control
Power Users: Modify
Users: Read and Execute
LocalMachine\ASPNET (process identity): Read and Execute
Note: On Windows Server 2003, the Local Service and Network Service accounts are also granted read access.
Members of the Users group are granted read access by default, since all managed code
that runs on the computer must be able to read Machine.config.
The default ACL on Machine.config is a secure default. If, however, you only have a single
Web application running on the server, or all of your Web applications use the same
process identity, you can further restrict the ACL by removing the Users group's access control
entry (ACE). If you remove Users from the DACL, you must explicitly add an ACE for the Web
process identity.
Web.config
The .NET Framework does not install any Web.config files. If you install an application that
supplies its own Web.config, it usually inherits its ACL from the inetpub directory, which by
default grants read access to members of the Everyone group. To lock down an
application-specific Web.config, use one of the following ACLs.
If your applications use impersonation of an explicit account (that is, if they impersonate a
fixed identity), such as <identity impersonate="true" username="WebUser"
password="Y0urStr0ngPassw0rd$"/>, then both that account (WebUser, in this case)
and the process need Read access.
If your code base is on a Universal Naming Convention (UNC) share, you must grant read
access to the IIS-provided UNC token identity.
If you are impersonating but not using explicit credentials, such as <identity
impersonate="true"/>, and no UNC, then only the process should need access in the .NET
Framework 1.1. For the .NET Framework 1.0, you must additionally configure the ACL to
grant read access to any identity that will be impersonated (that is, you must grant read
access to the original caller).
Trust Levels in ASP.NET
An application's trust level determines the permissions it is granted by CAS policy. This
determines the extent to which the application can access secure resources and perform
privileged operations.
<trust>
Use the <trust> element to configure the application's trust level. By default, the
configuration level is set to Full, as shown below:
<!-- level="[Full|High|Medium|Low|Minimal]" -->
<trust level="Full" originUrl=""/>
This means that the application is granted full and unrestricted CAS permissions. With this
configuration, the success or failure of any resource access performed by the application
depends only on operating system security.
If you change the trust level to a level other than Full, you may break existing ASP.NET Web
applications depending on the types of resources they access and the operations they
perform. Applications should be thoroughly tested at each trust level.
For more information about building partial-trust Web applications that use CAS, see
Chapter 9, "Using Code Access Security with ASP.NET." For more information about using
trust levels to provide application isolation, see Chapter 20, "Hosting Multiple Web
Applications."
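For example, in a hosting scenario you can combine the <trust> element with the <location> lock described earlier to enforce a partial-trust level for every application on the server:

```xml
<!-- In Machine.config: enforce Medium trust for all applications on this server -->
<location path="" allowOverride="false">
  <system.web>
    <trust level="Medium" originUrl="" />
  </system.web>
</location>
```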
Process Identity for ASP.NET
ASP.NET Web applications and Web services run in a shared instance of the ASP.NET
worker process (Aspnet_wp.exe). Process-level settings, including the process identity, are
configured using the <processModel> element in Machine.config.
<processModel>
The identity for the ASP.NET worker process is configured using the userName and
password attributes on the <processModel> element. When you configure process
identity:
You might decide to use an alternate account because you need to connect to a remote
Microsoft SQL Server™ database or network resource using Windows authentication. Note
that you can use the local ASPNET account for this purpose. For more information, see
"Data Access" later in this chapter.
For more information about the NTFS permissions that the ASP.NET process account
requires, see "NTFS Permission Requirements" later in this chapter.
You should also grant the following user rights to the ASP.NET process accounts:
Log on as a service.
This stores the encrypted credentials in the specified registry key and secures the
registry key with a restricted ACL that grants Full Control to System,
Administrators, and Creator Owner.
2. Reconfigure the <processModel> element and add the following userName and
password attributes.
<processModel
    userName="registry:HKLM\SOFTWARE\YourApp\process\ASPNET_SETREG,userName"
    password="registry:HKLM\SOFTWARE\YourApp\process\ASPNET_SETREG,password" />
For more information, see Microsoft Knowledge Base article 329290, "How To: Use the
ASP.NET Utility to Encrypt Credentials and Session State Connection Strings."
<identity>
The <identity> element is used to enable impersonation. You can impersonate:
The original caller
A fixed identity
The impersonation uses the access token provided by IIS that represents the authenticated
caller. This may be the anonymous Internet user account, for example, if your application
uses Forms authentication, or it may be a Windows account that represents the original
caller, if your application uses Windows authentication.
For more information, see "How To: Implement Kerberos Delegation for Windows 2000" in
the "How To" section of "Microsoft patterns & practices Volume I, Building Secure ASP.NET
Applications: Authentication, Authorization, and Secure Communication" at
http://msdn.microsoft.com/library/default.asp?url=/library/en-
us/dnnetsec/html/SecNetHT05.asp.
Do not store credentials in plaintext as shown here. Instead, use the Aspnet_setreg.exe
tool to encrypt the credentials and store them in the registry.
This stores the encrypted credentials in the specified registry key and secures
the registry key with a restricted ACL that grants Full Control to System,
Administrators, and Creator Owner.
2. Reconfigure the <identity> element and add the following userName and
password attributes.
<identity impersonate="true"
    userName="registry:HKLM\SOFTWARE\YourApp\identity\ASPNET_SETREG,userName"
    password="registry:HKLM\SOFTWARE\YourApp\identity\ASPNET_SETREG,password" />
3. Use Regedt32.exe to create an ACL on the above registry key that grants read
access to the ASP.NET process account.
For more information, see Microsoft Knowledge Base article 329290, "How To: Use the
ASP.NET Utility to Encrypt Credentials and Session State Connection Strings."
The ASP.NET version 1.0 process account requires the "Act as part of the operating
system" user right on Windows 2000 when you impersonate a fixed identity by specifying
userName and password attributes. Because this effectively elevates the ASP.NET
process account to a privilege level approaching the local System account, impersonating a
fixed identity is not recommended with ASP.NET version 1.0.
Note: If you are running ASP.NET version 1.1 on Windows 2000 or Windows Server 2003, this user right is not required.
<authentication>
The appropriate authentication mode depends on how your application or Web service has
been designed. The default Machine.config setting applies a secure Windows authentication
default as shown below.
<!-- authentication Attributes:
mode="[Windows|Forms|Passport|None]" -->
<authentication mode="Windows" />
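If your application uses Forms authentication instead, the <forms> element carries the security-relevant attributes discussed below. The following is a hardened sketch; the cookie name, login page, and time-out value are hypothetical, and requireSSL and slidingExpiration require .NET Framework version 1.1:

```xml
<authentication mode="Forms">
  <forms loginUrl="Login.aspx" name="AppNameCookie"
         protection="All" requireSSL="true" timeout="10"
         slidingExpiration="false" path="/" />
</authentication>
```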
Set protection="All".
<!-- The restricted folder is for authenticated and SSL access only. -->
<location path="Restricted" >
<system.web>
<authorization>
<deny users="?" />
</authorization>
</system.web>
</location>
Set Protection="All"
This setting ensures that the Forms authentication cookie is encrypted to provide privacy
and integrity. The keys and algorithms used for cookie encryption are specified on the
<machineKey> element.
Encryption and integrity checks prevent cookie tampering, although they do not mitigate the
risk of cookie replay attacks if an attacker manages to capture the cookie. Also use SSL to
prevent an attacker from capturing the cookie by using network monitoring software.
Despite SSL, cookies can still be stolen with cross-site scripting (XSS) attacks. The
application must take adequate precautions with an appropriate input validation strategy to
mitigate this risk.
Set requireSSL="true". This sets the Secure attribute in the cookie, which ensures that
the cookie is not transmitted from a browser to the server over an HTTP link. HTTPS (SSL)
is required.
Note: This is a .NET Framework version 1.1 setting. It takes explicit programming to set the cookie Secure attribute in applications built on version 1.0. For more information and sample code, see Chapter 10, "Building Secure ASP.NET Pages and Controls."
File Authorization
Only applications that use Windows authentication and have the following configuration can
use this gatekeeper:
<authentication mode="Windows"/>
This gatekeeper is automatically effective when you use Windows authentication, and there
is no need to impersonate. To configure the gatekeeper, configure Windows ACLs on files
and folders. Note that the gatekeeper only controls access to the file types mapped by IIS
to the following ASP.NET ISAPI extension: Aspnet_isapi.dll.
URL Authorization
Any application can use this gatekeeper. It is configured using <authorization> elements
that control which users and groups of users should have access to the application. The
default element from Machine.config is shown below:
<authorization>
<!-- allow/deny Attributes:
users="[*|?|name]"
* - All users
? - Anonymous users
[name] - Named user
roles="[name]" -->
<allow users="*"/>
</authorization>
URL authorization only applies to file types that are mapped by IIS to the ASP.NET
ISAPI extension: Aspnet_isapi.dll.
When your application uses Windows authentication, you are authorizing access to
Windows user and group accounts. User names take the form of
"authority\WindowsUserName" and role names take the form of
"authority\WindowsGroupName", where authority is either a domain name or the
local machine name depending on the account type.
A number of well known accounts are represented with "BUILTIN" strings. For
example, the local administrators group is referred to as "BUILTIN\Administrators".
The local users group is referred to as "BUILTIN\Users".
Note: With .NET Framework version 1.0, the authority and the group name are case sensitive. The group name must match the group name that appears in Windows exactly.
When your application uses Forms authentication, you authorize the custom user
and roles maintained in your custom user store. For example, if you use Forms to
authenticate users against a database, you authorize against the roles retrieved
from the database.
You can use the <location> tag to apply authorization settings to an individual file
or directory. The following example shows how you can apply authorization to a
specific file (page.aspx):
<location path="page.aspx">
<authorization>
<allow users="DomainName\Bob, DomainName\Mary" />
<deny users="*" />
</authorization>
</location>
Session State
Applications that rely on per user session state can store session state in the following
locations:
<sessionState>
The relevant location, combined with connection details, is stored in the <sessionState>
element in Machine.config. This is the default setting:
<sessionState mode="InProc"
              stateConnectionString="tcpip=127.0.0.1:42424"
              stateNetworkTimeout="10"
              sqlConnectionString="data source=127.0.0.1;Integrated Security=SSPI"
              cookieless="false" timeout="20"/>
Note: If you do not use the ASP.NET state service on the Web server, use the MMC Services snap-in to disable it.
Encrypt sqlConnectionString
For more information about setting up the SQL Server session state store database, see
Microsoft Knowledge Base article 311209, "How To: Configure ASP.NET for Persistent SQL
Server Session State Management."
This stores the encrypted connection string in the specified registry key and
secures the registry key with a restricted ACL that grants Full Control to System,
Administrators, and Creator Owner.
3. Use Regedt32.exe to create an ACL on the above registry key that grants read
access to the ASP.NET process account.
For more information about using the ASPNET account to access a remote
database, see "Data Access" later in this chapter.
4. Grant the SQL login access to the ASPState database. The following T-SQL
creates a database user called WebAppUser, with which the login is associated.
USE ASPState
GO
sp_grantdbaccess 'MACHINE\ASPNETWebApps', 'WebAppUser'
7. Configure permissions in the database for the database role. Grant execute
permissions for the stored procedures that are provided with the ASPState
database.
grant execute on CreateTempTables to WebAppUserRole
Repeat this command for all of the stored procedures that are provided with the
ASPState database. Use SQL Server Enterprise Manager to see the full list.
The port number is defined by the Port named value. If you change the port number in the
registry, for example, to 45678, you must also change the connection string on the
<sessionState> element, as follows:
stateConnectionString="tcpip=127.0.0.1:45678"
This stores the encrypted connection string in the specified registry key and
secures the registry key with a restricted ACL that grants Full Control to System,
Administrators, and Creator Owner.
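Taken together, a remote SQL Server session-state configuration that references an Aspnet_setreg.exe-protected connection string might look like the following sketch (the registry path is illustrative):

```xml
<sessionState mode="SQLServer"
    sqlConnectionString="registry:HKLM\SOFTWARE\YourApp\sessionState\ASPNET_SETREG,sqlConnectionString"
    cookieless="false" timeout="20" />
```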
<pages>
By default, the enableViewStateMac attribute on the <pages> element in Machine.config
ensures that view state is protected with a MAC.
<pages buffer="true" enableSessionState="true"
enableViewState="true" enableViewStateMac="true"
autoEventWireup="true" validateRequest="true"/>
If you use view state, make sure that enableViewStateMac is set to true. The
<machineKey> element defines the algorithms used to protect view state.
Machine Key
The <machineKey> element is used to specify encryption keys, validation keys, and
algorithms that are used to protect Forms authentication cookies and page-level view state.
The following code sample shows the default setting from Machine.config:
<machineKey validationKey="AutoGenerate,IsolateApps"
            decryptionKey="AutoGenerate,IsolateApps" validation="SHA1"/>
Also use the IsolateApps setting. This is a new .NET Framework version 1.1 setting that
instructs ASP.NET to automatically generate encryption keys and to make them unique for
each application.
Set validation="SHA1"
The validation attribute specifies the algorithm used for integrity-checking, page-level view
state. Possible values are "SHA1", "MD5", and "3DES".
If you used protection="All" on the <forms> element, then the Forms authentication
cookie is encrypted, which also ensures integrity. Regardless of the validation attribute
setting, Forms authentication uses TripleDES (3DES) to encrypt the cookie.
If you set validation="SHA1" on the <machineKey>, then page-level view state is integrity
checked using the SHA1 algorithm, assuming that the <pages> element is configured for
view state MACs. For more information, see "View State" earlier in this chapter.
You can also set the validation attribute to MD5, but SHA1 is preferred because it
produces a larger hash than MD5 and is therefore considered more secure.
In Web farms, you must set explicit key values and use the same ones across all machines
in the Web farm. See "Web Farm Considerations" later in this chapter.
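For example, a Web-farm <machineKey> replaces AutoGenerate with explicit values. The hex strings below are placeholders only — generate your own cryptographically random values and deploy the same ones to every server in the farm:

```xml
<machineKey
    validationKey="0123456789ABCDEF...same placeholder value on every server..."
    decryptionKey="0123456789ABCDEF...same placeholder value on every server..."
    validation="SHA1" />
```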
Debugging
The <compilation> element controls compiler settings that are used for dynamic page
compilation, which is initiated when a client requests a Web page (.aspx file) or Web
service (.asmx file). It is important that debug builds are not used on the production server
because debug information is valuable to attackers and can reveal source code details.
<compilation>
This element controls the compilation process. Make sure that debug compiles are disabled
on production servers. Set debug="false" as follows:
<compilation debug="false" explicit="true" defaultLanguage="vb" />
By default, temporary files are created and compiled in the following directory:
%winnt%\Microsoft.NET\Framework\{version}\Temporary ASP.NET Files
You can specify the location on a per application basis using the tempDirectory attribute,
although this provides no security benefit.
Make sure you do not store debug files (with .pdb extensions) on a production server with
your assemblies.
Tracing
Tracing should not be enabled on production servers because system-level trace
information can greatly help an attacker profile an application and probe for weak spots.
<trace>
If you do need to trace problems with live applications, it is preferable that you simulate the
problem in a test environment, or if necessary, enable tracing and set localOnly="true" to
prevent trace details from being returned to remote clients.
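The Machine.config default resembles the following; verify that production servers use a setting like this, with tracing disabled:

```xml
<!-- Tracing disabled; localOnly restricts any trace output to the local machine -->
<trace enabled="false" localOnly="true" pageOutput="false"
       requestLimit="10" traceMode="SortByTime" />
```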
Exception Management
Do not allow exception details to propagate from your Web applications back to the client.
A malicious user could use system-level diagnostic information to learn about your
application and probe for weaknesses to exploit in future attacks.
<customErrors>
The <customErrors> element can be used to configure custom, generic error messages
that should be returned to the client in the event of an application exception condition. The
error page should include a suitably generic error message, optionally with additional
support details. You can also use this element to return different error pages depending on
the exception condition.
Make sure that the mode attribute is set to "On" and that you have specified a default
redirect page as shown below:
<customErrors mode="On" defaultRedirect="YourErrorPage.htm" />
The defaultRedirect attribute allows you to use a custom error page for your application,
which for example might include support contact details.
Note: Do not use mode="Off" because it causes detailed error pages that contain system-level information to be returned to the client.
If you want separate error pages for different types of error, use one or more <error>
elements as shown below. In this example, "404 (not found)" errors are redirected to one
page, "500 (internal system errors)" are directed to another page, and all other errors are
directed to the page specified on the defaultRedirect attribute.
<customErrors mode="On" defaultRedirect="YourErrorPage.htm">
<error statusCode="404" redirect="YourNotFoundPage.htm"/>
<error statusCode="500" redirect="YourInternalErrorPage.htm"/>
</customErrors>
Remoting
Do not expose .NET Remoting endpoints on Internet-facing Web servers. To disable
Remoting, disable requests for .rem and .soap extensions by mapping requests for these
file extensions to the HttpForbiddenHandler. Use the following elements beneath
<httpHandlers>:
<httpHandlers>
  <add verb="*" path="*.rem" type="System.Web.HttpForbiddenHandler"/>
  <add verb="*" path="*.soap" type="System.Web.HttpForbiddenHandler"/>
  . . .
</httpHandlers>
Note: This does not prevent a Web application on the Web server from connecting to a downstream object by using the Remoting infrastructure. However, it prevents clients from being able to connect to objects on the Web server.
Web Services
Configure Web services using the <webServices> element. To establish a secure Web
services configuration:
By disabling unnecessary protocols, including HttpPost and HttpGet, you reduce the attack
surface area. For example, it is possible for an external attacker to embed a malicious link
in an e-mail to execute an internal Web service using the end user's security context.
Disabling the HttpGet protocol is an effective countermeasure. In many ways, this is similar
to an XSS attack. A variation of this attack uses an <img src="..." /> tag on a publicly
accessible Web page to embed a GET call to an intranet Web service. Both attacks can
allow an outsider to invoke an internal Web service. Disabling protocols mitigates the risk.
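In an application's Web.config, the unnecessary protocols can be removed from the <protocols> element beneath <webServices>, as follows (in Machine.config, you would instead remove or comment out the corresponding <add> elements):

```xml
<webServices>
  <protocols>
    <!-- HttpSoap remains enabled; disable the GET and POST bindings -->
    <remove name="HttpPost" />
    <remove name="HttpGet" />
  </protocols>
</webServices>
```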
If your production server provides publicly discoverable Web services, you must enable
HttpGet and HttpPost to allow the service to be discovered over these protocols.
At times, you might want to distribute the WSDL files manually to your partners and prevent
public access. With this approach, the development team can provide individual .wsdl files
for each Web service to the operations team. The operations team can then distribute them
to specified partners who want to use the Web services.
HTTP handlers are located in Machine.config beneath the <httpHandlers> element. HTTP
handlers are responsible for processing Web requests for specific file extensions. Remoting
should not be enabled on front-end Web servers; enable Remoting only on middle-tier
application servers that are isolated from the Internet.
.asax, .ascx, .config, .cs, .csproj, .vb, .vbproj, .webinfo, .asp, .licx, .resx, and
.resources are protected resources and are mapped to
System.Web.HttpForbiddenHandler.
For .NET Framework resources, if your application does not use a particular file extension, map that
extension to System.Web.HttpForbiddenHandler in Machine.config, as shown in the following
example:
<add verb="*" path="*.vbproj" type="System.Web.HttpForbiddenHandler"/>
To avoid this issue, you can create event sources at installation time when administrator
privileges are available. You can use a .NET installer class, which can be instantiated by the
Windows Installer (if you are using .msi deployment) or by the InstallUtil.exe system utility if
you are not. For more information about using event log installers, see Chapter 10, "Building
Secure ASP.NET Pages and Controls."
If you are unable to create event sources at installation time, you must edit the ACL on
the following registry key to grant access to the ASP.NET process account, or to any
impersonated account if your application uses impersonation.
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Eventlog
Create subkey
Enumerate subkeys
Notify
Read
File Access
Any file that your application accesses must have an access control entry (ACE) in the ACL
that grants, at minimum, read access to the ASP.NET process account or impersonated
identity. Normally, ACLs are configured on the directory and the file inherits the setting.
In addition to using NTFS permissions to restrict access to files and directories, you can
also use ASP.NET trust levels to place constraints on Web applications and Web services
to restrict which areas of the file system they can access. For example, Medium-trust Web
applications can only access files within their own virtual directory hierarchy.
For more information about ASP.NET CAS policy, see Chapter 9, "Using Code Access
Security with ASP.NET."
ACLs and Permissions
The ASP.NET process account and, for certain directories, any impersonation identities (if
your applications use impersonation) require the following NTFS permissions. The
permissions shown in Table 19.3 should be used in addition to any permissions your
applications might require to access application-specific file system resources.
Use the default ASP.NET process account. Create a mirrored account with the
same user name and password on the database server. On Windows 2000, the
default process account is ASPNET.
On Windows Server 2003, the default process account is NetworkService.
The disadvantage of using local accounts is that an attacker who can dump the SAM
database, which requires administrative privileges, can access the
credentials. The main advantage is that local accounts can be scoped to specific
servers, which is difficult to achieve using domain accounts.
Use a least privileged domain account to run ASP.NET. This approach simplifies
administration, and it means that you do not need to synchronize the passwords of
mirrored accounts. It will not work if the Web server and database server are in
separate non-trusting domains, or if a firewall separates the two servers and the
firewall does not permit the necessary ports for Windows authentication.
Impersonate the Anonymous Web account. If you are using Forms or Passport
authentication, you can impersonate the anonymous Web account (IUSR_MACHINE
by default) and create a mirrored account on the database server. This approach is
useful in scenarios where you host multiple Web applications on the same Web
server. You can use IIS to configure each application's virtual directory with a
different anonymous account.
On Windows Server 2003, you can run multiple applications in separate worker
processes, using IIS 6.0 application pools and configuring a separate identity for
each one.
The following procedure assumes that you are using a mirrored local account, but you can
use the same approach with a domain account to restrict the account's capabilities in the
database.
You need to do this so that you can create a mirrored account on the database
server.
3. Create a local account with the same name (ASPNET) and the same strong
password on the database server.
5. Grant the Windows group access to SQL Server by creating a new login, as
follows:
sp_grantlogin 'MACHINE\ASPNETWebApp'
6. Grant the SQL login access to the database. The following T-SQL creates a
database user called WebAppUser that is associated with the login.
USE YourDatabase
GO
sp_grantdbaccess 'MACHINE\ASPNETWebApp', 'WebAppUser'
9. Configure permissions in the database for the database role. Ideally, grant
execute permissions only for the stored procedures that the application uses to
query the database and do not provide direct table access.
grant execute on sprocname to WebAppUserRole
UNC Shares
There are two main ways that your ASP.NET application might use UNC shares:
Your application's IIS virtual directory is mapped to a remote share, for example,
\\remoteserver\appname. In this scenario, HTTP requests are processed by your
Web server, but the application's pages, resources, and private assemblies are
located on the remote share.
If you use the local ASPNET process account, it has no network identity, so you
must create a mirrored account on the remote server with a matching user name and
password, or you must use a least privileged domain account that has access to both
servers. On Windows Server 2003, the NetworkService account that is used to run
ASP.NET Web applications can be authenticated over the network, so all you need to do is
grant access rights to the machine account.
Note  The account credentials are stored in encrypted format in the IIS
metabase but are available through an API. You should ensure that you use a
least privileged account. For more information, see Microsoft Knowledge Base
article 280383, "IIS Security Recommendations When You Use a UNC Share and
Username and Password Credentials."
If your application resides on a UNC share, ASP.NET impersonates the IIS-provided UNC
token (created from the account credentials that you supplied to IIS) to access that share,
unless you have enabled impersonation and have used a fixed impersonation identity, as
shown with the following configuration:
<identity impersonate="true"
  userName="registry:HKLM\SOFTWARE\YourApp\identity\ASPNET_SETREG,userName"
  password="registry:HKLM\SOFTWARE\YourApp\identity\ASPNET_SETREG,password" />
Note  In the above example, Aspnet_setreg.exe has been used to store the
encrypted account credentials in the registry.
If you enable impersonation of the original caller (IIS authenticated identity) by using the
following configuration, ASP.NET still uses the UNC-provided token to access your
application's files on the share, although any resource access performed by your application
uses the impersonation token.
<identity impersonate="true" />
Note The account used for the UNC share must also be able to read Machine.config.
Grant full trust to the UNC share on which your application is hosted.
This is the simplest option to manage and if you run .NET Framework version 1.0,
this is the only option because ASP.NET version 1.0 Web applications require full
trust.
Because of the way in which ASP.NET dynamically creates code and compiles
page classes, you must use a code group for the UNC and the Temporary ASP.NET
Files directory when you configure policy. The default temporary directory is
\WINNT\Microsoft.NET\Framework\{version}\Temporary ASP.NET Files, but the
location is configurable on a per application basis by using the tempDirectory
attribute of the <compilation> element.
For more information about ASP.NET code access security policy and sandboxing
privileged code, see Chapter 9, "Using Code Access Security with ASP.NET."
Note  When configuring policy, you should grant trust to the share (by using a
file location) rather than to the zone. This provides finer granularity
because you do not affect all the applications in a particular zone.
COM/DCOM Resources
Your application uses the process or impersonation identity when it calls COM-based
resources, such as serviced components. Client-side authentication and impersonation
levels are configured using the comAuthenticationLevel and
comImpersonationLevel attributes on the <processModel> element in Machine.config.
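As a sketch, such a configuration might look like the following fragment; the values shown (Connect and Impersonate) are illustrative choices, not recommendations from this guide:

```xml
<!-- Fragment of the <processModel> element in Machine.config.
     The attribute values shown here are illustrative. -->
<processModel enable="true"
              comAuthenticationLevel="Connect"
              comImpersonationLevel="Impersonate" />
```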
Clients are checked to ensure that they are still connected before requests
are queued for work. This guards against an attacker who sends multiple
requests and then disconnects.
<httpRuntime>
You might want to reduce the maxRequestLength attribute to prevent users from
uploading very large files. The attribute is specified in kilobytes and
defaults to 4096 (4 MB). In the Open Hack competition, maxRequestLength was
constrained to 1/2 MB, as shown in the following example:
<system.web>
<!-- 1/2 MB Max POST length -->
<httpRuntime maxRequestLength="512"/>
</system.web>
Note  ASP.NET does not address packet-level attacks. You must address these by
hardening the TCP/IP stack. For more information about configuring the TCP/IP
stack, see "How To: Harden the TCP/IP Stack" in the "How To" section of this
guide.
Web Farm Considerations
If your ASP.NET Web application runs in a Web farm, there is no guarantee that successive
requests from the same client will be serviced by the same Web server. This has
implications for:
Session state
DPAPI
Session State
To avoid server affinity, maintain ASP.NET session state out of process in the ASP.NET SQL
Server state database or in the out-of-process state service that runs on a remote machine.
For more information about securing session state in a remote state store, see the
"Session State" section earlier in this document.
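For reference, a minimal configuration sketch for SQL Server-backed session state follows; the server name is a placeholder and your connection details will differ:

```xml
<!-- Web.config fragment: out-of-process session state in SQL Server.
     "StateServerName" is a placeholder for your state server. -->
<sessionState mode="SQLServer"
              sqlConnectionString="data source=StateServerName;Integrated Security=SSPI"
              cookieless="false"
              timeout="20" />
```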
For more information on generating and configuring the keys, see Microsoft
Knowledge Base article 312906, "How To: Create Keys by Using Visual C# .NET
for Use in Forms Authentication."
DPAPI
To encrypt data, developers sometimes use DPAPI. If you use DPAPI with the machine key
to store secrets, the encrypted string is specific to a given computer and you cannot copy
the encrypted data across computers in a Web farm or cluster.
If you use DPAPI with a user key, you can decrypt the data on any computer
with a roaming user profile. However, this is not recommended, because the
data can be decrypted by code running on any machine in the network under the
account that originally encrypted it.
DPAPI is ideally suited to storing configuration secrets, for example, database connection
strings, that live on the Web server. Other encryption techniques should be used when the
encrypted data is stored on a remote server, for example, in a database. For more
information about storing encrypted data in the database, see Chapter 14, "Building Secure
Data Access."
Snapshot of a Secure ASP.NET Application
The following snapshot view shows the attributes of a secure ASP.NET application and
allows you to quickly and easily compare settings with your own configuration.
For a related checklist, see "Checklist: Securing ASP.NET" in the "Checklist" section of this
guide.
Additional Resources
For more information, see the following resources and articles:
You can download Web Services Enhancements (WSE) 1.0 SP1 for Microsoft .NET
at http://microsoft.com/downloads/details.aspx?FamilyId=06255A94-2635-4D29-
A90C-28B282993A41&displaylang=en.
Microsoft Knowledge Base article 329290, "How To: Use the ASP.NET Utility to
Encrypt Credentials and Session State Connection Strings."
Microsoft Knowledge Base article 311209, "How To: Configure ASP.NET for
Persistent SQL Server Session State Management."
Microsoft Knowledge Base article 312906, "How To: Create Keys by Using Visual
C# .NET for Use in Forms."
"How To: Implement Kerberos Delegation for Windows 2000" in the "How To"
section of "Microsoft patterns & practices Volume I, Building Secure ASP.NET
Applications: Authentication, Authorization, and Secure Communication" at
http://msdn.microsoft.com/library/default.asp?url=/library/en-
us/dnnetsec/html/SecNetHT05.asp.
For more information on security considerations from the Open Hack competition,
see MSDN article "Building and Configuring More Secure Web Sites" at
http://msdn.microsoft.com/library/default.asp?url=/library/en-
us/dnnetsec/html/openhack.asp.
Chapter 20: Hosting Multiple Web Applications
In This Chapter
Using multiple identities for application isolation
Using Microsoft Windows Server 2003 application pools for application isolation
The issue is particularly significant for Internet Service Providers (ISPs) who host large
numbers of applications from different companies. In a hosting scenario, it is essential to
ensure that the installation of a new application cannot adversely impact the operation of
existing applications.
There are a number of ways in which application isolation can be achieved. The available
options vary depending on the version of the .NET Framework and the version of the
operating system that you run on the Web server. If you are running version 1.1 of the .NET
Framework, you can use the resource constraint model provided by code access security
to provide one level of application isolation. This isolation is achieved by
restricting an application's access to different types of resources, such as
the file system, registry, event log, Active Directory, databases, and
network resources.
In addition, Windows Server 2003 provides process isolation through Internet Information
Services 6.0 (IIS 6) application pools that enable multiple applications to run in separate IIS
worker process instances. Process isolation is not possible on Windows 2000 because all
Web applications run in a single instance of the ASP.NET worker process, with application
domains providing isolation.
Table 20.1 summarizes the application isolation options that are available on
Windows 2000 and Windows Server 2003.
Table 20.1: Application Isolation Features for Windows 2000 and Windows Server
2003
Isolation Feature                          Windows 2000                Windows Server 2003
Process isolation                          No                          Yes (IIS 6 App Pools)
Application domain isolation               Yes                         Yes
Multiple thread identities                 Yes                         Yes
Code access security resource constraint   Yes (.NET Framework 1.1)    Yes (.NET Framework 1.1)
Windows Server 2003 running version 1.1 of the .NET Framework is the recommended
platform for hosting multiple ASP.NET applications because it supports process isolation
and provides the richest range of options for application isolation.
ASP.NET Architecture on Windows 2000
On Windows 2000, multiple Web applications run in a single instance of the ASP.NET
worker process (Aspnet_wp.exe). Each application resides in its own application domain
that provides a degree of isolation for managed code. The Windows 2000/IIS 5 architecture
is shown in Figure 20.1.
The components of the architecture depicted by Figure 20.1 are summarized in Table 20.2.
Note  IIS 6 supports a backwards compatibility mode that, in turn, supports
the IIS 5 ASP.NET worker process model.
Compared to the ASP.NET architecture under Windows 2000, the primary difference in
Windows Server 2003 is that separate IIS worker process instances (W3wp.exe) can be
used to host Web applications. By default, these run using the NT
Authority\NetworkService account, which is a least privileged local account that acts as
the computer account over the network. A Web application that runs in the context of the
Network Service account presents the computer's credentials to remote servers for
authentication.
Note  Do not confuse the Network Service account with the Network built-in
group, which includes users who were authenticated across the network.
The main components of the architecture depicted by Figure 20.2 are summarized in Table
20.3.
Note  If you host an ASP.NET Web application built using the .NET Framework
version 1.0, the process account needs appropriate permissions to the root of
the current file system drive. For more information, see Microsoft Knowledge
Base article 317955, "FIX: 'Failed to Start Monitoring Directory Changes'
Error Message When You Browse to an ASP.NET Page."
There are two ways to use separate fixed identities for each application on a shared Web
server:
To support this approach, the application's virtual directories in IIS must support anonymous
access and a separate anonymous account must be configured for each application. The
application must then be configured for impersonation. This approach is shown in Figure
20.3. Local and remote resource access assumes the security context of the impersonated
anonymous account.
Figure 20.3: Multiple anonymous accounts used for each application
This procedure describes how to use multiple anonymous accounts, one per Web
application, for resource access to support individual application authorization and auditing.
1. Create new anonymous user accounts, one per application.
For more information about creating an anonymous user account, see the
"Accounts" section in Chapter 16, "Securing Your Web Server."
If you need to access remote resources using the anonymous account, either use
a least privileged domain account, or use a local account and create a duplicated
local account on the remote server with a matching user name and password.
3. Click the Security tab and then click the Edit button.
5. Enter the user name for the anonymous account that you have created,
or click Browse to select the user name from a list.
6. If you want to use the account to access a remote resource, clear the
Allow IIS to Control Password checkbox for the anonymous account.
If you select Allow IIS to Control Password, the logon session created
using the specified anonymous account has NULL network credentials
and cannot be used to access network resources where authentication
is required. If you clear this checkbox, the logon session is an interactive
logon session with network credentials. However, if the account is local
to the machine, no other machine on the network can authenticate the
account. In this scenario, create a duplicate account on the target
remote server.
Note The Allow IIS to Control Password option is not available on IIS 6. IIS
6 sets the default LogonMethod to Network Cleartext, which requires
the account to have the "Access this computer from the network" user
privilege. This allows the account to be authenticated by a network
server.
4. Configure NTFS permissions for each account to ensure that each account has
access only to the appropriate file system files and folders, and cannot access
critical resources such as operating system tools.
For more information about configuring NTFS permissions for the anonymous
account, see Chapter 16, "Securing Your Web Server."
You can configure individual ASP.NET applications to impersonate a fixed account. The
advantage of this configuration is that it can be used with any IIS authentication method,
and does not require IIS anonymous authentication.
This procedure describes how to use multiple fixed impersonation accounts, one per Web
application, for resource access to support individual application authorization and auditing.
1. Create new anonymous user accounts, one per application.
For more information about creating an anonymous user account, see the
"Accounts" section in Chapter 16, "Securing Your Web Server."
If you need access to remote resources using the anonymous account, either use
a least privileged domain account, or use a local account and create a duplicated
local account on the remote server with a matching user name and password.
4. Configure NTFS permissions for each account to ensure that each account has
access only to the appropriate file system files and folders, and no access to
critical resources such as operating system tools.
For more information about configuring NTFS permissions for the anonymous
account, see Chapter 16, "Securing Your Web Server."
2. Configure NTFS permissions for each account to ensure that each account only
has access to the appropriate file system files and folders, and cannot access
critical resources such as operating system tools.
For more information about configuring NTFS permissions for the anonymous
account, see Chapter 16, "Securing Your Web Server."
4. Create new application pools and configure them to run under the new accounts.
Use IIS 6 to create new application pools with default settings, and use the
accounts created in step 1 to configure the identity of each pool, so that each pool
runs using a separate identity.
On the Directory tab of each IIS application, choose the application pool for the
application to run in.
Isolating Applications with Code Access Security
With version 1.1 of the .NET Framework, you can configure applications to run at partial
trust levels, using the <trust> element. The following configuration shows how to configure
an application's trust level from Machine.config. In this example, the Medium trust level is
used.
<location path="Web Site Name/appvDir1" allowOverride="false">
<system.web>
<trust level="Medium" originUrl="" />
</system.web>
</location>
If you configure an application to run with a trust level other than "Full," the application has
restricted code access security permissions to access specific types of resources. In this
way, you can constrain applications to prevent them from interacting with one another and
from gaining access to system level resources such as restricted areas of the file system,
the registry, the event log, and so on.
For more information about the ASP.NET trust levels and how they can be used to provide
application isolation and about application specific design and development considerations,
see Chapter 9, "Using Code Access Security with ASP.NET."
Note  If you use code access security to provide application isolation, you
should still consider the operating system identity of the application. The
recommended isolation model is to use code access security together with
process-level isolation on Windows Server 2003.
Forms Authentication Issues
If you use Forms authentication with version 1.0 of the .NET Framework, you should use
separate cookie paths and names. If you do not do so, it is possible for a user
authenticated in one application to make a request to another application without being
redirected to that application's logon page. The URL authorization rules within the second
application may deny access to the user, without providing the opportunity to supply logon
credentials using the logon form.
To avoid this issue, use unique cookie path and name attributes on the <forms> element for
each application, and also use separate machine keys for each application.
Version 1.1 of the .NET Framework supports the IsolateApps setting shown below.
<machineKey validationKey="AutoGenerate,IsolateApps"
            decryptionKey="AutoGenerate,IsolateApps" validation="SHA1" />
This ensures that each application on the machine uses a separate key for encryption and
validation of Forms authentication cookies and view state.
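The exact derivation ASP.NET performs is internal, but the general idea of IsolateApps — deriving a distinct key per application from one machine-wide value — can be sketched as follows. The function name and derivation below are hypothetical, for illustration only, and are not ASP.NET's actual algorithm:

```python
import hashlib
import hmac

def derive_app_key(machine_secret: bytes, app_path: str) -> bytes:
    """Mix the application's virtual path into a machine-wide secret so
    each application gets its own key (illustrative derivation only)."""
    return hmac.new(machine_secret, app_path.encode("utf-8"),
                    hashlib.sha1).digest()

secret = b"machine-wide-secret"
key_a = derive_app_key(secret, "/app1")
key_b = derive_app_key(secret, "/app2")
assert key_a != key_b                              # distinct key per application
assert key_a == derive_app_key(secret, "/app1")    # but deterministic per app
```

A cookie forged or captured in one application is then useless against another, because the second application validates with a different key.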
With version 1.0 of the .NET Framework, you cannot use IsolateApps and you must
manually generate <machineKey> elements. For more information about this issue, see
the following articles in the Microsoft Knowledge Base.
313116, "PRB: Forms Authentication Requests Are Not Directed to loginUrl Page"
312906, "How To: Create Keys by Using Visual C# .NET for Use in Forms
Authentication"
UNC Share Hosting
If you run an application with its content on a Universal Naming Convention (UNC) share, the
credentials used to access the share are either the credentials of the application or of the
authenticated client. This is configured in IIS by an administrator.
With version 1.0 of the .NET Framework, use Mscorcfg.msc to create a code group based
on the URL and to grant it full trust.
When you use a virtual directory that points to a remote share to host an ASP.NET
application, you may receive a security exception. For more information, see Microsoft
Knowledge Base article 320268, "PRB: System.Security.SecurityException: Security error."
Summary
If you host multiple ASP.NET applications on a single Web server, you need to consider how
applications are isolated from one another and from shared system resources such as the
file system, registry, and event logs. Without adequate isolation, a rogue or badly
developed application can adversely affect other applications on the server.
On Windows Server 2003, use the multiple worker process model supported by IIS 6 to
provide operating system process isolation for applications. On Windows 2000, process
isolation is not possible, although multiple applications can be configured to use separate
anonymous user accounts. This provides separate application auditing and supports
independent application authorization.
On both platforms you can use the resource constraint model provided by code access
security as an additional control to restrict which applications have access to which
resource types. The use of code access security with ASP.NET applications requires
version 1.1 of the .NET Framework.
For more information about securing ASP.NET applications, see Chapter 19, "Securing Your
ASP.NET Applications and Web Services."
Part V: Assessing Your Security
Chapter List
Chapter 21: Code Review
Identifying poor coding techniques that allow malicious users to launch attacks
This chapter helps you review managed ASP.NET Web application code built using the
Microsoft .NET Framework. In addition, it covers reviewing calls to unmanaged code. The
chapter is organized by functional area, and includes sections that present general code
review questions applicable to all types of managed code as well as sections that focus on
specific types of code such as Web services, serviced components, data access
components, and so on.
This chapter shows the questions to ask to expose potential security vulnerabilities. You can
find solutions to these questions in the individual building chapters in Part III of this guide.
You can also use the code review checklists in the "Checklists" section of the guide to help
you during the review process.
FxCop
A good way to start the review process is to run your compiled assemblies through the
FxCop analysis tool. The tool analyzes binary assemblies (not source code) to ensure that
they conform to the .NET Framework Design Guidelines, available on MSDN. It also checks
that your assemblies have strong names, which provide tamperproofing and other security
benefits. The tool comes with a predefined set of rules, although you can customize and
extend them.
For the list of security rules that FxCop checks for, see
http://www.gotdotnet.com/team/libraries/FxCopRules/SecurityRules.aspx.
You may already have a favorite search tool. If not, you can use the Find in Files facility in
Visual Studio .NET or the Findstr command line tool, which is included with the Microsoft
Windows operating system.
Note  If you use the Windows XP Search tool from Windows Explorer with the A
word or phrase in the file option, check that you have the latest Windows XP
service pack, or the search may fail. For more information, see Microsoft
Knowledge Base article 309173, "Using the 'A Word or Phrase in the File'
Search Criterion May Not Work."
For example, to search for the string "password" in the Web directory of your application,
use the Findstr tool from a command prompt as follows:
findstr /S /M /I /d:c:\projects\yourweb "password" *.*
/S — include subdirectories.
/M — print only the name of each file that contains a match.
/I — use a case-insensitive search.
/D:dir — search a semicolon-delimited list of directories. If the file path you want to
search includes spaces, surround the path in double quotes.
Automating Findstr
You can create a text file with common search strings. Findstr can then read the search
strings from the text file, as shown below. Run the following command from a directory that
contains .aspx files.
findstr /N /G:SearchStrings.txt *.aspx
/N prints the corresponding line number when a match is found. /G indicates the file that
contains the search strings. In this example, all ASP.NET pages (*.aspx) are searched for
strings contained within SearchStrings.txt.
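If Findstr is not available (for example, when reviewing code copied to another platform), the same scan can be approximated with a short script. The following sketch mimics findstr /S /N /I with a list of search strings; the file contents and patterns are illustrative:

```python
import re
import tempfile
from pathlib import Path

def find_strings(root, patterns, glob="*.aspx"):
    """Scan files under root for any of the case-insensitive patterns;
    yield (file, line_number, line) tuples, like findstr /S /N /I."""
    regexes = [re.compile(p, re.IGNORECASE) for p in patterns]
    for path in Path(root).rglob(glob):
        for num, line in enumerate(
                path.read_text(errors="ignore").splitlines(), start=1):
            if any(r.search(line) for r in regexes):
                yield str(path), num, line.strip()

# Demo against a throwaway page containing a hard-coded credential.
root = tempfile.mkdtemp()
(Path(root) / "login.aspx").write_text("<html>\nPassword=secret\n</html>\n")
hits = list(find_strings(root, ["password", "connectionstring"]))
for name, num, line in hits:
    print(f"{name}({num}): {line}")
```

In a real review you would point root at your Web directory and load the patterns from a shared text file, as with /G:SearchStrings.txt.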
ILDASM
You can also use the Findstr command in conjunction with the ildasm.exe utility to search
binary assemblies for hard-coded strings. The following command uses ildasm.exe to
search for the ldstr intermediate language statement, which identifies string constants.
Notice how the output shown below reveals a hard-coded database connection and the
password of the well known sa account.
Ildasm.exe secureapp.dll /text | findstr ldstr
IL_000c: ldstr "RegisterUser"
IL_0027: ldstr "@userName"
IL_0046: ldstr "@passwordHash"
IL_0065: ldstr "@salt"
IL_008b: ldstr "Exception adding account. "
IL_000e: ldstr "LookupUser"
IL_0027: ldstr "@userName"
IL_007d: ldstr "SHA1"
IL_0097: ldstr "Exeception verifying password. "
IL_0009: ldstr "SHA1"
IL_003e: ldstr "Logon successful: User is authenticated"
IL_0050: ldstr "Invalid username or password"
IL_0001: ldstr "Server=AppServer;database=users; username=sa; password=password"
XSS bugs are an example of maintaining too much trust in data entered by a user. For
example, your application might expect the user to enter a price, but instead the attacker
includes a price and some HTML and JavaScript. Therefore, you should always ensure that
data that comes from untrusted sources is validated. When reviewing code, always ask the
question, "Is this data validated?" Keep a list of all entry points into your ASP.NET
application, such as HTTP headers, query strings, form data, and so on, and make sure
that all input is checked for validity at some point. Do not test for incorrect input values
because that approach assumes that you are aware of all potentially risky input. The most
common way to check that data is valid in ASP.NET applications is to use regular
expressions.
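In ASP.NET you would typically apply such a pattern with a RegularExpressionValidator control or the Regex class; the allow-list principle itself can be sketched in a few lines. The price format below is an assumed example:

```python
import re

# Allow-list validation: accept only input that matches a known-good
# pattern (here, an assumed price format), rather than trying to
# filter out every possible bad input.
PRICE_RE = re.compile(r"^\d{1,6}(\.\d{1,2})?$")

def is_valid_price(value: str) -> bool:
    return bool(PRICE_RE.match(value))

assert is_valid_price("19.99")
assert not is_valid_price("19.99<script>alert('x');</script>")
```

Anything that is not provably a price is rejected, so injected markup never reaches the rest of the application.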
You can perform a simple test by typing text such as "XYZ" in form fields and testing the
output. If the browser displays "XYZ" or if you see "XYZ" when you view the source of the
HTML, then your Web application is vulnerable to XSS. If you want to see something more
dynamic, inject <script>alert('hello');</script>. This technique might not work in all cases
because it depends on how the input is used to generate the output.
A common technique used by developers is to filter for < and > characters. If the code that
you review filters for these characters, then test using the following code instead:
&{alert('hello');}
If the code does not filter for those characters, then you can test the code by using the
following script:
<script>alert(document.cookie);</script>;
You may have to close a tag before using this script, as shown below.
"></a><script>alert(document.cookie);</script>
You should also search for the "<%=" string within .aspx source code, which can also be
used to write output, as shown below:
<%=myVariable %>
The following table shows some common situations where Response.Write is used with
input fields.
While not exhaustive, commonly used HTML tags that could allow a malicious
user to inject script code include <script> and <style>.
HTML attributes such as src, lowsrc, style, and href can be used in conjunction with the
tags above to cause XSS.
For example, the src attribute of the <img> tag can be a source of injection,
as shown in the following examples. In the second and third examples, the
string "javascript" is split by embedded whitespace, which naive filters miss:
<IMG SRC="javascript:alert('hello');">
<IMG SRC="java
script:alert('hello');">
<IMG SRC="java
script:alert('hello');">
The <style> tag also can be a source of injection by changing the MIME type as shown
below.
<style TYPE="text/javascript">
alert('hello');
</style>
Check to see if your code attempts to sanitize input by filtering out certain known risky
characters. Do not rely upon this approach because malicious users can generally find an
alternative representation to bypass your validation. Instead, your code should validate for
known secure, safe input. A character such as "<" can be represented in
several alternative ways, including URL encoding (%3C) and HTML entity
encodings (&lt;, &#60;, and &#x3C;).
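A quick way to see why character filters fail is to decode several representations of the same character; this sketch uses Python's standard html and urllib modules:

```python
import html
import urllib.parse

# Five distinct on-the-wire forms of the same '<' character.
forms = ["<", "%3C", "&lt;", "&#60;", "&#x3C;"]

# HTML-entity decoding followed by URL decoding maps them all to '<'.
decoded = [urllib.parse.unquote(html.unescape(f)) for f in forms]
assert all(d == "<" for d in decoded)
print(decoded)
```

A filter that blocks only the literal "<" passes the other four forms untouched, which is why validating for known-good input beats filtering for known-bad characters.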
If your Web server is not up-to-date with the latest security patches, it could be
vulnerable to directory traversal and double slash attacks, such as:
http://www.YourWebServer.com/..%255c../winnt
http://www.YourWebServer.com/..%255c..//somedirectory
If your code filters for "/", an attacker can easily bypass the filter by using an
alternate representation for the same character. For example, the overlong UTF-8
representation of "/" is "%c0%af", and this could be used in the following URL:
http://www.YourWebServer.com/..%c0%af../winnt
If your code processes query string input, check that it constrains the input data
and performs bounds checks. Check that the code is not vulnerable if an attacker
passes an extremely large amount of data through a query string parameter.
http://www.YourWebServer.com/test.aspx?
var=InjectHugeAmountOfDataHere
Check that the application Web.config file has set the requestEncoding and
responseEncoding attributes configured by the <globalization> element as shown below.
<configuration>
<system.web>
<globalization
requestEncoding="ISO-8859-1"
responseEncoding="ISO-8859-1"/>
</system.web>
</configuration>
Character encoding can also be set at the page level using a <meta> tag or
ResponseEncoding page-level attribute as shown below.
<% @ Page ResponseEncoding="ISO-8859-1" %>
For more information, see Chapter 10, "Building Secure ASP.NET Pages and Controls."
If you create a page with untrusted input, verify that you use the innerText property instead
of innerHTML. The innerText property renders content safe and ensures that script is not
executed.
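In ASP.NET, HtmlEncode (on HttpUtility or HttpServerUtility) performs this kind of neutralization for HTML output. The effect can be sketched with Python's standard html module:

```python
import html

# Untrusted input that would execute if written to the page verbatim.
untrusted = "<script>alert(document.cookie);</script>"

# Encoding turns markup metacharacters into inert entities.
safe = html.escape(untrusted)
print(safe)  # &lt;script&gt;alert(document.cookie);&lt;/script&gt;
```

The browser displays the encoded text literally instead of executing it, which is the same guarantee innerText provides.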
More Information
For more information about XSS, see the following articles:
"CSS Quick Start: What Customers Can Do to Protect Themselves from Cross-Site
Scripting," at http://www.microsoft.com/technet/treeview/default.asp?
url=/technet/security/news/crsstQS.asp
Microsoft Knowledge Base article 252985, "How To: Prevent Cross-Site Scripting
Security Issues"
Stored procedures alone cannot prevent SQL injection attacks. Check that your
code uses parameterized stored procedures. Check that your code uses typed
parameter objects such as SqlParameter, OleDbParameter, or
OdbcParameter. The following example shows the use of a SqlParameter:
SqlDataAdapter myCommand = new SqlDataAdapter("spLogin", conn);
myCommand.SelectCommand.CommandType = CommandType.StoredProcedure;
SqlParameter parm = myCommand.SelectCommand.Parameters.Add(
                        "@userName", SqlDbType.VarChar, 11);
parm.Value = txtUid.Text;
The typed SQL parameter checks the type and length of the input and ensures
that the userName input value is treated as a literal value and not as executable
code in the database.
If you do not use stored procedures, check that your code uses parameters in the
SQL statements it constructs, as shown in the following example:
select status from Users where UserName=@userName
Check that the following approach is not used, where the input is used directly to
construct the executable SQL statement using string concatenation:
string sql = "select status from Users where UserName='"
+ txtUserName.Text + "'";
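The difference between the two approaches can be demonstrated with any parameterized database API. This sketch uses Python's built-in sqlite3 module; the table and data are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (UserName TEXT, Status TEXT)")
conn.execute("INSERT INTO Users VALUES ('alice', 'active')")

# Hostile input crafted to break out of a concatenated SQL string.
user = "' OR '1'='1"

# Parameter binding treats the whole value as a literal, not as SQL:
# no user is literally named "' OR '1'='1", so no rows come back.
rows = conn.execute(
    "SELECT Status FROM Users WHERE UserName = ?", (user,)).fetchall()
print(rows)  # []

# The concatenated equivalent lets the input rewrite the WHERE clause
# and matches every row in the table.
unsafe = "SELECT Status FROM Users WHERE UserName = '" + user + "'"
print(conn.execute(unsafe).fetchall())  # [('active',)]
```

The same contrast holds for SqlParameter in ADO.NET: the bound value never becomes part of the statement text.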
These parameters are a primary source of buffer overflows. Check that your code
checks the length of any input string to verify that it does not exceed the limit
defined by the API. If the unmanaged API accepts a character pointer, you may
not know the maximum allowable string length unless you have access to the
unmanaged source. A common vulnerability is shown in the following code
fragment:
void SomeFunction( char *pszInput )
{
    char szBuffer[10];
    // Look out, no length checks. Input is copied straight into the buffer.
    // Should check length or use strncpy.
    strcpy(szBuffer, pszInput);
    . . .
}
Note: Buffer overflows can still occur if you use strncpy because it does not
check for sufficient space in the destination string; it only limits the
number of characters copied.
If you cannot inspect the unmanaged code because you do not own it, rigorously
test the API by passing in deliberately long input strings and invalid arguments.
If the unmanaged API accepts a file name and path, check that your wrapper
method checks that the file name and path do not exceed 260 characters. This is
defined by the Win32 MAX_PATH constant. Also note that directory names and
registry keys can be 248 characters maximum.
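A minimal sketch of such a length check, assuming a hypothetical NativeMethods.ProcessFile P/Invoke wrapper:

```csharp
// Sketch: rejecting over-long paths before they reach unmanaged code.
// MaxPath mirrors the Win32 MAX_PATH constant of 260 characters.
const int MaxPath = 260;

static void ProcessFileChecked(string path)
{
    if (path == null || path.Length >= MaxPath)
        throw new ArgumentException("Path exceeds MAX_PATH.");
    // NativeMethods.ProcessFile(path);   // hypothetical P/Invoke call
}
```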
4. Check output strings.
Check if your code uses a StringBuilder to receive a string passed back from an
unmanaged API. Check that the capacity of the StringBuilder is long enough to
hold the longest string the unmanaged API can hand back, because the string
coming back from unmanaged code could be of arbitrary length.
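As a sketch, GetWindowsDirectory (a real Win32 API, used here only for illustration) shows the pattern of sizing the StringBuilder before the call:

```csharp
// Sketch: sizing a StringBuilder to hold the longest possible result.
// GetWindowsDirectory is a real Win32 API; the 260 capacity matches MAX_PATH.
[DllImport("kernel32.dll", CharSet = CharSet.Auto)]
static extern uint GetWindowsDirectory(StringBuilder lpBuffer, uint uSize);

static string GetWindowsDir()
{
    StringBuilder buffer = new StringBuilder(260);
    // Pass the capacity so the API cannot write past the end of the buffer.
    GetWindowsDirectory(buffer, (uint)buffer.Capacity);
    return buffer.ToString();
}
```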
If you use an array to pass input to an unmanaged API, check that the managed
wrapper verifies that the array capacity is not exceeded.
6. Check that your unmanaged code is compiled with the /GS switch.
If you own the unmanaged code, use the /GS switch to enable stack probes to
detect some kinds of buffer overflows.
Managed Code
Use the review questions in this section to analyze your entire managed source code base.
The review questions apply regardless of the type of assembly. This section helps you
identify common managed code vulnerabilities. For more information about the issues
raised in this section and for code samples that illustrate vulnerabilities, see Chapter 7,
"Building Secure Assemblies."
If your managed code uses explicit code access security features, see "Code Access
Security" later in this chapter for additional review points. The following review questions
help you to identify managed code vulnerabilities:
An assembly is only as secure as the classes and other types it contains. The following
questions help you to review the security of your class designs:
Review any type or member marked as public and check that it is an intended part
of the public interface of your assembly.
If you do not intend a class to be derived from, use the sealed keyword to prevent
your code from being misused by potentially malicious subclasses.
For public base classes, you can use code access security inheritance demands to
limit the code that can inherit from the class. This is a good defense in depth
measure.
Do you use properties to expose fields?
Check that your classes do not directly expose fields. Use properties to expose
non-private fields. This allows you to validate input values and apply additional
security checks.
Verify that you have made effective use of read-only properties. If a field is not
designed to be set, implement a read-only property by providing a get accessor
only.
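A minimal sketch of the pattern (the Account class and owner field are illustrative):

```csharp
// Sketch: exposing a field through a read-only property.
// The Account class and owner field are illustrative.
public class Account
{
    private readonly string owner;

    public Account(string owner)
    {
        this.owner = owner;
    }

    // Get accessor only: callers can read but never set the field.
    public string Owner
    {
        get { return this.owner; }
    }
}
```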
These methods can be overridden from other assemblies that have access to your
class. Use declarative checks or remove the virtual keyword if it is not a
requirement.
If so, check that you call the Dispose method when you are finished with the object
instance to ensure that all resources are freed.
The following review questions help you to identify potential threading vulnerabilities:
Is the thread that creates a new thread currently impersonating? The new thread
always assumes the process-level security context and not the security context of
the existing thread.
If so, check that the code prevents sensitive data from being serialized by marking
the sensitive data with the [NonSerialized] attribute or by implementing
ISerializable and then controlling which fields are serialized.
If your classes need to serialize sensitive data, review how that data is protected.
Consider encrypting the data first.
If so, does your class support only full trust callers, for example because it is
installed in a strong named assembly that does not include
AllowPartiallyTrustedCallersAttribute? If your class supports partial-trust callers,
check that the GetObjectData method implementation authorizes the calling code
by using an appropriate permission demand. A good technique is to use a
StrongNameIdentityPermission demand to restrict which assemblies can serialize
your object.
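A sketch of the technique follows; the PublicKey blob is a truncated placeholder, and the class and field names are illustrative:

```csharp
// Sketch: restricting which assemblies can serialize this object.
// The PublicKey blob is a truncated placeholder; substitute the full
// public key of the assemblies you want to authorize.
[Serializable]
public class SensitiveData : ISerializable
{
    private string secret;

    public SensitiveData(string secret) { this.secret = secret; }

    protected SensitiveData(SerializationInfo info, StreamingContext context)
    {
        secret = info.GetString("secret");
    }

    [StrongNameIdentityPermission(SecurityAction.Demand,
        PublicKey = "00240000048...")]   // placeholder key blob
    public void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        info.AddValue("secret", secret);
    }
}
```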
If your code includes a method that receives a serialized data stream, check that
every field is validated as it is read from the data stream.
If your code loads assemblies to create object instances and invoke types, does it
obtain the assembly or type name from input data? If so, check that the code is
protected with a permission demand to ensure all calling code is authorized. For
example, use a StrongNameIdentityPermission demand or demand full trust.
If so, check that only trusted code can call you. Use code access security
permission demands to authorize calling code.
Check that your code fails early to avoid unnecessary processing that consumes
resources. If your code does fail, check that the resulting error does not allow a
user to bypass security checks to run privileged code.
Avoid revealing system or application details to the caller. For example, do not
return a call stack to the end user. Wrap resource access or operations that could
generate exceptions with try/catch blocks. Only handle the exceptions you know
how to handle and avoid wrapping specific exceptions with generic wrappers.
Check that exception details are logged at the source of the exception to assist
problem diagnosis.
Do you use exception filters?
If so, be aware that the code in a filter higher in the call stack can run before code
in a finally block. Check that you do not rely on state changes in the finally block,
because the state change will not occur before the exception filter executes.
If so, check that you use Rijndael (now referred to as Advanced Encryption
Standard [AES]) or Triple Data Encryption Standard (3DES) when encrypted data
needs to be persisted for long periods of time. Use the weaker (but quicker) RC2
and DES algorithms only to encrypt data that has a short lifespan, such as session
data.
Use the largest key size possible for the algorithm you are using. Larger key sizes
make attacks against the key much more difficult, but can degrade performance.
If so, check that you use MD5 and SHA1 when you need a principal to prove it
knows a secret that it shares with you. For example, challenge-response
authentication systems use a hash to prove that the client knows a password
without having the client pass the password to the server. Use HMACSHA1 with
Message Authentication Codes (MAC), which require you and the client to share a
key. This can provide integrity checking and a degree of authentication.
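A short sketch of a keyed hash with HMACSHA1 (the key and message used here are illustrative):

```csharp
// Sketch: computing a MAC with HMACSHA1 over a shared secret key.
static byte[] ComputeMac(byte[] key, string message)
{
    HMACSHA1 hmac = new HMACSHA1(key);
    return hmac.ComputeHash(Encoding.UTF8.GetBytes(message));
}
```

Both parties compute the MAC over the message with the shared key; a match proves the message was not altered and that the sender knows the key.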
If your assembly stores secrets, review the design to check that it is absolutely necessary
to store the secret. If you have to store a secret, review the following questions to do so as
securely as possible:
Do not store secrets in plaintext in memory for prolonged periods. Retrieve the
secret from a store, decrypt it, use it, and then substitute zeros in the space where
the secret is stored.
Check that the code uses DPAPI to encrypt connection strings and credentials. Do
not store secrets in the Local Security Authority (LSA), as the account used to
access the LSA requires extended privileges. For information on using DPAPI, see
"How To: Create a DPAPI Library" in the "How To" section of "Microsoft patterns &
practices Volume I, Building Secure ASP.NET Applications: Authentication,
Authorization, and Secure Communication" at
http://msdn.microsoft.com/library/default.asp?url=/library/en-
us/dnnetsec/html/SecNetHT07.asp.
If so, check that they are first encrypted and then secured with a restricted ACL if
they are stored in HKEY_LOCAL_MACHINE. An ACL is not required if the code
uses HKEY_CURRENT_USER because this is automatically restricted to
processes running under the associated user account.
If so, consider an obfuscation tool. For more information, see the list of obfuscator
tools listed at http://www.gotdotnet.com/team/csharp/tools/default.aspx.
Any code can associate a method with a delegate. This includes potentially malicious code
running at a lower trust level than your code.
If so, check that you restrict the code access permissions available to the delegate
methods by using security permissions with SecurityAction.PermitOnly.
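One way to sketch this, using the .NET Framework 1.x-era code access security APIs (the worker delegate is illustrative):

```csharp
// Sketch: granting only execution permission while an untrusted delegate runs.
static void InvokeRestricted(ThreadStart worker)
{
    new SecurityPermission(SecurityPermissionFlag.Execution).PermitOnly();
    try
    {
        worker();   // delegate code cannot exceed execution-only permission
    }
    finally
    {
        CodeAccessPermission.RevertPermitOnly();
    }
}
```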
Avoid this because you do not know what the delegate code is going to do in
advance of calling it.
Code Access Security
All managed code is subject to code access security permission demands. Many of the
issues are only apparent when your code is used in a partial trust environment, when either
your code or the calling code is not granted full trust by code access security policy.
For more information about the issues raised in this section, see Chapter 8, "Code Access
Security in Practice."
Use the following review points to check that you are using code access security
appropriately and safely:
If it is, then default security policy ensures that it cannot be called by partially
trusted callers. The Common Language Runtime (CLR) issues an implicit link
demand for full trust. If your assembly is not strong named, it can be called by any
code unless you take explicit steps to limit the callers, for example by explicitly
demanding full trust.
Check method returns and ref parameters to see where your code returns object
references. Check that your partial-trust code does not hand out references to
objects obtained from assemblies that require full-trust callers.
If you have classes or structures that you only intend to be used within a specific
application by specific assemblies, you can use an identity demand to limit the
range of callers. For example, you can use a demand with a
StrongNameIdentityPermission to restrict the caller to a specific set of
assemblies that have been signed with a private key that corresponds to the
public key in the demand.
If you know that only specific code should inherit from a base class, check that the
class uses an inheritance demand with a StrongNameIdentityPermission.
Search for ".RequestMinimum" strings to see if your code uses permission requests
to specify its minimum permission requirements. You should do this to clearly
document the permission requirements of your assembly.
Sometimes imperative checks in code are necessary because you need to apply
logic to determine which permission to demand or because you need a runtime
variable in the demand. If you do not need specific logic, consider using declarative
security to document the permission requirements of your assembly.
Check that your code issues a Demand prior to the Assert. Code should demand a
more granular permission to authorize callers prior to asserting a broader
permission such as the unmanaged code permission.
Check that each call to Assert is matched with a call to RevertAssert. The Assert
is implicitly removed when the method that calls Assert returns, but it is good
practice to explicitly call RevertAssert, as soon as possible after the Assert call.
Do you reduce the assert duration?
Check that you only assert a permission for the minimum required length of time.
For example, if you need to use an Assert call just while you call another method,
check that you make a call to RevertAssert immediately after the method call.
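A sketch of the pattern, where NativeMethods.DoWork is a stub standing in for a real P/Invoke wrapper:

```csharp
// Sketch: keeping the Assert window as small as possible.
internal class NativeMethods
{
    // Stub standing in for a real P/Invoke declaration.
    internal static void DoWork() {}
}

static void CallUnmanagedWork()
{
    new SecurityPermission(SecurityPermissionFlag.UnmanagedCode).Assert();
    NativeMethods.DoWork();               // the only call that needs the Assert
    CodeAccessPermission.RevertAssert();  // revert immediately afterwards
}
```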
Your code is always subject to permission demand checks from the .NET Framework class
library, but if your code uses explicit permission demands, check that this is done
appropriately. Search your code for the ".Demand" string to identify declarative and
imperative permission demands, and then review the following questions:
If so, check whether or not the code issues an appropriate permission demand
prior to accessing the cached data. For example, if the data is obtained from a file,
and you want to ensure that the calling code is authorized to access the file from
where you populated the cache, demand a FileIOPermission prior to accessing the
cached data.
Check that you issue a permission demand prior to accessing the resource or
performing the privileged operation. Do not access the resource and then authorize
the caller.
Code that uses the .NET Framework class libraries is subject to permission
demands. Your code does not need to issue the same demand. This results in a
duplicated and wasteful stack walk.
Search your code for the ".LinkDemand" string to identify where link demands are used.
They can only be used declaratively. An example is shown in the following code fragment:
[StrongNameIdentityPermission(SecurityAction.LinkDemand,
                              PublicKey="00240000048...97e85d09861")]
public static void SomeOperation() {}
For more information about the issues raised in this section, see "Link Demands" in Chapter
8, "Code Access Security in Practice." The following questions help you to review the use of
link demands in your code:
A defensive approach is to avoid link demands as far as possible. Do not use them
just to improve performance and to eliminate full stack walks. Compared to the
costs of other Web application performance issues such as network latency and
database access, the cost of the stack walk is small. Link demands are only safe if
you know and can limit which code can call your code.
When you use a link demand, you rely on the caller to prevent a luring attack. Link
demands are safe only if you know and can limit the exact set of direct callers into
your code, and you can trust those callers to authorize their callers.
Have you used link demands at the method and class level?
When you add link demands to a method, it overrides the link demand on the class.
Check that the method also includes class-level link demands.
Link demands are not inherited by derived types and are not used when an
overridden method is called on the derived type. If you override a method that
needs to be protected with a link demand, apply the link demand to the overridden
method.
Search for the Interface keyword to find out. If so, check if the method
implementations are marked with link demands. If they are, check that the interface
definitions contain the same link demands. Otherwise, it is possible for a caller to
bypass the link demand.
Check that the following permission types are only granted to highly trusted code. Most of
them do not have their own dedicated permission type, but use the generic
SecurityPermission type. You should closely scrutinize code that uses these types to
ensure that the risk is minimized. Also, you must have a very good reason to use these
permissions.
If you compiled with /unsafe, review why you need to do so. If the reason is legitimate,
take extra care to review the source code for potential vulnerabilities.
Unmanaged Code
Give special attention to code that calls unmanaged code, including Win32 DLLs and COM
objects, due to the increased security risk. Unmanaged code is not verifiably type safe and
introduces the potential for buffer overflows. Resource access from unmanaged code is not
subject to code access security checks. This is the responsibility of the managed wrapper
class.
Generally, you should not directly expose unmanaged code to partially trusted callers. For
more information about the issues raised in this section, see the "Unmanaged Code"
sections in Chapter 7, "Building Secure Assemblies," and Chapter 8, "Code Access Security
in Practice."
Use the following review questions to validate your use of unmanaged code:
If so, check that your code demands an appropriate permission prior to calling the
Assert method to ensure that all callers are authorized to access the resource or
operation exposed by the unmanaged code. For example, the following code
fragment shows how to demand a custom Encryption permission and then assert
the unmanaged code permission:
// Demand custom EncryptionPermission.
(new EncryptionPermission(
     EncryptionPermissionFlag.Encrypt, storeFlag)).Demand();
// Assert the unmanaged code permission.
(new SecurityPermission(SecurityPermissionFlag.UnmanagedCode)).Assert();
// Now use P/Invoke to call the unmanaged DPAPI functions.
For more information see "Assert and RevertAssert" in Chapter 8, "Code Access
Security in Practice."
This attribute suppresses the demand for the unmanaged code permission issued
automatically when managed code calls unmanaged code. If P/Invoke methods or
COM interop interfaces are annotated with this attribute, ensure that all code paths
leading to the unmanaged code calls are protected with security permission
demands to authorize callers. Also check that this attribute is used at the method
level and not at the class level.
Check that your unmanaged code entry point is marked as private or internal.
Callers should be forced to call the managed wrapper method that encapsulates
the unmanaged code.
Note: All code review rules and disciplines that apply to C and C++ apply to
unmanaged code.
Verify that all enumerated values are in range before you pass them to a native
method.
All unmanaged code should be inside wrapper classes that have the following
names: NativeMethods, UnsafeNativeMethods, and SafeNativeMethods. You
must thoroughly review all code inside UnsafeNativeMethods and parameters that
are passed to native APIs for security vulnerabilities.
You should be able to justify the use of all Win32 API calls. Dangerous
APIs include:
Crypto API functions that can decrypt and access private keys
You should generally avoid letting the user supply a file name or path because this
is a high-risk operation. Why do you need the user to specify a file name or path,
rather than the application choosing the location based on the user identity?
If you accept file names and paths as input, your code is vulnerable to
canonicalization bugs. If you must accept path input from the user, then check that it
is validated as a safe path and canonicalized. Check that the code uses
System.IO.Path.GetFullPath.
If you call MapPath with a user supplied file name, check that your code uses the
override of HttpRequest.MapPath that accepts a bool parameter, which prevents
cross-application mapping.
try
{
    string mappedPath = Request.MapPath( inputPath.Text,
                                         Request.ApplicationPath, false);
}
catch (HttpException)
{
// Cross application mapping attempted.
}
For more information, see "Using MapPath" in Chapter 10, "Building Secure
ASP.NET Pages and Controls."
Check that your code validates the data type of the data received from posted form
fields and other forms of Web input such as query strings. For non-string data,
check that your code uses the .NET Framework type system to perform the type
checks. You can convert the string input to a strongly typed object, and capture any
type conversion exceptions. For example, if a field contains a date, use it to
construct a System.DateTime object. If it contains an age in years, convert it to a
System.Int32 object by using Int32.Parse and capture format exceptions.
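The approach can be sketched as follows (the age field and its range limits are illustrative):

```csharp
// Sketch: strongly typed validation of an untrusted form field.
static bool TryParseAge(string input, out int age)
{
    age = 0;
    try
    {
        age = Int32.Parse(input);
    }
    catch (FormatException) { return false; }   // not a number
    catch (OverflowException) { return false; } // out of Int32 range
    return age >= 0 && age <= 130;              // illustrative range check
}
```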
Check that input strings are validated for length and an acceptable set of
characters and patterns by using regular expressions. You can use a
RegularExpressionValidator validation control or use the RegEx class directly. Do
not search for invalid data; only search for the information format you know is
correct.
Do not do this. Use client-side validation only to improve the user experience. Check
that all input is validated at the server.
"Request.QueryString"
"Request.Cookies"
Check that input is validated for type, range, format, and length using typed objects, and
regular expressions as you would for form fields (see the previous section, "Do You Validate
Form Field Input?"). Also consider HTML or URL encoding any output derived from user
input, as this will negate any invalid constructs that could lead to XSS bugs.
Check the page-level directive at the top of your Web pages to verify that view
state is enabled for the page. Look for the enableViewStateMac setting and if
present, check that it is set to "true". If enableViewStateMac is not present, the
page assumes the application-level default setting specified in the
Web.config file. If you have disabled view state for the page by setting
enableViewState to "false" the protection setting is irrelevant.
Check that your code does not disable view state protection by setting
Page.EnableViewStateMac property to false. This is a safe setting only if the
page does not use view state.
Application_Start. Code placed here runs under the security context of the
ASP.NET process account, not the impersonated user.
Application_Error. The security context when this event handler is called can have
an impact on writing the Windows event log. The security context might be the
process account or the impersonated account.
Session_End. This event is fired non-deterministically and only for
in-process session state modes.
Do you partition your Web site between restricted and public access areas?
If your Web application requires users to complete authentication before they can
access specific pages, check that the restricted pages are placed in a separate
directory from publicly accessible pages. This allows you to configure the restricted
directory to require SSL. It also helps you to ensure that authentication cookies are
not passed over unencrypted sessions using HTTP.
If you use Windows authentication, have you configured NTFS permissions on the
page (or the folder that contains the restricted pages) to allow access only to
authorized users?
Have you configured the <authorization> element to specify which users and
groups of users can access specific pages?
Have you added principal permission demands to your classes to determine
which users and groups of users can access the classes?
If you use Server.Transfer to transfer a user to another page, ensure that the
currently authenticated user is authorized to access the target page. If you use
Server.Transfer to a page that the user is not authorized to view, the page is still
processed.
Server.Transfer uses a different module to process the page rather than making
another request from the server, which would force authorization. Do not use
Server.Transfer if security is a concern on the target Web page. Use
HttpResponse.Redirect instead.
Web Services
ASP.NET Web services share many of the same features as ASP.NET Web applications.
Review your Web service against the questions in the "ASP.NET Pages and Controls"
section before you address the following questions that are specific to Web services. For
more information about the issues raised in this section, see Chapter 12, "Building Secure
Web Services."
If you pass authentication tokens, you can use the Web Services Enhancements (WSE) to
use SOAP headers in a way that conforms to the emerging WS-Security standard.
Check that all publicly exposed Web methods validate their input parameters if the input is
received from sources outside the current trust boundary, before using them or passing
them to a downstream component or database.
If you use custom SOAP headers in your application, check that the information is not
tampered with or replayed. Digitally sign the header information to ensure that it has not been
tampered with. You can use the WSE to help sign Web service messages in a standard manner.
Check that SoapException and SoapHeaderException objects are used to handle errors
gracefully and to provide minimal required information to the client. Verify that exceptions
are logged appropriately for troubleshooting purposes.
Serviced Components
This section identifies the key review points that you should consider when you review the
serviced components used inside Enterprise Services applications. For more information
about the issues raised in this section, see Chapter 11, "Building Secure Serviced
Components."
Check that you set the most restricted level necessary for the remote server. For example,
if the server needs to identify you for authentication purposes, but does not need to
impersonate you, use the identify level as shown above. Use delegation-level impersonation
with caution on Windows 2000 because there is no limit to the number of times that your
security context can be passed from computer to computer. Windows Server 2003
introduces constrained delegation.
Note: In Windows Server 2003 and Windows 2000 Service Pack 4 and later, the
impersonation privilege is not granted to all users.
If your components are in a server application, the assembly level attribute shown above
controls the initial configuration for the component when it is registered with Enterprise
Services.
If your components are in a library application, the client process determines the
impersonation level. If the client is an ASP.NET Web application, check the
comImpersonationLevel setting on the <processModel> element in the Machine.config
file.
COM+ roles are most effective if they are used at the interface, component, or
method levels and are not just used to restrict access to the application. Check that
your code includes the following attribute:
[assembly: ApplicationAccessControl(AccessChecksLevel=
AccessChecksLevelOption.ApplicationComponent)]
If your method code calls ContextUtil.IsCallerInRole, check that these calls are
preceded with calls to ContextUtil.IsSecurityEnabled. If security is not enabled,
IsCallerInRole always returns true. Check that your code returns a security
exception if security is not enabled.
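A sketch of that fail-closed check (the "Manager" role name is illustrative):

```csharp
// Sketch: failing closed when COM+ role-based security is not enabled.
// The "Manager" role name is illustrative.
static void AuthorizeManager()
{
    if (!ContextUtil.IsSecurityEnabled)
        throw new SecurityException("Component security is not enabled.");
    if (!ContextUtil.IsCallerInRole("Manager"))
        throw new SecurityException("Caller is not in the Manager role.");
}
```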
If you store data such as connection strings, check that the data is encrypted prior
to storage in the COM+ catalog. Your code should then decrypt the data when it is
passed to your component through the Construct method.
If you use the TcpChannel and your component API accepts custom object parameters, or
if custom objects are passed through the call context, your code has two security
vulnerabilities.
To prevent custom objects being passed to your remote component either by reference or
by value, set the TypeFilterLevel property on your server-side formatter channel sink to
TypeFilterLevel.Low.
To locate objects that are passed in the call context, search for the
"ILogicalThreadAffinative" string. Only objects that implement this interface can be passed
in the call context.
Search for the "Connection" string to locate instances of ADO.NET connection objects and
review how the ConnectionString property is set.
Do you encrypt the connection string?
Check that the code retrieves and then decrypts an encrypted connection string.
The code should use DPAPI for encryption to avoid key management issues.
Do not use the sa account or any highly privileged account, such as members of
sysadmin or db_owner roles. This is a common mistake. Check that you use a
least privileged account with restricted permissions in the database.
Check that the Persist Security Info attribute is not set to true or yes because
this allows sensitive information, including the user name and password, to be
obtained from the connection after the connection has been opened.
If you store sensitive data, such as credit card numbers, in the database, how do you
secure the data? You should check that it is encrypted by using a strong symmetric
encryption algorithm such as 3DES.
If you use this approach, how do you secure the 3DES encryption key? Your code should
use DPAPI to encrypt the 3DES encryption key and store the encrypted key in a restricted
location such as the registry.
This chapter has shown you how to review managed code for top security issues including
XSS, SQL injection, and buffer overflows. It has also shown you how to identify other more
subtle flaws that can lead to security vulnerabilities and successful attacks.
Security code reviews are not a panacea. However, they can be very effective and should
feature as a regular milestone in the development life cycle.
Additional Resource
For more information, see the MSDN article, "Secure Coding Guidelines for the .NET
Framework," at http://msdn.microsoft.com/library/default.asp?url=/library/en-
us/dnnetsec/html/seccodeguide.asp.
Chapter 22: Deployment Review
In This Chapter
Reviewing network and host configuration
The main configuration elements that are subject to the deployment review process are
shown in Figure 22.1.
To help focus and structure the review process, the review questions have been divided into
the following configuration categories:
Services
Protocols
Accounts
Shares
Ports
Registry
Verify that your server is updated with the latest service packs and software patches. You
need to separately check operating system components and the .NET Framework. Review
the following questions:
Make sure you have run the MBSA tool to identify common Windows and IIS
vulnerabilities, and to identify missing service packs and patches.
Respond to the MBSA output by fixing identified vulnerabilities and by installing the
latest patches and updates. For more information, see "Step 1. Patches and
Updates" in Chapter 16, "Securing Your Web Server."
To determine the current version of the .NET Framework, see Microsoft Knowledge
Base article 318785, "INFO: Determining Whether Service Packs Are Installed on
.NET Framework." Then compare the installed version of the .NET Framework
against the current service pack. To do this, use the .NET Framework versions
listed in article 318836, "INFO: How to Obtain the Latest .NET Framework Service
Pack."
Services
Make sure that only the services that you require are enabled. Check that all others are
disabled to reduce your server's attack profile. To see which services are running and
enabled, use the Services and Applications Microsoft Management Console (MMC) snap-in
available from Computer Management. To disable a service, make sure it is stopped and
set its startup type to manual.
Review each service that is running by using the Services snap-in and confirm that
each service is required. Identify why it is required and which solutions rely on it.
Make sure all unnecessary services are disabled.
These services are not secure protocols and have known vulnerabilities. If you do
not need them, disable them. If you use them, find secure alternatives. These
services are listed in the Services MMC snap-in as FTP Publishing Service, Simple
Mail Transport Protocol (SMTP) and Network News Transport Protocol (NNTP).
To see whether your applications use this service, review the <sessionState>
element in your application's Web.config file. If Web.config does not contain this
element, check its setting in Machine.config. You use the session state service on
your Web server if the mode attribute is set to "StateServer" and the
stateConnectionString points to the local machine, for example with a localhost
address as shown below:
<sessionState mode="StateServer"
stateConnectionString="tcpip=127.0.0.1:42424" />
If you do not use the service on the Web server, disable it. It is listed as "ASP.NET
State Service" in the Services MMC snap-in.
For more information on how to secure ASP.NET session state, refer to "Session
State" in Chapter 19, "Securing Your ASP.NET Application and Web Services."
Protocols
Review which protocols are enabled on your server and make sure that no unnecessary
protocol is enabled. Use the following questions to help review protocols on your server:
If you use the Web Distributed Authoring and Versioning protocol (WebDAV) to
publish content then make sure it is secure. If you do not use it, disable the
protocol.
For information on how to secure WebDAV, see Microsoft Knowledge Base article
323470, "How To: Create a Secure WebDAV Publishing Directory." For information
about disabling WebDAV, see article 241520, "How To Disable WebDAV for IIS
5.0."
Make sure the TCP/IP stack is hardened to prevent network level denial of service
attacks including SYN flood attacks. To check whether the stack is hardened on
your server, use Regedt32.exe and examine the following registry key:
HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
The presence of the following child keys indicates a hardened TCP/IP stack:
SynAttackProtect, EnableICMPRedirect, and EnableDeadGWDetect.
For a full list of the required keys and the appropriate key values for a fully
hardened stack, see "How To: Harden the TCP/IP Stack" in the "How To" section of
this guide.
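For reference, a hardened configuration typically sets values such as the following. This .reg fragment is illustrative only; confirm the complete set of keys and values against "How To: Harden the TCP/IP Stack" before applying it.

```
Windows Registry Editor Version 5.00

; Illustrative subset of TCP/IP hardening values; see
; "How To: Harden the TCP/IP Stack" for the full list.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"SynAttackProtect"=dword:00000002
"EnableICMPRedirect"=dword:00000000
"EnableDeadGWDetect"=dword:00000000
```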
Have you disabled NetBIOS and SMB for Internet-facing network cards?
Check that NetBIOS over TCP/IP is disabled and that SMB is disabled to prevent
host enumeration attacks. For more information, see "Protocols" in Chapter 16,
"Securing Your Web Server."
Accounts
Review the use of all the Windows accounts on the server to make sure that no
unnecessary accounts exist, and that all of the necessary accounts are configured with the
minimum privileges and the required access rights. The following questions help you identify
account vulnerabilities:
Perform an audit to verify that all your accounts are used and required. Delete or
disable any unnecessary accounts. The local administrator account and Guest
account cannot be deleted. You should disable the Guest account and rename the
Administrator account, making sure it has a strong password.
To check if the Guest account is disabled, display the Users folder in the Computer
Management tool and check that the Guest account appears with a cross icon next
to it. If it is not disabled, display its Properties dialog box and select Account is
disabled.
The default local administrator account is a prime target for attack. Verify that you
have renamed the administrator account and given it a strong password.
Check that the default IUSR_MACHINE account is disabled and that you have
configured an alternate anonymous user account for use by your Web
applications.
Use the Local Security Policy tool to review password policy. For information about
the recommended password policy, see "Step 5. Accounts" in Chapter 16,
"Securing Your Web Server."
Check the user rights assignments within the Local Security Policy tool to ensure
that the Everyone group is not granted the "Access this computer from the network"
user right.
The following review questions enable you to verify that you have used NTFS permissions
appropriately to lock down accounts such as the anonymous Web user account.
Check that all disk partitions are formatted with NTFS. This allows you to configure
ACLs on resources to restrict access. Do not build a server that uses FAT partitions.
Use Windows Explorer to ensure that the Everyone group does not have access to
the following directories:
Root (\)
Web site root directory and all content directories (default is \inetpub\*)
Make sure that the anonymous Internet user account does not have the ability to
write to Web content directories. Use Windows Explorer to view the ACL on each
content directory. Also check the ACL on the %windir%\system32 directory to make
sure that the anonymous account cannot access system tools and utilities.
Note If you ran IISLockdown, the Web Anonymous Users group and the Web
Applications group can be used to restrict access. By default, the Web Anonymous
Users group contains the IUSR account and the Web Applications group contains the
Internet Web Application Manager (IWAM) account. From an administrative
perspective, restricting access to a group is preferred to individual account
restriction.
Verify that you have no utilities or software development kits (SDKs) on your server.
Make sure that neither Visual Studio .NET nor any .NET Framework SDKs are
installed. Also make sure that you have restricted access with NTFS permissions to
powerful system tools such as At.exe, Cmd.exe, Net.exe, Pathping.exe,
Regedit.exe, Regedt32.exe, Runonce.exe, Runas.exe, Telnet.exe, and Tracert.exe.
Finally, make sure that no debugging tools are installed on the server. IISLockdown
automatically restricts access to system tools by the Web Anonymous Users group
and the Web Applications group.
Verify that all unused data source names (DSNs) have been removed from the
server because they can contain clear text database connection details.
Shares
Review the following questions to ensure that your server is not unnecessarily exposed by
the presence of file shares:
Verify that the Everyone group is not granted access to your shares unless
intended, and that specific permissions are configured instead.
If you do not allow remote administration of your server, then check that the
administration shares, for example, C$ and IPC$, have been removed.
Ports
Review the ports that are active on your server to make sure that no unnecessary ports are
available. To verify which ports are listening, run the following netstat command.
netstat -n -a
The output lists all the ports together with their addresses and current state. Make
sure you know which services are exposed by each listening port and verify that each
service is required. Disable any unused services.
To filter out specific string patterns from netstat output, use it in conjunction with the
operating system findstr tool. The following example filters the output for ports in the
"LISTENING" state.
netstat -n -a | findstr LISTENING
You can also use the Portqry.exe command line utility to verify the status of TCP/IP ports.
For more information about the tool and how to download it, see Microsoft Knowledge
Base article 310099, "Description of the Portqry.exe Command Line Utility."
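If you audit many servers, you can make this check repeatable by capturing the netstat output and comparing the listening ports against an approved baseline. The following Python sketch is a hypothetical helper, not part of this guide's tooling; it parses netstat-style output and reports listeners that are not on the approved list:

```python
def listening_ports(netstat_output):
    """Extract local TCP ports in the LISTENING state from 'netstat -n -a' output."""
    ports = set()
    for line in netstat_output.splitlines():
        fields = line.split()
        # Expected columns: Proto, Local Address, Foreign Address, State
        if len(fields) >= 4 and fields[0] == "TCP" and fields[-1] == "LISTENING":
            local_address = fields[1]              # e.g. "0.0.0.0:80"
            ports.add(int(local_address.rsplit(":", 1)[1]))
    return ports

def unexpected_ports(netstat_output, approved):
    """Return listening ports that are not in the approved baseline."""
    return sorted(listening_ports(netstat_output) - set(approved))

# Sample output captured from 'netstat -n -a' (abbreviated)
sample = """\
  TCP    0.0.0.0:80       0.0.0.0:0    LISTENING
  TCP    0.0.0.0:135      0.0.0.0:0    LISTENING
  TCP    10.0.0.5:443     0.0.0.0:0    LISTENING
  UDP    0.0.0.0:445      *:*
"""
print(unexpected_ports(sample, approved=[80, 443]))  # -> [135]
```

Any port the script flags should be traced back to its owning service and either justified in the baseline or disabled.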
Registry
Review the security of your registry configuration with the following questions:
Use Regedt32.exe to review the ACL on the WinReg registry key, which controls
whether or not the registry can be remotely accessed:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurePipeServers\winreg
The following check applies only to stand-alone servers. Check that you have
restricted LMHash storage in the Security Account Manager (SAM) database by
creating the key (not value) NoLMHash in the registry as follows:
HKLM\System\CurrentControlSet\Control\LSA\NoLMHash
Use the Local Security Policy tool to check that you have enabled the auditing of
failed logon attempts.
Use the Local Security Policy tool to check that you have enabled object access
auditing. Then check that auditing has been enabled across the file system.
IIS Configuration
By reviewing and improving the security of IIS configuration settings, you are in effect
reducing the attack surface of your Web server. For more information about the review
points covered in this section, see Chapter 16, "Securing Your Web Server."
The review questions in this section have been organized by the following configuration
categories.
IISLockdown
URLScan
ISAPI filters
IIS Metabase
Server certificates
IISLockdown
The IISLockdown tool identifies and turns off features to reduce the IIS attack surface
area. To see if it has been run on your server, check for the following report generated by
IISLockdown:
\WINNT\system32\inetsrv\oblt-rep.log
For more information about IISLockdown, see "How To: Use IISLockdown" in the "How To"
section of this guide.
URLScan
URLScan is an ISAPI filter that is installed with IISLockdown. It helps prevent potentially
harmful requests from reaching the server and causing damage. Check that it is installed
and that it is configured appropriately.
For more information about URLScan, see "How To: Use URLScan" in the "How To" section
of this guide.
The Web site configuration review also covers the following categories:
Script mappings
Web permissions
Authentication
Script Mappings
Check that you have mapped all unnecessary file extensions to the 404.dll, which is installed
when you run IISLockdown.
To check the log file location, display the properties of your Web site in IIS, click
the Web Site tab, and then click the Properties button under the logging settings.
Check that the log files are located in a non-default location, use a non-default
name, and are preferably stored on a non-system volume.
Use Windows Explorer to view the ACL on the log files directory. Check that the
ACL grants Administrators and System full control but grants access to no other
user.
Web Permissions
Review the default Web permissions configured for your Web site and for each virtual
directory. Check that the following conditions are met:
Virtual directories for which anonymous access is allowed are configured to restrict
Write and Execute permissions.
Write permissions and script source access permissions are only granted to
content folders that allow content authoring. Also check that folders that allow
content authoring require authentication and Secure Sockets Layer (SSL)
encryption.
Authentication
Check the authentication settings for your Web sites and virtual directories. Ensure that
anonymous access is only supported for publicly accessible areas of your site. If you
select multiple authentication options, thoroughly test the effects and the
authentication precedence on your application.
If Basic authentication is selected, check that SSL is used across the site to protect
credentials.
For more information, see "Step 11. Sites and Virtual Directories" in Chapter 16,
"Securing Your Web Server."
ISAPI Filters
Make sure that no unused ISAPI filters are installed to prevent any potential vulnerabilities in
these filters from being exploited.
Task To review ISAPI filters
1. Start Internet Information Manager.
2. Right-click your server (not Web site) and then click Properties.
IIS Metabase
The IIS Metabase contains IIS configuration settings, many but not all of which are
configured through the IIS administration tool. The file itself must be protected and specific
settings that cannot be maintained using the IIS configuration tool should be checked.
Review the following questions to ensure appropriate metabase configuration:
Check that the ACL on the metabase file grants full control to the System account
and the Administrators group. No other account should have access. The metabase
file location is:
%windir%\system32\inetsrv\metabase.bin
By default, IIS returns the internal IP address of your server in the Content-Location
section of the HTTP response header. You should prevent this by setting the
UseHostName metabase property to true. To check if it has been set, run the
following command from the \inetpub\adminscripts directory:
adsutil GET w3svc/UseHostName
Confirm that the property value has been set to true. If the property is not set, this
command returns the message "The parameter 'UseHostName' is not set at this node."
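If the property has not been set, you can set it from the same \inetpub\adminscripts directory and then re-run the GET command to confirm the change. The syntax below mirrors the GET command shown above; verify it against your version of the adsutil tool:

```
adsutil SET w3svc/UseHostName true
```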
For more information, see "Step 14. IIS Metabase" in Chapter 16, "Securing Your Web
Server."
Server Certificates
If your applications use SSL, make sure that you have a valid certificate installed on your
Web server. To view the properties of your server's certificate, click View Certificate on the
Directory Security page of the Properties dialog of your Web site in IIS. Review the
following questions:
Check that the certificate is not on a Certificate Revocation List (CRL) from the
server that issued the certificate.
Machine.Config
The .NET Framework configuration for all applications on your server is maintained in
Machine.config. For the purposes of the security review, this section examines the settings
in Machine.config from top to bottom and considers only those settings that relate to
security.
The majority of security settings are contained beneath the <system.web> element, with
the notable exception of Web service configuration and .NET Remoting configuration. The
review process for Web services and .NET Remoting configuration is presented later in this
chapter.
For more information and background about the issues raised by the following review
questions, see Chapter 19, "Securing Your ASP.NET Application and Web Services." The
following elements are reviewed in this section:
<trace>
<httpRunTime>
<compilation>
<pages>
<customErrors>
<authentication>
<identity>
<authorization>
<machineKey>
<trust>
<sessionState>
<httpHandlers>
<processModel>
<httpRunTime>
Verify the value of the maxRequestLength attribute on the <httpRunTime> element. You
can use this value to prevent users from uploading very large files. The default value
is 4 MB (4096 KB); lower it if your application does not need to accept large uploads.
<compilation>
Check that you do not compile debug binaries. Make sure the debug attribute is set to
false.
<compilation debug="false" ... />
<pages>
The <pages> element controls default page level configuration settings. From a security
perspective, review the view state and session state settings.
<customErrors>
Make sure that the mode attribute is set to On to ensure that detailed exception
information is not disclosed to the client. Also check that a default error page is specified
via the defaultRedirect attribute.
<customErrors mode="On" defaultRedirect="/apperrorpage.htm" />
<authentication>
This element governs your application's authentication mechanism. Check the mode
attribute to see which authentication mechanism is configured and then use the specific
review questions for your configured authentication mode.
<authentication mode="[Windows|Forms|Passport|None]" />
Forms Authentication
Review the following questions to verify your Forms authentication configuration.
Cookies should be encrypted and checked for integrity to detect tampering even
over an SSL channel because cookies can be stolen through cross-site scripting
(XSS) attacks. Check that the protection attribute of the <forms> element is set
to All.
<forms protection="All" .../>
All indicates both encryption and validation.
Minimize the cookie timeout to limit the amount of time an attacker can use the
cookie to access your application. Check the timeout attribute on the <forms>
element.
<forms timeout="10" ... />
Check that you use a separate cookie name and path for each Web application.
This ensures that users who are authenticated against one application are not
treated as authenticated when using a second application hosted by the same Web
server. Check the path and name attributes on the <forms> element.
<forms name=".ASPXAUTH" path="/" ... />
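For example, two applications hosted on the same server might use configurations such as the following (application names and paths are illustrative):

```
<!-- Application 1 Web.config -->
<forms name=".ASPXAUTH_App1" path="/App1" ... />
<!-- Application 2 Web.config -->
<forms name=".ASPXAUTH_App2" path="/App2" ... />
```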
You should not use the <credentials> element on production servers. This element
is intended for development and testing purposes only. Credentials should instead
be stored in Microsoft Active Directory® directory service or SQL Server.
Make sure passwords are not stored in the database. Instead, store password
hashes with added salt to foil dictionary attacks.
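The salted hash pattern can be sketched as follows. This Python illustration is not the mechanism described in this guide; it uses the standard library's PBKDF2, and the function names are hypothetical:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100_000):
    """Return (salt, digest) for storage. A unique random salt per account
    defeats precomputed dictionary (rainbow table) attacks."""
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                 salt, iterations)
    return salt, digest

def verify_password(candidate, salt, stored_digest, iterations=100_000):
    """Recompute the hash with the stored salt and compare in constant time."""
    _, digest = hash_password(candidate, salt, iterations)
    return hmac.compare_digest(digest, stored_digest)

# Store the salt and digest instead of the password itself.
salt, digest = hash_password("s3cret!")
```

Because each account has its own salt, identical passwords produce different stored digests, so a single precomputed table cannot crack the whole credential store.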
<identity>
The following questions help verify your impersonation configuration specified on the
<identity> element:
If the impersonate attribute is set to true and you do not specify userName or
password attributes, you impersonate the IIS authenticated identity, which may be
the anonymous Internet user account.
Make sure that ACLs are configured to allow the impersonated identity access only
to those resources that it needs to gain access to.
If you impersonate and set the userName and password attributes, you
impersonate a fixed identity and this identity is used for resource access.
Make sure you do not specify plaintext credentials on the <identity> element.
Instead, use Aspnet_setreg.exe to store encrypted credentials in the registry.
On Windows 2000, impersonating a fixed identity forces you to grant the "Act as
part of the operating system" user right to the ASP.NET process account, which is
not recommended. For alternative approaches, see Chapter 19, "Securing Your
ASP.NET Application and Web Services."
<authorization>
This element controls ASP.NET URL authorization and specifically the ability of Web clients
to gain access to specific folders, pages, and resources.
Have you used the correct format for user and role names?
When you have <authentication mode="Windows" />, you are authorizing access
to Windows user and group accounts.
<machineKey>
This element is used to specify encryption and validation keys, and the algorithms used to
protect Forms authentication cookies and page level view state.
If you host multiple applications on the same server, use the IsolateApps setting to
ensure that a separate key is generated for each Web application.
<machineKey validationKey="AutoGenerate,IsolateApps"
            decryptionKey="AutoGenerate,IsolateApps"
            validation="SHA1" />
If you run your application in a Web farm, make sure that you use specific machine
keys (not auto-generated keys) and copy them across all servers in the farm.
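Explicit machine keys are supplied as random hexadecimal strings. The following Python sketch generates candidate values; the lengths follow common ASP.NET 1.x guidance (64 random bytes for a SHA1 validationKey, 24 bytes for a 3DES decryptionKey), which you should confirm for your framework version:

```python
import secrets

def generate_machine_keys():
    """Generate random hex keys for pasting into <machineKey>. Lengths follow
    common ASP.NET 1.x guidance: 64 random bytes (128 hex characters) for a
    SHA1 validationKey, 24 bytes (48 hex characters) for a 3DES decryptionKey."""
    return secrets.token_hex(64), secrets.token_hex(24)

validation_key, decryption_key = generate_machine_keys()
print('<machineKey validationKey="%s"' % validation_key)
print('            decryptionKey="%s" validation="SHA1" />' % decryption_key)
```

Generate the keys once, then copy the same values into the Machine.config of every server in the farm.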
<trust>
The <trust> element determines the code access security trust level used to run ASP.NET
Web applications and Web services.
If you run .NET Framework 1.0, the trust level must be set to Full. With version 1.1
or later, you can change it to one of the following:
<!-- level="[Full|High|Medium|Low|Minimal]" -->
<trust level="Full" originUrl=""/>
Based on security policy and your agreement with the development team, set an
appropriate trust level for the application either in Web.config or in Machine.config.
<sessionState>
The sessionState element configures user session state management for your application.
Review the following questions:
If you use a remote state store and the mode attribute is set to StateServer or
SQLServer, check the stateConnectionString and sqlConnectionString attributes
respectively. To keep credentials out of the connection string, make sure the
connection strings are secured in encrypted format in the registry by using the
Aspnet_setreg.exe tool, or that Windows authentication is used to connect to the
SQL Server state store.
The following configuration shows what the stateConnectionString looks like when
Aspnet_setreg.exe has been used to encrypt the string in the registry.
<!-- Aspnet_setreg.exe has been used to store encrypted details -->
<!-- in the registry. -->
<sessionState mode="StateServer"
stateConnectionString="registry:HKLM\SOFTWARE\YourSecureApp\identity\ASPNET_SETREG,stateConnectionString" />
If you use the SQL Server state store, check to see if you use Windows
authentication to connect to the state database. This means that credentials are not
stored in the connection string and that credentials are not transmitted over the
wire.
If you must use SQL authentication, make sure the connection string is encrypted in
the registry and that a server certificate is installed on the database server to
ensure that credentials are encrypted over the wire.
<httpHandlers>
This element lists the HTTP handlers that process requests for specific file types. Check to
ensure that you have disabled all unused file types.
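For example, requests for file types that your application does not serve are commonly mapped to the built-in System.Web.HttpForbiddenHandler. The extensions below are illustrative; choose them based on which handlers your application actually needs:

```
<httpHandlers>
  <!-- Reject requests for file types the application does not serve -->
  <add verb="*" path="*.rem"  type="System.Web.HttpForbiddenHandler" />
  <add verb="*" path="*.soap" type="System.Web.HttpForbiddenHandler" />
</httpHandlers>
```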
<processModel>
The identity under which the ASP.NET worker process runs is controlled by settings on the
<processModel> element in Machine.config. The following review questions help verify
your process identity settings:
Check the userName and password attributes. Ideally, you use the following
configuration that results in the ASP.NET process running under the least privileged
ASPNET account.
<processModel userName="Machine" password="AutoGenerate" ... />
If you use a custom account, make sure that the account credentials are not
specified in plaintext in Machine.config. Make sure the Aspnet_setreg.exe utility has
been used to store encrypted credentials in the registry. If this has been used, the
userName and password attributes look similar to the settings shown below:
<processModel
userName="registry:HKLM\SOFTWARE\YourSecureApp\processModel\ASPNET_SETREG,userName"
password="registry:HKLM\SOFTWARE\YourSecureApp\processModel\ASPNET_SETREG,password" ... />
The default ASPNET account is a least privileged local account designed to run
ASP.NET. To use it for remote resource access, you need to create a duplicate
account on the remote server. Alternatively, you can create a least privileged
domain account.
Check that the account is not a member of the Users group, and view the user
rights assignment in the Local Security Policy tool to confirm it is not granted any
extended or unnecessary user rights. Make sure it is not granted the "Act as part of
the operating system" user right.
Web Services
The goal for this phase of the review is to identify vulnerabilities in the configuration of your
Web services. For further background information about the issues raised by the review
questions in this section, see Chapter 17, "Securing Your Application Server," and Chapter
19, "Securing Your ASP.NET Application and Web Services."
Use the following questions to help review the security configuration of your Web service:
If you do not want to expose your Web service endpoints, then you can remove
the Documentation protocol from the <protocols> element in Machine.config and
manually distribute the Web Services Description Language (WSDL) file to specific
Web service consumers.
If you store the generated .WSDL files on the Web server to distribute them to the
consumers, make sure that the files are protected by an appropriate ACL. This
prevents a malicious user from updating or replacing the WSDL so that it points to
endpoints that differ from the intended URL.
If your Web service handles sensitive data, how do you protect the data over the
network and address the network eavesdropping threat? Do you use SSL or IPSec
encrypted channels, or do you encrypt parts of the message by using XML
encryption?
If you pass credentials in SOAP headers, are they passed in plaintext? If they are,
make sure an encrypted channel is used.
Enterprise Services
This section identifies the key review points that should be considered when you review
your Enterprise Services applications and components. For more information about the
issues raised in this section, see Chapter 17, "Securing Your Application Server."
When you review Enterprise Services applications consider the following issues:
Accounts
Authentication
Authorization
Accounts
If you use an Enterprise Services server application, check which account you use to run
the application. This is displayed on the Identity page of the application's Properties dialog
box in Component Services. Review the following questions:
Check the account that you use to run your Enterprise Services server applications
to ensure they are configured as least privileged accounts with restricted user rights
and access rights. If you use the process account to access a downstream
database, make sure that the database login is restricted in the database.
Do not use the Interactive account on production servers. This is only intended to
be used during development and testing.
The COM+ catalog maintains configuration data for COM+ applications. Make sure
that the following folder that maintains the catalog files is configured with a
restricted ACL.
%windir%\registration
If your application uses the Compensating Resource Manager, the CRM log files
(.crmlog) should be secured with NTFS permissions because the log files may
contain sensitive application data.
Make sure that the folder used to hold the DLLs of your application is configured
with the following restricted ACL.
Users: Execute
Application Run as account: Execute
Administrators: Read, Write and Execute
For more information, see Chapter 17, "Securing Your Application Server."
Authentication
Serviced components can be hosted in a library application that runs in the client's process
address space or in a server application that runs in a separate instance of Dllhost.exe.
This is determined by the activation type specified on the Activation page of the
application's Properties dialog box in Component Services. The client process for an
Enterprise Services library application is usually the ASP.NET Web application process.
The settings discussed below are specified on the Security page of the application's
Properties dialog box in Component Services.
Server Applications
If the Activation type is set to Server application, review the following questions:
Check that your application uses at least call level authentication to ensure that
clients are authenticated each time they make a method call. This prevents
anonymous access.
Library Applications
If the activation type is set to Library application, the authentication and impersonation
settings are inherited from the host process. The review questions in this section assume
the ASP.NET process is the host process.
Check that authentication is enabled: view the Enable authentication check box
setting on the Security page of the application's Properties dialog box. You should
not disable authentication unless you have a specific requirement such as handling
unauthenticated callbacks from a remote serviced component.
Review the configured impersonation level, which affects outgoing calls from the
library component to other remote components. Check the comImpersonationLevel
attribute on the <processModel> element in Machine.config.
<processModel comImpersonationLevel=
"Default|Anonymous|Identify|Impersonate|Delegate" />
Authorization
Serviced components in Enterprise Services applications use COM+ role based security to
authorize callers. Review the following issues to ensure appropriate authorization:
This setting controls whether COM+ authorization is enabled. Check that
Enforce access checks for this application is selected on the Security page of
the application's Properties dialog box in Component Services.
Check the Security level specified on the Security page of the application's
Properties dialog box in Component Services. Applications should use process and
component level access checks to support granular authorization. This allows the
application to use roles to control access to specific classes, interfaces, and
methods.
Note Process and component level access checks must be enabled for library
applications or you will not be able to use role-based authorization.
Enterprise Services uses DCOM, which in turn uses RPC communication. RPC
communication requires port 135 to be open on the firewall. Review your firewall
and Enterprise Services configuration to ensure that only the minimum additional
ports are open.
Remoting
When you review your .NET Remoting solution, start by identifying which host is used to run
your remote components. If you use the ASP.NET host with the HttpChannel, you need to
check that IIS and ASP.NET security is appropriately configured to provide authentication,
authorization, and secure communication services to your remote components. If you use a
custom host and the TcpChannel, you need to review how your components are secured,
because this host and channel combination requires custom authentication and authorization
solutions.
Port Considerations
Remoting is not designed to be used with Internet clients. Check that the ports that your
components listen on are not directly accessible by Internet clients. The port or ports are
usually specified on the <channel> element in the server side configuration file.
Do you use SSL or IPSec? Without SSL or IPSec, data passed to and from the
remote component is subject to information disclosure and tampering. Review what
measures are in place to address the network eavesdropping threat.
Make sure that anonymous access is disabled in IIS for your application's virtual
directory. Also check that you use Windows authentication. The Web.config of your
application should contain the following configuration.
<authentication mode="Windows" />
Check whether you use ASP.NET file authorization, and if not, why not. You can use
ASP.NET file authorization to control access to the endpoints of your remoting
application by creating a .rem or .soap file and configuring the NTFS permissions on
the file. The ASP.NET FileAuthorizationModule will then authorize access to the
component. For more information, see "Authorization" in Chapter 13, "Building
Secure Remoted Components."
Do you use URL authorization?
Check your application's use of the <authorization> element. Use the ASP.NET
UrlAuthorizationModule by applying <allow> and <deny> tags.
Check the configuration of your application to make sure that you have correctly
configured the <customErrors> element to prevent detailed errors from being
returned to the client. Make sure the mode attribute is set to On as shown below.
<customErrors mode="On" />
Check that you use a least privileged account to run ASP.NET, such as the default
ASPNET account, or Network Service account on Windows Server 2003.
Have you secured the channel from client to server? You may use transport level
IPSec encryption or your application may use a custom encryption sink to encrypt
request and response data.
Review which account you use to run your custom host process and ensure it is
configured as a least privileged account.
Database Server Configuration
The goal for this phase of the review is to identify vulnerabilities in the configuration of your
SQL Server database server. For further background information about the issues raised by
the review questions in this section, see Chapter 18, "Securing Your Database Server."
To help focus and structure the review process, the review questions have been divided into
the following configuration categories:
Services
Protocols
Accounts
Shares
Ports
Registry
Make sure you have run the Microsoft Baseline Security Analyzer (MBSA) tool to identify
common Windows and SQL Server vulnerabilities, and to identify missing service packs and
patches.
Respond to the MBSA output by fixing identified vulnerabilities and by installing the latest
patches and updates. For more information, see "Step 1. Patches and Updates" in Chapter
18, "Securing Your Database Server."
Services
Make sure that only those services that you require are enabled. Check that all others are
disabled to reduce the attack surface of your server.
SQL Server installs four services. If you require just the base functionality, then
disable Microsoft Search Service, MSSQLServerADHelper, and SQLServerAgent
to reduce the attack surface of your server.
If you do not use distributed transactions, ensure that the DTC service is disabled.
Protocols
By preventing the use of unnecessary protocols, you reduce the attack surface area.
Review the following questions:
SQL Server supports multiple protocols. Use the Server Network Utility to check
that only TCP/IP protocol support is enabled.
To check whether the stack is hardened on your server, use Regedt32.exe and
examine the following registry key:
HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
The presence of the following child keys indicates a hardened TCP/IP stack:
SynAttackProtect, EnableICMPRedirect, and EnableDeadGWDetect.
For a full list of the required keys and appropriate key values for a fully hardened stack,
see "How To: Harden the TCP/IP Stack" in the How To section of this guide.
Accounts
Review the accounts used on your database server by answering the following questions:
Review which account you use to run SQL Server and make sure it is a least
privileged account. It should not be an administrative account or the powerful local
system account. Also make sure that the account is not a member of the Users
group on the local computer.
Audit local accounts on the server and check that all unused accounts are disabled.
The default local administrator account is a prime target for attack. To improve
security, check that you have created a new custom account for administration and
that the default Administrator account has been disabled.
Use the Local Security Policy tool to review password policy. For information about
the recommended password policy, see "Step 4. Accounts" in Chapter 18,
"Securing Your Database Server."
Check the user rights assignments within the Local Security Policy tool to ensure
that the Everyone group is not granted the "Access this computer from the
network" user right.
If you use Windows authentication, check that the strongest version of NTLM
authentication (NTLMv2) is enabled and enforced. To check that NTLMv2
authentication is enforced, use the Local Security Policy tool. Expand Local
Policies, select Security Options, and then double-click LAN Manager
Authentication Level. Verify that Send NTLMv2 response only\refuse LM & NTLM
is selected.
Files and Directories
The following review questions enable you to verify that you have used NTFS permissions
appropriately on your database server.
Review the permissions on the SQL Server installation directories and make sure
that the permissions grant limited access. For detailed permissions, see "Step 5.
Files and Directories" in Chapter 18, "Securing Your Database Server."
Review the permissions on the SQL Server file location (by default, \Program
Files\Microsoft SQL Server\MSSQL) and check that the Everyone group has been
removed from the directory ACL. At the same time, make sure that full control has
been granted to only the SQL Service account, the Administrators group, and the
local system account.
If you have installed SQL Server 2000 Service Pack 1 or 2, the system
administrator or service account password may be left in the SQL installation
directory. Make sure that you have used the Killpwd.exe utility to remove instances
of passwords from the log files.
For information about obtaining and using this utility, see Microsoft Knowledge Base
article 263968, "FIX: Service Pack Installation May Save Standard Security
Password in File."
Shares
Review the following questions to ensure that your server is not unnecessarily exposed by
the presence of file shares:
Check that the Everyone group is not granted access to your shares unless
intended, and that specific permissions are configured instead.
Have you removed the administration shares?
If you do not allow remote administration of your server, then check that the
administration shares, for example, C$ and IPC$, have been removed.
Ports
Review the ports that are active on your server to make sure that no unnecessary ports are
available. For more information about using the netstat command to do this, see the "Ports"
subsection in "Web Server Configuration," earlier in this chapter. Then review the following
questions:
Review how you restrict access to the SQL Server port. Check that your perimeter
firewall prevents direct access from the Internet. To protect against internal attacks,
review your IPSec policies to ensure they limit access to the SQL Server ports.
If you use named instances, check with the Server Network Utility to verify that you
have configured the instance to listen on a specific port. This avoids UDP
negotiation between the client and server, and means you do not need to open
additional ports.
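To make the netstat review repeatable, its output can be reduced to the set of listening TCP ports and compared with an approved list. This is a portable sketch; the field layout assumes Windows-style "netstat -an" output.

```python
# Sketch: reduce netstat output to the set of listening TCP ports so the list
# can be compared against what the server is supposed to expose. The sample
# text mimics "netstat -an" output on Windows; the field layout is an assumption.
def listening_ports(netstat_output: str) -> set:
    ports = set()
    for line in netstat_output.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[0] == "TCP" and fields[3] == "LISTENING":
            # A local address looks like 0.0.0.0:1433; take the part after the colon.
            ports.add(int(fields[1].rsplit(":", 1)[1]))
    return ports

sample = """\
  TCP    0.0.0.0:135            0.0.0.0:0              LISTENING
  TCP    0.0.0.0:1433           0.0.0.0:0              LISTENING
  UDP    0.0.0.0:1434           *:*
"""
print(sorted(listening_ports(sample)))  # [135, 1433]
```

Any port in the result that is not on the approved list warrants investigation.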
Registry
Review the security of your registry configuration with the following questions:
Use Regedt32.exe to check that the Everyone group has been removed from the
ACL attached to the SQL Server registry key, and that the ACL grants only the
following access:
Administrators: Full Control
SQL Server service account: Full Control
Check that you have restricted LMHash storage in the Security Account Manager
(SAM) by creating the key (not value) NoLMHash in the registry as shown below.
HKLM\System\CurrentControlSet\Control\LSA\NoLMHash
For more information, see Microsoft Knowledge Base article 299656, "New Registry Key
to Remove LM Hashes from Active Directory and Security Account Manager".
Auditing and Logging
Review the following questions to check whether or not you have used appropriate auditing
and logging on your database server.
Check that SQL Server auditing is enabled. Make sure that the Audit level setting
on the Security page of the SQL Server Properties dialog box in Enterprise
Manager is set to either All or Failure.
Use the Local Security Policy tool to check that you have enabled the auditing of
failed logon attempts.
Use the Local Security Policy tool to check that you have enabled object access
auditing. Then check that auditing has been enabled across the file system.
If your applications require SQL authentication, review how they manage database
connection strings. This is important because with SQL authentication the connection
strings contain user names and passwords. Also ensure that a server certificate is
installed on the database server so that credentials are encrypted when they are
passed over the network to the database server, or that transport level encryption is used.
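The difference between the two authentication styles shows up directly in the connection string. The sketch below contrasts them; server and database names are placeholders.

```python
# Sketch: two connection-string styles for the database server. With SQL
# authentication the string itself carries credentials and must be protected;
# with Windows authentication it does not. Names below are placeholders.
def sql_auth_conn(server: str, db: str, user: str, password: str) -> str:
    return f"Server={server};Database={db};User Id={user};Password={password};"

def windows_auth_conn(server: str, db: str) -> str:
    # Integrated Security=SSPI delegates authentication to Windows, so no
    # user name or password appears in the string.
    return f"Server={server};Database={db};Integrated Security=SSPI;"

print(windows_auth_conn("DBSERVER01", "OrdersDb"))
```

Where Windows authentication is an option, preferring it removes the credential-in-string problem entirely.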
Important: The sa account is still active even when you change from SQL
authentication to Windows authentication.
Also make sure you have applied strong passwords to all database accounts,
particularly privileged accounts, for example, members of sysadmin and
db_owner. If you use replication, check that the distributor_admin account has a
strong password.
If the Windows Guest account was enabled when you installed SQL Server, a SQL
Server guest account is created. Check each database and ensure that the SQL
Server guest account is not present. If it is, remove it.
Note: You cannot remove guest from the master, tempdb, and replication and
distribution databases.
Review the permissions granted to the public role in each database. Make sure it
has no permissions to access any database objects.
How many members belong to the sysadmin role?
Check how many logins belong to the sysadmin role. Ideally, no more than two
users should be system administrators.
Review the permissions granted to each database user account and make sure that
each account (including application accounts) only has the minimum required
permissions.
Use SQL Server Enterprise Manager to check that all sample databases, including
Pubs and Northwind, have been removed.
Check to make sure that neither the public role nor the guest user has access to
any of your stored procedures. To authorize access to stored procedures, you
should map the SQL Server login of your server to a database user, place the
database user in a user-defined database role, and then apply permissions to this
role to provide execute access to the stored procedures of your application.
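The mapping described above (login to database user, user to a user-defined role, execute permission to the role) can be sketched as T-SQL. The statements are composed as Python strings for illustration; the login, role, and procedure names are hypothetical, and sp_grantdbaccess/sp_addrole reflect SQL Server 2000-era system procedures.

```python
# Sketch: the stored procedure authorization steps expressed as T-SQL,
# composed as strings for illustration. Login, role, and procedure names
# are hypothetical.
def grant_proc_access(login: str, role: str, procs: list) -> str:
    stmts = [
        f"EXEC sp_grantdbaccess '{login}'",           # map the login to a database user
        f"EXEC sp_addrole '{role}'",                  # create a user-defined role
        f"EXEC sp_addrolemember '{role}', '{login}'", # place the user in the role
    ]
    stmts += [f"GRANT EXECUTE ON {p} TO {role}" for p in procs]
    return "\n".join(stmts)

print(grant_proc_access("MyAppLogin", "AppProcRole", ["dbo.GetOrders"]))
```

Because permissions attach to the role rather than to individual users, adding a second application login later requires only one sp_addrolemember call.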
The cmdExec function is used by the SQL Server Agent to execute Windows
command-line applications and scripts that are scheduled by the SQL Server Agent.
Check that access to cmdExec is restricted to members of the sysadmin role.
To check this, use SQL Server Enterprise Manager to expand the Management
node. Right-click SQL Server Agent and display the SQL Server Agent
Properties dialog box. Click the Job System tab and check that Only users with
SysAdmin privileges can execute CmdExec and ActiveScripting job steps is
selected.
Network Configuration
The goal for this phase of the review is to identify vulnerabilities in the configuration of your
network. For further background information about the issues raised by the review
questions in this section, see Chapter 15, "Securing Your Network."
To help focus and structure the review process, the review questions have been divided into
the following configuration categories:
Router
Firewall
Switch
Router
Use the following questions to review your router configuration:
Check with the networking hardware manufacturer to ensure you have the latest
patches.
Check that ingress and egress filtering is applied to block packets with spoofed
source addresses. For more information, see "Network Ingress Filtering: Defeating
Denial of Service Attacks which employ IP Source Address Spoofing," at
http://www.rfc-editor.org/rfc/rfc2267.txt.
Make sure you block Internet Control Message Protocol (ICMP) traffic at the outer
perimeter router to prevent attacks such as cascading ping floods and other
potential ICMP vulnerabilities.
Make sure that only the required interfaces are enabled on the router.
You should use strong password policies to mitigate the risks posed by brute force
and dictionary attacks.
When possible, shut down the external administration interface and use internal
access methods with ACLs.
Intrusion Detection Systems (IDSs) can show where the perpetrator is attempting
attacks.
Firewall
Check with the networking hardware manufacturer to ensure you have the latest
patches.
Ensure that you maintain healthy log cycling that allows quick data analysis.
Switch
Check with the networking hardware manufacturer to ensure that you have the
latest patches.
To make sure that insecure defaults are secured, check that you have changed all
factory default passwords and Simple Network Management Protocol (SNMP)
community strings to prevent network enumeration or total control of the switch.
Make sure that all unused services are disabled. Also, make sure that Trivial File
Transfer Protocol (TFTP) is disabled, Internet-facing administration points are
removed, and ACLs are configured to limit administrative access.
Summary
When you perform a deployment review, make sure that you review the configuration of the
underlying infrastructure on which the application is deployed and the configuration of the
application itself. Review the network, host, and application configuration and, where
possible, involve members of the various teams including infrastructure specialists,
administrators and developers.
Use the configuration categories identified in this chapter to help focus the review. These
categories include patches and updates, services, protocols, accounts, files and directories,
shares, ports, registry, and auditing and logging.
Related Security Resources
Related Microsoft patterns & practices Guidance
Building Secure ASP.NET Applications: Authentication, Authorization, and Secure
Communication on the MSDN® Web site at
http://msdn.microsoft.com/library/default.asp?url=/library/en-
us/dnnetsec/html/secnetlpMSDN.asp.
This guide focuses on the key elements of authentication, authorization, and secure
communication within and across the tiers of distributed .NET Web applications. It
is written for architects and developers.
This guide focuses on common authorization tasks and scenarios, and it provides
information that helps you choose the best approaches and techniques. It is written
for architects and developers.
Microsoft Solution for Securing Windows 2000 Server on the Microsoft Technet
Web site at http://www.microsoft.com/technet/treeview/default.asp?
url=/technet/security/prodtech/windows/secwin2k/default.asp.
This guide delivers procedures and best practices for system administrators to lock
down their Windows 2000-based servers and maintain secure operations once
they're up and running. It is written for IT Pros.
More Information
For more information on patterns and practices, refer to the Microsoft patterns & practices
home page at http://msdn.microsoft.com/practices/.
Security-Related Web Sites
Microsoft Security-Related Web Sites
Vulnerability assessment
Worldwide at http://www.microsoft.com/worldwide/.
For security issues within specific .NET Framework technologies, refer to the appropriate
newsgroup:
View the security bulletins that are available for your system.
Service Packs
Microsoft Service Packs.
Article 318836, "INFO: How to Obtain the Latest .NET Framework Service
Pack" in the Microsoft Knowledge Base at
http://support.microsoft.com/default.aspx?scid=kb;en-us;318836.
Use this service to register for regular e-mail bulletins that notify you of the
availability of new fixes and updates.
This announces the latest security breaches and corresponding fixes. It also gives
advice on reacting to vulnerabilities.
NTBugtraq at http://www.ntbugtraq.com/default.asp?pid=31&sid=1#020.
This site tracks the frequency of worms and denial of service attacks, as well as
other kinds of attacks.
Common Criteria
The Windows 2000 Common Criteria Security Target (ST) provides a set of security
requirements taken from the Common Criteria (CC) for Information Technology
Security Evaluation. The Windows 2000 product was evaluated against the
Windows 2000 ST and satisfies the ST requirements.
This document is written for those who are responsible for ensuring that the
installation and configuration process results in a secure configuration. A secure
configuration is one that enforces the requirements presented in the Windows 2000
ST, referred to as the Evaluated Configuration.
Reference Hub
Reference hub from Building Secure ASP.NET Applications at
http://msdn.microsoft.com/library/default.asp?url=/library/en-
us/dnnetsec/html/SecNetAP03.asp?frame=true.
Vulnerabilities
SANS Top 20 List at http://www.sans.org/top20/.
The World Wide Web Security FAQ at http://www.w3.org/Security/faq/www-security-faq.html.
Index of Checklists
Overview
Improving Web Application Security: Threats and Countermeasures provides a series of
checklists that help you turn the information and details that you learned in the individual
chapters into action. The following checklists are included:
Checklist: Architecture and Design Review
How to Use This Checklist
This checklist should evolve based on the experience you gain from performing reviews.
You might also want to perform custom checks that are based on a specific aspect of your
architecture or design to ensure that your deployment environment supports the design.
Deployment and Infrastructure Considerations
Check Description
¨ The design identifies, understands, and accommodates the company security policy.
¨ Restrictions imposed by infrastructure security (including available services, protocols, and firewall restrictions) are identified.
¨ The design recognizes and accommodates restrictions imposed by hosting environments (including application isolation requirements).
¨ The target environment code-access-security trust level is known.
¨ The design identifies the deployment infrastructure requirements and the deployment configuration of the application.
¨ Domain structures, remote application servers, and database servers are identified.
¨ The design identifies clustering requirements.
¨ The design identifies the application configuration maintenance points (such as what needs to be configured and what tools are available for an IDC admin).
¨ Secure communication features provided by the platform and the application are known.
¨ The design addresses Web farm considerations (including session state management, machine-specific encryption keys, Secure Sockets Layer (SSL), certificate deployment issues, and roaming profiles).
¨ The design identifies the certificate authority (CA) to be used by the site to support SSL.
¨ The design addresses the required scalability and performance criteria.
Application Architecture and Design Considerations
Input Validation
Check Description
¨ All entry points and trust boundaries are identified by the design.
¨ Input validation is applied whenever input is received from outside the current trust boundary.
¨ The design assumes that user input is malicious.
¨ Centralized input validation is used where appropriate.
¨ The input validation strategy that the application adopted is modular and consistent.
¨ The validation approach is to constrain, reject, and then sanitize input. (Looking for known, valid, and safe input is much easier than looking for known malicious or dangerous input.)
¨ Data is validated for type, length, format, and range.
¨ The design addresses potential canonicalization issues.
¨ Input file names and file paths are avoided where possible.
¨ The design addresses potential SQL injection issues.
¨ The design addresses potential cross-site scripting issues.
¨ The design does not rely on client-side validation.
¨ The design applies defense in depth to the input validation strategy by providing input validation across tiers.
¨ Output that contains input is encoded using HtmlEncode and UrlEncode.
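The constrain-then-encode items above can be sketched in a few lines. Python stand-ins are used: re for the constrain step, html.escape for HtmlEncode, and urllib.parse.quote for UrlEncode; the whitelist pattern is an illustrative assumption.

```python
# Sketch of the constrain-then-encode pattern from the checklist, using Python
# stand-ins: re for a whitelist check, html.escape for HtmlEncode, and
# urllib.parse.quote for UrlEncode. The name pattern is an illustrative assumption.
import html
import re
import urllib.parse

NAME_RE = re.compile(r"^[A-Za-z0-9 .'-]{1,40}$")   # constrain: known-good characters only

def safe_echo(user_input: str) -> str:
    if not NAME_RE.match(user_input):
        raise ValueError("input rejected")          # reject anything outside the whitelist
    return html.escape(user_input)                  # encode on output (HtmlEncode analogue)

def safe_link(value: str) -> str:
    return urllib.parse.quote(value)                # encode for URLs (UrlEncode analogue)

print(safe_echo("O'Brien"))
```

Note the order: the whitelist check runs first and rejects dangerous input outright; encoding is then a second layer of defense, not the only one.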
Authentication
Check Description
¨ Application trust boundaries are identified by the design.
¨ The design identifies the identities that are used to access resources across the trust boundaries.
¨ The design partitions the Web site into public and restricted areas using separate folders.
¨ The design identifies service account requirements.
¨ The design identifies secure storage of credentials that are accepted from users.
¨ The design identifies the mechanisms to protect the credentials over the wire (SSL, IPSec, encryption, and so on).
¨ Account management policies are taken into consideration by the design.
¨ The design ensures that minimum error information is returned in the event of authentication failure.
¨ The identity that is used to authenticate with the database is identified by the design.
¨ If SQL authentication is used, credentials are adequately secured over the wire (SSL or IPSec) and in storage (DPAPI).
¨ The design adopts a policy of using least-privileged accounts.
¨ Password digests (with salt) are stored in the user store for verification.
¨ Strong passwords are used.
¨ Authentication tickets (cookies) are not transmitted over non-encrypted connections.
Authorization
Check Description
¨ The role design offers sufficient separation of privileges (the design considers authorization granularity).
¨ Multiple gatekeepers are used for defense in depth.
¨ The application's login is restricted in the database to access specific stored procedures.
¨ The application's login does not have permissions to access tables directly.
¨ Access to system level resources is restricted.
¨ The design identifies code access security requirements. Privileged resources and privileged operations are identified.
¨ All identities that are used by the application are identified and the resources accessed by each identity are known.
Configuration Management
Check Description
¨ Administration interfaces are secured (strong authentication and authorization is used).
¨ Remote administration channels are secured.
¨ Configuration stores are secured.
¨ Configuration secrets are not held in plain text in configuration files.
¨ Administrator privileges are separated based on roles (for example, site content developer or system administrator).
¨ Least-privileged process accounts and service accounts are used.
Sensitive Data
Check Description
¨ Secrets are not stored unless necessary. (Alternate methods have been explored at design time.)
¨ Secrets are not stored in code.
¨ Database connections, passwords, keys, or other secrets are not stored in plain text.
¨ The design identifies the methodology to store secrets securely. (Appropriate algorithms and key sizes are used for encryption. It is preferable that DPAPI is used to store configuration data to avoid key management.)
¨ Sensitive data is not logged in clear text by the application.
¨ The design identifies protection mechanisms for sensitive data that is sent over the network.
¨ Sensitive data is not stored in persistent cookies.
¨ Sensitive data is not transmitted with the GET protocol.
Session Management
Check Description
¨ SSL is used to protect authentication cookies.
¨ The contents of authentication cookies are encrypted.
¨ Session lifetime is limited.
¨ Session state is protected from unauthorized access.
¨ Session identifiers are not passed in query strings.
Cryptography
Check Description
¨ Platform-level cryptography is used and it has no custom implementations.
¨ The design identifies the correct cryptographic algorithm (and key size) for the application's data encryption requirements.
¨ The methodology to secure the encryption keys is identified.
¨ The design identifies the key recycle policy for the application.
¨ Encryption keys are secured.
¨ DPAPI is used where possible to avoid key management issues.
¨ Keys are periodically recycled.
Parameter Manipulation
Check Description
¨ All input parameters are validated (including form fields, query strings, cookies, and HTTP headers).
¨ Cookies with sensitive data are encrypted.
¨ Sensitive data is not passed in query strings or form fields.
¨ HTTP header information is not relied on to make security decisions.
¨ View state is protected using MACs.
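The MAC item above can be illustrated with a generic keyed-hash sketch. This is not ASP.NET's implementation; the key would normally come from the <machineKey> configuration, and the fixed key here is for demonstration only.

```python
# Sketch: MAC-protecting a serialized value, analogous to view state MACs.
# The key would come from configuration in practice; the fixed key below is
# a demonstration-only assumption.
import base64
import hashlib
import hmac

KEY = b"demo-key-not-for-production"

def protect(payload: bytes) -> bytes:
    mac = hmac.new(KEY, payload, hashlib.sha1).digest()
    return base64.b64encode(payload + mac)

def verify(token: bytes) -> bytes:
    raw = base64.b64decode(token)
    payload, mac = raw[:-20], raw[-20:]            # a SHA-1 MAC is 20 bytes
    expected = hmac.new(KEY, payload, hashlib.sha1).digest()
    if not hmac.compare_digest(mac, expected):     # constant-time comparison
        raise ValueError("tampered")
    return payload

token = protect(b"role=user")
print(verify(token))  # b'role=user'
```

A client can read the payload (a MAC is not encryption), but any modification changes the expected MAC and is rejected on postback.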
Exception Management
Check Description
¨ The design outlines a standardized approach to structured exception handling across the application.
¨ Application exception handling minimizes the information disclosure in case of an exception.
¨ The design identifies generic error messages that are returned to the client.
¨ Application errors are logged to the error log.
¨ Private data (for example, passwords) is not logged.
Auditing and Logging
Check Description
¨ The design identifies the level of auditing and logging necessary for the application and identifies the key parameters to be logged and audited.
¨ The design considers how to flow caller identity across multiple tiers (at the operating system or application level) for auditing.
¨ The design identifies the storage, security, and analysis of the application log files.
Checklist: Securing ASP.NET
How to Use This Checklist
This checklist is a companion to Chapter 10, "Building Secure ASP.NET Pages and
Controls," Chapter 19, "Securing Your ASP.NET Application and Web Services," and
Chapter 20, "Hosting Multiple Web Applications." Use it to help you secure an ASP.NET
application and also as a snapshot of the corresponding chapters.
Design Considerations
Check Description
¨ Security decisions should not rely on client-side validations; they are made on the server side.
¨ The Web site is partitioned into public access areas and restricted areas that require authentication access. Navigation between these areas should not pass sensitive credential information.
¨ The identities used to access remote resources from ASP.NET Web applications are clearly identified.
¨ Mechanisms have been identified to secure credentials, authentication tickets, and other sensitive information over the network and in persistent stores.
¨ A secure approach to exception management is identified. The application fails securely in the event of exceptions.
¨ The site has granular authorization checks for pages and directories.
¨ Web controls, user controls, and resource access code are all partitioned in their own assemblies for granular security.
Application Categories Considerations
Input Validation
Check Description
¨ User input is validated for type, length, format, and range. Input is checked for known valid and safe data and then for malicious, dangerous data.
¨ String form field input is validated using regular expressions (for example, by the RegularExpressionValidator control).
¨ Regular HTML controls, query strings, cookies, and other forms of input are validated using the Regex class and/or your custom validation code.
¨ The RequiredFieldValidator control is used where data must be entered.
¨ Range checks in server controls are checked by RangeValidator controls.
¨ Free form input is sanitized to clean malicious data.
¨ Input file names are well formed and are verifiably valid within the application context.
¨ Output that includes input is encoded with HtmlEncode and UrlEncode.
¨ MapPath restricts cross-application mapping where appropriate.
¨ Character encoding is set by the server (ISO-8859-1 is recommended).
¨ The ASP.NET version 1.1 validateRequest option is enabled.
¨ URLScan is installed on the Web server.
¨ The HttpOnly cookie option is used for defense in depth to help prevent cross-site scripting. (This applies to Internet Explorer 6.1 or later.)
¨ SQL parameters are used in data access code to validate length and type of data and to help prevent SQL injection.
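The last item above, using SQL parameters, can be sketched with a parameterized query. sqlite3 stands in for ADO.NET's SqlParameter so the example is self-contained; the table and data are hypothetical.

```python
# Sketch: a parameterized query, shown with sqlite3 as a portable stand-in
# for SqlParameter in ADO.NET. The parameter value is passed as data, so a
# quote character in the input cannot break out of the SQL statement.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

malicious = "alice' OR '1'='1"
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()
print(rows)  # [] - the injected text is treated as a literal and matches nothing
```

Had the value been concatenated into the SQL text instead, the same input would have rewritten the WHERE clause and returned every row.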
Authentication
Check Description
¨ Site is partitioned to restricted areas and public areas.
¨ Absolute URLs are used for navigation where the site is partitioned with secure and non-secure folders.
¨ Secure Sockets Layer (SSL) is used to protect credentials and authentication cookies.
¨ The slidingExpiration attribute is set to "false" and limited authentication cookie time-outs are used where the cookie is not protected by using SSL.
¨ The forms authentication cookie is restricted to HTTPS connections by using the requireSSL attribute or the Secure cookie property.
¨ The authentication cookie is encrypted and integrity checked (protection="All").
¨ Authentication cookies are not persisted.
¨ Application cookies have unique path/name combinations.
¨ Personalization cookies are separate from authentication cookies.
¨ Passwords are not stored directly in the user store; password digests with salt are stored instead.
¨ The impersonation credentials (if using a fixed identity) are encrypted in the configuration file by using Aspnet_setreg.exe.
¨ Strong password policies are implemented for authentication.
¨ The <credentials> element is not used inside the <forms> element for Forms authentication (use it for testing only).
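The salted-digest item above can be sketched as follows. PBKDF2 from Python's hashlib illustrates the idea; it is a stand-in for illustration, not the mechanism the guide's ASP.NET-era APIs used.

```python
# Sketch: storing a salted password digest instead of the password itself.
# PBKDF2 is used here for illustration; the iteration count and salt size
# are illustrative assumptions.
import hashlib
import os

def make_digest(password: str, salt=None):
    salt = salt or os.urandom(16)                  # a per-user random salt
    digest = hashlib.pbkdf2_hmac("sha1", password.encode(), salt, 10000)
    return salt, digest                            # store both; never the password

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    return hashlib.pbkdf2_hmac("sha1", password.encode(), salt, 10000) == digest

salt, digest = make_digest("S3cret!")
print(verify_password("S3cret!", salt, digest))  # True
```

Because each user gets a different salt, identical passwords produce different digests, which defeats precomputed lookup attacks against the user store.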
Authorization
Check Description
¨ URL authorization is used for page and directory access control.
¨ File authorization is used with Windows authentication.
¨ Principal permission demands are used to secure access to classes and members.
¨ Explicit role checks are used if fine-grained authorization is required.
Configuration Management
Check Description
¨ Configuration file retrieval is blocked by using HttpForbiddenHandler.
¨ A least-privileged account is used to run ASP.NET.
¨ Custom account credentials (if used) are encrypted on the <processModel> element by using Aspnet_setreg.exe.
¨ To enforce machine-wide policy, Web.config settings are locked by using allowOverride="false" in Machine.config.
Sensitive Data
Check Description
¨ SSL is used to protect sensitive data on the wire.
¨ Sensitive data is not passed across pages; it is maintained using server-side state management.
¨ Sensitive data is not stored in cookies, hidden form fields, or query strings.
¨ Do not cache sensitive data. Output caching is off by default.
¨ Plain text passwords are avoided in Web.config and Machine.config files. (Aspnet_setreg.exe is used to encrypt credentials.)
Session Management
Check Description
¨ The session cookie is protected using SSL on all pages that require authenticated access.
¨ The session state service is disabled if not used.
¨ The session state service (if used) runs using a least-privileged account.
¨ Windows authentication is used to connect to the Microsoft® SQL Server™ state database.
¨ Access to state data in the SQL Server is restricted.
¨ Connection strings are encrypted by using Aspnet_setreg.exe.
¨ The communication channel to state store is encrypted (IPSec or SSL).
Parameter Manipulation
Check Description
¨ View state is protected using message authentication codes (MACs).
¨ Query strings with server secrets are hashed.
¨ All input parameters are validated.
¨ Page.ViewStateUserKey is used to counter one-click attacks.
Exception Management
Check Description
¨ Structured exception handling is used.
¨ Exception details are logged on the server.
¨ Generic error pages with harmless messages are returned to the client.
¨ Page-level or application-level error handlers are implemented.
¨ The application distinguishes between errors and exception conditions.
Auditing and Logging
Check Description
¨ The ASP.NET process is configured to allow new event sources to be created at runtime, or application event sources to be created at installation time.
Configuration File Settings
Check Description
<trace>
¨ Tracing is not enabled on the production servers.
<trace enabled="false" />
<globalization>
¨ Request and response encoding is appropriately configured.
<httpRuntime>
¨ maxRequestLength is configured to prevent users from uploading very large files (optional).
<compilation>
¨ Debug compiles are not enabled on the production servers by setting debug="false".
<compilation debug="false" . . ./>
<pages>
¨ If the application does not use view state, enableViewState is set to "false".
<pages enableViewState="false" . . ./>
If the application uses view state, enableViewState is set to "true" and
enableViewStateMac is set to "true" to detect view state tampering.
<pages enableViewState="true" enableViewStateMac="true" />
<customErrors>
¨ Custom error pages are returned to the client and detailed exception details are
prevented from being returned by setting mode="On".
<customErrors mode="On" />
A generic error page is specified by the defaultRedirect attribute.
<customErrors mode="On" defaultRedirect="/apperrorpage.htm" />
<authentication>
¨ The authentication mode is appropriately configured to support application
requirements. To enforce the use of a specific authentication type, a <location>
element with allowOverride="false" is used.
<location path="" allowOverride="false">
  <system.web>
    <authentication mode="Windows" />
  </system.web>
</location>
<forms>
¨ The Web site is partitioned for public and restricted access.
The Forms authentication configuration is secure:
<forms loginUrl="Restricted\login.aspx"
       protection="All"
       requireSSL="true"
       timeout="10"
       name="AppNameCookie"
       path="/FormsAuth"
       slidingExpiration="true" />
The authentication cookie is encrypted and integrity checked (protection).
SSL is required for the authentication cookie (requireSSL).
Sliding expiration is set to false if SSL is not used (slidingExpiration).
The session lifetime is restricted (timeout).
Cookie names and paths are unique (name and path).
The <credentials> element is not used.
<identity>
¨ Impersonation identities (if used) are encrypted in the registry by using
Aspnet_setreg.exe:
<identity impersonate="true"
  userName="registry:HKLM\SOFTWARE\YourApp\identity\ASPNET_SETREG,userName"
  password="registry:HKLM\SOFTWARE\YourApp\identity\ASPNET_SETREG,password"/>
<authorization>
¨ Correct format of role names is verified.
<machineKey>
¨ If multiple ASP.NET Web applications are deployed on the same Web server, the
"IsolateApps" setting is used to ensure that a separate key is generated for each
Web application.
<machineKey validationKey="AutoGenerate,IsolateApps"
            decryptionKey="AutoGenerate,IsolateApps"
            validation="SHA1" />
If the ASP.NET Web application is running in a Web farm, specific machine keys
are used, and these keys are copied across all servers in the farm.
If view state is enabled, the validation attribute is set to "SHA1".
The validation attribute is set to "3DES" if the Forms authentication cookie is to
be encrypted for the application.
<sessionState>
¨ If mode="StateServer", then credentials are stored in an encrypted form in the
registry by using Aspnet_setreg.exe.
If mode="SQLServer", then Windows authentication is used to connect to the
state store database and credentials are stored in an encrypted form in the
registry by using Aspnet_setreg.exe.
<httpHandlers>
¨ Unused file types are mapped to HttpForbiddenHandler to prevent files from being
retrieved over HTTP. For example:
<add verb="*" path="*.rem"
     type="System.Web.HttpForbiddenHandler"/>
<processModel>
¨ A least-privileged account like ASPNET is used to run the ASP.NET process.
<processModel userName="Machine" password="AutoGenerate" . . ./>
The system account is not used to run the ASP.NET process.
The Act as part of the operating system privilege is not granted to the process
account.
Credentials for custom accounts are encrypted by using Aspnet_setreg.exe.
<processModel
  userName="registry:HKLM\SOFTWARE\MY_SECURE_APP\processmodel\ASPNET_SETREG,userName"
  password="registry:HKLM\SOFTWARE\MY_SECURE_APP\processmodel\ASPNET_SETREG,password" . . ./>
If the application uses Enterprise Services, comAuthenticationLevel and
comImpersonationLevel are configured appropriately.
Call level authentication is set at minimum to ensure that all method calls can be
authenticated by the remote application.
PktPrivacy is used to encrypt and tamper proof the data across the wire in the
absence of infrastructure channel security (IPSec).
PktIntegrity is used for tamper proofing with no encryption. (Eavesdroppers with
network monitors can see your data.)
<webServices>
¨ Unused protocols are disabled.
Automatic generation of Web Services Description Language (WSDL) is disabled
(optional).
Web Farm Considerations
Check Description
¨ Session state. To avoid server affinity, the ASP.NET session state is maintained
out of process in the ASP.NET SQL Server state database or in the out-of-process
state service that runs on a remote machine.
¨ Encryption and verification. The keys used to encrypt and verify Forms
authentication cookies and view state are the same across all servers in a Web farm.
¨ DPAPI. DPAPI cannot be used with the machine key to encrypt common data that
needs to be accessed by all servers in the farm. To encrypt shared data on a
remote server, use an alternate implementation, such as 3DES.
Hosting Multiple Applications
Check Description
¨ Applications have distinct machine keys. Use IsolateApps on <machineKey> or use per-application <machineKey> elements.
<machineKey validationKey="AutoGenerate,IsolateApps"
            decryptionKey="AutoGenerate,IsolateApps" . . . />
¨ Unique path/name combinations for Forms authentication cookies are enabled for each application.
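A unique name and path per application can be set on the <forms> element; the cookie name and path values below are illustrative only:

```xml
<authentication mode="Forms">
  <!-- Hypothetical cookie name and path; each application on the
       server should use its own distinct combination. -->
  <forms name="AppOneAuth" path="/AppOne" loginUrl="login.aspx"
         protection="All" timeout="20" />
</authentication>
```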
¨ Multiple processes (IIS 6.0 application pools) are used for application isolation on Microsoft Windows® Server 2003.
¨ Multiple anonymous user accounts (and impersonation) are used for application isolation on Windows 2000.
¨ Common machine keys are enabled on all servers in a Web farm.
¨ Separate machine keys for each application are used when hosting multiple applications on a single server.
¨ Code access security trust levels are used for process isolation and to restrict access to system resources (requires .NET Framework version 1.1).
Check Description
¨ Temporary ASP.NET files
%windir%\Microsoft.NET\Framework\{version}\Temporary ASP.NET Files
ASP.NET process account and impersonated identities: Full Control
¨ Temporary directory (%temp%)
ASP.NET process account: Full Control
¨ .NET Framework directory
%windir%\Microsoft.NET\Framework\{version}
ASP.NET process account and impersonated identities: Read and Execute, List Folder Contents
¨ .NET Framework configuration directory
%windir%\Microsoft.NET\Framework\{version}\CONFIG
ASP.NET process account and impersonated identities: Read and Execute, List Folder Contents, Read
¨ Web site root
C:\inetpub\wwwroot or the path that the default Web site points to
ASP.NET process account: Full Control
¨ System root directory
%windir%\system32
ASP.NET process account: Read
¨ Global assembly cache
%windir%\assembly
Process account and impersonated identities: Read
¨ Content directory
C:\inetpub\wwwroot\YourWebApp
Process account: Read and Execute, List Folder Contents, Read
With .NET Framework version 1.0, all parent directories from the content directory to the file system root directory also require the above permissions.
Note Parent directories include:
C:\
C:\inetpub\
C:\inetpub\wwwroot\
Check Description
¨ IIS Web permissions are configured. Bin directory does not have Read, Write, or Directory browsing permissions. Execute permissions are set to None.
¨ Authentication settings are removed (so that all access is denied).
Checklist: Securing Web Services
How to Use This Checklist
This checklist is a companion to Chapter 12, "Building Secure Web Services." Use it to help
you build and secure your Web services and also as a snapshot of the corresponding
chapter.
Design Considerations
Check Description
¨ The authentication strategy has been identified.
¨ Privacy and integrity requirements of SOAP messages have been considered.
¨ Identities that are used for resource access have been identified.
¨ Implications of code access security trust levels have been considered.
Development Considerations
Input Validation
Check Description
¨ Input to Web methods is constrained and validated for type, length, format, and range.
¨ Input data sanitization is only performed in addition to constraining input data.
¨ XML input data is validated based on an agreed schema.
Authentication
Check Description
¨ Web services that support restricted operations or provide sensitive data support authentication.
¨ If plain text credentials are passed in SOAP headers, SOAP messages are only passed over encrypted communication channels, for example, using SSL.
¨ Basic authentication is only used over an encrypted communication channel.
¨ Authentication mechanisms that use SOAP headers are based on Web Services Security (WS-Security) using the Web Services Enhancements (WSE).
Authorization
Check Description
¨ Web services that support restricted operations or provide sensitive data support authorization.
¨ Where appropriate, access to the Web service is restricted using URL authorization, or file authorization if Windows authentication is used.
¨ Where appropriate, access to publicly accessible Web methods is restricted using declarative principal permission demands.
Sensitive Data
Check Description
¨ Sensitive data in Web service SOAP messages is encrypted using XML encryption, OR messages are only passed over encrypted communication channels (for example, using SSL).
Parameter Manipulation
Check Description
¨ If parameter manipulation is a concern (particularly where messages are routed through multiple intermediary nodes across multiple network links), messages are digitally signed to ensure that they cannot be tampered with.
Exception Management
Check Description
¨ Structured exception handling is used when implementing Web services.
¨ Exception details are logged (except for private data, such as passwords).
¨ SoapExceptions are thrown and returned to the client using the standard <Fault> SOAP element.
¨ If application-level exception handling is required, a custom SOAP extension is used.
Auditing and Logging
Check Description
¨ The Web service logs transactions and key operations.
Proxy Considerations
Check Description
Check Description
¨ Unnecessary Web service protocols, including HTTP GET and HTTP POST, are disabled.
¨ The documentation protocol is disabled if you do not want to support the dynamic generation of WSDL.
¨ The Web service runs using a least-privileged process account (configured through the <processModel> element in Machine.config). Custom accounts are encrypted by using Aspnet_setreg.exe.
¨ Tracing is disabled with:
<trace enabled="false" />
¨ Debug compilations are disabled with:
<compilation debug="false" explicit="true" defaultLanguage="vb" />
Checklist: Securing Enterprise Services
How to Use This Checklist
This checklist is a companion to Chapter 11, "Building Secure Serviced Components" and
Chapter 17, "Securing Your Application Server." Use it to help you secure Enterprise
Services and the server it runs on, or as a quick evaluation snapshot of the corresponding
chapters.
This checklist should evolve with steps that you discover to secure Enterprise Services.
Developer Checks
Use the following checks if you build serviced components.
Authentication
Check Description
¨ Call-level authentication is used at minimum to prevent anonymous access. Serviced component assemblies include:
[assembly: ApplicationAccessControl(
    Authentication = AuthenticationOption.Call)]
Authorization
Check Description
¨ Role-based security is enabled. Serviced component assemblies include:
[assembly: ApplicationAccessControl(true)]
¨ Component-level access checks are enabled to support component-level, interface-level, and method-level role checks. Serviced component assemblies include:
[assembly: ApplicationAccessControl(AccessChecksLevel=
    AccessChecksLevelOption.ApplicationComponent)]
¨ Component-level access checks are enforced for all serviced components. Classes are annotated with:
[ComponentAccessControl(true)]
¨ To support method-level security, the [SecureMethod] attribute is used on classes or method implementations, or the [SecurityRole] attribute is used on method implementations.
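As a sketch, a serviced component that enforces method-level role checks might look like the following (the class, method, and role names are illustrative, not from the guide):

```csharp
using System.EnterpriseServices;

[ComponentAccessControl(true)]
[SecureMethod]  // enables method-level access checks for this class
public class OrderProcessor : ServicedComponent
{
    // Only members of the hypothetical "Manager" COM+ role may call this method.
    [SecurityRole("Manager")]
    public void ApproveOrder(int orderId)
    {
        // ... business logic ...
    }
}
```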
Configuration Management
Check Description
¨ Server applications are configured to run with least-privileged accounts.
¨ Server applications only run using the interactive user account during development.
¨ Object constructor strings do not contain plain text secrets.
Sensitive Data
Check Description
¨ In the absence of an IPSec infrastructure, RPC encryption is used to secure sensitive data over the network. Serviced component assemblies that use RPC encryption include:
[assembly: ApplicationAccessControl(
    Authentication = AuthenticationOption.Privacy)]
Auditing and Logging
Check Description
¨ User transactions are logged to an event log. The audit record includes the original caller identity from SecurityCallContext.OriginalCaller.
Deployment Considerations
Check Description
¨ Port ranges are defined if you use dynamic port range allocation, OR static endpoint mapping is configured.
¨ Secrets are not stored in object constructor strings. Secrets such as database connection strings are encrypted prior to storage.
¨ The server application run-as account is configured as a least-privileged account.
Impersonation
Check Description
¨ The impersonation level is configured correctly. For ASP.NET clients, the impersonation level is configured in Machine.config on the <processModel> element. For Enterprise Services client applications, the level is configured in the COM+ catalog.
¨ Serviced component assemblies define the required impersonation level by using the ApplicationAccessControl attribute as shown below:
[assembly: ApplicationAccessControl(
    ImpersonationLevel=ImpersonationLevelOption.Identify)]
Administrator Checklist
Check Description
¨ Latest COM+ updates and patches are installed.
¨ Object constructor strings do not contain plain text secrets.
¨ COM+ administration components are restricted.
¨ Impersonation level that is set for the application is correct.
¨ Server applications are configured to run with a least-privileged account. Server applications do not run using the identity of the interactively logged on user.
¨ DTC service is disabled if it is not required.
Checklist: Securing Remoting
How to Use This Checklist
This checklist is a companion to Chapter 13, "Building Secure Remoted Components." Use it to help you build secure components that use the Microsoft® .NET remoting technology and as a snapshot of the corresponding chapter.
Design Considerations
Check Description
¨ Remote components are not exposed to the Internet.
¨ The ASP.NET host and HttpChannel are used to take advantage of Internet Information Services (IIS) and ASP.NET security features.
¨ TcpChannel (if used) is only used in trusted server scenarios.
¨ TcpChannel (if used) is used in conjunction with custom authentication and authorization solutions.
Input Validation
Check Description
¨ MarshalByRefObject objects from clients are not accepted without validating the source of the object.
¨ The risk of serialization attacks is mitigated by setting the typeFilterLevel attribute programmatically or in the application's Web.config file.
¨ All field items that are retrieved from serialized data streams are validated as they are created on the server side.
Authentication
Check Description
¨ Anonymous authentication is disabled in IIS.
¨ ASP.NET is configured for Windows authentication.
¨ Client credentials are configured at the client through the proxy object.
¨ Authentication connection sharing is used to improve performance.
¨ Clients are forced to authenticate on each call (unsafeAuthenticatedConnectionSharing is set to "false").
¨ connectionGroupName is specified to prevent unwanted reuse of authentication connections.
¨ Plain text credentials are not passed over the network.
¨ IPrincipal objects passed from the client are not trusted.
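The credential checks above can be sketched on the client through the proxy's channel sink properties (assuming `proxy` is an existing remoting proxy; the group name is illustrative):

```csharp
using System.Collections;
using System.Net;
using System.Runtime.Remoting.Channels;

// 'proxy' is an existing transparent proxy to the remote object.
IDictionary props = ChannelServices.GetChannelSinkProperties(proxy);
props["credentials"] = CredentialCache.DefaultCredentials;    // use the caller's Windows credentials
props["unsafeauthenticatedconnectionsharing"] = false;        // force authentication on each call
props["connectiongroupname"] = "CurrentUserGroup";            // illustrative group name
```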
Authorization
Check Description
¨ IPSec is used for machine-level access control.
¨ File authorization is enabled for user access control.
¨ Users are authorized with principal-based role checks.
¨ Where appropriate, access to remote resources is restricted by setting the rejectRemoteRequests attribute to "true".
Configuration Management
Check Description
¨ Configuration files are locked down and secured for both the client and the server.
¨ Generic error messages are sent to the client by setting the mode attribute of the <customErrors> element to "On".
Sensitive Data
Check Description
¨ Exchange of sensitive application data is secured by using SSL, IPSec, or a custom encryption sink.
Exception Management
Check Description
¨ Structured exception handling is used.
¨ Exception details are logged (not including private data, such as passwords).
¨ Generic error pages with standard, user-friendly messages are returned to the client.
Auditing and Logging
Check Description
¨ If ASP.NET is used as the host, IIS auditing features are enabled.
¨ If required, a custom channel sink is used to perform logging on the client and the server.
Checklist: Securing Data Access
How to Use This Checklist
This checklist is a companion to Chapter 14, "Building Secure Data Access" and Chapter
16, "Securing Your Database Server." Use it to help you build secure data access, or as a
quick evaluation snapshot of the corresponding chapters.
This checklist should evolve with secure data access practices that you discover during
software development.
SQL Injection Checks
Check Description
¨ Input passed to data access methods that originates outside the current trust boundary is constrained. Sanitization of input is only used as a defense-in-depth measure.
¨ Stored procedures that accept parameters are used by data access code. If stored procedures are not used, type-safe SQL parameters are used to construct SQL commands.
¨ Least-privileged accounts are used to connect to the database.
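The parameterized approach above can be sketched as follows (the stored procedure name, parameter, and the `connectionString`/`customerId` variables are hypothetical):

```csharp
using System.Data;
using System.Data.SqlClient;

// Type-safe parameters prevent input from being treated as executable SQL.
using (SqlConnection conn = new SqlConnection(connectionString))
{
    SqlCommand cmd = new SqlCommand("spGetCustomer", conn);   // hypothetical procedure
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@CustomerId", SqlDbType.Int).Value = customerId;
    conn.Open();
    SqlDataReader reader = cmd.ExecuteReader();
    // ... process results ...
}
```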
Authentication
Check Description
¨ Windows authentication is used to connect to the database.
¨ Strong passwords are used and enforced.
¨ If SQL Server authentication is used, the credentials are secured over the network by using IPSec or SSL, or by installing a database server certificate.
¨ If SQL Server authentication is used, connection strings are encrypted by using DPAPI and are stored in a secure location.
¨ The application connects using a least-privileged account. The sa account or other privileged accounts that are members of the sysadmin or db_owner roles are not used for application logins.
Authorization
Check Description
¨ Calling users are restricted using declarative or imperative principal permission checks (normally performed by business logic).
¨ Calling code is restricted using identity permission demands in scenarios where you know and want to limit the calling code.
¨ The application login is restricted in the database and can only execute selected stored procedures. The application's login has no direct table access.
Configuration Management
Check Description
¨ Windows authentication is used to avoid credential management.
¨ Connection strings are encrypted and the encrypted data is stored securely, for example, in a restricted registry key.
¨ OLE DB connection strings do not contain Persist Security Info="true" or "yes".
¨ UDL files are secured with restricted ACLs.
Sensitive Data
Check Description
¨ Sensitive data is encrypted in the database using strong symmetric encryption (for example, 3DES).
¨ Symmetric encryption keys are backed up, encrypted with DPAPI, and stored in a restricted registry key.
¨ Sensitive data is secured over the network by using SSL or IPSec.
¨ Passwords are not stored in custom user store databases. Password hashes are stored with salt values instead.
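A salted hash can be produced along these lines (a sketch using SHA1; the `password` variable is assumed, and the storage step is left out):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

byte[] salt = new byte[16];
new RNGCryptoServiceProvider().GetBytes(salt);        // random, per-user salt

byte[] pwd = Encoding.UTF8.GetBytes(password);        // 'password' supplied by the caller
byte[] toHash = new byte[salt.Length + pwd.Length];
Buffer.BlockCopy(salt, 0, toHash, 0, salt.Length);
Buffer.BlockCopy(pwd, 0, toHash, salt.Length, pwd.Length);

byte[] hash = new SHA1CryptoServiceProvider().ComputeHash(toHash);
// Store 'hash' and 'salt' in the user store; never store the password itself.
```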
Exception Management
Check Description
¨ ADO.NET exceptions are trapped and logged.
¨ Database connections and other limited resources are released in case of exception or completion of operation.
¨ ASP.NET is configured with a generic error page using the <customErrors> element.
Deployment Considerations
Check Description
¨ Firewall restrictions ensure that only the SQL Server listening port is available on the database server.
¨ A method for maintaining encrypted database connection strings is defined.
¨ The application is configured to use a least-privileged database login.
¨ SQL Server auditing is configured. Failed login attempts are logged at minimum.
¨ Data privacy and integrity over the network is provided with IPSec or SSL.
Checklist: Securing Your Network
How to Use This Checklist
This checklist is a companion to Chapter 15, "Securing Your Network." Use it to help secure
your network, or as a quick evaluation snapshot of the corresponding chapters.
This checklist should evolve as you discover steps that help implement your secure
network.
Router Considerations
Check Description
¨ Latest patches and updates are installed.
¨ You have subscribed to the router vendor's security notification service.
¨ Known vulnerable ports are blocked.
¨ Ingress and egress filtering is enabled. Incoming and outgoing packets are confirmed as coming from public or internal networks.
¨ ICMP traffic is screened from the internal network.
¨ Administration interfaces to the router are enumerated and secured.
¨ Web-facing administration is disabled.
¨ Directed broadcast traffic is not received or forwarded.
¨ Unused services are disabled (for example, TFTP).
¨ Strong passwords are used.
¨ Logging is enabled and audited for unusual traffic or patterns.
¨ Large ping packets are screened.
¨ Routing Information Protocol (RIP) packets, if used, are blocked at the outermost router.
Firewall Considerations
Check Description
¨ Latest patches and updates are installed.
¨ Effective filters are in place to prevent malicious traffic from entering the perimeter.
¨ Unused ports are blocked by default.
¨ Unused protocols are blocked by default.
¨ IPSec is configured for encrypted communication within the perimeter network.
¨ Intrusion detection is enabled at the firewall.
Switch Considerations
Check Description
¨ Latest patches and updates are installed.
¨ Administrative interfaces are enumerated and secured.
¨ Unused administrative interfaces are disabled.
¨ Unused services are disabled.
¨ Available services are secured.
Checklist: Securing Your Web Server
How to Use This Checklist
This checklist is a companion to Chapter 16, "Securing Your Web Server." Use it to help
implement a secure Web server, or as a quick evaluation snapshot of the corresponding
chapter.
This checklist should evolve with steps that you discover to secure your Web server.
Patches and Updates
Check Description
¨ MBSA is run at regular intervals to check for the latest operating system and component updates. For more information, see http://www.microsoft.com/technet/treeview/default.asp?url=/technet/security/tools/Tools/mbsahome.asp.
¨ The latest updates and patches are applied for Windows, IIS server, and the .NET Framework. (These are tested on development servers prior to deployment on the production servers.)
¨ You subscribe to the Microsoft Security Notification Service at http://www.microsoft.com/technet/treeview/default.asp?url=/technet/security/bulletin/notify.asp.
IISLockdown
Check Description
¨ IISLockdown has been run on the server.
¨ URLScan is installed and configured.
Services
Check Description
¨ Unnecessary Windows services are disabled.
¨ Services are running with least-privileged accounts.
¨ FTP, SMTP, and NNTP services are disabled if they are not required.
¨ Telnet service is disabled.
¨ ASP.NET state service is disabled and is not used by your applications.
Protocols
Check Description
¨ WebDAV is disabled if not used by the application, OR it is secured if it is required. For more information, see Microsoft Knowledge Base article 323470, "How To: Create a Secure WebDAV Publishing Directory."
¨ TCP/IP stack is hardened.
¨ NetBIOS and SMB are disabled (closes ports 137, 138, 139, and 445).
Accounts
Check Description
¨ Unused accounts are removed from the server.
¨ Windows Guest account is disabled.
¨ Administrator account is renamed and has a strong password.
¨ IUSR_MACHINE account is disabled if it is not used by the application.
¨ If your applications require anonymous access, a custom least-privileged anonymous account is created.
¨ The anonymous account does not have write access to Web content directories and cannot execute command-line tools.
¨ ASP.NET process account is configured for least privilege. (This only applies if you are not using the default ASPNET account, which is a least-privileged account.)
¨ Strong account and password policies are enforced for the server.
¨ Remote logons are restricted. (The "Access this computer from the network" user-right is removed from the Everyone group.)
¨ Accounts are not shared among administrators.
¨ Null sessions (anonymous logons) are disabled.
¨ Approval is required for account delegation.
¨ Users and administrators do not share accounts.
¨ No more than two accounts exist in the Administrators group.
¨ Administrators are required to log on locally, OR the remote administration solution is secure.
Files and Directories
Check Description
¨ Files and directories are contained on NTFS volumes.
¨ Web site content is located on a non-system NTFS volume.
¨ Log files are located on a non-system NTFS volume and not on the same volume where the Web site content resides.
¨ The Everyone group is restricted (no access to \WINNT\system32 or Web directories).
¨ Web site root directory has a deny write ACE for anonymous Internet accounts.
¨ Content directories have a deny write ACE for anonymous Internet accounts.
¨ Remote IIS administration application is removed (\WINNT\System32\Inetsrv\IISAdmin).
¨ Resource kit tools, utilities, and SDKs are removed.
¨ Sample applications are removed (\WINNT\Help\IISHelp, \Inetpub\IISSamples).
Shares
Check Description
¨ All unnecessary shares are removed (including default administration shares).
¨ Access to required shares is restricted (the Everyone group does not have access).
¨ Administrative shares (C$ and Admin$) are removed if they are not required. (Microsoft Systems Management Server (SMS) and Microsoft Operations Manager (MOM) require these shares.)
Ports
Check Description
¨ Internet-facing interfaces are restricted to port 80 (and 443 if SSL is used).
¨ Intranet traffic is encrypted (for example, with SSL) or restricted if you do not have a secure data center infrastructure.
Registry
Check Description
¨ Remote registry access is restricted.
¨ SAM is secured (HKLM\System\CurrentControlSet\Control\LSA\NoLMHash). This applies only to standalone servers.
Auditing and Logging
Check Description
¨ Failed logon attempts are audited.
¨ IIS log files are relocated and secured.
¨ Log files are configured with an appropriate size depending on the application security requirements.
¨ Log files are regularly archived and analyzed.
¨ Access to the Metabase.bin file is audited.
¨ IIS is configured for W3C Extended log file format auditing.
Sites and Virtual Directories
Check Description
¨ Web sites are located on a non-system partition.
¨ "Parent paths" setting is disabled.
¨ Potentially dangerous virtual directories, including IISSamples, IISAdmin, IISHelp, and Scripts virtual directories, are removed.
¨ MSADC virtual directory (RDS) is removed or secured.
¨ Include directories do not have Read Web permission.
¨ Virtual directories that allow anonymous access restrict Write and Execute Web permissions for the anonymous account.
¨ Script source access is enabled only on folders that support content authoring.
¨ Write access is enabled only on folders that support content authoring, and these folders are configured for authentication (and SSL encryption, if required).
¨ FrontPage Server Extensions (FPSE) are removed if not used. If they are used, they are updated and access to FPSE is restricted.
Script Mappings
Check Description
¨ Extensions not used by the application are mapped to 404.dll (.idq, .htw, .ida, .shtml, .shtm, .stm, .idc, .htr, .printer).
¨ Unnecessary ASP.NET file type extensions are mapped to "HttpForbiddenHandler" in Machine.config.
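For example, handler mappings of the following form in Machine.config return a forbidden response for extensions the application does not serve (the .rem and .soap entries apply only if .NET remoting is not used):

```xml
<httpHandlers>
  <!-- Return "forbidden" for remoting endpoints if .NET remoting is not used -->
  <add verb="*" path="*.rem"  type="System.Web.HttpForbiddenHandler" />
  <add verb="*" path="*.soap" type="System.Web.HttpForbiddenHandler" />
</httpHandlers>
```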
ISAPI Filters
Check Description
¨ Unnecessary or unused ISAPI filters are removed from the server.
IIS Metabase
Check Description
¨ Access to the metabase is restricted by using NTFS permissions (%systemroot%\system32\inetsrv\metabase.bin).
¨ IIS banner information is restricted (IP address in content location disabled).
Server Certificates
Check Description
¨ Certificate date ranges are valid.
¨ Certificates are used for their intended purpose (for example, the server certificate is not used for e-mail).
¨ The certificate's public key is valid, all the way to a trusted root authority.
¨ The certificate has not been revoked.
Machine.config
Check Description
¨ Protected resources are mapped to HttpForbiddenHandler.
¨ Unused HttpModules are removed.
¨ Tracing is disabled: <trace enabled="false"/>
¨ Debug compiles are turned off:
<compilation debug="false" explicit="true" defaultLanguage="vb">
Code Access Security
Check Description
¨ Code access security is enabled on the server.
¨ All permissions have been removed from the local intranet zone.
¨ All permissions have been removed from the Internet zone.
Check Description
¨ IISLockdown tool has been run on the server.
¨ HTTP requests are filtered. URLScan is installed and configured.
¨ Remote administration of the server is secured and configured for encryption, low session time-outs, and account lockouts.
Dos and Don'ts
Do use a dedicated machine as a Web server.
Do configure a separate anonymous user account for each application if you host multiple Web applications.
Do not allow anyone to locally log on to the machine except for the administrator.
Checklist: Securing Your Database Server
How to Use This Checklist
This checklist is a companion to Chapter 18, "Securing Your Database Server." Use it to
help you secure a database server and also as a snapshot of the corresponding chapter.
Installation Considerations for Production Servers
Check Description
¨ Upgrade tools, debug symbols, replication support, books online, and development tools are not installed on the production server.
¨ Microsoft® SQL Server™ is not installed on a domain controller.
¨ SQL Server Agent is not installed if it is not being used by any application.
¨ SQL Server is installed on a dedicated database server.
¨ SQL Server is installed on an NTFS partition.
¨ Windows Authentication mode is selected unless SQL Server Authentication is specifically required, in which case Mixed Mode is selected.
¨ A strong password is applied for the sa account or any other member of the sysadmin role. (Use strong passwords for all accounts.)
¨ The database server is physically secured.
Patches and Updates
Check Description
¨ The latest service packs and patches have been applied for SQL Server. (See http://support.microsoft.com/default.aspx?scid=kb;EN-US;290211.)
¨ Post-service-pack patches have been applied for SQL Server. (See http://www.microsoft.com/technet/treeview/default.asp?url=/technet/security/current.asp?productid=30&servicepackid=0.)
Services
Check Description
Protocols
Check Description
¨ All protocols except TCP/IP are disabled within SQL Server. Check this using the Server Network Utility.
¨ The TCP/IP stack is hardened on the database server.
Accounts
Check Description
¨ SQL Server is running using a least-privileged local account (or optionally, a least-privileged domain account if network services are required).
¨ Unused accounts are removed from Windows and SQL Server.
¨ The Windows guest account is disabled.
¨ The administrator account is renamed and has a strong password.
¨ Strong password policy is enforced.
¨ Remote logons are restricted.
¨ Null sessions (anonymous logons) are restricted.
¨ Approval is required for account delegation.
¨ Shared accounts are not used.
¨ Membership of the local administrators group is restricted (ideally, no more than two administration accounts).
Files and Directories
Check Description
¨ Restrictive permissions are configured on SQL Server installation directories (per the guide).
¨ The Everyone group does not have permission to access SQL Server installation directories.
¨ Setup log files are secured.
¨ Tools, utilities, and SDKs are removed or secured.
¨ Sensitive data files are encrypted using EFS. (This is an optional step. If implemented, use EFS only to encrypt MDF files, not LDF log files.)
Shares
Check Description
¨ All unnecessary shares are removed from the server.
¨ Access to required shares is restricted (the Everyone group does not have access).
¨ Administrative shares (C$ and Admin$) are removed if they are not required. (Microsoft Systems Management Server (SMS) and Microsoft Operations Manager (MOM) require these shares.)
Ports
Check Description
¨ Access to all ports on the server is restricted, except the ports configured for SQL Server and database instances (TCP 1433 and UDP 1434 by default).
¨ Named instances are configured to listen on the same port.
¨ Port 3389 is secured using IPSec if it is left open for remote Terminal Services administration.
¨ The firewall is configured to support DTC traffic (if required by the application).
¨ The Hide server option is selected in the Server Network Utility (optional).
Registry
Check Description
¨ SQL Server registry keys are secured with restricted permissions.
¨ The SAM is secured (standalone servers only).
Auditing and Logging
Check Description
¨ All failed Windows login attempts are logged.
¨ All failed actions are logged across the file system.
¨ SQL Server login auditing is enabled.
¨ Log files are relocated from the default location and secured with access control lists.
¨ Log files are configured with an appropriate size depending on the application security requirements.
¨ Where the database contents are highly sensitive or vital, Windows is set to Shut Down mode on overflow of the security logs.
SQL Server Security
Check Description
¨ SQL Server authentication is set to Windows only (if supported by the application).
¨ The SQL Server audit level is set to Failure or All.
¨ SQL Server runs using a least-privileged account.
SQL Server Logins, Users, and Roles
Check Description
¨ A strong sa password is used (for all accounts).
¨ SQL Server guest user accounts are removed.
¨ BUILTIN\Administrators server login is removed.
¨ Permissions are not granted for the public role.
¨ Members of the sysadmin fixed server role are limited (ideally, no more than two users).
¨ Restricted database permissions are granted. Use of built-in roles, such as db_datareader and db_datawriter, is avoided because they provide limited authorization granularity.
¨ Default permissions that are applied to SQL Server objects are not altered.
SQL Server Database Objects
Check Description
¨ Sample databases (including Pubs and Northwind) are removed.
¨ Stored procedures and extended stored procedures are secured.
¨ Access to cmdExec is restricted to members of the sysadmin role.
Additional Considerations
Check Description
¨ A certificate is installed on the database server to support SSL communication and the automatic encryption of SQL account credentials (optional).
¨ NTLM version 2 is enabled by setting LMCompatibilityLevel to 5.
Staying Secure
Check Description
¨ Regular backups are performed.
¨ Group membership is audited.
¨ Audit logs are regularly monitored.
¨ Security assessments are regularly performed.
¨ You subscribe to SQL security bulletins at http://www.microsoft.com/technet/treeview/default.asp?url=/technet/security/current.asp?productid=30&servicepackid=0.
¨ You subscribe to the Microsoft Security Notification Service at http://www.microsoft.com/technet/treeview/default.asp?url=/technet/security/bulletin/notify.asp.
Checklist: Security Review for Managed Code
How to Use This Checklist
This checklist is a companion to Chapter 7, "Building Secure Assemblies", and Chapter 8,
"Code Access Security in Practice." Use it to help you implement a security review for
managed code in your Web application, or as a quick evaluation snapshot of the
corresponding chapters.
This checklist should evolve so that you can repeat a successful security review of
managed code.
General Code Review Guidelines
Check Description
¨ Potential threats are clearly documented. (Threats are dependent upon the specific scenario and assembly type.)
¨ Code is developed based on .NET Framework coding guidelines and secure coding guidelines at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpgenref/html/cpconnetframeworkdesignguidelines.asp.
¨ The FxCop analysis tool is run on assemblies, and security warnings are addressed.
Managed Code Review Guidelines
Assembly-Level Checks
Check Description
¨ Assemblies have a strong name. (Dynamically generated ASP.NET Web page assemblies cannot currently have a strong name.)
¨ You have considered delay signing as a way to protect and restrict the private key that is used in the strong name and signing process.
¨ Assemblies include declarative security attributes (with SecurityAction.RequestMinimum) to specify minimum permission requirements.
¨ Highly privileged assemblies are separated from lower privileged assemblies. If the assembly is to be used in a partial-trust environment (for example, it is called from a partial-trust Web application), then privileged code is sandboxed in a separate assembly.
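A minimum permission request can be declared at the assembly level, for example (a sketch; the exact flags depend on what the assembly actually needs):

```csharp
using System.Security.Permissions;

// Request only execution permission as the minimum the assembly needs to run.
[assembly: SecurityPermission(SecurityAction.RequestMinimum,
                              Flags = SecurityPermissionFlag.Execution)]
```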
Class-Level Checks
Check Description
□ Class and member visibility is restricted. The most restrictive access modifier is used (private where possible).
□ Non-base classes are sealed.
□ Input from outside the current trust boundary is validated. Input data is constrained and validated for type, length, format, and range.
□ Code implements declarative checks where virtual internal methods are used.
□ Access to public classes and methods is restricted with principal permission demands (where appropriate).
□ Fields are private. When necessary, field values are exposed by using read/write or read-only public properties.
□ Read-only properties are used where possible.
□ Types returned from methods that are not designed to be created independently contain private default constructors.
□ Unsealed public types do not have internal virtual members.
□ Use of event handlers is thoroughly reviewed.
□ Static constructors are private.
Cryptography
Check Description
□ Code uses platform-provided cryptography and does not use custom implementations.
□ Random keys are generated by using RNGCryptoServiceProvider (and not the Random class).
□ PasswordDeriveBytes is used for password-based encryption.
□ DPAPI is used to encrypt configuration secrets to avoid the key management issue.
□ The appropriate key sizes are used for the chosen algorithm, or if they are not, the reasons are identified and understood.
□ Keys are not held in code.
□ Access to persisted keys is restricted.
□ Keys are cycled periodically.
□ Exported private keys are protected.
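The first two cryptography checks can be sketched as follows. This is a minimal illustration, not code from the guide; the key size shown is an assumption to be matched to your chosen algorithm.

```csharp
using System.Security.Cryptography;

public sealed class CryptoHelper
{
    // Generate a random key with the cryptographic RNG,
    // never with the statistical System.Random class.
    public static byte[] CreateKey(int sizeInBytes)
    {
        byte[] key = new byte[sizeInBytes];
        RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider();
        rng.GetBytes(key); // fills the array with cryptographically strong bytes
        return key;
    }

    // Derive an encryption key from a password and salt by using
    // PasswordDeriveBytes rather than hashing the password directly.
    public static byte[] DeriveKey(string password, byte[] salt)
    {
        PasswordDeriveBytes pdb = new PasswordDeriveBytes(password, salt);
        return pdb.GetBytes(16); // 128-bit key; size depends on the algorithm
    }
}
```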
Secrets
Check Description
□ Secrets are not hard coded.
□ Plain text secrets are not stored in configuration files.
□ Plain text secrets are not stored in memory for extended periods of time.
Exception Management
Check Description
□ Code uses exception handling. You catch only the exceptions that you know about.
Delegates
Check Description
□ Delegates are not accepted from untrusted sources.
□ If code does accept a delegate from untrusted code, it constrains the delegate before calling it by using security permissions with SecurityAction.PermitOnly.
□ Permissions are not asserted before calling a delegate.
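To illustrate constraining an untrusted delegate, the following sketch uses the imperative form of PermitOnly (the declarative SecurityAction.PermitOnly named in the checklist achieves the same effect). The delegate and class names are placeholders.

```csharp
using System.Security;
using System.Security.Permissions;

public delegate void Callback();

public sealed class NotificationSink
{
    // Invoke a delegate supplied by less trusted code under a restricted
    // permission set, so the callback cannot borrow this assembly's grants.
    public void Invoke(Callback callback)
    {
        // Allow only execution permission while the callback runs.
        PermissionSet restricted = new PermissionSet(PermissionState.None);
        restricted.AddPermission(
            new SecurityPermission(SecurityPermissionFlag.Execution));
        restricted.PermitOnly();
        try
        {
            callback();
        }
        finally
        {
            // Always revert, even if the callback throws.
            CodeAccessPermission.RevertPermitOnly();
        }
    }
}
```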
Serialization
Check Description
□ Serialization is restricted to privileged code.
□ Sensitive data is not serialized.
□ Field data from serialized data streams is validated.
□ ISerializable.GetObjectData implementation is protected with an identity permission demand in scenarios where you want to restrict which code can serialize the object.
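The field-validation and GetObjectData checks can be sketched as follows. The type and field names are placeholders; the sketch demands SerializationFormatter as an example — an identity permission such as StrongNameIdentityPermission could be demanded instead to restrict serialization to known callers.

```csharp
using System;
using System.Runtime.Serialization;
using System.Security.Permissions;

[Serializable]
public sealed class ServerSettings : ISerializable
{
    private string server;

    public ServerSettings(string server) { this.server = server; }

    // Deserialization constructor: validate field data from the stream
    // before trusting it.
    private ServerSettings(SerializationInfo info, StreamingContext context)
    {
        server = (string)info.GetValue("server", typeof(string));
        if (server == null || server.Length == 0 || server.Length > 128)
            throw new SerializationException("Invalid server field");
    }

    // The demand restricts which code can serialize the object.
    [SecurityPermission(SecurityAction.Demand, SerializationFormatter = true)]
    public void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        info.AddValue("server", server);
    }
}
```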
Threading
Check Description
□ Results of security checks are not cached.
□ Impersonation tokens are considered when new threads are created (any existing thread token is not passed to the new thread).
□ Threads are synchronized in static class constructors for multithreaded application code.
□ Object implementation code is designed and built to be thread safe.
Reflection
Check Description
□ Caller cannot influence dynamically generated code (for example, by passing assembly and type names as input arguments).
□ Code demands permission for user authorization where assemblies are loaded dynamically.
Unmanaged Code
Check Description
□ Input and output strings that are passed between managed and unmanaged code are constrained and validated.
□ Array bounds are checked.
□ File path lengths are checked and do not exceed MAX_PATH.
□ Unmanaged code is compiled with the /GS switch.
□ Use of "dangerous" APIs by unmanaged code is closely inspected. These include LogonUser, RevertToSelf, CreateThread, Network APIs, and Sockets APIs.
□ Naming conventions (safe, native, unsafe) are applied to unmanaged APIs.
□ Assemblies that call unmanaged code specify unmanaged permission requirements using declarative security (SecurityAction.RequestMinimum).
□ Unmanaged API calls are sandboxed and isolated in a wrapper assembly.
□ Use of SuppressUnmanagedCodeSecurityAttribute is thoroughly reviewed and additional security checks are implemented.
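A wrapper that validates strings before they cross the managed/unmanaged boundary might look like the following sketch. The Win32 API shown is only an example; the class name follows the naming-convention check above.

```csharp
using System;
using System.Runtime.InteropServices;

public sealed class NativeMethods
{
    private const int MAX_PATH = 260;

    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    private static extern bool SetCurrentDirectoryW(string lpPathName);

    // Public wrapper: constrain and validate the input string before
    // passing it to unmanaged code.
    public static void SetCurrentDirectory(string path)
    {
        if (path == null || path.Length == 0 || path.Length >= MAX_PATH)
            throw new ArgumentException("Invalid path length", "path");

        if (!SetCurrentDirectoryW(path))
            throw new System.ComponentModel.Win32Exception();
    }
}
```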
File I/O
Check Description
□ No security decisions are made based on filenames.
□ Input file paths and file names are well formed.
□ Environment variables are not used to construct file paths.
□ File access is constrained to the context of the application (by using a restricted FileIOPermission).
□ Assembly file I/O requirements are specified using declarative security attributes (with SecurityAction.RequestMinimum).
Event Log
Check Description
□ Event log access code is constrained using EventLogPermission. This particularly applies if your event logging code could be called by untrusted callers.
□ Event sources are created at installation time (or the account used to run the code that writes to the event log must be allowed to create event sources by configuring an appropriate ACL in the registry).
□ Security-sensitive data, such as passwords, is not written to the event log.
Registry
Check Description
□ Sensitive data, such as database connection strings or credentials, is encrypted prior to storage in the registry.
□ Keys are restricted. If a key beneath HKEY_LOCAL_MACHINE is used, the key is configured with a restricted ACL. Alternatively, HKEY_CURRENT_USER is used.
□ Registry access is constrained by using RegistryPermission. This applies especially if your registry access code could be called by untrusted callers.
Environment Variables
Check Description
□ Code that accesses environment variables is restricted with EnvironmentPermission. This applies especially if your code can be called by untrusted code.
□ Environment permission requirements are declared by using declarative security attributes with SecurityAction.RequestMinimum.
Code Access Security Considerations
If an entry is preceded by a star (*), it indicates that the checks are performed by the
FXCop analysis tool. For more information about FXCop security checks, see
http://www.gotdotnet.com/team/libraries/FxCopRules/SecurityRules.aspx.
Check Description
□ Assemblies marked with AllowPartiallyTrustedCallersAttribute (APTCA) do not expose objects from non-APTCA assemblies.
□ Code that only supports full-trust callers is strong named or explicitly demands the full-trust permission set.
□ All uses of Assert are thoroughly reviewed.
□ All calls to Assert are matched with a corresponding call to RevertAssert.
□ *The Assert window is as small as possible.
□ *Asserts are preceded with a full permission demand.
□ *Use of Deny or PermitOnly is thoroughly reviewed.
□ All uses of LinkDemand are thoroughly reviewed. (Why is a LinkDemand and not a full Demand used?)
□ LinkDemands within interface declarations are matched by LinkDemands on the method implementation.
□ *Unsecured members do not call members protected by a LinkDemand.
□ Permissions are not demanded for resources accessed through the .NET Framework classes.
□ Access to custom resources (through unmanaged code) is protected with custom code access permissions.
□ Access to cached data is protected with appropriate permission demands.
□ If LinkDemands are used on structures, the structures contain explicitly defined constructors.
□ *Methods that override other methods that are protected with LinkDemands also issue the same LinkDemand.
□ *LinkDemands on types are not used to protect access to fields inside those types.
□ *Partially trusted methods call only other partially trusted methods.
□ *Partially trusted types extend only other partially trusted types.
□ *Members that call late bound members have declarative security checks.
□ *Method-level declarative security does not mistakenly override class-level security checks.
□ Use of the following "potentially dangerous" permissions is thoroughly reviewed:
    SecurityPermission: Unmanaged Code, SkipVerification, ControlEvidence, ControlPolicy, SerializationFormatter, ControlPrincipal, ControlThread
    ReflectionPermission: MemberAccess
□ Code identity permission demands are used to authorize calling code in scenarios where you know in advance the range of possible callers (for example, you want to limit calling code to a specific application).
□ Permission demands of the .NET Framework are not duplicated.
□ Inheritance is restricted with SecurityAction.InheritanceDemand in scenarios where you want to limit which code can derive from your code.
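The Assert-related checks can be illustrated with the following sketch. The class and log path are placeholders; in production code, a demand for an appropriate (possibly custom) permission should precede the assert so untrusted callers cannot exploit it.

```csharp
using System.IO;
using System.Security;
using System.Security.Permissions;

public class AuditLog
{
    private const string LogFile = @"C:\Logs\audit.log"; // assumed path

    public void Write(string entry)
    {
        // Assert only the specific permission needed, and keep the
        // assert window as small as possible.
        FileIOPermission perm = new FileIOPermission(
            FileIOPermissionAccess.Append, LogFile);
        perm.Assert();
        try
        {
            using (StreamWriter writer = File.AppendText(LogFile))
            {
                writer.WriteLine(entry);
            }
        }
        finally
        {
            // Match every call to Assert with RevertAssert.
            CodeAccessPermission.RevertAssert();
        }
    }
}
```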
How To: Index
Overview
Improving Web Application Security: Threats and Countermeasures includes the following
How Tos, each of which shows you the steps to complete a specific security task:
How To: Implement Patch Management
Operations and security policy should adopt a patch management process. This How To defines the processes required to create a sound patch management system. The patch management process can be automated by using the guidance in this How To.
What You Must Know
Before using this How To, you should be aware of the following issues and considerations.
Patch management is a circular process and must be ongoing. The unfortunate reality about
software vulnerabilities is that, after you apply a patch today, a new vulnerability must be
addressed tomorrow.
Develop and automate a patch management process that includes each of the following:
Detect. Use tools to scan your systems for missing security patches. The detection
should be automated and will trigger the patch management process.
Assess. If necessary updates are not installed, determine the severity of the
issue(s) addressed by the patch and the mitigating factors that may influence your
decision. By balancing the severity of the issue and mitigating factors, you can
determine if the vulnerabilities are a threat to your current environment.
Test. Install the patch on a test system to verify the ramifications of the update
against your production configuration.
Deploy. Deploy the patch to production computers. Make sure your applications
are not affected. Employ your rollback or backup restore plan if needed.
In this How To, you use MBSA without scanning for vulnerable configurations. When using
the graphical user interface (GUI), specify this by unchecking the options in Figure 1 and
only choosing Check for security updates.
Figure 1: MBSA scan options
When using the command line interface (Mbsacli.exe), you can use the following command
to scan only missing security updates.
Mbsacli.exe /n OS+IIS+SQL+PASSWORD
For more details about using MBSA, including the security configuration scan, see "How To:
Use MBSA" in the How To section of this guide.
You need the following tools in order to be able to perform the steps in this How To:
Latest Mssecure.cab

This How To is divided into the following sections:
Detecting
Assessing
Acquiring
Testing
Deploying
Maintaining
Detecting
Use MBSA to detect missing security patches for Windows NT 4.0, Windows 2000, and
Windows XP. You can use MBSA in two modes; GUI and command line. Both modes are
used to scan single or multiple computers. The command line can be scripted to run on a
schedule.
Note: The login used to run MBSA must be a member of the Administrators group on the target computer(s). To verify adequate access and privilege, use the command net use \\computername\c$ where computername is the network name of the machine that you are going to scan for missing patches. Resolve any issues accessing the administrative share before using MBSA to scan the remote computer.
Task To manually detect missing updates using the MBSA graphical interface
1. Run MBSA by double-clicking the desktop icon or by selecting it from the
Programs menu.
2. Click Scan a computer. MBSA defaults to the local computer. To scan multiple
computers, select Scan more than one computer and select either a range of
computers to scan or an IP address range.
3. Clear all check boxes except Check for security updates. This option detects
uninstalled patches and updates.
4. Click Start scan. Your server is now analyzed. When the scan is complete, MBSA
displays a security report and also writes the report to the
%userprofile%\SecurityScans directory.
Click the Result details link next to each failed check to view the list of uninstalled
security updates. A dialog box displays the Microsoft security bulletin reference
number. Click the reference to find out more about the bulletin and to download
the update.
Task To detect missing updates using the MBSA command line interface
1. From a command window, change directory to the MBSA installation directory, and type the following command:
mbsacli /i 127.0.0.1 /n OS+IIS+SQL+PASSWORD
You can also scan a domain by using the /d option. For example:
mbsacli /d NameOfMyDomain /n OS+IIS+SQL+PASSWORD
2. Click Pick a security report to view and open the report or reports, if you
scanned multiple computers.
3. To view the results of a scan against the target machine, mouse over the
computer name listed. Individual reports are sorted by the timestamp of the
report.
As previously described, the advantage of the command line method is that it may be
scripted and scheduled to execute. This schedule is determined by the exposure of your
systems to hostile networks, and by your security policy.
The top portion of the MBSA screenshot shown in Figure 2 is self-explanatory.
Red crosses indicate that a critical issue has been found. To view the list of missing
patches, click the associated Result details link.
The results of a security update scan might show two types of issues:
Missing patches
Patch cannot be confirmed
Both types include links to the relevant Hotfix and security bulletin pages that provide details
about the patch together with download instructions.
When a patch cannot be confirmed, it is indicated by a blue asterisk. This occurs when your
system has a file that is newer than the file provided with a security bulletin. This might
occur if you install a new version of a product that updates a common file.
For updates that cannot be confirmed, review the information in the bulletin and follow the
instructions. This may include installing a patch or making configuration changes. For more
information on patches that cannot be verified by MBSA, see Microsoft Knowledge Base
article, 306460, "HFNetChk Returns Note Messages for Installed Patches."
Assessing
With the list of missing patches identified by MBSA, you must determine if the vulnerabilities
pose a significant risk. Microsoft Security Bulletins provide technical details to help you
determine the level of threat the vulnerability poses to your systems.
The details from security bulletins that help you assess the risk of attack are:
Mitigating factors that you need to compare against your security policy to
determine your level of exposure to the vulnerability. It may be that your
security policy mitigates the need to apply a patch. For example, if you do not have
the Indexing Service running on your server, you do not need to install patches to
address vulnerabilities in the service.
Severity rating that assists in determining priority. The severity rating is based
on multiple factors including the role of the machines that may be vulnerable, and
the level of exposure to the vulnerability.
For more information about the severity rating system used by the security bulletins,
see the TechNet article, "Microsoft Security Response Center Security Bulletin
Severity Rating System" at http://www.microsoft.com/technet/treeview/default.asp?
url=/technet/security/policy/rating.asp
Note: If you use an affected product, you should almost always apply patches that address vulnerabilities rated critical or important. Patches rated critical should be applied as soon as possible.
Acquiring
There are several ways you can obtain patches, including:
Using MBSA report details. MBSA links to the security bulletin that contains the
patch, or instructions about obtaining the patch. You can use the link to download
the patch and save it on your local network. You can then apply the patch to
multiple computers.
Windows Update. With a list of the updates you want to install, use Internet
Explorer on the server that requires the patch, and access
http://windowsupdate.microsoft.com/. Then select the required updates for
installation. The updates are installed from the site and cannot be downloaded for
installation on another computer. Windows Update requires that an ActiveX control
is installed on the server (you will be prompted when you visit the site if the control
is not found.) This method works well for standalone workstations or where a small
number of servers are involved.
HotFix & Security Bulletin Search. MBSA includes the Microsoft Knowledge Base
article ID of the corresponding article for a given security bulletin. You can use the
article ID at the HotFix and Security Bulletin Search site to reach the matching
security bulletin. The search page is located at
http://www.microsoft.com/technet/treeview/default.asp?
url=/technet/security/current.asp. The bulletin contains the details to acquire the
patch.
Testing
If the results of your assessment determine that a patch must be installed, you should test
that patch against your system to ensure that no breaking changes are introduced or, if a
breaking change is expected, how to work around the change.
Testing the patch on a few select production systems prior to fully deploying
the update. If a test network that matches your live configuration is not available,
this is the safest method to introduce the security patch. If this method is employed,
you must perform a backup prior to installing the update.
The security bulletin lists the availability of an uninstall routine in the Additional information about this patch section.
Deploying
If you decide that the patch is safe to install, you must deploy the update to your production
servers in a reliable and efficient way. You have a number of options for deploying patches
throughout the enterprise. These include:
Keeping your servers up to date with the latest security patches involves this entire cycle.
You start the cycle again by:
Use MBSA to regularly check for security vulnerabilities and to identify missing patches and
updates. Schedule MBSA to run daily and analyze the results to take action as needed. For
more information about automating MBSA, see "How To: Use MBSA" in the How To section
of this guide.
For more information, see the TechNet article, "Best Practices for Applying Service Packs, Hotfixes and Security Patches" at http://www.microsoft.com/technet/treeview/default.asp?url=/technet/security/bestprac/bpsp.asp.
How To: Harden the TCP/IP Stack
Applies To
This information applies to server computers that run the following:
Set threshold values that are used to determine what constitutes an attack.
This How To shows an administrator which registry keys and which registry values must be
configured to protect against network-based denial of service attacks.
Note: These settings modify the way TCP/IP works on your server. The characteristics of your Web server will determine the best thresholds to trigger denial of service countermeasures. Some values may be too restrictive for your client connections. Test this document's recommendations before you deploy to a production server.
What You Must Know
TCP/IP is an inherently insecure protocol. However, the Windows 2000 implementation
allows you to configure its operation to counter network denial of service attacks. Some of
the keys and values referred to in this How To may not exist by default. In those cases,
create the key, value, and value data.
For more details about the TCP/IP network settings that the registry for Windows 2000
controls, see the white paper "Microsoft Windows 2000 TCP/IP Implementation Details," at
http://www.microsoft.com/technet/treeview/default.asp?
url=/technet/itsolutions/network/deploy/depovg/tcpip2k.asp.
Contents
This How To is divided into sections that address specific types of denial of service
protections that apply to the network. Those sections are:
AFD.SYS Protections
Additional Protections
Pitfalls
Additional Resources
Protect Against SYN Attacks
A SYN attack exploits a vulnerability in the TCP/IP connection establishment mechanism. To
mount a SYN flood attack, an attacker uses a program to send a flood of TCP SYN
requests to fill the pending connection queue on the server. This prevents other users from
establishing network connections.
To protect the network against SYN attacks, follow these generalized steps, explained later
in this document:
Value name: SynAttackProtect
Recommended value: 2
Description: Causes TCP to adjust retransmission of SYN-ACKs. When you configure this value, the connection responses time out more quickly in the event of a SYN attack. A SYN attack is triggered when the values of TcpMaxHalfOpen or TcpMaxHalfOpenRetried are exceeded.
Value name: TcpMaxDataRetransmissions
Recommended value: 5
Description: Specifies the number of times that TCP retransmits an individual data segment (not connection request segments) before aborting the connection.
Value name: EnablePMTUDiscovery
Valid values: 0, 1
Description: Setting this value to 1 (the default) forces TCP to discover the
maximum transmission unit or largest packet size over the path to a remote host.
An attacker can force packet fragmentation, which overworks the stack. Specifying 0 forces an MTU of 576 bytes for connections to hosts outside the local subnet.
Description: Specifies how often TCP attempts to verify that an idle connection is
still intact by sending a keep-alive packet.
Valid values: 0, 1
Use the values that are summarized in Table 1 for maximum protection.
Value: EnableICMPRedirect
Description: Modifying this registry value to 0 prevents the creation of expensive host
routes when an ICMP redirect packet is received.
Value: EnableDeadGWDetect
Value: EnableDynamicBacklog
Description: Specifies the maximum total number of free connections plus those in the SYN_RCVD state.
Present by default: No
Network Address Translation (NAT) is used to screen a network from incoming connections.
An attacker can circumvent this screen to determine the network topology using IP source
routing.
Value: DisableIPSourceRouting
Valid values: 0 (forward all packets), 1 (do not forward Source Routed packets), 2 (drop
all incoming source routed packets).
Description: Disables IP source routing, which allows a sender to determine the route a
datagram should take through the network.
Value: EnableFragmentChecking
Value: EnableMulticastForwarding
Value: IPEnableRouter
Description: Setting this parameter to 1 (true) causes the system to route IP packets
between the networks to which it is connected.
Value: EnableAddrMaskReply
Description: This parameter controls whether the computer responds to an ICMP address
mask request.
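The hardening values discussed in this How To can be collected into a single registry script. The following .reg fragment is a sketch: the settings shown reflect the recommendations above and Microsoft Knowledge Base article 315669, but you should verify each value against that article and test the thresholds against your own client connections before deploying to production.

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
; SYN attack protections
"SynAttackProtect"=dword:00000002
"TcpMaxDataRetransmissions"=dword:00000005
"EnablePMTUDiscovery"=dword:00000000
; ICMP and routing protections
"EnableICMPRedirect"=dword:00000000
"EnableDeadGWDetect"=dword:00000000
"DisableIPSourceRouting"=dword:00000002
"EnableFragmentChecking"=dword:00000001
"EnableMulticastForwarding"=dword:00000000
"IPEnableRouter"=dword:00000000
"EnableAddrMaskReply"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\AFD\Parameters]
; AFD.SYS protection for applications that listen on sockets
"EnableDynamicBacklog"=dword:00000001
```

Create any keys or values that do not already exist, and restart the server for the changes to take effect.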
For more information on hardening the TCP/IP stack, see Microsoft Knowledge
Base article, 315669, "How To: Harden the TCP/IP Stack Against Denial of Service
Attacks in Windows 2000."
For more details on the Windows 2000 TCP/IP implementation, see the Microsoft Press book, "Windows 2000 TCP/IP Protocols and Services Technical Reference," by Thomas Lee and Joseph Davies.
For more information about the Windows 2000 TCP/IP implementation, see
"Microsoft Windows 2000 TCP/IP Implementation Details," at
http://www.microsoft.com/technet/treeview/default.asp?
url=/technet/itsolutions/network/deploy/depovg/tcpip2k.asp, on the TechNet Web
site.
How To: Secure Your Developer Workstation
Applies To
This information applies to developer workstations that run the following:
This How To provides quick tips to help you improve the security of your developer
workstation, along with tips about how to keep it secure. It also helps you avoid common
problems that you are likely to encounter when you secure your workstation. Finally, it
provides tips about how to determine problems and to revert security settings if they prove
too restrictive.
Note This How To is not exhaustive, but it highlights many of the key issues.
Before You Begin
Before you begin securing your workstation, you need the following tools:
Microsoft Baseline Security Analyzer (MBSA). Microsoft provides the MBSA tool
to help analyze the security configuration of your computers and to identify missing
patches and updates. You can download the MBSA tool from
http://download.microsoft.com/download/e/5/7/e57f498f-2468-4905-aa5f-369252f8b15c/mbsasetup.msi.
URLScan. URLScan is an ISAPI filter that rejects or allows HTTP requests based
on a configurable set of rules. It is integrated with IISLockdown, although you can
also download it separately. It comes with customizable templates for each
supported server role.
Secure IIS
Stay secure
Run Using a Least-Privileged Account
You should develop applications using a non-administrator account. Doing so is important
primarily to limit the exposure of the logged on user and to help you to design more secure
software. For example, if you design, develop, and test an application while you are
interactively logged in as an administrator, you are much more likely to end up with software
that requires administrative privileges to run.
You should not generally log on using the local administrator account. The account that you
use on a daily basis should not be a member of the local Administrators group. Sometimes
you might still need an account that has administrative privileges — for example, when you
install software or edit the registry. Because the default local administrator account is well
known, however, and it is the target of many attacks, create a non-standard administrator
account and use this only when it is required.
To run a privileged command, you can use one of the following techniques to temporarily
change your security context:
Use the Runas.exe utility from a command line. The following command shows
you how to use the Runas.exe utility to launch a command console that runs under
your custom administration account.
runas.exe /user:mymachine\mycustomadmin cmd.exe
By executing Cmd.exe, you start a new command window that runs under the
security context of the user you specify with the /user switch. Any program you
launch from this command window also runs under this context.
Use Run As from Windows Explorer. You can right-click an executable file in
Windows Explorer and click Run As. To display this item on Windows 2000, hold
the shift key down and then right-click an executable file. When you click Run As,
you are prompted for the credentials of the account you want to use to run the
executable file.
Use Run As shortcuts. You can create quick launch and desktop shortcuts to
easily run applications using a privileged user account. The following example
shows a shortcut that you can use to run Windows Explorer (Explorer.exe) using the
administrator account:
%windir%\System32\runas.exe /user:administrator explorer
More Information
For more information about developing with a non-administrative account, see the following
articles:
Note: After you update your system using the Windows Update site, use MBSA to detect missing updates for SQL Server, MSDE, and MDAC.
Using MBSA
You can use MBSA to assess security and to verify patches. If you used automatic updates
or Windows Update to update your operating system and components, MBSA verifies those
updates and additionally checks the status of updates for SQL Server and Microsoft
Exchange Server. MBSA lets you create a script to check multiple computers.
If you do not have Internet access when you run MBSA, MBSA cannot retrieve the
XML file that contains the latest security settings from Microsoft. You can use
another computer to download the XML file, however. Then you can copy it into
the MBSA program directory. The XML file is available at
http://download.microsoft.com/download/xml/security/1.0/nt5/en-
us/mssecure.cab.
2. Run MBSA by double-clicking the desktop icon or selecting it from the Programs
menu.
4. Clear all check boxes except for Check for security updates. This option
detects which patches and updates are missing.
5. Click Start scan. Your server is now analyzed. When the scan completes, MBSA
displays a security report, which it also writes to the
%Userprofile%\SecurityScans directory.
6. Download and install the missing updates. Click Result details next to each failed
check to view the list of missing security updates.
The resulting dialog box displays the Microsoft security bulletin reference number.
Click the reference to find out more about the bulletin and to download the update.
For more information about using MBSA, see "How To: Use Microsoft Baseline Security
Analyzer (MBSA)," in the How To section of this guide.
Note: MBSA will not indicate required .NET Framework updates and patches. Browse the .NET Framework downloads page at http://msdn.microsoft.com/netframework/downloads/default.asp.
To configure Automatic Updates with Windows 2000, click Automatic Updates in the
Control Panel. For more information about Automatic Updates and Windows 2000, see
Microsoft Knowledge Base article 327850, "How To: Configure and Use Automatic Updates
in Windows 2000."
For more information about Automatic Updates and Windows XP, see Microsoft Knowledge
Base article, 306525, "How To: Configure and Use Automatic Updates in Windows XP."
Automatic Updates scans and installs updates for the following operating systems (including
the .NET Framework and IIS where applicable):
Although IISLockdown improves IIS security, if you choose the wrong installation options or
do not modify the URLScan configuration file, URLScan.ini, you could encounter the
following issues:
You cannot create new ASP.NET Web applications. NTFS file system
permissions are configured to strengthen default access to Web locations. This
may prevent the logged on user from creating new ASP.NET Web applications.
You cannot debug existing ASP.NET Web applications. URLScan blocks the DEBUG
verb, which is used when you debug ASP.NET Web applications.
The following steps show you how to improve IIS security on your development workstation
and avoid the issues listed above:
Configure URLScan
2. During setup, choose the Dynamic Web Site option, and choose the option to
install URLScan. ASP.NET Web Forms use the HTTP POST verb. Choosing the
static option and installing URLScan blocks the POST verb in URLScan.ini.
Maps the following script maps to 404.dll: Index Server, Web Interface
(.idq, .htw, .ida), server side includes (.shtml, .shtm, .stm), Internet Data
Connector (.idc), HTR scripting (.htr), Internet printing (.printer)
Pitfalls
If you use IISLockdown, note the following pitfalls:
IIS metabase updates can be lost. If you undo IISLockdown changes by running
Iislockd.exe a second time, you lose any changes made to the IIS metabase since
the last time IISLockdown was run. For example, if you configure a virtual directory
as an application root after running IIS lockdown, that change is lost when you run
IISLockdown again.
Resources are blocked by 404.dll. If you receive a 404 error for a previously
available resource, it might be because the resource type is blocked by 404.dll. To
confirm whether or not this is the case, check the script mapping for the requested
resource type in IIS.
Configure URLScan
The URLScan ISAPI filter installs when you run IISLockdown. If you do not explicitly allow
the DEBUG verb, URLScan prevents debugging. Also, URLScan blocks requests that
contain unsafe characters such as the period (.) used for directory traversal.
Pitfalls
If you install URLScan, note the following pitfalls:
When you debug an application by using Visual Studio .NET, you may see the
following error:
Microsoft Development Environment:
Error while trying to run project: Unable to start debugging on the Web server.
Could not start ASP.NET or ATL Server debugging.
Verify that ASP.NET or ATL Server is correctly installed on the server. Would
you like to disable future attempts to debug ASP.NET pages for this project?
You should see a log entry similar to the one shown below in URLScan<date>.log in the \WINNT\system32\inetsrv\urlscan folder.
[01-18-2003 - 22:25:26] Client at 127.0.0.1: Sent verb 'DEBUG', which is not
specifically allowed. Request will be rejected.
You may not be able to create new Web projects in Visual Studio .NET because
you use characters in the project name that URLScan rejects. For example, the
comma (,) and the pound sign (#) will be blocked.
If you experience errors during debugging, see Microsoft Knowledge Base article 306172,
"INFO: Common Errors When You Debug ASP.NET Applications in Visual Studio .NET," at
http://support.microsoft.com/default.aspx?scid=kb;EN-US;306172.
Secure SQL Server and MSDE
To update SQL Server and MSDE, you must:
For more information on applications that include MSDE, refer to the following resources:
If your third-party vendor does not supply a patch for MSDE, and it becomes critical to
have the latest patches, your options are limited to the following:
Uninstall the instance of SQL Server using Add/Remove Programs. If you do not
see an uninstall option for your instance, you might need to uninstall your
application.
Stop the instance of SQL Server using the Services MMC snap-in in Computer
Management. You can also stop the instance from the command line by running the
following command:
net stop mssqlserver (default instance) or net stop mssql$instancename (for named instances)
Use IPSec to limit which hosts can connect to the abandoned (unpatched) instances
of SQL Server. Restrict access to localhost clients.
3. Clear all check boxes except for Check for SQL vulnerabilities.
This option scans for security vulnerabilities in the configurations of SQL Server
7.0, SQL Server 2000, and MSDE. For example, it checks the authentication
mode, the sa account password, and the SQL Server service account, among
other checks.
A number of the checks require that your instance of SQL Server is running. If it is
not running, start it.
4. Click Start scan. Your configuration is now analyzed. When the scan completes,
MBSA displays a security report, which it also writes to the
%Userprofile%\SecurityScans directory.
Click Result details next to each failed check for more information about why the
check failed. Click How to correct this for information about how to fix the
vulnerability.
For more information about using MBSA, see "How To: Use Microsoft Baseline Security
Analyzer (MBSA)," in the How To section of this guide.
Evaluate Your Configuration Categories
To evaluate the security of your workstation configuration, review the configuration
categories shown in Table 6. Start by using the categories to evaluate the security
configuration of the base operating system. Then apply the same configuration categories
to review your IIS, SQL Server, and .NET Framework installation.
You can also use the "Hotfix & Security Bulletin Service," at
http://www.microsoft.com/technet/treeview/default.asp?url=/technet/security/current.asp,
on the TechNet Web site. This allows you to view the security bulletins that are available for
your system.
How To: Use IPSec for Filtering Ports and Authentication
Applies To
This information applies to server computers that run the following:
Before you create and apply IPSec policies to block ports and protocols, make sure you
know which communication you need to secure including the ports and protocols used by
day-to-day operations. Consider the protocol and port requirements for remote
administration, application communication, and authentication.
UDP port 500 for Internet Key Exchange (IKE) negotiation traffic
Filters
A filter defines the traffic to be matched, such as the source and destination
addresses, the protocol, and the port.
Filter Actions
A filter action specifies which action to take when a given filter is matched. It can
be one of the following:
Negotiate security. The endpoints must agree on and then use a secure
method to communicate. If they cannot agree on a method, the
communication does not take place. If negotiation fails, you can specify
whether to allow unsecured communication or whether all communication
should be blocked.
Rules
A rule associates a filter with a filter action and is defined by the IPSec policy.
Restricting Web Server Communication
The following example shows you how to use IPSec to limit communication with a Web
server to port 80 (for HTTP traffic) and port 443 (for HTTPS traffic that uses SSL.) This is
a common requirement for Internet-facing Web servers.
After applying the steps below, communication will be limited to ports 80 and 443.
Note: In a real-world environment, you will require additional communication, such
as that required for remote administration, database access, and authentication. A
complete IPSec policy in a production environment will include all authorized
communication.
2. Right-click IP Security Policies on Local Machine, and then click Manage IP
filter lists and filter actions.
4. Click Add to create a new filter action, and then click Next to move past the
introductory Wizard dialog box.
5. Type MyPermit as the name for the new filter action. This filter action is used to
permit traffic.
6. Click Next.
9. Click Close to close the Manage IP filter lists and filter actions dialog box.
2. Click Add to add a new IP filter list, and then type MatchAllTraffic for the filter
list name.
3. Click Add to create a new filter and proceed through the IP Filter Wizard dialog
boxes by selecting the default options.
This creates a filter that matches all traffic.
5. Click Add to create a new IP filter list, and then type MatchHTTPAndHTTPS for
the filter list name.
6. Click Add, and then click Next to move past the introductory Wizard dialog box.
7. Select Any IP Address from the Source address drop-down list, and then click
Next.
8. Select My IP Address from the Destination address drop-down list, and then
click Next.
9. Select TCP from the Select a protocol type drop-down list, and then click Next.
12. Click Add, and then repeat steps 9 to 14 to create another filter that allows traffic
through port 443.
Use the following values to create a filter that allows TCP over port 443:
Protocol: TCP
To Port: 443
After finishing these steps, your IP Filter List should look like the one that Figure 5 shows.
Figure 5: IP Filter List dialog box
After creating the filter actions and filter lists, you need to create a policy and two rules to
associate the filters with the filter actions.
3. Type MyPolicy for the IPSec policy name and IPSec policy for a Web server
that accepts traffic to TCP/80 and TCP/443 from anyone for the description,
and then click Next.
4. Clear the Activate the default response rule check box, click Next, and then
click Finish.
The MyPolicy Properties dialog box is displayed so that you can edit the policy
properties.
5. Click Add to start the Security Rule Wizard, and then click Next to move past the
introductory dialog box.
6. Select This rule does not specify a tunnel, and then click Next.
8. Select Windows 2000 default (Kerberos V5 protocol), and then click Next.
10. Select the MyPermit filter action, click Next, and then click Finish.
Your IPSec policy is now ready to use. To activate the policy, right-click MyPolicy and then
click Assign.
You started by creating two filter actions: one to allow traffic and one to block
traffic.
Next, you created two IP filter lists. The one called MatchAllTraffic matches on all
traffic, regardless of port. The one called MatchHTTPAndHTTPS contains two
filters that match TCP traffic from any source address to TCP ports 80 and 443.
Then you created an IPSec policy by creating a rule that associated the MyBlock
filter action with the MatchAllTraffic filter list and the MyPermit filter action with
the MatchHTTPAndHTTPS filter list. The result of this is that the Web server only
allows TCP traffic destined for port 80 or 443. All other traffic is rejected.
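The equivalent policy can also be scripted with Ipsecpol.exe, which is described later in this How To. The following commands are a sketch modeled on the filter syntax documented in Knowledge Base article 813878 (`*` means any address, `0` means my address); confirm the switches against the tool's built-in help before relying on them.

```
REM Block all traffic by default, then permit inbound TCP 80 and 443.
ipsecpol -w REG -p "MyPolicy" -r "BlockAllRule" -f *+0 -n BLOCK
ipsecpol -w REG -p "MyPolicy" -r "PermitWebRule" -f *+0:80:TCP -f *+0:443:TCP -n PASS -x
```

The trailing -x switch assigns the policy so that it takes effect immediately.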
Restricting Database Server Communication
On a dedicated SQL Server database server, you often want to restrict communication to a
specific SQL Server port over a particular protocol. By default, SQL Server listens on TCP
port 1433, and UDP port 1434 is used for negotiation purposes.
The following steps restrict a database server so that it only accepts incoming connections
on TCP port 1433 and UDP port 1434:
Create two filter actions: one to permit traffic and the other to block traffic. For
details, see the Create filter actions procedure under "Restricting Web Server
Communication" earlier in this How To.
Create two filter lists: one that matches all traffic and one that contains two filters
that match TCP traffic destined for port 1433 and UDP traffic destined for port
1434. For details, see "Create IP filter lists and filters" under "Restricting Web
Server Communication" earlier in this How To. The required filters are summarized
below.
Enter the following values to create a filter that allows TCP over port 1433:
Protocol: TCP
To Port: 1433
Enter the following values to create a filter that allows UDP over port 1434:
Protocol: UDP
To Port: 1434
Create and apply IPSec policy by repeating the procedure under "Restricting Web
Server Communication" earlier in this How To.
Restricting Server-to-Server Communication
You can also use IPSec to provide server authentication. This is useful when restricting the
range of computers that can connect to middle-tier application servers or database servers.
IPSec provides three authentication options:
Kerberos
Certificate-based authentication
Preshared secret keys
To use certificate authentication, the two computers must trust a common certificate
authority (CA), and the server that performs the authentication must request and
install a certificate from the CA.
In this section, you set up IPSec authentication between two servers by using a pre-shared
secret key.
2. Right-click IP Security Policies on Local Machine, and then click Create
IP Security Policy.
The MyAuthPolicy Properties dialog box is displayed so that you can edit the
policy properties.
6. Click Add, and then click Next three times.
7. In the Authentication Method dialog box, select Use this string to protect the
key exchange (preshared key).
8. Enter a long, random set of characters in the text box, and then click Next.
You should copy the key to a floppy disk or CD. You need it to configure the
communicating server.
9. In the IP Filter List dialog box, select All IP Traffic, and then click Next.
10. In the Filter Action dialog box, select Request Security (Optional), and then
click Next.
Netdiag.exe
IPSecpol.exe
Netdiag.exe
Before creating a new policy, determine if your system already has an existing policy. You
can do this by performing the following steps:
If there are no existing filters, then the output looks like the following:
IP Security test . . . . . . . . . : Passed
    IPSec policy service is active, but no policy is assigned.
IPSecpol.exe
The Internet Protocol Security Policies tool helps you automate the creation of policies in
local and remote registries. The tool supports the same settings that you can configure by
using the MMC snap-in.
Download the tool from the Microsoft Windows 2000 Web site at
http://www.microsoft.com/windows2000/techinfo/reskit/tools/existing/ipsecpol-o.asp.
For detailed examples of using Ipsecpol.exe to create and manipulate IPSec rules, see
Microsoft Knowledge Base article 813878, "How to Block Specific Network Protocols and
Ports by Using IPSec."
Additional Resources
For more information, see the following resources:
"How To: Use IPSec to Provide Secure Communication Between Two Servers" in
the How To section of "Building Secure ASP.NET Applications" on MSDN.
Article 313190, "How To: Use IPSec IP Filter Lists in Windows 2000" in the
Microsoft Knowledge Base.
Article 813878, "How to Block Specific Network Protocols and Ports by Using
IPSec" in the Microsoft Knowledge Base.
Article 313195, "How To: Use IPSec Monitor in Windows 2000" in the Microsoft
Knowledge Base.
In this How To, you will learn how to use MBSA to perform two processes: scanning
for missing security updates and patches, and scanning for insecure configuration
settings.
This How To reviews each mode separately, although both modes can be performed in the
same pass.
Contents
Before You Begin
Pitfalls
Additional Resources
Before You Begin
Install MBSA, using Mbsasetup.msi, to a tools directory. Copy the file Mssecure.cab to the
MBSA installation directory.
Updates for MBSA. If the machine you use has Internet access, the latest security
XML file will be downloaded automatically, if needed. If your computer does not
have Internet access, you need to download the latest XML file using the signed
CAB at the following location:
http://download.microsoft.com/download/xml/security/1.0/NT5/EN-
US/mssecure.cab
The CAB file is signed to ensure it has not been modified. You must uncompress it
and store it in the same folder where MBSA is stored.
Note: To view the latest XML file without downloading it, use the following location:
https://www.microsoft.com/technet/security/search/mssecure.xml
Note: You need to run commands from this directory. MBSA does not create an
environment variable for you.
What You Must Know
Before using this How To, you should be aware of the following:
You can use MBSA by using the graphical user interface (GUI) or from the
command line. The GUI executable is Mbsa.exe and the command line executable
is Mbsacli.exe.
MBSA requires administrator privileges on the computer that you scan. The options
/u (username) and /p (password) can be used to specify the credentials used to run
the scan. Do not store user names and passwords in text files such as command files
or scripts.
Task To use the MBSA GUI to scan for updates and patches
1. Click Microsoft Baseline Security Analyzer from the Programs menu.
3. Make sure that the following options are not selected, and then click Start scan.
The advantage of the GUI is that the report is opened immediately after scanning the local
computer. More details on interpreting the report are provided later in this How To.
To use the command-line tool (Mbsacli.exe) to check for security updates and patches, run
the following command from a command window. This scans the computer at the specified
IP address and checks for missing updates:
mbsacli /i 192.168.195.137 /n OS+IIS+SQL+PASSWORD
You can view the report by using Mbsacli.exe, but this is not recommended because it is
easier to extract patch details using the GUI. The following command allows you to view a
scan report using Mbsacli.exe:
mbsacli /ld "SecurityReportFile.xml"
A report file is generated in the profile directory of the logged in user (%userprofile%), on
the computer from where you ran the Mbsacli.exe command. The easiest way to view the
results of those reports is by using the GUI mode of MBSA.
Scanning Multiple Systems for Updates and Patches
You can also use MBSA to scan a range of computers. To do so, use the /r command line
switch as shown below.
mbsacli /r 192.168.195.130-192.168.195.254 /n OS+IIS+SQL+PASSWORD
For more details on installing patches and service packs for SQL Server 2000, including the
Desktop Edition (MSDE), see "How To: Patch Management" in the How To section of this
guide.
Scanning for Secure Configuration
In addition to scanning for missing security updates, MBSA scans for system configurations
that are not secure. For a detailed list of what is checked by this scan, see the MBSA
documentation at: http://www.microsoft.com/technet/treeview/default.asp?
url=/technet/security/tools/tools/mbsawp.asp
Compare the issue details against your security policy and follow the instructions if the issue
is not addressed by your policy.
There may be cases where MBSA reports that an update is not installed, even after you
complete an update or take the steps documented in a security bulletin. There are two
reasons for these false reports:
1. Files scanned were updated by an installation that is unrelated to a security
bulletin. For example, a file shared by different versions of the same program
may be updated by the newer version. MBSA is unaware of the newer version and,
because the file is not what it expects, it reports that the update is missing.
Windows NT 4.0 SP4 and above, Windows 2000, or Windows XP (local scans only
on Windows XP computers that use simple file sharing)
If any of the services are unavailable or administrative shares (C$) are not accessible,
errors will result during the scan.
Password Scans
The password check performed by MBSA can take a long time, depending on the number of
user accounts on the machine. The password check enumerates all user accounts and
performs several password change attempts using common password pitfalls, such as a
password that is the same as the username. You may want to disable this check before
scanning domain controllers on your network. For details on the MBSA password check,
see the topic "Local Accounts Passwords" in the MBSA whitepaper on TechNet at
http://www.microsoft.com/technet/treeview/default.asp?
url=/technet/security/tools/tools/mbsawp.asp.
It is important to know the differences between the default options of the two MBSA clients:
the GUI tool, Mbsa.exe, and the command-line tool, Mbsacli.exe. The examples shown
previously in this How To take these defaults into account.
The MBSA GUI calls /nosum, /v, and /baseline by default. The details for those options
are:
The MBSA command line calls no options and runs a default scan.
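If you want an Mbsacli.exe scan to approximate the GUI behavior, you can pass those switches explicitly. The following command line is a sketch; the switch names are those documented for MBSA 1.x, so confirm them with mbsacli /? on your installation:

```
mbsacli /i 127.0.0.1 /n OS+IIS+SQL+PASSWORD /nosum /v /baseline
```

Here /nosum skips file checksum testing during the security update checks, /v produces verbose output, and /baseline restricts update checks to baseline-critical updates.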
Additional Resources
The MBSA home page is the best resource for the latest information on the Microsoft
Baseline Security Analyzer: http://www.microsoft.com/technet/treeview/default.asp?
url=/technet/security/tools/Tools/MBSAhome.asp
How To: Use IISLockdown.exe
Applies To
This information applies to server computers that run the following:
IIS Samples
MSADC
IISHelp
Scripts
IISAdmin
You can save it locally or run it directly by clicking Open when you are prompted. If you
save IISLockd.exe, you can unpack helpful files by running the following command:
iislockd.exe /q /c
IISLockd.chm. This is the compiled help file for the IISLockdown tool.
URLScan.exe and associated files. These files install URLScan without running
IISLockdown.exe.
Running IISLockdown
IISLockdown detects the Microsoft .NET Framework and takes steps to secure .NET
Framework files. Install the .NET Framework on your Web server before you run
IISLockdown.
IISLockd.exe is not an installation program. When you launch IISLockd.exe, it runs the IIS
Lockdown Wizard.
2. For Web servers that host ASP.NET Web applications, select Dynamic Web
server (ASP enabled) from the Server templates list.
This allows you to specify the changes that the IIS Lockdown tool should perform.
4. Select Web service (HTTP) and make sure that no other services are selected.
6. On the Script Maps page, disable support for the following script maps, and then
click Next.
This causes IISLockdown to remove all of the listed virtual directories, configure
NTFS permissions for the anonymous Internet account, and disable WebDAV.
8. Click Next.
Note: The URLScan ISAPI filter that is installed as part of IISLockdown is not removed
as part of the undo process. You can remove URLScan manually by using the
ISAPI Filters tab at the server level in Internet Services Manager.
Unattended Execution
The following steps are from RunLockdUnattended.doc, which is available if you unpack
files by running IISLockd.exe with the /q and /c arguments.
4. Configure the server template that you chose in step 2. The template configuration
is denoted with square brackets around the server template name, for example,
[dynamicweb]. The template configuration contains the various feature settings
for that specific server template. These feature settings can be toggled on or off
by setting them to TRUE or FALSE.
5. Save IISlockd.ini.
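As an illustration, an IISlockd.ini prepared for unattended execution might resemble the fragment below. The setting names shown are examples only; use the section and setting names that actually appear in the IISlockd.ini file unpacked from IISLockd.exe:

```ini
; IISlockd.ini (illustrative fragment)
[Info]
UnattendedServerType=dynamicweb   ; server template chosen earlier
Unattended=TRUE

[dynamicweb]                      ; template configuration section
InstallUrlScan=TRUE               ; feature settings toggled TRUE/FALSE
DisableWebDAV=TRUE
```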
IISLockdown configures NTFS permissions using the new group Web Anonymous
Users. By default, this contains the IUSR_MACHINE account. If you create new
anonymous accounts, you must manually add these accounts to the Web
Anonymous Users group.
If you debug ASP.NET pages using Microsoft Visual Studio® .NET, debugging stops
working. This is because IISLockdown installs URLScan and URLScan blocks the
DEBUG verb. For more information about using IISLockdown on developer
workstations, see "How To: Secure Your Developer Workstation" in this guide.
How To: Use URLScan
Applies To
This information applies to server computers that run the following:
Installing URLScan
Log files
Removing URLScan
Configuring URLScan
Pitfalls
References
Installing URLScan
At the time of writing (April 2003), URLScan 2.0 is installed when you run IISLockdown
(IISLockd.exe), or you can install it independently.
Installing URLScan 2.0 with IISLockdown: You can install URLScan 2.0 as part
of the IIS Lockdown Wizard (IISLockd.exe). IISLockd.exe is available as an
Internet download from Microsoft's Web site at:
http://download.microsoft.com/download/iis50/Utility/2.1/NT45XP/EN-
US/iislockd.exe.
For more information, refer to Microsoft Knowledge Base article 315522, "How To:
Extract the URLScan Tool and Lockdown Template Files from the IIS Lockdown
Tool."
Installing URLScan 2.5: URLScan 2.5 is currently the latest version of URLScan. If
you want to install URLScan 2.5, you first need URLScan 1.0 or URLScan 2.0.
For more information, refer to Microsoft Knowledge Base article 307608, "INFO:
Availability of URLScan Version 2.5 Security Tool."
For more information on how to modify the various sections in URLScan.ini, refer to
Microsoft Knowledge Base article 815155, "How To: Configure URLScan to Protect
ASP.NET Web Applications."
Throttling Request Sizes with URLScan
You can use URLScan as another line of defense against denial of service attacks even
before requests reach ASP.NET. You do this by setting limits on the
MaxAllowedContentLength, MaxUrl and MaxQueryString attributes.
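As a sketch, the corresponding URLScan.ini section looks like the following; the limit values shown are illustrative and should be tuned to the largest legitimate request your application accepts (sizes are in bytes):

```ini
; URLScan.ini (illustrative fragment)
[RequestLimits]
MaxAllowedContentLength=1048576  ; cap request bodies at 1 MB
MaxUrl=260                       ; maximum URL length
MaxQueryString=2048              ; maximum query string length
```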
Your URLScan log file will also contain an entry similar to the following:
[01-18-2003 - 22:25:26] Client at 127.0.0.1: Sent verb 'DEBUG', which is not
specifically allowed. Request will be rejected.
For more information, see Microsoft Knowledge Base article 317741, "How To: Mask IIS
Version Information from Network Trace and Telnet."
Pitfalls
If you use URLScan, you might run into the following issues:
URLScan blocks the DEBUG verb which breaks application debugging. If you need
to support debugging, add the DEBUG verb to the [AllowVerbs] section in
URLScan.ini.
You need to recycle IIS for changes to take effect. URLScan is an ISAPI filter that
runs inside the IIS process (Inetinfo.exe) and URLScan's options are loaded from
URLScan.ini when IIS starts up. You can run the IISReset command from a
command prompt to recycle IIS.
URLScan blocks requests that contain potentially harmful characters, for example,
characters that have been used to exploit vulnerabilities in the past such as "." used
for directory traversal. It is not recommended that project paths contain the "."
character. If you must allow this, you need to set AllowDotInPath=1 in URLScan.ini.
If your Web application directories include dots in the path, for example, a directory
containing the name "Asp.Net", then URLScan will reject the request and a "404 not
found" message will be returned to the client.
For more information about how to modify the various sections in Urlscan.ini, refer
to Microsoft Knowledge Base article 815155, "How To: Configure URLScan to
Protect ASP.NET Web Applications."
For more information about URLScan 2.5, refer to Microsoft Knowledge Base
article 307608, "INFO: Availability of URLScan Version 2.5 Security Tool."
How To: Create a Custom Encryption Permission
Applies To
This information applies to server or workstation computers that run the following:
Step 3. Install the Permission assembly in the global assembly cache (GAC).
2. Add a strong name to the assembly so that you can install it in the GAC. Use the
following attribute in assemblyinfo.cs:
[assembly: AssemblyKeyFile(@"..\..\CustomPermissions.snk")]
[Flags, Serializable]
public enum StorePermissionFlag
{User = 0x01, Machine = 0x02}
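The code in this How To also refers to an EncryptionPermissionFlag enumerated type that is defined elsewhere in the full listing. A minimal sketch, assuming the same flag-style layout as StorePermissionFlag (the actual values in the complete listing may differ):

```csharp
// Hypothetical companion enum mirroring StorePermissionFlag's layout.
[Flags, Serializable]
public enum EncryptionPermissionFlag
{
    Encrypt = 0x01,
    Decrypt = 0x02
}
```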
9. Add the following public properties to allow a consumer application to set the
permission class state.
// Set this property to true to allow encryption.
public bool Encrypt
{
set {
if(true == value)
{
_permFlag |= EncryptionPermissionFlag.Encrypt;
}
else
{
_permFlag &= ~EncryptionPermissionFlag.Encrypt;
}
}
get {
return (_permFlag & EncryptionPermissionFlag.Encrypt).Equals(
           EncryptionPermissionFlag.Encrypt);
}
}
// Set this property to true to allow use of the user store.
public bool UserStore
{
  set {
    if(true == value)
    {
      _storePermFlag |= StorePermissionFlag.User;
    }
    else
    {
      _storePermFlag &= ~StorePermissionFlag.User;
    }
  }
  get {
    return (_storePermFlag & StorePermissionFlag.User).Equals(
               StorePermissionFlag.User);
  }
}
if (!(target.GetType().Equals(this.GetType())))
    throw new ArgumentException(
        "Argument must be of type EncryptionPermission.");
12. Implement IPermission.Union. This returns a permission object that is the result
of the set union between the current permission and the supplied permission.
public override IPermission Union(IPermission target)
{
if (target == null)
return Copy();
if (!(target.GetType().Equals(this.GetType())))
throw new ArgumentException(
    "Argument must be of type EncryptionPermission.");
if (target == null)
{
if ((canEncrypt == false && canDecrypt == false) && (canUseMachineStore ==
    false && canUseUserStore == false))
return true;
else
return false;
}
if (!(target.GetType().Equals(this.GetType())))
    throw new ArgumentException(
        "Argument must be of type EncryptionPermission.");
return true;
}
if (IsUnrestricted())
{
// Using the Unrestricted attribute is consistent with the
// built-in .NET Framework permission types and helps keep
// the encoding compact.
elem.AddAttribute("Unrestricted", Boolean.TrueString);
}
else
{
// Encode each state field as an attribute of the Permission element.
// To keep the encoding compact, encode only nondefault state parameters.
elem.AddAttribute("Flags", this._permFlag.ToString());
elem.AddAttribute("Stores", this._storePermFlag.ToString());
}
// Return the completed element.
return elem;
}
attrVal = elem.Attribute("Flags");
if (attrVal != null)
{
if(!attrVal.Trim().Equals(""))
{
this._permFlag =
(EncryptionPermissionFlag)Enum.Parse(typeof(EncryptionPermissionFlag),
                                     attrVal);
}
}
attrVal = elem.Attribute("Stores");
if (attrVal != null)
{
if(!attrVal.Trim().Equals(""))
{
this._storePermFlag =
(StorePermissionFlag)Enum.Parse(typeof(StorePermissionFlag),
                                attrVal);
}
}
}
2. Add the following using statements to the top of the new file.
using System.Security;
using System.Diagnostics;
using System.Security.Permissions;
4. Add serialization support to the class, and use the AttributeUsage attribute to
indicate where the custom permission attribute can be used.
[Serializable,
 AttributeUsage(AttributeTargets.Method |      // Can use on methods
                AttributeTargets.Constructor | // Can use on constructors
                AttributeTargets.Class |       // Can use on classes
                AttributeTargets.Struct |      // Can use on structs
                AttributeTargets.Assembly,     // Can use at the assembly level
                AllowMultiple = true,          // Can use multiple attribute
                                               // instances per target
                                               // (class, method, and so on)
                Inherited = false)]            // Cannot be inherited
5. Add private member variables to the class to mirror the state maintained by the
associated permission class.
// The following state fields mirror those used in the associated
// permission type.
private bool _encrypt = false;
private bool _decrypt = false;
private bool _machineStore = false;
private bool _userStore = false;
7. Add the following public properties to mirror those provided by the associated
permission class.
public bool Encrypt
{
get {
return _encrypt;
}
set {
_encrypt = value;
}
}
public bool Decrypt
{
get {
return _decrypt;
}
set {
_decrypt = value;
}
}
public bool UserStore
{
get {
return _userStore;
}
set {
_userStore = value;
}
}
public bool MachineStore
{
get {
return _machineStore;
}
set {
_machineStore = value;
}
}
public override IPermission CreatePermission()
{
  EncryptionPermissionFlag cipher = 0;
  StorePermissionFlag store = 0;
  if(_encrypt)
    cipher |= EncryptionPermissionFlag.Encrypt;
if(_decrypt)
cipher |= EncryptionPermissionFlag.Decrypt;
if(_userStore)
store |= StorePermissionFlag.User;
if(_machineStore)
store |= StorePermissionFlag.Machine;
// Return the final permission.
return new EncryptionPermission(cipher, store);
}
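After the attribute class is built and the permission assembly is trusted, calling code can use the custom permission declaratively as well as imperatively. A hypothetical sketch (the method name and body are illustrative only):

```csharp
// Declaratively demand permission to encrypt using the machine store.
[EncryptionPermission(SecurityAction.Demand,
                      Encrypt = true, MachineStore = true)]
public static byte[] EncryptData(byte[] plaintext)
{
    // Call the DPAPI wrapper here; callers lacking the custom
    // EncryptionPermission fail with a SecurityException.
    return null;
}
```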
You must grant full trust to any assembly that implements a custom security permission. In
practice, this means that you need to install the assembly on the computer where it is used,
to ensure that it is granted full trust by default security policy. Code within the
My_Computer_Zone is granted full trust by default policy.
Installing an assembly in the GAC is one way to be sure it is granted full trust by code
access security policy. The GAC is an appropriate location for the permission assembly
because the assembly is used by code access security policy on the local computer and is
available for any .NET Framework application that is installed on the local computer.
To install the custom permission assembly in the local computer's GAC, run the following
command.
gacutil.exe /i custompermission.dll
Without further modification, you can only call the managed DPAPI wrapper in the
referenced How To article from full trust code. To be able to call the DPAPI wrapper from
partial trust code, such as a medium trust ASP.NET Web application, you must sandbox the
calls to the unmanaged DPAPI functions. To do this, make the following modifications:
Assert the unmanaged code permission in the DPAPI wrapper code. This means
that any calling code does not require the unmanaged code permission.
Authorize the calling code inside the wrapper by demanding the custom
EncryptionPermission. The Demand call occurs before the Assert call, in
accordance with the Demand/Assert usage pattern. For more information about
using Assert safely, see "Assert and RevertAssert," in Chapter 8, "Code Access
Security in Practice."
Task To modify the DPAPI managed wrapper
1. Build the DPAPI managed wrapper by following the instructions in "How To:
Create a DPAPI Library."
3. Open dataprotection.cs from the managed wrapper library, and add the
following using statements beneath the existing using statements at the top of
the file.
using System.Security;
using System.Security.Permissions;
using CustomPermissions;
4. Locate the Encrypt method in dataprotection.cs, and add the following code at
the top of the outer try block in the Encrypt method.
// Set the storeFlag depending on how the caller uses
// the managed DPAPI wrapper.
StorePermissionFlag storeFlag;
if(Store.USE_MACHINE_STORE == store)
{
storeFlag = StorePermissionFlag.Machine;
}
else
{
storeFlag = StorePermissionFlag.User;
}
// Demand custom EncryptionPermission.
(new EncryptionPermission(EncryptionPermissionFlag.Encrypt,
                          storeFlag)).Demand();
// Assert the unmanaged code permission so that callers do not need it.
new SecurityPermission(SecurityPermissionFlag.UnmanagedCode).Assert();
5. Add the following finally block to the outer try block in the Encrypt method.
finally
{
CodeAccessPermission.RevertAssert();
}
6. Locate the Decrypt method in dataprotection.cs, and add the following code at
the top of the outer try block.
StorePermissionFlag storeFlag;
if(Store.USE_MACHINE_STORE == store)
{
storeFlag = StorePermissionFlag.Machine;
}
else
{
storeFlag = StorePermissionFlag.User;
}
// Demand custom EncryptionPermission.
(new EncryptionPermission(EncryptionPermissionFlag.Decrypt,
                          storeFlag)).Demand();
// Assert the unmanaged code permission so that callers do not need it.
new SecurityPermission(SecurityPermissionFlag.UnmanagedCode).Assert();
7. Add the following finally block to the outer try block in the Decrypt method.
finally
{
CodeAccessPermission.RevertAssert();
}
In this step, you create a test Web application and then modify ASP.NET code access
security policy for a medium trust Web application to grant it the EncryptionPermission.
4. Add the following using statement to the top of WebForm1.aspx.cs beneath the
existing using statements.
using DataProtection;
5. Add the following code for the Encrypt button-click event handler.
private void btnEncrypt_Click(object sender, System.EventArgs e)
{
  DataProtector dp = new DataProtector(
                         DataProtector.Store.USE_MACHINE_STORE);
try
{
byte[] dataToEncrypt = Encoding.ASCII.GetBytes(txtDataToEncrypt.Text);
// Not passing optional entropy in this example.
// Could pass a random value (stored by the application) for entropy
// when using DPAPI with the machine store.
txtEncryptedData.Text =
    Convert.ToBase64String(dp.Encrypt(dataToEncrypt, null));
}
catch(Exception ex)
{
lblError.ForeColor = Color.Red;
lblError.Text = "Exception.<br>" + ex.Message;
return;
}
lblError.Text = "";
}
6. Add the following code for the Decrypt button-click event handler.
private void btnDecrypt_Click(object sender, System.EventArgs e)
{
  DataProtector dp = new DataProtector(
                         DataProtector.Store.USE_MACHINE_STORE);
try
{
byte[] dataToDecrypt = Convert.FromBase64String(txtEncryp
// Optional entropy parameter is null.
// If entropy was used within the Encrypt method, the same
// parameter must be supplied here.
txtDecryptedData.Text =
Encoding.ASCII.GetString(dp.Decrypt(dataToDecrypt, null));
}
catch(Exception ex)
{
lblError.ForeColor = Color.Red;
lblError.Text = "Exception.<br>" + ex.Message;
return;
}
lblError.Text = "";
}
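Both handlers rely on base64 to move DPAPI's binary output through a text box and back. As an illustration of that round trip only (the C# samples use Convert.ToBase64String and Convert.FromBase64String; this sketch substitutes Python and stand-in ciphertext bytes):

```python
import base64

# Stand-in for the binary ciphertext returned by DataProtector.Encrypt.
ciphertext = bytes([0x01, 0xFF, 0x20, 0x7F, 0x00, 0x9C])

# Encrypt handler: binary ciphertext -> base64 text for the text box.
encoded = base64.b64encode(ciphertext).decode("ascii")

# Decrypt handler: base64 text from the text box -> original bytes.
decoded = base64.b64decode(encoded)

assert decoded == ciphertext  # the round trip is lossless
```

Base64 matters here because raw ciphertext bytes are not valid display text; encoding them keeps the UI round trip lossless.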
7. Configure the Web application for medium trust by adding the following element to
the application's Web.config file inside the <system.web> section.
<trust level="Medium" />
Set the PublicKeyToken attribute value to the specific public key token for your
assembly. To extract the public key token for your custom permission assembly,
use the following command.
sn -T custompermission.dll
3. Locate the ASP.NET named permission set in the medium trust policy file, and add
the following permission element.
<IPermission class="EncryptionPermission"
version="1" Flags="Encrypt,Decrypt"
Stores="Machine,User">
</IPermission>
You can grant code a restricted permission by using only the relevant attributes.
For example, to limit code to decrypt data using only the machine key in the
machine store, use the following element.
<IPermission class="EncryptionPermission"
version="1" Flags="Decrypt"
Stores="Machine">
</IPermission>
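Note that a short class name such as EncryptionPermission resolves only if the policy file also declares the type. A sketch of the corresponding <SecurityClass> declaration, with placeholder assembly name, version, and public key token that you would replace with your own values:

```xml
<SecurityClasses>
  <SecurityClass Name="EncryptionPermission"
                 Description="EncryptionPermission, CustomPermission,
                              Version=1.0.0.0, Culture=neutral,
                              PublicKeyToken=YourPublicKeyToken"/>
</SecurityClasses>
```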
You can now run the test Web application and verify that you can encrypt and
decrypt data by using DPAPI from a partial trust Web application.
For more information about sandboxing highly privileged code and about working with
ASP.NET code access security policy, see Chapter 9, "Using Code Access Security with
ASP.NET."
How To: Use Code Access Security Policy to Constrain an
Assembly
Applies To
This information applies to server or workstation computers that run the following:
Microsoft® Windows® 2000 Server, Windows 2000 Professional, Windows
Server 2003, and Windows XP Professional operating systems
You use the .NET Framework 1.1 Configuration tool to create a new permission set and a
new code group. The permission set defines what the code can and cannot do, and the
code group associates the permission set with particular code, for example a specific
assembly or set of assemblies.
In addition to constraining file I/O, you can use code access security policy to impose other
constraints on code. For example, you can restrict the ability of code to access other types
of resources protected by code access security, including databases, directory services,
event log, registry, Domain Name System (DNS) servers, unmanaged code, and
environment variables.
Note   This list is not exhaustive but represents many of the common resource
types accessed by Web applications.
Before You Begin
Before you begin to use code access security policy to constrain an assembly, you should
be aware of the following:
To constrain a Web application so that it is only able to access files within its own
virtual directory hierarchy, you can configure the application to run with medium trust
by placing the following in Web.config:
<system.web>
<trust level="Medium" />
</system.web>
This uses ASP.NET code access security policy to constrain the ability of the Web
application to perform file I/O and it also imposes other constraints. For example, a
medium trust application cannot directly access the event log, registry, or OLE DB
data sources.
You must maintain ASP.NET policy by using a text or XML editor. For more
information about running Web applications using medium trust, see Chapter 9,
"Using Code Access Security with ASP.NET."
When you build an assembly, you can impose constraints programmatically using
code access security. For more information about how to do this, see Chapter 8,
"Code Access Security in Practice."
You should generally avoid building Web applications that accept file names and
paths from the user because of the security risks posed by canonicalization issues.
On occasion, you might need to accept a file name as input. This How To shows
you how you can constrain an assembly to ensure that it cannot access arbitrary
parts of the file system. For more information about performing file I/O, see "File
I/O" sections in Chapter 7, "Building Secure Assemblies," and Chapter 8, "Code
Access Security in Practice," of Improving Web Application Security.
For more information about code access security fundamentals, see Chapter 8,
"Code Access Security in Practice," of Improving Web Application Security.
Summary of Steps
This How To includes the following steps:
1. Create an assembly that performs file I/O.
By adding a strong name, you make the assembly tamper proof by digitally
signing it. The public key component of the strong name also provides
cryptographically strong evidence for code access security policy. An
administrator can apply policy by using the strong name to uniquely identify the
assembly.
6. Rename the default constructor to match the class name, and make it private
to prevent instances of the FileWrapper class from being created. This class
provides only static methods.
7. Add the following public method so that it reads from a specified file.
public static string ReadFile(string filename)
{
byte[] fileBytes = null;
long fileSize = -1;
Stream fileStream = null;
try
{
if(null == filename)
{
throw new ArgumentException("Missing filename");
}
// Canonicalize and validate the supplied filename
// GetFullPath:
// - Checks for invalid characters (defined by Path.InvalidPathChars)
// - Checks for Win32 non file-type device names including
//   physical drives, parallel and serial ports, pipes, mailslots,
//   and so on
// - Normalizes the file path
filename = Path.GetFullPath(filename);
fileStream = File.OpenRead(filename);
if(!fileStream.CanRead)
{
throw new Exception("Unable to read from file.");
}
fileSize = fileStream.Length;
fileBytes = new byte[fileSize];
fileStream.Read(fileBytes, 0, Convert.ToInt32(fileSize));
return Encoding.ASCII.GetString(fileBytes);
}
catch (Exception)
{
  // Rethrow without resetting the original stack trace.
  throw;
}
finally
{
if (null != fileStream)
fileStream.Close();
}
}
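This canonicalize-before-validate pattern is language neutral. As an illustration only (not part of the FileIO sample, and with an arbitrary allowed root), the following Python sketch applies the same check: normalize the supplied path first, then confirm it falls under an allowed directory, so that ../ sequences cannot escape it:

```python
import os

def is_under(root: str, candidate: str) -> bool:
    """Return True if candidate, once canonicalized, lies under root."""
    root = os.path.realpath(root)
    candidate = os.path.realpath(candidate)
    # commonpath defeats partial-name tricks such as /tmp-evil
    return os.path.commonpath([root, candidate]) == root

# '..' sequences are resolved before the check, so traversal fails.
assert is_under("/tmp", "/tmp/reports/file.txt")
assert not is_under("/tmp", "/tmp/../etc/passwd")
```

Validating after canonicalization, never before, is the essential point; checking the raw string would accept paths that resolve outside the allowed directory.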
Step 2. Create a Web Application
In this step, you create a Web application assembly that calls the file I/O assembly.
2. Add a project reference in the new project that references the FileIO project.
3. Add a text box to WebForm1.aspx to allow the user to supply a path and
filename. Set its Text property to c:\temp\somefile.txt and set its ID to
txtFileName.
4. Add a button to WebForm1.aspx and set its Text property to Read File and its ID
to btnReadFile.
5. Double-click the Read File button and add the following code to the event handler:
string s = FileIO.FileWrapper.ReadFile( txtFileName.Text );
Response.Write(s);
With full trust, Web applications are not constrained in any way by code access security
policy. The success or failure of resource access is determined purely by operating system
security.
Three levels of code access security policy are displayed: Enterprise, Machine,
and User. The fourth level at which you can configure code access security policy
is the application domain level. ASP.NET implements application domain level
policy, but this is not maintained using the .NET Framework version 1.1
Configuration tool. To edit ASP.NET policy, you must use a text editor.
For more information about ASP.NET policy and how to use it, see Chapter 9,
"Using Code Access Security with ASP.NET."
The Code Groups and Permission Sets folders are displayed. Each policy file
contains a hierarchical collection of code groups. Code groups are used to assign
permissions to assemblies. A code group consists of two elements:
A permission set — The permissions that the permission set contains are
granted to assemblies whose evidence matches the membership
condition.
A membership condition — Evidence, such as a strong name, that
identifies the code to which the permission set applies.
7. Enter c:\temp in the File Path column and select Read and Path Disc (path
discovery).
8. Click OK.
9. Select Security from the Available Permissions list and click Add.
The FileIO assembly also needs the permission to execute in addition to the
FileIOPermission. The permission to execute is represented by
SecurityPermission with its Flags property set to
SecurityPermissionFlag.Execution.
You have now created a new permission set called RestrictedFileIO that contains
a restricted FileIOPermission, which allows read and path discovery to the
C:\Temp directory, and a restricted SecurityPermission, which allows assembly
execution.
3. Enter FileIOAssembly as the code group name, and then click Next.
4. Select StrongName from the Choose the condition type for this code group
dropdown list.
You use this code group to apply specific permissions as defined by the
RestrictedFileIO permission set to the FileIO assembly. A strong name provides
cryptographically strong evidence to uniquely identify an assembly.
5. To specify the FileIO assembly's public key (which it has because it is
signed with a strong name), click Import, and then browse to the project output folder that
contains FileIO.dll. Click Open to extract the public key from the assembly.
6. Click Next, and then select RestrictedFileIO from the Use existing permission
set drop-down list.
7. Click Next and then Finish to complete the creation of the code group.
You have now created a new code group that applies the permissions defined by
the RestrictedFileIO permission set to the FileIO assembly.
8. In the right window, select the FileIOAssembly code group, and then click Edit
Code Group Properties.
9. Select This policy level will only have the permissions from the permission
set associated with this code group and Policy levels below this level will
not be evaluated.
By selecting these attributes for the code group, you ensure that no other code
group, either at the current machine level or from the ASP.NET application domain
level, affects the permission set that is granted to the FileIO assembly. This
ensures that the assembly is only granted the permissions defined by the
RestrictedFileIO permission set that you created earlier.
Note   If you do not select these options, default machine policy grants the
assembly full trust because the assembly is installed on the local computer
and falls within the My_Computer_Zone setting.
The assembly should be installed in the GAC because ASP.NET loads strong-named
assemblies as domain-neutral assemblies. All strong-named assemblies that ASP.NET Web
applications call should be installed in the GAC. For more information about this issue, see
"Strong Names" in Chapter 7, "Building Secure Assemblies."
Note   Normally, default machine policy and ASP.NET policy grant full trust to
assemblies that are installed in the GAC. The This policy level will only have
the permissions from the permission set associated with this code group and
Policy levels below this level will not be evaluated attributes that you
assigned to the code group in Step 4 ensure that the assembly is not granted
full trust and is only granted the permissions defined by the RestrictedFileIO
permission set that you created earlier.
You can call Gacutil.exe as a post-build step in Microsoft Visual Studio® .NET
to ensure that the assembly is placed in the GAC each time it is successfully built.
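For example, a post-build command along the following lines could install the assembly into the GAC; both the SDK path and the output file name are assumptions to adapt to your environment:

```
"C:\Program Files\Microsoft.NET\SDK\v1.1\Bin\gacutil.exe" /i FileIO.dll
```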
1. Display the FileIO project's Properties dialog box in Visual Studio .NET.
This forces the permission grant for the FileIO assembly to be recomputed. If the
ASP.NET application domain is still active from the last time you ran the Web
application, the assembly could still be cached by ASP.NET.
4. Run the Web application, and then click Read File.
The contents of the text file should be successfully displayed. The policy that you
created allows the FileIO assembly to read files from C:\Temp and below.
5. Enter C:\somefile.txt in the text box, and then click Read File.
The exception details indicate that a request for the FileIOPermission has failed,
as shown below:
System.Security.SecurityException: Request for the permission of type
System.Security.Permissions.FileIOPermission, mscorlib, Version=1.0.5000.0,
Culture=neutral, PublicKeyToken=b77a5c561934e089 failed.
Index
A
absolute URLs, 282
access
checks, 305
maintaining, 16
separation of, 108
access control
entry, 554, 577
URL authorization, 284
accounts
checklists, 730
data server configuration, 672
database servers, 515–518
delegating, 445, 518
Enterprise Services, 665
lockout policies for end-user accounts, 81
management, 111
need for disabling, 82
shared, 445, 518
vulnerabilities, 428
Web server configuration, 647–648
Web servers, 428, 442
ACLs
ASP.NET application and Web services, 554, 577–578
checklists, 703–704
configuring for network service, 593–594
on Machine.config, 554
act as part of the operating system, 559
administration
checklists, 707
interfaces, 86, 412
removing shares, 673
separation of privileges, 87
solutions, xviii–xxi
administrative access, 412
administrative shares, 448, 521, 673
administrator accounts
database servers, 516
Web servers, 443
administrators
account to interactive logins, 518
checklists, 711
group membership, 518
need to log on interactively, 445
separating privileges, 115
ADO.NET
code access security permissions required by data providers, 396
exceptions, code review, 641
trapping and logging exceptions, 389
Afd.sys protections, 760–761
alerts and notifications, 684
Web sites, 684
algorithms
importance of using correctly, 92, 119
listed, 92
AllowPartiallyTrustedCallersAttribute. See APTCA
alternate credentials, 356–357
anonymous access
preventing, 666
serviced components, 636
anonymous account impersonation, 595–597
anonymous authentication, turning off in IIS, 355
anonymous Internet user accounts, 653
anonymous Internet user identities, 109
anonymous logons
database servers, 517
Web servers, 445
anonymous Web accounts
impersonating, 579
Web servers, 443
anonymous Web user accounts, 648
APIs
calling potentially dangerous, 629
dangerous, 169
unmanaged, 169, 615
$AppDirUrl$, 230
application isolation
anonymous identities, 286
ASP.NET trust levels, 225–226, 239
and code access security, 222
features for Windows 2000 and Windows Server 2003, 589–590, 600
application servers
firewall considerations, 482–486
how to secure, lxix
methodology, 480
overview, 475–476
threats and countermeasures, 477
application level
authentication, 335
error handlers, 294
error handling in Global.asax, 341
events, 294
ApplicationAccessControl attribute
component level access checks, 305
to prevent anonymous access, 636
role-based security, 304
Application_Error
event handler, 341
to trap application level events, 294
application, .NET Framework version 1.1, 280
applications
activation types, 303
architecture diagram, 51
assemblies, 498
authorizing, 113
bin directory checklist, 703–704
configuration settings in Machine.config, 552
constraining file I/O, 165, 205–206
customizing policy for, 235
data, 370–371
decomposing, 52
design guidelines for, 97–100
directories, 183
DLLs, 498, 665
event sources, 295–296
filters, 414
how to manage configurations securely, lxx
identifying purpose of, 50
identifying threats, 58–59
information, 162
isolating, 222
isolating by identity, 594–599
isolating with application pools, 599–600
isolating with code access security, 600
pools, 599–600
restricting in the database, 383
review, 101
securing, lxxvi–lxxvii
threats and countermeasures, 23–24
tiers, 95
unused, 520
vulnerability categories, lxxvi–lxxvii, 9–10
AppSettings, 547
<appSettings> element
accessing cipher text from, 385
ASP.NET application and Web services, 547
APTCA, 152, 169
avoiding, 191
and code access security, 191–192
diagnosing, 192
list of system assemblies with, 231–232
and .NET framework, 140
and partial trust callers, 152
in partial trust Web applications, 231, 234
sandboxing, 238–239, 245
strong names, 155–156
arbitrary code execution
described, 23
Web servers, 425
architecture
of applications, 100
checklists, 689–694
creating, 99–100
deployment, 101
diagram, 49
for security, 99–100
solutions, lxiii
array bounds, 169
ASP.NET
application and Web services, 545
application isolation, 239
architecture on Windows 2000, 591–592
architecture on Windows server 2003, 592–594
auditing and logging, 295–296
and Authenticode, 155
building applications, lix
checklists, 695–704
code access security with, 221–224
common threats to Web pages and controls, 255
configuration files, 548
configuring for Windows authentication, 355
data access configuration to
applications, 579–580
default policy permissions and trust levels, 233
errors on Web servers, 464
exception management, 293–294
FileAuthorizationModule, 350
full trust and partial trust, 224–225
generic error page, 392–393
hosting remoted objects, 355
hosting with the HttpChannel, 669
how to host multiple applications, lxix
how to use code access security with, lxv
least privileged domain account, 579
major elements of policy file, 228–229
medium trust, 239–243
named permission set, 229–230
pages and controls, 630–634
parameter manipulation, 290–293
policy files, 227–229
process accounts NTFS permissions, 578
process accounts to access a remote database, 578
process identity, 556–558
process model, 545
reduced attack surface, 239
resource access, 223–224
resource access identity, 262
sandboxing, 244–247
security for HttpChannel, 352
session state service, 646
state service, 440
strong names, 158
substitution parameters, 230
trust levels, 232–234, 555–556
Web services, 248–249
ASP.NET application and Web services
ASP.NET process model, 545
authorization, 563–565
debugging, 571
event log, 576–577
exception management, 572
file access, 577–578
and impersonation, 286
machine keys, 570–571
Machine.config, 548–555
methodology, 544–545
overview, 543–544
session state, 565–569
snapshot of secure application, 585–587
tracing, 571–572
UNC shares, 581–582
view state, 569
Web services, 573–575
Web.config, 548–555
AspNetHostingPermission, 230
Aspnet_setreg.exe, 546
assemblies
attributes, 155
authorization, 160
described, 145
design considerations, 150
dynamically loading, 619
event logging, 165–166
file I/O, 825–826
granting full trust to, 817
resource access code, 263
shared, 230
strong names, 155
unmanaged code, 168–169
Web controls and user controls in, 263
assembly attribute, 155
assembly implementation, 310
assembly types, 230
assembly level
checklists, 735
metadata, 636
threats, 146
assert calls, code access security, 624–625
assert duration, 204
reducing, 204
assert methods, 185, 203, 622
code access security, 185
assets
described, 13, 45
identification of, 49
associated permissions
and privileged operations, 194
and secure resources, 193
asymmetric encryption using X.509 certificates, 337–338
attack patterns
creating, 61
described, 59
attack trees
creating, 60–61
described, 59
attacker reveals implementation details, 41
attacks
anatomy of, 14–16
on assemblies, 147
described, 5, 13, 46
identity spoofing, 257
information disclosure, 260
methodology, 15
network eavesdropping, 259
network security, 406–407
parameter manipulation, 258
session hijacking, 257
and vulnerabilities, 423
attributes
assembly attribute, 155
class attribute, 624
connect attribute, 212
declarative security attribute, 624
examples of potentially dangerous, 610–611
member level attribute, 524
audit logs, 469
database servers, 537
audit user transactions, 308–309
auditing
applications, 123
ASP.NET, 295–296
checklists, 694, 699, 707, 710, 715, 726, 732
data server configuration, 675
database servers, 525–526
as element of security, 5
failed actions, 452
logon failures, 525
network security, 413
remoted objects, 365
requirements, 303
secure Web services, 341
serviced components, 308–309
SQL Server, 528
for suspicious behavior, 42
vulnerabilities, 429
Web applications, 95–96
Web server configuration, 651–652, 654
Web servers, 429, 451–452
auditing threats, 41–42
authenticated connections
controlling, 358
sharing to increase performance, 357
authentication
aspects of, 80
ASP.NET application and Web services, 560–563
callers, 109
checklists, 690–691, 697–698, 706, 709, 714, 717
configuring Windows only, 528
cryptography, 91
data access, 379
databases, 109–110
described, 29, 80
disabling, 666–667
as element of security, 4
Enterprise Services, 666
IPSec for filtering, 777–786
levels, 494
NTLMv2, 518
remoted objects, 355–358
requirements, 325
secure Web services, 332–335
for sensitive pages, 289
server-to-server, 784–785
serviced components, 304
and session tokens, 290
solutions, 325
tickets, 262
tokens and session management, 289–290
type, 134
vulnerabilities, 107–108
Web pages and controls, 277–278
Web server configuration, 654
authentication = AuthenticationOption.Privacy, 307–308
authentication cookie-to-HTTPS connections, 280
authentication cookies, 282
limiting lifetime of, 659
persisting, 281
protecting, 90
securing, 280
stolen, 82
<authentication> element
ASP.NET application and Web services, 560
Web server configuration, 658–659
Authenticode
and ASP.NET, 155
and strong names, 159–160
authorization
of applications, 113
ASP.NET application and Web services, 563–565
checklists, 691, 697, 706, 709–710, 714, 718
code review, 634
and COM+ roles, 304
data access, 381–382
described, 83
as element of security, 4
of end users, 112
Enterprise Services, 667
granularity in ASP.NET, 263
granularity models, 83–85
remoted objects, 359–360
secure Web services, 335–336
serviced components, 304
types used in assemblies, 160
vulnerabilities, 111–112
vulnerabilities described, 31
Web applications, 83
Web site partitioning, 284–286
authorization decisions
explicit role checks, 285
with imperative principal permission demands, 285
<authorization> element
for authentication, 279
authorization granularity, 263
for configuring role-based security, 138–139
for page level and directory level access control, 284
partitioning Web sites, 261
Web server configuration, 660
automatic updates for developer workstations, 769–770
availability as an element of security, 5
Index
B
backups
database servers, 537
and patch management, 746
bad design in Web applications, 70–72
bad input, 78
banner information, 460–461
banner masking in URLScan, 803
base installations and service packs, 433
baseline security analyzer
how to use, 787–793
SQL Server and MSDE specifics, 791
Basic authentication, 332, 343
Basic replay attacks, 324
bin directory, 576
BinarySecurityToken class, 338
blank passwords, 641
brute force attacks, 30
buffer overflows, 25–26
code injection attacks, 26, 255
code review, 616
and strncpy, 615
unmanaged code, 629
BUILTIN\Administrators server login, 530–531, 676
Index
C
C2 level auditing, 526
CA. See certificate-based authentication
caching
data, 625
protecting data, 199
results of security checks, 171–172, 618
of secrets, 89
sensitive data, 288
call level authentication
serviced components, 304
setting, 494
callers
authenticating, 109, 664
authorizing, 635
calls, forcing clients to authenticate, 357
canonicalization
described, 28–29
need for caution in, 76
CAS. See code access security
catch exceptions, 95
categorized threat lists, 57
centralizing input, 75
certificate installation on the database server, 536
certificate-based authentication, 784
channel sink, 365
channels, 568
chapter to product life cycle relationship, lxxix
character encoding, 612
setting correctly, 274
character representation in HTML, 611
checklists
accounts, 730
ACLs and permissions, 703–704
administration, 707
administrators, 711
application bin directory, 703–704
architecture, 690
architecture and design review, 689–694
assembly level checks, 735
and assessment guidelines, 684–685
auditing and logging, 694, 699, 707, 710, 715, 726, 732
authentication, 690–691, 697–698, 706, 709, 714, 717
authorization, 691, 697, 706, 709–710, 714, 718
class level checks, 735
code access, 727
code access security, 740–741
configuration file settings, 699–702
configuration management, 692, 697, 710, 714, 718
cryptography, 693, 735
database servers, 729–733
delegates, 737
deployment, 689, 710
deployment considerations, 719
design, 690, 695
design considerations, 715
environment variables, 740
event logging, 739
exception management, 693, 699, 707, 715, 719, 737
file I/O, 739
files and directories, 725, 730
firewall considerations, 722
hosting multiple applications, 703
IIS lockdown, 723
IIS metabase, 727
impersonation, 711
input validation, 690, 696, 705, 715
ISAPI filters, 727
Machine.config, 727
managed code, 735–742
parameter manipulation, 693, 698, 706
patches and updates, 723, 729
ports, 725, 731
protocols, 724, 730
proxy considerations, 707
reflection, 738
registry, 731, 739
resource access considerations, 739–740
router considerations, 721
script mappings, 726
secrets, 737
securing ASP.NET, 695–704
securing data access, 717–719
securing Enterprise Services, 709–711
securing remoting, 713–715
securing Web services, 705–707
securing your network, 721–722
security, lxxx
sensitive data, 692, 698, 706, 710, 715, 718
serialization, 737
server certificates, 727
services, 723, 730
session management, 692, 698
shares, 725, 731
sites and virtual directories, 726
SQL injection checks, 717
SQL Server database objects, 733
SQL Server logins, users, and roles, 732
SQL Server security, 732
staying secure, 733
switches, 722
threading, 738
unmanaged code access, 738–739
Web farm, 702
Web servers, 723–728
CheckProductStockLevel method, 393–395
checks, bypassing, 93
checksum spoofing, described, 38–39
cipher text from the <appSettings> element, 385
class attribute, 624
class demands, 201
class design
code review, 617
considerations, 153
class visibility restriction, 153
class level checks, checklists, 735
class level link demands, 201
classes
principal demands, 284
validating data streams, 619
client credentials, configuring, 356
client side state management options, 289
client side validation, 76, 632
clients
forcing to authenticate each call, 357
IPrincipal objects from, 358
leaking information to, 94–95
returning generic error pages to, 293–294
clocks, 417
CLR. See common language runtime
cmdExec, 532–533, 677
code
authorization, 196–197
authorizing in code access security, 196–197
code access security, 183
constraining, 204–205
creating dynamically at runtime, 619
impersonation, 618
restricting calls on, 197
restricting what users can call, 154
restrictions on calling, 197–198
security in .NET, 131
static class constructors, 618
storing keys in, 177
storing sensitive data in, 88
code access
checklists, 727
permissions, 184
permissions in .NET framework, 222
code access security, lxiv, 622–627
with ASP.NET, lxv, 221–224
checklists, 740–741
configuring in ASP.NET, 225–226
considerations, 313, 342, 396
data access, 209–210
delegates, 217–218
described, 186–187
diagram, 186–187
environment variables, 211
event logging, 207
evidence, 183
file I/O, 205–207
file I/O constraints, 830–831
isolating applications with, 600
layer, 223
link demands, 199–201
.NET, 132–133
overview, 181–182
permissions, 194–196
permissions required by ADO.NET data providers, 396
privileged code, 193
privileged operations, 194
privileged resources, 193
remoted objects, 365
secure Web services, 326
serviced components, 303
sockets and DNS, 213
and <trust> element, 326
unmanaged code, 214–217
vulnerabilities, 429
Web servers, 429, 464
Web services, 212
code access security policy, 159
configuring to constrain file I/O, 828–830
configuring to restrict file I/O, 207
how to use to constrain an assembly, 823–831
code groups
code access security, 186–187
exclusive and level final, 190
code injection
assemblies, 147–148
attack patterns, 61
buffer overflows, 26
Web pages and controls, 255–256
Code Red worm, 426
application filters, 414
code review
ASP.NET pages and controls, 630–634
buffer overflows, 616
code access security, 622–627
cross-site scripting, 608–613
data access code, 640–642
guidelines, 735
managed code, 616–622
overview, 605
serviced components, 636–638
SQL injection, 614
unmanaged code, 628–629
Web services, 634–635
CodeAccessPermission.Assert method, 203, 236
$CodeGen$, 230
COM+, application server, 487–488
COM+ catalogs
application server, 492
securing, 665
COM+ role-based security, 495
COM+ roles, 304
COM components, 169
COM interop
SuppressUnmanagedCodeSecurity, 217
with SuppressUnmanagedCodeSecurity, 217
COM/DCOM resources, 583
common criteria, 685
common language runtime, 130–131
communication channel
application server considerations, 480
need for securing, 89
communities and newsgroups, 683
<compilation> element
ASP.NET application and Web services, 571
Web server configuration, 657
component services infrastructure, 487–488
component level access checks, 305, 637
application server, 495–496
enabling, 495–496
ComponentAccessControl, 305
confidentiality, 4
configuration categories
securing for developer workstations, 774–775
Web servers, 427–429
configuration data
data access, 370
and WSDL, 323
configuration data disclosure, 302
configuration files
ASP.NET, 548
checklists of settings, 699–702
locations, 549
plaintext passwords, 288
configuration management
checklist, 692, 710, 714, 718
checklists, 697
data access, 384
described, 33–34
serviced components, 305–307
vulnerabilities, 114
of Web applications, 86–87
configuration settings
applying, 551
locking, 552–553
configuration stores
need for security of, 86
securing, 115
ConfigurationSettings class, 385
connect attribute, 212
connection details, 371
connection strings
encrypting, 641
management, 398
securing and encrypting, 384–385
storing, 384–385
ConnectionGroupName, 357–358
ConnectionString property, 385, 641
ConnectPattern property, 212
constructor strings, 638
Control.MapPathSecure method, 271
cookie replay attacks described, 31
cookies. See also authentication cookies
encrypting, 281–282
encrypting states, 93
encryption in forms-authentication, 570
encryption with <forms> element, 281
encryption with FormsAuthenticationTicket, 281
inputting, 632
limiting lifetimes, 281
manipulation described, 40
names and paths, 563
persistent cookies, 90
personalization, 282
session authentication, 90
stolen, 82
storing sensitive data in, 90
time-out values, 562
using unique paths and names, 659
core elements of a deployment review, 644
core security principles, 11
cross-site scripting, 26–27
countermeasures
assemblies, 147
code injection attacks, 256
described, 13, 46
identity spoofing, 258
information disclosure, 260
network eavesdropping, 259
parameter manipulation, 259
session hijacking, 257
STRIDE, 17–18
CredentialCache.DefaultCredentials, 333
credentials
and authentication tickets in ASP.NET, 262
encrypting for <identity>, 559
management, 283
in <Security> element, 334
for SQL authentication, 380
theft described, 31
<credentials> element
ASP.NET application and Web services, 562
on production servers, 659
Credentials property of the Web service proxy, 333
CredentialCache.DefaultCredentials, 356
CRM log files
application server, 498
securing, 665
cross-site scripting
code injection attacks, 255
code review, 608–613
how to prevent, lxvi
overview, 253
secure Web services, 331
validating input used for, 273
Web page validation, 272–273
Web pages and controls, 272–277
cryptography. See also encryption
checklists, 693, 735
code review, 620
considerations, 174–179
description of, 91–92
threats, 37–39
vulnerabilities, 119
CryptProtectData API, 176
CRYPTPROTECT_LOCAL_MACHINE flag, 176
and DPAPI, 374
CSS. See cross-site scripting
Curphey, Mark, foreword, xli–xlii
custom application filters, 415
custom authentication, 635
and principal objects, 639
custom binary tokens, 338
custom channel sink, 365
custom encryption permission, 805–822
custom encryption sinks, 361–364
custom EncryptionPermission, 805–806
inheritance hierarchy, 806
custom permissions, 199
custom policies to allow registry access, 251
custom processes
hosting, 358
with the TCPChannel, 670
custom resources
custom permissions, 199
exposing, 625
protecting with custom permissions, 199
customer class, 311–312
<customErrors> element
ASP.NET application and Web services, 572
for exception conditions, 464
to return a generic error page, 293–294
Web server configuration, 658
Index
D
dangerous permissions, 627
data. See also DNS
caching, 625
constraining options, 264
flow, 53
privacy and integrity on the network, 399
session, 290
source names, 448, 649
tampering described, 32, 35–36
type validation, 631
validation, 78
data access. See also data access code
ASP.NET application and Web services, 579–580
assemblies, 167, 375
authentication, 379
authorization, 381–382
checklists, 717–719
code access security, 209–210
components, 393–395
configuration management, 384
configuration to ASP.NET application, 579–580
data access assemblies, 375
deployment considerations, 397–399
design considerations, 372–375
DPAPI, 374
exception management, 389–393
input validation, 376
overview, 367–368
sensitive data, 386–388
SQL injection, 376
threats and countermeasures, 368–369
validating input used for, 270
windows authentication, 379
data access code
code review, 640–642
threats and attacks to, 369
data protection API. See DPAPI
data streams
classes, 619
validating, 170–171
data-bound controls for cross-site scripting, 273
database connections
closing, 642
code review, 640–641
data access, 391
pooling, 85
strings, 109
database servers
checklists, 729–733
configuration, 670–677
how to secure, lxix
installing certificates on, 536
methodology, 506–508
overview, 501–502
remote administration, 539–540
restricting communication, 783
security categories, 506
snapshot of ideal security for, 533–535
SQL Server installation considerations, 509–510
staying secure, 536–538
steps for securing, 511
threats and countermeasures, 502–503
databases
authenticating, 109–110
objects, 532–533
permissions, 531, 676
restricting applications in, 383
schemas and connection details, 371
securing sensitive data in, 641
date fields, 267
db_datareader, 531
db_datawriter, 531
DCOM
impersonation levels, 497
static endpoints, 492
debug compiles, 463–464
debugging ASP.NET application and Web services, 571
declarative security, 135–136
declarative security attribute, 624
DecryptionKeyProvider class, 338
default ASP.NET process account, 578
default credentials, 356
default ports, 568
DefaultCredentials, 250
delay signing, 157–158
delegates
checklists, 737
code access security, 217–218
code review, 622
described, 169–170
permission issues, 217–218
delegation, unconstrained, 301, 306–307
demand / assert pattern, 204
demands, 625
code access security, 184
denial of service attacks
ASP.NET application and Web services, 583
described, 17, 20, 22, 41
how to secure against, lxxi
network security, 407–408
remoted objects, 364
Web servers, 424
deny methods, 185
deployment
checklists, 689, 710, 719
considerations, 72
core elements of reviewing, 644
data access considerations, 397–399
Enterprise Services configurations, 314
and infrastructure of applications, 100
overview of reviewing, 643–644
problems of, xlviii
remoting, 348
secure Web services considerations, 343
serviced components considerations, 314–316
Web server configuration review, 644–651
design
checklist of considerations, 715
checklists, 690, 695, 705
data access considerations, 372–375
guidelines for applications, 97–99
remoted components considerations, 352
secure Web services considerations, 324–325
serviced components considerations, 302–303
Web application vulnerabilities issues, 71–72
Web pages and controls considerations, 260–263
design review. See architecture
detection of patch management, 748–750
developer workstations, how to secure, lxv, 765–775
development solutions, lxiv–lxviii
DFD, 53
dictionary attacks described, 30
digital signature algorithms, 179
directed broadcast traffic, 411
directory services, 210
directories
checklists, 725, 730
data server configuration, 673
vulnerabilities, 428
Web server configuration, 648–649
Web servers, 446
directory access control, 284
directory service, 210
DirectoryServicesPermission, 142, 210
requesting, 211
DirectoryServicesPermissionAttribute, 210, 211
disclosure of confidential data, 32
disclosure of configuration data
data access, 370
secure Web services, 323
DisplayCustomerInfo method, 382
dispose methods synchronization, 172, 618
distributed transaction coordinator, 102
distributed transactions, 671
Dllhost.exe, 303, 666
DLLs, 498
DNS
code access security, 213
names, 249–250
servers, 414
DnsPermission, 142, 213–214
documentation protocol, 664
domain name restrictions, 654
Domain Name System. See DNS
do's and don'ts, 728
Dotfuscator, 173
DPAPI
in AppSettings, 547
ASP.NET application and Web services, 584
to avoid key management, 288
calling from a medium trust Web application, 819–822
and CRYPTPROTECT_LOCAL_MACHINE flag, 374
data access, 374
and key management, 93
and storing secrets, 306
storing sensitive data in, 88
updating managed wrapper code, 817–819
DREAD, 63–65
DSA. See digital signature algorithm
DTC
application server, 490
application server requirements, 483–484
firewalls, 303, 318, 523
serviced components requirements, 316
dynamic port allocation, 483
dynamic SQL, 378
dynamic Web server, 457
dynamically compiled assemblies, 230
E
eavesdropping. See sniffing
egress filtering, 410
elevation of privileges
described, 17, 32
Web servers, 425
EnableSessionState attribute, 289
enableViewStateMac attribute, 291
encoding characters, 612
encoding output, 612
encryption. See also cryptography
algorithms and need for quality in, 38
of file system, 520
network security, 417
parts of a message, 338–339
of secrets, 621
symmetric, 620
and verification in ASP.NET application and Web services, 584
encryption keys, 120
ASP.NET application and Web services, 570
securing, 92–93
encryption sink, 361–364
EncryptionPermission, 805–806
creating, 807–814
inheritance hierarchy, 806
EncryptionPermissionAttribute class, 815–817
end users
authorization granularity, 84
authorizing, 112
lockout policies for accounts, 81
EndpointPermission, 142
Enterprise Services
accounts, 665
application authentication levels, 494
application server, 480, 482–483, 487–488
applications, 493
applications and Windows authentication, 304
checklist, 709–711
components, 488
in deployment topology, 102–103
files and directories, 665
firewall port configuration, 482
how to secure, lxx
threats, 301
typical deployment configurations, 314
using HTTP Web services facade layer, 315
Web server configuration, 664–668
entropy values, 177
entry points
identifying, 54
unmanaged code, 629
enumerated types, 629
environment variables
checklists, 740
constraining access, 211
file I/O, 164
EnvironmentPermission
default credentials, 250
requesting, 211
table, 142
EnvironmentPermissionAttribute, 211–212
error handling
application level, 294
in Global.asax, 341
error messages
detailed, 630
logging, 95
escalating privileges, 15–16
event handlers, 633
event logging
ASP.NET, 244
ASP.NET application and Web services, 576–577
assemblies, 165–166
checklists, 739
code access security, 207
constraining, 208
constraining code, 208
of key events, 96
event sources, 309
EventLogPermission, 207, 296
requesting, 208
table, 142
EventLogPermission class, 244
EventLogPermissionAccess.Instrument, 208
EventLogPermissionAttribute, 208
Everyone group
accessing shares, 673
database servers, 520
restricting, 648
securing shares, 673
Web servers, 446
Everyone permissions, 673
evidence, 183
exception management, 94, 122–123, 161–164
applications, 122–123
ASP.NET, 293–294
ASP.NET application and Web services, 572
checklists, 693, 699, 707, 715, 719, 737
data access, 389–393
framework, 163
remoted objects, 364–365
secure Web services, 339–340
exception trapping
data access, 389
with Page_Error event, 294
exceptions. See also exception management; exception trapping
code review, 619–620
exception objects, 339
filter issues, 162–163
handling, lxiv
handling threats, 40–41
information diagram, 392–393
logging, 389–390
objects, 339
SoapException objects, 339
SoapHeaderException objects, 339
Web services, 339
exclusive code groups, 190
expiration periods
ASP.NET application and Web services, 562
using fixed, 281
explicit interfaces, 627
explicit role checks
authorization decisions, 285
for fine-grained authorization, 285–286
with IPrincipal.IsInRole method, 137
exploiting and penetrating, 15
exposing fields with properties, 154
extended stored procedures, 532
extranet
deployment, 343
Web applications, 74
F
factory default settings, 679
failed actions, 452, 675
failing in ASP.NET, 262
false positives from security update checks, 792
fast track, lxxiii–lxxiv
feedback and support, lx
fields, 154, 617
file authorization
ASP.NET application and Web services, 563
gatekeeper, 563
for user access control, 359–360
with Windows authentication, 284
file I/O, 205–207
assemblies, 164–165
checklists, 739
code access security, 205–207
code access security constraints, 830–831
code access security policy, 828–830
creating an assembly that performs, 825–826
how to constrain, lxvi, lxxi
medium trust, 205–206
testing with no code access security constraints, 827
validating input used for, 270
FileAuthorizationModule, 359–360
ASP.NET, 350
web service endpoint authorization, 336
with Windows authentication, 284
FileDialogPermission, 142
FileIOPermission, 207
demand, 199
in medium trust applications, 239
in partial trust Web applications, 231
requesting, 207
and state, 229
table, 142
files
access, 577–578
checklists, 730
names, 164
path lengths, 169
types, 662
files and directories
checklists, 725
data server configuration, 673
database servers, 519–521
Enterprise Services, 665–666
vulnerabilities, 428
Web server configuration, 648–649
Web servers, 428, 446
filtering
network security, 410
ports and authentication, 777–786
filters
actions, 778–779
actions described, 779
described, 778–779
IPSec policies, 778–779
network security, 414–415
routines, 378
Findstr command line tool, 606–607
fine-grained authorization, 285–286
firewalls
checklists, 722
configuring to support DTC traffic, 318
considerations, 482–486
data access restrictions, 397
deployment restrictions, 314–315
deployment review, 679
in deployment topology, 102
Enterprise Services port configuration, 482
and IPSec, 778
limitations of, xlvii–xlviii, 3
network security, 409
network security considerations, 413–416
to support DTC traffic, 523
fixed identities
impersonating, 286
impersonation of, 597–599
footprinting, 21
forbidden resources, 575
forewords
Erik Olson, xliv–xlv
Joel Scambray, xliii
Mark Curphey, xli–xlii
Michael Howard, xlvi
form fields
inputting, 631–632
manipulation described, 40
FormatException, 267
forms authentication
guidelines, 560
how to secure, lxvii
issues, 601
SSL, 562
Web pages and controls, 277–278
<forms> element, 281
Forms-authentication cookie encryption, 570
FormsAuthentication type, 141
FormsAuthenticationTicket, 281
FormsIdentity type, 141
formulas for assessing risk, 63
404.dll, 437
ASP.NET application and Web services, 547
Web servers, 457–458
FPSE. See FrontPage server extensions
fragmented packets, 761
<frame> security attribute, 613
for cross-site scripting, 277
free format input sanitization, 273
free-text field, 79
FrontPage server extensions
Web server configuration, 655
Web servers, 455–456
FTP
disabling, 646
Web servers, 439
full trust and partial trust, 224–225
full trust environment, 151
FxCop, 606
G
$Gac$, 230
GAC, 817
Gacutil.exe, 246, 817, 830–831
gatekeepers, 83
for end user authorization, 112
generic error pages
returning to the client, 293–294
using in ASP.NET applications, 392–393
GenericIdentity type, 141
GenericPrincipal type, 137, 141
global assembly cache, 159
Global.asax, 341
event handlers, 633
<globalization> element, 274
granularity, 83–85
group membership
auditing, 469
database servers, 537
/GS switch, 169
guest accounts
on database servers, 672
Web servers, 443
H
hack-resilient application, xlviii
hard-coded strings, 606–607
hashes
code access security, 183
cryptography, 620
one-way, 283
salt, 388
hide server option, 523
hierarchical configuration, 550
hierarchical policy evaluation, 550–551
HKEY_CURRENT_USER, 166–167, 385, 398
HKEY_LOCAL_MACHINE, 166, 384
holistic approach, lxxiv–lxxv
to security, lvii, 6
hosting scenario, 554
hosts
configuration categories, 8–9
identifying threats, 58
securing, lxxv–lxxvi
security categories, lxxvi, 8
threats and countermeasures, 20–22
HotFix & security bulletin search, 752
how to
index, 743
code access security policy to constrain an assembly, 823–831
create a custom encryption permission, 805–822
harden the TCP/IP stack, 755–766
implement patch management, 745–754
IPSec for filtering ports and authentication, 777–786
Microsoft Baseline Security Analyzer, 787–793
secure developer workstations, 765–775
URLScan, 801–804
use IISLockdown.exe, 795–799
use IPSec for filtering ports and authentication, 777–786
use this guide, li–liii
Howard, Michael, foreword, xlvi
HTML
characters, 611
controls, validating, 269
permitting safe elements, 273
tags and attributes, 610–611
validating controls, 269
HTTP
channel, 486–487
Get and Post protocols, 664
HTTP headers, 94, 121
manipulation described, 40
HTTP-based attacks, 414
HTTP-GET protocol, 90
HttpChannel
ASP.NET, 669
with SSL, 481
to take advantage of ASP.NET security, 352
HttpContext.current.request.MapPath, 206
HttpContext.User
web method authorization, 336
with Windows authentication, 284
HttpForbiddenHandler
ASP.NET application and Web services, 547, 575
Web servers, 462–463
<httpHandlers> element
remoting, 573
Web server configuration, 662
HttpOnly cookie
attribute for cross-site scripting, 276
option, 613
HttpOnly property, 276
<httpRuntime> element
ASP.NET application and Web services, 583–584
Web server configuration, 657
HttpUtility.HtmlEncode, 273
HttpUtility.UrlEncode, 273
hybrid model, 85
I
ICMP
common messages, 410
protecting against attacks, 759
screening from the internal network, 410–411
IDC, 302
identifier exchange, 118
identity, 594–599
<identity> element
ASP.NET application and Web services, 558–559
encrypting credentials for, 559
impersonation, 286
Web server configuration, 660
identity flow, 96
identity obfuscation. See spoofing
identity objects
per authentication type, 134
role-based security, 134
identity permissions, 184
identity (run as), 493
identity spoofing
described, 257
Web pages and controls, 257–258
<identity userName= password= />, 546
IDisposable, 617
IDS. See Intrusion Detection Systems
IIS 5, ASP.NET architecture on Windows 2000, 591
IIS 6
allow IIS to control password option, 597
ASP.NET architecture on Windows 2000, 592–593
IIS
anonymous account, 446–447
to configure virtual directory, 332
developer workstations, 770–771
file extensions, 457
hosting, 486–487
installation defaults, 430
installed on an NTFS volume, 648
log files, 452
metabase, 429, 460, 656
metabase checklists, 727
metabase vulnerabilities, 429
and .NET framework installation considerations, 430–432
for programmatic impersonation, 286–287
securing for developer workstations, 770–772
turning off anonymous authentication, 355
W3C extended log file format auditing, 453
web server configuration, 652–656
IISlockd.exe, 435–436
IISLockdown. See also IISLockdown.exe
checklists, 723
securing for developer workstations, 770–771
undoing changes, 798
URLScan without, 437–438
Web server configuration, 652
Web servers, 435
IISLockdown.exe
described, 795–796
how to use, 795–799
installing, 796
running, 797
Ildasm.exe, 607
ILease interface, 364
imperative principal permission demands, 285
imperative security, 135–136, 624
impersonation. See also <identity> element
of anonymous accounts, 595–597
application server, 497
ASP.NET, 286
ASP.NET application and Web services, 546–547, 558–559
and ASP.NET applications, 286
checklist, 711
code, 618
of fixed identities, 286, 597–599
impersonation levels
choosing, 666–667
code review, 636–637
configuring with <processModel> element, 306
serviced components, 306–307
impersonation model providing per end user authorization granularity, 84
impersonation tokens, 172
ImpersonationLevel=ImpersonationLevelOption.Identify, 306–307
implementation technologies, 52
indexes
of checklists, 687–688
of "how to" articles, 743
information disclosure, 17
assemblies, 148
described, 259
Web pages and controls, 259–260
information gathering
described, 18–19
network security, 405
infrastructure
checklists, 689
restrictions on security, 103
ingress and egress filtering, 410
inheritance
restricted, 198
restricting, 198
inheritance hierarchy, 806
innerHTML property, 277, 613
innerText property, 277, 613
input
assuming maliciousness of, 75
centralizing, 75
constraining, 77, 264, 376
fields, 610
file names, 164
rejection, 77
sanitizing, 78–79, 269
validation, 24–25
validation for Web applications, 74–77
where to constrain, 79
input parameters
system.text.RegularExpressions.Regex for validating, 293
validating, 293
input validation
centralized approach, 75
checklist, 690, 705, 715
checklists, 696
for cross-site scripting, 273
data access, 376
how to perform, lxvii
remoted objects, 353
secure Web services, 326–331
server-side, 260
strategy, 77
vulnerabilities, 105–107
in Web controls and user controls, 263–272
insecure defaults, 417
installation
production server considerations, 729
Web server recommendations, 432
integrated Windows authentication, 332–333
integrity
as element of security, 5
on the network, 399
requirements, 325
interactive accounts, 665
interfaces
explicit, 627
and link demands, 202
unused, 412
intermediate language, 130–131
internal DNS servers, 414
internal networks, 410–411
Internet
clients and remoting, 668
deployment, 344
remoted objects, 352
Web applications, 74
zone permissions, 465
Internet Data Center. See IDC
intersections, 187–188
intranet
deployment, 343
traffic, 449
Web applications, 74
introduction, xlvii–lii
Intrusion Detection Systems, 413, 679
network security, 413
IP addresses
and calling Web services, 249–250
restrictions, 654
revealing, 656
IP filter lists, 778, 780
IP networks, 417
<IPermission> element, 229
IPrincipal objects
passed from the client, 358
TCPChannel considerations, 353
unauthorized access, 350
IPrincipal.IsInRole, 285–286, 336
method, 137
IPSec
creating and applying policy, 781–782
for filtering ports and authentication, 777–786
and firewalls, 778
for machine level access control, 359
remoted objects, 361
with the TCPChannel, 481
tools, using, 785
using for filtering ports and authentication, 777–786
using tools, 785
IPSecpol.exe, 785
ISAPI filters
checklists, 727
vulnerabilities, 429
Web server configuration, 655
Web servers, 429, 459–460
IsCallerInRole method, 313
ISerializable interface, 218, 618
ISerializable.GetObjectData implementation, 218
IsolateApps setting, 601
IsolatedStorageFilePermission, 142, 233
IsolatedStoragePermission, 142, 193
IUnrestrictedPermission interface, 199, 805
IUSR accounts, 443
Index
K
Kerberos
and IPSec, 784
tickets, 335
keys. See also public keys
activities, 123
compromised, 178
events, 96
exchange, 178
generation considerations, 174–175
largest preferable, 175–176
maintaining, 178
managing, 93, 288
persisted, 177
poor generation or management of, 38
registry, 524
shared in symmetric encryption, 338
storage, 176–177
for Web farms, 571, 584
keywords, 153
L
lack of individual accountability, 34
last name field, 79
layer separation, 375
least privileged accounts, 380
ASP.NET application and Web services, 568
data access, 373
database servers, 528–529
and Enterprise Services server applications, 665
<processModel>, 663
least privileged accounts, 766–768
least privileged code, lxvi
least privileged custom accounts, 557
least privileged domain accounts, 579
least privileged run-as accounts, 306
level final code groups, 190
library applications, 666–667
<lifetime> element, 364
LIKE clauses, 378
link demands
calling methods with, 201
code access security, 184–185, 199–200, 201, 625–627
described, 200–201
and interfaces, 202
luring attacks, 200–201
and luring attacks, 200
performance, 201
local administrators group membership, 445
local intranet zone permissions, 465
<location> element, 551–552
for authentication, 279
configuring trust levels with, 225
lockout policies for end-user accounts, 81
log files
backing up, 96
IISLockdown.exe, 798
management policies, 124
securing, 96
URLScan, 802
logging. See also event logging
ASP.NET, 295–296
checklists, 694, 699, 707, 710, 715, 726, 732
data server configuration, 675
database servers, 525–526
enabling, 451
network security, 413, 415
remoted objects, 365
secure Web services, 341
serviced components, 308–309
vulnerabilities, 429
Web applications, 95–96
Web server configuration, 651–652, 654
Web servers, 429, 451–452
logical view of role-based security, 132, 133
logins
account configuration, 398
BUILTIN\administrators server, 530–531
for database administrators, 530–531
limiting in database, 566–567
logons
auditing, 398
the importance of auditing failures, 525, 675
restricting local, 536
restricting remote, 672
logs
auditing, 469
key events, 96
loosely typed parameters, 328
LSA, 88
luring attacks
code access security, 200–201
described, 33
link demands, 200–201
and StrongNameIdentityPermission, 200
M
machine keys
ASP.NET application and Web services, 570–571
user keys, 176–177
machine level access control, 359
Machine.config
ACLs, 554
application configuration settings, 552
ASP.NET application and Web services, 548–555
checklists, 727
how to make settings more secure, lxviii
plaintext in, 621
<processModel> element in, 545
vulnerabilities, 429
Web server configuration, 657–663
Web servers, 429, 462
<machineKey> element
ASP.NET application and Web services, 562, 570
configuring for view state encryption and integrity checks, 291
Web server configuration, 661
MACs, 89, 291, 569
main remoting threat, 349
man in the middle attacks, 37
message replay attacks, 324
managed code
benefits of .NET, 130–131
checklists, 735–742
code review, 616–622
how to review, lxv
how to write, lxiv
review guidelines, 735–739
managed wrapper code, 817–819
management options, 289
MapPath, 271
calling, 631
MarshalByRefObject attacks, 354
MatchAllTraffic, 782
MatchHTTPAndHTTPS, 782
MBSA, 746–754, 787–793. See also patch management
database servers, 511–512
to detect missing security patches, 748
to detect the patches, 434
developer workstations, 768–769
explained, 749–750
how to use, 787–793
and .NET Framework, 490
role in patch management, 746–747
to secure developer workstations, 768
using regularly, 538
to verify the registry permissions, 524
Mbsacli.exe, 790
and Mbsa.exe, 793
Mbsa.exe and Mbsacli.exe, 793
medium trust
ASP.NET, 239–243
file I/O, 205–206
OLE DB, 240–241
registry, 250
restrictions, 240
sandboxing, 241–243
medium trust Web applications
calling a single Web service from, 248
calling DPAPI, 819–822
calling multiple Web services from, 249
member level attribute, 624
member visibility, 153
members, 623
membership conditions, 186
message authentication codes. See MACs
message level authentication, 333
message replay attacks, 323–324
Basic replay attacks, 324
man in the middle attacks, 324
MessageQueuePermission, 142, 193
metabase.bin file, 452
metadata, 636
method level link demands, 201
mixing with class demands, 201
methodology
application server, 480
network security, 408–409
for securing Web servers, 426–429
methods
calling with link demands, 201
principal demands, 284
Microsoft Baseline Security Analyzer. See MBSA
Microsoft Intermediate Language
and obfuscation, 173
reverse engineering, 148
Microsoft Management Server, 521
Microsoft .NET remoting. See .NET remoting
Microsoft Operations Manager, 521
Microsoft patterns & practices guidance, 681
Microsoft Search, 513
Microsoft Security Notification Services, 684
Microsoft Security Services, 682
Microsoft Security-Related Web Sites, 681–682
Microsoft Solutions Framework, liii
Microsoft SQL Server Desktop Engine. See MSDE
Microsoft Systems Management Server, 448, 753
Microsoft Visual Studio .NET
obfuscation tool, 173
regular expressions, 272–273
setting validation expressions in, 265
Microsoft.Web.Services.WebServicesClientProtocol, 342
Microsoft.Win32.Registry class, 208
middle tiers
auditing in, 638
serviced components in Enterprise Services application, 300
minimum permissions, 624
MMC snap-in, 540
MOM. See Microsoft Operations Manager
MSDE
patching, 512
securing for developer workstations, 772–774
and SQL server, 791
MSIL. See Microsoft Intermediate Language
MSSQLSERVER, 513
MSSQLServerADHelper, 513
multiple applications
checklists, 703
hosting on the same server, 262
multiple gatekeepers, 83
multiple Web applications
forms authentication issues, 601
overview, 589–590
UNC share hosting, 602
MyBlock, 782
MyPermit, 782
N
named instances
configuring to listen on the same port, 674
database servers, 522
named permission sets, 229–230
names, 266
naming conventions
code access security, 214–215
to indicate risk, 214–215
for unmanaged code methods, 629
NAT. See Network Address Translation
native classes, 215
with SuppressUnmanagedCode attribute, 215
.NET Framework
Enterprise Services tools and configuration settings, 489
file extensions on Web servers, 458–459
IISlockd.exe, 797
installation considerations on Web servers, 430–432
installation defaults, 431
installation on application server, 489
and MBSA, 490
role-based security, 133–139
security namespaces, 139–140
security overview, 129–130
and SecurityException, 140
and System.Web.HttpForbiddenHandler, 575
version 1.0, 222
Web server file extensions, 458–459
Web servers running, lxviii
.NET Framework version 1.1
IsolateApps setting, 601
restricting authentication cookies in, 280
.NET remoting
application server, 481, 484–485
deployment, 103
how to secure, lxx
security considerations, 486
Web servers, 463
NetBIOS
and calling Web services, 249
and database server security, 514
disabling, 647
Web servers, 441–443
Netdiag.exe, 785
Netstat output, 649–650
Network Address Translation, 761
network eavesdropping, 35–36
application server, 477–478
data access, 372
database servers, 504
described, 29, 259
remoted components, 350
secure Web services, 322
serviced components, 301
Web pages and controls, 259
network security
auditing and logging, 413
firewall considerations, 413–416
router considerations, 409–411
network service accounts
ACLs, 593–594
on Windows Server 2003, 325
networks
checklists, 721–722
component categories, 7
components, 403–404
configuration deployment review, 677–678
data privacy and integrity on, 399
identifying threats, 57–58
and plaintext credentials, 358
securing, lxxv, 403–404
securing sensitive data over, 387
security elements, lxxv
snapshot of, 418–419
threats and countermeasures, 18–20, 405
topology details, 762
newsgroups, lx, 683
home pages, 683
NICs, 449
Nimda, 414
NNTP
disabling, 64
Web servers, 439
NoLMHash, 674
non-base classes, 153–154, 617
non-repudiation, 91
nonce
defined, 324
and timestamp, 334
notification
services, 538
Web sites, 684
NTFS permissions
for ASP.NET process accounts, 578
requirements, 559
shares, 521
for SQL Server service account, 519
Web servers, 460
NTFS volumes, 648
NTLMv2 authentication, 518
null sessions
database servers, 517
disabling, 648, 672
Web servers, 445
numeric fields, 267
O
obfuscation, 173
object constructor strings
code review, 638
storing secrets in, 306
objects
handing out references, 623
passing as parameters, 639
SQL Server default permissions of, 531
objectUri, 359–360
OdbcPermission, 142, 396
OLE DB, 240–241
OleDbPermission, 143, 193, 239, 240
Olson, Erik, foreword, xliv–xlv
one-click attacks, 292
one-way hashes, 283
open hack challenge, xlviii
operating system/platform security layer, 223
optional permissions, 624
OraclePermission, 143, 193
organization of this guide, liii–lvi
original caller identity, 124
original user identity, 109
out-of-process state service, 568
output
encoding, 612
encoding for cross-site scripting, 273
outputting input, 609
over-privileged application and service accounts, 34
P
P/Invoke, 216
packets
destined for multiple hosts, 762
fragmented, 761
page and directory access control, 284
page classes, 634
page level or application level error handlers, 294
Page_Error event, 294
<pages> element
ASP.NET application and Web services, 569
and view states, 291
Web server configuration, 658
Page.ViewStateUserKey property
to counter one-click attacks, 292
setting for view state, 292
parameter manipulation
ASP.NET, 290–293
attacks, 93–94
attacks described, 39–40
checklists, 693, 698, 706
described, 258–259
remoted components, 351
secure Web services, 322, 339
vulnerabilities, 120–121
parameters
batching, 378
objects passing as, 639
type safe, 377
parameters collection
with dynamic SQL, 378
with stored procedures, 377
parent paths
setting, Web server configuration, 655
setting Web servers, 453–454
partial trust
ASP.NET, 224–225
considerations, 171
identifying environments, 151
supporting callers, 152
partial trust Web applications
approaches, 234
developing, 231–232
partial trust callers, 622–623
partitioning of Web sites, 261
partners and service providers, 682–683
PassportIdentity type, 141
PasswordDeriveBytes, 175, 179
passwords
blank, 641
cracking, 505
cracking described, 22
do not send in plaintext, 82
need for expiration, 82
need for strength in, 82
one-way hashes, 283
policy default and recommended settings, 517
scans with Baseline Security Analyzer, 793
secure Web services, 334
storing hashes, 659
storing with salt, 388
system administrators, 530
user stores, 82
using attributes safely, 288
Web servers policies, 444
patch management. See also MBSA
acquiring, 751–752
assessing, 751
backups, 746
deploying, 752
detecting, 748–750
how to implement, lxviii, 745–754
testing, 752
patches. See also updates
application server, 489–490
checklists, 723, 729
data server configuration, 671
database servers, 511–512, 537
detecting with MBSA, 434
developer workstations, 768–770
network security, 409, 413, 416
to secure developer workstations, 768
vulnerabilities, 427
Web server configuration, 645
Web servers, 427, 434, 470
Web sites, 683
Path.GetFullPath function, 206
per user data, 89
performance, 201
PerformanceCounterPermission, 143
perimeter networks, 415–416
permission requests, 188–189
and policy grants, 188
permission sets, 186
without elements, 228
permissions. See also code access
ASP.NET application and Web services, 554
assembly, 817
checklists, 703–704
code access security, 184
configuring on the SQL Server install directories, 673
custom, 199
dangerous, 627
database servers, 531
delegates, 218
demands, 625
Everyone, 673
how to create custom encryption permission, 805–822
identity, 184
minimum, 624
optional, 624
Read, 455
refuse, 624
removing for the public role, 676
requesting in code access security, 194–196
restricted and unrestricted, 184
and unrestricted permissions, 229
write, 456
PermitOnly
code access permission classes, 185
using to restrict file I/O, 206
Persist Security Info attribute, 385, 641
persisted keys, 177
persistent cookies, 90
personalization cookies, 282
pitfalls in IISLockdown, 799
PlaceOrder method, 212
plaintext
credentials and networks, 358
passwords in configuration files, 288
storing sensitive data in, 88
platform level authentication, 344
secure Web services, 332
policies
code access security, 185
customizing for ASP.NET, 235, 238
customizing for medium trust, 250–251
evaluating at policy levels, 187–189
using permission grants, 205
policy files, 227–229
policy grants, 188
policy level, 189–190
policy permissions and trust levels, 233–234
port 80, 543, 779
port 443, 779
port 1433, 783
port 1434, 783
ports
application server, 491
and authentication, 777–786
checklists, 725, 731
configuration in Enterprise Services, 482
data server configuration, 674
database servers, 522
defining ranges, 315, 483
ranges, 491
vulnerabilities, 428
Web server configuration, 649–650
Web server configuration
considerations, 668–669
Web servers, 428, 449
positioning of this guide, lviii–lix
PPTP, 439
pre-shared secret key and IPSec, 784
principal and identity objects per authentication type, 134
principal demands on classes and methods, 284
principal objects, 134
and custom authentication, 639
per authentication type, 134
role-based security, 134
principal-based role checks, 360
PrincipalPermission
objects, 134–136
table, 143
PrincipalPermissionAttribute type, 135
principals defined, 4
PrintingPermission, 143, 233
privacy, 91
and integrity requirements, 325
private access modifier, 153
private assemblies, 230
privileged code, 149
code access, 193
code access security, 193
identifying, 54, 150
sandboxing, 152–153, 236–239
privileged commands, 767
privileged operations, 150
and associated permissions, 194
code access, 194
code access security, 194
code review, 635
exposing, 625
identifying, 151
privileged resources, 150
code access security, 193
identifying, 151
privileges
escalating, 15–16
restricting, 87
process accounts, 109
process identity, 556–558
<processModel> element, 310
to configure the impersonation level, 306
least privileged accounts, 663
in Machine.config, 545
process identity, 556–558
Web server configuration, 663
<processModel> element encrypting credentials, 663
<processModel userName = password= />, 546
product life cycle, lii, lxxix
production servers
and <credentials> element, 659
installation considerations, 729
profiling, 423
programmatic authorization, 336
programmatic impersonation, 286–287
properties, exposing fields with, 154, 617
Protection="all", ASP.NET application and Web services, 561
protocols
checklists, 724, 730
data server configuration, 671
database servers, 513–514
network security, 410
vulnerabilities, 428
Web server configuration, 645–646
Web servers, 428, 440
and WebDAV, 646
<protocols> element, 574
proxy considerations
checklists, 707
secure Web services, 341–342
proxy credentials configuration, 639
public access modifier, 153
public areas, 81
public interfaces, 153
public keys, 198
public roles, 531
public types, 623
Index
Q
quantity field, 79
query strings
input, 632
manipulation described, 39
Index
R
RACI chart, lxxxi–lxxxii
RADIUS. See remote authentication dial-in user service
Random class versus RNGCryptoServiceProvider, 175
random keys, generating, 175
random numbers, 620
range checks, 268
RDP
copying files over, 473
Microsoft Terminal Services, 539
terminal services, 472
RDS. See remote data services
Read permissions, 455
read-only properties, 617
recommended settings, 517
reduced attack surface, 239
reference hub, 685
reflection, 172–173
checklists, 738
code review, 619
on types, 619
ReflectionPermission, 143
refuse permissions, 624
Regex class, 265
RegexOptions.IgnorePatternWhitespace, 265
Regex.Replace, 269
registry, 208–209
ASP.NET application and Web services, 579
checklists, 725, 731, 739
code access security, 208–209
constraining access to, 209
custom policies to allow access, 251
data server configuration, 674
database servers, 523–524
event logging, 166
medium trust, 250
reading from, 167
storing secrets in, 621
verifying permissions with MBSA, 524
vulnerabilities, 428
Web server configuration, 651
Web servers, 428, 449–450
registry keys, 524
RegistryPermission, 143, 208
requesting, 209
RegistryPermissionAttribute, 209
Regsvcs.exe, 310
regular expressions
comments, 265
common, 271
fields, 271–272
for strong passwords, 283
in Web controls and user controls, 264–265
RegularExpressionValidator control
for constraining data, 264–266
for validating form field input, 632
rejectRemoteRequests, 360
relationship of chapter to product life cycle, lxxix
remote access, limiting, 360
remote administration, 114
database servers, 539–540
how to perform, lxxi
Web servers, 471–473
remote application servers
deployment model, 476
in deployment topology, 102
remote authentication dial-in user service, 417
remote data services, 454–455
remote database, 578
remote logons
database servers, 517
Web servers, 444
remote procedure call. See RPC
remote registry administration, 651
remote resources, 262
remote scans, 792–793
remote serviced components, 668
remoted components
design considerations, 352
overview, 347–348
threats and countermeasures, 349
remoted objects
auditing and logging, 365
authentication, 355–358
authorization, 359–360
custom encryption sink, 361–364
exception management, 364–365
exposing to Internet, 352
input validation, 353
sensitive data, 361–364
remoting
ASP.NET application and Web services, 573
checklists, 713–715
code review, 638–639
<httpHandlers> element, 573
main threats, 349
in trusted server scenario, 353
typical deployment, 348
Web server configuration, 668–670
report details for a scanned machine, 749
repudiation
described, 17
serviced components, 302
threats, 42
request sizes, 803
Request.Cookies, 632
RequestMinimum method, 195
code access security, 195
RequestOptional method, 195–196
Request.QueryString, 632
RequestRefuse method, 195–196
required shares, 448
RequiredFieldValidator, 268
for constraining data, 264
resource access
ASP.NET, 223–224
checklists, 739–740
resource access code, 263
resource access identities, 262, 325–326
resources
alerts and notifications, 684
and associated permissions, 193
communities and newsgroups, 683
index of checklists, 687–688
Microsoft patterns and practices guidance, 681–682
partners and service providers, 682
patches and updates, 683
Response.Write, 610
restrict file I/O, 206
restricted ACLs, 386
restricted areas, 81
restricted inheritance, 198
restricted operations or data, 635
restricted pages
access to, 634
subdirectory for, 278
restricted permissions, 184
restricting unauthorized callers, 382
restricting unauthorized code, 382
retrieval of plaintext configuration secrets, 34
RevertAssert method, 203
reducing assert duration, 204
risk = probability * damage potential, 63
Rivest, Shamir, and Adleman. See RSA
RNGCryptoServiceProvider
creating a salt value, 388
Random class, 175
role checks
performing in code, 638
principal-based, 360
role-based authorization, 302
role-based security, lxiv
checks, 137
code review, 637–638
configuring with <authorization> element, 138–139
enabling, 495
identity objects, 134
logical view of, 132–133
.NET, 131–132
principal objects, 134
serviced components, 304
System.Security.Principal.IPrincipal interface, 134
routers
checklists, 721
considerations, 409–411
deployment review, 678
logging features of, 679
network security, 408–409
RPC
dynamic port allocation, 483, 491
encryption and IDC, 302
packet level authentication, 301
RSA, 179
rules
described, 779
IPSec policies, 778–779
run-as accounts, 306
Runas.exe utility, 767
runat="server" property, 265
runtime, creating code dynamically at, 619
Index
S
sa. See system administrator
safe classes, 214
salt, 388
SAM, 651, 674
database servers, 524
sample databases, 532, 677
sample files, 447
sandboxing
deciding when, 238–239
defined, 152
event logging code, 244–247
OLE DB resource access, 241–243
in partial trust Web applications, 234
privileged code, 236–237
unmanaged API calls, 215–216
sanitizing input, 77
Scambray, Joel, foreword, xliii
schema element examples, 330–331
scope of the guide, xlix–l, lxxiii
screened network details, 761
protecting against, 761
script mappings
checklists, 726
vulnerabilities, 429
Web server configuration, 653
Web servers, 429, 456–459
script source access, 456
SDKs
database servers, 520
Web servers, 447
sealed keywords, 153
secrets. See sensitive data
secure sockets layer. See SSL
security
account manager, Web servers, 450
of applications, 9–10
assessments, 538
assessments of Web servers, 470
audit logging, 526
caching results of checks, 171–172
checklists, lxxx
creating profiles, 55
elements of, 4–5
holistic, 6
of host, 7–9
knowledge in practice, 685
layers, 223
namespaces, 139–140
network, 7
principles, 11
of Web application, 5–6
Web application policies, 73
security account manager. See SAM
<Security> element, 334
security notification services, 754
database servers, 538
using, 754
Web servers, 470–471
security profile documentation, 55–56
Security Support Provider Interface, 494
Security tab of the SQL Server properties dialog box, 527
security updates
Baseline Security Analyzer, 789–790
false positives, 792
SecurityAction.PermitOnly, 205, 206, 209–212, 213
SecurityAction.RequestMinimum method, 195, 207–209, 211
SecurityAction.RequestOptional method, 195–196
SecurityAction.RequestRefuse method, 195–196
SecurityCallContext object, 313
SecurityCallContext.OriginalCaller, 308
SecurityException
and .NET Framework, 140
in partial trust Web applications, 232
SecurityPermission
importance of, 143
and potentially dangerous permissions, 627
and serialization, 218
SecurityPermission(SecurityPermissionFlag.UnmanagedCode), 287
sensitive data, 35–36
checklists, 692, 698, 706, 710, 715, 718, 737
common vulnerabilities, 115–117
data access, 386–388
design considerations, 302
exception management logging, 162
how to manage, lxvii
in object constructor strings, 306
per user data, 89
remoted objects, 361–364
retrieving on demand, 89
secure Web services, 337–339
securing over networks, 387
serviced components, 307–308
in storage, 374
storing, 621
types of, 87–90
Web pages and controls, 288
serialization, 170–171, 218
attacks, 354
checklists, 737
code access security, 218
code review, 618–619
remoted components, 351
sensitive data, 170
StrongNameIdentityPermission, 218
SerializationFormatter, 218
server certificates
checklists, 727
Web server configuration, 656
Web servers, 461
server-side input validation, 260
server-to-server authentication, 784–785
server-to-server communication, 784–785
servers
applications, 666–667
maintaining sensitive data on, 292
network utility, 523
Server.Transfer, 291–292, 634
authentication issue of, 278
service accounts, 109
requirements of, 108–109
service denial, 16
service packs, 683
with a base installation, 433
with a Windows installation, 433
database servers, 537
Web servers, 470
Web sites, 683
ServiceControllerPermission, 143
serviced components, 309–313
auditing and logging, 308–309
authorization, 304
call level authentication, 304
class implementation, 311–313
code access security considerations, 313
code review, 636–638
deployment considerations, 314–316
design considerations, 302–303
Dllhost.exe, 666
DTC requirements, 316
overview of building, 299–300
process identity, 309
role-based security, 304
and RPC packet level authentication, 301
Web services, 315
services
application server, 490
checklists, 723, 730
data server configuration, 671
database servers, 512–513
network security, 412, 417
vulnerabilities, 427
Web server configuration, 645
Web servers, 427, 438–439
session hijacking
described, 19–20, 36, 256
network security, 407
Web pages and controls, 256–257
session management
ASP.NET, 289–290
checklists, 692, 698
threats, 36–37
vulnerabilities, 117–118
Web applications, 90–91
session states
ASP.NET, 646
ASP.NET application and Web services, 565–569
how to secure, lxx
protecting, 91
settings, 464
session tokens
and authentication tokens, 290
and session management, 289–290
sessions
authentication cookies, 90
data, 290
identifier exchange, 118
limiting, 91
replaying, 37
states, 118
<sessionState> element
ASP.NET application and Web services, 565–566
Web server configuration, 662
<sessionState sqlConnectionString = stateConnectionString= />, 546
setup log files
database servers, 520
securing, 673
shares
checklists, 725, 731
data server configuration, 673
database servers, 521
vulnerabilities, 428
Web server configuration, 649
Web servers, 428, 448
sites
checklists, 726
code access security, 183
vulnerabilities, 429
Web server configuration, 653
Web servers, 429, 453–456
slidingExpiration="false", 562
slidingExpiration attribute, 659
SMB
and database server security, 514
disabling, 647
Web servers, 441–442
SMS. See Microsoft Systems Management Server
SMTP
commands, 414
disabling, 646
Web servers, 439
snapshot of a secure Web server, 466–469
sniffing
described, 19
network security, 406
SNMP attacks, 759
SOAP
encryption methods, 337–338
headers, validating, 635
passing sensitive data to requests or responses, 664
validating headers, 635
SoapException, 323, 340
exceptions, 339
SoapExceptions, 340–341
SoapHeaderException, 323, 339
social security numbers, 266
socket access, 213
SocketPermission, 143, 213
requesting, 214
SocketPermissionAttribute, 213
sockets
code access security, 213
and DNS, 213–214
software update services, 753
solutions
administration, lxviii–lxxi
architecture and design, lxiii
development, lxiv–lxvii
source addresses that should be filtered, 411, 678
spoofing
danger from weak authentication, 277
described, 16, 19
network security, 406
SQL. See also dynamic SQL
authentication with the database, 109
credentials for authentication credentials, 380
debugger account, 516
guest user accounts, 530
parameters, 377
SQL injection
attacks, 283–284
checklists, 717
code injection attacks, 255
code review, 614, 640
data access, 369–370, 376
database servers, 503–504
described, 27–28
how to prevent, lxvi
secure Web services, 331
SQL Server
accessing network from, 515
application server, 481, 485–486
audit level, 528
authentication, 527–528
checklist on logins, users, and roles, 732
configuring to run as account, 529
data server configuration roles, 676
database objects, 532–533, 677
database objects checklists, 733
database server roles, 529–531
default permissions of objects, 531
developer workstations, 773–774
enabling auditing, 528, 675
guest account, 676
installation cautions, 510
installation considerations, 509–510
installation defaults, 509
login auditing, 526
logins, 676
logins with database servers, 529–531
and MSDE specifics, 791–792
network protocol support configuring, 514
protocols, 671
registry keys, 524
restricting access to ports, 673
securing for developer workstations, 772–774
securing registry keys, 674
security and data server configuration, 675
security checklists, 732
security database servers, 527–529
security tab of the Properties dialog box, 527
service account for NTFS permissions, 519
service account with database servers, 515
services, 671
services and database servers, 513–514
users with data server configuration, 676
users with database servers, 529–531
verifying permission on install directories, 519
SqlClientPermission, 143, 209–210, 396
sqlConnectionString, 566
Sqldbreg2.exe, 516
SqlExceptions, 389–391
SQLSERVERAGENT, 513, 529
SSL
and credentials protection, 343
forms authentication, 562
with the HTTPChannel, 481
limitations of, 3
to protect cookies, 90
remoted objects, 361
secure restricted pages with, 279
using effectively, 290
SSPI. See Security Support Provider Interface
stack walk modifiers, 205
state database, 662
stateConnectionString, 569
stateful inspection, 415
static class constructors, 172
code, 618
static endpoints
configuring for DCOM, 492
mapping, 315, 491–492
mapping to support DCOM, 483
static routing, 679
network security, 412
static Web server, 457
staying secure checklist, 733
steps for securing Web servers, 433
storage, 35–36, 374
stored procedures
data access, 373
database servers, 532
parameters collection, 377
securing, 677
stores. See configuration stores; user stores
STRIDE
defined, 16–18
to identify threats, 57
string fields, 265
string parameters, 168
string types, 632
strncpy, 615
strong names
ASP.NET, 158
assemblies, 155
and authenticode, 159–160
code access security, 183
security benefits of, 156
strong password policies
network security, 412
Web servers, 444
strong passwords, 283
policy, 516–517
strongly typed parameters, 326–327
StrongNameIdentityPermission
in luring attacks, 200
restricting access to public types and members, 623
restricting calling code, 197–198, 641
restricting code, 156, 383
restricting serialization, 218, 618
structured exception handling, 161
structures
link demands, 202–203
and link demands, 202–203
subdirectory for restricted pages that require authenticated access, 278
substitution parameters, 230
SuppressUnmanagedCode attribute, 214
SuppressUnmanagedCodeAttribute, 628
SuppressUnmanagedCodeSecurity, 216
with COM interop, 217
with P/Invoke, 216
SuppressUnmanagedCodeSecurity attribute, 216
SuppressUnmanagedCodeSecurityAttribute, lxvii, 140
surveying and assessing, 15
SUS. See software update services
switches
checklists, 722
considerations, 409
deployment review, 679
network security, 416
symmetric encryption, 620
using custom binary tokens, 338
using shared keys, 338
SYN attacks, 756–758
SYN protection thresholds, 757
sysadmin roles
database servers, 532–533
limiting, 531, 676
system administrators
accounts, 641
passwords, 530, 676
system level resources, 83, 113
System.Data.OleDb.OleDbCommand class, 238–239
System.DateTime, 267
System.Diagnostics.EventLog class, 341
System.DirectoryServices namespace, 210
System.EnterpriseServices, 313
System.EnterpriseServices.ServicedComponent, 299
System.Environment class, 211
System.Exception, 339
System.IO.Path.GetFullPath
to canonicalize file names, 270
validating input file names, 164–165
System.MarshalByRefObject, 639
System.Net.Cookie class, 276
System.Net.Sockets.Socket class, 213
System.Reflection.Assembly.Load, 173
System.Runtime.Remoting.dll, 365
Systems Management Server. See Microsoft Systems Management Server
System.Security, 139–140
System.Security namespace, lxiv, 140–141
System.Security.CodeAccessPermission, 805
System.Security.Cryptography
creating a Salt value, 388
.NET Framework, 139–140
System.Security.Cryptography namespace, lxiv, 141, 174, 335
System.Security.Cryptography.DeriveBytes namespace, 175
System.Security.Permissions, 142
.NET Framework, 139–140
permission types, 142–143
System.Security.Permissions.PrincipalPermission objects, 135–136
System.Security.Policy, 142
.NET Framework, 139–140
System.Security.Principal, 141
.NET Framework, 139–140
System.Security.Principal.IPrincipal interface, 134
System.Text.RegularExpression.Regex, 266
System.Text.RegularExpressions.Regex, 264
for validating input parameters, 293
validating parameter lengths, 326–327
System.Web.HttpForbiddenHandler, 458–459, 462–463
mapping file types to, 662
and .NET Framework, 575
System.Web.Security, 141
.NET Framework, 139–140
System.Xml.Serialization.XmlSerializer class, 326–327
System.Xml.XmlValidatingReader, 328–329
Index
T
TACACS, 417
tags and attributes, 610–611
tampering
assemblies, 149
described, 16
tamperproofing, 91
target environment, 152
identifying, 151
and trust levels, 104–105
TCP 80, 449
TCP 443, 449
TCP channel, 486
TCP port 1433, 783
TCP/IP database servers, 514
TCP/IP stack
hardening, 647, 671
how to harden, 755–766
Web servers, 440–441
TCPChannel, 670
custom process, 670
with IPSec, 481
in trusted server scenarios, 352–353
vulnerabilities, 639
technologies
identification of, 51–52
in scope, l
Telnet, 645
terminal access controller access control system. See TACACS
terminal services
database servers, 539–540
Web servers, 472–473
text searches, 606
third party security notification services, 684
third party security-related Web sites, 682
threat rating table, 64
Thread.CurrentPrincipal property, 358
threading, checklists, 738
threads, 617–618
threat modeling
followup, 65–66
overview, 45
principles of, 47–49
process, xlix, lxxviii
threats
and attacks to data access code, 369
categories, 16–18
data access, 368–369
described, 5, 13, 45
documenting, 62
identifying, lxxvii–lxxix, 56–57
modeling process, lxxviii
rating, 63
remoted components, 349–350
secure Web services, 320–321
serviced components, 300–301
Web pages and controls, 254–255
Web servers, 422–423
3DES encryption, 386–387
tier-by-tier analysis, 100
time-out values, 562
timestamp, 334
tools
database servers, 520
Web servers, 447
topologies
deployment, 73–74
details, 762–763
ToXmlString method, 179
<trace> element
ASP.NET application and Web services, 572
Web server configuration, 657
tracing
ASP.NET application and Web services, 571–572
disabling, 630
Web servers, 463
transactions, 303
trojan horses
application server, 478–479
described, 21
Web servers, 426
trust boundaries, 53
<trust> element
and code access security, 326
configuring trust levels with, 224–227
Web server configuration, 661
Web service's trust level, 326
trust levels
in ASP.NET, 232–234, 555–556
configuring, 224–227
locking, 226
trusted server scenarios
remoting, 353
and TCPChannel, 352–353
trusted subsystem model that supports database connection pooling, 85
<trustLevel> element, 227
TTL expired messages, 411
type safe SQL parameters, 377
TypeFilterLevel property, 639
Index
U
UDL files, 386
UDP port 1434, 783
UIPermission, 143
unattended execution, 798–799
unauthorized access
to administration interfaces, 33
application server, 478–479
to assemblies, 147
to configuration stores, 34
data access, 371–372
described, 23
remoted components, 349–350
secure Web services, 321
serviced components, 301
Web servers, 424
unauthorized callers, 382
unauthorized code
code review, 641
restricting, 382
unauthorized server access, 504–505
UNC shares
ASP.NET application and Web services, 581–582
hosting, 602
unconstrained delegation, 306–307
serviced components, 301
Unicode character validation, 275
Universal Naming Convention, 555 (see also UNC)
unmanaged APIs, 169, 615
sandboxing calls, 215–216
unmanaged code
access checklists, 738–739
assemblies, 168–169
asserting permission, 628
code access security, 214–217
code review, 628–629
/GS switch, 169
how to call, lxvii
methods, 629
requesting permissions, 215
UnmanagedCodePermission, 199
unrestricted permissions, 184
and permission state, 229
unsafe classes, 215
/unsafe option, 627
unsafeAuthenticationConnectionSharing, 358
unused accounts, 516
unused interfaces, 678
unused services, 679
updates. See also MBSA; patches
application server, 489–490
Baseline Security Analyzer, 790–791
checklists, 723
developer workstations, 768–770
network security, 413, 416
to secure developer workstations, 768
vulnerabilities, 427
Web server configuration, 645
Windows, 768
<URI> element, 249–250
URL authorization, 138–139, 563
ASP.NET application and Web services, 564–565
for page and directory access control, 284
Web pages and controls, 279
UrlAuthorizationModule, 336
to control access to Web service files, 336
URL behavior property, 342
URLs
absolute, 282
code access security, 183
identifying code that handles, 611
for navigating, 282
URLScan
ASP.NET application and Web services, 547
configuring and removing, 802
for cross-site scripting, 276
how to use, 801–804
installed with VS.NET, 803
installing, 801–802
installing without running IISLockdown, 437–438
masking content headers, 803
pitfalls, 804
securing for developer workstations, 771–772
Web server configuration, 652–653
Web servers, 437–438
useAuthenticatedConnectionSharing property, 357
useDefaultCredentials property, 356
user access control, 359–360
user authorization, 360
user controls, 263
user keys, 176–177
user name and password, 334
user objects, 137
user security, 131
user stores, 82
userName attribute, 288
utilities
database servers, 520
Web servers, 447
Index
V
validateRequest attribute, 612
validateRequest option, 275
validation, 76
controls, 632
expressions in Visual Studio .NET, 265
validation="SHA1", ASP.NET application and Web services, 570–571
values, 94
view state
ASP.NET application and Web services, 569
and <pages> element, 291
protecting with MACs, 291, 569
validating, 633
ViewStateUserKey property, 292
virtual directories
checklists, 726
configuring with IIS, 332
vulnerabilities, 429
Web server configuration, 653
Web servers, 429, 453–456
virtual internal methods, 617
virtual LANs. See VLANs
virtual methods, 203
viruses
application server, 478–479
described, 21
Web servers, 426
visibility limitations, 617
Visual Studio. See Microsoft Visual Studio
VLANs, 416
VS.NET, 803
vulnerabilities
of assemblies, 147
assessing effects of missing patches, 751
and attacks, 423
code injection attacks, 256
defined, 5, 13, 46
identity spoofing, 257
information disclosure, 260
network eavesdropping, 259
parameter manipulation, 258
session hijacking, 256
Web sites, 685
Index
W
W3C
security FAQ, 685
XML encryption standard, 337
Web anonymous users groups, 436
Web application group, 436
Web applications
architecture and design issues, 70–71
auditing and logging, 95–96
authentication practices, 81
authorization, 83
configuration management of, 86–87
creating, 827
design issues, 71
groups, 436
security policies, 73
session management, 90–91
vulnerabilities, 71–72
Web controls and user controls
in ASP.NET, 263
in input validation, 263–272
regular expressions, 264–265
Web-facing administration interfaces, 412
Web farms
ASP.NET application and Web services, 584
checklists, 702
deployment issues, 104
keys, 571
Web method authorization
HttpContext.User, 336
secure Web services, 336
Web pages and controls
code injection, 255–256
design considerations, 260–263
input sanitizing, 269
overview, 253–254
parameter manipulation, 258–259
session hijacking, 256–257
threats and countermeasures, 254–255
URL authorization, 279
Web site partitioning, 278
Web permissions
Web server configuration, 654
Web servers, 455–456
Web process identity, 554
Web servers, 466–469
building, 432
checklists, 723–728
configuration categories, 427
configuration deployment review, 644–651
configuration Enterprise Services, 664–668
configuration with Machine.config, 657–663
methodology for securing, 426–429
overview, 421–422
remote administration, 471–473
restricting communication, 779–782
running the .NET Framework, lxviii
service packs and patches, 470
simplifying and automating security, 473–474
snapshot of ideal security configuration, 466–469
staying secure, 469
steps for securing Web servers, 433
threats and countermeasures, 422–423
using IPSec to limit communication with, 779
Web Service Description Language. See WSDL
Web Service Endpoint Authorization, 336
Web services
application server, 481, 485
ASP.NET, 248–249, 573–575
auditing and logging, 341
authentication, 332–335
authorization, 335–336
checklists, 705–707
code access security, 212, 342
code review, 634–635
constraining connections, 212
deployment, 103, 343
design considerations, 324–325
endpoint authorization, 336
exception management, 339–340
facade layer to communicate with Enterprise Services, 315
how to secure, lxix
input validation, 326–331
network service accounts, 325
overview, 319–320
parameter manipulation, 339
proxy, 333
proxy considerations, 341–342
sensitive data, 337–339
serviced components, 315
threats and countermeasures, 320–321
<trust> element, 326
types of exceptions, 339
UrlAuthorizationModule files, 336
Web server configuration, 663–664
Web Services Enhancements 1.0, 319–320
Web sites
communities and newsgroups, 683–685
locations, 653
Microsoft Security-Related, 681–682
notification, 684
partitioning, 561, 634
partitioning Web pages and controls, 261, 278
Third-Party Security-Related Web Sites, 682
Web.config
ACLs, 555
ASP.NET application and Web services, 547–555
how to make settings more secure, lxviii
plaintext in, 621
secure forms authentication in, 277
WebDAV, 439
protocol, 440
and protocol review, 646
WebMethod attribute, 326
WebPermission, 143, 212, 342
in partial trust Web applications, 232
WebPermissionAttribute class, 212
<wellknown> element, 359–360
Win32 DLLs, 169
Windows 2000
application isolation features, 590
ASP.NET architecture, 591–592
Windows
authentication, 384, 553, 566
authentication accounts, 672
authentication and code review, 640
authentication and data access, 373, 379
authentication and Enterprise Services applications, 304
authentication to the state database, 662
authentication with file authorization, 284
authentication with HttpContext.User, 284
authentication and ASP.NET, 355
guest accounts, 516
installation with service packs, 433
service, 486
updating, 768
Windows Server 2003
application isolation features for, 590
on ASP.NET architecture, 592–594
Windows Update
for acquiring patches, 751–752
to secure developer workstations, 768
Windows-only authentication, 527–528
WindowsIdentity type, 141
WindowsPrincipal type, 141
Winreg key, 450
work item reports, 66
World Wide Web Consortium. See W3C
worms
application server, 478–479
described, 21, 426
.Write, 609–610
write and execute permissions, 455
write permissions, 456
WSDL
ASP.NET application and Web services, 574–575
and configuration data, 323
restricting access to, 664
WSE
authentication solutions, 325
privacy and integrity requirements, 325
Index
X
X.509 certificates
asymmetric encryption, 337–338
secure Web services, 335
XML
data, 328–331
W3C encryption standard, 337
XSD
schema element examples, 330–331
strongly typed parameters, 326–327
XSS. See cross-site scripting
Index
Z
zones, 183
List of Figures
Introduction
Figure 1: The scope of Improving Web Application Security: Threats and Countermeasures
Figure 4.4: Input validation strategy: constrain, reject, and sanitize input
Figure 4.5: Impersonation model providing per end user authorization granularity
Figure 4.6: Trusted subsystem model that supports database connection pooling
Figure 8.4: The result of partial trust code calling a strong named assembly
Figure 9.2: Sandboxing privileged code in its own assembly, which asserts the relevant permission
Figure 10.2: A Web site partitioned into public and secure areas
Figure 10.3: Subdirectory for restricted pages that require authenticated access
Chapter 11: Building Secure Serviced Components
Figure 11.1: Serviced components in a middle-tier Enterprise Services application
Figure 11.4: Using a Web services façade layer to communicate with Enterprise Services using HTTP
Chapter 12: Building Secure Web Services
Figure 12.1: Main Web services threats
Chapter 13: Building Secure Remoted Components
Figure 13.1: Typical remoting deployment
Figure 17.4: Typical Remoting firewall port configuration for HTTP and TCP channel scenarios
Figure 17.5: Remoting with the TCP channel and a Windows service host
Figure 17.6: Remoting with the HTTP channel and an ASP.NET host
Figure 18.3: Disabling all protocols except TCP/IP in the SQL Server Network Utility
Figure 18.4: Setting the Hide Server option from the Server Network Utility
Figure 20.4: Applications impersonate a fixed account and use that to access resources
Chapter 22: Deployment Review
Figure 22.1: Core elements of a deployment review
List of Tables
Fast Track — How To Implement the Guidance
Table 1: Network Security Elements
Table 2: Newsgroups
Table 3: Security Checklist
Table 4: RACI Chart
Chapter 1: Web Application Security Fundamentals
Table 1.1: Network Component Categories
Table 17.2: .NET Framework Enterprise Services Tools and Configuration Settings